3 Tips for Sustaining Your Analytical Software, an All Analytics post by Bryan Beverly, left me with some doubts. Speaking as one who has interacted with many highly product-partisan software users, I must suggest a few cautions here.
Referring frequently to a product name, rather than using common technical or plain-English terms, has consequences. For one thing, it is often difficult for people who are not experts or familiar with your subject to understand what you are talking about. Use of such jargon also contributes to the analyst’s image as a geek whose concerns are not important to the bottom line.
Product-loyal analysts take other risks as well. They may become so attached to their tools that they are unaware of, or unwilling to use, easier or more cost-effective alternatives. They may appear inflexible to current employers, untrained or untrainable to prospective employers. They may have difficulty collaborating with others who use different products.
I have worked with organizations burdened with managing multiple products and multiple versions, creating serious inefficiencies in purchasing, training, and support. Product loyalty implies much more than just believing a certain tool is the best one for your job. It's also a roadblock to interacting with others who have differing needs or preferences. Snobbery tied to product preferences is common. It is the profession's class warfare.
I love the ideas of creating in-house user groups, letting management know about the value of your tools to the organization, and doing all we can to learn and help others learn about good tools. In the end, though, a tool is just a tool.
Noreen Seebacher asked: “What if an analyst has a very specific reason for favoring one piece of software over another? Is it fair for him to promote that product or does it create risks for the organization in terms of cost controls, etc?”
Analysts always have very specific reasons for preferring one product over another!
It’s important to separate the reason – what we need to accomplish – from the product. Often, analysts become attached to specific products, believing them to be better than all others for performing specific tasks. When that happens, and it happens often, there are many possibilities to consider.
The product may not actually have been superior in the first place
A real product advantage that existed at one time may not persist as other products change and improve
A new product may have better capabilities, or eliminate the need for certain tasks or methods
The benefits of standardizing methods or tools across an organization may outweigh the specific benefits that drive the preference for a specific product
The challenges of using some products, or integrating them into the business, may affect many stakeholders and outweigh the advantages to a subset of the organization
The costs (think total cost of ownership, not just the price tag) may not be justified by the reasons behind the analyst's preference
and so on…
If you have a good reason for preferring a specific product, you must state it in business terms. What do you need to do, why is it necessary to the organization, and how does it translate into dollars or some other meaningful business metric? If you can state your preference only in statistical jargon, you won't be persuasive.
Let me give an example of an organization that was deeply affected by this issue. A state government agency was planning an organization-wide operating system update. The staff used several different products, and often several versions of each, running on several operating systems. Through many lengthy and detailed discussions, we found that all the analyses that they needed to perform could be addressed with just one product family.
Standardization offered an advantage in purchase cost, but it also addressed many other issues. Technical support would be simplified. Training weaknesses could be addressed in a manageable way. Users could understand and help each other more easily on a common platform. Sharing of work would become practical. And the new tools offered valuable capabilities which had not been available to the organization before.
Some of the staff were quite open about their dislike of the change. They certainly faced some legitimate challenges – they would have to learn a new product, and they may have had code written for their old tools that would now have to be replaced. But in my discussions with them, those concerns never came up. Instead they grumbled, publicly, that the old stuff could do everything the new stuff could do. That simply wasn't true, and because they repeated such claims during public presentations, I was forced to contradict them in front of their coworkers. It made them look stubborn and foolish.
This wasn't an isolated case. Very similar things go on in every organization that explores standardizing tools. The people most resistant to change are usually the analysts with the most sophisticated statistical training – that is, the very people who should be best equipped either to make a good business case for their preferred tools or to learn the alternatives, and learn them well. When they choose to do neither, they look like highly educated babies.