The computer industry loves surveys. Whether it's because of the inherently technical nature of the profession, the Druckerism that what you can't measure you can't manage, or some traces of 19th century Taylorism, there's this fascination with finding out how people really feel, through what appears to be a scientific method.
The problem is, a lot of industry surveys are crap.
Whether they're crap on purpose (of course, no one would suggest that surveys are slanted to promote a particular point of view) or because the people generating and writing them don't know what they're doing is immaterial. The point is, if you're making any of your IT decisions based on surveys, you owe it to yourself to learn the signs of when a survey is crap and when it can be trusted.
Who was surveyed? One of the hallmarks of a reliable survey is that the sample of people surveyed is random. If you ask a bunch of people about their favorite sandwich, you're going to get a different answer if you ask a bunch of people from Philadelphia vs. a bunch of people from New Orleans — and a different answer still if you ask a bunch of people from all over the U.S. Similarly, if a company surveys only its own customers, it's going to get different answers than a survey that asks users of a variety of products.
In the same way, look out for surveys where people are invited to take the survey by filling out a web page. In statistics, that's called self-selection. You're limiting the survey to people who have access to the survey method (the web page), people who have time right then, and, perhaps, people who have a bone to pick or some reason to promote a particular point of view (even if it's just for the Amazon gift card they get for replying).
You also want to see how many people were surveyed. If it's just a few, you really can't draw meaningful conclusions from what they said. And that's also true of particular categories. While a particular survey might have talked to 300 people in total, by the time you slice it up into titles or geographic locations and so on, you may find that there are only three or four people in a category — and one person changing their mind can change the survey by 25 percent or more.
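The arithmetic behind that claim is easy to check. Here's a quick sketch with hypothetical numbers (the 300-person survey and the four-person job-title bucket are just illustrations, not figures from any real survey):

```python
# Hypothetical: a 300-person survey sliced into small categories.
total_respondents = 300
bucket_size = 4          # respondents left in one job-title bucket
yes_answers = 3          # suppose 3 of the 4 answered "yes"

before = yes_answers / bucket_size * 100          # reported as 75.0%
after = (yes_answers - 1) / bucket_size * 100     # one person flips: 50.0%
swing = before - after                            # a 25-point swing

print(f"{before:.0f}% -> {after:.0f}%: swing of {swing:.0f} points "
      f"from a single respondent changing their mind")
```

One respondent out of four is 25 percent of the bucket by definition, so a single changed answer moves the headline number by 25 points — which is why percentages quoted for tiny subgroups deserve deep suspicion.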
What's a good sign of a reliable sample? Look for sentences in the survey such as "This survey has a margin of error of" followed by a percentage, or the phrase "standard deviation." You don't have to know exactly what these terms mean — just that a margin of error is only meaningful if the sample was chosen randomly.
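For the curious, the margin of error for a simple random sample comes from a standard textbook formula. Here's a sketch (the sample sizes are hypothetical, and p = 0.5 is the conventional worst case; none of this is valid for a self-selected web survey):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p measured
    on a simple random sample of size n. Only meaningful when the
    sample really was drawn randomly."""
    return z * math.sqrt(p * (1 - p) / n)

# A 300-person random sample: roughly +/- 5.7 percentage points.
print(f"n=300: +/- {margin_of_error(300) * 100:.1f} points")

# A 4-person slice of that survey: roughly +/- 49 points --
# the number is effectively meaningless.
print(f"n=4:   +/- {margin_of_error(4) * 100:.1f} points")
```

Notice that the margin of error shrinks with the square root of the sample size, which is why slicing 300 respondents into tiny categories destroys the precision the headline number appears to have.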
Anybody who's ever been on the receiving end of "You're not going to eat that, are you?" "Does this make me look fat?" or "Is that what you're wearing?" knows that it's possible to telegraph the expected answer in the way the question is phrased. Similarly, whether by accident or by design, it's possible to phrase survey questions in such a way as to elicit certain kinds of answers, or at least to prevent certain kinds of answers. Examples include having a fixed list of possible answers from which the respondent has to choose, offering a range of feelings with more choices on one end of the spectrum than the other, or featuring a question that's front-loaded with some information — true or not — that may be new to the respondent (known as a push poll).
Of course, you have no way of knowing this if the surveyor doesn't release the list of questions or the raw survey results. Releasing the complete list of questions also makes it more obvious if the surveyor is avoiding answers that don't fit into its narrative.
When was the survey conducted? Naturally, it takes time to generate a survey and analyze the data, but survey timing can affect the results. If the survey was done months and months ago, a pertinent news event could have happened in the meantime and the survey results may now be moot. Or a survey could have asked about budgets at a time of the year when people don't know yet what their budgets are going to be. Again, you can only know this if surveyors tell you — and if they don't, take that under advisement.
Hope this helps. But, you know, 100 percent of those surveyed found this information useful!
Simplicity 2.0 is where we examine the intricate and transitory world of technology—through a Laserfiche lens. By keeping an eye on larger trends, we aim to make software that’s relevant to modern day workers, rather than build technology for technology’s sake.
Subscribe to Simplicity 2.0 and follow us on Twitter. If what we’re saying piques your interest, head over to Laserfiche.com where you’ll see how we apply the lessons learned on Simplicity 2.0 to our own processes, products and industry.