First, though, a disclaimer: You probably won't care at all about the subject matter of this post unless you follow politics as closely as I do (and maybe not even then). I tried to keep this post a reasonable length, but alas, I see it's become a short novel. To mitigate that problem, I've added anchor links (I think this is the first time I've done such a thing on this blog):
- A single poll is just a snapshot in time.
- Polls are used to sell headlines, and headlines set the tone.
- No poll tells the whole story.
- The wording of poll questions can alter the results entirely.
- Robots versus human beings: It's not just fodder for a sci-fi movie.
A single poll is just a snapshot in time.
One poll conducted by one pollster during one time period is not an adequate gauge of public sentiment. As I'll explain below, there's no such thing as a perfect poll; and if you treat one as though it is, you ignore not only the inevitable biases that skew results, but also the reality that public opinion is constantly evolving, sometimes very rapidly. If a political candidate commits a major speaking gaffe today, his or her public support may drop dramatically tomorrow — but any poll conducted yesterday or earlier won't reflect this at all.
Therefore, polls do carry predictive value — but only when viewed collectively over an extended period of time.
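To make that concrete, here's a toy sketch (all numbers hypothetical, not from any real poll) of why an average of several polls is a steadier signal than any single one:

```python
# Hypothetical example: five polls of the same race, taken over two weeks.
# Each tuple is (candidate_a_pct, candidate_b_pct). Individual polls bounce
# around, but averaging smooths out each pollster's quirks and biases.
polls = [(47, 44), (45, 46), (48, 43), (44, 45), (46, 44)]

avg_a = sum(p[0] for p in polls) / len(polls)
avg_b = sum(p[1] for p in polls) / len(polls)

print(f"Candidate A: {avg_a:.1f}%, Candidate B: {avg_b:.1f}%")
# Candidate A: 46.0%, Candidate B: 44.4%
```

Note that two of the five made-up polls actually show Candidate B ahead; only in the aggregate does the picture stabilize.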
Polls are used to sell headlines, and headlines set the tone.
Unfortunately, the mainstream media don't usually recognize the problem with putting too much stock in a single poll, because they want to sell news, and polls are always framed as newsworthy, even though they usually aren't. In the months ahead, we can expect to see lots of headlines about polls that show Obama with a modest lead over Romney, or Obama and Romney running in a statistical tie, or even Romney a point or two ahead of Obama.
But what if one or even two polls suddenly showed either candidate opening up a wider lead nationally, say on the order of 10 percentage points or more?
In all likelihood, such polls would be outliers. That wouldn't matter to media outlets, though. We'd start to see headlines like "POLL: Bad news for Obama" or "POLL: Romney's prospects dim."
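A quick way to see why a two-point "lead" is a statistical tie while a 10-point swing is suspicious: compute the poll's margin of error. This sketch uses the textbook formula for a simple random sample (real pollsters also apply weighting and design adjustments, so treat this as an approximation):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n,
    using the worst case p=0.5 and the standard normal z-score."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll samples about 1,000 respondents.
moe = margin_of_error(1000)
print(f"+/- {moe * 100:.1f} points")  # about +/- 3.1 points
```

A candidate "leading" by two points in such a poll is inside that band, i.e., statistically tied; a lone poll showing a 10-point lead when everyone else shows a tie is far more likely an outlier than a real shift.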
And with each headline, the tone is set — both for the respective campaigns and for supporters, who end up either encouraged or demoralized, possibly because of a poll that's way off-base. After all, there's less reason to be excited about a candidate who appears to be losing ground, even if he's really not.
Former Ohio Gov. Ted Strickland, a casualty of the 2010 Republican sweep, obviously recognized this. He blasted a Quinnipiac poll released just a few weeks before the election that showed his Republican challenger, John Kasich, with a commanding 10-point lead. It appeared that this incumbent's fate was sealed in a very anti-incumbent year. The actual result on Election Day? Strickland lost by just two points. Here's a great example of why flawed polls like that one shouldn't — but still do — set the public narrative. Which leads me to my next point.
No poll tells the whole story.
This is particularly true of polling on policy issues. My favorite example is how Rasmussen Reports polls the federal health care law. Taken at face value, these polls make it appear as though a majority of Americans unwaveringly support full repeal of the legislation. A closer look, however, reveals serious problems with how that conclusion is reached. Rasmussen asks nothing at all about the substance of the legislation; it just asks respondents whether the law is "good" or "bad," followed by a series of simplistic questions about its anticipated effects (again, without even once delving into its specific provisions).
This is a bit like polling Americans on whether they love or hate California ("ambivalent," of course, is not among the response options). Most respondents hate it, you find? Well, why is that? Is it the smog and sprawl of Los Angeles? Or the gays and hippies of San Francisco? Oh, they love the gays and hippies, but hate the smog and sprawl, and their hatred of the latter trumps their love of the former, so they just say they hate the whole place?
Well, that's worth noting. California is a huge, diverse state. It can't be judged categorically and holistically without considering the many traits that make up its overall identity. Neither can the Affordable Care Act of 2010. If you ask people whether they support or oppose such a large, complex, multifaceted law without asking them about any of its specifics or offering any middle ground in the response options, you egregiously misrepresent public opinion.
Another poll conducted by CNN (pdf) confirms this. Even this one doesn't dive too deep into the elements of the legislation — but at least it asks respondents to identify whether they'd retain the law in its entirety, or overturn some or all of its provisions. Opponents of the law are also given the option of specifying whether they hold that stance because they believe it's "too liberal" or "not liberal enough." Tellingly, between supporters of the law and opponents who believe it doesn't go far enough, the CNN poll finds that 53 percent of Americans favor health care reform. That's a stark contrast with the Rasmussen poll, which would have you believe that at least the same percentage wants the current law repealed completely. Wiped off the books.
It's all about what gets asked — and how it's asked. On that note…
The wording of poll questions can alter the results entirely.
If you want to be certain that you'll produce results showing that Americans dislike the health care law, you should ask respondents whether they support or oppose Obamacare.
If you want to produce a poll that proves that Americans oppose gay marriage, you should ask respondents whether traditional marriage should be redefined so that homosexuals can marry.
On the other hand, if you want the opposite result, you should ask whether states should ban same-sex marriage.
All of the hypothetical questions above contain loaded words that are sure to influence responses. "Obamacare" is self-explanatory. Republicans should have trademarked this derisive, stigmatizing label. Whether you support or oppose the health care reform law, the term undoubtedly causes your blood pressure to rise. No poll that uses it in a question will elicit accurate results. Many respondents won't even remember what the question was after they hear "Obamacare."
Likewise, "homosexual" is a clinical-sounding, pejorative term (at least now it is, though it wasn't always) that makes gayness sound like a disorder. That's why polls paid for by anti-gay groups like the National Organization for Marriage use it all the time. They also use phrases like "redefine traditional marriage." Tradition, of course, makes us feel all warm and fuzzy inside. Who the hell would want to redefine something traditional?
On the other hand, you'll get results more favorable to proponents of equality if you frame the issue as being about someone's rights. We don't like the word "ban." To "ban" something is to infringe on rights or freedoms, and we instinctively recoil from that. So if you ask people whether same-sex marriage should be banned (notice: no reference to homosexuals or anything traditional being redefined), your result may be much different, as indeed it was in a series of polls conducted in Minnesota on this very issue.
Robots versus human beings: It's not just fodder for a sci-fi movie.
Finally, polling is most commonly conducted in one of two ways: by live callers, who interview respondents over the phone, or by an automated system, in which the respondent listens to recorded questions and either speaks responses into the phone or indicates them by pressing buttons. (The latter is sometimes called "robo-polling.")
Obviously, there are pros and cons to both methods. Automated polling is a bit like fast food — it's cheap and can be done quickly in large quantities, but its quality leaves something to be desired. People often hang up when they realize they're being spoken to by a robot; and a robot cannot discern a respondent's tone of voice, thought process, or whether he or she even understood the question. This can lead to faulty responses being recorded as valid data.
On the other hand, polls administered by live callers can ask more in-depth questions about issues that wouldn't be feasible under the automated approach, and faulty responses are less likely to be recorded, because the live caller can detect problems that a robot can't. However, live interviews are much more expensive and time-consuming to conduct than automated ones are. Also, live polling is subject to what's called the Bradley effect — a theory that respondents will give answers they deem to be politically correct or socially acceptable, rather than ones that reflect their true opinions, since they fear being judged by the person administering the poll. Of course, this phenomenon wouldn't pertain to automated polls.
The two methods tend to produce disparate results. Automated polls often overstate support for conservative candidates and policy positions, while live-interview polls frequently give an edge to liberal ones. The CNN-versus-Rasmussen example above illustrates this: Rasmussen is an automated polling service, while ORC International (the firm CNN uses) employs live interviewers.
* * *
As I mentioned above, public opinion polls can carry some predictive value — but only when multiple polls taken by multiple pollsters using various methods are viewed over an extended period of time. Otherwise, they can become nothing more than a cheap tool used to sell headlines or advance someone's agenda.