The Polls, the Pundits, and the Elections of 2004
Many Americans
love political polls. Others love to hate them. Those who love polls
enjoy the game of politics — who is ahead, who has a chance to win,
who has the most popular position on health care or the economy. These
"political junkies" closely watch the job performance ratings
of presidents, governors, and mayors. And many voters just seem to like
the idea of being connected to others in their community or in the nation.
In an era when more and more Americans are atomized in work cubicles
or long commutes to and from work, polls give citizens a sense of where
they stand in relation to others as part of a national community.

Polling Then and Now

There was a time when only one or two polling organizations dominated center stage. Today, in an era of instant news, the Internet, and 24-hour cable news channels, large news holes are frequently filled with commissioned and non-commissioned opinion polls from various sources. While the first political poll was conducted in 1824 by the local newspaper in Harrisburg, Pennsylvania, independent polls did not become a staple of media coverage of political campaigns until the 1930s. The earliest and best modern political polls were conducted by organizations such as Gallup and Roper, later joined by other U.S. household names such as Sindlinger, Yankelovich, and Harris. By the 1970s, all three major U.S. television network news operations were offering their own polls for the presidential races, and shortly thereafter for important statewide races for governor and for the U.S. Congress.

Media polls, those conducted jointly in the name of a news network and a newspaper partner (e.g., CBS/New York Times, ABC/Washington Post, NBC/Wall Street Journal), differ in many ways from the polls conducted privately for candidates and political parties, and they have become an important part of the political process. The key difference is that media polls are public and are intended mainly to inform voters which candidates lead in a political contest. They are designed to be neutral and independent. This objectivity is particularly important because it prevents candidates from dissembling about their own "private" polls. A candidate once could claim, for example, that his private polls showed him ahead while the conventional wisdom suggested otherwise.
Over the decades, independent political polling has offered an objective look at election races, an assessment of each candidate's strengths and weaknesses, and an examination of the demographic groups supporting each candidate. Such independent polling gives reporters and editors the ability to make and report honest assessments of the status of a campaign. The kind of transparency found in independent polling offers a useful service for readers and viewers.

But even independent polling can be problematic. In 1996, former Republican Senate leader Bob Dole challenged President Bill Clinton, the incumbent Democrat, for the White House. While most polls showed Dole trailing by as much as 25 points throughout the campaign, my own polls for Reuters showed a significantly closer race, perhaps in the 7- to 12-point range. In that race, however, other organizations' network and major newspaper polls drove the media coverage. Thus, day in and day out, Dole was referred to as the candidate who was "seriously trailing" the president "by as many as 25 points." When only the most lopsided polls are used as the basis for coverage of a campaign, they can seriously skew the reporting, suggest an outcome, and then become a self-fulfilling prophecy. Such coverage also makes it harder for the trailing candidate to raise money or to receive a fair hearing.

Does this mean that pre-election polls actually affect voter turnout and/or the results? Generally, the short answer is no. Although the Dole-Clinton coverage posed serious problems for Senator Dole, there is no hard evidence that Dole could have won the election. There is also no clear evidence showing that any candidate in a competitive race ever lost because of pre-election polls showing him behind. Some, however, assert that there is too much polling today, so-called "poll-ution." I have alluded to the 24-hour cable news networks and their need to fill large news holes. That is one reason for the proliferation of political polls.
Sheer competition among media organizations is certainly another factor. In 2000, there were at least 14 major independent polls during the campaign season, and their results were not always consistent. But voters should not complain: there is choice, and voters need to be good consumers when viewing polls, just as they should be when they buy a car or a house. There are some basic rules to follow, and here is my guide to how best to look at the polls.

Sample Sizes and Margin of Error

Sometimes, overnight polls are conducted after a major event such as the president's annual State of the Union Address or a debate between candidates for political office. Often these polls are done in one night for quick publication the next day and feature a sample of only 500 adults nationwide. While these "overnighters" might offer a fast reading on public reaction, experts believe they are flawed. First, a sample of just 500 citizens is too small for serious consideration in a nation of 280 million people. It may be accurate 95 percent of the time, plus or minus 4.5 percentage points, but that is not sufficient in a presidential or major state race. In addition, a 500-person sample, in my view, cannot support the statistically significant subgroup analysis required in a national or major state election.

There are other methodological problems as well. A one-night sample means that a broad representation of people might not have been at home. While pollsters apply weights to make a sample more accurately reflect the demographics of the population, weighting procedures do not always compensate for groups that are substantially underrepresented. For example, an overnight poll could underrepresent African Americans. Or, on another night, the poll might have contacted too many African Americans from Nebraska or Kansas, and not enough from New York, Mississippi, or South Carolina.
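The sample-size arithmetic above (roughly plus or minus 4.5 points for 500 respondents, about 3 points for 1,000) follows from the standard worst-case formula for a simple random sample at 95 percent confidence. The short Python sketch below is only an illustration of that textbook formula, with a function name of my own choosing; it is not any polling firm's actual methodology, which typically also adjusts for weighting and design effects.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case sampling margin of error at 95% confidence.

    Assumes a simple random sample of size n; p=0.5 maximizes
    the variance term p*(1-p), giving the most conservative figure.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000):
    # n=500 -> about +/- 4.4 points; n=1000 -> about +/- 3.1 points
    print(f"n={n}: +/- {margin_of_error(n) * 100:.1f} points")
```

Note that quadrupling the sample size is needed to halve the margin of error, which is why serious national polls settle on roughly 1,000 likely voters as a cost-accuracy compromise.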
Another problem common to rushed polls is that they may survey "adults" instead of "likely voters." The demographics of the two groups can be quite different. Generally, the adult pool includes more minorities, people from lower-income households, and union members. Given that each of these groups leans toward the Democratic Party and its candidates, any overrepresentation of them in a poll can skew the results. Watch, then, for the sample size and makeup of a poll. A good national poll in the United States will pose questions to at least 1,000 likely voters and report a margin of sampling error of no more than plus or minus three points.

When a Victory Is Not a Victory

Like the Mona Lisa or a great work of fiction, even the most thorough polls are open to interpretation. They also establish a set of expectations for the reporters and pundits who read them. In this way, pollsters and pundits establish that elusive animal known as "the conventional wisdom." And both groups love to see candidates defy it. Thus, there is a rich history of candidates who have risen from the "pack," contrary to trends suggested by early polling results.

Take, for example, the case of Senator Eugene McCarthy in his crusade against the war in Vietnam and his race against President Lyndon Johnson in 1968. Although anti-war sentiment was developing in the United States, no one thought that a little-known senator from Minnesota could possibly mount a serious challenge to the powerful President Johnson. But when the votes were counted after the first (New Hampshire) primary contest, McCarthy had garnered 41 percent of the vote to Johnson's 49 percent. Although the president's name was not even on the ballot and had to be written in by those wishing to vote for him, the pundits decided that McCarthy had so far exceeded the expectations established in the pre-election polls that they declared him the victor.
The McCarthy "victory" stunned the political world, and, within two weeks, President Johnson decided not to seek re-election.

A similar pundit-declared victory occurred in the Democratic primary in New Hampshire in 1972. South Dakota Senator George McGovern, who had briefly taken up the anti-war mantle in 1968 and then led a reform movement within the Democratic Party, challenged the clear front-runner for the presidential nomination, Senator Edmund Muskie. McGovern's private polls showed that he could exceed 40 percent of the primary vote in New Hampshire; thus, he wisely suggested to the media that he would be happy with a 35 percent showing. When he scored 43 percent to Muskie's 48 percent, the press argued that, as in 1968, the challenger had "won" by exceeding pundit expectations. As in 1968, the "victory" gave McGovern what historians argue are the greatest benefits of winning in New Hampshire: media, money, and momentum. McGovern went on to win the Democratic Party nomination for president, although he lost the general election in a landslide to Richard Nixon.

In 1976, former Georgia Governor Jimmy Carter was at first labeled "Jimmy Who" by the Washington press corps. Carter's 28 percent showing in New Hampshire against five better-known Democratic candidates was enough to propel him into front-runner status and the eventual nomination. The lesson in all these cases is that pre-election polling can be used either to bolster the position of a front-runner or to undercut it. Polling does, in fact, establish standards for coverage of campaigns and a sense of the conventional wisdom as to expectations for victory.

Exit Polls

Exit polls have been a major staple in U.S. national and state elections since the 1970s. They are also arguably the most controversial polls conducted today because they attempt to predict election victories beyond the polling place door, based on interviews with people who have just voted.
Exit polls achieved particular infamy in the 2000 U.S. presidential election, when the television networks misused them to make not one but two incorrect projections of the winner in Florida. When used properly, however, exit polls are a vital tool for pollsters, the press, and academics. Beyond their use in projecting winners early on election night, they provide experts and political scientists with details of how specific demographic groups voted and the reasons people gave for their votes. They also help pollsters develop voter turnout models for future elections, that is, a sense of how many members of each demographic group can be expected to turn out. This is vital to ensure that the voter samples used in future polls are representative.

But exit polls become problematic when they are used to project a winner. No matter how good the sampling process behind an exit poll, it is still sampling, which means there is a margin of sampling error. This matters less in a landslide, but in a close election a one- or two-point margin of error looms large. Based on the pre-election polling and the exit polls throughout the day in 2000, there was no legitimate way the television networks could have determined whether George W. Bush or Al Gore had won the state of Florida before all the votes were counted. The pressure to get the projection first trumped the pressure to get the projection right. This view may appear
strange coming from a professional pollster, but I think that the exit
poll crash in November 2000 was a good lesson to learn. We simply do
not need to know who won an election before the actual returns come
in. The election process will be better served if exit polls are used
throughout the evening of the election solely to illuminate who voted
and why they voted the way they did.

There is a lot of
talk these days about lower polling response rates. When I started working
in this profession, response rates averaged 65 percent — that is, for
every three people reached on the phone, two would agree to respond
to a survey. Today, average response rates are about 30 percent, and
they tend to be much lower in some metropolitan areas. Some pundits
are ready to declare that polls are dead because of this. That is hardly
the case. Lower response rates mean that it takes longer to complete
polls, but it is still possible to get good samples. Though much has
been made about some polling firms — my company included — missing some
major election calls, the fact is that we all generally are still able
to obtain results well within the margins of sampling error. I think
that having reasonable expectations for what polls can and cannot do,
combined with the healthy skepticism of the consumer of political information,
is the best approach to take as we all prepare for another major election
year in 2004.
DISCLAIMER
Any reference obtained from this server to a specific commercial product, process, or service does not constitute or imply an endorsement by the United States Government of the product, process, or service, or its producer or provider. The views and opinions expressed in any referenced document do not necessarily state or reflect those of the United States Government.
U.S. Diplomatic Mission to Germany / Public Affairs / Information Resource Centers
Updated: March 2004