By Sunny Um, Wired Korea
On Wednesday, Koreans will cast their votes to choose the representatives who will serve in the National Assembly for the next four years. No one can say which party will win control of the legislature, but pre-election surveys offer hints about voters’ preferences and the political landscape that is about to unfold.
Pre-election surveys are useful for both politicians and voters: candidates can plan strategies around poll results, and voters can get a glimpse of who has a strong chance of winning the election.
But survey results are based on data collected from a sample selected to represent the entire population. This means they may diverge from census data, which is compiled from responses from every citizen. Moreover, poll results can vary because they hinge on the number of participants, the methods used to collect responses and the distribution of the sample.
Some say the response rate is a significant factor in the credibility of pre-election surveys. The response rate is the proportion of people who reply to a survey; it is calculated by dividing the number of completed responses by the number of surveys sent out, multiplied by 100.
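As a quick illustration, the calculation described above can be sketched in a few lines of Python (the figures are hypothetical):

```python
def response_rate(completed, sent):
    """Response rate: completed responses divided by surveys sent out, times 100."""
    return completed * 100 / sent

# Hypothetical example: 1,000 survey calls placed, 55 completed interviews.
print(response_rate(55, 1000))  # 5.5 (percent)
```

A rate like 5.5 percent is in the range that critics of low-response polls have in mind.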
In Korea’s 2017 presidential election, then-presidential hopeful Ahn Cheol-soo said survey results with low response rates could easily be “fabricated” and could “distort” voters’ opinions, since they do not reflect the views of all survey recipients.
But others say the response rate does not make a stark difference in poll outcomes: even with a low response rate, a survey can be credible if the collected sample represents the population well.
“It is true that the credibility of a survey goes up if the response rate is high, but a survey can be accurate even with a low response rate,” says Huh Myung-Hoe, a statistics professor at Korea University, in an interview with Yonhap News Agency.
Pollsters in Korea reach voters through phone calls, emails, online links or applications on smart devices. One of the oldest and most widely used methods is the telephone survey, as many people tend to ignore survey invitations they receive by app or email, according to Communications for Research. But even among telephone surveys, results can differ depending on the type of phone respondents use.
In an experiment published in the journal Field Methods, landline respondents tended to answer only questions on topics making headlines in the news, while mobile phone users showed similar response rates across topics in general. Mobile phone users were also less likely to complete lengthy surveys.
Political stances of landline and mobile phone users could have an impact on the results as well. A Los Angeles Times report says that landline phone users tend to be more conservative than mobile phone users, who are likely to be younger.
The ratio of landline to mobile phone respondents is often a factor that changes a survey’s outcome. For instance, two polls conducted in the same week in Seoul’s Dongjak-B district, where Na Kyung-won of the conservative United Future Party is competing with Lee Soo-jin of the ruling Democratic Party of Korea in the upcoming general election, predicted two different winners. A poll in which almost 31 percent of respondents used landlines predicted Na would win; the other poll, with 9.5 percent landline respondents, predicted Lee would win.
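The arithmetic behind such a flip can be sketched with assumed support levels (the actual candidate preferences by phone type were not published): if landline respondents lean toward one candidate and mobile respondents toward the other, the blended result moves with the landline share of the sample.

```python
def blended_support(landline_share, support_landline=0.65, support_mobile=0.45):
    """Overall support for a candidate, assuming (hypothetically) 65% support
    among landline respondents and 45% among mobile respondents."""
    return landline_share * support_landline + (1 - landline_share) * support_mobile

print(blended_support(0.31))   # above 0.5: the landline-favored candidate leads
print(blended_support(0.095))  # below 0.5: the mobile-favored candidate leads
```

With these assumed numbers, shifting the landline share from 31 percent to 9.5 percent is enough to change the predicted winner, even though no individual voter changed their mind.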
If answers are collected only from a particular segment of the population, which statisticians call “over-sampling,” the survey result can turn out biased.
For instance, if a pollster asks 10 survey participants from the same town what their favorite fruit is and they all choose apples, the pollster might conclude that everyone in the town likes apples. That might not be true, as the rest of the town could prefer other fruits. This is over-sampling from one group, in this case, the group that prefers apples.
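A small simulation, with assumed numbers, shows how the fruit example plays out: drawing every answer from one apple-loving neighborhood badly overstates how popular apples are in the town as a whole.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Hypothetical town: 30 percent prefer apples overall, but the one
# neighborhood the pollster happens to survey is 90 percent apple fans.
town = ["apple"] * 300 + ["other"] * 700
neighborhood = ["apple"] * 90 + ["other"] * 10

biased = random.sample(neighborhood, 10)  # all answers from one group
fair = random.sample(town, 100)           # answers spread across the town

print(biased.count("apple") / len(biased))  # close to 0.9
print(fair.count("apple") / len(fair))      # close to 0.3
```

The biased sample reports apple preference near 90 percent; the sample drawn from the whole town lands near the true 30 percent.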
Pollsters commonly categorize returned survey responses by gender, age and place of residence. Some also try to make the ratios in each category, such as the share of female and male respondents and the size of each age group, reflect those of the population.
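One common way to apply those ratios is weighting: each group’s answers are reweighted so the sample’s demographic mix matches the population’s. A minimal sketch, with hypothetical numbers:

```python
# Hypothetical sample: women are under-represented (40 of 100 respondents)
# relative to an assumed 50/50 population split by gender.
responses = {
    "female": {"n": 40, "support": 0.70},  # 70% of sampled women back the candidate
    "male":   {"n": 60, "support": 0.50},  # 50% of sampled men do
}
population_share = {"female": 0.5, "male": 0.5}

total = sum(group["n"] for group in responses.values())
raw = sum(group["n"] / total * group["support"] for group in responses.values())
weighted = sum(population_share[k] * group["support"] for k, group in responses.items())

print(round(raw, 2))       # 0.58: the unweighted sample tilts toward men's view
print(round(weighted, 2))  # 0.6: after matching the population's gender mix
```

Even a modest imbalance in the sample shifts the headline number by a couple of points, which can matter in a close race.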
Some say pre-election polls fail to reflect what voters truly think because they include more samples from left-wing than right-wing supporters. In a report released in April, researchers at the Yeouido Institute argued that some surveys predicting a big win for the left wing in this year’s general election had 20 to 30 percent more left-leaning respondents than right-leaning ones.
Over-sampling is not always the pollsters’ fault, however: supporters of the opposition party tend to be more reluctant to participate in surveys than supporters of the ruling party, says Kim Min-jeon, head of the office of career development at Kyung Hee University, in a report by Asia Business Daily.
He was quoted as saying: “Opposition party supporters respond less to survey invitations. This tendency becomes especially conspicuous when those supporters are reluctant to voice their opinions.”