Commentary: Enough with social media surveys

Almost everybody is running surveys on social media now.

One has probably seen it before: a friend shares an ongoing poll, or someone in a Facebook group asks members to swing the results of a random, nondescript post.

Other times, these surveys make headlines. For the past few months, Manila Bulletin has conducted a series of presidential and vice presidential surveys to determine which candidates were leading ahead of the election.

Bongbong Marcos, son of late dictator Ferdinand Marcos, led earlier Facebook and news site surveys by a mile while Vice President Leni Robredo dominated the Twitter polls. But in a later survey, Robredo overtook Marcos, while Sen. Manny Pacquiao came out as the new Twitter frontrunner.

The truth is almost none of that information is useful.

While these polls suggest that some candidates are favored over others, the way they arrive at those conclusions is dubious. Without getting too deep into the technical details, the first problem with mounting a social media survey is representation: whether the respondents accurately reflect the overall voting population. It is unclear who answers Manila Bulletin’s surveys or whether they are even registered voters, let alone real people. The newspaper likewise makes no effort to dispel such doubts.

Even if we assume that most respondents are actual people, they are still limited to those already on these platforms.

Despite Twitter’s prominent role in some people’s lives, only 10.2 million of its users are actually from the Philippines, according to an October 2021 report from the internet analytics website DataReportal. By comparison, about three times as many Filipinos are on TikTok.

But how about Facebook, which the same report estimates almost 90.5 million Filipinos inhabit? It does seem likely that enough Filipinos are on the platform. But it is hard to be certain when we have been a testing ground for electoral propaganda on the website for years.

Perhaps one is of the opinion that because thousands, sometimes millions, of people respond to these polls, they must be more accurate than those of Pulse Asia or Social Weather Stations, which ask only a few thousand people. Perhaps there is a perception that the sheer size of the sample must count for something. It does not: a large sample drawn from a skewed pool only reproduces that skew at scale.

Despite constantly receiving accusations of publishing “false” results, Pulse Asia is arguably a more reliable source than a collection of random Facebook users. For one, its methodology, the details of which are posted online and free for anyone to scrutinize, is grounded in statistical methods that justify its smaller sample size.

Pulse Asia looks at age—you have to be at least 18 to vote, after all—geographic location and socioeconomic status as factors when identifying respondents. Questionnaires are vetted beforehand to avoid any ambiguity. But above all else, real people are being asked to answer these questions.

By comparison, Manila Bulletin seems uninterested in putting any rigor into its approach, to the point that it merged results from two separate tweets with different numbers of options as though they were comparable.

Such data manipulation would have earned failing marks in an introductory statistics class; it is startling that it was ever considered good enough for public consumption.

Granted, being transparent about one’s methodology does not guarantee trustworthiness either. An example is Publicus Asia, another firm that conducts opinion polls and presidential surveys. While it also discloses details of its methods, some glaring items bear mentioning.

The respondents are sourced from a “market research panel”, the details of which are left unclear in the firm’s executive summary. Such panels are often composed of prescreened individuals who have agreed to participate in market research studies. From there, the firm filters respondents through purposive sampling, an approach in which researchers build their sample based on their own judgment. It is clear why this is a problem: the selection is not random, which exposes the data to a higher risk of bias.

Perhaps instead of arguing over proper methods, the question should be whether there is even a need to conduct such surveys. Imperfect snapshots as they sometimes are, the results of opinion polls have been found by researchers to influence public sentiment. When informed that a majority supports a stance, some people may change their own position to conform, an outcome known as the “bandwagon effect”.

Although how much survey performance sways voters remains a topic of debate, it has other consequences. For one, survey frontrunners gain an indirect advantage in media coverage. News outlets sometimes indulge in what is called “horse-race reporting”, a style that treats the election as a spectator sport. Rather than emphasizing platforms, such reports center the narrative on who is leading in the polls.

These same survey leaders are given the benefit of a wider platform, as seen in Jessica Soho’s interviews, to which only the “leading presidential aspirants” were invited, or in the KBP Presidential Candidates Forum, which limited its program to the “top six” contenders.

But instead of relying on polls to decide which candidate to vote for, it is better to examine the candidates’ track records and detailed action plans. While it is disheartening to see a less scrupulous candidate lead in reputable surveys, it might be prudent to take that as a sign that more needs to be done on the campaign trail.

Dismissing polls outright and baselessly accusing them of bias simply because one disagrees with their results makes one no different from the online trolls who engage in the same behavior. Aimless speculation benefits no one, not even one’s preferred candidate.

Anything can happen in the next few months. Past election cycles have shown that it remains anyone’s game until May, no matter who leads before voting day. What matters is to vote based on platform and experience, not on how many people pick a candidate in poorly prepared polls.

By Frank Santiago
