Over the past two decades, social media has undeniably become a powerful tool for disseminating, presenting, and digesting information. With information flowing freely to thousands of people within seconds, it is easy to believe that social media is a democratized arena. Netizens assume they have complete control over, and freedom in, what they post on their accounts.
In reality, however, the situation is murkier: there are boundaries and responsibilities that individuals must know and understand. To ensure that online users abide by these guidelines, social media providers moderate the content posted on their platforms.
Digital growing pains
With the near-ubiquity of social media use today, it can be easy to forget that just over a decade ago, the social internet was still a largely unregulated space: a “wild west” of sorts. But with the rise of popular platforms like Facebook, Twitter, and YouTube in the late 2000s and early 2010s, the social media landscape transformed into the earliest recognizable version of what it is today.
This homogenization of the once largely decentralized social internet created a pantheon of social media platforms that made this then-new form of communication easier to use. The growth in user bases, however, came in tandem with increasing public scrutiny, which in turn drew the attention of government regulators. Into the mid-2010s, the internet slowly lost the “wild west” edge it had once carried, as platforms were compelled to regulate content following the passage of laws promoting safety in the digital space.
With user bases reaching into the billions, social media platforms are, in the digital world, comparable to nation-states, with their own unique cultures, citizenship guidelines, and, of course, forms of policing. And while content moderation is nominally aimed at constantly improving the user experience, it is sustained even more so to retain and attract advertisers. The user is thus commodified in various ways: their data harvested and their attention sought after, all sold to the highest bidder.
Moderating the content generated by millions of users is no easy task; one cannot just manually rifle through every post, video, or picture ever posted, and even automated systems have their limitations. In 2019, Statista reported that YouTube users uploaded over 500 hours’ worth of videos every minute—over 82 years’ worth of videos a day.
The yearly increase in video uploads puts YouTube at major risk should unpalatable content slip through its filters, especially if that content is monetized. Advertisers do not want their brands associated with violent, pornographic, or otherwise unsavory media, and they do not hesitate to pull their ads, as the 2017 YouTube advertiser boycott demonstrated.
Then there is also the issue of copyright. According to cyberspace specialist Atty. John Paul Gaba, “You cannot just post something that’s already created by some other person…that’s why most—if not all—social media platforms, especially YouTube…have a technical tool for detecting if the work you posted contains copyrightable subject matter, and they automatically filter that.”
This precautionary measure is meant to avoid what the lawyer calls “contributory infringement” of copyright laws, for which YouTube itself can be held liable. This filtering, however, can easily be abused: it is well-documented that the system can be weaponized by creators to silence dissent, with false copyright strikes issued against those who portray them in a negative light.
What jurisdiction governments have over social media platforms is hard to say, not least because these are global entities with minimal to no physical presence in the countries where they are accessible. Take, for example, a 2013 incident of antisemitic hate speech on Twitter: the implicated users were anonymous, but the French government wanted Twitter to divulge their identities so they could be prosecuted for hate speech. Otherwise, Twitter would be fined, even though the company had no operations in France.
Hate speech is a very contentious issue when it comes to freedom of expression both online and in the real world. In the Philippines’ case, it does “not have a law specifically against hate speech, but we have laws that regulate or proscribe speech that is directed against individuals or groups that threaten, incite violence, or tend to cause harm against them,” remarks Geronimo Sy, who has experience in presiding over a government panel on cybercrime. Both Facebook and Twitter have very strict hate speech guidelines, though what counts as hate speech is still a matter of debate. In the end, Sy recognizes, “Hate speech exists and comes in many forms.”
Exceptions to expression
At its simplest, free speech is the right to express oneself under the guarantee of constitutional protection. There are, however, limitations to the “freedom” being referred to when it comes to expressing one’s thoughts.
Sy shares, “The right to free speech or of free expression is not absolute. It is subject to limitations, including moral and ethical ones.” Gaba cites several examples that could escalate issues, “If it would incite people to do lawless acts, if it is defamatory or libelous, if it is illegal, that is, it violates intellectual property rights, so may mga exceptions ‘yan.”
(There are exceptions to freedom of speech.)
For people to further understand the limitations, Sy and Gaba both allude to the same simple scenario. Sy illustrates, “The classic example is that one cannot shout ‘fire’ in a crowded place simply to exercise the right.” Adding context, Gaba explains, “You’ll be prosecuted for some misdemeanor or crime if it creates panic or injury.”
Politics at play
Moderation on sites like Facebook and Twitter is even more arduous, as the misinformation peddled there often stems from political polarization in the real world and is compounded by the echo chambers people place themselves in.
Last June, Twitter received flak for flagging US President Donald Trump’s tweets as containing misinformation and manipulated media. Republicans chastised the company for “censoring” the president, while others delighted in the fact that Trump was being held accountable for his egregious lying.
Still, a regular user would have been suspended for actions that Trump got away with. Twitter maintains that it is in the public interest for Trump to remain unrestricted, however unhinged, on its platform. Facebook CEO Mark Zuckerberg also weighed in on the issue, arguing that “Facebook shouldn’t be the arbiter of truth” in a rebuke of Twitter. This is despite Facebook’s more stringent reporting system, which also accounts for fake news and takes down posts deemed to contain misinformation.
How far social media platforms can take moderation in the pursuit of disseminating truthful information is debatable, though there is certainly interest in doing so. What must be watched for is the point at which moderation becomes censorship, and who stands to benefit from having voices muffled into background noise.