A cure for cancel culture: content-moderation juries

(Chris Schilling/Wikimedia)

Social media has become one of the more important aspects of our lives. We use it to communicate with others, post about our daily endeavors and even get our news. The benefits that these media platforms have provided us are numerous, but many turn a blind eye to the problems they’ve caused.

One issue unique to the rise of social media is the new concept of ‘cancel culture,’ in which people who express unpopular opinions on platforms like Twitter and Facebook are silenced through immediate removal.

There are posts and people that clearly violate the guidelines tech companies create. For example, to ensure safe and effective participation in these online public forums, we may all agree that incitement of violence and targeted harassment of individuals should warrant a ban.

At the same time, anyone who is an avid user of social media can cite at least one instance in which someone was unfairly penalized for expressing an opinion that deviates from the norm.

There are even certain words or phrases that can get one yanked from a site thanks to algorithms: automated systems that companies use to enforce their rules and decide on punishments for breaking them.

On the surface, this appears convenient: if someone breaks the terms of service, they can quickly be disciplined for their decision to use words deemed harmful. But one can also see how this can limit speech, accidentally or purposefully.
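
To make the mechanics concrete, here is a minimal sketch in Python of how a keyword-triggered rule of this kind might work. The flagged phrases, rule names and punishments are assumptions invented for illustration, not any platform's actual policy or code:

```python
from typing import Optional

# Hypothetical sketch of keyword-triggered auto-moderation.
# The phrase list, rule names and punishments below are invented
# for illustration; they do not reflect any real platform's system.

FLAGGED_PHRASES = {
    "some banned phrase": "targeted harassment",
}

PUNISHMENTS = {
    "targeted harassment": "temporary suspension",
}

def auto_moderate(post_text: str) -> Optional[str]:
    """Return a punishment if the post contains a flagged phrase, else None."""
    text = post_text.lower()
    for phrase, rule in FLAGGED_PHRASES.items():
        if phrase in text:
            return PUNISHMENTS[rule]
    return None  # no flagged phrase found, the post stays up

# The check is purely lexical: context, sarcasm and intent never enter into it.
print(auto_moderate("I can't believe anyone would say some banned phrase"))
# -> "temporary suspension", even though the post is criticizing the phrase
```

The point of the sketch is that the check only looks at the words themselves, which is exactly how posts made in good faith can get swept up alongside genuinely harmful ones.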

A situation that epitomizes this phenomenon happened in March of this year, and it’s one you may have heard of.

On Twitter, people were being removed simply for using the hashtag #LearnToCode. At its origin, the hashtag wasn’t used to target any individual in particular; it was a meme aimed at laid-off journalists. Because many people did not find the joke funny, users who tweeted the hashtag were reported and subsequently removed from the site, even though it did not violate Twitter’s rules. The issue even extended to people who tweeted #LearnToCode by itself, with no context. Because of the algorithms Twitter had set, many users were unjustly punished.

Another example of removal based on popular sentiment is that of Alex Jones, the well-known conspiracy theorist who runs a site called ‘Infowars.’ For years, Jones has peddled several controversial conspiracy theories, but in early August of 2018, most of his pages were taken down by all of the major tech companies, including Google, Spotify, Facebook, Apple and Twitter. Jones was initially removed for content on his site claiming that the Sandy Hook shooting was a hoax, a deeply insensitive assertion, though many of the companies cited ‘hate speech’ and ‘misinformation’ as the grounds.

While these companies have very broad definitions for those terms, Jones is also known to be a performer. He often goes on lengthy, comical tangents about political issues and has outbursts that much of his base looks to for entertainment. Removing Jones has not only cut him off from the vast majority of his audience but has also left many of his followers with limited access to his shows.

So, what exactly can these social media companies do to fix this problem? One solution could be to change their rules, but that would rest on the shoulders of those who work for the companies, and we have seen how that plays out. A better solution might be to adopt an idea that has floated around for quite some time now: content-moderation juries.

Under this system, a company would randomly select social media users to look at a reported post, review the report(s) filed against it and the rule cited as violated, and determine whether the poster actually broke a rule and, if so, what punishment they should face.
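
As a rough sketch of what the core mechanism could look like (written in Python; the jury size, verdict options and simple-majority rule are assumptions for illustration, not a description of any existing platform's design), the logic is essentially random selection plus a vote:

```python
import random
from collections import Counter

# Hypothetical content-moderation jury. Jury size, verdict options and the
# majority rule are illustrative assumptions, not any platform's actual design.

VERDICTS = ["no violation", "warning", "temporary suspension", "permanent ban"]

def select_jury(active_users: list, size: int = 9) -> list:
    """Randomly select a small panel of users to review one reported post."""
    return random.sample(active_users, k=size)

def decide(votes: list) -> str:
    """Return the verdict chosen by the most jurors (ties favor leniency)."""
    counts = Counter(votes)
    top = max(counts.values())
    tied = [v for v in VERDICTS if counts.get(v, 0) == top]
    return tied[0]  # VERDICTS is ordered from most to least lenient

# Example: each juror sees the post, the report and the cited rule, then votes.
jury = select_jury([f"user{i}" for i in range(1000)])
votes = [random.choice(VERDICTS) for _ in jury]  # stand-in for real juror votes
print(decide(votes))
```

The harder design questions sit outside this sketch: who is eligible to serve, whether verdicts can be appealed and how jury decisions feed back into the written rules.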

Periscope, a subsidiary of Twitter, already has a similar system in place, which it calls ‘flash juries.’ On a small scale, anonymous users watching a live-stream are selected to judge whether a reported comment in that stream is appropriate.

Despite the idea’s popularity and grassroots origins, opposition to instituting such a system does exist. One could argue that the responsibility for punishing rule-breakers should be left in the hands of the rule-makers, but that approach is what landed us in our current predicament.

Content-moderation juries take power out of the hands of unreliable, impersonal algorithms created by tech giants and put it directly into the hands of the actual humans who use social media. Not only that, they would give users a new way to participate in a public forum democratically.

Content-moderation juries would also speak to the issue of de-platforming, another highly controversial practice on social media and in the public sphere. If you don’t know what that is, de-platforming is a newer form of political activism that seeks to shut down speakers or those with a large social media following, either by preventing them from talking at a designated venue or by mass-reporting the views they express on their accounts.

With this system, tech companies would provide due process to those who feel that they’ve been unfairly punished on social media, especially those with sizable bases of support.
