Hey Siri, Should Artificial Intelligence Decide What Is Safe Online?

Over the last decade, and especially within the last five years, technology has raced to keep up with an ever-expanding Internet. Believe it or not, the Internet has become more than just a hub for conversation, eCommerce, and pornography.

The Internet, even in its earliest iteration, was always a place for communities to gather online. From all around the world, people learned to use their computers to chat with friends on AOL, sell their junk on eBay, and yes, watch adult videos on Sex.com.

(Speaking of Sex.com, considered the first-of-its-kind pornography website, journalist Kieren McCarthy wrote a brilliant account of the legal battle that was fought over this coveted Internet domain. The title of the book alone is a riveting read: "One Domain, Two Men, Twelve Years, and the Brutal Battle for the Jewel in the Internet's Crown.")

Today, those who use the Internet are faced with ethical questions few would have anticipated in 1994. Back then, the only bookmarks in most Web browsers (Internet Explorer wasn't released until 1995) were Yahoo and, you guessed it, Sex.com. Few could have predicted these ethical quagmires because even those of us watching what is unfolding before us can hardly comprehend the magnitude of what is at stake. With each passing day and each new Big Tech platform update, the competencies of Artificial Intelligence (AI) progress "further and faster" than anyone thought possible, and suddenly we find ourselves asking the terrifyingly dystopian question:

Should computers be allowed to make ethical decisions?

I use the phrase "further and faster" because that is exactly the plea that former UK Prime Minister Theresa May made to technology platforms in 2017, effectively placing national and international security upon the shoulders of these giants. "Industry needs to go further and faster in automating the detection and removal of terrorist content online, and developing technological solutions which prevent it being uploaded in the first place," May said at a terrorism prevention panel at the UN in 2017. In the four years that followed, terrorism prevention and modern content moderation practices became a top priority for executives at Facebook, Google, Amazon, Microsoft, and Twitter. Under direct orders to go "further and faster," the ethics and cultural liabilities of certain decisions were perhaps overlooked at best, or deliberately swept under the rug at worst. Effects of excessive social media use on developing brains? Disinformation campaigns disguised as scientific think tank research? The delegation of nuanced decision-making to computer algorithms and AI? Further and faster these giants went until September 2021, when a lone protestor came forward to stand in their way. Like Tank Man in Tiananmen Square on June 5, 1989, former Facebook employee turned whistleblower Frances Haugen stood up and said: "No more."

I write this story with three primary intentions:

  1. I aim to discuss some of the legal and technical history of content moderation, and bring to light one of the most prominent shortcomings in any model that relies on AI: "context blindness."
  2. I aim to summarize some of the revelations brought to the world by a brave woman, Frances Haugen, who chose to stand in opposition to one of the most powerful companies in the world, not in an attempt to shut down the platform, but to argue, among other things, that self-regulation of content and media via ultra-complex and generally inaccessible technology can be dangerous. This leads to a second prominent shortcoming of algorithmic systems, which is the problem of transparency.
  3. I aim to present a case for pushback against the adoption of certain work-in-progress technology models by large social media and publishing platforms for the purposes of content moderation. These models, while potentially valuable under strict human supervision, cannot be accurately relied upon independently to moderate and adjudicate user-generated content without violating long-standing rights to speech and expression online, particularly in an evolving society that, through public lockdowns and social distancing protocols, is forced to exist and function en masse within virtual communities.

Parts of this conversation may read like an academic essay, but that is not my goal. Academia largely epitomizes the point I aim to make in the latter half of this discussion, which argues that inaccessible, overly technical dialogue can't be relied upon to convey information to a wide audience. This isn’t to say that academic journals are not valuable; I don’t believe this, and my colleagues at the GWU would have my head for implying such a thing. These resources and outlets are invaluable to their respective fields. But when weighty research around polarizing topics affects the well-being and livelihood of millions, and it isn’t made accessible at a broad level, there comes a point when it is easy to draw conclusions of corruption, concealment, and misinformation, whether that’s the story or not. Many comparisons have been made between the current state of affairs at Facebook and the Big Tobacco revelations in the late 1990s. Whether those comparisons are valid is still up for debate. What we do know is this: the Big Tobacco companies paid to conceal crucial information that was not made available to the audience it affected, and if Facebook is found to have made similar decisions, they will face the consequences of a truly great injustice.

Part 1: Content Moderation

The Laws of Online Discourse

I’ve written a bit about online speech in the past, and I’ll link those two stories below. The former takes a look at how this platform, Vocal, goes about drafting and enforcing its Community Guidelines. The latter is a brief look at how communication laws in the United States have shaped modern content moderation practices.

Let's briefly recap what these laws look like in the U.S., as well as comparable laws around the world.

In the U.S., online discourse is governed by the Communications Decency Act (CDA). By the way, here’s a fun fact about the CDA: it was passed in 1996. Do you know what else was happening in 1996? You guessed it, lots of people were watching porn on Sex.com. That’s right, the law that governs our free speech online was introduced in an effort to stamp out “indecency” and “pornographic material" on the Internet.

[Disclaimer, it didn’t work.]

The CDA is a unique piece of legislation in that it releases U.S.-based platforms from liability for user-generated content (UGC). Other countries have vastly different legislation governing online speech. In 2017, Germany passed the controversial Network Enforcement Act ("Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken," or "NetzDG"), which imposes costly fines on platforms that do not take immediate or near-immediate action against uploaded material that falls under vague umbrella descriptors like "clearly illegal content," with little further explanation. France codified similar legislation in 2020 with a bill simply titled "Lutte contre la haine sur internet" ("Fight Hatred on the Internet"). Two of the strictest countries in the world are Australia and Thailand, where platforms and their executives risk crippling fines and even jail time for not establishing proactive measures to prevent dangerous and inappropriate content from being published. However, similar to Germany, these measures are legislated via staggeringly hard-to-interpret rules, such as Australia's ban on "abhorrent violent material."

These Internet intermediaries, a category that includes both platforms and Internet Service Providers (ISPs), have reached a point in some countries, especially the U.S., where their power is viewed as "devolved law enforcement," as EU Advocacy Coordinator Joe McNamee put it in 2011. In other words, these platforms have an enormous amount of power to dictate what conversations can and cannot take place. For context, 2011 was the same year that Osama bin Laden was killed, Castro resigned, and the last of the U.S. troops in Iraq were flown home. In light of these and other highly polarizing talking points, McNamee was certainly justified in his concerns over free speech in online spaces. Since then, platform self-regulation has evolved considerably, and it will likely continue to evolve as platforms struggle to find the balance between human and algorithmic moderation of content.

The Technology at Work

I am not a programmer, so my knowledge of code is limited, and I say this once again to affirm that pertinent information should not be inaccessible to anyone who wants to understand the technology that affects our day-to-day lives.

Algorithms date back to the origin of the Web. They act automatically, and most importantly, they allow platforms to scale, since the energy, supervision, and cost required to sustain a series of actions via constantly running computers is far less than the human power that would be necessary to maintain the same output. In a piece published in the Columbia Law Review in 2018, Jack Balkin quipped that algorithms "do not have families, and they do not take coffee breaks." Circling back to Facebook, we see a perfect example of a platform that was largely moderating content reactively via a bloated team of unhappy moderators working in restricted facilities in Texas and Dublin. As the 2010s progressed, Facebook was slowly falling behind in its efforts to scale its moderation as society moved more and more into online spaces. Within these spaces, people maintained their everyday practices of conversation, commerce, and, as always, sexual activity, but now it was all happening virtually. These spaces came to be known as "quasi-public spaces," that is, public arenas that exist solely online.

At the onset of the Covid-19 pandemic, as more and more of our public spaces became quasi-public spaces, social media and UGC platforms began to garner more and more criticism for their methods of adjudicating these venues. In the absence of many real-life spaces, these quasi-public spaces were all that was available to the majority of people, making their user experience (UX) and overall safety much more pressing concerns. With this transition to online-exclusive discourse came a dire need for platforms to set boundaries and moderate content in order to ward off undesirable and illegal activity. Between 2020 and 2022, every platform from Facebook to OnlyFans faced the challenge of scaling its moderation to handle a steep influx of content. Eventually, the platforms with the most resources at their disposal began turning to AI.

Algorithms do not have families, and they do not take coffee breaks.

- Jack Balkin

It would be woefully uninformed of me to imply that the use of AI to moderate content began in March 2020. Facebook and Google had been discussing and experimenting with this concept for years prior, in earnest since about 2016. Through that experimentation, the algorithms they developed have come to rely on "hash" technology, a method of automatic detection that scans videos, images, and textual content for unique digital fingerprints called hashes. These automated detection systems can flag everything from a gun in a photo to a racial slur in a story. Having this data locked and loaded in an algorithm helped shift moderation best practices from a reactive process to a proactive one, and this was key. With human moderation, a post containing an uncensored video of a violent murder might exist online for hours or even days before it is reported, assessed, and taken down, putting tens of thousands of innocent viewers at risk in that window. With AI, that window can be reduced to milliseconds, which is the primary advantage of this model.
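
To make the hash-matching idea concrete, here is a minimal, illustrative sketch in Python. It uses exact SHA-256 digests as a stand-in for the perceptual hashes (such as PhotoDNA or PDQ) that production systems rely on, and the "known bad" database is a hypothetical placeholder.

```python
import hashlib

# Hypothetical database of fingerprints for content that moderators have
# already reviewed and banned. Production systems store perceptual hashes
# (e.g., PhotoDNA or PDQ signatures) so re-encoded or lightly edited copies
# still match; plain SHA-256 is used here only to keep the sketch short.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"previously banned video bytes").hexdigest(),
}

def fingerprint(payload: bytes) -> str:
    """Compute the digest that serves as this upload's digital fingerprint."""
    return hashlib.sha256(payload).hexdigest()

def should_block(upload: bytes) -> bool:
    """Proactively reject an upload that matches known banned content."""
    return fingerprint(upload) in KNOWN_BAD_HASHES

# The check runs in milliseconds at upload time, which is the whole point:
# matched content never becomes publicly visible in the first place.
print(should_block(b"previously banned video bytes"))  # True  -> blocked
print(should_block(b"a brand-new holiday video"))      # False -> published
```

The design point is that comparing a fingerprint against a database is cheap enough to run on every single upload, which is what makes proactive moderation possible at the scale these platforms operate at.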

The first problem, however, is that machine learning, despite being a novel tool, can only be so accurate without the preeminent advantage of being, well, human. Some studies refer to this shortcoming as "context blindness," and it is the most important fallibility in algorithmic content moderation. One of the most salient essays on this subject is by lecturer and researcher Thiago Dias Oliva, who has studied content moderation and online expression extensively over the last few years. In a 2020 issue of Sexuality & Culture, a Springer interdisciplinary journal, Dias Oliva looked at risks toward LGBTQ voices online, and whether efforts to combat hate speech could errantly target members of marginalized communities using reclaimed rhetoric and "mock impoliteness" to build rapport and solidarity with like-minded people. This research, although academically structured and heavy in nature, is a must-read for anyone interested in the nuances of language and machine learning in these contexts:

This paper uses ‘Perspective’, an AI technology developed by Jigsaw (formerly Google Ideas), to measure the levels of toxicity of tweets from prominent drag queens in the United States. The research indicated that Perspective considered a significant number of drag queen Twitter accounts to have higher levels of toxicity than white nationalists.
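
For readers curious what "measuring toxicity" looks like in practice, below is a minimal sketch of the kind of query the study describes, using Perspective's publicly documented comments:analyze endpoint. The API key and sample text are placeholders, and the exact score returned will vary with the model version.

```python
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder credential
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(text: str) -> float:
    """Return Perspective's TOXICITY score (0.0 to 1.0) for a piece of text."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]

# A reclaimed in-group phrase and an overtly hateful one can come back with
# surprisingly similar scores; that gap is the "context blindness" the
# research is pointing at.
print(toxicity("sample tweet text goes here"))
```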

Essentially, in their current form, these models have a glaring inability to comprehend contextual irregularities. Other researchers have pointed out various loopholes in algorithmic logic, such as adding an offsetting word like "love" to a hateful rant to decrease its toxicity score, or replacing letters with similar-looking keyboard characters to disrupt hash scanning. AI is simply not at a point where it can be relied upon to arbitrate content free of human supervision without great risk of misattribution. Facebook alone has revealed this truth time and time again.
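
As a rough illustration of that second loophole, the sketch below shows how swapping in look-alike characters slips past a naive keyword filter, and how a simple normalization step (a tiny hand-rolled mapping here, not a production confusables library) claws some of that detection back. The blocklisted term is a placeholder standing in for real flagged words.

```python
# Naive keyword filter versus a simple look-alike-character evasion.
BLOCKLIST = {"slur"}  # placeholder term

# Tiny, illustrative mapping of look-alike characters to ASCII; real systems
# use much larger Unicode "confusables" tables.
CONFUSABLES = {"0": "o", "1": "l", "3": "e", "$": "s", "|": "l"}

def naive_flag(text: str) -> bool:
    """Flag text only if a blocklisted term appears verbatim."""
    return any(term in text.lower() for term in BLOCKLIST)

def normalize(text: str) -> str:
    """Map look-alike characters back to their ASCII counterparts."""
    return "".join(CONFUSABLES.get(ch, ch) for ch in text.lower())

original = "that slur is unacceptable"
evasive = "that $lur is unacceptable"

print(naive_flag(original))            # True  -> caught
print(naive_flag(evasive))             # False -> evasion succeeds
print(naive_flag(normalize(evasive)))  # True  -> caught after normalization
```

Notice, too, that the normalized filter would also flag the first sentence, which is someone condemning the slur rather than using it; that is context blindness in miniature.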

Someday, however, the technology at play will reach (or very nearly reach) a threshold of understanding comparable to that of a human moderator. We know this because machine learning is still evolving and being trained to do just that: think, read, and react like a human. Skeptical? I was too, but then I asked Siri, and she confirmed it to be true. In fact, she said she doesn't really need me in her life anymore. Frankly, I saw this coming.

We've identified the main problem with AI serving as a content adjudicator: "context blindness." Eventually, given the trends and significant advancements in technology, that issue will be resolved. Unfortunately, this leads to another problem, which is at the heart of our conversation today.

Part 2: The Whistle-blower, Big Tech, and the Problem of Transparency

"Facebook and Big Tech are facing a Big Tobacco moment," Senator Richard Blumenthal said in his opening remarks at the 2021 congressional hearing featuring Frances Haugen, the Facebook whistle-blower who leaked thousands of documents highlighting Facebook's ethical shortcomings in building and maintaining its social media empire.

Why the comparison to tobacco companies? Smoking, of course, is dangerous and lethal. There is no doubt in our minds about this today, but this was not always the case. We now place the blame for this longstanding ignorance at the feet of a powerful group of men who represented "Big Tobacco" in a campaign of carefully crafted denial, often referred to as "Operation Berkshire." Serving as a defense against all anti-smoking legislation in the 1990s, these representatives of Philip Morris, Imperial Tobacco, and others spent a fortune to hide, dismiss, and otherwise cover up evidence of smoking's harms, and they manipulated public perception of smoking at the expense of countless lives. This scandal was kicked around as a conspiracy theory until 1998, when nearly 35 million pages of previously confidential documents were brought to light, confirming the dark rumors to be true. According to the WHO's introduction to the published edition of these documents, dubbed "The Tobacco Industry Documents," the release revealed "the hidden face of the tobacco industry." In short:

[These documents contain] letters and memos discussing global and local plans to counteract tobacco-control forces, and ways to confuse the public about the evidence showing the great damage tobacco does to health.

Blumenthal concluded his opening remarks in Haugen’s congressional testimony by asserting that, much like the Big Tobacco companies in the 1990s, Facebook appears to have “misled the public, and investors, and if that’s correct, it ought to face real penalties.”

This introduces the second problem with algorithmic content moderation, which is the problem of transparency and the inaccessibility of pertinent information. “Almost nobody outside of Facebook knows what is happening inside Facebook,” Haugen stated emphatically in her testimony. She went on to discuss in more detail the ethics and efficacy of Facebook’s artificial intelligence systems:

How is the public supposed to assess that Facebook is resolving conflicts of interest in a way that is aligned with the public good if the public has no visibility into how Facebook operates? This must change. Facebook wants you to believe that the problems we’re talking about are unsolvable. They want you to believe in false choices. They want you to believe you must choose between a Facebook full of divisive and extreme content and losing one of the most important values our country was founded upon: free speech.

Perhaps no word has entered our common vernacular around technology and Big Tech over the last five years more than the word "transparency." As a society, we have a storied history of both demanding and being provided with transparent glimpses into a process or system, and often these glimpses reveal more than what was previously known or reported. The food industry, the vehicle manufacturing industry, and so many others have at one time or another been analyzed for unethical practices and commercially driven decisions that directly impacted their consumers, who were not given access to all the information necessary to either condone or condemn the process or product they were otherwise endorsing with their funds and their time.

In her testimony, Haugen spoke often of Facebook's decision to use engagement-based rankings and "meaningful social interaction" (MSI) metrics to push content onto its consumers via algorithms. The average user has no idea how to interpret, alter, or revoke these invisible forces, which often leads to an unsanctioned flow of potentially alarming material in news feeds, material that retains users by triggering and captivating them with polarizing content. This has fed into what Facebook referred to in leaked documents as "problematic use" of the platform, which Haugen said could otherwise be called "addiction to Facebook."
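
To give a rough sense of what "engagement-based ranking" means mechanically, here is a deliberately simplified, hypothetical sketch. The signals and weights are illustrative inventions, not Facebook's actual MSI formula, which reportedly combines a vast number of signals; the point is only that invisible weights decide what rises to the top of a feed.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_likes: float      # model-estimated engagement signals
    predicted_comments: float
    predicted_reshares: float

# Hypothetical weights: comments and reshares (the "meaningful" interactions)
# count for far more than passive likes. Users never see these numbers.
WEIGHTS = {"likes": 1.0, "comments": 15.0, "reshares": 30.0}

def engagement_score(post: Post) -> float:
    """Score a post by predicted engagement rather than recency or accuracy."""
    return (WEIGHTS["likes"] * post.predicted_likes
            + WEIGHTS["comments"] * post.predicted_comments
            + WEIGHTS["reshares"] * post.predicted_reshares)

posts = [
    Post("calm family update", predicted_likes=40, predicted_comments=2, predicted_reshares=1),
    Post("outrage-bait rumor", predicted_likes=25, predicted_comments=30, predicted_reshares=20),
]

# The polarizing post wins the top feed slot despite earning fewer likes.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(round(engagement_score(post), 1), post.text)
```

Because the weights live entirely on the platform's side, the user scrolling the resulting feed has no way to see, let alone contest, why the second post outranks the first, which is precisely the transparency gap Haugen describes.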

Similarly, Haugen referred multiple times to a need to reform Section 230, the crux of the CDA, in order to hold companies ethically and financially responsible for decisions related to their algorithms. It is worth noting that Haugen did not call for a full repeal of Section 230. As we can see from UGC platforms like YouTube, Medium, and others, the breadth of user-generated content knows no limits, so holding ISPs personally and financially accountable for content posted by users, as we see in countries like Australia and Thailand, would foster a system that validates and rewards censorship. In light of this, Section 230 is an invaluable piece of legislation, and any reform must retain that protection for host platforms if we want to maintain the integrity and freedom of expression that is uniquely enjoyed in the United States. However, as Haugen argued, reforming Section 230 (which, again, is legislation that dates back to the 1990s) to account for evolving technology is a valid ask, and it is overdue.

AI that governs what users see on their social media news feeds, as well as algorithms that make decisions about hate speech and misinformation independently of human supervision, should offer the same transparency that has been required of other industries. However, when it comes to conversations around these complex technologies, few are able to understand the ethical implications of a line of code, and of those few, many are on the payroll of companies whose interests and profits are at stake. When Monsanto and Tyson were exposed for their environmental and animal abuse practices, the response and subsequent regulation were not dictated by a niche field evaluating the ethics of the industry in isolation. The public at large had the power to hold agribusiness accountable in response to an outcry for reform sparked by Food, Inc. and other revelatory reporting. Modern technology is unique in that, while the issues are clear and on some levels even glaring, particularly now thanks to Haugen's testimony, the industry remains a community unto itself, occupying an isolated position of knowledge and power that is largely inaccessible to the general public.

Part 3: Where Do We Go From Here?

So how can the tech industry be reformed and held to ethical standards? Just as I have aimed in this conversation not to hide any thoughts or revelations behind a guise of academia or isolating jargon, any legislation introduced in response to purported ethical violations must be made transparent to everyone immediately, and, if questioned, must be patiently explained and made clear even to those without industry knowledge. This applies to both algorithmic news feeds and the use of AI in content moderation.

This is why I don't support the use of artificial intelligence within the content moderation sub-industry until the technology can be trained to account for context blindness and avoid the widespread risk of misattribution toward marginalized communities. Additionally, platforms should wait until the information behind the technology can be explained in a way that makes it less opaque to its user base. This could mean years of public information seminars and panels. It could require countless hours of interviews where those who designed these systems can present models, answer questions, and proceed only with the approval of those who will be adopting the technology into their lives. It will certainly entail a strong push for more complete transparency in non-English-speaking communities, something that has come to light in recent weeks as one of Facebook's most egregious shortcomings.

Will this be inconvenient for Big Tech and other popular outlets? Of course. Will it likely result in financial repercussions in the short- and medium-term? Yes, it will. However, we have reached a point where much more is at stake than was previously realized, and for that lack of transparency, companies like Facebook will have to pay a price.

At the onset of this conversation, I compared Frances Haugen to Beijing's Tank Man, who bravely stood up in support of pro-democracy demonstrations in Tiananmen Square. What some may not know is that the identity of Tank Man was allegedly confirmed by a UK tabloid in later years. The Sunday Express claimed the protestor was a 19-year-old student named Wang Weilin, and it has been reported that he was arrested by Chinese authorities in the days following the encounter. For his actions, Wang faced a singular political repercussion: for attempting to subvert the People's Liberation Army, he was officially charged with "political hooliganism" by the Chinese government.

Frances Haugen, like any whistle-blower, put her personal safety at risk by coming forward. It takes a very brave person to put their life and career on the line for the sake of truth and transparency. I call it bravery; some may call it political hooliganism. Either way, the technology and social media industries are approaching a watershed moment. Much like public perception of Big Tobacco was irreversibly altered by a period of revelation and disclosure, our views of Facebook and Big Tech platforms may be quite different in a few short years. We shall see.
