Image: Phunkod/Shutterstock

Building Tech “Trust and Safety” for a Digital Public Sphere

Viewpoint by Lisa Schirch 

The Toda Peace Institute originally published this article, and it is being republished here with their permission.

Notre Dame, Indiana, USA (IDN) — The first annual “Trust and Safety” conferences took place this week in Palo Alto, California. Tech platform staff from Zoom, Meta, TikTok, and DoorDash met with researchers studying how to reduce harmful content on social media platforms.

Harmful content on tech platforms started small; now, the internet has become a superhighway for abuse and exploitation, violent content, and hateful commentary from users ranging from extremists to the general public. Political entrepreneurs began using platforms to spread propaganda while reports surfaced of governments in Sri Lanka and Myanmar using Facebook to spread disinformation and hate speech, leading to mass violence against Muslims.

By 2020, the University of Oxford Programme on Democracy and Technology warned of “industrialized disinformation” by over 80 countries with cyber armies using computers to spread computational propaganda. Researchers across all regions of the world report social media playing a key role in further polarizing already divided societies, undermining public trust in democratic institutions, and increasing public support for autocrats.

It is not uncommon to hear people refer to the weaponization of social media, or to describe platforms as weapons of mass distraction and mass destruction. Yet at the same time, there is growing public anger at tech content moderation, particularly among conservative audiences who complain of censorship.

The Dawn of Tech Trust and Safety

Over the years, tech companies attempted to address harmful content by adding new layers to what they refer to as their “trust and safety” infrastructure. They wrote community guidelines on content, hired dedicated content moderators, and created new AI to detect harmful content. They hired new Trust and Safety teams to review and anticipate potential harms from new product features. They built partnerships with governments, the UN and civil society groups to improve content moderation. 

But the scale of the problem of harmful content fomented a “techlash”. Tech staff protested from within the companies over policies, algorithms, and products they believed caused harm. Some tech staff resigned in protest. Some set up new initiatives such as the Center for Humane Technology, the Integrity Institute, Zebras Unite, All Tech is Human, and other organizations devoted to improving tech safety.

Universities sprouted new initiatives to address the crisis of technology, polarization, and democracy. These include the Stanford Internet Observatory, the University of Toronto’s Citizen Lab, Harvard’s Berkman Klein Center for Internet and Society, and many more.

The Trust and Safety Professional Association (TSPA) and the Trust and Safety Foundation grew out of this ecosystem of concern. TSPA is a professional association of tech staff who develop and enforce principles and policies that define acceptable behavior and content online. The Foundation holds a wider mission of improving society’s understanding of trust and safety issues and the inherent tradeoffs between free speech and harmful content.

The September 2022 “TrustCon” and the adjacent Trust and Safety Research Conference at Stanford University’s Internet Observatory are signs of progress amidst the rapidly worsening landscape of harmful content on tech platforms.

Tech Platforms Social Engineer the Public Sphere

Tech staff face an impossible task in dealing with humanity’s worst demons all day, every day. The job is traumatizing. Content moderators sift through thousands of pieces of content each day, viewing abuse, torture, and suicide.

In conversations and interviews with many tech staff who work on content moderation, I have heard two main metaphors. First, tech staff often lament that their platforms are “mirrors” of society. The platforms do not create harmful content. Users create that content. Second, Trust and Safety staff compare their efforts to reduce harmful content with the never-ending game of “Whack-a-Mole”.

These metaphors obscure the role of tech platforms themselves in amplifying humanity’s best angels and our worst demons. Tech platforms are not neutral mirrors. Through their affordances, they create opportunities for scaling the best and worst of humanity.

The Challenge of Trust

Stanford Law School professor Evelyn Douek argues that while most discussion focuses on making platforms safer for users, companies also need to build trust with users, journalists, academics, and government regulators concerned about how toxic content is affecting society.

Some of the most vocal calls for improving trust come from former tech insiders. Former Facebook Chief of Security Alex Stamos now runs the Stanford Internet Observatory (SIO). Stamos teaches courses on Trust and Safety to computer science majors which include a riveting introduction to the many unintentional harms that platforms like Facebook cause.

Most tech platforms were created by computer engineers, not sociologists, political scientists, psychologists, or ethicists. Universities that teach computer science need to mandate courses in tech ethics and trust and safety. Stamos teaches students about cases he dealt with involving child sexual abuse and users who died by suicide.

Del Harvey, former Twitter vice president of trust and safety, is often described as Silicon Valley’s chief sanitation officer. At this week’s conference, Del Harvey offered a bigger vision: “The goal of Trust and Safety should be to work in the service of health.” Drawing on public health lessons, Harvey noted that it is not enough to just treat the illness of harmful content. Tech companies are also responsible for promoting healthy conversations. People want to be on platforms where they encounter positive, healthy conversations, not toxic content.

Ethan Zuckerman, designer of the dreaded pop-up ad and leading thinker on the future of digital communities, offered the closing session at the Trust and Safety Conference. Zuckerman rightly asserts that “democracy cannot exist without a healthy public sphere.” The big tech companies emerging from Silicon Valley have been built on an ad-based profit model. Zuckerman notes that we get what we paid for with these social media platforms and search engines. If we want a healthy public sphere, we need to focus on governance of the new digital public sphere. A focus on content moderation will not get us away from the sucking sound of democracy going down the drain.

Governance, not Content Moderation

Douek, Stamos, Harvey, and Zuckerman each offered a compass for new directions to move beyond the current focus on how to remove harmful content. A focus on content moderation looks at symptoms rather than the system. The future of democracy and global security is at stake.

Tech companies are building the new public sphere: the places where people go to talk with one another. They have more power than governments or courts to determine what people are allowed to say in the new public square that social media platforms offer.

The first-ever conference on tech Trust and Safety was not filled with experts in global security or even cybersecurity. The people in charge of this weapon of mass destruction are staff at tech companies. Amongst the tech company staff at the conference, I met a woman from the UN and a handful of government representatives from New Zealand and the UK.

The Trust and Safety conversation needs to be wider, deeper, and larger. More of us need to pay attention. It isn’t fair to simply direct anger at tech companies. Tech companies are daunted by the scale of the harm in front of them. There are no easy or quick answers to how we reduce harmful content.

*Dr Lisa Schirch is Research Fellow with the Toda Peace Institute and is on the faculty at the University of Notre Dame in the Keough School of Global Affairs and Kroc Institute for International Peace Studies. She holds the Richard G. Starmann Sr. Endowed Chair and directs the Peacetech and Polarization Lab.

A former Fulbright Fellow in East and West Africa, Schirch is the author of eleven books, including The Ecology of Violent Extremism: Perspectives on Peacebuilding and Human Security and Social Media Impacts on Conflict and Democracy: The Tech-tonic Shift. Her work focuses on tech-assisted dialogue and decision-making to improve state-society relationships and social cohesion. [IDN-InDepthNews – 21 October 2022]
Original link: https://toda.org/global-outlook/global-outlook/2022/building-tech-trust-and-safety-for-a-digital-public-sphere.html

IDN is the flagship agency of the Non-profit International Press Syndicate.

We believe in the free flow of information. Republish our articles for free, online or in print, under Creative Commons Attribution 4.0 International, except for articles that are republished with permission.