
Artificial Intelligence Is a Threat to Society

By Don Byrd*

This article was first issued by the Toda Peace Institute and is republished here with their permission.

FLORIDA, United States | 6 December 2023 (IDN) — I have spent decades working on the fringes of artificial intelligence, and sometimes within its bounds. Currently, I co-chair a task force at Braver Angels, one of the most effective organizations in the United States working to reduce the toxic polarization of society.

Many things that AI is being used for now or is likely to be used for soon bother me. Some actually scare me, and I hope they scare you! Consider three facts:

(1) A “deepfake” is an AI-generated audio or video clip made to pass as genuine. Deepfakes can be used for scams or political dirty tricks, or to create pornography. An example that has actually appeared on social media: “I actually like Ron DeSantis a lot,” Hillary Clinton appears to say in a video. “He’s just the kind of guy this country needs, and I really mean that.” And pornographic videos in which one person’s face is pasted onto another’s body are becoming increasingly common.

(2) Both the U.S. and China appear to be scrambling to field AI-controlled weapons of some kind. This includes serious consideration of lethal autonomous weapons (LAWs), systems that can decide on their own to kill people as opposed to merely destroying enemy “assets” (drones, railroad tracks, etc.).

(3) The developers of “large language models” (LLMs) like ChatGPT generally add features intended to keep them from making harmful statements, whether statements that reflect bias or ones that convey dangerous information; but over and over again, loopholes have been found in these “guardrail” features, often within minutes of an LLM’s release to the public. Imagine a terrorist exploiting such a loophole to learn how to create a deadly new pathogen whose symptoms aren’t visible for a week, giving it plenty of time to spread.

Trust among citizens

Trust among citizens is an essential ingredient of democratic societies, and AI is already reducing it. Early this year, the well-known AI researcher and critic Gary Marcus said, “I think we wind up very fast in a world where we just don’t know what to trust anymore. I think that’s already been a problem for society over the last, let’s say, decade. And I think it’s just going to get worse and worse.” Audio and video deepfakes are one way trust is being reduced; even those that appear harmless blur the boundary between the real and the artificial.

But there are several other threats, such as disinformation campaigns using old-fashioned text; social media algorithms that recommend material expressing progressively more extreme and polarizing views; and so-called “hallucinations”, a surprisingly common phenomenon in which LLMs confidently assert something that is flatly wrong. In her article “How far will we let AI go?”, Michelle Williams reports that “ChatGPT made up research claiming guns aren’t harmful to kids”, citing papers in highly respected journals that simply don’t exist. At the same time, advances in computer vision threaten to make ReCAPTCHA and other methods for identifying computers masquerading as humans ineffective. And while “Explainable AI”, which can justify its decisions, has been an active research area for years, don’t bet on explainability becoming common in a year or three.

When asked how AI threatens us, both experts and laypeople generally give one of two answers: that a future AGI—“artificial general intelligence”—would be a serious threat to the survival of civilization or even of humanity, though not in the immediate future; or that AI is already a serious threat to our democracy and/or society. I’m in the second group: AI is a serious danger to democracy and society today. What can we do?

For a variety of reasons, most of the obvious remedies—a pause or freeze in development, licensing, watermarking, etc.—are unlikely to accomplish very much. However, some of the threats to society are greatly magnified by social media. Social media companies already examine postings before disseminating them; requiring them to examine that content differently might help quite a bit. In a short but thought-provoking paper entitled “Addressing the harms of AI-generated inauthentic content”, some well-known disinformation researchers argue:

“Aside from the obvious challenges of preserving free speech, regulation of AI would work only for tools developed by compliant entities. But as AI algorithms and data are open-sourced and computing power gets cheaper, bad actors will build their own generative AI tools without any incentive to comply with regulatory frameworks, such as proposed watermarking standards. Alternative regulatory frameworks need to be explored to target not the generation of content by AI, but instead its dissemination via social media platforms. One could impose requirements on content based on its reach. For instance, before a large number of people can be exposed to some claim, we may require its creator to prove its factuality or provenance.”

But no technology can really solve these problems. In “AI has become dangerous. So, it should be central to foreign policy”, Robert Wright expresses a viewpoint that is especially relevant to the mission of the Toda Peace Institute, and one I agree with:

“[T]here is serious talk in Washington of regulating AI…The AI challenge calls not just for innovative domestic policies but also for a basic reorientation of foreign policy — a change comparable in magnitude to the one ushered in by George Kennan’s 1947 ‘X’ article in Foreign Affairs, which argued for a policy of ‘containing’ the Soviet Union. But this time the adversary is China, and the required redirection is toward engagement, not confrontation. In light of the AI revolution, it is in America’s vital interest to reverse the current slide toward Cold War II and draw China into an international effort to guide technological evolution responsibly.”

AI-amplified misinformation is especially dangerous because so many people these days jump on anything extreme that appears to come from the Other Side. Educating citizens to have a better chance of identifying AI-generated content is important and should help somewhat. But the only real solution I see is to convince enough people to react with skepticism instead of outrage to extreme ideas that appear to come from across The Divide.

What organizations like Braver Angels can do remains to be seen.

Related articles:

Building tech “trust and safety” for a digital public sphere (3-minute read)

A roadmap for collaboration on technology and social cohesion (20-minute read)

*Don Byrd received his Ph.D. in Computer Science from Indiana University in 1984. He is noted for his contributions to music information retrieval—a field he helped to found—and music information systems. He has also worked both inside and outside academia on text information retrieval, information visualization, user-experience design, and math education, among other things. Byrd is the author of the open source music notation system Nightingale. Now retired, he spends some time on music, but more working with Braver Angels (https://braverangels.org), a grassroots national organization dedicated to reducing the toxic polarization of society, where he co-chairs a “task force” to combat AI-driven polarization. [IDN-InDepthNews]

Image: da-kuk/istock

Original link: https://toda.org/global-outlook/2023/artificial-intelligence-is-a-threat-to-society.html

IDN is the flagship agency of the Non-profit International Press Syndicate.
