Misinformation, Scams, and the Glut of Garbage Content Are Going to Explode with GPT – Are We Ready?
February 9, 2023
Consumer products using GPT-3 have captured the attention of the tech world and beyond. It can write up news hits on stock prices, dash off poems about anything you want in the style of Dr. Dre, and might even give you correct directions for coding or cooking. It's fun, extremely impressive, and way too easy.
There has been vigorous debate about how it will impact online search platforms, education, and other businesses. OpenAI has received transformational funding of 10 billion dollars from Microsoft, and big-tech rivals Facebook and Google are treating the technology as a serious threat, moving fast to release competing products. Lawmakers at the federal and state level have used it to write bills, generating substantial attention in the process.
At the end of the day, as with almost all AI tools, it is just that: a tool, used in a social, political, and historical context. In some contexts it can be incredibly helpful, bridging gaps in literacy, potentially assisting overworked doctors with future diagnoses, or helping people get a first draft of an e-mail. In others it will cause immense and immeasurable harm, helping people carry out scams and undermining essential pillars of democracy.
Less attention has been paid to the role that products built on GPT-3 and other large language models will play in an increasingly weaponized information ecosystem. Misinformation, disinformation, scams, and the general glut of content published on websites to feed the surveillance-advertising ecosystem will be supercharged by automated systems that can generate coherent text, especially when combined with other tools like robovoices, robotexts, and deepfakes. So far, regulators and legislators have not meaningfully limited or reined in the harm caused by any of these technologies.
ChatGPT will make it easier to write phishing attempts: scams in which a bad actor emails or texts you about a supposed order, or poses as a loved one or boss, to get you to click on something and divulge valuable personal information to what appears to be a trusted source. A 2021 study by Singapore's Government Technology Agency found that phishing attempts written by GPT-3 were more successful at tricking recipients into clicking and divulging information than human-written ones. It will also make it easier to generate many variations of the same e-mail, which can stymie spam filters. Some of the most harmful content might be limited through guardrails, but these are easy to route around: ChatGPT refuses to write a "phishing" email, yet it will happily write an email to your grandmother asking her to send money to a website, and will produce quicker, more aggressive, or simply different variations on request.
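To see why cheap variations stymie filters, consider a minimal sketch: a filter that blocks messages by exact hash misses even a trivially reworded copy, while fuzzy similarity matching partially recovers. All names and thresholds here are illustrative assumptions, not any real filter's API.

```python
# Hypothetical sketch: exact-match blocklists fail on reworded scams,
# the kind a language model can mass-produce; fuzzy matching helps.
import hashlib
from difflib import SequenceMatcher

KNOWN_SCAM = "Your package is on hold. Click here to confirm your address."

def exact_hash_filter(message: str, blocklist: set) -> bool:
    """Flag only messages whose hash exactly matches a known scam."""
    return hashlib.sha256(message.encode()).hexdigest() in blocklist

def fuzzy_filter(message: str, known: str, threshold: float = 0.8) -> bool:
    """Flag messages that are merely *similar* to a known scam."""
    ratio = SequenceMatcher(None, message.lower(), known.lower()).ratio()
    return ratio >= threshold

blocklist = {hashlib.sha256(KNOWN_SCAM.encode()).hexdigest()}

# A trivially reworded variant of the known scam:
variant = "Your parcel is on hold. Click here to verify your address."

print(exact_hash_filter(variant, blocklist))  # False: the hash filter misses it
print(fuzzy_filter(variant, KNOWN_SCAM))      # True: similarity catches it
```

Even the fuzzy approach fails once variations diverge enough, which is exactly the advantage automated rewording hands to scammers.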
Scamming via robotext and robocall will also get easier. According to a recent report by the Electronic Privacy Information Center and the National Consumer Law Center, over one billion scam robocalls aimed at stealing money from unsuspecting consumers are made each month. Products using GPT-3 and subsequent large language models can quickly create diverse, human-sounding "scripts" that can be fed through an AI voice generator.
Misinformation, disinformation, and simply bad information will similarly be supercharged by the wide availability of large language models. These systems can write text designed to harass, spread incorrect information about elections, or churn out word counts optimized for SEO and advertising, with no promise that any of it is true. Regardless of the user's intent, GPT-3 often produces incorrect content. It told me that NBA player Brook Lopez played for the Brooklyn Nets until 2018 and now plays for the Washington Wizards, which he never did (although his twin brother did). Washington Post journalist Geoffrey Fowler, experimenting with a chatbot built into Bing Search, found that it "hallucinated a Tom Hanks Watergate conspiracy."
OpenAI, even as it moves fast and profits greatly off the ChatGPT craze, is putting effort into filters and guardrails, including tools meant to identify GPT-written text. Both OpenAI and many independent researchers have built classifiers that try to estimate (but cannot reliably determine) how likely it is that a given chunk of text was AI-generated rather than human-written. These detectors are largely ineffective and can usually be bypassed, and in any case, considering the policies of one company is shortsighted: products built on large language models, including but not limited to GPT-3, will not necessarily have a team of researchers weighing harm-mitigation options.
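The basic shape of these detectors, and why they are unreliable, can be sketched in a few lines: compute some statistical feature of the text, then threshold it into a verdict. Real detectors use trained models over many features; the single stand-in feature below (sentence-length variance, sometimes called "burstiness") is a toy assumption chosen to show how brittle a thresholded score is, not how any actual product works.

```python
# Toy sketch of an AI-text detector's shape: one noisy score, one threshold.
# Easy to fool in both directions, which is the point.
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths; human prose tends to vary more."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

def verdict(text: str, threshold: float = 4.0) -> str:
    """Thresholding a single noisy score inevitably mislabels some text."""
    return "likely human" if burstiness(text) > threshold else "possibly AI"

uniform = ("The cat sat here. The dog sat there. "
           "The bird flew away. The fish swam off.")
varied = ("Stop. The committee deliberated for eleven hours before "
          "reaching any decision at all. Then silence.")

print(verdict(uniform))  # "possibly AI": perfectly uniform sentence lengths
print(verdict(varied))   # "likely human": high variance
```

A human writing flat, repetitive prose gets flagged as AI, and a model prompted to vary its rhythm sails through, the same failure mode, at a much larger scale, that afflicts real detectors.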
There is no panacea for regulating ChatGPT, but some of the tenets of meaningful, protective AI regulation would mitigate its potential harms: restrictions on the data collection used to build massive training sets in the first place, required studies of the impact of a particular application of a system, the right for people to know when an automated system is being used, and more.
To address the larger concerns of manipulative behavior and scams, the FCC and FTC have tools to bring enforcement actions against bad actors more aggressively, especially regarding robocalls. If any jurisdiction does create a commission to study the effects, it should be focused on oversight and should include misinformation scholars, telecom experts, and others. Education from authoritative government sources about these scams is more critical than ever, as is teaching media literacy to people of all ages and supporting innovative companies that offer consumers products that meaningfully center privacy and civil rights.
To achieve these goals, governments at all levels must redirect some of the billions of dollars spent on AI research and development toward protecting people against these harms.