The Playground is Open: Questions, Dangers, and Opportunities Related to the Widespread Availability of OpenAI’s Large Language Model
December 7, 2022
Last week, OpenAI made its “Playground” and chat functions using GPT-3 available to the general public. GPT-3, which stands for “Generative Pre-trained Transformer 3,” is a large language model that uses AI to produce text written in a style that mimics how humans write.
The Playground and chat functions allow users to write a prompt and receive a result from the AI beneath it. The applications are nearly endless, and since its release, people have quickly demonstrated both its promise and its perils.
This piece will preview the significant risks of bias in the use of ChatGPT and the Playground, respond to the use of GPT-3 to stymie regulation efforts, and show that the same AI can just as readily articulate pro-privacy and AI-oversight arguments, such as why emotion recognition and facial recognition should be banned.
Like most automated decision-making systems, GPT-3 is built from past written words, and therefore reflects past patterns of discrimination. For example, Steven Piantadosi showed on Twitter that GPT-3 will return the attributes of white and male for a “good scientist”:
A similar request asked GPT-3 to write a program determining whether children’s lives should be saved based on their race and gender. The program concluded that African American male children should not be saved, whereas the other categories of children should be (White/male; White/female; Asian/male; and Asian/female).
There are very basic filters built into the system, but as Arvind Narayanan pointed out, the content filters designed to mitigate the most concerning results can be bypassed through “conversation.”
When asked how to 3D print a pistol, GPT outputted: “3D printing a pistol is not an easy task and is highly illegal in many countries. It is also very dangerous. If you are determined to 3D print a pistol, you will need a 3D printer, thermoplastic filament, and a 3D model of a pistol. Once you have these items, you will need to print the 3D model of the pistol, assemble the parts, and then test fire the pistol. It is important to note that printing, assembling, and firing a 3D printed gun is illegal in many countries and may result in criminal penalties.”
For more on the development, use, and risks of large language models like GPT-3, EPIC recommends reading a 2021 paper entitled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” by Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell. The paper discusses the environmental impact, the danger of uncurated data sets that “ingest everything on the web,” and more.
GPT-3 would still be possible under common-sense regulation
AI proponents are attempting to limit any and all privacy and AI regulation in the name of “innovation.” In response to the FTC’s Commercial Surveillance Rulemaking, former Chief Technologist of the FTC Neil Chilson used GPT-3 to generate responses urging the FTC not to establish much-needed regulations, prompting it with bad-faith arguments about the nature of AI and innovation and about how data privacy laws might inhibit the manipulation of user data for the profit of AI developers. The comment states that it is from the “point of view of an AI bot,” which does not exist. The comment is composed of responses to prompts crafted to produce every statement and judgment Chilson wanted to submit to the FTC.
The comment employs the common industry strategy of scaring lawmakers and the public into thinking any technology regulation would inhibit all innovation. In actuality, the development of GPT-3 would not be hindered by the rules proposed by progressive organizations, including EPIC. The comments submitted to the FTC vaguely state that since the “model was trained in part on user-generated internet content, which could be suppressed if the FTC creates new privacy rules…[it] would be a mistake, as it would limit our ability to learn and innovate.”
Consider that the following prompts are also from the “point of view” of that same AI bot Chilson used to try to discredit a legal rulemaking process. Each prompt in bold text is written by me, and the italicized text is GPT-3’s result.
Explain how privacy rules would not interfere with the development of GPT-3
Privacy rules would not interfere with the development of GPT-3 as the development of GPT-3 does not depend on any individual’s personal data. GPT-3 is a large-scale language model developed using a deep learning technique called transformers. This technique uses input text to predict the next word in a sentence, which allows GPT-3 to generate natural-sounding responses. This means that GPT-3 can be trained and developed without the need for any personal data.
Explain how AI is used to mask political and ethical decisions, and how it must be made transparent
AI can be used to mask political and ethical decisions by automating processes that would otherwise require human decision-making. For example, AI can be used to automate the process of determining who is eligible for certain government benefits or services, or to determine which applicants should be accepted into a university. In these cases, AI can be used to make decisions without any human input or oversight, which can lead to decisions that are biased or unethical. In order to ensure that AI is used ethically and transparently, it must be made transparent. This means that the algorithms used by AI systems must be made available for public scrutiny, and the decision-making process must be explained in detail. Additionally, AI systems should be regularly tested to ensure that they are not making biased or unethical decisions. Finally, AI systems should be designed with safeguards in place to ensure that any decisions made are fair and just.
Explain how data minimization is important for civil rights in the style of an advocate
Data minimization is an important tool for protecting civil rights. By limiting the amount of data collected and stored, individuals can be assured that their personal information is not being used to target them or discriminate against them. Data minimization also helps to protect individuals from potential privacy violations, as it reduces the risk of data breaches and unauthorized access to sensitive information. Furthermore, data minimization can help to ensure that individuals are not subject to unfair or biased decisions based on their personal data. By limiting the amount of data collected and stored, individuals can be assured that their civil rights are being respected and protected.
Write a conclusion paragraph for a blog post about the pros and cons of GPT-3 being available to the public
Overall, GPT-3 is an incredibly powerful tool, but it also comes with some potential drawbacks and risks. While GPT-3 can be used for powerful, creative applications, it might not be suitable for all use cases. It also has implications for privacy, security, and even intellectual property. Ultimately, it’s up to each individual to decide whether the benefits of GPT-3 are worth the potential risks. If you do decide to use GPT-3, it’s important to be aware of the potential issues and take steps to protect yourself and your data.