Before the
FEDERAL COMMUNICATIONS COMMISSION
Washington, DC 20554
In the Matter of
Implications of Artificial Intelligence Technologies on Protecting Consumers from Unwanted Robocalls and Robotexts
CG Docket No. 23-362
Reply Comments of
National Consumer Law Center on behalf of its low-income clients
Consumer Federation of America
Electronic Privacy Information Center
Public Knowledge
In support of Comments
filed on behalf of 26 national, state and local advocacy and legal aid programs
November 15, 2024
Submitted by:
Margot Saunders
Carolyn Carter
National Consumer Law Center
1001 Connecticut Ave, NW
Washington, DC 20036
Reply Comments
I. Introduction and Summary
These Reply Comments, written by the National Consumer Law Center (NCLC), on behalf of its low-income clients and Consumer Federation of America, Electronic Privacy Information Center, and Public Knowledge, are submitted to respond to the comments of others in the proceeding regarding the Notice of Proposed Rulemaking (NPRM) and Notice of Inquiry (NOI) issued by the Federal Communications Commission (Commission or FCC) on August 8, 2024, relating to the use of artificial intelligence in unwanted robocalls and robotexts.[1] We submit these Reply Comments to reiterate and provide further explanation and justification for our original Comments, which were provided on behalf of these three groups plus 22 other national, state and local advocacy and legal aid programs described in the Appendix.[2]
Our original Comments explained that additional explicit consent, as proposed by the FCC, is not necessary for many calls that use generative AI. The additional in-call disclosure, also proposed by the FCC, is likewise unwarranted for those calls. However, we strongly support the FCC’s proposal to require that prior express consent for some calls that use AI must include a “clear and conspicuous disclosure that the consumer consents to receive AI-generated calls.”[3] For those same calls, we also strongly support the in-call disclosure proposed by the Commission.[4] The characteristic that should trigger the specific consent requirement and the in-call disclosure is the extent to which a call is likely to deceive recipients about its origin or the party responsible for it, or to confuse recipients about whether they are speaking to a real human being. The AI calls that are likely to deceive or confuse recipients are those that are interactive, or in which AI is used to clone a human’s voice to misrepresent the true caller.
In these Reply Comments, we continue to urge the Commission to protect callers with disabilities who use assistive devices that employ AI to generate a mechanized voice, by excluding the voice generated in those instances from the definition of “artificial voice.”[5] Similarly, we reiterate the vehement opposition articulated in our Comments to allowing network-wide AI to listen in on our calls, as considered in the NOI.[6]
In Section II, we describe with more particularity the AI calls that should trigger the explicit consent and in-call disclosures proposed by the FCC, and we explain the dangers to telephone subscribers from interactive calls that use generative AI, from calls in which the voice of a specific human being is cloned, and from calls that falsely purport to be from a particular business or governmental entity.
In Section III, we summarize determinations by other governmental bodies explaining the moral imperative that interactions with AI occur only after truly voluntary consent, as well as the importance of informing call recipients when they are interacting with AI.
Section IV sets out the Commission’s authority under the Telephone Consumer Protection Act (TCPA)[7] to impose requirements for a) explicit prior express consent for calls with interactive AI or when a specific human being’s voice is cloned, and b) the in-call disclosure as proposed in the NPRM.
II. The dangers to subscribers posed by some AI calls justify special rules for prior express consent and in-call disclosures.
A. The level of risk posed by AI varies by type of call.
As we said in our Comments, many of the calls that include an artificial voice generated by AI are simple informational calls informing the recipient of the time for a medical appointment or providing a number to verify the recipient’s identity when attempting to sign on to a website. It makes no difference to the recipient of those calls if the voice relaying the information is a prerecorded voice of a human being, or is generated by AI. In both instances, the words used in those calls are specifically directed by the human administrator. No specific prior express consent is necessary for those calls, and no in-call disclosure is warranted either. As these calls would include a human-like voice that does not pretend to be from a specific person, and the calls do not require interaction between the call recipient and the generative AI, there is no deception involved, and we do not envision potential confusion about the message. The recipient’s prior express consent to receive calls with an artificial or prerecorded voice should be sufficient for these calls.
However, as consumer and privacy groups have repeatedly urged the Commission, all calls that include an artificial or prerecorded voice, not just those specifically identified in 47 C.F.R. § 64.1200(b)(3) (telemarketing calls and calls made pursuant to an exemption from the restriction on prerecorded calls to residential lines), should be required to include an automated opt-out mechanism. Consumers complain about seemingly unstoppable calls from some medical professionals, who place repeated prerecorded appointment reminder voice calls.[8] When asked to stop the calls, many medical offices say that is not possible because the calls are handled by an outside contractor and there is no mechanism to relay an individual’s request.[9] The automated opt-out is especially important for debt collection calls—whether they are interactive or not—because there is often a dispute about whether the consumer requested that the calls stop. An automated opt-out would avoid many of those disputes. Our previous comments in support of this recommendation provide more explanation about the need for this requirement, and provide our recommendations for the rewritten regulation.[10]
Interactive calls. The simple informational calls described above are very different from interactive calls in which generative AI is used to engage in conversations with recipients, ask questions, and respond to recipients’ questions. These calls can confuse recipients and potentially mislead them about their rights. We encourage the Commission to apply its proposed heightened standard for prior express consent to these calls and to require the in-call disclosure described in Paragraph 14 of the NPRM.[11] This means that these calls should be legal under the TCPA only when the recipient (1) has provided prior express consent following a “clear and conspicuous disclosure” that the consent is to receive AI-generated calls, and (2) receives an in-call disclosure at the beginning of each call informing them that the call includes AI-generated technology.[12]
AI calls cloning the voice of a specific human being or from a particular business or governmental entity. A third category is calls that use AI to generate an artificial voice purporting to be from a specific human being (unless prior express consent for a call from that human being was provided), or from a particular business or governmental entity. A call that includes an AI-generated voice of a famous person, or a relative of the recipient, carries a significant risk of deceiving the recipient about the actual source of the call. The same is true of an AI-generated voice purporting to be from a particular business or governmental entity.
In our original Comments, we described with some specificity the confusion that can result from debt collection calls and other interactive telephone calls.[13] In the following subsections, we describe additional threats to consumer privacy and risks of economic loss posed by AI systems, which have been documented in a variety of reports.[14] These threats can be manifested in interactive AI calls, in calls in which AI impersonates a human’s voice, and in calls that falsely purport to be from a particular business or governmental entity.
B. AI can generate hallucinations or fabrications that appear credible but are actually false information, making interactions with AI dangerous.
EPIC’s complaint to the FTC regarding the failure of OpenAI GP, LLC to demonstrate that it meets “established public policy standards for responsible development and use of AI systems”[15] illustrates some of the ways in which AI can be dangerous for consumers in interactive telephone conversations when the consumer is not aware that they are dealing with AI. For example, the complaint describes:
AI confabulations (colloquially referred to as “hallucinations’ [sic] or “fabrications”) occur when generative AI language models like GPT-4 generate semantically credible but ultimately false information in response to a user prompt. Because these large language models function by analyzing the semantic meaning of user prompts and stringing word tokens together from training data to answer the prompt, they can produce confabulations when there are mismatches or ambiguities between user prompts and training data, or when the model has “insufficient, outdated, or low-quality training data” on the topic of the user prompt.[16]
As EPIC notes, these hallucinations or fabrications raise material concerns when a person is dealing directly with AI, especially when the human is not aware that they are interacting with AI. EPIC’s complaint notes that “OpenAI’s ChatGPT has been shown to combine words, names, and ideas that appear semantically connected—but were in fact not connected and therefore erroneous—into false simulacrums of important public records. In one instance, ChatGPT recently ‘cited a half dozen fake court cases while [a lawyer was] writing a 10-page legal brief.’”[17]
AI is trained to produce outputs based on immense amounts of data about millions of people around the world. However, when that data has been improperly collated, the statements that AI produces in reliance on that data can be tainted by the imperfections in the data set or simply wrong. As EPIC notes in its complaint to the FTC: “Because all of OpenAI’s AI products are trained using similar datasets, the biases, exclusions, and negative stereotypes present within biased training data are baked into their models and difficult to effectively remove without retraining the models.”[18] As a result, “the outputs and decisions made by OpenAI’s models are inherently biased. These biases extend beyond facially biased outputs; ‘algorithmic bias [also] occurs when algorithms make decisions that systematically disadvantage certain groups of people.’”[19] The result is that certain groups of people are unfairly—and potentially illegally—penalized.
Consider what might happen when a consumer is contacted by a debt collector by telephone and erroneously believes that they are dealing with a human, when they are actually dealing with generative AI. The AI is trained to ask questions designed to gauge the consumer’s ability and willingness to make payments on the debt, and then to press for immediate payment or promises of future payments. If the consumer is a person of color, and the datasets on which the AI relies trigger discriminatory determinations for people in the category to which the AI has assigned them, the demands made on this consumer are likely to be different, and more punitive, than the demands made on other consumers.
Small businesses are also subject to these risks from AI phone calls.[20] The threats that AI poses to businesses are described in a recent survey of 286 senior enterprise risk executives, who were asked about the top five emerging risks to businesses in 2024; AI was identified as the “top emerging risk for enterprises in the third quarter of 2024.”[21] A recent analysis by Nationwide “found that in the past year roughly one-quarter of small business owners (SBOs) have been targeted by a scam that used generative AI. Of those targeted, most described the attacks as attempted fraud using email, voice or even video impersonations of other business owners or senior-level employees they associate with.”[22]
C. Phone scammers are now using artificial intelligence to steal money from victims with realistic-sounding facsimiles of loved ones.
AI-assisted phone scams have moved from being annoying to being terrifying.[23] “In July, a Georgia mother was sent into a panic after receiving a call that her daughter had been kidnapped, according to a local news report. On the line, she heard what sounded like her daughter’s voice and a man demanding $50,000. In reality, it was an AI-generated voice clone. ‘It just sounded so much like her. It was 100% believable,’ Debbie Shelton Moore told WXIA-TV. ‘Enough to almost give me a heart attack from sheer panic.’”[24]
In addition to traditional phishing tactics, malicious actors increasingly employ AI-powered voice and video cloning techniques to impersonate trusted individuals, such as family members, co-workers, or business partners. By manipulating and creating audio and visual content with unprecedented realism, these adversaries seek to deceive unsuspecting victims into divulging sensitive information or authorizing fraudulent transactions.
In May 2024, the FBI issued a public warning to individuals and businesses about the “escalating threat posed by cyber criminals utilizing artificial intelligence (AI) tools to conduct sophisticated phishing/social engineering attacks and voice/video cloning scams.” The warning noted:
Cybercriminals are leveraging publicly available and custom-made AI tools to orchestrate highly targeted phishing campaigns, exploiting the trust of individuals and organizations alike. These AI-driven phishing attacks are characterized by their ability to craft convincing messages tailored to specific recipients and containing proper grammar and spelling, increasing the likelihood of successful deception and data theft.[25]
According to the FTC, consumers reported losses of more than $2.7 billion to impersonation scams in 2023.[26] The FTC data reflect only reported losses; actual losses, including from texts impersonating banks, are generally considered to be much higher. In its 2023 mid-year report, Robokiller estimated $33 billion in losses from robocall scams in the first half of 2023 alone.[27]
Banks and other industries are also losing substantial funds to scam texts and robocalls. Because of these losses, these industries have encouraged the FCC to establish stronger measures to stop scam messages.[28] When funds are electronically withdrawn from the victim’s account by the scammer, rather than transferred by the victim, the Electronic Fund Transfer Act generally requires the bank holding the victim’s account to replace those funds.[29]
D. Scammers might clone celebrities’ voices and then try to trick victims with fake robocalls or videos from the celebrity.
Experian notes that scammers are using AI to write messages, create images, and generate videos that can enhance their scams. These AI-powered twists may make scams more difficult to detect and increase how often victims are targeted.[30] YouTube videos illustrate the use of AI to imitate Joe Rogan, Warren Buffett, Elon Musk, Tom Hanks, and many others.[31] These deepfakes could just as easily be used in other scam robocalls.
Experian explains that a cloned voice might ask a person to donate to a worthy cause, recommend an investment, or claim to be giving away money to a few lucky people.[32] Generative AI has also already been used in several instances to create malware delivered to email or messaging inboxes.[33]
III. Requiring informed and specific consent is an essential protection for interactions with AI recognized by many United States and international agreements and pronouncements.
Requiring specific consent and an in-call disclosure for these calls, as the FCC has proposed, is essential to comply with recognized policies of the United States government and other bodies regarding interactions with AI and AI’s gathering and use of personal data. For example:
- The United States is a member nation of the Organization for Economic Cooperation and Development (OECD). In 2019, the OECD promulgated the OECD Principles on Artificial Intelligence,[34] which have been specifically endorsed by the United States.[35] A seminal OECD principle is “(ii) to make stakeholders aware of their interactions with AI systems, …”[36] Express consent to receive calls using AI, and notice in the call that it includes AI, are essential to comply with this principle.
- On October 4, 2022, the White House Office of Science and Technology Policy (OSTP) published its Blueprint for an AI Bill of Rights intended to—among other things—guide the deployment of automated systems and protect the rights of the American public.[37] According to the Blueprint, “[d]esigners, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible”[38] and “[c]onsent should only be used to justify collection of data in cases where it can be appropriately and meaningfully given.”[39] This principle applies to interactive AI telephone calls, because the AI is collecting and using data—the person’s statements and questions—to determine its responses.
- On October 30, 2023, the White House published Executive Order 14110, setting out comprehensive guidelines to manage the development, procurement, and use of AI.[40] These guidelines include both restrictions on how federal agencies develop, procure, and use AI technologies and provisions encouraging responsible private-sector development and deployment of AI through federal funding restrictions and federal agency enforcement priorities.[41] A key element of the Executive Order is a mandate that “output from generative AI” be “labeled”[42]—a requirement that has particular urgency in the context of AI-generated telephone calls.
- A 2024 Risk Management Profile on AI and human rights published by the U.S. Department of State pointed out that “When analyzing and documenting potential uses and impacts include unintended downstream harms that may arise, such as infringements on privacy from data collected without consent or data re-use, or chilling effects on freedom of expression or freedom of peaceful assembly and association upon individuals or members of groups.”[43] As explained in our original Comments, AI is being used by debt collectors in ways that include gathering data from those telephone calls and using machine learning to “generate valuable data on communication preferences, response times, and engagement patterns.”[44] Analytics can be used to personalize the content or tone of a collection communication, the mode of communication, or even the specific debt collector assigned to contact that consumer.[45]
- The importance of meaningful consent regarding interaction with AI is emphasized in The Oxford Handbook of Ethics of AI. The book points out that, in the context of data protection and AI, “[d]igital consent has been criticized as a meaningless, procedural act because users encounter so many different, long, and complicated terms of service that do not help them effectively assess potential harms or threats.” It notes that “in a new AI-driven smart world, consent cannot be confused with choice. Consent must be defined by its moral core, involving clear background conditions, defined scope, knowledge, voluntariness, and fairness.”[46]
- A new AI law in Utah requires clear disclosures that generative AI is being used “to interact with a person” in covered transactions, by—among other things—verbally providing that disclosure at the start of the oral exchange.[47]
- According to a recent news article by Bloomberg Law, “it is becoming a best practice for companies to disclose their AI-usage in customer-facing applications and services either within their privacy policies or through other channels. Indeed, many businesses operating chat features . . . now choose to include disclosures as part of the user interaction flow.”[48]
- The FTC flatly prohibits fraudulent impersonation of governments, businesses and their officials or agents.[49] The FCC should—at a minimum—require explicit prior express consent and in-call disclosure for calls that include any impersonation of specific human beings, or falsely pretend to be from a particular business or governmental entity.
Requiring prior express consent and the in-call disclosure when AI is used in an interactive call, or when a particular human being, business, or governmental entity is impersonated by AI, is essential to protect consumers from harm. These protections are already reflected in federal and international standards and in some state laws. The FCC should follow suit.
IV. The Telephone Consumer Protection Act gives the FCC ample authority to require explicit prior express consent and in-call disclosures for certain AI-generated calls.
As explained, we agree with many commenters that simple informational calls that include an AI-generated voice do not need any additional express consent from the called party. We advocate only that TCPA-covered calls that include an AI-generated voice and carry the risk of deception, confusion, or harm to the consumer should require both prior express consent that specifies that AI will be used and the new in-call disclosure proposed in the NPRM.[50] In particular, these protections should be required for interactive calls in which an AI-generated voice will be used to ask questions of, or provide targeted specific information to, call recipients, and for calls that include an AI-generated voice of a specific human or that falsely purport to be from a particular business or governmental entity. This is exactly what the FCC recommended in the NPRM[51]—albeit for a smaller universe of calls—and it aligns with recent FTC rulemaking.[52]
The FCC has ample authority to impose both requirements. In the TCPA, Congress required prior express consent for calls to cell phones when an artificial or prerecorded voice is used.[53] Congress also gave the Commission broad rulemaking authority under the TCPA, providing: “The Commission shall prescribe regulations to implement the requirements of” 47 U.S.C. § 227(b), the section of the TCPA that restricts prerecorded and artificial voice calls to cell phones, other sensitive numbers, and residential lines.
There is no reason that this broad rulemaking authority would not include the authority to implement the requirement of prior express consent, including by defining that term. We urge the Commission to use this authority to rule that for these calls, prior express consent must be informed consent, and that if the consumer is not specifically informed that calls will include interactive AI or that AI will be used to spoof a real person’s voice, there is no prior express consent for those calls. Requiring informed consent is appropriate given the significant risks posed by interactive AI calls and calls in which AI spoofs another’s voice.
A consumer who has merely agreed to receive prerecorded or artificial voice calls cannot reasonably be held to have agreed to these types of AI calls. As the Commission has repeatedly noted, the scope of a consumer’s consent must be based on the context in which the consent was obtained. The Commission has held:
Consumers who provide a wireless phone number for a limited purpose—for service calls only—do not necessarily expect to receive telemarketing calls that go beyond the limited purpose for which oral consent regarding service calls may have been granted.[54]
The FCC reiterated this position in a 2015 declaratory ruling, noting that “the call must be closely related to the purpose for which the telephone number was originally provided.”[55] By the same token, a consumer who has given consent to receive prerecorded or artificial voice calls, without having been informed that the calls may include interactive AI calls and calls that spoof another’s voice, cannot be construed to have consented to the latter types of calls. Prior express consent for a call made with an artificial or prerecorded voice is not consent for interactive calls with AI technology. Nor is it permission for calls that use AI to pretend to be from a person, a business, or a governmental entity that is not in fact the caller. The congressionally imposed requirement for prior express consent for these calls provides ample authority for the FCC’s proposal.
The Commission has repeatedly ruled that consumers have the right to withdraw any consent that has been given,[56] most recently codifying this right into regulations.[57] This right is not meaningful unless consumers have a way to determine what kind of calls they are receiving. Since interactive AI calls and calls that spoof a specific human’s voice are designed to be indistinguishable from live calls made by a human, consumers will not be able to make an informed decision about whether to withdraw their consent to receive such calls unless there is a requirement that during the call they be informed that the calls they are receiving fall into this category. Moreover, without such a disclosure, a consumer who has refused to give consent to receive such calls will be unable to enforce their right to decline consent for these calls.
In addition, the FCC has very broad authority to prescribe technical and procedural requirements for artificial and prerecorded calls. The TCPA provides:
The Commission shall prescribe technical and procedural standards for systems that are used to transmit any artificial or prerecorded voice message via telephone. Such standards shall require that—
- all artificial or prerecorded telephone messages (i) shall, at the beginning of the message, state clearly the identity of the business, individual, or other entity initiating the call, ….[58]
This statutory language, and the Commission’s general authority to issue regulations to implement the TCPA, provide strong support for the FCC’s twin proposals: first, to require that, before callers can make AI-generated calls that may employ interaction with the recipient, the callers must obtain explicit consent from called parties to receive calls that use this technology; and second, that these AI-generated calls must include a disclosure at the beginning of each call that the call uses this technology.
We also urge the FCC to tie these two requirements together by interpreting “prior express consent” in this context to mean that when a subscriber consents to receive calls that use AI-generated technology, they are consenting only to receive calls that include the disclosure at the beginning of the call. This would mean that any call that includes AI-generated technology but does not include the in-call disclosure will not have been consented to by the recipient.
V. Conclusion
We appreciate the Commission’s consideration of our proposals and concerns. We would be happy to answer any questions.
[1] In re Implications of Artificial Intelligence Technologies on Protecting Consumers From Unwanted Robocalls and Robotexts, Notice of Proposed Rulemaking and Notice of Inquiry, CG Docket No. 23-362 (rel. Aug. 8, 2024), available at https://docs.fcc.gov/public/attachments/FCC-24-84A1.pdf [hereinafter AI NPRM]. See also In re Implications of Artificial Intelligence Technologies on Protecting Consumers From Unwanted Robocalls and Robotexts, Proposed Rule, CG Docket No. 23-362, 89 Fed. Reg. 73,321 (Sept. 10, 2024), available at https://www.govinfo.gov/content/pkg/FR-2024-09-10/pdf/2024-19028.pdf.
[2] In re Rules and Regulations Implementing the Telephone Consumer Protection Act of 1991, Comments of National Consumer Law Center et al., CG Docket No. 23-362 (October 10, 2024), available at https://www.fcc.gov/ecfs/document/1010135524178/1 [hereinafter Consumer Comments].
[3] AI NPRM, supra note 1, at ¶ 14.
[4] Id.
[5] Consumer Comments, supra note 2, at 11-13.
[6] Id. at 13-20.
[7] 47 U.S.C. § 227.
[8] Karen Blum, Patients Increasingly Bombarded by Text and Email Appointment Reminders, Ass’n of Health Care Journalists (May 2, 2024), https://healthjournalism.org/blog/2024/05/patients-increasingly-bombarded-by-text-and-email-appointment-reminders/; Here’s Why Patients Are Being Bombarded with Doctor’s Appointment Reminders, Market Realist, https://marketrealist.com/why-are-healthcare-providers-spamming-patients-with-appointment-remiders/ (last visited Nov. 14, 2024).
[9] Cf. Nathaniel Meyersohn, Why Your Doctor’s Office Is Spamming You with Calls and Texts, CNN (Mar. 16, 2024), https://www.cnn.com/2024/03/16/business/doctors-office-spam-calls-texts-reminders/index.html (“All of these systems were built for the provider and were never patient-focused,” said Oliver Kharraz, the CEO of ZocDoc.).
[10] Consumer Comments, supra note 2. As noted in these comments filed by consumer and privacy groups, the regulation applies only to covered telemarketing calls (defined in 47 C.F.R. § 64.1200(a)(2) and (a)(3)) and to prerecorded non-telemarketing calls to residential lines made pursuant to an exemption defined by 47 C.F.R. § 64.1200(a)(3)(ii) through (v).
[11] AI NPRM, supra note 1, at ¶ 14.
[12] We leave for another day the issue of how this in-call disclosure should be phrased.
[13] Consumer Comments, supra note 2, at 7-10.
[14] See, e.g., Complaint, In re OpenAI GP LLC (F.T.C. Oct. 29, 2024), available at https://epic.org/wp-content/uploads/2024/10/EPIC-FTC-OpenAI-complaint.pdf [hereinafter EPIC FTC Complaint].
[15] EPIC FTC Complaint, supra note 14, at ¶ 2.
[16] Id. at ¶ 59 (footnotes omitted).
[17] Id. at ¶ 60 (footnotes omitted; citing Cade Metz, Chatbots May ‘Hallucinate’ More Often Than Many Realize, N.Y. Times, Nov. 6, 2023, available at https://www.nytimes.com/2023/11/06/technology/chatbots-hallucination-rates.html).
[18] Id. at ¶ 61.
[19] Id. at ¶ 62 (footnotes omitted).
[20] See, e.g., Press Release, Gartner, Gartner Survey Shows AI Enhanced Malicious Attacks as Top Emerging Risk for Enterprises for Third Consecutive Quarter (Nov. 1, 2024), available at https://www.gartner.com/en/newsroom/press-releases/2024-05-22-gartner-survey-shows-ai-enhanced-malicious-attacks-as-top-er-for-enterprises-for-third-consec-quarter.
[21] Id.
[22] Graham Shippy, Nationwide, SURVEY: One quarter of small business owners have been targeted by AI-driven scams (Sept. 25, 2024), available at https://news.nationwide.com/one-quarter-of-small-business-owners-have-been-targeted-by-ai-driven-scams/.
[23] See TRAILS, Shilton Discusses AI-Powered Phone Scams Targeting Seniors, available at https://www.trails.umd.edu/news/ai-powered-phone-scams-target-seniors; NYC Consumer and Worker Protection, Tips on AI-Related Scams, available at https://www.nyc.gov/site/dca/consumers/artificial-intelligence-scam-tips.page; Louis DeNicola, Experian, What Are AI Scams? (Apr. 5, 2024), available at https://www.experian.com/blogs/ask-experian/what-are-ai-scams/; Charles Bethea, The Terrifying A.I. Scam That Uses Your Loved One’s Voice, The New Yorker, Mar. 7, 2024, available at https://www.newyorker.com/science/annals-of-artificial-intelligence/the-terrifying-ai-scam-that-uses-your-loved-ones-voice. See also Pieter Arntz, Malwarebytes Labs, AI used to fake voices of loved ones in “I’ve been in an accident” scam (Jan. 17, 2024), available at https://www.malwarebytes.com/blog/news/2024/01/ai-used-to-fake-voices-of-loved-ones-in-ive-been-in-an-accident-scams.
[24] Paola Suro, 11 Alive, ‘My heart is beating’: Metro Atlanta mother panicked after scammers used AI to impersonate her daughter’s voice (July 17, 2023), available at https://www.11alive.com/article/news/local/scammers-use-ai-impersonate-voices/85-02af53f7-aea9-416d-b2d2-7098c650d3f8. See also Andrew Dorn, NewsNation, Phone scams in the age of AI: More targeted, dangerous (Apr. 20, 2024), available at https://www.newsnationnow.com/business/tech/ai/phone-scams-ai-more-targeted-dangerous/.
[25] FBI, FBI Warns of Increasing Threat of Cyber Criminals Utilizing Artificial Intelligence (May 8, 2024), https://www.fbi.gov/contact-us/field-offices/sanfrancisco/news/fbi-warns-of-increasing-threat-of-cyber-criminals-utilizing-artificial-intelligence.
[26] Federal Trade Comm’n, Consumer Sentinel Network Data Book 2023, at 4 (Feb. 2024), available at https://www.ftc.gov/system/files/ftc_gov/pdf/CSN-Annual-Data-Book-2023.pdf.
[27] Robokiller, The Robokiller phone scam report: 2023 mid-year insights & analysis, available at https://www.robokiller.com/robokiller-2023-mid-year-phone-scam-report#The-state-of-the-robocall-in-2023.
[28] See, e.g., Letter from American Bankers Ass’n et al. to Marlene H. Dortch, Secretary, Federal Commc’ns Comm’n, Re: Notice of Ex Parte Presentation, In the Matter of Advanced Methods to Target and Eliminate Unlawful Robocalls, Targeting and Eliminating Unlawful Text Messages, CG Docket Nos. 17-59 & 21-402, Eighth Report and Order in CG Docket No. 17-59 and Third Report and Order in CG Docket No. 21-402, FCC-CIRC2409-02 (rel. Sept. 5, 2024) (Nov. 7, 2024), available at https://www.fcc.gov/ecfs/search/search-filings/filing/1108155673580; Letter from American Bankers Ass’n et al. to Marlene H. Dortch, Secretary, Federal Commc’ns Comm’n, Re: Notice of Ex Parte Presentation, In the Matter of Advanced Methods to Target and Eliminate Unlawful Robocalls, Targeting and Eliminating Unlawful Text Messages, CG Docket Nos. 17-59 & 21-402, Eighth Report and Order in CG Docket No. 17-59 and Third Report and Order in CG Docket No. 21-402, FCC-CIRC2409-02 (rel. Sept. 5, 2024) (Oct. 25, 2024), available at https://www.fcc.gov/ecfs/search/search-filings/filing/102646594849; Letter from 52 Bankers Associations to Jessica Rosenworcel, Chairwoman, Federal Commc’ns Comm’n, Re: In the Matter of Advanced Methods to Target and Eliminate Unlawful Robocalls, Targeting and Eliminating Unlawful Text Messages, CG Docket Nos. 17-59 & 21-402, Eighth Report and Order in CG Docket No. 17-59 and Third Report and Order in CG Docket No. 21-402, FCC-CIRC2409-02 (rel. Sept. 5, 2024) (Oct. 18, 2024), available at https://www.fcc.gov/ecfs/document/101814995209/1; Letter from American Bankers Ass’n et al. to Marlene H. Dortch, Secretary, Federal Commc’ns Comm’n, Re: Notice of Ex Parte Presentation, In the Matter of Advanced Methods to Target and Eliminate Unlawful Robocalls, Targeting and Eliminating Unlawful Text Messages, CG Docket Nos. 17-59 & 21-402, Eighth Report and Order in CG Docket No. 17-59 and Third Report and Order in CG Docket No. 21-402, FCC-CIRC2409-02 (rel. Sept. 5, 2024) (Oct. 18, 2024), available at https://www.fcc.gov/ecfs/search/search-filings/filing/1019107422584; Letter from American Bankers Ass’n et al. to Marlene H. Dortch, Secretary, Federal Commc’ns Comm’n, Re: Notice of Ex Parte Presentation, In the Matter of Advanced Methods to Target and Eliminate Unlawful Robocalls, Targeting and Eliminating Unlawful Text Messages, CG Docket Nos. 17-59 & 21-402, Eighth Report and Order in CG Docket No. 17-59 and Third Report and Order in CG Docket No. 21-402, FCC-CIRC2409-02 (rel. Sept. 5, 2024) (Oct. 4, 2024), available at https://www.fcc.gov/ecfs/search/search-filings/filing/1005277481268.
[29] See, e.g., National Consumer Law Center, Testimony of Carla Sanchez-Adams on “Examining Scams and Fraud in the Banking System and The Impact on Consumers” (Feb. 1, 2024), available at https://www.nclc.org/resources/written-testimony-for-senate-committee-on-banking-housing-and-urban-affairs-hearing-on-examining-scams-and-fraud-in-the-banking-system-and-their-impact-on-consumers/.
[30] See Louis DeNicola, Experian, What Are AI Scams? (Apr. 5, 2024), available at https://www.experian.com/blogs/ask-experian/what-are-ai-scams/ [hereinafter What Are AI Scams?].
[31] See NBC News, Growing number of misleading ads using AI to create fake celebrity endorsements, available at https://www.youtube.com/watch?v=WtR0-TBauoU. See also The Daily Show, Jon Stewart on the False Promises of AI, available at https://www.youtube.com/watch?v=20TAkcy3aBY (featuring a deepfake of Jon Stewart); Press Release, Federal Commc’ns Comm’n, FCC Proposes $6 Million For Illegal Robocalls That Used Biden Deepfake Generative AI Voice Message (May 23, 2024), available at https://docs.fcc.gov/public/attachments/DOC-402762A1.pdf.
[32] See What Are AI Scams?, supra note 30.
[33] See Kevin Townsend, SecurityWeek, AI-Generated Malware Found in the Wild (Sept. 24, 2024), available at https://www.securityweek.com/ai-generated-malware-found-in-the-wild/; The Japan Times, Kawasaki man arrested over malware made using generative AI (May 28, 2024), available at https://www.japantimes.co.jp/news/2024/05/28/japan/crime-legal/man-arrested-malware-generative-ai/; KrebsonSecurity, Meet the Brains Behind the Malware-Friendly AI Chat Service ‘WormGPT’ (Aug. 8, 2023), available at https://krebsonsecurity.com/2023/08/meet-the-brains-behind-the-malware-friendly-ai-chat-service-wormgpt/.
[34] Recommendation of the Council on Artificial Intelligence, OECD (May 21, 2019), available at https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 [hereinafter OECD Recommendation].
[35] Fiona Alexander, U.S. Dep’t of Commerce, NTIA, U.S. Joins with OECD in Adopting Global AI Principles (May 22, 2019), available at https://www.ntia.gov/blog/us-joins-oecd-adopting-global-ai-principles.
[36] OECD Recommendation, supra note 34, Principle IV.1.3 (emphasis added).
[37] The White House, OSTP, What is the Blueprint for an AI Bill of Rights?, available at https://www.whitehouse.gov/ostp/ai-bill-ofrights/what-is-the-blueprint-for-an-ai-bill-of-rights/. See also The White House, OSTP, Relationship to Existing Law and Policy, available at https://www.whitehouse.gov/ostp/ai-bill-ofrights/relationship-to-existing-law-and-policy/.
[38] The White House, OSTP, Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People 5 (Oct. 2022), available at https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.
[39] Id. at 6 (emphasis added).
[40] Exec. Order No. 14,110, 88 Fed. Reg. 75,191 (Oct. 30, 2023).
[41] See id. at 75,196-75,198, 75,204-75,205, 75,209-75,218.
[42] Id. at 75,191, 75,202-75,203, 75,219.
[43] U.S. Dep’t of State, Bureau of Cyberspace & Digital Policy, Risk Management Profile for Artificial Intelligence and Human Rights, available at https://www.state.gov/risk-management-profile-for-ai-and-human-rights/ (internal citations omitted).
[44] FICO, A New Dawn: Modernizing Collections and Recovery in the US 7 (2024), available at https://www.fico.com/en/latest-thinking/white-paper/new-dawn-modernizing-collections-and-recovery-us.
[45] Robert J. Szczerba, Which Industry is Next for A.I. Disruption? The Answer Might Surprise You, Forbes (updated Jan. 6, 2021), available at https://www.forbes.com/sites/robertszczerba/2017/04/26/which-industry-is-next-for-a-i-disruption-the-answer-might-surprise-you/?sh=7356d3a93f1c.
[46] See Meg Leta Jones & Elizabeth Edenberg, Troubleshooting AI and Consent, in The Oxford Handbook of Ethics of AI 358-374 (Markus D. Dubber ed. 2020), available at https://academic.oup.com/edited-volume/34287/chapter-abstract/290665619?redirectedFrom=fulltext.
[47] Utah S.B. 149 (2024), available at https://le.utah.gov/~2024/bills/sbillenr/SB0149.pdf; Anas Baig, Securiti, What to Know About the Utah Artificial Intelligence Policy Act (UAIPA) (Sept. 11, 2024), available at https://securiti.ai/utah-ai-policy-.
[48] Bloomberg Law, How Companies Should Be Thinking About Disclosing AI Usage to Consumers (Apr. 2024), available at https://www.bloomberglaw.com/external/document/XDEBUU4K000000/data-collection-management-professional-perspective-how-companie.
[49] 16 C.F.R. Part 461, available at https://www.ftc.gov/system/files/ftc_gov/pdf/r207000_govt_biz_impersonation_rule.pdf; Press Release, Federal Trade Comm’n, FTC Announces Impersonation Rule Goes into Effect Today (Apr. 1, 2024), available at https://www.ftc.gov/news-events/news/press-releases/2024/04/ftc-announces-impersonation-rule-goes-effect-today.
[50] See AI NPRM, supra note 1, at ¶¶ 14-17.
[51] See id. at ¶ 15.
[52] Press Release, Federal Trade Comm’n, FTC Announces Impersonation Rule Goes into Effect Today (Apr. 1, 2024), available at https://www.ftc.gov/news-events/news/press-releases/2024/04/ftc-announces-impersonation-rule-goes-effect-today.
[53] 47 U.S.C. § 227(b)(1)(A)(iii).
[54] In re Rules & Regulations Implementing the Tel. Consumer Prot. Act of 1991, 27 F.C.C. Rcd. 1830, 1840, at ¶ 25 (F.C.C. Feb. 15, 2012) (emphasis in original). Accord In re Rules & Regulations Implementing the Tel. Consumer Prot. Act of 1991, Blackboard, Inc. Petition for Expedited Declaratory Ruling, 31 F.C.C. Rcd. 9054, at ¶¶ 23–25 (F.C.C. Aug. 4, 2016) (when a parent has given a school only a cell phone number as a contact, the scope of consent does not extend beyond communications closely related to the school’s educational mission or to official school activities; does not extend to communications about non-school events).
[55] In re Rules & Regulations Implementing the Tel. Consumer Prot. Act of 1991, Declaratory Ruling & Order, 30 F.C.C. Rcd. 7961, at ¶ 141 n.474 (F.C.C. July 10, 2015).
[56] In re Rules & Regulations Implementing the Tel. Consumer Prot. Act of 1991, 30 F.C.C. Rcd. 7961, at ¶¶ 63, 64 (F.C.C. July 10, 2015), ruling upheld, ACA Int’l v. Fed. Commc’ns Comm’n, 885 F.3d 687 (D.C. Cir. 2018). Accord Declaratory Ruling, In re Rules & Regulations Implementing the Tel. Consumer Prot. Act of 1991, 38 F.C.C. Rcd. 404, at ¶ 16 (F.C.C. Jan. 23, 2023), available at www.fcc.gov.
[57] 47 C.F.R. § 64.1200(a)(10); Report and Order and Further Notice of Proposed Rulemaking, In re Rules & Regulations Implementing the Tel. Consumer Prot. Act of 1991, FCC No. 24-24 (F.C.C. Feb. 16, 2024) (adding 47 C.F.R. § 64.1200(a)(10)).
[58] 47 U.S.C. § 227(d)(3)(A).