Testimony in Support of Connecticut S.B. 2
February 25, 2025
Joint Committee on General Law
Legislative Office Building, Room 3500
300 Capitol Avenue
Hartford, CT 06106
Dear Co-Chairs Maroney and Lemar and Members of the Committee,
EPIC writes in support of S.B. 2, the Connecticut Artificial Intelligence Act of 2025. We commend Senator Maroney for bringing this bill forward again this year and for his leadership on tech issues. Connecticut played a crucial role in kickstarting the national conversation about managing AI risks last session, and it has an opportunity now to lead the nation with innovative policy that both protects the rights and privacy of Connecticut residents and encourages technological innovation.
The Electronic Privacy Information Center (EPIC) is an independent, nonpartisan, non-profit research organization in Washington, D.C., established in 1994 to protect privacy, freedom of expression, and democratic values in the information age.1 EPIC has advocated for strong AI and privacy laws at both the state and federal level for many years.2
In my testimony, I will discuss why it is so critical that Connecticut take immediate action to place common-sense regulations on the development and use of high-risk AI systems, the key provisions that make S.B. 2 a significant step toward this goal, and the ways S.B. 2 can be amended to ensure robust protections for Connecticut consumers and workers.
AI regulation is urgently needed, and Connecticut should act now.
Legislation like S.B. 2 that seeks to address the harms of companies using AI systems in making important decisions about people’s lives is urgently needed. Connecticut residents need protections from harms they are already suffering because of the unregulated use of AI systems in life-altering decisions. Both public and private entities use high-risk AI systems in making decisions about people’s housing, employment, education, health care, and finances every day. Passing this bill is essential to ensure that the AI systems used in making these key decisions about the lives of Connecticut residents are transparent, nondiscriminatory, and accurate – and that individuals have the information and ability to hold companies accountable if AI systems they use do cause harm.
The use of AI systems has led to the wrongful denial of people’s access to housing,3 medically necessary coverage,4 job opportunities,5 loans,6 and more. The use of AI in making important decisions is a widespread practice that affects most Americans. Low-income individuals are even more likely to have important decisions about their lives made using AI; a recent report by TechTonic Justice found that virtually all 92 million low-income Americans have “some basic aspect of their lives decided by AI.”7
Research shows Americans are uncomfortable with companies using AI systems to make these kinds of decisions about their lives. According to a nationally representative survey conducted by Consumer Reports last year, the majority of American adults surveyed expressed concern over AI systems being used in making decisions in several important contexts covered by S.B. 2.8 Specifically, 72% of U.S. adults would be uncomfortable with AI having a role in a job interview process; 66% would be uncomfortable with banks using AI to determine if they qualified for a loan; 69% would be uncomfortable with apartments, condos, or senior communities using AI to screen potential tenants; and 58% would be uncomfortable with hospitals using AI to help make health care decisions about diagnoses or treatment.9
The proliferation of opaque and unproven AI systems into the most sensitive aspects of people’s lives, coupled with Americans’ discomfort with this reality, makes it essential that the Connecticut Legislature move forward with a strengthened version of S.B. 2 that puts careful guardrails on the development and use of these systems in consequential decisions.
S.B. 2 is a significant step toward protecting Connecticut residents from the harms of unregulated AI development and use.
Right now, Connecticut residents have no way to know whether companies are using AI in making decisions about their jobs, education, housing, health care, finances, or insurance. Even in cases where consumers do know that companies are using AI, they have no ability or right to know how the AI works, how much the company relied on AI to make its decision, what personal information was processed by the AI, and how the company monitors the AI for discrimination or inaccuracy. S.B. 2 requires real transparency from developers and deployers about AI systems and gives important rights to Connecticut residents who are subject to decisions made using AI.
S.B. 2 requires real transparency from developers and deployers.
This bill would require developers to provide important information about their AI systems to deployers, the public, and the Attorney General, increasing transparency and reducing the “black box” nature of high-risk AI systems. Under this bill, developers must provide deployers with documentation about the data used to train the system and data governance measures, the purpose and any limitations of the system, how the developer tested the system’s performance, how the developer mitigated the risks of algorithmic discrimination, and how the system should and should not be used. Developers would also be required to post a public statement on their website explaining the types of high-risk AI systems they have developed and how they manage the risk of algorithmic discrimination from their systems. Finally, developers would be required to report to the Attorney General any known or reasonably foreseeable risks of algorithmic discrimination, including those risks discovered through ongoing testing or from reports by deployers. EPIC supports this reporting framework, which parallels similar reporting requirements in the context of data breaches, but suggests removing the threshold requirement that at least 1,000 consumers be harmed by algorithmic discrimination before developers must report the issue to the Attorney General. Requiring AI developers to follow these transparency and disclosure requirements is an important step toward ensuring AI systems will be safer and less likely to produce discriminatory or inaccurate results.
Similarly, S.B. 2 requires deployers to disclose information about their use of AI systems to both the public and the Attorney General. Under this bill, deployers must publicly post on their website information summarizing the types of high-risk AI systems they have deployed; how they manage these systems’ risks of algorithmic discrimination; and the information they collect and use. Just as developers are required to do, deployers would also be required to report to the Attorney General if they discover their use of a system has caused algorithmic discrimination. EPIC suggests the same revision to this provision as the one above for the developer reporting requirement. These deployer disclosure requirements, on top of those required of developers, take steps toward a much more transparent AI ecosystem in the context of consequential decisions.
S.B. 2 grants important rights to Connecticut residents who are subject to decisions made by companies using high-risk AI systems.
Importantly, S.B. 2 grants Connecticut residents certain rights if an entity uses an AI system to make a consequential decision about them. Under this bill, deployers must inform consumers if they are using an AI system in making a consequential decision about them. This notice must include a plain-language description of the AI system and its purpose, the nature of the decision, contact information for the deployer, and instructions about how to access the deployer’s required public posting that includes more information about its use of AI systems. This notice must also inform the consumer that they have the right to opt out of any decision based on their personal data.
This bill would also give consumers the right to an explanation of any adverse decision made using an AI system, the right to correct any incorrect personal data, and the right to appeal an adverse decision. Deployers would have to tell consumers the principal reasons for the decision, the degree to which they relied on an AI system in making the decision, and the types and sources of data that were fed to the AI system. Consumers would then have the right to review any personal data processed by the AI system and to correct any incorrect personal data. Research has shown that these rights—to know what personal information AI systems use to produce recommendations and to be able to correct incorrect information—are important to most Americans.10 S.B. 2 would also give consumers the right to appeal decisions based on incorrect personal data. While EPIC commends the sponsors of S.B. 2 for granting these important rights to Connecticut residents, we would recommend that the right to appeal be further strengthened, as discussed below.
S.B. 2 requires deployers to take concrete steps to reduce the risks of using high-risk AI systems.
The bill places important obligations on deployers to mitigate the risks of using high-risk AI systems, including requiring deployers to implement a risk management program and conduct impact assessments every year or whenever they substantially modify the AI system. The impact assessment for each AI system the deployer uses must include information about the purpose and deployment context of the system; an analysis of the risks of algorithmic discrimination and the steps the deployer has taken to mitigate those risks; a description of the data processed by the system and its outputs; information about any data the deployer used to customize the system; a description of the transparency measures the deployer has taken; and a description of the post-deployment monitoring and evaluation metrics the deployer has used. These measures will ensure deployers think intentionally about the risks of using AI systems in making consequential decisions and will require them to take clear actions to mitigate those risks wherever possible.
S.B. 2 includes smart measures to prepare the state for both responsible use of AI and for continued thoughtful regulation.
S.B. 2 contains some novel and forward-thinking strategies to prepare Connecticut and its residents for a future that is likely to include AI. EPIC commends the sponsors for including provisions instructing the Attorney General to implement a public education and assistance program to help Connecticut small businesses adjust to the requirements of this bill. EPIC also supports this bill’s establishment of the Connecticut AI Academy to ensure young people, nonprofits, small businesses, and participants in workforce training programs are equipped to pursue the responsible use of AI. Initiatives like the Connecticut AI Symposium and the state agency studies of generative AI are measured approaches to ensuring the state is prepared to both use AI safely and responsibly and to regulate state agencies’ use of generative AI thoughtfully in the future. Finally, EPIC is heartened to see the establishment of a working group for stakeholder and expert engagement in future AI regulation. Especially important is that the makeup of the working group is fair and balanced and includes voices from consumer protection organizations, labor groups, and academics in addition to industry representatives.
While S.B. 2 as written is a positive step toward more fair and transparent use of AI, there are changes that can be made to more fully protect Connecticut residents from AI harms.
S.B. 2 should be amended in a few key ways to ensure robust protections for Connecticut residents.
Definitions of “high-risk artificial intelligence system” and “substantial factor” should be clarified.
The precise wording of the definitions in this bill is critical to ensuring that the bill covers all companies that develop and use high-risk AI systems used in making, or as a substantial factor in making, consequential decisions. A study of a similar New York City local law illustrates why clear definitions are so essential. New York City’s Local Law 144, which took effect in 2023, prohibits employers from using AI systems in employment decisions unless they conduct an annual bias audit on the AI system, information about the bias audit has been made publicly available, and disclosures have been given to job applicants about the AI system.11 However, when researchers at Cornell University, Data & Society, and Consumer Reports studied whether companies were complying with this new law, they found extremely low compliance rates—fewer than 20 of the 391 employers in the study had posted the required audit reports or transparency notices.12 The researchers concluded that these low rates of compliance were likely due, in part, to the law’s definitions leaving too much room for companies to decide for themselves whether or not they needed to comply with the law.13 The law covered only companies that used AI to “substantially assist or replace discretionary decision making,”14 language vague enough that many employers could conclude their tools fell outside it. Unfortunately, the definitional problems with Local Law 144 that incentivized this extremely low compliance are quite similar to those found in S.B. 2.
To this end, EPIC recommends amending two definitions, “high-risk artificial intelligence system” and “substantial factor,” to ensure that they do not contain loopholes that would allow companies to decide that their uses of AI in consequential decisions are not substantial enough to trigger compliance with S.B. 2’s obligations.
As currently written, the definition of “high-risk AI system” contains several exceptions that may allow deployers to claim that their use of an AI system does not qualify as high-risk when it otherwise would. For example, this definition exempts AI systems “intended to perform any narrow procedural task” or to “detect decision-making patterns, or deviations from decision-making patterns.” These exceptions appear to have been taken from the EU AI Act, but because S.B. 2 does not adopt the same structure as the EU AI Act, nor does Connecticut have the robust data privacy requirements found in the EU’s General Data Protection Regulation (GDPR), it is problematic to keep these exemptions in the definition of “high-risk AI system.” In an American context, these exceptions are simply unnecessary and, worse, could allow companies to argue that all manner of tasks are “narrow” and “procedural” and therefore exempt from this bill. For example, a company might argue that screening resumes for a job opportunity or setting the price of insurance is a “narrow procedural task” and thus decide it does not need to follow the transparency and consumer rights provisions of S.B. 2. However, this bill clearly intends to cover exactly these sorts of decisions because they are explicitly identified in the bill’s definition of “consequential decision.” To ensure companies cannot self-select out of complying with this bill, the exceptions in Section 9(B) should be removed from the definition of “high-risk AI system.”
On top of these concerns with the definition of “high-risk AI system,” the definition of “substantial factor” contains two problematic loopholes. First, limiting the definition of substantial factor to only something that “alters the outcome” of a decision narrows the scope of the bill so much that it would exclude many of the most common ways companies use AI in making the important decisions covered by this bill. The “substantial factor” definition in S.B. 2 is much narrower than the definitions found in similar bills in other states, including the Colorado AI Act.15
EPIC recommends amending the definition of “substantial factor” to include a factor that “assists in making a consequential decision” in addition to a factor that “alters the outcome” of a consequential decision. Broadening the definition this way ensures that companies that use AI outputs as part of their decision are covered by this bill, even if the output may not alter the final decision. This is essential not only as a common-sense amendment but also because humans are not always aware of how much automated systems influence their thinking. Research shows that AI systems both amplify biases that human decisionmakers already have and that humans are more influenced by the decisions of AI systems than they perceive themselves to be.16 Because of this, it is critical that S.B. 2 cover all consequential decisions in which an AI output assisted in making that decision.
Second, the current definition includes the use of AI systems “as a basis to make a consequential decision.” This language should be revised to clarify that an AI system is a substantial factor in making a consequential decision even if it is used only as a partial basis for the decision. The output of the AI system does not need to be the whole basis of a decision – or even the primary basis – to be a substantial factor in the decision. Without this change, companies could simply require a human to rubber-stamp recommendations generated by AI systems as an easy way to avoid complying with this bill. Because a human is then technically involved in making the decision – even if that person has no power to diverge from the recommendation generated by the AI system – companies could claim that the AI was not the whole basis of the decision and thus that they do not need to comply with this bill. The definition of “substantial factor” should be amended to close these loopholes.
Section 1(14)(A)(i) and Section 1(14)(B) should be amended to read as follows: “Substantial factor” (A) means a factor that (i) alters the outcome of a consequential decision or assists in making a consequential decision and (ii) is generated by an artificial intelligence system, (B) includes, but is not limited to, any use of an artificial intelligence system to generate any content, decision, prediction or recommendation concerning a consumer that is used as a basis, in whole or in part, to make a consequential decision concerning the consumer.
Anti-discrimination provisions should be made consistent with other civil rights laws.
While S.B. 2 seeks to address algorithmic discrimination, the language needs a few key amendments to fully achieve its goal. First, this bill places a duty of care on developers and deployers to protect consumers from “known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses” of high-risk AI systems. While this may seem sufficient at first glance, this duty of care standard actually treats algorithmic discrimination as less harmful than discrimination by any other means. Traditional civil rights laws impose a full prohibition on discrimination rather than a duty of care. It is not enough for companies to merely use “reasonable care” to avoid discrimination in other contexts, such as employment or housing; they simply must not discriminate. A full prohibition is the standard that should be applied in the context of the development and use of high-risk AI systems as well.
To ensure that algorithmic discrimination is treated the same as other methods of discrimination, Section 2(a) should be amended as follows: “Beginning on October 1, 2026, a developer of a high-risk artificial intelligence system shall not sell, lease, distribute, share, or otherwise make available to deployers a high-risk AI system that results in algorithmic discrimination.” Sections 3(a) and 4(a) should similarly be changed to impose full prohibitions on algorithmic discrimination by integrators and deployers rather than duties of care.
For these same reasons, the rebuttable presumptions in S.B. 2 should be removed. As written, developers, integrators, and deployers can avoid liability for algorithmic discrimination if they have complied with the other requirements in the bill. Companies should not be able to avoid liability for discriminating simply because they have complied with their requirements under this bill, which, while important, are largely documentation obligations. This sort of “get-out-of-jail-free” card does not exist in any other non-discrimination law, and it should not be included in S.B. 2 either. Worse, the rebuttable presumptions could undermine existing laws against discrimination: in a court case implicating both this bill and a traditional non-discrimination law, for example, they are likely to cause confusion and could weaken interpretations of existing civil rights laws.
To ensure S.B. 2 meets its goal of addressing the problem of algorithmic discrimination, the rebuttable presumptions in Sections 2(a), 3(a), and 4(a) should be struck. For example, the presumption in Section 2(a) reads: “In any enforcement action brought on or after said date by the Attorney General pursuant to section 10 of this act, there shall be a rebuttable presumption that a developer used reasonable care as required under this subsection if the developer complied with the provisions of this section or, if the developer enters into a contract with an integrator as set forth in subsection (b) of section 3 of this act, the developer and integrator complied with the provisions of this section and section 3 of this act.”
Remove the exemptions to consumers’ right to appeal decisions made using high-risk AI systems.
EPIC commends the sponsors of S.B. 2 for providing consumers with the right to appeal decisions that companies have made about them using high-risk AI systems. The right to appeal is an important safeguard that allows consumers to take action if they receive an adverse decision.
However, there is a large loophole that allows deployers to withhold the right to appeal from consumers if the company believes an appeal would not be in the “best interest” of the consumer. This language should be removed. The consumer—not the deployer—is in the best position to know whether appealing a decision is in their own best interest. If a consumer decides an appeal is not in their best interest, that consumer can simply not appeal, but this decision should be left in the consumer’s hands, not the company’s.
Section 4(e)(2)(C)(ii) should be amended to remove this exemption, so that it would read as follows: “No deployer shall be required to provide an opportunity to appeal pursuant to subparagraph (C)(i) of this subdivision in any instance in which any delay might pose a risk to the life or safety of the consumer.”
The consumer right to appeal should also be broadened so that a consumer can appeal any consequential decision that she believes is incorrect or unfair, not only adverse decisions based on inaccurate personal data. Thus, Section 4(e)(2)(C)(i) should be amended to read as follows: “Except as provided in subparagraph (C)(ii) of this subdivision, an opportunity to appeal such adverse consequential decision. Such appeal shall, if technically feasible, allow for human review.”
Strengthen enforcement structure by adding a private right of action and limiting companies’ right to cure.
S.B. 2 classifies violations of the bill as unfair trade practices under Connecticut’s consumer protection laws, but it takes the unusual step of stripping harmed consumers of the right to hold companies liable for harming them. All 50 states have laws prohibiting unfair, deceptive, or abusive practices that provide consumers with a private right of action.17 S.B. 2 would stray from this norm by treating violations of this bill by developers, integrators, and deployers differently from other consumer protection violations. Consumers should be able to seek redress for violations of S.B. 2 the same way they would for other violations of consumer protection or civil rights laws.
In the absence of a private right of action, there is a very real risk that companies will not comply with the bill because they believe they are unlikely to get caught. Companies know that state Attorneys General are often under-resourced and cannot pursue cases against every company that violates the laws they are tasked with enforcing, so many companies will calculate that the risk of an enforcement action is low enough to justify noncompliance. Private enforcement is therefore critical to ensure that developers and deployers have strong financial incentives to meet their transparency, testing, and disclosure requirements under this bill. A private right of action would also preserve the State’s resources by reducing the enforcement burden on the Attorney General.
EPIC suggests that Section 10(d) be struck and Section 10(e) be amended to make clear that a violation of S.B. 2 will be treated as an unfair trade practice and that companies in violation of the law will be subject to traditional consumer protection remedies, which include a private right of action.
The risk of noncompliance is made even greater by the bill’s inclusion of a right for companies to cure violations before the Attorney General can bring a case against them. While EPIC appreciates that the mandatory cure period expires after one year, the option for the Attorney General to offer a right to cure after that date should be removed. A one-year right to cure is a reasonable compromise that gives companies the time and flexibility to come into compliance with this bill before more aggressive enforcement mechanisms can be used. However, if a right to cure remains an option after that one-year period ends, it could create undesirable incentives for companies to delay or avoid compliance indefinitely in the hope that the Attorney General will offer them a cure period before pursuing a case for violations.
EPIC recommends striking Section 10(c), which allows the Attorney General to offer a right to cure after October 1, 2027.
* * *
EPIC commends the sponsors of S.B. 2 for recognizing that regulating the development and use of AI systems in critical contexts like housing, employment, and health care is a pressing issue that needs the Legislature’s immediate attention. With the above recommended amendments, S.B. 2 could be a nation-leading law that protects Connecticut residents while allowing technological innovation to continue safely and responsibly. We urge the committee to further strengthen this bill to ensure its goals are met.
Thank you for the opportunity to speak today. EPIC is happy to be a resource to the Committee on these issues.
Sincerely,
Caitriona Fitzgerald
EPIC Deputy Director
Kara Williams
EPIC Law Fellow
1. EPIC, About EPIC, https://epic.org/about/.
2. See, e.g., Protecting America’s Consumers: Bipartisan Legislation to Strengthen Data Privacy and Security: Hearing Before the Subcomm. on Consumer Protection & Comm. of the H. Comm. on Energy & Comm., 117th Cong. (2022) (testimony of Caitriona Fitzgerald, Deputy Director, EPIC), https://epic.org/wp-content/uploads/2022/06/Testimony_Fitzgerald_CPC_2022.06.14.pdf; Governor Moore Signs Maryland Online Data Privacy Act, EPIC (May 9, 2024), https://epic.org/governor-moore-signs-maryland-online-data-privacy-act/; Virginia Legislature Passes Weak AI Bill Full of Loopholes, EPIC (Feb. 21, 2025), https://epic.org/virginia-legislature-passes-weak-ai-bill-full-of-loopholes/.
3. Johana Bhuiyan, She Didn’t Get an Apartment Because of an AI-Generated Score – and Sued to Help Others Avoid the Same Fate, Guardian (Dec. 14, 2024), https://www.theguardian.com/technology/2024/dec/14/saferent-ai-tenant-screening-lawsuit.
4. Elizabeth Napolitano, UnitedHealth Uses Faulty AI to Deny Elderly Patients Medically Necessary Coverage, Lawsuit Claims, CBS News (Nov. 20, 2023), https://www.cbsnews.com/news/unitedhealth-lawsuit-ai-deny-claims-medicare-advantage-health-insurance-denials/.
5. Charlotte Lytton, AI Hiring Tools May Be Filtering Out the Best Job Applicants, BBC (Feb. 16, 2024), https://www.bbc.com/worklife/article/20240214-ai-recruiting-hiring-software-bias-discrimination.
6. Kori Hale, A.I. Bias Caused 80% of Black Mortgage Applicants to Be Denied, Forbes (Sept. 2, 2021), https://www.forbes.com/sites/korihale/2021/09/02/ai-bias-caused-80-of-black-mortgage-applicants-to-be-denied/.
7. Kevin De Liban, Inescapable AI: The Ways AI Decides How Low-Income People Work, Live, Learn, and Survive, TechTonic Justice (Nov. 2024), https://www.techtonicjustice.org/reports/inescapable-ai.
8. A.I./Algorithmic Decision-Making: Consumer Reports Nationally Representative Phone and Internet Survey, May 2024, Consumer Reports Survey Group (July 9, 2024), https://advocacy.consumerreports.org/wp-content/uploads/2024/07/CR-AES-AI-Algorithms-Report-7.25.24.pdf.
9. Id.
10. A nationally representative survey of 2,022 U.S. adults found that 83% of those surveyed would want to know what personal information AI systems used to produce a decision about them and that 91% would want a way to correct any incorrect information the AI system relied on in producing that decision. A.I./Algorithmic Decision-Making: Consumer Reports Nationally Representative Phone and Internet Survey, May 2024, Consumer Reports Survey Group (July 9, 2024), https://advocacy.consumerreports.org/wp-content/uploads/2024/07/CR-AES-AI-Algorithms-Report-7.25.24.pdf.
11. Local Law 144, File # Int. 1894-2020 (N.Y.C. Council 2021).
12. Lucas Wright et al., Null Compliance: NYC Local Law 144 and the Challenges of Algorithm Accountability, 2024 ACM Conference on Fairness, Accountability, and Transparency (June 2024), https://dl.acm.org/doi/pdf/10.1145/3630106.3658998.
13. Grace Gedye, New Research: NYC Algorithmic Transparency Law Is Falling Short of Its Goals, Consumer Reports (Feb. 8, 2024), https://innovation.consumerreports.org/new-research-nyc-algorithmic-transparency-law-is-falling-short-of-its-goals/.
14. Local Law 144, File # Int. 1894-2020 (N.Y.C. Council 2021).
15. S.B. 24-205, 2024 Gen. Assemb., Reg. Sess. (Colo. 2024).
16. Moshe Glickman & Tali Sharot, How Human-AI Feedback Loops Alter Human Perceptual, Emotional and Social Judgments, Nature Human Behaviour (Dec. 18, 2024), https://www.nature.com/articles/s41562-024-02077-2.
17. Consumer Protection Laws: 50 State Survey, Justia (last reviewed Oct. 2023), https://www.justia.com/consumer/consumer-protection-laws-50-state-survey/#connecticut.