Testimony
(Maryland) S.B. 936: Regulating High-Risk AI
Senate Finance Committee
Miller Senate Office Building
11 Bladen Street
Annapolis, MD 21401
Dear Chair Beidle and Members of the Committee,
EPIC writes in support of S.B. 936, An Act concerning Consumer Protection, High-Risk Artificial Intelligence, and Developer and Deployer Requirements. We commend Senator Hester for sponsoring this legislation and for presenting thoughtful amendments to further strengthen the bill. Maryland has the opportunity to enact innovative policy that both protects the rights and privacy of Maryland residents and encourages AI innovation, just as Maryland did last year with the passage of its landmark Maryland Online Data Privacy Act.
The Electronic Privacy Information Center (EPIC) is an independent, nonpartisan, non-profit research organization in Washington, D.C., established in 1994 to protect privacy, freedom of expression, and democratic values in the information age.[1] EPIC has advocated for strong AI and privacy laws at both the state and federal level for many years.[2]
In my testimony, I will discuss why it is so critical that Maryland take immediate action to place common-sense regulations on the development and use of high-risk AI systems, the reasons S.B. 936 as currently drafted (without the committee amendments) misses the mark, and how advancing S.B. 936 with the committee amendments Senator Hester has offered would be a significant step toward protecting Maryland residents.
AI regulation is urgently needed, and Maryland should act now.
Legislation like S.B. 936 that seeks to address the harms of companies using AI systems in making important decisions about people’s lives is urgently needed. Maryland residents need protections from harms they are already suffering because of the unregulated use of AI systems in life-altering decisions. Both public and private entities use high-risk AI systems in making decisions about people’s housing, employment, education, health care, finances, and access to government services every day. Passing this bill is essential to ensure that the AI systems used in making these key decisions about the lives of Maryland residents are transparent, nondiscriminatory, and accurate – and that individuals have the information and ability to hold companies accountable if the AI systems those companies use do cause harm.
The use of AI systems has led to the wrongful denial of people’s access to housing,[3] medically necessary coverage,[4] job opportunities,[5] loans,[6] and more. The use of AI in making important decisions is a widespread practice that affects most Americans. Low-income individuals are even more likely to have important decisions about their lives made using AI; a recent report by TechTonic Justice found that virtually all 92 million low-income Americans have “some basic aspect of their lives decided by AI.”[7]
Right now, Maryland residents have no way to know whether companies are using AI in making these life-altering decisions about them. Marylanders also have no way to know how these AI systems work, how much an entity relied on AI to make a decision about them, or whether the AI was even relying on accurate information to generate its decision. This information asymmetry between companies developing and using AI and the individuals being subjected to AI – along with the “black box” nature of AI systems – is one reason S.B. 936 is essential to protect Marylanders. S.B. 936 requires real transparency from developers and deployers about AI systems and gives important rights to Maryland residents who are subject to decisions made using AI.
Research shows Americans are uncomfortable with companies using AI systems to make these kinds of decisions about their lives. According to a nationally representative survey conducted by Consumer Reports last year, the majority of American adults surveyed expressed concern over AI systems being used in making decisions in several important contexts covered by S.B. 936.[8] Specifically, 72% of U.S. adults would be uncomfortable with AI having a role in a job interview process; 66% would be uncomfortable with banks using AI to determine if they qualified for a loan; 69% would be uncomfortable with apartments, condos, or senior communities using AI to screen potential tenants; and 58% would be uncomfortable with hospitals using AI to help make health care decisions about diagnoses or treatment.[9]
The proliferation of opaque and unproven AI systems into the most sensitive aspects of people’s lives, coupled with Americans’ discomfort with this reality, makes it essential that the Maryland Legislature move forward with S.B. 936 (with the committee amendments adopted), which puts careful guardrails on the development and use of these systems in consequential decisions.
As introduced, S.B. 936 (without the committee amendments) is insufficient to protect Marylanders from the harms of unregulated use of AI in consequential decisions.
While placing guardrails on the development and deployment of high-risk AI systems for use in consequential decisions is critical, the current language of S.B. 936 misses the mark. The bill purports to address algorithmic discrimination, but because of numerous loopholes and exemptions, improperly scoped definitions, and overbroad liability shields for companies, S.B. 936 as introduced does not achieve this goal. Passing this bill as introduced would leave Maryland residents without meaningful protections from the opaque systems that contribute to life-altering decisions about their access to necessities like housing, employment, health care, and government services.
Fortunately, Senator Hester has consulted with many stakeholders throughout her work on this bill, and she has offered well-crafted, thoughtful amendments that would produce a bill EPIC is proud to support. I will briefly discuss why the language of S.B. 936 as introduced falls short and how Senator Hester’s proposed amendments solve each issue.
A. As introduced, key definitions in S.B. 936 contain numerous loopholes and exemptions that leave too much discretion to companies to decide for themselves whether they are covered by this bill.
The precise wording of the definitions in this bill is critical to ensuring that it covers all companies that develop and use high-risk AI systems used in making, or as a substantial factor in making, consequential decisions about Maryland residents. A study of a similar New York City law illustrates why clear definitions are so essential. New York City’s Local Law 144, which went into effect last summer, prohibits employers from using AI systems in employment decisions unless they conduct an annual bias audit of the AI system, make information about the bias audit publicly available, and give disclosures to job applicants about the AI system.[10] However, when researchers at Cornell University, Data & Society, and Consumer Reports studied whether companies were complying with this new law, they found extremely low compliance rates – fewer than 20 of the 391 employers in the study had posted the required audit reports or transparency notices.[11] The researchers concluded that these low rates of compliance were likely due, in part, to the law’s definitions leaving too much room for companies to decide for themselves whether or not they needed to comply with the law.[12] The law covered only companies that used AI to “substantially assist or replace discretionary decision making.”[13] Unfortunately, the definitional problems with Local Law 144 that incentivized this extremely low compliance are quite similar to those found in S.B. 936 as introduced.
As introduced, the definition of “high-risk AI system” contains several loopholes that may allow deployers to claim that their use of an AI system does not qualify as high-risk when it otherwise would. First, it defines a high-risk AI system as one that is “specifically intended to autonomously make” a consequential decision. This narrows the scope of the bill so much that all a developer would need to do to avoid complying is insert a disclaimer in its terms of service warning that “this system is not intended to be used to replace human decisionmaking.” To be effective, this definition cannot allow developers and deployers to decide for themselves whether the bill covers them.
Second, this definition exempts several uses of AI, including developing or using a system that is “intended to perform any narrow procedural task” or to “improve the result of a previously completed human activity.” These exceptions appear to have been taken from the EU AI Act, but because S.B. 936 does not adopt the EU AI Act’s structure and Maryland lacks the robust data privacy requirements found in the EU’s General Data Protection Regulation (GDPR), keeping these exemptions in the definition of “high-risk AI system” is problematic. In an American context, these exceptions are simply unnecessary and, worse, could allow companies to argue that all manner of tasks are “narrow” and “procedural” and therefore exempt from this bill. For example, a company might argue that screening resumes for a job opportunity or setting the price of insurance is a “narrow procedural task” and thus decide it does not need to follow the transparency and consumer rights provisions of S.B. 936. However, this bill clearly intends to cover exactly these sorts of decisions because they are explicitly identified in its definition of “consequential decision.” These exceptions would allow companies to self-select out of complying with this bill.
The definition of “substantial factor” also contains two problematic loopholes. First, limiting a substantial factor to something that is a “principal basis for” or “alters the outcome of” a consequential decision narrows the scope of the bill so much that it would exclude most of the common ways companies use AI in making the important decisions covered by this bill. The “substantial factor” definition in S.B. 936 as introduced is much narrower than the definitions found in similar legislation in other states, including the Colorado AI Act.[14]
Second, the definition as introduced includes the use of AI systems “as the principal basis to make a consequential decision.” This language should be revised to clarify that an AI system is a substantial factor in making a consequential decision even if it is used only as a partial basis in making the decision. The output of the AI system does not need to be the whole basis of a decision – or even the principal basis – to still be considered a substantial factor in the decision.
Without this change, companies could simply require a human to rubber-stamp recommendations generated by AI systems as an easy way to avoid complying with this bill. Because a human is now technically involved in making the decision – even if that person has no power to diverge from the recommendation generated by the AI system – companies can say that the AI was not the whole basis of the decision, and thus that they do not need to comply with this bill. This definitional change is essential not only as a common-sense amendment but also because humans are not always aware of how much automated systems influence their thinking. Research shows both that AI systems amplify biases human decisionmakers already have and that humans are more influenced by the decisions of AI systems than they perceive themselves to be.[15] Because of this, it is critical that S.B. 936 cover all consequential decisions in which an AI output assisted in making the decision. The definition of “substantial factor” should be amended to close these loopholes, and the definition of “principal basis” should then also be struck, as it would no longer be used in the bill.
The amendments Senator Hester presented fix this problem by replacing these definitions with more carefully worded ones that ensure companies will not be able to decide for themselves whether their use of AI systems is covered by this bill. We urge the Committee to adopt these amendments.
B. As introduced, S.B. 936 would treat algorithmic discrimination as less harmful than discrimination by any other means.
While S.B. 936 seeks to address algorithmic discrimination, the language as introduced falls short of this goal. First, this bill as introduced places a duty of care on developers and deployers to protect consumers from “any known or reasonably foreseeable risks of algorithmic discrimination” of high-risk AI systems. While this may seem sufficient at first glance, this duty of care standard actually treats algorithmic discrimination as less harmful than discrimination by any other means. Traditional civil rights laws prohibit discrimination outright. It is not enough for companies to merely use “reasonable care” to avoid discrimination in other contexts, such as employment or housing; they simply must not discriminate. Requiring companies to fulfill a duty of care when it comes to algorithmic discrimination rather than prohibiting it outright signals that the use of technology to discriminate is somehow more acceptable and less harmful than discrimination by a human – clearly, this is not the goal of this bill. To avoid lowering the standard in cases of algorithmic discrimination, the provisions placing a duty of care on developers and deployers to avoid algorithmic discrimination should be removed.
The amendments Senator Hester has presented solve this problem by removing the duty of care provisions. We urge the Committee to adopt this amendment.
For these same reasons, the rebuttable presumptions should be removed from S.B. 936 as introduced. These provisions allow developers and deployers to avoid liability for algorithmic discrimination if they have complied with the other requirements in the bill. Companies should not be able to avoid liability for discrimination merely because they have complied with their requirements under this bill, which, while important, are largely documentation obligations. Further, these rebuttable presumptions could confuse and undermine existing laws against discrimination. This sort of “get-out-of-jail-free” card does not exist in any other non-discrimination law, and it should not be included in S.B. 936 either. In a court case implicating both this bill and a traditional non-discrimination law, for example, these rebuttable presumptions are likely to confuse jurors and could weaken interpretations of existing civil rights laws.
Additionally, the inclusion of these rebuttable presumptions would make the bill more difficult and costly for the Attorney General to enforce: to win a case against a developer or deployer, the Attorney General would have to overcome the presumptions on top of proving a violation of this bill. To cut down on enforcement costs and ensure existing civil rights laws are not weakened, the rebuttable presumptions should be removed.
The amendments Senator Hester has presented solve this problem by removing the rebuttable presumptions in the bill. We urge the Committee to adopt this amendment.
C. As introduced, S.B. 936 contains overbroad exemptions that would allow developers to withhold information from disclosure at their own discretion without specifying a reason for the withholding.
As introduced, S.B. 936 would allow developers to withhold any information they consider to be a trade secret, “confidential” or “proprietary” information, or information that they believe could create a “security risk.” Trade secret exemptions have been known to allow companies to hide information from the public, facilitating scams and the sale of snake oil.[16] The addition of exemptions for “confidential” or “proprietary” information creates even more space for developers to withhold information that should be available to the public or to the Attorney General under this bill. These vague and undefined terms would allow companies to withhold so much information that the bill’s disclosure requirements would become futile. Similarly, while there may be valid reasons for developers to withhold certain information from the public if its disclosure would create a security risk, S.B. 936 as introduced neither defines “security risk” nor places any limits on companies’ ability to withhold information under this exemption. Without clear limits on these exemptions, developers will be incentivized to withhold as much information as possible, and the law’s enforcers will have no way to know whether companies are violating S.B. 936.
The amendments Senator Hester has presented solve this problem by removing the undefined and overbroad exemptions for “confidential” and “proprietary” information and for information that might create a “security risk” and by adding important steps developers must take to claim a trade secret exemption. We urge the Committee to adopt this amendment.
D. As introduced, S.B. 936 contains a concerning exception that gives companies – not the affected individual – the power to decide whether that individual should appeal a decision made about them using AI.
As introduced, the right to appeal contains a large loophole that would render it meaningless in practice. The current language allows deployers to withhold the right to appeal from consumers if the company believes an appeal would not be in the “best interest” of the consumer. This loophole would allow companies to take away an individual’s right to appeal at any time, for any reason. The individual – not the deployer – is in the best position to know whether appealing a decision is in their own best interest. If a person decides an appeal is not in their best interest, that person can simply choose not to appeal; the decision should be left in the individual’s hands, not the company’s.
The amendments Senator Hester has presented solve this problem by removing this exception from the right to appeal. We urge the Committee to adopt this amendment.
E. As introduced, S.B. 936 contains an unlimited right to cure that would incentivize developers and deployers to ignore their obligations under this bill.
As introduced, S.B. 936 contains an unlimited right for developers and deployers to cure violations of this bill before facing any real consequences. An unlimited right to cure could create undesirable incentives for companies to delay or avoid complying with the requirements of this bill indefinitely, knowing that they will be able to cure violations before the Attorney General can pursue a case against them. This is a recipe for noncompliance with this bill.
The amendments Senator Hester has presented solve this problem by limiting the right to cure to the first year after this bill is enacted. A one-year right to cure is a reasonable compromise that gives companies the time and flexibility they need to come into compliance with this bill before more aggressive enforcement mechanisms can be used. We urge the Committee to adopt this amendment.
The issues outlined above are only some of the problems with S.B. 936 as introduced. Because of these numerous concerns, S.B. 936 should not move forward without the amendments Senator Hester has presented. However, a version of S.B. 936 that incorporates her committee amendments would be a significant step forward for Maryland residents, and EPIC urges this Committee to support it.
S.B. 936 (with the committee amendments) would give Maryland residents significant protection from the harms caused by the unregulated use of AI in high-stakes decisions.
The amendments Senator Hester has proposed would turn S.B. 936 from a loophole-ridden bill that would do little to protect Marylanders into a strong piece of legislation that would ensure your constituents are protected from algorithmic discrimination.
A. S.B. 936 (with committee amendments) would place important transparency requirements on developers and deployers, giving Marylanders more information about the AI systems used to make decisions about them.
Right now, Maryland residents have no way to know whether companies are using AI in making decisions about their jobs, education, housing, health care, finances, or status within the criminal justice system. Even in cases where consumers do know that companies are using AI, they have no ability or right to know how the AI works, how much the company relied on AI to make its decision, what personal information was processed by the AI, and how the company monitors the AI for discrimination or inaccuracy. Passing S.B. 936 (with committee amendments) would give Maryland residents access to this important information.
S.B. 936 (with committee amendments) requires real transparency from developers and deployers and gives important rights to Maryland residents who are subject to decisions made using AI. The bill would require developers and deployers to provide important information about their AI systems both to the public and to Marylanders subject to decisions made using these systems, increasing transparency and reducing the “black box” nature of high-risk AI systems. Requiring developers and deployers to follow these transparency and disclosure requirements is an important step toward ensuring AI systems will be safer and less likely to produce discriminatory or inaccurate results.
This increased transparency from the developers and deployers of these systems will also ease the burden on Maryland residents who have been discriminated against by a company using these systems. Having access to information about whether AI was used in making a consequential decision about them, and how that system works, is an essential step toward mitigating the current information asymmetry between powerful companies and individual consumers. Armed with the disclosures required under this bill, Marylanders will be able to hold companies accountable in court if those companies choose to use AI to discriminate.
B. Passing S.B. 936 (with committee amendments) is necessary to ensure Marylanders who have been harmed by algorithmic discrimination can successfully vindicate their existing rights in court.
S.B. 936 (with committee amendments) is a necessary step toward reinforcing Maryland’s consumer protection and civil rights laws as they apply to the use of AI. While existing laws, of course, still apply to the use of AI, this bill would ease the burden on individuals in proving discrimination claims against companies that have discriminated against them by using an unsafe AI system in two ways: (1) making clear that disparate impact resulting from the use of AI is discrimination and (2) easing the burden of bringing a discrimination claim involving the use of AI by eliminating the need for a harmed individual to prove causation.
This bill’s definition of algorithmic discrimination explicitly covers both disparate treatment and disparate impact based on a protected class. Ensuring disparate impact is covered in the context of decisions made using AI is essential because industry actors have argued that, because AI does not have intent, companies cannot be held liable for AI-driven discrimination. However, regardless of whether the use of AI precludes intent, its use can nevertheless result in discrimination. An AI system may discover and classify individuals on the basis of protected characteristics or use seemingly neutral criteria that have a discriminatory impact. This has been made clear by numerous reports from journalists, researchers, and whistleblowers calling out problems with the development or use of AI systems that have resulted in bias and discrimination.
S.B. 936 (with committee amendments) would make it easier for a plaintiff bringing a discrimination claim under this bill to prove their case. First, without this bill’s transparency requirements, Maryland residents may not even know an AI system is being used in making a consequential decision about them. Second, even if they did know an AI system was used and suspected that the system was producing discriminatory results, it would be exceedingly difficult for them to bring a discrimination claim under traditional civil rights laws. Because of the “black box” nature of AI, people who are subject to decisions made using high-risk AI systems often do not know why a certain decision was made; what personal information was fed to the AI; how the system weighed certain factors in producing its decision; or whether the system improperly relied on a protected classification, such as a person’s race or sexual orientation, in producing its decision. This information asymmetry would make it nearly impossible for a plaintiff to meet the burden of proving causation – that the adverse decision resulted from the AI system rejecting the plaintiff because of a trait protected by law.
This bill would reduce these obstacles and ensure that Marylanders are not discriminated against because of a faulty, biased, or inaccurate AI system. The transparency and disclosure obligations this bill would place on developers and deployers would arm Maryland residents with the information necessary to determine whether they have been discriminated against and, if they have, to prove that discrimination in court.
C. S.B. 936 (with committee amendments) would grant important rights to Maryland residents who are subject to decisions made by companies using high-risk AI systems.
Importantly, S.B. 936 (with committee amendments) would grant Maryland residents certain rights if an entity uses an AI system to make a consequential decision about them. Under this bill, deployers must inform consumers if they are using an AI system in making a consequential decision about them. This notice must include a plain-language description of the AI system and its purpose, the nature of the decision, the personal characteristics the system will assess and how they will be assessed, the relevance of these personal characteristics to the decision, any human components of the system, contact information for the deployer, and instructions about how to access the deployer’s required public posting that includes more information about the AI system’s logic and the results of the system’s most recent impact assessment. This notice must also inform the consumer that they have the right to opt out of any decision based on their personal data.
This bill would also give consumers the right to an explanation of any adverse decision made using an AI system, the right to correct any incorrect personal data, and the right to appeal an adverse decision. Deployers would have to tell consumers the principal reasons for the decision, the degree to which they relied on an AI system in making the decision, and the types and sources of data that were fed to the AI system. Consumers would then have the right to correct any incorrect personal data the AI system was given. Research has shown that these rights – to know what personal information AI systems use to produce recommendations and to be able to correct incorrect information – are important to most Americans.[17] S.B. 936 (with committee amendments) would also give consumers the right to appeal adverse decisions.
* * *
EPIC commends Senator Hester and this Committee for recognizing that regulating the development and use of AI systems in critical contexts like housing, employment, and health care is a pressing issue that needs the Legislature’s immediate attention. With the adoption of the committee amendments, S.B. 936 would protect Maryland residents while allowing AI innovation to continue safely and responsibly. We urge the Committee to adopt the committee amendments to S.B. 936 and vote to advance the amended bill.
Thank you for the opportunity to speak today. EPIC is eager to continue working with Senator Hester on this bill and is happy to be a resource to the Committee on these issues.
Sincerely,
Caitriona Fitzgerald
EPIC Deputy Director
Kara Williams
EPIC Law Fellow
[1] EPIC, About EPIC, https://epic.org/about/.
[2] See, e.g., Protecting America’s Consumers: Bipartisan Legislation to Strengthen Data Privacy and Security: Hearing before the Subcomm. on Consumer Protection & Comm. of the H. Comm. on Energy & Comm., 117th Cong. (2022) (testimony of Caitriona Fitzgerald, Deputy Director, EPIC), https://epic.org/wp-content/uploads/2022/06/Testimony_Fitzgerald_CPC_2022.06.14.pdf; Governor Moore Signs Maryland Online Data Privacy Act, EPIC (May 9, 2024), https://epic.org/governor-moore-signs-maryland-online-data-privacy-act/; Virginia Legislature Passes Weak AI Bill Full of Loopholes, EPIC (Feb. 21, 2025), https://epic.org/virginia-legislature-passes-weak-ai-bill-full-of-loopholes/.
[3] Johana Bhuiyan, She Didn’t Get an Apartment Because of an AI-Generated Score – and Sued to Help Others Avoid the Same Fate, Guardian (Dec. 14, 2024), https://www.theguardian.com/technology/2024/dec/14/saferent-ai-tenant-screening-lawsuit.
[4] Elizabeth Napolitano, UnitedHealth Uses Faulty AI to Deny Elderly Patients Medically Necessary Coverage, Lawsuit Claims, CBS News (Nov. 20, 2023), https://www.cbsnews.com/news/unitedhealth-lawsuit-ai-deny-claims-medicare-advantage-health-insurance-denials/.
[5] Charlotte Lytton, AI Hiring Tools May Be Filtering out the Best Job Applicants, BBC (Feb. 16, 2024), https://www.bbc.com/worklife/article/20240214-ai-recruiting-hiring-software-bias-discrimination.
[6] Kori Hale, A.I. Bias Caused 80% of Black Mortgage Applicants to Be Denied, Forbes (Sept. 2, 2021), https://www.forbes.com/sites/korihale/2021/09/02/ai-bias-caused-80-of-black-mortgage-applicants-to-be-denied/.
[7] Kevin De Liban, Inescapable AI: The Ways AI Decides How Low-Income People Work, Live, Learn, and Survive, TechTonic Justice (Nov. 2024), https://www.techtonicjustice.org/reports/inescapable-ai.
[8] A.I./Algorithmic Decision-Making: Consumer Reports Nationally Representative Phone and Internet Survey, May 2024, Consumer Reports Survey Group (July 9, 2024), https://advocacy.consumerreports.org/wp-content/uploads/2024/07/CR-AES-AI-Algorithms-Report-7.25.24.pdf.
[9] Id.
[10] Local Law 144, File # Int. 1894-2020 (N.Y.C. Council 2021).
[11] Lucas Wright et al., Null Compliance: NYC Local Law 144 and the Challenges of Algorithm Accountability, 2024 ACM Conference on Fairness, Accountability, and Transparency (June 2024), https://dl.acm.org/doi/pdf/10.1145/3630106.3658998.
[12] Grace Gedye, New Research: NYC Algorithmic Transparency Law Is Falling Short of its Goals, Consumer Reports (Feb. 8, 2024), https://innovation.consumerreports.org/new-research-nyc-algorithmic-transparency-law-is-falling-short-of-its-goals/.
[13] Local Law 144, File # Int. 1894-2020 (N.Y.C. Council 2021).
[14] S.B. 24-205, 2024 Gen. Assemb., Reg. Sess. (Colo. 2024).
[15] Moshe Glickman & Tali Sharot, How Human-AI Feedback Loops Alter Human Perceptual, Emotional and Social Judgments, Nature Human Behaviour (Dec. 18, 2024), https://www.nature.com/articles/s41562-024-02077-2.
[16] See, e.g., Eric Lach, The Secrets of a Billionaire’s Blood-Testing Startup, The New Yorker (Oct. 16, 2015), https://www.newyorker.com/news/news-desk/the-secrets-of-blood-testing-startup-theranos (detailing how Theranos used trade secrets claims to hide the fact that its technology did not live up to the company’s claims).
[17] A nationally representative survey of 2,022 U.S. adults found that 83% of those surveyed would want to know what personal information AI systems used to produce a decision about them and that 91% would want the way to correct any incorrect information the AI system relied on in producing that decision. A.I./Algorithmic Decision-Making: Consumer Reports Nationally Representative Phone and Internet Survey, May 2024, Consumer Reports Survey Group (July 9, 2024), https://advocacy.consumerreports.org/wp-content/uploads/2024/07/CR-AES-AI-Algorithms-Report-7.25.24.pdf.