Testimony
(Maryland) H.B. 1331: Regulating High-Risk AI
House Economic Matters Committee
House Office Building
6 Bladen Street
Annapolis, MD 21401
Dear Chair Wilson and Members of the Committee,
EPIC writes in support of H.B. 1331, An Act concerning Consumer Protection and Artificial Intelligence. We commend Delegate Qi for sponsoring this important legislation and for avoiding many of the pitfalls we have seen infecting similar bills across the country. Maryland has the opportunity to enact innovative policy that both protects the rights and privacy of Maryland residents and encourages technological innovation, just as it did last year with the passage of the landmark Maryland Online Data Privacy Act.
The Electronic Privacy Information Center (EPIC) is an independent, nonpartisan, non-profit research organization in Washington, D.C., established in 1994 to protect privacy, freedom of expression, and democratic values in the information age.[1] EPIC has advocated for strong AI and privacy laws at both the state and federal level for many years.[2]
In my testimony, I will discuss why it is so critical that Maryland take immediate action to place common-sense regulations on the development and use of high-risk AI systems, why advancing H.B. 1331 would be a significant step toward protecting Maryland residents, and what amendments are necessary to ensure this bill achieves its intent of addressing algorithmic discrimination.
AI regulation is urgently needed, and Maryland should act now.
Legislation like H.B. 1331 that seeks to address the harms of companies using AI systems in making important decisions about people’s lives is urgently needed. Maryland residents need protections from harms they are already suffering because of the unregulated use of AI systems in life-altering decisions. Both public and private entities use high-risk AI systems in making decisions about people’s housing, employment, education, health care, finances, and access to government services every day. Passing this bill is essential to ensure that the AI systems used in making these key decisions about the lives of Maryland residents are transparent, nondiscriminatory, and accurate – and that individuals have the information and ability to hold companies accountable if AI systems they use do cause harm.
The use of AI systems has led to the wrongful denial of people’s access to housing,[3] medically necessary coverage,[4] job opportunities,[5] loans,[6] and more. The use of AI in making important decisions is a widespread practice that affects most Americans. Low-income individuals are even more likely to have important decisions about their lives made using AI; a recent report by TechTonic Justice found that virtually all 92 million low-income Americans have “some basic aspect of their lives decided by AI.”[7]
Right now, Maryland residents have no way to know whether companies are using AI in making these life-altering decisions about them. Marylanders also have no way to know how these AI systems work, how much an entity relied on AI to make a decision about them, or whether the AI was even relying on accurate information to generate its decision. This information asymmetry between companies developing and using AI and the individuals being subjected to AI – along with the “black box” nature of AI systems – is one reason H.B. 1331 is essential to protect Marylanders. H.B. 1331 requires real transparency from developers and deployers about AI systems and gives important rights to Maryland residents who are subject to decisions made using AI.
Research shows Americans are uncomfortable with companies using AI systems to make these kinds of decisions about their lives. According to a survey conducted by Consumer Reports last year, the majority of American adults surveyed expressed concern over AI systems being used in making decisions in several important contexts covered by H.B. 1331.[8] Specifically, 72% of U.S. adults would be uncomfortable with AI having a role in a job interview process; 66% would be uncomfortable with banks using AI to determine if they qualified for a loan; 69% would be uncomfortable with apartments, condos, or senior communities using AI to screen potential tenants; and 58% would be uncomfortable with hospitals using AI to help make health care decisions about diagnoses or treatment.[9]
The proliferation of opaque and unproven AI systems into the most sensitive aspects of people’s lives, coupled with Americans’ discomfort with this reality, makes it essential that the Maryland Legislature move forward with H.B. 1331 to put careful guardrails on the development and use of these systems in consequential decisions.
Advancing this bill would provide Maryland residents with important rights and information related to the AI systems used to make significant decisions about them.
A. H.B. 1331 would provide necessary information to the public – including Maryland residents – who are subject to decisions made using high-risk AI.
Right now, Maryland residents have no way to know whether companies are using AI in making decisions about their jobs, education, housing, health care, finances, or status within the criminal justice system. Even in cases where consumers do know that companies are using AI, they have no ability or right to know how the AI works, how much the company relied on AI to make its decision, what personal information was processed by the AI, and how the company monitors the AI for discrimination or inaccuracy. Passing H.B. 1331 would give Maryland residents access to this important information.
H.B. 1331 requires real transparency from developers and deployers about AI systems and gives important rights to Maryland residents who are subject to decisions made using AI. This bill would require developers and deployers to provide important information about their AI systems to the public and Marylanders subject to decisions made using these systems, increasing transparency and reducing the “black box” nature of high-risk AI systems. Requiring AI developers to follow these transparency and disclosure requirements is an important step toward ensuring AI systems will be safer and less likely to produce discriminatory or inaccurate results.
This increased transparency from the developers and deployers of these systems will also ease the burden on Maryland residents who have been discriminated against by a company using these systems. Having access to information about whether AI was used in making a consequential decision about them and how that system works is an essential step toward mitigating the current information asymmetry that exists between powerful companies and individual consumers. Armed with the disclosures required under this bill, Marylanders will be able to hold companies accountable in court if they choose to use AI to discriminate.
B. H.B. 1331 would grant important rights to Maryland residents who are subject to decisions made by companies using high-risk AI systems.
Importantly, H.B. 1331 would grant Maryland residents certain rights if an entity uses an AI system to make a consequential decision about them. Under this bill, deployers must inform consumers if they are using an AI system in making a consequential decision about them and provide a robust disclosure about how and why the company is using the AI system in making an important decision. This bill would also give consumers the right to correct any incorrect personal data and the right to appeal an adverse decision. Research has shown that these rights—to know what personal information AI systems use to produce recommendations and to be able to correct incorrect information—are important to most Americans.[10]
H.B. 1331 requires a few key amendments to ensure it can fully protect Marylanders.
A. Sexual orientation and gender identity should be added to this bill’s definition of “algorithmic discrimination” to match other Maryland law.
The definition of “algorithmic discrimination” is missing two important categories – sexual orientation and gender identity – that are covered by the Maryland Online Data Privacy Act enacted last year. These protected categories should also be included in this bill.
Section 14-5001(B)(1) should be amended to read as follows: “ ‘Algorithmic discrimination’ means differential treatment as a result of the use of artificial intelligence that negatively impacts a person based on the person’s actual or perceived age, color, disability, ethnicity, genetic information, proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, sexual orientation, gender identity, or other protected class.”
B. A definition should be added for “consumer” for clarity.
H.B. 1331 uses the term “consumer” throughout, but the bill never defines it. A definition of “consumer” should be added, and it should be sure to include workers because this bill covers decisions that would affect “employment opportunities.”
EPIC would suggest the following definition: “‘Consumer’ means an individual who: (1) is a resident of the State; (2) is an employee as defined in Section 3-1001 of the Labor and Employment Article; or (3) is employed by a business in the State.”
C. The definition of “decision that produces legal or similarly significant effects concerning the consumer” should be expanded.
This bill’s current definition, from the Maryland Online Data Privacy Act, includes nearly all of the key contexts that are covered by similar bills seeking to regulate the use of AI in consequential decisions. However, two important decision contexts are missing: decisions about insurance and decisions about access to essential government benefits and services. These categories should be added to the definition under this bill. A decision about whether someone has access to Medicaid or SNAP benefits, for example, can be as life-altering as the other contexts covered by this bill. Similarly, whether a Maryland resident can access car insurance can affect their everyday life in key ways, such as their ability to drive to work and earn a living. For these reasons, both “insurance” and access to “essential government benefits and services” should be added to the covered decisions under this bill, in addition to the list outlined in the data privacy section of this title.
One other key addition that should be made to this definition concerns what types of decisions the bill would cover. Many other similar bills, including the Colorado AI Act, cover not only whether consumers are provided or denied services by a company using high-risk AI but also the “cost and terms of” those services. Whether insurance or rent is so cost-prohibitive as to be effectively denied to a Marylander is just as much a “decision that produces legal or similarly significant effects” as an outright denial of insurance or an apartment. Because of the high stakes of the costs and terms of these life necessities, these categories should also be added to the definition of “decision that produces legal or similarly significant effects.”
D. A definition should be added of “substantial factor” for clarity.
The definition of “high-risk AI system” includes the term “substantial factor,” but that term is not defined in the bill. A definition of “substantial factor” should be added to ensure that developers and deployers cannot decide for themselves whether or not they are covered by this bill and, thus, need to comply with its transparency and testing requirements.
EPIC would suggest the following definition: “‘Substantial factor’ includes any use of an artificial intelligence system to generate any content, decision, prediction, or recommendation concerning a consumer that is used as a basis or partial basis in making a decision that produces legal or similarly significant effects concerning the consumer.”
E. The exception to the right to appeal should be removed because it gives companies – not the affected individual – the power to decide whether that individual should appeal a decision made about them using AI.
The right to appeal—an important right that Delegate Qi rightly includes in this bill—unfortunately contains a large loophole that would render it meaningless in practice. As currently drafted, H.B. 1331 allows deployers to withhold the right to appeal from consumers if the company believes an appeal would not be in the “best interest” of the consumer. This loophole is a problem because it would allow companies to take away an individual’s right to appeal at any time for any reason. The individual—not the deployer—is in the best position to know whether appealing a decision is in their own best interest. If a person decides an appeal is not in their best interest, that person can simply not appeal, but this decision should be left in an individual’s hands, not a company’s. To ensure Maryland residents have full access to the rights this bill would give them, this exception should be removed from the right to appeal.
Section 14-5004(B) should be struck in its entirety: “A deployer may decline to provide an opportunity for appeal if the appeal would be against the best interests of the consumer, including situations in which the delay caused by the appeal would pose a risk to the safety of the consumer.”
F. The right to a post-decision explanation – which is already implied by H.B. 1331 – should be explicitly granted to Marylanders under this bill.
H.B. 1331 rightly gives consumers the right to correct any incorrect information used by an AI system in making a recommendation or decision about them and the right to appeal an adverse decision made using an AI system. However, to effectively exercise these rights, Maryland residents also need the right to an explanation of why the decision was made. It is unclear how a person is supposed to know that the system relied on incorrect information in producing a decision if they are not given a right to know what information was used in making the decision. To ensure consumers are given all the information they need to decide whether to exercise their rights to correct or appeal, the right to a post-decision explanation should be added.
EPIC suggests the following language be added to Section 14-5004(A) between (4) and (5): “If a deployer has deployed a high-risk artificial intelligence system to make a decision that produces legal or similarly significant effects concerning the consumer, the deployer shall, if the decision is adverse to the consumer, provide to the consumer:
(A) A statement disclosing the principal reason or reasons for the adverse decision, including the degree to which, and manner in which, the high-risk artificial intelligence system contributed to the adverse decision; and
(B) The type and source of data that was processed by the high-risk artificial intelligence system in making the adverse decision.”
G. H.B. 1331 should require developers to update documentation about changes to their AI system before a change is made, not after.
This bill would require developers to update their documentation within 90 days after making a change to a high-risk AI system. The update should instead be required not later than 90 days before the change is made, rather than allowing developers to change and use a system before they have updated the documentation about it.
Section 14-5002(E)(3)(II) should be amended by replacing “after” with “before,” to read as follows: “Not later than 90 days before any change is made to a high-risk artificial intelligence . . .”
H. The undefined “security risk” exemption in H.B. 1331 should be removed because it would allow developers to withhold information from disclosure at their own discretion.
As currently drafted, H.B. 1331 would allow developers to withhold any information that they believe could create a “security risk.” While there may be valid reasons for developers to withhold certain information from the public if its disclosure would create a security risk, H.B. 1331 as currently drafted does not define “security risk” nor does it place any limits on companies’ ability to withhold information under this exemption. Without clear limits on this exemption, developers will be incentivized to withhold as much information as possible.
Section 14-5003(G)(3) should be removed. If this is not possible, a definition should be added for “security risk” to ensure that developers cannot abuse this exemption to withhold information at their own discretion.
I. The rebuttable presumptions in the bill should be removed.
The rebuttable presumptions should be removed from H.B. 1331. As written, developers and deployers can avoid liability for algorithmic discrimination if they have complied with the other requirements in the bill. Companies should not be able to avoid liability for discriminating just because they have complied with their requirements under this bill, which, while important, are largely documentation obligations.
Further, the inclusion of these rebuttable presumptions would make the bill more difficult and costly for the Attorney General to enforce. This bill – particularly without a private right of action – would already add a significant amount of work to the AG’s office on top of all the other laws it is charged with enforcing. The rebuttable presumptions would require the AG to overcome them in addition to proving a violation of this bill to win a case against a developer or deployer. To cut down on costs and to encourage compliance with this bill, the rebuttable presumptions in Section 14-5007(A)-(B) should be removed.
J. H.B. 1331 should include a private right of action to ensure Marylanders can enforce their rights and to avoid overburdening the Attorney General.
H.B. 1331 classifies violations of the bill as unfair trade practices under Maryland’s consumer protection laws, but it takes the unusual step of removing harmed consumers’ right to hold the companies that harmed them liable. All 50 states have laws prohibiting unfair, deceptive, or abusive practices that provide consumers with a private right of action. H.B. 1331 would stray from this norm by treating developers and deployers that violate this bill differently from other companies that violate consumer protection laws. Consumers should be able to seek redress for violations of H.B. 1331 the same way they would for other violations of consumer protection laws.
In the absence of a private right of action, there is a very real risk that companies will not comply with the bill because they believe they are unlikely to get caught; companies are likely to assume that an under-resourced state Attorney General’s office will rarely pursue enforcement actions against them and to accept the risk of noncompliance. Private enforcement is therefore critical to ensure that developers and deployers have strong financial incentives to meet their transparency, testing, and disclosure requirements under this bill. The inclusion of a private right of action would also preserve the State’s resources by reducing the amount it must spend to enforce this law.
Section 14-5008(A)(2) should be amended by striking the exception, to read as follows: “Subject to the enforcement and penalty provisions contained in Title 13 of this article.”
* * *
EPIC commends Delegate Qi and this Committee for recognizing that regulating the development and use of AI systems in critical contexts like housing, employment, and health care is a pressing issue that needs the Legislature’s immediate attention. With a few key amendments, H.B. 1331 would protect Maryland residents while allowing AI innovation to continue safely and responsibly. We urge the Committee to give a favorable report to H.B. 1331.
Thank you for the opportunity to speak today. EPIC is eager to continue working with Delegate Qi on this bill and is happy to be a resource to the Committee on these issues.
Sincerely,
Caitriona Fitzgerald
EPIC Deputy Director
Kara Williams
EPIC Law Fellow
[1] EPIC, About EPIC, https://epic.org/about/.
[2] See e.g., Protecting America’s Consumers: Bipartisan Legislation to Strengthen Data Privacy and Security: Hearing before the Subcomm. on Consumer Protection & Comm. of the H. Comm. on Energy & Comm., 117th Cong. (2022) (testimony of Caitriona Fitzgerald, Deputy Director, EPIC), https://epic.org/wp-content/uploads/2022/06/Testimony_Fitzgerald_CPC_2022.06.14.pdf; Governor Moore Signs Maryland Online Data Privacy Act, EPIC (May 9, 2024), https://epic.org/governor-moore-signs-maryland-online-data-privacy-act/; Virginia Legislature Passes Weak AI Bill Full of Loopholes, EPIC (Feb. 21, 2025), https://epic.org/virginia-legislature-passes-weak-ai-bill-full-of-loopholes/.
[3] Johana Bhuiyan, She Didn’t Get an Apartment Because of an AI-Generated Score – and Sued to Help Others Avoid the Same Fate, Guardian (Dec. 14, 2024), https://www.theguardian.com/technology/2024/dec/14/saferent-ai-tenant-screening-lawsuit.
[4] Elizabeth Napolitano, UnitedHealth Uses Faulty AI to Deny Elderly Patients Medically Necessary Coverage, Lawsuit Claims, CBS News (Nov. 20, 2023), https://www.cbsnews.com/news/unitedhealth-lawsuit-ai-deny-claims-medicare-advantage-health-insurance-denials/.
[5] Charlotte Lytton, AI Hiring Tools May Be Filtering out the Best Job Applicants, BBC (Feb. 16, 2024), https://www.bbc.com/worklife/article/20240214-ai-recruiting-hiring-software-bias-discrimination.
[6] Kori Hale, A.I. Bias Caused 80% of Black Mortgage Applicants to Be Denied, Forbes (Sept. 2, 2021), https://www.forbes.com/sites/korihale/2021/09/02/ai-bias-caused-80-of-black-mortgage-applicants-to-be-denied/.
[7] Kevin De Liban, Inescapable AI: The Ways AI Decides How Low-Income People Work, Live, Learn, and Survive, TechTonic Justice (Nov. 2024), https://www.techtonicjustice.org/reports/inescapable-ai.
[8] A.I./Algorithmic Decision-Making: Consumer Reports Nationally Representative Phone and Internet Survey, May 2024, Consumer Reports Survey Group (July 9, 2024), https://advocacy.consumerreports.org/wp-content/uploads/2024/07/CR-AES-AI-Algorithms-Report-7.25.24.pdf.
[9] Id.
[10] A nationally representative survey of 2,022 U.S. adults found that 83% of those surveyed would want to know what personal information AI systems used to produce a decision about them and that 91% would want the way to correct any incorrect information the AI system relied on in producing that decision. A.I./Algorithmic Decision-Making: Consumer Reports Nationally Representative Phone and Internet Survey, May 2024, Consumer Reports Survey Group (July 9, 2024), https://advocacy.consumerreports.org/wp-content/uploads/2024/07/CR-AES-AI-Algorithms-Report-7.25.24.pdf.