AI Policy
Generating Harms: Generative AI’s Impact & Paths Forward
EPIC's report on generative A.I. harms and the policy interventions that can mitigate them.
Filed under: AI Policy, Artificial Intelligence and Human Rights, Commercial AI Use, Publications

New EPIC Report Sheds Light on Generative A.I. Harms
EPIC has just released a new report detailing the wide variety of harms that new generative A.I. tools like ChatGPT, Midjourney, and DALL-E pose! While many of these tools have been lauded for their capability to produce new and believable text, images, audio, and videos, the rapid integration of generative AI technology into consumer-facing products has undermined years-long efforts to make AI development transparent and accountable.
Filed under: AI Policy, Artificial Intelligence and Human Rights, Big Data, Commercial AI Use, Competition and Privacy, Consumer Privacy, Democracy & Free Speech, Online Harassment, Privacy & Racial Justice, Updates

EU Parliament Approves AI Act, Urges Rejection of Transatlantic Data Framework
The European Parliament held a series of meetings, resulting in the adoption of both the text of the AI Act and a resolution outlining risks in, and proposed changes to, the currently proposed EU-U.S. Data Privacy Framework.
Filed under: AI in the Criminal Justice System, AI Policy, Artificial Intelligence and Human Rights, Commercial AI Use, Consumer Privacy, Data Protection, Enforcement of Privacy Laws, International Privacy, International Privacy Laws, Privacy Laws, Screening & Scoring, Web Scraping, Updates

Key U.S. Enforcement Agencies Commit to Enforcement of Existing Laws on Entities Using AI
Filed under: AI in the Criminal Justice System, AI Policy, Artificial Intelligence and Human Rights, Commercial AI Use, Government AI Use, Screening & Scoring, Updates

EPIC Recommends ACUS Consider Administrative Burdens Exacerbated By Scoring and Screening Tools, Recommend Transparency
Filed under: AI in the Criminal Justice System, AI Policy, Artificial Intelligence and Human Rights, Government AI Use, Screening & Scoring, Updates

Framing the Risk Management Framework: Actionable Instructions by NIST in The “Measure” Section of the AI RMF
The Measure Function of the A.I. Risk Management Framework urges companies to build and deploy A.I. systems carefully, centering human experience and a range of impact points, including environmental impacts and impacts on civil liberties and rights. In particular, it calls for regular testing of systems for validity, reliability, transparency, accountability, safety, security, and fairness.
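As a rough, hypothetical sketch of what recurring measurement along those characteristics could look like in practice (the system name, scores, and threshold below are illustrative assumptions, not part of the NIST framework):

```python
# Illustrative only: one way an organization might record recurring measurements
# for the trustworthiness characteristics named in the Measure Function.
# The system name, scores, and threshold are hypothetical, not defined by NIST.
from dataclasses import dataclass, field
from datetime import date

CHARACTERISTICS = [
    "validity", "reliability", "transparency",
    "accountability", "safety", "security", "fairness",
]

@dataclass
class MeasurementRecord:
    system: str
    measured_on: date
    scores: dict[str, float] = field(default_factory=dict)  # 0.0-1.0 per characteristic

    def gaps(self, threshold: float = 0.8) -> list[str]:
        """Characteristics scoring below an organization-chosen threshold."""
        return [c for c in CHARACTERISTICS if self.scores.get(c, 0.0) < threshold]

record = MeasurementRecord(
    system="resume-screening-model",
    measured_on=date(2023, 6, 1),
    scores={"validity": 0.91, "fairness": 0.62, "security": 0.88},
)
print(record.gaps())
# ['reliability', 'transparency', 'accountability', 'safety', 'fairness']
```

In this sketch, characteristics that were never measured count as gaps, mirroring the Measure Function's emphasis on regular, comprehensive testing rather than one-off checks.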
Filed under: AI Policy, Artificial Intelligence and Human Rights, Analysis

The Pitt News: Biden administration official talks AI accountability procedures at Pitt panel
Filed under: AI Policy, Artificial Intelligence and Human Rights, Government AI Use, News

Framing the Risk Management Framework: Actionable Instructions by NIST in the “Map” section of the RMF
The Map Function of the A.I. Risk Management Framework urges companies to document every step of the A.I. development lifecycle, from identifying use cases, benefits, and risks to building interdisciplinary teams and testing methods. However, it goes further: the A.I. Risk Management Framework also pushes companies to consider the broader contexts and impacts of their A.I. systems—and resolve conflicts that may arise between different documented methods, uses, and impacts. Notably, the Map Function recommends (1) pursuing non-A.I. and non-technological solutions when they are more trustworthy than an A.I. system would be and (2) decommissioning or stopping deployment of A.I. systems when they exceed an organization’s maximum risk tolerance. The Map Function also includes recommendations for instituting and clearly documenting procedures for engaging with internal and external stakeholders for feedback.
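As a toy sketch of the kind of documentation the Map Function calls for, including the risk-tolerance check that can trigger halting or decommissioning a system (the field names, 0.0-1.0 risk scale, and example values are assumptions for illustration, not NIST's):

```python
# Illustrative only: a minimal record of Map-style documentation for one system.
# Field names, the risk scale, and the example values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class SystemMapRecord:
    use_case: str
    expected_benefits: list[str]
    identified_risks: dict[str, float]        # risk name -> assessed severity (0.0-1.0)
    max_risk_tolerance: float                 # organization-set ceiling
    non_ai_alternative_preferred: bool = False
    stakeholder_feedback: list[str] = field(default_factory=list)

    def deployment_decision(self) -> str:
        if self.non_ai_alternative_preferred:
            return "pursue the non-A.I. solution"
        if any(sev > self.max_risk_tolerance for sev in self.identified_risks.values()):
            return "halt deployment or decommission"
        return "proceed, with continued monitoring"

record = SystemMapRecord(
    use_case="benefits eligibility screening",
    expected_benefits=["faster application processing"],
    identified_risks={"wrongful denial": 0.9, "privacy exposure": 0.4},
    max_risk_tolerance=0.6,
)
print(record.deployment_decision())  # "halt deployment or decommission"
```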
Filed under: AI Policy, Artificial Intelligence and Human Rights, Analysis

Comments Of The Electronic Privacy Information Center, Center For Digital Democracy, and Consumer Federation Of America, to the California Privacy Protection Agency
The Electronic Privacy Information Center, Center for Digital Democracy, and Consumer Federation of America submit these comments in response to the California Privacy Protection Agency's February 2023 invitation for public input on the agency's development of further regulations under the California Consumer Privacy Act of 2018, as amended by the California Privacy Rights Act of 2020.
Filed under: AI Policy, Artificial Intelligence and Human Rights, Cybersecurity, Data Security, Privacy Laws, U.S. State Privacy Laws

Framing the Risk Management Framework: Actionable Instructions by NIST in their “Manage” Section
The NIST A.I. Risk Management Framework offers voluntary guidance for organizations that develop and use A.I. systems. The core of the framework is a set of recommendations divided into four overarching functions: (1) Govern, which covers overarching policy decisions and organizational culture around A.I. development; (2) Map, which covers efforts to contextualize A.I. risks and potential benefits; (3) Measure, which covers efforts to assess and quantify A.I. risks; and (4) Manage, which covers the active steps an organization should take to mitigate risks and prioritize elements of trustworthy A.I. systems.
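For readers skimming, the four functions can be summarized in a simple lookup; the one-line scopes below paraphrase the descriptions above and are not NIST's official text:

```python
# Paraphrased summaries of the four AI RMF functions described in this post;
# the wording is this post's, not NIST's official text.
AI_RMF_FUNCTIONS = {
    "Govern":  "overarching policy decisions and organizational culture around A.I. development",
    "Map":     "efforts to contextualize A.I. risks and potential benefits",
    "Measure": "efforts to assess and quantify A.I. risks",
    "Manage":  "active steps to mitigate risks and prioritize trustworthy A.I.",
}

for function, scope in AI_RMF_FUNCTIONS.items():
    print(f"{function}: {scope}")
```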
Filed under: AI in the Criminal Justice System, AI Policy, Artificial Intelligence and Human Rights, Commercial AI Use, Government AI Use, Screening & Scoring, Analysis