Government AI Use
Key U.S. Enforcement Agencies Commit to Enforcement of Existing Laws on Entities Using AI
EPIC and ACLU Urge NIST to Advance Privacy in Digital Identity Guidelines
EPIC Recommends ACUS Consider Administrative Burdens Exacerbated by Scoring and Screening Tools, Recommends Transparency
EPIC and ACLU Comments on NIST’s 2023 Digital Identity Draft Guidelines
Comments of EPIC and the ACLU urging NIST to modify its draft digital identity guidelines to reduce the collection of biometrics and Social Security numbers, investigate W3C Verifiable Credentials, limit the use of fraud prevention tools, and address equity.
The Pitt News: Biden administration official talks AI accountability procedures at Pitt panel
The Markup: It Takes a Small Miracle to Learn Basic Facts About Government Algorithms
Framing the Risk Management Framework: Actionable Instructions by NIST in their “Manage” Section
The core of the framework is a set of recommendations divided into four overarching functions: (1) Govern, which covers overarching policy decisions and organizational culture around A.I. development; (2) Map, which covers efforts to contextualize A.I. risks and potential benefits; (3) Measure, which covers efforts to assess and quantify A.I. risks; and (4) Manage, which covers the active steps an organization should take to mitigate risks and prioritize elements of trustworthy A.I. systems.
Reason Magazine: Debate: Artificial Intelligence Should Be Regulated
President Biden Signs Executive Order Advancing Racial Equity and Imposing Equity Principles on Government A.I.
With its inclusion of digital civil rights and its efforts to prevent algorithmic discrimination, the executive order is a welcome shift toward a more equitable and responsible government approach to A.I.
Privacy, Surveillance, and AI in the FY’23 National Defense Authorization Act (NDAA)
Each year, Congress passes the National Defense Authorization Act (NDAA), which sets budgets and policies for the U.S. military and a host of other government entities. While at its core a national defense bill, the NDAA is sweeping in scale: this year's version provides $816.7 billion in funding to the Department of Defense. Given the sheer size of this allocation, the NDAA has impacts well beyond the military. This year, as in recent years, many provisions relate to privacy, surveillance, and AI. EPIC highlights those provisions here to help you understand where this money will be spent in the coming years.