Tag: Artificial Intelligence

  • New EPIC Report Sheds Light on Generative A.I. Harms 

    EPIC has just released a new report detailing the wide variety of harms posed by new generative A.I. tools like ChatGPT, Midjourney, and DALL-E. While many of these tools have been lauded for their ability to produce new and believable text, images, audio, and video, the rapid integration of generative A.I. technology into consumer-facing products has undermined years-long efforts to make A.I. development transparent and accountable.

    • AI Policy

    • Artificial Intelligence and Human Rights

    • Big Data

    • Commercial AI Use

    • Competition and Privacy

    • Consumer Privacy

    • Democracy & Free Speech

    • Online Harassment

    • Privacy & Racial Justice

    • Updates

  • Supreme Court Avoids Major Section 230 Ruling in Case About Digital Platforms Recommending Terrorist Content

    The Supreme Court has declined to address whether Section 230 of the Communications Decency Act, a law that encourages tech companies to moderate content on their platforms, immunizes companies like Google and Twitter from lawsuits alleging that their recommendation algorithms promoted terrorist activity. In a pair of decisions released today, Gonzalez v. Google and Twitter v. Taamneh, the Court sidestepped the broader Section 230 question.

    • Artificial Intelligence and Human Rights

    • Commercial AI Use

    • Democracy & Free Speech

    • Online Harassment

    • Updates

  • EU Parliament Approves AI Act, Urges Rejection of Transatlantic Data Framework

    The European Parliament held a series of meetings, adopting both the text of the AI Act and a resolution outlining risks and proposed changes to the currently proposed EU-U.S. Data Privacy Framework.

    • AI in the Criminal Justice System

    • AI Policy

    • Artificial Intelligence and Human Rights

    • Commercial AI Use

    • Consumer Privacy

    • Data Protection

    • Enforcement of Privacy Laws

    • International Privacy

    • International Privacy Laws

    • Privacy Laws

    • Screening & Scoring

    • Web Scraping

    • Updates

  • Key U.S. Enforcement Agencies Commit to Enforcement of Existing Laws on Entities Using AI

    • AI in the Criminal Justice System

    • AI Policy

    • Artificial Intelligence and Human Rights

    • Commercial AI Use

    • Government AI Use

    • Screening & Scoring

    • Updates

  • EPIC Recommends ACUS Consider Administrative Burdens Exacerbated By Scoring and Screening Tools, Recommend Transparency

    • AI in the Criminal Justice System

    • AI Policy

    • Artificial Intelligence and Human Rights

    • Government AI Use

    • Screening & Scoring

    • Updates

  • Framing the Risk Management Framework: Actionable Instructions by NIST in The “Measure” Section of the AI RMF

    The Measure Function of the A.I. Risk Management Framework urges companies to build and deploy carefully, centering human experience and a range of impact points, including environmental impacts and effects on civil liberties and rights. In particular, it calls for regular testing for validity, reliability, transparency, accountability, safety, security, and fairness.

    • AI Policy

    • Artificial Intelligence and Human Rights

    • Analysis

  • Framing the Risk Management Framework: Actionable Instructions by NIST in the “Map” section of the RMF

    The Map Function of the A.I. Risk Management Framework urges companies to document every step of the A.I. development lifecycle, from identifying use cases, benefits, and risks to building interdisciplinary teams and testing methods. However, it goes further: the A.I. Risk Management Framework also pushes companies to consider the broader contexts and impacts of their A.I. systems—and resolve conflicts that may arise between different documented methods, uses, and impacts. Notably, the Map Function recommends (1) pursuing non-A.I. and non-technological solutions when they are more trustworthy than an A.I. system would be and (2) decommissioning or stopping deployment of A.I. systems when they exceed an organization’s maximum risk tolerance. The Map Function also includes recommendations for instituting and clearly documenting procedures for engaging with internal and external stakeholders for feedback.

    • AI Policy

    • Artificial Intelligence and Human Rights

    • Analysis

  • Italy Bans ChatGPT and Begins Investigating Potential GDPR Violations by OpenAI

    Yesterday, the Italian Data Protection Authority (DPA) issued an order under the GDPR requiring OpenAI to immediately stop processing local user data, effectively blocking ChatGPT until OpenAI complies with European data protection laws. The DPA’s order comes at a time of increased scrutiny over ChatGPT and similar generative A.I. models.

    • Artificial Intelligence and Human Rights

    • Commercial AI Use

    • Consumer Privacy

    • Enforcement of Privacy Laws

    • International Privacy

    • Updates

  • Framing the Risk Management Framework: Actionable Instructions by NIST in their “Manage” Section

    The core of the A.I. Risk Management Framework is a set of recommendations divided into four overarching functions: (1) Govern, which covers overarching policy decisions and organizational culture around A.I. development; (2) Map, which covers efforts to contextualize A.I. risks and potential benefits; (3) Measure, which covers efforts to assess and quantify A.I. risks; and (4) Manage, which covers the active steps an organization should take to mitigate risks and prioritize elements of trustworthy A.I. systems.

    • AI in the Criminal Justice System

    • AI Policy

    • Artificial Intelligence and Human Rights

    • Commercial AI Use

    • Government AI Use

    • Screening & Scoring

    • Analysis

  • Framing the Risk Management Framework: Actionable Instructions by NIST in their “Govern” Section

    If you’ve had trouble following the different laws and frameworks proposed as ways to regulate A.I. systems, you’re not alone. From the White House’s Blueprint for an A.I. Bill of Rights and the National Institute of Standards and Technology’s (NIST’s) A.I. Risk Management Framework to the OECD’s Principles on Artificial Intelligence and state laws like New York’s Local Law 144, the last few years have seen numerous attempts to solidify a framework for regulating A.I. systems.

    • Analysis