Tag: Content Moderation

  • EPIC Joins More than 200 Organizations Urging Tech Platforms to Combat AI Election Disinformation

Today, EPIC and a coalition of over 200 civil society organizations sent a letter to leading platform companies, including Meta, X, and TikTok, urging them to strengthen their efforts to protect elections from AI disinformation.

    • Artificial Intelligence and Human Rights

    • Commercial AI Use

    • Democracy & Free Speech

    • Updates

  • Murthy v. Missouri and the Threat of Election Disinformation

    As the 2024 U.S. Presidential election heats up, we’re returning to the thorny problem of election disinformation through the lens of content moderation and the recent Supreme Court case, Murthy v. Missouri.

    • Anonymity

    • Artificial Intelligence and Human Rights

    • Commercial AI Use

    • Democracy & Free Speech

    • Online Harassment

    • Analysis

  • European Commission Publishes Interactive Summary of Platform Content Moderation Data

    The European Commission has published an interactive dashboard that summarizes data on content moderation decisions from multiple online platform providers.

    • Access to Information

    • Data Protection

    • Democracy & Free Speech

    • Enforcement of Privacy Laws

    • International Privacy

    • Online Harassment

    • Updates

  • Supreme Court Avoids Major Section 230 Ruling in Case About Digital Platforms Recommending Terrorist Content

The Supreme Court has declined to address whether Section 230 of the Communications Decency Act, a law that encourages tech companies to moderate content on their platforms, immunizes companies like Google and Twitter from lawsuits alleging that their recommendation algorithms promoted terrorist activity. In a pair of decisions released today, Gonzalez v. Google and Twitter v. Taamneh, the Court resolved the cases without reaching the Section 230 question.

    • Artificial Intelligence and Human Rights

    • Commercial AI Use

    • Democracy & Free Speech

    • Online Harassment

    • Updates