Online Harassment

  • Murthy v. Missouri and the Threat of Election Disinformation

    As the 2024 U.S. Presidential election heats up, we’re returning to the thorny problem of election disinformation through the lens of content moderation and the recent Supreme Court case, Murthy v. Missouri.

    • Anonymity

    • Artificial Intelligence and Human Rights

    • Commercial AI Use

    • Democracy & Free Speech

    • Online Harassment

    • Analysis

  • European Commission Publishes Interactive Summary of Platform Content Moderation Data

    The European Commission has published an interactive dashboard that summarizes data on content moderation decisions from multiple online platform providers.

    • Access to Information

    • Data Protection

    • Democracy & Free Speech

    • Enforcement of Privacy Laws

    • International Privacy

    • Online Harassment

    • Updates

  • New EPIC Report Sheds Light on Generative A.I. Harms

    EPIC has just released a new report detailing the wide variety of harms posed by new generative AI tools like ChatGPT, Midjourney, and DALL-E. While many of these tools have been lauded for their ability to produce new and believable text, images, audio, and video, the rapid integration of generative AI technology into consumer-facing products has undermined years-long efforts to make AI development transparent and accountable.

    • AI Policy

    • Artificial Intelligence and Human Rights

    • Big Data

    • Commercial AI Use

    • Competition and Privacy

    • Consumer Privacy

    • Democracy & Free Speech

    • Online Harassment

    • Privacy & Racial Justice

    • Updates

  • Supreme Court Avoids Major Section 230 Ruling in Case About Digital Platforms Recommending Terrorist Content

    The Supreme Court has declined to address whether Section 230 of the Communications Decency Act, a law that encourages tech companies to moderate content on their platforms, immunizes companies like Google and Twitter from lawsuits alleging that their recommendation algorithms promoted terrorist activity. The ruling came in a pair of decisions released today: Gonzalez v. Google and Twitter v. Taamneh.

    • Artificial Intelligence and Human Rights

    • Commercial AI Use

    • Democracy & Free Speech

    • Online Harassment

    • Updates

  • In Gonzalez v. Google, the Supreme Court Should Recognize That a Narrow Reading of Section 230 Will Help Achieve a Better Internet

    Section 230 of the Communications Decency Act was meant to encourage tech companies to monitor their services for harmful user content, but it has largely done the opposite. In Gonzalez v. Google, the Supreme Court should recognize that a narrower reading of Section 230 can help achieve a better, safer internet.

    • Commercial AI Use

    • Democracy & Free Speech

    • Online Harassment

    • Analysis

  • Europe’s Digital Services Package: What It Means for Online Services and Big Tech

    The EU recently passed comprehensive legislation on platform monitoring, digital free speech, and antitrust, largely directed at Big Tech. On July 5, 2022, the European Parliament adopted the Digital Services Package, which comprises the Digital Markets Act (“DMA”) and the Digital Services Act (“DSA”) and was first proposed by the European Commission in December 2020. The European Council of Ministers will sign the bills into law this September, and they will take effect in early 2024 (though Big Tech will have to comply within months of entry into force). The Digital Services Package is touted as a “global first,” promising to “safeguard[] freedom of expression and opportunities for digital businesses.” After years of growing tech reliance and tech consolidation, “Democracy is back.”

    • Access to Information

    • Data Security

    • International Privacy

    • International Privacy Laws

    • Online Harassment

    • Social Media Privacy

    • Analysis

  • EPIC to Congress: Reform Section 230

    • Online Harassment

    • Updates