Algorithmic Transparency: End Secret Profiling
As more decisions are automated and processed by algorithms, those processes become more opaque and less accountable. The public has a right to know the data processes that impact their lives so they can correct errors and contest decisions made by algorithms. Personal data collected from our social connections and online activities is used by governments and companies to make determinations about our ability to fly, to obtain a job or a security clearance, and even the severity of a criminal sentence. These opaque, automated decision-making processes bear risks of secret profiling and discrimination, and they undermine our privacy and freedom of association.
Without knowledge of the factors that provide the basis for decisions, it is impossible to know whether governments and companies engage in practices that are deceptive, discriminatory, or unethical. Algorithmic transparency, for example, plays a key role in resolving questions about Facebook's role in Russian interference in the 2016 Presidential Election. Algorithmic transparency is therefore crucial to defending human rights and democracy online.
- NYC Establishes Algorithm Accountability Task Force: New York City has passed the first bill to examine the discriminatory impacts of "automated decision systems." A task force will develop recommendations for how to make the city's algorithms fairer and more transparent. James Vacca, the bill's sponsor, said "If we're going to be governed by machines and algorithms and data, well, they better be transparent." EPIC supports algorithmic transparency and opposes systemic bias in "risk assessment" tools used in the criminal justice system. EPIC has filed Freedom of Information lawsuits to obtain information about "predictive policing" and "future crime prediction" algorithms. EPIC President Marc Rotenberg has called for laws that mandate algorithmic transparency and prohibit automated decision-making that results in discrimination. (Dec. 21, 2017)
- EPIC FOIA: Justice Department Admits Algorithmic Sentencing Report Doesn't Exist: The Justice Department, in response to an EPIC FOIA lawsuit, has admitted that the United States Sentencing Commission never produced an evaluation of "risk assessment" tools in criminal sentencing. In 2014, Attorney General Eric Holder expressed concern about bias in criminal sentencing "risk assessments" and called on the Sentencing Commission to study the problem and produce a report. But after EPIC requested that study and sued the DOJ to obtain it, the DOJ conceded that the report was never produced. EPIC did obtain emails confirming the existence of a 2014 DOJ report about "predictive policing" algorithms, but the agency also withheld that report. "Risk assessments" are secret techniques used to set bail, to determine criminal sentences, and even to make decisions about guilt or innocence. EPIC has pursued several FOIA cases to promote "algorithmic transparency," including cases on passenger risk assessment, "future crime" prediction, and proprietary forensic analysis. (Dec. 15, 2017)
- Support for Bills Establishing Oversight of AI Grows in Congress (Dec. 12, 2017)
- EPIC Urges Congress to Regulate AI Techniques, Promotes 'Algorithmic Transparency' (Dec. 12, 2017)
- EPIC Promotes 'Algorithmic Transparency,' Urges Congress to Regulate AI Techniques (Nov. 28, 2017)
- After Public Pressure, FEC To Begin Rulemaking On Online Ad Transparency (Nov. 16, 2017)
- EPIC, Coalition Oppose Government's 'Extreme Vetting' Proposal (Nov. 16, 2017)
- Consumer Bureau Proposes Policy Guidance for Data Aggregation Services (Nov. 16, 2017)
- Senators Urge FEC to Promote Transparency in Online Ads (Nov. 13, 2017)
- EPIC Promotes 'Algorithmic Transparency' for Political Ads (Nov. 3, 2017)
- EPIC FOIA: EPIC Uncovers Report on "Predictive Policing" but DOJ Blocks Release (Nov. 1, 2017)
- At OECD, EPIC Renews Call for Algorithmic Transparency (Oct. 27, 2017)
- Mattel Cancels "Aristotle," an Internet Device that Targeted Children (Oct. 5, 2017)
- NGOs to Meet with Privacy Commissioners at Public Voice Event in Hong Kong (Sep. 19, 2017)
- EPIC Urges Senate To Establish Data Protection Standards For Financial Technologies (Sep. 11, 2017)
- EPIC FOIA: EPIC Seeks Details of ICE, Palantir Deal (Aug. 15, 2017)
- Supreme Court Won't Review Ruling on Secretive Sentencing Algorithms (Jun. 26, 2017)
- Court Rules Secret Scoring of Teachers Unconstitutional (Jun. 13, 2017)
- EPIC to Congress: Data Protection Needed for Financial Technologies (Jun. 9, 2017)
- EPIC Asks FTC to Stop System for Secret Scoring of Young Athletes (May 17, 2017)
- In Merger Reviews, EPIC Advocates for Privacy, Algorithmic Transparency (May 9, 2017)
- European Parliament Adopts Resolution on Big Data (Mar. 24, 2017)
- EPIC Urges Senate Commerce Committee to Back Algorithmic Transparency, Safeguards for Internet of Things (Mar. 22, 2017)
- EPIC Sues Justice Department Over "Risk Assessment" Techniques (Mar. 7, 2017)
- Pew Research Center Releases Report on Algorithms (Feb. 8, 2017)
- Aspen Institute Report Explores Artificial Intelligence (Jan. 30, 2017)
- The Verge Features EPIC FOIA Docs on Secret Profiling System (Dec. 21, 2016)
- European Parliament Explores Algorithmic Transparency (Nov. 7, 2016)
- EPIC Urges Massachusetts High Court to Protect Email Privacy (Oct. 24, 2016)
- EPIC Promotes "Algorithmic Transparency" at Annual Meeting of Privacy Commissioners (Oct. 20, 2016)
- White House Releases Reports on Future of Artificial Intelligence (Oct. 13, 2016)
- Presidential Science Advisors Challenge Validity of Criminal Forensic Techniques (Sep. 8, 2016)
- Wisconsin Supreme Court Upholds Use of Sentencing Algorithms, But Recognizes Risks (Jul. 16, 2016)
- White House Report Points to Risks with Big Data (May 5, 2016)
- At UNESCO, EPIC's Rotenberg Argues for Algorithmic Transparency (Dec. 8, 2015)
- EPIC Pursues Public Release of Secret DNA Forensic Source Code (Oct. 14, 2015)
- EPIC Pursues Lawsuit about Secret Government Profiling Program (Aug. 11, 2015)
- Facebook Applies for Patent to Collect Users' Credit Scores (Aug. 5, 2015)
- EPIC Pursues Documents about Secret Government Profiling Program (Jul. 1, 2015)
- White House Report on "Big Data" Explores Price Discrimination, Opaque Decisionmaking (Feb. 5, 2015)
- Senators Challenge Verizon's Secret Mobile Tracking Program (Jan. 30, 2015)
- EPIC Urges House to Safeguard Consumer Privacy (Jan. 26, 2015)
EPIC and Algorithmic Transparency
- EPIC, Coalition Oppose Government’s ‘Extreme Vetting’ Proposal (Nov. 6, 2017)
- EPIC Promotes ‘Algorithmic Transparency’ for Political Ads (Nov. 3, 2017)
- At OECD, EPIC Renews Call for Algorithmic Transparency (Oct. 27, 2017)
- EPIC Asks FTC to Stop System for Secret Scoring of Young Athletes (May 17, 2017)
- In Merger Reviews, EPIC Advocates for Privacy, Algorithmic Transparency (May 9, 2017)
- EPIC Urges Senate Commerce Committee to Back Algorithmic Transparency, Safeguards for Internet of Things (Mar. 22, 2017)
- EPIC Promotes “Algorithmic Transparency” at Annual Meeting of Privacy Commissioners (Oct. 20, 2016)
- At UNESCO, EPIC’s Rotenberg Argues for Algorithmic Transparency (Dec. 8, 2015)
- At OECD Global Forum, EPIC Urges “Algorithmic Transparency” (Oct. 3, 2014)
AI Policy Frameworks
The speed of AI innovation and its impact on society prompt serious concern and a need for ethical review. There is currently no agreed-upon set of standards for ethical AI design and implementation. Researchers and technical experts have grappled with how to align AI research and development with fundamental human values and norms. In response, several organizations have begun to address the ethical issues in AI by creating AI principles and guidance documents. Below are existing frameworks that guide the development of safe AI.
Asilomar AI Principles
More than 100 AI researchers gathered in Asilomar, California to attend The Future of Life Institute’s “Beneficial AI 2017” conference. Through a multi-day survey and discussion process, attendees developed the Asilomar AI Principles, a list of 23 framework principles geared toward the safe and ethical development of AI. More than 1,200 AI/Robotics researchers and 2,541 others have signed onto the principles. Notable signers include Tesla co-founder Elon Musk, theoretical physicist Stephen Hawking, and EPIC Advisory Board member Ryan Calo. The principles are divided into three themes: (1) Research Issues, (2) Ethics and Values, and (3) Longer-term Issues. They address concerns including beneficial intelligence, safety, transparency, privacy, avoiding an AI arms race, and non-subversion by AI.
IEEE’s Guide to Ethically Aligned Design
In December 2016, the Institute of Electrical and Electronics Engineers (IEEE) and its Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems published a first draft framework document on how to achieve ethically designed AI systems. Titled “Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems,” the 136-page document encourages technologists to prioritize ethical considerations when creating autonomous and intelligent technologies. Broken down into eight sections, the document begins with a set of general principles and then moves on to specific issue areas, such as how to embed human values into AI systems, how to eliminate data asymmetry and grant individuals greater control over their personal data, and how to improve legal accountability for harms caused by AI systems. The general principles that apply to all types of AI/AS are: (1) embody the highest ideals of human rights; (2) prioritize the maximum benefit to humanity and the natural environment; and (3) mitigate risks and negative impacts as AI/AS evolve as socio-technical systems.
USACM’s Principles on Algorithmic Transparency and Accountability
In January 2017, the Association for Computing Machinery U.S. Public Policy Council (USACM) issued a statement and list of seven principles on algorithmic transparency and accountability. The USACM statement provides a context for what algorithms are, how they make decisions, and the technical challenges and opportunities to address potentially harmful bias in algorithmic systems. The USACM believes that this set of principles, consistent with the ACM Code of Ethics, should be implemented during every phase of development to mitigate potential harms. The seven principles are: (1) awareness, (2) access and redress, (3) accountability, (4) explanation, (5) data provenance, (6) auditability, and (7) validation and testing.
Japan’s AI Research & Development Guidelines (AI R&D Guidelines)
In April 2016 at the G7 ICT Ministers’ Meeting in Japan, Sanae Takaichi, Minister of Internal Affairs and Communications (MIC) of Japan, proposed to start international discussions toward establishing “AI R&D guidelines” as a non-regulatory and non-binding international framework for AI research and development. In March 2017, the MIC released a report summarizing the current progress of drafting AI R&D Guidelines for International Discussions, as well as a Draft AI R&D Guidelines with comments. One goal of the guidelines is to achieve a human-centered society in which people can live harmoniously with AI networks while human dignity and individual autonomy are respected. Modeled after the OECD privacy guidelines, the nine R&D principles found within the guidelines are: (1) collaboration, (2) transparency, (3) user assistance, (4) controllability, (5) security, (6) safety, (7) privacy, (8) ethics, and (9) accountability.
White House Report on the Future of Artificial Intelligence
In May 2016, the White House announced a series of workshops and a working group devoted to studying the benefits and risks of AI. The announcement recognized the "array of considerations" raised by AI, including those "in privacy, security, regulation, [and] law." The White House established a Subcommittee on Machine Learning and Artificial Intelligence within the National Science and Technology Council.
Over the next three months, the White House co-hosted a series of four workshops on AI:
- Legal and Governance Implications of Artificial Intelligence, May 24, 2016, in Seattle, WA
- Artificial Intelligence for Social Good, June 7, 2016, in Washington, DC
- Safety and Control for Artificial Intelligence, June 28, 2016, in Pittsburgh, PA
- The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term, July 7, 2016, in New York City
EPIC Advisory Board members Jack Balkin, danah boyd, Ryan Calo, Danielle Citron, Ed Felten, Ian Kerr, Helen Nissenbaum, Frank Pasquale, and Latanya Sweeney each participated in one or more of the workshops.
The White House Office of Science and Technology issued a Request for Information in June 2016 soliciting public input on the subject of AI. The RFI indicated that the White House was particularly interested in "the legal and governance implications of AI," "the safety and control issues for AI," and "the social and economic implications of AI," among other issues. The White House received 161 responses.
On October 12, 2016, The White House announced two reports on the impact of Artificial Intelligence on the US economy and related policy concerns: Preparing for the Future of Artificial Intelligence and National Artificial Intelligence Research and Development Strategic Plan.
Preparing for the Future of Artificial Intelligence surveys the current state of AI, its applications, and emerging challenges for society and public policy. As Deputy U.S. Chief Technology Officer and EPIC Advisory Board member Ed Felten writes for the White House blog, the report discusses "how to adapt regulations that affect AI technologies, such as automated vehicles, in a way that encourages innovation while protecting the public" and "how to ensure that AI applications are fair, safe, and governable." The report concludes that "practitioners must ensure that AI-enabled systems are governable; that they are open, transparent, and understandable; that they can work effectively with people; and that their operation will remain consistent with human values and aspirations."
The companion report, National Artificial Intelligence Research and Development Strategic Plan, proposes a strategic plan for Federally-funded research and development in AI. The plan identifies seven priorities for federally-funded AI research, including strategies to "understand and address the ethical, legal, and societal implications of AI" and "ensure the safety and security of AI systems."
The day after the reports were released, the White House held a Frontiers Conference co-hosted by Carnegie Mellon University and the University of Pittsburgh. Also in October, Wired magazine published an interview with President Obama and EPIC Advisory Board member Joi Ito.
EPIC has promoted algorithmic transparency for many years and has litigated several cases on the front lines of AI. EPIC's cases include:
- EPIC v. FAA, which EPIC filed against the Federal Aviation Administration for failing to establish privacy rules for commercial drones
- EPIC v. CBP, in which EPIC successfully sued U.S. Customs and Border Protection for documents relating to its use of secret, analytic tools to assign "risk assessments" to travelers
- EPIC v. DHS, to compel the Department of Homeland Security to produce documents related to a program that assesses "physiological and behavioral signals" to determine the probability that an individual might commit a crime.
- EPIC v. DOJ, to compel the Department of Justice to produce documents concerning the use of “evidence-based risk assessment tools,” algorithms that try to predict recidivism, in all stages of sentencing.
EPIC also has a strong interest in algorithmic transparency in criminal justice. Secrecy of the algorithms used to determine guilt or innocence undermines faith in the criminal justice system. In support of algorithmic transparency, EPIC submitted FOIA requests to six states to obtain the source code of "TrueAllele," a software product used in DNA forensic analysis. According to news reports, law enforcement officials use TrueAllele test results to establish guilt, but individuals accused of crimes are denied access to the source code that produces the results.
- EPIC: Algorithms in the Criminal Justice System
- USACM, the ACM U.S. Public Policy Council, "Algorithmic Transparency and Accountability" Discussion Panel Event (Sept. 14, 2017)
- CPDP 2017 Conference, "Algorithms: Too Intelligent to be Intelligible?" (Jan. 26, 2017)
- We Robot 2017
- We Robot 2016
- Marc Rotenberg, Algorithmic Transparency and Emerging Privacy Issues, UNESCO Presentation (Dec. 2, 2015)
- Ed Felten, CITP Web Privacy and Transparency Conference Panel 2 (Nov. 7, 2014)
- Alessandro Acquisti, Why Privacy Matters, TedGlobal 2013 Conference (June 2013)
- Alessandro Acquisti, Ralph Gross & Fred Stutzman, Faces of Facebook: Privacy in the Age of Augmented Reality, BlackHat USA 2011 Conference (Aug. 4, 2011)
- Steven Aftergood, "Secret Law and the Threat to Democratic Government," Testimony before the Subcommittee on the Constitution of the Committee on the Judiciary, U.S. Senate (Apr. 30, 2008)
News Articles & Blogposts
- Garry Kasparov, Pursuing Transparency and Accountability for Both Humans and Machines, Avast Blog (July 30, 2017)
- Lee Rainie & Janna Anderson, Code-Dependent: Pros and Cons of the Algorithm Age, Pew Research Center (Feb. 8, 2017)
- Kate Crawford & Ryan Calo, There is a blind spot in AI research, Nature: International Weekly Journal of Science (October 13, 2016)
- Rebecca MacKinnon, Where is Microsoft Bing’s Transparency Report?, The Guardian (Feb. 14, 2014)
- Bruce Schneier, Accountable Algorithms, Schneier on Security (Sep. 21, 2012)
- Ian Kerr, Privacy, Identity and Anonymity, Iankerr.ca (Sep. 1, 2011)
- Ed Felten, Algorithms can be more accountable than people, Freedom to Tinker (Mar. 19, 2014)
- Jeff Jonas, Using Transparency as a Mask, JeffJonas.typepad.com (Aug. 4, 2010)
- Tim Wu, TNR Debate: Too Much Transparency?, New Republic (Oct. 11, 2009)
- Ross Anderson, The Collection, Linking and Use of Data in Biomedical Research and Health Care: Ethical Issues, Nuffield Council on Bioethics (Feb. 2015)
- Urs Gasser et al., eds., Internet Monitor 2014: Reflections on the Digital World, Berkman Center for Internet and Society (Dec. 2014)
- Jeff Jonas & Ann Cavoukian, Privacy by Design in the Age of Big Data (Jun. 8, 2012)
- Jack Balkin, The Three Laws of Robotics in the Age of Big Data, 78 Ohio State Law Journal (2017), Forthcoming
- Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 Washington Law Review 1 (2014)
- Cynthia Dwork & Aaron Roth, The Algorithmic Foundations of Differential Privacy, Foundations and Trends in Theoretical Computer Science Vol. 9, Nos. 3-4, 211 (2014)
- Ian Kerr & Jessica Earle, Prediction, Presumption, Preemption: The Path of Law After the Computational Turn, 66 Stanford Law Review 65 (2013)
- Julie E. Cohen, Power/play: Discussion of Configuring the Networked Self, 6 Jerusalem Rev. Legal Stud. 137-149 (2012)
- Frank Pasquale, Restoring Transparency to Automated Authority, 9 Journal on Telecommunications & High Technology Law 235 (2011)
- Grayson Barber, How Transparency Protects Privacy in Government Records (May 23, 2011) (with Frank L. Corrado)
- David J. Farber & Gerald R. Faulhaber, The Open Internet: A Consumer-Centric Framework, International Journal of Communication (2010)
- Ed Felten, David G. Robinson, Harlan Yu & William P Zeller, Government Data and the Invisible Hand, 11 Yale Journal of Law & Technology 160 (2009)
- Julie E. Cohen, Privacy, Visibility, Transparency, and Exposure 75 University of Chicago Law Review 181 (2008)
- Frank Pasquale, Internet Nondiscrimination Principles: Commercial Ethics for Carriers and Search Engines, 2008 University of Chicago Legal Forum 263 (2008)
- Alessandro Acquisti, Price Discrimination, Privacy Technologies, and User Acceptance (2006)
- Urs Gasser, Regulating Search Engines: Taking Stock and Looking Ahead, 9 Yale Journal of Law & Technology 124 (2006)
- Latanya Sweeney, Privacy Enhanced Linking, ACM SIGKDD Explorations 7(2) (Dec. 2005)
- Phil Agre, Your Face Is Not A Bar Code: Arguments Against Automatic Face Recognition in Public Places (Sept. 7, 2001)
- A. Michael Froomkin, The Death of Privacy, 52 Stanford Law Review 1461 (2000)
- Ryan Calo, A. Michael Froomkin, & Ian Kerr, Robot Law (Edward Elgar 2016)
- Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (2015)
- Colin Bennett, Transparent Lives: Surveillance in Canada (2014)
- danah boyd, Networked Privacy (2012)
- Julie E. Cohen, Configuring the Networked Self: Law, Code, and the Play of Everyday Practice (2012)
- James Bamford, The Shadow Factory: The NSA from 9/11 to the Eavesdropping on America (2009)
- David Burnham, The Rise of the Computer State (1983)