April 2014 Archives

April 29, 2014

Argument Recap: Justices Look to Limit Warrantless Cell Phone Searches

Alan Butler

Today the U.S. Supreme Court heard oral argument in Riley v. California and United States v. Wurie, two cases involving the warrantless search of an individual's cell phone incident to arrest. These cases present an important and fundamental Fourth Amendment question: whether the police can search the entire contents of an individual's cell phone incident to any lawful arrest. As others have noted today, the Justices seemed to recognize that cell phones and other digital devices create a "new world" that justifies a modified search incident to arrest rule. But the Justices struggled throughout the arguments in both cases to identify a workable rule.

One important practical insight from Orin Kerr is that, given the short time frame for a decision (the cases will be decided by mid-June), it is possible the Justices will seek a unified majority author for both the Riley and Wurie opinions. Given that consideration, and the facts and arguments in Wurie, it is possible that an unexpected "middle ground" compromise will emerge, focused on the plain view doctrine. But regardless of the particular majority approach, it seems very unlikely that the Justices will endorse the broad categorical rule that all individuals' cell phones are subject to limitless search incident to arrest. And if the Court can't agree on a compromise solution, Justice Kagan might have enough votes for a categorical ban on warrantless cell phone searches.

It was during the second half of arguments in Riley, while questioning Edward Dumont (Attorney for California), that Justice Breyer first laid out the potential outcomes for both cases:

JUSTICE BREYER: So there are three possibilities: Possibility one, smartphone, no, get a warrant, unless exigent circumstances. Possibility two, yes, it's just like a piece of paper that you find in his pocket. Or possibility three, sometimes yes, sometimes no. All right, which of those three is yours?

And of these three options, the third "middle ground" was the focus of most of the Justices' questions. Over the course of argument in both cases, three distinct "middle ground" proposals emerged: (1) search only "pre-digital" types of information, (2) search only for evidence of the crime of arrest, and (3) conduct a manual search under immediate exigency or plain view. Of those three options, the "plain view" proposal seems most likely to unite a coalition of the Justices in both Riley and Wurie.

The First proposal, described by Mr. Dumont in response to Justice Breyer's question above, would allow officers to search for the types of information that were traditionally available to police during an arrest, as opposed to some as-yet-undefined "special" digital data that could be subject to greater protection.

MR. DUMONT: Right. And my in-between rule with the explanation is that for information that is of the same sort that police have always been able to seize from the person, that includes diaries, letters, all other kinds of evidence, purely evidentiary, photographs, address books, for evidence of that same sort, the same rule should apply.

But, as Justices Kagan and Breyer quickly pointed out, that exception would swallow the rule. There are "very, very few things that you cannot find an analog to in pre-digital age searches."

The Second proposal, discussed by Justice Scalia and Deputy Solicitor General Michael Dreeben, was that officers could search the cell phone "in order to find evidence of the crime of arrest." This rule, adopted by the Court in the context of automobile searches incident to arrest in Arizona v. Gant, 556 U.S. 332 (2009), would limit the scope of the evidence obtained by law enforcement during the search of an arrestee's cell phone. But Justice Kagan pushed back that this would hardly be a "limiting" principle because cell phones contain such a wealth of private data.

JUSTICE KAGAN: Can I ask you a question about that, Mr. Dreeben, because given the variety of things that these cell phones have in them, it seems as though that's -- you know, it sounds good as a limiting principle, but it ends up you can imagine in every case that the police could really look at everything.

The Third proposal, presented by Mr. Dreeben during the Wurie argument, would limit officers to "manual searches of the information that's available to the user of the phone." This proposal is also directly related to a "plain view" rule described by Justice Sotomayor at the end of Mr. Dreeben's time:

JUSTICE SOTOMAYOR: How about a plain view analysis? Turn on the phone, see if there's been a telephone call within a reasonable amount of time of the arrest or -- or any message that was sent at the time of arrest. That's sort of a plain view situation. It would take care of your person with the picture of him or herself with guns. It would take care of the call to the confederate. It would take care of the -- of the imminent destruction of the phone.

This thread was then picked up by Chief Justice Roberts during his exchange with Judith Mizner (Counsel for Defendant Wurie).

CHIEF JUSTICE ROBERTS: We've -- we've kind of gotten far afield, which I'm sure is not -- may not be fair to Mr. Fisher or Mr. Dumont, we're talking about their case, but in your case why isn't the information in plain view? It says, "my house, my home." They look at it, that's what they see. They don't have to open anything.

MS. MIZNER: They saw the words "my house." They did have to open the phone and access the log to --

CHIEF JUSTICE ROBERTS: Sure. But I'm saying do you have -- you have no objection to the "my house"?

MS. MIZNER: The "my house" words were in plain view.  And under this Court's doctrine, that's not --

CHIEF JUSTICE ROBERTS: I assume that that's -- it says "my house" because he's done something with the particular number. If he didn't, it would be the number itself that would show up, right?

MS. MIZNER: Yes. And that's part --

CHIEF JUSTICE ROBERTS: And so would that also be in plain view?

MS. MIZNER: The number was not in plain view.

CHIEF JUSTICE ROBERTS: No, no. But I mean, in a -- in a case in which the user had not coded the particular number, the number would show up, I think, right?

MS. MIZNER: Yes. And --

CHIEF JUSTICE ROBERTS: And that would be --

MS. MIZNER: And the number would be in plain view.

CHIEF JUSTICE ROBERTS: Okay.

Under this plain view and exigency-based rule, officers would be allowed to manually inspect cell phones for recent calls, contacts, or messages, but could not gather evidence generally or download any of the phone data without a warrant. That rule would provide a basis for the Court to distinguish between the facts in Riley and Wurie. The photos, videos, and contact lists at issue in Riley were uncovered after the officers "looked through some stuff" at the station, whereas the phone number in Wurie was revealed by inspecting the contact "my house" that called the phone while it was in the police station.

Interestingly, the Justices did not appear to seriously consider Mr. Dreeben's alternative proposal: that officers could search data stored on the cell phone, but not data stored in the "cloud." EPIC's amicus brief focused on this issue in particular, and argued that it has become increasingly difficult to distinguish between data stored on the phone and data accessed remotely via the cloud. Justice Kagan keyed in on the same points in her questioning of Mr. Dreeben during the Wurie argument:

JUSTICE KAGAN: But I thought the whole idea of smartphones, Mr. Dreeben, and increasingly so, was that even the user doesn't know what's on the cloud or not.

For those who are unfamiliar with these cases, you can also find a description of the cases with an index of the briefs and links to relevant news stories on EPIC's webpage here.

 

April 17, 2014

White Hat, Black Hat, Bleeding Heart

Julia Horwitz

Let's start with the Heartbleed bug.

Since the announcement of Heartbleed last week, everyone has been paying attention to security vulnerabilities - a typically niche technical subject. Most Internet users are, rightfully, concerned. What can they do to protect themselves in the short term? What can Internet providers and government agencies do to help protect them in the long run? In a series of posts, I will identify and discuss the technology and policy issues involved in this important question: how can we keep the Internet secure and protect user privacy?

Last week, we found out that there is a vulnerability in the encryption code that enables about 70% of the Internet's secure connections. This story gained some traction in popular news reporting, but there wasn't much to tell without delving into a decade-long series of legal and technical conversations among lawyers, policymakers, technologists, cryptographers, engineers, and politicians. In a brief interview for Reuters, I was asked to advise consumers on how best to protect themselves from loopholes in crypto. But that's an impossible question to answer right now. Not only because there is almost nothing that individuals can do to guard against OpenSSL vulnerabilities (although that is true), but also because I could not propose a solution to a problem that no one has diagnosed.

The Heartbleed bug is a flaw in OpenSSL encryption that allows hackers to steal data silently and without a trace. This is obviously a problem unto itself, and it was diagnosed brilliantly by Antti Karjalainen, Riku Hietamäki, and Matti Kamunen of Codenomicon, as well as Neel Mehta of Google. But it is also a symptom of a much larger problem: a failure of both private sector companies and government agencies to protect some of our most important critical infrastructure - core Internet security protocols. This is a complex issue that relates to recent debates over cyber warfare and the role of the U.S. defense agencies in information assurance, national security, the market for security vulnerabilities, and encryption standards.
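For readers who want a concrete picture of the flaw, here is a minimal Python sketch of the class of bug involved. It is a simulation, not OpenSSL's actual C code: the TLS heartbeat handler copied back as many bytes as the request claimed to contain, without checking that claim against the payload it actually received. The data and function names below are illustrative assumptions.

```python
# Simplified simulation of the Heartbleed flaw (CVE-2014-0160). All names and
# data here are illustrative; OpenSSL's real heartbeat handler is written in C.

SERVER_MEMORY = b"...private keys, session cookies, other secrets..."

def heartbeat_response(claimed_length: int, payload: bytes) -> bytes:
    # Buggy behavior: trust the length the client *claimed* instead of the
    # length of the payload actually received. Reading past the payload
    # returns whatever happens to sit next to it in memory.
    buffer = payload + SERVER_MEMORY  # stand-in for adjacent process memory
    return buffer[:claimed_length]

def heartbeat_response_patched(claimed_length: int, payload: bytes) -> bytes:
    # The fix: discard requests whose claimed length exceeds the real payload.
    if claimed_length > len(payload):
        return b""
    return payload[:claimed_length]

if __name__ == "__main__":
    print(heartbeat_response(3, b"hat"))          # b'hat' (honest request)
    print(heartbeat_response(40, b"hat"))         # b'hat' plus 37 leaked bytes
    print(heartbeat_response_patched(40, b"hat")) # b'' (request rejected)
```

The point of the sketch is only that a single missing length check can quietly expose adjacent memory, which is why the bug could be exploited silently and without a trace.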

Recent debates about "cybersecurity" circle endlessly around these themes. Who is responsible for protecting Internet security? Can it be the NSA, an agency that notoriously devotes copious resources to cracking code and breaking crypto? Can the U.S. regulate so-called "bug bounties," in which the government pays independent coders to locate zero-day vulnerabilities? Is the private sector obligated to inform the government if "zero-day" security vulnerabilities are found? And, if so, which agency is responsible for informing the public - the NSA, tasked with "information assurance," or DHS, tasked with protecting "critical infrastructure"? The brightest minds have so far been talking around and past each other in an effort to unify all these conversations into the legal and technical panacea that would prevent future Heartbleeds. The questions are too weighty to tackle all at once, but too interconnected to answer individually.

My goal in this series of blog posts is to pull apart the threads of these interconnected conversations. I would like to examine each issue in turn, in the hopes that by looking at each element of the precipitate, we may find the key to the solution. In my next post, I plan to discuss "critical infrastructure:" what we mean when we say it, who is tasked with protecting it, whether (and if so, how) it includes the Internet, and whether in the context of critical infrastructure, "the Internet" includes the protection of cryptographic protocols.

Stay tuned.

April 11, 2014

There Are No OLC Opinions About PRISM or 215, So Who Decided It Was Legal?

Alan Butler

In light of the President's recent announcement that the NSA's bulk collection of telephone metadata will end, there is a renewed interest in Congress to revise U.S. surveillance laws. At the same time, the Privacy and Civil Liberties Oversight Board is conducting its review of the bulk collection of international communications under the Section 702 / PRISM program. While these oversight and reform efforts are underway, it is important to consider the policy-making process that authorized these programs in the first place.

Two Freedom of Information Act cases, one brought by EPIC following the disclosures last summer and another brought by the ACLU several years before, attempt to get to the heart of this question. Both cases lead to the same shocking conclusion - that the Department of Justice Office of Legal Counsel, which played a central role in the initial decision to implement the warrantless wiretapping program, was not involved in the decision to transition those surveillance programs to new FISA authorities.

This revelation that OLC issued no final legal interpretation of either FISA program is surprising because the OLC played a key role in the initial decision by the President to conduct warrantless surveillance in 2001. The Office of Legal Counsel "provides authoritative legal advice to the President and all the Executive Branch agencies" including "offices within the Department." In that capacity, OLC attorneys advised the President and other agencies on legal issues related to the post-2001 surveillance programs. OLC subsequently conducted an in-depth analysis of the constitutional and legal issues raised by the warrantless wiretapping program (the precursor to both PRISM and the Metadata program). But when it came time to transition those programs to a new legal authority, the OLC did not issue any final legal memoranda or opinions regarding the new interpretation of the relevant FISA provisions.

In EPIC v. DOJ, we filed a FOIA suit to obtain from the Office of Legal Counsel:

All final legal analysis, memoranda, and opinions regarding the PRISM program, including, but not limited to, records addressing the Foreign Intelligence Surveillance Act, 50 U.S.C. §§ 1801 et seq., and the Fourth Amendment to the U.S. Constitution

We recently closed that case after the agency provided a letter concluding that there were "no records responsive" to our request, confirming that the OLC never conducted a legal analysis of the PRISM program after it was implemented pursuant to the FISA Amendments Act of 2008.

The ACLU filed a similar, but broader, request for "any and all records concerning the government's interpretation or use of Section 215" to the Department of Justice OLC, NSD, OPA, OIP, and the FBI. In response to ACLU's suit, the OLC identified two responsive documents, but neither of them included a substantive legal analysis of Section 215. The OLC officials had searched specifically for any classified legal opinions about Section 215 and concluded that there were none. This confirms that, like the PRISM program, the legality of the Metadata program was never analyzed by the OLC.

So even though OLC confirmed in EPIC's original warrantless wiretapping FOIA case (at ¶59) that it took part in the "interagency discussions" related to the transition of the warrantless wiretapping program to a series of FISC-authorized programs, it issued no final legal guidance about Section 215. One likely explanation for this lack of final OLC guidance is that the National Security Division attorneys responsible for FISA applications were confident in their interpretation of the statute. But that interpretation (of the 215 Order specifically) has been widely criticized by members of Congress (including the original author of the Patriot Act), the PCLOB, and groups like EPIC. The President has decided to end the program and is no longer advocating for that broad interpretation of the law. If the attorneys at NSD felt that their interpretation of 215 as authorizing bulk collection was obvious, the recent controversy has proven them wrong.

With regard to the PRISM program, the lack of OLC consultation is understandable but still troubling. The previous OLC memo addressing the warrantless wiretapping program, authored by Jack Goldsmith in May of 2004, provides at least some Fourth Amendment analysis (though it is limited to 6 pages and remains heavily redacted). But this analysis does not address the new issue raised by the FISA Amendments Act itself: what limitations are necessary to ensure that "acquisition" authorized by Section 702 is "conducted in a manner consistent with the Fourth Amendment" (see 50 U.S.C. § 1881a(b)(5)).

This is a significant question of Fourth Amendment law, and one that the Supreme Court recently declined to consider in Clapper v. Amnesty International. The answer hinges on the exact scope of the protections implemented by the NSA in its PRISM systems. But it also depends on the scope of Fourth Amendment protections as applied to international and one-end-domestic communications. Current cases do not provide a clear answer to this question, and that makes it even more important to know who decides what rule will apply. The FISC certainly plays a role, but without adversarial briefing the Government's own internal policy-making process becomes even more significant.

The fact that OLC, the authoritative legal advisor to the President and executive agencies, did not weigh in on this important issue is vexing. It raises serious questions of institutional legitimacy, including whether Justice Department attorneys properly considered other constitutional issues, such as the avoidance doctrine.

April 7, 2014

The FBI is "Working" on an Updated Privacy Statement for Facial Recognition

Jeramie Scott

Facial recognition technology presents a serious risk to privacy and civil liberties because it can so easily be deployed covertly, from a distance, and on a mass scale. There are few, if any, precautions that can be taken to prevent collection of one's image. Participation in society inevitably involves exposing one's face, whether it's on the public streets or through social media. Ubiquitous and near-effortless identification eliminates an individual's ability to control their identity and poses special risks to the First Amendment rights of free association and free expression, particularly for those who engage in lawful protests. The FBI's ever-expanding use of facial recognition technology could render anonymous free speech virtually impossible.

For at least 10 years, the FBI has been testing and using facial recognition. This is evidenced by a February 19, 2004 Privacy Impact Assessment ("PIA") conducted by the FBI for the "Computer Aided Facial Recognition Project." The project sought to assist the University of Sheffield in its testing of a particular method of facial recognition. The PIA makes clear that the FBI wanted "to develop a semi-automated tool enabling FBI examiners to extract facial landmark measurements from question images (such as, bank Surveillance photos) and conduct one-on-one comparisons with known images of a suspect in custody."
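To illustrate the general approach the PIA describes (extracting facial landmark measurements from a questioned image and comparing them one-to-one against a known image), here is a minimal Python sketch. The landmark names, measurements, and matching threshold below are invented for illustration; they are assumptions, not details of the FBI's actual tool.

```python
import math

# Toy one-to-one comparison of facial landmark measurements, in the spirit of
# the semi-automated tool the 2004 PIA describes. Everything here (landmark
# names, values, threshold) is an illustrative assumption, not the FBI's system.

QUESTIONED_IMAGE = {"eye_distance": 62.0, "nose_to_chin": 71.5, "mouth_width": 50.2}
KNOWN_SUSPECT    = {"eye_distance": 61.4, "nose_to_chin": 72.1, "mouth_width": 49.8}

def landmark_distance(a: dict, b: dict) -> float:
    # Euclidean distance between two vectors of landmark measurements.
    return math.sqrt(sum((a[key] - b[key]) ** 2 for key in a))

def one_to_one_match(questioned: dict, known: dict, threshold: float = 2.0) -> bool:
    # Declare a match only if the measurements are sufficiently close; where
    # the threshold is set determines how many errors the examiner accepts.
    return landmark_distance(questioned, known) <= threshold

print(one_to_one_match(QUESTIONED_IMAGE, KNOWN_SUSPECT))  # True with this toy threshold
```

The threshold choice in a system like this is exactly where accuracy trade-offs enter, which is relevant to the error-rate discussion later in this post.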

More recently, the FBI has been working on incorporating facial recognition technology into its Next Generation Identification ("NGI") program. Through the NGI program, the FBI is developing a massive biometric identification database that, when completed, will be one of the world's largest. The vast majority of records contained in the NGI database will be of U.S. citizens, and millions of those records will be of individuals who are neither criminals nor suspects. The NGI database will include fingerprints, iris scans, DNA profiles, voice identification profiles, palm prints, and facial images for the purpose of facial recognition.

The FBI deployed a facial recognition pilot as part of the NGI program in February 2012. The addition of facial recognition to NGI is set to be fully operational by the summer of 2014. The NGI program will allow image-based facial recognition searches of the FBI's national repository of criminal mugshots.

The use of facial recognition by the FBI does not stop with comparing suspects against criminal mugshots. The FBI has Memoranda of Understanding (MOUs) with a number of state DMVs that allow facial recognition searches of the DMVs' photo databases. The DMV searches amount to a massive virtual line-up of millions of innocent Americans. This is particularly alarming given the FBI's willingness to accept a 20% error rate for facial recognition matches.

The FBI also wants to keep expanding the use cases for facial recognition. A 2010 FBI slide deck cites tracking subjects, identifying subjects in public datasets, and identifying subjects from images in seized systems as potential use cases.

Despite its focus on facial recognition technology, the FBI has failed to fully address the privacy implications of this technology. The FBI did conduct a "Privacy Impact Assessment (PIA) for the Next Generation Identification (NGI) Interstate Photo System (IPS)" back in 2008, but that document addresses only a narrow set of the issues raised by the use of facial recognition technology. The 2008 PIA is so lacking in its treatment of facial recognition technology that the FBI committed to updating it in its statement for the record at a Senate Subcommittee hearing in July 2012 on "What Facial Recognition Technology Means for Privacy and Civil Liberties."

Senator Franken, Chairman of the Subcommittee on Privacy, Technology and the Law, held the hearing to raise awareness about facial recognition, its current uses, and its potential to threaten our privacy and civil liberties. Senator Franken, in his opening statement, challenged the FBI to be a leader in addressing the privacy and civil liberty implications, stating, "I have called the FBI . . . here today to challenge them to use their position as leaders in their fields to set an example for others--before this technology is used pervasively." The FBI seemingly agreed to do just that.

In a statement for the record dated July 18, 2012, Jerome M. Pender, Deputy Assistant Director of the FBI's Criminal Justice Information Services Division, said that "the 2008 Interstate Photo System PIA is currently in the process of being renewed by way of a Privacy Threshold Analysis (PTA), with an emphasis on Facial Recognition." The purpose of the update was to "address all evolutionary changes since the preparation of the 2008 IPS PIA." Over a year and a half has passed, and no updated PTA or PIA has been completed yet.

EPIC filed a Freedom of Information Act (FOIA) request on February 28, 2014 for the updated facial recognition PTA and PIA. The FBI acknowledged EPIC's FOIA request on March 11, 2014. On March 19, 2014 the FBI informed EPIC that it could not fulfill the request for the updated PTA or PIA for facial recognition technology because "both documents are currently being drafted." As the FBI moves forward with facial recognition technology, it appears to be dragging its feet with respect to addressing the privacy implications of the technology.

The FBI has a habit of saying it will do a PIA, or even starting one, and then failing to follow through. I detailed in an earlier blog post how FOIA documents received by EPIC show that the FBI began drafting a PIA regarding its use of License Plate Readers back in early 2012, yet no PIA for LPRs is publicly available. Don't hold your breath for the FBI to finish a new PIA addressing facial recognition any time soon.
