NYPD Publishes First Facial Recognition Policy Amid Clearview AI Controversy


The NYPD has used some form of facial recognition technology for almost a decade, but opposition has grown in the last few years, intensifying after a report linked members of the NYPD to Clearview AI, a controversial facial recognition startup.

In a January 18 article, The New York Times introduced Clearview AI to the world, along with its unprecedented database of more than 3 billion images scraped from social media sites. Five days later, Clearview AI told BuzzFeed News it had worked with the NYPD to help crack an alleged terrorism case in New York City.

The NYPD denied working with Clearview AI, but a leaked client list obtained by BuzzFeed News in February told a different story. According to the list, the NYPD had utilized Clearview AI technology, as had the Justice Department, Immigration and Customs Enforcement, the U.S. Attorney's Office for the Southern District of New York, the FBI, Customs and Border Protection, Interpol, and hundreds of other local police departments. In fact, the documents reveal that more than 30 NYPD officers have accounts and had run over 11,000 searches, the most of any entity on the list.

At the time, an NYPD spokesperson told BuzzFeed News that while it does not have any contract or agreement with Clearview, its “established practices did not authorize the use of services such as Clearview AI nor did they specifically prohibit it.”

A month later, the NYPD has now released its first-ever “Facial Recognition Policy,” describing the scope of the technology, how and where it is used, and the procedures members of the service must follow. The directive prohibits searches against external databases of images unless approved “in a specific case for an articulable reason” by the Chief of Detectives or the Deputy Commissioner of Intelligence and Counterterrorism.

“Facial recognition is…used by the Department exclusively to compare images obtained during criminal investigations with lawfully possessed arrest photos,” reads the NYPD’s statement. “When used in combination with human analysis and additional investigative steps, facial recognition technology is an important tool in solving crime, increasing public safety, and bringing justice for victims. The NYPD has never arrested anyone on the basis of a facial recognition match alone—it is merely a lead in the investigative process.”

Specifically, the announcement names six scenarios as authorized uses for facial recognition technology by the NYPD:

  1. Identify an individual when there is a basis to believe that such individual has committed, is committing, or is about to commit a crime
  2. Identify an individual when there is a basis to believe that such individual is a missing person, crime victim, or witness to criminal activity
  3. Identify a deceased person
  4. Identify a person who is incapacitated or otherwise unable to identify themselves
  5. Identify an individual who is under arrest and does not possess valid identification, is not forthcoming with valid identification, or who appears to be using someone else's identification, or a false identification
  6. Mitigate an imminent threat to health or public safety (e.g., to thwart an active terrorism scheme or plot)

The NYPD isn’t the only department forgoing the use of Clearview AI’s database. In January, Gurbir Grewal, New Jersey’s attorney general, prohibited police departments in all 21 of the state’s counties from using the Clearview AI app. And police departments aren’t the only ones pushing back: in the wake of the reports by The New York Times and BuzzFeed News, Twitter, YouTube and LinkedIn sent cease-and-desist letters to the company, and Facebook demanded it stop accessing or using information from both Facebook and Instagram.

Beyond the controversial database and use of Clearview AI, facial recognition opponents also point to the technology’s well-documented inaccuracies, especially how it disproportionately affects Black and Latino people.

“After using facial recognition for a decade without any regulations, the NYPD’s policy is too little, too late,” said Albert Cahn, Executive Director of the Surveillance Technology Oversight Project (STOP) at the Urban Justice Center, in a blog post. “This policy places no restrictions on some of the NYPD’s most problematic uses of facial recognition, such as reliance on software that misidentifies Black and Latin/X New Yorkers more often. At a moment when cities around the country are banning facial recognition, simply writing down the status quo is not enough. We need limits on NYPD surveillance that will stop discrimination against communities of color and block wrongful convictions.”

STOP is a lead proponent of the Public Oversight of Surveillance Technology (POST) Act, a city council bill that, if approved, would require privacy protections for all NYPD surveillance programs and databases.