Rights Groups Want Facial Recognition Process to Be Discoverable in a New Jersey Case

Hat tip to the “Data Diva” Debbie Reynolds for this story! Two digital rights groups and a criminal defense lawyers’ association are arguing in a New Jersey court that discovery in a criminal case should include the defense’s ability to examine the facial recognition systems and policies used to identify a defendant.

As reported by BiometricUpdate.com here, in the case New Jersey v. Arteaga, Francisco Arteaga is accused of a November 2019 armed robbery in that state, although he maintains he was dozens of miles away from the crime scene when it occurred.

According to the Electronic Privacy Information Center (EPIC), before Arteaga was identified, New Jersey police found that witnesses at the scene could not describe the perpetrator, and a facial recognition search run in-state by New Jersey’s Regional Operations Intelligence Center turned up no results. Police then went out of state to the New York Police Department, which ran a facial recognition search using still images cropped from security cameras on the street. The search returned Arteaga as one of the results, and the NYPD’s facial recognition analyst reported him to the New Jersey police as a “potential match” for the security camera footage. Police then placed Arteaga’s picture in a photo lineup, where he was eventually identified by two witnesses.

Arteaga requested detailed discovery on the facial recognition systems used by the NYPD to identify him, the original photo and any edits performed by the NYPD before a search was run, and information on the analyst who performed the search that identified him. The New Jersey district court denied his motion to compel discovery.

EPIC, along with partners the Electronic Frontier Foundation (EFF) and the National Association of Criminal Defense Lawyers (NACDL), filed an amicus brief pointing to the documented shortcomings and limitations of facial recognition AI algorithms as a prime reason why defendants should have the same discovery rights for digital evidence that they are afforded for conventional, physical evidence. In the brief, they laid out the series of steps required to conduct a facial recognition search, all of which involve human decisions that can introduce variability in accuracy and increase the chance of a misidentification. The brief also highlighted known cases of misidentification and argued that discovery is the only way for defendants to understand the evidence presented against them.
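The brief’s point about human decision points is easier to see with a concrete picture of how such a search typically works. Below is a minimal, hypothetical sketch of a generic face-matching pipeline in Python; it is not the NYPD’s system, and the embedding stand-in, similarity threshold, and candidate-list size are assumptions made purely for illustration.

```python
# A minimal, hypothetical sketch of a generic face-matching search,
# illustrating the human decision points the amicus brief describes.
# This is NOT the NYPD's system: the embedding stand-in, similarity
# threshold, and candidate-list size are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def embed(face_image: np.ndarray) -> np.ndarray:
    """Stand-in for a face-embedding model (real systems map a cropped,
    aligned face to a feature vector using a trained neural network)."""
    flat = face_image.flatten().astype(float)
    return flat / np.linalg.norm(flat)

def search(probe, gallery, threshold=0.6, top_k=5):
    """Rank gallery identities by cosine similarity to the probe photo.
    Human decision points: how the probe was cropped or edited before
    this call, the threshold, and how many candidates an analyst reviews."""
    q = embed(probe)
    scores = {name: float(q @ embed(img)) for name, img in gallery.items()}
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]
    # Results above the threshold are only *candidates*, not identifications;
    # a human analyst decides whether to report a "potential match".
    return [(name, score) for name, score in ranked if score >= threshold]

# Toy data: 8x8 arrays standing in for enrolled photos in a gallery.
gallery = {f"person_{i}": rng.normal(size=(8, 8)) for i in range(100)}
probe = gallery["person_42"] + rng.normal(0, 0.5, size=(8, 8))  # degraded image

for name, score in search(probe, gallery):
    print(f"candidate: {name}  similarity: {score:.3f}")
```

Note that every parameter above reflects a choice someone made: in the Arteaga facts, the probe was a low-quality still cropped (and possibly edited) from street footage before any matching step ever ran, which is exactly the information the defense is seeking.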

One key argument in the amicus brief concerns the “low quality of the image and the fact that the subject’s face is turned away from the camera”, which the rights groups point out raises the risk of misidentification. They also note that the failure to return matches in New Jersey “suggests that Mr. Arteaga is not the subject of the probe photo (and thereby innocent) or that the quality of image was of too low quality to ever obtain reliable results.” Fair point.

Additionally, defense attorneys want “detailed discovery on the facial recognition systems used by the NYPD to identify [the defendant], the original photo and any edits” that officials may have made prior to submitting the image for matching, according to a news announcement by EPIC. The defendant’s attorneys also want information on the system analyst involved in the matching.

Misidentifications by facial recognition algorithms are certainly not unprecedented. The New York Times published an article discussing how three Black men – one in New Jersey, two others in Detroit – were all falsely arrested based on bad facial recognition matches (their cases were cited in the amicus brief as well). Two of them sued over the wrongful arrests. That same article cites a national study of over 100 facial recognition algorithms, which found that they did not work as well on Black and Asian faces.

Facial recognition software isn’t the only AI algorithm that has been challenged in the courts. Back in 2016, I covered Loomis v. Wisconsin, in which the Wisconsin Supreme Court, in a unanimous ruling, upheld a six-year prison sentence for 34-year-old Eric Loomis, who was deemed a high risk of re-offending by a popular tool known as COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a 137-question test that covers criminal and parole history, age, employment status, social life, education level, community ties, drug use and beliefs. Loomis had challenged the use of the test’s score, saying it violated his right to due process of law because he was unable to review the algorithm and raise questions about it.
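Loomis’s complaint is concrete: a risk score is ultimately arithmetic over answers and weights, and without the weights and norming data there is nothing to cross-examine. Purely for illustration (COMPAS’s actual features, weights, and scoring method are proprietary), a questionnaire-driven score of this general shape might look like the following Python sketch; every number in it is invented.

```python
# Purely illustrative sketch of a questionnaire-driven risk score.
# COMPAS's actual model, weights, and norming data are proprietary --
# the features, weights, and decile logic below are invented for
# illustration and are NOT the real instrument.
ASSUMED_WEIGHTS = {
    "prior_arrests": 0.30,             # hypothetical weight
    "age_at_first_offense": -0.02,     # hypothetical: younger -> higher risk
    "unemployed": 0.80,                # hypothetical weight
    "failed_prior_supervision": 1.10,  # hypothetical weight
}

def raw_score(answers: dict) -> float:
    """Weighted sum of questionnaire answers."""
    return sum(ASSUMED_WEIGHTS[k] * v for k, v in answers.items())

def decile(score: float, norming_scores: list) -> int:
    """Convert a raw score to a 1-10 decile relative to a norming
    population, which is roughly how COMPAS reports risk."""
    below = sum(1 for s in norming_scores if s <= score)
    return min(10, max(1, round(10 * below / len(norming_scores))))

# Hypothetical norming population and defendant.
norming = [raw_score({"prior_arrests": p, "age_at_first_offense": a,
                      "unemployed": u, "failed_prior_supervision": f})
           for p in range(10) for a in (16, 25, 40)
           for u in (0, 1) for f in (0, 1)]

defendant = {"prior_arrests": 2, "age_at_first_offense": 18,
             "unemployed": 1, "failed_prior_supervision": 0}
print("risk decile:", decile(raw_score(defendant), norming))
```

The point of the sketch is what it hides: change any assumed weight or the norming population and the reported decile changes, and a defendant who cannot inspect those inputs has no way to test the score presented against him.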

Maura Grossman discussed COMPAS as part of a presentation last year (which I covered here), noting its tendency to rate the likelihood of reoffending much higher for Black defendants than for non-Black defendants. She cited the case of an 18-year-old Black woman who was rated high risk (a score of 8) for future crime after she and a friend took a kid’s bike and scooter that were sitting outside, while a 41-year-old white man who had already been convicted of armed robbery and attempted armed robbery was rated low risk (a score of 3).

During that same presentation, Grossman pointed out that COMPAS experienced “function creep”: it was originally designed to provide insight into the types of treatment an offender might need (e.g., drug or mental health treatment), then expanded to decision making about conditions of release after arrest (e.g., release with no bail, bail, or detention without bail), before being expanded again to decisions about sentencing.

Providers of these algorithms and the organizations using them have fought discovery about them, arguing that the algorithms are protected intellectual property (IP). However, last year, Rep. Mark Takano (D-Calif.) and Rep. Dwight Evans (D-Pa.) reintroduced the Justice in Forensic Algorithms Act (which I covered here) to ensure that defendants have access to source code and other information necessary to exercise their confrontation and due process rights when algorithms are used to analyze evidence in their case. Back then, I said: “Expect a titanic battle between those pushing for algorithm transparency and those pushing to keep IP secrets.”

To date, the bill hasn’t advanced beyond the introduction stage, so it looks like the titans looking to keep IP secrets are winning so far. This case serves as another opportunity to revisit the discussion and keep the dialogue going about balancing justice, privacy rights and IP rights. It’s a discussion that needs to continue, and even escalate, until we can find an acceptable balance among all three.

Experience more of Doug Austin’s great work at the eDiscovery Today blog here.

Follow Doug Austin on JD Supra here.

Follow EDRM on JD Supra here.

Author

  • Doug Austin

    Doug Austin is the editor and founder of eDiscovery Today and an EDRM Global Advisory Council Leader. Doug is an established eDiscovery thought leader with over 30 years of experience providing eDiscovery best practices, legal technology consulting and technical project management services to numerous commercial and government clients. Doug has published a daily blog since 2010 and has written numerous articles and white papers. He has received the JD Supra Readers’ Choice Award as the Top eDiscovery Author for 2017 and 2018 and a JD Supra Readers’ Choice Award as a Top Cybersecurity Author for 2019.
