In the wake of the Black Lives Matter protests over police violence and brutality toward people of color, IBM, Amazon, and Microsoft announced last month that they would end or suspend the sale of facial recognition technology to law enforcement. The moves came just a couple of weeks before the New York Times published a feature story highlighting the first known arrest of an innocent suspect caused by a faulty facial recognition match.
While big tech firms are pulling back from offering facial recognition, civil rights activists, researchers, and the American Civil Liberties Union are concerned that smaller tech firms might fill the void, allowing police departments to continue expanding their use of the controversial technology. Its use for policing purposes is controversial in part because there are no national standards governing the accuracy of the various facial recognition algorithms being used.
In fact, the facial recognition software used to arrest the aforementioned innocent suspect incorporated two different algorithms that a federal study of facial recognition systems had found to be faulty. That National Institute of Standards and Technology study determined that more than 100 facial recognition systems were biased, misidentifying Black and Asian faces at rates 10 to 100 times greater than Caucasian faces.
All of which raises the question of why the Detroit Police Department relied on the technology to arrest Robert Julian-Borchak Williams for the felony theft of $3,800 worth of watches from an upscale boutique.
Williams was arrested in front of his wife and two young daughters upon pulling into his driveway after a long day at work. Beyond being shown the warrant for his arrest, Williams had no idea why he was being arrested, and he spent the night in detention before learning the cause during his initial interrogation. After asking Williams when he had last visited the upscale boutique that had been robbed, detectives presented him with three photographs of a man standing in front of a watch display. The man in the photographs did not obviously look like Williams, so he held one up next to his face and said, “No, this is not me. You think all black men look alike.”
According to Williams, the detectives responded by looking at one another, and one, who seemed “chagrined,” said to his partner, “I guess the computer got it wrong.”
Despite that apparent computer error, police kept Williams in custody for several more hours, finally releasing him after he posted a $1,000 bond. During his arraignment in court two weeks later, the prosecutor moved to dismiss the case without prejudice, allowing police to easily charge him again should they secure more evidence.
The New York Times investigation into the arrest determined that the detectives didn’t rely solely on the facial identification match to justify Williams’ arrest. In fact, the facial recognition report on the match notes that “[t]his document is not a positive identification. It is an investigative lead only and is not probable cause for arrest.” So, detectives included a photo of Williams in a six-person photo line-up that was shown to the boutique’s loss-prevention contractor, who had only witnessed the suspect in the video surveillance photos. Unfortunately, she also picked Williams as the suspect.
It would be interesting to see what would have happened had this case—as weak as it appears—gone to trial, provided Williams was represented by a competent criminal defense attorney. Would such an attorney have sought e-discovery or other disclosure relating to the facial recognition algorithms used to identify Williams?
Whatever the case, Williams has since realized he had an alibi: at about the time the robbery was occurring, he was posting a video to his private Instagram account. Digital forensics would almost certainly show that Williams was not posting that video from the store.