
Man spent 30 hours in police custody after being wrongly identified by AI-based tech

Three years ago in Detroit, Robert Williams arrived home from work to find the police waiting at his front door, ready to arrest him for a crime he hadn’t committed.

Facial recognition technology used by officers had mistaken Williams for a suspect who had stolen thousands of dollars worth of watches.

The system matched a blurry CCTV image of the suspect to Williams, in what is considered the first known case of a wrongful arrest resulting from the use of the AI-based technology.

The experience was “infuriating”, Mr Williams said.

“Imagine knowing you didn’t do anything wrong… And they show up to your home and arrest you in your driveway before you can really even get out the car and hug and kiss your wife or see your kids.”

Mr Williams, 45, was released after 30 hours in custody, and has filed a lawsuit, which is ongoing, against Detroit’s police department asking for compensation and a ban on the use of facial recognition software to identify suspects.

Image:
Robert Williams, 45, from Detroit, pictured with his family. He was questioned by police after AI-based facial recognition technology wrongly identified him as a suspect

There are at least six known cases in the US of wrongful arrest caused by facial recognition misidentification, and in every case the person arrested was black.

Artificial intelligence reflects racial bias in society, because it is trained on real-world data.

A US government study published in 2019 found that facial recognition technology was between 10 and 100 times more likely to misidentify black people than white people.

This is largely because the technology is trained on predominantly white datasets. With less information on what people of other races look like, it is more likely to make mistakes when identifying them.

There are growing calls for that bias to be addressed if companies and policymakers want to rely on the technology for future decision-making.

One approach to solving the problem is to use synthetic data, which is generated by a computer to be more diverse than real-world datasets.

Chris Longstaff, vice president for product management at Mindtech, a Sheffield-based start-up, said that real-world datasets are inherently biased because of where the data is drawn from.

“Today, most of the AI solutions out there are using data scraped from the internet, whether that is from YouTube, TikTok, Facebook, one of the typical social media sites,” he said.


As a solution, Mr Longstaff’s team have created “digital humans” based on computer graphics.

These can vary in ethnicity, skin tone, physical attributes and age. The lab then combines some of this data with real-world data to create a more representative dataset to train AI models.
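To make the idea concrete, here is a minimal sketch in Python of how a training set might be topped up with synthetic samples for under-represented groups. The function names and the per-sample "group" label are hypothetical illustrations of the general technique, not Mindtech's actual pipeline.

```python
# A minimal sketch, with hypothetical helpers, of rebalancing a
# training set using synthetic "digital humans". Illustrative only.
from collections import Counter

def rebalance_dataset(real_samples, generate_synthetic, target_per_group):
    """Top up under-represented groups with synthetic samples.

    real_samples: list of dicts, each carrying a "group" label
                  (hypothetical schema)
    generate_synthetic: callable (group, n) -> n synthetic samples
    target_per_group: desired number of samples per group
    """
    counts = Counter(sample["group"] for sample in real_samples)
    dataset = list(real_samples)
    for group, count in counts.items():
        shortfall = target_per_group - count
        if shortfall > 0:
            # Render extra computer-generated people for this group
            dataset.extend(generate_synthetic(group, shortfall))
    return dataset
```

In practice, the generated samples would be rendered images of the lab's digital humans, labelled by the attributes being balanced, such as ethnicity, skin tone and age.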

One of Mindtech’s clients is a construction company that wants to improve the safety of its equipment.

The lab uses the diverse data it has generated to train the company's autonomous vehicles to recognise different types of people on a construction site, so the machines can stop moving if someone is in their path.

Image:
Some CCTV cameras are now fitted with facial recognition technology, as in this 2017 test at Berlin Suedkreuz station. File pic

Toju Duke, a responsible AI adviser and former programme manager at Google, said that using computer-generated, or “synthetic”, data to train AI models has its downsides.

“For someone like me, I haven’t travelled across the whole world, I haven’t met anyone from every single culture and ethnicity and country,” she said.

“So there’s no way I can develop something that would represent everyone in the world and that could lead to further offences.

“So we could actually have synthetic people or avatars that could have a mannerism that could be offensive to someone else from a different culture.”

The problem of racial bias is not unique to facial recognition technology; it has been recorded across many different types of AI models.


In a Bloomberg experiment earlier this year using Stability AI’s image generator, the vast majority of AI-generated images of “fast food workers” showed people with darker skin tones, even though US labour market figures show that most fast food workers in the country are white.

The company said it is working to diversify its training data.

A spokesperson for the Detroit police department said it has strict rules for using facial recognition technology and considers any match only as an “investigative lead” and not proof that a suspect has committed a crime.

“There are a number of checks and balances in place to ensure ethical use of facial recognition, including: use on live or recorded video is prohibited; supervisor oversight; and weekly and annual reporting to the Board of Police Commissioners on the use of the software,” they said.
