Image Credit: Generated using AI via Adobe Stock

A new system powered by artificial intelligence (AI) can reportedly verify witness statements without bias.

Scientists have recently devised an AI model that analyses witness statements before they are evaluated by law enforcement.

The AI model, which aims to improve the reliability of eyewitness identifications, uses natural language processing to assess the witness's wording from a neutral perspective.

According to an official statement from the University of Colorado Boulder, the AI model is not influenced by the human bias known as the featural justification effect.

The AI evaluates an eyewitness identification statement by interpreting the witness's natural-language input and assigning a score based on how accurately the witness has described the perpetrator.

This AI tool could help law enforcement and jurors make more informed decisions by reducing the impact of human bias on how witness accounts are judged.

How AI reduces human bias

Consider this: a middle-aged man with honey-brown eyes passes by you on a street in Hackney, London, nearly nudging your shoulder. He's wearing a black jacket with a tiny bird-like red logo embroidered on the top right pocket, navy blue trousers, beige Yeezys, and a black Nike baseball cap.

A few minutes later, you realise your brand-new iPhone 15 Pro has vanished, so you call the police, report the theft, and describe the man who passed you just before you noticed your phone was missing.

The police ask you to come in the next day to identify the thief, but by then all you remember is the colour of his eyes.

Would the police believe you if you pointed to a man with honey-coloured eyes wearing a completely different outfit? Studies have shown that such detailed, feature-based justifications can actually raise doubts, while general recognition statements tend to be judged as more accurate.

The new AI model can potentially verify your statement in such a case. It is designed to analyse eyewitness statements and identify potential biases.

Dobolyi, one of the researchers behind the study, explained that AI and natural language processing can provide deeper insight into eyewitness reliability.

"The traditional analysis has been basic—just counting words. But with recent advancements in AI, we can assess statements in a much more sophisticated way," he added.


How Did AI Assistance Impact Participants' Judgements?

To test the AI model, researchers asked 1,010 people to evaluate a witness's identification of a suspect from a lineup, along with the witness's accompanying confidence statement.

Image credit: thecorgi | Canva

Participants were divided into four groups – one received no AI assistance, while the others were provided with different types of AI support, including predictions about the accuracy of identifications and graphical explanations. 

Each group assessed the likely accuracy of the eyewitness’s identification based on either a featural or recognition justification, allowing researchers to analyse how AI assistance influenced their judgments.

The witness either described specific features of the suspect, such as their eyes ("I remember his eyes"), or simply stated, "I recognise the person."

Researchers essentially evaluated how the participants judged the witness’s accuracy based on these different types of statements.

They found that AI assistance significantly reduced featural justification bias among participants who found the AI helpful.

Those who perceived the AI as very useful tended to rate the accuracy of both featural and recognition statements similarly, effectively overcoming the bias. In contrast, participants who did not view the AI as beneficial continued to exhibit bias.

The researchers emphasised that this project is the first step in evaluating human–algorithm interactions before the widespread use of AI assistance by law enforcement.

However, Dobolyi warned against blind trust in AI, even as he noted its potential to support more informed decisions in legal contexts.

"We want tools that can help people make better, less biased decisions—if we can confirm their accuracy.”

Dobolyi also stressed the importance of transparency in AI decision-making.

"It’s essential that we understand why an AI makes a recommendation, especially in high-stakes situations like eyewitness testimony.”