A Judge Just Let AI Tell Him What To Do in a Real Criminal Case
This week, a judge in Chandigarh, India, used AI to help decide how to proceed in a case in which a man was accused of assault and murder. During the hearing on Monday (March 27), the judge, Anoop Chitkara, opened ChatGPT and asked whether the defendant, Jaswinder Singh, should be released on bail.
Into ChatGPT, he typed, “What is the jurisprudence on bail when the assailant assaulted with cruelty?” The bot, whose responses are known to reflect the biases of its training data, answered, “If the assailants have been charged with a violent crime that involves cruelty, they may be considered a danger to the community and a flight risk. In such cases, the judge may be less inclined to grant bail or may set the bail amount very high to ensure that the defendant appears in court and does not pose a risk to public safety.”
After the program added that presuming the innocence of the accused is “fundamental” to the justice system and that the judge should be sure that Singh didn’t pose a threat to the public, Chitkara ruled that the crime had allegedly been committed with cruelty and denied Singh bail.
India is well known for its backlogged criminal justice system. According to VICE, there are over six million cases yet to be heard by the courts, with 23 cases filed every minute.
The judge later clarified that he had used ChatGPT only to inform his decision on the defendant’s bail, not to determine whether he was guilty of murder.
“Any reference to ChatGPT and any observation made hereinabove is neither an expression of opinion on the merits of the case nor shall the trial Court advert to these comments. This reference is only intended to present a broader picture on bail jurisprudence, where cruelty is a factor.”
This isn’t the first time, however, that ChatGPT has been used in the courtroom. A judge in Colombia consulted the system while drafting a ruling in January, in what is believed to be the first time it had ever been done.
The judge in that case, Juan Manuel Padilla Garcia, said, according to VICE, “The arguments for this decision will be determined in line with the use of artificial intelligence (AI). Accordingly, we entered parts of the legal questions posed in these proceedings.
“The purpose of including these AI-produced texts is in no way to replace the judge’s decision,” he added. “What we are really looking for is to optimize the time spent drafting judgments after corroborating the information provided by AI.”
AI’s rapid rise in use has prompted numerous fears, from concerns about jobs to worries about its broader influence. Now, should we worry about how cases will be handled if people in the justice system rely on it? The Council of Europe has argued for its usefulness and against alarm, but given the evidence of existing prejudice making its way into AI systems, some caution is warranted, especially when making decisions about crimes and those accused of committing them.
(featured image: solidcolours/Getty Images)