Prosecutors and AI: Navigating Justice in the Age of Algorithms

By Ralph Losey
Image: Ralph Losey, Losey AI LLC, using his Visual Muse GPT.

[Editor’s Note: EDRM is proud to publish Ralph Losey’s advocacy and analysis. The opinions and positions are Ralph Losey’s copyrighted work.]


AI has the potential to transform the criminal justice system through its ability to process vast datasets, recognize patterns, and predict outcomes. However, this potential comes with a profound responsibility: ensuring that AI is employed in ways that uphold basic human principles of justice. This article will focus on how AI can assist prosecutors in fulfilling their duty to represent the people fairly and equitably. It will highlight the practical benefits of AI in criminal law, providing specific examples of its application. The underlying theme emphasizes the necessity of human oversight to prevent the misuse of AI and to ensure that justice remains a human ideal, not an artificial construct.

[Image: AI assisted justice, by Ralph Losey using his custom AI, Visual Muse.]

The integration of AI into criminal prosecutions must be aligned with the ethical and legal obligations of prosecutors as outlined, for instance, by the American Bar Association’s Criminal Justice Standards for the Prosecution Function (ABA, 4th ed. 2017) (hereinafter “ABA Standards”). The ABA Standards emphasize the prosecutor’s duty to seek justice, maintain integrity, and act with transparency and fairness in all aspects of the prosecution function. This article will not cover the related topic of AI-based evidence. See Gless, Lederer, Weigend, AI-Based Evidence in Criminal Trials? (William & Mary Law School, Winter 2024). Nor will it cover criminal defense lawyer issues, though those may be addressed in a follow-up soon.

The Promise of AI in Criminal Prosecutions

“The primary duty of the prosecutor is to seek justice within the bounds of the law, not merely to convict.” ABA Standard 3-1.2(b). When AI is used responsibly, it can assist prosecutors in fulfilling this duty by providing new tools. These AI-powered tools can enhance evidence analysis, case management, and decision-making, all while maintaining the integrity and fairness expected of the prosecution function. Prosecutors with AI can better manage the vast amounts of data in modern investigations, identify patterns that might escape human detection, and make more informed decisions.

AI has the potential to transform the criminal justice system through its ability to process vast datasets, recognize patterns, and predict outcomes.

Ralph Losey.

In March 2018, the National Institute of Justice sponsored a workshop of prosecutors from around the country that identified data and technology challenges as a high-priority need for prosecutors. According to the Rand Corporation’s report on the conference, Prosecutor Priorities, Challenges, and Solutions (“Rand Report“), the key findings of this prestigious group were: (1) difficulties recruiting, training, managing, and retaining staff; (2) demanding and time-consuming tasks for identifying, tracking, storing, and disclosing officer misconduct and discipline issues; and (3) inadequate or inconsistent collection of data and other information shared among agencies . . . as well as by emerging digital and forensic technologies. The full Rand Report PDF may be downloaded here. The opening summary states:

Prosecutors are expected to deliver fair and legitimate justice in their decision making while balancing aspects of budgets and resources, working with increasingly larger volumes of digital and electronic evidence that have developed from technological advancements (such as social media platforms), partnering with communities and other entities, and being held accountable for their actions and differing litigation strategies . . .

Moreover, the increasing volume of potentially relevant digital information, video footage, and other information from technological devices and tools can significantly add to the amount of time needed to sufficiently examine and investigate the evidence in order to make decisions about whether to drop or pursue a case. This can be especially challenging because the staffing and other resources in prosecutors’ offices have not necessarily kept pace with these increasing demands.

Although the amount of digital information that prosecutors must sometimes sift through can be managed, in part, through innovative technological tools, such as data mining and data reduction solutions (Al Fahdi, Clarke, and Furnell, 2013; Quick and Choo, 2014), there are often steep learning curves or high costs that make it unrealistic for an office to implement these technologies.

Rand Report, pages 1-3.

Also see the excellent Duke Law sponsored one-hour panel discussion video, The Equitable, the Ethical and the Technical: Artificial Intelligence’s Role in The U.S. Criminal Justice System, for a comprehensive discussion of the issues as of November 2021, one year before the release of the new generative models of AI.

e-Discovery, Evidence Analysis, and Case Management

As the Rand Report confirms, the sheer volume of evidence in complex criminal investigations is a significant challenge for prosecutors. Also see: Tinder Date Murder Case Highlights the Increasing Complexity of eDiscovery in Criminal Investigations: eDiscovery Trends (e-Discovery Daily, 6/15/18). AI can analyze vast datasets—such as emails, text messages, and internet activity logs—to identify patterns indicative of criminal activity, but the software can be expensive and requires trained technology experts. AI algorithms can recognize specific types of evidence, such as images, sentiments, or key concepts relevant in many cases. They can help prosecutors identify patterns and connections within the evidence that might not be immediately apparent to human investigators. This capability can significantly reduce the time needed to search and study evidence, enabling prosecutors to build stronger cases more efficiently.
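
To make this kind of pattern detection concrete, here is a minimal sketch in Python, using the networkx library and entirely invented email metadata, of how a communication graph can surface the accounts at the hub of a document collection. It illustrates the general technique only, not any particular prosecutorial tool.

```python
# A minimal sketch of graph-based pattern detection: build a communication
# graph from email metadata and flag the most central accounts for human
# review. All addresses and messages are invented for illustration.
import networkx as nx

# Hypothetical (sender, recipient) pairs extracted from an email collection.
messages = [
    ("alice@corp.example", "bob@corp.example"),
    ("alice@corp.example", "bob@corp.example"),
    ("bob@corp.example", "shell-co@offshore.example"),
    ("carol@corp.example", "shell-co@offshore.example"),
    ("alice@corp.example", "carol@corp.example"),
]

G = nx.DiGraph()
for sender, recipient in messages:
    if G.has_edge(sender, recipient):
        G[sender][recipient]["weight"] += 1  # count repeated contacts
    else:
        G.add_edge(sender, recipient, weight=1)

# Centrality scores flag accounts that sit at the hub of the communications,
# a natural starting point for the human investigators to review first.
for account, score in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]):
    print(f"{account}: {score:.2f}")
```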

But, as the Rand Report also makes clear, prosecutors need adequate funding and trained personnel to purchase and use these new tools. Fortunately, generative AI is substantially less expensive than the older models of AI and easier to use. Still, issues of fairness and guardrails against discrimination in their use remain significant problems.

[Image: a high-tech, AI-driven workspace, by Ralph Losey using his custom AI, Visual Muse.]

AI evidence search and classification tools such as predictive coding, which are well established in civil litigation, should soon be more widely used in criminal law. The high costs involved are now plummeting and should soon be affordable to most prosecutors. These tools can drastically reduce the time needed to search and analyze large volumes of complex data. Still, budgets to hire trained personnel to operate the new tools must be expanded. AI can complement, but not entirely replace, human review in what I call a hybrid multimodal process. Ralph Losey, Chat GPT Helps Explains My Active Machine Learning Method of Evidence Retrieval (e-Discovery Team, 1/28/23). Human experts on the prosecutor’s team should always be involved in the evidence review to ensure that no critical information is missed.
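
For readers who want to see the mechanics, the following is a simplified sketch of the active machine learning cycle behind predictive coding, written in Python with scikit-learn and toy data. The documents, seed labels, and the simulated attorney response are all invented; the point is only to show the hybrid loop in which a human reviewer labels the documents the model is least certain about.

```python
# A simplified sketch of the active-learning ("predictive coding") cycle,
# using scikit-learn with toy data. Real e-discovery platforms are far
# more elaborate; this shows only the core human-in-the-loop cycle.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy document collection (stand-ins for emails, memos, etc.).
documents = [
    "wire transfer to offshore account approved",
    "lunch order for the office party",
    "destroy the backup tapes before the audit",
    "quarterly parking pass renewal form",
    "move the funds before the subpoena arrives",
    "company picnic rescheduled to friday",
]
labels = {0: 1, 1: 0}  # seed labels from initial attorney review: 1 = relevant

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(documents)

while len(labels) < len(documents):
    train_idx = sorted(labels)
    model = LogisticRegression()
    model.fit(X[train_idx], [labels[i] for i in train_idx])

    probs = model.predict_proba(X)[:, 1]
    unreviewed = [i for i in range(len(documents)) if i not in labels]
    # Uncertainty sampling: ask the human reviewer about the document the
    # model is least sure of (predicted probability closest to 0.5).
    pick = min(unreviewed, key=lambda i: abs(probs[i] - 0.5))
    print(f"Attorney, please review doc {pick}: {documents[pick]!r}")
    # In a real tool the attorney's judgment goes here; we simulate it.
    relevant_terms = ("transfer", "funds", "destroy")
    labels[pick] = int(any(t in documents[pick] for t in relevant_terms))
```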

[Image: a surreal blend of nature, technology, and knowledge, by Ralph Losey using his custom AI, Visual Muse.]

Transparency and accountability are also crucial in using AI in discovery. Defense attorneys should be provided with a detailed explanation of how these tools were used. This is essential to maintaining the fairness and integrity of the discovery process, ensuring that both sides have equal access to evidence and can challenge the AI’s conclusions if necessary.

[Image: a lawyer consulting a holographic AI figure, by Ralph Losey using his custom AI, Visual Muse.]

AI also plays a crucial role in case management. AI-powered tools can help prosecutors organize and prioritize cases based on the severity of the charges, the availability of evidence, and the likelihood of a successful prosecution. These tools can assist in tracking deadlines, managing court calendars, and ensuring that all necessary court filings are completed on time. By streamlining these administrative tasks, AI allows prosecutors and their assistants to concentrate on the substantive aspects of their work—pursuing justice. It also helps them deal with the omnipresent staff shortage issues.

[Image: an abstract AI figure radiating streams of documents and data, by Ralph Losey using his custom AI, Visual Muse.]

Bias Detection and Mitigation

Bias in prosecutorial decision-making—whether conscious or unconscious—remains a critical concern. ABA Standards state:

The prosecutor should not manifest or exercise, by words or conduct, bias or prejudice based upon race, sex, religion, national origin, disability, age, sexual orientation, gender identity, or socioeconomic status. A prosecutor should not use other improper considerations, such as partisan or political or personal considerations, in exercising prosecutorial discretion. A prosecutor should strive to eliminate implicit biases, and act to mitigate any improper bias or prejudice when credibly informed that it exists within the scope of the prosecutor’s authority.

ABA Standards 3-1.6(a).

AI can play a crucial role in detecting and mitigating such biases, helping prosecutors adhere to the mandate that they “strive to eliminate implicit biases, and act to mitigate any improper bias or prejudice” within their scope of authority.

Prosecutors should use care in the selection and use of AI systems. If AI systems are trained on biased data, they can perpetuate and even amplify existing disparities in the criminal justice system. For instance, an AI algorithm used to predict recidivism, if trained on data reflecting historical biases—such as the over-policing of minority communities—may disproportionately disadvantage these communities. AI systems used in criminal prosecutions should be designed to avoid this bias. The software purchased by a prosecutor’s office should be chosen carefully, ideally with outside expert advice, and rigorously tested for bias and other errors before deployment.
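
One small example of what such pre-deployment testing can look like: the following Python sketch compares a risk tool’s false positive rates across two demographic groups, using entirely synthetic data. Real audits are far more extensive and use validated data, but the core disparity check is this simple.

```python
# A minimal sketch of one common pre-deployment bias test: comparing a
# risk tool's false positive rates across demographic groups. All data is
# synthetic, and the toy "tool" deliberately over-flags group B so the
# audit has a disparity to find. Real audits use validation data and many
# more metrics (calibration, equalized odds, and others).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.choice(["A", "B"], size=n)         # protected attribute
reoffended = rng.random(n) < 0.3               # actual outcome (toy)
flag_rate = np.where(group == "B", 0.6, 0.3)   # biased toy tool
flagged = rng.random(n) < flag_rate            # tool's "high risk" flag

for g in ("A", "B"):
    did_not_reoffend = (group == g) & ~reoffended
    fpr = flagged[did_not_reoffend].mean()     # flagged despite no reoffense
    print(f"group {g}: false positive rate = {fpr:.1%}")
# A gap like the one printed here (roughly 30% vs. 60%) is exactly the
# kind of disparity that rigorous testing should surface before any
# prosecutor's office deploys such a tool.
```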

AI systems also offer the potential to detect and mitigate unconscious human bias in prosecutorial decision-making. AI can analyze past prosecutorial decisions to identify patterns of bias that may not be immediately apparent to human observers. By flagging these patterns, AI can help prosecutors become aware of biases within their office and take corrective action.

[Image: human and robotic faces merging, by Ralph Losey using his custom AI, Visual Muse.]

Prosecutors should not, however, fall into the trap of overcompensating based on statistical analysis alone. AI is a limited tool that, like humans, makes errors of its own. Its use should be tempered by prosecutorial experience, independence, intuition, and human values. Whenever we use AI, in any context or field, it should be a hybrid relationship where humans remain in charge. From Centaurs To Cyborgs: Our evolving relationship with generative AI (e-Discovery Team, 4/24/24) (experts recommend two basic ways to use AI, both hybrids, where the unique powers of human intuition are added to those of AI). AI can also help prosecutors make objective decisions on charging and sentencing by providing statistically generated recommendations, again with the same cautionary advice against overreliance.

Sentencing Recommendations and Predictive Analytics

The use of AI in predictive analytics for sentencing is among the most controversial applications in criminal law. AI systems can be trained to analyze data from past cases and make predictions about the likelihood of a defendant reoffending or suggest appropriate sentences for a given crime. These recommendations can then inform the decisions of judges and prosecutors.

Predictive analytics has the potential to bring greater consistency and objectivity to sentencing. By basing recommendations on data rather than individual biases or instincts, AI can help reduce disparities and ensure similar cases are treated consistently. This contributes to a more equitable criminal justice system.

While AI can bring greater consistency to sentencing, prosecutors must ensure that AI-generated recommendations comply with their “heightened duty of candor” and the overarching obligation to ensure that justice is administered equitably.

In light of the prosecutor’s public responsibilities, broad authority and discretion, the prosecutor has a heightened duty of candor to the courts and in fulfilling other professional obligations.

ABA Standard 3-1.4(a).

The use of AI in sentencing raises important ethical questions. Should AI make predictions about a person’s future behavior based on their past? What if the data used to train the AI is biased or incomplete? How can we ensure that AI-generated recommendations are not seen as infallible but are subject to critical scrutiny by human decision-makers?

These concerns highlight the need for caution. While AI can provide valuable insights and recommendations, it is ultimately the responsibility of human prosecutors and judges to make the final decisions. AI should be a tool to assist in the pursuit of justice, not a replacement for human judgment.

[Image: scales of justice beside a half-human, half-robotic face in a courtroom, by Ralph Losey using his custom AI, Visual Muse.]

Predictive Policing

Predictive policing is an area where pre-generative AI has been embraced by many police departments worldwide, including in E.U. countries, but also in China and other repressive regimes. Many prosecutors in the U.S. endorse it, but it is quite controversial and hopefully will be improved by new models of generative AI. The DA’s office wants to use predictive analytics software to direct city resources to ‘places that drive crime.’ Will it work? (The Lens, 11/15/23). In theory, by analyzing data on past crimes—such as the time, location, and nature of the offenses—AI algorithms can predict where and when future crimes are likely to occur. The accuracy of these predictions using older AI models has been very controversial, and there is now widespread concern about misuse. Sankin and Mattu, Predictive Policing Software Terrible At Predicting Crimes (Wired, 10/2/23). But in theory, this kind of statistical analysis should be able to help law enforcement agencies allocate resources more effectively, aiming to prevent crime before it happens. See generally, Navigating the Future of Policing: Artificial Intelligence (AI) Use, Pitfalls, and Considerations for Executives (Police Chief Magazine, 4/3/24).
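
The core statistical idea is easy to illustrate. The Python sketch below, using invented incident coordinates, aggregates past offenses into grid cells and ranks the cells by historical frequency, which is roughly the “heat map” logic described above. It also makes the feedback-loop risk easy to see.

```python
# A stripped-down illustration of the statistical idea behind place-based
# predictive policing: aggregate past incidents into grid cells and rank
# cells by historical frequency. Coordinates are invented. Real systems
# add time-of-day models, decay weights, and more -- and they inherit any
# bias present in the underlying arrest and report data.
from collections import Counter

incidents = [  # (x, y) locations of past reported offenses (toy data)
    (1.2, 3.4), (1.3, 3.5), (1.1, 3.6), (5.0, 0.2), (5.1, 0.3), (9.9, 9.8),
]
CELL = 1.0  # grid cell size

cells = Counter((int(x // CELL), int(y // CELL)) for x, y in incidents)
for cell, count in cells.most_common(3):
    print(f"grid cell {cell}: {count} past incidents")
# Note the feedback-loop risk discussed above: patrolling the top cells
# produces more reports there, which raises their counts in the next run.
```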

All prosecutors, indeed all citizens, want to be smart when it comes to crime. We all want “more police officers on the street, deployed more effectively. They will not just react to crime, but prevent it.” Kamala Harris (Author) and Joan Hamilton, Smart on Crime: A Career Prosecutor’s Plan to Make Us Safer (Chronicle Books, 2010).

[Image: a Minority Report-style predictive interface, by Ralph Losey using his custom AI, Visual Muse.]

The Los Angeles Police Department (LAPD) was one of the first to use predictive policing software, known as PredPol (now Geolitica). It identified areas of the city at high risk for certain types of crime, such as burglaries or auto thefts. The software analyzed data on past crimes and generated “heat maps” that indicated where crimes were most likely to occur in the future. This guided patrols and other law enforcement activities. PredPol proved to be very controversial. Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them (The Markup, 12/2/21). Its use was discontinued by the LAPD in 2020, but other companies claim to have corrected the biases and errors in the programs. See Levinson-Waldman and Dwyer, LAPD Documents Show What One Social Media Surveillance Firm Promises Police (Brennan Center for Justice, 11/17/21).

The NYPD adopted another type of predictive policing software, called Patternizr. According to the Wikipedia article on predictive policing:

The goal of the Patternizr was to help aid police officers in identifying commonalities in crimes committed by the same offenders or same group of offenders. With the help of the Patternizr, officers are able to save time and be more efficient as the program generates the possible “pattern” of different crimes. The officer then has to manually search through the possible patterns to see if the generated crimes are related to the current suspect. If the crimes do match, the officer will launch a deeper investigation into the pattern crimes.

Molly Griffard, A Bias-Free Predictive Policing Tool?: An Evaluation of the Nypd’s Patternizr (Fordham Urban Law Journal, December 2019).
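
The underlying similarity idea can be sketched in a few lines. The Python example below, with invented features and cases, ranks past incidents by cosine similarity to a new one. It is not the NYPD’s actual model, only an illustration of the general pattern-matching approach the quoted description refers to.

```python
# A toy sketch of the similarity idea behind crime pattern-matching tools:
# encode each incident as a feature vector and rank past incidents by
# similarity to a new one. Features and cases are invented.
import numpy as np

# Feature vector: [burglary, night-time, forced-entry, commercial]
past_crimes = {
    "case-101": np.array([1, 1, 1, 0]),
    "case-102": np.array([0, 1, 0, 1]),
    "case-103": np.array([1, 1, 1, 1]),
}
new_crime = np.array([1, 1, 1, 0])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(past_crimes.items(), key=lambda kv: -cosine(new_crime, kv[1]))
for case, vec in ranked:
    print(f"{case}: similarity {cosine(new_crime, vec):.2f}")
# As the quoted description notes, a human officer must still review the
# suggested matches before treating them as a true crime "pattern."
```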

[Image: futuristic predictive policing on a neon-lit city street, by Ralph Losey using his custom AI, Visual Muse.]

While predictive policing has been credited with reducing crime in some areas, it has also been criticized for potentially reinforcing existing biases. If the data used to train the AI reflects a history of over-policing in certain minority communities, the algorithm may predict those communities are at higher risk for future crimes, leading to even more policing in those areas. This, in turn, can perpetuate a cycle of discrimination and injustice. See e.g. Taryn Bates, Technology and Culture: How Predictive Policing Harmfully Profiles Marginalized People Groups (Vol. 6 No. 1 (2024): California Sociology Forum).

To address these concerns, predictive policing algorithms must be designed with fairness in mind and subject to rigorous oversight. AI systems should be regularly audited to ensure they are not disproportionately targeting specific communities, and the data used to train these systems must be representative and unbiased.

[Image: police with holographic surveillance displays on a city street at dusk, by Ralph Losey using his custom AI, Visual Muse.]

Sentiment Analysis in Jury Selection

Another trending application of AI in criminal law is the use of sentiment analysis in jury selection. Sentiment analysis is a type of AI that can analyze text or speech to determine the underlying emotions or attitudes of the speaker. In jury selection, sentiment analysis can analyze potential jurors’ public records, especially social media posts, as well as their responses during voir dire—the process of questioning jurors to assess their suitability for a case. It can also monitor for unfair questioning of potential jurors by prosecutors and defense lawyers. See Jo Ellen Nott, Natural Language Processing Software Can Identify Biased Jury Selection, Has Potential to Be Used in Real Time During Voir Dire (Criminal Legal News, December 2023). Also see AI and the Future of Jury Trials (CLM, 10/18/23).

For example, an AI-powered sentiment analysis tool could analyze the language used by potential jurors to identify signs of bias or prejudice that might not be immediately apparent to human observers. This information could then be used by prosecutors and defense attorneys to make more informed decisions about which jurors to strike or retain.
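
As a deliberately oversimplified illustration, the following Python sketch scores voir dire responses against small positive and negative word lists. Production sentiment tools use trained language models rather than word lists, and everything here, including the juror responses, is invented.

```python
# A deliberately simple, lexicon-based sketch of sentiment scoring, to make
# the voir dire idea concrete. Word lists and responses are invented.
NEGATIVE = {"never", "distrust", "hate", "refuse", "biased"}
POSITIVE = {"fair", "open", "trust", "listen", "impartial"}

def sentiment_score(text: str) -> int:
    """Count positive words minus negative words in a response."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

responses = {
    "juror 1": "I try to stay fair and listen with an open mind",
    "juror 2": "I distrust the police and would never believe an officer",
}
for juror, answer in responses.items():
    print(f"{juror}: score {sentiment_score(answer):+d}")
# A strongly negative score is only a flag for follow-up questioning by
# the lawyers -- not, by itself, a reason to strike a juror.
```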

While sentiment analysis has the potential to improve jury selection fairness, it also raises ethical questions. Should AI influence juror selection, given the potential for errors or biases in the analysis? How do we ensure AI-generated insights are used to promote justice, rather than manipulate the selection process?

These questions underscore the need for careful consideration and oversight in using AI in jury selection. AI should assist human decision-makers, not substitute their judgment.

[Image: a courtroom trial with AI displays before a jury, by Ralph Losey using his custom AI, Visual Muse.]

AI in Plea Bargaining and Sentencing

AI can also play a transformative role in plea bargaining and sentencing decisions. Plea bargaining is a critical component of the criminal justice system, with most cases being resolved through negotiated pleas rather than going to trial. AI can assist prosecutors in evaluating the strength of their case, the likelihood of securing a conviction, and the appropriate terms for a plea agreement. See: Justice Innovation Lab, Critiquing The ABA Plea Bargaining Principles Report (Medium, 2/1/24); Justice Innovation Lab, Artificial Intelligence In Criminal Court Won’t Be Precogs (Medium, 10/31/23) (article concludes with “Guidelines For Algorithms and Artificial Intelligence In The Criminal Justice System“).

For example, AI algorithms can analyze historical data from similar cases to provide prosecutors with insights into the typical outcomes of plea negotiations, considering factors such as the nature of the crime, the defendant’s criminal history, and the available evidence. This can help prosecutors make more informed decisions on plea deal offers.
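
A toy version of that “similar past cases” lookup might look like the following Python sketch, which uses pandas on an invented case table. A real system would draw on a full case management database and control for many more factors.

```python
# A minimal sketch of the "look up similar past cases" idea, using pandas
# on an invented dataset of prior plea outcomes.
import pandas as pd

cases = pd.DataFrame({
    "charge":         ["burglary", "burglary", "burglary", "fraud", "fraud"],
    "prior_felonies": [0, 2, 0, 1, 0],
    "plea_outcome":   ["probation", "24 months", "probation", "12 months", "probation"],
})

# For a new burglary defendant with no prior felonies, summarize how
# comparable past cases actually resolved.
similar = cases[(cases.charge == "burglary") & (cases.prior_felonies == 0)]
print(similar.plea_outcome.value_counts())
# The prosecutor, not the model, decides what offer is just; the summary
# only anchors that judgment in how comparable cases were treated.
```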

Moreover, AI can assist in making sentencing recommendations that are more consistent and equitable. Sentencing disparities have long been a concern in the criminal justice system, with studies showing that factors such as race, gender, and socioeconomic status can influence sentencing outcomes. AI has the potential to reduce these disparities by providing sentencing recommendations based on objective criteria rather than subjective judgment. Keith Brannon, AI sentencing cut jail time for low-risk offenders, but study finds racial bias persisted (Tulane Univ., 1/23/24); Kieran Newcomb, The Place of Artificial Intelligence in Sentencing Decisions (Univ. NH, Spring 2024).

[Image: a futuristic courtroom sentencing scene, by Ralph Losey using his custom AI, Visual Muse.]

For instance, an AI system could analyze data from thousands of past cases to identify typical sentences imposed for specific crimes, accounting for relevant factors like the severity of the offense and the defendant’s criminal record. This information could then be used to inform sentencing decisions, ensuring that similar cases are treated consistently and fairly.

However, using AI in plea bargaining and sentencing also raises significant ethical considerations. The primary concern is the risk of AI perpetuating or exacerbating existing biases in the criminal justice system. If the data used to train AI systems reflects historical biases—such as harsher sentences for minority defendants—AI’s recommendations may inadvertently reinforce those biases.

To address this concern, AI systems used in plea bargaining and sentencing must be designed with fairness and transparency in mind. This includes ensuring that the data used to train these systems is representative and free from bias, and providing clear explanations of how the AI’s recommendations were generated. Moreover, human prosecutors and judges must retain the final authority in making plea and sentencing decisions, using AI as a tool to inform their judgment rather than as a substitute for it. It is important that AI systems be chosen and used very carefully, in part because “the prosecutor should avoid an appearance of impropriety in performing the prosecution function.” ABA Standard 3-1.2(c).

[Image: a courtroom with Lady Justice rendered in digital circuitry, by Ralph Losey using his custom AI, Visual Muse.]

Ethical Implications of AI in Criminal Prosecutions

While the potential benefits of AI in criminal law are significant, it is equally important to consider the ethical implications of integrating AI into the criminal justice system. AI, by its very nature, raises questions about accountability, transparency, and the potential for misuse—questions that must be carefully addressed to ensure AI is used in ways that advance, not hinder, the cause of justice.

When AI is used responsibly, it can assist prosecutors in fulfilling this duty by providing new tools.

Ralph Losey.

As we integrate AI into criminal prosecutions, it is essential that we do so with a commitment to the principles articulated in the ABA’s Criminal Justice Standards. By aligning AI’s capabilities with these ethical guidelines, we can harness technology to advance justice while upholding the prosecutor’s duty to act with integrity, fairness, and transparency.

Transparency and Accountability

One of the most pressing ethical concerns is the issue of transparency. AI algorithms are often referred to as “black boxes” because their decision-making processes can be difficult to understand, even for those who design and operate them. This lack of transparency can be particularly problematic in criminal prosecutions, where the stakes are incredibly high, and the consequences of a wrong decision can be severe. A ‘black box’ AI system has been influencing criminal justice decisions for over two decades – it’s time to open it up (The Conversation, 7/26/23) (discusses UK systems).

For example, if an AI system is used to predict the likelihood of a defendant reoffending, it is crucial that the defendant, their attorney, and the judge understand how that prediction was made. Without transparency, challenging the AI’s conclusions becomes difficult, raising concerns about due process and the right to a fair trial.

To address this issue, AI systems used in criminal prosecutions must be designed to be as transparent as possible. This includes providing clear explanations of how the AI’s decisions were made and ensuring that the underlying data and algorithms are accessible for review and scrutiny. Federal legislation that would require this, the Justice in Forensic Algorithms Act, has been pending for years. New bill would let defendants inspect algorithms used against them in court (The Verge, 2/15/24) (requires disclosure of source code). Moreover, the legal community must advocate for the development of AI systems that prioritize explainability and interpretability, ensuring that the technology is effective, accountable, and understandable.
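
To make “explainability” less abstract, here is one hedged illustration in Python: an inherently interpretable logistic regression model whose weights can be disclosed and challenged, trained on invented data. This is not any deployed system; it simply shows the kind of transparency a sealed black-box model forecloses.

```python
# One concrete form "explainability" can take: an inherently interpretable
# model whose weights can be disclosed and challenged in court. Features,
# data, and labels are invented; this is not any deployed system.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["prior_convictions", "age_at_offense", "employed"]
X = np.array([[3, 19, 0], [0, 45, 1], [5, 23, 0], [1, 35, 1],
              [4, 21, 0], [0, 50, 1], [2, 30, 1], [6, 22, 0]])
y = np.array([1, 0, 1, 0, 1, 0, 0, 1])  # toy "reoffended" labels

model = LogisticRegression().fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")
# Because every weight is visible, a defendant can see exactly which
# factors drove a risk score -- the kind of scrutiny that a sealed
# "black box" model makes impossible.
```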

[Image: an AI genie in a glass display case, by Ralph Losey using his custom AI, Visual Muse.]

Fairness and Bias

Another ethical concern is the potential for AI to be used in ways that exacerbate existing inequalities in the criminal justice system. For example, there is a risk that AI could justify more aggressive policing or harsher sentencing in communities already disproportionately targeted by law enforcement. This is why AI systems must be designed with fairness in mind and their use subject to rigorous oversight.

Ensuring fairness requires that AI systems are trained on representative and unbiased data. It also necessitates regular audits of AI systems to detect and mitigate any biases that may arise. Additionally, AI should not be the sole determinant in any criminal justice decision-making process; human oversight is essential to balance AI’s recommendations with broader considerations of justice and equity.

Human Judgment and Ethical Responsibility

The deployment of AI in criminal prosecutions also raises important questions about the role of human judgment in the justice system. While AI can provide valuable insights and recommendations, it is ultimately human prosecutors, judges, and juries who must make the final decisions. This is because justice is not just about applying rules and algorithms—it is about understanding the complexities of human behavior, weighing competing interests, and making moral judgments.

AI, no matter how advanced, cannot replicate the full range of human judgment, and it should not be expected to do so. Instead, AI should be seen as a tool to assist human decision-makers, providing them with additional information and insights that can help them make more informed decisions. At the same time, we must be vigilant in ensuring that AI does not become a crutch or a substitute for careful human deliberation, judgment, and equity.

[Image: a lawyer studying a holographic AI figure, by Ralph Losey using his custom AI, Visual Muse.]

Conclusion

The integration of AI into criminal prosecutions holds the promise of advancing the cause of justice in profound and meaningful ways. To do so, we must always take care that applications of AI follow the traditional principles stated in the Criminal Justice Standards for the Prosecution Function and other guides of professional conduct. By aligning AI’s capabilities with ethical guidelines, we can harness technology in a manner that advances the prosecutor’s duty to act with integrity, fairness, and transparency.

With these cautions in mind, we should boldly embrace the opportunities that AI offers. Let us use AI as a tool to enhance, not replace, human judgment. And let us work together—lawyers, technologists, and policymakers—to ensure that the use of AI in criminal prosecutions advances the cause of justice for all.

[Image: courtroom of the future, by Ralph Losey using his custom AI, Visual Muse.]

Ralph Losey Copyright 2024 – All Rights Reserved

Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.

Author

Ralph Losey is a writer and practicing attorney specializing in providing services in Artificial Intelligence. Ralph also serves as a certified AAA Arbitrator. Finally, he is the CEO of Losey AI, LLC, providing non-legal services, primarily educational services pertaining to AI and the creation of custom GPTs. Ralph has long been a leader among the world's tech lawyers. He has presented at hundreds of legal conferences and CLEs around the world and written over two million words on AI, e-discovery, and tech-law subjects, including seven books. Ralph has been involved with computers, software, legal hacking, and the law since 1980. Ralph has the highest peer AV rating as a lawyer and was selected as a Best Lawyer in America in four categories: E-Discovery and Information Management Law, Information Technology Law, Commercial Litigation, and Employment Law - Management. For his full resume and list of publications, see his e-Discovery Team blog. Ralph has been married to Molly Friedman Losey, a mental health counselor in Winter Park, since 1973 and is the proud father of two children.
