How AI can ensure JJDPA compliance in juvenile justice

Agencies can benefit from using AI for risk assessment, data collection, preprocessing, and model training in juvenile justice.

The challenges of juvenile sentencing

One of the leading considerations for maintaining compliance with the federal Juvenile Justice and Delinquency Prevention Act (JJDPA) reauthorization is the requirement that any youth under the age of 18 who is being processed through criminal proceedings may not, except under limited circumstances, be held pretrial in any jail or lockup for adults.1 This has prompted the need for alternatives such as youth detention centers or home confinement.

The unfortunate reality, however, is that many states are now experiencing a detention center crisis. These facilities are facing challenges with staffing and space. Policy restrictions and the need for a juvenile’s access to parents or legal resources further complicate placement decisions if a center with availability lies outside the minor’s county or parish. Home confinement may address capacity concerns, but it can place an unfair burden on the family, particularly if electronic monitoring is required and the household lacks the infrastructure to support it.

There are two predominant areas of concern when working with juvenile justice systems and youth advocates. The first is the possibility of returning high-risk juveniles to homes where the offenses occurred. The second is the option of placing low-risk youths into facilities that may expose them to harm. To navigate these challenges, a new method of evaluation between detention or home confinement must be considered—one that both conforms to JJDPA mandates and provides a risk-based assessment of each decision. Artificial intelligence (AI) can potentially assist in risk assessment and determination using a variety of data points and machine learning models.

Using AI to assess the risk of re-offense

While AI is being employed across numerous industries and organizations, the justice system stands to gain from this technology as well.

Here's a more detailed look at how AI can be incorporated into the evaluation process:

Data collection

The first step in risk assessment is to collect relevant data through unbiased methods that are compliant with all federal and state criminal history data protection measures. This might include information like the youth’s legal and personal history, family dynamics, educational background, and mental health history.
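As a hedged sketch of what compliance-aware collection might look like in code, the snippet below assembles a case record while keeping only fields on an approved list. The field names and the approved list are illustrative assumptions, not drawn from the JJDPA or any agency policy.

```python
# Illustrative only: filter an intake record against an approved-fields list,
# a stand-in for whatever data-protection review an agency actually requires.
APPROVED_FIELDS = {"legal_history", "family_dynamics", "education", "mental_health"}

raw_intake = {
    "legal_history": ["petty theft"],
    "family_dynamics": "single guardian",
    "education": "grade 10",
    "mental_health": "weekly counseling",
    "neighborhood": "redacted",  # example of a field a policy might exclude
}

# Keep only fields that passed the (hypothetical) compliance review.
case_record = {k: v for k, v in raw_intake.items() if k in APPROVED_FIELDS}
```

In practice the approved list would come from counsel and state statute, not a hard-coded set.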

Data preprocessing

After collection, this data would be cleaned and prepared for analysis. This might entail addressing missing information, encoding categorical variables, or standardizing numerical variables.
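The three operations named above can be sketched with the standard library alone; the field names (`age`, `prior_contacts`, `school_status`) and values are hypothetical.

```python
# Minimal preprocessing sketch: impute a missing value, one-hot encode a
# categorical variable, and standardize a numeric variable.
from statistics import mean, pstdev

records = [
    {"age": 16, "prior_contacts": 2, "school_status": "enrolled"},
    {"age": 15, "prior_contacts": None, "school_status": "suspended"},
    {"age": 17, "prior_contacts": 5, "school_status": "enrolled"},
]

# 1. Address missing information: fill prior_contacts with the mean of known values.
known = [r["prior_contacts"] for r in records if r["prior_contacts"] is not None]
fill = mean(known)
for r in records:
    if r["prior_contacts"] is None:
        r["prior_contacts"] = fill

# 2. Encode the categorical variable as one-hot indicator columns.
for r in records:
    for c in sorted({x["school_status"] for x in records}):
        r[f"school_status_{c}"] = 1 if r["school_status"] == c else 0

# 3. Standardize the numerical variable age to zero mean and unit variance.
mu, sigma = mean(r["age"] for r in records), pstdev(r["age"] for r in records)
for r in records:
    r["age_std"] = (r["age"] - mu) / sigma
```

A production pipeline would use a vetted library (e.g., pandas or scikit-learn transformers), but the steps are the same.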

Feature selection

Not all collected data is relevant for predicting risk. Feature selection involves identifying the most important factors that contribute to the risk level. These could be characteristics exhibited by the juvenile or others in their family and environment.
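One common way to operationalize this is to rank candidate features by the strength of their association with the outcome. The sketch below uses Pearson correlation on invented data; the feature names and labels are illustrative only.

```python
# Sketch: rank candidate features by absolute correlation with the outcome,
# then keep only the strongest. Data and names are hypothetical.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

features = {
    "substance_abuse_history": [1, 1, 0, 0, 1, 0],
    "positive_role_model":     [0, 0, 1, 1, 0, 1],
    "intake_form_version":     [2, 1, 2, 1, 2, 1],  # plausibly irrelevant
}
reoffended = [1, 1, 0, 0, 1, 0]

# Strongest association first; weakly correlated features fall to the bottom.
ranked = sorted(features,
                key=lambda f: abs(pearson(features[f], reoffended)),
                reverse=True)
```

Correlation is only one screening tool; a real deployment would also test selected features for proxy bias before use.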

Model training

A machine learning model is trained on a portion of the data to identify patterns that correlate with various risk levels. For example, a model might recognize that juveniles with substance abuse history and a lack of positive role models are more likely to reoffend.
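To make the idea concrete, here is a from-scratch logistic-regression trainer on a tiny invented dataset mirroring the example above (substance abuse history and presence of a positive role model). This is a sketch, not a production model; a real system would use an audited library and far more data.

```python
# Toy logistic regression trained by stochastic gradient descent.
# Features per case: [substance_abuse_history, has_positive_role_model].
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=1000):
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

X = [[1, 0], [1, 0], [0, 1], [0, 1], [1, 1], [0, 0]]
y = [1, 1, 0, 0, 1, 0]  # 1 = reoffended in this invented sample
w, b = train(X, y)
```

After training, the learned weights reflect the pattern in the data: cases with a substance abuse history and no positive role model score as higher risk.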

Model validation

The model is then tested on a distinct data set to ensure it’s making accurate and reliable predictions.
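The key discipline here is the holdout: the model is scored only on records it never saw during training. The sketch below uses a stand-in predictor and invented data to show the split-and-score mechanics.

```python
# Holdout validation sketch: split data, evaluate on the unseen portion.
import random

# Invented labeled data: ([features], outcome); the rule "first feature
# present => reoffended" holds by construction.
data = [([1, 0], 1), ([0, 1], 0), ([1, 1], 1), ([0, 0], 0)] * 10
random.seed(42)
random.shuffle(data)

split = int(len(data) * 0.75)
train_set, test_set = data[:split], data[split:]

# Stand-in "trained model": predict 1 whenever the first feature is present.
def predict(x):
    return x[0]

# Accuracy on held-out cases only.
accuracy = sum(predict(x) == label for x, label in test_set) / len(test_set)
```

Real validation would also track calibration and subgroup error rates, not just overall accuracy, given the fairness concerns discussed later.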

Risk prediction

The validated model applies its learning to predict the risk level of new cases. It is vital to remember that these are statistical insights and should be used to complement a wider decision-making process predominantly conducted by humans.

Continuous learning

As more data is incorporated, the model will continue to learn and improve the precision of its future predictions.
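A minimal sketch of this, assuming a toy frequency-based model: each adjudicated outcome updates the model's estimate for a case profile, so predictions sharpen as evidence accumulates. Profile keys and outcomes are invented.

```python
# Toy continuous-learning model: tracks the observed reoffense rate per
# profile key and updates with every new adjudicated outcome.
from collections import Counter

class ReoffenseRateModel:
    def __init__(self):
        self.counts = Counter()
        self.reoffenses = Counter()

    def update(self, profile_key, reoffended):
        """Incorporate one new observed outcome."""
        self.counts[profile_key] += 1
        self.reoffenses[profile_key] += int(reoffended)

    def predict(self, profile_key):
        """Estimated reoffense rate; 0.5 when there is no evidence yet."""
        if self.counts[profile_key] == 0:
            return 0.5
        return self.reoffenses[profile_key] / self.counts[profile_key]

model = ReoffenseRateModel()
model.update("no_support_network", True)
model.update("no_support_network", True)
model.update("no_support_network", False)
```

Real systems typically retrain on a schedule with revalidation each cycle, rather than updating live, so that every deployed version has been reviewed.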

The caveat of AI in juvenile justice

When discussing the use of AI in this type of risk assessment, it's important to acknowledge that, like many evidence-based practices and programs in their early adoption, AI is likely to face controversy, skepticism, and debate.

AI has encountered this already, particularly in relation to concerns about biases in the data and algorithms, as well as the possibility that AI could reinforce existing social inequalities. Further, AI has not yet reached a sufficient maturity level to drive decisions on its own; it should instead be regarded as an assistive technology that enhances the professional’s decision-making process. It's crucial that any AI system used in this way is transparent, accountable, and subject to regular review.

Leverage AI to enhance decision-making processes

We invite agencies to explore how our teams have successfully equipped several states with innovative solutions that enhance adherence to the key tenets of the JJDPA. Through collaboration and our commitment to helping states meet compliance with the JJDPA, we can make a positive and lasting impact on juvenile justice and delinquency prevention.


Endnotes

  1. U.S. Congress, “Juvenile Justice and Delinquency Prevention Act of 1974.” Office of Juvenile Justice and Delinquency Prevention. United States government, Department of Justice. https://ojjdp.ojp.gov/about/legislation.

Let's talk!

Interested in learning more? We'd love to connect and discuss the impact CAI could have on your organization.
