Machine learning can help prevent repeat domestic violence offenses
Richard Berk, a professor of criminology and statistics in the School of Arts & Sciences and the Wharton School, and Susan B. Sorenson, director of the Evelyn Jacobs Ortner Center on Family Violence, have made an important discovery about domestic violence: Using machine learning during the arraignment process, when a judge or magistrate decides whether to detain or release an accused offender, could prevent more than 1,000 domestic violence arrests per year in a single large metropolitan area.
They published their work in the March issue of the Journal of Empirical Legal Studies.
“For those making the decisions, it’s helpful to have additional information as to whether somebody is a good risk for being released,” says Sorenson, a professor of social policy in the School of Social Policy & Practice. “Otherwise it’s sort of a seat-of-the-pants process.”
First, the computer must be trained. “Here, machine learning is basically an algorithm that says, ‘Determine which descriptions of individuals are strongly associated with the outcome,’” says Berk. “Then you give the computer certain instructions on how to look.”
For this research, a computer “learned” from data on more than 28,000 arraignments that took place from the beginning of 2007 to October 2011, plus a two-year follow-up period. Berk and Sorenson initially considered more than 35 inputs, such as age, gender, and prior domestic violence or weapons charges.
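As a rough illustration of that training step, the sketch below fits a random-forest classifier, a method Berk has used in related forecasting work, to hypothetical arraignment records. The file name, feature names, and model settings are illustrative assumptions, not the study’s actual inputs or algorithm:

```python
# Illustrative sketch only: the file, feature names, and settings are
# hypothetical stand-ins, not the study's actual data or model.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical arraignment records: one row per case, with predictors
# like those described above (age, gender, prior charges) and a label
# marking whether a new domestic violence arrest occurred in follow-up.
df = pd.read_csv("arraignments.csv")  # assumed file layout
features = ["age", "gender", "prior_dv_charges", "prior_weapons_charges"]
X = pd.get_dummies(df[features], columns=["gender"])
y = df["reoffended_within_2_years"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# The "instructions on how to look": grow a forest of decision trees
# that learns which combinations of inputs are associated with
# re-arrest. The class weights penalize missing a future reoffender
# more heavily than flagging someone who would not reoffend,
# reflecting the asymmetric costs of the two errors.
model = RandomForestClassifier(
    n_estimators=500, class_weight={0: 1, 1: 10}, random_state=0
)
model.fit(X_train, y_train)
```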
They found that machine-learning forecasts of new domestic violence arrests—particularly those that resulted in injuries or attempts to inflict injury—can be surprisingly accurate, Berk says.
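Accuracy on held-out cases could be examined along the lines of the sketch below, which continues the example above. The quantity of interest is the reoffense rate among the cases the model would release, the same “failure rate” Berk cites later; the code is again illustrative, not the paper’s actual evaluation:

```python
# Continues the training sketch above (model, X_test, y_test).
from sklearn.metrics import confusion_matrix

y_pred = model.predict(X_test)

# Confusion matrix for the binary forecast: how often the model misses
# a future reoffender versus flagging someone who would not reoffend.
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(f"missed reoffenders (false negatives): {fn}")
print(f"flagged non-reoffenders (false positives): {fp}")

# Failure rate among cases the model would release: the figure to
# compare against the rate under current arraignment practice.
released = y_pred == 0
failure_rate = y_test[released].mean()
print(f"reoffense rate among model-released cases: {failure_rate:.1%}")
```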
This approach takes a different tack from many of the field’s risk-assessment tools, which emphasize victims’ needs and the provision of victim services, such as protection in a shelter, marriage counseling, or random check-in phone calls.
“We’re focusing on the perpetrator to try to prevent [that person] from reoffending,” Sorenson says.
Although risk-assessment tools increasingly inform other criminal justice decisions—sentencing and parole, for example—it remains to be seen whether they will become routine at arraignments. Opponents contend that algorithmic risk assessments such as these can generate too many false positives, that their use perpetuates stereotypes, and that they may result in untoward consequences for offenders later found innocent of wrongdoing.
The researchers agree that these tradeoffs matter, but point out that they already exist without machine-learning risk forecasts. The proper baseline, they say, is current practice, and computer assistance can lead to better decisions than those now being made at arraignments.
“Under existing practices, about 20 percent of the individuals who are released on domestic violence charges reoffend,” Berk says. “If you use our system, only 10 percent reoffend. We’re cutting in half the failure rate.”