The ‘Personalized Advantage Index,’ a Decision-Making Tool, Developed at Penn
One of the primary social motivations for scientific research is the ability to make better decisions based on the results. But whether it is deciding what material to use in making a solar panel, what antibiotic to use on an infection or when to launch a satellite, most decisions involve weighing multiple factors, all of which interact with one another in determining the best course of action.
Now, researchers at the University of Pennsylvania and the University of Pittsburgh have developed a decision-making model that compares and weights multiple variables in order to predict the optimal choice.
They tested their model on data from a study of patients seeking treatment for depression, who received either cognitive behavioral therapy or medication. By using the model to generate a score for each patient indicating which treatment was likely to be more effective for him or her, the researchers demonstrated an average benefit comparable to the advantage an effective treatment has over a placebo.
Called the “personalized advantage index,” this analytic tool could be used not just in personalized medicine but in any decision-making scenario with complex, and potentially conflicting, variables.
“If you pay attention to only one variable, you’re going to make a decision that is only true with all else being equal,” said Robert DeRubeis, professor and chair of the Department of Psychology in Penn’s School of Arts and Sciences. “But we know that all else is not equal. We need to take all of those inequalities into account at once to find out what is likely to work the best.”
The study was led by DeRubeis and Zachary Cohen, a doctoral candidate in Psychology. Nicholas Forand, Lois Gelfand and Lorenzo Lorenzo-Luaces, also of Psychology, contributed to the research. The Penn team collaborated with Jay C. Fournier, assistant professor of psychiatry at the University of Pittsburgh.
The study was published in the journal PLOS ONE.
In developing the personalized advantage index, the researchers chose a decision-making case with which they were familiar as psychologists: which treatment would be more effective for a depressed patient, cognitive behavioral therapy or medication? Both types of treatments have been shown to be effective in combating depression, but some patients respond better to one type than the other.
The researchers drew upon a longitudinal study of 154 patients who received one of those two treatments. The study collected data on the success of the treatment each patient received, as well as personal information that might play a role in which treatment each patient was best suited for, such as marital status, number of previous exposures to antidepressant medications or number of negative life events experienced in the past year. Psychologists have long used this type of information to make clinical recommendations but have never been able to look at all of the relevant data in a comprehensive way.
“The status quo for many decades,” Cohen said, “has been for clinicians to either use clinical judgment, intuition based on what they have done before and the results they’ve seen, or to use a single variable to push the decision in one direction or the other.”
Many randomized controlled trials have examined single variables, such as marital status or negative life events, to determine whether they are indicative of one treatment’s effectiveness over another. While such studies have occasionally provided strong evidence that certain traits correlate with the effectiveness of a given treatment, they have a serious limitation when it comes to applying them to clinical decision-making.
“These studies always look at those variables in isolation,” Cohen said. “That leaves clinicians with little ability to go beyond those single variables. What happens if you have a patient come in with conflicting variables? Which one do you trust?”
In building the personalized advantage index, the researchers’ goal was not to determine which variable to trust above all others but to determine the degree to which each variable contributed to the outcome. They used a “leave-one-out” validation method, cycling through each of the patients and using the relationships between the variables and outcomes for the other 153 to predict the result for the 154th. This method was important for avoiding “overfitting,” in which a model tuned too closely to the data used to develop it makes accurate predictions for those patients but fails to generalize to those outside the data set.
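To make the idea concrete, the following is a minimal sketch of such a leave-one-out loop, assuming a pandas DataFrame named patients with predictor columns and an observed outcome column; the variable names and the use of a simple linear regression are assumptions for illustration, not the authors’ actual analysis code.

```python
# Illustrative leave-one-out validation loop (hypothetical names; the
# published analysis may differ in model choice and details).
import pandas as pd
from sklearn.linear_model import LinearRegression

def leave_one_out_predictions(patients: pd.DataFrame, predictors: list) -> pd.Series:
    """Fit a model on all other patients, then predict the held-out patient's outcome."""
    predictions = {}
    for i in patients.index:
        train = patients.drop(index=i)   # e.g., the other 153 patients
        test = patients.loc[[i]]         # the single held-out patient
        model = LinearRegression().fit(train[predictors], train["outcome"])
        predictions[i] = model.predict(test[predictors])[0]
    return pd.Series(predictions)
```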
The result of this statistical technique is an algorithm that maximizes the predictive value of the group of variables by assigning a weight to each one. This provides useful context for the variables that both intuition and isolated studies suggest may play a role in a given decision’s outcome, as well as a new lens through which to examine the efficacy of those individual variables.
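As a hedged illustration of how such weighted predictions can be turned into a per-patient advantage score, the sketch below fits one model per treatment arm on the remaining patients, predicts the held-out patient’s outcome under each treatment, and takes the difference as that patient’s index. The column names, treatment labels and choice of linear regression are assumptions for the example only.

```python
# Illustrative computation of a per-patient advantage index under
# leave-one-out validation (hypothetical names and model choice).
import pandas as pd
from sklearn.linear_model import LinearRegression

def personalized_advantage_index(patients: pd.DataFrame, predictors: list) -> pd.Series:
    pai = {}
    for i in patients.index:
        train = patients.drop(index=i)
        test = patients.loc[[i]]
        # One weighted model per treatment arm, fit on the remaining patients.
        cbt = train[train["treatment"] == "CBT"]
        med = train[train["treatment"] == "medication"]
        cbt_model = LinearRegression().fit(cbt[predictors], cbt["outcome"])
        med_model = LinearRegression().fit(med[predictors], med["outcome"])
        # The signed difference between the two predicted outcomes is the
        # patient's advantage index; its sign indicates the favored treatment.
        pai[i] = (cbt_model.predict(test[predictors])[0]
                  - med_model.predict(test[predictors])[0])
    return pd.Series(pai)
```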
“Personalized medicine approaches might involve genetic assays or neuroimaging or other kinds of diagnostics,” Cohen said. “The cost of all of this data gathering needs to be weighed against the value it provides to the ultimate prediction of efficacy, which is something the personalized advantage index can help us investigate.”
While the researchers cannot yet say how accurate the personalized advantage index’s predictions will be outside the context of their test case, the early results suggest it would have clinical value.
“If this treatment selection approach had been used when the patients in our data set were actually being treated,” DeRubeis said, “it would likely have produced an average benefit that’s equivalent to what you see between groups given medication and groups given a placebo.”
Machine learning and other artificial intelligence techniques could further improve the index by comparing variables in complex, non-linear ways, opening the door to a new way of doing analytics for many types of applications, including ones with more complicated variables and outcomes.
“This is a way to begin to close the chasm between the wealth of information on how to improve outcomes and how that information is actually applied,” DeRubeis said.
The research was supported by the National Institute of Mental Health.
Nicholas Forand is now an assistant professor at The Ohio State University.