What if scientists and policy-makers could automatically filter the massive amounts of data in the systems they work with, pulling out information that could help solve problems in food security, personalized health care, and climate change? Machine learning holds the potential to extend human intuition and recognize patterns in unlikely places.
Nat Trask, associate professor in mechanical engineering and applied mechanics, is combining his expertise in applied mathematics and traditional physics modeling with the powers of machine learning to uncover an abundance of applications through what he calls “self-driving labs.”
“A self-driving lab is like a self-driving car,” says Trask. “In the car, AI tells a mechanical system what to do based on the incoming information and the parameters of the system. In a self-driving lab, AI would tell robots to perform certain experiments in a certain order based on the data it is receiving. There might be a thousand experiments running at one time, each providing different insights from large datasets.”
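The loop Trask describes, where an algorithm decides which experiment to run next based on the data collected so far, can be sketched in a few lines. This is a toy illustration only: the function names (`run_experiment`, `pick_next`), the one-dimensional "experiment," and the simple explore-farthest heuristic are all assumptions for the sake of the sketch, not part of any real lab-automation system.

```python
import random

def run_experiment(setting):
    # Stand-in for a robot performing one experiment: a noisy measurement
    # of an unknown response surface that peaks at setting = 7.
    random.seed(setting)  # seeded so the illustration is deterministic
    return -(setting - 7) ** 2 + random.uniform(-0.5, 0.5)

def pick_next(tried, candidates):
    # Crude acquisition rule: choose the untried setting farthest from
    # everything already measured (pure exploration).
    untried = [c for c in candidates if c not in tried]
    return max(untried, key=lambda c: min(abs(c - t) for t in tried))

candidates = list(range(0, 15))
tried = {0: run_experiment(0), 14: run_experiment(14)}  # two seed runs

# The "self-driving" loop: measure, update, decide what to try next.
for _ in range(6):
    nxt = pick_next(tried, candidates)
    tried[nxt] = run_experiment(nxt)

best = max(tried, key=tried.get)
print(f"best setting found: {best}")
```

A real self-driving lab would replace the heuristic with a statistical acquisition function and the toy measurement with instructions dispatched to physical robots, but the measure–update–decide structure is the same.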
Trask aims to design some of his first machine-learning-powered, self-driving labs at Penn. These labs would allow researchers from different fields to discover new patterns and connections across their work, and help drive innovative solutions in medical diagnosis and treatment, energy storage, and sustainable material discovery.
As the co-director of a Department of Energy-funded Mathematical Multifaceted Integrated Capability Center called SEA-CROGS, Trask is working with six other universities and two national laboratories to develop next-generation scientific machine learning architectures to address needs in high-consequence engineering settings, such as improving the analysis of climate systems and identifying early warning signs of extreme weather.
Trask is already using the power of machine learning in his own work to produce more accurate physics models.
Traditionally, physics modeling has been done by studying individual interactions at each scale, from atoms to molecules to millimeter-scale materials and so forth. These observations, paired with restrictive assumptions, produce relatively simple models that don't apply to every scenario. Machine learning allows these interactions to be observed and understood at a higher resolution.
“Now, we can plug one example of interaction at each size scale into an algorithm and have it do the hard work of identifying useful information,” says Trask. “It can apply the known laws of physics, such as ‘f=ma,’ and then inform a model of how matter will interact across these scales. By removing the limitations of our own human cognition in this approach, we also remove the restrictive assumptions applied in previous approaches and produce a more accurate model.”
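The idea in the quote, learning how matter interacts from data rather than assuming it, while still enforcing a known law like F = ma, can be illustrated with a minimal sketch. The linear spring system and least-squares fit below are assumptions chosen for simplicity; they stand in for, and are not, the scientific machine learning architectures Trask's group actually develops.

```python
# Sketch: recover an unknown force law from observed motion, using the
# known law F = m*a as the bridge between data and model.

mass = 2.0
true_k = 3.0  # hidden spring stiffness that the "experiment" obeys

# Observed (position, acceleration) pairs from the system, where the
# underlying (unknown to the modeler) dynamics give a = -(k/m) * x.
observations = [(x / 10.0, -(true_k / mass) * (x / 10.0)) for x in range(-5, 6)]

# Fit the force law F(x) = -k*x by least squares: for each observation,
# the measured force is m*a, and we solve for the k that best explains it.
num = sum(x * (mass * a) for x, a in observations)
den = sum(x * x for x, _ in observations)
k_fit = -num / den

print(f"recovered stiffness k = {k_fit:.2f}")
```

The modeler never assumed the stiffness; it was identified from data, with F = ma supplying the physical constraint. Scaling this idea up, to many interacting scales and to neural-network force models instead of a one-parameter spring, is the kind of task the algorithms described above take on.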
Read more at Penn Engineering Today.