The Class of 2022 law student works to identify biases and ‘stereotype threat’ in AI and help provide context for the conversation around mitigating those biases.
Historian of science Elly Truitt’s multidisciplinary investigations of the Middle Ages challenge assumptions about the period as a dark time in innovation and prompt a rethink of notions of ‘modern’ science.
Drawing on field research, the assistant professor of sociology examines the specific real-world conditions under which software systems replace, complement, or create human labor.
Student interns worked this summer with the Davis Lab in the Penn Epilepsy Center to research improvements to epilepsy diagnosis using the tools of machine learning and network analysis.
A collaboration with nursing, engineering, and the medical device provider will develop new technologies to assist clinicians via “safe AI.”
Wharton School announces new AI for Business initiative. Led by AI expert and Wharton professor Kartik Hosanagar, AI for Business will enable students, faculty, and industry partners to explore the next phase of digital transformation.
A forthcoming article co-authored by Penn Law’s Cary Coglianese explores algorithmic governance, examining how machine-learning algorithms are currently used by federal and state courts and agencies to support their decision-making.
For Penn synthetic biologist César de la Fuente and his team, these concepts aren’t some far-off ideal. They’re projects already in progress, and they have huge real-world implications should they succeed.
“The Ethical Algorithm” describes how algorithms can inadvertently share private information or perpetuate racial and gender biases, and offers principled solutions that can help researchers design the next generation of socially aware algorithms.
Experts from Penn share their perspectives on the role of advanced algorithms and AI in health care and what the future holds for digital health technologies.
Cary Coglianese of the Law School argued that people deserve to be listened to by real humans when faced with life-altering decisions, even amid the rise of automation in government agencies. “The public’s need for empathy, though, does not mean that government should avoid automation,” he wrote. “If planned well, the transition to an automated state could, surprisingly, make interacting with government more humane, not less.”
Aaron Roth of the School of Engineering and Applied Science spoke about synthetic data and privacy concerns. “Just because the data is ‘synthetic’ and does not directly correspond to real user data does not mean that it does not encode sensitive information about real people,” he said.
Michael Kearns of the School of Engineering and Applied Science spoke about ethics and artificial intelligence, saying that regulatory agencies “are playing a serious game of catch-up. They don’t understand the technologies that they’re regulating anymore, or its uses, and they have no means of auditing it.”
Jason Moore of the Perelman School of Medicine spoke about how AI and machine learning are aiding the fight against COVID-19, but also warned, “If you’re only studying primarily Caucasian populations and want to apply that nationally, that may not work as well on a more diverse population. AI algorithms themselves can be biased and can pick up and inflate biases in the data. Those are the things I worry about.”
César de la Fuente of the School of Engineering and Applied Science commented on new MIT research that might speed up antibiotic discovery. “I think it’s a breakthrough in a field of much unmet need,” he said. “After all, no new classes of antibiotics have been discovered for decades. This one is definitely structurally different from conventional antibiotics.”
Michael Kearns of the School of Engineering and Applied Science said algorithms force us to be more detailed in our decision-making. “You should never expect machine learning to do something for free that you didn’t explicitly ask it to do for you, and you should never expect it to avoid behavior that you want it to avoid that you didn’t tell it explicitly to avoid,” he said.