With artificial intelligence evolving faster than human imagination, traditional avenues of regulation may not work as well as they have for other business sectors.
To safely and efficiently oversee AI, governments need to turn to a more flexible system: not immovable guardrails but adjustable “leashes,” Penn Carey Law professor Cary Coglianese writes in a new article.
“I like to say that AI is not a technology but a suite of many different technologies,” Coglianese says. “That’s why specifying very clear prescriptive rules, or guardrails, is so unrealistic.
“We want technological innovation to explore and traverse, but we want to make sure that, as it’s developing, somebody’s overseeing it,” he says. “This is like wanting to make sure that a large dog can wander around on a walk through the neighborhood but is not going to hurt small children at the playground.”
Coglianese and co-author Colton Crum, a Ph.D. candidate in computer science and engineering at the University of Notre Dame, recently published an article in the journal Risk Analysis arguing for a new AI regulatory approach—a “leash” rather than a “guardrail”—that mirrors systems used for addressing some pollution and food safety problems in the United States and around the world.
The leash approach is also called “management-based regulation,” a model that aims to reduce risks by requiring companies to adopt steps such as quality-control processes, testing, auditing, and algorithmic impact statements. The “leash” moniker takes inspiration from walking a dog: the dog has some agency in where it goes while remaining under human handling and supervision.
“The firms themselves are able to adapt those risk-management planning processes or leashes around their particular algorithms and how they’re being used,” Coglianese says. “The risk management is mandated and enforced like any regulation, but the regulatory obligations speak to how a firm manages its technology. It’s not going to be regulation via a rigid, fixed guardrail.”
The standard way of thinking about regulation in the U.S. is as a “guardrail” that uses very specific mandates to prompt safety action, such as requirements that firms install seat belts or airbags in cars to make them safer, Coglianese says. “Anything related to the development, design, and use of technology that is rigid and imposes a bright-line, static rule is what we would call a guardrail,” such as a flat prohibition on a particular type or use of AI.
A “leash,” by contrast, is a system with enough human oversight to keep AI safe or within bounds while it wanders, innovates, and explores. A parallel approach has been applied for years in the food-safety realm, where “there’s also no one-size-fits-all,” Coglianese says. “You can’t say every firm needs to make orange juice the same way because it turns out that every fruit juice manufacturer in the country has a different process.”
The article looks at potential harms that could come from AI systems, including automobile collisions, disturbing content on social media, and bias in classification software, concluding that guardrail systems are not sufficient to prevent problems. “The ‘roads’ that AI travels down are too numerous and, with generative AI especially, they are constantly changing,” Coglianese and Crum write.
Many AI firms today recognize the need for safety and public trust in their products or services, and thus the need for regulation, Coglianese says.
“The future of the industry does depend on making sure that these tools are not going to create catastrophes or public controversies,” he says. “Some of the big firms are already doing a lot of the sorts of things that would go into a required leash on AI, such as by doing internal auditing and testing before these things go out.”
Still, Coglianese says, neither guardrails nor leashes are necessarily a complete solution by themselves. He emphasizes that regulation must be dynamic and responsive to new developments and problems.
“A leash approach can also be used as an interim step toward guardrails. If we ever were to get to a point where we can get greater standardization, maybe there will be settled use cases where it becomes clear exactly what makes sense for prescriptive rules to be imposed. One can always take a dog that’s on a leash and move it into a fenced yard.”