Law professor Catherine Sharkey explains how artificial intelligence is being used to tackle the arduous work of keeping our federal agencies in check.
The sweeping executive order on artificial intelligence (AI) signed by President Biden on October 30, 2023, emphasizes risk reduction, rigorous testing of AI systems, and safety issues. Less well known is that it also pledges to promote AI innovation in government.
For years, this issue has been a research focus for Sharkey, professor of regulatory law and policy at New York University. An expert in administrative law who has written extensively about government agencies’ use of artificial intelligence, Sharkey has been specifically examining the use of AI for reassessing the effectiveness of existing regulations, otherwise known as “retrospective review.” The process involves federal interagency communication about potentially duplicative or conflicting regulations. Agencies also issue requests for public comment on how existing regulations can be modified, streamlined, expanded, or repealed.
In May, Sharkey produced a report for the Administrative Conference of the United States (ACUS) that assessed government agencies’ past, current, and future use of AI in retrospective review, drawing on extensive research supplemented by interviews with dozens of federal government employees and other professionals with an interest in governmental use of AI. Prior to this ACUS study, limited information was available about how agencies employed algorithms to aid in retrospective review, and Sharkey’s report is the basis for ACUS’s official recommendation, “Using Algorithmic Tools in Retrospective Review of Agency Rules.”
Here, Sharkey speaks about the evolving intersection of technology and government regulation and how executive agencies can integrate AI into rulemaking processes:
What are the risks presented by government agencies’ use of AI for retrospective review? And what’s the upside?
Machine learning and other AI technologies are apt tools for retrospective review, which is extremely important for good governance. It is how agencies make sure existing regulations are not inconsistent, conflicting, or outmoded, and a process that the Administrative Conference of the United States has urged federal agencies to undertake for decades. But it’s also extremely labor-intensive, and typically a low priority for any agency, which would rather devote its scarce resources to its agenda of protecting health and safety, the environment, or the like. These technologies could assist by automating the tasks that retrospective review entails. That’s the main advantage.
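For a concrete sense of what that automation might look like, here is a minimal sketch, not drawn from Sharkey’s report: the rule numbers, provision texts, and similarity threshold below are invented for illustration. It uses TF-IDF vectors and cosine similarity to flag textually similar provisions as candidates for human review.

```python
# Hypothetical illustration: flag pairs of regulatory provisions whose text
# is highly similar, as candidates for human review of potential duplication
# or conflict. Rule numbers and provision texts are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

provisions = {
    "Rule 101.3": "Operators must submit an annual safety report to the agency.",
    "Rule 204.7": "An annual report on safety must be filed with the agency by all operators.",
    "Rule 310.1": "Facilities shall retain records of employee training for five years.",
}

ids = list(provisions)
vectors = TfidfVectorizer(stop_words="english").fit_transform(provisions.values())
similarity = cosine_similarity(vectors)

# Surface provision pairs above an (arbitrarily chosen) similarity threshold.
THRESHOLD = 0.5
for i in range(len(ids)):
    for j in range(i + 1, len(ids)):
        if similarity[i, j] >= THRESHOLD:
            print(f"{ids[i]} <-> {ids[j]}: similarity {similarity[i, j]:.2f}")
```

In this toy example, the first two rules, which impose essentially the same reporting duty in different words, would be flagged for a human reviewer, while the unrelated recordkeeping rule would not.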
A disadvantage is that these technologies require a machine-readable data infrastructure. So the startup costs for an individual agency could be prohibitive.
But a way forward, and one actually very consistent with the thrust of President Biden’s new executive order on AI, is to start thinking about coordination across federal agencies. I think this is an area where each agency doesn’t necessarily have to go it alone. An agency in the vanguard, for example Health and Human Services, or the Department of Defense, which already has a strong internal AI strategy office, could be a leader and possibly develop technologies that could be shared with other agencies, including agencies that are less well-resourced or have fewer employees with the requisite technological expertise.
How would AI in retrospective review fit into the rulemaking process?
It’s interesting to think about how emerging technologies like machine learning and AI will affect the entire lifecycle of a federal regulation. Retrospective review really is at the end of the lifecycle; it’s a look-back, after a regulation has been promulgated, to see whether an agency’s rules are conflicting or overlapping.
But retrospective as it may be, this review also plays a role in setting a future regulatory agenda. It can help agency officials identify areas that are over-regulated and areas in need of additional regulation.
If agencies start using these technologies, they may become more accustomed to performing regulatory review on a regular basis, which could lead to more structured rulemaking going forward. We have already seen the integration of sophisticated technology into agency work, such as natural language processing models that order text-based documents, classify them into groups, and so on. But maybe that isn’t the right starting point. Maybe if we started with a more structured format of rules, one that these technologies could more easily leverage, it would be even more efficient.
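To make the structured-rules idea concrete, here is a hypothetical sketch. The schema, field names, and sample rules are invented for illustration and are not any agency’s actual format; the point is that explicit, machine-readable cross-references would turn one retrospective-review check, finding active rules that cite repealed provisions, into a simple lookup rather than a text-mining problem.

```python
# Hypothetical sketch of a structured, machine-readable rule format.
# The schema, field names, and sample rules are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Rule:
    citation: str
    issuing_agency: str
    effective_date: str              # ISO 8601 date string
    status: str                      # "active" or "repealed"
    cross_references: list[str] = field(default_factory=list)

rules = {
    "Rule 101.3": Rule("Rule 101.3", "Agency A", "2019-03-01", "active",
                       cross_references=["Rule 204.7"]),
    "Rule 204.7": Rule("Rule 204.7", "Agency A", "2001-06-15", "repealed"),
}

# Because cross-references are explicit fields, checking whether an active
# rule still cites a repealed provision is a dictionary lookup.
for rule in rules.values():
    if rule.status != "active":
        continue
    for ref in rule.cross_references:
        target = rules.get(ref)
        if target is not None and target.status == "repealed":
            print(f"{rule.citation} cites repealed provision {ref}")
```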
You note that AI tools for retrospective review must be open source and interoperable with other government technology initiatives. Why are these two aspects so important?
With regard to open source, that’s really to prevent vendor lock-in. Some federal agencies are developing these tools in-house, and some are contracting out for them. I think it’s very important that one vendor isn’t locked in for the whole future; that could have anti-competitive effects. I don’t think we want any federal agency beholden to one particular private contractor, especially when it comes to thinking about transparency and the particular rules coming from the federal government.
In terms of making these tools interoperable with other initiatives, this goes back to the idea that machine learning and AI technologies are going to push strongly in the direction of coordination across federal agencies. They are going to require massive investments in infrastructure and in expert personnel.
At the moment, there are shared information technology [IT] services in the federal government. For example, the General Services Administration is involved in piloting new AI technologies and thinking about sharing them across the federal government. But the new executive order pushes even further, calling for a kind of interagency council and for each agency to designate a specific person with AI expertise. And so I do think it will make sense to consider how some of these technologies might become, for the next generation of federal government rulemaking, what shared IT services are today.
What do you forecast for the future of this intersection between AI and government regulation?
Everyone should be paying attention to the fact that on October 30, President Biden issued one of the longest executive orders ever, on the safe, secure, and trustworthy development and use of artificial intelligence. It covers enormous ground, but Section 10 specifically addresses advancing federal government use of AI. The Office of Management and Budget has just put out preliminary guidance, “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence,” for public review and comment.
The executive order emphasizes and encourages agency experimentation, and urges agencies to coordinate and share promising cases within the federal government that can serve as models.
Writ large, there is a lot we can learn by examining federal government uses of AI, particularly as agencies start using these technologies in public rulemaking. Their proposed rules are always circulated for public notice and comment, and once federal agencies promulgate rules, those rules can be challenged in court down the road. Because of that, I think the use of these technologies will put a premium on transparency and explainability: being able to defend the reasons and the process before judges. These are the critical concerns you hear about in the public debate, such as worries about AI being an opaque black box. So I do think that by shining a light on these federal government uses, we’ll learn a lot.
Additionally, I think everyone is recognizing that this is a transformative moment in terms of how federal agencies will regulate these technologies. For instance, the Food and Drug Administration is approving medical devices that incorporate AI, so it must build internal capacity to understand these technologies. Studying government agencies’ internal use of AI in regulatory decision-making, as well as in rulemaking, will shed enormous light on the right way to regulate these technologies out in the world.
Source: NYU