When artificial intelligence makes decisions that affect our lives, do we deserve an explanation?
For example, banks use AI to deny people credit; landlords use automated background checks to deny applicants housing; and judges turn to AI systems to help them decide who should be denied bail.
Often, these decision-makers provide little or no reason for their actions, which means that the people affected have no basis to object.
Kate Vredenburgh says individuals are, in fact, owed explanations when AI makes decisions that affect their lives. Vredenburgh, a 2019-2020 postdoctoral fellow at the McCoy Family Center for Ethics in Society and the Institute for Human-Centered Artificial Intelligence at Stanford University, will soon start an assistant professorship in the Department of Philosophy, Logic and Scientific Method at the London School of Economics.
Here, she speaks about how she became interested in the right to an explanation and what it would mean to implement such a right:
How did you develop an interest in ethics and technology?
I did my PhD at Harvard in philosophy, and while I was there, I had the opportunity to do some ethics teaching in computer science as part of a program called Embedded EthiCS. I actually went into it not knowing that much about computer science. So, from their side, the computer scientists were wondering, “Ethicists, who are they and what do they do?” while I was thinking, “Computer scientists, who are they and what do they do?” But it turns out that when thinking through the moral aspects of technology, it’s incredibly valuable to have a detailed understanding of the technology itself as well as of its social impacts.
There are a lot of topics that wouldn’t have been on my radar had I not had the Embedded EthiCS experience of going into these classrooms and designing ethics modules meant to show students that they were making some moral assumptions in their computer science research, or that the technology they were developing had serious societal impacts that they needed to consider. Through these experiences, I realized there was a lot of room for fruitful interdisciplinary as well as straightforward moral philosophy research around the moral concerns that technology raises.
How did you first happen upon “the right to an explanation” and what drew you to study it?
First, the European Union’s General Data Protection Regulation (GDPR) provides that people have a right to “meaningful information” about the logic behind automated decisions using their data. This law, in an interesting and potentially radical way, seems to mandate that any automated decision-making that people are subject to should be explainable to the person affected. That got me wondering: What does that mean? How do we implement that? And what does explanation really mean here?
But I also came to this topic through the kinds of everyday experiences that most of us have had when facing complex institutions that make decisions that greatly impact our lives—and where we may not understand what’s going on.
The example I like to use is Kafka’s The Trial [in which the main character, Josef K., is arrested and ultimately executed without ever learning what he is charged with]. The Trial is so horrifying to us not only because an innocent person is punished but also because the main character has no way to make the process fair. He can’t respond to the charge to disprove it. It is deeply concerning to us because he doesn’t have the information he needs for society’s basic institutions to function morally well.
Those twin examples got me thinking about the use of technology and algorithms in the criminal justice system, to help with setting bail, for example, where the technology may be proprietary, or complex and hard to understand. The worry is that in these contexts people don’t have the information they need to make the criminal justice process, or other decision-making processes, fair.
Why is it useful to think about having this right?
When we identify something as a right, we’re saying, “this is of such supreme moral importance that the costs of implementing it would have to be pretty high to justify not doing so.” Look at something like healthcare. That is so morally important that the cost to us as a society would have to be very high to justify not providing someone with healthcare.
Of course, when you’re thinking about implementing a new right, you do have to make sure it doesn’t create even more costs or raise even greater objections. For example, if implementing a right to an explanation means that decision-makers have to explain all of their decisions, some might object that this would be so costly that we ought not to implement it. Or they might object that some complex proprietary algorithms may have upsides, even if they are opaque: maybe they find complex patterns and produce decisions that make the world fairer, and we would have to give that up because those algorithms are too hard to explain. If you’re developing a proposal for implementing the right to an explanation, it’s important to grapple with these costs.
However, if you do a cost-benefit analysis without taking into account how morally important explanations are, you might, to be honest, end up with what we have now: employers are not required to explain why they fire people, and they definitely aren’t required to explain why they didn’t hire someone. In the criminal justice system, you get a lot of proprietary algorithms where decision-makers might say they did a cost-benefit analysis and the system looks more efficient or better for welfare overall, but the justification is incomplete at best.
But the strength of identifying something as a right is that I, as an individual, can object that even if the system is better overall, my rights are being violated, so we’re going to have to redesign the system to make it fair.
Would implementing this right potentially put a burden on individuals to seek out explanations of complex technologies?
On the surface, it does seem to place a huge burden on individuals. We all barely have the time to stay in touch with loved ones and do our laundry and dishes. How can we expect people to take on gathering and dealing with this new information?
For that reason, any changes will have to be structural. For example, the right to an explanation might mean we each have a fiduciary representative for our data. That could be a more realistic, and also more desirable, way to implement the right: third parties whose job is to field these questions, so people can come to them and say, “I’m confused about why this is happening to me; can you enlighten me?” We all need an expert sometimes, and often what makes outcomes unfair is that people can’t pay for those experts, so they get worse outcomes.
Are there areas where this right is more or less important?
We as a society have to make some fundamental moral commitments about what decision-makers in different kinds of domains owe to people. For example, in the field of law, it’s really important that the rules be clear so we can avoid being arrested, because being arrested can seriously impact our lives.
Now take the case of hiring. We might decide—and in fact we seem to have decided—that we don’t need as many explanations in this domain. Maybe it’s because we think it’s the private sector and so efficiency is super important. So maybe the best moral story there is, “Hey, we were being efficient, and your application wasn’t quite good enough; here, in broad strokes, is why,” and that’s all the decision-maker owes to people they don’t hire.
It’s important to go down to the moral foundations and ask, for each kind of domain, what’s morally most important and what kinds of information we need to give people.
Could requiring explanations incentivize algorithm designers to design morally justifiable algorithms from the get-go? Instead of thinking, “I have this algorithm and now I have to explain it,” should engineers think, “I’m going to have to explain this down the line, so I’d better make it a good thing for society”?
That’s absolutely right. I think explanations are incredibly important, but sometimes we get a bit too hung up on whether the rules are transparent or the algorithm’s methodology is clear. Sometimes what’s most important or objectionable is, should we be doing this at all? Was the purpose for which we built this algorithm a good purpose? And, if so, did we do that well?
If people building technology in the public or private sphere know that when they go to implement a system they will have to justify it precisely, then hopefully that will create incentives for building more morally justifiable algorithms. It will also set an expectation among people in society generally that this is what they are owed. And if that’s not what they get, they can hold decision-makers accountable and have a robust and inclusive conversation about whether technological systems live up to our individual or shared moral vision of how society ought to be.
Source: Katharine Miller for Stanford University