Ethan Zuckerman had an interesting reaction to his first experience with the TSA Pre-Check program, which lets frequent flyers go through a much shorter and less elaborate procedure at airport security checkpoints. Ethan’s concerns about unfairness are worth pondering, but I want to focus here on his call for more openness about the algorithm that selects people for enhanced search.
Public processes often involve algorithms, and the public has an interest in the openness of these processes. Today I want to expand on what we mean when we talk about this kind of openness. In my next post, I’ll work through a specific example, taken from airport security, and show how we can improve the public accountability of that algorithm.
When we talk about making an algorithmic public process open, we mean two separate things. First, we want transparency: the public knows what the algorithm is. Second, we want the execution of the algorithm to be accountable: the public can check to make sure that the algorithm was executed correctly in a particular case. Transparency is addressed by traditional open government principles, but accountability is different.
Sometimes accountability is easy. If the algorithm is deterministic and fully public, and all of the inputs to the algorithm are known, then accountability is trivial: just run the algorithm yourself, and check that your result matches the output of the public process. But accountability can be more challenging if the algorithm involves randomness, or if one or more of the inputs to the algorithm is (legitimately) secret.
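To make the easy case concrete, here's a minimal Python sketch. The selection rule and the passenger identifier are hypothetical stand-ins; the point is just that anyone can rerun a public, deterministic algorithm on public inputs and compare against the announced result.

```python
import hashlib

def public_rule(passenger_id: str) -> bool:
    """Stand-in for a published, deterministic selection rule.
    (Hypothetical: any public function of public inputs works.)"""
    digest = hashlib.sha256(passenger_id.encode()).digest()
    return digest[0] < 13   # selects roughly 5% of IDs (13/256)

# The authority announces an outcome for a particular passenger...
my_id = "alice"                  # hypothetical public input
announced = public_rule(my_id)   # in practice, received from the authority

# ...and anyone can check it by rerunning the public algorithm.
print(public_rule(my_id) == announced)   # True iff the rule was followed
```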
An example of a randomized algorithm comes from airport security. The algorithm might say that 5% of people will be selected at random for search. If officials tell you that you’re one of the unlucky 5%, how can you tell if you were really selected at random, as opposed to being singled out for a reason outside the official algorithm? An accountable algorithm would let you verify that things were done by the book.
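One standard way to make such a random selection verifiable, sketched below, is commit-then-reveal. This is an illustration of the general technique, not the actual TSA procedure: before screening, the authority publishes a hash of a secret random seed; afterward it reveals the seed, and each passenger can check both that the seed matches the prior commitment and that it really does (or doesn't) select them.

```python
import hashlib
import secrets

# --- Authority, before screening: commit to a secret random seed ---
seed = secrets.token_bytes(32)                 # kept secret until afterward
commitment = hashlib.sha256(seed).hexdigest()  # published in advance

def selected(seed: bytes, passenger_id: str) -> bool:
    """Derive the 5% selection pseudorandomly from the committed seed."""
    h = hashlib.sha256(seed + passenger_id.encode()).digest()
    return int.from_bytes(h[:4], "big") % 100 < 5

# --- Passenger, after the authority reveals the seed ---
def verify(revealed_seed: bytes, commitment: str,
           passenger_id: str, was_selected: bool) -> bool:
    """Check the revealed seed against the commitment, then check
    that it really does (or doesn't) select this passenger."""
    if hashlib.sha256(revealed_seed).hexdigest() != commitment:
        return False   # revealed seed doesn't match the prior commitment
    return selected(revealed_seed, passenger_id) == was_selected

print(verify(seed, commitment, "alice", selected(seed, "alice")))  # True
```

A real design would also need to keep the authority from grinding through candidate seeds until it finds one that happens to select a target, for example by mixing in randomness the authority doesn't control.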
Note that accountability does not require that you know in advance that you are going to be selected. In the airport setting, it’s important for security that people are surprised by their selection. What accountability requires is that you can verify afterward that your selection (or non-selection) was legit. To put it another way, an algorithm can be unpredictable yet still be accountable.
What about secret inputs? Again, we have an example from airport security. You can be selected for search because you’re on what I’ll call the “searchlist,” which designates a set of people who are considered higher-risk, so that they are always selected for enhanced search, though they’re not considered dangerous enough to be on the no-fly list. The searchlist is secret, but it is an input into the selection algorithm.
Here, an accountable algorithm would require the authorities to commit to a specific searchlist, but without telling the public what it was. Then the accountability mechanism would ensure that, if you were selected because you were allegedly on the searchlist, you could verify that the searchlist to which the authorities committed did indeed contain your name, but you could not learn anything more about the contents of the searchlist. This kind of accountability is possible using cryptographic methods.
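A Merkle tree is one standard way to build this kind of commitment; the sketch below is an illustration of the idea, not necessarily the scheme an agency would use. The authority publishes only the root hash, and can later prove that a particular name is a leaf by revealing a short path of sibling hashes. (A real design would salt or blind the leaves so the sibling hashes in a proof leak nothing about other names; this sketch omits that.)

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Commit to a list by hashing it down to a single root."""
    level = [H(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node if odd
            level.append(level[-1])
        level = [H(level[i] + level[i+1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes linking one leaf to the root."""
    level = [H(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1                    # sibling position at this level
        proof.append((level[sib], sib < index))
        level = [H(level[i] + level[i+1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_membership(root, leaf, proof):
    """Check an inclusion proof without seeing any other leaf."""
    node = H(leaf)
    for sibling, sibling_is_left in proof:
        node = H(sibling + node) if sibling_is_left else H(node + sibling)
    return node == root

# Hypothetical secret searchlist; only the root is published.
searchlist = [b"alice", b"bob", b"carol", b"dave"]
root = merkle_root(searchlist)            # the public commitment
proof = merkle_proof(searchlist, 1)       # proof that "bob" is on the list
print(verify_membership(root, b"bob", proof))   # True
```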
In practice, the search selection algorithm would probably say that a person is always selected if they are on the searchlist, and selected with 5% probability if they're not on the searchlist. Now, accountability says that you should be able to tell that your selection was correct, but you shouldn't be able to tell whether your selection was due to the searchlist or to random selection. This too turns out to be possible, combining the two accountability mechanisms described above, plus a bit more cryptography.
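The combined rule itself is simple to state in code (a sketch, with hypothetical names). The genuinely hard part, which this sketch does not attempt, is the accountability proof: convincing a selected passenger that the rule was followed without revealing which branch fired, which calls for zero-knowledge techniques layered on the commitments above.

```python
import hashlib
import secrets

def selected(seed: bytes, searchlist: set, passenger_id: str) -> bool:
    """Combined rule: always select searchlist members; select everyone
    else with roughly 5% probability derived from the committed seed."""
    if passenger_id in searchlist:
        return True
    h = hashlib.sha256(seed + passenger_id.encode()).digest()
    return int.from_bytes(h[:4], "big") % 100 < 5

seed = secrets.token_bytes(32)           # committed to as in the earlier sketches
print(selected(seed, {"bob"}, "bob"))    # True: searchlist members always selected
print(selected(seed, {"bob"}, "alice"))  # True roughly 5% of the time
```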
Accountable algorithms show up elsewhere besides airport security. For example, electronic voting systems often use post-election audits, which need to be accountable so that the public can be sure that the audit was done according to the approved procedures.
In my next post, I’ll work through an example, to show how an algorithm can be made accountable.