Friday, September 21, 2012

Algorithms Do It Better

One of the best things I've read all week was this great debate between Sam Harris and security expert Bruce Schneier over the merits of profiling in the context of TSA airport security checkpoints. I like Sam Harris, but his position justifiably gets destroyed by Schneier. A taste:
I’ve done my cost-benefit analysis of profiling based on looking Muslim, and it’s seriously lopsided.  On the benefit side, we have increased efficiency as screeners ignore some primary-screening anomalies for people who don’t meet the profile.  On the cost side, we have decreased security resulting from our imperfect profile of Muslims, decreased security resulting from our ignoring of non-Muslim terrorist threats, decreased security resulting from errors in implementing the system, increased cost due to replacing procedures with judgment, decreased efficiency (or possibly increased cost) because of the principal-agent problem, and decreased efficiency as screeners make their profiling judgments.  Additionally, your system is vulnerable to mistakes in your estimation of the proper profile.  If you’ve made any mistakes, or if the profile changes with time and you don’t realize it, your system becomes even worse.
It's worth reading the whole thing to better understand the many ways Harris is mistaken, but what interests me most is the general philosophical approach Harris takes in defending a profiling regime. He thinks profiling is simply common sense, because declining to profile throws away statistical information about demographics. In his view, a security system confronted with a white European wheelchair-bound grandma should consider the probability that she's a terrorist (tiny) and update its screening procedure accordingly (by devoting less attention to her). Conversely, a young Semitic-looking man would attract more scrutiny because he belongs to a riskier demographic category. On the face of it this seems rational, just as an employer operating with limited information might discriminate on the basis of race when hiring. But it's almost never worth it, and here's why: the logic changes when you move from a single one-off interaction to a system of repeated interactions. In a system like airport security, blind rules or algorithms become necessary because the costs of constantly updating your information quickly become exorbitant.
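
To make concrete how lopsided those base rates are, here's a quick back-of-the-envelope Bayes calculation. The numbers are purely illustrative assumptions of mine (none of them come from the debate), but they show why "updating on the demographic" buys so little: attackers are so rare that even a profile that catches most of them, while flagging only a tenth of everyone else, still leaves the flagged group almost entirely innocent.

```python
# Back-of-the-envelope Bayes calculation. All figures are illustrative
# assumptions, not numbers from the Harris/Schneier debate.

P_TERRORIST = 1e-7              # assumed prior: ~1 attacker per 10 million passengers
P_MATCH_GIVEN_TERRORIST = 0.9   # assumed: the profile catches 90% of attackers
P_MATCH_GIVEN_INNOCENT = 0.1    # assumed: the profile also flags 10% of innocents

# Total probability that a random passenger fits the profile
p_match = (P_MATCH_GIVEN_TERRORIST * P_TERRORIST
           + P_MATCH_GIVEN_INNOCENT * (1 - P_TERRORIST))

# Bayes' rule: P(attacker | fits the profile)
posterior = P_MATCH_GIVEN_TERRORIST * P_TERRORIST / p_match
print(f"P(attacker | fits profile) ~ {posterior:.2e}")  # still roughly 1 in a million
```

Even under assumptions generous to the profile, the quantity that matters barely moves, while every cost on Schneier's list above still applies.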

What Harris is really upset about is an unfortunate byproduct of adhering to blind maxims or algorithms: the occasional perverse outcome. If our security system doesn't update based on demographic statistics, ridiculous situations like the grandma scenario are bound to occur from time to time. But here's the rub: if we think we can be clever and have it both ways (say, by relaxing our adherence to the algorithm in obviously perverse cases), we end up with a worse system overall. Why? Because the experts we put in charge of making those calls almost always override the algorithm far more often than is warranted. This is the "case of the broken leg" problem described by psychologist Paul Meehl, and discussed in Ian Ayres' wonderful book Super Crunchers. It takes a more sophisticated understanding of how systems work, but we have to recognize that in the context of airport security, algorithms work better, even if that means accepting a certain minimal amount of perversity.
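
Meehl's point can be illustrated with a toy simulation. The setup and numbers below are my own assumptions, not anything from Meehl or Ayres: a blind rule that errs only on rare "broken leg" cases, and an expert who is allowed to override it. Even if we charitably assume the expert catches every genuine exception, the habit of also second-guessing routine cases (where unaided judgment is only modestly better than a coin flip) makes the combined system worse than the blind rule alone.

```python
import random

random.seed(0)

N = 100_000              # simulated cases
BROKEN_LEG_RATE = 0.01   # assumed fraction of genuinely exceptional cases the rule misses
OVERRIDE_RATE = 0.20     # assumed rate at which the expert overrides routine cases
EXPERT_ACCURACY = 0.60   # assumed accuracy of unaided judgment on routine cases

def simulate():
    algo_errors, expert_errors = 0, 0
    for _ in range(N):
        exceptional = random.random() < BROKEN_LEG_RATE
        # The blind rule is right except on the rare exceptional ("broken leg") cases.
        algo_correct = not exceptional
        # Charitable assumption: the expert overrides every true exception,
        # but also overrides a chunk of routine cases on intuition alone.
        overrides = exceptional or random.random() < OVERRIDE_RATE
        if overrides:
            expert_correct = True if exceptional else random.random() < EXPERT_ACCURACY
        else:
            expert_correct = algo_correct
        algo_errors += not algo_correct
        expert_errors += not expert_correct
    return algo_errors / N, expert_errors / N

algo_err, expert_err = simulate()
print(f"blind rule error rate:          {algo_err:.2%}")    # about 1%
print(f"rule + expert override errors:  {expert_err:.2%}")  # roughly 8%
```

The exact figures are arbitrary, but the qualitative result holds whenever overrides happen much more often than genuine exceptions do, which is precisely the pattern Meehl documented.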
