Friday, October 3, 2014

Quote of the week

Human individuals and human organizations typically have preferences over resources that are not well represented by an "unbounded aggregative utility function." A human will typically not wager all her capital for a fifty-fifty chance of doubling it. A state will typically not risk losing all its territory for a ten percent chance of a tenfold expansion. For individuals and governments, there are diminishing returns to most resources. The same need not hold for AIs. ... An AI might therefore be more likely to pursue a risky course of action that has some chance of giving it control of the world. 
"Humans and human-run organizations may also operate with decision processes that do not seek to maximize expected utility. For example, they may allow for fundamental risk aversion, or "satisficing" decision rules that focus on meeting adequacy thresholds, or "deontological" side-constraints that proscribe certain kinds of action regardless of how desirable their consequences. Human decision makers often seem to be acting out an identity or a social role rather than seeking to maximize the achievement of some particular objective. Again, this need not apply to artificial agents.
That's from Superintelligence: Paths, Dangers, Strategies by Oxford philosopher Nick Bostrom. The book is a sober, clear-eyed analysis of an issue that has been socially defined by its recurring role as a narrative device in science fiction. But the topic is real--and important. Continuing to overlook it in public affairs is increasingly risky. For a more accessible introduction to the potential risks of artificial intelligence, I highly recommend this short podcast.
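
To make the diminishing-returns point concrete, here is a minimal Python sketch (my own illustration, not Bostrom's): an agent whose utility is linear in resources happily takes a slightly favorable all-or-nothing wager, while an agent with concave utility (diminishing returns) declines the same bet. The utility functions and the wager's terms are assumptions chosen purely for illustration.

# Linear ("unbounded aggregative") vs. concave utility facing the same
# all-or-nothing wager: 50% chance of a 2.2x payoff, 50% chance of ruin.

def expected_utility(utility, wealth, p_win=0.5, payoff=2.2):
    """Expected utility of staking all wealth on the wager."""
    return p_win * utility(payoff * wealth) + (1 - p_win) * utility(0.0)

linear = lambda w: w          # value scales directly with resources
concave = lambda w: w ** 0.5  # square root: diminishing returns to resources

wealth = 100.0
for name, u in [("linear", linear), ("concave", concave)]:
    keep = u(wealth)                       # utility of standing pat
    gamble = expected_utility(u, wealth)   # expected utility of the wager
    verdict = "takes the bet" if gamble > keep else "declines"
    print(f"{name:>7}: keep={keep:.2f}  gamble={gamble:.2f}  -> {verdict}")

Running this prints that the linear agent takes the bet (110 vs. 100 in expected utility) and the concave agent declines it (about 7.4 vs. 10), which is the asymmetry Bostrom is pointing at: the less an agent's utility flattens out with added resources, the more attractive risky, winner-take-all courses of action become.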
