The problem of evil is a philosophical topic with a long historical pedigree, but nowadays an increasingly secular body of students and teachers treats it mainly as a curiosity. The problem of evil, in short, is the following apparent paradox (which can be framed in many related ways, depending on the level of philosophical rigor one is interested in):
1. God is omnipotent and benevolent.
2. If God is benevolent, then he would desire that no evil occurs in the world.
3. If God is omnipotent, then he can bring to pass anything that he desires.
4. Therefore God would make it so there is no evil in the world.
5. But there is evil in the world.
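To make the structure explicit, here is a minimal formalization of the argument as a sketch in Lean. The proposition labels (O, B, D, E) are my own, and note that the sketch quietly strengthens premise #3 from "can bring to pass" to "will bring to pass"; we return to that point below.

```lean
-- A minimal propositional sketch of the syllogism above (labels are illustrative):
--   O: God is omnipotent            B: God is benevolent
--   D: God desires that no evil occurs        E: there is evil in the world
variable (O B D E : Prop)

theorem problem_of_evil
    (p1 : O ∧ B)        -- premise 1
    (p2 : B → D)        -- premise 2
    (p3 : O → D → ¬E)   -- premise 3, strengthened: what God desires, he brings to pass
    (p5 : E) :          -- premise 5
    False :=
  p3 p1.1 (p2 p1.2) p5  -- step 4 derives ¬E, contradicting premise 5
```

Each classical response to the paradox amounts to striking one of these hypotheses, and the free-will response in particular rejects the strengthening baked into p3.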
One way out is to reject #1, denying that an omnipotent and benevolent God exists in the first place. Or one can reject #2. Perhaps benevolence does not imply desiring the elimination of all evil, but instead involves desiring the best overall world. And perhaps the best overall world must involve evil. After all, we know that evil can sometimes lead to good. But those cases seem few and far between, and there are evils so severe that we cannot imagine them making the world a better place. How can the Holocaust be reconciled with a view like this, for instance? In what way could the best possible world involve something like that?
Or we can refuse to accept #4. It seems like a simple deduction from #1 through #3, but on closer inspection there's some room to argue. Does the fact that God desires X and has the power to bring about X mean that he will bring about X? Not necessarily, and this gap is the opening for the argument from free will. God could have prevented the Holocaust, of course, but doing so would have denied the free will of the millions of individuals who chose to go along with Hitler. He could have prevented chattel slavery, but that likewise would have denied free will. On the other hand, what about the evils that cannot be traced back to free will? Wouldn't a benevolent God have prevented Hurricane Katrina?
I don't mean this to be a comprehensive treatment of the problem of evil, of course. But why am I bringing up this quaint historical discussion, and what does all of this have to do with autonomous robotics?
Human Rights Watch has put out a report warning that fully autonomous robots could be in heavy use by advanced militaries in the not-too-distant future. Imagine Israel's Iron Dome (which automatically detects incoming rockets and fires missiles at them) combined with a UAV: drones that can autonomously fly around, identify enemy militants, and fire on them. The technology would have to be far more advanced than anything that exists today, but it's not unreasonable to contemplate. If Google can try to build a car capable of driving autonomously on crowded streets, the military can try to build a weapon capable of killing autonomously on the battlefield.
Early versions of these weapons will probably have close human monitoring, with the opportunity for intervention if things look like they're malfunctioning. But what happens when autonomous drones become the new normal? When we get to the point where one operator monitors an entire fleet, trusting the drones to function correctly?
There will come a point when the artificial intelligence passes beyond the point of intelligibility, when it will start to make sense to think of these things as agents with free will. If one of those robots (the term "drone" would no longer seem appropriate) goes on a civilian killing spree, there won't be any identifiable operator or programmer to hold responsible. Just the robot itself.
And herein lies the renewed relevance of the problem of evil. In this realm, more than any other, humanity truly has the opportunity to play God. But we also have the deck stacked against us. Theologians usually start from the premise of God's benevolence and omnipotence. But no one thinks about humanity that way, especially not the disjointed and dispersed conglomeration of humanity that will contribute, in various ways, to the development of autonomous killing robots. The problem for us is not to reconcile our supposed benevolence with the evil that we can predict will probably occur. It is to ask how we can act benevolently, knowing that this course of development will probably lead to some evil.
Do we close our minds to the possibility of evil coming from this technology, and insist that anything that may look like evil (such as the killing of civilians) will actually end up being a good thing, for unforeseen reasons? Is that at all plausible?
Do we focus on the net positives that this technology will bring? If so, are we more confident that, on net, it will be positive than we are about the similar argument in defense of God?
Or do we help ourselves to the argument from free will? Is the creation of agents with free will a good in itself, one that will balance out the evil that those agents might cause? If we succeed in creating robots with free will of the same sort that we have, are we absolved of their evil deeds in the same way that theologians try to absolve God of the sins of humanity?
I don't think that Human Rights Watch is right to call for a moratorium on the development of this technology. But I do think that everyone needs to grapple with the difficult questions posed by this potentiality. And yes, I do think we need to revisit the problem of evil in a new light. How can we strive for benevolence in the face of the fact that our creations will use their free will for evil?