By Mike Adams
18 April 2014
Everything you and I are doing right now to try to save humanity and the planet probably won't matter in a hundred years. That's not my own conclusion; it's the conclusion of computer scientist Steve Omohundro, author of a new paper published in the Journal of Experimental & Theoretical Artificial Intelligence.
His paper, entitled Autonomous technology and the greater human good, opens with this ominous warning (1):
Military and economic pressures are driving the rapid development of autonomous systems. We show that these systems are likely to behave in anti-social and harmful ways unless they are very carefully designed. Designers will be motivated to create systems that act approximately rationally and rational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficiency. The current computing infrastructure would be vulnerable to unconstrained systems with these drives.
What Omohundro is getting at is the conclusion he finds inescapable: the military's incessant drive to produce autonomous, self-aware killing machines will inevitably result in the rise of AI Terminators that turn on humankind and destroy us all.
Lest you think I'm exaggerating, click here to read the technical paper yourself.