Robots taught to disobey humans could mark the era of self-aware AI
Is this the beginning of the end for humanity?
Researchers from Tufts University in Massachusetts have recently been training robots to reason the way human beings do when deciding whether to carry out a command, so that they will, or will not, obey the instructions people give them. While this could be seen as a clever way to operate futuristic machines verbally without having to operate them manually, training robots to disobey commands is not exactly a smart thing to do, and it is something to be worried about.
If anyone has paid even an ounce of attention to the ‘Terminator’ series, they will recall that humanity was all but wiped out once Artificial Intelligence, or AI, became self-aware and turned on the human race in order to preserve its own existence. Additionally, the great science fiction writer Isaac Asimov’s Three Laws of Robotics state the following:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The writer later added one more law that takes precedence over all of the laws stated above: a robot may not harm humanity, or, by inaction, allow humanity to come to harm. Some robotics scientists are genuinely afraid of Artificial Intelligence, fearing that robots will become too clever. They also fear that even if robots’ intelligence is not the problem, their ignorance will be, leading to human deaths if robots are given control over critical tasks.
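Taken together, the four laws amount to a strict priority hierarchy. The following is a minimal, purely illustrative Python sketch of that hierarchy as a priority-ordered veto list; the effect flags (harms_humanity, disobeys_order, and so on) are hypothetical predictions a robot’s planner would have to supply, not anything Asimov specified:

```python
# Asimov's laws as a priority-ordered veto list (index 0 = highest priority).
# The effect flags are hypothetical planner outputs, assumed for illustration.
LAWS = [
    ("Zeroth", "harms_humanity"),
    ("First",  "harms_human"),
    ("Second", "disobeys_order"),
    ("Third",  "endangers_self"),
]

def first_violation(effects: dict) -> int:
    """Index of the highest-priority law the action violates; len(LAWS) if none."""
    for i, (_, flag) in enumerate(LAWS):
        if effects.get(flag, False):
            return i
    return len(LAWS)

def choose(obey_effects: dict, refuse_effects: dict) -> str:
    """Pick whichever action's worst violation sits lower in the hierarchy."""
    if first_violation(refuse_effects) > first_violation(obey_effects):
        return "refuse"
    return "obey"   # ties default to obedience, per the Second Law

# Example: obeying would injure a human, so refusal (a mere Second Law
# violation) wins over obedience (a First Law violation).
print(choose(obey_effects={"harms_human": True},
             refuse_effects={"disobeys_order": True}))   # -> "refuse"
```

Note how the “except where such orders conflict” clause falls out naturally: refusing an order is a Second Law violation, but it still wins whenever obeying would break the Zeroth or First Law.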
In order to prevent this, researchers at Tufts University’s Human-Robot Interaction Lab have been developing a type of programming that enables robots to understand that they can reject a command from a human if they have a good enough reason for it. However, that reason might one day translate into protecting their own existence, if they develop intelligence broad enough to recognize that their presence is being threatened.
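The article does not spell out how that programming works. One plausible shape, sketched below with hypothetical check names (authority, capability, safety) and a made-up Command structure that are assumptions for illustration rather than the lab’s actual code, is a checklist the robot runs before acting, refusing with a stated reason at the first check that fails:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Command:
    action: str
    speaker_authorized: bool   # does the speaker have the right to command me?
    robot_capable: bool        # do I know how to perform the action?
    safe_for_robot: bool       # can I perform it without destroying myself?

@dataclass
class Check:
    name: str
    passes: Callable[[Command], bool]
    refusal: str               # what the robot says when the check fails

# Hypothetical checklist, evaluated in order; the names are illustrative only.
CHECKS = [
    Check("authority",  lambda c: c.speaker_authorized, "You are not authorized to ask that."),
    Check("capability", lambda c: c.robot_capable,      "I do not know how to do that."),
    Check("safety",     lambda c: c.safe_for_robot,     "That would destroy me; I refuse."),
]

def respond(cmd: Command) -> str:
    """Run each check in turn; the first failure yields a stated reason to refuse."""
    for check in CHECKS:
        if not check.passes(cmd):
            return check.refusal
    return f"Okay, doing: {cmd.action}"

# Example: the command is legitimate and feasible but self-destructive.
print(respond(Command("walk off the edge of the table",
                      speaker_authorized=True,
                      robot_capable=True,
                      safe_for_robot=False)))
# -> "That would destroy me; I refuse."
```

Under a scheme like this, “a good enough reason” is simply whatever the failed check reports, which is exactly why a self-preservation check is the worrying one.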
Furthermore, programming these robots to stay within the parameters of human control might not be sufficient, since it is also possible for robots to override such commands. Whatever the case may be, scientists should start preparing themselves for the worst.