What would happen in a world with murder victims but no murderers? What if the murderer were your own personal computer? Or your mobile phone? Or your car? Or any device you own? It may sound strange, but these devices are nothing more than programs, and those programs are being developed every day toward a machine like no other: a killer robot, or an autonomous weapon. Can you imagine a robot going to jail? Better yet, can you imagine a robot standing trial for merciless acts?
Autonomous weapons, or killer robots, have been at the center of discussion for a while now. These weapons can kill a specific target, whether human or object, with great precision.
The oldest form of autonomous weapon is the landmine: an explosive device concealed underground and designed to kill anything that passes over it.
Automated weapons have long been used to identify targets and maneuver offensive attacks, but they have never been used to decide whether or not to kill.
There has been some discussion over the development of these weapons. Some argue that, given the rapid progress of artificial intelligence, autonomous weapons could be designed with facial recognition, first monitoring their surroundings and then killing a specific target.
The dilemma is whether to use lethal autonomous weapons and eliminate human error, or to withhold them so that human judgment keeps the upper hand in the final decision.
An open letter signed by 116 founders of robotics and artificial intelligence companies from all over the world urged the United Nations to ban the use of autonomous weapons internationally.
Several arguments go against the development and use of autonomous weapons. A quite similar situation is portrayed in the trolley problem, which is often brought up in discussions of autonomous devices, as well as in ethics, philosophy, and psychology, to name a few.
One variant of the trolley problem goes like this: imagine you are driving a trolley and, for some reason, the brakes fail. You have two options: stay on the current track and run over five workers ahead, or divert to a side track and kill a single bystander. Either choice costs lives; the question is which one is morally defensible.
The trolley problem comes up constantly in matters of autonomous devices. Programmers of these devices certainly do not call in moralists and philosophers to work through every unforeseen circumstance, and even if they did, no one can account for every ethical externality that may or may not occur. Human judgment therefore remains a very important factor in operating these kinds of machines, especially in life-or-death situations.
On the other hand, some arguments actually encourage the use of autonomous weapons. Some researchers contend that their development could reduce error and the accidental civilian fatalities that often occur in war. Being designed to be completely objective, these weapons could also eliminate discriminatory killing and human error.
Others believe that some wars are inevitable, so why sacrifice many good soldiers when you can win the war without sending them at all? You could quite literally field an army of AI machines. Autonomous weapons undeniably give a military advantage to their owner.
We can conclude that the arguments against the use of autonomous weapons rest on moral grounds, while the arguments against banning them rest on military advantage. Whether you agree with their use or with their ban, one thing is certain: the world is entering a new era, and it is very near. The question remains: are we ready to let machines make life-or-death decisions?
To find out more, visit: https://goo.gl/7BEU8A