
LAW-less: Should We Ban Killer Robots?

Image Credit: Peter Ladd via flickr.com

Autonomous Machines

The term ‘artificial intelligence’ was coined in 1955 by Professor John McCarthy in the proposal for the famous Dartmouth conference of 1956. The task of developing software that could mimic human behaviour proved more complicated than he first imagined, and progress in the field was slow due to the laborious programming required.

In the last ten years we have seen an AI explosion, with advances in machine learning techniques and huge improvements in computing power prompting massive investment from big tech firms. Today, AI is everywhere and affects everything from how you shop online to how you receive medical treatment. It can make your daily routine easier and your work more productive, and can even unlock your phone using your face. Many of us already have virtual assistants (think Siri, Cortana or Alexa), and it isn’t a massive leap to imagine humans living alongside intelligent machines within our lifetime. For decades, the idea of a future with robotic servants has permeated popular culture, but human control is usually the key to this fantasy. For some, a robot uprising has become a genuine fear.

A fundamental aspect of AI is that machines possess the ability to make their own decisions; however, the training of AI has traditionally been carried out under close human supervision. Algorithms are trained with carefully selected training data; they make decisions more quickly and with fewer errors than we can, but essentially the data you provide ensures that the machines make the decisions you want them to make. The application of ‘deep learning’ may change that. Since the 1950s, programmers have attempted to simulate the human brain using a simplified network of virtual neurons, but it is only recent advances in computing power that have enabled machines to train themselves using complex neural networks without human supervision [1].
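
To make the idea of ‘training on carefully selected data’ concrete, here is a minimal sketch of a tiny two-layer neural network learning the classic XOR function. It is an illustrative toy in plain Python/NumPy, not a depiction of how production systems are built:

```python
import numpy as np

# A toy two-layer neural network trained on the XOR problem.
# Purely illustrative: real deep-learning systems use far larger networks,
# specialised hardware, and frameworks such as TensorFlow or PyTorch.

rng = np.random.default_rng(0)

# Hand-picked training data: inputs and the outputs we want the network to learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of 'virtual neurons' with randomly initialised weights.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass: compute the network's current predictions.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge every weight to reduce the prediction error.
    grad_out = (output - y) * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ grad_hid
    b1 -= 0.5 * grad_hid.sum(axis=0, keepdims=True)

# After training, the outputs should be close to the targets [0, 1, 1, 0].
print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2))
```

The point of the toy is that the network only ever learns what the hand-picked data teaches it – the human choosing that data is still firmly in the loop.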

Neural networks are still nowhere near the complexity of the human brain but, despite this, many experts believe that this form of deep learning will be the key to developing machines that think just like humans [2]. Google’s AI system AlphaGo recently made headlines when it defeated Ke Jie, the Go world champion. This ancient strategy game is widely considered the most complex board game ever devised. For comparison, when playing a game of chess you will typically have around 35 moves to choose from per turn – in Go this number is closer to 200. The achievement represents a significant leap forward: in the ’90s, AI experts predicted that it could take at least 100 years for a computer to beat a human at Go [3]. With AlphaGo, Google engineers have used neural networks to create the first AI displaying something akin to intuition. However, the feature that roboticists are trying to capture is autonomy – the ability to make an informed decision, free from external pressures or influence – although, as it stands, even autonomous robots are only capable of making simple decisions within a controlled environment.
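
The gap between those two numbers matters because the space of possible games explodes with the branching factor. As a rough back-of-the-envelope illustration (the typical game lengths used below are ballpark assumptions, not figures from the sources above):

```python
from math import log10

def order_of_magnitude(branching_factor: float, game_length: float) -> int:
    """Rough power of ten of branching_factor ** game_length possible games."""
    return round(game_length * log10(branching_factor))

# Assumed typical game lengths: roughly 80 moves for chess, 150 for Go.
print(f"Chess (~35 moves per turn):  ~10^{order_of_magnitude(35, 80)} games")
print(f"Go    (~200 moves per turn): ~10^{order_of_magnitude(200, 150)} games")
```

Under these assumptions the Go game tree is hundreds of orders of magnitude larger than the chess one, which is why brute-force search alone was never going to be enough.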

While AI can now outperform humans in quantitative data analysis and repetitive actions, we still have the advantage when it comes to judgement and reasoning. Science fiction has taught us to fear a robot uprising, often with humanoid robots that walk, talk, and think just like us. What if they refuse to obey orders? This is particularly concerning if those robots are armed and dangerous.

Killer Robots

In August 2017, the founders of 116 robotics and AI companies, most notably Elon Musk (Tesla, SpaceX, and OpenAI) and Mustafa Suleyman (Google DeepMind), signed an open letter calling for the United Nations to ban military robots known as lethal autonomous weapons (LAWs). As it stands, there is still no internationally agreed definition of a fully autonomous weapon; however, the International Committee of the Red Cross describes LAWs as machines with the ability to acquire, track, select and attack targets independently of human influence. Also calling for a total ban on LAWs is the Campaign to Stop Killer Robots, an international advocacy group formed by multiple NGOs, which believes that allowing machines to make life-and-death decisions crosses a fundamental moral line. According to its website, 22 countries already support an international ban and the list is growing [4].

Despite growing concerns, the US, Israeli, Chinese, and Russian governments are all ploughing money into the development of LAWs. Lethal autonomous weapons may sound like science fiction but the desire to create weapons that detonate independently of human control is far from new. Since the 13th century, landmines have been used to destroy enemy combatants, and while they are unsupervised, they aren’t autonomous by modern standards. Landmines detonate indiscriminately (typically in response to pressure), rather than as an active decision made by the device.

Developing LAWs for offensive operations is attractive to governments looking to increase their military capabilities and reduce the risk to personnel. However, campaigners are worried that this potential risk reduction will lower the threshold for entering into armed conflict. There is also concern that when fully autonomous robots are placed in a battle environment where they are required to adapt to sudden and unexpected changes, their behavioural response may be highly unpredictable. Current autonomous weapons tend to be used for defensive rather than offensive purposes, and are limited to attacking military hardware rather than personnel. The Israeli Harpy is one such lethal autonomous weapon, armed with a high-explosive warhead. Marketed as a ‘fire and forget’ autonomous weapon [5], once launched it loiters around a target area then identifies and attacks enemy radar systems without human input (although its attack mission can be overridden). It is believed that these LAWs, known as loitering munitions, are already in use by at least 14 different nations. NATO suspects that drones capable of functioning without any human supervision are not currently in operation due to political sensitivity rather than any technological limitations [6].

The US is already developing autonomous drones that take orders from other drones. Department of Defense documents reveal that this ‘swarm system’ of nano drones is called PERDIX. The drones can be released from, and act as an extension of, a manned aircraft, but they can also function with a high degree of autonomy [7]. These autonomous weapons have learned the desired response to a series of scenarios, but what if they continued to learn? Perhaps one day, advances in machine learning techniques will lead to the development of weapons that are capable of adapting their behaviour. With all the political caginess, it’s difficult to say for certain that this technology isn’t already in development. Greg Allen from the Center for a New American Security thinks that a full ban on LAWs is unlikely, as the advantages gained by developing these weapons are too tempting. Yale Law School’s Rebecca Crootof believes that, rather than calling for a total ban, it would be more productive to campaign for new regulatory legislation. The Geneva Conventions currently restrict the actions of human soldiers; perhaps they should be adapted to apply to robot soldiers too.

An Ethical Minefield

Many have argued that, as robots become increasingly human-like in their decision-making, their decisions must be grounded in human morals and laws. It has been 75 years since Isaac Asimov first wrote of a future with android servants, and the three rules he devised still play a key role in today’s conversation surrounding the ethics of creating intelligent machines:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws

Asimov correctly predicted the development of autonomous robots, and while robots making the conscious decision to obey human-imposed laws may be far-fetched, experts have called for the rules to be followed by programmers. The impending development of LAWs is causing machine ethicists to reconsider the First Law, as Asimov’s principles do not take into account the possibility that we would develop robots specifically to injure and kill other humans. These three rules formed the basis of the principles of robotics published by the Engineering and Physical Sciences Research Council (EPSRC). Their updated version of Asimov’s laws shifts the responsibility from robots to roboticists [8]. The most notable amendment is to the first law, which conveniently states that robots should not be designed to kill humans ‘except in the interests of national security’. It is worth pointing out that the UK government has previously stated that it is opposed to a ban on LAWs. However, following the open letter from Musk and co., the Ministry of Defence has clarified that any autonomous weapons developed by the UK will always operate under human supervision. I don’t find this particularly reassuring.

Creating ethical robots is just as hard as you would imagine, and devising a moral code requires the programmer to consider countless exceptions and contradictions to each rule. Even Asimov’s relatively simple laws illustrate this problem. Morality is also highly subjective, and humans probably aren’t the best moral teachers. If the training data supplied for machine learning is biased, then you will get a biased robot. This is particularly concerning when considering LAWs, as it will be possible for governments to develop weapons that are inherently racist (either by accident or on purpose). Perhaps it is not a robot rebellion that we should fear, but what governments and individuals will be able to achieve by abusing this technology. In November, the Russian government made it clear that it would ignore a UN ban on LAWs, on the pretext that such a ban would harm the development of civilian AI technologies.
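
To illustrate how easily biased data becomes a biased machine, here is a hedged sketch using entirely made-up synthetic data. Nothing in the code mentions discrimination, yet because the historical labels favour one group, the trained model learns to score otherwise identical candidates differently:

```python
import numpy as np

# Illustration of how biased training data produces a biased model.
# All data and numbers here are synthetic, invented purely for demonstration.

rng = np.random.default_rng(1)
n = 1000

# Each example has a 'skill' score (what we actually want to judge) and a
# 'group' attribute (0 or 1) that should be irrelevant to the decision.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Biased labelling: past decisions favoured group 0, so the historical
# labels depend on group membership as well as on skill.
label = ((skill + 1.0 * (group == 0) + rng.normal(scale=0.5, size=n)) > 0.5).astype(float)

X = np.column_stack([skill, group, np.ones(n)])  # features plus intercept term
w = np.zeros(3)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Plain logistic regression fitted by gradient descent on the biased labels.
for _ in range(5000):
    p = sigmoid(X @ w)
    w -= 0.1 * X.T @ (p - label) / n

# Two candidates with identical skill, differing only in group membership.
candidate_a = np.array([0.0, 0, 1.0])
candidate_b = np.array([0.0, 1, 1.0])
print("group 0 candidate:", sigmoid(candidate_a @ w).round(2))
print("group 1 candidate:", sigmoid(candidate_b @ w).round(2))
# The model scores the two groups differently despite identical skill,
# because it faithfully reproduces the bias baked into its training data.
```

Swap the toy hiring decision for a targeting decision and the same mechanism becomes far more alarming.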

The Greater of Two Evils

As machines become faster, stronger, and smarter than we are, the need for control becomes more critical. However, some experts believe that when it comes to LAWs, we shouldn’t waste time tackling these particular ethical issues. The current debate around banning LAWs often assumes that such weapons will be operating free from oversight and that humans will be absolved from any blame for their actions. Due to international law and restrictions on appropriate military force, many feel it is unlikely that we will ever see robots fighting in conflicts without close human supervision.

Some ethicists are concerned that the language being used in the debate confuses the features of the technology with the potential consequences of its misuse. It is unlikely that we will find ourselves in a scenario where humans are absolved of blame – LAWs will have programmers, manufacturers, and overseers. The EPSRC principles attempt to highlight this by stressing that robots are manufactured products, and that there must be a designated person legally responsible for their actions – though this assumes that, in the future, robots will still be programmed by humans [9].

Autonomous drones can already follow and take orders from other drones, AI can program superior AI, and robots can create their own languages. It’s beginning to look like the robot uprising could occur sooner than we think. Some seek comfort in the belief that robots will follow our instructions. Others believe that legislation, bans, and limits on autonomy are the way forward. But is a robot rebellion really the most pressing threat? Perhaps we should be more concerned about governments ignoring international law and using these robots as weapons of terror. It is easy to imagine that, in this scenario, the people responsible may wash their hands of any wrongdoing and blame the robots. Or hackers. Those who support a total ban on the development of LAWs must hope that it will not be possible to abuse this technology if it does not exist in the first place. However, it’s possible that we have already let the genie out of the bottle. It is very difficult to ban the development of something that has already been developed.

Edited by Derek Connor


References

  1. You can read more about machine learning here.
  2. Have a go building your own neural network at playground.tensorflow.org
  3. http://www.nytimes.com/1997/07/29/science/to-test-a-powerful-computer-play-an-ancient-game.html?pagewanted=all
  4. https://www.stopkillerrobots.org/2017/11/gge/
  5. http://www.iai.co.il/2013/36694-16153-en/Business_Areas_Land.aspx
  6. https://www.nato.int/docu/Review/2017/Also-in-2017/autonomous-military-drones-no-longer-science-fiction/EN/index.htm
  7. Watch a swarm of 100 nano drones being released from an F-18 here https://www.youtube.com/watch?v=ndFKUKHfuM0
  8. You can read the full list here https://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/principlesofrobotics/
  9. Google’s AI can already create better AI than their engineers (https://futurism.com/googles-new-ai-is-better-at-creating-ai-than-the-companys-engineers/)