Artificial Intelligence: Threat or the future?

Image Credit: Erik Stein via Pixabay

On April 8th 2019, the EU announced guidelines for the ethical development of artificial intelligence (AI). AI can be broadly defined as intelligence demonstrated by machines, such as robots. The use of such machines becomes more complicated once they can think for themselves, since they could potentially evolve out of human control. The seven guidelines are not legally binding, but they set out requirements that the EU believes AI systems should fulfil. In summary, they state that AI should not infringe upon human autonomy, that only secure and safe AI systems should be used, and that AI systems should be unbiased, protect personal data, and be part of an environmentally friendly system. As the US and China are the pioneers in AI, many journalists think that the EU published these guidelines with the aim of establishing itself as a leader in ethical AI. There is no question that this was a necessary step, since ethical guidelines will support the responsible development of AI. Even so, AI ethics is a subject that still needs to be clarified.

The first popular work to suggest governance for AI came in 1942 from the American author Isaac Asimov, who outlined the “Three Laws of Robotics” in his short story “Runaround”. He stated that: “1) a robot may not injure a human being or, through inaction, allow a human being to come to harm, 2) a robot must obey orders given [to] it by human beings except where such orders would conflict with the First Law and 3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law”1. Asimov’s laws differ from the EU’s guidelines in that they form a specific moral scheme for robots, whereas the guidelines are abstract statements about societal and environmental well-being. The guidelines say that AI systems should benefit humanity, but they contain no clear statement that AI systems should not harm humanity, as Asimov’s laws do.

The concept of robotics was pioneered around 400 BC by Archytas of Tarentum, a Greek philosopher and the father of mathematical mechanics. In 1988, the robot PUMA 560 was used to guide a needle into the brain to collect a biopsy2. Other surgical robots followed, such as ROBODOC and Zeus. In 2000, the da Vinci Surgical System was approved by the Food and Drug Administration (FDA), revolutionising the field and becoming the first surgical robot to be used for laparoscopic surgery in the USA. Despite this leap forward, the ultimate question still remains: how do we create AI with the same characteristics and capabilities as humans?

An interesting ethical question arises if we consider molecular robots. These are microscopic robots, around a millionth of a millimetre in size, made of 150 carbon, hydrogen, oxygen and nitrogen atoms. A single molecular robot can manipulate individual molecules by directing molecular reactions. A molecular robot developed two years ago by a team at the University of Manchester can move a substrate between multiple active sites and so guide a chemical reaction towards a particular product3. If we consider a molecular robot that could destroy a blockage in an artery, would the laws of robotics apply? In order to break up the plaque, the robot would have to harm a human being, which brings it into conflict with Asimov’s first law; the problem is that the blockage consists of cells that are part of us. In that case, is it sensible to alter Asimov’s laws, or would we consider it unethical for a robot to destroy a part of our body, even if it would help us in the long term? Whenever we make a new discovery in medicine or engineering, we must consider even the most outlandish potential applications. If these molecular robots were not required to obey the laws of robotics, what would happen if someone created a “swarm robot” out of many of them? Anyone could then build tiny robots bound by no ethical rules and merge them into a larger robot that could potentially be used as a weapon. At the same time, a molecular application like this could revolutionise current treatment approaches, and we should take that into account. However, every robot, molecular or otherwise, should be subject to ethical approval, including an explanation of its benefits to society and any potential harms.

Research into the development of AI is useful in many fields, but the threats it raises are also real. As already discussed, robots are a hardware tool; AI is the complex code that creates “machine intelligence”, meaning that machines are able to learn, adapt and evolve. The idea is to create humanised robots that can think and learn like humans, but building a human-like robot requires an enormous amount of complex code, which makes it extremely difficult to correct a malfunction if one occurs. What happens if such a system evolves beyond our control?
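To give a flavour of what “learning” means here, the sketch below is a toy example written for this article, not code from any real AI system: a single artificial neuron adjusts its internal weights until it reproduces a simple rule from examples. Real AI systems work on the same principle but with millions of such adjustable parameters, which is part of why their behaviour can be hard to predict or to correct.

```python
# A minimal sketch of "machine learning": a single artificial neuron
# (a perceptron) learns a rule by adjusting its weights from examples.
# The data and learning rate are arbitrary, chosen only for illustration.

# Training examples: (inputs, expected output). The "rule" to be
# discovered is a simple logical AND of the two inputs.
examples = [
    ((0, 0), 0),
    ((0, 1), 0),
    ((1, 0), 0),
    ((1, 1), 1),
]

weights = [0.0, 0.0]   # the neuron's adjustable parameters
bias = 0.0
learning_rate = 0.1

def predict(inputs):
    """Return 1 if the weighted sum of the inputs crosses the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# "Learning": repeatedly compare predictions with the expected answers
# and nudge the weights and bias to reduce the error.
for _ in range(20):
    for inputs, expected in examples:
        error = expected - predict(inputs)
        weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
        bias += learning_rate * error

# After training, the neuron reproduces the rule it was shown.
for inputs, expected in examples:
    print(inputs, "->", predict(inputs), "(expected", expected, ")")
```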

Trying to define ethical guidelines for AI is difficult. Ethics is often a grey area, not black and white in the way that Asimov’s Three Laws of Robotics are. AI ethics remains a controversial topic, with valid arguments both for and against the technology. Despite the difficulty, it is vital to legislate for good practice in order to avoid potential abuse of the technology. The development of AI is now inevitable, and all we can do is try to make it as ethical as possible.

This article was specialist edited by Madeline Pritchard and copy-edited by Dzachary Zainudden.

References

  1. https://www.ncbi.nlm.nih.gov/pubmed/18681805
  2. https://link.springer.com/article/10.1007/s11701-010-0202-2
  3. https://www.nature.com/articles/nature23677
