Driverless Cars: The new era of transportation comes to life.

By Michael Shick (own work) [CC BY-SA 4.0], via Wikimedia Commons

Originally published in the March 2016 issue of theGIST magazine

“Imagine if everyone could get around easily and safely, regardless of their ability to drive. Time spent commuting could be time spent doing what you want to do. Deaths from traffic accidents could be reduced dramatically, especially since 94% of accidents in the U.S. involve human error.”1 This is Google’s stated vision, and driverless cars are no longer confined to the realm of Hollywood movies and science fiction: they may already be cruising on a road close to you. You might spot one if you happen to live in California, where Google’s ‘Pod’ is conducting road tests, or in selected UK towns and cities such as Milton Keynes, Bristol, Coventry and Greenwich, where the ‘Lutz Pod’ trials are currently taking place. Autonomous vehicles have been heavily tipped as the future mode of transport. However, before you jump in and take a seat, you may want to learn more about the numerous technologies that power and drive them.


Creating driverless cars is not an easy task: several different technologies and sensors have to be finely tuned and coordinated in tandem to substitute for a real human behind the wheel. First, a navigation system superior to today’s GPS is needed. Next comes a system to recognise, and eventually ‘predict’, dynamic conditions on the road ahead based on previously captured data. Finally, an interface must bring all the sensors and systems together into the one seamless operation of driving your car.


Many of these technologies and cameras have actually been available in some form for many years and already exist in cars today. The real magic, however, happens when they all come together. Take Google’s self-driving car project as an example: it has a lidar sensor on its roof to ‘see’ the road and identify lane markings, pavements and the edges of roads.2

This sensor works by sending out several low-intensity laser beams and measuring the light that bounces off objects in its path and returns to the sensor. This enables the device to build a 3D map of its surroundings. It can also calculate how far away an object is by measuring the time the beams take to return, with an impressive range of around 200 m. A camera on the front windshield provides close-range vision, letting the car identify objects such as pedestrians and other road users. Just as a driver must follow the rules of the road, so must the self-driving car, and this camera can therefore also detect and record information from road signs and traffic lights. Intelligent software processes this information and decides which action the car should take, all in a fraction of a second.
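The time-of-flight ranging described above reduces to a simple calculation: a laser pulse travels to an object and back at the speed of light, so the one-way distance is half the round trip. A minimal sketch (this is the general principle, not Google's actual software):

```python
# Time-of-flight distance: a lidar pulse travels out and back at the
# speed of light, so the distance to the object is half the round trip.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_echo(round_trip_seconds: float) -> float:
    """Distance to a reflecting object, from the pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return after roughly 1.33 microseconds corresponds to about the
# sensor's quoted 200 m maximum range.
print(round(distance_from_echo(1.334e-6)))  # 200
```

Note how short these intervals are: even at the sensor's full 200 m range, the echo arrives in little over a microsecond, which is why lidar can scan its surroundings many times per second.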


The real challenge for a driverless car is reacting to unexpected obstacles or disruptions on the road. To avoid collisions, four radars, one in each corner of the car’s bumpers, monitor the distance to surrounding traffic. These keep the required gap from the vehicle in front and allow the car to adapt to the speed of traffic. They also serve as an important safety feature, recognising pedestrians walking near the vehicle. We can all relate to the nightmare of parking a car; fear not, as two ultrasonic sensors embedded in the rear wheels allow the self-driving car to park itself. The days of accidental bumps and scrapes are over.
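The gap-keeping behaviour the bumper radars enable can be sketched as a simple rule: ease off when the measured gap falls below a safe following distance, and resume the set cruise speed when there is room. The function names, the two-second rule and the speed steps here are illustrative assumptions, not Google's actual control logic:

```python
# Hypothetical gap-keeping sketch: slow down when the radar-measured gap
# is below a safe following distance, otherwise drift back to cruise speed.

def safe_gap_m(speed_mps: float, reaction_time_s: float = 2.0) -> float:
    """Two-second-rule following distance for the current speed."""
    return speed_mps * reaction_time_s

def adjust_speed(current_mps: float, gap_m: float, cruise_mps: float) -> float:
    """Return a new target speed based on the gap to the vehicle ahead."""
    if gap_m < safe_gap_m(current_mps):
        return max(0.0, current_mps - 1.0)          # too close: ease off
    if current_mps < cruise_mps:
        return min(cruise_mps, current_mps + 1.0)   # room ahead: speed up
    return current_mps                              # hold cruise speed

# At 20 m/s the two-second gap is 40 m, so a 30 m gap makes the car ease off:
print(adjust_speed(20.0, 30.0, cruise_mps=25.0))  # 19.0
```

A real controller would adjust smoothly and account for the relative speed of the vehicle ahead, but the principle is the same: the radar measurement alone decides whether the car accelerates or brakes.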


The car’s outside sensors may be important for smooth self-driving, but it is the inside that is vital in helping the car understand its location. Autonomous cars cannot rely on GPS alone for their position, as it has an error margin of several metres, which can mean the difference between driving behind the car in front or into it. Instead, a variety of internal sensors measuring location points ensures that as the car travels, its internal map is updated with the collected data. Currently, Google’s self-driving car is pre-mapped before it goes out to drive, but it is possible to envision a scenario where a car could drive a route it has never ‘seen’ before and accurately map it.
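One common way to combine a metre-scale-noisy GPS fix with the car's own motion measurements is to blend the two, trusting the internal sensors for short-term movement and letting GPS gently correct long-term drift. This one-dimensional complementary-filter sketch is an illustration of that idea only; the weight and the update model are assumptions, not the car's real algorithm:

```python
# Illustrative 1-D sensor fusion: wheel odometry predicts the new
# position, and a noisy GPS fix nudges the estimate back on track.

def fuse(predicted_m: float, gps_m: float, gps_trust: float = 0.2) -> float:
    """Blend a dead-reckoned prediction with a noisy GPS reading."""
    return (1.0 - gps_trust) * predicted_m + gps_trust * gps_m

position = 0.0
# (distance the wheels say we moved, GPS fix) at each time step
for distance_moved, gps_fix in [(1.5, 2.0), (1.5, 2.8), (1.5, 4.9)]:
    predicted = position + distance_moved  # odometry: short-term, precise
    position = fuse(predicted, gps_fix)    # GPS: long-term, drift-free

print(round(position, 2))  # 4.61
```

Production systems use more sophisticated estimators (Kalman filters over many sensors), but the principle is the same: no single sensor is trusted outright, and the fused estimate is tighter than GPS alone.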


Even with all the sensors and other technologies in place, a driverless car needs to react to conditions on the road as a human would. It is the difficult task of engineers and artificial intelligence specialists to program self-driving cars to handle the dynamics and variability of the road. The core of this challenge lies in effectively programming the car to react to common road signs and signals. For example, if a cyclist gestures to make a manoeuvre, the self-driving car should slow down to allow the cyclist to turn. In the case of Google’s project, shape and motion signals have been programmed into the system beforehand. These may seem like very simple exercises, but the camera sensor and the radars have to work together to execute an action on the road. More importantly, the room for error is virtually zero: gestures cannot be misinterpreted, as this could result in fatal collisions. The work of specialists in this field will form a crucial part of the human-like experience we would receive when driving in (or alongside) such a vehicle.
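The pre-programmed cue handling described above can be pictured as a lookup from detected cues to conservative actions, with the crucial safety property that anything unrecognised defaults to caution rather than a guess. The cue names and actions here are invented for illustration:

```python
# Toy mapping from camera-detected road cues to driving actions.
# Real systems are far richer, but the fail-safe default is the point:
# an unrecognised cue must never be guessed at.

GESTURE_ACTIONS = {
    "cyclist_arm_out": "slow_and_yield",  # cyclist signalling a turn
    "red_light": "stop",
    "green_light": "proceed",
}

def react(cue: str) -> str:
    """Map a detected cue to an action; unknown cues are handled cautiously."""
    return GESTURE_ACTIONS.get(cue, "slow_down")  # fail safe, never guess

print(react("cyclist_arm_out"))  # slow_and_yield
print(react("unclear_gesture"))  # slow_down
```

Defaulting to "slow_down" reflects the zero-tolerance point made above: a misread gesture could cause a collision, so ambiguity must always resolve to the safer behaviour.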


The vast array of connected sensors driving the autonomous system also creates room for vulnerabilities. With a computer at the wheel, there is a real danger of hacking attacks or viruses plaguing the system, with the potential to take control of a vehicle. The integrity and robustness of these systems can only be fully determined once they are manufactured and have undergone the testing required to make them available to the public.


I return to the vision presented at the beginning of this article: a world where anyone, regardless of physical ability or the ability to drive, can get around, and where accidents can be massively reduced, saving lives. Google’s testing is one example of existing technologies already at our fingertips, overcoming challenges that would simply not have been possible ten years ago. More and more organisations and researchers are exploring this field, and laws governing the use of autonomous vehicles are currently being developed in California. It is therefore only a matter of time until they are available in a showroom (or perhaps an online store) near you.



  1. Google on “Why self-driving cars matter”.
  2. To learn more about Google’s self-driving car project, visit:

