Autonomous self-driving cars require software able to react on the millisecond timescale

Many car and technology companies have devoted significant resources to the research and development of various “interpretations” of self-driving cars, or autonomous vehicles. From relatively simple driver-assistance tools such as adaptive cruise control to more complicated low-speed self-driving along predefined routes aided by GPS navigation and stored 3D maps of the surroundings, the various benefits of self-driving cars and goods vehicles have been put forward in the mass media.

However, is the future of fully autonomous self-driving vehicles on the horizon? If not, what is a more realistic path forward? On this question, Prof. Steven E. Shladover of the University of California, Berkeley, has written an explanatory article in Scientific American (June 2016, pp. 46–49) that highlights two important limitations in technological progress towards self-driving cars.

Firstly, fully autonomous self-driving cars operate on roads and highways with significantly more vehicles around them, and at closer distances, than aeroplanes, which have far fewer objects to track in their vicinity. Thus, while it is possible to fly an aircraft on autopilot once it has reached cruising altitude, the same is unlikely to happen anytime soon for fully autonomous self-driving cars. Why? Because the autopilot programme in an aircraft has more room and time to make corrections to its flight path (with few aircraft in the vicinity), whereas the self-driving software in a car has to make sense of a wealth of sensor information, from ultrasound to light-based radar, for (i) understanding the car's position relative to many nearby objects, and (ii) developing an evolving picture of the movements of those objects in close to real time (i.e., on the millisecond timescale).
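The two tasks named above, localising nearby objects and building an evolving picture of their movement, can be sketched as a simple tracking loop. This is a toy illustration under assumptions of my own: the class and function names are invented, and the constant-velocity model stands in for the far more sophisticated filters (e.g. Kalman filters) a real perception system would use.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    """One nearby object seen by the car's sensors."""
    x: float         # position along the road (metres)
    y: float         # lateral position (metres)
    vx: float = 0.0  # estimated velocity (m/s)
    vy: float = 0.0

def update_track(obj: TrackedObject, meas_x: float, meas_y: float,
                 dt: float) -> TrackedObject:
    """Refresh a track from a new sensor measurement taken dt seconds
    after the last one, assuming constant velocity between frames."""
    vx = (meas_x - obj.x) / dt
    vy = (meas_y - obj.y) / dt
    return TrackedObject(meas_x, meas_y, vx, vy)

def predict(obj: TrackedObject, dt: float) -> tuple:
    """Extrapolate where the object will be dt seconds ahead."""
    return (obj.x + obj.vx * dt, obj.y + obj.vy * dt)
```

For instance, a car first seen 30 m ahead that closes 0.5 m over a 50 ms sensor frame would be estimated as approaching at 10 m/s, and its position a tenth of a second ahead could then be extrapolated. The point of the sketch is the cadence: every sensor frame must update every track before the next frame arrives.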

It is the second requirement, that a self-driving car must maintain an accurate picture of its surroundings at any point in time, that hampers technology and software development towards the realisation of self-driving. Specifically, while sensor fusion can be implemented readily at the hardware level, its integration with software is more complicated: the software needs pattern-recognition and machine-learning capabilities to constantly learn how to classify the different objects that the car’s sensors deliver to its memory. Given the myriad information channelled to the central processing unit of a self-driving car and the requirement to interpret that sensory information at the software level, a significant gap exists in developing self-driving software capable of delivering instructions to the moving parts of the car on the order of milliseconds. Hence, current technology testbeds for self-driving cars typically move at a slow speed of 25 km/h, on roads with less traffic than busy streets.
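The millisecond-order requirement can be made concrete as a deadline check wrapped around each perception cycle: sense, classify, decide, each timed against a fixed frame budget. The 50 ms budget and the stage names below are illustrative assumptions, not figures from the article; the sketch only shows the shape of the constraint.

```python
import time

FRAME_BUDGET_S = 0.050  # hypothetical 50 ms budget per perception cycle

def run_cycle(stages):
    """Run one sense -> classify -> decide cycle and report whether the
    whole cycle met its deadline. `stages` is a list of (name, fn) pairs;
    per-stage timings show where a budget overrun came from."""
    start = time.perf_counter()
    timings = {}
    for name, fn in stages:
        t0 = time.perf_counter()
        fn()
        timings[name] = time.perf_counter() - t0
    total = time.perf_counter() - start
    return total <= FRAME_BUDGET_S, total, timings
```

If the classification stage alone routinely consumes the budget, the car cannot go faster without literally outrunning its own understanding of the road, which is one way to read why current testbeds are limited to low speeds.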
Such integrated software, capable of understanding the evolving environment outside the car, developing a course of action to navigate the car safely through that environment, and delivering those instructions (without much delay) to the car's drive mechanism, would be fairly complex. Given that complexity, errors would inevitably creep into the code; these may affect the timeliness of the software's decision making in preventing an accident, or hinder certain sections of the code from running by calling the wrong subroutine within a larger programme. Current debugging methodology may not be able to identify all the insidious bugs present in self-driving software. Hence, entirely new paradigms for software debugging, and new approaches for finding faults in software, would need to be developed to identify errors and bugs in complex self-driving software.
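One complement to better debugging, since testing alone may never catch every latent bug, is a runtime monitor that degrades safely when the decision software misbehaves. The sketch below is an assumption of mine, not a technique from the article: it wraps a decision routine so that an unhandled error or a missed deadline yields a conservative fallback action (here the hypothetical "brake_gently") rather than silence.

```python
import time

def decide_with_fallback(decide, sensor_frame,
                         deadline_s=0.05, fallback="brake_gently"):
    """Call the decision routine, but return a safe default action if it
    raises (a latent bug fired) or overruns its deadline (its answer
    arrived too late to be useful on the road)."""
    start = time.perf_counter()
    try:
        action = decide(sensor_frame)
    except Exception:
        return fallback  # a bug escaped testing: degrade safely
    if time.perf_counter() - start > deadline_s:
        return fallback  # correct but late is still unsafe
    return action
```

A monitor like this does not find the bug; it only bounds the harm while new fault-finding approaches of the kind the article calls for are developed.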

Collectively, the significant challenges to realising a future in which cars make millisecond-timescale decisions about how fast and which way to move on crowded roads are: developing better sensors capable of capturing the real-time movement of objects in a self-driving car's vicinity; integrating these data (without delay) into the software powering the logic of fully autonomous driving; and coding the complex software that runs all the systems and processes on board. Working towards that distant future would require ingenious developments in pattern recognition, capable of understanding and classifying objects in the environment at the various speeds at which a car typically moves, as well as the sensors able to support it. Hence, sensor fusion and its integration with software holds only one key to the puzzle of developing a car able to drive on its own at high speed on crowded roads; the bigger challenge is the decision-making software itself. The foreseeable future likely portends cars with greater latitude in making decisions that aid driving, rather than more automated driving in the form of a car able to make close to real-time decisions in constantly changing road conditions at fairly high speeds.

Category: self-driving cars
Tags: autonomous vehicles, self-driving, sensor fusion, fault diagnosis, debugging, real time decision making, timescale
