zFAS – computing power, networking and data processing
The core of the systems which Audi is developing for piloted driving is the central driver assistance system control unit (zFAS). The mastermind makes its debut in the new Audi A8.

Until now, driver assistance systems have been managed by spatially separate control units. Audi is the first automobile manufacturer to bundle them in a central domain architecture: the function portfolio, the required sensors, the electronic hardware and the software architecture are combined in a single central system. The safety concept in particular received full attention right from the outset.

From the vast amount of sensor information bundled in it, the zFAS continuously computes a complete model of the vehicle's surroundings at lightning speed and makes this information available to all assistance systems. It is thus also the central interface for all piloted driving functions.

Despite its compact dimensions – roughly those of a tablet – the zFAS offers high computing power, made possible by powerful modular electronic components. Audi developed this high-tech computing center together with an international team of leading technology partners. It integrates high-performance chips – the Tegra K1 from NVIDIA, the Aurix from Infineon and the Cyclone V from Altera – supplemented by the EyeQ3 processor from Mobileye, the world leader in image-processing algorithms for the automotive industry. Its modular concept makes the zFAS flexibly scalable and thus future-proof.

Artificial intelligence and machine learning
Artificial intelligence will soon make it possible for piloted vehicles to react appropriately in highly complex situations, similar to the way a human driver would, or perhaps even better. As a branch of computer science, artificial intelligence deals with equipping machines with capabilities similar to those of human beings. One way to achieve this is machine learning.

Machine learning is therefore a prerequisite for artificial intelligence. Its foundations lie in mathematics and statistics: even in highly complex situations, algorithms independently find patterns and rules – and make decisions based on them. In recent years, research into artificial neural networks (i.e. the imitation of signal connections in the human brain) has made major progress. Deep learning emulates such networks of the brain on a computer; it requires enormous computing power and a broad base of data.
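The core idea – an algorithm finds the rule from labeled examples instead of being programmed with it – can be shown with the smallest possible sketch: a single artificial neuron (the building block that deep networks stack in many layers) learning the logical AND function. This is an illustrative toy, not Audi code:

```python
# Toy illustration (not Audi code): a single artificial neuron learns the
# logical AND rule purely from labeled examples, i.e. the algorithm finds
# the pattern instead of being programmed with it.

def train_perceptron(samples, epochs=20):
    """Classic perceptron rule: nudge the weights whenever a prediction is wrong."""
    w = [0, 0]
    b = 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # -1, 0 or +1
            w[0] += err * x1
            w[1] += err * x2
            b += err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Labeled training data: all four input pairs with the desired AND output.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After a few passes over the data the weights stop changing and the neuron reproduces AND on all four inputs; deep learning applies the same learn-from-examples principle at vastly larger scale.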

In intelligent and piloted vehicles, there will be numerous use cases for machine learning in the future. Thus Audi is evaluating different methods – for example supervised learning or deep reinforcement learning – with the aim of finding the optimal approach for each of these use cases. To this end, Audi is working closely with top businesses from the software field, as well as with leading universities.

Object and environment recognition
One of the most important fields of application for machine learning at present is object and environment recognition. In the Audi A4, A5, Q5 and Q7 models, object recognition based on supervised learning is already in series production. A pre-trained system is used: the learning process is complete before the car goes into production.

In the new Audi A8, too, supervised learning is used for object recognition. The image processing developed by technology partner Mobileye is based, among other things, on deep learning: deep neural networks are trained on various data sets, and in this way the network learns to classify a diverse range of objects – as cars, cyclists or pedestrians. The knowledge gained in this process is then made available to the production version of the software for the driver assistance systems and for piloted driving.

Thanks to this process, the new Audi A8 also detects free spaces, i.e. areas in which it can drive. This is a key requirement for the new Audi AI traffic jam pilot.
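The supervised-learning pattern described above – train on labeled examples before production, only classify at runtime – can be illustrated with a deliberately simple sketch. A nearest-centroid classifier on invented two-dimensional features stands in for Mobileye's actual deep networks, which it resembles only in this train-then-deploy structure:

```python
# Hypothetical sketch (not Mobileye's pipeline): supervised object
# classification in miniature. Training happens before deployment;
# at runtime the model only classifies, it no longer learns.
from collections import defaultdict

def fit_centroids(labeled_features):
    """Training phase: average feature vector per class label."""
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for (f1, f2), label in labeled_features:
        sums[label][0] += f1
        sums[label][1] += f2
        counts[label] += 1
    return {label: (s[0] / counts[label], s[1] / counts[label])
            for label, s in sums.items()}

def classify(centroids, features):
    """Runtime phase: assign the class with the nearest centroid."""
    def dist2(c):
        return (c[0] - features[0]) ** 2 + (c[1] - features[1]) ** 2
    return min(centroids, key=lambda label: dist2(centroids[label]))

# Invented 2-D features, e.g. (width in m, speed in m/s) of detected objects.
training = [((1.8, 15.0), "car"), ((2.0, 20.0), "car"),
            ((0.6, 5.0), "cyclist"), ((0.7, 6.0), "cyclist"),
            ((0.5, 1.5), "pedestrian"), ((0.4, 1.0), "pedestrian")]
centroids = fit_centroids(training)
```

A new observation such as `classify(centroids, (1.9, 18.0))` is then assigned to the closest learned class – here, a car.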

Preliminary development projects at Audi

Audi Q2 deep learning concept:
At NIPS (Conference and Workshop on Neural Information Processing Systems) held in Barcelona in December 2016, Audi used a scale model to demonstrate for the first time how a car can develop intelligent parking strategies. On a 3 x 3-meter (9.8 x 9.8 ft) field, the Audi Q2 deep learning concept autonomously searches for a suitable parking space in the form of a metal frame and then parks in it.

The model car (scale 1:8) gained the ability to park autonomously by means of deep reinforcement learning. As part of this process, the system essentially learns through trial and error. To begin, the car selects its direction of travel at random. An algorithm identifies the successful actions, thus continually refining the parking strategy. So in the end the system is able to solve even difficult problems autonomously.
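The trial-and-error principle can be illustrated with a deliberately tiny stand-in: tabular Q-learning on a one-dimensional track. The actual project used deep reinforcement learning on a far richer state space; all names, rewards and numbers below are invented for illustration:

```python
# Deliberately tiny stand-in for deep reinforcement learning: tabular
# Q-learning on a 1-D track. The agent is rewarded only for reaching the
# parking position; successful actions are reinforced over many episodes,
# so initially random behavior gradually turns into a parking strategy.
import random

random.seed(0)
TRACK_LEN = 6            # positions 0..5
PARKING_SPOT = 5         # goal state
ACTIONS = (-1, +1)       # reverse / forward

Q = {(s, a): 0.0 for s in range(TRACK_LEN) for a in ACTIONS}

def step(state, action):
    """Deterministic toy environment: move, clamp to the track, reward at goal."""
    nxt = min(max(state + action, 0), TRACK_LEN - 1)
    reward = 1.0 if nxt == PARKING_SPOT else -0.1   # small cost per move
    return nxt, reward

for _ in range(500):                      # episodes of trial and error
    state = 0
    for _ in range(20):
        if random.random() < 0.3:         # explore: random direction of travel
            action = random.choice(ACTIONS)
        else:                             # exploit what was learned so far
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += 0.5 * (reward + 0.9 * best_next - Q[(state, action)])
        state = nxt
        if state == PARKING_SPOT:
            break

# Learned policy: the action the agent now prefers in each position.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(TRACK_LEN)}
```

After enough episodes the policy drives forward from every position toward the parking spot – the successful actions have been reinforced, exactly the mechanism described above, only in miniature.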

The model car’s sensor technology consists of two mono cameras, one facing forward and the other towards the rear, along with ten ultrasonic sensors positioned at points all around the model. A central on-board computer converts their data into control signals for steering and the electric motor. On the driving area, the model car first determines its position relative to the parking space. As soon as this is recognized, the system calculates how it can safely drive to its destination. The model car maneuvers, steers and drives forward or in reverse, depending on the situation.

The “Audi Q2 deep learning concept” is a pre-development project of Audi Electronics Venture (AEV), an AUDI AG subsidiary.

Audi Q7 deep learning concept:
A use case for machine learning on a 1:1 scale was presented by Audi in January 2017 at the Consumer Electronics Show (CES) in Las Vegas. On a specially established, adaptable open-air track, the Audi Q7 deep learning concept used a two-megapixel front camera for orientation. The camera communicated with an NVIDIA Drive PX 2 processing unit, which then initiated the highly precise steering movements itself. This high-performance controller was specially engineered for piloted driving applications.

Serving as the core of the software are deep neural networks that experts from Audi and NVIDIA have trained specifically for autonomous driving and recognition of dynamic traffic control signals. At the beginning, the Audi Q7 deep learning concept made several laps of the track with a driver behind the wheel and additional training cameras in order to get to know the route. The system established a correlation between the driver’s reactions and the occurrences detected by the cameras. As a result, the car understands external signals such as a temporary traffic light, can interpret them and deal with them as the situation requires.
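The correlation between camera observations and driver reactions can be sketched in miniature. The toy example below substitutes a one-dimensional least-squares fit for the deep neural network; the "lane offset" feature and all numbers are invented:

```python
# Behavioral cloning in miniature (not the NVIDIA/Audi network): pair each
# camera observation with the human driver's steering during the training
# laps, fit the mapping, then steer without the human.

def fit_line(xs, ys):
    """Least-squares fit ys ~ a * xs + b – a 1-D stand-in for training
    a deep neural network on camera frames."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

# Invented training-lap data: lane offset seen by the camera vs. the
# driver's corrective steering (the driver steers against the offset).
lane_offsets = [-1.0, -0.5, 0.0, 0.5, 1.0]
driver_steering = [0.8, 0.4, 0.0, -0.4, -0.8]
a, b = fit_line(lane_offsets, driver_steering)

def steer(offset):
    """The cloned policy: a steering command with no human in the loop."""
    return a * offset + b
```

Once fitted, `steer()` reproduces the driver's corrective behavior for offsets it never saw – the same generalization that lets the real system handle a temporary traffic light it encountered only during training.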

The biggest difference between the Audi Q2 deep learning concept and the Audi Q7 deep learning concept is the machine learning method used. While the 1:8-scale model car learns how to park through trial and error (deep reinforcement learning), the network of the Audi Q7 deep learning concept receives concrete, relevant data during the training runs – in other words, it learns from a human driver (supervised learning). Both projects are important aspects of artificial intelligence research at Audi and illustrate the breadth of this field. Audi is also evaluating and trialing various types of machine learning in order to deploy the technologies in a targeted manner in new applications in the fields of autonomous driving and personalization.

Car-to-x technology
Seeing more than the human eye or an infrared camera: car-to-x technology expands the horizon of the established on-board sensors – radar, cameras and ultrasound – by supplementing them with information from far beyond the driver's field of vision. In this way, dangerous situations can be recognized even earlier and accidents avoided. Real-time communication between cars and the road infrastructure already offers improved safety, comfort and efficiency today. With the A8, Audi is the first manufacturer to introduce the powerful LTE Advanced mobile transmission standard.

“Traffic light information”:
The first highly networked standard function of the car-to-x module is called “Time-to-Green”. In the Audi virtual cockpit or the head-up display, the driver sees whether the next traffic light will be green on arrival (while driving at the legally permitted speed). If not, a countdown to the next green phase starts, so the driver can take his/her foot off the gas pedal in good time. In the future, it is also conceivable that Audi e-tron models rolling toward a red light could use more of the braking energy to charge the battery. Car-to-x technology will also soon make it possible for a column of vehicles waiting at a red light to pull away almost simultaneously when it turns green. The throughput of vehicles during each green phase should thus improve considerably.
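The arithmetic behind such a countdown can be sketched as follows. The function name, phase schedule and all numbers are invented for illustration, not Audi's implementation; real systems receive the signal phases from the infrastructure:

```python
# Invented sketch of the "Time-to-Green" arithmetic. Given the distance
# to the light, the permitted speed and the repeating phase schedule,
# return 0 if the car arrives on green, otherwise the countdown in
# seconds until the next green phase.

def time_to_green(distance_m, speed_limit_kmh, cycle):
    """cycle: list of (color, duration_s) tuples repeating from t=0;
    must contain at least one green phase."""
    arrival_s = distance_m / (speed_limit_kmh / 3.6)   # km/h -> m/s
    t = arrival_s % sum(d for _, d in cycle)           # time within the cycle
    for i, (color, duration) in enumerate(cycle):
        if t < duration:
            if color == "green":
                return 0.0                             # arrives on green
            wait = duration - t                        # rest of current phase
            for nxt_color, nxt_dur in (cycle * 2)[i + 1:]:
                if nxt_color == "green":
                    return wait
                wait += nxt_dur                        # add non-green phases
        t -= duration

# 300 m to the light at 50 km/h with a 30 s red / 30 s green cycle:
# arrival after 21.6 s falls in the red phase, 8.4 s before it turns green.
countdown = time_to_green(300, 50, [("red", 30), ("green", 30)])
```

With the countdown showing 8.4 seconds in this example, the driver knows lifting off the gas now means arriving just as the light turns green.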

Traffic light information helps drivers drive with greater foresight, which has a positive effect on traffic flow. In the future, it will also be coupled with an intelligent navigation system, for example, and usable in conjunction with new drive concepts. A “green wave” of traffic lights could then be built into the optimal route plan.

“On Street Parking”:
A further car-to-x service is the parking space search function, which Audi has developed under the project name “On Street Parking”. Cars equipped with car-to-x technology automatically report to servers in the cloud when they enter and leave a parking spot. The application registers parking maneuvers based on various parameters, such as engine control signals, gear changes, steering angle and speed.

Using the information supplied by ultrasonic sensors or a camera, in future the system will also be able to identify vacant parking spaces while on the move. It calculates the number of free parking spaces on the side of the road based on statistical models that consider factors such as the time of day. The service shows the driver in real time the probability of finding a free parking space, making it easier to find a spot, particularly in city centers. Unnecessary time spent searching for a parking space is thus saved and that also reduces traffic on the road.
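The statistical idea can be sketched as follows; the occupancy profile, the independence assumption and all figures are invented for illustration:

```python
# Invented sketch of the parking-probability estimate: reported enter/
# leave maneuvers yield a per-space occupancy probability for each hour;
# assuming spaces are independent, the chance that at least one of
# n spaces is free is 1 - p ** n.

def p_free_space(n_spaces, occupancy_by_hour, hour):
    """P(at least one of n_spaces is free) at the given hour."""
    p_taken = occupancy_by_hour[hour]
    return 1.0 - p_taken ** n_spaces

# Hypothetical profile learned from reported maneuvers: busier at 6 p.m.
profile = {10: 0.70, 18: 0.95}
p_morning = p_free_space(8, profile, 10)   # ~0.94
p_evening = p_free_space(8, profile, 18)   # ~0.34
```

Even with eight spaces on the street, the probability of finding one free drops from about 94 percent mid-morning to about 34 percent in the evening rush – exactly the kind of real-time estimate the service shows the driver.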

At the same time, emissions in major cities could be effectively reduced. In today’s rush hour, hundreds of cars often spend up to 30 minutes driving around residential areas looking for a parking space. In the future, free parking spaces at the side of the road and in parking garages will be reliably shown to the driver, who then benefits from a direct journey to the destination. A simple example calculation shows how much fuel and how many emissions this could save: an average passenger car consumes more than five liters of fuel per 100 kilometers (1.3 US gal per 62.1 mi) in urban traffic, and some drivers cover roughly this distance – 100 kilometers – in urban areas each month simply looking for a parking space. Over a year, such a car therefore burns more than 50 liters (13.2 US gal) of fuel on the search alone – an entire tank.
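The example calculation above can be checked step by step, using the stated figures of just over 5 L/100 km urban consumption and roughly 100 km per month driven while searching:

```python
# Reproducing the press text's example calculation with its own figures.

consumption_l_per_100km = 5.0     # stated urban consumption (lower bound)
search_km_per_month = 100.0       # stated monthly parking-search distance

search_km_per_year = search_km_per_month * 12                     # 1200 km
fuel_l_per_year = search_km_per_year / 100.0 * consumption_l_per_100km

# 60 liters per year at exactly 5 L/100 km – consistent with the text's
# "more than 50 liters", roughly one tank spent purely on the search.
```

Since 5 L/100 km is itself a lower bound, the annual figure of more than 50 liters follows directly.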

Voice control
The next stage of voice control can be seen in a hybrid concept. It answers the driver’s questions in two ways: on the one hand, it uses knowledge about the user’s preferences stored in the vehicle, while on the other hand it calls up knowledge from the cloud. What’s more, the driver can formulate his/her questions or instructions freely – the self-teaching dialog manager reacts, asks questions of its own where needed, or offers a list of possible options. In dialog with the system, the driver can switch between menu areas: for example, he/she can call a contact from the address book and have the navigation system adopt that contact's address as the destination for route guidance. In addition to the destination search, the new hybrid voice control also covers media, climate control, some telephone functions and some Audi connect services. In Europe, these work across borders.

Audi Fit Driver
Every Audi today is equipped with the latest technology and offers top-level comfort and safety. As a private place of retreat and a fully networked space, the car is not only an ideal place for monitoring fitness levels – it can also actively improve the driver's health and well-being. The Audi Fit Driver project turns the car into an empathetic assistant that, in many situations, knows what the driver needs.

The number of users of so-called wearables – fitness bands or smartwatches – continues to grow. These wrist-worn devices monitor vital parameters such as pulse rate or skin temperature. In future development stages, their data can be combined with that of the vehicle sensors, allowing reliable statements about the driver's current condition, to which the car can then adapt individually. If the upcoming Audi Fit Driver system detects increased stress or fatigue, for example, the vehicle systems respond in a relaxing, vitalizing or protective manner. Thanks to intelligent algorithms, the system gets to know the driver better and better.
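How wearable and vehicle data might be fused into a driver-state estimate can be illustrated with a deliberately crude rule-based sketch. All thresholds, signals and labels are invented; the actual Audi Fit Driver algorithms are not public:

```python
# Hypothetical illustration only: fuse wearable vital signs with vehicle
# data into a coarse driver-state label the car could react to.

def driver_state(heart_rate_bpm, skin_temp_c, steering_corrections_per_min):
    """Toy rule-based fusion of wearable and vehicle-sensor signals."""
    score = 0
    if heart_rate_bpm > 100:                 # elevated pulse: possible stress
        score += 1
    if skin_temp_c < 30.0:                   # cool skin can accompany stress
        score += 1
    if steering_corrections_per_min > 25:    # erratic steering: fatigue/stress
        score += 1
    if score >= 2:
        return "stressed"
    return "tense" if score == 1 else "relaxed"
```

A real system would replace these hand-picked thresholds with the learned, personalized models the text describes, but the structure – several signals fused into one actionable state – is the same.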

For the first time, Audi Fit Driver will make it possible to actively reduce stress and increase concentration while in the vehicle. If the system detects high stress, this can be reduced by means of a special breathing technique: the instructions are shown in the Audi virtual cockpit as biofeedback, similar to techniques used in top-level sport, and a voice over the loudspeakers guides the driver through the exercise. Whether relaxing breathing exercises, energizing seat massage to the beat of the music, special climate control functions, adaptive infotainment measures or perfectly matched interior lighting moods – the aim of Audi Fit Driver is a driving experience optimally suited to the driver's current condition, so that he/she leaves the vehicle at the destination more relaxed than when getting in.

In a later expansion stage, Audi Fit Driver could incorporate assistance and safety systems, as well as future systems for piloted driving. In extreme situations, an Audi could initiate a piloted emergency stop and issue an emergency call using the eCall system.