
Fraunhofer Institute: How Drivers and Vehicles Will Communicate in the Future

How can communication between vehicle and driver be optimized depending on the degree of automation? A Fraunhofer research project is investigating this question. Sensors for interior monitoring and speech models are intended to increase safety and comfort.

An image processing model processes visual data on demand. The researchers are working on extracting relevant information from the image and providing it for various vehicle functions, such as AI assistants and safety systems. (Photo: Fraunhofer IOSB/Zensch)
By Anna Barbara Brüggmann

In a few years, drivers might hear the following sentences from their vehicle: "Warning, if you continue reading now, you might get motion sickness on the winding road. In five minutes, we'll be on the highway, then it will be better," or "It will start raining soon and we need to end the automatic driving. Please prepare to drive yourself for a bit. I'm sorry you have to safely stow your laptop now. Safety first."

According to a research team from the Fraunhofer Institutes for Optronics, System Technologies and Image Exploitation IOSB and for Industrial Engineering IAO, as the degree of vehicle automation increases, the interaction with humans also needs to be rethought.

Research Project Karli

The researchers are taking on this task together with ten partners, including Continental, Ford, and Audi, as well as a number of medium-sized companies and universities in the Karli project.

The name Karli stands for Artificial Intelligence (AI) for Adaptive, Responsive, and Level-Compliant Interaction in the Vehicle of the Future. Current automation levels range from non-automated (0) and assisted (1) through partially automated (2), highly automated (3), and fully automated (4) to autonomous (5).

“In the Karli project, we are developing AI functions for automation levels two to four. For this, we collect the states of drivers and design different human-machine interactions that are typical for the respective levels,” explains project coordinator Frederik Diederichs from the Fraunhofer Institute for Optronics, System Technologies, and Image Exploitation IOSB in Karlsruhe.

Adapt Interaction to Automation Level

Depending on the automation level, drivers must either concentrate on the road or are free to engage in other activities. According to the researchers, they then have ten seconds to take back the wheel, or in some cases do not need to intervene at all.

Defining suitable interactions for each level is very complex due to the different requirements for users and the possibility of switching between various levels depending on the road situation.

It must also always be clear to drivers which automation level they are in – and this must be ensured through interaction and design.

Warnings and Notices

The applications developed in the Karli project focus on three main areas: Firstly, warnings and notices aim to promote level-appropriate behavior and, for example, prevent the driver from being distracted at a moment that requires attention on the road.

The user interaction must be adapted to the respective level – visually, acoustically, haptically, or a combination of these possibilities. AI agents would control the interaction – the partners are evaluating their performance and reliability.

Preventing Motion Sickness

A second focus addresses motion sickness, which is to be predicted and minimized, as it is one of the biggest problems of passive driving. According to the project, between 20 and 50 percent of people suffer from motion sickness.

“By matching the activities of the occupants with predictable accelerations on winding roads, we can enable AI to give the right occupant tips for avoiding motion sickness at the right time, based on their current activities,” said Diederichs.
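The timing idea Diederichs describes can be sketched as a simple rule: warn an occupant when a visually demanding activity coincides with predicted accelerations on the route ahead. This is a minimal illustrative sketch, not project code; the function name, activity labels, and comfort threshold are all assumptions.

```python
# Hypothetical sketch of the timing logic described above. The activity
# labels and the 2.5 m/s^2 comfort threshold are illustrative assumptions.

def motion_sickness_warning(activity, predicted_lat_accel, threshold=2.5):
    """Return a warning if a visually demanding activity coincides with
    upcoming lateral accelerations (m/s^2, one value per second ahead)."""
    risky_activities = {"reading", "typing", "watching video"}
    if activity not in risky_activities:
        return None
    # Find the first upcoming second exceeding the comfort threshold.
    for seconds_ahead, accel in enumerate(predicted_lat_accel):
        if abs(accel) > threshold:
            return (f"Winding road in {seconds_ahead} s - consider "
                    f"pausing {activity} to avoid motion sickness.")
    return None

print(motion_sickness_warning("reading", [0.5, 1.0, 3.2, 2.8]))
```

A sleeping passenger would receive no warning at all, since only activities that fix the gaze inside the cabin are considered risky here.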

For this, so-called generated user interfaces, abbreviated 'GenUIn', are to be used for individualized interaction between humans and AI.

AI Interaction

The third focus of the research project is this AI interaction. GenUIn aims to create individualized communications and provide tips on how to reduce nausea if it occurs.

The current activity is captured by sensors. The tips could then relate to the activity, but also consider what options are available in the current context.

Additionally, users should also be able to express wishes, thus gradually personalizing the interaction in the vehicle.

The automation level must always be taken into account: for example, short and purely verbal hints if the driver is concentrating on the road, or more detailed and visual hints if the vehicle is currently handling the driving task.
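The adaptation described above can be illustrated with a small sketch: hints stay short and purely verbal while the driver is responsible for the road, and become more detailed and visual once the vehicle handles the driving task. The function and the level cutoff are illustrative assumptions, not the project's implementation.

```python
# Illustrative sketch (not Karli project code) of level-dependent hints:
# terse audio while the driver monitors the road, richer output otherwise.

def format_hint(automation_level, message):
    """Choose modality and verbosity of a hint by automation level."""
    if automation_level <= 2:
        # Driver must watch the road: keep only the first sentence, audio-only.
        return {"modality": "audio", "text": message.split(".")[0] + "."}
    # Level 3/4: vehicle drives, so detailed visual output is acceptable.
    return {"modality": "visual+audio", "text": message}

hint = "Rain ahead. Automated driving ends in five minutes. Please stow your laptop."
print(format_hint(2, hint))  # short, audio-only
print(format_hint(4, hint))  # full message, visual and audio
```

The design point is that the same underlying message is rendered differently per level, so the content logic stays independent of the presentation logic.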

Different Sensors

Various AI-supported sensors are to be used to capture activities in the vehicle, above all optical sensors in interior cameras, which will become mandatory under current legislation on autonomous driving in order to ensure that the driver remains capable of taking over.

The visual data from the cameras will then be combined with large language models into so-called Vision-Language Models (VLM).

According to the research team, these are the basis for modern driver assistance systems in (semi-)autonomous vehicles to semantically capture and respond to situations within the interior.
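The pipeline the researchers describe can be sketched schematically: a vision component turns an interior camera frame into a semantic scene description, which a language component then turns into a context-appropriate response. Both model calls are stubbed here with placeholder functions, since the article does not name the models used; everything below is an assumed illustration of the architecture, not the project's code.

```python
# Schematic sketch of a vision-language pipeline for the vehicle interior.
# Both components are stubs standing in for real models.

def describe_interior(frame):
    # Stub for the vision component: a real model would return a
    # semantic summary of the camera frame passed in.
    return "driver attentive, passenger working on a laptop"

def assistant_response(scene, event):
    # Stub for the language component: a real model would generate a
    # tailored message from the scene description and the driving event.
    return f"Given '{scene}' and '{event}': please stow the laptop safely."

frame = b"<interior camera frame>"
scene = describe_interior(frame)
print(assistant_response(scene, "handover to driver in 5 minutes"))
```

The separation matters for data protection: only the compact scene description, not the raw camera image, needs to reach downstream functions.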

The system should act almost like a butler who stays in the background but knows the context and offers the occupants the best possible support, says Diederichs.

Anonymization and Data Protection

According to Diederichs, optimal anonymization and data security, as well as transparent and explainable data capture, are essential.

“Decisive factors for the acceptance of such systems are trust in the service provider, data security, and a direct benefit for the drivers,” emphasizes Frederik Diederichs and further explains: “Not everything that is in the camera's field of view is evaluated. It must be transparent what information a sensor captures and what it is used for. We are researching how this can be ensured in our working group Xplainable AI at Fraunhofer IOSB.”

In another project called Anymos, the Fraunhofer researchers are working on anonymizing camera data, processing it data-efficiently, and protecting it effectively.

Data Efficiency

The research project, funded by the Federal Ministry for Economic Affairs and Climate Action, also deals with the topic of data efficiency.

“Our Small2BigData approach requires only a small amount of high-quality AI training data, collected empirically and generated synthetically. This data forms the basis for car manufacturers to know what data they need to collect later in series production to use the system,” explains Diederichs.

This should keep the data effort manageable and make the project results scalable. Recently, the research team put a mobile research lab based on a Mercedes EQS into operation to better research user needs in level 3 automated driving on the road.

There, the findings from the Karli project are to be tested and evaluated in practice. According to the information, the first functions could be available in series vehicles as early as 2026.

Translated automatically from German.