What is the legal situation regarding autonomous driving?
The decision-making capabilities of autonomous vehicles lead to a legal dilemma.
Autonomous driving is approaching: New laws, pilot projects, and technical advancements are changing the framework conditions. Find out which regulations apply in Germany, how EU requirements are implemented, and who is liable in the event of an accident.
Autonomous driving has been considered one of the biggest drivers of the mobility of the future for years. The promises range from greater road safety to lower emissions and more efficient use of existing infrastructure. For passenger transport, the technology means nothing less than a fundamental change in the way we experience mobility - towards a connected, multimodal system in which the car becomes a digital service. Hopes are also high in freight transport: autonomous trucks could make transport more efficient and mitigate the driver shortage.
It is no wonder, then, that the federal government gives the topic high priority: in December 2024, it adopted a national strategy for autonomous driving. The goal is to initiate regular operations in public spaces from 2026, to create the largest contiguous operational area in the world by 2028, and to achieve full integration into a connected mobility system by 2030 at the latest.
What laws apply to autonomous driving in Germany?
As early as 2017, the legislator adapted the Road Traffic Act (StVG) and for the first time created a legal framework for highly automated systems at SAE Level 3. Under clearly defined conditions - such as on motorways at moderate speeds - the driver was allowed to temporarily hand over the driving task to the system, but had to remain ready to intervene at any time.
The decisive step followed in 2021: with the amendment of the StVG, vehicles at SAE Level 4 received a legal basis. They are allowed to operate without a human driver in previously approved operational areas - for example, as shuttles on designated routes or for hub-to-hub transport in logistics. Germany thus became one of the first countries worldwide to permit the regular operation of such vehicles beyond mere pilot projects.
At the end of 2024, the federal government concretised its vision in a strategy paper. Autonomous mobility solutions are to be initially established in public transport and logistics. Germany is thus not only to become a pioneer but also a lead market for autonomous driving. This goal is also anchored in the coalition agreement. However, despite major announcements, key questions remain unanswered: how quickly can approvals actually be granted? What infrastructure is needed to safely integrate vehicles into traffic?
Important foundations have also been laid at the European level. With the General Safety Regulation, in force since July 2022, uniform safety standards apply to automated vehicles. In addition, international UNECE regulations, such as the standard for Automated Lane Keeping Systems (ALKS), govern the use of Level 3 functions in road traffic. This harmonisation is intended to ensure that German vehicles are also approved throughout Europe. At the same time, it shows that autonomous driving cannot be regulated nationally alone but requires internationally coordinated rules.
Who is liable for autonomous vehicles?
One of the biggest criticisms of the legal framework in 2025 remains the question of liability. Although former Transport Minister Alexander Dobrindt emphasised as early as 2017 that the manufacturer should be held accountable in automated mode, this has not been implemented to date. The strict liability of the owner still applies, and fully autonomous journeys are no exception: the owner continues to bear the operational risk, and their insurance remains the first point of contact in the event of damage.
There is still no direct manufacturer liability in 2025. Manufacturers are liable only if it can be proven that a technical defect or design flaw caused the accident. In practice, this proof is difficult because today's vehicles consist of highly complex systems: sensors from suppliers, software from third parties, and AI algorithms interact in real time. In the event of damage, it is not always possible to determine unequivocally what actually caused the error. Although vehicles must be equipped with an event data recorder ('black box'), these systems can only provide clues, not a clear attribution of fault.
While Germany adheres to owner liability, other countries are discussing alternative models. In France, for example, manufacturer liability for certain scenarios is increasingly under consideration, and in the USA, some states already have special regulations for autonomous shuttle services. At the European level, the AI Act, adopted in 2024, could further fuel the discussion: it prescribes transparency and risk-management obligations for AI systems, which could in future also serve as a basis for liability questions.
What decisions are algorithms allowed to make?
A major hurdle remains the ethical and constitutional assessment: algorithms must not make decisions that weigh human lives against one another on the basis of personal characteristics such as age, gender, or health status. This red line was drawn by the Ethics Commission in 2017, which at the same time demanded that systems be designed so that critical situations do not arise in the first place and that, in case of doubt, the vehicle transitions to a minimal-risk state. These principles remain decisive to this day and feed into legislation and approval practice. The much-cited 'life-against-life' trade-off remains legally unregulated: the law does not permit programming that 'sacrifices' uninvolved parties or prioritises people on the basis of personal characteristics. A general prioritisation of the protection of human life over property damage and harm to animals is permissible; beyond that, the prohibition of discrimination applies.
If, despite all precautions, an autonomous car were to be involved in an accident, the life of a toddler must not be placed above that of an elderly woman, nor may uninvolved pedestrians be sacrificed. When weighing a group of people against an individual, the damage should be minimised, but the 'sacrosanct' individual must not be intentionally killed - a requirement that seems theoretically sound but confronts programmers with an impossible task. 'Who should we run over first? If we continue to set the agenda like this, we will not make progress,' says Christoph Lütge, director of the Institute for Ethics in AI at the Technical University of Munich, in a study by the &Audi Initiative. He calls for ethical principles to be made concrete on the basis of practical situations instead of emotionally and ideologically driven debates.
This automotiveIT article originally appeared in September 2021 and has been continuously updated since then.