Software Defined Vehicles

Interview with Felix Martin, Tasking

“AI makes diligence affordable”

Felix Martin, Research Engineer and AI expert at Tasking.
Martin's talk at ACC US 2026 will give a practical overview of AI-assisted embedded development, showing how the Model Context Protocol (MCP) and reusable “skills” connect an LLM to requirements, implementation, on-target tests, and static/dynamic analysis in a closed loop.

AI-driven tools are reshaping embedded automotive development – not by replacing engineers, but by accelerating testing, validation and review workflows. Felix Martin, Research Engineer at Tasking, explains where AI already adds value and where safety-critical limits remain.

Felix Martin, Research Engineer at Tasking, has been working at the intersection of embedded systems and development toolchains for more than a decade. After roles at Vector Informatik and Continental, he joined Tasking in 2017 and today focuses on integrating AI-driven workflows into automotive software environments.

At the Automotive Computing Conference US 2026, Martin will speak on “Empowering Automotive Development with AI-Driven Toolchains,” outlining how large language models evolve from chat interfaces into tool-connected engineering agents. Ahead of the conference, we had the opportunity to speak with him.

ADT: How are AI-driven tools changing the way automotive software is developed and validated?

Martin: Maybe counter-intuitively, the biggest impact of AI in automotive development isn't code generation. Most production code is already generated by AUTOSAR tooling, model-based development environments, or code generators. Handwritten C is a shrinking fraction of code in vehicles. The real shift is that engineers who use tools to generate, test, and validate code can now use AI to interface with these tools directly, and also to learn how to better utilize them. Think about writing a test script for a hardware-in-the-loop setup: traditionally a significant effort of reading API docs, writing, and debugging by hand. With an AI agent that understands the tool's API and scripting environment, the same results can be created from the test specification in a fraction of the time. That pattern repeats across the entire software development workflow. AI doesn't replace the debugger, profiler, static analyzer, or test tool. It removes friction between the engineer and the tool, enabling more iterations and faster insight.
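The closed-loop pattern Martin describes can be sketched in a few lines. This is a hypothetical illustration, not any actual Tasking or MCP implementation: the tool functions are stubs standing in for a real static analyzer and test runner, and the finding text is invented.

```python
# Hypothetical sketch: an agent interfaces with existing tools rather than
# replacing them. Each iteration runs the tools and collects structured
# results the agent can reason over before proposing the next change.
# All tool names and findings below are invented for illustration.

def run_static_analysis(source: str) -> list[str]:
    """Stub for a static analyzer; returns a list of findings."""
    return ["MISRA 10.3: implicit conversion"] if "int x = 3.5" in source else []

def run_unit_tests(source: str) -> bool:
    """Stub for an on-target or hardware-in-the-loop test run."""
    return "int x = 3.5" not in source

# The agent's "toolbox": the debugger, profiler, analyzer, test tool, etc.
TOOLS = {"static_analysis": run_static_analysis, "unit_tests": run_unit_tests}

def agent_iteration(source: str) -> dict:
    """One closed-loop pass: call every tool, gather structured feedback."""
    return {name: tool(source) for name, tool in TOOLS.items()}

report = agent_iteration("int x = 3.5;")
```

The point of the sketch is the shape of the loop, not the stubs: the AI sits between the engineer and deterministic tools, so each iteration yields tool-verified feedback rather than unchecked generated code.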

What practical value do LLM-based engineering agents already deliver today?

Even today, LLM-based agents deliver practical value across the embedded development workflow: writing test scripts, performing code reviews, running static analysis, optimizing algorithms, and measuring performance. The complete toolkit of embedded development can already be augmented. The key insight is that this isn't primarily about generating more code. It's about working more thoroughly. Tasks that were previously skipped due to time pressure, like writing comprehensive test coverage or reviewing every MISRA violation in context, become feasible when an agent can do the heavy lifting. AI makes diligence affordable.

Where do current AI-assisted workflows still fall short in safety-critical development?

The honest answer is that the same limitations that apply to human-written code apply here. In a safety-critical environment, no code can enter the system without a qualified review process, and that doesn't change because AI wrote it. Human review remains mandatory. That said, this isn't a new constraint. Code reviews were required before AI too. What changes is that AI can now assist in that review process, flagging issues, providing context, and surfacing potential violations faster than a manual pass alone. The deeper limitation is non-determinism. LLMs are fundamentally probabilistic tools, which makes them difficult to qualify under current safety standards like ISO 26262. Until that changes, AI sits in an advisory role: it can accelerate workflows, generate candidates, and provide additional insight, but the human engineer remains accountable for every artifact that enters the safety case. This is also where static analysis and testing become especially important. These are deterministic, qualifiable processes, and AI can help set them up faster and run them more thoroughly, which is where the real near-term value lies.

How do you ensure trust and determinism when AI becomes part of the development toolchain?

The answer is perhaps less exotic than the question implies. The same things that ensure trust when working with human engineers apply here too: structured processes, review steps, automated tests, and traceability. AI doesn't require a fundamentally different approach to governance; it requires the existing approach to be applied consistently. What does change is that the toolchain has to keep up. If the tools an AI agent operates cannot log what was done, expose results in a structured way, or integrate into existing review workflows, the process breaks down. This is why it matters that development tools adapt alongside AI capabilities rather than being left behind. An agent is only as trustworthy as the toolchain it operates through.
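The logging requirement Martin mentions can be made concrete with a small sketch. This is a hypothetical illustration of the idea, not a real toolchain API: every tool call an agent makes is wrapped so it leaves a structured, reviewable audit entry.

```python
# Hypothetical sketch: wrap each agent tool invocation so it produces a
# structured audit entry (tool name, arguments, result, timestamp) that
# can feed an existing review workflow. Names here are invented.
import json
import time

def logged_call(log: list, tool_name: str, fn, *args):
    """Invoke a tool and append a traceable record of what was done."""
    entry = {
        "tool": tool_name,
        "args": [repr(a) for a in args],
        "timestamp": time.time(),
    }
    entry["result"] = repr(fn(*args))  # record the outcome alongside the call
    log.append(entry)
    return entry["result"]

audit_log: list = []
# Example invocation: a trivial stand-in tool that counts source lines.
logged_call(audit_log, "line_count", lambda src: src.count("\n") + 1, "a\nb\nc")
print(json.dumps(audit_log, indent=2))  # structured output for human review
```

The design choice is that traceability lives in the wrapper, not in each tool: existing deterministic tools stay unchanged, and the review process sees a uniform log regardless of which tool the agent used.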

Which AI-enabled vehicle functions already generate real value in production today beyond showcase demos?

That question goes somewhat beyond the scope of the talk, but many production vehicles already use AI-driven functions well beyond demos, especially in advanced driver-assistance systems (ADAS). Systems like adaptive cruise control, automatic emergency braking, lane-keeping assist, and collision avoidance rely on AI-enhanced sensor fusion and decision-making to improve safety and reduce crashes. Other real-world AI applications include predictive analytics for vehicle performance and maintenance, as well as personalized in-vehicle user experiences such as intelligent assistants that learn driver preferences and adjust settings or respond to voice commands.

What are the biggest prerequisites, in terms of architecture, safety, and toolchains, for scaling AI across vehicle portfolios?

The prerequisites are well-known but hard to execute simultaneously: standardized software architectures, toolchains that support certification requirements from the start, and hardware platforms that don't lock you into a single silicon vendor. The challenge is less about knowing what is needed and more about aligning organizations, suppliers, and standards bodies to get there at the same pace.