Software Defined Vehicles

Interview with Steve Stoddard, Sonatus

“AI model inputs can be tagged to VSS signal names upfront”

Steve Stoddard brings experience in scaling AI across distributed vehicle architectures. At ACC US 2026, he will outline how consolidated compute platforms, abstraction layers and robust toolchains can accelerate the deployment of intelligent, adaptive vehicles.

Scaling edge AI across vehicle platforms remains a complex challenge. From compute constraints and legacy architectures to abstraction and toolchains, Steve Stoddard, Principal Product Manager, AI at Sonatus, explains where scalability breaks down and what OEMs must prioritise next.

As vehicles evolve into software-defined and increasingly AI-driven systems, deploying intelligence at the edge is becoming a decisive architectural challenge. Automakers must balance model complexity, compute headroom, safety requirements and cost efficiency, all while working across heterogeneous platforms and legacy constraints.

Steve Stoddard, Principal Product Manager, AI at Sonatus, focuses precisely on this intersection of architecture and scalability. In his upcoming Automotive Computing Conference US 2026 presentation, “Beyond Autonomy: Scaling Edge AI for Smarter, Safer Vehicles”, he explores how edge AI can be deployed efficiently across general-purpose ECUs, gateways and domain controllers. Ahead of the conference, we had the opportunity to speak with him.

ADT: What architectural challenges arise when scaling edge AI across vehicle platforms?

Stoddard: The biggest challenges arise from platform heterogeneity. Each platform has its own mix of ECUs, SoCs, sensors, data signals, and message definitions. An AI model that runs cleanly on one vehicle may be non-deployable on another because the target ECU lacks compute headroom, cannot access the required data inputs, or uses different message naming. Even when the logic is portable, AI models often require calibration for different vehicles, along with engineering work to reconfigure them for different message names, signal parameters, or timing. This fragmentation turns every deployment into a bespoke integration effort rather than a scalable capability.

How do OEMs balance intelligence, compute constraints and cost in real world deployments?

In-vehicle AI workloads are often segregated by domain. ADAS and other safety-critical controls are allocated to dedicated ECUs during architecture design and core function definition. Other AI features, such as comfort, personalisation, and convenience, are often relegated to shared resources on infotainment ECUs or other high-performance computing platforms when headroom allows. By the mid-to-late stages of a vehicle programme, that headroom is usually exhausted. To work around compute constraints, it is common to implement intelligent features as simpler, calibrated algorithms backed by lookup tables covering the applicable operating domains.
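As an illustration of the lookup-table workaround Stoddard describes, the sketch below implements a comfort feature as a calibrated table with linear interpolation rather than an on-device model. The feature, breakpoints and values are invented for the example; a real calibration would come from vehicle testing.

```python
# Hypothetical calibrated feature: ambient temperature (degrees C) mapped to a
# cabin fan duty cycle (%) via a lookup table. All numbers are invented.
from bisect import bisect_right

TEMP_BREAKPOINTS = [-10.0, 0.0, 10.0, 20.0, 30.0, 40.0]
FAN_DUTY = [20.0, 25.0, 35.0, 50.0, 75.0, 100.0]

def fan_duty_for_temp(temp_c: float) -> float:
    """Linearly interpolate the calibration table; clamp outside its range."""
    if temp_c <= TEMP_BREAKPOINTS[0]:
        return FAN_DUTY[0]
    if temp_c >= TEMP_BREAKPOINTS[-1]:
        return FAN_DUTY[-1]
    i = bisect_right(TEMP_BREAKPOINTS, temp_c) - 1
    t0, t1 = TEMP_BREAKPOINTS[i], TEMP_BREAKPOINTS[i + 1]
    frac = (temp_c - t0) / (t1 - t0)
    return FAN_DUTY[i] + frac * (FAN_DUTY[i + 1] - FAN_DUTY[i])
```

The appeal for constrained ECUs is that the runtime cost is a binary search and one multiply, with behaviour that is easy to validate across the table's operating domain.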

What role does software abstraction play in accelerating edge AI adoption?

The abstraction of CAN or Ethernet networks and signal definitions enables greater portability of AI models from one vehicle platform to another. For example, AI model inputs can be tagged to VSS signal names upfront. VSS-to-CAN signal mappings can then be maintained for each vehicle model, and when an OEM wants to port an AI model from one vehicle to another, the necessary conversions are handled automatically via these mappings. Historically, maintaining these mappings has been labour-intensive, but LLM-based systems can now automate much of the translation and maintenance. This allows OEMs to amortise model development and calibration across more vehicles, reducing both engineering cost and time to deployment.
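The mapping idea Stoddard outlines can be sketched as follows: model inputs are declared once against standard VSS paths, and a per-platform table resolves each path to that platform's CAN signal. The VSS paths are real specification entries, but the platform names and CAN message/signal names are invented for illustration.

```python
# Hypothetical sketch: AI model inputs tagged to VSS paths, resolved to
# platform-specific CAN signals via a maintained per-vehicle-model mapping.

# Declared once, against standard VSS signal names.
MODEL_INPUTS = [
    "Vehicle.Speed",
    "Vehicle.Powertrain.CombustionEngine.ECT",
]

# Maintained per vehicle model: VSS path -> (CAN message, signal name).
# All message and signal names below are made up for the example.
VSS_TO_CAN = {
    "platform_a": {
        "Vehicle.Speed": ("VehDyn_01", "VehSpd"),
        "Vehicle.Powertrain.CombustionEngine.ECT": ("Eng_02", "CoolantTemp"),
    },
    "platform_b": {
        "Vehicle.Speed": ("ChassisStatus", "Speed_kmh"),
        "Vehicle.Powertrain.CombustionEngine.ECT": ("EngineData", "ECT_degC"),
    },
}

def bind_inputs(platform: str) -> dict:
    """Resolve each VSS-tagged model input to the platform's CAN signal."""
    mapping = VSS_TO_CAN[platform]
    missing = [p for p in MODEL_INPUTS if p not in mapping]
    if missing:
        raise KeyError(f"{platform} lacks mappings for: {missing}")
    return {path: mapping[path] for path in MODEL_INPUTS}
```

Porting the model to a new vehicle then reduces to adding one mapping table, which is the maintenance task the interview suggests LLM-based tooling can help automate.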

From your perspective, which edge AI capabilities will deliver real customer value first?

Outside of ADAS, driver and occupancy monitoring, and autonomy features, we see virtual sensors gaining interest among OEMs because they reduce BOM costs, with some pass-through savings for customers. Fleets are already using similar virtual sensors to lower operational costs. Diagnostics and anomaly detection also have strong potential to improve the owner experience. Much of that work remains cloud-based for now, but we will see it increasingly move in-vehicle over the next 12 to 24 months.
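To make the virtual-sensor idea concrete, the sketch below estimates a quantity that would otherwise need a dedicated physical sensor from signals already on the bus. The sensed quantity, inputs and coefficients are all invented placeholders; a production virtual sensor would fit its model from fleet data.

```python
# Hypothetical virtual sensor: estimate battery coolant inlet temperature
# from existing bus signals instead of fitting a physical sensor.
# Coefficients are made-up calibration values for illustration only.

def estimate_coolant_inlet_temp(
    ambient_c: float,
    battery_current_a: float,
    pump_duty_pct: float,
) -> float:
    """Simple linear virtual-sensor model with placeholder coefficients."""
    return (
        0.9 * ambient_c                   # ambient dominates at steady state
        + 0.02 * abs(battery_current_a)   # resistive-heating proxy
        - 0.05 * pump_duty_pct            # more coolant flow lowers inlet temp
        + 5.0                             # calibrated offset
    )
```

The BOM saving comes from deleting the physical sensor and its wiring; the engineering cost moves into calibrating and validating the estimator across the operating domain.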

Which AI enabled vehicle functions already generate real value in production today beyond showcase demos?

ADAS features like Automatic Emergency Braking, Active Cruise Control, and Lane Keep Assist represent the most mature and widely deployed in-vehicle AI/ML models. These deliver tremendous safety and driver-assurance benefits at scale. Driver monitoring features, including the detection of drowsiness and distraction, are also quite mature and face upcoming regulations mandating their use in key regions.

What are the biggest prerequisites in architecture, safety and toolchains for scaling AI across vehicle portfolios?

Architecture is critical. Standardised runtimes, data interfaces, and pipelines, plus the abstraction of data signals and hardware, are key to scaling. Consolidating compute and selecting processors with neural compute capability creates greater capacity for AI features. Safety requires separating safety-critical and non-critical workloads and deploying models in containerised environments, which helps OEMs control the consumption of compute and memory resources. Toolchains must provide robust model version management, observability into model KPIs and on-vehicle resource usage, and deployment mechanisms that support model promotion, relegation, and rollback across diverse vehicle platforms.
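The promotion/relegation/rollback capability described above can be sketched as a minimal per-platform version registry. The class and API are invented for illustration, not a description of any specific toolchain.

```python
# Hypothetical sketch of model version management per vehicle platform,
# supporting promotion of a new version and rollback to the previous one.

class ModelRegistry:
    """Tracks the version history of a deployed model per platform."""

    def __init__(self):
        self._history = {}  # platform -> list of versions, last is active

    def promote(self, platform: str, version: str) -> None:
        """Deploy a new model version; it becomes the active one."""
        self._history.setdefault(platform, []).append(version)

    def rollback(self, platform: str) -> str:
        """Relegate the active version and return the one restored."""
        versions = self._history.get(platform, [])
        if len(versions) < 2:
            raise RuntimeError("no earlier version to roll back to")
        versions.pop()
        return versions[-1]

    def active(self, platform: str):
        """Return the currently active version, or None if none deployed."""
        versions = self._history.get(platform, [])
        return versions[-1] if versions else None
```

A real toolchain would add the observability side the interview mentions, gating promotion and triggering rollback on model KPIs and on-vehicle resource usage rather than on manual calls.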

Which architectural principles are shaping next generation vehicle platforms today?

OEMs continue to target fully software-defined networks with service-oriented interfaces, richer diagnostics, and fewer but more capable high-performance computing units within zonal architectures. After encountering challenges with wholesale, cross-platform architecture changeovers, however, a number of OEMs have shifted to a more incremental transition, choosing to evolve existing architectures rather than replace them outright.

Where do legacy concepts still slow down scalability and long term upgradability?

The legacy approach of an ECU for every function, coupled with software and AI model integration at the signal level without abstraction, forces every AI feature to become a custom integration project filled with glue code. Outdated data capture policies can also slow model development and re-training, while legacy over-the-air release trains built for infrequent updates conflict with the iterative nature of AI model improvement. These patterns create friction along the entire AI lifecycle.

Partnerships are becoming essential across the automotive computing stack. Which type of partnership will matter most in the next three years: strategic, technological, or regulatory? And what should OEMs prioritise first?

Technological partnerships will be the most important over the next three years, particularly for scaling AI models across vehicle fleets. The AI model lifecycle spans vehicle subsystems, data science, machine learning, silicon, software development, and vehicle integration: too many domains to maintain fully in-house at a competitive cost. The most important priority for OEMs is to identify an AI model toolchain that can meet their needs while standardising runtimes, interfaces, and pipelines. By abstracting the data and hardware, OEMs can then effectively leverage strategic partnerships with model vendors and Tier 1 suppliers to scale.