Software Defined Vehicles
Interview with Chris Jacobs, Micron
“Memory solutions also enable lower power consumption”
As multimodal AI turns cars into data centers on wheels, memory and storage become critical. Chris Jacobs from Micron explains how new DRAM and flash architectures deliver the bandwidth, latency, and reliability needed for real-time in-vehicle AI.
Chris Jacobs is Vice President & GM, Automotive & Embedded Market Segments at Micron, where he leads a global team driving strategy and business development for automotive, industrial and embedded consumer memory and storage solutions. Before joining Micron, he spent more than two decades in leadership roles at Analog Devices, focusing on autonomous transportation, power, and high-speed data converters, and began his career as an engineer at Texas Instruments.
As a speaker at the Automotive Computing Conference 2025, Jacobs outlined how multimodal AI is transforming vehicles into data centers on wheels – and what this means for DRAM bandwidth, storage performance and system reliability. After the event, we conducted an in-depth interview with him.
As AI models become multimodal, vehicles are turning into data centers on wheels. From your perspective, what are the biggest memory and bandwidth challenges that come with bringing LLMs and VLMs into production vehicles?
As AI models become multimodal, vehicles are rapidly evolving into data centers on wheels, and this transformation brings significant memory and bandwidth challenges. Integrating large language models (LLMs) and vision-language models (VLMs) into production vehicles dramatically increases the demand for memory bandwidth and fast storage. For advanced autonomous driving systems at Level 4 and beyond, DRAM bandwidth requirements can surge to over 1 terabyte per second. The models themselves have substantial footprints – often several gigabytes even after quantization, which trades footprint against accuracy (perplexity). And because automotive systems typically demand real-time inference and fast cold-boot, both the memory and the storage subsystems come under significant pressure.
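To make these numbers concrete, here is a rough sizing sketch in Python. The 7-billion-parameter model, the quantization levels, and the 30 tokens-per-second target are illustrative assumptions, not figures from the interview; the point is that a memory-bound decoder streams roughly its full weight set per generated token, so required bandwidth scales with footprint times token rate.

```python
# Back-of-envelope sizing for an in-vehicle LLM.
# All figures below are illustrative assumptions, not Micron data.

PARAMS = 7e9                 # assumed 7B-parameter model
BYTES_PER_PARAM = {          # common quantization levels
    "FP16": 2.0,
    "INT8": 1.0,
    "INT4": 0.5,             # smaller footprint, higher perplexity
}
TOKENS_PER_SEC = 30          # assumed real-time generation target

for fmt, nbytes in BYTES_PER_PARAM.items():
    footprint_gb = PARAMS * nbytes / 1e9
    # A memory-bound decoder reads roughly all weights once per token,
    # so sustained DRAM bandwidth ~ footprint x token rate.
    bandwidth_gbs = footprint_gb * TOKENS_PER_SEC
    print(f"{fmt}: ~{footprint_gb:.1f} GB weights, "
          f"~{bandwidth_gbs:.0f} GB/s at {TOKENS_PER_SEC} tok/s")
```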
What does this enable in practice?
These capabilities enable vehicles to make split-second decisions and become operational almost immediately upon startup – both critical for safety and user experience. Meeting these needs involves deploying wide memory interfaces with advanced error-correction protocols. We must also stay focused on traditional edge-compute challenges such as system-integration complexity, routing, and thermal management. Automotive platforms in particular demand robust solutions that can withstand environmental extremes and support functional safety, often requiring ASIL-D compliance. All of these requirements must be balanced against cost, scalability, and reliability, which makes ecosystem collaboration essential to driving innovation in memory and storage technologies.
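As a sketch of why interface width matters, the following snippet computes peak LPDDR5X bandwidth as a function of bus width, using the publicly specified 8533 MT/s per-pin rate; the 1 TB/s target is the figure cited above, and the calculation deliberately ignores refresh, protocol, and ECC overheads.

```python
# Peak-bandwidth sketch for a wide LPDDR5X interface (simplified:
# ignores refresh, protocol overhead, and ECC traffic).

DATA_RATE_MTS = 8533    # LPDDR5X per-pin rate, mega-transfers/s
TARGET_GBS = 1000       # ~1 TB/s, the L4+ figure cited above

def peak_gbs(bus_width_bits: int) -> float:
    # One bit per pin per transfer; divide by 8 for bytes.
    return DATA_RATE_MTS * 1e6 * bus_width_bits / 8 / 1e9

for width in (32, 64, 128, 256, 512):
    print(f"{width:3d}-bit bus: ~{peak_gbs(width):5.0f} GB/s")

pins = TARGET_GBS * 1e9 * 8 / (DATA_RATE_MTS * 1e6)
print(f"~{pins:.0f} data pins (~{pins / 32:.0f} x32 channels) for 1 TB/s")
```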
At the ACC 2025, you discussed how memory architecture must evolve to keep pace with the rise of multimodal intelligence. What innovations is the industry exploring to handle these extreme data loads efficiently?
To keep pace with the rise of multimodal intelligence in vehicles, the automotive industry is exploring several cutting-edge innovations in memory architecture. One major advancement Micron brings to the market is the Direct Link ECC Protocol (DLEP), which boosts effective memory bandwidth by 15–25% while maintaining system safety through robust error correction – critical for automotive AI workloads. Another is Micron’s automotive near-memory LPDDR5X packaging, whose expanded bus widths deliver high bandwidth, low power, and improved signal integrity. On the storage front, next-generation SSDs and UFS 4.1 devices enable faster boot-up, efficient model loading, and the endurance to support evolving workloads and rapid inference. Together, these innovations address the challenges of extreme data loads, enabling automotive platforms to efficiently support advanced AI models and real-time processing as vehicles become increasingly intelligent and connected.
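A quick illustration of the boot-up point: loading multi-gigabyte model weights at startup is bounded by sequential read throughput. The model size and throughput values below are rough public ballparks chosen for illustration, not measured figures for any specific Micron part.

```python
# Cold-boot sketch: time to stream model weights from storage.
# Throughput values are rough ballparks, not measured device data.

MODEL_GB = 4.0                          # assumed quantized model size
SEQ_READ_GBS = {
    "UFS 3.1":       2.1,
    "UFS 4.x":       4.2,
    "PCIe Gen4 SSD": 7.0,
}

for device, gbs in SEQ_READ_GBS.items():
    print(f"{device}: ~{MODEL_GB / gbs:.2f} s to load {MODEL_GB:.0f} GB")
```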
Micron has long been at the forefront of memory innovation. How do you see your technologies enabling real-time AI performance in the car – while balancing cost, energy consumption, and system reliability?
Innovations like LPDDR5X with the Direct Link ECC Protocol (DLEP) deliver substantial bandwidth improvements, which are essential for automotive AI workloads. These memory solutions also enable lower power consumption and an optimized system bill of materials, helping to reduce overall cost and energy usage without compromising performance. Micron’s near-memory architectures, expanded bus widths, and improved packaging allow for scalable, high-speed data transfer that meets the extreme demands of multimodal AI, all within the thermal and environmental constraints of automotive platforms. Additional automotive-focused innovations – best-in-class LPDDR5 self-refresh current, on-device diagnostics and telemetry for LPDDR5, and ultra-endurance UFS 4.1 – will enable cutting-edge, next-generation automotive platforms. By integrating these technologies, Micron empowers automotive systems to run complex AI models efficiently and reliably, supporting everything from advanced driver assistance to immersive in-cabin experiences. Our focus on automotive qualification, ASIL-D compliance, and ecosystem collaboration ensures these solutions are not only high-performing but also safe, cost-effective, and ready for the evolving needs of intelligent vehicles.
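To show where an effective-bandwidth gain of this kind can come from, here is a deliberately simplified model: with inline ECC, check bits travel over the same data bus and consume part of its bandwidth, whereas a dedicated-link scheme keeps the data bus free for payload. The 20% inline-ECC overhead below is an assumption chosen for illustration; only the 15–25% uplift range comes from the interview, and this sketch does not claim to describe DLEP’s actual mechanism.

```python
# Simplified effective-bandwidth comparison: inline ECC vs. a
# dedicated ECC link. The 20% inline overhead is an assumption.

RAW_GBS = 34.1               # e.g., one x32 LPDDR5X channel at 8533 MT/s
INLINE_ECC_OVERHEAD = 0.20   # assumed share of bus spent on ECC traffic

inline_effective = RAW_GBS * (1 - INLINE_ECC_OVERHEAD)
link_effective = RAW_GBS     # ECC carried on a separate link

uplift = link_effective / inline_effective - 1
print(f"inline ECC: ~{inline_effective:.1f} GB/s effective")
print(f"link ECC:   ~{link_effective:.1f} GB/s effective")
print(f"uplift:     ~{uplift:.0%}")   # 25% with the assumed overhead
```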