How Tesla is Changing its AI Hardware Strategy
After the end of the Dojo supercomputer project, Tesla is realigning its AI strategy.
In the summer of 2025, Tesla discontinued the Dojo supercomputer project and will now focus on AI5 and AI6 chips, manufactured by Samsung. What are the goals behind this and what risks does the exclusive 2-nm manufacturing pose?
FAQ: Tesla's End of Dojo, AI Chip Plans & Samsung Deal
Why is Tesla ending the Dojo project?
According to Elon Musk, chip development is being streamlined: Instead of separate training and inference hardware, Tesla is focusing on AI5/AI6 to reduce complexity and costs and to move into series production faster.
What are AI5/AI6 designed for?
Primarily for real-time inference - that is, perception, planning, and control in FSD, robotaxis, and the Optimus robot. Parts of the training workload can also be covered, giving Tesla a unified platform.
Why is Tesla manufacturing with Samsung and not TSMC?
To diversify the supply chain, secure capacity, and closely align manufacturing recipes with its own architecture. The large order also strengthens Samsung's foundry business in the automotive/AI segment.
What risks does the partnership entail?
Single point of failure: Delays or low yield rates at the new US plant would directly affect Tesla's hardware upgrades and robotaxi schedules.
When can AI5 be expected?
Planned start from 2026; industrialisation depends on the successful ramp-up of production.
What are the implications of "Tesla's deal with Samsung" for the industry?
Blueprint for deep OEM-foundry integration: dedicated capacity, faster iterations, and closer hardware/software coupling - with increased reliance on a single manufacturer.
Tesla is making a radical shift in its AI hardware strategy in the summer of 2025: The ambitious Dojo supercomputer project is being discontinued and the development department is being reoriented. Instead of a proprietary training platform, the electric car manufacturer will rely on a unified chip architecture that is suitable for both inference and parts of training - and is entering into a multi-billion dollar manufacturing partnership with Samsung for this purpose.
From supercomputer to specialised chip
The Dojo project was long considered Tesla's answer to the growing computational demands of training neural networks for autonomous driving. The hardware was tailored to the massive amounts of data generated by the Tesla fleet, with the goal of faster and more cost-efficient training cycles. But in August 2025, Elon Musk announced its end: too expensive, too complex, too little strategic benefit compared with a unified hardware platform.
The Dojo supercomputer was Tesla's self-developed system for training neural networks for autonomous driving and robotics. Its core was the D1 chip with 354 specialised cores, manufactured in a 7-nm process at TSMC and optimised for machine learning. 25 of these chips were combined into a "Dojo Training Tile," offering around 9 PFLOPS of BF16 performance and 11 GB of integrated SRAM. Several of these tiles could be scaled into a 2D mesh system; fully expanded, an "ExaPod" comprised 120 tiles.
The architecture used physical memory addresses and custom network technology to reduce latencies and replace GPUs in certain training workloads. Dojo was fully tailored to Tesla's own workloads and aimed to accelerate the development of Full Self-Driving (FSD) by processing large amounts of driving data more quickly. Despite this ambitious architecture, the project is now being discontinued - Tesla is focusing on a simplified, unified AI5/AI6 chip platform in partnership with Samsung. This marks the end of a chapter of highly specialized in-house hardware in favor of a strategic realignment.
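The tile and ExaPod figures quoted above are simple multiples of the per-chip numbers. A quick back-of-envelope check, assuming only the publicly reported specs mentioned in the text (nothing Tesla-internal), shows how the pieces scale:

```python
# Back-of-envelope scaling math for Dojo, using the figures quoted above.
CHIPS_PER_TILE = 25       # D1 chips per "Dojo Training Tile"
TILE_PFLOPS_BF16 = 9      # reported tile performance at BF16
TILE_SRAM_GB = 11         # integrated SRAM per tile
TILES_PER_EXAPOD = 120    # tiles in a fully expanded ExaPod

# Per-chip figures implied by the tile numbers
chip_pflops = TILE_PFLOPS_BF16 / CHIPS_PER_TILE       # ~0.36 PFLOPS per D1
chip_sram_mb = TILE_SRAM_GB * 1024 / CHIPS_PER_TILE   # ~450 MB per D1

# Aggregate figures for one ExaPod
exapod_eflops = TILE_PFLOPS_BF16 * TILES_PER_EXAPOD / 1000  # ~1.08 EFLOPS
exapod_sram_gb = TILE_SRAM_GB * TILES_PER_EXAPOD            # 1320 GB

print(f"per chip: ~{chip_pflops:.2f} PFLOPS BF16, ~{chip_sram_mb:.0f} MB SRAM")
print(f"ExaPod:   ~{exapod_eflops:.2f} EFLOPS BF16, {exapod_sram_gb} GB SRAM")
```

The implied ~450 MB of SRAM per chip is consistent with the roughly 440 MB publicly reported for the D1, which suggests the tile-level figures in the article are rounded aggregates of the chip specs.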
The purpose of AI5/AI6 & why inference is the bottleneck
The new priorities are AI5 and AI6. These chips are primarily optimised for inference tasks - that is, running trained AI models in real time. Musk intends them to power the computing platform in vehicles as well as Tesla's humanoid robot Optimus and future robotaxi fleets. At the same time, the chips should also be able to take on training tasks in certain scenarios to avoid redundant architectures.
Inference chips are the heart of autonomous systems: They process sensor data extremely quickly, make decisions, and initiate actions - without detours via external data centres. For applications like Full Self-Driving (FSD), this means lower latency, higher reliability, and more independence from mobile or cloud connections. The shift from a dedicated training platform like Dojo to a scalable, production-oriented chip generation is also a signal to the market: Tesla wants to deliver faster and spend less time tied up in expensive, proprietary infrastructure.
The AI5/AI6 chips are intended to serve as the computing core for FSD in production vehicles, future robotaxi fleets, the humanoid robot Optimus, as well as AI data centres in the Tesla ecosystem. The architecture aims for maximum computing power per watt for deterministic real-time processes - crucial for precise perception, planning, and control in autonomy applications. At the same time, there should be enough flexibility to perform selected training tasks on-device or in data centres. Unified software stacks and tooling reduce integration effort according to Tesla's plans, shorten development cycles, and enable new functions to be rolled out faster and more stably across the entire fleet via over-the-air updates.
Samsung as a Strategic Partner
Samsung plays an important role in Tesla's strategic shift. As part of a $16.5 billion deal, the Korean company will manufacture Tesla's AI5 and AI6 chips. Production is to take place in Samsung's new Texas fab, which uses a 2-nm process and is said to be reserved exclusively for Tesla. The deal comes at an opportune moment for Samsung: it had recently emerged that the plant's start of operations was to be postponed due to low utilisation.
This order is a double win for Samsung: it brings urgently needed volume to a foundry business that has so far clearly lagged behind TSMC, and it anchors the company more deeply in the automotive and AI semiconductor market. For Tesla, in turn, it means secured capacity at a time when chip shortages and geopolitical risks are putting pressure on supply chains. Memories of the early coronavirus pandemic, when car manufacturers struggled with chip shortages, evidently still loom large.
What Tesla's Strategic Shift Means for the Automotive Industry
By moving away from Dojo and focusing on AI5/AI6, Tesla is following a clear industry trend: away from hard-to-scale, proprietary data centres and towards highly specialised, partnership-developed chips. While providers like Nvidia design their inference architectures for a broad customer base, Tesla relies on extreme vertical integration - from chip design to application in vehicles and robotics.
For development teams, this means more stable planning through guaranteed capacities, unified toolchains across all platforms, clearer thermal and power budgets for series ECUs, and shorter validation cycles. At the same time, however, dependence on the execution quality of a single manufacturer increases. Whether Samsung delivers the 2-nm process in Texas on time and with high yield will determine whether Tesla extends its technological lead or whether the move becomes a cautionary example of the risks of exclusive manufacturing alliances.
This article was first published at automotiveit.eu.