By Jeremie Bouchaud and Nimish Ashar


Article Summary

In the autumn of 2025, we first flagged a growing risk in the automotive electronics supply chain. Dynamic random-access memory (DRAM) makers were shifting capacity toward AI data centers, leaving automotive systems increasingly exposed. Since December, DRAM prices have risen faster than expected, up about 70% year over year for automotive LPDDR4 by January 2026, with additional increases signaled for 2026 and 2027 as supply for older generations tightens. This update builds on our December outlook as the industry moves deeper into 2026.

You can read our earlier perspective here: DRAM makers prioritize AI data center demand, sparking automotive semiconductor shortage.

This disruption is the result of long‑term shifts in demand, investment, and profitability, and that distinction changes how OEMs should respond.

During our recent webinar, Strategies for Navigating the DRAM Chip Shortage, we explored why this shortage is more structural than cyclical and why quick fixes are unlikely to hold. If you missed the live discussion, you can watch the webinar on demand.


Why DRAM supply tightened in the first place

Automotive demand didn’t suddenly spike. Data‑center demand did, and it did so at a scale the memory industry wasn’t built to absorb.

As AI applications have moved from experimentation to infrastructure, data centers have ramped up deployment of GPUs at unprecedented rates. Each GPU integrates multiple stacks of DRAM, typically in the form of high‑bandwidth memory (HBM), spread across several memory layers per processor. While HBM is estimated to account for up to 40% of a GPU’s bill of materials—according to Morgan Stanley (2024)—the more fundamental constraint lies elsewhere.

This memory‑dense architecture consumes a disproportionately large amount of DRAM silicon wafer area per GPU, placing significant pressure on manufacturing capacity. It is this concentration of wafer‑area demand, rather than value alone, that has driven capacity reallocation decisions and tightened DRAM supply across the market.

S&P Global 451 Research estimates show how quickly GPU shipments accelerated following the AI boom. What’s less visible but just as important is how tightly memory supply is linked to that growth.

DRAM manufacturing is capital‑intensive and slow to expand. After a period of heavy losses and overcapacity in 2023, DRAM makers were cautious about reinvesting just as data‑center demand surged. With new semiconductor capacity taking years to come online because of high costs and long build times, supply has not kept pace with demand.

Faced with that imbalance, suppliers had to make choices. Capacity flowed toward the applications with the greatest volume, the highest margins, and the clearest long‑term growth. Automotive wasn’t deprioritized because it didn’t matter; it was crowded out because other markets could pay more and do so at scale.

Price is absorbing the shock for now

In 2026, supply is still available if buyers are willing to accept higher prices. That’s creating a sense of short‑term stability, but it’s a fragile one.

Prices for older automotive‑grade DRAM have already jumped sharply, particularly for LPDDR4. More increases are expected over the next two years, even as suppliers begin scaling back production of older generations altogether.

The pressure builds quietly. New programs are moving to LPDDR5, while older automotive cockpit and ADAS systems are not, or have not moved far enough. That’s where risk concentrates.

Automotive cockpit and ADAS systems designed several years ago—and scheduled to remain in production through 2027 or 2028—are now more exposed to rising costs and shrinking supply options. This is not a volume crisis yet, but it is a planning one.

Decontenting sounds simple, but in practice it rarely is

When costs rise, “decontenting,” the practice of removing content or features, can seem like the obvious lever. In reality, it’s one of the hardest levers to pull.

In the cockpit, the features that consume the most DRAM, such as large displays, advanced graphics, and GenAI assistants, are often the same features that define brand identity and consumer expectations. Removing them saves memory, but it can also undo years of positioning work, especially in markets like China.

In ADAS (Advanced Driver Assistance Systems), flexibility is even more limited. Many features are required by regulation, especially in Europe, or heavily incentivized by safety ratings. Cutting them brings real safety and reputational risks, with limited cost relief.

Automated driving offers slightly more room to adjust, but those systems support future revenue through software upgrades and subscriptions. Pulling back today reduces the funnel for tomorrow. While decontenting can happen at the margins, it is rarely a solution at scale.

Vehicle mix is the pressure valve few want to talk about

If DRAM tightens further, the industry is more likely to shift what it builds than what it installs. For example, lower‑margin, entry‑level vehicles consume relatively little DRAM per unit. Higher‑end vehicles consume far more, but they also generate stronger margins. In a constrained environment, production naturally tilts upward.

That leads to a difficult trade‑off, one where OEMs may preserve their margins while overall unit volume softens, with the impact concentrated in lower segments. It’s how manufacturers have responded to past constraints, and the logic still holds.

Which OEMs are most exposed?

The risk and impact of the DRAM chip shortage are uneven across the industry, and the differences become clear when you look at how vehicles are designed, not just where they’re sold.

Models with richer cockpit experiences and higher autonomy adoption naturally carry more memory. That shows up most clearly in OEMs such as Tesla and EV start-ups in China, where vehicles ship with advanced displays, AI features, and automated driving as standard. The result is significantly higher DRAM content per vehicle than in other regions, leaving greater exposure to price shocks or supply tightening.

The implication is twofold. First, these OEMs are more exposed to cost increases, as higher DRAM intensity makes it harder to absorb rising memory prices without passing them on to end customers. Second, they are more exposed to supply disruptions, as shortages or allocation cuts in DRAM components have a larger operational impact on memory‑heavy vehicle designs.

The chart below illustrates how these dynamics play out by OEM, with Chinese EV start‑ups and several tech‑forward brands showing the highest DRAM content per vehicle, and therefore the greatest sensitivity to both pricing pressure and parts availability.

Beyond the total DRAM content per vehicle, the choice of processor or system on chip (SoC) and the DRAM type it supports matters just as much. OEMs that migrated early to the latest SoCs supporting LPDDR5 DRAM are showing up differently in the data. Their exposure is lower, not because they use less memory overall, but because they’re aligned with where capacity investment is heading. Those still dependent on older memory generations face tougher decisions sooner, as prices rise and supply contracts.

What separates these groups isn’t a short‑term sourcing call. It’s a series of platform and SoC decisions made years ago, long before today’s supply pressures were visible. DRAM exposure today is often the outcome of yesterday’s architecture strategy.

Help is coming, but it won’t be immediate

There is a path toward relief, though it takes time. New suppliers are emerging in China, led by CXMT, which is beginning to ramp automotive‑grade DRAM production. Initially, that capacity will serve domestic demand. Over time, it could free up supply from global players and ease pressure beyond China’s borders.

At the same time, the automotive industry is accelerating its transition to newer DRAM generations. LPDDR5 and LPDDR6 are not immune to competition from data centers, but they are where future capacity investment is focused.

Most signs point to conditions improving toward the end of the decade. That doesn’t help teams making decisions today, but it does change how those decisions should be framed.

DRAM is only the first signal

The bigger story isn’t just memory; it’s the competition for semiconductor capacity. Advanced nodes, power devices, packaging, and even flash memory are all seeing pressure from AI‑driven demand. The assumption that chip costs decline over time no longer holds across many categories.

For OEMs and suppliers, this means semiconductor strategy can’t remain reactive. It has to be built into architecture planning, sourcing, and product cadence from the start.

Because when decisions shape vehicles years in advance, confidence depends on seeing those risks early.

Get granular analysis by chip and application category

If you want to understand how DRAM and other semiconductor constraints affect your portfolio by program, region, or supplier, our EE & Semiconductor Service helps you see the full picture.

Our automotive semiconductor forecast analytics are built to support decisions that keep programs on track and margins intact, with easy filtering by region, propulsion, domain and application (220+ ECUs), semiconductor (300+ chip categories), EE architecture, vehicle segment, and OEM.

This article was published by S&P Global Mobility and not by S&P Global Ratings, which is a separately managed division of S&P Global.


Content Type: Article

Series: Fuel for Thought