Paul van Gerven
21 December 2022

SRAM scaling at TSMC has slowed dramatically, according to an analysis by WikiChip based on data made public at the recent IEDM conference. While the foundry has realized healthy 1.6-1.7x logic density improvements going from the 5nm to the 3nm node, the SRAM bit cell has shrunk only 5 percent for TSMC’s base N3 node. For the enhanced N3E node, it hasn’t shrunk at all. Unless a viable alternative is found, prices in the leading-edge chip segment could go up.

Advanced chips such as CPUs and GPUs use SRAM as cache, though the percentage of die area occupied by these on-chip memories varies greatly by application. AI and machine learning processors, in particular, use loads of cache because shuttling tremendous amounts of data back and forth between memory and processing units is inefficient. Engineers working on these types of products might be the first to encounter challenges from the lack of SRAM scaling, though the problem might trickle down to consumer products as well.

Nvidia CEO Jensen Huang delivered a similar message a few months ago. “The idea that a chip is going to go down in cost over time, unfortunately, is a story of the past,” he told journalists when discussing hefty price hikes for the next generation of GPUs.