SRAM vs DRAM scaling compared
A tale of two memories: how Moore's law enabled them and where we go from here
SRAM and DRAM are the two most common types of RAM (Random Access Memory) in a computer. In a typical computing system, SRAM serves as fast on-chip memory with limited capacity, while DRAM sits off-chip and offers much larger capacity at the expense of higher latency. Both are volatile, i.e., data is lost when the power supply is turned off, and both are random access, i.e., any individual memory cell can be uniquely addressed using a combination of access lines (typically a wordline and a bitline for both DRAM and SRAM).
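The wordline/bitline addressing scheme can be sketched as splitting a flat cell address into a row index (which wordline to activate) and a column index (which bitline to sense). The array dimensions below are hypothetical, chosen only to illustrate the decode:

```python
# Illustrative sketch of random access in a memory array: a flat address
# is decoded into a wordline (row) index and a bitline (column) index.
# ROWS and COLS are hypothetical, not taken from any real part.

ROWS = 512   # number of wordlines (assumed)
COLS = 1024  # number of bitlines (assumed)

def decode(address: int) -> tuple[int, int]:
    """Map a flat cell address to (wordline, bitline) indices."""
    assert 0 <= address < ROWS * COLS, "address out of range"
    wordline = address // COLS  # selects which row to activate
    bitline = address % COLS    # selects the column within that row
    return wordline, bitline

print(decode(0))       # (0, 0)
print(decode(1024))    # (1, 0)
print(decode(524287))  # last cell: (511, 1023)
```

Because any address maps directly to one (wordline, bitline) pair, every cell is reachable in the same way, with no sequential scan required.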
SRAM is the glorified poster child of Moore’s law, while DRAM is often left out of the discussion. Perhaps this has something to do with the cost structure of each technology. Traditional semiconductor processing improvements have historically benefited the scaling of both SRAM and DRAM. These are primarily lithography-driven changes like the move from dry to immersion lithography, pitch doubling (and later quadrupling), etc. Typically, DRAM companies like Samsung, SK Hynix, and Micron have paved the path in pushing the limits of printing smaller pitches, with logic companies like Intel and TSMC following suit a few years later. That trend has been bucked by EUV lithography, where logic foundries have been the lead adopters due to the massive change in cost structure and the extreme price sensitivity of the DRAM market.
The plot below shows that the DRAM bit cell has been trending around 10x smaller than the corresponding SRAM cell over the last couple of decades. An SRAM unit cell uses 6 transistors (6T), while a DRAM cell uses only 1 transistor with an integrated capacitor on top of it (1T1C). This is the main reason DRAM is denser than SRAM. Additionally, the dedicated DRAM process has evolved to support tighter pitches, increasing overall density. SRAM, on the other hand, has to be co-optimized with digital and analog logic transistors.
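The density gap follows directly from bit-cell area: density is simply the reciprocal of the area one bit occupies. A minimal back-of-envelope sketch, where the cell areas are rough assumptions for illustration rather than vendor data:

```python
# Back-of-envelope: bit density is the reciprocal of bit-cell area.
# The cell areas below are illustrative assumptions, not vendor figures.

def mb_per_mm2(cell_area_um2: float) -> float:
    """Bit density in Mb/mm^2 for a given bit-cell area in um^2."""
    cells_per_mm2 = 1e6 / cell_area_um2  # 1 mm^2 = 1e6 um^2
    return cells_per_mm2 / 1e6           # express as megabits

sram_density = mb_per_mm2(0.020)  # 6T SRAM cell, ~0.020 um^2 (assumed)
dram_density = mb_per_mm2(0.002)  # 1T1C DRAM cell, ~0.002 um^2 (assumed)

print(f"SRAM ~{sram_density:.0f} Mb/mm^2")            # ~50
print(f"DRAM ~{dram_density:.0f} Mb/mm^2")            # ~500
print(f"ratio ~{dram_density / sram_density:.0f}x")   # ~10x
```

With these assumed areas the ratio comes out around 10x; the exact figure at any point in time depends on the specific SRAM and DRAM nodes being compared.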
The DRAM process roadmap continues to evolve, with EUV and 3D DRAM on the horizon. On the other hand, as mentioned in my previous post, SRAM scaling officially stopped at the 3nm logic node, and the gap between SRAM and DRAM is expected to grow wider in the future. This makes disruptive new technologies like HBM (High Bandwidth Memory) especially important in the coming decade. HBM is essentially DRAM broken down into smaller arrays, combined with TSV (Through-Si Via) technology to enable compact packaging with the compute die and increased data bandwidth. Continual improvement of HBM-based memory is expected to define the next decade of memory technology.
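The bandwidth advantage of HBM comes from interface width rather than raw pin speed: stacking dies over TSVs makes a very wide bus practical. A rough sketch of the arithmetic, using HBM2-class numbers (1024-bit bus, ~2 Gbps per pin) that should be treated as illustrative:

```python
# Sketch of why stacked DRAM with TSVs raises bandwidth: a very wide
# interface at moderate per-pin speed. Figures are illustrative,
# roughly in line with published HBM2-class parts.

def peak_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak interface bandwidth in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8  # bits -> bytes

hbm2_stack = peak_bandwidth_gbs(bus_width_bits=1024, pin_rate_gbps=2.0)
gddr_chip = peak_bandwidth_gbs(bus_width_bits=32, pin_rate_gbps=16.0)

print(f"HBM2-class stack:  {hbm2_stack:.0f} GB/s")  # 256 GB/s
print(f"32-bit GDDR-class chip: {gddr_chip:.0f} GB/s")  # 64 GB/s
```

The wide-and-slow interface also keeps per-bit I/O energy down, which is part of why HBM pairs well with bandwidth-hungry compute dies.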
The views expressed here are the author’s own.