Paul van Gerven
8 July 2021

Reports of the death of Moore’s Law are greatly exaggerated. Chipmakers just have to shift gears once in a while, like they’re doing now by embracing chiplets.

They’re all the rage right now in the semiconductor industry: chiplets. Instead of carving, say, a new processor from a single slab of silicon, chipmakers assemble it from several smaller pieces, which are then connected (see inset “What are chiplets?”). You won’t have to look very hard to find claims that this approach is going to get Moore’s Law back on track.

So will it? Yes and no. It depends on what you think Moore’s Law is. In any case, the rise of chiplet technology is clear evidence that a new approach is necessary to make the next generation of chips worthwhile to design, manufacture or buy.

Self-fulfilling prophecy

After several decades in action, Moore’s Law became synonymous with increasing the density of chip components – doubling it every two years, to be precise. During all this time, it was a no-brainer to combine as much functionality as possible in a single chip. That’s the point of integrated circuits, after all.

Initially, increasing density yielded not only cost reductions but also performance gains. About fifteen years ago, however, the performance gains that came as a ‘side effect’ of shrinking components began to slow down. Then, a couple of years ago, chipmakers started admitting that they needed more than two years to double density.

So, clearly, Moore’s Law as most have come to understand it is on its last legs. Each increase in density comes with a steep rise in wafer cost, so the cost per transistor is hardly decreasing. Meanwhile, node-to-node improvements in speed and power are significantly smaller than they used to be. From that perspective, Moore’s Law needs saving: shrinking components is becoming more expensive while offering less benefit.
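In back-of-envelope terms (an illustrative simplification, not a figure from any chipmaker): cost per transistor ≈ wafer cost / (dies per wafer × transistors per die). If a node shrink roughly doubles the number of transistors that fit on a die, but processing that wafer becomes nearly twice as expensive, the quotient barely moves – which is exactly the squeeze described above.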

Gordon Moore

The thing is: Gordon Moore’s prediction was never exclusively about shrinking but about IC complexity, i.e. the number of components per chip. IC complexity depends on chip area and component density, but – as Moore pointed out in 1975, ten years after formulating his original prediction – also on a third factor: device and circuit cleverness. At the time, he was referring to engineers managing to dedicate more IC surface area to components rather than inactive structures such as isolators and interconnects. In those early days, Moore noted, this contributed more to increased IC complexity than either increased chip area or finer lines.
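Put roughly (an illustrative formula, not Moore’s own notation): components per chip ≈ chip area × component density × the fraction of that area actually occupied by active components. Finer lines raise the density term, bigger dies raise the area term, and ‘cleverness’ raises the third factor by wasting less silicon on isolation and wiring.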

After 1975, the contribution of device and circuit cleverness quickly diminished. Optimal chip area was mainly determined by yield considerations. And so, increasing density became the most important means of increasing IC complexity, over time turning Moore’s prediction into a stringent development cadence for the semiconductor industry – a self-fulfilling prophecy of sorts.

Long-standing trend

Now that shrinking is becoming more and more complicated, it makes sense that chipmakers are revisiting the other two factors that contribute to IC complexity. For example, US company Cerebras turned to wafer-scale chips for AI and deep learning applications. Its chips are so massive that only a single one fits on a 300 mm wafer – over 2.5 trillion transistors per chip, how about that for IC complexity? Other semiconductor companies, including AMD and Intel, are embracing device and circuit cleverness once again. Chiplets are an example of that.

In this respect, nothing fundamentally new is happening as far as Moore’s Law is concerned. This is especially true for chiplets, since Moore acknowledged in his 1965 paper that a monolithic design isn’t always the best solution. “It may prove to be more economical to build large systems out of smaller functions, which are separately packaged and interconnected,” he wrote. Chiplets aren’t separately packaged, but the rationale for employing them follows Moore’s argument.

In the beginning, device and circuit cleverness was driving IC complexity. Then, for a long time, it was density improvement. When performance gains started to erode and increasing clock frequencies proved no longer viable, the industry turned to multicore processors. Then planar CMOS ‘gave out’ and was replaced by FinFETs. And now the industry is reversing a long-standing trend by breaking up monolithic chips into smaller pieces. What will be next?