News

Going analog may tame AI’s exploding energy needs

Paul van Gerven
Reading time: 3 minutes

IBM came up with an analog AI chip that’s 14 times more energy efficient than the best digital hardware available right now.

Artificial intelligence is booming, but so is its carbon footprint. Training GPT-3, the large language model behind ChatGPT, is estimated to have consumed about 1,300 megawatt-hours of electricity, enough to drive 750,000 kilometers in an electric car. Google has revealed that AI accounts for 10-15 percent of the company’s energy use. And OpenAI has published data showing that the computing power behind key AI landmarks has doubled every 3.4 months over the past few years.

In a world increasingly threatened by climate change, this cannot continue. So far, innovations that improve the energy efficiency of AI have failed to curb the technology’s carbon footprint. That’s why IBM researchers suggest a more radical approach: ditching the popular GPU in favor of analog technology. Their prototype analog chip, described in this week’s edition of Nature, runs an AI speech recognition model well over 10 times more efficiently than existing hardware.

IBM’s 14nm analog AI chip on a testing board. Credit: Ryan Lavine for IBM

Far cry

One of the reasons why it’s so hard to slim down the energy consumption of AI is that massive amounts of data need to be shuttled between processors and memory. Depending on how the two are connected, this shuttling can consume anywhere from 3 to a whopping 10,000 times the energy required for the actual computation. Moving processor and memory closer together both increases speed and reduces energy consumption, but this is not practical in today’s digital circuitry.
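To get a feel for that imbalance, here is a minimal back-of-the-envelope sketch in Python. The energy figures in it are purely illustrative assumptions chosen to fall inside the range quoted above, not measurements from the IBM work.

```python
# Back-of-the-envelope comparison of compute energy vs. data-movement energy
# for one multiply-accumulate operation. All figures below are illustrative
# assumptions for the sake of the example, not numbers from the IBM paper.

COMPUTE_ENERGY_PJ = 1.0          # assumed energy for one operation in the processor
MOVE_ENERGY_PJ = {               # assumed energy to fetch the operands from...
    "nearby on-chip memory": 3.0,    # ...memory close to the processor
    "off-chip DRAM": 1_000.0,        # ...external memory, orders of magnitude costlier
}

for source, move_pj in MOVE_ENERGY_PJ.items():
    ratio = move_pj / COMPUTE_ENERGY_PJ
    print(f"{source}: moving the data costs ~{ratio:.0f}x the computation itself")
```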

One way to stop the relentless back-and-forth of data is to perform calculations inside the memory itself. This is a concept that IBM has been working on for a while now. Back in 2021, the tech firm developed chips that leverage the properties of phase-change memory (PCM) to streamline processing. Now, they have scaled up their concept and demonstrated that it can be used on the AI models that have attracted so much attention recently.

Mind you, the technology is still a far cry from being able to handle massive models such as ChatGPT, but more on that later.

State-of-the-art

PCM is a type of non-volatile memory that exploits the behavior of so-called chalcogenide glasses. After the material is heated, it can either be quenched quickly, leaving it amorphous with a high electrical resistance, or cooled gently to allow it to crystallize, resulting in a low resistance. In digital circuitry, these two states represent 0s and 1s.

Crucially, however, it’s also possible to assign values in between. This property suits neural networks very nicely, as the gradations in resistance can represent synaptic weights. Analogous to the strength of neural connections in the brain, a weight determines how much of an input signal is passed on to other nodes. In PCM, applying that weight is as simple as passing a current through a memory cell. This eliminates the need to move weights between the memory and compute regions of a chip, or across chips. As a result, far fewer components are needed to perform one of the most common operations in AI training processes.
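That common operation is, in essence, a matrix-vector multiplication: weights are stored as conductances, inputs are applied as voltages, and the resulting currents add up on each output line according to Ohm’s and Kirchhoff’s laws. Below is a minimal NumPy sketch of that idea, purely as a software simulation with made-up conductance and voltage values.

```python
import numpy as np

# Conceptual simulation of analog in-memory multiply-accumulate.
# Each PCM cell stores a synaptic weight as a conductance G (the inverse of
# its resistance). The values below are made up for illustration.
G = np.array([[0.8, 0.1, 0.5],   # weights feeding output neuron 0
              [0.2, 0.9, 0.4]])  # weights feeding output neuron 1

# Input activations are applied as voltages across the cells.
V = np.array([0.3, 1.0, 0.7])

# Ohm's law gives each cell's current (I = G * V) and Kirchhoff's current law
# sums the currents on each output line: the weighted sum, the core
# neural-network operation, falls out of the physics in a single step,
# without the weights ever leaving the memory array.
I = G @ V
print(I)  # -> [0.69 1.24], one accumulated value per output neuron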

Of course, PCM can’t do it all by itself: control electronics are still required to steer the flow of data. IBM tightly integrated these compute units with the memory, further increasing energy efficiency. Altogether, the PCM chips achieved 12.5 trillion operations per second per watt, an energy efficiency ten to a hundred times higher than that of current state-of-the-art CPUs and GPUs. In speech recognition tasks, the analog processors proved 14 times more efficient than conventional hardware.
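For a rough sense of scale, the reported figure translates to roughly 80 femtojoules per operation. The short sketch below does the conversion; the digital baseline in it is an assumption picked only to land inside the ten-to-hundred-fold range mentioned above, not a measured CPU or GPU specification.

```python
# Convert the reported efficiency into energy per operation.
ANALOG_TOPS_PER_WATT = 12.5          # figure reported for the PCM chip
analog_joules_per_op = 1.0 / (ANALOG_TOPS_PER_WATT * 1e12)
print(f"analog: {analog_joules_per_op * 1e15:.0f} fJ per operation")   # ~80 fJ

# Assumed digital baseline, chosen to sit in the 10-100x range cited above;
# it is an illustrative number, not a measured GPU/CPU specification.
DIGITAL_TOPS_PER_WATT = 0.4
digital_joules_per_op = 1.0 / (DIGITAL_TOPS_PER_WATT * 1e12)
print(f"digital baseline: {digital_joules_per_op * 1e12:.1f} pJ per operation")
print(f"ratio: ~{digital_joules_per_op / analog_joules_per_op:.0f}x")
```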

Set sail

As promising as IBM’s results are for tackling AI’s sustainability problem, analog technology “is still in its infancy,” notes Intel’s Hechen Wang, who was not involved in the research. To scale to the size needed to handle advanced models, Wang points out, major innovations are required not only at the basic memory level but also at the circuit and architecture levels. Once a solid hardware base has been established, the technology will also require a compiler that can translate code into machine-level instructions, algorithms that are better suited to analog chips and applications that are optimized for them.

“It will probably take years to establish the same sort of environment for analog AI. The good news is that IBM’s researchers, together with other researchers in this area, are steering the ship, and have set sail towards realizing this goal,” Wang asserts.
