Imec and GlobalFoundries team up to bring AI chip to IoT's edge
Imec has announced the hardware demonstration of a new AI chip. In conjunction with semiconductor manufacturing giant GlobalFoundries (GF), the Leuven-based research hub has optimized its analog in-memory computing architecture to perform deep neural network calculations. The demonstrator achieved record energy efficiency and is touted as a key enabler of inference at the edge for low-power devices. As a result, Imec believes that the privacy, security and latency benefits of this new technology will impact AI applications on a wide range of edge devices, from smart speakers to self-driving vehicles.
From the beginning of the digital computer age, processing and memory functions have been separated. Operations on large amounts of data therefore require many data elements to be fetched from memory, a limitation known as the Von Neumann bottleneck that slows computation and consumes considerable energy. To address this challenge, Imec and GF developed a new architecture that eliminates the Von Neumann bottleneck by performing analog computation directly in SRAM cells. The resulting analog inference accelerator, built on GF's 22FDX semiconductor platform, delivers exceptional energy efficiency. Pattern-recognition tasks for tiny sensors and low-power edge devices, which today are typically offloaded to machine-learning models running in data centers, can now be performed locally on this power-efficient accelerator.
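The core operation such an accelerator speeds up is the matrix-vector multiplication at the heart of every neural-network layer: the weights sit in the memory array and the multiply-accumulate happens in the analog domain, right where the data is stored. The short Python sketch below is purely illustrative and is not imec's or GF's implementation; the function name aimc_matvec, the ADC resolution, and the ternary weights are assumptions chosen only to mimic how an analog array sums currents along its bit lines and then digitizes the result.

```python
import numpy as np

# Toy model of an analog in-memory compute array (illustrative assumption,
# not the imec/GF design): weights are stored in the array, activations are
# driven onto the word lines, and each bit line sums the products in one
# analog step before a coarse ADC digitizes the result.

def aimc_matvec(weights, activations, adc_bits=6):
    """Simulate one matrix-vector multiply on an analog compute array.

    weights     : (rows, cols) matrix held in the SRAM-based array
    activations : (rows,) input vector applied to the word lines
    adc_bits    : assumed resolution of the ADC on each bit line
    """
    # Analog accumulation: every bit line sums weight * activation at once,
    # so the weights never travel to a separate processing unit.
    analog_currents = weights.T @ activations

    # Quantize the summed "currents" with a coarse ADC, as hardware would.
    scale = np.max(np.abs(analog_currents)) or 1.0
    levels = 2 ** (adc_bits - 1) - 1
    return np.round(analog_currents / scale * levels) / levels * scale

# Example: a small fully connected layer, e.g. for keyword spotting on an
# edge device (sizes and ternary weights are arbitrary for illustration).
rng = np.random.default_rng(0)
w = rng.integers(-1, 2, size=(64, 10)).astype(float)
x = rng.random(64)
print(aimc_matvec(w, x))
```

Because an entire column is accumulated in a single analog step, the data movement between memory and a separate compute unit disappears, which is where the energy advantage over a conventional digital accelerator comes from.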
“The successful tape-out of Ania marks an important step forward toward validation of analog in-memory computing,” says Diederik Verkest, program director for machine learning at Imec. “The reference implementation not only shows that analog in-memory calculations are possible in practice, but also that they achieve an energy efficiency ten to a hundred times better than digital accelerators. In Imec’s machine learning program, we tune existing and emerging memory devices to optimize them for analog in-memory computation. These promising results encourage us to further develop this technology.”