Nieke Roos
15 December 2022

Eindhoven-based Axelera AI has announced its Metis artificial-intelligence platform to accelerate computer vision processing at the edge. It encompasses the Metis AI processing unit (AIPU) chip and the Voyager SDK software stack. PCI Express and M.2 acceleration cards will be available for selected customers in early 2023. Separately, Axelera is working with partners to develop system-ready vision solutions.

The Metis AIPU features a quad-core architecture. Central to the operation of each core is Axelera’s SRAM-based Digital In-Memory Computing (D-IMC) engine, which accelerates matrix-vector multiplication operations at a high energy efficiency of 15 tera operations per second (TOPS) per watt at INT8 precision. The Thetis test chip, taped out a year ago, already demonstrated the engine’s efficacy. FP32 iso-accuracy is achieved without the need to retrain the neural network models.
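To illustrate the kind of arithmetic involved, the sketch below shows a generic INT8-quantized matrix-vector multiplication in NumPy and compares it against the FP32 reference. This is a textbook illustration of symmetric post-training quantization, not Axelera’s D-IMC implementation:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization to INT8: returns (q, scale)."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128)).astype(np.float32)  # weight matrix
x = rng.standard_normal(128).astype(np.float32)        # input activation vector

qW, sW = quantize_int8(W)
qx, sx = quantize_int8(x)

# Integer matrix-vector product, accumulated in INT32, then de-quantized
y_int8 = (qW.astype(np.int32) @ qx.astype(np.int32)) * (sW * sx)
y_fp32 = W @ x

# The INT8 result closely tracks the FP32 reference
rel_err = np.abs(y_int8 - y_fp32).max() / np.abs(y_fp32).max()
print(rel_err)
```

The small relative error of the de-quantized result hints at why INT8 inference can match FP32 accuracy on well-conditioned layers; matching it across a full network generally also requires the kind of calibration and quantization algorithms a vendor SDK provides.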

Axelera AI vision box
Credit: Axelera AI

Each AIPU core can execute all layers of a standard neural network without external interactions, delivering up to 53.5 TOPS. The compound throughput of the AIPU can thus reach 214 TOPS. The cores can be combined to boost the throughput of a complex workload, operate independently on the same neural network to reduce latency, or concurrently process different neural networks for applications featuring pipelines of neural networks.
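The three usage modes can be summarized in a toy sketch; the per-core figure and core count come from the article, while the stage names in the pipeline mode are purely illustrative:

```python
CORE_TOPS = 53.5   # per-core peak throughput at INT8 (from the article)
NUM_CORES = 4

# Mode 1: all cores combined on one complex workload
combined_tops = CORE_TOPS * NUM_CORES
print(combined_tops)  # 214.0

# Mode 2: each core runs its own copy of the same network,
# processing four independent streams in parallel
parallel_streams = NUM_CORES

# Mode 3: a different network per core, forming an application
# pipeline (stage names are hypothetical examples)
pipeline = {0: "detect", 1: "classify", 2: "track", 3: "postprocess"}
```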

The four cores are integrated into a system-on-chip (SoC), comprising a RISC-V controller, a PCI Express interface, an LPDDR4X controller and a security complex connected via a high-speed network-on-chip (NoC). The NoC also links the cores to a multi-level hierarchy of more than 52 MB of on-chip high-speed shared memories, while the LPDDR4X controller connects to external memory, enabling support for much larger neural networks. Finally, the PCI Express interface provides a high-speed link to an external host, which offloads full neural network applications to the AIPU.

Built on open, industry-standard frameworks and APIs, the Voyager SDK includes an integrated compiler, a runtime stack and optimization tools. It provides turnkey pipelines for state-of-the-art models, which users can adapt to their application, generate optimized code for and deploy to Metis-enabled edge devices. Via Axelera’s Model Zoo, they can access a host of neural networks to customize, fine-tune or deploy as-is. They can also import their own pre-trained neural networks. The SDK automatically quantizes and compiles neural networks trained in different frameworks, generating code for the Axelera platform.
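The import → quantize → compile → deploy flow described above can be sketched as follows. Every function here is a hypothetical stub standing in for one stage of such a workflow; none of these names are the actual Voyager SDK API:

```python
# Hypothetical sketch of an edge-AI deployment flow like the one the
# article describes the SDK automating; all names are illustrative stubs.

def load_pretrained(path):
    # stand-in for importing a model trained in any framework
    return {"name": path, "precision": "fp32"}

def quantize(model):
    # stand-in for automatic post-training INT8 quantization
    return {**model, "precision": "int8"}

def compile_for_target(model, target):
    # stand-in for generating optimized code for the accelerator
    return {"model": model, "target": target}

deployed = compile_for_target(quantize(load_pretrained("model.onnx")), "metis")
print(deployed["model"]["precision"])  # int8
```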

Axelera AI M2
Credit: Axelera AI

Metis supports solutions for a wide range of environments, from fully embedded use cases to distributed processing of multiple 4K video streams across networked devices. According to Axelera, it’s the first product on the market to offer hundreds of TOPS of compute performance at an edge price point. The M.2 acceleration card, for example, delivers up to 214 TOPS, priced from 149 dollars. Powered by four AIPUs, the PCI Express card achieves up to 856 TOPS.

“The Metis AIPU demonstrates the effectiveness of our D-IMC engine by achieving matrix-vector multiplications with high precision and constant time complexity. In tandem with the advanced quantization algorithms, the large on-chip SRAM and the RISC-V technology, it’s well-suited to address the stringent operational requirements of the AI applications at the edge,” says Axelera AI co-founder and CTO Evangelos Eleftheriou. “Our end-to-end integrated software stack addresses the user pain points of AI application development at the edge.”