Compute-in-Memory Designs: Trends and Prospects

Presenter

Country: USA
Affiliation: Department of Electrical and Computer Engineering, University of Texas at Austin

Abstract

The unprecedented growth in Deep Neural Network (DNN) model sizes has resulted in massive data movement between off-chip memory and on-chip processing cores in modern Machine Learning (ML) accelerators. Compute-In-Memory (CIM) designs, which perform DNN computations within memory arrays, are being explored to mitigate the latency and energy overheads of this ‘Memory Wall’ bottleneck. Multiple memory technologies with unique attributes are being explored to enable energy-efficient CIM designs.
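To make the ‘Memory Wall’ concrete, the back-of-the-envelope sketch below compares compute energy against off-chip data-movement energy for a fully connected layer. All layer dimensions and per-operation energy values are illustrative assumptions chosen for this sketch, not figures from the talk; the qualitative point is only that off-chip accesses are far costlier than on-chip MACs, which is what CIM sidesteps by keeping weights stationary in the array.

```python
# Illustrative arithmetic: why off-chip weight traffic dominates for a
# fully connected layer y = W @ x. All numbers below are assumptions
# chosen for illustration, not measurements from the talk.

M, N = 4096, 4096            # assumed layer dimensions
WEIGHT_BYTES = 1             # assumed 8-bit weights

macs = M * N                             # one MAC per weight
offchip_bytes = M * N * WEIGHT_BYTES     # weights streamed once, no reuse

# Hypothetical energy ratio: off-chip DRAM access is commonly cited as
# orders of magnitude costlier than an on-chip MAC.
E_MAC_PJ = 1.0               # assumed energy of one 8-bit MAC (pJ)
E_DRAM_PJ_PER_BYTE = 100.0   # assumed off-chip access cost (pJ/byte)

compute_energy = macs * E_MAC_PJ
movement_energy = offchip_bytes * E_DRAM_PJ_PER_BYTE

print(f"MACs: {macs:,}  off-chip bytes: {offchip_bytes:,}")
print(f"compute: {compute_energy / 1e6:.1f} uJ  "
      f"movement: {movement_energy / 1e6:.1f} uJ")
# Movement exceeds compute by the assumed 100x ratio: the 'Memory Wall'.
# A CIM design keeps W stationary inside the memory array instead.
```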
In this talk, I will present trends in recent CIM designs and highlight the key principles used for performing multi-bit Multiply-Accumulate (MAC) computations with both analog and digitally intensive approaches. The design trade-offs among bit precision, throughput, energy efficiency, data-converter overheads, and computational accuracy will be discussed. In addition, the prospects of compute-in-memory designs for applications beyond DNNs will be presented.
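One common principle for multi-bit MAC in CIM arrays is bit slicing: weights are stored as binary planes across columns, inputs are applied one bit at a time, and each cycle the array produces binary column sums that are combined by shift-and-add. The NumPy sketch below illustrates this under stated assumptions (unsigned operands, an idealized clipping model for the column ADC); the function name and the ADC model are hypothetical, not from the talk.

```python
import numpy as np

def cim_multibit_mac(x, W, x_bits=4, w_bits=4, adc_bits=None):
    """Bit-serial, bit-sliced multi-bit MAC: one common CIM principle.

    Unsigned inputs x (length N) and weights W (N x M) are decomposed
    into binary planes. Each "cycle", the array computes one binary dot
    product per column; partial sums are combined by shift-and-add.
    adc_bits, if given, crudely models a column ADC by clipping the
    per-cycle sum (an illustrative assumption, not a circuit model).
    """
    acc = np.zeros(W.shape[1], dtype=np.int64)
    for i in range(x_bits):                     # input bit-plane (serial)
        xi = (x >> i) & 1
        for j in range(w_bits):                 # weight bit-slice (parallel columns)
            wj = (W >> j) & 1
            col_sum = xi @ wj                   # binary MAC along each column
            if adc_bits is not None:            # idealized ADC clipping
                col_sum = np.clip(col_sum, 0, 2**adc_bits - 1)
            acc += col_sum.astype(np.int64) << (i + j)   # shift-and-add
    return acc

# With an ideal (omitted) ADC, the result matches the exact integer MAC:
rng = np.random.default_rng(0)
x = rng.integers(0, 16, size=64)        # 4-bit activations
W = rng.integers(0, 16, size=(64, 32))  # 4-bit weights
assert np.array_equal(cim_multibit_mac(x, W), x @ W)
```

Passing a small adc_bits value into this sketch shows how data-converter resolution limits computational accuracy, which is one of the trade-offs the abstract mentions.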