Apr 25, 2024
9:00am - 9:30am
Room 344, Level 3, Summit
Kah-Wee Ang
National University of Singapore
The exponential growth of data storage and computation requirements has imposed severe power consumption challenges on digital computers built on the traditional von Neumann architecture. New computing systems based on the compute-in-memory (CIM) concept offer a potential route around these inherent energy consumption and latency issues. In particular, CIM based on analog memristors promises a low-latency, energy-efficient approach to data-intensive tasks such as image processing by means of neural network training. Here, we demonstrate memristive crossbar arrays (CBAs) built from transition metal dichalcogenides for implementing convolutional neural network (CNN) hardware. The memristor achieves a small switching voltage, low switching energy, and reduced variability, in addition to the ability to emulate synaptic weight plasticity. The CBAs successfully implement both neuromorphic and matrix-heavy neural network workloads, including an artificial-synapse-based artificial neural network (ANN), multiply-and-accumulate (MAC) operations, and convolutional image processing with high recognition accuracy. Moreover, the column-by-column MAC operation exhibits highly parallelized computing, opening a route to hardware acceleration of machine learning algorithms for emerging artificial intelligence applications.
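As an illustrative sketch (not the authors' implementation), the column-by-column MAC operation described above can be modeled in software: each cross-point memristor stores a weight as a conductance G[i, j], and applying input voltages V[i] to the rows yields, by Ohm's law and Kirchhoff's current law, column currents I[j] = Σᵢ V[i]·G[i, j], i.e. a full matrix-vector product in one parallel analog step. The array size and conductance values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x3 crossbar: conductances in siemens (values are arbitrary).
G = rng.uniform(1e-6, 1e-5, size=(4, 3))
V = np.array([0.1, 0.2, 0.0, 0.3])  # input voltages on the rows (volts)

# All column output currents at once: one MAC per column, in parallel.
I = V @ G

# The same result, computed column by column as in the abstract's scheme.
I_columnwise = np.array([np.sum(V * G[:, j]) for j in range(G.shape[1])])
assert np.allclose(I, I_columnwise)
```

The key point the model captures is that the crossbar performs the entire matrix-vector product in a single read step, whereas a digital processor would execute the equivalent multiplies and adds sequentially.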