Apr 23, 2024
2:00pm - 2:30pm
Room 339, Level 3, Summit
Yoeri van de Burgt
Eindhoven University of Technology
Neural network training can be slow and energy-intensive because weight data must be shuttled between digital memory and processor chips. Neuromorphic systems can accelerate neural networks by performing multiply-accumulate operations in parallel using non-volatile analogue memory. However, the backpropagation algorithm used to train multi-layer (deep) neural networks requires information, and thus storage, on the partial derivatives of the weight values, which prevents a straightforward hardware implementation.

In this talk I will highlight a novel hardware implementation of the well-established backpropagation algorithm that progressively updates each layer using in situ stochastic gradient descent, thus avoiding this storage requirement. We experimentally demonstrate the in situ error calculation and the proposed progressive backpropagation method on a multi-layer hardware-implemented neural network based on organic electrochemical memory (EC-RAM), and confirm learning characteristics and classification performance identical to those of conventional backpropagation in software.
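To make the storage argument concrete, the sketch below contrasts conventional backpropagation, which accumulates partial derivatives for every layer before updating, with a progressive scheme in which each layer's weights are adjusted in place as soon as its error signal is available. This is a minimal software illustration under stated assumptions only: the layer sizes, sigmoid activations, squared-error loss, and the train_step function are chosen for the example and are not the hardware implementation presented in the talk.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes (assumption, not from the talk).
W1 = rng.normal(scale=0.1, size=(784, 64))
W2 = rng.normal(scale=0.1, size=(64, 10))
LR = 0.01

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, y):
    """One progressive-update step: each layer is adjusted in place as
    the error signal reaches it, instead of first computing and storing
    the partial derivatives for every layer and updating afterwards."""
    global W1, W2

    # Forward pass.
    a1 = sigmoid(x @ W1)
    a2 = sigmoid(a1 @ W2)

    # Output layer: error signal for a squared-error loss through sigmoid.
    delta2 = (a2 - y) * a2 * (1.0 - a2)
    # Propagate the error one layer back with the current weights,
    # then update layer 2 immediately and in place.
    delta1 = (delta2 @ W2.T) * a1 * (1.0 - a1)
    W2 -= LR * np.outer(a1, delta2)

    # Hidden layer: only the already-propagated signal is needed, so the
    # layer-2 gradient never has to be kept around for a later update.
    W1 -= LR * np.outer(x, delta1)
    return a2

# Tiny smoke test with random data.
x = rng.random(784)
y = np.zeros(10); y[3] = 1.0
for _ in range(5):
    out = train_step(x, y)

The point of the ordering is that at any moment only the error signal of the layer currently being processed is alive; a hardware substrate that performs the weight update in situ can therefore discard each gradient as soon as it has been applied.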