MRS Meetings and Events


EQ11.05.07 2022 MRS Spring Meeting

Relaxed Synaptic Device Specifications for Neural Network Training with Tiki-Taka Algorithm

When and Where

May 23, 2022
9:45am - 9:50am

EQ11-Virtual

Presenter and Co-Author(s)

Kyungmi Noh1, Wonjae Ji1, Chaeun Lee2, Tayfun Gokmen3, Seyoung Kim1

Pohang University of Science and Technology1, NAVER Clova2, IBM T.J. Watson Research Center3

Abstract

Non-volatile memory-based synaptic device and array technologies have come into the spotlight for their feasibility in accelerating AI computation in the analog domain. Vigorous research has been conducted to implement analog neural network training accelerators with resistive cross-point arrays. Despite the theoretically promised benefits, such as computation speed-up and area and power efficiency, the non-ideal switching characteristics of resistive memory devices hinder the realization of such analog accelerators. In neural network training applications especially, non-idealities such as conductance update asymmetry and retention characteristics are known to critically impact training convergence and performance. To resolve this issue, the Tiki-Taka algorithm, in which an auxiliary array is introduced alongside the main array to form a coupled dynamical system, was proposed to minimize the negative impact of asymmetric conductance updates [1]. In this work, we study the impact of update asymmetry and retention in the two coupled arrays when training neural networks with the Tiki-Taka algorithm. We carefully inspect the training process and show that these non-idealities in each array affect differently how the history of gradient information is stored and evolves during training. By exploring asymmetry, retention, and other network hyper-parameters, we reveal the optimal range of device parameters for the two arrays that guarantees near-software performance with the Tiki-Taka algorithm. Our study demonstrates that synaptic device specifications can be significantly relaxed, enabling high-performance analog neural network training accelerators with practical resistive switching devices [2].

[1] Gokmen, T., & Haensch, W. (2020). Algorithm for training neural networks on resistive device arrays. Frontiers in Neuroscience, 14, 103.
[2] Lee, C., et al. (2021). Impact of asymmetric weight update on neural network training with Tiki-Taka algorithm. Frontiers in Neuroscience, in press.
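For context on the coupled-array scheme described in the abstract, below is a minimal NumPy sketch of Tiki-Taka-style training on a toy linear problem. The coupling factor gamma, the learning rates, the asymmetry factor, and the transfer interval are illustrative assumptions, and the simplified asymmetric-update model omits the device symmetry-point dynamics of the published algorithm [1]; this is a sketch of the idea, not the implementation evaluated in this work.

```python
import numpy as np

# Minimal sketch of Tiki-Taka-style coupled-array training
# (after Gokmen & Haensch, 2020). All device and training parameters
# below (gamma, learning rates, asymmetry, transfer interval) are
# illustrative placeholders, not values from the talk.

rng = np.random.default_rng(0)

n_in, n_out = 8, 4
W_true = rng.standard_normal((n_out, n_in))  # toy target weights

A = np.zeros((n_out, n_in))  # auxiliary array: accumulates gradients
C = np.zeros((n_out, n_in))  # main array: holds slowly updated weights
gamma = 0.5                  # coupling: effective weight W = gamma*A + C
lr_a, lr_c = 0.01, 0.1       # gradient step on A; transfer step into C
asym = 0.3                   # update asymmetry: downward steps are weaker
transfer_every = 10          # samples between column transfers A -> C

def asymmetric_update(arr, delta):
    """In-place update where negative steps are attenuated, mimicking
    asymmetric conductance change of a resistive device. (A real device
    would also pull A toward its symmetry point; omitted for brevity.)"""
    arr += np.where(delta >= 0.0, delta, (1.0 - asym) * delta)

col = 0
for t in range(5000):
    x = rng.standard_normal(n_in)
    err = (gamma * A + C) @ x - W_true @ x          # forward pass + error
    asymmetric_update(A, -lr_a * np.outer(err, x))  # rank-one update on A
    if t % transfer_every == 0:                     # transfer one column A -> C
        C[:, col] += lr_c * A[:, col]
        col = (col + 1) % n_in

# The weight error of the coupled system should shrink over training.
print("weight error:", np.linalg.norm(gamma * A + C - W_true))
```

Because gradient updates land only on the auxiliary array while the main array integrates its contents slowly, the main array tolerates update asymmetry that would stall plain SGD on a single resistive array, which is the device-relaxation effect the abstract describes.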

Symposium Organizers

Yoeri van de Burgt, Technische Universiteit Eindhoven
Yiyang Li, University of Michigan
Francesca Santoro, Forschungszentrum Jülich/RWTH Aachen University
Ilia Valov, Forschungszentrum Jülich

Symposium Support

Bronze
Nextron Corporation

Publishing Alliance

MRS publishes with Springer Nature