Seyoung Kim1
Pohang University of Science and Technology1
Artificial Intelligence (AI) technology based on artificial neural networks is making a huge business and socioeconomic impact, changing the way we live our daily lives. With the expectation of achieving brain-like efficiency and large performance gains, there has been increasing interest in implementing artificial neural networks using novel non-volatile memory devices. Novel cross-point array architectures based on resistive memory devices, which can be operated in a massively parallel manner, have shown potential to achieve a large acceleration in AI computation. For the realization of such architectures in hardware, the specifications for synaptic devices differ in many respects from those for traditional memory devices, necessitating new material systems and device designs [1, 2] as well as novel algorithms and architectures for efficient AI computation. In this talk, I will give an overview of recent progress and efforts to achieve ideal synaptic device characteristics for novel neuromorphic architectures, as exemplified in our recent experimental results on a capacitor-based approach [1, 3, 4] and 3-terminal ionic switching devices [5, 6]. In addition, I will discuss novel algorithmic and architecture-level remedies to overcome device non-idealities for neural network training [7].

[1] S. Kim et al., 2017 IEEE 60th MWSCAS, 2017.
[2] T. Gokmen et al., Frontiers in Neuroscience 10, 2016.
[3] Y. Li et al., Symposia on VLSI Technology and Circuits, 2018.
[4] Y. Kohda et al., 2020 IEEE International Electron Devices Meeting, 2020.
[5] J. Tang et al., 2018 IEEE International Electron Devices Meeting, 2018.
[6] S. Kim et al., 2019 IEEE International Electron Devices Meeting, 2019.
[7] C. Lee et al., Frontiers in Neuroscience (Accepted).
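To make the parallelism of the cross-point array concrete, the sketch below models the basic operation such arrays accelerate: a vector-matrix multiply performed in a single analog step, where each resistive device stores a weight as a conductance and the column currents sum the products by Ohm's and Kirchhoff's laws. This is a minimal illustrative model, not an implementation from the talk; the array size and conductance range are hypothetical.

```python
import numpy as np

# Hypothetical 4x3 cross-point array: device at row i, column j stores a
# weight as a conductance G[i, j] (siemens). The value range is illustrative.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))

# Read voltages applied to the rows (volts).
V = np.array([0.1, 0.2, 0.0, 0.3])

# Each column current is the sum over rows of V[i] * G[i, j] (Kirchhoff's
# current law), so all column outputs of the vector-matrix product appear
# simultaneously in hardware -- here emulated as a single matrix product.
I = V @ G

print(I.shape)  # one current per column
```

In a hardware array this multiply-accumulate happens in constant time regardless of the matrix size, which is the source of the acceleration described above; device non-idealities (limited precision, asymmetric updates, drift) perturb G, motivating the algorithmic remedies discussed in the talk.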