MRS Meetings and Events

EL18.16.02 2023 MRS Spring Meeting

A Nanomesh Skin-Sensor that Rapidly Learns Hand-Based Tasks with Limited Trials

When and Where

Apr 14, 2023
2:00pm - 2:15pm

Moscone West, Level 3, Room 3018

Presenter

Co-Author(s)

Kyun Kyu Kim, Zhenan Bao

Stanford University

Abstract

With the help of machine learning, electronic devices, including gloves and electronic skins, can track the movement of human hands and perform tasks such as object and gesture recognition. However, such devices can be bulky and lack the ability to adapt to the curvature of the body. Furthermore, existing models for signal processing require massive amounts of labelled data for individual tasks and users.

Here, we report a nanomesh receptor that is integrated with an unsupervised meta-learning scheme and can be used for data-efficient, user-independent recognition of different hand tasks. The nanomesh is based on biocompatible materials and can be printed directly onto the skin without an external substrate, which improves user comfort and avoids potential mechanical constraints from a substrate. The system translates skin stretches into proprioceptive information, analogous to the way cutaneous receptors provide feedback for the hand. With this approach, complex proprioceptive signals can be decoded using a single sensor along the index finger, without the need for a multi-sensing array. Highly informative multi-joint proprioceptive information can thus be captured as low-dimensional data, reducing the computational processing time of our learning network. Our learning framework does not require large amounts of data to be collected for each individual user. We develop a time-dependent contrastive learning algorithm that provides an awareness of temporal continuity and generates a motion feature space. Our system pretrains on unlabelled signals collected from three different users to distinguish user-independent, task-specific sensor signal patterns from random hand motion. We show that the pretrained model can quickly adapt to different daily tasks (motion command, keypad typing, two-handed keyboard typing, and object recognition) using only a few personal hand signals [1].

Reference:
1. K. K. Kim and Z. Bao, Nature Electronics, in press.
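The abstract does not give implementation details, but a time-dependent contrastive objective of the kind described typically treats temporally adjacent sensor windows as positive pairs and temporally distant windows as negatives. The following minimal sketch illustrates that idea for a single-sensor strain signal; the encoder architecture, window length, embedding size, and all names (SensorEncoder, time_contrastive_loss) are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of time-contrastive pretraining on unlabelled
# single-sensor signals. Architecture and hyperparameters are
# assumptions for illustration, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SensorEncoder(nn.Module):
    """Maps a 1-D window of sensor readings to a unit-norm embedding."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )

    def forward(self, x):  # x: (batch, 1, window)
        return F.normalize(self.net(x), dim=-1)

def time_contrastive_loss(z_t, z_near, z_far, tau=0.1):
    """InfoNCE-style loss: pull embeddings of temporally adjacent
    windows together, push temporally distant windows apart."""
    pos = (z_t * z_near).sum(dim=-1) / tau      # (batch,)
    neg = z_t @ z_far.T / tau                   # (batch, batch)
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)
    labels = torch.zeros(len(z_t), dtype=torch.long)  # positive is index 0
    return F.cross_entropy(logits, labels)

# Usage: anchors, their immediate successors in time (positives), and
# windows drawn from distant times or other recordings (negatives).
enc = SensorEncoder()
x_t, x_near, x_far = (torch.randn(8, 1, 128) for _ in range(3))
loss = time_contrastive_loss(enc(x_t), enc(x_near), enc(x_far))
loss.backward()
```

Pretraining of this sort yields a motion feature space from unlabelled data alone; a small task-specific head could then be fit on the handful of labelled personal signals the abstract mentions for few-shot adaptation.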

Symposium Organizers

Ho-Hsiu Chou, National Tsing Hua University
Francisco Molina-Lopez, KU Leuven
Sihong Wang, University of Chicago
Xuzhou Yan, Shanghai Jiao Tong University

Symposium Support

Bronze
Azalea Vision
MilliporeSigma
Device, Cell Press

Publishing Alliance

MRS publishes with Springer Nature