April 7 - 11, 2025
Seattle, Washington
2025 MRS Spring Meeting & Exhibit
SB10.04.04

Decoding Neural Intrinsic Dynamics with Flexible Brain-Machine Interfaces

When and Where

Apr 9, 2025
9:30am - 10:00am
Summit, Level 3, Room 332

Presenter(s)

Jia Liu, Harvard University

Abstract

The brain is a highly dynamic system with constantly evolving neural activity. Capturing these changes reliably is difficult without a stable recording platform. In this talk, I will present our recent work designing different types of flexible electrode arrays that achieve stable neural recordings over months to years. I will then describe how these arrays were used to record stimulus-dependent single-unit action potentials in the mouse visual cortex. This enabled us to track action potentials from the same neurons across extended periods of visual stimulation, providing insight into representational drift during these stimuli. Through this approach, we tested hypotheses about the origins and mechanisms of representational drift, tracked the resulting transformations of the latent dynamics, and modeled these transformations with affine analysis. These findings enabled us to build a long-term stable, high-performance visual information decoder that adapts to neural representational drift, opening the door to chronically stable, flexible brain-machine interfaces (BMIs) in brain regions that exhibit representational drift.

Next, I will discuss neuromorphic algorithms and hardware designs that enable real-time decoding of neural intrinsic dynamics for more efficient BMIs. I will also highlight how leveraging these dynamics can address a key limitation in AI: catastrophic forgetting. Inspired by representational drift, we developed DriftNet, a novel deep neural network framework. DriftNet not only surpasses conventional networks that lack drift but also outperforms state-of-the-art lifelong learning models. Our results demonstrate that it provides a robust, cost-effective, and adaptive approach to equipping large language models (LLMs), such as GPT-2 and RoBERTa, with lifelong learning capabilities.
Lastly, I will discuss future perspectives on studying neural intrinsic dynamics through flexible BMIs and their broader impact on machine intelligence development.
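The abstract mentions modeling drift-induced transformations of latent dynamics with affine analysis. As a purely illustrative sketch (not the authors' code), the idea can be demonstrated by fitting an affine map between simulated latent trajectories from two recording sessions via least squares; all variable names and parameters below are hypothetical.

```python
import numpy as np

# Hypothetical illustration: model representational drift between two
# sessions as an affine transformation of the latent neural state,
# Y ≈ X @ A + b, and recover (A, b) by ordinary least squares.

rng = np.random.default_rng(0)

# Simulated latent trajectories: T time points, d latent dimensions.
T, d = 200, 5
X = rng.normal(size=(T, d))                    # session 1 latents

# Ground-truth drift: a random affine map applied to session 1.
A_true = np.eye(d) + 0.1 * rng.normal(size=(d, d))
b_true = 0.5 * rng.normal(size=d)
Y = X @ A_true + b_true + 0.01 * rng.normal(size=(T, d))  # session 2 latents

# Fit the affine map by appending a bias column to the design matrix.
X_aug = np.hstack([X, np.ones((T, 1))])
coef, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)
A_hat, b_hat = coef[:d], coef[d]

# A decoder trained on session 1 latents could then be reused on session 2
# by first mapping new data back through the fitted affine transformation.
fit_error = np.max(np.abs(A_hat - A_true))
```

With enough time points and low noise, the recovered map closely matches the ground truth, which is the property that would let a fixed decoder stay useful across drifting sessions.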

Symposium Organizers

Francesca Santoro, RWTH Aachen University
Yoeri van de Burgt, Technische Universiteit Eindhoven
Dmitry Kireev, University of Massachusetts Amherst
Damia Mawad, University of New South Wales

Session Chairs

Samuel Liu
Yoeri van de Burgt
