Transfer Deep RNN on Neuromorphic Hardware

It is a truth universally acknowledged (well, at least by another workgroup) that the neuromorphic community is in need of efficient algorithms for spatiotemporal processing that can guide the behaviour of autonomous robots in complex environments. Spiking recurrent neural networks (sRNNs) are suitable candidates for such efficient processing; however, it remains a challenge to stabilize their dynamics.

A recent paper (He et al., 2019) showed that it is possible to transfer the dynamical properties of a well-performing neural network, optimized on a digital computer, onto neuromorphic hardware (Dynapse) using a computational scheme called Reservoir Transfer. Empirical results on an ECG signal monitoring task showed that the transferred reservoir with ternary weights is able not only to integrate information over a time span longer than the timescale of individual neurons, but also to function as an information processing medium with performance close to a standard high-precision, deterministic, non-spiking ESN.
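To make the "ternary weights" idea concrete, here is a minimal sketch of one common ternarization heuristic: weights below a threshold (relative to the mean magnitude) are zeroed, and the rest are mapped to a single positive or negative level. The threshold value and scaling rule here are illustrative assumptions; the exact scheme used in the Reservoir Transfer paper may differ.

```python
import numpy as np

def ternarize(W, threshold=0.7):
    """Quantize a weight matrix to three levels {-a, 0, +a}.

    threshold is relative to the mean absolute weight (an
    illustrative heuristic, not necessarily the paper's scheme).
    The scale a is the mean magnitude of the surviving entries.
    """
    delta = threshold * np.mean(np.abs(W))
    mask = np.sign(W) * (np.abs(W) > delta)   # entries in {-1, 0, +1}
    alpha = np.abs(W[mask != 0]).mean() if np.any(mask) else 0.0
    return alpha * mask

# Example: a small random weight matrix collapses to three levels.
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(100, 100))
W_ternary = ternarize(W)
print(np.unique(np.sign(W_ternary)))
```

On low-precision hardware such a three-level representation is attractive because each synapse needs only a sign and an on/off state, with a single global scale.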

In this workgroup, we will extend the previous work by transferring a BPTT-trained recurrent network, instead of a reservoir, to the Dynapse hardware. Specifically, we will take a deep (vanilla) RNN trained on simple signal processing tasks (such as temporal MNIST digit recognition) and transfer it into a BRIAN-simulated spiking network. If promising results are achieved in simulation, we will then use the CortexControl interface to transfer it onto Dynapse.
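As a rough picture of the simulation target, the sketch below Euler-integrates a recurrent network of leaky integrate-and-fire (LIF) neurons in plain NumPy. It stands in for the BRIAN simulation conceptually only: all parameters (time constant, threshold, reset-to-zero) and the input format are placeholder assumptions, not the workgroup's actual setup.

```python
import numpy as np

def simulate_lif(W, inputs, dt=1e-3, tau=20e-3, v_th=1.0):
    """Euler-integrate a recurrent LIF network (illustrative only).

    W      : recurrent weight matrix, shape (N, N)
    inputs : external input per step, shape (T, N)
    Returns a binary spike raster of shape (T, N).
    """
    T, N = inputs.shape
    v = np.zeros(N)                     # membrane potentials
    spikes = np.zeros((T, N))
    for t in range(T):
        # recurrent drive from spikes of the previous step
        rec = spikes[t - 1] @ W if t > 0 else 0.0
        v += (dt / tau) * (-v) + inputs[t] + rec   # leak + input
        fired = v >= v_th
        spikes[t] = fired
        v[fired] = 0.0                  # reset after a spike
    return spikes

# Example: constant drive to 3 unconnected neurons produces spikes.
raster = simulate_lif(np.zeros((3, 3)), np.full((50, 3), 0.2))
print(raster.sum(axis=0))
```

Transferring the trained deep RNN then amounts to mapping its weights (e.g. after ternarization) onto `W` and checking that the spiking dynamics still solve the task before moving to CortexControl and Dynapse.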



Day Time Location
Wed, 24.04.2019 16:30 - 17:00 Sala Panorama
Thu, 25.04.2019 14:30 - 15:00 Sala Panorama
Fri, 26.04.2019 15:30 - 16:00 Disco


Xu He


Felix Bauer
Karla Burelo
Erika Covi
Elisa Donati
Christos Giotis
Qinghai Guo
Germain Haessig
Michael Hopkins
Raphaela Kreiser
Alexander Kugele
Zhaofang Li
Dongchen Liang
Marco Monforte
Dylan Muir
Mattias Nilsson
Melika Payvand
Alpha Renner
Nicoletta Risi
Vanessa Rodrigues Coelho Leite
Oskar Weinberger
Dmitrii Zendrikov