Do surrogate gradients dream of spiky GPUs?

Gradient-based learning may not be what the brain is doing, and there are plenty of issues around data hunger and the like, but a lot of people are using gradient-based learning rules to train SNNs and other event-based models such as the eGRU to do really cool things. However, most of these event-based networks are trained on GPUs, even though learning rules like EventProp (other learning rules are available) are very well suited to the computational substrate of flexible digital neuromorphic systems like SpiNNaker 2 (more so than more biologically plausible three-factor learning rules). Are these systems already capable of GPU-beating end-to-end on-chip training of SNNs? If not, what would a future system of this sort look like?
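To make the discussion concrete, here is a minimal sketch of the surrogate-gradient trick the title alludes to: the forward pass uses the non-differentiable Heaviside spike function, while the backward pass substitutes a smooth surrogate derivative (here the fast-sigmoid surrogate popularized by SuperSpike). The function names, the `beta` sharpness parameter, and the single-neuron `lif_step` are illustrative choices, not any particular library's API.

```python
import math

def heaviside(x):
    # Forward pass: the actual spike nonlinearity (non-differentiable).
    return 1.0 if x >= 0.0 else 0.0

def surrogate_grad(x, beta=10.0):
    # Backward pass: derivative of a fast sigmoid, 1 / (beta*|x| + 1)^2,
    # used in place of the Heaviside's zero-almost-everywhere derivative.
    # beta controls how sharply the surrogate peaks around threshold.
    return 1.0 / (beta * abs(x) + 1.0) ** 2

def lif_step(v, inp, alpha=0.9, theta=1.0):
    # One leaky integrate-and-fire update: leak, integrate, spike, hard reset.
    v = alpha * v + inp
    s = heaviside(v - theta)
    v = v * (1.0 - s)  # reset membrane potential on spike
    return v, s

# A subthreshold input leaves the membrane charged; a large one spikes and resets.
v1, s1 = lif_step(0.0, 0.5)   # no spike: v1 = 0.5, s1 = 0.0
v2, s2 = lif_step(0.0, 1.5)   # spike:    v2 = 0.0, s2 = 1.0
```

In a full training loop, an autograd framework would call `surrogate_grad(v - theta)` wherever the chain rule needs the derivative of the spike; EventProp-style rules avoid this by propagating adjoints only at spike times, which is part of why they map so well onto event-driven hardware.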



Day: Wed, 01.05.2024
Time: 21:00 - 22:00
Location: Lecture room


Jimmy Weber


Christian Fernandez Lorden
Anindya Ghosh
Jesse Hagenaars
Sanja Karilanova
Thomas Shoesmith