Cortical networks have the remarkable ability to self-assemble into dynamic regimes in which excitatory positive feedback is balanced by recurrent inhibition. This inhibition-stabilized regime is increasingly viewed as the default dynamic regime of cortex and is believed to underlie many cortical computations, such as input amplification, working memory, or motor control. High-gain excitation balanced by inhibition is also a fundamental ingredient of recurrent neural network models that perform complex computational tasks. However, the learning mechanisms responsible for bringing networks into the inhibition-stabilized regime remain elusive. We have realized that this is because networks in this regime exhibit ‘paradoxical’ responses: increasing the external input to inhibitory neurons paradoxically *decreases* their steady-state firing rate, which makes classic forms of homeostatic plasticity fail in this context. We have recently developed a family of learning rules operating on all four synaptic weight classes (WE←E, WE←I, WI←E, WI←I) that overcome the paradoxical effect and robustly lead to the unsupervised emergence of inhibition-stabilized networks.
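The paradoxical effect can be reproduced in a minimal two-population rate model (a standard textbook construction, not code from our preprint; all weight values below are illustrative). With W_EE > 1 the excitatory subnetwork alone is unstable, so recurrent inhibition is what stabilizes it; in that regime, extra drive to the inhibitory population *lowers* its steady-state rate:

```python
# Minimal two-population rate model with threshold-linear f.
# Weights are illustrative: W_EE > 1 makes the E subnetwork unstable on its
# own, but recurrent inhibition stabilizes the full network (an ISN).
W_EE, W_EI, W_IE, W_II = 2.0, 1.0, 2.0, 0.5

def steady_state(I_E, I_I, dt=0.1, steps=5000):
    """Euler-integrate dr/dt = -r + relu(W.r + I) to its fixed point."""
    r_E = r_I = 0.0
    for _ in range(steps):
        r_E += dt * (-r_E + max(0.0, W_EE * r_E - W_EI * r_I + I_E))
        r_I += dt * (-r_I + max(0.0, W_IE * r_E - W_II * r_I + I_I))
    return r_E, r_I

rE0, rI0 = steady_state(I_E=1.0, I_I=1.0)   # baseline
rE1, rI1 = steady_state(I_E=1.0, I_I=1.2)   # extra input to inhibition
print(f"baseline:     r_E={rE0:.2f}, r_I={rI0:.2f}")  # -> r_E=1.00, r_I=2.00
print(f"more I drive: r_E={rE1:.2f}, r_I={rI1:.2f}")  # -> r_E=0.60, r_I=1.60
```

Note that the inhibitory rate falls (2.00 → 1.60) despite receiving more input, which is exactly why a naive homeostatic rule that raises inhibitory drive to curb inhibitory firing pushes the network the wrong way.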
So far our rules only bring an RNN to the inhibition-stabilized regime, but can they be combined with other forms of learning, such as Hebbian plasticity, to perform interesting computations? As a starting point, we will work on a rate-based model and incorporate a Hebbian rule to solve a simple temporal task. Additionally, if anyone is interested in implementing the rules in hardware, we would be very curious to see whether they can achieve stable dynamics on a neuromorphic RNN.
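For concreteness, combining a rate-based RNN with Hebbian plasticity could look like the sketch below. Everything here is an illustrative assumption: the network size, the tanh nonlinearity, the toy sinusoidal input, and the plain Hebbian-with-decay update are placeholders, not the learning rules from our preprint.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                        # network size (arbitrary for this sketch)
tau, dt, eta = 1.0, 0.1, 1e-3 # time constant, step size, learning rate
W = rng.normal(0.0, 0.5 / np.sqrt(N), (N, N))  # random recurrent weights
r = np.zeros(N)

def step(r, W, I):
    """One Euler step of a standard rate RNN: tau dr/dt = -r + tanh(W.r + I)."""
    return r + (dt / tau) * (-r + np.tanh(W @ r + I))

for t in range(1000):
    I = np.sin(0.1 * t) * np.ones(N)  # toy temporal input
    r = step(r, W, I)
    # Plain Hebbian update with weight decay (illustrative only; our actual
    # rules act on all four E/I weight classes to avoid the paradoxical effect)
    W += eta * (np.outer(r, r) - 0.1 * W)
```

In practice the Hebbian term would run alongside our stabilizing rules, which would keep the recurrent dynamics in the inhibition-stabilized regime while the Hebbian component shapes the computation.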
For more information about the learning rules, see our preprint and the associated MATLAB code (a Python version is coming soon).
| Date | Time | Location |
| --- | --- | --- |
| Wed, 04.05.2022 | 21:00 - 21:30 | Lecture Room |
| Thu, 05.05.2022 | 21:00 - 22:00 | Sala Panorama |
| Mon, 09.05.2022 | 15:00 - 16:00 | Terrace (outside lobby) |