Description: |
Cortical networks have the remarkable ability to self-assemble into dynamic regimes in which excitatory positive feedback is balanced by recurrent inhibition. This inhibition-stabilized regime is increasingly viewed as the default dynamic regime of the cortex and is believed to underlie many cortical computations, such as input amplification, working memory, or motor control. High-gain excitation balanced by inhibition is also a fundamental ingredient of recurrent neural network (RNN) models able to perform complex computational tasks. However, the learning mechanisms that bring networks into the inhibition-stabilized regime remain elusive. We have realized that this is because networks in this regime exhibit ‘paradoxical’ responses, which make classic forms of homeostatic plasticity fail in this context. We have recently developed a family of learning rules operating on all four synaptic weight classes (W_E←E, W_E←I, W_I←E, W_I←I) that overcome the paradoxical effect and robustly lead to the unsupervised emergence of inhibition-stabilized networks.
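To make the paradoxical effect concrete, below is a minimal two-population rate-model sketch; this is our own illustrative toy with assumed parameters, not the model from the preprint. Because W_E←E > 1, excitation alone is unstable and the network is only stabilized by feedback inhibition; in this regime, extra excitatory drive to the inhibitory population paradoxically *lowers* its steady-state rate, which is exactly what defeats classic homeostatic plasticity.

```python
# A minimal illustrative sketch (not the preprint's model): two-population rate dynamics
#   tau_E dr_E/dt = -r_E + [W_EE r_E - W_EI r_I + e_in]_+
#   tau_I dr_I/dt = -r_I + [W_IE r_E - W_II r_I + i_in]_+
# All parameter values are assumptions, chosen so that W_EE > 1 (excitation alone
# is unstable) while feedback inhibition stabilizes the full network (ISN regime).
W_EE, W_EI, W_IE, W_II = 2.0, 1.5, 2.5, 1.0
TAU_E, TAU_I, DT = 10.0, 2.0, 0.1

def steady_state(i_in, e_in=2.0, steps=20000):
    """Integrate the rate equations to steady state for a given drive to I."""
    r_E, r_I = 1.0, 1.0
    for _ in range(steps):
        r_E += DT / TAU_E * (-r_E + max(0.0, W_EE * r_E - W_EI * r_I + e_in))
        r_I += DT / TAU_I * (-r_I + max(0.0, W_IE * r_E - W_II * r_I + i_in))
    return r_E, r_I

# The paradox: extra excitatory drive to I *lowers* its steady-state rate, because
# the E population withdraws recurrent excitation from I. A homeostatic rule that
# strengthens drive onto I to bring activity down therefore moves the wrong way.
print(steady_state(i_in=1.0))  # ~ (1.43, 2.29)
print(steady_state(i_in=1.5))  # ~ (1.00, 2.00)  <- r_I went *down*
```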
So far our rules only bring an RNN to the inhibition-stabilized regime, but can they be combined with other forms of learning, such as Hebbian plasticity, to perform interesting computations? As a start, we will work on a rate-based model and incorporate a Hebbian rule to solve a simple temporal task. Additionally, if anyone is interested in implementing the rules in hardware, we would be very curious to see whether they can achieve stable dynamics on a neuromorphic RNN.
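For concreteness, the kind of Hebbian update we have in mind could look like the sketch below. The specific rule, the Oja-style normalization, the learning rate eta, and all values are illustrative assumptions, not rules from the preprint; in the project it would run alongside the stabilizing rules on the four weight classes.

```python
# A purely hypothetical sketch (not a rule from the preprint): a rate-based
# Hebbian update on the excitatory weights W_E<-E, with Oja-style normalization
# to keep the weights bounded. eta and all values are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
N_E = 20
eta = 1e-3
W_EE = np.abs(rng.normal(0.1, 0.02, (N_E, N_E)))  # W_EE[i, j]: weight j -> i

def hebbian_step(W, r):
    """Oja-style Hebbian update: dW_ij = eta * (r_i * r_j - r_i**2 * W_ij)."""
    W = W + eta * (np.outer(r, r) - (r ** 2)[:, None] * W)
    return np.maximum(W, 0.0)  # keep excitatory weights non-negative (Dale's law)

r_E = np.abs(rng.normal(1.0, 0.2, N_E))  # stand-in for rates from the stabilized RNN
W_EE = hebbian_step(W_EE, r_E)
```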
For more information about the learning rules, see our preprint
https://doi.org/10.1101/2020.12.30.424888
and the associated MATLAB code (a Python version is coming soon)
https://github.com/saraysoldado/UpDev2021