How to design an EMG-prosthesis system that mimics natural hand movement remains an open question. One approach is classification-based control: the user's motor intent is decoded and mapped to a finite set of motions. However, this approach suffers from one main limitation: the user can only perform a restricted set of motions and therefore cannot exploit the full dexterity of currently available prosthetic hands. Another approach is to move away from classification and rephrase the problem as a regression task. In this case, we would ideally want to continuously reconstruct motion kinematics (e.g. position, force, joint angles, …) for individual fingers. This approach allows simultaneous and continuous control of multiple degrees of freedom, leading to more intuitive and natural EMG-prostheses.
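As a rough illustration of the regression formulation, a conventional (non-spiking) baseline could map windowed sEMG features linearly to finger positions via ridge regression. The sketch below is purely illustrative: the channel/finger counts, the synthetic data, and the feature choice are assumptions, not part of the project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 8 sEMG channels, 5 finger-position outputs,
# 500 analysis windows. X stands in for windowed RMS features.
n_channels, n_fingers, n_windows = 8, 5, 500
X = rng.standard_normal((n_windows, n_channels))

# Synthetic finger positions from an unknown linear map plus noise.
W_true = rng.standard_normal((n_channels, n_fingers))
Y = X @ W_true + 0.1 * rng.standard_normal((n_windows, n_fingers))

# Ridge regression: W = (X^T X + lam * I)^-1 X^T Y
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

# Continuous reconstruction: one position estimate per window and finger.
Y_hat = X @ W
rmse = np.sqrt(np.mean((Y - Y_hat) ** 2))
print(f"training RMSE: {rmse:.3f}")
```

A baseline of this kind gives a reference error level against which the spiking-network decoder can be compared.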
This project aims to investigate the feasibility of decoding individual finger positions from sEMG data using spiking neural networks simulated in Brian2, in comparison with conventional machine-learning approaches. Successful networks can then be mapped onto DYNAPSE-2 to control a MIA prosthetic hand in “real time”.
Sample questions we plan to investigate in this project:
- Can the firing rates of the spiking network's output layer be mapped to finger positions?
- Is a feedforward topology suitable for this task? Are recurrent connections needed, and do they help solve the task?
- How should an SNN learn, and, equally importantly, what should it learn?
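To make the first question concrete, a minimal sketch of a rate-based readout follows, assuming one output unit per finger and an affine rate-to-position map. The spike trains, gains, and window length are synthetic placeholders, not recorded data or the project's actual decoder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical output layer: one spiking unit per finger; its firing
# rate in a sliding window is read out as that finger's position.
n_units, duration, window = 5, 2.0, 0.1  # seconds

# Synthetic Poisson spike trains standing in for SNN output activity.
rates_hz = np.array([20.0, 40.0, 10.0, 60.0, 30.0])
spike_trains = [np.sort(rng.uniform(0.0, duration, rng.poisson(r * duration)))
                for r in rates_hz]

def window_rates(trains, t0, t1):
    """Firing rate (Hz) of each unit within the window [t0, t1)."""
    return np.array([np.sum((s >= t0) & (s < t1)) for s in trains]) / (t1 - t0)

# Affine rate-to-position map; gain and offset would need per-finger
# calibration in practice (here: position 1.0 corresponds to 100 Hz).
gain, offset = 1.0 / 100.0, 0.0
t_edges = np.arange(0.0, duration, window)
positions = np.array([gain * window_rates(spike_trains, t, t + window) + offset
                      for t in t_edges])
print(positions.shape)  # one position estimate per window and unit
```

Whether such a linear rate code suffices, or whether temporal structure in the spike trains carries additional information, is exactly what the comparison between topologies and learning rules should reveal.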
|Tue, 03.05.2022||16:30 - 17:00||Disco|