Even though many datasets are time-dependent (videos, audio, biological signals, DVS streams, etc.), not all of them actually require machine learning models to exploit time-dependent features for effective classification. For example, in videos a single frame already carries a lot of information, which can be sufficient for solving certain simpler tasks. On the other hand, lower-dimensional data, such as audio and EMG, usually require the network to integrate features over time spans of tens to hundreds of milliseconds for accurate classification. This is usually achieved through recurrent network structures, long time constants, or other specialized architectures.
Solutions that do not rely on recurrent architectures exist for audio data, such as the well-known WaveNet, a state-of-the-art model for audio generation and classification. The adoption of such models in the spiking domain is only beginning to be explored. In this project, we aim to experiment with time-dependent classification in spiking neural networks, with possible applications to EMG or audio data, showing that recurrence is not needed even when time-dependent features (longer than the time constants of the neurons) are essential for classification. The final aim is to implement such a model on a neuromorphic chip, with a complete data-to-classification pipeline.
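The core mechanism that lets WaveNet-style models capture long temporal context without recurrence is the stack of dilated causal convolutions: each layer doubles the dilation, so the receptive field grows exponentially with depth while each output still depends only on past inputs. The sketch below (a minimal NumPy illustration, not part of the project code; function names and the 1 kHz sampling-rate example are our assumptions) shows a single dilated causal convolution and the resulting receptive field of a stack.

```python
import numpy as np

def dilated_causal_conv1d(x, w, dilation):
    """Causal 1D convolution: output at time t depends only on
    x[t], x[t-d], ..., x[t-(k-1)*d], where d is the dilation."""
    k = len(w)
    pad = (k - 1) * dilation          # left-pad with zeros to stay causal
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        np.dot(w, xp[pad + t - dilation * np.arange(k)])
        for t in range(len(x))
    ])

def receptive_field(kernel_size, num_layers):
    """Receptive field of a stack with dilations 1, 2, 4, ..., 2**(L-1)."""
    return 1 + (kernel_size - 1) * (2 ** num_layers - 1)

# Eight layers with kernel size 2 already see 256 past samples,
# i.e. ~256 ms of context at a (hypothetical) 1 kHz EMG sampling rate.
print(receptive_field(2, 8))  # → 256
```

In a spiking implementation the convolution weights would act on spike trains rather than real-valued samples, but the causality and receptive-field arithmetic carry over unchanged.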