Reduced-Precision Computation and Stochastic Rounding
There is increasing interest in saving energy and memory bandwidth/footprint in conventional machine learning implementations by reducing the size and complexity of the arithmetic types used, and a number of implementations have done so successfully. We aim to discuss how far this can go in terms of the precision of the underlying storage and computation, how precision can be reduced without losing the accuracy needed for learning and/or inference, the role of stochasticity, and whether these questions overlap with spiking neural networks (where the representation is arguably 1-bit) or other low-energy computation mechanisms.
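To make the role of stochasticity concrete, the sketch below illustrates stochastic rounding, one standard way to keep low-precision training from losing small updates: each value is rounded up with probability equal to its fractional distance to the grid point above, so the result is unbiased in expectation. The function name, grid step, and NumPy-based implementation are illustrative assumptions, not anything prescribed by the session.

```python
import numpy as np

def stochastic_round(x, step=2.0 ** -8, rng=None):
    """Round x to the nearest multiple of `step`, stochastically.

    Each value is rounded up with probability equal to its fractional
    distance to the grid point above, so the rounding is unbiased:
    E[stochastic_round(x)] == x. (Illustrative sketch, not a
    reference implementation.)
    """
    rng = np.random.default_rng() if rng is None else rng
    scaled = np.asarray(x, dtype=np.float64) / step
    floor = np.floor(scaled)
    frac = scaled - floor                      # fractional part in [0, 1)
    round_up = rng.random(frac.shape) < frac   # P(round up) = frac
    return (floor + round_up) * step

# Unbiasedness check: averaged over many trials, stochastic rounding
# recovers the original value, whereas round-to-nearest would not.
x = 0.1 + 2.0 ** -10  # not representable on the 2^-8 grid
samples = stochastic_round(np.full(100_000, x))
print(samples.mean(), "vs", x)
```

This unbiasedness is why stochastic rounding lets gradient contributions smaller than the rounding step still accumulate on average, which deterministic round-to-nearest silently discards.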
|Date|Time|Room|
|Thu, 25.04.2019|16:00 - 17:00|Sala Panorama|
|Tue, 30.04.2019|16:00 - 17:00|Sala Panorama|