Speech denoising with RNN-based SNNs

Hearing impairment is a large societal problem, and speech enhancement is one way to mitigate it. Speech enhancement aims to recover clean speech from speech mixed with background noise, and such enhancement would ideally be integrated into hearing aids. The power, latency, and energy constraints of hearing aids make them a natural candidate for neuromorphic computing applications. At present, however, no effective SNN-based solutions exist because of performance and computational-complexity issues. With the novel learnable neurons, efficient learning rules, and surrogate gradients introduced in the last two years, new opportunities have emerged to address the hearing impairment problem with SNN-based neuromorphic solutions.
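To make these ingredients concrete, below is a minimal PyTorch sketch of a leaky integrate-and-fire (LIF) layer with a learnable membrane time constant and a fast-sigmoid surrogate gradient. The slope constant, unit threshold, and soft reset are illustrative choices, not a committed design.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid surrogate
    gradient in the backward pass (after Zenke & Ganguli, 2018)."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2  # slope 10 is a common choice

class LIFLayer(torch.nn.Module):
    """Leaky integrate-and-fire layer with per-neuron learnable time constants."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.fc = torch.nn.Linear(n_in, n_out)
        self.tau = torch.nn.Parameter(torch.full((n_out,), 5.0))  # learnable decay parameter

    def forward(self, x, v):
        alpha = torch.sigmoid(self.tau)    # membrane decay in (0, 1)
        v = alpha * v + self.fc(x)         # leaky integration of input current
        s = SpikeFn.apply(v - 1.0)         # spike when v crosses threshold 1.0
        return s, v - s                    # soft reset: subtract threshold on spike
```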


My question is thus: how can we design compact and powerful SNNs for speech enhancement? I propose to start from current compact RNN-based SNN models and to adapt and optimize them for the speech enhancement task while achieving very low latency.
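As a starting point, a compact recurrent SNN denoiser could be wired as in the sketch below, which assumes one common formulation: the network sees noisy STFT magnitude frames and emits a real-valued mask per frame. The single recurrent layer, hidden size, and sigmoid mask readout are placeholder choices, and LIFLayer is the layer sketched above.

```python
import torch

class SpikingDenoiser(torch.nn.Module):
    """Sketch: frame-by-frame mask estimation on noisy STFT magnitudes.
    Processing one frame at a time keeps algorithmic latency at one STFT hop."""
    def __init__(self, n_freq=257, n_hidden=256):
        super().__init__()
        self.n_hidden = n_hidden
        # Recurrence via concatenating the last spikes with the current input frame
        self.rec = LIFLayer(n_freq + n_hidden, n_hidden)
        self.readout = torch.nn.Linear(n_hidden, n_freq)

    def forward(self, mag):                          # mag: (time, batch, n_freq)
        T, B, _ = mag.shape
        s = mag.new_zeros(B, self.n_hidden)
        v = torch.zeros_like(s)
        masks = []
        for t in range(T):
            s, v = self.rec(torch.cat([mag[t], s], dim=-1), v)
            masks.append(torch.sigmoid(self.readout(s)))  # mask in (0, 1)
        return torch.stack(masks) * mag              # masked (denoised) magnitudes
```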


Aim: training on long sequences with online learning in the form of FPTT applied at the waveform level, combined with novel, more powerful SNN models (a simplified FPTT sketch follows the list below). Our exploration is planned to focus on the following points:


1. Develop compact, efficient recurrent SNNs for speech enhancement;


2. Reduce speech enhancement latency for better listener comfort;


3. Determine the effects of the quantization and pruning needed to meet the severe memory and computation limits of neuromorphic hardware (see the sketch after this list).
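As referenced above, the following is a simplified sketch of an FPTT-style online training loop: a per-frame loss plus a proximal term that pulls the parameters toward a running average, with the network state detached after every step so that memory stays constant in sequence length. model.step is an assumed single-frame interface, and the running-average update is a simplification of the original FPTT rule; all hyperparameters are placeholders.

```python
import torch

def train_fptt_online(model, frames, targets, lam=0.01, beta=0.5, lr=1e-3):
    """Simplified FPTT-style loop (sketch, not the exact published update)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    theta_bar = [p.detach().clone() for p in model.parameters()]  # running average
    state = None
    for x_t, y_t in zip(frames, targets):
        out, state = model.step(x_t, state)   # assumed per-frame API (hypothetical)
        loss = torch.nn.functional.mse_loss(out, y_t)
        # Proximal regularizer toward the running parameter average
        reg = sum(((p - pb) ** 2).sum()
                  for p, pb in zip(model.parameters(), theta_bar))
        opt.zero_grad()
        (loss + lam * reg).backward()
        opt.step()
        state = tuple(s.detach() for s in state)  # truncate the graph: O(1) memory
        with torch.no_grad():
            for p, pb in zip(model.parameters(), theta_bar):
                pb.mul_(1.0 - beta).add_(beta * p)  # update running average
```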

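For the quantization and pruning study in point 3, a post-training starting point might look like the sketch below: global magnitude pruning followed by symmetric uniform fake-quantization of the weight matrices. The 8-bit width and 80% sparsity are placeholders; the real constraints come from the target neuromorphic hardware, and quantization-aware training would likely be needed to retain enhancement quality.

```python
import torch

def prune_and_quantize(model, bits=8, sparsity=0.8):
    """Sketch: global magnitude pruning, then symmetric uniform fake-quant.
    Only weight matrices are touched; neuron parameters stay full precision."""
    with torch.no_grad():
        weights = [p for n, p in model.named_parameters() if "weight" in n]
        all_w = torch.cat([w.abs().flatten() for w in weights])
        thresh = torch.quantile(all_w, sparsity)         # global magnitude cutoff
        for w in weights:
            w.mul_((w.abs() >= thresh).float())          # zero the smallest weights
            scale = w.abs().max() / (2 ** (bits - 1) - 1)
            if scale > 0:
                w.copy_(torch.round(w / scale) * scale)  # round to the integer grid
```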

 


Timetable

Day Time Location
Tue, 02.05.2023 15:00 - 16:00 Lobby
Wed, 03.05.2023 15:00 - 16:00 Sala Panorama
Thu, 04.05.2023 14:00 - 16:00 Lobby
Fri, 05.05.2023 14:00 - 15:00 Sala Panorama
Mon, 08.05.2023 14:00 - 16:00 Lecture room

Moderator

Members

Steve Durstewitz
Mark Schoene