Hybrid deep learning based speech signal separation


dc.contributor.author Bouaouni, Mohamed Yacine
dc.contributor.author Ait-Ali-Yahia, Rayane
dc.contributor.other Belouchrani, Mohamed Arezki Adel, Directeur de thèse
dc.date.accessioned 2021-11-14T14:29:07Z
dc.date.available 2021-11-14T14:29:07Z
dc.date.issued 2021
dc.identifier.other EP00262
dc.identifier.uri http://repository.enp.edu.dz/xmlui/handle/123456789/9934
dc.description Final-year project dissertation (Mémoire de Projet de Fin d’Études) : Electronics : Algiers, École Nationale Polytechnique : 2021 fr_FR
dc.description.abstract Audio source separation is a challenging problem that consists in identifying the different sources present in a mixed signal, either with traditional model-based methods or with deep learning algorithms. In this work, we propose two paradigms for combining a model-based method (nonnegative matrix factorization, NMF) with neural networks so as to take advantage of both. The first approach fuses the NMF and a deep neural network (DNN) in a stack of two sequential stages, where the DNN enhances the separation by refining the spectrograms/gains estimated by the NMF. Two autoencoder-based architectures are presented in this thesis, handling two different kinds of input data. The second approach is based on the deep unfolding paradigm: the optimization algorithm of the model-based method is unrolled into the layers of a deep network, which is then trained using deep learning techniques. fr_FR
dc.language.iso en fr_FR
dc.subject Deep Learning fr_FR
dc.subject NMF fr_FR
dc.subject DNN fr_FR
dc.subject Autoencoders fr_FR
dc.subject Unfolding algorithm fr_FR
dc.title Hybrid deep learning based speech signal separation fr_FR
dc.type Thesis fr_FR
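The abstract's model-based building block, NMF with multiplicative updates followed by Wiener-like masking of the mixture spectrogram, can be sketched as below. This is a minimal illustration of the standard technique, not the thesis's implementation; the function names, the Euclidean-distance update rule, and the component-to-source grouping are assumptions for the example.

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=100, eps=1e-9, seed=0):
    """Factorize a nonnegative matrix V ~ W @ H using the classic
    multiplicative updates for the Euclidean cost ||V - WH||^2.
    V is typically a magnitude spectrogram (freq x time)."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, rank)) + eps   # spectral templates
    H = rng.random((rank, T)) + eps   # temporal activations (gains)
    for _ in range(n_iter):
        # Updates keep W, H nonnegative because all factors are nonnegative.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def wiener_masks(W, H, groups, eps=1e-9):
    """Soft (Wiener-like) masks, one per source. `groups` lists which
    NMF component indices are assigned to each source (an assumption:
    in practice this assignment comes from training or supervision)."""
    parts = [W[:, g] @ H[g, :] for g in groups]
    total = sum(parts) + eps
    return [p / total for p in parts]

# Separation: multiply each mask with the mixture spectrogram V.
```

In the thesis's first paradigm a DNN would refine the W/H (or masked spectrogram) estimates produced here; in the deep unfolding paradigm, each multiplicative-update iteration of the loop above becomes one network layer with learnable parameters.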

