[Pols-announce] Fwd: [Methods-l] CRMDA Activities for Friday, February 22 1-4pm
Haider-Markel, Donald Patrick
dhmarkel at ku.edu
Thu Feb 21 16:23:49 CST 2019
FYI
Don Haider-Markel
Sent via airborne drone
Begin forwarded message:
From: Paul Johnson via Methods-l <methods-l at lists.ku.edu>
Date: February 21, 2019 at 12:34:24 PM CST
To: <methods-l at lists.ku.edu>, "CRMDA-workgroups at googlegroups.com" <CRMDA-workgroups at googlegroups.com>
Subject: [Methods-l] CRMDA Activities for Friday, February 22 1-4pm
Reply-To: Paul Johnson <pauljohn at ku.edu>, Methodology Related Events <methods-l at lists.ku.edu>
Dear Friends of CRMDA,
Because last week's Python group was postponed due to the snow, we will hold three events tomorrow afternoon.
1. Python and Natural Language: 1PM Watson 455
Here again is the link to the materials we’ve been working with:
https://www.dropbox.com/sh/fm0j3rftm4eb0gj/AABckgOaI1muhTMIkH8DJZhka?dl=0
Please download the folder and explore the Jupyter Notebook. We are interested to hear thoughts about the tfidf matrix, calculations of distance, and any other interesting things you find.
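If you would like a preview before opening the notebook, here is a minimal sketch of the tf-idf and distance ideas (this is an illustration with made-up toy documents, not the workshop notebook itself; it assumes scikit-learn is installed):

```python
# Build a tf-idf matrix from a few toy documents and compare them by
# cosine distance -- the same quantities discussed in the notebook.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

docs = [
    "natural language processing with python",
    "python for text processing",
    "deep neural networks for speech",
]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)   # rows = documents, columns = terms (sparse)
D = cosine_distances(X)       # pairwise document-distance matrix

# Documents 0 and 1 share terms ("python", "processing"), so they sit
# closer together than documents 0 and 2, which share no terms.
print(D.shape)
```

Terms that appear in many documents get down-weighted by the idf factor, which is why cosine distance on tf-idf rows tends to work better than raw term counts.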
2. Big Data: 2PM Watson 455
Variable selection and regression "regularization"; group LASSO.
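As a warm-up for the discussion, here is a small sketch of LASSO-style variable selection with scikit-learn. This uses the plain LASSO penalty on simulated data I made up for illustration; the group LASSO on Friday's agenda extends the same idea by penalizing predefined groups of coefficients together.

```python
# LASSO shrinks small coefficients exactly to zero, so the nonzero
# entries of coef_ are the "selected" variables.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
# Only the first two of ten predictors actually drive the outcome.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)

fit = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(fit.coef_)  # indices the penalty kept nonzero
print(selected)
```

Raising `alpha` strengthens the penalty and drives more coefficients to zero; the relevant predictors (columns 0 and 1) survive because their true effects are large relative to the shrinkage.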
3. Presentation about 'deep learning' with neural networks: 3PM Watson 455
https://crmda.ku.edu/pirhosseinloo-20190221
Supervised Speech Separation Based on Deep Neural Networks
Presenter: Shadi Pir Hosseinloo
In real-world environments, the speech signals received by our ears are usually a combination of different sounds that include not only the target speech but also acoustic interference such as music, background noise, and competing speakers. This interference has a negative effect on speech perception and degrades the performance of speech processing applications such as automatic speech recognition (ASR), speaker identification, and hearing aid devices. One way to address this problem is to use source separation algorithms to separate the desired speech from the interfering sounds.

A supervised speech separation algorithm is proposed, based on deep neural networks, to estimate time-frequency masks. The main goal of the proposed algorithm is to increase the intelligibility and quality of speech recovered from noisy environments, which has the potential to improve both speech processing applications and signal processing strategies for hearing aid / cochlear implant technology.
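To make the mask idea concrete, here is a toy NumPy sketch of mask-based separation. It is not the presenter's system: in the supervised approach a DNN *estimates* the mask from the noisy input, and real systems work on STFT magnitudes with phase handling. Here the toy "spectrograms" are random matrices and we compute the ideal ratio mask directly, just to show how a mask is applied:

```python
# Toy mask-based separation: the mask gives speech's per-bin share of the
# mixture energy; multiplying the mixture by the mask recovers the speech.
import numpy as np

rng = np.random.default_rng(1)
speech = np.abs(rng.normal(size=(64, 100)))   # toy |STFT| of target speech
noise = np.abs(rng.normal(size=(64, 100)))    # toy |STFT| of interference
mixture = speech + noise                      # simplified additive magnitudes

# Ideal ratio mask: in each time-frequency bin, speech's fraction of energy.
irm = speech / (speech + noise)
estimate = irm * mixture                      # apply the mask to the mixture

# With an exact mask and additive magnitudes, the estimate equals the speech.
print(np.allclose(estimate, speech))          # prints True
```

The supervised learning problem is to predict a mask like `irm` from features of `mixture` alone, using pairs of clean and noisy signals as training data.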
PJ
lost my sig in the power outage :(
_______________________________________________
Methods-l mailing list
Methods-l at lists.ku.edu
https://lists.ku.edu/listinfo/methods-l