Open Access. Powered by Scholars. Published by Universities.®

Medicine and Health Sciences Commons


Articles 1 - 2 of 2

Full-Text Articles in Medicine and Health Sciences

Across-Speaker Articulatory Normalization For Speaker-Independent Silent Speech Recognition, Jun Wang, Ashok Samal, Jordan Green Sep 2014


CSE Conference and Workshop Papers

Silent speech interfaces (SSIs), which recognize speech from articulatory information (i.e., without using audio information), have the potential to enable persons with laryngectomy or a neurological disease to produce synthesized speech with a natural-sounding voice using their tongue and lips. Current approaches to SSIs have largely relied on speaker-dependent recognition models to minimize the negative effects of talker variation on recognition accuracy. Speaker-independent approaches are needed to reduce the large amount of training data required from each user; often, only limited articulatory samples are available from persons with moderate to severe speech impairments, owing to the logistic difficulty of …
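The abstract above concerns normalizing articulatory data across speakers so that one recognition model can serve many talkers. As an illustrative sketch only (the paper's actual normalization procedure is not given in this listing), one common way to remove per-speaker differences in tongue/lip sensor positions is a Procrustes-style alignment: translate, scale, and rotate each speaker's point set onto a reference. All names below are hypothetical, not from the paper:

```python
import numpy as np

def procrustes_normalize(points, reference):
    """Align one speaker's articulatory point set (n x d array of
    sensor coordinates) to a reference speaker via translation,
    uniform scaling, and rotation (ordinary Procrustes alignment).

    Illustrative sketch; not the published method. Reflections are
    not excluded here, which a production implementation might do.
    """
    p = np.asarray(points, float)
    r = np.asarray(reference, float)
    # Remove translation: center both point sets at the origin.
    p = p - p.mean(axis=0)
    r = r - r.mean(axis=0)
    # Remove scale: normalize each to unit Frobenius norm.
    p = p / np.linalg.norm(p)
    r = r / np.linalg.norm(r)
    # Optimal rotation from the SVD of the cross-covariance matrix.
    u, _, vt = np.linalg.svd(p.T @ r)
    rotation = u @ vt
    return p @ rotation
```

After this step, articulatory trajectories from different speakers live in a shared coordinate frame, which is the kind of precondition a speaker-independent recognizer needs.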


Preliminary Test Of A Real-Time, Interactive Silent Speech Interface Based On Electromagnetic Articulograph, Jun Wang, Ashok Samal, Jordan R. Green Jun 2014


CSE Conference and Workshop Papers

A silent speech interface (SSI) maps articulatory movement data to speech output. Although still in experimental stages, silent speech interfaces hold significant potential for facilitating oral communication in persons after laryngectomy or with other severe voice impairments. Despite recent efforts to develop silent speech recognition algorithms through offline data analysis, online tests of SSIs have rarely been conducted. In this paper, we present a preliminary, online test of a real-time, interactive SSI based on electromagnetic motion tracking. The SSI played back synthesized speech sounds in response to the user’s tongue and lip movements. Three English talkers participated in this …
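The core loop described above, recognizing a tongue/lip movement pattern and playing back the corresponding synthesized speech, can be sketched at its simplest as nearest-template matching over time-normalized trajectories. This is a toy stand-in for the paper's recognition engine, assuming trajectories are (time, features) arrays of sensor coordinates; the function names and the matching scheme are illustrative assumptions, not the published system:

```python
import numpy as np

def resample(traj, n=20):
    """Linearly resample a (time, features) trajectory to n frames
    so trajectories of different durations can be compared."""
    traj = np.asarray(traj, float)
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(t_new, t_old, traj[:, d])
                     for d in range(traj.shape[1])], axis=1)

def recognize(trajectory, templates):
    """Return the label of the stored template trajectory nearest
    (Euclidean distance after resampling) to the incoming tongue/lip
    movement. A real SSI would then play back the synthesized audio
    stored under that label."""
    x = resample(trajectory)
    best_label, best_dist = None, np.inf
    for label, tmpl in templates.items():
        d = np.linalg.norm(x - resample(tmpl))
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label
```

In an online setting, `recognize` would run on each completed movement segment, and the returned label would index a bank of pre-synthesized speech sounds for immediate playback, which is the interactive behavior the abstract describes.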