Open Access. Powered by Scholars. Published by Universities.®

Medicine and Health Sciences Commons


University of Nebraska - Lincoln

Silent speech interface

Articles 1 - 2 of 2

Full-Text Articles in Medicine and Health Sciences

Preliminary Test Of A Real-Time, Interactive Silent Speech Interface Based On Electromagnetic Articulograph, Jun Wang, Ashok Samal, Jordan R. Green Jun 2014

CSE Conference and Workshop Papers

A silent speech interface (SSI) maps articulatory movement data to speech output. Although still in experimental stages, silent speech interfaces hold significant potential for facilitating oral communication in persons after laryngectomy or with other severe voice impairments. Despite recent efforts on silent speech recognition algorithm development using offline data analysis, online tests of SSIs have rarely been conducted. In this paper, we present a preliminary, online test of a real-time, interactive SSI based on electromagnetic motion tracking. The SSI played back synthesized speech sounds in response to the user’s tongue and lip movements. Three English talkers participated in this …
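
The abstract describes a loop that maps buffered tongue and lip movement data to a phrase and plays back the corresponding synthesized sound. Below is a minimal sketch of that loop, assuming a nearest-template matcher over sensor trajectories; the phrase set, sensor dimensions, and matching rule are illustrative assumptions, not the paper's implementation.

import numpy as np

def classify(movement, templates):
    """Return the phrase whose stored trajectory is closest to the input."""
    best_phrase, best_dist = None, float("inf")
    for phrase, template in templates.items():
        # Truncate to a common length so the frame-wise distance is defined.
        n = min(len(movement), len(template))
        dist = np.linalg.norm(movement[:n] - template[:n], axis=1).mean()
        if dist < best_dist:
            best_phrase, best_dist = phrase, dist
    return best_phrase

# Toy templates: each phrase maps to a (frames x coordinates) array standing
# in for recorded tongue/lip sensor trajectories (hypothetical data).
rng = np.random.default_rng(0)
templates = {p: rng.normal(size=(50, 6)) for p in ("hello", "thank you")}

# One simulated utterance: a noisy copy of the "hello" trajectory.
utterance = templates["hello"] + rng.normal(scale=0.1, size=(50, 6))
print("recognized:", classify(utterance, templates))  # -> hello

In the interactive system described, the recognized phrase would be routed to a speech synthesizer for immediate playback rather than printed.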


Sentence Recognition From Articulatory Movements For Silent Speech Interfaces, Jun Wang, Ashok Samal, Jordan R. Green, Frank Rudzicz Mar 2012

Department of Special Education and Communication Disorders: Faculty Publications

Recent research has demonstrated the potential of using an articulation-based silent speech interface for command-and-control systems. Such an interface converts articulation to words that can then drive a text-to-speech synthesizer. In this paper, we propose a novel near-time algorithm to recognize whole sentences from continuous tongue and lip movements. Our goal is to assist persons who are aphonic or have a severe motor speech impairment to produce functional speech using their tongue and lips. Our algorithm was tested using a functional sentence data set collected from ten speakers (3012 utterances). The average accuracy was 94.89% with an average latency of …
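
Because continuous tongue and lip movements vary in length and speaking rate across utterances, a common way to compare whole-sentence trajectories is dynamic time warping (DTW). The sketch below uses a DTW nearest-neighbor classifier as a stand-in; this is an assumption for illustration and does not reproduce the paper's near-time algorithm.

import numpy as np

def dtw_distance(a, b):
    """Classic DTW cost between two (frames x dims) trajectories."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return cost[n, m]

def recognize(utterance, sentences):
    """Return the sentence whose stored trajectory has the smallest DTW cost."""
    return min(sentences, key=lambda s: dtw_distance(utterance, sentences[s]))

# Hypothetical training data: one articulatory trajectory per sentence.
rng = np.random.default_rng(1)
sentences = {"how are you": rng.normal(size=(40, 6)),
             "i need help": rng.normal(size=(55, 6))}

# A faster, noisier production of "i need help" (half the frames).
test = sentences["i need help"][::2] + 0.05 * rng.normal(size=(28, 6))
print("recognized:", recognize(test, sentences))  # -> i need help

DTW's alignment step is what lets a single stored trajectory match productions of the same sentence at different speaking rates; the recognized sentence would then drive the text-to-speech synthesizer mentioned in the abstract.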