Full-Text Articles in Speech and Hearing Science

Word Recognition From Continuous Articulatory Movement Time-Series Data Using Symbolic Representations, Jun Wang, Arvind Balasubramanian, Luis Mojica De La Vega, Jordan R. Green, Ashok Samal, Balakrishnan Prabhakaran Aug 2013

CSE Conference and Workshop Papers

Although still in the experimental stage, articulation-based silent speech interfaces may have significant potential for facilitating oral communication in persons with voice and speech problems. An articulation-based silent speech interface converts articulatory movement information into audible words. The complexity of the speech production mechanism (e.g., co-articulation) makes the conversion a formidable problem. In this paper, we report a novel, real-time algorithm for recognizing words from continuous articulatory movements. This approach differs from prior work in that (1) it focuses on the word level rather than the phoneme level; (2) online segmentation and recognition are conducted at the same time; and (3) a symbolic representation (SAX) is …
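The symbolic representation named in the abstract, SAX (Symbolic Aggregate approXimation), discretizes a continuous signal into a short string. A minimal sketch of the standard SAX procedure follows; it is illustrative only, and the segment count and 4-letter alphabet are arbitrary choices here, not the paper's settings:

```python
import numpy as np

def sax(series, n_segments=8, alphabet="abcd"):
    """Symbolic Aggregate approXimation (SAX): z-normalize a time series,
    reduce it with Piecewise Aggregate Approximation (PAA), then map each
    segment mean to a letter using breakpoints that cut the standard
    normal distribution into equal-probability regions."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)  # z-normalize
    # PAA: mean of each of n_segments (near-)equal-length chunks
    paa = np.array([chunk.mean() for chunk in np.array_split(x, n_segments)])
    # Equal-probability breakpoints under N(0,1), hardcoded for small alphabets
    breakpoints = {3: [-0.43, 0.43], 4: [-0.6745, 0.0, 0.6745]}[len(alphabet)]
    return "".join(alphabet[i] for i in np.searchsorted(breakpoints, paa))
```

For example, one period of a sine wave yields a string that rises through the alphabet and falls back, which is the kind of compact token a word-level recognizer could match against.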


Whole-Word Recognition From Articulatory Movements For Silent Speech Interfaces, Jun Wang, Ashok Samal, Jordan R. Green, Frank Rudzicz Sep 2012

Department of Special Education and Communication Disorders: Faculty Publications

Articulation-based silent speech interfaces convert silently produced speech movements into audible words. These systems are still in their experimental stages, but have significant potential for facilitating oral communication in persons with laryngectomy or speech impairments. In this paper, we report the results of a novel, real-time algorithm that recognizes whole words based on articulatory movements. This approach differs from prior work, which has focused primarily on phoneme-level recognition based on articulatory features. On average, our algorithm missed 1.93 words in a sequence of twenty-five words, with an average latency of 0.79 seconds for each word prediction, using a data set of …
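One classic way to sketch whole-word recognition from movement trajectories — not necessarily the authors' method, which the truncated abstract does not detail — is nearest-template classification under dynamic time warping (DTW), which tolerates differences in speaking rate:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two multivariate trajectories,
    each of shape [time, channels] (e.g., x/y tongue and lip sensor tracks)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(trajectory, templates):
    """Return the word whose stored template trajectory is nearest under DTW."""
    return min(templates, key=lambda w: dtw_distance(trajectory, templates[w]))
```

A query trajectory is compared against one template per word; the hypothetical `templates` dict maps each word to a reference movement recording.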


Sentence Recognition From Articulatory Movements For Silent Speech Interfaces, Jun Wang, Ashok Samal, Jordan R. Green, Frank Rudzicz Mar 2012

Department of Special Education and Communication Disorders: Faculty Publications

Recent research has demonstrated the potential of using an articulation-based silent speech interface for command-and-control systems. Such an interface converts articulation to words that can then drive a text-to-speech synthesizer. In this paper, we propose a novel near-real-time algorithm to recognize whole sentences from continuous tongue and lip movements. Our goal is to assist persons who are aphonic or have a severe motor speech impairment to produce functional speech using their tongue and lips. Our algorithm was tested using a functional sentence data set collected from ten speakers (3012 utterances). The average accuracy was 94.89% with an average latency of …


Augmented Control Of A Hands-Free Electrolarynx, Brian Madden, James Condron, Eugene Coyle Jan 2011

Conference Papers

During voiced speech, the larynx acts as the sound source, providing a quasi-periodic excitation of the vocal tract. Following a total laryngectomy, some people speak using an electrolarynx, which employs an electromechanical actuator to perform the excitatory function of the absent larynx. Drawbacks of conventional electrolarynx designs include the monotonic sound emitted, the need for a free hand to operate the device, and the difficulty experienced by many laryngectomees in adapting to its use. One improvement to the electrolarynx, which clinicians and users frequently suggest, is the provision of a convenient hands-free control facility. This would allow more natural use of …
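The excitation role described above can be sketched numerically. The following is purely illustrative, not a model of any particular device: an impulse train whose instantaneous fundamental frequency follows a supplied contour. A constant contour gives the monotone buzz of a conventional electrolarynx; a time-varying contour shows the kind of pitch modulation a hands-free controller could add.

```python
import numpy as np

def excitation(f0_contour_hz, fs=16000):
    """Impulse-train excitation whose instantaneous fundamental frequency
    follows f0_contour_hz (one value per sample, in Hz). Emits a unit
    impulse each time the accumulated phase crosses an integer cycle."""
    phase = np.cumsum(np.asarray(f0_contour_hz, dtype=float) / fs)  # cycles
    prev = np.concatenate(([0.0], phase[:-1]))
    return (np.floor(phase) > np.floor(prev)).astype(float)

# Monotone buzz: constant 125 Hz for 1 s -> impulses every fs/125 samples
buzz = excitation(np.full(16000, 125.0))
```

Feeding such a train through a vocal-tract filter model would yield the buzzy speech quality the abstract describes; replacing the constant contour with a controller-driven one is the essence of the augmented-control idea.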


Intelligibility Of Electrolarynx Speech Using A Novel Actuator, Brian Madden, Mark Nolan, Ted Burke, James Condron, Eugene Coyle Jun 2010

Conference Papers

During voiced speech, the larynx provides quasi-periodic acoustic excitation of the vocal tract. Following a laryngectomy, some people speak using an electrolarynx which replaces the excitatory function of the absent larynx. Drawbacks of conventional electrolarynx designs include the buzzing monotonic sound emitted, the need for a free hand to operate the device, and difficulty experienced by many laryngectomees in adapting to its use. Despite these shortcomings, it remains the preferred method of speech rehabilitation for a substantial minority of laryngectomees. In most electrolarynxes, mechanical vibrations are produced by a linear electromechanical actuator, the armature of which percusses against a metal …