Open Access. Powered by Scholars. Published by Universities.®

Medicine and Health Sciences Commons

Articles 1 - 2 of 2

Full-Text Articles in Medicine and Health Sciences

Individual Articulator's Contribution To Phoneme Production, Jun Wang, Jordan R. Green, Ashok Samal May 2013

CSE Conference and Workshop Papers

Speech sounds are the result of coordinated movements of individual articulators. Understanding each articulator’s role in speech is fundamental not only for understanding how speech is produced, but also for optimizing speech assessments and treatments. In this paper, we studied the individual contributions to phoneme classification of six articulators: the tongue tip, tongue blade, tongue body front, tongue body back, upper lip, and lower lip. A total of 3,838 vowel and consonant production samples were collected from eleven native English speakers. The results of speech movement classification using a support vector machine indicated that the tongue encoded significantly more information than …
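To make the classification setup concrete: the paper classifies phonemes from the positions of six articulators using a support vector machine. The sketch below is a simplified, hypothetical stand-in that uses a nearest-centroid classifier instead of an SVM, on entirely synthetic articulator-position data; the articulator names match the abstract, but the feature layout, phoneme shapes, and noise levels are invented for illustration.

```python
import math
import random

# Hypothetical sketch: classifying phonemes from articulator positions.
# The paper used a support vector machine; a simpler nearest-centroid
# classifier stands in here, just to show the data-to-label pipeline.
# All data below are synthetic.

# Six sensors, per the abstract: tongue tip, tongue blade, tongue body
# front, tongue body back, upper lip, lower lip.
ARTICULATORS = ["TT", "TB", "TBF", "TBB", "UL", "LL"]

def make_sample(centers, rng, noise=0.05):
    """One sample: a noisy (x, y) position per articulator, flattened to 12 dims."""
    return [c + rng.gauss(0, noise) for c in centers]

def centroid(samples):
    """Per-dimension mean of a list of equal-length feature vectors."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def classify(sample, centroids):
    """Assign the phoneme whose training centroid is nearest (Euclidean)."""
    return min(centroids, key=lambda p: math.dist(sample, centroids[p]))

rng = random.Random(0)
# Two invented phoneme "shapes": 12 numbers each (6 articulators x 2 coords).
shapes = {"/i/": [0.8, 0.9] * 6, "/a/": [0.2, 0.1] * 6}
train = {p: [make_sample(c, rng) for _ in range(20)] for p, c in shapes.items()}
centroids = {p: centroid(s) for p, s in train.items()}

test_sample = make_sample(shapes["/i/"], rng)
print(classify(test_sample, centroids))  # recovers "/i/" for this synthetic data
```

Restricting `shapes` to a subset of the twelve dimensions (e.g., only the lip coordinates) and re-measuring accuracy is the kind of ablation the paper's per-articulator contribution analysis performs, albeit with a real SVM and real kinematic data.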


Vowel Recognition From Continuous Articulatory Movements For Speaker-Dependent Applications, Jun Wang, Jordan R. Green, Ashok Samal, Tom D. Carrell Jan 2010

Department of Special Education and Communication Disorders: Faculty Publications

A novel approach was developed to recognize vowels from continuous tongue and lip movements. Vowels were classified based on movement patterns (rather than on derived articulatory features, e.g., lip opening) using a machine learning approach. Recognition accuracy on a single-speaker dataset was 94.02% with a very short latency. Recognition accuracy was better for high vowels than for low vowels. This finding parallels previous empirical findings on tongue movements during vowels. The recognition algorithm was then used to drive an articulation-to-acoustics synthesizer. The synthesizer recognizes vowels from a continuous input stream of tongue and lip movements and plays the corresponding sound samples …
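The recognize-then-play loop described above can be sketched as follows. This is a hypothetical simplification: the vowel templates, feature layout (tongue height, tongue advancement, lip opening), and windowing scheme are all invented, and the nearest-template match stands in for the paper's actual machine-learning classifier.

```python
import math

# Invented vowel templates: (tongue_x, tongue_y, lip_opening) per vowel.
TEMPLATES = {
    "/i/": (0.7, 0.9, 0.2),  # high front vowel: high tongue, small lip opening
    "/a/": (0.3, 0.1, 0.8),  # low vowel: low tongue, large lip opening
}

def recognize(window):
    """Average a window of movement frames, then pick the nearest vowel template."""
    mean = tuple(sum(f[i] for f in window) / len(window) for i in range(3))
    return min(TEMPLATES, key=lambda v: math.dist(mean, TEMPLATES[v]))

def synthesize(stream, window_size=5):
    """Slide a window over the continuous movement stream, yielding one vowel
    label per window; a real synthesizer would play the matching sound sample."""
    for start in range(0, len(stream) - window_size + 1, window_size):
        yield recognize(stream[start:start + window_size])

# A short synthetic stream: five frames near /i/, then five near /a/.
stream = [(0.7, 0.88, 0.22)] * 5 + [(0.32, 0.12, 0.78)] * 5
print(list(synthesize(stream)))  # -> ['/i/', '/a/']
```

Classifying raw movement windows, rather than hand-derived features such as lip opening, mirrors the design choice highlighted in the abstract; here the "raw" frames are just three invented coordinates.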