Department of Special Education and Communication Disorders: Faculty Publications

Articulation

Articles 1 - 5 of 5

Full-Text Articles in Education

Vowel Recognition From Continuous Articulatory Movements For Speaker-Dependent Applications, Jun Wang, Jordan R. Green, Ashok Samal, Tom D. Carrell Aug 2010

Department of Special Education and Communication Disorders: Faculty Publications

A novel approach was developed to recognize vowels from continuous tongue and lip movements. Vowels were classified based on movement patterns (rather than on derived articulatory features, e.g., lip opening) using a machine learning approach. Recognition accuracy on a single-speaker dataset was 94.02% with a very short latency. Recognition accuracy was better for high vowels than for low vowels. This finding parallels previous empirical findings on tongue movements during vowels. The recognition algorithm was then used to drive an articulation-to-acoustics synthesizer. The synthesizer recognizes vowels from a continuous input stream of tongue and lip movements and plays the corresponding sound samples …


Vowel Recognition From Continuous Articulatory Movements For Speaker-Dependent Applications, Jun Wang, Jordan R. Green, Ashok Samal, Tom D. Carrell Jan 2010

Department of Special Education and Communication Disorders: Faculty Publications

A novel approach was developed to recognize vowels from continuous tongue and lip movements. Vowels were classified based on movement patterns (rather than on derived articulatory features, e.g., lip opening) using a machine learning approach. Recognition accuracy on a single-speaker dataset was 94.02% with a very short latency. Recognition accuracy was better for high vowels than for low vowels. This finding parallels previous empirical findings on tongue movements during vowels. The recognition algorithm was then used to drive an articulation-to-acoustics synthesizer. The synthesizer recognizes vowels from a continuous input stream of tongue and lip movements and plays the corresponding sound samples …
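
As an illustrative aside, a minimal sketch of the pattern-based classification described in the abstract above might look as follows. The choice of classifier (a support vector machine), the windowing scheme, and the synthetic data are assumptions for illustration; the abstract does not specify the machine learning method used.

```python
# Illustrative sketch only: classify vowels from movement patterns rather than
# derived articulatory features. The SVM classifier, window size, and synthetic
# data are assumptions; they are not taken from the paper.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 movement windows, each holding x/y positions of
# four sensors (tongue tip, tongue back, upper lip, lower lip) over 20 frames.
n_windows, n_sensors, n_frames = 200, 4, 20
X = rng.normal(size=(n_windows, n_sensors * 2 * n_frames))  # flattened trajectories
y = rng.integers(0, 8, size=n_windows)                      # 8 hypothetical vowel classes

# Train on the raw movement patterns; with real articulatory data this is the
# step that would yield the speaker-dependent accuracy reported in the abstract
# (on random synthetic data the score stays near chance).
clf = SVC(kernel="rbf", gamma="scale")
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f}")
```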


Kinematic Analysis Of Articulatory Coupling In Acquired Apraxia Of Speech Post-Stroke, Carly J. Bartle-Meyer, Justine V. Goozée, Bruce E. Murdoch, Jordan R. Green Feb 2009

Department of Special Education and Communication Disorders: Faculty Publications

Primary objective: Electromagnetic articulography was employed to investigate the strength of articulatory coupling and hence the degree of functional movement independence between individual articulators in apraxia of speech (AOS). Methods and procedures: Tongue-tip, tongue-back and jaw movement was recorded from five speakers with AOS and a concomitant aphasia (M = 53.6 years; SD = 12.60) during /ta, sa, la, ka/ syllable repetitions, spoken at typical and fast rates of speech. Covariance values were calculated for each articulatory pair to gauge the strength of articulatory coupling. The results obtained for each of the participants with AOS were individually compared to those obtained …
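
A minimal sketch of the coupling analysis described above, assuming normalized covariance (Pearson correlation) between displacement traces as the coupling index; the sensor traces below are synthetic stand-ins, not the study's electromagnetic articulography data.

```python
# Illustrative sketch only: gauge articulatory coupling strength from paired
# displacement traces. Normalized covariance (Pearson r) is assumed here as the
# coupling index; the traces are synthetic, not electromagnetic articulography data.
import numpy as np

rng = np.random.default_rng(1)
n_samples = 500  # e.g., vertical displacement sampled during /ta/ repetitions

# Synthetic traces: the tongue tip is strongly coupled to the jaw,
# while the tongue back moves more independently.
jaw = np.sin(np.linspace(0, 10 * np.pi, n_samples)) + 0.1 * rng.normal(size=n_samples)
tongue_tip = 0.8 * jaw + 0.3 * rng.normal(size=n_samples)
tongue_back = 0.2 * jaw + 0.6 * rng.normal(size=n_samples)

traces = {"tongue-tip": tongue_tip, "tongue-back": tongue_back, "jaw": jaw}
pairs = [("tongue-tip", "jaw"), ("tongue-back", "jaw"), ("tongue-tip", "tongue-back")]

for a, b in pairs:
    # Higher |r| implies stronger coupling and less functional movement independence.
    r = np.corrcoef(traces[a], traces[b])[0, 1]
    print(f"{a} vs {b}: r = {r:+.2f}")
```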


Transitioning From Analog To Digital Audio Recording In Childhood Speech Sound Disorders, Lawrence D. Shriberg, Jane L. McSweeny, Bruce E. Anderson, Thomas F. Campbell, Michael R. Chial, Jordan R. Green, Katherina K. Hauner, Christopher A. Moore, Heather L. Rusiewicz, David L. Wilson Jun 2005

Department of Special Education and Communication Disorders: Faculty Publications

Few empirical findings or technical guidelines are available on the current transition from analog to digital audio recording in childhood speech sound disorders. Of particular concern in the present context was whether a transition from analog- to digital-based transcription and coding of prosody and voice features might require re-standardizing a reference database for research in childhood speech sound disorders. Two research transcribers with different levels of experience glossed, transcribed, and prosody-voice coded conversational speech samples from eight children with mild to severe speech disorders of unknown origin. The samples were recorded, stored, and played back using representative analog and digital …


More On The Role Of The Mandible In Speech Production: Clinical Correlates Of Green, Moore, And Reilly's (2002) Findings And Methodological Issues In Studies Of Early Articulatory Development: A Response To Dworkin, Meleca, And Stachler (2003), James Paul Dworkin, Robert J. Meleca, Robert J. Stachler, Jordan R. Green, Christopher A. Moore, Kevin J. Reilly Aug 2003

Department of Special Education and Communication Disorders: Faculty Publications

Dworkin et al. comment: We would like to comment on Green, Moore, and Reilly’s article, which appeared in the February 2002 issue of this journal [Journal of Speech, Language, and Hearing Research]. In that investigation, these clinical researchers examined upper lip, lower lip, and mandibular movements during repetitive bisyllable word productions by infants, toddlers, young children, and adults with normal developmental and neurologic histories. Kinematic traces from these articulators were analyzed using a computer-based movement tracking system. Results revealed that these oral structures may have sequential neuromotor developmental schedules, characterized by more mature movement patterns for speech emerging …