Open Access. Powered by Scholars. Published by Universities.®

Digital Commons Network

Articles 1 - 2 of 2

Full-Text Articles in Entire DC Network

Jaw Rotation In Dysarthria Measured With A Single Electromagnetic Articulography Sensor, Jeffrey Berry, Andrew Kolb, James Schroeder, Michael T. Johnson Jun 2017

Speech Pathology and Audiology Faculty Research and Publications

Purpose This study evaluated a novel method for characterizing jaw rotation using orientation data from a single electromagnetic articulography sensor. This method was optimized for clinical application, and a preliminary examination of clinical feasibility and value was undertaken.

Method The computational adequacy of the single-sensor orientation method was evaluated through comparisons with jaw-rotation histories calculated from dual-sensor positional data for 16 typical talkers. The clinical feasibility and potential value of single-sensor jaw rotation were assessed through comparisons of 7 talkers with dysarthria and 19 typical talkers in connected speech.

Results The single-sensor orientation method allowed faster and safer participant preparation, …
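The comparison described in the abstract rests on two ways of recovering a jaw-rotation angle. A minimal sketch of the two approaches is shown below; the function names, the 2-D midsagittal geometry, and the rest-pitch baseline are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def jaw_angle_from_positions(anterior, posterior, ref_axis=np.array([1.0, 0.0])):
    """Dual-sensor approach (hypothetical geometry): jaw rotation in degrees,
    taken as the angle of the line through two jaw-mounted sensors in the
    midsagittal plane, relative to a fixed reference axis."""
    v = anterior - posterior
    return np.degrees(np.arctan2(v[1], v[0]) - np.arctan2(ref_axis[1], ref_axis[0]))

def jaw_angle_from_orientation(pitch_deg, rest_pitch_deg):
    """Single-sensor approach (hypothetical baseline): jaw rotation read
    directly from one sensor's orientation (pitch), relative to its value
    at a rest posture."""
    return pitch_deg - rest_pitch_deg
```

Computational adequacy could then be checked, as the abstract suggests, by comparing the two angle histories frame by frame (e.g., their root-mean-square difference over a recording).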


Speaker-Specific Adaptation Of Maeda Synthesis Parameters For Auditory Feedback, Joseph Vonderhaar Apr 2017

Master's Theses (2009 -)

The Real-time Articulatory Speech Synthesizer (RASS) is a research tool in the Marquette Speech and Swallowing lab that simultaneously collects acoustic and articulatory data from human participants. The system is used to study acoustic-to-articulatory inversion, articulatory-to-acoustic synthesis mapping, and the effects of real-time acoustic feedback. Electromagnetic Articulography (EMA) is utilized to collect position data via sensors placed in a subject's mouth. These kinematic data are then converted into a set of synthesis parameters that controls an articulatory speech synthesizer, which in turn generates an acoustic waveform matching the associated kinematics. Independently of RASS, the synthesized acoustic waveform can be further …
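The kinematics-to-parameters stage of the pipeline described above can be sketched as follows. This is a hypothetical illustration, not the thesis's method: the sensor layout, the linear mapping `W`/`b`, and the parameter bounds are all stand-in assumptions (the Maeda model does use seven articulatory parameters, but how RASS fits its speaker-specific mapping is not stated here).

```python
import numpy as np

N_SENSORS = 4   # assumed layout, e.g. tongue tip, tongue body, lower lip, jaw
N_PARAMS = 7    # the Maeda model is controlled by 7 articulatory parameters

# Placeholder speaker-specific mapping; in practice this would be fit
# per speaker from paired EMA and parameter data.
rng = np.random.default_rng(0)
W = rng.standard_normal((N_PARAMS, 2 * N_SENSORS)) * 0.1
b = np.zeros(N_PARAMS)

def kinematics_to_params(sensor_xy):
    """Map one frame of EMA positions (N_SENSORS x 2 midsagittal coords)
    to a bounded vector of Maeda-style synthesis parameters."""
    x = np.asarray(sensor_xy, dtype=float).ravel()
    return np.clip(W @ x + b, -3.0, 3.0)  # assumed parameter range
```

Each frame of parameters would then drive the articulatory synthesizer to produce the corresponding stretch of acoustic waveform.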