Education Commons

Series: Department of Special Education and Communication Disorders: Faculty Publications
Institution: University of Nebraska - Lincoln
Year: 2010
Disciplines: Medicine and Health Sciences

Articles 1 - 2 of 2

Full-Text Articles in Education

Age Effect On The Gaze Stabilization Test, Julie A. Honaker, Jan 2010

Impairments of the vestibulo-ocular reflex (VOR) lead to a decline in visual acuity during head movements. Dynamic visual acuity (DVA) testing is a sensitive assessment tool for detecting VOR impairments. DVA evaluates the accuracy of visual acuity during fixed-velocity head movements. In contrast, the Gaze Stabilization Test (GST) is a new functional evaluation of the VOR that identifies the maximum head velocity (in degrees per second) at which a person can maintain stable vision of a target (i.e., optotype). The objective of this study was to evaluate the effect of age on the GST in participants without vestibular disease. The …
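
The excerpt does not describe how the GST converges on a maximum head velocity. As a rough illustration of how such a threshold could be estimated, the Python sketch below runs a simple up-down staircase against a simulated subject; the function names, step sizes, and simulated response model are assumptions made for demonstration, not the protocol used in the study.

import random

def simulated_trial(head_velocity, true_threshold=120.0, guess_rate=0.1):
    """Hypothetical subject: reliable optotype identification below a true
    threshold velocity (deg/s), near-chance performance above it."""
    if head_velocity <= true_threshold:
        return random.random() < 0.95
    return random.random() < guess_rate

def estimate_max_head_velocity(start_velocity=60.0, step=10.0, reversals_needed=8):
    """Up-down staircase: increase the target velocity after a correct trial,
    decrease it after an error, and average the velocities at reversal points."""
    velocity = start_velocity
    last_direction = None
    reversal_velocities = []
    while len(reversal_velocities) < reversals_needed:
        direction = "up" if simulated_trial(velocity) else "down"
        if last_direction and direction != last_direction:
            reversal_velocities.append(velocity)
        last_direction = direction
        velocity = max(10.0, velocity + (step if direction == "up" else -step))
    return sum(reversal_velocities) / len(reversal_velocities)

if __name__ == "__main__":
    random.seed(0)
    print(f"Estimated maximum head velocity: {estimate_max_head_velocity():.1f} deg/s")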


Vowel Recognition From Continuous Articulatory Movements For Speaker-Dependent Applications, Jun Wang, Jordan R. Green, Ashok Samal, Tom D. Carrell, Jan 2010

A novel approach was developed to recognize vowels from continuous tongue and lip movements. Vowels were classified based on movement patterns (rather than on derived articulatory features, e.g., lip opening) using a machine learning approach. Recognition accuracy on a single-speaker dataset was 94.02% with very short latency. Recognition accuracy was better for high vowels than for low vowels. This finding parallels previous empirical findings on tongue movements during vowels. The recognition algorithm was then used to drive an articulation-to-acoustics synthesizer. The synthesizer recognizes vowels from a continuous input stream of tongue and lip movements and plays the corresponding sound samples …
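
The excerpt states that vowels were classified from raw movement patterns rather than derived articulatory features, but does not name the learning algorithm. The minimal Python sketch below shows that general idea on synthetic trajectories; the vowel label set, feature layout, and the choice of a support vector classifier are illustrative assumptions, not the authors' method or their 94.02% result.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

VOWELS = ["a", "i", "u"]   # hypothetical label set
N_SAMPLES_PER_VOWEL = 60
N_FRAMES = 20              # time samples per movement
N_CHANNELS = 6             # e.g., x/y positions of tongue tip, tongue body, lips

rng = np.random.default_rng(0)

def synthetic_trajectory(vowel_idx):
    """Fake articulatory trajectory: each vowel gets a distinct mean movement
    pattern plus noise, standing in for real tongue/lip tracking data."""
    base = np.sin(np.linspace(0, np.pi, N_FRAMES))[:, None] * (vowel_idx + 1)
    return (base + 0.3 * rng.standard_normal((N_FRAMES, N_CHANNELS))).ravel()

X = np.array([synthetic_trajectory(v)
              for v in range(len(VOWELS))
              for _ in range(N_SAMPLES_PER_VOWEL)])
y = np.repeat(VOWELS, N_SAMPLES_PER_VOWEL)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Classify directly on the flattened movement pattern, i.e. without
# hand-derived articulatory features such as lip opening.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2%}")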