Open Access. Powered by Scholars. Published by Universities.®

Medicine and Health Sciences Commons

Articles 1 - 6 of 6

Full-Text Articles in Medicine and Health Sciences

Across-Speaker Articulatory Normalization For Speaker-Independent Silent Speech Recognition, Jun Wang, Ashok Samal, Jordan Green Sep 2014

CSE Conference and Workshop Papers

Silent speech interfaces (SSIs), which recognize speech from articulatory information (i.e., without using audio information), have the potential to enable persons with laryngectomy or a neurological disease to produce synthesized speech with a natural-sounding voice using their tongue and lips. Current approaches to SSIs have largely relied on speaker-dependent recognition models to minimize the negative effects of talker variation on recognition accuracy. Speaker-independent approaches are needed to reduce the large amount of training data required from each user; often, only limited articulatory samples are available from persons with moderate to severe speech impairments, due to the logistical difficulty of …
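
The abstract above is truncated before describing the normalization method, so the sketch below only illustrates one common baseline for reducing cross-speaker variation in articulatory data: per-speaker z-scoring of tongue and lip sensor trajectories before pooling them for a speaker-independent recognizer. The function name and data layout are assumptions for this example, not details taken from the paper.

```python
# Illustrative sketch only: per-speaker z-score normalization of articulatory
# trajectories, a common baseline for reducing cross-speaker variation.
# The function name and data layout are assumptions, not taken from the paper.
import numpy as np

def normalize_speaker(trajectories: np.ndarray) -> np.ndarray:
    """Z-score one speaker's data.

    trajectories: array of shape (frames, channels), e.g. x/y coordinates of
    tongue and lip sensors concatenated over a recording session.
    """
    mean = trajectories.mean(axis=0, keepdims=True)
    std = trajectories.std(axis=0, keepdims=True) + 1e-8  # guard against zero variance
    return (trajectories - mean) / std

# Pool normalized data from several speakers before training a
# speaker-independent recognizer.
speakers = {name: np.random.randn(500, 8) for name in ["s1", "s2", "s3"]}
pooled = np.vstack([normalize_speaker(x) for x in speakers.values()])
print(pooled.shape)  # (1500, 8)
```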


Preliminary Test Of A Real-Time, Interactive Silent Speech Interface Based On Electromagnetic Articulograph, Jun Wang, Ashok Samal, Jordan R. Green Jun 2014

CSE Conference and Workshop Papers

A silent speech interface (SSI) maps articulatory movement data to speech output. Although still at an experimental stage, silent speech interfaces hold significant potential for facilitating oral communication in persons after laryngectomy or with other severe voice impairments. Despite recent efforts on silent speech recognition algorithm development using offline data analysis, online tests of SSIs have rarely been conducted. In this paper, we present a preliminary, online test of a real-time, interactive SSI based on electromagnetic motion tracking. The SSI played back synthesized speech sounds in response to the user’s tongue and lip movements. Three English talkers participated in this …
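
As a rough illustration of the recognize-and-respond loop such an interface implies (not the authors' implementation), the sketch below buffers a window of tongue and lip sensor samples, classifies it with a placeholder recognizer, and plays back a pre-synthesized phrase. All names here (classify_window, play_synthesized) are hypothetical placeholders.

```python
# Minimal sketch of an online recognize-and-play-back loop, assuming a
# pre-trained classifier that maps a window of tongue/lip sensor samples to a
# phrase label and a playback function for pre-synthesized audio.
import numpy as np

PHRASES = ["how are you", "thank you", "i need help"]

def classify_window(window: np.ndarray) -> str:
    """Placeholder recognizer: replace with a trained model's predict()."""
    return PHRASES[int(abs(window.sum())) % len(PHRASES)]

def play_synthesized(phrase: str) -> None:
    """Placeholder for audio playback of a pre-synthesized phrase."""
    print(f"[playing synthesized speech] {phrase}")

def online_loop(stream, window_size: int = 100) -> None:
    """Consume articulatory samples and respond as each window completes."""
    buffer = []
    for sample in stream:              # one frame of sensor coordinates
        buffer.append(sample)
        if len(buffer) == window_size:
            play_synthesized(classify_window(np.asarray(buffer)))
            buffer.clear()

# Simulated sensor stream for demonstration.
online_loop(iter(np.random.randn(300, 8)), window_size=100)
```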


Genetics Of Peripheral Vestibular Dysfunction: Lessons From Mutant Mouse Strains, Sherri M. Jones, Timothy A. Jones Mar 2014

Department of Special Education and Communication Disorders: Faculty Publications

Background

A considerable amount of research has been published about genetic hearing impairment. Fifty to sixty percent of hearing loss is thought to have a genetic cause. Genes may also play a significant role in acquired hearing loss due to aging, noise exposure, or ototoxic medications. Between 1995 and 2012, over 100 causative genes were identified for syndromic and nonsyndromic forms of hereditary hearing loss (see the Hereditary Hearing Loss Homepage, http://hereditaryhearingloss.org). Mouse models have been extremely valuable in facilitating the discovery of hearing loss genes and in understanding inner ear pathology due to genetic mutations or elucidating fundamental mechanisms …


The Impact Of Interface Design During An Initial High-Technology Aac Experience: A Collective Case Study Of People With Aphasia, Aimee R. Dietz, Kristy S.E. Weissling, Julie Griffith, Miechelle L. Mckelvey, Devan Macke Jan 2014

Department of Special Education and Communication Disorders: Faculty Publications

The purpose of this collective case study was to describe the communication behaviors of five people with chronic aphasia when they retold personal narratives to an unfamiliar communication partner using four variants of a visual scene display (VSD) interface. The results revealed that spoken language comprised roughly 70% of expressive modality units; variable patterns of use for other modalities emerged. Although results were inconsistent across participants, several people with aphasia experienced no trouble sources during the retells using VSDs with personally relevant photographs and text boxes. Overall, participants perceived the personally relevant photographs and the text as helpful during the retells. These …


Supporting Narrative Retells For People With Aphasia Using Augmentative And Alternative Communication: Photographs Or Line Drawings? Text Or No Text?, Julie Griffith, Aimee R. Dietz, Kristy S.E. Weissling Jan 2014

Department of Special Education and Communication Disorders: Faculty Publications

Purpose: The purpose of this study was to examine how the interface design of an augmentative and alternative communication (AAC) device influences the communication behaviors of people with aphasia during a narrative retell task.

Method: A case-series design was used. Four narratives were created on an AAC device with combinations of personally relevant (PR) photographs, line drawings (LDs), and text for each participant. The narrative retells were analyzed to describe the expressive modality units (EMUs) used, trouble sources experienced, and whether trouble sources were repaired. The researchers also explored the participants’ perceived helpfulness of the interface features.

Results: The participants …


Treating Myofunctional Disorders: A Multiple-Baseline Study Of A New Treatment Using Electropalatography, Alana Mantie-Kozlowski, Kevin M. Pitt Jan 2014

Department of Special Education and Communication Disorders: Faculty Publications

Purpose: This study assessed the benefit of using electropalatography (EPG) in treatment aimed at habilitating individuals with nonspeech orofacial myofunctional disorders (NSOMD).

Method: The study used a multiple-baseline design across 3 female participants who were referred for an evaluation and possible treatment of their NSOMD. Treatment sessions were 30 min long and were provided twice weekly. Participant 1 received 8 treatments, Participant 2 received 6 treatments, and Participant 3 received 4 treatments. The patterns of sensor activation produced when participants’ tongues made contact with the electropalate during saliva swallows were compared with those of age-matched peers. Individualized goals were developed on …
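
For illustration only, the sketch below shows one way binary electropalate contact patterns might be compared against age-matched reference patterns; the overlap metric and the 62-electrode layout are assumptions for this example, not the measure reported in the study.

```python
# Illustrative sketch: comparing a participant's binary electropalate contact
# pattern with an age-matched reference pattern using a simple overlap score.
# The metric and array layout are assumptions, not taken from the study.
import numpy as np

def contact_overlap(participant: np.ndarray, reference: np.ndarray) -> float:
    """Proportion of electrode on/off states that match the reference.

    Both inputs are boolean arrays of shape (frames, electrodes), e.g. a
    62-electrode contact pattern sampled across a saliva swallow.
    """
    return float(np.mean(participant == reference))

# Example with simulated 62-electrode patterns over 200 frames.
rng = np.random.default_rng(0)
participant_pattern = rng.random((200, 62)) > 0.5
reference_pattern = rng.random((200, 62)) > 0.5
print(f"contact agreement: {contact_overlap(participant_pattern, reference_pattern):.2f}")
```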