Open Access. Powered by Scholars. Published by Universities.®

Communication Sciences and Disorders Commons

Articles 1 - 24 of 24

Full-Text Articles in Communication Sciences and Disorders

One Font Doesn’t Fit All: The Influence Of Digital Text Personalization On Comprehension In Child And Adolescent Readers, Shannon M. Sheppard, Susanne L. Nobles, Anton Palma, Sophie Kajfez, Marjorie Jordan, Kathy Crowley, Sofie Beier Aug 2023

Communication Sciences and Disorders Faculty Articles and Research

Reading comprehension is an essential skill. It is unclear whether and to what degree typography and font personalization may impact reading comprehension in younger readers. With advancements in technology, it is now feasible to personalize digital reading formats in general technology tools, but this feature is not yet available for many educational tools. The current study aimed to investigate the effect of character width and inter-letter spacing on reading speed and comprehension. We enrolled 94 children (kindergarten–8th grade) and compared performance with six font variations on a word-level semantic decision task (Experiment 1) and a passage-level comprehension task (Experiment 2). …


ChatGPT As Metamorphosis Designer For The Future Of Artificial Intelligence (AI): A Conceptual Investigation, Amarjit Kumar Singh (Library Assistant), Dr. Pankaj Mathur (Deputy Librarian) Mar 2023

Library Philosophy and Practice (e-journal)

Abstract

Purpose: The purpose of this research paper is to explore ChatGPT’s potential as an innovative designer tool for the future development of artificial intelligence. Specifically, this conceptual investigation aims to analyze ChatGPT’s capabilities as a tool for designing and developing near-human intelligent systems for future use in the field of Artificial Intelligence (AI). The paper also analyzes the strengths and weaknesses of ChatGPT as a tool and identifies possible areas for improvement in its development and implementation. This investigation focused on the various features and functions of ChatGPT that …


Comparison Of Machine Learning Methods For Classification Of Alexithymia In Individuals With And Without Autism From Eye-Tracking Data, Furkan Iigin, Megan A. Witherow, Khan M. Iftekharuddin Jan 2023

Electrical & Computer Engineering Faculty Publications

Alexithymia describes a psychological state where individuals struggle with feeling and expressing their emotions. Individuals with alexithymia may also have a more difficult time understanding the emotions of others and may express atypical attention to the eyes when recognizing emotions. This is known to affect individuals with Autism Spectrum Disorder (ASD) differently than neurotypical (NT) individuals. Using a public data set of eye-tracking data from seventy individuals with and without autism who have been assessed for alexithymia, we train multiple traditional machine learning models for alexithymia classification including support vector machines, logistic regression, decision trees, random forest, and multilayer perceptron. …
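The abstract above names several standard classifiers. As an illustration only (not the authors' pipeline), the following is a minimal from-scratch version of one of them, logistic regression, trained on invented two-feature gaze summaries; the features and labels are hypothetical, not the study's data.

```python
import math

def train_logreg(X, y, lr=0.5, epochs=200):
    """Fit a binary logistic regression by batch gradient descent."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * n_feat
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            for j in range(n_feat):
                gw[j] += (p - yi) * xi[j]    # gradient of log-loss
            gb += p - yi
        for j in range(n_feat):
            w[j] -= lr * gw[j] / len(X)
        b -= lr * gb / len(X)
    return w, b

def predict(w, b, xi):
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b >= 0.0 else 0

# Invented per-participant gaze features (proportion of fixation time on
# the eyes vs. the mouth); labels are hypothetical, not the study's data.
X = [(0.9, 0.1), (0.8, 0.2), (0.2, 0.7), (0.1, 0.9)]
y = [0, 0, 1, 1]
w, b = train_logreg(X, y)
preds = [predict(w, b, xi) for xi in X]
```

Any of the other listed models (SVM, random forest, MLP) could be dropped in at the same point; only the training step changes.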


A Study Of Factors That Influence Symbol Selection On Augmentative And Alternative Communication Devices For Individuals With Autism Spectrum Disorder, William Todd Dauterman Jan 2021

CCE Theses and Dissertations

According to the American Academy of Pediatrics (AAP), 1 in 59 children are diagnosed with Autism Spectrum Disorder (ASD) each year. Given the complexity of ASD and how it is manifested in individuals, the execution of proper interventions is difficult. One major area of concern is how individuals with ASD who have limited communication skills are taught to communicate using Augmentative and Alternative Communication devices (AAC). AACs are portable electronic devices that facilitate communication by using audibles, signs, gestures, and picture symbols. Traditionally, Speech-Language Pathologists (SLPs) are the primary facilitators of AAC devices and help establish the language individuals with …


Digital Addiction: A Conceptual Overview, Amarjit Kumar Singh, Pawan Kumar Singh Oct 2019

Library Philosophy and Practice (e-journal)

Abstract

Digital addiction refers to an impulse-control disorder that involves the obsessive use of digital devices, digital technologies, and digital platforms, i.e. the internet, video games, online platforms, mobile devices, digital gadgets, and social network platforms. It is an emerging domain of Cyberpsychology (Singh, Amarjit Kumar and Pawan Kumar Singh; 2019), which explores problematic, obsessive, and excessive usage of digital media, devices, and platforms. This article reviews the current research and establishes a conceptual overview of digital addiction. The research literature on digital addiction has proliferated. However, we tried to categorize digital addiction according …


A Virtual Reality System For Practicing Conversation Skills For Children With Autism, Natalia Stewart Rosenfield, Kathleen Lamkin, Jennifer Re, Kendra Day, Louanne E. Boyd, Erik J. Linstead Apr 2019

Engineering Faculty Articles and Research

We describe a virtual reality environment, Bob’s Fish Shop, which provides a system where users diagnosed with Autism Spectrum Disorder (ASD) can practice social interactions in a safe and controlled environment. A case study is presented which suggests such an environment can provide the opportunity for users to build the skills necessary to carry out a conversation without the fear of negative social consequences present in the physical world. Through the repetition and analysis of these virtual interactions, users can improve social and conversational understanding.


Amplification Vs The Natural Ear: A Test On The Effectiveness Of The Natural Ear On Adults Ability To Match Pitch In Song, Celeste Orozco Jan 2019

Open Access Theses & Dissertations

Background: Singing is a natural enjoyment of life; however, individuals tend to isolate themselves from this enjoyment due to an inability to match pitch accurately. A new technology, the Natural Ear, provides altered auditory feedback to the user while singing. It is hypothesized that this feedback may aid the user’s ability to match pitch.

Purpose: The purpose of this study is to compare the effects of the Natural Ear to amplification and no amplification conditions on pitch matching accuracy in song.

Study Design: This study used a complex counterbalanced within-subjects design.

Methods: 50 adults from the El Paso Metropolitan …


EEG-Based Processing And Classification Methodologies For Autism Spectrum Disorder: A Review, Gunavaran Brihadiswaran, Dilantha Haputhanthri, Sahan Gunathilaka, Dulani Meedeniya, Sampath Jayarathna Jan 2019

Computer Science Faculty Publications

Autism Spectrum Disorder is a lifelong neurodevelopmental condition which affects social interaction, communication and behaviour of an individual. The symptoms are diverse with different levels of severity. Recent studies have revealed that early intervention is highly effective for improving the condition. However, current ASD diagnostic criteria are subjective which makes early diagnosis challenging, due to the unavailability of well-defined medical tests to diagnose ASD. Over the years, several objective measures utilizing abnormalities found in EEG signals and statistical analysis have been proposed. Machine learning based approaches provide more flexibility and have produced better results in ASD classification. This paper presents …
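Many of the EEG measures surveyed in reviews like this start from band-power features. As a generic sketch (not any specific method from the review), this computes alpha- and beta-band power of a synthetic signal with a naive DFT; the signal and band limits are illustrative choices.

```python
import math

def band_power(signal, fs, lo, hi):
    """Total spectral power in [lo, hi] Hz via a naive DFT
    (O(n^2); fine for short illustrative windows)."""
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(signal))
            im = sum(-s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(signal))
            total += (re * re + im * im) / n
    return total

# One second of a synthetic 10 Hz "alpha-like" oscillation at 128 Hz.
fs = 128
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
alpha = band_power(sig, fs, 8, 13)   # captures the 10 Hz component
beta = band_power(sig, fs, 14, 30)   # near zero for a pure 10 Hz tone
```

Real pipelines would use an FFT and windowing; the point here is only the shape of a band-power feature that a downstream classifier consumes.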


Electroencephalogram (EEG) For Delineating Objective Measure Of Autism Spectrum Disorder, Sampath Jayarathna, Yasith Jayawardana, Mark Jaime, Sashi Thapaliya Jan 2019

Computer Science Faculty Publications

Autism spectrum disorder (ASD) is a developmental disorder that often impairs a child's normal development of the brain. According to the CDC, an estimated 1 in 6 children in the US suffer from developmental disorders, and 1 in 68 children in the US suffer from ASD. This condition has a negative impact on a person's ability to hear, socialize, and communicate. Subjective measures often take more time and resources, and have false positives or false negatives. There is a need for efficient objective measures that can help in diagnosing this condition as early as possible with less effort. EEG measures the …


Least-Squares Mapping From Kinematic Data To Acoustic Synthesis Parameters For Rehabilitative Acoustic Learning, Xiangyu Zhou Apr 2016

Master's Theses (2009 -)

Thousands of people suffer from dysarthria resulting from neurological injury of the motor component of the motor-speech system, and need to rely on alternative methods to communicate in daily life, such as body language or text-to-speech [1] . However, there are currently very few effective rehabilitative therapies for helping these patients improve their speech. Because of this, research is needed to develop better rehabilitative therapies. One such area of research is the use of involuntary acoustic learning. The Speech and Swallowing lab at Marquette University has an Electromagnetic Articulography (EMA) system to collect kinematic data and a software system called …
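The least-squares idea in the title can be illustrated with its simplest case: a closed-form one-dimensional fit mapping a kinematic variable to a synthesis parameter. The pairing of tongue-sensor height with a first-formant target, and all numbers below, are invented for the example, not taken from the thesis.

```python
def lstsq_line(x, y):
    """Closed-form least-squares fit of y ~ a*x + b."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx          # slope
    b = my - a * mx        # intercept
    return a, b

# Invented pairs: tongue-sensor height (mm) -> first-formant target (Hz).
heights = [2.0, 4.0, 6.0, 8.0]
f1 = [700.0, 600.0, 500.0, 400.0]  # lower tongue -> higher F1, roughly
a, b = lstsq_line(heights, f1)
```

With many sensors and many synthesis parameters the same normal-equations idea generalizes to a matrix least-squares problem, which is presumably the form used in practice.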


Across-Speaker Articulatory Normalization For Speaker-Independent Silent Speech Recognition, Jun Wang, Ashok Samal, Jordan Green Sep 2014

CSE Conference and Workshop Papers

Silent speech interfaces (SSIs), which recognize speech from articulatory information (i.e., without using audio information), have the potential to enable persons with laryngectomy or a neurological disease to produce synthesized speech with a natural-sounding voice using their tongue and lips. Current approaches to SSIs have largely relied on speaker-dependent recognition models to minimize the negative effects of talker variation on recognition accuracy. Speaker-independent approaches are needed to reduce the large amount of training data required from each user; often only limited articulatory samples are available for persons with moderate to severe speech impairments, due to the logistical difficulty of …


Preliminary Test Of A Real-Time, Interactive Silent Speech Interface Based On Electromagnetic Articulograph, Jun Wang, Ashok Samal, Jordan R. Green Jun 2014

CSE Conference and Workshop Papers

A silent speech interface (SSI) maps articulatory movement data to speech output. Although still in experimental stages, silent speech interfaces hold significant potential for facilitating oral communication in persons after laryngectomy or with other severe voice impairments. Despite recent efforts on silent speech recognition algorithm development using offline data analysis, online tests of SSIs have rarely been conducted. In this paper, we present a preliminary, online test of a real-time, interactive SSI based on electromagnetic motion tracking. The SSI played back synthesized speech sounds in response to the user’s tongue and lip movements. Three English talkers participated in this …


Articulatory Distinctiveness Of Vowels And Consonants: A Data-Driven Approach, Jun Wang, Jordan R. Green, Ashok Samal, Yana Yunusova Oct 2013

School of Computing: Faculty Publications

Purpose: To quantify the articulatory distinctiveness of 8 major English vowels and 11 English consonants based on tongue and lip movement time series data using a data-driven approach.

Method: Tongue and lip movements of 8 vowels and 11 consonants from 10 healthy talkers were collected. First, classification accuracies were obtained using 2 complementary approaches: (a) Procrustes analysis and (b) a support vector machine. Procrustes distance was then used to measure the articulatory distinctiveness among vowels and consonants. Finally, the distance (distinctiveness) matrices of different vowel pairs and consonant pairs were used to derive articulatory vowel and consonant spaces …
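Procrustes distance, the measure named in the Method, compares two shapes after removing translation, scale, and rotation. A minimal two-dimensional version is sketched below, using the closed-form optimal rotation available in 2-D; the point sets are toy shapes, not articulatory traces.

```python
import math

def procrustes_2d(A, B):
    """Procrustes distance between two equal-length 2-D point sets:
    remove translation and scale, rotate B optimally onto A (closed
    form in 2-D), and return the residual sum of squared distances."""
    def normalize(P):
        n = len(P)
        cx = sum(p[0] for p in P) / n
        cy = sum(p[1] for p in P) / n
        Q = [(x - cx, y - cy) for x, y in P]       # center at the origin
        s = math.sqrt(sum(x * x + y * y for x, y in Q))
        return [(x / s, y / s) for x, y in Q]      # scale to unit norm

    A, B = normalize(A), normalize(B)
    # Optimal rotation angle for the 2-D orthogonal Procrustes problem.
    num = sum(ay * bx - ax * by for (ax, ay), (bx, by) in zip(A, B))
    den = sum(ax * bx + ay * by for (ax, ay), (bx, by) in zip(A, B))
    th = math.atan2(num, den)
    c, s = math.cos(th), math.sin(th)
    R = [(c * bx - s * by, s * bx + c * by) for bx, by in B]
    return sum((ax - rx) ** 2 + (ay - ry) ** 2 for (ax, ay), (rx, ry) in zip(A, R))

# Toy shapes (not articulatory data): a square vs. a shifted, scaled,
# rotated copy should be ~0 apart; a square vs. a line should not.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
moved = [(5, 5), (5, 7), (3, 7), (3, 5)]
line = [(0, 0), (1, 0), (2, 0), (3, 0)]
d_same = procrustes_2d(square, moved)
d_diff = procrustes_2d(square, line)
```

A matrix of such pairwise distances is exactly the kind of distinctiveness matrix the abstract describes feeding into the derived vowel and consonant spaces.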


Word Recognition From Continuous Articulatory Movement Time-Series Data Using Symbolic Representations, Jun Wang, Arvind Balasubramanian, Luis Mojica De La Vega, Jordan R. Green, Ashok Samal, Balakrishnan Prabhakaran Aug 2013

CSE Conference and Workshop Papers

Although still in an experimental stage, articulation-based silent speech interfaces may have significant potential for facilitating oral communication in persons with voice and speech problems. An articulation-based silent speech interface converts articulatory movement information to audible words. The complexity of the speech production mechanism (e.g., co-articulation) makes the conversion a formidable problem. In this paper, we report a novel, real-time algorithm for recognizing words from continuous articulatory movements. This approach differed from prior work in that (1) it focused on the word level, rather than the phoneme level; (2) online segmentation and recognition were conducted at the same time; and (3) a symbolic representation (SAX) was …
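SAX, the symbolic representation named above, converts a z-normalized time series into a short symbol string via piecewise aggregation and Gaussian breakpoints. A compact sketch on an invented one-dimensional trace (not the paper's implementation or data):

```python
def sax_word(series, n_segments, alphabet="abcd"):
    """Symbolic Aggregate approXimation: z-normalize, average within
    segments (PAA), then map each segment mean to a letter using the
    N(0,1) breakpoints for a 4-symbol alphabet."""
    n = len(series)
    mean = sum(series) / n
    std = (sum((v - mean) ** 2 for v in series) / n) ** 0.5 or 1.0
    z = [(v - mean) / std for v in series]
    # Piecewise Aggregate Approximation: mean of each segment.
    seg = n // n_segments
    paa = [sum(z[i * seg:(i + 1) * seg]) / seg for i in range(n_segments)]
    breakpoints = (-0.6745, 0.0, 0.6745)  # quartiles of the standard normal
    return "".join(alphabet[sum(m > b for b in breakpoints)] for m in paa)

# An invented 1-D articulatory trace: low plateau, rise, high plateau.
trace = [1, 1, 1, 1, 2, 3, 4, 5, 6, 6, 6, 6]
word = sax_word(trace, n_segments=3)
```

Strings like this can then be compared with cheap symbolic distances, which is what makes SAX attractive for real-time recognition.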


Whole-Word Recognition From Articulatory Movements For Silent Speech Interfaces, Jun Wang, Ashok Samal, Jordan R. Green, Frank Rudzicz Sep 2012

Department of Special Education and Communication Disorders: Faculty Publications

Articulation-based silent speech interfaces convert silently produced speech movements into audible words. These systems are still in their experimental stages, but have significant potential for facilitating oral communication in persons with laryngectomy or speech impairments. In this paper, we report the result of a novel, real-time algorithm that recognizes whole words based on articulatory movements. This approach differs from prior work that has focused primarily on phoneme-level recognition based on articulatory features. On average, our algorithm missed 1.93 words in a sequence of twenty-five words with an average latency of 0.79 seconds for each word prediction using a data set of …


Sentence Recognition From Articulatory Movements For Silent Speech Interfaces, Jun Wang, Ashok Samal, Jordan R. Green, Frank Rudzicz Mar 2012

Department of Special Education and Communication Disorders: Faculty Publications

Recent research has demonstrated the potential of using an articulation-based silent speech interface for command-and-control systems. Such an interface converts articulation to words that can then drive a text-to-speech synthesizer. In this paper, we have proposed a novel near-time algorithm to recognize whole-sentences from continuous tongue and lip movements. Our goal is to assist persons who are aphonic or have a severe motor speech impairment to produce functional speech using their tongue and lips. Our algorithm was tested using a functional sentence data set collected from ten speakers (3012 utterances). The average accuracy was 94.89% with an average latency of …


Bridging The Research Gap: Making HRI Useful To Individuals With Autism, Elizabeth Kim, Rhea Paul, Frederick Shic, Brian Scassellati Jan 2012

Communication Disorders Faculty Publications

While there is a rich history of studies involving robots and individuals with autism spectrum disorders (ASD), few of these studies have made substantial impact in the clinical research community. In this paper we first examine how differences in approach, study design, evaluation, and publication practices have hindered uptake of these research results. Based on ten years of collaboration, we suggest a set of design principles that satisfy the needs (both academic and cultural) of both the robotics and clinical autism research communities. Using these principles, we present a study that demonstrates a quantitatively measured improvement in human-human social interaction …


Vowel Recognition From Continuous Articulatory Movements For Speaker-Dependent Applications, Jun Wang, Jordan R. Green, Ashok Samal, Tom D. Carrell Jan 2010

Department of Special Education and Communication Disorders: Faculty Publications

A novel approach was developed to recognize vowels from continuous tongue and lip movements. Vowels were classified based on movement patterns (rather than on derived articulatory features, e.g., lip opening) using a machine learning approach. Recognition accuracy on a single-speaker dataset was 94.02% with a very short latency. Recognition accuracy was better for high vowels than for low vowels. This finding parallels previous empirical findings on tongue movements during vowels. The recognition algorithm was then used to drive an articulation-to-acoustics synthesizer. The synthesizer recognizes vowels from a continuous input stream of tongue and lip movements and plays the corresponding sound samples …


Graduate Bulletin, 1995-1996 (1995), Moorhead State University Jan 1995

Graduate Bulletins (Catalogs)

No abstract provided.


Graduate Bulletin, 1993-1995, Moorhead State University Jan 1993

Graduate Bulletins (Catalogs)

No abstract provided.


Time Delay Neural Networks And Speech Recognition: Context Independence Of Stops In Different Vowel Environments, Gregory Andrew Makowski Jun 1991

Masters Theses

A series of speech recognition experiments was conducted to investigate time-dynamic speech recognition of stop consonants invariant of vowel environment using data from six talkers. The speech preprocessing was based on previous studies investigating acoustic characteristics which correlate to the place of articulation (Blumstein and Stevens 1979). The place of articulation features were statistically abstracted using four moments and the energy level of the speech sample.

Both statistical and neural network pattern recognition methods were used. Statistical methods included linear and quadratic discriminant functions, maximum likelihood estimator (MLE) and K-nearest neighbors (KNN). The neural network approach used was Time Delay …
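The feature recipe (four moments plus energy) and one of the statistical classifiers above (K-nearest neighbors) can be sketched together; the frame values, class labels, and parameter choices below are hypothetical illustrations, not the thesis's data or exact computation.

```python
import math

def moment_features(frame):
    """Mean, variance, skewness, kurtosis, and energy of one frame
    (a generic moment-based feature vector, not the thesis's exact recipe)."""
    n = len(frame)
    mean = sum(frame) / n
    var = sum((x - mean) ** 2 for x in frame) / n
    std = math.sqrt(var) or 1.0
    skew = sum(((x - mean) / std) ** 3 for x in frame) / n
    kurt = sum(((x - mean) / std) ** 4 for x in frame) / n
    energy = sum(x * x for x in frame)
    return (mean, var, skew, kurt, energy)

def knn_predict(train, labels, query, k=3):
    """K-nearest-neighbors majority vote under Euclidean distance."""
    ranked = sorted((math.dist(f, query), lab) for f, lab in zip(train, labels))
    top = [lab for _, lab in ranked[:k]]
    return max(set(top), key=top.count)

# Hypothetical short frames standing in for two stop-burst classes.
frames = [[0.1, 0.2, 0.1, 0.0], [0.2, 0.3, 0.2, 0.1],
          [1.0, 0.8, 0.9, 1.1], [0.9, 1.0, 1.1, 0.8]]
labels = ["b", "b", "d", "d"]
train = [moment_features(f) for f in frames]
pred = knn_predict(train, labels, moment_features([0.95, 1.0, 0.9, 1.0]))
```

The discriminant-function and MLE classifiers the thesis compares would consume the same feature vectors; only the decision rule differs.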


Graduate Bulletin, 1991-1993, Moorhead State University Jan 1991

Graduate Bulletins (Catalogs)

No abstract provided.


Graduate Bulletin, 1985-1987 (1985), Moorhead State University Jan 1985

Graduate Bulletins (Catalogs)

No abstract provided.


Graduate Bulletin, 1982-1984 (1982), Moorhead State University Jan 1982

Graduate Bulletins (Catalogs)

No abstract provided.