Open Access. Powered by Scholars. Published by Universities.®

Physical Sciences and Mathematics Commons

Articles 1 - 10 of 10

Full-Text Articles in Physical Sciences and Mathematics

Convolutional Neural Networks Analysis Reveals Three Possible Sources Of Bronze Age Writings Between Greece And India, Shruti Daggumati, Peter Z. Revesz Apr 2023

School of Computing: Faculty Publications

This paper analyzes the relationships among eight ancient scripts from the region between Greece and India. We used convolutional neural networks combined with support vector machines to give a numerical rating of the similarity between pairs of signs (one sign from each of two different scripts). Two scripts that had a one-to-one matching of their signs were determined to be related. The result of the analysis is the finding of the following three groups, which are listed in chronological order: (1) Sumerian pictograms, the Indus Valley script, and the proto-Elamite script; (2) Cretan hieroglyphs and Linear B; and (3) the Phoenician, Greek, …
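
The full pipeline is not reproduced in the snippet, but the core idea named in the abstract, CNN feature extraction followed by SVM similarity scoring, can be sketched. Below is a minimal, hypothetical illustration: the arrays stand in for precomputed CNN embeddings of sign images (they and the pair labels are placeholders, not the authors' data), and an SVM trained on absolute differences of embedding pairs outputs a match probability used as the similarity rating.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
dim = 64

# Placeholder CNN embeddings: matching sign pairs are noisy copies of a
# shared latent vector; non-matching pairs are independent draws.
base = rng.normal(size=(40, dim))
pos_a = base + 0.1 * rng.normal(size=(40, dim))
pos_b = base + 0.1 * rng.normal(size=(40, dim))
neg_a = rng.normal(size=(40, dim))
neg_b = rng.normal(size=(40, dim))

# Pairwise feature: absolute difference of the two sign embeddings.
X = np.vstack([np.abs(pos_a - pos_b), np.abs(neg_a - neg_b)])
y = np.array([1] * 40 + [0] * 40)   # 1 = pair of matching signs

svm = SVC(kernel="rbf", probability=True).fit(X, y)

# Similarity rating for a new sign pair = predicted match probability.
pair = np.abs(rng.normal(size=dim) - rng.normal(size=dim)).reshape(1, -1)
print("similarity rating:", svm.predict_proba(pair)[0, 1])
```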


Improved Evolutionary Support Vector Machine Classifier For Coronary Artery Heart Disease Prediction Among Diabetic Patients, Narasimhan B, Malathi A Dr Apr 2019

Library Philosophy and Practice (e-journal)

Soft computing paves the way for many applications, including medical informatics. Decision support systems have gained major attention as aids to medical practitioners in diagnosing diseases. Diabetes mellitus is a hereditary disease that can result in major heart disease. This research work proposes a soft computing mechanism named the Improved Evolutionary Support Vector Machine (IESVM) classifier for coronary artery heart disease (CAHD) risk prediction among diabetic patients. An attribute selection mechanism is built into the classifier in order to reduce the misclassification error rate of the conventional support vector machine classifier. A radial basis kernel function is employed in IESVM. The IESVM classifier is evaluated through …
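
The abstract names two ingredients: an RBF-kernel SVM and an evolved attribute (feature) selection mechanism. The published IESVM details are not given here, so the sketch below substitutes a deliberately simple evolutionary loop (random mutation of a feature mask, kept when cross-validated accuracy improves); the dataset and parameters are synthetic placeholders.

```python
# Sketch of evolutionary attribute selection around an RBF-kernel SVM.
# A toy hill-climbing loop, not the published IESVM algorithm.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=200, n_features=20,
                           n_informative=6, random_state=42)

def fitness(mask):
    # Cross-validated accuracy of an RBF SVM on the selected attributes.
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=5).mean()

mask = rng.random(20) < 0.5            # initial random attribute subset
best = fitness(mask)
for _ in range(50):                    # evolutionary iterations
    child = mask.copy()
    child[rng.integers(20)] ^= True    # mutate one attribute bit
    score = fitness(child)
    if score >= best:                  # keep the fitter mask
        mask, best = child, score

print(f"selected {mask.sum()} attributes, CV accuracy {best:.3f}")
```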


Across-Speaker Articulatory Normalization For Speaker-Independent Silent Speech Recognition, Jun Wang, Ashok Samal, Jordan Green Sep 2014

CSE Conference and Workshop Papers

Silent speech interfaces (SSIs), which recognize speech from articulatory information (i.e., without using audio information), have the potential to enable persons with laryngectomy or a neurological disease to produce synthesized speech with a natural-sounding voice using their tongue and lips. Current approaches to SSIs have largely relied on speaker-dependent recognition models to minimize the negative effects of talker variation on recognition accuracy. Speaker-independent approaches are needed to reduce the large amount of training data required from each user; often only limited articulatory samples are available from persons with moderate to severe speech impairments, due to the logistical difficulty of …
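
The abstract is truncated before it describes the paper's normalization method, so the snippet below is only a stand-in for the general idea: mapping each talker's articulatory data into a common space, illustrated here with simple per-speaker z-scoring of movement coordinates rather than the authors' technique.

```python
# Toy across-speaker normalization: z-score each speaker's articulatory
# coordinates so pooled data share a common scale and origin.
# A stand-in for the paper's (truncated) method, not a reproduction.
import numpy as np

rng = np.random.default_rng(1)

# Placeholder data: {speaker: (n_frames, n_coords) tongue/lip positions}.
speakers = {f"spk{i}": rng.normal(loc=10 * i, scale=1 + i, size=(100, 6))
            for i in range(3)}

def normalize(frames):
    # Remove each speaker's own mean and scale, coordinate by coordinate.
    return (frames - frames.mean(axis=0)) / frames.std(axis=0)

pooled = np.vstack([normalize(f) for f in speakers.values()])
print(pooled.mean(axis=0).round(2))   # ~0 per coordinate after pooling
print(pooled.std(axis=0).round(2))    # ~1 per coordinate after pooling
```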


Articulatory Distinctiveness Of Vowels And Consonants: A Data-Driven Approach, Jun Wang, Jordan R. Green, Ashok Samal, Yana Yunusova Oct 2013

School of Computing: Faculty Publications

Purpose: To quantify the articulatory distinctiveness of 8 major English vowels and 11 English consonants based on tongue and lip movement time series data using a data-driven approach.

Method: Tongue and lip movements of 8 vowels and 11 consonants from 10 healthy talkers were collected. First, classification accuracies were obtained using 2 complementary approaches: (a) Procrustes analysis and (b) a support vector machine. Procrustes distance was then used to measure the articulatory distinctiveness among vowels and consonants. Finally, the distance (distinctiveness) matrices of different vowel pairs and consonant pairs were used to derive articulatory vowel and consonant spaces …
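
Procrustes analysis, one of the two approaches named in the Method, is available directly in SciPy. The sketch below compares two placeholder articulatory movement shapes; `scipy.spatial.procrustes` returns a disparity that can serve as the pairwise distance entered into a distinctiveness matrix. The trajectories are synthetic, not the study's recordings.

```python
# Procrustes distance between two articulatory movement shapes.
# Shapes here are synthetic (n_timepoints, 2) tongue-marker trajectories.
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(7)
shape_a = np.cumsum(rng.normal(size=(50, 2)), axis=0)  # trajectory for sound A
shape_b = shape_a + 0.3 * rng.normal(size=(50, 2))     # a similar trajectory

# procrustes() optimally translates, scales, and rotates shape_b onto
# shape_a, then reports the residual disparity (sum of squared errors).
_, _, disparity = procrustes(shape_a, shape_b)
print(f"Procrustes disparity: {disparity:.4f}")
```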


Word Recognition From Continuous Articulatory Movement Time-Series Data Using Symbolic Representations, Jun Wang, Arvind Balasubramanian, Luis Mojica De La Vega, Jordan R. Green, Ashok Samal, Balakrishnan Prabhakaran Aug 2013

CSE Conference and Workshop Papers

Although still in the experimental stage, articulation-based silent speech interfaces may have significant potential for facilitating oral communication in persons with voice and speech problems. An articulation-based silent speech interface converts articulatory movement information to audible words. The complexity of the speech production mechanism (e.g., co-articulation) makes the conversion a formidable problem. In this paper, we reported a novel, real-time algorithm for recognizing words from continuous articulatory movements. This approach differed from prior work in that (1) it focused on the word level rather than the phoneme level; (2) online segmentation and recognition were conducted at the same time; and (3) a symbolic representation (SAX) was …
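
SAX (Symbolic Aggregate approXimation) is the one concrete component the snippet names. A minimal SAX conversion, z-normalize, average into segments (Piecewise Aggregate Approximation), then quantize against Gaussian breakpoints, looks roughly like the sketch below; the signal and parameters are illustrative, not the paper's.

```python
# Minimal SAX: turn a 1-D articulatory movement series into a symbol string.
import numpy as np

def sax(series, n_segments=8, alphabet="abcd"):
    # 1) z-normalize the series.
    z = (series - series.mean()) / series.std()
    # 2) Piecewise Aggregate Approximation: mean of each segment.
    paa = z[: len(z) // n_segments * n_segments]
    paa = paa.reshape(n_segments, -1).mean(axis=1)
    # 3) Quantize against breakpoints that cut N(0,1) into equal-area
    #    regions (standard values for a 4-symbol alphabet).
    breakpoints = np.array([-0.67, 0.0, 0.67])
    return "".join(alphabet[i] for i in np.searchsorted(breakpoints, paa))

rng = np.random.default_rng(3)
movement = np.cumsum(rng.normal(size=64))   # placeholder tongue-sensor track
print(sax(movement))                        # e.g. a 'ddccbbaa'-style string
```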


Whole-Word Recognition From Articulatory Movements For Silent Speech Interfaces, Jun Wang, Ashok Samal, Jordan R. Green, Frank Rudzicz Sep 2012

Department of Special Education and Communication Disorders: Faculty Publications

Articulation-based silent speech interfaces convert silently produced speech movements into audible words. These systems are still in their experimental stages, but have significant potential for facilitating oral communication in persons with laryngectomy or speech impairments. In this paper, we report the results of a novel, real-time algorithm that recognizes whole words based on articulatory movements. This approach differs from prior work that has focused primarily on phoneme-level recognition based on articulatory features. On average, our algorithm missed 1.93 words in a sequence of twenty-five words with an average latency of 0.79 seconds for each word prediction using a data set of …
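
The visible part of the abstract does not spell out the recognition algorithm, so the sketch below uses a generic baseline rather than the authors' method: dynamic-time-warping (DTW) distance to stored whole-word movement templates, with the nearest template taken as the recognized word.

```python
# Generic whole-word matcher (not the paper's algorithm): classify an
# articulatory movement sequence by DTW distance to word templates.
import numpy as np

def dtw(a, b):
    # Classic O(len(a) * len(b)) dynamic-time-warping distance.
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

rng = np.random.default_rng(5)
# Placeholder templates: one (frames, coords) sequence per word.
templates = {w: np.cumsum(rng.normal(size=(40, 6)), axis=0)
             for w in ["yes", "no", "help"]}
query = templates["help"] + 0.2 * rng.normal(size=(40, 6))

word = min(templates, key=lambda w: dtw(query, templates[w]))
print("recognized word:", word)
```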


Data Mining Of Protein Databases, Christopher Assi Jul 2012

Department of Computer Science and Engineering: Dissertations, Theses, and Student Research

Data mining of protein databases poses special challenges because many protein databases are non-relational, whereas most data mining and machine learning algorithms assume the input data to be a relational database. Protein databases are non-relational mainly because they often contain set data types. We developed new data mining algorithms that can restructure non-relational protein databases so that they become relational and amenable to various data mining and machine learning tools. We applied the new restructuring algorithms to a pancreatic protein database. After the restructuring, we applied two classification methods, decision tree and SVM classifiers, and compared their …
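
A common way to make a set-valued attribute relational is to expand each set into binary indicator columns, after which the flat table is usable by decision tree and SVM classifiers. The sketch below illustrates that general restructuring idea with made-up protein records, not the dissertation's specific algorithms.

```python
# Restructure a set-valued (non-relational) attribute into binary
# indicator columns so standard classifiers can consume it.
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.tree import DecisionTreeClassifier

# Made-up protein records: each has a SET of functional annotations.
records = [
    {"GO:0005524", "GO:0004672"},
    {"GO:0003677"},
    {"GO:0005524", "GO:0003677"},
    {"GO:0004672"},
]
labels = [1, 0, 1, 1]                  # hypothetical class per protein

mlb = MultiLabelBinarizer()
X = mlb.fit_transform(records)         # one 0/1 column per annotation
print(mlb.classes_)                    # column order of the flat table

# The flattened table is now relational input for tree or SVM learners.
tree = DecisionTreeClassifier().fit(X, labels)
print(tree.predict(X))
```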


Sentence Recognition From Articulatory Movements For Silent Speech Interfaces, Jun Wang, Ashok Samal, Jordan R. Green, Frank Rudzicz Mar 2012

Department of Special Education and Communication Disorders: Faculty Publications

Recent research has demonstrated the potential of using an articulation-based silent speech interface for command-and-control systems. Such an interface converts articulation to words that can then drive a text-to-speech synthesizer. In this paper, we propose a novel near-real-time algorithm to recognize whole sentences from continuous tongue and lip movements. Our goal is to assist persons who are aphonic or have a severe motor speech impairment to produce functional speech using their tongue and lips. Our algorithm was tested using a functional sentence data set collected from ten speakers (3012 utterances). The average accuracy was 94.89% with an average latency of …
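
The two figures the abstract reports, average accuracy and average latency, come from per-utterance test logs. As a rough illustration only (the tuples below are placeholder values, not the study's data), both summaries reduce to:

```python
# Sketch: summarize sentence-recognition accuracy and per-utterance
# latency from hypothetical test logs.
import numpy as np

# (predicted sentence, true sentence, seconds from end of movement
#  to recognition output) -- placeholder values.
logs = [("how are you", "how are you", 1.1),
        ("i need help", "i need help", 0.9),
        ("call the nurse", "call a nurse", 1.4)]

correct = np.array([p == t for p, t, _ in logs])
latency = np.array([sec for *_, sec in logs])
print(f"accuracy {correct.mean():.2%}, mean latency {latency.mean():.2f}s")
```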


Vowel Recognition From Continuous Articulatory Movements For Speaker-Dependent Applications, Jun Wang, Jordan R. Green, Ashok Samal, Tom D. Carrell Jan 2010

Department of Special Education and Communication Disorders: Faculty Publications

A novel approach was developed to recognize vowels from continuous tongue and lip movements. Vowels were classified based on movement patterns (rather than on derived articulatory features, e.g., lip opening) using a machine learning approach. Recognition accuracy on a single-speaker dataset was 94.02% with a very short latency. Recognition accuracy was better for high vowels than for low vowels. This finding parallels previous empirical findings on tongue movements during vowels. The recognition algorithm was then used to drive an articulation-to-acoustics synthesizer. The synthesizer recognizes vowels from a continuous input stream of tongue and lip movements and plays the corresponding sound samples …
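
The recognize-then-play loop described above can be reduced to a classifier plus a label-to-sample lookup. In the rough sketch below, the training data, the classifier choice, and the .wav paths are all hypothetical placeholders rather than the study's materials.

```python
# Toy articulation-to-acoustics loop: classify a frame of tongue/lip
# positions, then look up a pre-recorded sample for that vowel.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(9)

# Hypothetical training data: 3 vowels, each a cluster of 6-dim frames.
centers = {"i": 0.0, "a": 3.0, "u": 6.0}
X = np.vstack([c + rng.normal(size=(30, 6)) for c in centers.values()])
y = np.repeat(list(centers), 30)

clf = SVC().fit(X, y)

VOWEL_SAMPLES = {"i": "samples/i.wav", "a": "samples/a.wav",
                 "u": "samples/u.wav"}   # hypothetical file paths

frame = centers["a"] + rng.normal(size=6)   # incoming movement frame
print("play:", VOWEL_SAMPLES[clf.predict(frame.reshape(1, -1))[0]])
```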


Vowel Recognition From Articulatory Position Time-Series Data, Jun Wang, Ashok Samal, Jordan R. Green, Tom D. Carrell Sep 2009

CSE Conference and Workshop Papers

A new approach to recognizing vowels from articulatory position time-series data was proposed and tested in this paper. This approach directly mapped articulatory position time-series data to vowels without extracting articulatory features such as mouth opening. The input time-series data were time-normalized and resampled to fixed-width vectors of articulatory positions. Three commonly used classifiers (neural network, support vector machine, and decision tree) were used, and their performances were compared on the vectors. A single-speaker dataset of eight major English vowels, acquired using an Electromagnetic Articulograph (EMA) AG500, was used. Recognition rates using cross-validation ranged from 76.07% to 91.32% for …
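
The described pipeline, time-normalize each movement record to a fixed-width vector and then compare three standard classifiers, can be sketched as follows. The tracks are synthetic stand-ins for the EMA recordings, and the hyperparameters are library defaults rather than the paper's settings.

```python
# Sketch: resample variable-length articulatory tracks to fixed-width
# vectors, then compare three classifiers by cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(11)
WIDTH = 20  # fixed number of samples per time-normalized track

def to_fixed_width(track):
    # Linear interpolation onto WIDTH evenly spaced time points.
    t_old = np.linspace(0, 1, len(track))
    t_new = np.linspace(0, 1, WIDTH)
    return np.interp(t_new, t_old, track)

# Synthetic stand-ins: 8 'vowels', 15 variable-length tracks each.
X, y = [], []
for vowel in range(8):
    for _ in range(15):
        length = rng.integers(30, 80)
        track = (np.sin(np.linspace(0, vowel + 1, length))
                 + 0.1 * rng.normal(size=length))
        X.append(to_fixed_width(track))
        y.append(vowel)
X, y = np.array(X), np.array(y)

for clf in (MLPClassifier(max_iter=2000), SVC(), DecisionTreeClassifier()):
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, f"{acc:.3f}")
```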