Open Access. Powered by Scholars. Published by Universities.®

Computer Sciences Commons

Articles 1 - 13 of 13

Full-Text Articles in Computer Sciences

Emotion Classification Of Indonesian Tweets Using Bidirectional LSTM, Aaron K. Glenn, Phillip M. Lacasse, Bruce A. Cox Feb 2023

Faculty Publications

Emotion classification can be a powerful tool to derive narratives from social media data. Traditional machine learning models that perform emotion classification on Indonesian Twitter data exist but rely on closed-source features. Recurrent neural networks can meet or exceed the performance of state-of-the-art traditional machine learning techniques using exclusively open-source data and models. Specifically, these results show that recurrent neural network variants can produce more than an 8% gain in accuracy in comparison with logistic regression and SVM techniques and a 15% gain over random forest when using FastText embeddings. This research found a statistical significance in the performance of …
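
A minimal sketch of the bidirectional-LSTM classifier this abstract describes, in Keras; the vocabulary size, layer widths, and five-class output are illustrative assumptions rather than the authors' configuration:

```python
# Minimal bidirectional-LSTM text classifier sketch (TensorFlow/Keras).
# All sizes below are assumed values, not the paper's settings.
import tensorflow as tf

VOCAB, EMB_DIM, N_CLASSES = 20_000, 300, 5  # 300-d matches FastText-style vectors

model = tf.keras.Sequential([
    # The embedding layer could be initialized from pretrained FastText
    # vectors, which the abstract credits for part of the accuracy gain.
    tf.keras.layers.Embedding(VOCAB, EMB_DIM),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(padded_token_ids, emotion_labels, validation_split=0.1)
```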


Machine Learning Land Cover And Land Use Classification Of 4-Band Satellite Imagery, Lorelei Turner, Torrey J. Wagner, Paul Auclair, Brent T. Langhals Jan 2022

Faculty Publications

Land-cover and land-use classification generates categories of terrestrial features, such as water or trees, which can be used to track how land is used. This work applies classical, ensemble and neural network machine learning algorithms to a multispectral remote sensing dataset containing 405,000 28×28-pixel image patches in 4 electromagnetic frequency bands. For each algorithm, model metrics and prediction execution time were evaluated, resulting in two families of models: fast and precise. The prediction time for an 81,000-patch group of predictions was … for the fast models and >5 s for the precise models, and there was not a significant change in prediction time when a …
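
A rough sketch of a neural network model for such patches; the (28, 28, 4) input shape follows the abstract, while the architecture and class count are assumptions:

```python
# Small CNN sketch for 28x28, 4-band patches (TensorFlow/Keras).
# Input shape follows the abstract; the layers are illustrative only.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 4)),                  # 4 spectral bands
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),    # assumed class count
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```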


Year-Independent Prediction Of Food Insecurity Using Classical & Neural Network Machine Learning Methods, Caleb Christiansen, Torrey J. Wagner, Brent Langhals May 2021

Faculty Publications

Current food crisis predictions are developed by the Famine Early Warning System Network, but they fail to classify the majority of food crisis outbreaks with model metrics of recall (0.23), precision (0.42), and f1 (0.30). In this work, using a World Bank dataset, classical and neural network (NN) machine learning algorithms were developed to predict food crises in 21 countries. The best classical logistic regression algorithm achieved a high level of significance (p < 0.001) and precision (0.75) but was deficient in recall (0.20) and f1 (0.32). Of particular interest, the classical algorithm indicated that the vegetation index and the food price index were both positively correlated with food crises. A novel method for performing an iterative multidimensional hyperparameter search is presented, which resulted in significantly improved performance when applied to this dataset. Four iterations were conducted, resulting in excellent metrics of 0.96 for precision, recall, and f1. Due to this strong performance, the food crisis year was removed from the dataset to prevent immediate extrapolation when used on future data, and the modeling process was repeated. The best “no year” model metrics remained strong, achieving ≥0.92 for recall, precision, and f1 while meeting a 10% f1 overfitting threshold on the test (0.84) and holdout (0.83) datasets. The year-agnostic neural network model represents a novel approach to classifying food crises and outperforms current food crisis prediction efforts.
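
A hedged sketch of one way an iterative multidimensional hyperparameter search can work, re-centering a log-spaced grid on each iteration's best point; the estimator, the alpha parameter, and the shrink factor are stand-ins, not the paper's choices:

```python
# Iterative (zoom-in) hyperparameter search sketch, in the spirit of the
# abstract's method. One dimension is shown; more would be nested the same way.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

def iterative_search(X, y, lo=1e-5, hi=1e-1, n_iters=4, points=5):
    """Grid-search a log-spaced range, then shrink it around the best value."""
    best = None
    for _ in range(n_iters):
        grid = {"alpha": np.geomspace(lo, hi, points)}
        # f1 scoring assumes a binary crisis / no-crisis label
        search = GridSearchCV(MLPClassifier(max_iter=500), grid, scoring="f1")
        search.fit(X, y)
        best = search.best_params_["alpha"]
        lo, hi = best / 3, best * 3        # zoom the window around the best
    return best
```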


Improving Optimization Of Convolutional Neural Networks Through Parameter Fine-Tuning, Nicholas C. Becherer, John M. Pecarina, Scott L. Nykl, Kenneth M. Hopkinson Aug 2019

Faculty Publications

In recent years, convolutional neural networks have achieved state-of-the-art performance in a number of computer vision problems such as image classification. Prior research has shown that a transfer learning technique known as parameter fine-tuning, wherein a network is pre-trained on a different dataset, can boost the performance of these networks. However, the topic of identifying the best source dataset and learning strategy for a given target domain is largely unexplored. Thus, this research presents and evaluates various transfer learning methods for fine-grained image classification as well as the effect on ensemble networks. The results clearly demonstrate the effectiveness of parameter …
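
A generic parameter fine-tuning sketch in Keras; the VGG16/ImageNet source and the class count are illustrative only, since the paper's point is precisely that the best source dataset is an open question:

```python
# Parameter fine-tuning sketch (TensorFlow/Keras): start from weights
# pre-trained on a source dataset, freeze them, and train a new head on
# the target domain. Source network and sizes are assumptions.
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                      # freeze pre-trained parameters

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(120, activation="softmax"),  # assumed target classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy")
# Once the head converges, top convolutional blocks can be unfrozen and
# trained at a low learning rate -- the fine-tuning step itself.
```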


Impact Of Reviewer Social Interaction On Online Consumer Review Fraud Detection, Kunal Goswami, Younghee Park, Chungsik Song Jan 2017

Faculty Publications

Background: Online consumer reviews have become a baseline for new consumers to try out a business or a new product. The reviews provide a quick look into the application and experience of the business/product and market it to new customers. However, some businesses or reviewers use these reviews to spread fake information about the business/product. The fake information can be used to promote a relatively average product/business or to malign their competition. This activity is known as reviewer fraud or opinion spam. The paper proposes a feature set capturing user social-interaction behavior to identify fraud. …
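
A hypothetical sketch of social-interaction features feeding a fraud classifier; the column names are invented for illustration, as the abstract does not enumerate the proposed feature set:

```python
# Sketch of social-interaction features for review-fraud detection.
# friend_count, review_count, days_active, useful_votes, is_fraud are
# hypothetical column names, not the paper's feature set.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def build_features(reviewers: pd.DataFrame) -> pd.DataFrame:
    feats = pd.DataFrame(index=reviewers.index)
    feats["friend_count"] = reviewers["friend_count"]
    feats["reviews_per_day"] = reviewers["review_count"] / reviewers["days_active"]
    feats["useful_votes_ratio"] = (
        reviewers["useful_votes"] / reviewers["review_count"].clip(lower=1)
    )
    return feats

# clf = LogisticRegression().fit(build_features(df), df["is_fraud"])
```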


Sub-Symbolic Re-Representation To Facilitate Learning Transfer, Dan A. Ventura Mar 2008

Faculty Publications

We consider the issue of knowledge (re-)representation in the context of learning transfer and present a subsymbolic approach for effecting such transfer. Given a set of data, manifold learning is used to automatically organize the data into one or more representational transformations, which are then learned with a set of neural networks. The result is a set of neural filters that can be applied to new data as re-representation operators. Encouraging preliminary empirical results elucidate the approach and demonstrate its feasibility, suggesting possible implications for the broader field of creativity.
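
A small sketch of the re-representation pipeline as described: manifold learning organizes the data into a transformation, and a network then learns that transformation so it can be applied to new data. Isomap and the layer sizes are assumptions, not the paper's specifics:

```python
# Learn a "neural filter": a network trained to reproduce a manifold-learned
# transformation, reusable as a re-representation operator on new data.
from sklearn.manifold import Isomap
from sklearn.neural_network import MLPRegressor

def learn_filter(X, n_components=2):
    Z = Isomap(n_components=n_components).fit_transform(X)  # organize the data
    net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000)
    net.fit(X, Z)          # the network learns the transformation itself
    return net             # apply to new data with net.predict(X_new)
```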


Effectively Using Recurrently-Connected Spiking Neural Networks, Eric Goodman, Dan A. Ventura Jul 2005

Faculty Publications

Recurrently-connected spiking neural networks are difficult to use and understand because of the complex nonlinear dynamics of the system. Through empirical studies of spiking networks, we deduce several principles which are critical to success. Network parameters such as synaptic time delays, time constants, and connection probabilities can be adjusted to have a significant impact on accuracy. We show how to adjust these parameters to fit the type of problem.
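
A toy leaky integrate-and-fire simulation that exposes two of the parameters the abstract highlights, the membrane time constant and a synaptic time delay; all constants are illustrative:

```python
# Toy leaky integrate-and-fire neuron with a synaptic delay (numpy).
# tau, delay, weights, and threshold are illustrative values only.
import numpy as np

def simulate(input_spikes, tau=10.0, delay=3, threshold=1.0, dt=1.0):
    v, out = 0.0, []
    buffered = np.concatenate([np.zeros(delay), input_spikes])  # synaptic delay
    for s in buffered:
        v += dt / tau * (-v) + s * 0.5  # leak toward rest, add input current
        if v >= threshold:              # fire and reset
            out.append(1); v = 0.0
        else:
            out.append(0)
    return out
```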


Simplifying OCR Neural Networks With Oracle Learning, Tony R. Martinez, Joshua Menke May 2003

Faculty Publications

Often the best model to solve a real world problem is relatively complex. The following presents oracle learning, a method using a larger model as an oracle to train a smaller model on unlabeled data in order to obtain (1) a simpler acceptable model and (2) improved results over standard training methods on a similarly sized smaller model. In particular, this paper looks at oracle learning as applied to multi-layer perceptrons trained using standard backpropagation. For optical character recognition, oracle learning results in an 11.40% average decrease in error over direct training while maintaining 98.95% of the initial oracle accuracy.
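
A compact sketch of the oracle-learning idea: a larger trained model labels unlabeled data and a smaller model is fit to those labels. The scikit-learn MLPs and layer sizes are stand-ins for the paper's backpropagation networks:

```python
# Oracle-learning sketch: a small network mimics a large trained "oracle"
# on unlabeled data. Layer sizes are illustrative assumptions.
from sklearn.neural_network import MLPClassifier

def oracle_learn(oracle, X_unlabeled, small_hidden=(32,)):
    targets = oracle.predict(X_unlabeled)          # oracle provides the labels
    small = MLPClassifier(hidden_layer_sizes=small_hidden, max_iter=1000)
    small.fit(X_unlabeled, targets)                # small net mimics the oracle
    return small

# oracle = MLPClassifier(hidden_layer_sizes=(512, 512)).fit(X_train, y_train)
# compact = oracle_learn(oracle, X_unlabeled)
```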


Concurrently Learning Neural Nets: Encouraging Optimal Behavior In Cooperative Reinforcement Learning Systems, Nancy Fulda, Dan A. Ventura May 2003

Faculty Publications

Reinforcement learning agents interacting in a common environment often fail to converge to optimal system behaviors even when the individual goals of the agents are fully compatible. Claus and Boutilier have demonstrated that the use of joint action learning helps to overcome these difficulties for Q-learning systems. This paper studies an application of joint action learning to systems of neural networks. Neural networks are a desirable candidate for such augmentations for two reasons: (1) they may be able to generalize more effectively than Q-learners, and (2) the network topology used may improve the scalability of joint action learning to systems …
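
A tabular toy of joint action learning in a cooperative two-agent matrix game with climbing-game-style payoffs; the paper itself studies neural-network learners, and the centralized Q-table here is a simplification to keep the joint-action idea visible:

```python
import numpy as np

# Illustrative cooperative payoffs in the style of Claus & Boutilier's
# climbing game: the best joint action (0, 0) is flanked by penalties.
payoff = np.array([[ 11, -30,   0],
                   [-30,   7,   6],
                   [  0,   0,   5]])
Q = np.zeros((3, 3))        # one estimate per JOINT action (a1, a2)
alpha, eps = 0.1, 0.2
rng = np.random.default_rng(0)

for _ in range(5000):
    if rng.random() < eps:                             # explore
        a1, a2 = rng.integers(3), rng.integers(3)
    else:                                              # exploit best joint action
        a1, a2 = np.unravel_index(Q.argmax(), Q.shape)
    Q[a1, a2] += alpha * (payoff[a1, a2] - Q[a1, a2])  # move toward the payoff

print(np.unravel_index(Q.argmax(), Q.shape))           # settles on (0, 0)
```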


A Memory-Based Approach To Cantonese Tone Recognition, Deryle W. Lonsdale, Michael Emonts Jan 2003

Faculty Publications

This paper introduces memory-based learning as a viable approach for Cantonese tone recognition. The memory-based learning algorithm employed here outperforms other documented current approaches for this problem, which are based on neural networks. Various numbers of tones and features are modeled to find the best method for feature selection and extraction. To further optimize this approach, experiments are performed to isolate the best feature weighting method, the best class voting weights method, and the best value of k to implement. Results and possible future work are discussed.
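
A memory-based (k-nearest-neighbor) sketch echoing the tuning the abstract describes over k and class-voting weights; the feature arrays are hypothetical and assumed to be extracted already:

```python
# Memory-based learning sketch: kNN with a search over k and voting scheme.
# tone_features / tone_labels are hypothetical pre-extracted arrays.
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

search = GridSearchCV(
    KNeighborsClassifier(),
    {"n_neighbors": [1, 3, 5, 7, 9],
     "weights": ["uniform", "distance"]},   # class-voting weighting schemes
    cv=5,
)
# search.fit(tone_features, tone_labels)
# print(search.best_params_)
```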


Improving Speech Recognition Learning Through Lazy Training, Tony R. Martinez, Michael E. Rimer, D. Randall Wilson May 2002

Faculty Publications

Multi-layer backpropagation, like most learning algorithms that can create complex decision surfaces, is prone to overfitting. We present a novel approach, called lazy training, for reducing the overfit in multiple-layer networks. Lazy training consistently reduces generalization error of optimized neural networks by more than half on a large OCR dataset and on several real world problems from the UCI machine learning database repository. Here, lazy training is shown to be effective in a multi-layered adaptive learning system, reducing the error of an optimized backpropagation network in a speech recognition system by 50.0% on the TIDIGITS corpus.
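
A heavily hedged sketch of the lazy-training intuition, updating only on patterns the current model gets wrong rather than pushing every output toward 0/1 targets; a linear model stands in for the paper's multi-layer backpropagation networks:

```python
# Lazy-training-style rule sketch: weight updates only for misclassified
# patterns. This illustrates the idea, not the paper's exact method.
import numpy as np

def lazy_train(X, y, epochs=20, lr=0.01):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):           # y in {-1, +1}
            if np.sign(w @ x_i) != y_i:      # only misclassified patterns...
                w += lr * y_i * x_i          # ...trigger a weight update
    return w
```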


Robust Optimization Using Training Set Evolution, Tony R. Martinez, Dan A. Ventura Jun 1996

Faculty Publications

Training Set Evolution is an eclectic optimization technique that combines evolutionary computation (EC) with neural networks (NN). The synthesis of EC with NN provides both initial unsupervised random exploration of the solution space as well as supervised generalization on those initial solutions. An assimilation of a large amount of data obtained over many simulations provides encouraging empirical evidence for the robustness of Evolutionary Training Sets as an optimization technique for feedback and control problems.
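
A sketch of the Training Set Evolution loop under stated assumptions: candidate training sets are evolved, and each is scored by how well a network trained on it generalizes. Population size, mutation scheme, and the MLP are illustrative, not the paper's design:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def evolve_training_sets(X, y, X_val, y_val, pop=10, keep=50, gens=20):
    """Evolve index subsets of (X, y); fitness = validation accuracy of a
    network trained on the subset. All EC details here are assumptions."""
    rng = np.random.default_rng(0)
    population = [rng.choice(len(X), keep, replace=False) for _ in range(pop)]
    best = population[0]
    for _ in range(gens):
        scores = [MLPClassifier(hidden_layer_sizes=(16,), max_iter=300)
                  .fit(X[idx], y[idx]).score(X_val, y_val)
                  for idx in population]
        best = population[int(np.argmax(scores))]
        # next generation: keep the best, plus mutants swapping one sample
        population = [best] + [
            np.where(np.arange(keep) == rng.integers(keep),
                     rng.integers(len(X)), best)
            for _ in range(pop - 1)
        ]
    return best
```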


Digital Neural Networks, Tony R. Martinez Jan 1988

Faculty Publications

Demands for applications requiring massive parallelism in symbolic environments have given rebirth to research in models labeled as neural networks. These models are made up of many simple nodes which are highly interconnected such that computation takes place as data flows amongst the nodes of the network. To date, most models have proposed nodes based on simple analog functions, where inputs are multiplied by weights and summed, the total then optionally being transformed by an arbitrary function at the node. Learning in these systems is accomplished by adjusting the weights on the input lines. This paper discusses the use of …
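
For contrast with the digital nodes the paper goes on to discuss, the classic analog node it describes can be written in a line of numpy; the tanh nonlinearity is one arbitrary choice of transform:

```python
# The "analog" node the abstract contrasts against: weighted sum of inputs,
# optionally passed through a nonlinearity.
import numpy as np

def analog_node(x, w, f=np.tanh):
    return f(np.dot(w, x))   # multiply inputs by weights, sum, transform
```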