Open Access. Powered by Scholars. Published by Universities.®

Mathematics Commons

Articles 1 - 10 of 10

Full-Text Articles in Mathematics

A Review Of Cyber Attacks On Sensors And Perception Systems In Autonomous Vehicle, Taminul Islam, Md. Alif Sheakh, Anjuman Naher Jui, Omar Sharif, Md Zobaer Hasan Nov 2023

School of Mathematical and Statistical Sciences Faculty Publications and Presentations

Vehicle automation has been in the works for a long time. Automatic braking, cruise control, GPS satellite navigation, etc. are common features in today's automobiles. Breakthroughs in automation and artificial intelligence are likely to increase the use of automation technologies in cars. As a result, people will become more reliant on computer-controlled equipment and vehicle systems in daily life. All major corporations have begun investing in the development of self-driving cars because of rapid progress in advanced driver-assistance technologies. However, the level of safety and trustworthiness is still questionable. Imagine what …


Explainable Machine Learning Reveals The Relationship Between Hearing Thresholds And Speech-In-Noise Recognition In Listeners With Normal Audiograms, Jithin Raj Balan, Hansapani Rodrigo, Udit Saxena, Srikanta K. Mishra Oct 2023

School of Mathematical and Statistical Sciences Faculty Publications and Presentations

Some individuals complain of listening-in-noise difficulty despite having a normal audiogram. In this study, machine learning is applied to examine the extent to which hearing thresholds can predict speech-in-noise recognition among normal-hearing individuals. The specific goals were to (1) compare the performance of one standard (GAM, generalized additive model) and four machine learning models (ANN, artificial neural network; DNN, deep neural network; RF, random forest; XGBoost, eXtreme gradient boosting), and (2) examine the relative contribution of individual audiometric frequencies and demographic variables in predicting speech-in-noise recognition. Archival data included thresholds (0.25–16 kHz) and speech recognition thresholds (SRTs) from listeners with …
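The abstract's second goal, ranking the contribution of individual audiometric frequencies, can be illustrated with a permutation-importance sketch: shuffle one feature at a time and measure how much the model's score drops. This is a generic explainability technique, not the authors' actual pipeline; all names and data here are illustrative.

```python
import numpy as np

def permutation_importance(model_fn, X, y, n_repeats=10, seed=0):
    """Estimate each feature's contribution (e.g., one audiometric frequency)
    as the average drop in score when that feature's column is shuffled.
    model_fn(X) returns predictions; the score is negative mean squared error."""
    rng = np.random.default_rng(seed)
    baseline = -np.mean((model_fn(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's link to y
            drops.append(baseline - (-np.mean((model_fn(Xp) - y) ** 2)))
        importances[j] = np.mean(drops)
    return importances
```

A feature the model ignores gets importance near zero; a feature the prediction depends on gets a large positive value.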


Longboard Classification Using Machine Learning, Tuan (Kevin) Le, Evans Sajtar, Mckenzie Lamb Oct 2023

Annual Student Research Poster Session

A rider can choose among several techniques to perform at different points during a longboard ride. This research aims to create a machine-learning model that can efficiently classify these techniques over time using raw acceleration data. This paper presents the complete workflow of the application, which involves analytical geometry, multidimensional calculus, and linear algebra and can be used to visualize and normalize time-invariant object paths. The model focuses on displacement data calculated from raw acceleration data and gyro sensor data from a smartphone application called "Physics Toolbox Sensor Suite". We extracted features from …
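One step the abstract describes, computing displacement from raw acceleration, amounts to integrating twice over time. A minimal sketch using the trapezoidal rule follows (illustrative only; the authors' actual preprocessing, filtering, and calibration may differ):

```python
import numpy as np

def displacement_from_acceleration(a, dt):
    """Integrate raw acceleration twice (trapezoidal rule) to get displacement.
    a: 1-D array of acceleration samples [m/s^2], dt: sampling interval [s]."""
    # first integration: acceleration -> velocity
    v = np.concatenate(([0.0], np.cumsum((a[1:] + a[:-1]) / 2.0) * dt))
    # second integration: velocity -> displacement
    d = np.concatenate(([0.0], np.cumsum((v[1:] + v[:-1]) / 2.0) * dt))
    return d

# constant 1 m/s^2 for 1 s at 100 Hz: displacement should approach 0.5 * t^2
a = np.ones(101)
d = displacement_from_acceleration(a, 0.01)
```

In practice raw phone accelerometer data drifts badly under double integration, so filtering or zero-velocity correction would be needed before feature extraction.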


Compatibility Of Clique Clustering Algorithm With Dimensionality Reduction, Uğur Madran, Duygu Soyoğlu Sep 2023

Applied Mathematics & Information Sciences

In our previous work, we introduced a clustering algorithm based on clique formation. The resulting clusters, cliques, are constructed by selecting the densest complete subgraphs using similarity values between instances. The clique algorithm successfully reduces the number of instances in a data set without substantially changing the accuracy rate. In the current work, we focus on reducing the number of features. For this purpose, the effect of the clique clustering algorithm on dimensionality reduction has been analyzed. We propose a novel algorithm for support vector machine classification by combining these two techniques and applying different strategies by differentiating …
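A minimal sketch of threshold-based clique clustering in the spirit the abstract describes: link instances whose pairwise similarity exceeds a threshold, then greedily extract complete subgraphs as clusters. This is a greedy approximation for illustration, not the authors' algorithm.

```python
import numpy as np

def clique_clusters(sim, threshold):
    """Greedy clique clustering sketch. sim: symmetric similarity matrix,
    threshold: minimum similarity for two instances to share a clique."""
    n = sim.shape[0]
    adj = sim >= threshold          # adjacency: "similar enough" edges
    unassigned = set(range(n))
    clusters = []
    while unassigned:
        # seed with the unassigned instance of highest total similarity
        seed = max(unassigned, key=lambda i: sim[i, list(unassigned)].sum())
        clique = [seed]
        # add candidates (most similar first) that stay adjacent to the whole clique
        for j in sorted(unassigned - {seed}, key=lambda j: -sim[seed, j]):
            if all(adj[j, k] for k in clique):
                clique.append(j)
        clusters.append(sorted(clique))
        unassigned -= set(clique)
    return clusters
```

Each cluster is a complete subgraph, so every pair inside it meets the similarity threshold; instance reduction then amounts to keeping one representative per clique.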


Numerical Simulation Of The Korteweg–De Vries Equation With Machine Learning, Kristina O. F. Williams, Benjamin F. Akers Jun 2023

Faculty Publications

A machine learning procedure is proposed to create numerical schemes for solutions of nonlinear wave equations on coarse grids. This method trains stencil weights of a discretization of the equation, with the truncation error of the scheme as the objective function for training. The method uses centered finite differences to initialize the optimization routine and a second-order implicit-explicit time solver as a framework. Symmetry conditions are enforced on the learned operator to ensure a stable method. The procedure is applied to the Korteweg–de Vries equation. It is observed to be more accurate than finite difference or spectral methods on coarse …
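The centered finite differences used to initialize the optimization can be sketched for the KdV equation u_t + 6uu_x + u_xxx = 0 on a periodic grid. This is only the classical fixed-coefficient starting point; in the paper, trained stencil weights would replace these coefficients.

```python
import numpy as np

def kdv_rhs(u, dx):
    """Spatial discretization of u_t = -6*u*u_x - u_xxx with second-order
    centered differences on a periodic grid (np.roll handles periodicity)."""
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    uxxx = (np.roll(u, -2) - 2 * np.roll(u, -1)
            + 2 * np.roll(u, 1) - np.roll(u, 2)) / (2 * dx**3)
    return -6 * u * ux - uxxx
```

This right-hand side would then be advanced in time by an implicit-explicit solver as the abstract describes, with the stiff dispersive term u_xxx treated implicitly.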


Why Softmax? Because It Is The Only Consistent Approach To Probability-Based Classification, Anatole Lokshin, Vladik Kreinovich Jun 2023

Departmental Technical Reports (CS)

In many practical problems, the most effective classification techniques are based on deep learning. In this approach, once the neural network generates values corresponding to different classes, these values are transformed into probabilities by using the softmax formula. Researchers have tried other transformations, but none worked as well as softmax. A natural question is: why is softmax so effective? In this paper, we provide a possible explanation for this effectiveness: namely, we prove that softmax is the only consistent approach to probability-based classification. In precise terms, it is the only approach for which two reasonable probability-based ideas -- Least …
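The softmax transformation the abstract refers to maps the network's raw class values (logits) to probabilities. A standard, numerically stable implementation subtracts the maximum before exponentiating, which leaves the result unchanged mathematically but avoids overflow:

```python
import numpy as np

def softmax(z):
    """Map logits z to probabilities: exp(z_i) / sum_j exp(z_j),
    computed stably by shifting by max(z) first."""
    shifted = z - np.max(z)
    exp_z = np.exp(shifted)
    return exp_z / exp_z.sum()

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)  # positive values summing to 1, ordered like the logits
```

Note that softmax preserves the ordering of the logits, so the predicted class (the argmax) is unaffected by the transformation.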


Using Deep Neural Networks To Classify Astronomical Images, Andrew D. Macpherson May 2023

Honors Projects

As the quantity of astronomical data available continues to exceed the resources available for analysis, recent advances in artificial intelligence encourage the development of automated classification tools. This paper lays out a framework for constructing a deep neural network capable of classifying individual astronomical images by describing techniques to extract and label these objects from large images.


Fast -- Asymptotically Optimal -- Methods For Determining The Optimal Number Of Features, Saied Tizpaz-Niari, Luc Longpré, Olga Kosheleva, Vladik Kreinovich May 2023

Departmental Technical Reports (CS)

In machine learning -- and in data processing in general -- it is very important to select the proper number of features. If we select too few, we miss important information and do not get good results, but if we select too many, this will include many irrelevant ones that only bring noise and thus again worsen the results. The usual method of selecting the proper number of features is to add features one by one until the quality stops improving and starts deteriorating again. This method works, but it often takes too much time. In this paper, we propose …
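The "usual method" the abstract describes, adding features one by one until quality stops improving, can be sketched as follows. It assumes the features are already ranked and that `scores[k]` (an illustrative name) is the validation quality obtained with the top k+1 features; this is the baseline the paper improves on, not its proposed method.

```python
def select_num_features(scores):
    """Add features one at a time and stop as soon as validation
    quality starts deteriorating; return how many features to keep."""
    best_k, best_score = 0, scores[0]
    for k, s in enumerate(scores):
        if s >= best_score:
            best_k, best_score = k, s
        else:
            break  # quality started deteriorating; stop adding features
    return best_k + 1
```

Since each step typically requires retraining the model, this scan is what makes the usual method slow, motivating the asymptotically optimal schemes the paper proposes.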


Nviz: Unraveling Neural Networks Through Visualization, Kevin Hoffman Apr 2023

Mathematics and Computer Science Presentations

The growing utility of artificial intelligence (AI) is attributed to the development of neural networks. These networks are a class of models that make predictions based on previously observed data. While the inferential power of neural networks is great, the ability to explain their results is difficult because the underlying model is automatically generated. The AI community commonly refers to neural networks as black boxes because the patterns they learn from the data are not easily understood. This project aims to improve the visibility of patterns that neural networks identify in data. Through an interactive web application, NVIZ affords the …


Multilevel Optimization With Dropout For Neural Networks, Gary Joseph Saavedra Apr 2023

Mathematics & Statistics ETDs

Large neural networks have become ubiquitous in machine learning. Despite their widespread use, the optimization process for training a neural network remains computationally expensive and does not necessarily create networks that generalize well to unseen data. In addition, the difficulty of training increases as the size of the neural network grows. In this thesis, we introduce the novel MGDrop and SMGDrop algorithms which use a multigrid optimization scheme with a dropout coarsening operator to train neural networks. In contrast to other standard neural network training schemes, MGDrop explicitly utilizes information from smaller sub-networks which act as approximations of the full …
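Standard inverted dropout, the operator the abstract's coarsening scheme builds on, can be sketched as follows. This is a generic implementation for illustration, not the thesis's MGDrop algorithm.

```python
import numpy as np

def dropout_forward(x, p_drop, rng, train=True):
    """Inverted dropout: during training, zero each activation with
    probability p_drop and rescale survivors by 1/(1 - p_drop) so the
    expected activation is unchanged; at evaluation time, pass x through."""
    if not train or p_drop == 0.0:
        return x
    mask = (rng.random(x.shape) >= p_drop) / (1.0 - p_drop)
    return x * mask
```

Viewed as a coarsening operator, dropping units yields a smaller sub-network whose forward pass approximates the full network, which is the multigrid intuition the abstract describes.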