Open Access. Powered by Scholars. Published by Universities.®

Engineering Commons

Computer Engineering

Honors Theses

Theses/Dissertations

Machine Learning

Articles 1 - 4 of 4

Full-Text Articles in Engineering

Art To Influence Creativity In Algorithmic Composition, Tyler Braithwaite Apr 2022

Honors Theses

Advances in Recurrent Neural Network (RNN) techniques have driven an explosion of problems centered on the large-scale analysis and generation of sequential data, including symbolic music. Building on Nathaniel Patterson’s Musical Autocomplete: An LSTM Approach, we extend the problem of continuing a composition by examining the creative impact that injecting latent-space-encoded image data, specifically fine art from the WikiArt Dataset, has on the musical output of RNN architectures designed for autocomplete. For comparison with Patterson, we also use a corpus of Erik Satie’s piano music for training, validation, and testing.
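
As an illustration of the kind of conditioning described above, the sketch below shows a minimal PyTorch LSTM that consumes note tokens alongside a fixed latent vector for a painting. The layer sizes, vocabulary, and the concatenate-at-every-timestep scheme are assumptions for illustration, not the thesis's actual architecture.

```python
import torch
import torch.nn as nn

class ConditionedAutocompleteRNN(nn.Module):
    """LSTM that continues a note sequence, conditioned on an image latent vector.

    Hypothetical sketch: vocabulary size, latent dimension, and the way the
    image encoding is injected are assumptions, not the thesis's exact design.
    """
    def __init__(self, vocab_size=128, embed_dim=64, latent_dim=32, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Each timestep sees its note embedding concatenated with the art latent code.
        self.lstm = nn.LSTM(embed_dim + latent_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, notes, art_latent):
        # notes: (batch, seq_len) integer note tokens
        # art_latent: (batch, latent_dim) latent-space encoding of a painting
        x = self.embed(notes)                                  # (batch, seq, embed_dim)
        z = art_latent.unsqueeze(1).expand(-1, x.size(1), -1)  # broadcast over time
        h, _ = self.lstm(torch.cat([x, z], dim=-1))
        return self.out(h)                                     # next-note logits


# Toy usage: predict continuations for a batch of 4 eight-note fragments.
model = ConditionedAutocompleteRNN()
notes = torch.randint(0, 128, (4, 8))
art_latent = torch.randn(4, 32)
logits = model(notes, art_latent)   # (4, 8, 128)
```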


Conditional Variational Autoencoder (Cvae) For The Augmentation Of Ecl Biosensor Data, Matthew Dulcich Apr 2022

Honors Theses

Machine Learning (ML) is vastly improving the world; from computer vision to fully self-driving cars, we can now accomplish objectives that were once thought to be only dreams. Training ML models accurately requires mountains of data, but sometimes it becomes impossible to collect the data needed, so we turn to data augmentation. In this project we use a conditional variational autoencoder (cVAE) to supplement the original video electrochemiluminescence (ECL) biosensor dataset, in order to increase the accuracy of a future classification model. In other words, using a cVAE we will create unique, realistic videos …
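
For readers unfamiliar with the augmentation technique named above, here is a minimal cVAE sketch in PyTorch operating on flattened frames. The thesis works on ECL biosensor videos, so the input dimensions, one-hot conditioning, and loss here are placeholders rather than the project's actual model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    """Minimal conditional VAE over flattened frames (dimensions are placeholders)."""
    def __init__(self, input_dim=4096, cond_dim=10, latent_dim=32, hidden=512):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(input_dim + cond_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, input_dim), nn.Sigmoid(),
        )

    def forward(self, x, c):
        # Encoder sees the sample together with its condition (class label).
        h = self.enc(torch.cat([x, c], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(torch.cat([z, c], dim=-1)), mu, logvar


def cvae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the unit Gaussian prior.
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld


# Generating synthetic samples for a chosen class label after training:
model = CVAE()
c = F.one_hot(torch.tensor([3, 3]), num_classes=10).float()
z = torch.randn(2, 32)
fake = model.dec(torch.cat([z, c], dim=-1))   # two synthetic (flattened) frames
```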


Exploring The Efficiency Of Neural Architecture Search (Nas) Modules, Joshua Dulcich Apr 2022

Honors Theses

Machine learning models are obscure and expensive to develop. Neural architecture search (NAS) algorithms automate this process by learning to create high-performing ML networks, minimizing the bias of and need for human experts. In this recently emerging field, most research has focused on optimizing a promising, unique combination of NAS's three segments (search space, search strategy, and performance estimation). Despite regularly achieving state-of-the-art results, this practice sacrifices computing time and resources for slight increases in accuracy; it also obstructs performance comparison across papers. To resolve this issue, we use NASLib's modular library to test the efficiency of each module in a unique subset of combinations. Each NAS …
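
To make the modular decomposition concrete, the toy sketch below separates the three NAS segments and plugs in a random search strategy with a fake performance estimator. It deliberately avoids NASLib's actual API; every name and number in it is a made-up stand-in.

```python
import random

# Toy illustration of NAS's three segments: search space, search strategy,
# and performance estimation. Not representative of NASLib's interfaces.

SEARCH_SPACE = {                        # segment 1: the space of candidate networks
    "op":    ["conv3x3", "conv5x5", "sep_conv", "skip"],
    "width": [16, 32, 64],
    "depth": [4, 8, 12],
}

def sample_architecture():              # segment 2: a (random) search strategy
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def estimate_performance(arch):         # segment 3: a stand-in performance estimator
    # A real estimator would train the candidate or use a proxy; here we fake a score.
    base = {"conv3x3": 0.10, "conv5x5": 0.12, "sep_conv": 0.15, "skip": 0.05}[arch["op"]]
    return base + 0.001 * arch["width"] + 0.002 * arch["depth"]

# Swapping any one segment (e.g. a different search strategy) while holding the
# other two fixed is the kind of per-module comparison described above.
best = max((sample_architecture() for _ in range(100)), key=estimate_performance)
print("best candidate:", best)
```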


A Study Of Deep Reinforcement Learning In Autonomous Racing Using Deepracer Car, Mukesh Ghimire May 2021

Honors Theses

Reinforcement learning is thought to be a promising branch of machine learning with the potential to help us develop an Artificial General Intelligence (AGI) machine. Among the main machine learning paradigms (supervised, semi-supervised, unsupervised, and reinforcement learning), reinforcement learning is different in the sense that it explores the environment without prior knowledge and determines the optimal action. This study attempts to understand the concept behind reinforcement learning and the mathematics behind it, and to see it in action by deploying the trained model on Amazon's DeepRacer car. DeepRacer, a 1/18th-scale autonomous car, is the agent that is trained …
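
As a simplified picture of exploration and value updates, the sketch below runs tabular Q-learning on a hypothetical one-dimensional track. The real DeepRacer agent is trained with a deep policy-gradient method on camera images, so this is only an illustration of the underlying idea, with all states, actions, and rewards invented.

```python
import random

# Tabular Q-learning on a made-up 10-cell track: the agent starts with no
# knowledge and learns, by trial and error, which steering action pays off.

N_STATES, ACTIONS = 10, ["left", "straight", "right"]
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.95, 0.2   # learning rate, discount, exploration rate

def step(state, action):
    """Hypothetical environment: staying 'straight' keeps the car on the centre line."""
    reward = 1.0 if action == "straight" else -0.5
    return (state + 1) % N_STATES, reward

for episode in range(500):
    s = 0
    for _ in range(N_STATES):
        # Epsilon-greedy exploration: mostly exploit, occasionally try something new.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next, r = step(s, a)
        # Q-learning update toward reward plus discounted best future value.
        q[(s, a)] += alpha * (r + gamma * max(q[(s_next, b)] for b in ACTIONS) - q[(s, a)])
        s = s_next

# After training, the greedy action in the start state is expected to be 'straight'.
print(max(ACTIONS, key=lambda act: q[(0, act)]))
```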