Open Access. Powered by Scholars. Published by Universities.®

Engineering Commons

Articles 1 - 5 of 5

Full-Text Articles in Engineering

An Application Of Deep Learning Models To Automate Food Waste Classification, Alejandro Zachary Espinoza Dec 2019

Dissertations and Theses

Food wastage is a problem that affects all demographics and regions of the world. Each year, approximately one-third of the food produced for human consumption is thrown away. In an effort to track and reduce food waste in the commercial sector, some companies utilize third-party devices which collect data to analyze individual contributions to the global problem. These devices track the type of food wasted (such as vegetables, fruit, boneless chicken, pasta) along with the weight. Some devices also allow the user to leave the food in a kitchen container while it is weighed, so the container weight must also ...


Design Of A Canine Inspired Quadruped Robot As A Platform For Synthetic Neural Network Control, Cody Warren Scharzenberger Jul 2019

Dissertations and Theses

Legged locomotion is a feat ubiquitous throughout the animal kingdom, but modern robots still fall far short of similar achievements. This paper presents the design of a canine-inspired quadruped robot named DoggyDeux as a platform for synthetic neural network (SNN) research that may be one avenue for robots to attain animal-like agility and adaptability. DoggyDeux features a fully 3D printed frame, 24 braided pneumatic actuators (BPAs) that drive four 3-DOF limbs in antagonistic extensor-flexor pairs, and an electrical system that allows it to respond to commands from an SNN composed of central pattern generators (CPGs). Compared to the previous version ...
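
The alternating extensor-flexor rhythm that a CPG produces can be sketched with two mutually coupled phase oscillators. This is a generic Kuramoto-style toy model, not the paper's SNN controller; all function names, parameters, and constants below are illustrative assumptions.

```python
import math

def cpg_step(phases, dt=0.01, freq=1.0, coupling=0.5):
    """Advance two mutually coupled phase oscillators (a minimal CPG).

    Each oscillator drifts at its natural frequency and is pulled toward
    anti-phase with its partner, giving the alternating extensor-flexor
    rhythm an antagonistic actuator pair needs. Parameters are illustrative.
    """
    p0, p1 = phases
    # Kuramoto-style coupling toward an anti-phase (pi offset) relationship
    dp0 = 2 * math.pi * freq + coupling * math.sin(p1 - p0 - math.pi)
    dp1 = 2 * math.pi * freq + coupling * math.sin(p0 - p1 - math.pi)
    return (p0 + dp0 * dt, p1 + dp1 * dt)

# Integrate the pair from an arbitrary starting condition; the coupling
# drives the phase difference toward pi (perfect alternation).
phases = (0.0, 1.0)
for _ in range(5000):
    phases = cpg_step(phases)

# Read out actuator drive signals as rectified sinusoids in [0, 1].
extensor = max(0.0, math.sin(phases[0]))
flexor = max(0.0, math.sin(phases[1]))
```

In a fuller controller, one such oscillator pair per joint would be cross-coupled between limbs to coordinate a gait; here the point is only that simple phase dynamics yield a stable alternating rhythm.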


Exploring And Expanding The One-Pixel Attack, Umairullah Khan, Walt Woods, Christof Teuscher May 2019

Student Research Symposium

In machine learning research, adversarial examples are normal inputs to a classifier that have been specifically perturbed to cause the model to misclassify the input. These perturbations rarely affect the human readability of an input, even though the model’s output is drastically different. Recent work has demonstrated that image-classifying deep neural networks (DNNs) can be reliably fooled with the modification of a single pixel in the input image, without knowledge of a DNN’s internal parameters. This “one-pixel attack” utilizes an iterative evolutionary optimizer known as differential evolution (DE) to find the most effective pixel to perturb, via the ...
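
The differential-evolution loop behind a one-pixel attack can be sketched against a stand-in classifier. Everything here is a hypothetical placeholder: `toy_score` substitutes for the DNN's confidence in the true class, the image is a tiny grayscale grid, and population size, iteration count, and mutation factor are arbitrary; only the DE/rand/1 structure reflects the technique the abstract describes.

```python
import random

def toy_score(image):
    """Stand-in for a DNN's confidence in the true class (class 0):
    here simply mean pixel brightness (purely illustrative)."""
    flat = [v for row in image for v in row]
    return sum(flat) / len(flat)

def toy_predict(image):
    return 0 if toy_score(image) > 0.5 else 1

def one_pixel_attack(image, pop_size=20, iters=50, F=0.5):
    """Differential evolution over (x, y, value) single-pixel perturbations.

    Each candidate encodes which pixel to change and its new value; the
    fitness to minimize is the classifier's confidence in the true class,
    so no gradients or internal parameters are needed (black-box attack).
    """
    h, w = len(image), len(image[0])

    def apply(cand):
        x, y, v = cand
        out = [row[:] for row in image]
        out[int(y) % h][int(x) % w] = min(max(v, 0.0), 1.0)
        return out

    def fitness(cand):
        return toy_score(apply(cand))  # lower confidence = stronger attack

    pop = [(random.uniform(0, w), random.uniform(0, h), random.random())
           for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = random.sample(pop, 3)
            # DE/rand/1 mutation: trial = a + F * (b - c)
            trial = tuple(a[k] + F * (b[k] - c[k]) for k in range(3))
            if fitness(trial) <= fitness(pop[i]):
                pop[i] = trial  # greedy selection keeps the better candidate
    best = min(pop, key=fitness)
    return best, apply(best)

# Demo on a tiny 2x2 image the toy model labels as class 0; the attack
# searches for the one pixel change that flips the prediction.
random.seed(1)
img = [[0.6, 0.6], [0.6, 0.6]]
cand, adv = one_pixel_attack(img)
```

Against a real DNN the fitness would query the network's softmax output for the true label, but the evolutionary loop is unchanged.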


The Applications Of Grid Cells In Computer Vision, Keaton Kraiger Apr 2019

Undergraduate Research & Mentoring Program

In this study we present a novel method for position- and scale-invariant object representation based on a biologically inspired framework. Grid cells are neurons in the entorhinal cortex whose multiple firing locations form a periodic triangular array, tiling the surface of an animal’s environment. We propose a model for simple object representation that maintains position and scale invariance, in which grid maps capture the fundamental structure and features of an object. The model provides a mechanism for identifying feature locations in a Cartesian plane and vectors between object features encoded by grid cells. It is shown that key object ...
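
The periodic triangular firing array of a grid cell is commonly idealized as a sum of three plane waves oriented 60 degrees apart. The sketch below uses that textbook model, not the paper's implementation; the function name and the `scale`, `orientation`, and `phase` parameters are illustrative assumptions.

```python
import math

def grid_cell_activity(x, y, scale=1.0, orientation=0.0, phase=(0.0, 0.0)):
    """Firing rate of an idealized grid cell at location (x, y).

    Summing three cosine gratings whose wave vectors are 60 degrees apart
    produces the periodic triangular (hexagonal) lattice of firing fields.
    The rate peaks at 1.0 wherever the three waves align, e.g. at `phase`.
    """
    total = 0.0
    for k in range(3):
        theta = orientation + k * math.pi / 3  # 0, 60, 120 degrees
        kx = math.cos(theta) * 4 * math.pi / (math.sqrt(3) * scale)
        ky = math.sin(theta) * 4 * math.pi / (math.sqrt(3) * scale)
        total += math.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    # Rectify and normalize so the firing rate lies in [0, 1]
    return max(0.0, total / 3.0)
```

A population of such cells with different scales and phases gives each spatial location a distinct activity vector, which is the kind of positional code the proposed object-representation model builds on.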


Exploring And Expanding The One-Pixel Attack, Umairullah Khan, Walt Woods Jan 2019

Undergraduate Research & Mentoring Program

In machine learning research, adversarial examples are normal inputs to a classifier that have been specifically perturbed to cause the model to misclassify the input. These perturbations rarely affect the human readability of an input, even though the model’s output is drastically different. Recent work has demonstrated that image-classifying deep neural networks (DNNs) can be reliably fooled with the modification of a single pixel in the input image, without knowledge of a DNN’s internal parameters. This “one-pixel attack” utilizes an iterative evolutionary optimizer known as differential evolution (DE) to find the most effective pixel to perturb, via the ...