Digital Commons Network

Open Access. Powered by Scholars. Published by Universities.®

Computer Engineering · Purdue University · Conference · Machine learning

Articles 1 - 2 of 2

Full-Text Articles in Entire DC Network

Micro-Manipulation Using Learned Model, Matthew A. Lyng, Benjamin V. Johnson, David J. Cappelleri Aug 2018

The Summer Undergraduate Research Fellowship (SURF) Symposium

Microscale devices can be found in applications ranging from sensors to structural components. At the microscale, surface forces dominate and hinder assembly: they introduce nonlinear interactions that are difficult to model for automation, limiting microsystem designs primarily to monolithic structures. Viable manufacturing of devices composed of multiple microparts requires methods for modeling these surface forces. This paper proposes supervised machine learning models to aid automated micromanipulation tasks for advanced manufacturing applications. The developed models use sets of training data to implicitly model surface interactions and predict end-effector placement and paths that …
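The truncated abstract does not name the model family used. As a rough, hypothetical sketch of the approach it describes (fitting a supervised model to recorded manipulation trials so that surface-force effects are captured implicitly rather than modeled analytically), something like the following could apply; the features, targets, synthetic training data, and regressor choice are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: supervised learning of micropart motion.
# Features, targets, and model choice are illustrative assumptions;
# the paper's actual pipeline is not specified in the truncated abstract.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each training example: end-effector pose relative to the micropart
# (x/y offsets, approach angle) plus a commanded push distance.
X = rng.uniform(-1.0, 1.0, size=(500, 4))

# Target: resulting micropart displacement (dx, dy). In practice these
# would come from vision-tracked manipulation trials, where nonlinear
# surface forces (adhesion, friction) shape the outcome; here the
# dynamics are a synthetic stand-in.
y = 0.5 * X[:, :2] + 0.1 * np.sin(3.0 * X[:, 2:4])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The regressor captures surface-force effects implicitly from data,
# sidestepping an explicit analytical force model.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```

A model trained this way can then score candidate end-effector placements and paths by their predicted effect on the part, which is the role the abstract assigns to its learned models.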


Model-Free Method Of Reinforcement Learning For Visual Tasks, Jeff S. Soldate, Jonghoon Jin, Eugenio Culurciello Aug 2014

The Summer Undergraduate Research Fellowship (SURF) Symposium

Neural networks have seen success in recent years in applications requiring high-level intelligence, such as categorization and assessment. In this work, we present a neural network model that learns control policies using reinforcement learning. It takes a raw pixel representation of the current state and outputs an approximation of a Q-value function, which represents the expected reward for each possible state-action pair. The action is chosen with an ε-greedy policy: the action with the highest expected reward is selected, with a small probability of a random action instead. We used gradient descent to update the weights and biases …
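The loop the abstract describes (raw pixels in, one Q value per action out, ε-greedy action selection, gradient-descent weight updates) can be sketched minimally as below. The paper's actual framework, network architecture, and hyperparameters are not given in the truncated abstract, so everything here, including the use of PyTorch, is an assumption.

```python
# Hypothetical sketch of the setup the abstract describes: a network maps
# raw pixels to one Q value per action, actions are chosen epsilon-greedily,
# and gradient descent updates the weights toward a one-step target.
import random
import torch
import torch.nn as nn

NUM_ACTIONS, EPSILON, GAMMA = 4, 0.1, 0.99

# Maps a raw 1x84x84 grayscale frame to one Q value per action,
# i.e. the expected reward for each state-action pair.
q_net = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=8, stride=4), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
    nn.Linear(256, NUM_ACTIONS),
)
optimizer = torch.optim.SGD(q_net.parameters(), lr=1e-3)

def select_action(state):
    """Epsilon-greedy: usually exploit the highest Q value, occasionally explore."""
    if random.random() < EPSILON:
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():
        return q_net(state).argmax(dim=1).item()

def update(state, action, reward, next_state, done):
    """Gradient-descent step on the squared one-step temporal-difference error."""
    q_sa = q_net(state)[0, action]           # Q(s, a) for the taken action
    with torch.no_grad():                    # bootstrap target is held fixed
        target = reward + (0.0 if done else GAMMA * q_net(next_state).max())
    loss = (q_sa - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Example step with a dummy frame of shape (batch=1, channel=1, 84, 84):
state = torch.zeros(1, 1, 84, 84)
action = select_action(state)
update(state, action, reward=1.0, next_state=torch.zeros(1, 1, 84, 84), done=False)
```

The ε term trades exploration against exploitation: with probability ε the agent acts randomly, and otherwise it takes the action its current Q estimates rate highest, which is exactly the policy the abstract names.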