Open Access. Powered by Scholars. Published by Universities.®

Robotics Commons

Articles 1 - 13 of 13

Full-Text Articles in Robotics

Reinforcement Learning And Place Cell Replay In Spatial Navigation, Chance Hamilton, Pablo Scleidorovich PhD, Alfredo Weitzenfeld PhD May 2023

36th Florida Conference on Recent Advances in Robotics

In the last decade, studies have demonstrated that hippocampal place cells influence rats’ navigational learning ability. Moreover, researchers have observed that place cell sequences associated with routes leading to a reward are reactivated during rest periods. This phenomenon is known as Hippocampal Replay, which is thought to aid navigational learning and memory consolidation. These findings in neuroscience have inspired new robot navigation models that emulate the learning process of mammals. This study presents a novel model that encodes path information using place cell connections formed during online navigation. Our model employs these connections to generate sequences of
state-action pairs to …
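
The role of replay here can be illustrated with a generic sketch rather than the paper's actual model: a stored route of (state, action, reward, next state) tuples is swept repeatedly offline so that reward information propagates along the whole path. The grid size, reward location, and learning constants below are illustrative assumptions.

```python
import numpy as np

# Illustrative 5x5 grid world; states are cell indices, 4 actions (N, E, S, W).
N_STATES, N_ACTIONS = 25, 4
GOAL = 24                      # assumed reward location
ALPHA, GAMMA = 0.1, 0.95

Q = np.zeros((N_STATES, N_ACTIONS))

# A route experienced during online navigation, stored as (state, action,
# reward, next_state) tuples -- the analogue of a stored place cell sequence.
route = [(0, 1, 0.0, 1), (1, 1, 0.0, 2), (2, 2, 0.0, 7),
         (7, 2, 0.0, 12), (12, 1, 0.0, 13), (13, 1, 1.0, GOAL)]

def replay(route, n_sweeps=20):
    """Offline replay: sweep the stored sequence backwards so the reward
    at the end of the route propagates to earlier states in a few passes."""
    for _ in range(n_sweeps):
        for s, a, r, s_next in reversed(route):
            td_target = r + GAMMA * Q[s_next].max()
            Q[s, a] += ALPHA * (td_target - Q[s, a])

replay(route)
print(Q[0])   # the route's first state now carries value toward the reward
```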


Motion Control Simulation Of A Hexapod Robot, Weishu Zhan Apr 2023

Dartmouth College Master’s Theses

This thesis addresses hexapod robot motion control. Insect morphology and locomotion patterns inform the design of a robotic model, and motion control is achieved via trajectory planning and bio-inspired principles. Additionally, deep learning and multi-agent reinforcement learning are employed to train the robot’s motion control strategy, with leg coordination achieved using a multi-agent deep reinforcement learning framework. The thesis makes the following contributions:

First, research on legged robots is synthesized, with a focus on hexapod robot motion control. Analysis of insect anatomy informs the hexagonal robot body and three-joint single-leg design, which is assembled using SolidWorks. Different gaits are …
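
As a structural sketch of the multi-agent framing mentioned above (one agent per leg issuing its own joint commands from a shared body observation), the snippet below uses placeholder random policies; the observation contents and action ranges are assumptions, not the thesis's trained controllers.

```python
import numpy as np

N_LEGS, N_JOINTS = 6, 3          # hexapod: six legs, three joints each

class LegAgent:
    """One agent per leg; maps a shared body observation plus its own
    leg state to target joint angles (random placeholder policy)."""
    def __init__(self, leg_id, rng):
        self.leg_id = leg_id
        self.rng = rng

    def act(self, body_obs, leg_obs):
        return self.rng.uniform(-0.3, 0.3, size=N_JOINTS)  # joint targets (rad)

rng = np.random.default_rng(0)
agents = [LegAgent(i, rng) for i in range(N_LEGS)]

body_obs = np.zeros(6)                     # e.g. body pose / velocity (assumed)
leg_obs = np.zeros((N_LEGS, N_JOINTS))     # current joint angles (assumed)

# Each control step, every leg agent acts independently; coordination emerges
# from the shared observation and a joint reward during training.
joint_targets = np.stack([agents[i].act(body_obs, leg_obs[i]) for i in range(N_LEGS)])
print(joint_targets.shape)   # (6, 3)
```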


Benchmarking Model Predictive Control And Reinforcement Learning For Legged Robot Locomotion, Shivayogi Akki Jan 2023

Dissertations, Master's Theses and Master's Reports

This research delves into the realm of quadrupedal robotics, focusing on the comparative analysis of Model Predictive Control (MPC) and Reinforcement Learning (RL) as predominant control strategies. Through the comprehensive dataset compiled and the insights derived from this analysis, this research aims to serve as a valuable resource for the legged robotics community, guiding researchers and practitioners in the selection and implementation of control strategies. The ultimate goal is to contribute to the advancement of legged robot capabilities and facilitate their successful deployment in real-world applications.

In this study, we employ the Unitree Go1 quadrupedal robot as a testbed, subjecting …
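
One way such a head-to-head benchmark can be organized is to run each controller on the same task and log identical metrics. The toy first-order dynamics and the mpc_step/rl_step placeholders below are hypothetical stand-ins for illustration, not the interfaces used in this work.

```python
import numpy as np

def mpc_step(state, reference):
    """Placeholder for an MPC controller (hypothetical)."""
    return 0.5 * (reference - state)

def rl_step(state, reference):
    """Placeholder for a learned RL policy (hypothetical)."""
    return np.tanh(reference - state)

def benchmark(controller, steps=200):
    """Run a controller on the same setpoint task and report common metrics."""
    state, reference = 0.0, 1.0
    errors, efforts = [], []
    for _ in range(steps):
        u = controller(state, reference)
        state += 0.1 * u                    # toy first-order dynamics (assumed)
        errors.append(abs(reference - state))
        efforts.append(abs(u))
    return {"tracking_error": float(np.mean(errors)),
            "control_effort": float(np.mean(efforts))}

for name, ctrl in [("MPC", mpc_step), ("RL", rl_step)]:
    print(name, benchmark(ctrl))
```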


Adaptive Multi-Scale Place Cell Representations And Replay For Spatial Navigation And Learning In Autonomous Robots, Pablo Scleidorovich Oct 2022

USF Tampa Graduate Theses and Dissertations

Place cells are among the most widely studied neurons thought to play a vital role in spatial cognition. Extensive studies show that their activity in the rodent hippocampus is highly correlated with the animal’s spatial location, forming “place fields” of smaller sizes near the dorsal pole and larger sizes near the ventral pole. Despite these advances, it is still unclear how this multi-scale representation enables navigation in complex environments.

In this dissertation, we analyze the place cell representation from a computational point of view, evaluating how multi-scale place fields impact navigation in large and cluttered environments. The objectives are to …
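
The multi-scale place field idea can be pictured with Gaussian tuning curves whose widths grow along a dorsoventral-like axis. The arena size, field widths, and cell counts below are illustrative choices, not the dissertation's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
ENV_SIZE = 10.0                       # square arena, side length in meters (assumed)
SCALES = [0.3, 0.8, 2.0]              # small ("dorsal") to large ("ventral") field radii
CELLS_PER_SCALE = 50

# Each place cell has a preferred location and a scale-dependent field width.
centers = rng.uniform(0, ENV_SIZE, size=(len(SCALES) * CELLS_PER_SCALE, 2))
widths = np.repeat(SCALES, CELLS_PER_SCALE)

def place_cell_activity(position):
    """Population activity: a Gaussian bump around each cell's field center."""
    d2 = np.sum((centers - position) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * widths ** 2))

activity = place_cell_activity(np.array([3.0, 4.0]))
print(activity.shape, activity.max())   # (150,) and the strongest response
```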


Analyzing Decision-Making In Robot Soccer For Attacking Behaviors, Justin Rodney Mar 2022

USF Tampa Graduate Theses and Dissertations

In robot soccer, decision-making is critical to the performance of a team’s software system. The University of South Florida’s (USF) RoboBulls team implements robot behaviors using traditional methods such as analytical geometry for path planning and for determining whether an action should be taken. In recent works, Machine Learning (ML) and Reinforcement Learning (RL) techniques have been used to calculate the probability of success for a pass or goal, and even to train models for performing low-level skills such as traveling towards a ball and shooting it towards the goal [1, 2]. Open-source frameworks have been created for training Reinforcement Learning …
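
For illustration, the probability of a successful pass can be estimated with a simple logistic model over geometric features; the features and weights below are made up for the example and are not the RoboBulls implementation.

```python
import math

def pass_success_probability(pass_distance, nearest_opponent_distance, pass_angle):
    """Toy logistic model: probability a pass succeeds given simple
    geometric features (all weights are invented for illustration)."""
    score = (1.5                        # bias
             - 0.4 * pass_distance       # longer passes are riskier
             + 0.8 * nearest_opponent_distance
             - 0.3 * abs(pass_angle))    # sharper angles are harder
    return 1.0 / (1.0 + math.exp(-score))

# Example: 2 m pass, closest opponent 1.5 m from the passing lane, 0.2 rad angle.
print(round(pass_success_probability(2.0, 1.5, 0.2), 3))
```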


A Study Of Deep Reinforcement Learning In Autonomous Racing Using Deepracer Car, Mukesh Ghimire May 2021

Honors Theses

Reinforcement learning is thought to be a promising branch of machine learning that has the potential to help us develop an Artificial General Intelligence (AGI) machine. Among the main machine learning paradigms, namely supervised, semi-supervised, unsupervised, and reinforcement learning, reinforcement learning is different in the sense that it explores the environment without prior knowledge and determines the optimal action. This study attempts to understand the concept behind reinforcement learning and the mathematics behind it, and to see it in action by deploying the trained model in Amazon's DeepRacer car. DeepRacer, a 1/18th-scale autonomous car, is the agent which is trained …
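
The idea that an RL agent starts with no prior model of the environment and discovers good actions by trial and error can be shown with a few lines of tabular Q-learning on a toy corridor task (an illustrative stand-in, not the DeepRacer simulator).

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 5, 2           # corridor of 5 cells; actions: 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.2, 0.9, 0.1
Q = np.zeros((N_STATES, N_ACTIONS))  # starts with no knowledge of the environment

def step(s, a):
    """Hidden environment dynamics: move along the corridor, reward at the end."""
    s_next = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s_next, float(s_next == N_STATES - 1), s_next == N_STATES - 1

def greedy(s):
    """Greedy action with random tie-breaking (Q is all zeros at the start)."""
    return int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))

for _ in range(500):                 # episodes of pure trial and error
    s, done = 0, False
    while not done:
        a = int(rng.integers(N_ACTIONS)) if rng.random() < EPSILON else greedy(s)
        s_next, r, done = step(s, a)
        Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() * (not done) - Q[s, a])
        s = s_next

print(Q[:-1].argmax(axis=1))   # learned policy for non-terminal cells: all 1 (move right)
```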


Reinforcement Learning Approach For Inspect/Correct Tasks, Hoda Nasereddin Dec 2020

LSU Doctoral Dissertations

In this research, we focus on the application of reinforcement learning (RL) in automated agent tasks involving considerable target variability (i.e., characterized by stochastic distributions); in particular, learning of inspect/correct tasks. Examples include automated identification and correction of rivet failures in airplane maintenance procedures, and automated cleaning of surgical instruments in a hospital sterilization processing department. The location of defects and the corrective action to be taken for each vary from episode to episode. What must be learned are optimal stochastic strategies, rather than an optimization for any single defect type and location. RL has been widely applied in robotics …
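
A compact sketch of the stochastic-target structure described above: each episode draws defect locations at random, so a strategy is scored by its expected cost rather than by its performance on any single defect configuration. The surface discretization and defect model below are assumptions for illustration, not the dissertation's task definitions.

```python
import numpy as np

rng = np.random.default_rng(2)
N_CELLS = 10                        # discretized surface to inspect (assumed)

def sample_episode():
    """Each episode, defect locations vary stochastically."""
    n_defects = rng.integers(1, 4)
    return set(rng.choice(N_CELLS, size=n_defects, replace=False).tolist())

def run_policy(inspect_order, n_episodes=1000):
    """Score a fixed inspection order by the average steps to correct all defects."""
    total = 0
    for _ in range(n_episodes):
        defects = sample_episode()
        for step_count, cell in enumerate(inspect_order, start=1):
            defects.discard(cell)            # inspect and, if defective, correct
            if not defects:
                total += step_count
                break
    return total / n_episodes

# A sweep over all cells is evaluated by its average-case cost, not by
# performance on any single defect configuration.
print(run_policy(list(range(N_CELLS))))
```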


A Comprehensive And Modular Robotic Control Framework For Model-Less Control Law Development Using Reinforcement Learning For Soft Robotics, Charles Sullivan Jan 2020

Open Access Theses & Dissertations

Soft robotics is a growing field in robotics research. Heavily inspired by biological systems, these robots are made of softer, non-linear materials such as elastomers and are actuated using several novel methods, from fluidic actuation channels to shape-changing materials such as electro-active polymers. Highly non-linear materials make modeling difficult, and sensors are still an area of active research. These issues often render typical control and modeling techniques inadequate for soft robotics. Reinforcement learning is a branch of machine learning that focuses on model-less control by mapping states to actions that maximize a specific reward signal. Reinforcement learning has …
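
The model-less "states to actions that maximize reward" idea can be sketched without any model of the actuator: below, a simple hill-climbing search tunes a linear state-to-action law using only observed reward. The one-dimensional soft-actuator stand-in and the hill-climbing search are assumptions; they substitute for the thesis's framework and RL algorithms.

```python
import numpy as np

rng = np.random.default_rng(3)
TARGET = 0.7                          # desired actuator extension (assumed units)

def rollout(gain, steps=50):
    """Reward from running a linear state->action law on an unknown,
    nonlinear 'plant'; the controller never sees the plant equations."""
    x, reward = 0.0, 0.0
    for _ in range(steps):
        u = gain * (TARGET - x)                # action computed from state only
        x += 0.1 * np.tanh(u) - 0.02 * x       # hidden soft-actuator dynamics
        reward -= (TARGET - x) ** 2            # reward: negative tracking error
    return reward

# Model-less improvement: perturb the policy parameter, keep what scores better.
gain = 0.1
for _ in range(200):
    candidate = gain + rng.normal(scale=0.1)
    if rollout(candidate) > rollout(gain):
        gain = candidate

print(round(gain, 3), round(rollout(gain), 3))
```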


Robot Motion Planning In Dynamic Environments, Hao-Tien Lewis Chiang Dec 2019

Computer Science ETDs

Robot motion planning in dynamic environments is critical for many robotic applications, such as self-driving cars, UAVs and service robots operating in changing environments. However, motion planning in dynamic environments is very challenging as this problem has been shown to be NP-Hard and in PSPACE, even in the simplest case. As a result, the lack of safe, efficient planning solutions for real-world robots is one of the biggest obstacles for ubiquitous adoption of robots in everyday life. Specifically, there are four main challenges facing motion planning in dynamic environments: obstacle motion uncertainty, obstacle interaction, complex robot dynamics and noise, and …


Utilizing Trajectory Optimization In The Training Of Neural Network Controllers, Nicholas Kimball Sep 2019

Master's Theses

Applying reinforcement learning to control systems enables the use of machine learning to develop elegant and efficient control laws. Coupled with the representational power of neural networks, reinforcement learning algorithms can learn complex policies that can be difficult to emulate using traditional control system design approaches. In this thesis, three different model-free reinforcement learning algorithms, including Monte Carlo Control, REINFORCE with baseline, and Guided Policy Search are compared in simulated, continuous action-space environments. The results show that the Guided Policy Search algorithm is able to learn a desired control policy much faster than the other algorithms. In the inverted pendulum …
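
Of the algorithms named above, REINFORCE with a baseline is the easiest to show compactly. The sketch below uses a Gaussian policy with a learned mean on a one-step continuous-action toy problem; the task and hyperparameters are illustrative, not the thesis's environments.

```python
import numpy as np

rng = np.random.default_rng(4)
SIGMA, LR = 0.5, 0.05
theta = 0.0                 # mean of the Gaussian policy; the optimum is at 2.0
baseline = 0.0              # running average of returns, used to reduce variance

def reward(action):
    return -(action - 2.0) ** 2          # one-step bandit-style task (assumed)

for episode in range(2000):
    action = rng.normal(theta, SIGMA)          # sample an action from the policy
    ret = reward(action)
    advantage = ret - baseline                 # baseline-subtracted return
    # grad of log N(action | theta, sigma) w.r.t. theta = (action - theta) / sigma^2
    theta += LR * advantage * (action - theta) / SIGMA ** 2
    baseline += 0.05 * (ret - baseline)        # update the baseline estimate

print(round(theta, 2))      # close to 2.0, the reward-maximizing action
```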


Reinforcement Learning In Robotic Task Domains With Deictic Descriptor Representation, Harry Paul Moore Oct 2018

LSU Doctoral Dissertations

In the field of reinforcement learning, robot task learning in a specific environment with a Markov decision process backdrop has seen much success. Yet extending these results to learning a task for an environment domain has not been as fruitful, even for advanced methodologies such as relational reinforcement learning. In our research into robot learning in environment domains, we utilize a form of deictic representation for the robot’s description of the task environment. However, the non-Markovian nature of the deictic representation leads to perceptual aliasing and conflicting actions, invalidating standard reinforcement learning algorithms. To circumvent this difficulty, several past research …
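
Perceptual aliasing under a deictic description can be seen in a toy example: two different world states collapse onto the same egocentric descriptor, so a policy defined over descriptors cannot distinguish them. The descriptor fields chosen here are illustrative, not the dissertation's representation.

```python
def deictic_descriptor(world, focus):
    """Egocentric description: the focused object and what sits directly
    next to it, ignoring the rest of the scene."""
    i = world.index(focus)
    right = world[i + 1] if i < len(world) - 1 else "nothing"
    return (focus, right)

# Two distinct world states (the goal "door" is on opposite ends)...
world_a = ["wall", "box", "ball", "door"]
world_b = ["door", "box", "ball", "wall"]

# ...yield the identical deictic descriptor when the agent attends to "box",
# even though the best action (head left vs. right toward the door) differs.
# This collapse is the perceptual aliasing that breaks the Markov property.
print(deictic_descriptor(world_a, "box"))   # ('box', 'ball')
print(deictic_descriptor(world_b, "box"))   # ('box', 'ball')
```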


Multi-Scale Spatial Cognition Models And Bio-Inspired Robot Navigation, Martin I. Llofriu Alonso Jun 2017

USF Tampa Graduate Theses and Dissertations

The rodent navigation system has been the focus of study for over a century. Recent discoveries have provided insight into the inner workings of this system. Since then, computational approaches have been used to test hypotheses, as well as to improve robot navigation and learning by taking inspiration from the rodent navigation system.

This dissertation focuses on the study of the multi-scale representation of the rat’s current location found in the rat hippocampus. It first introduces a model that uses these different scales in the Morris maze task to show their advantages. The generalization power of larger scales of …


Neuron Clustering For Mitigating Catastrophic Forgetting In Supervised And Reinforcement Learning, Benjamin Frederick Goodrich Dec 2015

Doctoral Dissertations

Neural networks have had many great successes in recent years, particularly with the advent of deep learning and many novel training techniques. One issue that has affected neural networks and prevented them from performing well in more realistic online environments is that of catastrophic forgetting. Catastrophic forgetting affects supervised learning systems when input samples are temporally correlated or are non-stationary. However, most real-world problems are non-stationary in nature, resulting in prolonged periods of time separating inputs drawn from different regions of the input space.

Reinforcement learning represents a worst-case scenario when it comes to precipitating catastrophic forgetting in neural networks. …
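
Catastrophic forgetting itself is easy to reproduce: train a small network on one region of the input space, then train only on another, and the error on the first region climbs back up. The tiny numpy network and the two toy tasks below are illustrative assumptions, not the dissertation's experiments.

```python
import numpy as np

rng = np.random.default_rng(5)

# Tiny one-hidden-layer network trained by plain SGD.
W1, b1 = rng.normal(scale=0.5, size=(16, 1)), np.zeros((16, 1))
W2, b2 = rng.normal(scale=0.5, size=(1, 16)), np.zeros((1, 1))

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2, h

def sgd_step(x, y, lr=0.05):
    global W1, b1, W2, b2
    pred, h = forward(x)
    err = pred - y
    W2 -= lr * err @ h.T;  b2 -= lr * err
    dh = (W2.T @ err) * (1 - h ** 2)
    W1 -= lr * dh @ x.T;   b1 -= lr * dh

def mse(xs, ys):
    return float(np.mean([(forward(x)[0] - y) ** 2 for x, y in zip(xs, ys)]))

# Task A and task B live in different regions of the input space.
task_a = [(np.array([[x]]), np.array([[np.sin(3 * x)]])) for x in np.linspace(0, 1, 20)]
task_b = [(np.array([[x]]), np.array([[-1.0]]))          for x in np.linspace(2, 3, 20)]

for _ in range(2000):                      # phase 1: train only on task A
    sgd_step(*task_a[rng.integers(len(task_a))])
err_a_before = mse(*zip(*task_a))

for _ in range(2000):                      # phase 2: train only on task B
    sgd_step(*task_b[rng.integers(len(task_b))])
err_a_after = mse(*zip(*task_a))           # task A performance degrades

print(f"task A error before/after training on B: {err_a_before:.4f} / {err_a_after:.4f}")
```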