Open Access. Powered by Scholars. Published by Universities.®
- Publication
- Faculty Publications (10)
- Christopher N. Roman (5)
- Theses and Dissertations (2)
- Department of Agricultural and Biological Systems Engineering: Dissertations, Theses, and Student Research (1)
- Department of Computer Science and Engineering: Dissertations, Theses, and Student Research (1)
- Dissertations, Master's Theses and Master's Reports (1)
- FIU Electronic Theses and Dissertations (1)
- Graduate School of Oceanography Faculty Publications (1)
- Graduate Theses, Dissertations, and Problem Reports (1)
- Master's Theses (1)
- Open Access Theses & Dissertations (1)
- USF Tampa Graduate Theses and Dissertations (1)
Articles 1 - 26 of 26
Full-Text Articles in Computer Engineering
Brain-Inspired Spatio-Temporal Learning With Application To Robotics, Thiago André Ferreira Medeiros
USF Tampa Graduate Theses and Dissertations
The human brain still holds many mysteries, and one of them is how it encodes information. This study intends to unravel at least one such mechanism by demonstrating how a set of specialized neurons may use spatial and temporal information to encode it. These neurons, called Place Cells, become active when the animal enters a particular place in the environment, allowing it to build a cognitive map of the environment. In a recent paper, Scleidorovich et al. (2022) demonstrated that it is possible to differentiate between two sequences of activations of a …
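The place-cell mechanism the abstract describes can be sketched with a toy model (an illustrative simplification, not the thesis's model): each cell has a Gaussian "place field" around a preferred location, and moving through the environment activates the cells in a spatial order.

```python
import numpy as np

# Hypothetical sketch of place-cell encoding: each cell fires maximally
# when the agent is near its preferred location (its "place field").
# The cell centers and field width sigma are illustrative choices.

def place_cell_rates(position, centers, sigma=0.5):
    """Gaussian firing rate of each place cell at a 2-D position."""
    d2 = np.sum((centers - position) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Three cells spaced along a corridor; traversing it activates them in order.
centers = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
path = [np.array([x, 0.0]) for x in np.linspace(0.0, 2.0, 9)]
sequence = [int(np.argmax(place_cell_rates(p, centers))) for p in path]
# The ordered sequence of most-active cells encodes the trajectory.
print(sequence)
```

The temporal order of peak activations is what distinguishes one trajectory from another, which is the kind of sequence the abstract refers to.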
Portable Robotic Navigation Aid For The Visually Impaired, Lingqiu Jin
Theses and Dissertations
This dissertation aims to address the limitations of existing visual-inertial (VI) SLAM methods - lack of needed robustness and accuracy - for assistive navigation in a large indoor space. Several improvements are made to existing SLAM technology, and the improved methods are used to enable two robotic assistive devices for the visually impaired, a robot cane and a robotic object manipulation aid, for assistive wayfinding and object detection/grasping. First, depth measurements are incorporated into the optimization process for device pose estimation to improve the success rate of VI SLAM's initialization and reduce scale drift. The improved method, called depth-enhanced visual-inertial …
Intelligent Autonomous Inspections Using Deep Learning And Detection Markers, Alejandro Martinez Acosta
Open Access Theses & Dissertations
Inspection of industrial and scientific facilities is a crucial task that must be performed regularly. These inspection tasks ensure that the facility's structure is in safe operational condition for humans. Furthermore, the safe operation of industrial machinery is dependent on the conditions of the environment. For safety reasons, inspections of both structural integrity and equipment are often performed manually by operators or technicians. Naturally, this is often a tedious and laborious task. Additionally, buildings and structures frequently contain hard-to-reach or dangerous areas, which can lead to the harm, injury or death of humans. Autonomous robotic systems offer an attractive solution …
Visual Homing For Robot Teams: Do You See What I See?, Damian Lyons, Noah Petzinger
Faculty Publications
Visual homing is a lightweight approach to visual navigation which does not require GPS. It is very attractive for robot platforms with a low computational capacity. However, a limitation is that the stored home location must be initially within the field of view of the robot. Motivated by the increasing ubiquity of camera information, we propose to address this line-of-sight limitation by leveraging camera information from other robots and fixed cameras. To home to a location that is not initially within view, a robot must be able to identify a common visual landmark with another robot that can be used …
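The core geometric idea behind landmark-based homing can be sketched in a translation-only form (an illustrative simplification, not the paper's method): if the same landmark is seen at one relative position from the home viewpoint and another from the robot's current viewpoint, the difference between those observations is exactly the motion that returns the robot home.

```python
import numpy as np

# Translation-only homing sketch (hypothetical simplification): with the
# landmark fixed at L, p_now = L - r_now and p_home = L - r_home, so
# p_now - p_home = r_home - r_now, i.e. the displacement back to home.

def homing_vector(p_landmark_now, p_landmark_home):
    """Move so the landmark appears as it does from home (rotation ignored)."""
    return p_landmark_now - p_landmark_home

# Landmark 3 m ahead from home, but 5 m ahead and 1 m left from here:
v = homing_vector(np.array([5.0, 1.0]), np.array([3.0, 0.0]))
print(v)  # step 2 m forward and 1 m left, back to the home viewpoint
```

Sharing the landmark's appearance between robots, as the paper proposes, is what lets a robot obtain `p_landmark_home` for a home location it has never seen itself.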
Planetary Rover Inertial Navigation Applications: Pseudo Measurements And Wheel Terrain Interactions, Cagri Kilic
Graduate Theses, Dissertations, and Problem Reports
Accurate localization is a critical component of any robotic system. During planetary missions, these systems are often limited by energy sources and slow spacecraft computers. Using proprioceptive localization (e.g., using an inertial measurement unit and wheel encoders) without external aiding is insufficient for accurate localization. This is mainly due to the integrated and unbounded errors of the inertial navigation solutions and the drifted position information from wheel encoders caused by wheel slippage. For this reason, planetary rovers often utilize exteroceptive (e.g., vision-based) sensors. On the one hand, localization with proprioceptive sensors is straightforward, computationally efficient, and continuous. On the other …
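One classic pseudo-measurement that bounds inertial drift, in the spirit of the dissertation's title, is the zero-velocity update: whenever the rover is known to be stopped, "velocity = 0" is fed to the filter as if it were a real measurement. A minimal scalar sketch (illustrative, not the dissertation's filter):

```python
# Zero-velocity pseudo-measurement on a 1-D Kalman velocity state
# (hypothetical sketch; noise values are made up). When the rover is
# stationary, measuring z = 0 with H = 1 pulls the drifted velocity
# estimate back toward zero and shrinks its variance.

def zupt_update(v_est, P, r_zupt=1e-4):
    """Kalman update with the pseudo-measurement z = 0."""
    K = P / (P + r_zupt)              # Kalman gain
    v_new = v_est + K * (0.0 - v_est)
    P_new = (1.0 - K) * P
    return v_new, P_new

# A drifted velocity estimate with large uncertainty...
v, P = 0.3, 0.5
v, P = zupt_update(v, P)
print(v, P)  # estimate pulled toward 0, variance greatly reduced
```

Because the rover stops frequently during planetary operations, such updates can be applied often without any exteroceptive sensing, which is why they pair well with the energy-constrained scenario the abstract describes.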
An Approach To Fast Multi-Robot Exploration In Buildings With Inaccessible Spaces, Matt Mcneill, Damian Lyons
Faculty Publications
The rapid exploration of unknown environments is a common application of autonomous multi-robot teams. For some types of exploration missions, a mission designer may possess some rudimentary knowledge about the area to be explored. For example, the dimensions of a building may be known, but not its floor layout or the location of furniture and equipment inside. For this type of mission, the Space-Based Potential Field (SBPF) method is an approach to multi-robot exploration which leverages a priori knowledge of area bounds to determine robot motion. Explored areas and obstacles exert a repulsive force, and unexplored areas exert an …
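The attract/repel rule the abstract describes can be sketched as a simple potential field (an illustrative sketch in the spirit of SBPF, not the paper's implementation): unexplored cells pull the robot, obstacles push it away, and the robot steps along the net force.

```python
import numpy as np

# Minimal potential-field sketch (hypothetical gains k_att, k_rep):
# unexplored grid cells attract, obstacles repel, and the robot's
# motion follows the vector sum of the two contributions.

def net_force(pos, unexplored, obstacles, k_att=1.0, k_rep=1.0):
    f = np.zeros(2)
    for c in unexplored:                 # attraction toward unknown space
        d = c - pos
        f += k_att * d / (np.linalg.norm(d) ** 2 + 1e-9)
    for o in obstacles:                  # repulsion away from obstacles
        d = pos - o
        f += k_rep * d / (np.linalg.norm(d) ** 3 + 1e-9)
    return f

pos = np.array([0.0, 0.0])
unexplored = [np.array([4.0, 0.0])]
obstacles = [np.array([-1.0, 0.0])]
f = net_force(pos, unexplored, obstacles)
print(f)  # points in +x: toward unexplored space, away from the obstacle
```

Marking explored cells as no longer attractive is what drives the team outward until the whole reachable area is covered.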
Evaluation Of Field Of View Width In Stereo-Vision-Based Visual Homing, Damian Lyons, Benjamin Barriage, Luca Del Signore
Faculty Publications
Visual homing is a local navigation technique used to direct a robot to a previously seen location by comparing the image of the original location with the current visual image. Prior work has shown that exploiting depth cues such as image scale or stereo-depth in homing leads to improved homing performance. While it is not unusual to use a panoramic field of view (FOV) camera in visual homing, it is unusual to have a panoramic FOV stereo-camera. So, while the availability of stereo-depth information may improve performance, the concomitant restricted FOV may be a detriment to performance, unless specialized stereo hardware …
Robot Navigation In Cluttered Environments With Deep Reinforcement Learning, Ryan Weideman
Master's Theses
The application of robotics in cluttered and dynamic environments provides a wealth of challenges. This thesis proposes a deep reinforcement learning based system that determines collision-free navigation velocities for a robot directly from a sequence of depth images and a desired direction of travel. The system is designed such that a real robot could be placed in an unmapped, cluttered environment and be able to navigate in a desired direction with no prior knowledge. Deep Q-learning, coupled with the innovations of double Q-learning and dueling Q-networks, is applied. Two modifications of this architecture are presented to incorporate direction heading information that …
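The two Q-learning refinements named in the abstract have compact closed forms, sketched here with illustrative values (not the thesis's trained networks): the dueling head combines a state value with mean-centered action advantages, and double Q-learning lets the online network choose the next action while the target network scores it.

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    advantages = np.asarray(advantages, dtype=float)
    return value + advantages - advantages.mean()

def double_q_target(reward, q_online_next, q_target_next, gamma=0.99):
    """Double Q-learning: online net picks the action, target net scores it."""
    a_star = int(np.argmax(q_online_next))
    return reward + gamma * q_target_next[a_star]

q = dueling_q(1.0, [0.5, -0.5, 0.0])        # -> [1.5, 0.5, 1.0]
target = double_q_target(0.0, q, np.array([0.2, 0.1, 0.3]))
print(q, target)
```

Subtracting the advantage mean keeps V and A identifiable, and splitting action selection from evaluation reduces the overestimation bias of plain Q-learning.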
A Dynamical System Approach For Resource-Constrained Mobile Robotics, Tauhidul Alam
FIU Electronic Theses and Dissertations
The revolution of autonomous vehicles has led to the development of robots with abundant sensors, actuators with many degrees of freedom, high-performance computing capabilities, and high-speed communication devices. These robots use a large volume of information from sensors to solve diverse problems. However, this usually leads to a significant modeling burden as well as excessive cost and computational requirements. Furthermore, in some scenarios, sophisticated sensors may not work precisely, the real-time processing power of a robot may be inadequate, the communication among robots may be impeded by natural or adversarial conditions, or the actuation control in a robot may be …
Applications Of Robot Operating System (Ros) To Mobile Microgrid Formation Outdoors, John Naglak
Dissertations, Master's Theses and Master's Reports
Application of mobile robots to microgrid formation has value for disaster response and service of forward operating bases. This thesis describes the development, testing and demonstration of a broad effort across multiple disciplines to enable outdoor positioning and connection of mobile microgrids for the first time. This work includes an outdoor waypoint controller for a UGV agent, specifically the Clearpath Husky. It details sensor fusion of 2D LiDAR and stereo vision, and fusion of odometry sources using an Extended Kalman Filter. Development of these software tools entails integration of many of the packages available through the Robot Operating System (ROS), with …
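The scalar core of fusing two odometry sources, as an EKF update does, is inverse-variance weighting: the less certain source gets the smaller weight. A minimal sketch with made-up noise values (not the thesis's filter configuration):

```python
# Illustrative fusion of wheel and visual odometry estimates of the
# same velocity by inverse-variance weighting -- the scalar essence of
# a Kalman/EKF measurement update. Variances here are hypothetical.

def fuse(x1, var1, x2, var2):
    """Combine two estimates of one quantity, weighting by certainty."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)
    return x, 1.0 / (w1 + w2)

# Wheel odometry says 1.0 m/s (slip-prone, high variance); visual
# odometry says 0.8 m/s (lower variance). Fusion leans toward vision.
v, var = fuse(1.0, 0.04, 0.8, 0.01)
print(v, var)
```

Note that the fused variance is smaller than either input variance, which is why combining odometry sources slows the growth of position drift.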
Event And Time-Triggered Control Module Layers For Individual Robot Control Architectures Of Unmanned Agricultural Ground Vehicles, Tyler Troyer
Department of Agricultural and Biological Systems Engineering: Dissertations, Theses, and Student Research
Automation in the agriculture sector has increased to an extent where the accompanying methods for unmanned field management are becoming more economically viable. This manifests in the industry’s recent presentation of conceptual cab-less machines that perform all field operations under the high-level task control of a single remote operator. A dramatic change in the overall workflow for field tasks that historically assumed the presence of a human in the immediate vicinity of the work is predicted. This shift in the entire approach to farm machinery work provides producers increased control and productivity over high-level tasks and less distraction from operating …
An Approach To Robust Homing With Stereovision, Fuqiang Fu, Damian Lyons
Faculty Publications
Visual homing is a bioinspired approach to robot navigation which can be fast and uses few assumptions. However, visual homing in a cluttered and unstructured outdoor environment offers several challenges to homing methods that have been developed for primarily indoor environments. One issue is that any current image during homing may be tilted with respect to the home image. Another is that moving through a cluttered scene during homing may cause obstacles to intervene between the home scene and location and the current scene and location. In this paper, we introduce a robust method to improve a previously developed …
Evaluation Of Parallel Reduction Strategies For Fusion Of Sensory Information From A Robot Team., Damian M. Lyons, Joseph Leroy
Faculty Publications
The advantage of using a team of robots to search or to map an area is that by navigating the robots to different parts of the area, searching or mapping can be completed more quickly. A crucial aspect of the problem is the combination, or fusion, of data from team members to generate an integrated model of the search/mapping area. In prior work we looked at the issue of removing mutual robot views from an integrated point cloud model built from laser and stereo sensors, leading to a cleaner and more accurate model. This paper addresses a further challenge: Even …
Eliminating Mutual Views In Fusion Of Ranging And Rgb-D Data From Robot Teams Operating In Confined Areas, Damian M. Lyons, Karma Shrestha
Faculty Publications
We address the problem of fusing laser and RGB-D data from multiple robots operating in close proximity to one another. By having a team of robots working together, a large area can be scanned quickly, or a smaller area scanned in greater detail. However, a key aspect of this problem is the elimination of the spurious readings due to the robots operating in close proximity. While there is an extensive literature on the mapping and localization aspect of this problem, our problem differs from the dynamic map problem in that it involves at least one kind of transient map feature, robots viewing …
Decentralized Collision Avoidance, Jayasri K. Janardanan
Department of Computer Science and Engineering: Dissertations, Theses, and Student Research
Autonomous robots must carry out their tasks as independently as possible, and each robot may be assigned different tasks at different locations. As these tasks are being performed, the robots have to navigate correctly such that the assigned tasks are completed efficiently, while also avoiding each other and other obstacles. To accomplish effective navigation, we must ensure that the robots are calibrated to avoid colliding with any kind of object in their paths. Each robot has to sense the obstacles on its path and take the necessary corrective measures to avoid those obstacles. In a situation with multiple robots, robots may …
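A decentralized avoidance rule needs no central coordinator: each robot independently blends its goal direction with a repulsive term from nearby sensed obstacles. The sketch below is a hypothetical minimal rule, not the thesis's algorithm; the safety radius and blending are illustrative choices.

```python
import numpy as np

# Hypothetical reactive avoidance: steer toward the goal, but add a
# push-away term for any obstacle sensed inside the safety radius.

def avoid_step(pos, goal, obstacles, safe_dist=1.0):
    v = goal - pos
    v = v / (np.linalg.norm(v) + 1e-9)          # unit vector to goal
    for o in obstacles:
        d = pos - o                              # vector away from obstacle
        dist = np.linalg.norm(d)
        if dist < safe_dist:                     # too close: push away,
            v += (safe_dist - dist) * d / (dist + 1e-9)  # harder when nearer
    return v / (np.linalg.norm(v) + 1e-9)

pos, goal = np.array([0.0, 0.0]), np.array([5.0, 0.0])
heading = avoid_step(pos, goal, [np.array([0.5, 0.1])])
print(heading)  # deflected off the direct +x line to pass the obstacle
```

Because each robot runs this rule on its own sensor data, the scheme degrades gracefully when communication between robots is limited.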
Fusion Of Ranging Data From Robot Teams Operating In Confined Areas, Damian M. Lyons, Karma Shrestha, Tsung-Ming Liu
Faculty Publications
We address the problem of fusing laser ranging data from multiple mobile robots that are surveying an area as part of a robot search and rescue or area surveillance mission. We are specifically interested in the case where members of the robot team are working in close proximity to each other. The advantage of this teamwork is that it greatly speeds up the surveying process; the area can be quickly covered even when the robots use a random motion exploration approach. However, the disadvantage of the close proximity is that it is possible, and even likely, that the laser ranging …
Robust Course-Boundary Extraction Algorithms For Autonomous Vehicles, Chris Roman, Charles Reinholtz
Christopher N. Roman
Practical autonomous robotic vehicles require dependable methods for accurately identifying course or roadway boundaries. The authors have developed a method to reliably extract the boundary line using simple dynamic thresholding, noise filtering, and blob removal. This article describes their efforts to apply this procedure in developing an autonomous vehicle.
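The pipeline the abstract names, dynamic thresholding followed by removal of small noise "blobs", can be sketched end to end. This is a toy illustration of the named steps, not the authors' parameters or implementation:

```python
import numpy as np
from collections import deque

# Toy boundary-extraction sketch: binarize an image relative to its own
# statistics, then drop 4-connected components smaller than min_size.
# Threshold rule and sizes are illustrative assumptions.

def dynamic_threshold(img):
    """Binarize relative to the image's own mean and spread."""
    return (img > img.mean() + img.std()).astype(np.uint8)

def remove_small_blobs(mask, min_size=3):
    """Keep only 4-connected components with at least min_size pixels."""
    out = np.zeros_like(mask)
    seen = np.zeros(mask.shape, dtype=bool)
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                comp, q = [(i, j)], deque([(i, j)])
                seen[i, j] = True
                while q:                          # flood-fill the component
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            comp.append((ny, nx))
                            q.append((ny, nx))
                if len(comp) >= min_size:         # keep only large blobs
                    for y, x in comp:
                        out[y, x] = 1
    return out

# A bright "boundary line" column plus one speck of noise.
img = np.zeros((5, 5))
img[:, 2] = 1.0
img[0, 4] = 1.0
mask = remove_small_blobs(dynamic_threshold(img))
print(mask)  # the 5-pixel line survives; the 1-pixel speck is removed
```

Thresholding against the image's own statistics is what makes the method robust to lighting changes across the course.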
Intelligent Behavioral Action Aiding For Improved Autonomous Image Navigation, Kwee Guan Eng
Theses and Dissertations
In egomotion image navigation, errors are common, especially when traversing areas with few landmarks. Since image navigation is often used as a passive navigation technique in Global Positioning System (GPS)-denied environments, egomotion accuracy is important for precise navigation in these challenging environments. One of the causes of egomotion errors is inaccurate landmark distance measurements, e.g., sensor noise. This research determines a landmark location egomotion error model that quantifies the effects of landmark locations on egomotion value uncertainty and errors. The error model accounts for increases in landmark uncertainty due to landmark distance and image centrality. A robot then uses …
Navigation Of Uncertain Terrain By Fusion Of Information From Real And Synthetic Imagery, Damian M. Lyons, Prem Nirmal, D. Paul Benjamin
Faculty Publications
We consider the scenario where an autonomous platform that is searching or traversing a building may observe unstable masonry or may need to travel over unstable rubble. A purely behaviour-based system may handle these challenges but produce behaviour that works against long-term goals such as reaching a victim as quickly as possible. We extend our work on ADAPT, a cognitive robotics architecture that incorporates 3D simulation and image fusion, to allow the robot to predict the behaviour of physical phenomena, such as falling masonry, and take actions consonant with long-term goals. We experimentally evaluate a cognitive-only and reactive-only …
Sharing And Fusing Landmark Information In A Team Of Autonomous Robots, Damian M. Lyons
Faculty Publications
A team of robots working to explore and map a space may need to share information about landmarks so as to register local maps and to plan effective exploration strategies. In this paper we investigate the use of spatial histograms (spatiograms) as a common representation for exchanging landmark information between robots. Each robot can use sonar, stereo, laser and image information to identify potential landmarks. The sonar, laser and stereo information provide the spatial dimension of the spatiogram in a landmark-centered coordinate frame while video provides the image information. We call the result a terrain spatiogram. This representation can be shared …
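A spatiogram augments an ordinary histogram with where its pixels lie: for each intensity bin it stores the count plus the spatial mean and covariance of the contributing pixels. A small sketch of that idea (binning choices are illustrative, and this is the generic construction rather than the paper's terrain spatiogram):

```python
import numpy as np

# First-order spatiogram sketch: per intensity bin, record the pixel
# count and the spatial mean/covariance of the pixels in that bin.

def spatiogram(img, n_bins=4):
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    bins = np.minimum((img * n_bins).astype(int), n_bins - 1)
    out = {}
    for b in range(n_bins):
        pts = np.stack([ys[bins == b], xs[bins == b]], axis=1)
        if len(pts):
            cov = np.cov(pts.T) if len(pts) > 1 else np.zeros((2, 2))
            out[b] = (len(pts), pts.mean(axis=0), cov)
    return out

img = np.zeros((4, 4))
img[:2, :] = 0.9                    # bright top half, dark bottom half
sg = spatiogram(img)
count, mean, cov = sg[3]            # the brightest bin
print(count, mean)                  # 8 pixels, centred on the top rows
```

Two images with identical histograms but different layouts get different spatiograms, which is what makes the representation useful for matching landmarks across robots.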
Micro-Bathymetric Mapping Using Acoustic Range Images, Christopher Roman, Hanumant Singh
Christopher N. Roman
This work focuses on the creation of high-resolution micro-bathymetric maps using a high-frequency pencil beam sonar. These maps typically cover areas of tens to hundreds of square meters. Data is collected using a sonar mounted to an underwater vehicle that can be positioned at discrete locations on the sea floor or flown in a survey pattern above the bottom. Specifically, we are focused on improving the accuracy of these terrain maps by merging sonar pings taken from multiple vantage points over the same location. This requires the adaptation of data registration techniques to handle errors related to the …
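Merging pings from multiple vantage points rests on rigid registration: recovering the rotation and translation that best aligns two point sets. The standard SVD (Procrustes/Kabsch) solution, which is the inner step of ICP, can be sketched for the 2-D case; correspondences here are assumed known, which real sonar registration must estimate.

```python
import numpy as np

# Rigid registration sketch: least-squares R, t with R @ src + t ~ dst,
# via the SVD/Kabsch method. Points and the true transform are made up.

def rigid_align(src, dst):
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)       # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src @ R_true.T + np.array([2.0, -1.0])
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [2.0, -1.0]))
```

For noise-free correspondences the transform is recovered exactly; with noisy sonar returns the same solution gives the least-squares alignment, which is why it anchors ICP-style map merging.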
Optical And Acoustic Habitat Characterization With The Seabed Auv, Hanumant Singh, Ryan Eustice, Oscar Pizarro, Christopher Roman
Christopher N. Roman
The Seabed AUV is an Autonomous Underwater Vehicle (AUV) built to serve as a readily available and operationally simple tool for high resolution imaging. It is a hover-capable vehicle that performs optical sensing with a 12-bit 1280×1024 CCD camera and acoustic high resolution mapping using an MST 300 kHz sidescan and a 675 kHz pencil beam bathymetric sonar. The AUV has been designed for operations from small vessels with minimal support equipment. It has an operational depth of 2000 meters and at 1 m/s can run for up to 10 hours. In this paper we report on the …
Estimation Of Error In Large Area Underwater Photomosaics Using Vehicle Navigation Data, C. Roman, H. Singh
Christopher N. Roman
Creating geometrically accurate photomosaics of underwater sites using images collected from an AUV or ROV is a difficult task due to dimensional errors which grow as a function of 3D image distortion and the mosaicking process. Although photomosaics are accurate locally, their utility for accurately representing a large survey area is jeopardized by this error growth. Evaluating the error in a mosaic is the first step in creating globally accurate photomosaics of an unstructured environment with bounded error. Using vehicle navigation data and sensor offsets, it is possible to estimate the error present in large area photomosaics independent of the …
A New Autonomous Underwater Vehicle For Imaging Research, C. Roman, O. Pizarro, R. Eustice, H. Singh
Christopher N. Roman
Currently, unmanned underwater vehicles tend to be either cumbersome and complex to run, or operationally simple but not quite suitable as platforms for deep water imaging. This paper presents an alternative design in the form of a new low-cost and easier-to-use autonomous underwater vehicle (AUV) for imaging research. The objective of the vehicle is to serve as a readily available and operationally simple tool that allows rapid testing of imaging algorithms in areas such as photomosaicking, 3D image reconstruction from a single camera, image based navigation, and multi-sensor fusion of bathymetry and optical data. These are all current …
Robust Course-Boundary Extraction Algorithms For Autonomous Vehicles, Chris Roman, Charles Reinholtz
Graduate School of Oceanography Faculty Publications
Practical autonomous robotic vehicles require dependable methods for accurately identifying course or roadway boundaries. The authors have developed a method to reliably extract the boundary line using simple dynamic thresholding, noise filtering, and blob removal. This article describes their efforts to apply this procedure in developing an autonomous vehicle.