Articles 1 - 20 of 20
Full-Text Articles in Computer Engineering
Seal Counting On Our Plages (S.C.O.O.P.), Kaanan Kharwa
Master's Theses
The Vertebrate Integrative Physiology (VIP) lab monitors the population of northern elephant seals at the largest mainland breeding colony, located at Piedras Blancas (San Simeon, CA). As the population expands, more human-seal interactions and conflicts over land use occur. The VIP lab's work informs California State Parks and helps with the management of the rookery. Currently, members of the VIP lab fly a drone over the beaches, capture multiple images, and manually count the seals, which takes around 14 to 21 hours of analysis per survey. Machine learning methods such as Convolutional Neural Networks (CNN) and Region-based Convolutional Neural Networks …
Real-Time Gun Detection In Video Streams Using Yolo V8, Harish Kumar Reddy Kunchala
Electronic Theses, Projects, and Dissertations
In this research, we advance the domain of public safety by developing a machine learning model that utilizes the YOLO v8 architecture for real-time detection of firearms in video streams. A diverse and extensive dataset, capturing a range of firearms in varying lighting and backgrounds, was meticulously assembled and preprocessed to enhance the model's adaptability to real-world scenarios. Leveraging the YOLO v8 framework, known for its real-time object detection accuracy, the model was fine-tuned to accurately identify firearms across different shapes and orientations.
The training phase capitalized on GPU computing and transfer learning to expedite the learning process while preserving …
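The thesis itself fine-tunes YOLO v8; as an illustration of the post-processing step that one-stage detectors of this kind apply to each frame, here is a minimal NumPy sketch of intersection-over-union scoring and greedy non-maximum suppression (the function names and thresholds are illustrative, not taken from the thesis):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop any remaining box that overlaps a kept box too much."""
    order = np.argsort(scores)[::-1]          # best score first
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```

Real-time detectors run a step like this after the network's forward pass, so that each firearm in the frame is reported once rather than by a cluster of overlapping candidate boxes.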
Insights Into Cellular Evolution: Temporal Deep Learning Models And Analysis For Cell Image Classification, Xinran Zhao
Master's Theses
Understanding the temporal evolution of cells poses a significant challenge in developmental biology. This study embarks on a comparative analysis of various machine-learning techniques to classify cell colony images across different timestamps, thereby aiming to capture dynamic transitions of cellular states. By performing Transfer Learning with state-of-the-art classification networks, we achieve high accuracy in categorizing single-timestamp images. Furthermore, this research introduces the integration of temporal models, notably LSTM (Long Short Term Memory Network), R-Transformer (Recurrent Neural Network enhanced Transformer) and ViViT (Video Vision Transformer), to undertake this classification task to verify the effectiveness of incorporating temporal features into the classification …
Hard-Hearted Scrolls: A Noninvasive Method For Reading The Herculaneum Papyri, Stephen Parsons
Theses and Dissertations--Computer Science
The Herculaneum scrolls were buried and carbonized by the eruption of Mount Vesuvius in A.D. 79 and represent the only classical library discovered in situ. Charred by the heat of the eruption, the scrolls are extremely fragile. Since their discovery two centuries ago, some scrolls have been physically opened, leading to some textual recovery but also widespread damage. Many other scrolls remain in rolled form, with unknown contents. More recently, various noninvasive methods have been attempted to reveal the hidden contents of these scrolls using advanced imaging. Unfortunately, their complex internal structure and lack of clear ink contrast have prevented …
Lung Cancer Type Classification, Mohit Ramajibhai Ankoliya
Electronic Theses, Projects, and Dissertations
Lung cancer is the third most common cancer in the U.S. This research focuses on automatically classifying lung cancer cells based on tumor shape and biological traits, using image features extracted by convolutional layers. Additionally, I classify whether a lung cell is adenocarcinoma, large cell carcinoma, squamous cell carcinoma, or normal. The benefit of this classification is an accurate prognosis, leading to patients receiving proper therapy. The Lung Cancer CT (Computed Tomography) image dataset from Kaggle, containing 1000 CT images of various types of lung cancer, was used. Two state-of-the-art convolutional neural networks (CNNs) …
Camera And Lidar Fusion For Point Cloud Semantic Segmentation, Ali Abdelkader
Theses and Dissertations
Perception is a fundamental component of any autonomous driving system. Semantic segmentation is the perception task of assigning semantic class labels to sensor inputs. While autonomous driving systems are currently equipped with a suite of sensors, much focus in the literature has been on semantic segmentation of camera images only. The fusion of different sensor modalities for semantic segmentation has received comparatively little attention. Deep learning models based on transformer architectures have proven successful in many tasks in computer vision and natural language processing. This work explores the use of deep learning transformers to fuse information from …
An Analysis Of Camera Configurations And Depth Estimation Algorithms For Triple-Camera Computer Vision Systems, Jared Peter-Contesse
Master's Theses
The ability to accurately map and localize relevant objects surrounding a vehicle is an important task for autonomous vehicle systems. Currently, many of the environmental mapping approaches rely on the expensive LiDAR sensor. Researchers have been attempting to transition to cheaper sensors like the camera, but so far, the mapping accuracy of single-camera and dual-camera systems has not matched the accuracy of LiDAR systems. This thesis examines depth estimation algorithms and camera configurations of a triple-camera system to determine if sensor data from an additional perspective will improve the accuracy of camera-based systems. Using a synthetic dataset, the performance of …
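Multi-camera depth estimation of the kind this thesis studies builds on stereo triangulation. As a hedged sketch of the underlying geometry (a pinhole stereo model with illustrative parameters, not the thesis's algorithms), depth follows directly from the disparity of a feature between two calibrated views:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole stereo model: Z = f * B / d.

    focal_px     -- focal length expressed in pixels
    baseline_m   -- distance between the two camera centers, in meters
    disparity_px -- horizontal shift of a feature between the two images, in pixels
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: f = 700 px, baseline = 0.12 m, disparity = 21 px
# -> depth = 700 * 0.12 / 21 = 4.0 m
```

Because depth error grows as disparity shrinks, adding a third camera (and thus additional baselines) is one plausible way to tighten depth estimates at range, which is the question the thesis evaluates empirically.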
Analysis Of Hardware Accelerated Deep Learning And The Effects Of Degradation On Performance, Samuel C. Leach
Masters Theses
As convolutional neural networks become more prevalent in research and real-world applications, the need for them to be faster and more robust will be a constant battle. This thesis investigates the effect of degradation introduced to an image prior to object recognition with a convolutional neural network, as well as methods to reduce the degradation and improve performance. Gaussian smoothing and additive Gaussian noise are the two degradation models analyzed in this thesis, and they are reduced with Gaussian and Butterworth masks using unsharp masking and smoothing, respectively. The results show that each degradation is disruptive to the …
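The two degradation models named above, and unsharp masking as one of the restoration steps, can be sketched in a few lines of NumPy/SciPy. This is a generic illustration under assumed parameters (sigma, noise level, sharpening amount), not the thesis's exact pipeline, which also uses Butterworth masks:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(image, sigma_blur=1.5, noise_std=0.05, rng=None):
    """Apply the two degradation models: Gaussian smoothing, then additive Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    blurred = gaussian_filter(image, sigma=sigma_blur)
    return blurred + rng.normal(0.0, noise_std, image.shape)

def unsharp_mask(image, sigma=1.5, amount=1.0):
    """Sharpen by adding back the high-frequency residual (image minus its blur)."""
    low = gaussian_filter(image, sigma=sigma)
    return image + amount * (image - low)
```

Running a classifier on `degrade(image)` versus `unsharp_mask(degrade(image))` is the shape of the experiment the abstract describes: measure how much accuracy the degradation costs, and how much of it the restoration step recovers.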
Self && Self, Shuang Cai
Senior Projects Spring 2021
Seldom before the COVID-19 pandemic have so many people simultaneously had their lifestyle drastically changed in the same way. The forced physical isolation is, ironically, a communal experience. The sickening quarantine left everyone with nothing but time to confront and reconnect with themselves. Another inevitable result of corporeal isolation is the predominant awakening awareness of digital existences and connections. Evoking the shared sensitivity and delicacy, studying the tectonic activity of the digital world, the project documents the endured contemplation in the upcoming resurgence.
Adaptation Of A Deep Learning Algorithm For Traffic Sign Detection, Jose Luis Masache Narvaez
Electronic Thesis and Dissertation Repository
Traffic sign detection is becoming increasingly important as computer-vision-based approaches to automation become widely used in industry. Typical applications include autonomous driving systems and the mapping and cataloging of traffic signs by municipalities. Convolutional neural networks (CNNs) have shown state-of-the-art performance in classification tasks, and as a result, object detection algorithms based on CNNs have become popular in computer vision. Two-stage detection algorithms such as region proposal methods (R-CNN and Faster R-CNN) have better performance in terms of localization and recognition accuracy. However, these methods require high computational power for training and inference that make …
Localization Using Convolutional Neural Networks, Shannon D. Fong
Computer Engineering
With increased access to powerful GPUs, the ability to develop machine learning algorithms has grown significantly. Coupled with open-source deep learning frameworks, average users are now able to experiment with convolutional neural networks (CNNs) to solve novel problems. This project sought to train a CNN capable of classifying between various locations within a building. A single continuous video was taken while standing at each desired location, so that every class in the neural network was represented by a single video. Each location was given a number to be used for classification, and the video was subsequently titled locX. These …
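The abstract says each class is a single video titled "locX", with X the class number. A small sketch of how such filenames could be turned into class labels for training (the parsing details are my own assumption, grounded only in the naming scheme the abstract describes):

```python
from pathlib import Path

def label_from_filename(path: str) -> int:
    """Map a video named per the project's 'locX' scheme to its class index.

    e.g. 'loc3.mp4' -> 3, 'videos/loc12.avi' -> 12
    """
    stem = Path(path).stem               # filename without extension
    if not stem.startswith("loc"):
        raise ValueError(f"unexpected video name: {stem!r}")
    return int(stem[3:])
```

Frames extracted from each video would then inherit this label, giving the per-class training images the CNN needs.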
Automatic Identification Of Animals In The Wild: A Comparative Study Between C-Capsule Networks And Deep Convolutional Neural Networks., Joel Kamdem Teto, Ying Xie
Master of Science in Computer Science Theses
The evolution of machine learning and computer vision has driven improvements and innovation across several domains. We see it being applied to credit decisions, insurance quotes, malware detection, fraud detection, email composition, and any other area with enough information to allow a machine to learn patterns. Over the years, the number of sensors, cameras, and cognitive pieces of equipment placed in the wilderness has been growing exponentially. However, the human resources to turn these data into something meaningful are not growing at the same rate. For instance, a team of scientist volunteers took 8.4 years, …
Investigating Dataset Distinctiveness, Andrew Ulmer, Kent W. Gauen, Yung-Hsiang Lu, Zohar R. Kapach, Daniel P. Merrick
The Summer Undergraduate Research Fellowship (SURF) Symposium
Just as a human might struggle to interpret another human’s handwriting, a computer vision program might fail when asked to perform one task in two different domains. To be more specific, visualize a self-driving car as a human driver who had only ever driven on clear, sunny days, during daylight hours. This driver – the self-driving car – would inevitably face a significant challenge when asked to drive when it is violently raining or foggy during the night, putting the safety of its passengers in danger. An extensive understanding of the data we use to teach computer vision models – …
Underwater Computer Vision - Fish Recognition, Spencer Chang, Austin Otto
Computer Engineering
The Underwater Computer Vision – Fish Recognition project includes the design and implementation of a device that can withstand staying underwater for a duration of time, take pictures of underwater creatures, such as fish, and be able to identify certain fish. The system is meant to be cheap to create, yet still able to process the images it takes and identify the objects in the pictures with some accuracy. The device can output its results to another device or an end user.
Multispectral Identification Array, Zachary D. Eagan
Computer Engineering
The Multispectral Identification Array is a device for taking full-image spectroscopy data via the illumination of a subject with sixty-four unique spectra. The array combines images under the illumination spectra to produce an approximate reflectance graph for every pixel in a scene. Acquisition of an entire spectrum allows the array to differentiate objects based on surface material. The spectral graphs produced are highly approximate and should not be used to determine material properties; however, the output is sufficiently consistent to allow differentiation and identification of previously sampled subjects. While not sufficiently advanced for use as a replacement for spectroscopy, the …
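The abstract does not spell out how the sixty-four images are combined into a per-pixel reflectance graph. One plausible sketch, assuming each pixel's brightness under an illuminant is approximately the dot product of that illuminant's spectrum with the pixel's reflectance curve, recovers reflectance by least squares (the model and function names here are my assumptions, not the device's documented method):

```python
import numpy as np

def estimate_reflectance(measurements, illum_spectra):
    """Per-pixel reflectance from images taken under known illumination spectra.

    measurements  -- (64,) brightness of one pixel under each of the 64 spectra
    illum_spectra -- (64, n_bands) power of each illuminant per wavelength band
    Assumed model: measurements[i] ~= illum_spectra[i] . reflectance
    """
    reflectance, *_ = np.linalg.lstsq(illum_spectra, measurements, rcond=None)
    return reflectance
```

Under this model, the spectral resolution of the recovered curve is limited by how independent the sixty-four illumination spectra are, which is consistent with the abstract's caveat that the graphs are approximate yet repeatable enough for re-identification.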
Hybrid Single And Dual Pattern Structured Light Illumination, Minghao Wang
Theses and Dissertations--Electrical and Computer Engineering
Structured Light Illumination (SLI) is a widely used 3D shape measurement technique in non-contact surface scanning. Multi-pattern SLI methods reconstruct 3-D surfaces with high accuracy, but are sensitive to object motion during pattern projection, and the scanning process is relatively slow. To reduce this sensitivity, single-pattern techniques such as Composite Pattern (CP) and Modified Composite Pattern (MCP) have been developed to achieve high-speed scanning. However, most single-pattern techniques suffer a significant banding artifact and sacrifice accuracy. We focus on developing SLI techniques that can achieve both high speed, high …
Model-Free Method Of Reinforcement Learning For Visual Tasks, Jeff S. Soldate, Jonghoon Jin, Eugenio Culurciello
The Summer Undergraduate Research Fellowship (SURF) Symposium
There has been success in recent years for neural networks in applications requiring high-level intelligence such as categorization and assessment. In this work, we present a neural network model to learn control policies using reinforcement learning. It takes a raw pixel representation of the current state and outputs an approximation of a Q value function, made with a neural network, that represents the expected reward for each possible state-action pair. The action is chosen by an ε-greedy policy, choosing the highest expected reward with a small chance of random action. We used gradient descent to update the weights and biases …
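The ε-greedy selection and the Q update described above can be sketched concretely. The work uses a neural network Q-function over raw pixels; this simplified tabular version (with assumed values for the learning rate and discount factor) shows the same two ingredients:

```python
import numpy as np

def epsilon_greedy(q_values, epsilon, rng):
    """With probability epsilon act randomly; otherwise take the highest-value action."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One temporal-difference step toward the Bellman target
    r + gamma * max_a' Q(s', a')."""
    target = reward + gamma * np.max(q[next_state])
    q[state, action] += alpha * (target - q[state, action])
    return q
```

In the neural-network setting, the same target drives a gradient-descent step on the network weights instead of a table write, which is the update rule the truncated sentence refers to.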
Automatic Performance Level Assessment In Minimally Invasive Surgery Using Coordinated Sensors And Composite Metrics, Sami Taha Abu Snaineh
Theses and Dissertations--Computer Science
Skills assessment in Minimally Invasive Surgery (MIS) has been a challenge for training centers for a long time. The emerging maturity of camera-based systems has the potential to transform problems into solutions in many different areas, including MIS. The current evaluation techniques for assessing the performance of surgeons and trainees are direct observation, global assessments, and checklists. These techniques are mostly subjective and can, therefore, involve a margin of bias.
The current automated approaches are all implemented using mechanical or electromagnetic sensors, which suffer limitations and influence the surgeon’s motion. Thus, evaluating the skills of the MIS surgeons and trainees …
Feature-Based Image Comparison And Its Application In Wireless Visual Sensor Networks, Yang Bai
Doctoral Dissertations
This dissertation studies the feature-based image comparison method and its application in Wireless Visual Sensor Networks.
Wireless Visual Sensor Networks (WVSNs), formed by a large number of low-cost, small-size visual sensor nodes, represent a new trend in surveillance and monitoring practices. Although each single sensor has very limited capability in sensing, processing and transmission, by working together they can achieve various high level tasks. Sensor collaboration is essential to WVSNs and normally performed among sensors having similar measurements, which are called neighbor sensors. The directional sensing characteristics of imagers and the presence of visual occlusion present unique challenges to neighborhood …
A Computer Vision Application To Accurately Estimate Object Distance, Kayton B. Parekh
Mathematics, Statistics, and Computer Science Honors Projects
Scientists have been working to create robots that perform manual work for years. However, creating machines that can navigate themselves and respond to their environment has proven to be difficult. One integral task to such research is to estimate the position of objects in the robot's visual field.
In this project we examine an implementation of computer vision depth perception. Our application uses color-based object tracking combined with model-based pose estimation to estimate the depth of specific objects in the view of our Pioneer 2 and Power Wheels robots. We use the Camshift algorithm for color-based object tracking, which uses …
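The color-based tracking step above (Camshift iteratively re-centers a search window over a hue back-projection) can be approximated crudely in NumPy. This simplified stand-in, locating a target as the centroid of pixels within a hue range, is my own illustration of the idea rather than the project's implementation:

```python
import numpy as np

def color_mask_centroid(hue, hue_lo, hue_hi):
    """Locate a colored target: centroid (x, y) of pixels whose hue
    lies in [hue_lo, hue_hi].

    hue -- (H, W) array of per-pixel hue values
           (e.g. the H channel of an HSV-converted frame)
    """
    mask = (hue >= hue_lo) & (hue <= hue_hi)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                       # target color absent from frame
    return float(xs.mean()), float(ys.mean())
```

Given the target's image position and apparent size from a step like this, a model-based pose estimate (known object dimensions plus camera calibration) yields the depth, which is what the project's Pioneer 2 and Power Wheels robots use to gauge distance.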