Computer Vision
Articles 1 - 20 of 20

Full-Text Articles in Other Computer Engineering

Insights Into Cellular Evolution: Temporal Deep Learning Models And Analysis For Cell Image Classification, Xinran Zhao Mar 2024

Master's Theses

Understanding the temporal evolution of cells poses a significant challenge in developmental biology. This study presents a comparative analysis of various machine-learning techniques for classifying cell colony images across different timestamps, aiming to capture dynamic transitions of cellular states. By performing transfer learning with state-of-the-art classification networks, we achieve high accuracy in categorizing single-timestamp images. Furthermore, this research introduces temporal models, notably LSTM (Long Short-Term Memory network), R-Transformer (Recurrent Neural Network-enhanced Transformer), and ViViT (Video Vision Transformer), to undertake this classification task and verify the effectiveness of incorporating temporal features into the classification …
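A rough sense of the CNN-plus-recurrent pipeline described above can be given in a few lines of PyTorch. This is only a hedged sketch, not the thesis's implementation: the ResNet-18 backbone, hidden size, sequence length, and four-class output are all assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

class CNNLSTMClassifier(nn.Module):
    """Classify a sequence of cell-colony images by feeding per-frame
    CNN features into an LSTM (illustrative sketch only)."""
    def __init__(self, num_classes=4, hidden_size=256):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.feature_dim = backbone.fc.in_features        # 512 for ResNet-18
        backbone.fc = nn.Identity()                        # keep features, drop classifier
        self.backbone = backbone
        self.lstm = nn.LSTM(self.feature_dim, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x):                                  # x: (B, T, C, H, W)
        b, t = x.shape[:2]
        feats = self.backbone(x.flatten(0, 1))             # (B*T, feature_dim)
        feats = feats.view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])                       # predict from the last timestamp

# Example: a batch of 2 sequences, 5 timestamps each, 224x224 RGB frames.
logits = CNNLSTMClassifier()(torch.randn(2, 5, 3, 224, 224))
```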


Hard-Hearted Scrolls: A Noninvasive Method For Reading The Herculaneum Papyri, Stephen Parsons Jan 2023

Theses and Dissertations--Computer Science

The Herculaneum scrolls were buried and carbonized by the eruption of Mount Vesuvius in A.D. 79 and represent the only classical library discovered in situ. Charred by the heat of the eruption, the scrolls are extremely fragile. Since their discovery two centuries ago, some scrolls have been physically opened, leading to some textual recovery but also widespread damage. Many other scrolls remain in rolled form, with unknown contents. More recently, various noninvasive methods have been attempted to reveal the hidden contents of these scrolls using advanced imaging. Unfortunately, their complex internal structure and lack of clear ink contrast have prevented …


Lung Cancer Type Classification, Mohit Ramajibhai Ankoliya Dec 2022

Electronic Theses, Projects, and Dissertations

Lung cancer is the third most common cancer in the U.S. This research focuses on classifying lung cancer cells based on tumor cell shape and biological traits, using features automatically extracted by convolutional layers. Additionally, I classify whether a lung cell image shows adenocarcinoma, large cell carcinoma, squamous cell carcinoma, or normal (non-cancerous) cells. The benefit of this classification is an accurate prognosis, leading to patients receiving proper therapy. The Lung Cancer CT (Computed Tomography) image dataset from Kaggle, comprising 1000 CT images of various types of lung cancer, is used. Two state-of-the-art convolutional neural networks (CNNs) …
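As a hedged illustration of the transfer-learning setup the abstract describes (not the author's code), a pretrained CNN's final layer can be swapped for a four-class head and fine-tuned on the CT images; the directory layout, backbone, and hyperparameters below are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical layout: data/train/<class_name>/*.png, one folder per class
# (adenocarcinoma, large cell carcinoma, squamous cell carcinoma, normal).
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 4)     # four lung-tissue classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                      # one illustrative epoch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```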


Camera And Lidar Fusion For Point Cloud Semantic Segmentation, Ali Abdelkader Jan 2022

Theses and Dissertations

Perception is a fundamental component of any autonomous driving system. Semantic segmentation is the perception task of assigning semantic class labels to sensor inputs. While autonomous driving systems are currently equipped with a suite of sensors, much of the focus in the literature has been on semantic segmentation of camera images alone, and the fusion of different sensor modalities for semantic segmentation has received comparatively little attention. Deep learning models based on transformer architectures have proven successful in many tasks in computer vision and natural language processing. This work explores the use of deep learning transformers to fuse information from …
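One common ingredient of camera-LiDAR fusion is projecting LiDAR points into the image so that each point can be paired with pixel information. The NumPy sketch below illustrates only that step under assumed calibration matrices K, R, and t; the transformer-based fusion studied in the thesis is far more involved.

```python
import numpy as np

def project_points(points_lidar, K, R, t):
    """Project Nx3 LiDAR points into pixel coordinates using assumed
    camera intrinsics K (3x3) and extrinsics R (3x3), t (3,)."""
    pts_cam = points_lidar @ R.T + t            # LiDAR frame -> camera frame
    in_front = pts_cam[:, 2] > 0                # keep points ahead of the camera
    pts_cam = pts_cam[in_front]
    uvw = pts_cam @ K.T                         # perspective projection
    uv = uvw[:, :2] / uvw[:, 2:3]               # normalize by depth
    return uv, pts_cam[:, 2], in_front

def fuse_rgb_with_points(image, points_lidar, K, R, t):
    """Attach the RGB value under each projected point to that point,
    yielding an (N, 6) array of x, y, z, r, g, b as a fused input."""
    uv, depth, mask = project_points(points_lidar, K, R, t)
    h, w = image.shape[:2]
    cols = np.clip(uv[:, 0].astype(int), 0, w - 1)
    rows = np.clip(uv[:, 1].astype(int), 0, h - 1)
    rgb = image[rows, cols].astype(np.float32)
    return np.hstack([points_lidar[mask], rgb])
```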


An Analysis Of Camera Configurations And Depth Estimation Algorithms For Triple-Camera Computer Vision Systems, Jared Peter-Contesse Dec 2021

Master's Theses

The ability to accurately map and localize relevant objects surrounding a vehicle is an important task for autonomous vehicle systems. Currently, many environmental mapping approaches rely on expensive LiDAR sensors. Researchers have been attempting to transition to cheaper sensors such as cameras, but so far, the mapping accuracy of single-camera and dual-camera systems has not matched the accuracy of LiDAR systems. This thesis examines depth estimation algorithms and camera configurations of a triple-camera system to determine if sensor data from an additional perspective will improve the accuracy of camera-based systems. Using a synthetic dataset, the performance of …
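For context, the two-camera baseline that a third camera would extend recovers depth from stereo disparity. The OpenCV sketch below is a hedged illustration with assumed focal length, baseline, and matcher settings, not the thesis's pipeline.

```python
import cv2
import numpy as np

# Assumed calibration for a rectified stereo pair: focal length in pixels
# and baseline (distance between the two cameras) in meters.
FOCAL_LENGTH_PX = 700.0
BASELINE_M = 0.12

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; numDisparities must be a multiple of 16.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# depth (meters) = focal_length * baseline / disparity, valid where disparity > 0
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity[valid]
```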


Analysis Of Hardware Accelerated Deep Learning And The Effects Of Degradation On Performance, Samuel C. Leach May 2021

Masters Theses

As convolutional neural networks become more prevalent in research and real-world applications, the need for them to be faster and more robust will be a constant battle. This thesis investigates the effect of degradation introduced to an image prior to object recognition with a convolutional neural network, as well as methods to reduce the degradation and improve performance. Gaussian smoothing and additive Gaussian noise are the two degradation models analyzed in this thesis; they are reduced with Gaussian and Butterworth masks using unsharp masking and smoothing, respectively. The results show that each degradation is disruptive to the …
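Unsharp masking, one of the restoration methods mentioned, adds back a scaled difference between an image and a blurred copy of it. The following OpenCV sketch is illustrative only; the kernel size, sigma, and amount are assumed values, not those used in the thesis.

```python
import cv2

def unsharp_mask(image, kernel_size=(5, 5), sigma=1.0, amount=1.5):
    """Sharpen an image degraded by Gaussian smoothing by boosting the
    detail lost to the blur (illustrative parameters only)."""
    blurred = cv2.GaussianBlur(image, kernel_size, sigma)
    # sharpened = original + amount * (original - blurred)
    return cv2.addWeighted(image, 1.0 + amount, blurred, -amount, 0)

degraded = cv2.imread("degraded.png")   # placeholder input image
restored = unsharp_mask(degraded)
```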


Self && Self, Shuang Cai Jan 2021

Senior Projects Spring 2021

Seldom before the COVID-19 pandemic have so many people simultaneously had their lifestyles drastically changed in the same way. The forced physical isolation is, ironically, a communal experience. The sickening quarantine left everyone with nothing but time to confront and reconnect with themselves. Another inevitable result of corporeal isolation is the predominant awakening awareness of digital existences and connections. Evoking this shared sensitivity and delicacy, and studying the tectonic activity of the digital world, the project documents the contemplation endured in the upcoming resurgence.


Adaptation Of A Deep Learning Algorithm For Traffic Sign Detection, Jose Luis Masache Narvaez Jul 2019

Electronic Thesis and Dissertation Repository

Traffic sign detection is becoming increasingly important as computer vision approaches to automation become widely used in industry. Typical applications include autonomous driving systems and the mapping and cataloging of traffic signs by municipalities. Convolutional neural networks (CNNs) have shown state-of-the-art performance in classification tasks, and as a result, object detection algorithms based on CNNs have become popular in computer vision tasks. Two-stage detection algorithms such as region proposal methods (R-CNN and Faster R-CNN) have better performance in terms of localization and recognition accuracy. However, these methods require high computational power for training and inference that make …
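For reference, a pretrained two-stage detector of the kind discussed can be run with torchvision in a few lines. This is a generic COCO-pretrained model used as a hedged example, not the traffic-sign detector adapted in the thesis; the image path and score threshold are assumptions.

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import (FasterRCNN_ResNet50_FPN_Weights,
                                           fasterrcnn_resnet50_fpn)
from torchvision.transforms.functional import convert_image_dtype

# COCO-pretrained Faster R-CNN; a traffic-sign detector would instead be
# fine-tuned on a sign dataset rather than used off the shelf.
model = fasterrcnn_resnet50_fpn(weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT)
model.eval()

image = convert_image_dtype(read_image("street_scene.jpg"), torch.float)  # placeholder image
with torch.no_grad():
    detections = model([image])[0]        # dict with boxes, labels, scores

keep = detections["scores"] > 0.5         # discard low-confidence detections
print(detections["boxes"][keep], detections["labels"][keep])
```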


Localization Using Convolutional Neural Networks, Shannon D. Fong Dec 2018

Computer Engineering

With increased access to powerful GPUs, the ability to develop machine learning algorithms has increased significantly. Coupled with open-source deep learning frameworks, average users are now able to experiment with convolutional neural networks (CNNs) to solve novel problems. This project sought to train a CNN capable of classifying between various locations within a building. A single continuous video was taken while standing at each desired location so that every class in the neural network was represented by a single video. Each location was given a number to be used for classification, and each video was subsequently titled locX. These …
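The data-preparation step described, turning one continuous video per location into labeled training frames, might look roughly like the OpenCV sketch below. The `videos/` and `dataset/` paths and the sampling rate are assumptions; `locX` stands in for each location's label as in the project.

```python
import os
import cv2

def extract_frames(video_path, out_dir, every_n=10):
    """Sample every n-th frame of a location's video into a class folder,
    so each location (e.g. 'loc3') becomes one class for the CNN."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:05d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()

# Hypothetical layout: videos/loc0.mp4 ... videos/locN.mp4 -> dataset/loc0/, ...
for name in os.listdir("videos"):
    label = os.path.splitext(name)[0]                  # e.g. "loc3"
    extract_frames(os.path.join("videos", name), os.path.join("dataset", label))
```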


Automatic Identification Of Animals In The Wild: A Comparative Study Between C-Capsule Networks And Deep Convolutional Neural Networks., Joel Kamdem Teto, Ying Xie Nov 2018

Master of Science in Computer Science Theses

The evolution of machine learning and computer vision has driven many improvements and innovations across several domains. We see them being applied to credit decisions, insurance quotes, malware detection, fraud detection, email composition, and any other area with enough information to allow a machine to learn patterns. Over the years, the number of sensors, cameras, and cognitive pieces of equipment placed in the wilderness has grown exponentially. However, the human resources to turn these data into something meaningful are not improving at the same rate. For instance, a team of volunteer scientists took 8.4 years, …


Multi-Sensor Localization And Tracking In Disaster Management And Indoor Wayfinding For Visually Impaired Users, Zhuorui Yang Oct 2018

Doctoral Dissertations

This dissertation proposes a series of multi-sensor localization and tracking algorithms developed for two important application domains: disaster management and indoor wayfinding for blind and visually impaired (BVI) users. For disaster management, we developed two different localization algorithms, one each for Radio Frequency Identification (RFID) and Bluetooth Low Energy (BLE) technology, which enable the disaster management system to track patients in real time. Both algorithms require no pre-deployed infrastructure and operate using only smartphones and wearable devices. Regarding indoor wayfinding for BVI users, we have explored several types of indoor positioning techniques including BLE-based, inertial, visual …


Investigating Dataset Distinctiveness, Andrew Ulmer, Kent W. Gauen, Yung-Hsiang Lu, Zohar R. Kapach, Daniel P. Merrick Aug 2018

The Summer Undergraduate Research Fellowship (SURF) Symposium

Just as a human might struggle to interpret another human’s handwriting, a computer vision program might fail when asked to perform one task in two different domains. To be more specific, visualize a self-driving car as a human driver who had only ever driven on clear, sunny days, during daylight hours. This driver – the self-driving car – would inevitably face a significant challenge when asked to drive when it is violently raining or foggy during the night, putting the safety of its passengers in danger. An extensive understanding of the data we use to teach computer vision models – …


Underwater Computer Vision - Fish Recognition, Spencer Chang, Austin Otto Jun 2017

Computer Engineering

The Underwater Computer Vision – Fish Recognition project comprises the design and implementation of a device that can remain underwater for extended periods, take pictures of underwater creatures such as fish, and identify certain fish. The system is meant to be inexpensive to build, yet still able to process the images it takes and identify the objects in those pictures with some accuracy. The device can output its results to another device or an end user.


Multispectral Identification Array, Zachary D. Eagan Jun 2017

Computer Engineering

The Multispectral Identification Array is a device for capturing full-image spectroscopy data by illuminating a subject with sixty-four unique spectra. The array combines the images taken under these illumination spectra to produce an approximate reflectance graph for every pixel in a scene. Acquiring an entire spectrum allows the array to differentiate objects based on surface material. The spectral graphs produced are highly approximate and should not be used to determine material properties; however, the output is sufficiently consistent to allow differentiation and identification of previously sampled subjects. While not sufficiently advanced for use as a replacement for spectroscopy, the …
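The per-pixel reflectance approximation described can be pictured as stacking the sixty-four captures and normalizing each by the strength of its illumination. The NumPy sketch below is a loose illustration under assumed data shapes and an assumed per-spectrum power vector, not the device's actual processing.

```python
import numpy as np

def approximate_reflectance(captures, illumination_power):
    """captures: (64, H, W) grayscale images, one per illumination spectrum.
    illumination_power: (64,) assumed relative intensity of each spectrum.
    Returns an (H, W, 64) cube of approximate per-pixel reflectance values."""
    captures = captures.astype(np.float32)
    # Normalize each band by its illumination strength so brighter sources
    # do not dominate the reflectance estimate.
    reflectance = captures / illumination_power[:, None, None]
    reflectance /= reflectance.max()            # scale to [0, 1] for comparison
    return np.moveaxis(reflectance, 0, -1)

# Two previously sampled subjects could then be compared by the distance
# between their per-pixel spectra, e.g. np.linalg.norm(a - b, axis=-1).
```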


Software Updates To A Multiple Autonomous Quadcopter Search System (Maqss), Jared Speck, Toby Chan May 2017

Computer Engineering

A series of performance-based and feature implementation software updates to an existing multiple vehicle autonomous target search system is outlined in this paper. The search system, MAQSS, is designed to address a computational power constraint found on modern autonomous aerial platforms by separating real-time and computationally expensive tasks through delegation to multiple multirotor vehicles. A Ground Control Station (GCS) is also described as part of the MAQSS system to perform the delegation and provide a low workload user interface. Ultimately, the changes to MAQSS noted in this paper helped to improve the performance of the autonomous search mission, the accuracy …


Hybrid Single And Dual Pattern Structured Light Illumination, Minghao Wang Jan 2015

Theses and Dissertations--Electrical and Computer Engineering

Structured Light Illumination (SLI) is a widely used 3D shape measurement technique for non-contact surface scanning. Multi-pattern Structured Light Illumination methods reconstruct 3-D surfaces with high accuracy, but they are sensitive to object motion during pattern projection and the scanning process is relatively slow. To reduce this sensitivity, single-pattern techniques such as the Composite Pattern (CP) and Modified Composite Pattern (MCP) techniques have been developed to achieve a high-speed scanning process. However, most single-pattern techniques introduce a significant banding artifact and sacrifice accuracy. We focus on developing SLI techniques that can achieve both high speed, high …


Model-Free Method Of Reinforcement Learning For Visual Tasks, Jeff S. Soldate, Jonghoon Jin, Eugenio Culurciello Aug 2014

The Summer Undergraduate Research Fellowship (SURF) Symposium

There has been success in recent years for neural networks in applications requiring high-level intelligence, such as categorization and assessment. In this work, we present a neural network model that learns control policies using reinforcement learning. It takes a raw pixel representation of the current state and outputs a neural-network approximation of the Q-value function, which represents the expected reward for each possible state-action pair. The action is chosen by an ε-greedy policy, selecting the action with the highest expected reward with a small chance of random action. We used gradient descent to update the weights and biases …
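The ε-greedy rule mentioned, acting on the Q-network's highest-value action except for a small random fraction of steps, can be sketched as follows. The state size, action count, and network shape are assumptions for illustration, not the authors' model.

```python
import random
import torch
import torch.nn as nn

# A small Q-network mapping a flattened raw-pixel state to one Q-value per action.
q_net = nn.Sequential(
    nn.Linear(84 * 84, 256), nn.ReLU(),
    nn.Linear(256, 4),                      # 4 possible actions (assumed)
)

def select_action(state, epsilon=0.1):
    """With probability epsilon take a random action; otherwise take the
    action with the highest predicted expected reward."""
    if random.random() < epsilon:
        return random.randrange(4)
    with torch.no_grad():
        return int(q_net(state).argmax())

action = select_action(torch.rand(84 * 84))
```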


Automatic Performance Level Assessment In Minimally Invasive Surgery Using Coordinated Sensors And Composite Metrics, Sami Taha Abu Snaineh Jan 2013

Theses and Dissertations--Computer Science

Skills assessment in Minimally Invasive Surgery (MIS) has been a challenge for training centers for a long time. The emerging maturity of camera-based systems has the potential to transform problems into solutions in many different areas, including MIS. The current evaluation techniques for assessing the performance of surgeons and trainees are direct observation, global assessments, and checklists. These techniques are mostly subjective and can, therefore, involve a margin of bias.

The current automated approaches are all implemented using mechanical or electromagnetic sensors, which suffer from limitations and interfere with the surgeon’s motion. Thus, evaluating the skills of the MIS surgeons and trainees …


Feature-Based Image Comparison And Its Application In Wireless Visual Sensor Networks, Yang Bai May 2011

Doctoral Dissertations

This dissertation studies the feature-based image comparison method and its application in Wireless Visual Sensor Networks.

Wireless Visual Sensor Networks (WVSNs), formed by a large number of low-cost, small-size visual sensor nodes, represent a new trend in surveillance and monitoring practices. Although each individual sensor has very limited sensing, processing, and transmission capability, by working together the sensors can achieve various high-level tasks. Sensor collaboration is essential to WVSNs and is normally performed among sensors having similar measurements, called neighbor sensors. The directional sensing characteristics of imagers and the presence of visual occlusion present unique challenges to neighborhood …


A Computer Vision Application To Accurately Estimate Object Distance, Kayton B. Parekh Apr 2010

Mathematics, Statistics, and Computer Science Honors Projects

For years, scientists have been working to create robots that perform manual work. However, creating machines that can navigate themselves and respond to their environment has proven difficult. One task integral to such research is estimating the position of objects in the robot's visual field.

In this project we examine an implementation of computer vision depth perception. Our application uses color-based object tracking combined with model-based pose estimation to estimate the depth of specific objects in the view of our Pioneer 2 and Power Wheels robots. We use the Camshift algorithm for color-based object tracking, which uses …
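The Camshift step mentioned tracks a colored object by back-projecting a hue histogram and iteratively re-centering a search window. The OpenCV sketch below is a hedged illustration; the video file, initial window, and termination criteria are assumed, and the depth estimate itself would come from the separate model-based pose step.

```python
import cv2

cap = cv2.VideoCapture("robot_camera.avi")      # placeholder video source
ok, frame = cap.read()

# Assumed initial window around the target object: (x, y, width, height).
track_window = (200, 150, 80, 80)
x, y, w, h = track_window
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])   # hue histogram
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

# Stop after 10 iterations or when the window center moves less than 1 pixel.
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # rotated_box gives the tracked object's apparent size, which (with a known
    # physical size) can feed a model-based estimate of its distance.
    rotated_box, track_window = cv2.CamShift(back_proj, track_window, term_crit)
```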