Open Access. Powered by Scholars. Published by Universities.®

Physical Sciences and Mathematics Commons

Theses/Dissertations

William & Mary

2022

Computer Sciences

Articles 1 - 7 of 7

Full-Text Articles in Physical Sciences and Mathematics

Quantum Federated Learning: Training Hybrid Neural Networks Collaboratively, Anneliese Brei May 2022

Undergraduate Honors Theses

This thesis explores basic concepts of machine learning, neural networks, federated learning, and quantum computing in an effort to better understand Quantum Machine Learning, an emerging field of research. We propose Quantum Federated Learning (QFL), a schema for collaborative distributed learning that maintains privacy and low communication costs. We demonstrate the QFL framework and local and global update algorithms with implementations that utilize TensorFlow Quantum libraries. Our experiments test the effectiveness of frameworks of different sizes. We also test the effect of changing the number of training cycles and changing the distribution of training data. This thesis serves as a synoptic …
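The local/global update loop the abstract describes can be sketched generically. Below is a minimal federated-averaging sketch in plain NumPy, assuming a linear model and synthetic data for illustration; it is not the thesis's TensorFlow Quantum implementation, only the classical communication pattern such a scheme builds on:

```python
import numpy as np

def local_update(w_global, X, y, lr=0.1, epochs=5):
    """One client's local step: gradient descent on a least-squares model."""
    w = w_global.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def global_update(client_weights, client_sizes):
    """Server aggregation: size-weighted average of client models (FedAvg)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two clients hold disjoint, differently sized shards; only weights travel.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
X1, X2 = rng.normal(size=(50, 2)), rng.normal(size=(80, 2))
y1, y2 = X1 @ true_w, X2 @ true_w

global_w = np.zeros(2)
for _ in range(30):  # communication rounds
    local_ws = [local_update(global_w, X1, y1), local_update(global_w, X2, y2)]
    global_w = global_update(local_ws, [len(y1), len(y2)])
```

Each round, clients train on private shards and only weight vectors cross the network, which is the privacy/communication trade-off the abstract refers to.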


Exploring Multi-Level Parallelism For Graph-Based Applications Via Algorithm And System Co-Design, Zhen Peng Jan 2022

Dissertations, Theses, and Masters Projects

Graph processing is at the heart of many modern applications where graphs are used as the basic data structure to represent the entities of interest and the relationships between them. Improving the performance of graph-based applications, especially using parallelism techniques, has drawn significant interest both in academia and industry. On the one hand, modern CPU architectures are able to provide massive computational power by using sophisticated memory hierarchy and multi-level parallelism, including thread-level parallelism, data-level parallelism, etc. On the other hand, graph processing workloads are notoriously challenging for achieving high performance due to their irregular computation pattern and unpredictable control …
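A level-synchronous BFS makes the abstract's point about irregularity concrete: the active frontier, and hence the work available per thread, changes unpredictably between rounds. The sketch below is a generic illustration (the graph is hypothetical), not the co-designed systems developed in the dissertation:

```python
def frontier_bfs(adj, source):
    """Level-synchronous BFS: expand the whole frontier once per round.
    Frontier sizes swing wildly between rounds, which is exactly the
    load-balancing problem parallel graph frameworks must solve."""
    dist = {source: 0}
    frontier = [source]
    while frontier:
        next_frontier = []
        for u in frontier:       # a parallel runtime splits this loop
            for v in adj[u]:     # irregular, data-dependent memory accesses
                if v not in dist:
                    dist[v] = dist[u] + 1
                    next_frontier.append(v)
        frontier = next_frontier
    return dist

# Hypothetical 5-vertex graph used only to exercise the traversal.
adj = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
levels = frontier_bfs(adj, 0)
```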


Enabling Practical Evaluation Of Privacy Of Commodity-IoT, Sunil Manandhar Jan 2022

Dissertations, Theses, and Masters Projects

There has been a massive shift towards the use of IoT products in recent years. While companies have come a long way in making these devices and services easily accessible to consumers, very little is known about the privacy issues pertaining to these devices. In this dissertation, we focus on evaluating privacy pertaining to commodity-IoT devices by studying the device usage behavior of consumers and the privacy disclosure practices of IoT vendors. Our analyses consider deep intricacies tied to the commodity-IoT domain, revealing insightful findings that help with building automated tools for large-scale analysis. We first present the design and …


Communication And Computation Efficient Deep Learning, Zeyi Tao Jan 2022

Dissertations, Theses, and Masters Projects

Recent advances in Artificial Intelligence (AI) are characterized by ever-increasing datasets and rapid growth of model complexity. Many modern machine learning models, especially deep neural networks (DNNs), cannot be efficiently carried out by a single machine. Hence, distributed optimization and inference have been widely adopted to tackle large-scale machine learning problems. Meanwhile, quantum computers that process computational tasks exponentially faster than classical machines offer an alternative solution for resource-intensive deep learning. However, there are two obstacles that hinder us from building large-scale DNNs on the distributed systems and quantum computers. First, when distributed systems scale to many nodes, the training …
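One standard way to cut the communication cost of distributed training is gradient quantization: send low-bit integer codes plus a scale instead of full-precision floats. The sketch below is a generic 8-bit uniform quantizer, assumed for illustration rather than taken from the dissertation:

```python
import numpy as np

def quantize(g, bits=8):
    """Uniform quantization: encode each gradient entry as a small integer.
    Only the codes plus (lo, scale) are sent, shrinking the payload 4x
    versus float32 for 8-bit codes."""
    levels = 2 ** bits - 1
    lo, hi = float(g.min()), float(g.max())
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = np.round((g - lo) / scale).astype(np.uint8)
    return codes, lo, scale

def dequantize(codes, lo, scale):
    """Receiver side: reconstruct an approximate gradient."""
    return codes.astype(np.float32) * scale + lo

rng = np.random.default_rng(1)
g = rng.normal(size=1000).astype(np.float32)
codes, lo, scale = quantize(g)
g_hat = dequantize(codes, lo, scale)   # per-entry error bounded by scale / 2
```

The quantization error is bounded by half a quantization step, so workers trade a small, controlled loss of gradient precision for a fourfold smaller message.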


Techniques For Accelerating Large-Scale Automata Processing, Hongyuan Liu Jan 2022

Dissertations, Theses, and Masters Projects

The big-data era has brought new challenges to computer architectures due to the large-scale computation and data. Moreover, this problem becomes critical in several domains where the computation is also irregular, among which we focus on automata processing in this dissertation. Automata are widely used in applications from different domains such as network intrusion detection, machine learning, and parsing. Large-scale automata processing is challenging for traditional von Neumann architectures. To this end, many accelerator prototypes have been proposed. Micron's Automata Processor (AP) is an example. However, as a spatial architecture, it is unable to handle large automata programs without repeated …
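A tiny software NFA simulator shows why automata processing is irregular: the set of active states, and thus the memory touched per input symbol, is data-dependent. This is a generic illustration (the automaton is hypothetical), far from the accelerator-scale workloads the dissertation targets:

```python
def run_nfa(transitions, start, accept, text):
    """Simulate an NFA by tracking the set of simultaneously active states."""
    active = {start}
    for ch in text:
        # The next active set is data-dependent: this irregular, pointer-
        # chasing style of computation is what automata accelerators target.
        active = set().union(*(transitions.get((s, ch), set()) for s in active))
        if not active:
            return False            # dead: no state can consume the input
    return bool(active & accept)

# Hypothetical NFA over {a, b} accepting strings containing the substring "ab".
nfa = {
    (0, "a"): {0, 1}, (0, "b"): {0},
    (1, "b"): {2},
    (2, "a"): {2}, (2, "b"): {2},
}
```

Spatial architectures such as Micron's AP evaluate all state transitions in parallel per symbol, but only for automata that fit on the chip; handling programs larger than the hardware is the problem the abstract raises.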


Practical GPGPU Application Resilience Estimation And Fortification, Lishan Yang Jan 2022

Dissertations, Theses, and Masters Projects

Graphics Processing Units (GPUs) are becoming a de facto solution for accelerating a wide range of applications but remain susceptible to transient hardware faults (soft errors) that can easily compromise application output. One of the major challenges in the domain of GPU reliability is to accurately measure general purpose GPU (GPGPU) application resilience to transient faults. This challenge stems from the fact that a typical GPGPU application spawns a huge number of threads and then utilizes a large amount of potentially unreliable compute and memory resources available on the GPUs. As the number of possible fault locations can be in …
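Resilience estimation of the kind described is commonly done with fault-injection campaigns: flip one bit in one value, rerun, and compare against a fault-free "golden" output. The sketch below is a deliberately tiny CPU-side illustration with a toy kernel, not the GPGPU methodology of the dissertation:

```python
import random
import struct

def flip_bit(x, bit):
    """Flip one bit in the IEEE-754 float32 representation of x."""
    (i,) = struct.unpack("<I", struct.pack("<f", x))
    (y,) = struct.unpack("<f", struct.pack("<I", i ^ (1 << bit)))
    return y

def run_kernel(data, fault_at=None, fault_bit=0):
    """Toy 'kernel' (sum of squares) with an optional single injected fault."""
    total = 0.0
    for idx, x in enumerate(data):
        if idx == fault_at:
            x = flip_bit(x, fault_bit)
        total += x * x
    return total

random.seed(0)
data = [1.0] * 64
golden = run_kernel(data)              # fault-free reference output
outcomes = {"masked": 0, "sdc": 0}
for _ in range(200):                   # random fault-injection campaign
    out = run_kernel(data, fault_at=random.randrange(64),
                     fault_bit=random.randrange(32))
    outcomes["masked" if out == golden else "sdc"] += 1
```

Here sign-bit flips are masked because squaring discards the sign, while most other flips surface as silent data corruption; classifying outcomes this way, across a fault space of billions of thread/instruction/bit locations, is what makes GPGPU resilience measurement hard.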


Flexible And Robust Iterative Methods For The Partial Singular Value Decomposition, Steven Goldenberg Jan 2022

Dissertations, Theses, and Masters Projects

The Singular Value Decomposition (SVD) is one of the most fundamental matrix factorizations in linear algebra. As a generalization of the eigenvalue decomposition, the SVD is essential for a wide variety of fields including statistics, signal and image processing, chemistry, quantum physics and even weather prediction. The methods for numerically computing the SVD mostly fall under three main categories: direct, iterative, and streaming. Direct methods focus on solving the SVD in its entirety, making them suitable for smaller dense matrices where the computation cost is tractable. On the other end of the spectrum, streaming methods were created to provide an …
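Among iterative methods, one of the simplest for a partial SVD is block power (subspace) iteration, which refines the leading left and right subspaces by alternately applying A and its transpose. The sketch below is a generic textbook scheme, assumed for illustration and unrelated to the specific flexible and robust methods developed in the dissertation:

```python
import numpy as np

def partial_svd(A, k, iters=100, seed=0):
    """Block power (subspace) iteration for the k largest singular triplets."""
    rng = np.random.default_rng(seed)
    V = np.linalg.qr(rng.normal(size=(A.shape[1], k)))[0]
    for _ in range(iters):
        U = np.linalg.qr(A @ V)[0]      # refine the left subspace
        V = np.linalg.qr(A.T @ U)[0]    # refine the right subspace
    B = U.T @ A @ V                     # small k-by-k projected problem
    Ub, s, Vbt = np.linalg.svd(B)
    return U @ Ub, s, V @ Vbt.T

# Hypothetical test matrix with well-separated singular values.
rng = np.random.default_rng(42)
U0 = np.linalg.qr(rng.normal(size=(20, 20)))[0][:, :15]
V0 = np.linalg.qr(rng.normal(size=(15, 15)))[0]
s_true = np.array([10.0, 5.0, 2.0, 1.0] + [0.1] * 11)
A = U0 @ np.diag(s_true) @ V0.T

Uk, sk, Vk = partial_svd(A, k=2)   # sk approximates the two largest values
```

Only k columns of U and V are ever formed, which is why iterative methods scale to large sparse matrices where direct factorization is intractable; their convergence depends on the gaps between singular values, one motivation for the more robust schemes the abstract alludes to.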