Open Access. Powered by Scholars. Published by Universities.®

2020

Deep Learning

Articles 31 - 40 of 40

Full-Text Articles in Physical Sciences and Mathematics

Attention Mechanism In Deep Neural Networks For Computer Vision Tasks, Haohan Li Jan 2020

Doctoral Dissertations

“The attention mechanism, one of the most important algorithms in the deep learning community, was initially designed in natural language processing to enhance the feature representation of key sentence fragments over the context. In recent years, the attention mechanism has been widely adopted in solving computer vision tasks by guiding deep neural networks (DNNs) to focus on specific image features for a better understanding of the semantic information of the image. However, the attention mechanism is not only capable of helping DNNs understand semantics, but is also useful for feature fusion, visual cue discovery, and temporal information selection, which are …
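The core idea, weighting features by their learned relevance, can be illustrated with scaled dot-product attention, one common formulation. The following is a minimal NumPy sketch; the shapes and random inputs are illustrative assumptions, not taken from the dissertation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Score each query against each key, softmax the scores,
    then return the attention-weighted mixture of values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax: rows sum to 1
    return weights @ V                                 # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 4))   # 2 queries, feature dim 4 (illustrative)
K = rng.standard_normal((3, 4))   # 3 keys
V = rng.standard_normal((3, 4))   # 3 values
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4)
```

In vision models the "keys" and "values" are typically spatial or channel features of a feature map, so the softmax weights act as a learned spotlight over image regions.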


Incorporating Cardiac Substructures Into Radiation Therapy For Improved Cardiac Sparing, Eric Daniel Morris Jan 2020

Wayne State University Dissertations

Growing evidence suggests that radiation therapy (RT) doses to the heart and cardiac substructures (CS) are strongly linked to cardiac toxicities, though only the whole heart is considered clinically. This work aimed to utilize the superior soft-tissue contrast of magnetic resonance (MR) imaging to segment the CS, quantify uncertainties in their position, and assess their effect on treatment planning in an MR-guided environment.

Automatic segmentation of 12 CS was completed using a novel hybrid MR/computed tomography (CT) atlas method and was then improved using a 3-dimensional deep learning network (U-Net). Intra-fraction motion due to respiration was then quantified. The inter-fraction setup …


A Machine Learning Approach To Estimate The Annihilation Photon Interactions Inside The Scintillator Of A Pet Scanner, Sai Akhil Bharthavarapu Jan 2020

Graduate Theses, Dissertations, and Problem Reports

Biochemical processes are chemical processes that occur in living organisms. They can be studied in nuclear medicine with the help of radioactive tracers. Depending on the radioisotope used, the photons emitted from the body tissue are detected either by single-photon emission computed tomography (SPECT) or by positron emission tomography (PET) scanners. SPECT uses gamma-emitting tracers but yields weaker contrast and spatial resolution than a PET scanner, which uses positron-emitting tracers. PET scans show the metabolic changes occurring at the cellular level in an organ or a tissue. This detection is important because diseases …


Unitary And Symmetric Structure In Deep Neural Networks, Kehelwala Dewage Gayan Maduranga Jan 2020

Theses and Dissertations--Mathematics

Recurrent neural networks (RNNs) have been successfully used on a wide range of sequential data problems. A well-known difficulty in using RNNs is the vanishing or exploding gradient problem. Recently, there have been several different RNN architectures that try to mitigate this issue by maintaining an orthogonal or unitary recurrent weight matrix. One such architecture is the scaled Cayley orthogonal recurrent neural network (scoRNN), which parameterizes the orthogonal recurrent weight matrix through a scaled Cayley transform. This parametrization contains a diagonal scaling matrix whose entries are plus or minus one and therefore cannot be optimized by gradient descent. Thus the …
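The scaled Cayley transform maps a skew-symmetric matrix A and a diagonal matrix D with ±1 entries to an orthogonal matrix W = (I + A)^{-1}(I - A)D. A minimal NumPy sketch follows; the dimensions and values are illustrative, and scoRNN itself learns A by gradient descent while D stays fixed:

```python
import numpy as np

def cayley_orthogonal(A, d):
    """Scaled Cayley transform: W = (I + A)^{-1} (I - A) D,
    where A is skew-symmetric and d holds the fixed +/-1 scaling entries."""
    n = A.shape[0]
    I = np.eye(n)
    D = np.diag(d)
    # solve (I + A) X = (I - A) instead of forming an explicit inverse
    return np.linalg.solve(I + A, I - A) @ D

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = (M - M.T) / 2                  # skew-symmetric part: A.T == -A
d = np.array([1, -1, 1, 1, -1])    # +/-1 scaling entries (not trainable by SGD)
W = cayley_orthogonal(A, d)
print(np.allclose(W.T @ W, np.eye(5)))  # True: W is exactly orthogonal
```

Because I + A is always invertible for skew-symmetric A, any unconstrained update to A still yields an orthogonal W, which is what keeps the recurrent gradients from vanishing or exploding.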


Cheat Detection Using Machine Learning Within Counter-Strike: Global Offensive, Harry Dunham Jan 2020

Senior Independent Study Theses

Deep learning is becoming a steadfast means of solving complex problems that do not have a single concrete or simple solution. One complex problem that fits this description and that has also begun to appear at the forefront of society is cheating, specifically within video games. Therefore, this paper presents a means of developing a deep learning framework that successfully identifies cheaters within the video game Counter-Strike: Global Offensive. This approach yields predictive accuracy metrics that range from 80% to 90% depending on the exact neural network architecture that is employed. This approach is easily scalable and applicable to all types of …


Model Parameter Calibration In Power Systems, Yuhao Wu Jan 2020

Graduate College Dissertations and Theses

In power systems, accurate device modeling is crucial for grid reliability, availability, and resiliency. Many critical tasks, such as planning or even real-time operation decisions, rely on accurate modeling. This research presents an approach for model parameter calibration in power system models using deep learning. Existing calibration methods are based on mathematical approaches that suffer from being ill-posed and thus may have multiple solutions. We try to solve this problem by applying a deep learning architecture that is trained to estimate model parameters from simulated Phasor Measurement Unit (PMU) data. The data recorded after system disturbances proved to have …


Computational Model For Neural Architecture Search, Ram Deepak Gottapu Jan 2020

Doctoral Dissertations

"A long-standing goal in Deep Learning (DL) research is to design efficient architectures for a given dataset that are both accurate and computationally inexpensive. At present, designing deep learning architectures for a real-world application requires both human expertise and considerable effort as they are either handcrafted by careful experimentation or modified from a handful of existing models. This method is inefficient as the process of architecture design is highly time-consuming and computationally expensive.

The research presents an approach to automate the process of deep learning architecture design through a modeling procedure. In particular, it first introduces a framework that treats …


DeepDrawing: A Deep Learning Approach To Graph Drawing, Yong Wang, Zhihua Jin, Qianwen Wang, Weiwei Cui, Tengfei Ma, Huamin Qu Jan 2020

Research Collection School Of Computing and Information Systems

Node-link diagrams are widely used to facilitate network explorations. However, when using a graph drawing technique to visualize networks, users often need to tune different algorithm-specific parameters iteratively by comparing the corresponding drawing results in order to achieve a desired visual effect. This trial and error process is often tedious and time-consuming, especially for non-expert users. Inspired by the powerful data modelling and prediction capabilities of deep learning techniques, we explore the possibility of applying deep learning techniques to graph drawing. Specifically, we propose using a graph-LSTM-based approach to directly map network structures to graph drawings. Given a set of …


Exploration And Implementation Of Neural Ordinary Differential Equations, Long Huu Nguyen, Andy Malinsky Jan 2020

Capstone Showcase

Neural ordinary differential equations (ODEs) have recently emerged as a novel approach to deep learning, leveraging the knowledge of two previously separate domains, neural networks and differential equations. In this paper, we first examine the background and lay the foundation for traditional artificial neural networks. We then present neural ODEs from a rigorous mathematical perspective, and explore their advantages and trade-offs compared to traditional neural nets.
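The continuous-depth idea can be illustrated by integrating a small dynamics network with a fixed-step Euler solver, so the hidden state flows through "depth" instead of passing through discrete layers. This is a minimal NumPy sketch; the weight shapes, step count, and solver choice are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.standard_normal((8, 4)) * 0.1   # illustrative dynamics weights
W2 = rng.standard_normal((4, 8)) * 0.1

def f(h, t):
    """Dynamics network defining dh/dt = f(h, t): a tiny two-layer MLP."""
    return np.tanh(h @ W1.T) @ W2.T

def odeint_euler(f, h0, t0=0.0, t1=1.0, steps=100):
    """Fixed-step Euler integration of the hidden state from t0 to t1.
    A neural ODE replaces a stack of residual layers with this flow."""
    h, t = h0.copy(), t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * f(h, t)   # one Euler step: the continuous residual update
        t += dt
    return h

h0 = rng.standard_normal((3, 4))   # batch of 3 inputs, hidden dim 4
h1 = odeint_euler(f, h0)
print(h1.shape)  # (3, 4)
```

Each Euler step has the same form as a residual block, h + dt * f(h, t), which is the connection between the two domains the abstract describes; practical implementations use adaptive solvers and the adjoint method for gradients.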


Representation Learning With Adversarial Latent Autoencoders, Stanislav Pidhorskyi M.S. Jan 2020

Graduate Theses, Dissertations, and Problem Reports

A large number of deep learning methods applied to computer vision problems require encoder-decoder maps. These methods include, but are not limited to, self-representation learning, generalization, few-shot learning, and novelty detection. Encoder-decoder maps are also useful for photo manipulation, photo editing, super-resolution, etc. Encoder-decoder maps are typically learned using autoencoder networks. Traditionally, autoencoder reciprocity is achieved in the image space using a pixel-wise similarity loss, which has a widely known flaw of producing non-realistic reconstructions. This flaw is typical for the Variational Autoencoder (VAE) family and is not limited to pixel-wise similarity losses, but is common to all methods relying upon …
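The non-realistic-reconstruction flaw can be seen in a toy calculation: when two sharp outputs are equally plausible, the reconstruction that minimizes expected pixel-wise mean-squared error is their average, a blurry in-between that resembles neither. This is a hypothetical 1-D illustration, not an example from the thesis:

```python
import numpy as np

# Two equally plausible sharp "images" (1-D for illustration)
x_a = np.array([1.0, 0.0, 1.0, 0.0])
x_b = np.array([0.0, 1.0, 0.0, 1.0])

# The reconstruction minimizing expected pixel-wise MSE over both
# plausible targets is their pixel-wise average -- a blurry in-between.
x_hat = (x_a + x_b) / 2
print(x_hat)  # [0.5 0.5 0.5 0.5]

mse = lambda x, y: np.mean((x - y) ** 2)

# The blurry average scores better on expected MSE than committing to
# either sharp image, even though it looks realistic as neither.
expected_mse_blur = (mse(x_hat, x_a) + mse(x_hat, x_b)) / 2
expected_mse_sharp = (mse(x_a, x_a) + mse(x_a, x_b)) / 2
print(expected_mse_blur < expected_mse_sharp)  # True
```

This is why adversarial or perceptual objectives, such as those explored for adversarial latent autoencoders, are used in place of (or alongside) plain pixel-wise losses.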