Open Access. Powered by Scholars. Published by Universities.®
Articles 1 - 3 of 3
Full-Text Articles in Physical Sciences and Mathematics
Improving XRD Analysis With Machine Learning, Rachel E. Drapeau
Theses and Dissertations
X-ray diffraction (XRD) analysis is an inexpensive method to quantify the relative proportions of mineral phases in a rock or soil sample. However, the analytical software available for XRD requires extensive user input to choose which phases to include in the analysis. Consequently, the accuracy of an analysis depends greatly on the experience of the analyst, especially as the number of phases in a sample increases (Raven & Self, 2017; Omotoso, 2006). The purpose of this project is to test whether incorporating machine learning methods into XRD software can improve the accuracy of analyses by assisting in the phase-picking process. In order to provide …
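The phase-picking step described above can be illustrated with a toy scoring routine: rank candidate phases by how many of their reference diffraction peaks appear in the measured pattern. The peak values and phase library here are made-up illustrations, not the thesis's data or method — just a minimal sketch of the decision an ML-assisted tool would support.

```python
def phase_match_score(sample_peaks, phase_peaks, tol=0.2):
    """Fraction of a candidate phase's reference peaks (2-theta, degrees)
    that appear in the measured pattern within a tolerance -- a toy
    stand-in for the phase-picking step the project aims to automate."""
    hits = sum(1 for p in phase_peaks
               if any(abs(p - s) <= tol for s in sample_peaks))
    return hits / len(phase_peaks)

# Illustrative reference peak lists (these 2-theta values are invented)
library = {
    "quartz":  [20.9, 26.6, 36.5, 50.1],
    "calcite": [23.0, 29.4, 36.0, 39.4],
}
sample = [20.85, 26.55, 29.3, 36.45, 50.2]

# Rank candidate phases by match score, best first
ranked = sorted(library, key=lambda ph: -phase_match_score(sample, library[ph]))
print(ranked[0])  # quartz
```

A real analysis would of course weigh peak intensities and account for overlapping reflections; the point is only that phase selection is a ranking problem a learned model could inform.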
A Survey Of Graph Neural Networks On Synthetic Data, Brigham Stone Carson
Theses and Dissertations
We relate properties of attributed random graph models to the performance of GNN architectures. We identify regimes where GNNs outperform feedforward neural networks and non-attributed graph clustering methods. We compare GNN performance on our synthetic benchmark to performance on popular real-world datasets. We analyze the theoretical foundations for weak recovery in GNNs for popular one- and two-layer architectures. We obtain an explicit formula for the performance of a 1-layer GNN, and we obtain useful insights on how to proceed in the 2-layer case. Finally, we improve the bound for a notable result on the GNN size generalization problem by 1.
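For readers unfamiliar with the one-layer architectures the abstract refers to, here is a minimal GCN-style forward pass on a small attributed graph. The toy graph, features, and weights are illustrative assumptions, not the thesis's benchmark or code.

```python
import math

def gnn_layer(adj, X, W):
    """One graph-convolution layer: add self-loops, symmetrically
    normalize the adjacency, then compute ReLU(A_norm @ X @ W)."""
    n = len(adj)
    # Adjacency with self-loops
    A = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    deg = [sum(row) for row in A]
    # Symmetric normalization: D^{-1/2} A D^{-1/2}
    A_norm = [[A[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
              for i in range(n)]
    # Neighborhood aggregation: M = A_norm @ X
    f = len(X[0])
    M = [[sum(A_norm[i][k] * X[k][j] for k in range(n)) for j in range(f)]
         for i in range(n)]
    # Linear transform + ReLU: H = max(M @ W, 0)
    out = len(W[0])
    return [[max(sum(M[i][k] * W[k][j] for k in range(f)), 0.0)
             for j in range(out)] for i in range(n)]

# Toy path graph on 4 nodes, each with a 2-dimensional attribute vector
adj = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]
W = [[1.0, 0.0], [0.0, 1.0]]  # identity weights, for readability

H = gnn_layer(adj, X, W)
print(len(H), len(H[0]))  # 4 2
```

Node attributes are what distinguish this setting from non-attributed graph clustering: each node's output mixes its own features with its neighbors', which is the mechanism whose weak-recovery behavior the thesis analyzes.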
Language Modeling Using Image Representations Of Natural Language, Seong Eun Cho
Theses and Dissertations
This thesis presents the training of an end-to-end transformer-based autoencoder model, with an encoder that encodes sentences into fixed-length latent vectors and a decoder that reconstructs the sentences from image representations. Encoding and decoding sentences to and from these image representations are central to the model design. This method allows new sentences to be generated by traversing the Euclidean latent space, which makes vector arithmetic possible with sentences. Machines excel at dealing with concrete numbers and calculations, but do not possess an innate infrastructure designed to help them understand abstract concepts like natural language. In order for a …
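The latent-space operations the abstract mentions — traversing the Euclidean space and doing vector arithmetic with sentences — can be sketched abstractly. The vectors below are toy stand-ins, not real encoder output; in the thesis's setup each vector would come from the encoder and be fed to the decoder to produce a sentence.

```python
def interpolate(z_a, z_b, t):
    """Linear interpolation between two fixed-length latent vectors.
    Decoding points along this path (t in [0, 1]) is how new sentences
    are generated by traversing the latent space."""
    return [(1 - t) * a + t * b for a, b in zip(z_a, z_b)]

def analogy(z_a, z_b, z_c):
    """Vector arithmetic in latent space: z_b - z_a + z_c, the familiar
    offset trick, here applied to whole-sentence vectors."""
    return [b - a + c for a, b, c in zip(z_a, z_b, z_c)]

# Toy latent vectors (illustrative values, not actual encodings)
z1 = [0.0, 1.0, 2.0]
z2 = [2.0, 1.0, 0.0]

mid = interpolate(z1, z2, 0.5)
print(mid)  # [1.0, 1.0, 1.0]
```

The fixed-length constraint is what makes these operations well defined: every sentence, regardless of length, maps to a point in the same Euclidean space.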