Full-Text Articles in Physical Sciences and Mathematics

A General Framework Of Large-Scale Convex Optimization Using Jensen Surrogates And Acceleration Techniques, Soysal Degirmenci May 2016

McKelvey School of Engineering Theses & Dissertations

In a world where data rates grow faster than computing power, algorithmic acceleration grounded in developments in mathematical optimization plays a crucial role in narrowing the gap between the two. As optimization problems in many fields grow larger, we need faster optimization methods that work well not only in theory but also in practice, by exploiting the underlying state-of-the-art computing technology.

In this document, we introduce a unified framework of large-scale convex optimization using Jensen surrogates, an iterative optimization method that has been used in different fields since the 1970s. After this general treatment, …
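The abstract does not spell out the surrogate construction, but the general surrogate-based (majorize-minimize) iteration that Jensen surrogates instantiate can be sketched with the simplest surrogate of all: a quadratic majorizer built from a gradient-Lipschitz constant. This is an illustrative sketch of the general pattern, not the dissertation's method; the objective, names, and choice of surrogate are assumptions.

```python
import numpy as np

def mm_minimize(grad, x0, L, iters=100):
    """Majorize-minimize with a quadratic surrogate.

    At iterate x_k, a convex objective f with L-Lipschitz gradient is
    majorized by g(x) = f(x_k) + grad(x_k)^T (x - x_k) + (L/2)||x - x_k||^2.
    Minimizing the surrogate gives the next iterate x_k - grad(x_k)/L,
    which never increases f (the defining property of an MM scheme).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - grad(x) / L
    return x

# Example: f(x) = ||x - c||^2 has gradient 2(x - c) and Lipschitz constant 2.
c = np.array([1.0, -3.0])
x_star = mm_minimize(lambda x: 2.0 * (x - c), np.zeros(2), L=2.0)
# x_star converges to the minimizer c
```

Jensen surrogates replace the quadratic above with a bound obtained from Jensen's inequality, but the outer loop — majorize at the current iterate, then minimize the surrogate — is the same.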


Data Driven Sample Generator Model With Application To Classification, Alvaro Emilio Ulloa Cerna May 2016

Mathematics & Statistics ETDs

Despite rapidly growing interest, progress in the study of relations between physiological abnormalities and mental disorders is hampered by the complexity of the human brain and the high cost of data collection. The complexity can be captured by machine learning approaches, but these may still require significant amounts of data. In this thesis, we seek to mitigate the latter challenge by developing a data-driven sample generator model for producing synthetic yet realistic training data. Our method greatly improves generalization in classifying schizophrenia patients and healthy controls from their structural magnetic resonance images. A feed-forward neural network trained …


Singular Value Computation And Subspace Clustering, Qiao Liang Jan 2015

Theses and Dissertations--Mathematics

In this dissertation we discuss two problems. In the first part, we consider the problem of computing a few extreme eigenvalues of a symmetric definite generalized eigenvalue problem, or a few extreme singular values of a large and sparse matrix. The method of choice for computing a few extreme eigenvalues of a large symmetric matrix is the Lanczos method or the implicitly restarted Lanczos method. These methods usually employ a shift-and-invert transformation to accelerate convergence, which is not practical for truly large problems. With this in mind, Golub and Ye proposed an inverse-free preconditioned Krylov subspace method, …
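For context, the standard shift-and-invert Lanczos approach that the abstract contrasts against can be tried directly through SciPy's ARPACK wrapper. Note that this sketch factorizes the shifted matrix — exactly the step that becomes impractical at truly large scale and that an inverse-free method avoids; the test matrix and parameters here are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Large sparse symmetric matrix: the 1-D discrete Laplacian.
n = 2000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Plain Lanczos converges slowly toward the smallest (clustered)
# eigenvalues.  Shift-and-invert with sigma=0 maps them to the largest
# eigenvalues of (A - sigma*I)^{-1}, which Lanczos finds quickly --
# at the cost of a sparse factorization of the shifted matrix.
vals = eigsh(A, k=3, sigma=0, which="LM", return_eigenvectors=False)
# Analytically, the smallest eigenvalues are 4*sin^2(k*pi/(2*(n+1))), k=1,2,3.
```

For a matrix too large to factorize, this `sigma=` path is unavailable, which is the motivation for the inverse-free preconditioned methods the abstract describes.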


Convergence Of A Reinforcement Learning Algorithm In Continuous Domains, Stephen Carden Aug 2014

All Dissertations

In the field of reinforcement learning, Markov decision processes with a finite number of states and actions have been well studied, and there exist algorithms capable of producing a sequence of policies that converges to an optimal policy with probability one. Convergence guarantees for problems with continuous states also exist. Until recently, however, no online algorithm for continuous states and continuous actions had been proven to produce optimal policies. This dissertation contains the results of research into reinforcement learning algorithms for problems in which both the state and action spaces are continuous. The problems to be solved are introduced formally as …
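As a point of reference for the well-studied finite case the abstract mentions, tabular Q-learning on a finite MDP converges to the optimal action values with probability one under standard step-size and exploration conditions. The two-state MDP, exploration rate, and step-size schedule below are illustrative assumptions, not the dissertation's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical deterministic MDP with 2 states and 2 actions:
# P[(state, action)] = (next_state, reward).
P = {(0, 0): (0, 0.0), (0, 1): (1, 1.0),
     (1, 0): (0, 0.0), (1, 1): (1, 2.0)}
gamma = 0.9

Q = np.zeros((2, 2))
s = 0
for t in range(1, 20001):
    # Epsilon-greedy exploration keeps every state-action pair visited.
    a = int(rng.integers(2)) if rng.random() < 0.1 else int(np.argmax(Q[s]))
    s2, r = P[(s, a)]
    alpha = 1.0 / t**0.6  # decaying step sizes (Robbins-Monro conditions)
    Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
    s = s2

# Optimal policy: take action 1 in both states; Q(1,1) approaches
# 2 / (1 - gamma) = 20.
```

Extending such probability-one guarantees to continuous state and action spaces, where the table `Q` no longer exists, is precisely the gap the dissertation addresses.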


The Gaussian Radon Transform For Banach Spaces, Irina Holmes Jan 2014

LSU Doctoral Dissertations

The classical Radon transform can be thought of as a way to obtain the density of an n-dimensional object from its (n-1)-dimensional sections in different directions. A generalization of this transform to infinite-dimensional spaces has the potential to allow one to obtain a function defined on an infinite-dimensional space from its conditional expectations. We work within a standard framework in infinite-dimensional analysis, that of abstract Wiener spaces, developed by L. Gross. The main obstacle in infinite dimensions is the absence of a useful version of Lebesgue measure. To overcome this, we work with Gaussian measures. Specifically, we construct Gaussian measures …
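The finite-dimensional transform the abstract starts from can be illustrated numerically in 2-D: each projection integrates the density along all parallel lines in one direction. This discretized sketch (rotate, then sum one axis) is illustrative only; the dissertation's infinite-dimensional, Gaussian-measure construction is a different object.

```python
import numpy as np
from scipy.ndimage import rotate

def radon_2d(image, angles_deg):
    """Discrete analogue of the 2-D Radon transform: for each angle,
    rotate the image and sum along one axis, approximating the line
    integrals of the density in that direction."""
    return np.stack([
        rotate(image, theta, reshape=False, order=1).sum(axis=0)
        for theta in angles_deg
    ])

# A simple 2-D "object": a centered disk of unit density.
n = 64
y, x = np.mgrid[:n, :n] - n / 2
disk = (x**2 + y**2 < (n / 4)**2).astype(float)

sinogram = radon_2d(disk, angles_deg=range(0, 180, 10))
# By rotational symmetry, every projection of the disk is nearly identical,
# and each projection sums to (approximately) the disk's total mass.
```

Each row of the sinogram is one (n-1)-dimensional "section" pattern; classical inversion formulas recover the density from the full family of such projections.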