Open Access. Powered by Scholars. Published by Universities.®

Digital Commons Network


Articles 1 - 2 of 2

Full-Text Articles in Entire DC Network

Deep Learning Via Stacked Sparse Autoencoders For Automated Voxel-Wise Brain Parcellation Based On Functional Connectivity, Céline Gravelines Apr 2014


Electronic Thesis and Dissertation Repository

Functional brain parcellation – the delineation of brain regions based on functional connectivity – is an active research area that still lacks an ideal subject-specific solution independent of anatomical composition, manual feature engineering, or heavily labelled examples. Deep learning is a cutting-edge area of machine learning at the forefront of current artificial intelligence developments. Specifically, autoencoders are artificial neural networks that can be stacked to form hierarchical sparse deep models which compress, organize, and extract high-level features without labelled training data, enabling unsupervised learning. This thesis presents a novel application of stacked sparse autoencoders to the problem of parcellating …
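The idea the abstract describes – training sparse autoencoder layers without labels and stacking them so each layer learns features of the previous layer's codes – can be sketched as follows. This is an illustrative NumPy toy, not the thesis's implementation; the dimensions, learning rate, and L1 sparsity penalty are assumptions chosen for the example, and the random data stands in for real connectivity features.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_sparse_ae(X, n_hidden, lam=1e-3, lr=0.5, epochs=300):
    """Train one autoencoder layer by gradient descent on squared
    reconstruction error plus an L1 sparsity penalty on the hidden codes."""
    n, n_vis = X.shape
    W1 = rng.normal(0, 0.1, (n_vis, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, n_vis)); b2 = np.zeros(n_vis)
    losses = []
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)                  # encode
        R = sigmoid(H @ W2 + b2)                  # decode
        losses.append(0.5 * np.mean((R - X) ** 2) + lam * np.mean(np.abs(H)))
        dR = (R - X) * R * (1 - R)                # backprop through decoder
        dH = (dR @ W2.T + lam * np.sign(H)) * H * (1 - H)
        W2 -= lr * H.T @ dR / n;  b2 -= lr * dR.mean(axis=0)
        W1 -= lr * X.T @ dH / n;  b1 -= lr * dH.mean(axis=0)
    encode = lambda Z: sigmoid(Z @ W1 + b1)
    return encode, losses

# Toy stand-in for voxel-wise connectivity features (40 samples, 20 features).
X = (rng.random((40, 20)) < 0.3).astype(float)

# Stacking: the second layer trains on the first layer's hidden codes.
enc1, losses1 = train_sparse_ae(X, 10)
enc2, losses2 = train_sparse_ae(enc1(X), 5)
codes = enc2(enc1(X))                             # high-level features
```

In a parcellation setting, the stacked codes would then feed a clustering step; here the point is only that each layer trains unsupervised, on reconstruction alone.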


An Evolutionary Approximation To Contrastive Divergence In Convolutional Restricted Boltzmann Machines, Ryan R. Mccoppin Jan 2014


Browse all Theses and Dissertations

Deep learning is an emerging area of machine learning that exploits multi-layered neural networks to extract invariant relationships from large data sets. Deep learning uses layers of non-linear transformations to represent data in abstract and discrete forms. Several architectures have been developed over the past few years specifically to process images, including the Convolutional Restricted Boltzmann Machine. The Boltzmann Machine is trained using contrastive divergence, an approximate gradient-based training algorithm. Gradient-based training methods have no guarantee of reaching an optimal solution and tend to search a limited region of the solution space. In this thesis, we present …
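Contrastive divergence, the gradient-based baseline this abstract contrasts with an evolutionary approach, can be sketched for the plain (non-convolutional) binary RBM as follows. This is a minimal CD-1 illustration, not the thesis's code; the layer sizes, learning rate, and toy patterns are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_epoch(V, W, b, c, lr=0.1):
    """One CD-1 update on a batch of binary visible vectors V."""
    ph = sigmoid(V @ W + c)                        # positive phase: P(h=1|v)
    h = (rng.random(ph.shape) < ph).astype(float)  # sample hidden units
    pv = sigmoid(h @ W.T + b)                      # one Gibbs step back to visibles
    v_neg = (rng.random(pv.shape) < pv).astype(float)
    ph_neg = sigmoid(v_neg @ W + c)                # negative-phase statistics
    n = len(V)
    W += lr * (V.T @ ph - v_neg.T @ ph_neg) / n    # approximate gradient step
    b += lr * (V - v_neg).mean(axis=0)
    c += lr * (ph - ph_neg).mean(axis=0)

def recon_error(V, W, b, c):
    """Mean-field reconstruction error, a common training diagnostic."""
    pv = sigmoid(sigmoid(V @ W + c) @ W.T + b)
    return np.mean((V - pv) ** 2)

# Two repeated 6-bit patterns stand in for a real data set.
V = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]] * 20, dtype=float)
W = rng.normal(0, 0.1, (6, 4)); b = np.zeros(6); c = np.zeros(4)

err_before = recon_error(V, W, b, c)
for _ in range(300):
    cd1_epoch(V, W, b, c)
err_after = recon_error(V, W, b, c)
```

Because each update follows a single noisy Gibbs step rather than the true log-likelihood gradient, CD explores only a local neighbourhood of parameter space, which is precisely the limitation an evolutionary approximation aims to relax.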