Articles 1 - 3 of 3
One-Stage Blind Source Separation Via A Sparse Autoencoder Framework, Jason Anthony Dabin
Dissertations
Blind source separation (BSS) is the process of recovering individual source transmissions from a received mixture of co-channel signals without a priori knowledge of the channel mixing matrix or transmitted source signals. The received co-channel composite signal is considered to be captured across an antenna array or sensor network and is assumed to contain sparse transmissions, as users are active and inactive aperiodically over time. An unsupervised machine learning approach using an artificial feedforward neural network sparse autoencoder with one hidden layer is formulated for blindly recovering the channel matrix and source activity of co-channel transmissions. The BSS sparse autoencoder …
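The one-hidden-layer sparse autoencoder idea described above can be sketched in a few lines of numpy. This is a generic illustration, not the dissertation's actual formulation: the synthetic sources, the ReLU hidden code, the L1 sparsity penalty, and all variable names (`W_e`, `W_d`, `lam`) are assumptions made here for illustration.

```python
import numpy as np

# Minimal sparse-autoencoder sketch for a BSS-style problem (illustrative only).
# The decoder weights W_d play the role of the estimated mixing matrix; the
# ReLU hidden code plays the role of the estimated sparse source activity.
rng = np.random.default_rng(0)

# Synthetic co-channel mixture: 3 sparse sources, 4 sensors, 500 samples.
n_sources, n_sensors, n_samples = 3, 4, 500
S = rng.standard_normal((n_sources, n_samples)) * (rng.random((n_sources, n_samples)) < 0.2)
A = rng.standard_normal((n_sensors, n_sources))   # unknown mixing matrix
X = A @ S                                         # received mixture

hidden = n_sources
W_e = rng.standard_normal((hidden, n_sensors)) * 0.1   # encoder
W_d = rng.standard_normal((n_sensors, hidden)) * 0.1   # decoder
lr, lam = 1e-3, 1e-3                                   # step size, L1 weight

def loss(X, W_e, W_d, lam):
    H = np.maximum(W_e @ X, 0.0)                  # sparse hidden code
    R = W_d @ H                                   # reconstruction of the mixture
    return 0.5 * np.mean((X - R) ** 2) + lam * np.mean(np.abs(H))

initial = loss(X, W_e, W_d, lam)
for _ in range(200):
    H = np.maximum(W_e @ X, 0.0)
    R = W_d @ H
    err = R - X
    grad_Wd = err @ H.T / n_samples
    dH = (W_d.T @ err) * (H > 0) + lam * np.sign(H)   # backprop + L1 subgradient
    grad_We = dH @ X.T / n_samples
    W_d -= lr * grad_Wd
    W_e -= lr * grad_We
final = loss(X, W_e, W_d, lam)
```

After training, `W_d` approximates the mixing matrix up to the usual BSS permutation and scaling ambiguities, and the hidden activations indicate which sources are active at each sample.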
Unsupervised Learning With Word Embeddings Captures Quiescent Knowledge From Covid-19 And Materials Science Literature, Tasnim H. Gharaibeh
Dissertations
Millions of scientific papers are produced each year, and the scientific literature continues to grow at a head-spinning speed. Thus, massive scientific knowledge exists in text, but the sheer volume of publications makes it difficult, if not impossible, for researchers to keep up to date with discoveries, even within a narrow scientific area. This massive amount of information also makes it difficult to find implicit and hidden connections, relationships, and dependencies within the information that may guide the direction of future research or lead to valuable new insights. So, there is a need for algorithms or models that can …
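The core mechanism behind surfacing such hidden connections with word embeddings is similarity search in vector space: terms that co-occur in related contexts end up with nearby vectors, so candidate terms can be ranked against a query concept by cosine similarity. A toy sketch, assuming random stand-in vectors (real use would load embeddings trained on the COVID-19 or materials-science corpus; the vocabulary and query here are made up):

```python
import numpy as np

# Toy illustration of ranking candidate terms by embedding similarity.
# The vectors are random stand-ins, not trained embeddings.
rng = np.random.default_rng(1)
vocab = ["thermoelectric", "ferromagnetic", "solvent", "catalyst"]
emb = {w: rng.standard_normal(50) for w in vocab}

# A noisy query vector near the "thermoelectric" embedding.
query = emb["thermoelectric"] + 0.1 * rng.standard_normal(50)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank vocabulary terms by similarity to the query concept.
ranked = sorted(vocab, key=lambda w: cosine(query, emb[w]), reverse=True)
```

With trained embeddings, the same ranking step is what lets a query concept (e.g. a target property) surface materials or terms never explicitly linked to it in any single paper.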
Probabilistic Spiking Neural Networks: Supervised, Unsupervised And Adversarial Trainings, Alireza Bagheri
Dissertations
Spiking Neural Networks (SNNs), or third-generation neural networks, are networks of computation units, called neurons, in which each neuron with internal analogue dynamics receives as input and produces as output spiking signals, that is, binary sparse signals. In contrast, second-generation neural networks, termed Artificial Neural Networks (ANNs), rely on simple static non-linear neurons that are known to be energy-intensive, hindering their implementation on energy-limited processors such as mobile devices. The sparse event-based characteristics of SNNs for information transmission and encoding have made them well suited to highly energy-efficient neuromorphic computing architectures. Most existing training algorithms for SNNs are based …
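The contrast drawn above between analogue internal dynamics and binary spike outputs can be seen in a minimal leaky integrate-and-fire (LIF) neuron, a standard SNN building block (the dissertation studies probabilistic neuron models; this deterministic LIF sketch, with illustrative parameter values, is only meant to show the analogue-in, binary-out behaviour):

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential v evolves
# with continuous (analogue) dynamics, but the output is a binary spike train.
def lif_simulate(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    v = 0.0
    spikes = []
    for i in input_current:
        v += dt * (-v / tau + i)       # leaky integration of the input
        if v >= v_thresh:
            spikes.append(1)           # emit a binary spike
            v = v_reset                # reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# Constant drive above the rheobase produces a sparse periodic spike train.
spikes = lif_simulate([0.3] * 50)
```

The output is mostly zeros with occasional ones; it is exactly this sparse, event-driven signalling that neuromorphic hardware exploits for energy efficiency.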