Articles 1 - 9 of 9
Full-Text Articles in Computational Neuroscience
Self-Supervised Pretraining And Transfer Learning On Fmri Data With Transformers, Sean Paulsen
Dartmouth College Ph.D Dissertations
Transfer learning is a machine learning technique founded on the idea that knowledge acquired by a model during “pretraining” on a source task can be transferred to the learning of a target task. Successful transfer learning can result in improved performance, faster convergence, and reduced demand for data. This technique is particularly desirable for the task of brain decoding in the domain of functional magnetic resonance imaging (fMRI), wherein even the most modern machine learning methods can struggle to decode labelled features of brain images. This challenge is due to the highly complex underlying signal, physical and neurological differences between …
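The pretrain-then-fine-tune workflow the abstract describes can be illustrated with a minimal numpy sketch. This is not the dissertation's transformer architecture; the data, dimensions, and the PCA "encoder" standing in for self-supervised pretraining are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for fMRI data: rows are volumes, columns are voxels.
# (Dimensions are illustrative, not those used in the dissertation.)
source_X = rng.normal(size=(200, 50))   # large unlabelled source set
target_X = rng.normal(size=(20, 50))    # small labelled target set
target_y = (target_X[:, 0] > 0).astype(int)

# "Pretraining": learn a low-dimensional representation from the source
# data alone. Here the top principal components stand in for features a
# transformer encoder might learn via self-supervision.
source_X -= source_X.mean(axis=0)
_, _, components = np.linalg.svd(source_X, full_matrices=False)
encoder = components[:10].T             # 50 voxels -> 10 features, frozen

# "Transfer": fit only a simple decoder on the target task, reusing the
# frozen encoder, so the target task needs far less labelled data.
feats = target_X @ encoder
w = np.linalg.lstsq(feats, target_y * 2.0 - 1.0, rcond=None)[0]
preds = (feats @ w > 0).astype(int)
accuracy = (preds == target_y).mean()
```

Freezing the encoder and fitting only the small decoder is what yields the reduced demand for labelled data that the abstract highlights.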
Multimodal Neuron Classification Based On Morphology And Electrophysiology, Aqib Ahmad
Graduate Theses, Dissertations, and Problem Reports
Categorizing neurons into different types to understand neural circuits and ultimately brain function is a major challenge in neuroscience. While electrical properties are critical in defining a neuron, its morphology is equally important. Advancements in single-cell analysis methods have allowed neuroscientists to simultaneously capture multiple data modalities from a neuron. We propose a method to classify neurons using both morphological structure and electrophysiology. Whereas current approaches rely on a limited analysis of morphological features, we propose a new graph neural network to learn representations that more comprehensively account for the complexity of the shape of neuronal structures. In …
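The core idea of a graph neural network over a neuron morphology can be sketched in a few lines of numpy. This is a hedged illustration, not the thesis's model: the tiny graph, the two node features, and the single round of mean-aggregation message passing are all assumptions made for the example.

```python
import numpy as np

# Toy neuron morphology as a graph: nodes are reconstruction points
# (soma, branch points, tips); edges follow the neurite tree.
edges = [(0, 1), (1, 2), (1, 3), (3, 4)]
n = 5
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Node features, e.g. branch diameter and path distance from the soma.
X = np.array([[1.0, 0.0],
              [0.8, 1.0],
              [0.3, 2.0],
              [0.5, 2.0],
              [0.2, 3.0]])

# One round of mean-aggregation message passing: each node's new
# representation mixes its own features with its neighbours' average.
deg = A.sum(axis=1, keepdims=True)
H = 0.5 * X + 0.5 * (A @ X) / deg

# A graph-level embedding for classification: mean-pool over nodes.
embedding = H.mean(axis=0)
```

Stacking several such rounds lets information propagate along the neurite tree, which is how a graph network can capture branching structure that a fixed list of hand-picked morphological features cannot.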
Computational Mechanisms Of Face Perception, Jinge Wang
Graduate Theses, Dissertations, and Problem Reports
The intertwined history of artificial intelligence and neuroscience has significantly impacted their development, with AI arising from and evolving alongside neuroscience. The remarkable performance of deep learning has inspired neuroscientists to investigate and utilize artificial neural networks as computational models to address biological questions. Studying the brain and its operational mechanisms can greatly enhance our understanding of neural networks, which has crucial implications for developing efficient AI algorithms. Many of the advanced perceptual and cognitive skills of biological systems can now be achieved by artificial intelligence systems, which is transforming our knowledge of brain function. Thus, the need for …
Contrastive Learning For Unsupervised Auditory Texture Models, Christina Trexler
Computer Science and Computer Engineering Undergraduate Honors Theses
Sounds with a high level of stationarity, also known as sound textures, have perceptually relevant features which can be captured by stimulus-computable models. This makes texture-like sounds, such as those made by rain, wind, and fire, an appealing test case for understanding the underlying mechanisms of auditory recognition. Previous auditory texture models typically measured statistics from auditory filter bank representations, and the statistics they used were somewhat ad hoc, hand-engineered through a process of trial and error. Here, we investigate whether a better auditory texture representation can be obtained via contrastive learning, taking advantage of the stationarity of auditory textures to …
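The contrastive objective behind this kind of approach is commonly the InfoNCE loss: embeddings of two excerpts from the same texture should match each other better than excerpts from different textures. The sketch below is a generic numpy illustration under that assumption, not the thesis's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: each anchor should match its own positive
    (e.g. another excerpt of the same texture) against all others."""
    # L2-normalise embeddings so similarity is cosine similarity.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature          # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The correct pairing lies on the diagonal.
    return -np.mean(np.diag(log_probs))

# Embeddings of two excerpts from each of 8 textures (all illustrative).
# Stationarity means two excerpts of one texture share statistics, so
# their embeddings should be close; we mimic that with small noise.
emb_a = rng.normal(size=(8, 16))
emb_b = emb_a + 0.05 * rng.normal(size=(8, 16))

loss_matched = info_nce(emb_a, emb_b)
loss_shuffled = info_nce(emb_a, emb_b[::-1])   # mispaired textures
```

Training drives `loss_matched` down, which is exactly how the stationarity of textures can be exploited as a free supervisory signal.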
A Defense Of Pure Connectionism, Alex B. Kiefer
Dissertations, Theses, and Capstone Projects
Connectionism is an approach to neural-networks-based cognitive modeling that encompasses the recent deep learning movement in artificial intelligence. It came of age in the 1980s, with its roots in cybernetics and earlier attempts to model the brain as a system of simple parallel processors. Connectionist models center on statistical inference within neural networks with empirically learnable parameters, which can be represented as graphical models. More recent approaches focus on learning and inference within hierarchical generative models. Contra influential and ongoing critiques, I argue in this dissertation that the connectionist approach to cognitive science possesses in principle (and, as is becoming …
Predicting Fixations From Deep And Low-Level Features, Matthias Kümmerer, Thomas S.A. Wallis, Leon A. Gatys, Matthias Bethge
MODVIS Workshop
Learning what properties of an image are associated with human gaze placement is important both for understanding how biological systems explore the environment and for computer vision applications. Recent advances in deep learning for the first time enable us to explain a significant portion of the information expressed in the spatial fixation structure. Our saliency model DeepGaze II uses the VGG network (trained on object recognition in the ImageNet challenge) to convert an image into a high-dimensional feature space, which is then read out by a second, very simple network to yield a density prediction. DeepGaze II is right now the …
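The "very simple readout" step can be sketched as a per-channel linear combination of deep feature maps followed by a spatial softmax, so the output is a probability density over fixation locations. The sketch below uses random stand-in features and weights; in the actual model the features come from VGG and the readout weights are learned from fixation data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for deep features: C feature maps over an H x W image grid
# (a real model would take these from a pretrained network like VGG).
C, H, W = 4, 8, 8
features = rng.normal(size=(C, H, W))

# Readout: a 1x1 linear combination of feature channels (one weight
# per channel), then a spatial softmax so the scores become a density.
w = rng.normal(size=C)
saliency = np.tensordot(w, features, axes=1)   # (H, W) score map
saliency -= saliency.max()                     # numerical stability
density = np.exp(saliency)
density /= density.sum()                       # sums to 1 over locations
```

Because the density is an explicit probability distribution, the model can be trained and evaluated with log-likelihood, which connects directly to the information-theoretic evaluation discussed in the related abstracts below.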
Machine Learning Methods For Medical And Biological Image Computing, Rongjian Li
Computer Science Theses & Dissertations
Medical and biological imaging technologies provide valuable visual information about the structure and function of an organ, from the level of individual molecules to the whole object. The brain is the most complex organ in the body, and it increasingly attracts intense research attention with the rapid development of medical and biological imaging technologies. The massive amount of high-dimensional brain imaging data being generated makes computational methods for efficiently analyzing those images highly sought after. The current approach of computational methods using hand-crafted features does not scale with the increasing number of brain images, hindering the pace of scientific discoveries …
How Deep Is The Feature Analysis Underlying Rapid Visual Categorization?, Sven Eberhardt, Jonah Cader, Thomas Serre
MODVIS Workshop
Rapid categorization paradigms have a long history in experimental psychology: Characterized by short presentation times and fast behavioral responses, these tasks highlight both the speed and ease with which our visual system processes natural object categories. Previous studies have shown that feed-forward hierarchical models of the visual cortex provide a good fit to human visual decisions. At the same time, recent work has demonstrated significant gains in object recognition accuracy with increasingly deep hierarchical architectures: From AlexNet to VGG to Microsoft CNTK – the field of computer vision has championed both depth and accuracy. But it is unclear how well …
Using Deep Features To Predict Where People Look, Matthias Kümmerer, Matthias Bethge
MODVIS Workshop
When free-viewing scenes, the first few fixations of human observers are driven in part by bottom-up attention. We seek to characterize this process by extracting all information from images that can be used to predict fixation densities (Kuemmerer et al, PNAS, 2015). If we ignore time and observer identity, the average amount of information is slightly larger than 2 bits per image for the MIT 1003 dataset. The minimum amount of information is 0.3 bits and the maximum 5.2 bits. Before the rise of deep neural networks the best models were able to capture 1/3 of this information on average. …
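The "bits per image" figures quoted above come from an information-gain evaluation: the improvement of a model's log-likelihood over a no-information baseline, measured in log base 2. A minimal worked example on a toy 1D density (values illustrative, not MIT1003 data):

```python
import numpy as np

# Toy predicted fixation density over 6 locations, a uniform baseline,
# and a handful of observed fixation indices. All values are invented
# for illustration; real evaluations use 2D densities over images.
model_density = np.array([0.05, 0.10, 0.40, 0.30, 0.10, 0.05])
baseline = np.full(6, 1.0 / 6.0)          # uniform (no-information) model
fixations = np.array([2, 2, 3, 2, 4])     # indices of fixated locations

# Information gain in bits per fixation: how much better the model's
# log-likelihood of the observed fixations is than the baseline's.
ig = np.mean(np.log2(model_density[fixations])
             - np.log2(baseline[fixations]))
```

A model that captures a third of, say, 2 bits of explainable information would score about 0.67 bits on this measure, which is how model progress before and after deep networks can be compared on a common scale.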