Neuroscience and Neurobiology Commons

Articles 1 - 4 of 4

Full-Text Articles in Neuroscience and Neurobiology

Computations Of Top-Down Attention By Modulating V1 Dynamics, David Berga, Xavier Otazu May 2019

MODVIS Workshop

The human visual system processes information that defines what is visually conspicuous (saliency) to our perception, guiding eye movements towards certain objects depending on scene context and their feature characteristics. However, attention is also known to be biased by top-down influences (relevance), which drive voluntary eye movements guided by goal-directed behavior and memory. We propose a unified model of the visual cortex able to predict, among other effects, top-down visual attention and saccadic eye movements. First, we simulate activations of early mechanisms of the visual system (RGC/LGN) by processing distinct image chromatic opponencies with Gabor-like filters. Second, we use a cortical …
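For a concrete picture of the early stage this abstract describes, the Python sketch below derives simple chromatic-opponency channels from an RGB image and filters them with a small Gabor bank. The channel definitions and filter parameters are illustrative assumptions, not the authors' actual model.

```python
# Minimal sketch of RGC/LGN-like processing: chromatic-opponency channels
# filtered by a small Gabor bank. Parameters are illustrative assumptions.
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size=21, wavelength=8.0, theta=0.0, sigma=4.0):
    """Real (cosine-phase) Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength)
    return envelope * carrier

def opponent_channels(rgb):
    """Crude luminance, red-green and blue-yellow opponency (assumed form)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    lum = (r + g + b) / 3.0
    rg = r - g
    by = b - (r + g) / 2.0
    return np.stack([lum, rg, by])

def early_responses(rgb, orientations=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Rectified Gabor responses for each opponency channel and orientation."""
    channels = opponent_channels(rgb.astype(float) / 255.0)
    responses = []
    for ch in channels:
        for theta in orientations:
            k = gabor_kernel(theta=theta)
            responses.append(np.maximum(fftconvolve(ch, k, mode="same"), 0.0))
    return np.stack(responses)  # (channels * orientations, H, W)

# Usage on a random image: 3 channels x 4 orientations = 12 response maps
img = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
print(early_responses(img).shape)  # (12, 128, 128)
```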


Predicting Fixations From Deep And Low-Level Features, Matthias Kümmerer, Thomas S.A. Wallis, Leon A. Gatys, Matthias Bethge May 2017

MODVIS Workshop

Learning what properties of an image are associated with human gaze placement is important both for understanding how biological systems explore the environment and for computer vision applications. Recent advances in deep learning enable us, for the first time, to explain a significant portion of the information expressed in the spatial fixation structure. Our saliency model DeepGaze II uses the VGG network (trained on object recognition in the ImageNet challenge) to convert an image into a high-dimensional feature space, which is then read out by a second, very simple network to yield a density prediction. DeepGaze II is currently the …
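As a rough illustration of the readout idea in this abstract, the PyTorch sketch below freezes a VGG feature extractor and trains only a small pointwise readout that maps the deep features to a fixation density via a spatial softmax. The layer sizes and exact architecture here are assumptions, not the actual DeepGaze II configuration.

```python
# Sketch of a frozen-features + simple-readout saliency model.
# Layer choices are illustrative, not the exact DeepGaze II setup.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg19

class SimpleReadoutSaliency(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = vgg19(weights=None).features  # use pretrained weights in practice
        for p in self.features.parameters():
            p.requires_grad = False  # deep features stay fixed
        self.readout = nn.Sequential(  # very simple pointwise readout network
            nn.Conv2d(512, 16, kernel_size=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),
        )

    def forward(self, x):
        feats = self.features(x)             # (B, 512, h, w)
        logits = self.readout(feats)         # (B, 1, h, w)
        b, _, h, w = logits.shape
        log_density = F.log_softmax(logits.view(b, -1), dim=1)
        return log_density.view(b, 1, h, w)  # log p(fixation | location)

# Usage: predicted log-density over spatial locations for one image
model = SimpleReadoutSaliency()
log_p = model(torch.rand(1, 3, 224, 224))
print(log_p.shape)  # torch.Size([1, 1, 7, 7])
```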


Using Deep Features To Predict Where People Look, Matthias Kümmerer, Matthias Bethge May 2016

MODVIS Workshop

When free-viewing scenes, the first few fixations of human observers are driven in part by bottom-up attention. We seek to characterize this process by extracting all information from images that can be used to predict fixation densities (Kümmerer et al., PNAS, 2015). If we ignore time and observer identity, the average amount of information is slightly larger than 2 bits per image for the MIT 1003 dataset. The minimum amount of information is 0.3 bits and the maximum 5.2 bits. Before the rise of deep neural networks, the best models were able to capture 1/3 of this information on average. …
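The information-theoretic bookkeeping in this abstract can be illustrated with a small sketch: compute a model's information gain over a baseline (in bits per fixation) and compare it with the gain of a gold-standard density. The toy densities and fixation coordinates below are made up purely for illustration; real evaluation uses fitted models and recorded fixations.

```python
# Toy illustration of information gain over a baseline, in bits per fixation,
# and the fraction of explainable information a model captures.
import numpy as np

def log2_density(p):
    """Normalize to a probability map and take log2 (epsilon for stability)."""
    p = p / p.sum()
    return np.log2(p + 1e-12)

def information_gain(model_p, baseline_p, fixations):
    """Average bits per fixation gained over the baseline at fixated pixels."""
    diff = log2_density(model_p) - log2_density(baseline_p)
    return float(np.mean([diff[y, x] for y, x in fixations]))

# Toy densities: uniform baseline, a peaked "gold standard", a weaker model.
h, w = 32, 32
baseline = np.ones((h, w)) / (h * w)
yy, xx = np.mgrid[0:h, 0:w]
gold = np.exp(-((yy - 16) ** 2 + (xx - 10) ** 2) / 40.0)
gold /= gold.sum()
model = 0.5 * gold + 0.5 * baseline  # partial match to the gold standard

fixations = [(16, 10), (15, 11), (17, 9)]  # (row, col) fixation locations
ig_model = information_gain(model, baseline, fixations)
ig_gold = information_gain(gold, baseline, fixations)
print(f"model captures {ig_model / ig_gold:.0%} of the explainable information")
```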


Putting Saliency In Its Place, John K. Tsotsos May 2015

MODVIS Workshop

The role of attention, and the place within the visual processing stream where the concept of saliency has been situated, are critically examined by considering the experimental evidence and performing tests that link experiment to computation.