Neuroscience and Neurobiology Commons

Articles 1 - 6 of 6

Full-Text Articles in Neuroscience and Neurobiology

Characterizing Receptive Field Selectivity In Area V2, Corey M. Ziemba, Robbe Lt Goris, J Anthony Movshon, Eero P. Simoncelli May 2015

MODVIS Workshop

The computations performed by neurons in area V1 are reasonably well understood, but computation in subsequent areas such as V2 has been more difficult to characterize. When stimulated with visual stimuli traditionally used to investigate V1, such as sinusoidal gratings, V2 neurons exhibit similar selectivity (but with larger receptive fields and weaker responses) relative to V1 neurons. However, we find that V2 responses to synthetic stimuli designed to produce naturalistic patterns of joint activity in a model V1 population are more vigorous than responses to control stimuli that lack this naturalistic structure (Freeman et al., 2013). Armed with this signature …
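
As a rough illustration of how such a preference might be quantified, the sketch below computes a simple modulation index contrasting a neuron's responses to naturalistic texture stimuli against spectrally matched control stimuli. The index, the function name, and the firing-rate values are illustrative assumptions, not the analysis reported by the authors.

    import numpy as np

    def modulation_index(r_naturalistic, r_control):
        """Positive values indicate stronger responses to naturalistic
        structure than to controls lacking it (illustrative index only)."""
        r_nat = np.mean(r_naturalistic)
        r_ctl = np.mean(r_control)
        return (r_nat - r_ctl) / (r_nat + r_ctl)

    # Hypothetical trial-averaged firing rates (spikes/s) for one V2 neuron.
    rates_naturalistic = np.array([42.0, 38.5, 45.2, 40.1])
    rates_control = np.array([30.3, 28.7, 33.0, 29.5])
    print(modulation_index(rates_naturalistic, rates_control))  # ~0.15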


‘Edge’ Integration Explains Contrast And Assimilation In A Gradient Lightness Illusion, Michael E. Rudd May 2015

MODVIS Workshop

In the ‘phantom’ illusion (Galmonte, Soranzo, Rudd, & Agostini, submitted), either an incremental or a decremental target, when surrounded by a luminance gradient, can be made to appear as an increment or a decrement, depending on the gradient width. For wide gradients, incremental targets appear as increments and decremental targets appear as decrements. For narrow gradients, the reverse is true. Here, I model these phenomena with a two-stage neural lightness theory (Rudd, 2013, 2014) in which local steps in log luminance are first encoded by oriented spatial filters operating on a log-transformed version of the image; then the filter …
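
A minimal sketch of that first stage, assuming a log transform of image luminance followed by a small bank of odd-symmetric oriented filters; the filter construction, parameter values, and the SciPy convolution call are illustrative choices rather than the model's actual implementation.

    import numpy as np
    from scipy.signal import convolve2d

    def oriented_edge_filter(theta, size=15, sigma=3.0, wavelength=6.0):
        """Odd-symmetric (Gabor-like) filter, one simple choice of edge detector."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return envelope * np.sin(2 * np.pi * xr / wavelength)

    def encode_log_luminance_steps(luminance, orientations=(0, np.pi / 2)):
        """Stage 1: oriented filters applied to the log-transformed image."""
        log_image = np.log(luminance + 1e-6)  # small offset avoids log(0)
        return [convolve2d(log_image, oriented_edge_filter(th), mode='same')
                for th in orientations]

    # Toy display: a decremental target patch on a horizontal luminance gradient.
    display = np.tile(np.linspace(1.0, 10.0, 64), (64, 1))
    display[24:40, 24:40] = 2.0
    filter_responses = encode_log_luminance_steps(display)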


Towards A Unified Computational Model Of Contextual Interactions Across Visual Modalities, David A. Mély, Thomas Serre May 2015

MODVIS Workshop

The perception of a stimulus is largely determined by its surround. Examples abound from color (Land and McCann, 1971), disparity (Westheimer, 1986) and motion induction (Anstis and Casco, 2006) to orientation tilt effects (O’Toole and Wenderoth, 1976). Some of these phenomena have been studied individually using monkey neurophysiology techniques. In these experiments, a center stimulus is typically used to probe a cell’s classical “center” receptive field (cRF), whose activity is then modulated by an annular “surround” (extra-cRF) stimulus. While this center-surround integration (CSI) has been well characterized, a theoretical framework that unifies these different phenomena across visual modalities is lacking. …
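
As a point of reference for what such a unifying framework would need to capture, the sketch below implements a generic divisive form of center-surround integration, in which surround drive suppresses rather than drives the center response. The equation and parameter values are a common textbook form, assumed here for illustration; they are not the model proposed by the authors.

    import numpy as np

    def csi_response(center_drive, surround_drive, w_surround=0.5, sigma=1.0):
        """Generic divisive center-surround integration: the surround scales
        down the response evoked by the center stimulus."""
        return center_drive / (sigma + w_surround * surround_drive)

    # Surround suppression curve for a fixed center stimulus (arbitrary units).
    surround_levels = np.linspace(0.0, 10.0, 6)
    print(np.round([csi_response(5.0, s) for s in surround_levels], 2))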


A Binocular Model For Motion Integration In Mt Neurons, Pamela M. Baker, Wyeth Bair May 2015

MODVIS Workshop

Processing of visual motion by neurons in MT has long been an active area of study; however, circuit models detailing the computations underlying binocular integration of motion signals remain elusive. Such models are important for studying the visual perception of motion in depth (MID), which involves both frontoparallel (FP) visual motion and binocular signal integration. Recent studies (Czuba et al. 2014, Sanada and DeAngelis 2014) have shown that many MT neurons are MID sensitive, contrary to the prevailing view (Maunsell and van Essen, 1983). These novel data are ideal for constraining models of binocular motion integration in MT. We have …
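
One simple way to see how binocular combination could yield MID sensitivity is through interocular velocity differences: opposite horizontal velocities in the two eyes signal motion toward or away from the observer, while matched velocities signal frontoparallel motion. The sketch below is a toy decomposition along those lines, not the circuit model being constrained by the MT data.

    import numpy as np

    def binocular_motion_components(v_left, v_right):
        """Split monocular horizontal velocities (deg/s, rightward positive)
        into a frontoparallel (FP) component and a motion-in-depth (MID)
        component based on the interocular velocity difference."""
        fp = 0.5 * (v_left + v_right)   # motion shared across the eyes
        mid = v_left - v_right          # opposite motion signals a depth change
        return fp, mid

    # Same direction in both eyes -> pure frontoparallel motion.
    print(binocular_motion_components(4.0, 4.0))   # (4.0, 0.0)
    # Opposite directions -> strong motion in depth, no net FP motion.
    print(binocular_motion_components(4.0, -4.0))  # (0.0, 8.0)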


Modeling Shape Representation In Area V4, Wyeth Bair, Dina Popovkina, Abhishek De, Anitha Pasupathy May 2015

MODVIS Workshop

Our model builds on a convolutional-style neural network with hierarchical stages representing processing steps in the ventral visual pathway. It was designed to capture the translation invariance and shape selectivity of neurons in area V4. The model uses biologically plausible linear filters at the front end, along with normalization and sigmoidal nonlinear activation functions. The max() function is used to generate translation invariance.
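
The sketch below lays out one possible reading of that architecture in plain NumPy: oriented linear filtering at the front end, divisive normalization across channels, a sigmoidal activation function, and a max() pooling stage that confers local translation invariance. Filter shapes, constants, and the exact wiring are illustrative assumptions, not the authors' model.

    import numpy as np
    from scipy.signal import convolve2d

    def oriented_filter(theta, size=9, sigma=2.0):
        """Simple oriented derivative-of-Gaussian front-end filter."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        gauss = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return -(x * np.cos(theta) + y * np.sin(theta)) / sigma**2 * gauss

    def sigmoid(x, gain=4.0, threshold=0.5):
        return 1.0 / (1.0 + np.exp(-gain * (x - threshold)))

    def v4_like_stage(image, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4), pool=8):
        # 1. Linear oriented filtering (front end).
        maps = np.stack([convolve2d(image, oriented_filter(t), mode='same')
                         for t in thetas])
        # 2. Divisive normalization across orientation channels.
        maps = np.abs(maps) / (1.0 + np.abs(maps).sum(axis=0, keepdims=True))
        # 3. Sigmoidal nonlinear activation.
        maps = sigmoid(maps)
        # 4. max() pooling over local windows for translation invariance.
        h, w = (maps.shape[1] // pool) * pool, (maps.shape[2] // pool) * pool
        windows = maps[:, :h, :w].reshape(len(thetas), h // pool, pool,
                                          w // pool, pool)
        return windows.max(axis=(2, 4))

    features = v4_like_stage(np.random.rand(64, 64))  # shape (4, 8, 8)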


Object Recognition And Visual Search With A Physiologically Grounded Model Of Visual Attention, Frederik Beuth, Fred H. Hamker May 2015

MODVIS Workshop

Visual attention models can explain a rich set of physiological data (Reynolds & Heeger, 2009, Neuron), but can rarely link these findings to real-world tasks. Here, we would like to narrow this gap with a novel, physiologically grounded model of visual attention by demonstrating its object recognition abilities in noisy scenes.

To base the model on physiological data, we used a recently developed microcircuit model of visual attention (Beuth & Hamker, in revision, Vision Res) which explains a large set of attention experiments, e.g. biased competition, modulation of contrast response functions, tuning curves, and surround suppression. Objects are represented by …
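
For orientation, the following is a toy, scalar version of an attentional normalization rule in the spirit of Reynolds & Heeger (2009): attention multiplicatively scales a unit's excitatory drive before that drive is divisively normalized by the pooled drive of competing units, which biases competition toward the attended object. It is a minimal sketch of the kind of computation involved, not the Beuth & Hamker microcircuit model itself.

    import numpy as np

    def attended_responses(stimulus_drive, attention_gain, sigma=1.0):
        """Toy attentional normalization: each unit's drive is scaled by its
        attention gain, then divided by the pooled drive of all units."""
        excitatory = attention_gain * stimulus_drive
        return excitatory / (excitatory.sum() + sigma)

    # Two objects of equal strength compete for representation; attending to
    # the first biases the competition in its favor (biased competition).
    drive = np.array([10.0, 10.0])
    print(attended_responses(drive, np.array([1.0, 1.0])))  # unattended baseline
    print(attended_responses(drive, np.array([2.0, 1.0])))  # attention on object 1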