
Portland State University

Computer vision

Computer Science Faculty Publications and Presentations

Articles 1 - 4 of 4

Full-Text Articles in Physical Sciences and Mathematics

View Synthesis Of Dynamic Scenes Based On Deep 3d Mask Volume, Kai-En Lin, Guowei Yang, Lei Xiao, Feng Liu, Ravi Ramamoorthi Jan 2021

Image view synthesis has seen great success in reconstructing photorealistic visuals, thanks to deep learning and various novel representations. The next key step in immersive virtual experiences is view synthesis of dynamic scenes. However, several challenges arise from the lack of high-quality training datasets and from the additional time dimension of videos of dynamic scenes. To address these challenges, we introduce a multi-view video dataset, captured with a custom 10-camera rig at 120 FPS. The dataset contains 96 high-quality scenes showing various visual effects and human interactions in outdoor settings. We develop a new algorithm, Deep 3D Mask Volume, which enables …
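
For a rough sense of the kind of computation a 3D mask volume enables, here is a minimal sketch that blends two multiplane-style RGBA plane volumes with a per-voxel mask and alpha-composites the result into an image. The volume shapes, the blend rule, and the helper name blend_and_composite are illustrative assumptions, not the paper's learned network.

```python
# Illustrative sketch only (not the paper's implementation): blend two
# multiplane-style RGBA volumes with a per-voxel mask volume, then
# alpha-composite the planes into a single image.
import numpy as np

def blend_and_composite(vol_a, vol_b, mask):
    """vol_a, vol_b: (D, H, W, 4) RGBA plane volumes rendered from two
    source views; mask: (D, H, W) weights in [0, 1] favoring vol_a."""
    m = mask[..., None]                      # broadcast over RGBA channels
    blended = m * vol_a + (1.0 - m) * vol_b  # per-voxel weighted blend
    out = np.zeros(blended.shape[1:3] + (3,))
    for d in range(blended.shape[0] - 1, -1, -1):  # back-to-front "over"
        rgb, alpha = blended[d, ..., :3], blended[d, ..., 3:4]
        out = alpha * rgb + (1.0 - alpha) * out
    return out

# Toy usage: two random 8-plane volumes over a 4x4 image.
rng = np.random.default_rng(0)
va, vb = rng.random((8, 4, 4, 4)), rng.random((8, 4, 4, 4))
print(blend_and_composite(va, vb, rng.random((8, 4, 4))).shape)  # (4, 4, 3)
```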


Active Object Localization In Visual Situations, Max H. Quinn, Anthony Rhodes, Melanie Mitchell Jul 2016

We describe a method for performing active localization of objects in instances of visual situations. A visual situation is an abstract concept (e.g., “a boxing match”, “a birthday party”, “walking the dog”, “waiting for a bus”) whose image instantiations are linked more by their common spatial and semantic structure than by low-level visual similarity. Our system combines given and learned knowledge of the structure of a particular situation, and adapts that knowledge to a new situation instance as it actively searches for objects. More specifically, the system learns a set of probability distributions describing spatial and other relationships among relevant objects. The …
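
The active-search loop described here can be caricatured in a few lines: sample candidate locations from a spatial prior, score them with a detector, and shift the prior toward high-scoring regions. The Gaussian prior and the toy detector_score function below are stand-ins, not the paper's learned situation model.

```python
# Hedged sketch of active object localization: a spatial prior proposes
# search locations, a detector scores them, and the prior is refined.
import numpy as np

rng = np.random.default_rng(1)
target = np.array([70.0, 40.0])  # hidden object center (toy ground truth)

def detector_score(loc):
    # Stand-in for a real detector: score rises as loc nears the target.
    return float(np.exp(-np.sum((loc - target) ** 2) / (2 * 15.0 ** 2)))

mean, cov = np.array([50.0, 50.0]), np.eye(2) * 30.0 ** 2  # spatial prior
for step in range(5):
    candidates = rng.multivariate_normal(mean, cov, size=20)
    scores = np.array([detector_score(c) for c in candidates])
    weights = scores / scores.sum()
    mean = weights @ candidates  # shift prior toward high-scoring samples
    cov *= 0.7                   # focus the search each round
    best = candidates[scores.argmax()].round(1)
    print(f"step {step}: best score {scores.max():.3f} at {best}")
```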


Interpreting Individual Classifications Of Hierarchical Networks, Will Landecker, Michael David Thomure, Luis M.A. Bettencourt, Melanie Mitchell, Garrett T. Kenyon, Steven P. Brumby Jan 2013

Hierarchical networks are known to achieve high classification accuracy on difficult machine-learning tasks. For many applications, a clear explanation of why the data was classified a certain way is just as important as the classification itself. However, the complexity of hierarchical networks makes them ill-suited for existing explanation methods. We propose a new method, contribution propagation, that gives per-instance explanations of a trained network's classifications. We give theoretical foundations for the proposed method, and evaluate its correctness empirically. Finally, we use the resulting explanations to reveal unexpected behavior of networks that achieve high accuracy on visual object-recognition tasks using well-known …
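
To make per-instance explanation concrete, the sketch below propagates a classification score backward through a tiny two-layer ReLU network so that each input feature receives credit in proportion to its contribution. It uses a simple epsilon-stabilized linear rule, which is only in the spirit of contribution propagation; the paper's rules for hierarchical networks differ.

```python
# Minimal relevance-propagation sketch (an assumption-laden stand-in for
# contribution propagation): credit flows backward through each linear
# layer in proportion to each input's share of the pre-activation.
import numpy as np

def propagate(x, W, relevance_out, eps=1e-6):
    """Distribute relevance over the inputs of y = W @ x."""
    z = W @ x                                   # pre-activations
    s = relevance_out / (z + eps * np.sign(z))  # stabilized ratios
    return x * (W.T @ s)                        # per-input contributions

rng = np.random.default_rng(2)
x = rng.random(6)                               # one input instance
W1, W2 = rng.standard_normal((4, 6)), rng.standard_normal((1, 4))
h = np.maximum(W1 @ x, 0.0)                     # hidden ReLU layer
y = W2 @ h                                      # class score

r_hidden = propagate(h, W2, y)                  # credit hidden units
r_input = propagate(x, W1, r_hidden)            # then input features
# Conservation check: input contributions should roughly sum to the score.
print(r_input.round(3), "sum:", r_input.sum().round(3), "score:", y.round(3))
```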


On The Role Of Shape Prototypes In Hierarchical Models Of Vision, Michael David Thomure, Melanie Mitchell, Garrett T. Kenyon Jan 2013

We investigate the role of learned shape prototypes in an influential family of hierarchical neural-network models of vision. Central to these networks’ design is a dictionary of learned shapes, which are meant to respond to discriminative visual patterns in the input. While higher-level features based on such learned prototypes have been cited as key for viewpoint-invariant object recognition in these models [1], [2], we show that high performance on invariant object-recognition tasks can be obtained by using a simple set of unlearned, “shape-free” features. This behavior is robust to the size of the network. These results call into question the roles of …
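
The experimental contrast can be illustrated with a simplified HMAX-style S2/C2 stage, in which each feature is the best radial-basis match between a prototype and any image patch. Swapping prototypes imprinted from image data for random, unlearned ones, as below, mimics the kind of comparison the abstract describes; every detail here is a simplifying assumption.

```python
# Simplified HMAX-style S2/C2 features: for each prototype, take the max
# over image positions of an RBF similarity to the local patch.
import numpy as np

def c2_features(image, prototypes, beta=0.5):
    """image: (H, W); prototypes: (K, p, p). Returns K C2 values."""
    k, p, _ = prototypes.shape
    h, w = image.shape
    feats = np.full(k, -np.inf)
    for i in range(h - p + 1):
        for j in range(w - p + 1):
            patch = image[i:i + p, j:j + p]
            d2 = ((prototypes - patch) ** 2).sum(axis=(1, 2))
            feats = np.maximum(feats, np.exp(-beta * d2))  # S2, then C2 max
    return feats

rng = np.random.default_rng(3)
img = rng.random((16, 16))
imprinted = np.stack([img[2:6, 2:6], img[8:12, 8:12]])  # "learned" patches
random_protos = rng.random((2, 4, 4))                   # unlearned prototypes
print(c2_features(img, imprinted).round(3))   # exact matches score 1.0
print(c2_features(img, random_protos).round(3))
```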