Articles 1 - 5 of 5
Full-Text Articles in Life Sciences
Utilizing Few-Shot Meta Learning Algorithms For Medical Image Segmentation, Nick Littlefield
Thinking Matters Symposium
Deep learning models can be difficult to train because they require large amounts of data, which is often unavailable or too expensive to collect and annotate. Few-shot meta-learning addresses this problem by allowing deep learning models to be trained with little data. Given only a few examples, meta-learning, or learning-to-learn, aims to use experience gained during training to generalize to unseen tasks. Medical imaging is a field where this is particularly useful, as publicly available data is limited by patient privacy concerns and annotation costs.
This project examines how meta-learning performs …
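The few-shot setting described above can be illustrated with a minimal prototypical-network-style classifier: each class in the small support set is summarized by a prototype (the mean of its embeddings), and queries are assigned to the nearest prototype. This is a generic few-shot sketch for illustration, not the specific meta-learning algorithm evaluated in the project; the toy 2-D "embedding" data is invented for the example.

```python
import numpy as np

def prototypical_predict(support_x, support_y, query_x):
    """Few-shot classification: build one prototype per class from the
    support set, then label each query by its nearest prototype."""
    classes = np.unique(support_y)
    # one prototype per class: the mean embedding of its support examples
    prototypes = np.stack([support_x[support_y == c].mean(axis=0)
                           for c in classes])
    # Euclidean distance from every query to every prototype
    d = np.linalg.norm(query_x[:, None, :] - prototypes[None, :, :], axis=-1)
    return classes[np.argmin(d, axis=1)]

# toy 2-way, 3-shot episode in a 2-D embedding space
support_x = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],   # class 0
                      [1.0, 1.0], [0.9, 1.0], [1.0, 0.9]])  # class 1
support_y = np.array([0, 0, 0, 1, 1, 1])
query_x = np.array([[0.05, 0.05], [0.95, 0.95]])
print(prototypical_predict(support_x, support_y, query_x))  # → [0 1]
```

In a real segmentation setting the 2-D points would be replaced by learned feature embeddings of image patches, but the episodic support/query structure is the same.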
Deepnec: A Novel Alignment-Free Tool For The Characterization Of Nitrification-Related Enzymes Using Deep Learning, A Step Towards Comprehensive Understanding Of The Nitrogen Cycle, Naveen Duhan
Student Research Symposium
Abstract: Nitrification is an important two-step microbial transformation in the global nitrogen cycle, as it is the only natural process that produces nitrate within a system. The functional annotation of nitrification-related enzymes has a broad range of applications in metagenomics, agriculture, industrial biotechnology, and beyond. Determining enzyme function experimentally demands prohibitive amounts of time and resources, so accurate genome-scale computational prediction of nitrification-related enzymes has become increasingly important. In this study, we developed an alignment-free computational approach to determine nitrification-related enzymes from the sequence itself. We propose deepNEC, a novel end-to-end feature selection and …
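"Alignment-free" prediction means the model works directly on sequence-derived features rather than on alignments to known enzymes. One common such representation, shown here purely as an illustration (the abstract does not specify deepNEC's actual encoding), is a normalized k-mer frequency vector over the 20 standard amino acids:

```python
from itertools import product
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard amino acids

def kmer_features(seq, k=2):
    """Alignment-free encoding: normalized k-mer frequency vector.
    For k=2 this yields a fixed-length 400-dimensional feature vector
    regardless of sequence length."""
    vocab = {"".join(p): i for i, p in enumerate(product(AMINO, repeat=k))}
    v = np.zeros(len(vocab))
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in vocab:        # skip k-mers with non-standard residues
            v[vocab[kmer]] += 1.0
    total = v.sum()
    return v / total if total else v

# toy protein fragment (invented for the example)
f = kmer_features("MKTAYIAKQR", k=2)
print(f.shape)  # → (400,)
```

Such fixed-length vectors can then be fed to a deep classifier; the feature-selection stage the abstract mentions would prune or reweight these dimensions.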
Predicting Fixations From Deep And Low-Level Features, Matthias Kümmerer, Thomas S.A. Wallis, Leon A. Gatys, Matthias Bethge
MODVIS Workshop
Learning what properties of an image are associated with human gaze placement is important both for understanding how biological systems explore the environment and for computer vision applications. Recent advances in deep learning enable us, for the first time, to explain a significant portion of the information expressed in the spatial fixation structure. Our saliency model DeepGaze II uses the VGG network (trained on object recognition in the ImageNet challenge) to convert an image into a high-dimensional feature space, which is then read out by a second, very simple network to yield a density prediction. DeepGaze II is currently the …
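The readout idea can be sketched compactly: fixed deep feature maps are combined by a small learned readout into a single saliency map, and a spatial softmax turns that map into a fixation density that sums to one over the image. This is a schematic of the architecture described above, with random arrays standing in for VGG features and a plain linear readout standing in for DeepGaze II's small readout network:

```python
import numpy as np

def readout_density(feature_maps, w):
    """Combine fixed deep feature maps (C, H, W) with a learned linear
    readout (weights w of length C), then apply a spatial softmax to
    obtain a fixation density over pixels."""
    s = np.tensordot(w, feature_maps, axes=1)  # (H, W) saliency map
    e = np.exp(s - s.max())                    # stable spatial softmax
    return e / e.sum()                         # density sums to 1

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16, 16))  # stand-in for VGG feature maps
w = rng.standard_normal(8)                # stand-in for learned readout
density = readout_density(feats, w)
print(density.shape, round(density.sum(), 6))  # → (16, 16) 1.0
```

Because the feature extractor is frozen, only the small readout needs to be fit to fixation data, which is what keeps the second network "very simple."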
How Deep Is The Feature Analysis Underlying Rapid Visual Categorization?, Sven Eberhardt, Jonah Cader, Thomas Serre
MODVIS Workshop
Rapid categorization paradigms have a long history in experimental psychology: characterized by short presentation times and fast behavioral responses, these tasks highlight both the speed and ease with which our visual system processes natural object categories. Previous studies have shown that feed-forward hierarchical models of the visual cortex provide a good fit to human visual decisions. At the same time, recent work has demonstrated significant gains in object recognition accuracy with increasingly deep hierarchical architectures: from AlexNet to VGG to Microsoft CNTK, the field of computer vision has championed both depth and accuracy. But it is unclear how well …
Using Deep Features To Predict Where People Look, Matthias Kümmerer, Matthias Bethge
MODVIS Workshop
When free-viewing scenes, the first few fixations of human observers are driven in part by bottom-up attention. We seek to characterize this process by extracting all information from images that can be used to predict fixation densities (Kümmerer et al., PNAS, 2015). If we ignore time and observer identity, the average amount of information is slightly larger than 2 bits per image for the MIT1003 dataset. The minimum amount of information is 0.3 bits and the maximum is 5.2 bits. Before the rise of deep neural networks, the best models were able to capture one third of this information on average. …
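The "bits" in this framework measure how much better a model predicts fixation locations than a baseline density, via the average log-likelihood advantage in base 2. A minimal sketch of that computation follows; the 2x2 grids and the single fixation are toy values invented for the example, not data from MIT1003:

```python
import numpy as np

def info_gain_bits(model_density, baseline_density, fixations):
    """Average log-likelihood advantage of a model over a baseline,
    in bits per fixation, evaluated at observed fixation locations
    (given as (row, col) pairs)."""
    m = np.array([model_density[r, c] for r, c in fixations])
    b = np.array([baseline_density[r, c] for r, c in fixations])
    return float(np.mean(np.log2(m) - np.log2(b)))

# toy 2x2 image: model concentrates mass where the fixation lands
model_d = np.array([[0.7, 0.1],
                    [0.1, 0.1]])
baseline_d = np.full((2, 2), 0.25)   # uniform baseline
gain = info_gain_bits(model_d, baseline_d, [(0, 0)])
print(round(gain, 3))  # → 1.485, i.e. log2(0.7 / 0.25)
```

In this framing, the "gold standard" (all extractable image information) caps the achievable gain per image, which is how figures like 2 bits per image and the pre-deep-learning one-third share are measured.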