Open Access. Powered by Scholars. Published by Universities.®
Physical Sciences and Mathematics Commons™
Articles 1 - 2 of 2
Full-Text Articles in Physical Sciences and Mathematics
How Object Segmentation And Perceptual Grouping Emerge In Noisy Variational Autoencoders, Ben Lonnqvist, Zhengqing Wu, Michael H. Herzog
MODVIS Workshop
Many animals and humans can recognize and segment objects from their backgrounds. Whether object segmentation is necessary for object recognition has long been debated. Deep neural networks (DNNs) excel at object recognition but not at segmentation tasks; this has led to the belief that object recognition and segmentation are separate mechanisms in visual processing. Here, however, we show evidence that in variational autoencoders (VAEs), segmentation and faithful representation of the data can be interlinked. VAEs are encoder-decoder models that learn to represent independent generative factors of the data as a distribution in a very small bottleneck layer; …
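The encoder-bottleneck-decoder structure the abstract describes can be sketched in a few lines. This is a toy illustration, not the authors' model: the one-dimensional latent, the fixed variance, and the averaging "encoder" are all hypothetical stand-ins for layers a real VAE would learn by gradient descent. It shows only the shape of the computation, including the reparameterization step that lets a VAE sample from the bottleneck distribution during training.

```python
# Minimal VAE-shaped sketch (hypothetical weights; a real model learns these).
import math
import random

random.seed(0)

def encode(x):
    """Map an input vector to parameters (mu, log_var) of a 1-D latent
    Gaussian -- the 'very small bottleneck layer'."""
    mu = sum(x) / len(x)        # toy stand-in for a learned encoder network
    log_var = math.log(0.1)     # fixed toy variance
    return mu, log_var

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps, keeping the sampling step differentiable
    with respect to mu and log_var."""
    eps = random.gauss(0.0, 1.0)
    return mu + math.exp(0.5 * log_var) * eps

def decode(z, dim):
    """Toy decoder: map the latent back to input dimensionality."""
    return [z] * dim

x = [0.9, 1.0, 1.1, 1.0]
mu, log_var = encode(x)
z = reparameterize(mu, log_var)
x_hat = decode(z, len(x))
```

A trained VAE would additionally minimize a reconstruction loss on `x_hat` plus a KL-divergence term pulling the latent distribution toward a standard normal; noise injected at the bottleneck is what the paper links to segmentation-like behavior.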
An Empirical Study Of Pre-Trained Model Reuse In The Hugging Face Deep Learning Model Registry, Wenxin Jiang, Nicholas Synovic, Matt Hyatt, Taylor R. Schorlemmer, Rohan Sethi, Yung-Hsiang Lu, George K. Thiruvathukal, James C. Davis
Department of Electrical and Computer Engineering Faculty Publications
Deep Neural Networks (DNNs) are being adopted as components in software systems. Creating and specializing DNNs from scratch has grown increasingly difficult as state-of-the-art architectures grow more complex. Following the path of traditional software engineering, machine learning engineers have begun to reuse large-scale pre-trained models (PTMs) and fine-tune these models for downstream tasks. Prior works have studied reuse practices for traditional software packages to guide software engineers towards better package maintenance and dependency management. We lack a similar foundation of knowledge to guide behaviors in pre-trained model ecosystems.
In this work, we present the first empirical investigation of PTM reuse. …
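The reuse pattern this abstract studies — take a frozen pre-trained model (PTM) and train only a small task-specific head — can be sketched without any deep learning framework. Everything here is hypothetical: the "pre-trained" feature extractor is hard-coded rather than downloaded from a registry such as Hugging Face, and the downstream data is a toy regression task.

```python
# Sketch of PTM reuse: freeze the pre-trained part, fine-tune only a head.

def pretrained_features(x):
    """Stands in for a frozen pre-trained model whose weights are reused
    as-is (hypothetical; real reuse loads weights from a model registry)."""
    return [x * 0.5, x * x * 0.1]

# Task-specific head: the only parameters updated during fine-tuning.
w = [0.0, 0.0]
b = 0.0
data = [(1.0, 0.6), (2.0, 1.4), (3.0, 2.4)]  # toy downstream task
lr = 0.05
for _ in range(2000):
    for x, y in data:
        f = pretrained_features(x)
        pred = w[0] * f[0] + w[1] * f[1] + b
        err = pred - y
        # Gradient step on the head only; the PTM stays frozen.
        w[0] -= lr * err * f[0]
        w[1] -= lr * err * f[1]
        b -= lr * err

f = pretrained_features(2.0)
pred = w[0] * f[0] + w[1] * f[1] + b
```

Freezing the large pre-trained component and training only the head is what makes reuse cheap relative to training from scratch; the paper's concern is the surrounding ecosystem practices (provenance, maintenance, dependency management) rather than this mechanism itself.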