Digital Commons Network

Central Washington University

Computer Science Faculty Scholarship

Deep learning

Articles 1 - 2 of 2

Information Bottleneck In Deep Learning - A Semiotic Approach, Bogdan Musat, Razvan Andonie Jan 2022

The information bottleneck principle was recently proposed as a theory meant to explain some of the training dynamics of deep neural architectures. Via information plane analysis, patterns start to emerge in this framework, where two phases can be distinguished: fitting and compression. We take a step further and study the behaviour of the spatial entropy characterizing the layers of convolutional neural networks (CNNs), in relation to the information bottleneck theory. We observe pattern formations which resemble the information bottleneck fitting and compression phases. From the perspective of semiotics, also known as the study of signs and sign-using behavior, the saliency …
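For readers unfamiliar with the framework the abstract refers to, the information bottleneck principle is usually stated as a trade-off between compressing the input X and preserving information about the target Y in an intermediate representation T (this is the standard background formulation with trade-off parameter β, not notation taken from the article itself):

\[ \min_{p(t \mid x)} \; I(X;T) \; - \; \beta \, I(T;Y) \]

In information plane analysis, each layer is plotted as a point (I(X;T), I(T;Y)); the fitting phase appears as a rise in I(T;Y), while the compression phase appears as a later decrease in I(X;T).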


Learning In Convolutional Neural Networks Accelerated By Transfer Entropy, Adrian Moldovan, Angel Caţaron, Răzvan Andonie Sep 2021

Recently, there has been growing interest in applying Transfer Entropy (TE) to quantify the effective connectivity between artificial neurons. In a feedforward network, the TE can be used to quantify the relationships between neuron output pairs located in different layers. Our focus is on how to include the TE in the learning mechanisms of a Convolutional Neural Network (CNN) architecture. We introduce a novel training mechanism for CNN architectures which integrates the TE feedback connections. Adding the TE feedback parameter accelerates the training process, as fewer epochs are needed. On the flip side, it adds computational overhead to each epoch. …
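For context, transfer entropy between two neuron output time series is commonly defined in its standard first-order form (due to Schreiber; the article's own estimator, and the exact way the TE feedback parameter enters the CNN weight updates, are not given in this excerpt) as the information the source Y provides about the next state of the target X beyond what X's own past already provides:

\[ TE_{Y \to X} \; = \; \sum_{x_{t+1},\, x_t,\, y_t} p(x_{t+1}, x_t, y_t) \, \log \frac{p(x_{t+1} \mid x_t, y_t)}{p(x_{t+1} \mid x_t)} \]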