Open Access. Powered by Scholars. Published by Universities.®

Engineering Commons

Articles 1 - 3 of 3

Full-Text Articles in Engineering

Deep Learning (Partly) Demystified, Vladik Kreinovich, Olga Kosheleva Nov 2019

Departmental Technical Reports (CS)

Successes of deep learning are partly due to the appropriate selection of activation functions, pooling functions, etc. Most of these choices have been made based on empirical comparisons and heuristic ideas. In this paper, we show that many of these choices -- and the surprising success of deep learning in the first place -- can be explained by reasonably simple and natural mathematics.
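Two of the design choices the abstract refers to are the ReLU activation function and max pooling. The sketch below is only a minimal numerical illustration of what these operations compute, not the paper's mathematical justification for them.

```python
import numpy as np

def relu(x):
    """Rectified linear activation: max(0, x), applied elementwise."""
    return np.maximum(0.0, x)

def max_pool_1d(x, size):
    """Non-overlapping 1-D max pooling with window `size`
    (any trailing remainder that does not fill a window is dropped)."""
    n = (len(x) // size) * size
    return x[:n].reshape(-1, size).max(axis=1)

signal = np.array([-1.0, 2.0, -0.5, 3.0, 0.5, -2.0])
print(relu(signal))                   # [0.  2.  0.  3.  0.5 0. ]
print(max_pool_1d(relu(signal), 2))   # [2.  3.  0.5]
```

ReLU discards negative pre-activations, and max pooling keeps only the strongest response in each window; both are among the empirically motivated choices the paper seeks to derive mathematically.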


Computing Without Computing: DNA Version, Vladik Kreinovich, Julio C. Urenda Nov 2019

Departmental Technical Reports (CS)

Traditional DNA computing schemes are based on using or simulating DNA-related activity. This is similar to how quantum computers use quantum activities to perform computations. Interestingly, in quantum computing, there is another phenomenon known as computing without computing, when, somewhat surprisingly, the result of the computation appears without invoking the actual quantum processes. In this chapter, we show that a similar phenomenon is possible for DNA computing: in addition to the more traditional way of using or simulating DNA activity, we can also use DNA inactivity to solve complex problems. We also show that while DNA computing without …


Why Deep Learning Is More Efficient Than Support Vector Machines, And How It Is Related To Sparsity Techniques In Signal Processing, Laxman Bokati, Olga Kosheleva, Vladik Kreinovich Nov 2019

Departmental Technical Reports (CS)

Several decades ago, traditional neural networks were the most efficient machine learning technique. Then it turned out that, in general, a different technique called support vector machines is more efficient. Reasonably recently, a new technique called deep learning has been shown to be the most efficient one. These are empirical observations, but how can we explain them -- thus making the corresponding conclusions more reliable? In this paper, we provide a possible theoretical explanation for the above-described empirical comparisons. This explanation enables us to explain yet another empirical fact -- that sparsity techniques turned out to be very efficient in signal …
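A standard example of the sparsity techniques the abstract mentions is the soft-thresholding operator used in LASSO-style signal processing. The sketch below is a generic illustration of that operator, not the specific construction from the paper.

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding: shrink each coefficient toward zero by lam,
    and set coefficients with magnitude below lam exactly to zero.
    This is the basic building block of many sparsity techniques
    (e.g. LASSO regression, basis pursuit denoising)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

coeffs = np.array([0.05, -1.2, 0.3, 2.0, -0.1])
sparse = soft_threshold(coeffs, 0.2)
# small coefficients become exactly zero; large ones shrink by 0.2
print(sparse)
```

The result is a sparse representation: only the coefficients that clearly stand out from the threshold survive, which is the kind of efficiency the paper relates to deep learning.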