Open Access. Powered by Scholars. Published by Universities.®

Articles 1 - 4 of 4

Full-Text Articles in Physical Sciences and Mathematics

Ordered Weighted Averaging (OWA), Decision Making Under Uncertainty, And Deep Learning: How Is This All Related?, Vladik Kreinovich Feb 2022


Departmental Technical Reports (CS)

Among the many research areas to which Ron Yager contributed are decision making under uncertainty (in particular, under interval and fuzzy uncertainty) and aggregation -- where he proposed, analyzed, and utilized ordered weighted averaging (OWA). The OWA algorithm itself provides only a specific type of data aggregation. However, it turns out that if we allow several OWA stages, one after another, we obtain a scheme with a universal approximation property -- moreover, a scheme which is perfectly equivalent to modern ReLU-based deep neural networks. In this sense, Ron Yager can be viewed as a (grand)father of ReLU-based deep learning. We also …
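
The OWA aggregation mentioned in the abstract can be sketched in a few lines (a hypothetical illustration, not code from the report): sort the inputs in descending order, then take a weighted sum with position-based weights.

```python
def owa(values, weights):
    """Ordered weighted averaging: sort the inputs in descending order,
    then take the weighted sum with position-based (not input-based) weights."""
    assert len(values) == len(weights)
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Special cases: weights (1, 0, 0) give the maximum, (0, 0, 1) the minimum,
# and equal weights the arithmetic mean.
owa([2.0, 5.0, 3.0], [1.0, 0.0, 0.0])  # -> 5.0 (the maximum)
```

The connection to ReLU hinted at in the abstract can be seen in the maximum, a special case of OWA: max(a, b) = b + max(a - b, 0), i.e., the maximum is expressible through the ReLU function max(x, 0).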


Deep Learning (Partly) Demystified, Vladik Kreinovich, Olga Kosheleva Nov 2019


Departmental Technical Reports (CS)

The successes of deep learning are partly due to the appropriate selection of activation functions, pooling functions, etc. Most of these choices have been made based on empirical comparisons and heuristic ideas. In this paper, we show that many of these choices -- and the surprising success of deep learning in the first place -- can be explained by reasonably simple and natural mathematics.
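
For reference, the two choices most often singled out in such analyses -- the ReLU activation function and max-pooling -- can be stated in a few lines (a minimal sketch, not taken from the report itself):

```python
def relu(x):
    """Rectified linear unit: the standard activation-function choice
    in modern deep learning."""
    return max(0.0, x)

def max_pool(window):
    """The most common pooling choice: keep only the largest value
    in each window of neuron outputs."""
    return max(window)

relu(-2.0)             # -> 0.0 (negative inputs are zeroed out)
max_pool([1.0, 4.0, 2.0])  # -> 4.0
```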


Softmax And McFadden's Discrete Choice Under Interval (And Other) Uncertainty, Bartłomiej Jacek Kubica, Laxman Bokati, Olga Kosheleva, Vladik Kreinovich Apr 2019


Departmental Technical Reports (CS)

One of the important steps in deep learning is softmax, in which we select one of the alternatives with a probability depending on its expected gain. A similar formula describes human decision making: somewhat surprisingly, when presented with several choices with different expected equivalent monetary gains, we do not simply select the alternative with the largest gain; instead, we make a random choice, with probability increasing with the gain -- so that it is possible that we will select the second highest or even the third highest value. Both formulas assume that we know the exact value of the expected gain for each …
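
The softmax / McFadden-style choice rule described above can be sketched as follows (an illustrative implementation, not code from the report; the `beta` parameter is an assumed name for the sharpness coefficient):

```python
import math

def softmax(gains, beta=1.0):
    """McFadden-style discrete choice: the probability of each alternative
    grows exponentially with its expected gain. beta controls how strongly
    the largest gain dominates."""
    m = max(gains)  # subtract the maximum for numerical stability
    exps = [math.exp(beta * (g - m)) for g in gains]
    total = sum(exps)
    return [e / total for e in exps]

# The highest gain gets the largest probability, but the other
# alternatives remain possible -- matching the observed behavior.
probs = softmax([3.0, 1.0, 0.5])
```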


How To Best Apply Neural Networks In Geosciences: Towards Optimal "Averaging" In Dropout Training, Afshin Gholamy, Justin Parra, Vladik Kreinovich, Olac Fuentes, Elizabeth Y. Anthony Dec 2017


Departmental Technical Reports (CS)

The main objectives of the geosciences are to determine the current state of the Earth -- i.e., to solve the corresponding inverse problems -- and to use this knowledge to predict future events, such as earthquakes and volcanic eruptions. In both inverse and prediction problems, machine learning techniques are often very efficient, and at present the most efficient machine learning technique is deep neural network training. To speed up this training, current learning algorithms use dropout techniques: they train several sub-networks on different portions of the data, and then "average" the results. A natural idea is to use the arithmetic mean for this …
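
The arithmetic-mean "averaging" of sub-network outputs that the abstract takes as its starting point can be sketched as follows (a hypothetical illustration; the report itself investigates whether this mean is the optimal choice):

```python
def dropout_average(predictions):
    """Combine the outputs of several sub-networks trained on different
    portions of the data by taking the componentwise arithmetic mean."""
    n = len(predictions)
    return [sum(p[i] for p in predictions) / n
            for i in range(len(predictions[0]))]

# Three hypothetical sub-network outputs for the same two test inputs:
combined = dropout_average([[1.0, 2.0], [3.0, 2.0], [2.0, 5.0]])  # -> [2.0, 3.0]
```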