Open Access. Powered by Scholars. Published by Universities.®

Digital Commons Network


Articles 1 - 10 of 10

Full-Text Articles in Entire DC Network

Why Max And Average Poolings Are Optimal In Convolutional Neural Networks, Ahnaf Farhan, Olga Kosheleva, Vladik Kreinovich Sep 2018

Departmental Technical Reports (CS)

In many practical situations, we do not know the exact relation between different quantities; this relation needs to be determined based on the empirical data. This determination is not easy -- especially in the presence of different types of uncertainty. When the data come in the form of time series and images, many efficient techniques for such determination use algorithms for training convolutional neural networks. As part of this training, such networks "pool" several values corresponding to nearby temporal or spatial points into a single value. Empirically, the most efficient pooling algorithm consists of taking the maximum of the pooled …
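The two poolings the abstract compares can be sketched in a few lines of numpy; the windowing scheme (non-overlapping 1-D windows) is an illustrative choice, not taken from the paper:

```python
import numpy as np

def pool1d(x, size, mode="max"):
    """Pool a 1-D signal in non-overlapping windows of the given size."""
    n = len(x) // size
    windows = x[: n * size].reshape(n, size)
    return windows.max(axis=1) if mode == "max" else windows.mean(axis=1)

signal = np.array([1.0, 3.0, 2.0, 8.0, 5.0, 4.0])
print(pool1d(signal, 2, "max"))  # [3. 8. 5.]
print(pool1d(signal, 2, "avg"))  # [2.  5.  4.5]
```

Max pooling keeps the strongest response in each window; average pooling keeps the mean response.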


Towards Parallel Quantum Computing: Standard Quantum Teleportation Algorithm Is, In Some Reasonable Sense, Unique, Oscar Galindo, Olga Kosheleva, Vladik Kreinovich Sep 2018

Departmental Technical Reports (CS)

In many practical problems, the computation speed of modern computers is not sufficient. Due to the fact that all speeds are bounded by the speed of light, the only way to speed up computations is to further decrease the size of the memory and processing cells that form a computational device. At the resulting size level, each cell will consist of a few atoms -- thus, we need to take quantum effects into account. For traditional computational devices, quantum effects are largely a distracting noise, but new quantum computing algorithms have been developed that use quantum effects to speed up …
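The standard teleportation algorithm whose uniqueness the paper studies can be simulated directly with 3-qubit state vectors. In this sketch, the gate helpers and the fixed measurement outcome (m0, m1) = (1, 1) are illustrative choices; a real run would sample the outcome from the measurement probabilities:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
I = np.eye(2)

def apply_1q(gate, qubit, state):
    """Apply a single-qubit gate to one of 3 qubits (qubit 0 is leftmost)."""
    ops = [I, I, I]
    ops[qubit] = gate
    return np.kron(np.kron(ops[0], ops[1]), ops[2]) @ state

def apply_cnot01(state):
    """CNOT with control qubit 0 and target qubit 1, as a permutation matrix."""
    U = np.zeros((8, 8))
    for i in range(8):
        bits = [(i >> 2) & 1, (i >> 1) & 1, i & 1]
        if bits[0]:
            bits[1] ^= 1
        U[(bits[0] << 2) | (bits[1] << 1) | bits[2], i] = 1
    return U @ state

# Arbitrary state to teleport, held by qubit 0
a, b = 0.6, 0.8
psi = np.array([a, b])

# Qubits 1 and 2 share the Bell pair (|00> + |11>)/sqrt(2)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(psi, bell)

# Alice: CNOT(0 -> 1), then Hadamard on qubit 0
state = apply_cnot01(state)
state = apply_1q(H, 0, state)

# Alice measures qubits 0 and 1; here we fix the outcome (m0, m1) = (1, 1)
m0, m1 = 1, 1
proj = np.array([1.0 if ((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1 else 0.0
                 for i in range(8)])
state = state * proj
state = state / np.linalg.norm(state)

# Bob: apply X if m1 == 1, then Z if m0 == 1, to his qubit 2
if m1:
    state = apply_1q(X, 2, state)
if m0:
    state = apply_1q(Z, 2, state)

# Qubit 2 now holds the original amplitudes (a, b)
bob = state.reshape(2, 2, 2)[m0, m1, :]
print(np.allclose(bob, psi))  # True
```

The two classical bits (m0, m1) select which of the four correction operators {I, X, Z, ZX} Bob applies; this is the standard protocol the paper shows is unique in a reasonable sense.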


Optimization Under Fuzzy Constraints: From A Heuristic Algorithm To An Algorithm That Always Converges, Vladik Kreinovich, Juan Carlos Figueroa-Garcia Jul 2018

Departmental Technical Reports (CS)

An efficient iterative heuristic algorithm has been used to implement the Bellman-Zadeh solution to the problem of optimization under fuzzy constraints. In this paper, we analyze this algorithm, explain why it works, show that there are cases in which this algorithm does not converge, and propose a modification that always converges.
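The Bellman-Zadeh approach selects the alternative that maximizes the minimum of the goal's and the constraint's membership degrees. A minimal grid-search sketch with made-up piecewise-linear memberships (the paper's iterative algorithm and its convergent modification are not reproduced here):

```python
import numpy as np

# Illustrative membership functions (assumptions, not from the paper):
# goal "x should be large" and constraint "x should be close to 3" on [0, 10]
def mu_goal(x):
    return np.clip(x / 10.0, 0.0, 1.0)

def mu_constraint(x):
    return np.clip(1.0 - np.abs(x - 3.0) / 4.0, 0.0, 1.0)

def bellman_zadeh(xs):
    """Return the x maximizing min(mu_goal, mu_constraint): the B-Z decision."""
    degrees = np.minimum(mu_goal(xs), mu_constraint(xs))
    best = np.argmax(degrees)
    return xs[best], degrees[best]

xs = np.linspace(0.0, 10.0, 100001)
x_star, alpha = bellman_zadeh(xs)
print(round(x_star, 3), round(alpha, 3))  # 5.0 0.5
```

For these memberships, the optimum sits where the two curves cross (x = 5, degree 0.5): moving either way lowers one of the two memberships, and hence their minimum.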


How To Best Apply Neural Networks In Geosciences: Towards Optimal "Averaging" In Dropout Training, Afshin Gholamy, Justin Parra, Vladik Kreinovich, Olac Fuentes, Elizabeth Y. Anthony Dec 2017

Departmental Technical Reports (CS)

The main objectives of geosciences are to find the current state of the Earth -- i.e., to solve the corresponding inverse problems -- and to use this knowledge to predict future events, such as earthquakes and volcanic eruptions. In both inverse and prediction problems, machine learning techniques are often very efficient, and at present, the most efficient machine learning technique is deep neural network training. To speed up this training, current learning algorithms use dropout techniques: they train several sub-networks on different portions of data, and then "average" the results. A natural idea is to use arithmetic mean for this …
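The "averaging" step can be illustrated with stand-in sub-network outputs. Everything below (the noise model, the value 2.0, the geometric-mean alternative) is an illustrative assumption, not the paper's conclusion about which averaging is optimal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for sub-network outputs in dropout training: each "sub-network"
# gives a noisy estimate of the same underlying prediction (here, 2.0)
true_value = 2.0
sub_network_outputs = true_value + 0.1 * rng.standard_normal(10)

arithmetic = sub_network_outputs.mean()
geometric = np.exp(np.log(sub_network_outputs).mean())  # one alternative "averaging"

print(f"arithmetic mean: {arithmetic:.3f}")
print(f"geometric mean:  {geometric:.3f}")
```

Different averaging operations combine the sub-networks' outputs differently; the paper asks which such operation is optimal for dropout training.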


√(x² + μ) Is The Most Computationally Efficient Smooth Approximation To |x|: A Proof, Carlos Ramirez, Reinaldo Sanchez, Vladik Kreinovich, Miguel Argaez Jun 2013

Departmental Technical Reports (CS)

In many practical situations, we need to minimize an expression of the type |c₁| + ... + |cₙ|. The problem is that the most efficient optimization techniques use the derivative of the objective function, but the function |x| is not differentiable at 0. To make optimization efficient, it is therefore reasonable to approximate |x| by a smooth function. We show that, in some reasonable sense, the most computationally efficient smooth approximation to |x| is the function √(x² + μ), a function which has indeed been successfully used in such optimization.
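A quick sketch of why the smooth surrogate helps: gradient descent on √(x² + μ) is well defined everywhere, including at 0, where |x| has no derivative. The step size, the value of μ, and the starting point are arbitrary illustrative choices:

```python
import numpy as np

def smooth_abs(x, mu=1e-3):
    """Smooth approximation sqrt(x^2 + mu) to |x|, differentiable at 0."""
    return np.sqrt(x * x + mu)

def smooth_abs_grad(x, mu=1e-3):
    """Its derivative x / sqrt(x^2 + mu), well defined everywhere."""
    return x / np.sqrt(x * x + mu)

# Minimize |c1| + |c2| via gradient descent on the smooth surrogate
c = np.array([3.0, -2.0])
for _ in range(2000):
    c -= 0.01 * smooth_abs_grad(c)
print(np.round(c, 2))  # both coordinates driven to (near) 0
```

Near 0 the gradient shrinks proportionally to x, so the iteration settles at the minimum instead of oscillating the way a subgradient step on |x| would.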


How To Divide Students Into Groups So As To Optimize Learning: Towards A Solution To A Pedagogy-Related Optimization Problem, Olga Kosheleva, Vladik Kreinovich Jul 2012

Departmental Technical Reports (CS)

To enhance learning, it is desirable to also let students learn from each other, e.g., by working in groups. It is known that such groupwork can improve learning, but the effect strongly depends on how we divide students into groups. In this paper, based on a first approximation model of student interaction, we describe how to optimally divide students into groups so as to optimize the resulting learning. We hope that, by taking into account other aspects of student interaction, it will be possible to transform our solution into truly optimal practical recommendations.


Theoretical Explanation Of Bernstein Polynomials' Efficiency: They Are Optimal Combination Of Optimal Endpoint-Related Functions, Jaime Nava, Vladik Kreinovich Jul 2011

Departmental Technical Reports (CS)

In many applications of interval computations, it turned out to be beneficial to represent polynomials on a given interval [x⁻, x⁺] as linear combinations of Bernstein polynomials (x − x⁻)^k · (x⁺ − x)^(n−k). In this paper, we provide a theoretical explanation for this empirical success: namely, we show that under reasonable optimality criteria, Bernstein polynomials can be uniquely determined from the requirement that they are optimal combinations of optimal polynomials corresponding to the interval's endpoints.
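With the usual binomial normalization (omitted in the abstract's formula), the Bernstein basis on [x⁻, x⁺] forms a partition of unity, which gives the range-enclosure property that makes it useful in interval computations. A small numerical check with illustrative values:

```python
import numpy as np
from math import comb

def bernstein_basis(n, k, x, lo, hi):
    """B_{n,k} on [lo, hi]: C(n,k) (x-lo)^k (hi-x)^(n-k) / (hi-lo)^n."""
    return comb(n, k) * (x - lo) ** k * (hi - x) ** (n - k) / (hi - lo) ** n

lo, hi, n = -1.0, 2.0, 4
xs = np.linspace(lo, hi, 7)

# Partition of unity: the basis sums to 1 everywhere on the interval
total = sum(bernstein_basis(n, k, xs, lo, hi) for k in range(n + 1))
print(np.allclose(total, 1.0))  # True

# Range enclosure: a polynomial with Bernstein coefficients c_k satisfies
# min(c) <= p(x) <= max(c) on [lo, hi], since p(x) is a convex combination
c = np.array([0.5, -1.0, 2.0, 0.0, 1.5])
p = sum(ck * bernstein_basis(n, k, xs, lo, hi) for k, ck in enumerate(c))
print(c.min() <= p.min() and p.max() <= c.max())  # True
```

The enclosure property is exactly what interval computations need: the coefficients alone bound the polynomial's range on the interval.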


M Solutions Good, M-1 Solutions Better, Luc Longpre, William Gasarch, G. W. Walster, Vladik Kreinovich Aug 2007

Departmental Technical Reports (CS)

One of the main objectives of theoretical research in computational complexity and feasibility is to explain experimentally observed differences in complexity.

Empirical evidence shows that the more solutions a system of equations has, the more difficult it is to solve it. Similarly, the more global maxima a continuous function has, the more difficult it is to locate them. Until now, these empirical facts have been only partially formalized: namely, it has been shown that problems with two or more solutions are more difficult to solve than problems with exactly one solution. In this paper, we extend this result and show …


Probabilities, Intervals, What Next? Optimization Problems Related To Extension Of Interval Computations To Situations With Partial Information About Probabilities, Vladik Kreinovich Apr 2003

Departmental Technical Reports (CS)

When we have only interval ranges [xᵢ⁻, xᵢ⁺] of sample values x₁,...,xₙ, what is the interval [V⁻, V⁺] of possible values for the variance V of these values? We prove that the problem of computing the upper bound V⁺ is NP-hard. We provide a feasible (quadratic-time) algorithm for computing the exact lower bound V⁻ on the variance of interval data. We also provide feasible algorithms that compute V⁺ under reasonable, easily verifiable conditions -- in particular, in the case when interval uncertainty is introduced to maintain privacy in a statistical database.
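These bounds can be checked numerically on small examples. Since the variance is convex in (x₁,...,xₙ), its maximum over the box of intervals is attained at a vertex, so brute force over endpoint combinations gives V⁺ for tiny n; the V⁻ sketch below scans a common clipping value m as a simple stand-in for the paper's exact quadratic-time algorithm:

```python
import numpy as np
from itertools import product

def variance(xs):
    """Population variance V = (1/n) sum (x_i - mean)^2."""
    xs = np.asarray(xs, dtype=float)
    return np.mean((xs - xs.mean()) ** 2)

def upper_variance(intervals):
    """V+ by brute force over endpoint combinations: variance is convex,
    so its maximum over the box is attained at a vertex. Exponential in n --
    fine only for illustration; the paper shows the general problem is
    NP-hard and gives feasible algorithms for special cases."""
    return max(variance(v) for v in product(*intervals))

def lower_variance(intervals, grid=10001):
    """V- approximated by clipping every x_i toward a common value m and
    scanning m over a grid (a stand-in for the exact quadratic-time
    algorithm; at the optimum all x_i are pulled as close together as
    their intervals allow)."""
    los = np.array([a for a, _ in intervals])
    his = np.array([b for _, b in intervals])
    best = np.inf
    for m in np.linspace(los.min(), his.max(), grid):
        best = min(best, variance(np.clip(m, los, his)))
    return best

data = [(2.0, 3.0), (2.5, 4.0), (5.0, 6.0)]
print(round(upper_variance(data), 3))  # 3.167
print(round(lower_variance(data), 3))  # 0.667
```

For this data, V⁺ picks the most spread-out vertex (2, 2.5, 6), while V⁻ pulls the samples together to (3, 4, 5), the closest the intervals allow.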

We also extend the main formulas of interval arithmetic for different arithmetic operations …


Optimal Elimination Of Inconsistency In Expert Knowledge: Formulation Of The Problem, Fast Algorithms, Timothy J. Ross, Berlin Wu, Vladik Kreinovich Sep 2000

Departmental Technical Reports (CS)

Expert knowledge is sometimes inconsistent. In this paper, we describe the problem of eliminating this inconsistency as an optimization problem, and present fast algorithms for solving this problem.