Digital Commons Network

Open Access. Powered by Scholars. Published by Universities.®

University of Texas at El Paso

Departmental Technical Reports (CS)

Series: Uncertainty

Articles 1 - 10 of 10

Full-Text Articles in Entire DC Network

How To Deal With Uncertainties In Computing: From Probabilistic And Interval Uncertainty To Combination Of Different Approaches, With Applications To Engineering And Bioinformatics, Vladik Kreinovich Mar 2017

Most data processing techniques traditionally used in scientific and engineering practice are statistical. These techniques are based on the assumption that we know the probability distributions of measurement errors, etc.

In practice, we often do not know the distributions; we only know the bound D on the measurement accuracy. Hence, after we get the measurement result X, the only information that we have about the actual (unknown) value x of the measured quantity is that x belongs to the interval [X − D, X + D]. Techniques for data processing under such interval uncertainty are called interval computations; these …
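
To illustrate the interval-computations idea, here is a minimal Python sketch; the data-processing function and all numeric values are illustrative assumptions, not taken from the report.

    # Minimal sketch of interval computations: propagating accuracy bounds
    # through a simple data-processing function. Guaranteed enclosures are
    # obtained by computing with interval endpoints.

    def interval_add(a, b):
        # [a1, a2] + [b1, b2] = [a1 + b1, a2 + b2]
        return (a[0] + b[0], a[1] + b[1])

    def interval_mul(a, b):
        # [a1, a2] * [b1, b2]: min/max over all four endpoint products.
        products = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
        return (min(products), max(products))

    # Measurement results X with accuracy bounds D: x lies in [X - D, X + D].
    x = (3.0 - 0.1, 3.0 + 0.1)   # X = 3.0, D = 0.1
    y = (2.0 - 0.2, 2.0 + 0.2)   # Y = 2.0, D = 0.2

    # Guaranteed enclosure of x*y + x, whatever the actual values are:
    print(interval_add(interval_mul(x, y), x))   # approximately (8.12, 9.92)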


How To Divide Students Into Groups So As To Optimize Learning: Towards A Solution To A Pedagogy-Related Optimization Problem, Olga Kosheleva, Vladik Kreinovich Jul 2012

To enhance learning, it is desirable to also let students learn from each other, e.g., by working in groups. It is known that such groupwork can improve learning, but the effect strongly depends on how we divide students into groups. In this paper, based on a first-approximation model of student interaction, we describe how to divide students into groups so as to optimize the resulting learning. We hope that, by taking into account other aspects of student interaction, it will be possible to transform our solution into truly optimal practical recommendations.
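
As a toy illustration of this kind of pedagogy-related optimization (the report's actual interaction model is not reproduced here), the following Python sketch brute-forces every way to split a small class into pairs under a made-up learning-gain function.

    # Hypothetical illustration only: the learning-gain function is assumed,
    # not taken from the paper. We enumerate all pairings of six students
    # and keep the one that maximizes the total gain.

    levels = [0.2, 0.4, 0.5, 0.7, 0.8, 0.9]   # knowledge levels (made up)

    def gain(a, b):
        # Toy model: each partner gains in proportion to the other's
        # knowledge and to their own room for improvement.
        return b * (1 - a) + a * (1 - b)

    def pairings(students):
        # Yield every way to split the list into unordered pairs.
        if not students:
            yield []
            return
        first, rest = students[0], students[1:]
        for i, partner in enumerate(rest):
            for tail in pairings(rest[:i] + rest[i + 1:]):
                yield [(first, partner)] + tail

    best = max(pairings(levels), key=lambda p: sum(gain(a, b) for a, b in p))
    print("best grouping:", best)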


Optimizing Computer Representation And Computer Processing Of Epistemic Uncertainty For Risk-Informed Decision Making: Finances Etc., Vladik Kreinovich, Nitaya Buntao, Olga Kosheleva Apr 2012

Uncertainty is usually gauged by using standard statistical characteristics: mean, variance, correlation, etc. Then, we use the known values of these characteristics (or the known bounds on these values) to select a decision. Sometimes, it becomes clear that the selected characteristics do not always describe a situation well; then other known (or new) characteristics are proposed. A good example is the description of volatility in finance: it started with variance, and now many competing descriptions exist, each with its own advantages and limitations.

In such situations, a natural idea is to come up with characteristics tailored to specific application areas: e.g., …
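
For concreteness, here is a small sketch (with made-up return data) of two standard characteristics mentioned above, next to one competing volatility-style description, the mean absolute deviation.

    # Illustrative only: classical mean and variance versus a competing
    # volatility characteristic (mean absolute deviation) for a return series.

    returns = [0.01, -0.02, 0.015, -0.005, 0.03, -0.01]   # made-up data

    n = len(returns)
    mean = sum(returns) / n
    variance = sum((r - mean) ** 2 for r in returns) / n   # classical choice
    mad = sum(abs(r - mean) for r in returns) / n          # a competitor

    print(f"mean = {mean:.4f}, variance = {variance:.6f}, MAD = {mad:.4f}")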


Estimating Information Amount Under Uncertainty: Algorithmic Solvability And Computational Complexity, Vladik Kreinovich, Gang Xiang Jan 2010

Measurement results (and, more generally, estimates) are never absolutely accurate: there is always some uncertainty, so the actual value x is, in general, different from the estimate X. Sometimes we know the probabilities of different values of the estimation error dx = X − x; sometimes we only know the interval of possible values of dx; sometimes we have interval bounds on the cdf of dx. To compare different measuring instruments, it is desirable to know which of them brings more information, i.e., it is desirable to gauge the amount of information. For probabilistic uncertainty, this amount of information is described by Shannon's entropy; …
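
As a minimal sketch of the probabilistic case, the following computes Shannon's entropy for a discrete error distribution, together with the entropy of the uniform distribution that corresponds to pure interval uncertainty; the distribution and the interval below are assumed for illustration.

    # Shannon's entropy H = -sum p * log2(p): H measures the remaining
    # uncertainty, so an instrument whose error distribution has smaller H
    # brings more information.

    from math import log2

    def shannon_entropy(probs):
        return -sum(p * log2(p) for p in probs if p > 0)

    print(shannon_entropy([0.5, 0.25, 0.25]))   # 1.5 bits

    # When dx is known only to lie in an interval [a, b], the uniform
    # distribution maximizes entropy, with differential entropy log2(b - a):
    a, b = -0.1, 0.1
    print(log2(b - a))                          # about -2.32 bits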


Optimal Sensor Placement In Environmental Research: Designing A Sensor Network Under Uncertainty, Aline James, Craig Tweedie, Tanja Magoc, Vladik Kreinovich, Martine Ceberio Dec 2009

One of the main challenges in meteorology and environmental research is that in many important remote areas, sensor coverage is sparse, leaving us with numerous blind spots. Placement and maintenance of sensors in these areas are expensive. It is therefore desirable to find out how, within a given budget, we can design a sensor network that provides us with the largest amount of useful information while minimizing the size of the "blind spot" areas that are not covered by the sensors.

This problem is very difficult even to formulate in …
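
Since the report's own formulation is truncated above, the following Python sketch shows only a generic greedy max-coverage heuristic for this kind of sensor-placement problem, not necessarily the authors' technique; the candidate sites, coverage radius, and budget are made up.

    # Greedy sensor placement: repeatedly pick the candidate site that covers
    # the most still-uncovered grid points ("blind spots").

    candidate_sites = [(0, 0), (0, 4), (2, 2), (4, 0), (4, 4)]
    points_to_cover = [(x, y) for x in range(5) for y in range(5)]
    RADIUS2 = 5   # squared coverage radius of one sensor (assumed)
    BUDGET = 2    # number of sensors we can afford (assumed)

    def covers(site, point):
        return (site[0] - point[0]) ** 2 + (site[1] - point[1]) ** 2 <= RADIUS2

    chosen, uncovered = [], set(points_to_cover)
    for _ in range(BUDGET):
        best = max((s for s in candidate_sites if s not in chosen),
                   key=lambda s: sum(covers(s, p) for p in uncovered))
        chosen.append(best)
        uncovered -= {p for p in uncovered if covers(best, p)}

    print("sensors at:", chosen, "| blind-spot points left:", len(uncovered))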


Model Fusion Under Probabilistic And Interval Uncertainty, With Application To Earth Sciences, Omar Ochoa, Aaron A. Velasco, Christian Servin, Vladik Kreinovich Nov 2009

One of the most important studies in the earth sciences is that of the Earth's interior structure. There are many sources of data for Earth tomography models: first-arrival passive seismic data (from actual earthquakes), first-arrival active seismic data (from seismic experiments), gravity data, and surface waves. Currently, each of these datasets is processed separately, resulting in several different Earth models that have specific coverage areas, different spatial resolutions, and varying degrees of accuracy. These models often provide complementary geophysical information on Earth structure (P and S wave velocity structure).

Combining the information derived from each requires a joint …
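
Two textbook fusion rules fit this setting and are sketched below purely as illustration (the report's actual joint algorithm is not reproduced, and all values are made up): inverse-variance weighting for probabilistic uncertainty, and interval intersection for interval uncertainty.

    # Probabilistic case: combine two estimates of the same quantity by
    # inverse-variance (least-squares) weighting.
    x1, var1 = 5.2, 0.04   # e.g., a velocity value from passive seismic data
    x2, var2 = 5.0, 0.01   # e.g., the same value from active seismic data
    w1, w2 = 1 / var1, 1 / var2
    fused = (w1 * x1 + w2 * x2) / (w1 + w2)          # 5.04
    fused_sd = (1 / (w1 + w2)) ** 0.5                # about 0.089
    print(f"fused = {fused:.3f} +/- {fused_sd:.3f}")

    # Interval case: each model guarantees an enclosure, so the fused
    # enclosure is the intersection of the two intervals.
    a, b = (4.9, 5.4), (5.0, 5.2)
    print((max(a[0], b[0]), min(a[1], b[1])))        # (5.0, 5.2)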


Maximum Entropy In Support Of Semantically Annotated Datasets, Paulo Pinheiro Da Silva, Vladik Kreinovich, Christian Servin Sep 2008

One of the important problems of the Semantic Web is checking whether two datasets describe the same quantity. The existing solution to this problem is to use these datasets' ontologies to deduce that the datasets indeed represent the same quantity. However, even when the ontologies seem to confirm the identity of the two corresponding quantities, it is still possible that, in reality, we deal with somewhat different quantities. A natural way to check the identity is to compare the numerical values of the measurement results: if they are close (within measurement errors), then most probably we deal with the same quantity; else …
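
In code, the numerical check described above amounts to testing whether the two intervals of possible values overlap; a minimal sketch with illustrative numbers:

    def same_quantity_possible(x1, d1, x2, d2):
        # True if [x1 - d1, x1 + d1] and [x2 - d2, x2 + d2] intersect,
        # i.e., the two results are consistent with one underlying quantity.
        return abs(x1 - x2) <= d1 + d2

    print(same_quantity_possible(10.02, 0.05, 10.05, 0.05))   # True: close
    print(same_quantity_possible(10.02, 0.05, 10.30, 0.05))   # False: apart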


Propagation And Provenance Of Probabilistic And Interval Uncertainty In Cyberinfrastructure-Related Data Processing And Data Fusion, Paulo Pinheiro Da Silva, Aaron A. Velasco, Martine Ceberio, Christian Servin, Matthew G. Averill, Nicholas Ricky Del Rio, Luc Longpre, Vladik Kreinovich Nov 2007

In the past, communications were much slower than computations. As a result, researchers and practitioners collected different data into huge centralized databases at institutions such as NASA and the US Geological Survey. At present, communications are so much faster that it is possible to keep different databases at different locations, and to automatically select, transform, and collect relevant data when necessary. The corresponding cyberinfrastructure is actively used in many applications. It drastically enhances scientists' ability to discover, reuse, and combine a large number of resources, e.g., data and services.

Because of this importance, it is desirable to be able to …


A New Cauchy-Based Black-Box Technique For Uncertainty In Risk Analysis, Vladik Kreinovich, Scott Ferson Feb 2003

Uncertainty is very important in risk analysis. A natural way to describe this uncertainty is to specify the set of possible values of each unknown quantity (this set is usually an interval), plus any additional information that we may have about the probability of different values within this set. Traditional statistical techniques deal with situations in which we have complete information about the probabilities; in real life, however, we often have only partial information about them. We therefore need methods of handling such partial information in risk analysis. Several such techniques have been presented, often on …
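
The Cauchy-based idea named in the title can be sketched as follows, based on related published descriptions by Kreinovich and Ferson; the black-box function f, the bounds, and the omission of the usual rescaling of very large deviates are simplifications of ours.

    # Cauchy-deviate method (sketch): to estimate the half-width Delta of
    # f's range when each input is only known to lie in [X_i - D_i, X_i + D_i],
    # perturb the inputs with Cauchy(0, D_i) noise. By the stability of the
    # Cauchy distribution, for a linearized f the output deviations are
    # Cauchy with scale Delta = sum |df/dx_i| * D_i, the desired half-width.

    import math, random

    def f(x):
        # Illustrative black-box data-processing function.
        return x[0] * x[1] + math.sin(x[2])

    X = [1.0, 2.0, 0.5]     # measurement results (made up)
    D = [0.1, 0.05, 0.02]   # accuracy bounds (made up)
    N = 2000

    y0 = f(X)
    samples = []
    for _ in range(N):
        # Cauchy(0, D_i) deviates via the inverse-cdf trick.
        dx = [d * math.tan(math.pi * (random.random() - 0.5)) for d in D]
        samples.append(f([x + e for x, e in zip(X, dx)]) - y0)

    # Maximum-likelihood estimate of the Cauchy scale: the scale solves
    # sum 1/(1 + (s/Delta)^2) = N/2; the sum grows with Delta, so bisect.
    lo, hi = 1e-12, max(abs(s) for s in samples)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if sum(1.0 / (1.0 + (s / mid) ** 2) for s in samples) < N / 2:
            lo = mid
        else:
            hi = mid

    print("estimated half-width:", 0.5 * (lo + hi))   # about 0.27 for this f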


Why 95% And Two Sigma? A Theoretical Justification For An Empirical Measurement Practice, Hung T. Nguyen, Vladik Kreinovich, Chin-Wang Tao Jul 2000

The probability p(k) that the value of a random variable is far away from the mean (e.g., further than k standard deviations away) is so small that this possibility can often be safely ignored. It is desirable to select the k for which the dependence of the probability p(k) on the distribution is the smallest possible. Empirically, this dependence is smallest for k between 1.5 and 2.5. In this paper, we give a theoretical explanation for this empirical result.
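
As a quick numerical check of this empirical claim, the following compares p(k) for three unit-variance distributions (our choice, for illustration only); the three tail probabilities nearly coincide around k = 1.5, in line with the empirical range.

    # p(k) = P(|X - mu| > k * sigma) for three distributions with sigma = 1.

    from math import erf, exp, sqrt

    def p_normal(k):
        return 1 - erf(k / sqrt(2))        # = 2 * (1 - Phi(k))

    def p_uniform(k):
        # Uniform on [-sqrt(3), sqrt(3)] has unit variance.
        return max(0.0, 1 - k / sqrt(3))

    def p_laplace(k):
        # Laplace with scale b = 1/sqrt(2) has unit variance.
        return exp(-k * sqrt(2))

    for k in (1.0, 1.5, 2.0, 2.5):
        print(f"k={k}: normal {p_normal(k):.3f}, "
              f"uniform {p_uniform(k):.3f}, laplace {p_laplace(k):.3f}")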