Open Access. Powered by Scholars. Published by Universities.®

Computer Engineering Commons

Articles 1 - 30 of 36

Full-Text Articles in Computer Engineering

Minimality Of Solution Update In Conflict Resolution: An Application Of Revision Programming To Von Neumann-Morgenstern Approach, Inna Pivkina, Vladik Kreinovich Nov 2003

Departmental Technical Reports (CS)

In the 1944 book that started game theory (and the mathematical approach to conflict resolution), von Neumann and Morgenstern proposed the notion of a solution. When the situation changes, the old solution is often no longer a solution, so it needs to be updated. In practical applications, it is usually desirable to keep the solution change "minimal" in some reasonable sense. We show that for a seemingly straightforward formalization of this minimality, checking whether a change is minimal is NP-hard. We also show that by representing the notion of a solution as a collection of revision rules, we can produce a …


Novel Approaches To Numerical Software With Result Verification, Laurent Granvilliers, Vladik Kreinovich, Norbert Mueller Oct 2003

Departmental Technical Reports (CS)

Traditional design of numerical software with result verification is based on the assumption that we know the algorithm f(x1,...,xn) that transforms the inputs x1,...,xn into the output y=f(x1,...,xn), and we know the intervals of possible values of the inputs. Many real-life problems go beyond this paradigm. In some cases, we do not have an algorithm f; we only know some relation (constraints) between xi and y. In other cases, in addition to knowing the intervals [xi], we may know some relations between the xi; we may have some information about the probabilities of different values of xi, and we may know the …


Separating Components In Interval-Valued Images, Marilton Sanchotene De Aguiar, Gracaliz Pereira Dimuro, Antonio Carlos Da Rocha Costa, Andrei Finkelstein, Vladik Kreinovich Oct 2003

Departmental Technical Reports (CS)

In many applications of imaging, we would like to know whether we have an image of a single-component object or an image of an object that consists of several components. Many algorithms have been designed to solve this problem; however, these algorithms are all heuristic. Often, according to some reasonable methods, we have a single component, while according to some other equally reasonable methods, the same image has multiple components. It is desirable to produce reliable methods, so that if a method claims that there are multiple components, then it should mean that the observed data is incompatible with the …


A New Differential Formalism For Interval-Valued Functions And Its Potential Use In Detecting 1-D Landscape Features, Vladik Kreinovich, Hung T. Nguyen, Gracaliz Pereira Dimuro, Antonio Carlos Da Rocha Costa, Benjamin Rene Callejas Bedregal Oct 2003

Departmental Technical Reports (CS)

In many practical problems, it is important to know the slope (derivative) dy/dx of one quantity y with respect to some other quantity x. For example, different 1-D landscape features can be characterized by different values of the derivative dy/dx, where y is an altitude, and x is a horizontal coordinate. In practice, we often know the values of y(x) for different x with interval uncertainty. How can we then find the set of possible values of the slope? In this paper, we formulate this problem of differentiating interval-valued functions in precise terms, and we describe an (asymptotically) optimal algorithm …
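The two-point version of this slope problem has a simple closed form that illustrates what the abstract is after: when consecutive samples are known only up to intervals, the extreme slopes come from pairing opposite interval endpoints. A minimal Python sketch (an illustrative helper, not the paper's optimal algorithm; the name `slope_range` is ours):

```python
def slope_range(y_lo, y_hi, x):
    """For each pair of consecutive sample points, return the interval of
    possible slopes (y[i+1] - y[i]) / (x[i+1] - x[i]) when each y[i] can be
    anywhere in [y_lo[i], y_hi[i]].  Illustrative two-point bound only."""
    ranges = []
    for i in range(len(x) - 1):
        h = x[i + 1] - x[i]
        # extreme slopes pair the opposite endpoints of the two intervals
        lo = (y_lo[i + 1] - y_hi[i]) / h
        hi = (y_hi[i + 1] - y_lo[i]) / h
        ranges.append((lo, hi))
    return ranges
```

For samples y(0) in [0, 1] and y(1) in [1, 2], every slope in [0, 2] is possible.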


Real-Time Algorithms For Statistical Analysis Of Interval Data, Berlin Wu, Hung T. Nguyen, Vladik Kreinovich Oct 2003

Departmental Technical Reports (CS)

When we have only interval ranges [xi] of sample values x1,...,xn, what is the interval [V] of possible values for the variance V of these values? There are quadratic-time algorithms for computing the exact lower bound V- on the variance of interval data, and for computing V+ under reasonable, easily verifiable conditions. The problem is that in real life, we often make additional measurements. In traditional statistics, if we have a new measurement result, we can modify the value of the variance in constant time. In contrast, previously known algorithms for processing interval data required that, once a new data …
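For contrast, the constant-time update that traditional statistics offers for point data can be sketched with Welford's method (a standard technique, not taken from the report; the class name is ours):

```python
class RunningVariance:
    """Constant-time update of the mean and (population) variance when a
    new point measurement arrives -- the classical-statistics baseline
    that the abstract contrasts with the interval case (Welford's method)."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # running sum of squared deviations from the mean

    def add(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n else 0.0
```

Each `add` call does O(1) work, regardless of how many measurements came before.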


Sensitivity Analysis Of Neural Control, Chin-Wang Tao, Hung T. Nguyen, J. T. Yao, Vladik Kreinovich Oct 2003

Departmental Technical Reports (CS)

We provide explicit formulas that describe how sensitive the resulting signal of a neural network is to the measurement errors with which we measure the inputs.


Fast Multiplication Of Interval Matrices (Interval Version Of Strassen's Algorithm), Martine Ceberio, Vladik Kreinovich Oct 2003

Departmental Technical Reports (CS)

Strassen's algorithm multiplies two numerical matrices fast, but when applied to interval matrices, leads to excess width. We use Rump's interval arithmetic to propose an interval version of Strassen's algorithm whose only excess width is in second order terms.
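For readers unfamiliar with interval matrix products, a naive (non-Strassen) version makes the underlying operations concrete; the excess width the abstract mentions arises when Strassen-style schemes reuse subexpressions. A sketch in ordinary interval arithmetic (not Rump's arithmetic; all names are ours):

```python
def imul(a, b):
    """Product of two intervals a = (lo, hi), b = (lo, hi)."""
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def iadd(a, b):
    """Sum of two intervals."""
    return (a[0] + b[0], a[1] + b[1])

def interval_matmul(A, B):
    """Naive product of interval matrices (each entry a (lo, hi) pair).
    This direct formula has no excess width; Strassen-style schemes reuse
    subexpressions, which is where extra width can creep in."""
    n, m, k = len(A), len(B[0]), len(B)
    C = [[(0.0, 0.0)] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = (0.0, 0.0)
            for t in range(k):
                acc = iadd(acc, imul(A[i][t], B[t][j]))
            C[i][j] = acc
    return C
```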


Greedy Algorithms For Optimizing Multivariate Horner Schemes, Martine Ceberio, Vladik Kreinovich Oct 2003

Departmental Technical Reports (CS)

For univariate polynomials f(x1), the Horner scheme provides the fastest way to compute the value. For multivariate polynomials, several different versions of the Horner scheme are possible; it is not clear which of them is optimal. In this paper, we propose a greedy algorithm that will hopefully lead to good computation times.

The univariate Horner scheme has another advantage: if the value x1 is known with uncertainty, and we are interested in the resulting uncertainty in f(x1), then the Horner scheme leads to a better estimate for this uncertainty than many other ways of computing f(x1). The second greedy algorithm that we propose …
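The univariate scheme the abstract builds on is short enough to state directly (a standard formulation, with our own naming):

```python
def horner(coeffs, x):
    """Evaluate cd*x^d + ... + c1*x + c0 with only d multiplications and
    d additions (the univariate Horner scheme).  coeffs is ordered from
    the leading coefficient down to the constant term."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result
```

For f(x) = 2x^2 + 3x + 1, `horner([2, 3, 1], 2)` returns 15.0.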


Interval Approach To Phase Measurements Can Lead To Arbitrarily Complex Sets - A Theorem And Ways Around It, Bharat C. Mulupuru, Vladik Kreinovich, Roberto Osegueda Oct 2003

Departmental Technical Reports (CS)

We are often interested in phases of complex quantities; e.g., in non-destructive testing of aerospace structures, important information comes from phases of Eddy current and magnetic resonance.

For each measurement, we have an upper bound D on the measurement error dx=X-x, so when the measurement result is X, we know that the actual value x is in [X-D,X+D]. Often, we have no information about probabilities of different values, so this interval is our only information about x. When the accuracy is not sufficient, we perform several repeated measurements, and conclude that x belongs to the intersection of the corresponding intervals. …
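Intersecting the intervals from repeated measurements, as described above, can be sketched as follows (the function name is ours):

```python
def fuse_measurements(measurements):
    """Given repeated measurements as (X, D) pairs -- measurement result X
    with guaranteed error bound D -- intersect the intervals [X-D, X+D].
    An empty intersection means the stated error bounds are inconsistent."""
    lo = max(X - D for X, D in measurements)
    hi = min(X + D for X, D in measurements)
    if lo > hi:
        raise ValueError("inconsistent measurements: empty intersection")
    return (lo, hi)
```

For example, fusing 1.0 ± 0.5 with 1.2 ± 0.5 narrows the result to the interval [0.7, 1.5].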


A Full Function-Based Calculus Of Directed And Undirected Intervals: Markov's Interval Arithmetic Revisited, Juergen Wolff Von Gudenberg, Vladik Kreinovich Sep 2003

Departmental Technical Reports (CS)

This paper proposes a new interpretation of intervals as classes of functions having the same domain. Interval operations are seen as operations on these classes. This approach allows us to recover Markov's directed interval arithmetic by taking into account the monotonicity of the functions.


A Feasible Algorithm For Locating Concave And Convex Zones Of Interval Data And Its Use In Statistics-Based Clustering, Vladik Kreinovich, Eric J. Pauwels, Scott Ferson, Lev Ginzburg Sep 2003

Departmental Technical Reports (CS)

Often, we need to divide n objects into clusters based on the value of a certain quantity x. For example, we can classify insects in the cotton field into groups based on their size and other geometric characteristics. Within each cluster, we usually have a unimodal distribution of x, with a probability density d(x) that increases until a certain value x0 and then decreases. It is therefore natural, based on d(x), to determine a cluster as the interval between two local minima, i.e., as a union of adjacent increasing and decreasing segments. In this paper, we describe a feasible algorithm …


Interval Arithmetic, Affine Arithmetic, Taylor Series Methods: Why, What Next?, Nedialko S. Nedialkov, Vladik Kreinovich, Scott A. Starks Sep 2003

Departmental Technical Reports (CS)

In interval computations, the range of each intermediate result r is described by an interval [r]. To decrease excess interval width, we can keep some information on how r depends on the input x=(x1,...,xn). There are several successful methods of approximating this dependence; in these methods, the dependence is approximated by linear functions (affine arithmetic) or by general polynomials (Taylor series methods). Why linear functions and polynomials? What other classes can we try? These questions are answered in this paper.


Eliminating Duplicates Under Interval And Fuzzy Uncertainty: An Asymptotically Optimal Algorithm And Its Geospatial Applications, Roberto Torres, George R. Keller, Vladik Kreinovich, Luc Longpre Jul 2003

Departmental Technical Reports (CS)

Geospatial databases generally consist of measurements related to points (or pixels in the case of raster data), lines, and polygons. In recent years, the size and complexity of these databases have increased significantly and they often contain duplicate records, i.e., two or more close records representing the same measurement result. In this paper, we address the problem of detecting duplicates in a database consisting of point measurements. As a test case, we use a database of measurements of anomalies in the Earth's gravity field that we have compiled. In this paper, we show that a natural duplicate deletion algorithm requires …


Computational Complexity And Feasibility Of Data Processing And Interval Computations, With Extension To Cases When We Have Partial Information About Probabilities, Vladik Kreinovich, Luc Longpre Jul 2003

Departmental Technical Reports (CS)

In many real-life situations, we are interested in the value of a physical quantity y that is difficult or impossible to measure directly. To estimate y, we find some easier-to-measure quantities x1,...,xn which are related to y by a known relation y=f(x1,...,xn). Measurements are never 100% accurate; hence, the measured values Xi are different from xi, and the resulting estimate Y=f(X1,...,Xn) is different from the desired value y=f(x1,...,xn). How different?

The traditional engineering approach to error estimation in data processing assumes that we know the probabilities of different measurement errors dxi=Xi-xi. …
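The standard first-order estimate this traditional approach rests on, Delta_y ≈ sum_i |df/dxi| * Delta_i, can be sketched numerically (a common engineering approximation, not the report's method; names are ours):

```python
def linearized_error_bound(f, X, deltas, h=1e-6):
    """First-order bound on |Y - y| for Y = f(X1,...,Xn) when each
    measurement error |dxi| is bounded by deltas[i]:
        sum_i |df/dxi| * deltas[i].
    Partial derivatives are estimated by central differences -- valid
    only for small errors and smooth f."""
    bound = 0.0
    for i, d in enumerate(deltas):
        xp, xm = list(X), list(X)
        xp[i] += h
        xm[i] -= h
        dfdx = (f(*xp) - f(*xm)) / (2 * h)   # central-difference partial
        bound += abs(dfdx) * d
    return bound
```

For f(x1, x2) = x1 + x2 with error bounds 0.1 and 0.2, the bound is 0.3, as expected.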


Determination Of Properties Of Composite Materials From The Lamb Wave Propagation: Probabilistic, Interval, And Fuzzy Approaches, Fariba Ansari, William Durrer, Soheil Nazarian, Vladik Kreinovich Jun 2003

Departmental Technical Reports (CS)

One of the most efficient non-destructive techniques for finding hidden faults in a plate is the use of ultrasonic Lamb waves. These waves are reflected by faults, and from this reflection, we can locate the faults. For that, we need to know how the Lamb waves propagate. Their propagation is determined by the dynamic elastic constants C_pq, so we must find these constants. These constants cannot be measured directly; instead, we measure the dependence of the speed on the frequency, c(f), and we must reconstruct C_pq from the measured values of c(f). In this paper, we show how this can be …


Outlier Detection Under Interval And Fuzzy Uncertainty: Algorithmic Solvability And Computational Complexity, Vladik Kreinovich, Praveen Patangay, Luc Longpre, Scott A. Starks, Cynthia Campos, Scott Ferson, Lev Ginzburg May 2003

Departmental Technical Reports (CS)

No abstract provided.


Robust Methodology For Characterizing System Response To Damage: Approach Based On Partial Order, Paul J. Tanenbaum, Carlos De La Mora, Piotr Wojciechowski, Olga Kosheleva, Vladik Kreinovich, Scott A. Starks, Alexandr V. Kuzminykh May 2003

Departmental Technical Reports (CS)

To describe the response of complex engineering systems to various damage mechanisms, engineers have traditionally used number-valued utilities to describe the results of different possible outcomes, and (number-valued) probabilities (often, subjective probabilities) to describe the relative frequency of different outcomes. This description is based on the assumption that experts can always make a definite preference between two possible outcomes, i.e., that the set of all outcomes is linearly (totally) ordered. In practice, experts often cannot make such a choice; their preference is only a partial order.


Robust Methodology For Characterizing System Response To Damage: A Subjective (Fuzzy) Partial Ordered Modification Of The Traditional Utility-Probability Scheme, Carlos De La Mora, Piotr Wojciechowski, Vladik Kreinovich, Scott A. Starks, Paul J. Tanenbaum, Alexandr V. Kuzminykh May 2003

Departmental Technical Reports (CS)

No abstract provided.


Complexity And Approximation Studies Of Finding Polynomially Bounded Length Plans For Temporal Goals, Chitta Baral, Vladik Kreinovich, Sudeshna Sarkar, Nam Tran, Raul Trejo, Xin Zhang May 2003

Departmental Technical Reports (CS)

In this paper, we consider the problem of planning with temporal goals, focusing on plans of polynomially bounded length. Past results about the complexity of planning are mostly about finding plans that take the world to one of several desired states, often described using a goal formula. We first consider goals expressed using linear temporal logic and analyze the complexity of planning with respect to such goals, both when the states in the trajectory are complete states and when they are incomplete states. For the latter case we also develop a notion of approximate planning and show its complexity to be …


Fast Quantum Algorithms For Handling Probabilistic, Interval, And Fuzzy Uncertainty, Mark Martinez, Luc Longpre, Vladik Kreinovich, Scott A. Starks, Hung T. Nguyen May 2003

Departmental Technical Reports (CS)

We show how quantum computing can speed up computations related to processing probabilistic, interval, and fuzzy uncertainty.


The Use Of Fuzzy Measures As A Data Fusion Tool In Geographic Information Systems: Case Study, Cynthia Campos, George R. Keller, Vladik Kreinovich, Luc Longpre, Francois Modave, Scott A. Starks, Roberto Torres May 2003

Departmental Technical Reports (CS)

Geospatial databases generally consist of measurements related to points (or pixels in the case of raster data), lines, and polygons. In recent years, the size and complexity of these databases have increased significantly and they often contain duplicate records, i.e., two or more close records representing the same measurement result. In this paper, we use fuzzy measures to address the problem of detecting duplicates in a database consisting of point measurements. As a test case, we use a database of measurements of anomalies in the Earth's gravity field that we have compiled. We show that a natural duplicate deletion algorithm …


Detecting Cracks In Thin Plates By Using Lamb Wave Scanning: Geometric Approach, Roberto A. Osegueda, Vladik Kreinovich, Enrique Roldan, Rodrigo Mares Apr 2003

Departmental Technical Reports (CS)

A crack in a thin plate reflects ultrasonic waves; therefore, it is reasonable to determine the location of the crack by measuring the reflected waves. The problem of locating the crack can be reformulated in purely geometric terms. Previously, time-consuming iterative numerical methods were used to solve the resulting geometric problem. In this paper, we show that explicit (and fast to compute) formulas can be used instead.


Probabilities, Intervals, What Next? Optimization Problems Related To Extension Interval Computations To Situations With Partial Information About Probabilities, Vladik Kreinovich Apr 2003

Departmental Technical Reports (CS)

When we have only interval ranges [xi-,xi+] of sample values x1,...,xn, what is the interval [V-,V+] of possible values for the variance V of these values? We prove that the problem of computing the upper bound V+ is NP-hard. We provide a feasible (quadratic-time) algorithm for computing the exact lower bound V- on the variance of interval data. We also provide a feasible algorithm that computes V+ under reasonable, easily verifiable conditions, in particular, in the case when interval uncertainty is introduced to maintain privacy in a statistical database.

We also extend the main formulas of interval arithmetic for different arithmetic operations …
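For very small n, both bounds can be checked by brute force, which makes the claims above concrete: the variance is a convex function of (x1,...,xn), so its maximum V+ over the box is attained at a vertex, while at the minimum V- every xi is a common value m clamped to its interval. A Python sketch (exponential-time for V+, so only a sanity check, not the paper's feasible algorithms; names are ours):

```python
import itertools

def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def variance_upper(intervals):
    """Exact V+ by enumerating the 2^n box vertices: the variance is a
    convex function of (x1,...,xn), so its maximum over a box is attained
    at a vertex.  Brute force -- exponential in n."""
    return max(variance(v) for v in itertools.product(*intervals))

def variance_lower(intervals):
    """Exact V-: at the minimum, each xi is a common value m clamped to
    [xi-, xi+].  On each segment between consecutive interval endpoints
    the clamping pattern is fixed and V(m) is a convex quadratic in m,
    so we minimize it segment by segment."""
    n = len(intervals)
    pts = sorted({e for iv in intervals for e in iv})
    if len(pts) == 1:          # all intervals are the same single point
        return 0.0
    best = float("inf")
    for p, q in zip(pts, pts[1:]):
        mid = (p + q) / 2
        s = q2 = 0.0           # sum / sum of squares of clamped values
        k = 0                  # number of intervals containing the segment
        for lo, hi in intervals:
            if lo <= mid <= hi:
                k += 1                      # free: this xi follows m
            else:
                v = hi if hi <= p else lo   # clamp to the nearer endpoint
                s += v
                q2 += v * v
        candidates = [p, q]
        if 0 < k < n:
            candidates.append(s / (n - k))  # stationary point of V(m)
        for m in candidates:
            m = min(max(m, p), q)
            total, sq = s + k * m, q2 + k * m * m
            best = min(best, sq / n - (total / n) ** 2)
    return best
```

For the intervals [0,1] and [2,3], the widest spread (0, 3) gives V+ = 2.25, and the closest pair (1, 2) gives V- = 0.25.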


On The Relation Between Two Approaches To Combining Evidence: Ordered Abelian Groups And Uninorms, Ronald R. Yager, Vladik Kreinovich Mar 2003

Departmental Technical Reports (CS)

In this paper, we develop a relationship between two approaches to combining evidence: the Ordered Abelian Group (OAG) approach and the uninorm approach. We show that while there exist uninorms that are not extended OAGs, for operations that are continuous (in some reasonable sense) these two approaches coincide.


Kolmogorov Complexity: How A Paradigm Motivated By Foundations Of Physics Can Be Applied In Robust Control, Vladik Kreinovich, Issak A. Kunin Mar 2003

Departmental Technical Reports (CS)

Born about three decades ago, Kolmogorov Complexity Theory (KC) led to important discoveries that, in particular, give a new understanding of a fundamental problem: the interrelation between classical continuum mathematics and reality (physics, biology, engineering sciences, ...). Crudely speaking, it enables us to better distinguish between mathematically possible (possibly abnormal) and physically possible situations.

We show that this formalization is not only in good accordance with theoretical physics, but it can also be applied to robust control: instead of requiring that the control work for all mathematically possible situations, we only require that the control works for all "non-abnormal" situations.


Exact Upper Bound On The Mean Of The Product Of Many Random Variables With Known Expectations, Vladik Kreinovich, Scott Ferson, Lev Ginzburg Mar 2003

Departmental Technical Reports (CS)

In practice, in addition to the intervals [xi] of possible values of inputs x1,...,xn, we sometimes also know their means Ei. For such cases, we provide an explicit exact (= best possible) upper bound for the mean of the product x1*...*xn of positive values xi.


Use Of Fuzzy Expert's Information In Measurement And What We Can Gain From Its Application In Geophysics, Leon Reznik, Vladik Kreinovich, Scott A. Starks Mar 2003

Departmental Technical Reports (CS)

The paper considers the problem of fusing measurement information from different sources, when one of the sources is information about approximate values of the measured variables or their combinations. This information is given by fuzzy models and is used in combination with the measurement results. The properties of the modified estimates are studied in comparison with the conventional ones. The conditions under which the application of expert information can give a high gain are derived, the value of this gain is estimated, and recommendations to the expert on making predictions are given. The possible gain in measurement result efficiency in geophysical applications is …


Dimension Compactification -- A Possible Explanation For Superclusters And For Empirical Evidence Usually Interpreted As Dark Matter, Vladik Kreinovich Mar 2003

Departmental Technical Reports (CS)

According to modern quantum physics, at the microlevel, the dimension of space-time is at least 11; we only observe 4 dimensions because the others are compactified: the size along each of the other dimensions is much smaller than the macroscale. There is no universally accepted explanation of why exactly 4 dimensions remain at the macroscopic level. A natural suggestion is: maybe there is no fundamental reason why exactly 4 dimensions should remain, maybe when we go to even larger scales, some of the 4 dimensions will be compactified as well? In this paper, we explore the consequences of the compactification …


Dirty Pages Of Logarithm Tables, Lifetime Of The Universe, And (Subjective) Probabilities On Finite And Infinite Intervals, Hung T. Nguyen, Vladik Kreinovich, Luc Longpre Mar 2003

Departmental Technical Reports (CS)

To design data processing algorithms with the smallest average processing time, we need to know what this "average" stands for. At first glance, it may seem that real-life data are really "chaotic", and no probabilities are possible at all: today, we may apply our software package to elementary particles, tomorrow -- to distances between the stars, etc. However, contrary to this intuitive feeling, there are stable probabilities in real-life data. This fact was first discovered in 1881 by Simon Newcomb who noticed that the first pages of logarithm tables (that contain numbers starting with 1) are more used than the …
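The regularity Newcomb observed is now known as Benford's law: the fraction of values whose decimal representation starts with digit d is log10(1 + 1/d). A quick check against powers of 2, a classical example of Benford-distributed data (illustrative code, not from the report; names are ours):

```python
import math
from collections import Counter

def benford_prediction(d):
    """Predicted frequency of leading digit d: log10(1 + 1/d)
    (Newcomb's observation, now known as Benford's law)."""
    return math.log10(1 + 1 / d)

def leading_digit_freqs(values):
    """Empirical frequency of each leading digit 1..9 in a list of
    positive integers."""
    counts = Counter(int(str(v)[0]) for v in values)
    return {d: counts.get(d, 0) / len(values) for d in range(1, 10)}

# Powers of 2 closely follow the predicted first-digit distribution:
freqs = leading_digit_freqs([2 ** k for k in range(1, 1001)])
```

About 30.1% of the powers of 2 start with the digit 1, matching log10(2) ≈ 0.301.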


Statistical And Dempster-Shafer Techniques In Testing Structural Integrity Of Aerospace Structures, Roberto A. Osegueda, Seetharami R. Seelam, Bharat Mulupuru, Vladik Kreinovich Feb 2003

Departmental Technical Reports (CS)

Several techniques are known for non-destructive testing of aerospace structures, such as pulse echo, Eddy current, magnetic resonance, etc. Each of these techniques detects some faults but misses others, so it is desirable to combine (fuse) the results of these techniques. Several methods of data fusion are known. To improve the quality of fault detection, we modified the straightforward statistical method as follows: (1) we computed mean and variance iteratively: detected faults are excluded from the computation on the next iteration; (2) we treated the plate's edge and the inside separately; (3) we dismissed measurements in which only one technique …