Open Access. Powered by Scholars. Published by Universities.®

Computer Engineering Commons


Articles 1 - 11 of 11

Full-Text Articles in Computer Engineering

When Can We Be Sure That Measurement Results Are Consistent: 1-D Interval Case And Beyond, Hani Dbouk, Steffen Schön, Ingo Neumann, Vladik Kreinovich Jun 2020

Departmental Technical Reports (CS)

In many practical situations, measurements are characterized by interval uncertainty -- namely, based on each measurement result, the only information that we have about the actual value of the measured quantity is that this value belongs to some interval. If several such intervals -- corresponding to measuring the same quantity -- have an empty intersection, this means that at least one of the corresponding measurement results is an outlier, caused by a malfunction of the measuring instrument. From the purely mathematical viewpoint, if the intersection is non-empty, there is no reason to be suspicious, but from the practical viewpoint, if …
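The 1-D consistency check the abstract describes can be sketched directly: intervals share a common point exactly when the largest lower endpoint does not exceed the smallest upper endpoint. A minimal illustration (the function name and data are hypothetical, not from the report):

```python
def intervals_consistent(intervals):
    """Check whether 1-D intervals have a non-empty common intersection.

    intervals: list of (lower, upper) pairs, one per measurement result.
    In 1-D, the intersection is non-empty iff the maximum of the lower
    endpoints does not exceed the minimum of the upper endpoints.
    """
    lo = max(l for l, _ in intervals)
    hi = min(u for _, u in intervals)
    return lo <= hi

# Three measurements of the same quantity:
print(intervals_consistent([(1.0, 2.0), (1.5, 2.5), (1.8, 2.2)]))  # True
# An empty intersection signals at least one outlier:
print(intervals_consistent([(1.0, 2.0), (2.5, 3.0)]))              # False
```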


Deep Learning (Partly) Demystified, Vladik Kreinovich, Olga Kosheleva Nov 2019

Departmental Technical Reports (CS)

Successes of deep learning are partly due to appropriate selection of activation function, pooling functions, etc. Most of these choices have been made based on empirical comparison and heuristic ideas. In this paper, we show that many of these choices -- and the surprising success of deep learning in the first place -- can be explained by reasonably simple and natural mathematics.
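As a hedged illustration of the kinds of design choices the abstract refers to -- activation and pooling functions -- here are two of the standard ones (ReLU and max pooling), written out plainly; the report's mathematical explanations of why these choices work are not reproduced here:

```python
def relu(x):
    # Rectified linear activation: one of the standard choices
    # whose empirical success such analyses aim to explain
    return max(0.0, x)

def max_pool(values, window=2):
    # Max pooling: keep only the largest value in each window
    return [max(values[i:i + window]) for i in range(0, len(values), window)]

print(relu(-1.5), relu(2.0))   # 0.0 2.0
print(max_pool([1, 3, 2, 5]))  # [3, 5]
```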


Symbolic Aggregate Approximation (SAX) Under Interval Uncertainty, Chrysostomos D. Stylios, Vladik Kreinovich Apr 2015

Departmental Technical Reports (CS)

In many practical situations, we monitor a system by continuously measuring the corresponding quantities, to make sure that an abnormal deviation is detected as early as possible. Often, we do not have ready algorithms to detect abnormality, so we need to use machine learning techniques. For these techniques to be efficient, we first need to compress the data. One of the most successful methods of data compression is the technique of Symbolic Aggregate approXimation (SAX). While this technique is motivated by measurement uncertainty, it does not explicitly take this uncertainty into account. In this paper, we show that we can …
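A minimal sketch of the standard SAX pipeline the abstract builds on: z-normalize the series, compress it with Piecewise Aggregate Approximation (PAA), then map each segment mean to a letter using equal-probability breakpoints of the standard normal. The breakpoints shown are the usual ones for a 3-letter alphabet; the function name is illustrative, and this omits the interval-uncertainty extension the report develops:

```python
import statistics

def sax(series, segments=4, breakpoints=(-0.43, 0.43)):
    """Minimal SAX sketch: z-normalize, piecewise-aggregate, symbolize.

    The breakpoints approximate the equal-probability cut points of a
    standard normal distribution for the 3-letter alphabet {'a','b','c'}.
    """
    mu = statistics.mean(series)
    sd = statistics.pstdev(series) or 1.0  # guard against constant series
    z = [(x - mu) / sd for x in series]
    n = len(z)
    # PAA: mean of each of `segments` equal chunks
    paa = [statistics.mean(z[i * n // segments:(i + 1) * n // segments])
           for i in range(segments)]
    alphabet = "abc"
    def symbol(v):
        return alphabet[sum(v > b for b in breakpoints)]
    return "".join(symbol(v) for v in paa)

print(sax([0, 1, 2, 3, 4, 5, 6, 7]))
```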


Why It Is Important To Precisiate Goals, Olga Kosheleva, Vladik Kreinovich, Hung T. Nguyen Mar 2015

Departmental Technical Reports (CS)

After Zadeh and Bellman explained how to optimize a function under fuzzy constraints, there have been many successful applications of this optimization. However, in many practical situations, it turns out to be more efficient to precisiate the objective function before performing optimization. In this paper, we provide a possible explanation for this empirical fact.
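The Bellman-Zadeh scheme mentioned in the abstract -- optimization under fuzzy constraints by maximizing the membership of the fuzzy "decision", i.e., the minimum of the goal and constraint memberships -- can be sketched as follows (the membership functions here are toy examples, not taken from the paper):

```python
def bellman_zadeh_optimum(xs, mu_goal, mu_constraint):
    """Bellman-Zadeh fuzzy optimization: among candidates xs, pick the x
    maximizing the membership min(mu_goal(x), mu_constraint(x))."""
    return max(xs, key=lambda x: min(mu_goal(x), mu_constraint(x)))

# Toy example with conflicting fuzzy requirements:
xs = [i / 100 for i in range(101)]
mu_goal = lambda x: x              # "x should be large"
mu_constraint = lambda x: 1.0 - x  # "x should be small"
best = bellman_zadeh_optimum(xs, mu_goal, mu_constraint)
print(best)  # 0.5, where min(x, 1 - x) peaks
```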


Simple Linear Interpolation Explains All Usual Choices In Fuzzy Techniques: Membership Functions, T-Norms, T-Conorms, And Defuzzification, Vladik Kreinovich, Jonathan Quijas, Esthela Gallardo, Caio De Sa Lopes, Olga Kosheleva, Shahnaz Shahbazova Mar 2015

Departmental Technical Reports (CS)

Most applications of fuzzy techniques use piece-wise linear (triangular or trapezoid) membership functions, min or product t-norms, max or algebraic sum t-conorms, and centroid defuzzification. Similarly, most applications of interval-valued fuzzy techniques use piecewise-linear lower and upper membership functions. In this paper, we show that all these choices can be explained as applications of simple linear interpolation.
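Two of these standard choices -- a triangular (piecewise-linear) membership function and centroid defuzzification -- can be written out in a few lines; this is an illustrative sketch, not code from the paper:

```python
def triangular(a, b, c):
    """Triangular membership function: linear interpolation
    through the points (a, 0), (b, 1), (c, 0)."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

def centroid(mu, lo, hi, n=1000):
    # Centroid defuzzification: membership-weighted average of x
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    w = [mu(x) for x in xs]
    return sum(x * m for x, m in zip(xs, w)) / sum(w)

mu = triangular(0.0, 1.0, 2.0)
print(mu(0.5), mu(1.0))                  # 0.5 1.0
print(round(centroid(mu, 0.0, 2.0), 3))  # 1.0 (symmetric triangle)
```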


Minimax Portfolio Optimization Under Interval Uncertainty, Meng Yuan, Xu Lin, Junzo Watada, Vladik Kreinovich Jan 2015

Departmental Technical Reports (CS)

In the 1950s, Markowitz proposed to combine different investment instruments to design a portfolio that either maximizes the expected return under constraints on volatility (risk) or minimizes the risk under a given expected return. Markowitz's formulas are still widely used in financial practice. However, these formulas assume that we know the exact values of the expected return and variance for each instrument, and that we know the exact covariance of every two instruments. In practice, we only know these values with some uncertainty. Often, we only know the bounds of these values -- in other words, we only know the intervals …
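One piece of the minimax picture is easy to illustrate: for non-negative portfolio weights, the worst-case expected return over interval bounds is attained when every instrument takes its lower endpoint. The numbers and names below are hypothetical, and this sketch ignores the risk side of the problem:

```python
def worst_case_return(weights, return_intervals):
    """Worst-case (minimax) expected portfolio return under interval
    uncertainty: with non-negative weights, the expected return is linear
    and increasing in each instrument's return, so the worst case takes
    every lower bound."""
    return sum(w * lo for w, (lo, _) in zip(weights, return_intervals))

def best_minimax_portfolio(candidates, return_intervals):
    # Pick the candidate weight vector with the best guaranteed return
    return max(candidates, key=lambda w: worst_case_return(w, return_intervals))

intervals = [(0.02, 0.08), (0.04, 0.05)]  # hypothetical return bounds
candidates = [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]
print(best_minimax_portfolio(candidates, intervals))  # (0.0, 1.0): 4% guaranteed
```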


Towards The Possibility Of Objective Interval Uncertainty In Physics. Ii, Luc Longpre, Olga Kosheleva, Vladik Kreinovich Jan 2015

Departmental Technical Reports (CS)

Applications of interval computations usually assume that while we only know an interval containing the actual (unknown) value of a physical quantity, this quantity has an exact value, and that, in principle, we can get more and more accurate estimates of this value. Physicists know, however, that, due to the uncertainty principle, there are limitations on how accurately we can measure the values of physical quantities. One of the important principles of modern physics is operationalism -- that a physical theory should only use observable properties. This principle is behind most successes of 20th century physics, starting with …


Optimizing Pred(25) Is Np-Hard, Martine Ceberio, Olga Kosheleva, Vladik Kreinovich Jan 2015

Departmental Technical Reports (CS)

Usually, in data processing, to find the parameters of the model that best fits the data, people use the Least Squares method. One of the advantages of this method is that for linear models, it leads to an easy-to-solve system of linear equations. A limitation of this method is that even a single outlier can ruin the corresponding estimates; thus, more robust methods are needed. In software engineering in particular, a more robust pred(25) method is often used, in which we maximize the number of cases in which the model's prediction is within the 25% range of the observations. In …
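The pred(25) criterion itself is straightforward to compute -- the fraction of observations whose prediction falls within 25% of the actual value (the hardness result concerns optimizing it, not evaluating it). A small illustration with made-up data:

```python
def pred25(actuals, predictions):
    """Fraction of cases where the prediction is within 25% of the
    observed value -- the pred(25) criterion used in software
    effort estimation."""
    hits = sum(1 for a, p in zip(actuals, predictions)
               if a != 0 and abs(p - a) / abs(a) <= 0.25)
    return hits / len(actuals)

actuals = [100, 200, 400, 50]
predictions = [110, 300, 390, 70]
print(pred25(actuals, predictions))  # 0.5: two of the four within 25%
```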


A Catalog Of While Loop Specification Patterns, Aditi Barua, Yoonsik Cheon Sep 2014

Departmental Technical Reports (CS)

This document provides a catalog of while loop patterns along with their skeletal specifications. The specifications are written in a functional form known as intended functions. The catalog can be used to derive specifications of while loops by first matching the loops to the cataloged patterns and then instantiating the skeletal specifications of the matched patterns. Once their specifications are formulated and written, the correctness of while loops can be proved rigorously or formally using the functional program verification technique in which a program is viewed as a mathematical function from one program state to another.
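The idea of annotating a while loop with its intended function can be illustrated on a simple accumulation loop; the annotation style below is only a sketch, not the catalog's own notation:

```python
def sum_up_to(n):
    """A while loop annotated with its intended function, in the spirit of
    functional program verification (sketch; not the report's notation).

    Intended function of the loop, as a state-to-state map:
        [s, i := s + (i + (i+1) + ... + n), n + 1]
    i.e., the loop adds the remaining terms i..n to the accumulator s.
    """
    s, i = 0, 1
    while i <= n:    # ascending-counter accumulation pattern
        s = s + i
        i = i + 1
    return s

print(sum_up_to(10))  # 55
```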


Observable Causality Implies Lorentz Group: Alexandrov-Zeeman-Type Theorem For Space-Time Regions, Olga Kosheleva, Vladik Kreinovich Jun 2014

Departmental Technical Reports (CS)

The famous Alexandrov-Zeeman theorem proves that causality implies the Lorentz group. The physical meaning of this result is that once we observe which event can causally affect which other events, then, using only this information, we can reconstruct the linear structure of the Minkowski space-time. The original Alexandrov-Zeeman theorem is based on the causality relation between events represented by points in space-time. Knowing such a point means that we know the exact moment of time and the exact location of the corresponding event -- and that this event actually occurred at a single moment of time and at a single spatial …


Imprecise Probabilities In Engineering Analyses, Michael Beer, Scott Ferson, Vladik Kreinovich Apr 2013

Departmental Technical Reports (CS)

Probabilistic uncertainty and imprecision in structural parameters and in environmental conditions and loads are challenging phenomena in engineering analyses. They require appropriate mathematical modeling and quantification to obtain realistic results when predicting the behavior and reliability of engineering structures and systems. But this modeling and quantification are complicated by the characteristics of the available information, which involves, for example, sparse data, poor measurements, and subjective information. This raises the question of whether the available information is sufficient for probabilistic modeling or rather suggests a set-theoretical approach. The framework of imprecise probabilities provides a mathematical basis to deal with these problems which …
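One standard building block of imprecise probabilities -- not necessarily the one used in this report -- is the Frechet bounds: when the marginal probabilities of two events are known but their dependence is not, the probability of their conjunction is only known to lie in an interval:

```python
def frechet_and(p_a, p_b):
    """Frechet bounds for P(A and B): with P(A) and P(B) known but the
    dependence between A and B unknown, the conjunction probability is
    bounded by [max(0, P(A) + P(B) - 1), min(P(A), P(B))]."""
    return (max(0.0, p_a + p_b - 1.0), min(p_a, p_b))

print(frechet_and(0.75, 0.5))  # (0.25, 0.5)
```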