Engineering Commons

Articles 1 - 30 of 767

Full-Text Articles in Engineering

How The Pavement's Lifetime Depends On The Stress Level: An Explanation Of The Empirical Formula, Edgar Daniel Rodriguez Velasquez, Vladik Kreinovich, Olga Kosheleva, Hoang Phuong Nguyen Sep 2021

Departmental Technical Reports (CS)

We show that natural invariance ideas explain the empirical dependence of the pavement's lifetime on the stress level.


Geometric Analysis Leads To Adversarial Teaching Of Cybersecurity, Christian Servin, Olga Kosheleva, Vladik Kreinovich Jul 2021

Departmental Technical Reports (CS)

As time goes on, our civilization becomes more and more dependent on computers and, therefore, more and more vulnerable to cyberattacks. Because of this threat, it is very important to make sure that computer science students -- tomorrow's computer professionals -- are sufficiently skilled in cybersecurity. In this paper, we analyze the need for teaching cybersecurity from the geometric viewpoint. We show that the corresponding geometric analysis leads to adversarial teaching -- an empirically effective but not yet theoretically well-understood approach in which the class is divided into sparring mini-teams that try their best to attack and defend against each other. Thus, our …


Low-Complexity Zonotopes Can Enhance Uncertainty Quantification (Uq), Olga Kosheleva, Vladik Kreinovich Mar 2021

Departmental Technical Reports (CS)

In many practical situations, the only information that we know about the measurement error is the upper bound D on its absolute value. In this case, once we know the measurement result X, the only information that we have about the actual value x of the corresponding quantity is that this value belongs to the interval [X − D, X + D]. How can we estimate the accuracy of the result of data processing under this interval uncertainty? In general, computing this accuracy is NP-hard, but in the usual case when measurement errors are relatively small, we can linearize the …
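
In the linearized case mentioned above, the resulting accuracy bound has a simple closed form: if y = f(x1, ..., xn) and each input is known with absolute accuracy Di, then the accuracy of y is approximately the sum of |∂f/∂xi| · Di. A minimal sketch via numerical differentiation (the function f and the accuracy bounds are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def linearized_error_bound(f, X, D, h=1e-6):
    """Estimate the accuracy of y = f(X) when each input X[i] is known
    with absolute accuracy D[i]: bound = sum_i |df/dx_i| * D[i]."""
    X = np.asarray(X, dtype=float)
    bound = 0.0
    for i in range(len(X)):
        Xp, Xm = X.copy(), X.copy()
        Xp[i] += h
        Xm[i] -= h
        dfdxi = (f(Xp) - f(Xm)) / (2 * h)  # central-difference derivative
        bound += abs(dfdxi) * D[i]
    return bound

# Illustrative example: y = x1 * x2, measured X = (2.0, 3.0), accuracies D = (0.1, 0.1)
print(linearized_error_bound(lambda x: x[0] * x[1], [2.0, 3.0], [0.1, 0.1]))
# approximately |3.0| * 0.1 + |2.0| * 0.1 = 0.5
```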


We Need Fuzzy Techniques To Design Successful Human-Like Robots, Vladik Kreinovich, Olga Kosheleva, Laxman Bokati Nov 2020

Departmental Technical Reports (CS)

In this chapter, we argue that to make sure that human-like robots exhibit human-like behavior, we need to use fuzzy techniques -- and we also provide details of this usage. The chapter is intended both for researchers and practitioners who are very familiar with fuzzy techniques and for those who do not know these techniques but are interested in designing human-like robots.


When Can We Be Sure That Measurement Results Are Consistent: 1-D Interval Case And Beyond, Hani Dbouk, Steffen Schön, Ingo Neumann, Vladik Kreinovich Jun 2020

Departmental Technical Reports (CS)

In many practical situations, measurements are characterized by interval uncertainty -- namely, based on each measurement result, the only information that we have about the actual value of the measured quantity is that this value belongs to some interval. If several such intervals -- corresponding to measuring the same quantity -- have an empty intersection, this means that at least one of the corresponding measurement results is an outlier, caused by a malfunction of the measuring instrument. From the purely mathematical viewpoint, if the intersection is non-empty, there is no reason to be suspicious, but from the practical viewpoint, if …
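
In the 1-D case, checking whether several intervals have a common point is straightforward: the intersection is non-empty exactly when the largest lower endpoint does not exceed the smallest upper endpoint. A minimal sketch (the sample intervals are illustrative):

```python
def intervals_consistent(intervals):
    """Check whether 1-D intervals (lo, hi) have a non-empty intersection:
    true iff the max of the lower endpoints <= the min of the upper endpoints."""
    lo = max(lo for lo, hi in intervals)
    hi = min(hi for lo, hi in intervals)
    return lo <= hi

# Three interval measurements of the same quantity:
print(intervals_consistent([(1.0, 2.0), (1.5, 2.5), (1.8, 2.2)]))  # True
print(intervals_consistent([(1.0, 2.0), (2.5, 3.0), (1.8, 2.2)]))  # False: likely outlier
```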


Why Lasso, Ridge Regression, And En: Explanation Based On Soft Computing, Woraphon Yamaka, Hamza Alkhatib, Ingo Neumann, Vladik Kreinovich Jun 2020

Departmental Technical Reports (CS)

In many practical situations, observations and measurement results are consistent with many different models -- i.e., the corresponding problem is ill-posed. In such situations, a reasonable idea is to take into account that the values of the corresponding parameters should not be too large; this idea is known as {\it regularization}. Several different regularization techniques have been proposed; empirically, the most successful are the LASSO method, in which we bound the sum of the absolute values of the parameters; the ridge regression method, in which we bound the sum of their squares; and the EN (elastic net) method, in which these two approaches are combined. In this …
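
In penalized form, the three methods minimize the least-squares error plus λ·Σ|βi| (LASSO), λ·Σβi² (ridge), or a weighted combination of the two (elastic net). A minimal sketch using scikit-learn (the synthetic data and the regularization strengths are illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge, ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = X @ np.array([3.0, -2.0] + [0.0] * 8) + 0.1 * rng.normal(size=100)

for model in (Lasso(alpha=0.1),                       # penalize sum of |beta_i|
              Ridge(alpha=0.1),                       # penalize sum of beta_i^2
              ElasticNet(alpha=0.1, l1_ratio=0.5)):   # mix of both penalties
    model.fit(X, y)
    print(type(model).__name__, np.round(model.coef_, 2))
```

Note how LASSO and elastic net drive most of the eight irrelevant coefficients exactly to zero, while ridge only shrinks them.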


How To Train A-To-B And B-To-A Neural Networks So That The Resulting Transformations Are (Almost) Exact Inverses, Paravee Maneejuk, Torben Peters, Claus Brenner, Vladik Kreinovich Jun 2020

Departmental Technical Reports (CS)

In many practical situations, there exist several representations, each of which is convenient for some operations, and many data processing algorithms involve transforming back and forth between these representations. Many such transformations are computationally time-consuming when performed exactly. So, taking into account that input data is usually only 1-10% accurate anyway, it makes sense to replace time-consuming exact transformations with faster approximate ones. One of the natural ways to get a fast-computing approximation to a transformation is to train the corresponding neural network. The problem is that if we train A-to-B and B-to-A networks separately, the resulting approximate transformations are …
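
One natural way to make the two networks (almost) exact inverses of each other -- a sketch of the general idea, not necessarily the authors' exact training scheme -- is to train them jointly with cycle-consistency terms that penalize the distance between x and B_to_A(A_to_B(x)), and symmetrically for y. A minimal PyTorch sketch (the architectures, the transformation, and the loss weights are illustrative assumptions):

```python
import torch
import torch.nn as nn

def mlp(n_in, n_out):
    return nn.Sequential(nn.Linear(n_in, 64), nn.Tanh(), nn.Linear(64, n_out))

a_to_b, b_to_a = mlp(3, 3), mlp(3, 3)
opt = torch.optim.Adam(list(a_to_b.parameters()) + list(b_to_a.parameters()), lr=1e-3)
mse = nn.MSELoss()

for step in range(1000):
    x = torch.randn(128, 3)   # samples in representation A
    y = x ** 3                # illustrative exact A-to-B transformation
    y_hat = a_to_b(x)
    x_hat = b_to_a(y)
    loss = (mse(y_hat, y)              # fit the forward transformation
            + mse(x_hat, x)            # fit the inverse transformation
            + mse(b_to_a(y_hat), x)    # cycle: B_to_A(A_to_B(x)) ≈ x
            + mse(a_to_b(x_hat), y))   # cycle: A_to_B(B_to_A(y)) ≈ y
    opt.zero_grad()
    loss.backward()
    opt.step()
```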


Lexicographic-Type Extension Of Min-Max Logic Is Not Uniquely Determined, Olga Kosheleva, Vladik Kreinovich Jun 2020

Departmental Technical Reports (CS)

Since in a computer, "true" is usually represented as 1 and "false" as 0, it is natural to represent intermediate degrees of confidence by numbers intermediate between 0 and 1; this is one of the main ideas behind fuzzy logic -- a technique that has led to many useful applications. In many such applications, the degree of confidence in A & B is estimated as the minimum of the degrees of confidence corresponding to A and B, and the degree of confidence in A \/ B is estimated as the maximum; for example, 0.5 \/ 0.3 = 0.5. It is …


A Fully Lexicographic Extension Of Min Or Max Operation Cannot Be Associative, Olga Kosheleva, Vladik Kreinovich Jun 2020

Departmental Technical Reports (CS)

In many applications of fuzzy logic, to estimate the degree of confidence in a statement A&B, we take the minimum min(a,b) of the expert's degrees of confidence in the two statements A and B. When a < b, an increase in b does not change this estimate, while from the commonsense viewpoint, our degree of confidence in A&B should increase. To take this commonsense idea into account, Ildar Batyrshin and colleagues proposed to extend the original order on the interval [0,1] to a lexicographic order on a larger set. This idea works for expressions of the type A&B; can it be extended to more general expressions? In this paper, we show that such an extension, while theoretically possible, would violate another commonsense requirement -- associativity of the "and"-operation. A similar negative result is proven for lexicographic extensions of the maximum operation -- which estimates the expert's degree of confidence in a statement A\/B.


What Is The Optimal Annealing Schedule In Quantum Annealing, Oscar Galindo, Vladik Kreinovich Jun 2020

Departmental Technical Reports (CS)

In many real-life situations in engineering (and in other disciplines), we need to solve an optimization problem: we want an optimal design, we want an optimal control, etc. One of the main problems in optimization is avoiding local maxima (or minima). One of the techniques that helps solve this problem is annealing: whenever we find ourselves in a possibly local maximum, we jump out with some probability and continue the search for the true optimum. A natural way to organize such a probabilistic perturbation of the deterministic optimization is to use quantum effects. It turns out that often, quantum annealing …
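
In classical (non-quantum) simulated annealing, the "jump out" probability is usually taken as exp(Δ/T), where Δ < 0 is how much the objective would get worse and T is a temperature that decreases according to the annealing schedule. A minimal classical sketch for a 1-D maximization (the objective, schedule, and step size are illustrative; the paper itself concerns the quantum case):

```python
import math
import random

def f(x):
    # Illustrative multimodal objective with several local maxima
    return math.sin(5 * x) + 0.5 * math.sin(17 * x) - 0.1 * x * x

x, best = 0.0, 0.0
for step in range(1, 20001):
    T = 1.0 / step                       # illustrative annealing schedule
    x_new = x + random.gauss(0.0, 0.3)   # random perturbation
    delta = f(x_new) - f(x)
    # Accept improvements; accept worsening moves with probability exp(delta / T)
    if delta >= 0 or random.random() < math.exp(delta / T):
        x = x_new
    if f(x) > f(best):
        best = x
print(best, f(best))
```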


Physical Randomness Can Help In Computations, Olga Kosheleva, Vladik Kreinovich Jan 2020

Departmental Technical Reports (CS)

Can we use some so-far-unused physical phenomena to compute something that usual computers cannot? Researchers have been proposing many schemes that may lead to such computations. These schemes use different physical phenomena ranging from quantum-related to gravity-related to using hypothetical time machines. In this paper, we show that, in principle, there is no need to look into state-of-the-art physics to develop such a scheme: computability beyond the usual computations naturally appears if we consider such a basic notion as randomness.


Deep Learning (Partly) Demystified, Vladik Kreinovich, Olga Kosheleva Nov 2019

Departmental Technical Reports (CS)

Successes of deep learning are partly due to appropriate selection of activation functions, pooling functions, etc. Most of these choices have been made based on empirical comparison and heuristic ideas. In this paper, we show that many of these choices -- and the surprising success of deep learning in the first place -- can be explained by reasonably simple and natural mathematics.


Computing Without Computing: Dna Version, Vladik Kreinovich, Julio C. Urenda Nov 2019

Departmental Technical Reports (CS)

The traditional DNA computing schemes are based on using or simulating DNA-related activity. This is similar to how quantum computers use quantum activities to perform computations. Interestingly, in quantum computing, there is another phenomenon known as computing without computing, in which, somewhat surprisingly, the result of the computation appears without invoking the actual quantum processes. In this chapter, we show that a similar phenomenon is possible for DNA computing: in addition to the more traditional way of using or simulating DNA activity, we can also use DNA inactivity to solve complex problems. We also show that while DNA computing without …


Why Deep Learning Is More Efficient Than Support Vector Machines, And How It Is Related To Sparsity Techniques In Signal Processing, Laxman Bokati, Olga Kosheleva, Vladik Kreinovich Nov 2019

Departmental Technical Reports (CS)

Several decades ago, traditional neural networks were the most efficient machine learning technique. Then it turned out that, in general, a different technique called support vector machines is more efficient. Reasonably recently, a new technique called deep learning has been shown to be the most efficient one. These are empirical observations, but how can we explain them -- and thus make the corresponding conclusions more reliable? In this paper, we provide a possible theoretical explanation for the above-described empirical comparisons. This explanation enables us to explain yet another empirical fact -- that sparsity techniques turned out to be very efficient in signal …


Towards A Theoretical Explanation Of How Pavement Condition Index Deteriorates Over Time, Edgar Daniel Rodriguez Velasquez, Carlos M. Chang Albitres, Vladik Kreinovich Aug 2019

Departmental Technical Reports (CS)

To predict how the Pavement Condition Index will change over time, practitioners use a complex empirical formula derived in the 1980s. In this paper, we provide a possible theoretical explanation for this formula, an explanation based on general ideas of invariance. In general, the existence of a theoretical explanation makes a formula more reliable; thus, we hope that our explanation will make predictions of road quality more reliable.


Nonlinear Mechanical Properties Of Road Pavements: Geometric Symmetries Explain The Empirical Difference Between Roads Built On Clay Vs. Granular Soils, Afshin Gholamy, Vladik Kreinovich Jun 2019

Departmental Technical Reports (CS)

It is empirically known that roads built on clay soils have different nonlinear mechanical properties than roads built on granular soils (such as gravel or sand). In this paper, we show that this difficult-to-explain empirical fact can be naturally explained if we analyze the corresponding geometric symmetries.


Probabilistic Graphical Models Follow Directly From Maximum Entropy, Anh H. Ly, Francisco Zapata, Olac Fuentes, Vladik Kreinovich Sep 2017

Departmental Technical Reports (CS)

Probabilistic graphical models are a very efficient machine learning technique. However, their only known justification is based on heuristic ideas, ideas that do not explain why exactly these models are empirically successful. It is therefore desirable to come up with a theoretical explanation for these models' empirical efficiency. At present, the only such explanation is that these models naturally emerge if we maximize the relative entropy; however, why the relative entropy should be maximized is not clear. In this paper, we show that these models can also be obtained from a more natural -- and well-justified -- idea of maximizing …


How To Gauge The Accuracy Of Fuzzy Control Recommendations: A Simple Idea, Patricia Melin, Oscar Castillo, Andrzej Pownuk, Olga Kosheleva, Vladik Kreinovich Jun 2017

Departmental Technical Reports (CS)

Fuzzy control is based on approximate expert information, so its recommendations are also approximate. However, the traditional fuzzy control algorithms do not tell us how accurate these recommendations are. In contrast, for probabilistic uncertainty, there is a natural measure of accuracy: namely, the standard deviation. In this paper, we show how to extend this idea from probabilistic to fuzzy uncertainty and thus come up with a reasonable way to gauge the accuracy of fuzzy control recommendations.
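
One natural way to carry the standard-deviation idea over to the fuzzy case -- a sketch of the general direction, not necessarily the paper's construction -- is to treat the normalized membership function as a weight and compute the weighted spread around the centroid (the usual defuzzified recommendation). A minimal sketch (the membership function is illustrative):

```python
import numpy as np

def fuzzy_mean_and_spread(u, mu):
    """Centroid and a standard-deviation-like spread of a fuzzy set,
    treating the normalized membership function mu(u) as a weight."""
    w = mu / mu.sum()
    mean = np.sum(w * u)
    spread = np.sqrt(np.sum(w * (u - mean) ** 2))
    return mean, spread

u = np.linspace(0.0, 10.0, 1001)              # possible control values
mu = np.maximum(0.0, 1.0 - np.abs(u - 5.0))   # illustrative triangular membership
print(fuzzy_mean_and_spread(u, mu))            # recommendation ~5.0, plus its spread
```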


Normalization-Invariant Fuzzy Logic Operations Explain Empirical Success Of Student Distributions In Describing Measurement Uncertainty, Hamza Alkhatib, Boris Kargoll, Ingo Neumann, Vladik Kreinovich Jun 2017

Departmental Technical Reports (CS)

In engineering practice, measurement errors are usually described by normal distributions. However, in some cases, the distribution is heavy-tailed and thus not normal. In such situations, empirical evidence shows that Student distributions are the most adequate. The corresponding recommendation -- based on empirical evidence -- is included in the International Organization for Standardization guide. In this paper, we explain this empirical fact by showing that a natural fuzzy-logic-based formalization of commonsense requirements leads exactly to Student distributions.
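
For readers who want to see the difference in practice: heavy-tailed measurement errors can be fit with a Student distribution, whose estimated degrees-of-freedom parameter indicates how far the data is from normality. A minimal sketch using scipy (the simulated errors are illustrative, not real measurement data):

```python
from scipy import stats

# Simulate heavy-tailed measurement errors (illustrative assumption)
errors = stats.t.rvs(df=3, loc=0.0, scale=0.1, size=5000, random_state=0)

df, loc, scale = stats.t.fit(errors)
print(f"fitted df={df:.1f}, loc={loc:.3f}, scale={scale:.3f}")
# Small df means heavy tails; as df grows, the Student distribution
# approaches the normal distribution.
```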


Simplest Polynomial For Which Naive (Straightforward) Interval Computations Cannot Be Exact, Olga Kosheleva, Vladik Kreinovich, Songsak Sriboonchitta Jun 2017

Departmental Technical Reports (CS)

One of the main problems of interval computations is computing the range of a given function over given intervals. It is known that naive interval computations always provide an enclosure for the desired range. Sometimes -- e.g., for single use expressions -- naive interval computations compute the exact range. Sometimes, we do not get the exact range when we apply naive interval computations to the original expression, but we get the exact range if we apply naive interval computations to an equivalent reformulation of the original expression. For some other functions -- including some polynomials -- we do not get …
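
A classical illustration of the enclosure-vs-exact-range gap: for f(x) = x·(1 − x) on [0,1], naive interval computations multiply [0,1] by [0,1] and get [0,1], while the exact range is [0, 0.25]. A minimal sketch (interval arithmetic implemented directly; this is the standard textbook example, not necessarily the polynomial studied in the paper):

```python
def i_mul(a, b):
    """Product of intervals a = (a_lo, a_hi), b = (b_lo, b_hi)."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def i_sub(a, b):
    """Difference of intervals: a - b."""
    return (a[0] - b[1], a[1] - b[0])

x = (0.0, 1.0)
naive = i_mul(x, i_sub((1.0, 1.0), x))  # x * (1 - x), the two x's treated independently
print(naive)                             # (0.0, 1.0): an enclosure, not the exact range
# Exact range of x*(1-x) on [0,1] is [0, 0.25], attained at x = 0 (or 1) and x = 1/2.
```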


Do We Have Compatible Concepts Of Epistemic Uncertainty?, Michael Beer, Scott Ferson, Vladik Kreinovich Jan 2016

Departmental Technical Reports (CS)

Epistemic uncertainties appear widely in civil engineering practice. There is a clear consensus that these epistemic uncertainties need to be taken into account for a realistic assessment of the performance and reliability of our structures and systems. However, there is no clearly defined procedure to meet this challenge. In this paper we discuss the phenomena that involve epistemic uncertainties in relation to modeling options. Particular attention is paid to set-theoretical approaches and imprecise probabilities. The respective concepts are categorized, and relationships are highlighted.


Science Is Helpful For Engineering Applications: A Theoretical Explanation Of An Empirical Observation, Olga Kosheleva, Vladik Kreinovich Nov 2015

Departmental Technical Reports (CS)

Empirical evidence shows that when engineering design uses scientific analysis, we usually get much better performance than for a system designed by using a trial-and-error engineering approach. In this paper, we provide a quantitative explanation for this empirical observation.


What Is The Right Context For An Engineering Problem: Finding Such A Context Is Np-Hard, Martine Ceberio, Vladik Kreinovich, Hung T. Nguyen, Songsak Sriboonchitta, Rujira Ouncharoen Jun 2015

Departmental Technical Reports (CS)

In the general case, most computational engineering problems are NP-hard. So, to make the problem feasible, it is important to restrict it. Ideally, we should use the most general context in which the problem is still feasible. In this paper, we prove that finding such a most general context is itself an NP-hard problem. Since the appropriate context cannot be found algorithmically, it is necessary to be creative -- i.e., to use some computational intelligence techniques. On three examples, we show how such techniques can help us come up with the appropriate context. …


Comparisons Of Measurement Results As Constraints On Accuracies Of Measuring Instruments: When Can We Determine The Accuracies From These Constraints?, Christian Servin, Vladik Kreinovich Jun 2015

Departmental Technical Reports (CS)

For a measuring instrument, the usual way to find the probability distribution of its measurement errors is to compare its results with the results of measuring the same quantity with a much more accurate instrument. But what if we are interested in estimating the measurement accuracy of a state-of-the-art measuring instrument, for which no more accurate instrument is possible? In this paper, we show that, while in general such estimation is not possible, we can uniquely determine the corresponding probability distributions if we have several state-of-the-art measuring instruments and, for one of them, the corresponding probability distribution is symmetric.


Fuzzy Xor Classes From Quantum Computing, Anderson Ávila, Murilo Schmalfuss, Renata Reiser, Vladik Kreinovich Jun 2015

Departmental Technical Reports (CS)

By making use of quantum parallelism, quantum processes provide parallel modelling for fuzzy connectives, and the corresponding computations of quantum states can be performed simultaneously, based on the superposition of membership degrees of an element with respect to the different fuzzy sets. Such description and modelling is mainly focused on representable fuzzy Xor connectives and their dual constructions. So, via quantum computing, not only is the interpretation based on traditional quantum circuits considered, but also the notion of quantum process in the qGM model is applied, providing an evaluation of the corresponding simulation by considering graphical interfaces of the VPE-qGM …


Model Reduction: Why It Is Possible And How It Can Potentially Help To Control Swarms Of Unmanned Aerial Vehicles (Uavs), Martine Ceberio, Leobardo Valera, Olga Kosheleva, Rodrigo A. Romero Apr 2015

Departmental Technical Reports (CS)

In many application areas, such as meteorology, traffic control, etc., it is desirable to employ swarms of Unmanned Aerial Vehicles (UAVs) to provide us with a good picture of the changing situation and thus, to help us make better predictions (and make better decisions based on these predictions). To avoid duplication, interference, and collisions, UAVs must coordinate their trajectories. As a result, the optimal control of each of these UAVs should depend on the positions and velocities of all others -- which makes the corresponding control problem very complicated. Since, in contrast to controlling a single UAV, the resulting problem …


How To Estimate Expected Shortfall When Probabilities Are Known With Interval Or Fuzzy Uncertainty, Christian Servin, Hung T. Nguyen, Vladik Kreinovich Apr 2015

Departmental Technical Reports (CS)

To gauge the risk corresponding to a possible disaster, it is important to know both the probability of this disaster and the expected damage caused by such potential disaster ("expected shortfall"). Both these measures of risk are easy to estimate in the ideal case, when we know the exact probabilities of different disaster strengths. In practice, however, we usually only have a partial information about these probabilities: we may have an interval (or, more generally, fuzzy) uncertainty about these probabilities. In this paper, we show how to efficiently estimate the expected shortfall under such interval and/or fuzzy uncertainty.
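
In the ideal (crisp) case, the expected shortfall at level α is simply the average damage over the worst (1 − α) fraction of scenarios. A minimal sketch for sampled damages (the damage model and the level are illustrative; the interval/fuzzy case from the paper is not reproduced here):

```python
import numpy as np

def expected_shortfall(damages, alpha=0.95):
    """Average damage in the worst (1 - alpha) tail of the sample."""
    damages = np.asarray(damages, dtype=float)
    var = np.quantile(damages, alpha)   # value-at-risk threshold
    tail = damages[damages >= var]
    return tail.mean()

rng = np.random.default_rng(0)
damages = rng.lognormal(mean=0.0, sigma=1.0, size=10000)  # illustrative damage model
print(expected_shortfall(damages, alpha=0.95))
```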


Symbolic Aggregate Approximation (Sax) Under Interval Uncertainty, Chrysostomos D. Stylios, Vladik Kreinovich Apr 2015

Departmental Technical Reports (CS)

In many practical situations, we monitor a system by continuously measuring the corresponding quantities, to make sure that an abnormal deviation is detected as early as possible. Often, we do not have ready algorithms to detect abnormality, so we need to use machine learning techniques. For these techniques to be efficient, we first need to compress the data. One of the most successful methods of data compression is the technique of Symbolic Aggregate approXimation (SAX). While this technique is motivated by measurement uncertainty, it does not explicitly take this uncertainty into account. In this paper, we show that we can …
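
For reference, the SAX compression itself proceeds in three steps: z-normalize the series, average it over equal-length segments (piecewise aggregate approximation), and map each average to a letter using breakpoints that make the letters equiprobable under a normal distribution. A minimal sketch (the series, word length, and alphabet size are illustrative):

```python
import numpy as np
from scipy.stats import norm

def sax(series, word_len=8, alphabet_size=4):
    """Symbolic Aggregate approXimation of a 1-D time series."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / x.std()                   # z-normalization
    paa = x.reshape(word_len, -1).mean(axis=1)     # piecewise aggregate approximation
    # Breakpoints that split N(0,1) into equiprobable regions:
    breakpoints = norm.ppf(np.arange(1, alphabet_size) / alphabet_size)
    letters = "abcdefghijklmnopqrstuvwxyz"[:alphabet_size]
    return "".join(letters[np.searchsorted(breakpoints, v)] for v in paa)

t = np.linspace(0, 2 * np.pi, 64)
print(sax(np.sin(t)))   # prints an 8-letter SAX word for one period of a sine wave
```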


Optimizing Cloud Use Under Interval Uncertainty, Vladik Kreinovich, Esthela Gallardo Apr 2015

Departmental Technical Reports (CS)

One of the main advantages of cloud computing is that it helps users save money: instead of buying enough computers to cover all their computations, the user can rent computation time on the cloud to cover the rare peak spikes of computing need. From this viewpoint, it is important to find the optimal division between in-house and in-the-cloud computations. In this paper, we solve this optimization problem, both in the idealized case when we know the complete information about the costs and the user's need, and in a more realistic situation, when we only know …
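
The underlying trade-off can be written down directly: if in-house capacity k costs c_own per unit and overflow is rented at c_cloud per unit, the expected total cost is k·c_own + c_cloud·E[max(0, demand − k)], to be minimized over k. A minimal sketch that scans k against a sampled demand distribution (all costs and the demand model are illustrative assumptions, not the paper's solution):

```python
import numpy as np

def expected_cost(k, demand_samples, c_own=1.0, c_cloud=3.0):
    """Expected cost of owning k units and renting the overflow from the cloud."""
    overflow = np.maximum(0.0, demand_samples - k)
    return k * c_own + c_cloud * overflow.mean()

rng = np.random.default_rng(0)
demand = rng.gamma(shape=2.0, scale=10.0, size=100000)  # illustrative demand model

ks = np.arange(0, 101)
costs = [expected_cost(k, demand) for k in ks]
print("optimal in-house capacity:", ks[int(np.argmin(costs))])
```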


Which Bio-Diversity Indices Are Most Adequate, Olga Kosheleva, Craig Tweedie, Vladik Kreinovich Apr 2015

Departmental Technical Reports (CS)

One of the main objectives of ecology is to analyze, maintain, and enhance the bio-diversity of different ecosystems. To be able to do that, we need to gauge bio-diversity. Several semi-heuristic diversity indices have been shown to be in good accordance with the intuitive notion of bio-diversity. In this paper, we provide a theoretical justification for these empirically successful techniques. Specifically, we show that the most widely used technique -- the Simpson index -- can be justified by using simple fuzzy rules, while a more elaborate justification explains all empirically successful diversity indices.
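
For concreteness, the Simpson index mentioned above is based on Σ pᵢ², where pᵢ is the fraction of individuals belonging to species i; it is often reported in the complementary form 1 − Σ pᵢ², so that larger values mean more diversity. A minimal sketch (the species counts are illustrative):

```python
def simpson_index(counts):
    """Simpson diversity index in the complementary form 1 - sum(p_i^2),
    where p_i is the fraction of individuals in species i."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

print(simpson_index([50, 30, 20]))   # moderately diverse community
print(simpson_index([98, 1, 1]))     # dominated by one species: low diversity
```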