Engineering Commons

Open Access. Powered by Scholars. Published by Universities.®

Articles 1 - 30 of 778

Full-Text Articles in Engineering

Geometric Analysis Leads To Adversarial Teaching Of Cybersecurity, Christian Servin, Olga Kosheleva, Vladik Kreinovich Jul 2021

Departmental Technical Reports (CS)

As time goes on, our civilization becomes more and more dependent on computers and, therefore, more and more vulnerable to cyberattacks. Because of this threat, it is very important to make sure that computer science students -- tomorrow's computer professionals -- are sufficiently skilled in cybersecurity. In this paper, we analyze the need for teaching cybersecurity from the geometric viewpoint. We show that the corresponding geometric analysis leads to adversarial teaching -- an empirically effective but theoretically not well understood approach in which the class is divided into sparring mini-teams that try their best to attack each other and defend from each other. Thus, our …


We Need Fuzzy Techniques To Design Successful Human-Like Robots, Vladik Kreinovich, Olga Kosheleva, Laxman Bokati Nov 2020

Departmental Technical Reports (CS)

In this chapter, we argue that to make sure that human-like robots exhibit human-like behavior, we need to use fuzzy techniques -- and we also provide details of this usage. The chapter is intended both for researchers and practitioners who are very familiar with fuzzy techniques and for those who do not know these techniques but are interested in designing human-like robots.


When Can We Be Sure That Measurement Results Are Consistent: 1-D Interval Case And Beyond, Hani Dbouk, Steffen Schön, Ingo Neumann, Vladik Kreinovich Jun 2020

Departmental Technical Reports (CS)

In many practical situations, measurements are characterized by interval uncertainty -- namely, based on each measurement result, the only information that we have about the actual value of the measured quantity is that this value belongs to some interval. If several such intervals -- corresponding to measuring the same quantity -- have an empty intersection, this means that at least one of the corresponding measurement results is an outlier, caused by a malfunction of the measuring instrument. From the purely mathematical viewpoint, if the intersection is non-empty, there is no reason to be suspicious, but from the practical viewpoint, if …
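
For the 1-D case described above, consistency is easy to test: intervals for the same quantity have a common point exactly when the largest lower endpoint does not exceed the smallest upper endpoint. A minimal Python sketch of this check (function and variable names are ours, for illustration only):

    # Sketch: do 1-D measurement intervals for the same quantity intersect?
    def intervals_consistent(intervals):
        """intervals: list of (lower, upper) bounds for the measured value."""
        lower = max(lo for lo, hi in intervals)  # largest lower endpoint
        upper = min(hi for lo, hi in intervals)  # smallest upper endpoint
        return lower <= upper                    # non-empty intersection

    print(intervals_consistent([(1.0, 2.0), (1.5, 2.5)]))              # True
    print(intervals_consistent([(1.0, 2.0), (1.5, 2.5), (3.0, 4.0)]))  # False:
    # the third interval misses the others, so some result is an outlier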


Why Lasso, Ridge Regression, And En: Explanation Based On Soft Computing, Woraphon Yamaka, Hamza Alkhatib, Ingo Neumann, Vladik Kreinovich Jun 2020

Departmental Technical Reports (CS)

In many practical situations, observations and measurement results are consistent with many different models -- i.e., the corresponding problem is ill-posed. In such situations, a reasonable idea is to take into account that the values of the corresponding parameters should not be too large; this idea is known as regularization. Several different regularization techniques have been proposed; empirically, the most successful are the LASSO method, in which we bound the sum of the absolute values of the parameters; the ridge regression method, in which we bound the sum of their squares; and the EN (elastic net) method, in which these two approaches are combined. In this …
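
For reference, the three regularized least-squares objectives are standardly written as follows (y are the observations, X the design matrix, beta the parameters, and the lambdas the regularization weights; these are the standard formulas, not taken from the truncated text):

    \min_\beta \|y - X\beta\|_2^2 + \lambda \sum_i |\beta_i|                                   (LASSO)
    \min_\beta \|y - X\beta\|_2^2 + \lambda \sum_i \beta_i^2                                   (ridge)
    \min_\beta \|y - X\beta\|_2^2 + \lambda_1 \sum_i |\beta_i| + \lambda_2 \sum_i \beta_i^2    (EN)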


How To Train A-To-B And B-To-A Neural Networks So That The Resulting Transformations Are (Almost) Exact Inverses, Paravee Maneejuk, Torben Peters, Claus Brenner, Vladik Kreinovich Jun 2020

Departmental Technical Reports (CS)

In many practical situations, there exist several representations, each of which is convenient for some operations, and many data processing algorithms involve transforming back and forth between these representations. Many such transformations are computationally time-consuming when performed exactly. So, taking into account that input data is usually only 1-10% accurate anyway, it makes sense to replace time-consuming exact transformations with faster approximate ones. One of the natural ways to get a fast-computing approximation to a transformation is to train the corresponding neural network. The problem is that if we train A-to-B and B-to-A networks separately, the resulting approximate transformations are …
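
The rest of the training scheme is truncated above; one common way to encourage near-invertibility (a sketch of the general idea, not necessarily the authors' method) is to train both networks jointly with a round-trip reconstruction term. A minimal PyTorch sketch on made-up 1-D data:

    # Sketch: jointly train f: A->B and g: B->A with an extra "cycle" term
    # ||g(f(x)) - x||^2 + ||f(g(y)) - y||^2, so that the two learned
    # transformations are (almost) exact inverses. Toy networks and data.
    import torch
    import torch.nn as nn

    f = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))  # A -> B
    g = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))  # B -> A
    opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=1e-3)

    x = torch.linspace(-1, 1, 256).unsqueeze(1)  # samples in representation A
    y = x ** 3                                   # toy exact transformation B

    for step in range(2000):
        opt.zero_grad()
        fit = ((f(x) - y) ** 2).mean() + ((g(y) - x) ** 2).mean()
        cycle = ((g(f(x)) - x) ** 2).mean() + ((f(g(y)) - y) ** 2).mean()
        (fit + cycle).backward()    # cycle term penalizes round-trip error
        opt.step()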


Lexicographic-Type Extension Of Min-Max Logic Is Not Uniquely Determined, Olga Kosheleva, Vladik Kreinovich Jun 2020

Departmental Technical Reports (CS)

Since in a computer, "true" is usually represented as 1 and "false" as 0, it is natural to represent intermediate degrees of confidence by numbers intermediate between 0 and 1; this is one of the main ideas behind fuzzy logic -- a technique that has led to many useful applications. In many such applications, the degree of confidence in A & B is estimated as the minimum of the degrees of confidence corresponding to A and B, and the degree of confidence in A \/ B is estimated as the maximum; for example, 0.5 \/ 0.3 = 0.5. It is …
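
A two-line Python illustration of these min/max estimates (ours, for illustration):

    def fuzzy_and(a, b): return min(a, b)  # degree of confidence in A & B
    def fuzzy_or(a, b):  return max(a, b)  # degree of confidence in A \/ B

    print(fuzzy_or(0.5, 0.3))   # 0.5, as in the example above
    print(fuzzy_and(0.5, 0.3))  # 0.3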


A Fully Lexicographic Extension Of Min Or Max Operation Cannot Be Associative, Olga Kosheleva, Vladik Kreinovich Jun 2020

Departmental Technical Reports (CS)

In many applications of fuzzy logic, to estimate the degree of confidence in a statement A&B, we take the minimum min(a,b) of the expert's degrees of confidence in the two statements A and B. When a < b, an increase in b does not change this estimate, although from the commonsense viewpoint, our degree of confidence in A&B should increase. To take this commonsense idea into account, Ildar Batyrshin and colleagues proposed to extend the original order on the interval [0,1] to a lexicographic order on a larger set. This idea works for expressions of the type A&B; can it be extended to more general expressions? In this paper, we show that such an extension, while theoretically possible, would violate another commonsense requirement: associativity of the "and"-operation. A similar negative result is proven for lexicographic extensions of the maximum operation, which estimates the expert's degree of confidence in a statement A\/B.
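
The motivating insensitivity of min, and the flavor of a lexicographic-type fix, are easy to show numerically (a toy illustration of the idea, not the paper's exact construction):

    # min ignores an increase in the larger argument:
    print(min(0.3, 0.5), min(0.3, 0.9))   # 0.3 0.3 -- b's increase is invisible

    # Toy lexicographic-type refinement: compare by the min first, then break
    # ties by the other argument (Python compares tuples lexicographically).
    def lex_and(a, b):
        return (min(a, b), max(a, b))

    print(lex_and(0.3, 0.5) < lex_and(0.3, 0.9))  # True: the increase now counts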


What Is The Optimal Annealing Schedule In Quantum Annealing, Oscar Galindo, Vladik Kreinovich Jun 2020

Departmental Technical Reports (CS)

In many real-life situations in engineering (and in other disciplines), we need to solve an optimization problem: we want an optimal design, we want an optimal control, etc. One of the main problems in optimization is avoiding local maxima (or minima). One of the techniques that helps to solve this problem is annealing: whenever we find ourselves in a possible local maximum, we jump out with some probability and continue the search for the true optimum. A natural way to organize such a probabilistic perturbation of the deterministic optimization is to use quantum effects. It turns out that often, quantum annealing …
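
The jump-with-some-probability idea is easiest to see in its classical form, simulated annealing; below is a minimal sketch (the toy objective and the schedule are our assumptions; quantum annealing replaces the random jumps with quantum tunneling):

    # Sketch: classical simulated annealing for maximizing a multimodal f.
    import math, random

    def f(x):                        # toy objective with several local maxima
        return math.sin(5 * x) - x * x

    x = 0.0
    for step in range(1, 10001):
        T = 1.0 / step               # one possible annealing schedule
        cand = x + random.gauss(0.0, 0.1)
        delta = f(cand) - f(x)
        # accept improvements; accept worsenings with probability exp(delta/T),
        # which lets the search jump out of local maxima early on
        if delta > 0 or random.random() < math.exp(delta / T):
            x = cand
    print(x, f(x))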


Physical Randomness Can Help In Computations, Olga Kosheleva, Vladik Kreinovich Jan 2020

Departmental Technical Reports (CS)

Can we use some so-far-unused physical phenomena to compute something that usual computers cannot? Researchers have been proposing many schemes that may lead to such computations. These schemes use different physical phenomena ranging from quantum-related to gravity-related to using hypothetical time machines. In this paper, we show that, in principle, there is no need to look into state-of-the-art physics to develop such a scheme: computability beyond the usual computations naturally appears if we consider such a basic notion as randomness.


Deep Learning (Partly) Demystified, Vladik Kreinovich, Olga Kosheleva Nov 2019

Departmental Technical Reports (CS)

Successes of deep learning are partly due to appropriate selection of activation function, pooling functions, etc. Most of these choices have been made based on empirical comparison and heuristic ideas. In this paper, we show that many of these choices -- and the surprising success of deep learning in the first place -- can be explained by reasonably simple and natural mathematics.


Computing Without Computing: Dna Version, Vladik Kreinovich, Julio C. Urenda Nov 2019

Departmental Technical Reports (CS)

The traditional DNA computing schemes are based on using or simulating DNA-related activity. This is similar to how quantum computers use quantum activities to perform computations. Interestingly, in quantum computing, there is another phenomenon known as computing without computing, when, somewhat surprisingly, the result of the computation appears without invoking the actual quantum processes. In this chapter, we show that a similar phenomenon is possible for DNA computing: in addition to the more traditional way of using or simulating DNA activity, we can also use DNA inactivity to solve complex problems. We also show that while DNA computing without …


Why Deep Learning Is More Efficient Than Support Vector Machines, And How It Is Related To Sparsity Techniques In Signal Processing, Laxman Bokati, Olga Kosheleva, Vladik Kreinovich Nov 2019

Departmental Technical Reports (CS)

Several decades ago, traditional neural networks were the most efficient machine learning technique. Then it turned out that, in general, a different technique called support vector machines is more efficient. Relatively recently, a new technique called deep learning has been shown to be the most efficient one. These are empirical observations, but how can we explain them -- and thus make the corresponding conclusions more reliable? In this paper, we provide a possible theoretical explanation for the above-described empirical comparisons. This explanation enables us to explain yet another empirical fact -- that sparsity techniques turned out to be very efficient in signal …


Probabilistic Graphical Models Follow Directly From Maximum Entropy, Anh H. Ly, Francisco Zapata, Olac Fuentes, Vladik Kreinovich Sep 2017

Departmental Technical Reports (CS)

Probabilistic graphical models are a very efficient machine learning technique. However, their only known justification is based on heuristic ideas, ideas that do not explain why exactly these models are empirically successful. It is therefore desirable to come up with a theoretical explanation for these models' empirical efficiency. At present, the only such explanation is that these models naturally emerge if we maximize the relative entropy; however, why the relative entropy should be maximized is not clear. In this paper, we show that these models can also be obtained from a more natural -- and well-justified -- idea of maximizing …
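
For reference, the entropy and relative entropy mentioned here are the standard expressions (q is a prior distribution; standard definitions, not from the truncated text). Maximizing the second expression is the same as minimizing the Kullback-Leibler divergence from q:

    S(p) = -\sum_i p_i \ln p_i, \qquad
    S(p \,\|\, q) = -\sum_i p_i \ln \frac{p_i}{q_i}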


How To Gauge The Accuracy Of Fuzzy Control Recommendations: A Simple Idea, Patricia Melin, Oscar Castillo, Andrzej Pownuk, Olga Kosheleva, Vladik Kreinovich Jun 2017

Departmental Technical Reports (CS)

Fuzzy control is based on approximate expert information, so its recommendations are also approximate. However, the traditional fuzzy control algorithms do not tell us how accurate these recommendations are. In contrast, for probabilistic uncertainty, there is a natural measure of accuracy: namely, the standard deviation. In this paper, we show how to extend this idea from probabilistic to fuzzy uncertainty and thus to come up with a reasonable way to gauge the accuracy of fuzzy control recommendations.
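
One natural way to carry this over (a sketch of the general idea; the paper's exact construction may differ) is to treat the normalized membership function as a weight and compute a standard-deviation-like spread around the defuzzified value:

    # Sketch: centroid defuzzification plus a standard-deviation-like spread.
    import numpy as np

    def centroid_and_spread(xs, mu):
        w = mu / mu.sum()                    # normalize membership degrees
        center = (w * xs).sum()              # centroid defuzzification
        spread = np.sqrt((w * (xs - center) ** 2).sum())
        return center, spread

    xs = np.linspace(0.0, 10.0, 1001)
    mu = np.maximum(0.0, 1.0 - np.abs(xs - 4.0) / 2.0)  # triangular membership
    print(centroid_and_spread(xs, mu))   # recommendation ~4.0, accuracy ~0.8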


Normalization-Invariant Fuzzy Logic Operations Explain Empirical Success Of Student Distributions In Describing Measurement Uncertainty, Hamza Alkhatib, Boris Kargoll, Ingo Neumann, Vladik Kreinovich Jun 2017

Departmental Technical Reports (CS)

In engineering practice, measurement errors are usually described by normal distributions. However, in some cases, the distribution is heavy-tailed and thus not normal. In such situations, empirical evidence shows that Student distributions are the most adequate. The corresponding recommendation -- based on empirical evidence -- is included in the International Organization for Standardization guide. In this paper, we explain this empirical fact by showing that a natural fuzzy-logic-based formalization of commonsense requirements leads exactly to Student distributions.
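
For reference, the Student (t) density with nu degrees of freedom is the standard heavy-tailed formula:

    f(t) = \frac{\Gamma\left(\frac{\nu+1}{2}\right)}
                {\sqrt{\nu\pi}\,\Gamma\left(\frac{\nu}{2}\right)}
           \left(1 + \frac{t^2}{\nu}\right)^{-\frac{\nu+1}{2}}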


Simplest Polynomial For Which Naive (Straightforward) Interval Computations Cannot Be Exact, Olga Kosheleva, Vladik Kreinovich, Songsak Sriboonchitta Jun 2017

Departmental Technical Reports (CS)

One of the main problems of interval computations is computing the range of a given function over given intervals. It is known that naive interval computations always provide an enclosure for the desired range. Sometimes -- e.g., for single use expressions -- naive interval computations compute the exact range. Sometimes, we do not get the exact range when we apply naive interval computations to the original expression, but we get the exact range if we apply naive interval computations to an equivalent reformulation of the original expression. For some other functions -- including some polynomials -- we do not get …


Comparisons Of Measurement Results As Constraints On Accuracies Of Measuring Instruments: When Can We Determine The Accuracies From These Constraints?, Christian Servin, Vladik Kreinovich Jun 2015

Departmental Technical Reports (CS)

For a measuring instrument, the usual way to find the probability distribution of its measurement errors is to compare its results with the results of measuring the same quantity with a much more accurate instrument. But what if we are interested in estimating the measurement accuracy of a state-of-the-art measuring instrument, for which no more accurate instrument is possible? In this paper, we show that, while such estimation is not possible in general, we can uniquely determine the corresponding probability distributions if we have several state-of-the-art measuring instruments and, for one of them, the corresponding probability distribution is symmetric.


Fuzzy Xor Classes From Quantum Computing, Anderson Ávila, Murilo Schmalfuss, Renata Reiser, Vladik Kreinovich Jun 2015

Departmental Technical Reports (CS)

By making use of quantum parallelism, quantum processes provide parallel modelling for fuzzy connectives, and the corresponding computations over quantum states can be performed simultaneously, based on the superposition of the membership degrees of an element with respect to the different fuzzy sets. Such description and modelling is mainly focused on representable fuzzy Xor connectives and their dual constructions. So, via quantum computing, not only is the interpretation based on traditional quantum circuits considered, but the notion of a quantum process in the qGM model is also applied, providing an evaluation of a corresponding simulation by considering graphical interfaces of the VPE-qGM …


Model Reduction: Why It Is Possible And How It Can Potentially Help To Control Swarms Of Unmanned Aerial Vehicles (Uavs), Martine Ceberio, Leobardo Valera, Olga Kosheleva, Rodrigo A. Romero Apr 2015

Departmental Technical Reports (CS)

In many application areas, such as meteorology and traffic control, it is desirable to employ swarms of Unmanned Aerial Vehicles (UAVs) to provide us with a good picture of the changing situation and thus to help us make better predictions (and make better decisions based on these predictions). To avoid duplication, interference, and collisions, UAVs must coordinate their trajectories. As a result, the optimal control of each of these UAVs should depend on the positions and velocities of all the others -- which makes the corresponding control problem very complicated. Since, in contrast to controlling a single UAV, the resulting problem …


How To Estimate Expected Shortfall When Probabilities Are Known With Interval Or Fuzzy Uncertainty, Christian Servin, Hung T. Nguyen, Vladik Kreinovich Apr 2015

Departmental Technical Reports (CS)

To gauge the risk corresponding to a possible disaster, it is important to know both the probability of this disaster and the expected damage caused by such a potential disaster ("expected shortfall"). Both these measures of risk are easy to estimate in the ideal case, when we know the exact probabilities of different disaster strengths. In practice, however, we usually have only partial information about these probabilities: we may have interval (or, more generally, fuzzy) uncertainty about them. In this paper, we show how to efficiently estimate the expected shortfall under such interval and/or fuzzy uncertainty.
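
When each probability is only known to lie in an interval, bounds on the expected damage can be computed greedily; below is a sketch of the upper bound (an illustration of the general idea only, since the paper's own algorithm is not reproduced here):

    # Sketch: upper bound on expected damage sum(p_i * d_i), where each p_i is
    # only known to lie in [lo_i, hi_i] and the p_i must sum to 1. Greedy:
    # start at the lower bounds, push the remaining mass to the worst damages.
    def max_expected_damage(damages, lows, highs):
        p = list(lows)
        slack = 1.0 - sum(lows)            # probability mass still unassigned
        for i in sorted(range(len(damages)), key=lambda i: -damages[i]):
            add = min(highs[i] - p[i], slack)
            p[i] += add
            slack -= add
        return sum(pi * di for pi, di in zip(p, damages))

    print(max_expected_damage([10.0, 5.0, 1.0],
                              [0.1, 0.2, 0.3], [0.5, 0.6, 0.7]))   # 6.3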


Symbolic Aggregate Approximation (Sax) Under Interval Uncertainty, Chrysostomos D. Stylios, Vladik Kreinovich Apr 2015

Departmental Technical Reports (CS)

In many practical situations, we monitor a system by continuously measuring the corresponding quantities, to make sure that an abnormal deviation is detected as early as possible. Often, we do not have ready algorithms to detect abnormality, so we need to use machine learning techniques. For these techniques to be efficient, we first need to compress the data. One of the most successful methods of data compression is the technique of Symbolic Aggregate approXimation (SAX). While this technique is motivated by measurement uncertainty, it does not explicitly take this uncertainty into account. In this paper, we show that we can …
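
For readers unfamiliar with the technique: SAX z-normalizes a series, averages it over frames (piecewise aggregate approximation), and maps each average to a letter using equiprobable Gaussian breakpoints. A minimal sketch (parameters are ours; real implementations add more detail):

    # Minimal SAX: z-normalize, PAA, then discretize with Gaussian breakpoints.
    import numpy as np
    from scipy.stats import norm

    def sax(series, n_frames=8, alphabet="abcd"):
        # assumes len(series) is divisible by n_frames
        x = (series - series.mean()) / series.std()       # z-normalization
        paa = x.reshape(n_frames, -1).mean(axis=1)        # frame averages
        cuts = norm.ppf(np.linspace(0, 1, len(alphabet) + 1)[1:-1])
        return "".join(alphabet[np.searchsorted(cuts, v)] for v in paa)

    print(sax(np.sin(np.linspace(0, 2 * np.pi, 64))))     # 8-letter SAX word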


Optimizing Cloud Use Under Interval Uncertainty, Vladik Kreinovich, Esthela Gallardo Apr 2015

Departmental Technical Reports (CS)

One of the main advantages of cloud computing is that it helps the users to save money: instead of buying a lot of computers to cover all their computations, the user can rent the computation time on the cloud to cover the rare peak spikes of computer need. From this viewpoint, it is important to find the optimal division between in-house and in-the-cloud computations. In this paper, we solve this optimization problem, both in the idealized case when we know the complete information about the costs and the user's need, and in a more realistic situation, when we only know …
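
The trade-off can be illustrated with a toy cost model (entirely ours; the paper's model and data are not reproduced here): own N machines at a fixed cost each, rent the overflow from the cloud at a higher rate, and pick the N that minimizes the total.

    # Toy sketch: how many machines to own vs. rent from the cloud.
    def total_cost(n_owned, demands, c_own=1.0, c_cloud=3.0):
        avg_rented = sum(max(0, d - n_owned) for d in demands) / len(demands)
        return c_own * n_owned + c_cloud * avg_rented

    demands = [2, 3, 3, 4, 10]              # mostly low, with one peak spike
    best = min(range(11), key=lambda n: total_cost(n, demands))
    print(best, total_cost(best, demands))  # owning 4 machines is cheapest here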


Which Bio-Diversity Indices Are Most Adequate, Olga Kosheleva, Craig Tweedie, Vladik Kreinovich Apr 2015

Departmental Technical Reports (CS)

One of the main objectives of ecology is to analyze, maintain, and enhance the bio-diversity of different ecosystems. To be able to do that, we need to gauge bio-diversity. Several semi-heuristic diversity indices have been shown to be in good accordance with the intuitive notion of bio-diversity. In this paper, we provide a theoretical justification for these empirically successful techniques. Specifically, we show that the most widely used technique -- the Simpson index -- can be justified by using simple fuzzy rules, while a more elaborate justification explains all empirically successful diversity indices.
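
For reference, with p_i the fraction of individuals belonging to species i, the Simpson index is the sum of the squared p_i (the probability that two randomly picked individuals belong to the same species); its complement is a standard diversity measure:

    # Sketch: Simpson index in its common Gini-Simpson (1 - D) form.
    def simpson_diversity(counts):
        total = sum(counts)
        D = sum((c / total) ** 2 for c in counts)
        return 1 - D                            # higher = more diverse

    print(simpson_diversity([25, 25, 25, 25]))  # 0.75: species evenly spread
    print(simpson_diversity([97, 1, 1, 1]))     # ~0.06: one species dominates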


Adding Possibilistic Knowledge To Probabilities Makes Many Problems Algorithmically Decidable, Olga Kosheleva, Vladik Kreinovich Mar 2015

Departmental Technical Reports (CS)

Many physical theories accurately predict which events are possible and which are not, or -- in situations where probabilistic (e.g., quantum) effects are important -- predict the probabilities of different possible outcomes. At first glance, it may seem that this probabilistic information is all we need. We show, however, that to adequately describe physicists' reasoning, it is important to also take into account additional knowledge -- about what is possible and what is not. We show that this knowledge can be described in terms of possibility theory, and that the presence of this knowledge makes many problems algorithmically decidable.


From 1-D To 2-D Fuzzy: A Proof That Interval-Valued And Complex-Valued Are The Only Distributive Options, Christian Servin, Vladik Kreinovich, Olga Kosheleva Mar 2015

Departmental Technical Reports (CS)

While the usual 1-D fuzzy logic has many successful applications, in some practical cases, it is desirable to come up with a more subtle way of representing expert uncertainty. A natural idea is to add additional information, i.e., to go from 1-D to 2-D (and multi-D) fuzzy logic. At present, there are two main approaches to 2-D fuzzy logic: interval-valued and complex-valued. At first glance, it may seem that many other options are potentially possible. We show, however, that, under certain reasonable conditions, interval-valued and complex-valued are the only two possible options.


Coming Up With A Good Question Is Not Easy: A Proof, Joe Lorkowski, Luc Longpre, Olga Kosheleva, Salem Benferhat Mar 2015

Departmental Technical Reports (CS)

The ability to ask good questions is an important part of learning skills. Coming up with a good question -- a question that can really improve one's understanding of the topic -- is not easy. In this paper, we prove -- using the example of probabilistic and fuzzy uncertainty -- that the problem of selecting a good question is indeed hard.


Why It Is Important To Precisiate Goals, Olga Kosheleva, Vladik Kreinovich, Hung T. Nguyen Mar 2015

Departmental Technical Reports (CS)

After Zadeh and Bellman explained how to optimize a function under fuzzy constraints, there have been many successful applications of this optimization. However, in many practical situations, it turns out to be more efficient to precisiate the objective function before performing optimization. In this paper, we provide a possible explanation for this empirical fact.


Setting Up A Highly Configurable, Scalable Nimbus Cloud Test Bed Running On A Manet, Joshua Mckee Mar 2015

Departmental Technical Reports (CS)

No abstract provided.


Simple Linear Interpolation Explains All Usual Choices In Fuzzy Techniques: Membership Functions, T-Norms, T-Conorms, And Defuzzification, Vladik Kreinovich, Jonathan Quijas, Esthela Gallardo, Caio De Sa Lopes, Olga Kosheleva, Shahnaz Shahbazova Mar 2015

Departmental Technical Reports (CS)

Most applications of fuzzy techniques use piecewise linear (triangular or trapezoid) membership functions, min or product t-norms, max or algebraic sum t-conorms, and centroid defuzzification. Similarly, most applications of interval-valued fuzzy techniques use piecewise linear lower and upper membership functions. In this paper, we show that all these choices can be explained as applications of simple linear interpolation.
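
Two of these standard choices, in a few lines of Python (a sketch; the combined membership below is a made-up example):

    # Sketch: triangular membership functions and centroid defuzzification.
    import numpy as np

    def triangular(x, a, b, c):             # rises on [a, b], falls on [b, c]
        return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

    xs = np.linspace(0.0, 10.0, 1001)
    mu = np.maximum(triangular(xs, 1, 3, 5), 0.5 * triangular(xs, 4, 6, 8))
    centroid = (xs * mu).sum() / mu.sum()    # centroid defuzzification
    print(round(float(centroid), 2))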


A Natural Simple Model Of Scientists' Strength Leads To Skew-Normal Distribution, Komsan Suriya, Tatcha Sudtasan, Tonghui Wang, Octavio Lerma, Vladik Kreinovich Feb 2015

Departmental Technical Reports (CS)

In many practical situations, we have probability distributions which are close to normal but skewed. Several families of distributions have been proposed to describe such phenomena. The most widely used is the skew-normal distribution, whose probability density function (pdf) is equal to the product of the pdf of a normal distribution and the cumulative distribution function (cdf) of another normal distribution. Among the possible generalizations of normal distributions, the skew-normal ones were selected because of their computational efficiency, not because they represent any real-life phenomenon. Interestingly, it turns out that these distributions do represent a real-life phenomenon: namely, in a natural …
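
In the notation used above, with phi and Phi the standard normal pdf and cdf and alpha the skewness parameter, the (standard) skew-normal density is:

    f(x) = 2\,\varphi(x)\,\Phi(\alpha x)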