Open Access. Powered by Scholars. Published by Universities.®
Physical Sciences and Mathematics Commons™
- Discipline
- Computer Sciences (861)
- Mathematics (500)
- Applied Mathematics (114)
- Physics (37)
- Education (29)
- Social and Behavioral Sciences (23)
- Economics (19)
- Engineering (16)
- Software Engineering (13)
- Computer Engineering (11)
- Programming Languages and Compilers (10)
- Statistics and Probability (6)
- Earth Sciences (4)
- Civil and Environmental Engineering (3)
- Econometrics (3)
- Educational Methods (3)
- Geography (3)
- Higher Education (3)
- Medicine and Health Sciences (3)
- Arts and Humanities (2)
- Biology (2)
- Construction Engineering and Management (2)
- Geology (2)
- Geophysics and Seismology (2)
- Life Sciences (2)
- Medical Sciences (2)
- Algebra (1)
- Algebraic Geometry (1)
- Applied Statistics (1)
- Keyword
- Technical Reports (356)
- UTEP Computer Science Department (355)
- Interval uncertainty (28)
- Fuzzy logic (12)
- Interval computations (10)
- Android (7)
- Decision making (6)
- Optimization (6)
- Data processing (5)
- Feasible algorithms (5)
- Functional program verification (5)
- Fuzzy uncertainty (5)
- Invariance (5)
- Java (5)
- Neural networks (5)
- Quantum computing (5)
- Deep learning (4)
- F-transform (4)
- Fuzzy control (4)
- Fuzzy sets (4)
- Imprecise probabilities (4)
- NP-hard (4)
- Probabilistic uncertainty (4)
- Symmetries (4)
- Uncertainty quantification (4)
- Dialog (3)
- Indirect measurements (3)
- Intended function (3)
- Measurement uncertainty (3)
- NP-hard problems (3)
Articles 1 - 30 of 1063
Full-Text Articles in Physical Sciences and Mathematics
How Difficult Is It To Comprehend A Program That Has Significant Repetitions: Fuzzy-Related Explanations Of Empirical Results, Christian Servin, Olga Kosheleva, Vladik Kreinovich
Departmental Technical Reports (CS)
In teaching computing and in gauging programmers' productivity, it is important to properly estimate how much time it will take to comprehend a program. There are techniques for estimating this time, but these techniques do not take into account that some program segments are similar, and that this similarity decreases the time needed to comprehend the second segment. Recently, experiments were performed to describe this decrease; these experiments led to an empirical formula for the corresponding decrease. In this paper, we use fuzzy-related ideas to provide a commonsense-based theoretical explanation for this empirical formula.
McFadden's Discrete Choice And Softmax Under Interval (And Other) Uncertainty: Revisited, Bartlomiej Jacek Kubica, Olga Kosheleva, Vladik Kreinovich
Departmental Technical Reports (CS)
Studies of how people actually make decisions have led to an empirical formula that predicts the probability of different decisions based on the utilities of different alternatives. This formula is known as McFadden's formula, after the Nobel-prize-winning economist who discovered it. A similar formula -- known as softmax -- describes the probability that the classification predicted by a deep neural network is correct, based on the neural network's degrees of confidence in the object belonging to each class. In practice, we usually do not know the exact values of the utilities -- or of the degrees of confidence. …
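The common formula behind McFadden's discrete choice and softmax can be sketched as follows (a minimal illustration, not code from the paper; the utility values are made up):

```python
import math

def softmax(utilities):
    """McFadden's discrete-choice / softmax formula:
    P_i = exp(u_i) / sum_j exp(u_j)."""
    # Subtract the max for numerical stability (does not change the result).
    m = max(utilities)
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
print(probs)  # higher-utility alternatives get higher probability
```

Under interval uncertainty, each utility u_i would be known only within bounds, which is exactly the complication the paper revisits.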
Why Bernstein Polynomials: Yet Another Explanation, Olga Kosheleva, Vladik Kreinovich
Departmental Technical Reports (CS)
In many computational situations -- in particular, in computations under interval or fuzzy uncertainty -- it is convenient to approximate a function by a polynomial. Usually, a polynomial is represented by coefficients at its monomials. However, in many cases, it turns out more efficient to represent a general polynomial by using a different basis -- of so-called Bernstein polynomials. In this paper, we provide a new explanation for the computational efficiency of this basis.
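For reference, the Bernstein basis the abstract refers to can be sketched as follows (a minimal illustration, not code from the paper):

```python
from math import comb

def bernstein_basis(n, k, x):
    """k-th Bernstein basis polynomial of degree n on [0, 1]:
    B_{n,k}(x) = C(n, k) * x^k * (1 - x)^(n - k)."""
    return comb(n, k) * x**k * (1 - x)**(n - k)

def bernstein_approx(f, n, x):
    """Degree-n Bernstein polynomial of f on [0, 1]:
    B_n(f)(x) = sum_k f(k/n) * B_{n,k}(x)."""
    return sum(f(k / n) * bernstein_basis(n, k, x) for k in range(n + 1))

# The basis forms a partition of unity: it sums to 1 at every point.
print(sum(bernstein_basis(5, k, 0.3) for k in range(6)))
print(bernstein_approx(lambda t: t * t, 50, 0.5))  # close to 0.25
```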
Somewhat Surprisingly, (Subjective) Fuzzy Technique Can Help To Better Combine Measurement Results And Expert Estimates Into A Model With Guaranteed Accuracy: Digital Twins And Beyond, Niklas Winnewisser, Michael Beer, Olga Kosheleva, Vladik Kreinovich
Departmental Technical Reports (CS)
To understand how different factors and different control strategies will affect a system -- be it a plant, an airplane, etc. -- it is desirable to form an accurate digital model of this system. Such models are known as digital twins. To make a digital twin as accurate as possible, it is desirable to incorporate all available knowledge of the system into this model. In many cases, a significant part of this knowledge comes in terms of expert statements, statements that are often formulated by using imprecise ("fuzzy") words from natural language such as "small", "very possible", etc. To translate …
How To Gauge Inequality And Fairness: A Complete Description Of All Decomposable Versions Of Theil Index, Saeid Tizpaz-Niari, Olga Kosheleva, Vladik Kreinovich
Departmental Technical Reports (CS)
In statistics, the most widely used way to describe the spread between different elements of a sample is the standard deviation. This characteristic has the nice property of being decomposable: e.g., to compute the mean and standard deviation of income over the whole US, it is sufficient to compute the number of people, the mean, and the standard deviation for each state; this state-by-state information is sufficient to uniquely reconstruct the overall standard deviation. However, for gauging income inequality, the standard deviation is not very adequate: it gives too much weight to outliers like billionaires, and thus, does …
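The decomposability property described in this abstract can be sketched in a few lines (hypothetical helper names; population standard deviation is assumed):

```python
import math

def pool(groups):
    """Combine per-group (count, mean, std) triples into the overall
    (count, mean, std): the decomposability property from the abstract.
    std here is the population standard deviation."""
    n = sum(c for c, _, _ in groups)
    mean = sum(c * m for c, m, _ in groups) / n
    # E[x^2] within each group is std^2 + mean^2; pool, then subtract mean^2.
    ex2 = sum(c * (s * s + m * m) for c, m, s in groups) / n
    return n, mean, math.sqrt(ex2 - mean * mean)

# Two "states" with their own counts, means, and stds:
print(pool([(2, 1.5, 0.5), (2, 3.5, 0.5)]))
```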
Update From Aristotle To Newton, From Sets To Fuzzy Sets, And From Sigmoid To ReLU: What Do All These Transitions Have In Common?, Christian Servin, Olga Kosheleva, Vladik Kreinovich
Departmental Technical Reports (CS)
In this paper, we show that there is a -- somewhat unexpected -- common trend behind several seemingly unrelated historic transitions: from Aristotelian physics to the modern (Newtonian) approach, from crisp sets (such as intervals) to fuzzy sets, and from traditional neural networks, with close-to-step-function sigmoid activation functions, to modern successful deep neural networks that use a completely different ReLU activation function. In all these cases, the main idea of the corresponding transition can be explained, in mathematical terms, as going from first-order to second-order differential equations.
How To Make A Decision Under Interval Uncertainty If We Do Not Know The Utility Function, Jeffrey Escamilla, Vladik Kreinovich
Departmental Technical Reports (CS)
Decision theory describes how to make decisions, in particular, how to make decisions under interval uncertainty. However, this theory's recommendations assume that we know the utility function -- a function that describes the decision maker's preferences. Sometimes, we can make a recommendation even when we do not know the utility function. In this paper, we provide a complete description of all such cases.
Paradox Of Causality And Paradoxes Of Set Theory, Alondra Baquier, Bradley Beltran, Gabriel Miki-Silva, Olga Kosheleva, Vladik Kreinovich
Departmental Technical Reports (CS)
Logical paradoxes show that human reasoning is not always fully captured by the traditional 2-valued logic, that this logic's extensions -- such as multi-valued logics -- are needed. Because of this, the study of paradoxes is important for research on multi-valued logics. In this paper, we focus on paradoxes of set theory. Specifically, we show their analogy with the known paradox of causality, and we use this analogy to come up with similar set-theoretic paradoxes.
Number Representation With Varying Number Of Bits, Anuradha Choudhury, Md Ahsanul Haque, Saeefa Rubaiyet Nowmi, Ahmed Ann Noor Ryen, Sabrina Saika, Vladik Kreinovich
Departmental Technical Reports (CS)
In a computer, usually, all real numbers are stored using the same number of bits: usually, 8 bytes, i.e., 64 bits. This number of bits enables us to represent numbers with high accuracy -- up to 19 decimal digits. However, in most cases -- whether we process measurement results or expert-generated membership degrees -- we do not need this accuracy, so most bits are wasted. To save space, it is therefore reasonable to consider representations with a varying number of bits. This would save space used for representing the numbers themselves, but we would also need to store …
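A varying-bit-width representation of the kind proposed here might look as follows (a toy sketch for unsigned integers; the 6-bit length field is an assumption, chosen only for illustration, and is the extra storage the abstract alludes to):

```python
def encode_varbits(value, num_bits):
    """Hypothetical varying-width encoding: a 6-bit length field
    followed by `num_bits` bits of the (unsigned) value itself."""
    assert 0 < num_bits <= 63 and 0 <= value < (1 << num_bits)
    return format(num_bits, "06b") + format(value, f"0{num_bits}b")

def decode_varbits(bits):
    """Read one (value, remaining_bits) pair from a bit string."""
    num_bits = int(bits[:6], 2)
    return int(bits[6:6 + num_bits], 2), bits[6 + num_bits:]

# Two numbers stored back to back, each with only as many bits as it needs:
word = encode_varbits(5, 3) + encode_varbits(200, 8)
a, rest = decode_varbits(word)
b, _ = decode_varbits(rest)
print(a, b)  # 5 200
```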
Data Fusion Is More Complex Than Data Processing: A Proof, Robert Alvarez, Salvador Ruiz, Martine Ceberio, Vladik Kreinovich
Departmental Technical Reports (CS)
Empirical data shows that, in general, data fusion takes more computation time than data processing. In this paper, we provide a proof that data fusion is indeed more complex than data processing.
How To Fairly Allocate Safety Benefits Of Self-Driving Cars, Fernando Munoz, Christian Servin, Vladik Kreinovich
Departmental Technical Reports (CS)
In this paper, we describe how to fairly allocate the safety benefits of self-driving cars between drivers and pedestrians -- so as to minimize the overall harm.
Using Known Relation Between Quantities To Make Measurements More Accurate And More Reliable, Niklas Winnewisser, Felix Mett, Michael Beer, Olga Kosheleva, Vladik Kreinovich
Departmental Technical Reports (CS)
Most of our knowledge comes, ultimately, from measurements and from processing measurement results. Here, metrology is very valuable: it teaches us how to gauge the accuracy of the measurement results and of the results of data processing, and how to calibrate the measuring instruments so as to reach the maximum accuracy. However, traditional metrology mostly concentrates on individual measurements. In practice, often, there are also relations between the current values of different quantities. For example, there is usually a known upper bound on the difference between the values of the same quantity at close moments of time or at …
Why Pavement Cracks Are Mostly Longitudinal, Sometimes Transversal, And Rarely Of Other Directions: A Geometric Explanation, Edgar Daniel Rodriguez Velasquez, Olga Kosheleva, Vladik Kreinovich
Departmental Technical Reports (CS)
In time, pavements deteriorate and need maintenance. One of the most typical pavement faults is cracking. Empirically, the most frequent cracks are longitudinal, i.e., they follow the direction of the road; less frequent are transversal cracks, which are orthogonal to the direction of the road. Sometimes, there are cracks in other directions, but such cracks are much rarer. In this paper, we show that simple geometric analysis and fundamental physical ideas can explain these observed relative frequencies.
Why Linear And Sigmoid Last Layers Work Better In Classification, Lehel Dénes-Fazakas, Lásló Szilágyi, Vladik Kreinovich
Departmental Technical Reports (CS)
Usually, when a deep neural network is used to classify objects, its last layer computes the softmax. Our empirical results show that we can improve the classification results if, instead, the last layer is linear or sigmoid. In this paper, we provide an explanation for this empirical phenomenon.
Why Two Fish Follow Each Other But Three Fish Form A School: A Symmetry-Based Explanation, Shahnaz Shahbazova, Olga Kosheleva, Vladik Kreinovich
Departmental Technical Reports (CS)
Recent experiments with fish have shown unexpected behavior: when two fish of the same species are placed in an aquarium, they start following each other, while when three fish are placed there, they form (approximately) an equilateral triangle and move in the direction (approximately) orthogonal to this triangle. In this paper, we use natural symmetries -- such as rotations, shifts, and permutations of fish -- to show that this observed behavior is actually optimal. This behavior is not just optimal with respect to one specific optimality criterion, it is optimal with respect to any optimality criterion -- as …
Fuzzy Ideas Explain Fechner Law And Help Detect Relation Between Objects In Video, Olga Kosheleva, Vladik Kreinovich, Ahnaf Farhan
Departmental Technical Reports (CS)
How can we find relations between objects in a video? If two objects are closely related -- e.g., a computer and its mouse -- then they almost always appear together, and thus, their numbers of occurrences are close. However, simply computing the differences between numbers of occurrences is not a good idea: objects with 100 and 110 occurrences are most probably related, but objects with 1 and 5 occurrences are probably not, although 5 − 1 is smaller than 110 − 100. A natural idea is, instead, to compute the difference between re-scaled numbers of occurrences, for an appropriate nonlinear re-scaling. In …
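The logarithm is a natural candidate for such a re-scaling (consistent with the Fechner law in the title); a quick sketch using the abstract's own numbers:

```python
import math

def log_distance(n1, n2):
    """Distance between occurrence counts after logarithmic re-scaling."""
    return abs(math.log(n1) - math.log(n2))

# 100 vs 110 occurrences are much closer on the log scale than 1 vs 5,
# matching the intuition in the abstract:
print(log_distance(100, 110))  # about 0.095 -- likely related
print(log_distance(1, 5))      # about 1.609 -- likely unrelated
```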
There Is Still Plenty Of Room At The Bottom: Feynman's Vision Of Quantum Computing 65 Years Later, Alexis Lupo, Vladik Kreinovich, Victor L. Timchenko, Yuriy P. Kondratenko
Departmental Technical Reports (CS)
In 1959, Nobelist Richard Feynman gave a talk titled "There's plenty of room at the bottom", in which he emphasized that, to drastically speed up computations, we need to make computer components much smaller -- all the way down to the size of molecules, atoms, and even elementary particles. At this level, physics is no longer described by deterministic Newtonian mechanics; it is described by probabilistic quantum laws. Because of this, computer designers started thinking about how to design a reliable computer based on non-deterministic elements -- and this thinking eventually led to the modern ideas and algorithms of quantum computing. So, …
From Quantifying And Propagating Uncertainty To Quantifying And Propagating Both Uncertainty And Reliability: Practice-Motivated Approach To Measurement Planning And Data Processing, Niklas R. Winnewisser, Vladik Kreinovich, Olga Kosheleva
Departmental Technical Reports (CS)
When we process data, it is important to take into account that data comes with uncertainty. There exist techniques for quantifying uncertainty and propagating this uncertainty through the data processing algorithms. However, most of these techniques do not take into account that in the real world, measuring instruments are not 100% reliable -- they sometimes malfunction and produce values which are far off from the actual values of the corresponding quantities. How can we take into account both uncertainty and reliability? In this paper, we consider several possible scenarios, and we show, for each scenario, what is the natural way to …
Every Feasibly Computable Reals-To-Reals Function Is Feasibly Uniformly Continuous, Olga Kosheleva, Vladik Kreinovich
Departmental Technical Reports (CS)
It is known that every computable function is continuous; moreover, it is computably continuous in the sense that for every ε > 0, we can compute δ > 0 such that δ-close inputs lead to ε-close outputs. It is also known that not all functions which are, in principle, computable, can actually be computed: indeed, the computation sometimes requires more time than the lifetime of the Universe. A natural question is thus: can the above known result about computable continuity of computable functions be extended to the case when we limit ourselves to feasible computations? In this paper, we prove that this …
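In symbols, the computable-continuity statement from this abstract reads:

```latex
\forall \varepsilon > 0 \;\exists\, \delta(\varepsilon) > 0:\quad
|x - y| \le \delta(\varepsilon)
\;\Longrightarrow\;
|f(x) - f(y)| \le \varepsilon,
```

where the modulus δ(ε) is computable from ε; the question the paper answers is whether δ remains feasibly computable when f is.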
From Normal Distribution To What? How To Best Describe Distributions With Known Skewness, Olga Kosheleva, Vladik Kreinovich
Departmental Technical Reports (CS)
In many practical situations, we only have partial information about the probability distribution -- e.g., all we know is its few moments. In such situations, it is desirable to select one of the possible probability distributions. A natural way to select a distribution from a given class of distributions is the maximum entropy approach. For the case when we know the first two moments, this approach selects the normal distribution. However, when we also know the third central moment -- corresponding to skewness -- a direct application of this approach does not work. Instead, practitioners use several heuristic techniques, techniques …
Every ReLU-Based Neural Network Can Be Described By A System Of Takagi-Sugeno Fuzzy Rules: A Theorem, Barnabas Bede, Olga Kosheleva, Vladik Kreinovich
Departmental Technical Reports (CS)
While modern deep-learning neural networks are very successful, sometimes they make mistakes, and since their results are "black boxes" -- no explanation is provided -- it is difficult to determine which recommendations are erroneous. It is therefore desirable to make the resulting computations explainable, i.e., to describe their results by using commonsense rules. In this paper, we use "fuzzy" techniques -- techniques developed by Lotfi Zadeh to deal with commonsense rules formulated by using imprecise ("fuzzy") words from natural language -- to show that such a rule-based representation is always possible. Our result does not yet provide the desired explainability, …
Smooth Non-Additive Integrals And Measures And Their Potential Applications, Olga Kosheleva, Vladik Kreinovich
Departmental Technical Reports (CS)
In this paper, we explain why non-additive integrals and measures are needed, how non-additive integrals and measures are related, how to use them in decision making, and how they can help in fundamental physics. These four topics are covered, correspondingly, in Sections 2-5 of this paper.
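A standard example of a non-additive integral is the discrete Choquet integral; a minimal sketch (the measure values below are made up for illustration):

```python
def choquet_integral(values, measure):
    """Discrete Choquet integral of a function (values[i] on element i)
    with respect to a set function `measure` mapping frozensets to [0, 1].
    The measure need not be additive; only monotonicity is assumed.
    Formula: sum over ascending levels of (f_(i) - f_(i-1)) * mu(A_i),
    where A_i is the set of elements with f >= f_(i)."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])  # ascending by value
    total, prev = 0.0, 0.0
    for idx, i in enumerate(order):
        remaining = frozenset(order[idx:])  # elements with f >= values[i]
        total += (values[i] - prev) * measure[remaining]
        prev = values[i]
    return total

# A simple non-additive ("pessimistic") measure on the two-element set {0, 1}:
mu = {frozenset({0, 1}): 1.0, frozenset({0}): 0.2, frozenset({1}): 0.2}
print(choquet_integral([3.0, 1.0], mu))  # 1.4
```

Note that mu({0}) + mu({1}) = 0.4 < 1 = mu({0, 1}), so mu is not additive; with an additive measure, the same formula reduces to the ordinary weighted sum.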
When Is A Single "And"-Condition Enough?, Olga Kosheleva, Vladik Kreinovich
Departmental Technical Reports (CS)
In many practical situations, there are several possible decisions. Any general recommendation means specifying, for each possible decision, conditions under which this decision is recommended. In some cases, a single "and"-condition is sufficient: e.g., a condition under which a patient is recommended to take aspirin is that "the patient has a fever and the patient does not have stomach trouble". In other cases, conditions are more complicated. A natural question is: when is a single "and"-condition enough? In this paper, we provide an answer to this question.
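A single "and"-condition of the kind discussed here is simply a conjunction of (possibly negated) atomic conditions; the abstract's aspirin example as a predicate:

```python
def recommend_aspirin(has_fever, has_stomach_trouble):
    """The abstract's example of a single "and"-condition:
    fever AND NOT stomach trouble."""
    return has_fever and not has_stomach_trouble

print(recommend_aspirin(True, False))   # True
print(recommend_aspirin(True, True))    # False
```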
If We Add Axiom Of Choice To Constructive Analysis, We Get Classical Arithmetic: An Exercise In Reverse Constructive Mathematics, Olga Kosheleva, Vladik Kreinovich
Departmental Technical Reports (CS)
A recent paper in the Bulletin of Symbolic Logic recalled that the Axiom of Choice is, in general, false in constructive analysis. This result is an immediate consequence of a theorem -- first proved by Tseytin -- that every computable function is continuous. In this paper, we strengthen the result about the Axiom of Choice by proving that this axiom is as non-constructive as possible: namely, that if we add this axiom to constructive analysis, then we get full classical arithmetic.
Why Sigmoid Transformation Helps Incorporate Logic Into Deep Learning: A Theoretical Explanation, Chitta Baral, Vladik Kreinovich
Departmental Technical Reports (CS)
Traditional neural networks start from the data; they cannot easily handle prior knowledge -- this is one of the reasons why they often take very long to train. It is desirable to incorporate prior knowledge into deep learning. For the case when this knowledge consists of propositional statements, a successful way to incorporate this knowledge was proposed in a recent paper by van Krieken et al. That paper uses the fact that a neural network does not directly return a truth value; it returns a real value -- in effect, the degree of confidence in the corresponding statement -- from …
Usually, Either Left And Right Brains Are Equally Active Or Only One Of Them Is Active: First-Principles Explanation, Julio C. Urenda, Vladik Kreinovich
Departmental Technical Reports (CS)
It is known that in most practical situations, either both left and right brains are equally active, or only one of them is active. A recent paper showed that this empirical phenomenon can be explained by a realistic model of the brain effectiveness. In this paper, we show that this conclusion can be made without any specific assumptions about the brain, based on first principles.
From Type-2 Fuzzy To Type-2 Intervals And Type-2 Probabilities, Vladik Kreinovich, Olga Kosheleva, Luc Longpré
Departmental Technical Reports (CS)
Our knowledge comes from observations, measurements, and expert opinions. Measurements and observations are never 100% accurate; there is always a difference between the measurement result and the actual value of the corresponding quantity. We gauge the resulting uncertainty either by an interval of possible values, or by a probability distribution on the set of possible values, or by a membership function that describes to what extent different values are possible. The information about uncertainty also comes either from measurements or from expert estimates and is, therefore, also uncertain. It is important to take such "type-2" uncertainty into account. This is …
Which Random-Set Representation Of A Fuzzy Set Is The Simplest?, Vladik Kreinovich, Olga Kosheleva, Hung T. Nguyen
Departmental Technical Reports (CS)
One of the ways to elicit membership degrees is by polling. For example, we ask a group of people whether they believe that 30 C is hot. If 8 out of 10 say that it is hot, we assign the degree 8/10 to the statement "30 C is hot". In precise mathematical terms, polling can be described via so-called random sets. It is known that every fuzzy set can be obtained this way, i.e., that every fuzzy set can be represented by an appropriate random set. Moreover, it is known that for many fuzzy sets, there are several different random-set …
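The polling procedure from this abstract is easy to sketch (the expert sets below are made-up temperature ranges):

```python
def membership_from_polls(value, expert_sets):
    """Polling-based membership degree: each expert gives a crisp set;
    the degree of a value is the fraction of experts whose set contains
    it -- the random-set view of a fuzzy set."""
    return sum(value in s for s in expert_sets) / len(expert_sets)

# 8 of 10 experts say temperatures from 30 C up are "hot";
# the other 2 only from 35 C up:
experts = [range(30, 50)] * 8 + [range(35, 50)] * 2
print(membership_from_polls(30, experts))  # 0.8
```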
How To Efficiently Propagate P-Box Uncertainty, Olga Kosheleva, Vladik Kreinovich
Departmental Technical Reports (CS)
In many practical situations, to get the desired estimate or prediction, we need to process existing data. This data usually comes from measurements, and measurements are never 100% accurate. Because we only know the input values with uncertainty, the results of processing this data also come with uncertainty. To make an appropriate decision, we need to know how accurate the resulting estimate is, i.e., how the input uncertainty "propagates" through the data processing algorithm. In the ideal case, when we know the probability distribution of each measurement error, we can, in principle, use Monte-Carlo simulations to describe the uncertainty of …
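The Monte-Carlo approach mentioned here can be sketched in a few lines (the data processing algorithm and the error sigmas below are made up for illustration):

```python
import random
import statistics

def process(x, y):
    """Stand-in data processing algorithm."""
    return x * y

# Known measurement results and known error standard deviations:
random.seed(0)
x0, y0 = 2.0, 3.0
sigma_x, sigma_y = 0.1, 0.2

# Sample the errors, push each sample through the algorithm,
# and look at the spread of the results:
results = [process(random.gauss(x0, sigma_x), random.gauss(y0, sigma_y))
           for _ in range(10_000)]
print(statistics.mean(results), statistics.stdev(results))
```

The p-box case treated in the paper is harder: there, the input distributions themselves are known only partially, so a single Monte-Carlo run is no longer enough.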
Uncertainty Quantification For Results Of AI-Based Data Processing: Towards More Feasible Algorithms, Christoph Q. Lauter, Martine Ceberio, Vladik Kreinovich, Olga Kosheleva
Departmental Technical Reports (CS)
AI techniques have been actively and successfully used in data processing. This tendency started with fuzzy techniques; now neural network techniques are actively used as well. With each new technique comes the need for the corresponding uncertainty quantification (UQ). In principle, for both fuzzy and neural techniques, we can use the usual UQ methods -- however, these methods often require an unrealistic amount of computation time. In this paper, we show that in both cases, we can use specific features of the corresponding techniques to drastically speed up the corresponding computations.