Computer Engineering Commons

Articles 1 - 30 of 67

Full-Text Articles in Computer Engineering

Computational Complexity Of Determining Which Statements About Causality Hold In Different Space-Time Models, Vladik Kreinovich, Olga Kosheleva Dec 2007

Departmental Technical Reports (CS)

Causality is one of the most fundamental notions of physics. It is therefore important to be able to decide which statements about causality are correct in different models of space-time. In this paper, we analyze the computational complexity of the corresponding decision problems. In particular, we show that: for Minkowski space-time, the decision problem is as difficult as Tarski's decision problem for elementary geometry, while for a natural model of primordial space-time, the corresponding decision problem is of the lowest possible complexity among all possible space-time models.


Interval Computations And Interval-Related Statistical Techniques: Tools For Estimating Uncertainty Of The Results Of Data Processing And Indirect Measurements, Vladik Kreinovich Dec 2007

Departmental Technical Reports (CS)

In many practical situations, we only know the upper bound D on the (absolute value of the) measurement error d, i.e., we only know that the measurement error is located on the interval [-D,D]. The traditional engineering approach to such situations is to assume that d is uniformly distributed on [-D,D], and to use the corresponding statistical techniques. In some situations, however, this approach underestimates the error of indirect measurements. It is therefore desirable to directly process this interval uncertainty. Such "interval computations" methods have been developed since the 1950s. In this chapter, we provide a brief overview of related …
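
To give a flavor of the technique (a minimal sketch, not taken from this chapter): in interval computations, each arithmetic operation on numbers is replaced by an operation on intervals that is guaranteed to enclose all possible results.

    # Minimal interval-arithmetic sketch: intervals are (lo, hi) pairs.
    def iadd(a, b):
        # [a1, a2] + [b1, b2] = [a1 + b1, a2 + b2]
        return (a[0] + b[0], a[1] + b[1])

    def imul(a, b):
        # the product's range is spanned by the four endpoint products
        p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
        return (min(p), max(p))

    # a reading of 2.0 with measurement error bound D = 0.1:
    x = (1.9, 2.1)
    print(imul(x, x))  # (3.61, 4.41) encloses every possible value of x*x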


Reasons Why Mobile Telephone Conversations May Be Annoying: Considerations And Pilot Studies, Nigel Ward, Anais G. Rivera, Alejandro Vega Dec 2007

Departmental Technical Reports (CS)

Mobile telephone conversations in public places are often annoying to bystanders. Previous work has focused on the psychological and social causes for this, but has not examined the possible role of properties of the communication channel. In our paper "Do Bystanders and Dialog Participants Differ in Preferences for Telecommunications Channels?" (21st International Symposium on Human Factors in Telecommunication, 2008) we consider the possibility that a reason for the annoyance could be that bystander preferences differ from talker preferences, but conclude that this is in fact unlikely to be a major factor. This technical report provides supplemental information, specifically a broader …


Computing Population Variance And Entropy Under Interval Uncertainty: Linear-Time Algorithms, Gang Xiang, Martine Ceberio, Vladik Kreinovich Nov 2007

Departmental Technical Reports (CS)

In statistical analysis of measurement results, it is often necessary to compute the range [V-,V+] of the population variance V=((x1-E)^2+...+(xn-E)^2)/n (where E=(x1+...+xn)/n) when we only know the intervals [xi-Di, xi+Di] of possible values of the xi. While V- can be computed efficiently, the problem of computing V+ is, in general, NP-hard. In our previous paper "Population Variance under Interval Uncertainty: A New Algorithm" (Reliable Computing, 2006, Vol. 12, No. 4, pp. 273-280), we showed that in a practically important case, we can use constraint techniques to compute V+ in time O(n*log(n)). In this paper, we provide new algorithms that compute V- …
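
To make the setting concrete, here is a small illustrative sketch (not the paper's efficient algorithm): since V is a convex quadratic function of (x1,...,xn), its maximum V+ over the box of intervals is attained at a vertex, so for tiny n one can find V+ by enumerating endpoint combinations. This brute force is exponential in n, which is precisely why efficient algorithms are needed.

    from itertools import product

    def variance(xs):
        n = len(xs)
        E = sum(xs) / n
        return sum((x - E) ** 2 for x in xs) / n

    def v_plus(midpoints, radii):
        # exact V+ by checking every corner of the box (tiny n only)
        boxes = [(m - d, m + d) for m, d in zip(midpoints, radii)]
        return max(variance(corner) for corner in product(*boxes))

    print(v_plus([1.0, 2.0, 3.0], [0.1, 0.1, 0.1]))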


A Fitness Function To Find Feasible Sequences Of Method Calls For Evolutionary Testing Of Object-Oriented Programs, Myoung Yee Kim, Yoonsik Cheon Nov 2007

Departmental Technical Reports (CS)

In evolutionary testing of an object-oriented program, the search objective is to find a sequence of method calls that can successfully produce a test object in an interesting state. This is challenging because not all call sequences are feasible; each call of a sequence has to meet the assumption of the called method. The effectiveness of evolutionary testing thus depends in part on the quality of the so-called fitness function that determines the degree of fitness of a candidate solution. In this paper, we propose a new fitness function based on assertions such as method preconditions to find …
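
To illustrate the general idea (the names and the scoring scheme here are hypothetical, not the paper's actual fitness function): a precondition-aware fitness can reward a call sequence by how far it executes before violating an assumption of a called method.

    # Hypothetical sketch: score a candidate call sequence by how many
    # calls execute with their preconditions satisfied.
    def fitness(sequence, obj, preconditions):
        score = 0
        for method, args in sequence:
            if not preconditions[method](obj, *args):
                break  # the sequence is infeasible from this call on
            getattr(obj, method)(*args)
            score += 1
        return score / len(sequence)  # 1.0 means fully feasible

    class Stack:
        def __init__(self): self.items = []
        def push(self, x): self.items.append(x)
        def pop(self): return self.items.pop()

    pre = {"push": lambda s, x: True,
           "pop": lambda s: len(s.items) > 0}
    seq = [("push", (1,)), ("pop", ()), ("pop", ())]
    print(fitness(seq, Stack(), pre))  # 0.666...: third call's precondition fails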


Propagation And Provenance Of Probabilistic And Interval Uncertainty In Cyberinfrastructure-Related Data Processing And Data Fusion, Paulo Pinheiro Da Silva, Aaron A. Velasco, Martine Ceberio, Christian Servin, Matthew G. Averill, Nicholas Ricky Del Rio, Luc Longpre, Vladik Kreinovich Nov 2007

Departmental Technical Reports (CS)

In the past, communications were much slower than computations. As a result, researchers and practitioners collected different data into huge databases located at single sites such as NASA and the US Geological Survey. At present, communications are so much faster that it is possible to keep different databases at different locations, and automatically select, transform, and collect relevant data when necessary. The corresponding cyberinfrastructure is actively used in many applications. It drastically enhances scientists' ability to discover, reuse, and combine a large number of resources, e.g., data and services.

Because of this importance, it is desirable to be able to …


Statistical Hypothesis Testing Under Interval Uncertainty: An Overview, Vladik Kreinovich, Hung T. Nguyen, Sa-Aat Niwitpong Nov 2007

Departmental Technical Reports (CS)

An important part of statistical data analysis is hypothesis testing. For example, we know the probability distribution of the characteristics corresponding to a certain disease, we have the values of the characteristics describing a patient, and we must make a conclusion whether this patient has this disease. Traditional hypothesis testing techniques are based on the assumption that we know the exact values of the characteristic(s) x describing a patient. In practice, the value X comes from measurements and is, thus, only known with uncertainty: X =/= x. In many practical situations, we only know the upper bound D on the …
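
A minimal sketch of the interval-based view (illustrative only, not the paper's techniques): if all we know is that the actual value x lies in [X - D, X + D], a threshold test can be conclusive only when that entire interval falls on one side of the threshold.

    def interval_test(X, D, threshold):
        # the actual value x is only known to lie in [X - D, X + D]
        lo, hi = X - D, X + D
        if lo > threshold:
            return "above threshold for every possible x"
        if hi < threshold:
            return "below threshold for every possible x"
        return "inconclusive: the interval straddles the threshold"

    print(interval_test(X=5.2, D=0.5, threshold=5.0))  # inconclusive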


How To Estimate, Take Into Account, And Improve Travel Time Reliability In Transportation Networks, Ruey L. Cheu, Vladik Kreinovich, Francois Modave, Gang Xiang, Tao Li, Tanja Magoc Nov 2007

Departmental Technical Reports (CS)

Many urban areas suffer from traffic congestion. Intuitively, it may seem that a road expansion (e.g., the opening of a new road) should always improve traffic conditions. However, in reality, a new road can actually worsen traffic congestion. It is therefore extremely important that before we start a road expansion project, we first predict the effect of this project on traffic congestion.

The traditional approach to this prediction is based on the assumption that for any time of the day, we know the exact amount of traffic that needs to go from each origin city zone A to every …
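
The classic illustration of a new road worsening congestion is Braess's paradox (a textbook example, not taken from this report). A quick numerical check:

    # Braess's paradox: 4000 drivers travel from S to T.
    # Route 1: S->C (t/100 min when t drivers use it), then C->T (45 min).
    # Route 2: S->D (45 min), then D->T (t/100 min).
    n = 4000

    # Without the extra road, the equilibrium splits evenly:
    per_route = n / 2
    before = per_route / 100 + 45          # 65.0 minutes for everyone

    # Add a zero-cost link C->D: every driver now prefers S->C->D->T,
    # since deviating to either old route would take 85 minutes.
    after = n / 100 + 0 + n / 100          # 80.0 minutes for everyone

    print(before, after)                   # 65.0 80.0 -- the new road hurts all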


Trade-Off Between Sample Size And Accuracy: Case Of Static Measurements Under Interval Uncertainty, Hung T. Nguyen, Vladik Kreinovich Oct 2007

Departmental Technical Reports (CS)

In many practical situations, we are not satisfied with the accuracy of the existing measurements. There are two possible ways to improve the measurement accuracy:

first, instead of a single measurement, we can make repeated measurements; the additional information coming from these additional measurements can improve the accuracy of the result of this series of measurements;

second, we can replace the current measuring instrument with a more accurate one; correspondingly, we can use a more accurate (and more expensive) measurement procedure provided by a measuring lab -- e.g., a procedure that includes the use of a higher quality reagent.

In …
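
A small numeric illustration of the first option above (assuming independent zero-mean random errors; the paper itself studies the interval-uncertainty case): averaging n repeated measurements shrinks the random error's standard deviation by a factor of sqrt(n), which is what makes the trade-off against a more accurate instrument meaningful.

    import math

    def std_of_average(sigma, n):
        # standard deviation of the mean of n independent measurements
        return sigma / math.sqrt(n)

    # e.g., 16 repetitions with sigma = 0.2 match a single measurement
    # by an instrument that is 4 times more accurate:
    print(std_of_average(0.2, 16))  # 0.05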


Trade-Off Between Sample Size And Accuracy: Case Of Dynamic Measurements Under Interval Uncertainty, Hung T. Nguyen, Olga Kosheleva, Vladik Kreinovich, Scott Ferson Oct 2007

Departmental Technical Reports (CS)

In many practical situations, we are not satisfied with the accuracy of the existing measurements. There are two possible ways to improve the measurement accuracy:

first, instead of a single measurement, we can make repeated measurements; the additional information coming from these additional measurements can improve the accuracy of the result of this series of measurements;

second, we can replace the current measuring instrument with a more accurate one; correspondingly, we can use a more accurate (and more expensive) measurement procedure provided by a measuring lab -- e.g., a procedure that includes the use of a higher quality reagent.

In …


Fast Algorithms For Computing Statistics Under Interval Uncertainty: An Overview, Vladik Kreinovich, Gang Xiang Oct 2007

Departmental Technical Reports (CS)

In many areas of science and engineering, it is desirable to estimate statistical characteristics (mean, variance, covariance, etc.) under interval uncertainty. For example, we may want to use the measured values x(t) of a pollution level in a lake at different moments of time to estimate the average pollution level; however, we do not know the exact values x(t) -- e.g., if one of the measurement results is 0, this simply means that the actual (unknown) value of x(t) can be anywhere between 0 and the detection limit DL. We must therefore modify the existing statistical algorithms to process such …
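
For the mean, the interval version is easy (a minimal sketch; the paper's point is that other statistics are much harder): the mean is monotone in each xi, so its range is obtained from the endpoint sums.

    def mean_range(intervals):
        # range of the sample mean when each x(t) lies in [lo, hi];
        # e.g., a below-detection-limit reading contributes (0.0, DL)
        n = len(intervals)
        return (sum(lo for lo, _ in intervals) / n,
                sum(hi for _, hi in intervals) / n)

    DL = 0.5
    print(mean_range([(1.0, 1.2), (0.0, DL), (2.1, 2.3)]))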


Aggregation In Biological Systems: Computational Aspects, Vladik Kreinovich, Max Shpak Oct 2007

Departmental Technical Reports (CS)

Many biologically relevant dynamical systems are aggregable, in the sense that one can divide their (micro) variables x1,...,xn into several (k) non-intersecting groups and find functions y1,...,yk (k < n) of these groups (macrovariables) whose dynamics depend only on the initial values of the macrovariables. For example, the state of a population genetic system can be described by listing the frequencies xi of different genotypes, so that the corresponding dynamical system describes the effects of mutation, recombination, and natural selection. The goal of aggregation approaches in population genetics is to find macrovariables y1,...,yk to which aggregated mutation, recombination, and selection functions could be applied. Population genetic models are formally equivalent to genetic algorithms, and are therefore of wide interest in the computational sciences.

Another example of a multi-variable biological system of interest arises in ecology. Ecosystems contain many interacting species, and because of the complexity of multi-variable nonlinear systems, it would be of value to derive a formal description that reduces the number of variables to some macrostates that are weighted sums of the densities of individual species.

In this chapter, we explore different computational aspects of aggregability for linear and non-linear systems. Specifically, we …
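
For the linear case, aggregability has a clean algebraic test (a standard linear-algebra sketch offered as background, not the chapter's own algorithm): macrovariables y = Cx of a linear system x' = Ax evolve autonomously, y' = By, exactly when CA = BC has a solution B.

    import numpy as np

    def is_aggregable(A, C, tol=1e-9):
        # y = C x satisfies an autonomous y' = B y  iff  C A = B C for
        # some B, i.e., the rows of C A lie in the row space of C.
        Bt, *_ = np.linalg.lstsq(C.T, (C @ A).T, rcond=None)
        return np.allclose(C @ A, Bt.T @ C, atol=tol)

    # two symmetric micro-variables aggregate into their sum:
    A = np.array([[0.0, 1.0], [1.0, 0.0]])
    C = np.array([[1.0, 1.0]])
    print(is_aggregable(A, C))  # True: (x1 + x2)' = x1 + x2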


How To Avoid Gerrymandering: A New Algorithmic Solution, Gregory B. Lush, Esteban Gamez, Vladik Kreinovich Oct 2007

Departmental Technical Reports (CS)

Subdividing an area into voting districts is often a very controversial issue. If we divide purely geographically, then minority groups may not be properly represented. If we start changing the borders of the districts to accommodate different population groups, we may end up with very artificial borders -- borders which are often set up in such a way as to give an unfair advantage to incumbents. In this paper, we describe redistricting as a precise optimization problem, and we propose a new algorithm for solving this problem.


Estimating Quality Of Support Vector Machines Learning Under Probabilistic And Interval Uncertainty: Algorithms And Computational Complexity, Canh Hao Nguyen, Tu Bao Ho, Vladik Kreinovich Oct 2007

Departmental Technical Reports (CS)

Support Vector Machines (SVM) are one of the most widely used techniques in machine learning. After the SVM algorithms process the data and produce some classification, it is desirable to learn how well this classification fits the data. There exist several measures of fit, among which the most widely used is kernel target alignment. These measures, however, assume that the data are known exactly. In reality, whether the data points come from measurements or from expert estimates, they are only known with uncertainty. As a result, even if we know that the classification perfectly fits the nominal data, this same …
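
For reference, the crisp-data kernel target alignment that the paper generalizes is the normalized Frobenius inner product between the kernel matrix and the label outer product (a standard formula, sketched here for exactly known data):

    import numpy as np

    def kernel_target_alignment(K, y):
        # A(K, y y^T) = <K, y y^T>_F / (||K||_F * ||y y^T||_F)
        Y = np.outer(y, y)
        return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

    y = np.array([1.0, 1.0, -1.0])
    K = np.outer(y, y)        # an "ideal" kernel aligns perfectly
    print(kernel_target_alignment(K, y))  # 1.0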


Verification Of Automatically Generated Pattern-Based Ltl Specifications, Salamah Salamah, Ann Q. Gates, Vladik Kreinovich, Steve Roach Sep 2007

Departmental Technical Reports (CS)

The use of property classifications and patterns, i.e., high-level abstractions that describe common behavior, has been shown to assist practitioners in generating formal specifications that can be used in formal verification techniques. The Specification Pattern System (SPS) provides descriptions of a collection of patterns. The extent of program execution over which a pattern must hold is described by the notion of scope. SPS provides a manual technique for obtaining formal specifications from a pattern and a scope. The Property Specification Tool (Prospec) extends SPS by introducing Composite Propositions (CPs), a classification for defining sequential and concurrent behavior to represent pattern …
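
As a concrete pattern/scope instance (recalled here from the standard SPS mapping for illustration, not quoted from this report): the Absence pattern ("P never holds") within the scope "Before R" corresponds to the LTL formula

    <> R -> (!P U R)

that is, if R eventually occurs, then P must not hold at any point until R does.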


Random Fuzzy Sets, Hung T. Nguyen, Vladik Kreinovich, Gang Xiang Sep 2007

Departmental Technical Reports (CS)

It is well known that in decision making under uncertainty, while we are guided by a general (and abstract) theory of probability and of statistical inference, each specific type of observed data requires its own analysis. Thus, while textbook techniques treat precisely observed data in multivariate analysis, there are many open research problems when data are censored (e.g., in medical or bio-statistics), missing, or partially observed (e.g., in bioinformatics). Data can be imprecise due to various reasons, e.g., due to fuzziness of linguistic data. Imprecise observed data are usually called "coarse data". In this chapter, we consider coarse data …


In Some Curved Spaces, One Can Solve Np-Hard Problems In Polynomial Time, Vladik Kreinovich, Maurice Margenstern Sep 2007

Departmental Technical Reports (CS)

In the late 1970s and the early 1980s, Yuri Matiyasevich actively used his knowledge of engineering and physical phenomena to come up with parallelized schemes for solving NP-hard problems in polynomial time. In this paper, we describe one such scheme in which we use parallel computation in curved spaces.


Fuzzy Prediction Models In Measurement, Leon Reznik, Vladik Kreinovich Sep 2007

Departmental Technical Reports (CS)

The paper investigates the feasibility of applying fuzzy models in measurement procedures. It considers the problem of fusing measurement information from different sources, when one of the sources provides predictions regarding approximate values of the measured variables or their combinations. Typically, this information is given by an expert, but it may also be mined from available data. This information is formalized as fuzzy prediction models and is used in combination with the measurement results to improve the measurement accuracy. The properties of the modified estimates are studied in comparison with the conventional ones. The conditions under which the application of fuzzy models can achieve …


Wdo-It! A Tool For Building Scientific Workflows From Ontologies, Paulo Pinheiro Da Silva, Leonardo Salayandia, Ann Q. Gates Sep 2007

Departmental Technical Reports (CS)

One of the factors that limits scientists from fully adopting e-Science technologies and infrastructure to advance their work is the technical knowledge needed to specify and execute scientific workflows. In this paper we introduce WDO-It!, a scientist-centered tool that facilitates the scientist's task of encoding discipline knowledge in the form of workflow-driven ontologies (WDOs) and presenting process knowledge in the form of model-based workflows (MBWs). The goal of WDO-It! is to facilitate the adoption of e-Science technologies and infrastructures by allowing scientists to encode their discipline knowledge and process knowledge with minimal assistance from technologists. MBWs have demonstrated potential to …


M Solutions Good, M-1 Solutions Better, Luc Longpre, William Gasarch, G. W. Walster, Vladik Kreinovich Aug 2007

Departmental Technical Reports (CS)

One of the main objectives of theoretical research in computational complexity and feasibility is to explain experimentally observed differences in complexity.

Empirical evidence shows that the more solutions a system of equations has, the more difficult it is to solve it. Similarly, the more global maxima a continuous function has, the more difficult it is to locate them. Until now, these empirical facts have been only partially formalized: namely, it has been shown that problems with two or more solutions are more difficult to solve than problems with exactly one solution. In this paper, we extend this result and show …


Using Patterns And Composite Propositions To Automate The Generation Of Complex Ltl, Salamah Salamah, Ann Q. Gates, Vladik Kreinovich, Steve Roach Aug 2007

Departmental Technical Reports (CS)

Property classifications and patterns, i.e., high-level abstractions that describe common behavior, have been used to assist practitioners in specifying properties. The Specification Pattern System (SPS) provides descriptions of a collection of patterns. Each pattern is associated with a scope that defines the extent of program execution over which a property pattern is considered. Based on a selected pattern, SPS provides a specification for each type of scope in multiple formal languages including Linear Temporal Logic (LTL). The Prospec tool extends SPS by introducing the notion of Composite Propositions (CP), which are classifications for defining sequential and concurrent behavior to represent …


Static Space-Times Naturally Lead To Quasi-Pseudometrics, Hans-Peter A. Kuenzi, Vladik Kreinovich Aug 2007

Departmental Technical Reports (CS)

The standard 4-dimensional Minkowski space-time of special relativity is based on the 3-dimensional Euclidean metric. In 1967, H. Busemann showed that similar static space-time models can be based on an arbitrary metric space. In this paper, we search for the broadest possible generalization of a metric under which a construction of a static space-time leads to a physically reasonable space-time model. It turns out that this broadest possible generalization is related to the known notion of a quasi-pseudometric.
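
For reference, the standard definition (stated as background; the paper's exact axioms may differ slightly): a quasi-pseudometric on a set X is a function d: X x X -> [0, infinity) required to satisfy only

    d(x, x) = 0                      for all x in X;
    d(x, z) <= d(x, y) + d(y, z)     for all x, y, z in X.

Symmetry is not required (hence "quasi-"), and distinct points may lie at distance 0 from each other (hence "pseudo-").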


Towards Efficient Prediction Of Decisions Under Interval Uncertainty, Van Nam Huynh, Vladik Kreinovich, Yoshiteru Nakamori, Hung T. Nguyen Aug 2007

Departmental Technical Reports (CS)

In many practical situations, users select between n alternatives a1, ..., an, and the only information that we have about the utilities vi of these alternatives is the bounds vi- <= vi <= vi+. In such situations, it is reasonable to assume that the values vi are independent and uniformly distributed on the corresponding intervals [vi-,vi+]. Under this assumption, we would like to estimate, for each i, the probability pi that the alternative ai will be selected. In this paper, we provide efficient algorithms for computing these probabilities.
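
A brute-force Monte Carlo baseline for these probabilities (a sketch useful for checking answers, not the paper's efficient algorithms):

    import random

    def selection_probs(intervals, trials=100_000):
        # estimate p_i = Prob(v_i is the largest), with each v_i
        # independent and uniform on [v_i-, v_i+]
        counts = [0] * len(intervals)
        for _ in range(trials):
            draws = [random.uniform(lo, hi) for lo, hi in intervals]
            counts[draws.index(max(draws))] += 1
        return [c / trials for c in counts]

    print(selection_probs([(0.0, 1.0), (0.2, 0.8)]))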


Towards A More Physically Adequate Definition Of Randomness: A Topological Approach, Vladik Kreinovich Aug 2007

Departmental Technical Reports (CS)

The Kolmogorov-Martin-Löf definition describes a random sequence as a sequence which satisfies all the laws of probability. This notion formalizes the intuitive physical idea that if an event has probability 0, then this event cannot occur. Physicists, however, also believe that if an event has a very small probability, then it cannot occur. In our previous papers, we proposed a modification of the Kolmogorov-Martin-Löf definition which formalizes this idea as well. It turns out that our original definition is too general: e.g., it includes some clearly non-physical situations in which the set of all random elements is a one-point set. In …


The Gravity Data Ontology: Laying The Foundation For Workflow-Driven Ontologies, Ann Q. Gates, G. Randy Keller, Flor Salcedo, Paulo Pinheiro Da Silva, Leonardo Salayandia Jul 2007

Departmental Technical Reports (CS)

A workflow-driven ontology is an ontology that encodes discipline-specific knowledge in the form of concepts and relationships and that facilitates the composition of services to create products and derive data. Early work on the development of such an ontology resulted in the construction of a gravity data ontology and the categorization of concepts: "Data," "Method," and "Product." "Data" is further categorized as "Raw Data" and "Derived Data," e.g., reduced data. The relationships that are defined capture inputs to and outputs from methods, e.g., derived data and products are output from methods, as well as other associations that are related to …


Traffic Assignment For Risk Averse Drivers In A Stochastic Network, Ruey L. Cheu, Vladik Kreinovich, Srinivasa R. Manduva Jul 2007

Departmental Technical Reports (CS)

Most traffic assignment tasks in practice are performed by using deterministic network (DN) models, which assume that the link travel time is uniquely determined by a link performance function. In reality, link travel time, at a given link volume, is a random variable. Such stochastic network (SN) models are not widely used because the traffic assignment algorithms are much more computationally complex and difficult to understand by practitioners. In this paper, we derive an equivalent link disutility (ELD) function, for the case of risk averse drivers in a SN, without assuming any distribution of link travel time. We further derive …
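
One common way to express risk-averse link disutility (an illustrative assumption; the paper derives its ELD without assuming any travel-time distribution) is expected travel time plus a penalty proportional to its spread:

    def equivalent_link_disutility(mean_time, std_time, risk_aversion=1.0):
        # illustrative mean-plus-spread form; the coefficient encodes how
        # strongly a driver penalizes travel-time variability
        return mean_time + risk_aversion * std_time

    # a risk-averse driver may prefer a slower but more reliable link:
    print(equivalent_link_disutility(10.0, 4.0))  # 14.0
    print(equivalent_link_disutility(12.0, 0.5))  # 12.5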


From (Idealized) Exact Causality-Preserving Transformations To Practically Useful Approximately-Preserving Ones: A General Approach, Vladik Kreinovich, Olga Kosheleva Jun 2007

Departmental Technical Reports (CS)

It is known that every causality-preserving transformation of Minkowski space-time is a composition of Lorentz transformations, shifts, rotations, and dilations. In principle, this result means that by only knowing the causality relation, we can determine the coordinate and metric structure on the space-time. However, strictly speaking, the theorem only says that this reconstruction is possible if we know the exact causality relation. In practice, measurements are never 100% accurate. It is therefore desirable to prove that if a transformation approximately preserves causality, then it is approximately equal to a composition of the above-described type.

Such a result was indeed proven, but only for …


Evaluation Of Hf Rfid For Implanted Medical Applications, Eric Freudenthal, David Herrera, Frederick Kautz, Carlos Natividad, Alexandria Ogrey, Justin Sipla, Abimael Sosa, Carlos Betancourt, Leonardo Estevez Jun 2007

Departmental Technical Reports (CS)

Low cost HF RFID scanner subsystems that both deliver power and provide high bandwidth bidirectional communication channels have recently become available. These devices are anticipated to become ubiquitous in next-generation cell phones and enable a wide range of emerging e-commerce applications.

This paper considers the use of HF RFID to power and communicate with implantable medical devices. We successfully communicated with ten transponders that were implanted at three locations within a human cadaver. In this paper, we present measurements collected from four of these transponders that represent a wide range of transponder sizes. We also describe how RFID for medical …


Computing At Least One Of Two Roots Of A Polynomial Is, In General, Not Algorithmic, Vladik Kreinovich Jun 2007

Departmental Technical Reports (CS)

In our previous work, we provided a theoretical explanation for the empirical fact that it is easier to find a unique root than multiple roots. In this short note, we strengthen that explanation by showing that finding one of many roots is also difficult.


Any (True) Statement Can Be Generalized So That It Becomes Trivial: A Simple Formalization Of D. K. Faddeev's Belief, Vladik Kreinovich Jun 2007

Departmental Technical Reports (CS)

In his unpublished lectures on general algebra, the well-known algebraist D. K. Faddeev expressed a belief that every true mathematical statement can be generalized in such a way that it becomes trivial. To the best of our knowledge, this belief has never been formalized before. In this short paper, we provide a simple formalization (and proof) of this belief.