 Keyword

 JML language (5)
 Test data generator (4)
 Runtime assertion checking (3)
 Pre and postconditions (3)
 Interval uncertainty (3)

 Pre and post conditions (2)
 Evaluation (2)
 Test oracle (2)
 Usability (2)
 Random testing (2)
 Computability (1)
 Causality (1)
 Assertion (1)
 Architectural assertion (1)
 Architectural constraint (1)
 Automated testing (1)
 Abstraction (1)
 Cognitive walkthrough (1)
 Coarse data (1)
 Computational complexity (1)
 Architectural description language (1)
 Conversational agents (1)
 Continuous lattices (1)
 Cultural model (1)
 Choquet theorem (1)
 Accuracy gain (1)
 Documentation (1)
 Antitriangle inequality (1)
 Computational statistics (1)
 Cyberinfrastructure (1)
Articles 1 - 30 of 70
Full-Text Articles in Computer Engineering
Computational Complexity Of Determining Which Statements About Causality Hold In Different Space-Time Models, Vladik Kreinovich, Olga Kosheleva
Departmental Technical Reports (CS)
Causality is one of the most fundamental notions of physics. It is therefore important to be able to decide which statements about causality are correct in different models of space-time. In this paper, we analyze the computational complexity of the corresponding decision problems. In particular, we show that: for Minkowski space-time, the decision problem is as difficult as Tarski's decision problem for elementary geometry, while for a natural model of primordial space-time, the corresponding decision problem is of the lowest possible complexity among all possible space-time models.
Reasons Why Mobile Telephone Conversations May Be Annoying: Considerations And Pilot Studies, Nigel Ward, Anais G. Rivera, Alejandro Vega
Departmental Technical Reports (CS)
Mobile telephone conversations in public places are often annoying to bystanders. Previous work has focused on the psychological and social causes for this, but has not examined the possible role of properties of the communication channel. In our paper "Do Bystanders and Dialog Participants Differ in Preferences for Telecommunications Channels?" (21st International Symposium on Human Factors in Telecommunication, 2008) we consider the possibility that a reason for the annoyance could be that bystander preferences differ from talker preferences, but conclude that this is in fact unlikely to be a major factor. This technical report provides supplemental information, specifically a broader ...
Interval Computations And Interval-Related Statistical Techniques: Tools For Estimating Uncertainty Of The Results Of Data Processing And Indirect Measurements, Vladik Kreinovich
Departmental Technical Reports (CS)
In many practical situations, we only know the upper bound D on the (absolute value of the) measurement error d, i.e., we only know that the measurement error is located on the interval [-D, D]. The traditional engineering approach to such situations is to assume that d is uniformly distributed on [-D, D], and to use the corresponding statistical techniques. In some situations, however, this approach underestimates the error of indirect measurements. It is therefore desirable to process this interval uncertainty directly. Such "interval computations" methods have been developed since the 1950s. In this chapter, we provide a brief ...
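As a minimal illustration of the idea (not code from the chapter), interval arithmetic propagates bounds of the form [-D, D] through a computation by operating on interval endpoints. The `Interval` class below is a hypothetical sketch; it ignores the outward rounding that rigorous interval libraries perform under floating point.

```python
# Hypothetical minimal interval class (illustrative only; production interval
# libraries also round endpoints outward to stay rigorous under floating point).
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # [a,b] + [c,d] = [a+c, b+d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # [a,b] - [c,d] = [a-d, b-c]
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # [a,b] * [c,d]: take the min and max over all endpoint products
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(2, 3)    # e.g., a reading 2.5 with error bound D = 0.5
y = Interval(-1, 4)
print(x + y)  # [1, 7]
print(x * y)  # [-3, 12]
```

The resulting interval is guaranteed to contain the true value of the expression, whatever the actual measurement errors inside the input intervals turn out to be.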
How To Estimate, Take Into Account, And Improve Travel Time Reliability In Transportation Networks, Ruey L. Cheu, Vladik Kreinovich, Francois Modave, Gang Xiang, Tao Li, Tanja Magoc
Departmental Technical Reports (CS)
Many urban areas suffer from traffic congestion. Intuitively, it may seem that a road expansion (e.g., the opening of a new road) should always improve the traffic conditions. However, in reality, a new road can actually worsen traffic congestion. It is therefore extremely important that before we start a road expansion project, we first predict the effect of this project on traffic congestion.
The traditional approach to this prediction is based on the assumption that, for any time of the day, we know the exact amount of traffic that needs to go from each origin city zone A to ...
Statistical Hypothesis Testing Under Interval Uncertainty: An Overview, Vladik Kreinovich, Hung T. Nguyen, Sa-Aat Niwitpong
Departmental Technical Reports (CS)
An important part of statistical data analysis is hypothesis testing. For example, we know the probability distribution of the characteristics corresponding to a certain disease, we have the values of the characteristics describing a patient, and we must make a conclusion whether this patient has this disease. Traditional hypothesis testing techniques are based on the assumption that we know the exact values of the characteristic(s) x describing a patient. In practice, the value X comes from measurements and is, thus, only known with uncertainty: X =/= x. In many practical situations, we only know the upper bound D on the ...
Propagation And Provenance Of Probabilistic And Interval Uncertainty In Cyberinfrastructure-Related Data Processing And Data Fusion, Paulo Pinheiro Da Silva, Aaron A. Velasco, Martine Ceberio, Christian Servin, Matthew G. Averill, Nicholas Ricky Del Rio, Luc Longpre, Vladik Kreinovich
Departmental Technical Reports (CS)
In the past, communications were much slower than computations. As a result, researchers and practitioners collected different data into huge databases located at a single location such as NASA and US Geological Survey. At present, communications are so much faster that it is possible to keep different databases at different locations, and automatically select, transform, and collect relevant data when necessary. The corresponding cyberinfrastructure is actively used in many applications. It drastically enhances scientists' ability to discover, reuse and combine a large number of resources, e.g., data and services.
Because of this importance, it is desirable to be able ...
A Fitness Function To Find Feasible Sequences Of Method Calls For Evolutionary Testing Of Object-Oriented Programs, Myoung Yee Kim, Yoonsik Cheon
Departmental Technical Reports (CS)
In evolutionary testing of an object-oriented program, the search objective is to find a sequence of method calls that can successfully produce a test object of an interesting state. This is challenging because not all call sequences are feasible; each call of a sequence has to meet the assumption of the called method. The effectiveness of evolutionary testing thus depends in part on the quality of the so-called fitness function that determines the degree of fitness of a candidate solution. In this paper, we propose a new fitness function based on assertions such as method preconditions to find ...
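The idea of rewarding "more nearly feasible" call sequences can be sketched as follows. This is a hypothetical toy fitness function on a made-up `Account` class, not the paper's assertion-based formulation: it simply counts how many consecutive calls satisfy their preconditions.

```python
# Toy example class with simple method preconditions (hypothetical).
class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):   # precondition: amount > 0
        self.balance += amount

    def withdraw(self, amount):  # precondition: 0 < amount <= balance
        self.balance -= amount

def fitness(calls):
    """calls: list of (method_name, amount) pairs.
    Score = number of consecutive calls whose preconditions hold;
    higher means the sequence is closer to being feasible."""
    acct, score = Account(), 0
    for name, amount in calls:
        if name == "deposit" and amount > 0:
            acct.deposit(amount)
        elif name == "withdraw" and 0 < amount <= acct.balance:
            acct.withdraw(amount)
        else:
            break  # precondition violated: infeasible from this call on
        score += 1
    return score

print(fitness([("deposit", 50), ("withdraw", 30)]))  # 2 (fully feasible)
print(fitness([("withdraw", 30), ("deposit", 50)]))  # 0 (overdraw first)
```

An evolutionary search would then prefer candidate sequences with higher scores, gradually converging on feasible sequences that drive the object into the desired state.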
Computing Population Variance And Entropy Under Interval Uncertainty: Linear-Time Algorithms, Gang Xiang, Martine Ceberio, Vladik Kreinovich
Departmental Technical Reports (CS)
In statistical analysis of measurement results, it is often necessary to compute the range [V-, V+] of the population variance V = ((x1-E)^2 + ... + (xn-E)^2)/n (where E = (x1+...+xn)/n) when we only know the intervals [Xi-Di, Xi+Di] of possible values of the xi. While V- can be computed efficiently, the problem of computing V+ is, in general, NP-hard. In our previous paper "Population Variance under Interval Uncertainty: A New Algorithm" (Reliable Computing, 2006, Vol. 12, No. 4, pp. 273-280), we showed that in a practically important case, we can use constraints techniques to compute V+ in time ...
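For small n, the upper endpoint V+ can be illustrated by brute force (a hypothetical sketch, exponential in n, and thus not the efficient method the paper is about): the population variance is a convex function of (x1,...,xn), so its maximum over the box of intervals is attained at one of the 2^n vertices.

```python
# Brute-force illustration of V+ (exponential in n; for exposition only).
from itertools import product

def variance(xs):
    """Population variance V = ((x1-E)^2 + ... + (xn-E)^2)/n."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n

def v_plus(midpoints, radii):
    """Maximum of V over the box [X1-D1,X1+D1] x ... x [Xn-Dn,Xn+Dn].
    Since V is convex, the maximum is attained at a vertex of the box."""
    corners = product(*[(m - d, m + d) for m, d in zip(midpoints, radii)])
    return max(variance(c) for c in corners)

# Two measurements, 0 +/- 1 and 1 +/- 1:
print(v_plus([0, 1], [1, 1]))  # 2.25, attained at the vertex (-1, 2)
```

Note that the same trick does not work for V-: the minimum of a convex function over a box is generally attained in the interior, which is why the lower endpoint calls for a different (and, as the abstract notes, efficient) algorithm.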
Usability Inspection Methods After 15 Years Of Research And Practice, David G. Novick, Tasha Hollingsed
Departmental Papers (CS)
Usability inspection methods, such as heuristic evaluation, the cognitive walkthrough, formal usability inspections, and the pluralistic usability walkthrough, were introduced fifteen years ago. Since then, these methods, analyses of their comparative effectiveness, and their use have evolved in different ways. In this paper, we track the fortunes of the methods and analyses, looking at which led to use and to further research, and which led to relative methodological dead ends. Heuristic evaluation and the cognitive walkthrough appear to be the most actively used and researched techniques. The pluralistic walkthrough remains a recognized technique, although not the subject of significant further ...
Toward A More Accurate View Of When And How People Seek Help With Computer Applications, David G. Novick, Edith Elizalde, Nathaniel Bean
Departmental Papers (CS)
Based on 40 interviews and 11 on-site workplace observations of people using computer applications at work, we confirm that use of printed and online help is very low and find that providing greater detail in categories of solution methods can present a more realistic picture of users' behaviors. Observed study participants encountered a usability problem on average about once every 75 minutes and typically spent about a minute looking for a solution. Participants consumed much more time when they were unaware of a direct way of doing something and instead used less effective methods. Comparison of results from different data-collection methods ...
Aggregation In Biological Systems: Computational Aspects, Vladik Kreinovich, Max Shpak
Departmental Technical Reports (CS)
Many biologically relevant dynamical systems are aggregable, in the sense that one can divide their (micro) variables x1,...,xn into several (k) non-intersecting groups and find functions y1,...,yk (k < n) of these groups (macrovariables) whose dynamics only depend on the initial state of the macrovariables. For example, the state of a population genetic system can be described by listing the frequencies xi of different genotypes, so that the corresponding dynamical system describes the effects of mutation, recombination, and natural selection. The goal of aggregation approaches in population genetics is to find macrovariables y1,...,yk to which aggregated mutation, recombination, and selection functions could be applied. Population genetic models are formally equivalent to genetic algorithms, and are therefore of wide interest in the computational sciences.
Another example of a multivariable biological system of interest arises in ecology. Ecosystems contain many interacting species, and because of the complexity of multivariable nonlinear systems, it would be of value to derive a formal description that reduces the number of variables to some macrostates that are weighted sums of the densities of individual species.
In this chapter, we explore different computational aspects of aggregability for linear and nonlinear systems ...
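For linear systems, aggregability has a simple matrix characterization that can be checked directly. The sketch below uses a made-up 3-variable system and assumes `numpy` is available: a linear system x' = A x is aggregable under macrovariables y = B x when B A = C B for some small matrix C, so that y' = C y closes in the macrovariables alone.

```python
# Illustrative linear-aggregability check (hypothetical 3-variable system).
import numpy as np

A = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0]])

# Macrovariables: y1 = x1 + x2, y2 = x3.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# Solve B A = C B for C in the least-squares sense, then verify exactness.
C, *_ = np.linalg.lstsq(B.T, (B @ A).T, rcond=None)
C = C.T
print(np.allclose(B @ A, C @ B))  # True: the system is aggregable under B
```

When the residual of this check is nonzero, no exact macro-dynamics exists for the chosen B, which is exactly the situation where approximate aggregation (one of the computational questions the chapter explores) becomes relevant.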
Trade-Off Between Sample Size And Accuracy: Case Of Dynamic Measurements Under Interval Uncertainty, Hung T. Nguyen, Olga Kosheleva, Vladik Kreinovich, Scott Ferson
Departmental Technical Reports (CS)
In many practical situations, we are not satisfied with the accuracy of the existing measurements. There are two possible ways to improve the measurement accuracy:
first, instead of a single measurement, we can make repeated measurements; the additional information coming from these additional measurements can improve the accuracy of the result of this series of measurements;
second, we can replace the current measuring instrument with a more accurate one; correspondingly, we can use a more accurate (and more expensive) measurement procedure provided by a measuring lab, e.g., a procedure that includes the use of a higher-quality reagent.
In ...
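The first option can be illustrated numerically (a hypothetical simulation, not from the report): averaging n repeated measurements suppresses the random component of the error by roughly a factor of sqrt(n), but it cannot reduce a systematic bias, which is why the second option can still be necessary.

```python
# Hypothetical simulation: averaging helps with random error, not with bias.
import random
import statistics

random.seed(0)
true_value, sigma, bias = 10.0, 0.5, 0.2   # made-up illustrative values

def measure():
    """One measurement: true value + systematic bias + random noise."""
    return true_value + bias + random.gauss(0, sigma)

# Error of a single measurement vs. error of an average of 100 measurements:
single = [abs(measure() - true_value) for _ in range(2000)]
averaged = [abs(statistics.mean(measure() for _ in range(100)) - true_value)
            for _ in range(2000)]

print(statistics.mean(single))    # dominated by the random error sigma
print(statistics.mean(averaged))  # approaches the irreducible bias 0.2
```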
How To Avoid Gerrymandering: A New Algorithmic Solution, Gregory B. Lush, Esteban Gamez, Vladik Kreinovich
Departmental Technical Reports (CS)
Subdividing an area into voting districts is often a very controversial issue. If we divide purely geographically, then minority groups may not be properly represented. If we start changing the borders of the districts to accommodate different population groups, we may end up with very artificial borders, borders which are often set up in such a way as to give an unfair advantage to incumbents. In this paper, we describe redistricting as a precise optimization problem, and we propose a new algorithm for solving this problem.
Estimating Quality Of Support Vector Machines Learning Under Probabilistic And Interval Uncertainty: Algorithms And Computational Complexity, Canh Hao Nguyen, Tu Bao Ho, Vladik Kreinovich
Departmental Technical Reports (CS)
Support Vector Machines (SVM) are among the most widely used techniques in machine learning. After the SVM algorithms process the data and produce some classification, it is desirable to learn how well this classification fits the data. There exist several measures of fit; among them, the most widely used is kernel target alignment. These measures, however, assume that the data are known exactly. In reality, whether the data points come from measurements or from expert estimates, they are only known with uncertainty. As a result, even if we know that the classification perfectly fits the nominal data, this same ...
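For concreteness, the standard kernel-target alignment measure can be computed as below (this is the general definition, not the paper's interval extension): the alignment between a kernel matrix K and labels y is the normalized Frobenius inner product of K with the ideal kernel y y^T.

```python
# Standard kernel-target alignment (general definition; illustrative code).
import numpy as np

def kernel_target_alignment(K, y):
    """<K, y y^T>_F / (||K||_F * ||y y^T||_F), in [-1, 1] for typical kernels."""
    Y = np.outer(y, y)                       # ideal kernel induced by labels
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

y = np.array([1.0, 1.0, -1.0])
K_perfect = np.outer(y, y)                   # kernel matching the labels exactly
print(kernel_target_alignment(K_perfect, y))  # 1.0
```

Under interval uncertainty in the data, each entry of K becomes an interval, and the question the paper addresses is how to bound the resulting range of this alignment value.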
Trade-Off Between Sample Size And Accuracy: Case Of Static Measurements Under Interval Uncertainty, Hung T. Nguyen, Vladik Kreinovich
Departmental Technical Reports (CS)
In many practical situations, we are not satisfied with the accuracy of the existing measurements. There are two possible ways to improve the measurement accuracy:
first, instead of a single measurement, we can make repeated measurements; the additional information coming from these additional measurements can improve the accuracy of the result of this series of measurements;
second, we can replace the current measuring instrument with a more accurate one; correspondingly, we can use a more accurate (and more expensive) measurement procedure provided by a measuring lab, e.g., a procedure that includes the use of a higher-quality reagent.
In ...
Fast Algorithms For Computing Statistics Under Interval Uncertainty: An Overview, Vladik Kreinovich, Gang Xiang
Departmental Technical Reports (CS)
In many areas of science and engineering, it is desirable to estimate statistical characteristics (mean, variance, covariance, etc.) under interval uncertainty. For example, we may want to use the measured values x(t) of a pollution level in a lake at different moments of time to estimate the average pollution level; however, we do not know the exact values x(t): e.g., if one of the measurement results is 0, this simply means that the actual (unknown) value of x(t) can be anywhere between 0 and the detection limit DL. We must therefore modify the existing statistical algorithms ...
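The detection-limit example can be sketched in a few lines (hypothetical code; `DL` and the readings are made-up values): a reported 0 becomes the interval [0, DL], an ordinary reading x becomes the degenerate interval [x, x], and the sample mean is then computed endpoint-wise.

```python
# Interval-valued sample mean under a detection limit (illustrative sketch).
DL = 0.5  # assumed detection limit of the instrument

def mean_interval(readings, detection_limit=DL):
    """Return (lower, upper) bounds on the average of the true values."""
    intervals = [(0.0, detection_limit) if r == 0 else (r, r)
                 for r in readings]
    n = len(intervals)
    lo = sum(a for a, _ in intervals) / n
    hi = sum(b for _, b in intervals) / n
    return lo, hi

# Two real readings and two below-detection-limit readings:
print(mean_interval([1.2, 0, 2.0, 0]))  # bounds on the average pollution level
```

The mean is the easy case (it is monotone in each xi); for statistics like the variance, the endpoints can no longer be computed independently, which is where the fast algorithms surveyed in the paper come in.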
A Computational Model Of Culture-Specific Conversational Behavior, Dusan Jan, David Herrera, Bilyana Martinovski, David G. Novick, David Traum
Departmental Papers (CS)
This paper presents a model for simulating cultural differences in the conversational behavior of virtual agents. The model provides parameters for differences in proxemics, gaze and overlap in turn taking. We present a review of literature on these factors and show results of a study where native speakers of North American English, Mexican Spanish and Arabic were asked to rate the realism of the simulations generated based on different cultural parameters with respect to their culture.
Random Fuzzy Sets, Hung T. Nguyen, Vladik Kreinovich, Gang Xiang
Departmental Technical Reports (CS)
It is well known that in decision making under uncertainty, while we are guided by a general (and abstract) theory of probability and of statistical inference, each specific type of observed data requires its own analysis. Thus, while textbook techniques treat precisely observed data in multivariate analysis, there are many open research problems when data are censored (e.g., in medical or biostatistics), missing, or partially observed (e.g., in bioinformatics). Data can be imprecise due to various reasons, e.g., due to fuzziness of linguistic data. Imprecise observed data are usually called "coarse data". In this chapter, we ...
Ufuzzy Prediction Models In Measurement, Leon Reznik, Vladik Kreinovich
Departmental Technical Reports (CS)
The paper investigates the feasibility of applying fuzzy models in measurement procedures. It considers the problem of fusing measurement information from different sources, when one of the sources provides predictions regarding approximate values of the measured variables or their combinations. Typically, this information is given by an expert, but it may also be mined from available data. This information is formalized as fuzzy prediction models and is used in combination with the measurement results to improve the measurement accuracy. The properties of the modified estimates are studied in comparison with the conventional ones. The conditions under which the application of fuzzy models can achieve ...
In Some Curved Spaces, One Can Solve NP-Hard Problems In Polynomial Time, Vladik Kreinovich, Maurice Margenstern
Departmental Technical Reports (CS)
In the late 1970s and the early 1980s, Yuri Matiyasevich actively used his knowledge of engineering and physical phenomena to come up with parallelized schemes for solving NP-hard problems in polynomial time. In this paper, we describe one such scheme, in which we use parallel computation in curved spaces.
Verification Of Automatically Generated Pattern-Based LTL Specifications, Salamah Salamah, Ann Q. Gates, Vladik Kreinovich, Steve Roach
Departmental Technical Reports (CS)
The use of property classifications and patterns, i.e., high-level abstractions that describe common behavior, has been shown to assist practitioners in generating formal specifications that can be used in formal verification techniques. The Specification Pattern System (SPS) provides descriptions of a collection of patterns. The extent of program execution over which a pattern must hold is described by the notion of scope. SPS provides a manual technique for obtaining formal specifications from a pattern and a scope. The Property Specification Tool (Prospec) extends SPS by introducing Composite Propositions (CPs), a classification for defining sequential and concurrent behavior to represent ...
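For the Global scope, the standard SPS pattern-to-LTL mappings are short; the dictionary below lists a few of them for orientation (using `[]` for "always" and `<>` for "eventually"; other scopes produce considerably more involved formulas, which is what motivates tool support).

```python
# Standard SPS pattern -> LTL mappings for the Global scope (illustrative).
GLOBAL_SCOPE_LTL = {
    "absence":      "[](!P)",       # P never holds
    "universality": "[](P)",        # P holds throughout the execution
    "existence":    "<>(P)",        # P eventually holds
    "response":     "[](P -> <>S)", # every P is eventually followed by S
}

print(GLOBAL_SCOPE_LTL["response"])
```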
WDO-It! A Tool For Building Scientific Workflows From Ontologies, Paulo Pinheiro Da Silva, Leonardo Salayandia, Ann Q. Gates
Departmental Technical Reports (CS)
One of the factors that limits scientists from fully adopting e-Science technologies and infrastructure to advance their work is the technical knowledge needed to specify and execute scientific workflows. In this paper we introduce WDO-It!, a scientist-centered tool that facilitates the scientist's task of encoding discipline knowledge in the form of workflow-driven ontologies (WDOs) and presenting process knowledge in the form of model-based workflows (MBWs). The goal of WDO-It! is to facilitate the adoption of e-Science technologies and infrastructures by allowing scientists to encode their discipline knowledge and process knowledge with minimal assistance from technologists. MBWs have demonstrated potential ...
Static Space-Times Naturally Lead To Quasi-Pseudometrics, Hans-Peter A. Kuenzi, Vladik Kreinovich
Departmental Technical Reports (CS)
The standard 4-dimensional Minkowski space-time of special relativity is based on the 3-dimensional Euclidean metric. In 1967, H. Busemann showed that similar static space-time models can be based on an arbitrary metric space. In this paper, we search for the broadest possible generalization of a metric under which a construction of a static space-time leads to a physically reasonable space-time model. It turns out that this broadest possible generalization is related to the known notion of a quasi-pseudometric.
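For reference, the standard definition of a quasi-pseudometric (general mathematical background, not a result of the report) weakens the metric axioms as follows:

```latex
% A quasi-pseudometric on a set X is a map d : X \times X \to [0,\infty) with
d(x, x) = 0 \quad \forall x \in X,
d(x, z) \le d(x, y) + d(y, z) \quad \forall x, y, z \in X.
% Dropped relative to a metric: symmetry, d(x,y) = d(y,x), and the
% separation requirement that d(x,y) = 0 implies x = y.
```

The loss of symmetry is natural in a causal setting: the "distance" from event x to event y need not equal the distance from y to x.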
Towards Efficient Prediction Of Decisions Under Interval Uncertainty, Van Nam Huynh, Vladik Kreinovich, Yoshiteru Nakamori, Hung T. Nguyen
Departmental Technical Reports (CS)
In many practical situations, users select between n alternatives a1, ..., an, and the only information that we have about the utilities vi of these alternatives is the bounds vi- <= vi <= vi+. In such situations, it is reasonable to assume that the values vi are independent and uniformly distributed on the corresponding intervals [vi-, vi+]. Under this assumption, we would like to estimate, for each i, the probability pi that the alternative ai will be selected. In this paper, we provide efficient algorithms for computing these probabilities.
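The probabilities pi can be sanity-checked by plain Monte Carlo sampling (an illustrative sketch, not the paper's efficient algorithm): draw each utility uniformly from its interval and count how often each alternative wins.

```python
# Monte Carlo estimate of selection probabilities (illustrative only).
import random

random.seed(1)

def selection_probabilities(bounds, trials=100_000):
    """bounds: list of (lo, hi) utility intervals, one per alternative.
    Returns the estimated probability that each alternative has the
    largest drawn utility (ties have probability 0 for continuous draws)."""
    counts = [0] * len(bounds)
    for _ in range(trials):
        draws = [random.uniform(lo, hi) for lo, hi in bounds]
        counts[draws.index(max(draws))] += 1
    return [c / trials for c in counts]

# Two alternatives with identical intervals: each wins about half the time.
print(selection_probabilities([(0, 1), (0, 1)]))  # each entry close to 0.5
```

Monte Carlo needs many samples for tight estimates, which is precisely why closed-form or polynomial-time algorithms like those in the paper are preferable.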
M Solutions Good, M-1 Solutions Better, Luc Longpre, William Gasarch, G. W. Walster, Vladik Kreinovich
Departmental Technical Reports (CS)
One of the main objectives of theoretical research in computational complexity and feasibility is to explain experimentally observed differences in complexity.
Empirical evidence shows that the more solutions a system of equations has, the more difficult it is to solve it. Similarly, the more global maxima a continuous function has, the more difficult it is to locate them. Until now, these empirical facts have been only partially formalized: namely, it has been shown that problems with two or more solutions are more difficult to solve than problems with exactly one solution. In this paper, we extend this result and show ...
Towards A More Physically Adequate Definition Of Randomness: A Topological Approach, Vladik Kreinovich
Departmental Technical Reports (CS)
The Kolmogorov-Martin-Löf definition describes a random sequence as a sequence which satisfies all the laws of probability. This notion formalizes the intuitive physical idea that if an event has probability 0, then this event cannot occur. Physicists, however, also believe that if an event has a very small probability, then it cannot occur. In our previous papers, we proposed a modification of the Kolmogorov-Martin-Löf definition which formalizes this idea as well. It turns out that our original definition is too general: e.g., it includes some clearly non-physical situations when the set of all random elements is a one-point set ...
Using Patterns And Composite Propositions To Automate The Generation Of Complex LTL, Salamah Salamah, Ann Q. Gates, Vladik Kreinovich, Steve Roach
Departmental Technical Reports (CS)
Property classifications and patterns, i.e., high-level abstractions that describe common behavior, have been used to assist practitioners in specifying properties. The Specification Pattern System (SPS) provides descriptions of a collection of patterns. Each pattern is associated with a scope that defines the extent of program execution over which a property pattern is considered. Based on a selected pattern, SPS provides a specification for each type of scope in multiple formal languages, including Linear Temporal Logic (LTL). The Prospec tool extends SPS by introducing the notion of Composite Propositions (CP), which are classifications for defining sequential and concurrent behavior to ...
The Gravity Data Ontology: Laying The Foundation For Workflow-Driven Ontologies, Ann Q. Gates, G. Randy Keller, Flor Salcedo, Paulo Pinheiro Da Silva, Leonardo Salayandia
Departmental Technical Reports (CS)
A workflow-driven ontology is an ontology that encodes discipline-specific knowledge in the form of concepts and relationships and that facilitates the composition of services to create products and derive data. Early work on the development of such an ontology resulted in the construction of a gravity data ontology and the categorization of concepts: "Data," "Method," and "Product." "Data" is further categorized as "Raw Data" and "Derived Data," e.g., reduced data. The relationships that are defined capture inputs to and outputs from methods, e.g., derived data and products are output from methods, as well as other associations that are ...
Traffic Assignment For Risk-Averse Drivers In A Stochastic Network, Ruey L. Cheu, Vladik Kreinovich, Srinivasa R. Manduva
Departmental Technical Reports (CS)
Most traffic assignment tasks in practice are performed by using deterministic network (DN) models, which assume that the link travel time is uniquely determined by a link performance function. In reality, link travel time, at a given link volume, is a random variable. Such stochastic network (SN) models are not widely used because the corresponding traffic assignment algorithms are much more computationally complex and difficult for practitioners to understand. In this paper, we derive an equivalent link disutility (ELD) function for the case of risk-averse drivers in an SN, without assuming any distribution of link travel time. We further derive ...
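One common mean-risk form of such a disutility can be sketched as follows (a generic textbook-style example; the paper's ELD function is derived without assuming a travel-time distribution and is not reproduced here): the perceived cost of a link is its expected travel time plus a risk premium proportional to the travel-time spread.

```python
# Generic mean-risk link disutility (illustrative sketch, not the paper's ELD).
def equivalent_link_disutility(mean_time, std_time, risk_aversion=1.0):
    """Perceived link cost = expected travel time + risk premium.
    risk_aversion = 0 recovers the deterministic-network cost."""
    return mean_time + risk_aversion * std_time

# A link with mean 10 min and std 3 min, for a moderately risk-averse driver:
print(equivalent_link_disutility(10.0, 3.0, risk_aversion=0.5))  # 11.5
```

Once every link has such a scalar disutility, standard deterministic assignment algorithms (e.g., shortest-path-based equilibrium methods) can be reused on the stochastic network.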
From (Idealized) Exact Causality-Preserving Transformations To Practically Useful Approximately-Preserving Ones: A General Approach, Vladik Kreinovich, Olga Kosheleva
Departmental Technical Reports (CS)
It is known that every causality-preserving transformation of Minkowski space-time is a composition of Lorentz transformations, shifts, rotations, and dilations. In principle, this result means that by only knowing the causality relation, we can determine the coordinate and metric structure on the space-time. However, strictly speaking, the theorem only says that this reconstruction is possible if we know the exact causality relation. In practice, measurements are never 100% accurate. It is therefore desirable to prove that if a transformation approximately preserves causality, then it is approximately equal to an above-described composition.
Such a result was indeed proven, but only for ...