Open Access. Powered by Scholars. Published by Universities.®

Physical Sciences and Mathematics Commons

University of Nebraska - Lincoln

CSE Technical Reports

Articles 121 - 130 of 130

Full-Text Articles in Physical Sciences and Mathematics

Arktos: An Intelligent System For Satellite Sea Ice Image Analysis, Leen-Kiat Soh Jan 2002

We present an intelligent system for satellite sea ice image analysis named ARKTOS (Advanced Reasoning using Knowledge for Typing Of Sea ice). The underlying methodology of ARKTOS is to perform fully automated analysis of sea ice images by mimicking the reasoning process of sea ice experts and photo-interpreters. Hence, our approach is feature-based, rule-based classification supported by multisource data fusion and knowledge bases. A feature can be an ice floe, for example. ARKTOS computes a host of descriptors for that feature and then applies expert rules to classify the floe into one of several ice classes. ARKTOS also incorporates information …
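
As a rough illustration of the feature-based, rule-based classification the abstract describes, the Python sketch below computes a couple of simple descriptors for a segmented feature and runs them through a small rule table. The descriptor names, thresholds, and ice classes are hypothetical placeholders, not ARKTOS's actual knowledge base.

# A minimal sketch (not ARKTOS itself) of feature-based, rule-based classification.
# Descriptor names, thresholds, and ice classes are hypothetical placeholders.

def compute_descriptors(feature_pixels):
    """Compute simple descriptors for a segmented feature (e.g., an ice floe)."""
    n = len(feature_pixels)
    mean_brightness = sum(feature_pixels) / n
    return {"area": n, "mean_brightness": mean_brightness}

# Each rule maps a predicate over descriptors to a candidate ice class.
EXPERT_RULES = [
    (lambda d: d["mean_brightness"] > 180 and d["area"] > 500, "multiyear_ice"),
    (lambda d: d["mean_brightness"] > 120, "first_year_ice"),
    (lambda d: True, "open_water"),  # fallback class
]

def classify(feature_pixels):
    descriptors = compute_descriptors(feature_pixels)
    for predicate, ice_class in EXPERT_RULES:
        if predicate(descriptors):
            return ice_class

floe = [200, 190, 210] * 300      # a bright, large synthetic "floe"
print(classify(floe))             # -> multiyear_ice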


The Impact Of Test Suite Granularity On The Cost-Effectiveness Of Regression Testing, Gregg Rothermel, Sebastian Elbaum, Alexey Malishevsky, Praveen Kallakuri, Brian Davia Sep 2001

Regression testing is an expensive testing process used to validate software following modifications. The cost-effectiveness of regression testing techniques varies with characteristics of test suites. One such characteristic, test suite granularity, involves the way in which test inputs are grouped into test cases within a test suite. Various cost-benefit tradeoffs have been attributed to choices of test suite granularity, but almost no research has formally examined these tradeoffs. To address this lack, we conducted several controlled experiments, examining the effects of test suite granularity on the costs and benefits of regression testing …
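
The notion of test suite granularity can be made concrete with a short sketch: the same pool of test inputs grouped into a few large test cases (coarse granularity) or many small ones (fine granularity). The grouping size k below is an illustrative knob, not the experimental design used in the report.

# A minimal sketch of test suite granularity: one pool of test inputs grouped
# into coarse (few, large) or fine (many, small) test cases.  Names are
# illustrative only.

def group_inputs(test_inputs, k):
    """Group a flat list of test inputs into test cases of k inputs each."""
    return [test_inputs[i:i + k] for i in range(0, len(test_inputs), k)]

inputs = [f"input_{i}" for i in range(12)]

coarse_suite = group_inputs(inputs, 6)   # 2 large test cases
fine_suite = group_inputs(inputs, 1)     # 12 single-input test cases

print(len(coarse_suite), len(fine_suite))  # -> 2 12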


Combining Ordering Heuristics And Bundling Techniques For Solving Finite Constraint Satisfaction Problems, Amy Beckwith, Berthe Y. Choueiry Jan 2001

We investigate techniques to enhance the performance of the backtrack search procedure with forward checking (FC-BT) for finding all solutions to a finite Constraint Satisfaction Problem (CSP). We consider ordering heuristics for variables and/or values and bundling techniques based on the computation of interchangeability. While the former methods allow us to traverse the search space more effectively, the latter allow us to reduce its size. We design and compare strategies that combine static and dynamic versions of these two approaches. We show empirically the utility of dynamic variable ordering combined with dynamic bundling on both random problems and puzzles.
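
For readers unfamiliar with FC-BT, the sketch below shows backtrack search with forward checking and a dynamic variable ordering heuristic (smallest remaining domain) on a toy binary CSP. Bundling by interchangeability is omitted, and the instance and heuristic choice are illustrative assumptions rather than the exact procedures evaluated in the report.

# A minimal sketch of FC-BT with dynamic variable ordering (smallest remaining
# domain), finding all solutions of a toy binary CSP.  Bundling is omitted.

def forward_check(domains, var, value, constraints):
    """Prune values inconsistent with var=value; return None on a wipe-out."""
    pruned = {v: list(dom) for v, dom in domains.items()}
    pruned[var] = [value]
    for (a, b), allowed in constraints.items():
        if a == var:
            pruned[b] = [x for x in pruned[b] if (value, x) in allowed]
            if not pruned[b]:
                return None
        elif b == var:
            pruned[a] = [x for x in pruned[a] if (x, value) in allowed]
            if not pruned[a]:
                return None
    return pruned

def solve(domains, constraints, assignment=None, solutions=None):
    assignment = assignment or {}
    solutions = [] if solutions is None else solutions
    unassigned = [v for v in domains if v not in assignment]
    if not unassigned:
        solutions.append(dict(assignment))
        return solutions
    # Dynamic variable ordering: pick the variable with the fewest values left.
    var = min(unassigned, key=lambda v: len(domains[v]))
    for value in domains[var]:
        reduced = forward_check(domains, var, value, constraints)
        if reduced is not None:
            assignment[var] = value
            solve(reduced, constraints, assignment, solutions)
            del assignment[var]
    return solutions

# Toy binary CSP: X != Y and Y != Z over the domain {1, 2}.
doms = {"X": [1, 2], "Y": [1, 2], "Z": [1, 2]}
cons = {("X", "Y"): {(1, 2), (2, 1)}, ("Y", "Z"): {(1, 2), (2, 1)}}
print(solve(doms, cons))   # prints the two solutions of this toy CSP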


A Generator Of Random Instances Of Binary Finite Constraint Satisfaction Problems With Controllable Levels Of Interchangeability, Hui Zou, Amy Beckwith, Berthe Y. Choueiry Jan 2001

In order to test the performance of algorithms for solving Constraint Satisfaction Problems (CSPs), we must establish a large collection of CSP instances that meet a given set of specifications, such as the number of variables, domain size, constraint density, tightness, etc. The goal of this report is to describe a generator of instances that have a specified degree of interchangeability. An example of such a generator is described in (Freuder and Sabin 1997), which generates non-reflexive constraints and does not allow us to control concurrently the degree of interchangeability and tightness. We have developed a technique and written a …
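
A bare-bones generator driven by the usual four parameters (number of variables, domain size, constraint density, tightness) looks roughly like the sketch below. Unlike the generator the report describes, this sketch does not control the level of interchangeability; it only illustrates the parameter set.

# A minimal sketch of a random binary CSP generator with parameters n (number
# of variables), d (domain size), density, and tightness.  It does not control
# interchangeability, unlike the generator described in the report.

import itertools
import random

def random_binary_csp(n, d, density, tightness, seed=0):
    rng = random.Random(seed)
    variables = list(range(n))
    domains = {v: list(range(d)) for v in variables}
    constraints = {}
    for pair in itertools.combinations(variables, 2):
        if rng.random() < density:                        # keep this constraint?
            all_tuples = list(itertools.product(range(d), repeat=2))
            forbidden = int(tightness * len(all_tuples))  # tuples ruled out
            allowed = set(rng.sample(all_tuples, len(all_tuples) - forbidden))
            constraints[pair] = allowed
    return domains, constraints

domains, constraints = random_binary_csp(n=10, d=5, density=0.3, tightness=0.4)
print(len(constraints), "constraints generated")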


Infrastructure Support For Controlled Experimentation With Software Testing And Regression Testing Techniques, Hyunsook Do, Sebastian Elbaum, Gregg Rothermel Jan 2001

Where the creation, understanding, and assessment of software testing and regression testing techniques are concerned, controlled experimentation is an indispensable research methodology. Obtaining the infrastructure necessary to support such experimentation, however, is difficult and expensive. As a result, progress in experimentation with testing techniques has been slow, and empirical data on the costs and effectiveness of techniques remains relatively scarce. To help address this problem, we have been designing and constructing infrastructure to support controlled experimentation with testing and regression testing techniques. This paper reports on the challenges faced by researchers experimenting with testing techniques, including those that inform the …


An Empirical Study Of The Effects Of Incorporating Fault Exposure Potential Estimates Into A Test Data Adequacy Criterion, Wei Chen, Gregg Rothermel, Roland H. Untch, Jeffery Von Ronne Apr 2000

Code-coverage-based test data adequacy criteria typically treat all code components as equal. In practice, however, the probability that a test case can expose a fault in a code component varies: some faults are more easily revealed than others. Thus, researchers have suggested that if we could estimate the probability that a fault in a code component will cause a failure, we could use this estimate to determine the number of executions of a component that are required to achieve a certain level of confidence in that component’s correctness. This estimate in turn could be used to improve the fault-detection effectiveness …
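
One plausible reading of that reasoning (not necessarily the report's exact model) is a simple reliability-style calculation: if a fault in a component causes a failure with probability p per execution, then n independent executions miss it with probability (1 - p)^n, so reaching confidence c requires n >= log(1 - c) / log(1 - p).

# A small worked example of that reasoning (an assumption, not the report's
# exact model): executions needed so that the chance of never exposing a fault
# with per-execution exposure probability p falls below 1 - c.

import math

def executions_needed(fault_exposure_probability, confidence):
    p, c = fault_exposure_probability, confidence
    return math.ceil(math.log(1 - c) / math.log(1 - p))

# A component with low fault-exposure potential needs many more executions
# than one whose faults are easily revealed.
print(executions_needed(0.01, 0.95))   # hard-to-expose fault: 299 executions
print(executions_needed(0.50, 0.95))   # easy-to-expose fault: 5 executions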


Prioritizing Test Cases For Regression Testing, Sebastian Elbaum, Alexey G. Malishevsky, Gregg Rothermel Jan 2000

Test case prioritization techniques schedule test cases in an order that increases their effectiveness in meeting some performance goal. One performance goal, rate of fault detection, is a measure of how quickly faults are detected within the testing process; an improved rate of fault detection can provide faster feedback on the system under test, and let software engineers begin locating and correcting faults earlier than might otherwise be possible. In previous work, we reported the results of studies that showed that prioritization techniques can significantly improve rate of fault detection. Those studies, however, raised several additional questions: …
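
A minimal sketch of the idea: order test cases by a surrogate such as total coverage, then record how early in the ordered suite each fault is first exposed. The coverage and fault data are made up, and the measure shown is illustrative rather than the specific rate-of-fault-detection metric used in the studies.

# A minimal sketch of prioritization by total coverage, plus a simple record of
# how early each fault is first exposed.  Data and metric are illustrative.

def prioritize_by_coverage(coverage):
    """Order test cases by number of statements covered, highest first."""
    return sorted(coverage, key=lambda t: len(coverage[t]), reverse=True)

def first_detection_positions(order, faults_exposed):
    """For each fault, the position (1-based) of the first test exposing it."""
    positions = {}
    for i, test in enumerate(order, start=1):
        for fault in faults_exposed.get(test, []):
            positions.setdefault(fault, i)
    return positions

coverage = {"t1": {1, 2}, "t2": {1, 2, 3, 4, 5}, "t3": {6}}
faults_exposed = {"t2": ["f1", "f2"], "t3": ["f3"]}

order = prioritize_by_coverage(coverage)          # -> ['t2', 't1', 't3']
print(order, first_detection_positions(order, faults_exposed))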


Experiments To Assess The Cost-Benefits Of Test-Suite Reduction, Gregg Rothermel, Mary Jean Harrold, Jeffery Von Ronne, Christie Hang, Jeffery Ostrin Dec 1999

Test-suite reduction techniques attempt to reduce the cost of saving and reusing test cases during software maintenance by eliminating redundant test cases from test suites. A potential drawback of these techniques is that in reducing a test suite they might reduce the ability of that test suite to reveal faults in the software. Previous studies suggested that test-suite reduction techniques can reduce test suite size without significantly reducing the fault-detection capabilities of test suites. To further investigate this issue we performed experiments in which we examined the costs and benefits of reducing test suites of various sizes for several programs …
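
The sketch below shows one generic greedy heuristic for test-suite reduction: keep a test case only while it still covers some requirement not yet covered by the tests already kept. It is not necessarily the reduction technique evaluated in these experiments.

# A minimal sketch of test-suite reduction via a generic greedy heuristic:
# retain tests until every requirement covered by the original suite is still
# covered, discarding the rest as redundant.

def reduce_suite(coverage):
    """coverage maps each test case to the set of requirements it satisfies."""
    uncovered = set().union(*coverage.values())
    reduced = []
    while uncovered:
        # Pick the test covering the most still-uncovered requirements.
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        reduced.append(best)
        uncovered -= coverage[best]
    return reduced

suite = {"t1": {"r1", "r2"}, "t2": {"r2", "r3"}, "t3": {"r1", "r2", "r3"}}
print(reduce_suite(suite))   # -> ['t3']  (t1 and t2 are redundant here)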


The Rate-Based Execution Model, Kevin Jeffay, Steve Goddard Apr 1999

We present a new task model for the real-time execution of event-driven tasks in which no a priori characterization of the actual arrival rates of events is known; only the expected arrival rates of events are known. We call this new task model rate-based execution (RBE); it is a generalization of the common sporadic task model. The RBE model is motivated naturally by distributed multimedia and digital signal processing applications.
We identify necessary and sufficient conditions for determining the feasibility of an RBE task set, and an optimal scheduling algorithm (based on preemptive earliest-deadline-first (EDF) scheduling) for scheduling the …
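
The flavor of rate-based deadline assignment can be sketched as follows, with the caveat that the parameter names (x events expected per y time units, relative deadline d) and the exact rule are assumptions for illustration rather than the report's precise formulation: if events arrive faster than the expected rate, later deadlines are pushed back so that no more than x deadlines fall due in any window of length y.

# A minimal sketch of rate-based deadline assignment (parameter names and the
# rule are assumptions, not taken verbatim from the report).

def rbe_deadlines(arrivals, x, y, d):
    """Assign a deadline to each event arrival under an (x, y, d) rate spec."""
    deadlines = []
    for j, t in enumerate(arrivals):
        if j < x:
            deadlines.append(t + d)
        else:
            # Push the deadline back so at most x deadlines fall in any y window.
            deadlines.append(max(t + d, deadlines[j - x] + y))
    return deadlines

# Three events expected every 10 time units (deadline 10), but six arrive at once.
print(rbe_deadlines([0, 0, 0, 0, 0, 0], x=3, y=10, d=10))
# -> [10, 10, 10, 20, 20, 20]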


A Unifying Framework Supporting The Analysis And Development Of Safe Regression Test Selection Techniques, John Bible, Gregg Rothermel Jan 1999

Safe regression test selection (RTS) techniques let software testers reduce the number of test cases that need to be rerun to revalidate new versions of software, while ensuring that no fault-revealing test case (in the existing test suite) is excluded. Most previous work on safe regression test selection has focused on specific safe RTS algorithms, rather than addressing the theoretical foundations of safe RTS techniques in general. In this paper, we present a unifying framework for safe RTS that supports the analysis and development of safe RTS techniques. We show that every safe RTS technique is founded on a regression …
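
As a rough illustration of what "safe" selection aims at in practice, the sketch below reruns every test whose execution trace touched an entity that changed between versions; the entities here are plain function names, whereas the techniques the framework covers work on finer-grained program representations.

# A minimal sketch of the intuition behind safe regression test selection:
# rerun every test whose execution touched an entity that changed between
# versions.  Entities here are function names for illustration only.

def select_tests(traces, changed_entities):
    """Keep each test whose trace intersects the set of changed entities."""
    return [t for t, executed in traces.items() if executed & changed_entities]

traces = {
    "t1": {"parse", "render"},
    "t2": {"render"},
    "t3": {"save"},
}
changed = {"parse"}           # only parse() was modified in the new version

print(select_tests(traces, changed))   # -> ['t1']; t2 and t3 need not be rerun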