
Articles 1 - 11 of 11

Full-Text Articles in Physical Sciences and Mathematics

Scalable Web Server Clustering Technologies, Trevor Schroeder, Steve Goddard, Byrav Ramamurthy Jun 2000

School of Computing: Faculty Publications

The exponential growth of the Internet, coupled with the increasing popularity of dynamically generated content on the World Wide Web, has created the need for more and faster Web servers capable of serving more than 100 million Internet users. Server clustering has emerged as a promising technique for building scalable Web servers. In this article we examine the seminal work, early products, and a sample of contemporary commercial offerings in the field of transparent Web server clustering. We broadly classify transparent server clustering into three categories.
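
A minimal sketch of the common thread behind such clustering approaches, assuming nothing beyond the abstract: a single front end accepts client requests and spreads them across a pool of back-end Web servers. The host names and the round-robin policy are illustrative assumptions, not details from the article.

```python
# Illustrative front-end dispatcher for a Web server cluster: requests arrive at
# one logical address and are spread across back-end servers. The back-end names
# and the round-robin policy are assumptions, not taken from the article.
from itertools import cycle

BACKENDS = ["web1.example.org", "web2.example.org", "web3.example.org"]  # hypothetical pool

def make_dispatcher(backends):
    """Return a function that maps each incoming request to a back-end server."""
    rotation = cycle(backends)           # simple round-robin rotation
    def dispatch(request_id):
        return next(rotation)            # a real dispatcher would forward the packet here
    return dispatch

if __name__ == "__main__":
    dispatch = make_dispatcher(BACKENDS)
    for req in range(6):
        print(f"request {req} -> {dispatch(req)}")
```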


Operational Evaluation Of A Knowledge-Based Sea Ice Classification System, Denise Gineris, Cheryl Bertoia, Mary Ruth Keller, Leen-Kiat Soh, Costas Tsatsoulis May 2000

CSE Conference and Workshop Papers

ARKTOS (Advanced Reasoning Using Knowledge for Typing of Sea Ice) is a fully automated intelligent sea ice classification system. ARKTOS is in use at the U.S. National Ice Center (NIC) for daily operations related to the NIC's task of mapping the ice-covered oceans. ARKTOS incorporates image processing, input from ancillary data, and artificial intelligence (AI) to analyze and classify RADARSAT Synthetic Aperture Radar (SAR) imagery. The NIC and Naval Research Laboratory (NRL/ERIM) have been testing and evaluating ARKTOS through the freeze-up, winter, melt-out, and summer seasons of the Beaufort Sea. In this paper we outline the development and evolution …


An Empirical Study Of The Effects Of Incorporating Fault Exposure Potential Estimates Into A Test Data Adequacy Criterion, Wei Chen, Gregg Rothermel, Roland H. Untch, Jeffery Von Ronne Apr 2000

CSE Technical Reports

Code-coverage-based test data adequacy criteria typically treat all code components as equal. In practice, however, the probability that a test case can expose a fault in a code component varies: some faults are more easily revealed than others. Thus, researchers have suggested that if we could estimate the probability that a fault in a code component will cause a failure, we could use this estimate to determine the number of executions of a component that are required to achieve a certain level of confidence in that component's correctness. This estimate in turn could be used to improve the fault-detection effectiveness …
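
A quick worked sketch of the reasoning above, assuming the simplest failure model: if each execution of a component exposes a hypothetical fault with probability p (its fault exposure potential), then n executions reveal it with probability 1 - (1 - p)^n, and solving for n gives the executions needed for a target confidence c. The numeric values are illustrative, not estimates from the study.

```python
# Executions needed to reach confidence c that a component is fault-free, given
# a per-execution fault exposure probability p. Illustrative sketch only; the
# probabilities below are made up, not estimates from the study.
import math

def executions_for_confidence(p, c):
    """Smallest n with 1 - (1 - p)**n >= c, for 0 < p < 1 and 0 < c < 1."""
    return math.ceil(math.log(1.0 - c) / math.log(1.0 - p))

if __name__ == "__main__":
    for p in (0.5, 0.1, 0.01):
        n = executions_for_confidence(p, 0.95)
        print(f"exposure probability {p:5.2f}: {n} executions for 95% confidence")
```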


Cataloging Expert Systems: Optimism And Frustrated Reality, William Olmstadt Feb 2000

E-JASL 1999-2009 (Volumes 1-10)

There is little question that computers have profoundly changed how information professionals work. The process of cataloging and classifying library materials was one of the first activities transformed by information technology. The introduction of the MARC format in the 1960s and the creation of national bibliographic utilities in the 1970s had a lasting impact on cataloging. In the 1980s, the affordability of microcomputers made the computer accessible for cataloging, even to small libraries. This trend toward automating library processes with computers parallels a broader societal interest in the use of computers to organize and store information. Following World War II, …


Separating Touching Objects In Remote Sensing Imagery: The Restricted Growing Concept And Implementations, Leen-Kiat Soh, Costas Tsatsoulis Feb 2000

School of Computing: Faculty Publications

This paper defines the restricted growing concept (RGC) for object separation and provides an algorithmic analysis of its implementations. Our concept decomposes the problem of object separation into two stages. First, separation is achieved by shrinking the objects to their cores while keeping track of their originals as masks. Then the cores are grown back within the masks, following the guidelines of a restricted growing algorithm. In this paper, we apply RGC to the remote sensing domain, particularly to synthetic aperture radar (SAR) sea ice images.
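
A rough reconstruction of the two-stage idea using standard binary morphology follows; it is not the authors' implementation, and the erosion depth is an arbitrary assumption.

```python
# Restricted growing, sketched with standard morphology: shrink touching objects
# to disjoint cores, keep the originals as masks, then grow the cores back
# without leaving the mask or overwriting another object's label.
import numpy as np
from scipy import ndimage

def restricted_growing(binary_image, shrink_iterations=3):
    """Separate touching objects: shrink to cores, then regrow the cores within the mask."""
    mask = np.asarray(binary_image, dtype=bool)          # originals kept as masks
    cores = ndimage.binary_erosion(mask, iterations=shrink_iterations)
    labels, n_objects = ndimage.label(cores)             # cores are now disjoint
    for _ in range(shrink_iterations + 1):
        grown = ndimage.grey_dilation(labels, size=3)    # propagate labels one step outward
        frontier = (labels == 0) & mask & (grown > 0)    # unclaimed pixels inside the mask
        labels[frontier] = grown[frontier]               # growth never merges two labels
    return labels, n_objects
```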


Virtual Topology Reconfiguration Of Wavelength-Routed Optical Wdm Networks, Byrav Ramamurthy, Ashok Ramakrishnan Jan 2000

CSE Conference and Workshop Papers

The bandwidth requirements of the Internet are increasing every day, and newer, more bandwidth-thirsty applications are emerging on the horizon. Wavelength division multiplexing (WDM) is the next step towards leveraging the capabilities of the optical fiber, especially for wide-area backbone networks. The ability to switch signals at intermediate nodes in a WDM network based on their wavelengths is known as wavelength routing. One of the greatest advantages of wavelength-routed WDM is the ability to create a virtual topology different from the physical topology of the underlying network. This virtual topology can be reconfigured when necessary, to improve …
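
A toy sketch of the virtual-topology idea, under the assumption that a lightpath can be reduced to an endpoint pair, a physical route, and a wavelength: the fiber graph stays fixed while reconfiguration changes the set of lightpaths, and hence the topology the packet layer sees. Node names, routes, and wavelengths below are illustrative, not data from the paper.

```python
# Physical topology (fiber spans) versus a virtual topology defined by lightpaths.
# All names, routes, and wavelengths are illustrative assumptions.
physical_links = {("A", "B"), ("B", "C"), ("C", "D"), ("A", "D")}   # fiber spans, fixed

# Each lightpath: (source, destination) -> (physical route, wavelength).
lightpaths = {
    ("A", "C"): (["A", "B", "C"], "w1"),   # passes through B transparently
    ("A", "D"): (["A", "D"], "w1"),
    ("C", "D"): (["C", "D"], "w2"),
}

def virtual_topology(lightpaths):
    """The virtual topology is simply the set of lightpath endpoint pairs."""
    return set(lightpaths)

def reconfigure(lightpaths, tear_down, new_route, new_wavelength):
    """Tear down one lightpath and set up another; only the virtual topology changes."""
    updated = dict(lightpaths)
    updated.pop(tear_down, None)
    updated[(new_route[0], new_route[-1])] = (new_route, new_wavelength)
    return updated

if __name__ == "__main__":
    print(sorted(virtual_topology(lightpaths)))
    after = reconfigure(lightpaths, tear_down=("A", "C"),
                        new_route=["B", "C", "D"], new_wavelength="w2")
    print(sorted(virtual_topology(after)))
```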


Prioritizing Test Cases For Regression Testing, Sebastian Elbaum, Alexey G. Malishevsky, Gregg Rothermel Jan 2000

CSE Technical Reports

Test case prioritization techniques schedule test cases in an order that increases their effectiveness in meeting some performance goal. One performance goal, rate of fault detection, is a measure of how quickly faults are detected within the testing process; an improved rate of fault detection can provide faster feedback on the system under test, and let software engineers begin locating and correcting faults earlier than might otherwise be possible. In previous work, we reported the results of studies that showed that prioritization techniques can significantly improve rate of fault detection. Those studies, however, raised several additional questions: …
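
In this line of work the rate of fault detection is often quantified with the APFD metric (average percentage of faults detected). The sketch below computes APFD for a given test ordering; the test-to-fault matrix is made-up illustrative data, not results from the report.

```python
# APFD for a test ordering: 1 - (TF_1 + ... + TF_m)/(n*m) + 1/(2n), where TF_i is
# the 1-based position of the first test that detects fault i. The fault matrix
# below is invented for illustration.

def apfd(ordering, faults_detected_by):
    """Average percentage of faults detected by the given test ordering."""
    n = len(ordering)
    faults = sorted({f for fs in faults_detected_by.values() for f in fs})
    m = len(faults)
    first_positions = []
    for fault in faults:
        positions = [i + 1 for i, test in enumerate(ordering)
                     if fault in faults_detected_by[test]]
        first_positions.append(min(positions))   # assumes every fault is detected by some test
    return 1 - sum(first_positions) / (n * m) + 1 / (2 * n)

if __name__ == "__main__":
    detects = {"t1": {"f1"}, "t2": {"f1", "f2", "f3"}, "t3": {"f2"}, "t4": set()}
    print(apfd(["t1", "t2", "t3", "t4"], detects))   # original order
    print(apfd(["t2", "t3", "t1", "t4"], detects))   # prioritized order finds faults sooner
```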


Disec: A Distributed Framework For Scalable Secure Many-To-Many Communication, Lakshminath R. Dondeti, Sarit Mukherjee, Ashok Samal Jan 2000

CSE Conference and Workshop Papers

Secure one-to-many multicasting has been a popular research area in the recent past. Secure many-to-many multicasting is becoming popular with applications such as private conferencing and distributed interactive simulation. Most of the existing secure multicasting protocols use a centralized group manager to enforce access control and for key distribution. In the presence of multiple senders it is desirable to delegate group management responsibility to all the senders. We propose a distributed group key management scheme to support secure many-to-many communication. We divide key distribution overhead evenly among the senders. Our protocol is scalable and places equal trust in all the …
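
A toy illustration of the load-division idea only, not DISEC's actual key management scheme: each receiver is assigned to one sender, which would then handle key updates for that receiver. The round-robin assignment rule is an assumption.

```python
# Divide key-distribution responsibility evenly among senders (illustrative only;
# the assignment rule is an assumption, not the protocol described in the paper).

def divide_key_distribution(receivers, senders):
    """Map each sender to the receivers it is responsible for, as evenly as possible."""
    assignment = {s: [] for s in senders}
    for i, receiver in enumerate(receivers):
        assignment[senders[i % len(senders)]].append(receiver)
    return assignment

if __name__ == "__main__":
    members = [f"member{i}" for i in range(10)]
    print(divide_key_distribution(members, ["senderA", "senderB", "senderC"]))
```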


Exploiting Don't Cares To Enhance Functional Tests, Mark W. Weiss, Sharad C. Seth, Shashank K. Mehta, Kent L. Einspahr Jan 2000

CSE Conference and Workshop Papers

In simulation-based design verification, deterministic or pseudo-random tests are used to check the functional correctness of a design. In this paper we present a technique for generating tests by specifying the don't-care inputs in the functional specifications so as to improve their coverage of both design errors and manufacturing faults. The don't cares are chosen to maximize sensitization of signals in the circuit. The tests generated in this way require only a fraction of the pseudo-exhaustive test patterns to achieve a high multiplicity of fault coverage.
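
A small sketch of the underlying idea, under two stated assumptions: the search over don't-care bits is exhaustive, and the scoring function standing in for "signals sensitized" is a toy placeholder rather than a circuit model. It is not the paper's algorithm.

```python
# Fill the 'X' (don't-care) bits of a functional test pattern so that a
# sensitization score is maximized. Exhaustive search and the toy score are
# illustrative assumptions, not the paper's algorithm.
from itertools import product

def fill_dont_cares(pattern, sensitization_score):
    """Try every assignment of the 'X' bits and keep the highest-scoring pattern."""
    x_positions = [i for i, bit in enumerate(pattern) if bit == "X"]
    best, best_score = None, float("-inf")
    for bits in product("01", repeat=len(x_positions)):
        candidate = list(pattern)
        for pos, bit in zip(x_positions, bits):
            candidate[pos] = bit
        score = sensitization_score("".join(candidate))
        if score > best_score:
            best, best_score = "".join(candidate), score
    return best

if __name__ == "__main__":
    # Toy score: count of 1s, standing in for signals sensitized in a real circuit model.
    print(fill_dont_cares("1X0X1", sensitization_score=lambda p: p.count("1")))
```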


Lsmac And Lsnat: Two Approaches For Cluster-Based Scalable Web Servers, Xuehong Gan, Trevor Schroeder, Steve Goddard, Byrav Ramamurthy Jan 2000

CSE Conference and Workshop Papers

Server responsiveness and scalability are more important than ever in today’s client/server dominated network environments. Recently, researchers have begun to consider cluster-based computers using commodity hardware as an alternative to expensive specialized hardware for building scalable Web servers. In this paper, we present performance results comparing two cluster-based Web servers based on different server infrastructures: MAC-based dispatching (LSMAC) and IP-based dispatching (LSNAT). Both cluster-based server systems were implemented as application-space programs running on commodity hardware. We point out the advantages and disadvantages of both systems. We also identify when servers should be clustered and when clustering will not improve performance.
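
A conceptual sketch of the difference between the two dispatching styles as we read the abstract: MAC-based dispatching retargets the client's packet at layer 2 (the cluster IP address is left untouched), while IP-based dispatching rewrites the destination IP itself. Field names and addresses are illustrative assumptions.

```python
# Two dispatching styles for a cluster front end, sketched on a dict "packet".
# Addresses and field names are illustrative assumptions.

def dispatch_mac(packet, server_mac):
    """MAC-based (LSMAC-style) dispatching: retarget the frame at layer 2 only."""
    forwarded = dict(packet)
    forwarded["dst_mac"] = server_mac        # cluster IP stays as the destination IP
    return forwarded

def dispatch_nat(packet, server_ip):
    """IP-based (LSNAT-style) dispatching: rewrite the destination IP address."""
    forwarded = dict(packet)
    forwarded["dst_ip"] = server_ip          # replies must be translated back to the cluster IP
    return forwarded

if __name__ == "__main__":
    request = {"src_ip": "10.0.0.7", "dst_ip": "192.0.2.10",    # 192.0.2.10 = cluster address
               "dst_mac": "aa:aa:aa:aa:aa:aa"}
    print(dispatch_mac(request, "bb:bb:bb:bb:bb:01"))
    print(dispatch_nat(request, "192.168.1.11"))
```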


Optical Communication Networks For The Next-Generation Internet, Arun K. Somani, Byrav Ramamurthy Jan 2000

School of Computing: Faculty Publications

Computer and telecommunication networks are changing the world dramatically and will continue to do so in the foreseeable future. The Internet, primarily based on packet switches, provides very flexible data services such as e-mail and access to the World Wide Web. The Internet is a variable-delay, variable-bandwidth network that provides no guarantee on quality of service (QoS) in its initial phase. New services are being added to the pure data delivery framework of yesterday. Such high demands on capacity could lead to a "bandwidth crunch" at the core wide-area network, resulting in degradation of service quality. Fortunately, technological innovations …