Open Access. Powered by Scholars. Published by Universities.®
Articles 31 - 60 of 79
Full-Text Articles in Computer Engineering
Generalized Construction Of Scalable Concurrent Data Structures Via Relativistic Programming, Josh Triplett, Paul E. Mckenney, Philip W. Howard, Jonathan Walpole
Computer Science Faculty Publications and Presentations
We present relativistic programming, a concurrent programming model based on shared addressing, which supports efficient, scalable operation on either uniform shared-memory or distributed shared-memory systems. Relativistic programming provides a strong causal ordering property, allowing a series of read operations to appear as an atomic transaction that occurs entirely between two ordered write operations. This preserves the simple immutable-memory programming model available via mutual exclusion or transactional memory. Furthermore, relativistic programming provides joint-access parallelism, allowing readers to run concurrently with a writer on the same data. We demonstrate a generalized construction technique for concurrent data structures based on relativistic programming, …
Scalable Correct Memory Ordering Via Relativistic Programming, Josh Triplett, Philip William Howard, Paul E. Mckenney, Jonathan Walpole
Computer Science Faculty Publications and Presentations
We propose and document a new concurrent programming model, relativistic programming. This model allows readers to run concurrently with writers, without blocking or using expensive synchronization. Relativistic programming builds on existing synchronization primitives that allow writers to wait for current readers to finish with minimal reader overhead. Our methodology models data structures as graphs, and reader algorithms as traversals of these graphs; from this foundation we show how writers can implement arbitrarily strong ordering guarantees for the visibility of their writes, up to and including total ordering.
Relativistic Red-Black Trees, Philip William Howard, Jonathan Walpole
Computer Science Faculty Publications and Presentations
Operating system performance and scalability on shared-memory many-core systems depend critically on efficient access to shared data structures. Scalability has proven difficult to achieve for many data structures. In this paper we present a novel and highly scalable concurrent red-black tree. Red-black trees are widely used in operating systems, but typically exhibit poor scalability. Our red-black tree has linear read scalability, uncontended read performance that is at least 25% faster than other known approaches, and deterministic lookup times for a given tree size, making it suitable for real-time applications.
Comparing Discrete Simulation And System Dynamics: Modeling An Anti-Insurgency Influence Operation, Wayne W. Wakeland, Una E. Medina
Wayne W. Wakeland
This paper contrasts the tradeoffs of modeling the same dynamic problem at a micro scale and at a macro scale of analysis: discrete system simulation (DS) versus continuous system simulation or system dynamics (SD). Both are employed to model the influence of entertainment education on terrorist system decay, with implications for field application. Each method optimizes different design, scope/scale, data availability/accuracy, parameter settings, and system sensitivities. Whether the research served by the computer model is applied or theoretical, DS tends to be useful for understanding low-level individual unit/step influences on system change over time, whereas SD tends to shine when …
System Dynamics Implementation Of An Extended Brander And Taylor-Like Easter Island Model, Takuro Uehara, Yoko Nagase, Wayne W. Wakeland
Wayne W. Wakeland
We provide a system dynamics implementation of a dynamic ecological economics model. Dynamic economic models are often constrained to use functions, such as the Cobb-Douglas function, chosen “conveniently” to allow for analytic solutions. The C-D function, however, suffers from a fixed elasticity that does not allow the substitutability between man-made capital and natural capital to change, which is vital for economic sustainability. Using system dynamics removes this constraint and enables more realistic ecological economics models containing functions not amenable to analytic solution. The base model is the natural resource and population growth model developed by Brander and Taylor (1998) …
Comparing Discrete Simulation And System Dynamics: Modeling An Anti-Insurgency Influence Operation, Wayne Wakeland, Una E. Medina
Systems Science Faculty Publications and Presentations
This paper contrasts the tradeoffs of modeling the same dynamic problem at a micro scale and at a macro scale of analysis: discrete system simulation (DS) versus continuous system simulation or system dynamics (SD). Both are employed to model the influence of entertainment education on terrorist system decay, with implications for field application. Each method optimizes different design, scope/scale, data availability/accuracy, parameter settings, and system sensitivities. Whether the research served by the computer model is applied or theoretical, DS tends to be useful for understanding low-level individual unit/step influences on system change over time, whereas SD tends to shine when …
System Dynamics Implementation Of An Extended Brander And Taylor-Like Easter Island Model, Takuro Uehara, Yoko Nagase, Wayne Wakeland
Systems Science Faculty Publications and Presentations
We provide a system dynamics implementation of a dynamic ecological economics model. Dynamic economic models are often constrained to use functions, such as the Cobb-Douglas function, chosen “conveniently” to allow for analytic solutions. The C-D function, however, suffers from a fixed elasticity that does not allow the substitutability between man-made capital and natural capital to change, which is vital for economic sustainability. Using system dynamics removes this constraint and enables more realistic ecological economics models containing functions not amenable to analytic solution. The base model is the natural resource and population growth model developed by Brander and Taylor (1998) …
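In the spirit of the model family described (though not the paper's exact extended equations), a resource-and-population system dynamics model can be Euler-stepped in a few lines. The functional forms and every constant below are generic illustrations, not the authors' parameterization:

```python
# A minimal Euler-stepped sketch in the Brander-Taylor spirit: a logistic
# renewable resource S harvested by a population L whose growth tracks
# per-capita harvest. All forms and constants are illustrative, not the
# extended model implemented in the paper.

def step(S, L, dt=0.1, r=0.04, K=12000.0, a=0.00001, phi=4.0, d=0.1):
    harvest = a * S * L                  # harvest rises with stock and labor
    dS = r * S * (1 - S / K) - harvest   # logistic regrowth minus harvest
    dL = L * (phi * a * S - d)           # fertility tied to per-capita harvest
    return S + dt * dS, L + dt * dL

S, L = 12000.0, 40.0    # resource at carrying capacity, small founding population
peak_L, min_S = L, S
for _ in range(20000):  # 2000 time units of Euler steps
    S, L = step(S, L)
    peak_L, min_S = max(peak_L, L), min(min_S, S)

# the classic overshoot-and-decline: population booms while the resource is drawn down
assert peak_L > 500 and min_S < 8000
```

A system dynamics tool replaces these hand-written difference equations with stocks and flows, which is what frees the modeler from analytically convenient functional forms.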
Is Parallel Programming Hard, And If So, Why?, Paul E. Mckenney, Maged M. Michael, Manish Gupta, Philip William Howard, Josh Triplett, Jonathan Walpole
Computer Science Faculty Publications and Presentations
Of the 200+ parallel-programming languages and environments created in the 1990s, almost all are now defunct. Given that parallel systems are now well within the budget of the typical hobbyist or graduate student, it is not unreasonable to expect a new cohort in excess of several thousand parallel languages and environments to appear in the 2010s. If this expected new cohort is to have more practical impact than did its 1990s counterpart, a robust and widely applicable framework will be required that encompasses exactly what, if anything, is hard about parallel programming. This paper revisits the fundamental precepts of concurrent …
Dynamic Task Prediction For An Spmt Architecture Based On Control Independence, Komal Jothi
Dissertations and Theses
Extracting better performance from computer programs translates to finding more instructions to execute in parallel. Since most general-purpose programs are written in an imperatively sequential manner, closely spaced instructions are always data dependent, forcing the designer to look far ahead into the program for parallelism. This necessitates wider superscalar processors with larger instruction windows. But superscalars suffer from three key limitations: their inability to scale, the sequential fetch bottleneck, and the high branch misprediction penalty. Recent studies indicate that current superscalars have reached the end of the road and designers will have to look for newer ideas to build computer processors.
Speculative …
What Is Rcu, Fundamentally?, Paul E. Mckenney, Jonathan Walpole
Computer Science Faculty Publications and Presentations
Read-copy update (RCU) is a synchronization mechanism that was added to the Linux kernel in October of 2002. RCU achieves scalability improvements by allowing reads to occur concurrently with updates. In contrast with conventional locking primitives that ensure mutual exclusion among concurrent threads regardless of whether they be readers or updaters, or with reader-writer locks that allow concurrent reads but not in the presence of updates, RCU supports concurrency between a single updater and multiple readers. RCU ensures that reads are coherent by maintaining multiple versions of objects and ensuring that they are not freed up until all pre-existing read-side …
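The multiple-versions idea at RCU's core can be illustrated with a toy, single-threaded model. The class below is a conceptual sketch of the read/publish/grace-period pattern only; it is not the Linux kernel API, and every name in it is invented for illustration:

```python
# Toy model of RCU's core idea: a reader pins whatever version is current
# when it enters its read-side critical section and keeps using it even
# while a writer publishes a new version; old versions are reclaimed only
# once no reader can still hold them. Illustrative sketch, not the kernel API.

class ToyRCU:
    def __init__(self, initial):
        self.current = initial     # the published version
        self.active = {}           # reader id -> version pinned at read_lock
        self.retired = []          # superseded versions awaiting reclamation
        self.next_reader = 0

    def read_lock(self):
        rid = self.next_reader
        self.next_reader += 1
        self.active[rid] = self.current   # pin the version current right now
        return rid

    def dereference(self, rid):
        return self.active[rid]

    def read_unlock(self, rid):
        del self.active[rid]

    def update(self, new_version):
        old = self.current
        self.current = new_version        # publish: new readers see this
        self.retired.append(old)          # old stays alive for existing readers

    def synchronize(self):
        # "Grace period": in this toy model, retired versions may be freed
        # only when no reader is active at all.
        if not self.active:
            reclaimed, self.retired = self.retired, []
            return reclaimed
        return []

rcu = ToyRCU({"key": 1})
r = rcu.read_lock()
rcu.update({"key": 2})                    # update concurrent with the reader
assert rcu.dereference(r) == {"key": 1}   # existing reader still sees old version
assert rcu.synchronize() == []            # cannot reclaim: reader still active
rcu.read_unlock(r)
assert rcu.synchronize() == [{"key": 1}]  # now the old version is reclaimed
```

The real mechanism achieves the same effect with pointer publication and per-CPU quiescent-state tracking rather than explicit reader registration.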
Directflow: A Domain-Specific Language For Information-Flow Systems, Andrew P. Black, Chuan-Kai Lin
Computer Science Faculty Publications and Presentations
Programs that process streams of information are commonly built by assembling reusable information-flow components. In some systems the components must be chosen from a pre-defined set of primitives; in others the programmer can create new custom components using a general-purpose programming language. Neither approach is ideal: restricting programmers to a set of primitive components limits the expressivity of the system, while allowing programmers to define new components in a general-purpose language makes it difficult or impossible to reason about the composite system. We advocate defining information-flow components in a domain-specific language (DSL) that enables us to infer the properties of …
The Case For Thoroughly Testing Complex System Dynamic Models
Wayne W. Wakeland
In order to determine whether model testing is as useful as suggested by modeling experts, the full battery of model tests recommended by Forrester, Senge, Sterman, and others was applied retrospectively to a complex, previously published system dynamics model. The time required to carry out each type of test was captured, and the benefits that resulted from applying each test were determined subjectively. The resulting benefit-to-cost ratios are reported. These ratios suggest that rather than focusing primarily on sensitivity testing, modelers should consider other types of model tests such as extreme condition tests and family member tests. The study …
The Case For Thoroughly Testing Complex System Dynamic Models, Wayne Wakeland, Megan Hoarfrost
Systems Science Faculty Publications and Presentations
In order to determine whether model testing is as useful as suggested by modeling experts, the full battery of model tests recommended by Forrester, Senge, Sterman, and others was applied retrospectively to a complex, previously published system dynamics model. The time required to carry out each type of test was captured, and the benefits that resulted from applying each test were determined subjectively. The resulting benefit-to-cost ratios are reported. These ratios suggest that rather than focusing primarily on sensitivity testing, modelers should consider other types of model tests such as extreme condition tests and family member tests. The study …
Using Dynamic Optimization For Control Of Real Rate Cpu Resource Management Applications, Varin Vahia, Ashvin Goel, David Steere, Jonathan Walpole, Molly H. Shor
Computer Science Faculty Publications and Presentations
In this paper we design a proportional-period optimal controller for allocating CPU to real-rate multimedia applications on a general-purpose computer system. We cast this computer system problem into state-space form. We design a controller based on dynamic-optimization LQR tracking techniques to minimize short-term and long-term time deviation from the current timestamp, as well as CPU usage. Preliminary results on an experimental setup are encouraging.
Adaptive Live Video Streaming By Priority Drop, Jie Huang, Charles Krasic, Jonathan Walpole
Computer Science Faculty Publications and Presentations
In this paper we explore the use of Priority-progress streaming (PPS) for video surveillance applications. PPS is an adaptive streaming technique for the delivery of continuous media over variable bit-rate channels. It is based on the simple idea of reordering media components within a time window into priority order before transmission. The main concern when using PPS for live video streaming is the time delay introduced by reordering. In this paper we describe how PPS can be extended to support live streaming and show that the delay inherent in the approach can be tuned to satisfy a wide range of …
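The reordering step at the heart of priority-progress streaming can be sketched in a few lines. The unit format and the byte-budget channel model below are illustrative assumptions, not the paper's implementation:

```python
# Sketch of the PPS core step: within one time window, reorder media units
# into priority order, send as many as the channel budget allows, and drop
# the rest. Unit tuples and the budget model are invented for illustration.

def pps_window(units, budget_bytes):
    """units: list of (priority, size_bytes, label); higher priority sent first."""
    sent, dropped, used = [], [], 0
    for prio, size, label in sorted(units, key=lambda u: -u[0]):
        if used + size <= budget_bytes:
            sent.append(label)
            used += size
        else:
            dropped.append(label)      # adaptation: low priority is dropped
    return sent, dropped

window = [(1, 400, "enh-1"), (3, 600, "base-I"), (2, 500, "base-P")]
sent, dropped = pps_window(window, budget_bytes=1200)
assert sent == ["base-I", "base-P"]    # highest-priority units fit the budget
assert dropped == ["enh-1"]            # lowest priority dropped, not delayed
```

The delay concern discussed in the abstract falls out of this structure: no unit can be sent until its whole window has been collected and reordered, so the window length bounds the added latency.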
A Lessons Learned Repository For Computer Forensics, Warren Harrison, George Heuston, Mark Morrissey, David Aucsmith, Sarah Mocas, Steve Russelle
Computer Science Faculty Publications and Presentations
The Law Enforcement community possesses a large, but informal, community memory with respect to digital forensics. Large, because the experiences of every forensics technician and investigator contribute to the whole. Informal, because there is seldom an explicit mechanism for disseminating this wisdom except “over the water cooler”. As a consequence, the same problems and mistakes continue to resurface and the same solutions are re-invented. In order to better exploit this informal collection of wisdom, the key points of each experience can be placed into a Repository for later dissemination. We describe a web-based Lessons Learned Repository (LLR) that facilitates contribution …
Thread Transparency In Information Flow Middleware, Rainer Koster, Andrew P. Black, Jie Huang, Jonathan Walpole, Calton Pu
Computer Science Faculty Publications and Presentations
Existing middleware is based on control-flow-centric interaction models such as remote method invocations, which poorly match the structure of applications that process continuous information flows. Difficulties in building this kind of application on conventional platforms include flow-specific concurrency and timing requirements, necessitating explicit management of threads, synchronization, and timing by the application programmer. We propose Infopipes as a high-level abstraction for information flows, and we are developing a middleware framework that supports this abstraction. Infopipes transparently handle complexities associated with control flow and multi-threading. From high-level configuration descriptions the platform determines what parts of a pipeline require separate threads or …
Reifying Communication At The Application Level, Andrew P. Black, Jie Huang, Jonathan Walpole
Computer Science Faculty Publications and Presentations
Middleware, from the earliest RPC systems to recent Object-Oriented Remote Message Sending (RMS) systems such as Java RMI and CORBA, claims transparency as one of its main attributes. Coulouris et al. define transparency as “the concealment from the … application programmer of the separation of components in a distributed system.” They go on to identify eight different kinds of transparency.
We considered titling this paper “Transparency Considered Harmful”, but that title is misleading because it implies that all kinds of transparency are bad. This is not our view. Rather, we believe that the choice of which transparencies should be offered …
Rate-Matching Packet Scheduler For Real-Rate Applications, Kang Li, Jonathan Walpole, Dylan Mcnamee, Calton Pu, David Steere
Computer Science Faculty Publications and Presentations
A packet scheduler is an operating system component that controls the allocation of network interface bandwidth to outgoing network flows. By deciding which packet to send next, packet schedulers not only determine how bandwidth is shared among flows, but also play a key role in determining the rate and timing behavior of individual flows. The recent explosion of rate and timing-sensitive flows, particularly in the context of multimedia applications, has focused new interest on packet schedulers. Next generation packet schedulers must not only ensure separation among flows and meet real-time performance constraints, they must also support dynamic fine-grain reallocation of …
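To make the bandwidth-sharing role of a packet scheduler concrete, here is a sketch using deficit round robin, a standard textbook scheduling technique. The paper's rate-matching scheduler is not necessarily DRR, and the flow setup below is invented:

```python
# Deficit round robin: each flow accumulates a per-round byte quantum
# ("deficit") and may send packets whose sizes fit within it, so flows
# share link bandwidth in proportion to their quanta. Illustrative only;
# not the scheduler described in the paper.

from collections import deque

def drr(flows, quanta, rounds):
    """flows: dict name -> deque of packet sizes; quanta: name -> bytes/round."""
    deficits = {f: 0 for f in flows}
    order = []                          # which flow each transmitted packet came from
    for _ in range(rounds):
        for f in flows:
            deficits[f] += quanta[f]
            while flows[f] and flows[f][0] <= deficits[f]:
                deficits[f] -= flows[f][0]
                flows[f].popleft()
                order.append(f)
    return order

flows = {"video": deque([500, 500, 500, 500]), "audio": deque([100, 100])}
order = drr(flows, quanta={"video": 1000, "audio": 100}, rounds=2)
# video's quantum is 10x audio's, so it sends 10x the bytes per round
assert order == ["video", "video", "audio", "video", "video", "audio"]
```

Choosing the next packet this way fixes each flow's long-term rate; the timing-sensitive extensions the abstract calls for concern how quickly such quanta can be reassigned.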
Application Of Control Theory To Modeling And Analysis Of Computer Systems, Molly H. Shor, Kang Li, Jonathan Walpole, David Steere, Calton Pu
Computer Science Faculty Publications and Presentations
Experimentally, we show that Transmission Control Protocol (TCP)’s congestion control algorithm results in dynamic behavior similar to a stable limit cycle (attractor) when data from a TCP flow enters a fixed-size buffer and is removed from the buffer at a fixed service rate. This setup represents how TCP buffers packets for transmission onto the network, with the network represented by a fixed-size buffer with a fixed service rate. The closed trajectory may vary slightly from period to period due to the discrete nature of computer systems. The size of the closed trajectory is a function of the network’s buffer size …
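The limit-cycle behavior described above can be reproduced with a toy additive-increase/multiplicative-decrease simulation. All constants are illustrative, and the model is far simpler than real TCP:

```python
# Toy AIMD source feeding a fixed-size buffer drained at a fixed rate:
# the window grows additively until the buffer overflows, then halves.
# The (window, occupancy) state settles into a repeating closed orbit,
# i.e., limit-cycle-like behavior. Constants are invented for illustration.

def simulate(steps, buf_size=50, service=5, incr=2):
    occupancy, window = 0, 10
    trace = []
    for _ in range(steps):
        occupancy += window               # sender emits one window of data
        if occupancy > buf_size:          # overflow signals congestion
            occupancy = buf_size
            window = max(1, window // 2)  # multiplicative decrease
        else:
            window += incr                # additive increase
        occupancy = max(0, occupancy - service)
        trace.append((window, occupancy))
    return trace

tail = simulate(200)[-40:]
# after a transient, the trajectory repeats exactly in this integer model
period = next(p for p in range(1, 20) if tail[:20] == tail[p:20 + p])
assert all(tail[i] == tail[i + period] for i in range(20))
```

In the continuous, noisy setting of the paper the orbit is only approximately closed, which is why the abstract notes small period-to-period variation.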
Aspects Of Information Flow, Andrew P. Black, Jonathan Walpole
Computer Science Faculty Publications and Presentations
Along with our colleagues at the Oregon Graduate Institute and Georgia Institute of Technology, we have recently been experimenting with real-rate systems, that is, systems that are required to move data from one place to another at defined rates, such as 30 items per second. Audio conferencing or streaming video systems are typical: they are required to deliver video or audio frames from a source (a server or file system) in one place to a sink (a display or a sound generator) in another; the frames must arrive periodically, with constrained latency and jitter. We have successfully built such systems …
Work In Progress: Automating Proportion/Period Scheduling, David Steere, Jonathan Walpole, Calton Pu
Computer Science Faculty Publications and Presentations
The recent effort to define middleware capable of supporting real-time applications creates the opportunity to raise the level of abstraction presented to the programmer. We propose that proportion/period is a better abstraction for specifying resource needs and allocation than priorities. We are currently investigating techniques to address some issues that restrict the use of proportion/period scheduling to research real-time prototypes. In particular, we are investigating techniques that automate the task of selecting proportion and period, and that allow proportion/period scheduling to incorporate job importance under overload conditions.
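As a rough illustration of the proportion/period abstraction, the sketch below models each reservation as a (proportion, period) pair, with a trivial admission test and an importance-weighted rescale under overload. The functions and constants are hypothetical stand-ins, not the techniques under investigation:

```python
# Proportion/period sketch: each job reserves a CPU fraction (proportion)
# delivered every period. A minimal admission test, plus an importance-
# weighted squeeze for overload. Illustrative only.

def admit(reservations):
    """reservations: dict job -> (proportion, period_ms).
    Feasible iff the proportions sum to at most the whole CPU."""
    return sum(p for p, _ in reservations.values()) <= 1.0

def squeeze(reservations, importance):
    """Under overload, give each job a proportion equal to its share of
    total importance, so the schedule becomes feasible again."""
    total = sum(importance.values())
    return {job: (importance[job] / total, period)
            for job, (_, period) in reservations.items()}

jobs = {"video": (0.6, 33), "audio": (0.3, 10), "build": (0.4, 100)}
assert not admit(jobs)                       # 1.3 > 1.0: overloaded
scaled = squeeze(jobs, {"video": 3, "audio": 2, "build": 1})
assert admit(scaled)
assert scaled["video"] == (0.5, 33)          # importance 3/6 of the CPU
```

The harder problems the abstract points at, choosing the period itself and doing so automatically, sit on top of this basic representation.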
Qos Scalability For Streamed Media Delivery, Charles Krasic, Jonathan Walpole
Computer Science Faculty Publications and Presentations
Applications with real-rate progress requirements, such as media-streaming systems, are difficult to deploy in shared heterogeneous environments such as the Internet. On the Internet, media-streaming systems must be capable of trading off resource requirements against the quality of the media streams they deliver, in order to match wide-ranging dynamic variations in bandwidth between servers and clients. Since quality requirements tend to be user- and task-specific, mechanisms for capturing quality of service requirements and mapping them to appropriate resource-level adaptation policies are required. In this paper, we describe a general approach for automatically mapping user-level quality of service specifications onto resource …
Fine-Grain Period Adaptation In Soft Real-Time Environments, David Steere, Joshua Gruenberg, Dylan Mcnamee, Calton Pu, Jonathan Walpole
Computer Science Faculty Publications and Presentations
Reservation-based scheduling delivers a proportion of the CPU to jobs over a period of time. In this paper we argue that automatically determining and assigning this period is both possible and useful in general purpose soft real-time environments such as personal computers and information appliances. The goal of period adaptation is to select the period over which a job is guaranteed to receive its portion of the CPU dynamically and automatically. The choice of period represents a trade-off between the amount of jitter observed by the job and the overall efficiency of the system. Secondary effects of period include quantization …
Adaptive Resource Management Via Modular Feedback Control, Ashvin Goel, David Steere, Calton Pu, Jonathan Walpole
Computer Science Faculty Publications and Presentations
A key feature of tomorrow’s operating systems and runtime environments is their ability to adapt. Current state of the art uses an ad-hoc approach to building adaptive software, resulting in systems that can be complex, unpredictable and brittle. We advocate a modular and methodical approach for building adaptive system software based on feedback control. The use of feedback allows a system to automatically adapt to dynamically varying environments and loads, and allows the system designer to utilize the substantial body of knowledge in other engineering disciplines for building adaptive systems. We have developed a toolkit called SWiFT that embodies this …
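The feedback-control approach can be illustrated with a minimal proportional-integral loop that steers a buffer toward a target fill level by adjusting a producer's rate. This is a generic control sketch with invented constants, not the SWiFT toolkit's API:

```python
# Minimal feedback loop: each tick, a PI controller observes the error
# between a target fill level and the measured level, and adjusts the
# producer's fill rate; the buffer drains at a fixed rate. All names and
# gains are illustrative, not part of SWiFT.

def run_pi(ticks, target=50, kp=0.4, ki=0.05, drain=10.0):
    level, integral = 0.0, 0.0
    for _ in range(ticks):
        error = target - level
        integral += error                            # accumulate residual error
        rate = max(0.0, kp * error + ki * integral)  # actuator: fill rate
        level = max(0.0, level + rate - drain)       # plant: buffer dynamics
    return level

assert abs(run_pi(500) - 50) < 1.0   # the loop converges to the target level
```

The modularity argument in the abstract is that sensors, controllers, and actuators like these can be composed and swapped independently, with stability reasoned about using standard control theory rather than ad-hoc tuning.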
The Judging Process For SymBowl: A High School System Dynamics Modeling Competition, Wayne W. Wakeland
Wayne W. Wakeland
This “paper” describes the judging process used to determine the winners in SymBowl, a high school system dynamics modeling competition held in Portland, Oregon, over the past three years. SymBowl was created by Ed Gallaher, a medical researcher at the Portland VA Hospital and Associate Professor at Oregon Health Sciences University.
The judging criteria and judging process were developed by Wakeland, who has served as the judging coordinator for the past three years, overseeing the process, compiling results, etc. Wakeland is an Adjunct Professor of Systems Science at Portland State University, where he teaches graduate-level modeling and simulation classes.
For SymBowl 98, …
Quality Of Service Semantics For Multimedia Database Systems, Jonathan Walpole, Charles Krasic, Ling Liu, David Maier, Calton Pu, Dylan Mcnamee, David Steere
Computer Science Faculty Publications and Presentations
Quality of service (QoS) support has been a hot research topic in multimedia databases, and multimedia systems in general, for the past several years. However, there remains little consensus on how QoS support should be provided. At the resource-management level, systems designers are still debating the suitability of reservation-based versus adaptive QoS management. The design of higher system layers is less clearly understood, and the specification of QoS requirements in domain-specific terms is still an open research topic. To address these issues, we propose a QoS model for multimedia databases. The model covers the specification of user-level QoS preferences …
Adaptation Space: Surviving Non-Maskable Failures, Crispin Cowan, Lois Delcambre, Anne-Francoise Le Meur, Ling Liu, David Maier, Dylan Mcnamee, Michael Miller, Calton Pu, Perry Wagle, Jonathan Walpole
Computer Science Faculty Publications and Presentations
Some failures cannot be masked by redundancies, because an unanticipated situation occurred, because fault-tolerance measures were not adequate, or because there was a security breach (which is not amenable to replication). Applications that wish to continue to offer some service despite non-maskable failure must adapt to the loss of resources. When numerous combinations of non-maskable failure modes are considered, the set of possible adaptations becomes complex. This paper presents adaptation spaces, a formalism for navigating among combinations of adaptations. An adaptation space describes a collection of possible adaptations of a software component or system, and provides a uniform way of …
Stackguard: Automatic Adaptive Detection And Prevention Of Buffer-Overflow Attacks, Crispin Cowan, Calton Pu, David Maier, Heather Hinton, Jonathan Walpole, Peat Bakke, Steve Beattie, Aaron Grier, Perry Wagle, Qian Zhang
Computer Science Faculty Publications and Presentations
This paper presents a systematic solution to the persistent problem of buffer overflow attacks. Buffer overflow attacks gained notoriety in 1988 as part of the Morris Worm incident on the Internet. While it is fairly simple to fix individual buffer overflow vulnerabilities, buffer overflow attacks continue to this day. Hundreds of attacks have been discovered, and while most of the obvious vulnerabilities have now been patched, more sophisticated buffer overflow attacks continue to emerge.
We describe StackGuard: a simple compiler technique that virtually eliminates buffer overflow vulnerabilities with only modest performance penalties. Privileged programs that are recompiled with the StackGuard …
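The canary idea behind StackGuard can be modeled abstractly: a known random word sits between the buffer and the saved return address, so a linear overflow must corrupt the canary before reaching the return address, and a check on function exit detects the attack. The sketch below only models the stack layout in Python; StackGuard itself is a compiler change that emits this check in function epilogues, and all names here are invented:

```python
# Conceptual model of a stack canary: [ buffer | canary | return address ].
# A contiguous overflow that reaches the return address must first
# overwrite the canary, so verifying the canary before returning detects
# the smash. Illustrative layout only, not StackGuard's implementation.

import os

CANARY = os.urandom(4)   # random canary, unknown to the attacker

def make_frame(buf_size):
    return bytearray(buf_size) + bytearray(CANARY) + bytearray(b"RETADDR!")

def strcpy(frame, data):
    frame[:len(data)] = data             # unchecked copy, like C's strcpy

def canary_intact(frame, buf_size):
    return bytes(frame[buf_size:buf_size + 4]) == CANARY

frame = make_frame(16)
strcpy(frame, b"A" * 12)                 # normal input: fits in the buffer
assert canary_intact(frame, 16)

frame = make_frame(16)
strcpy(frame, b"A" * 28)                 # overflow reaching the return address
assert not canary_intact(frame, 16)      # corrupted canary: abort, don't return
```

Randomizing the canary is what makes the check "adaptive" to attackers who would otherwise include the expected canary bytes in their payload.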
A Player For Adaptive Mpeg Video Streaming Over The Internet, Jonathan Walpole, Rainer Koster, Shanwei Cen, Crispin Cowan, David Maier, Dylan Mcnamee, Calton Pu, David Steere, Liujin Yu
Computer Science Faculty Publications and Presentations
This paper describes the design and implementation of a real-time, streaming, Internet video and audio player. The player has a number of advanced features including dynamic adaptation to changes in available bandwidth, latency and latency variation; a multi-dimensional media scaling capability driven by user-specified quality of service (QoS) requirements; and support for complex content comprising multiple synchronized video and audio streams. The player was developed as part of the QUASAR project at Oregon Graduate Institute, is freely available, and serves as a testbed for research in adaptive resource management and QoS control.