Open Access. Powered by Scholars. Published by Universities.®

Computer Engineering Commons

Portland State University

Articles 31 - 60 of 112

Full-Text Articles in Computer Engineering

Equivalence Checking For High-Assurance Behavioral Synthesis, Kecheng Hao Jun 2013

Equivalence Checking For High-Assurance Behavioral Synthesis, Kecheng Hao

Dissertations and Theses

The rapidly increasing complexities of hardware designs are forcing design methodologies and tools to move to the Electronic System Level (ESL), a higher abstraction level with better productivity than the state-of-the-art Register Transfer Level (RTL). Behavioral synthesis, which automatically synthesizes ESL behavioral specifications to RTL implementations, plays a central role in this transition. However, since behavioral synthesis is a complex and error-prone translation process, the lack of designers' confidence in its correctness becomes a major barrier to its wide adoption. Therefore, techniques for establishing equivalence between an ESL specification and its synthesized RTL implementation are critical to bring behavioral synthesis …


Measurement And Modeling Of Passive Surface Mount Devices On FR4 Substrates, Rahulkumar Sadanand Koche Jan 2012

Measurement And Modeling Of Passive Surface Mount Devices On FR4 Substrates, Rahulkumar Sadanand Koche

Dissertations and Theses

Passive components such as resistors, capacitors, and inductors are used in every electronic system. These basic components affect system performance at higher frequencies, so it is necessary to understand and model their behavior accurately. This work focuses on using Printed Circuit Board (PCB) test boards, or fixtures, made of FR4 to characterize Surface Mount Device (SMD) components. Agilent's Advanced Design System (ADS) microwave circuit simulation software was used to design the microstrip transmission lines and to generate the layout for manufacturing the PCB. SMD resistors, capacitors …


Resizable, Scalable, Concurrent Hash Tables, Josh Triplett, Paul E. McKenney, Jonathan Walpole Jun 2011

Resizable, Scalable, Concurrent Hash Tables, Josh Triplett, Paul E. McKenney, Jonathan Walpole

Computer Science Faculty Publications and Presentations

We present algorithms for shrinking and expanding a hash table while allowing concurrent, wait-free, linearly scalable lookups. These resize algorithms allow the hash table to maintain constant-time performance as the number of entries grows, and reclaim memory as the number of entries decreases, without delaying or disrupting readers.

We implemented our algorithms in the Linux kernel to test their performance and scalability. Benchmarks show lookup scalability improved 125x over reader-writer locking, and 56% over the current state of the art for Linux, with no performance degradation for lookups during a resize.

To achieve this performance, this hash table implementation uses a new concurrent …
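
The lookups described above take no locks on the read side. As a rough illustration of that read-side structure only (not the paper's resize algorithm), the following C sketch shows a chained hash table whose lookups issue only loads: nodes are published with release stores and traversed with acquire loads. The names ht_lookup and ht_insert are invented, a single writer is assumed, and memory reclamation is omitted entirely; in the real implementation, RCU grace periods make it safe to free nodes and to swap bucket arrays during a resize.

    /* Minimal sketch: lock-free lookups over a chained hash table.
     * Illustrative only; the real implementation uses RCU so that buckets
     * can be resized and old nodes freed safely. This sketch never frees. */
    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NBUCKETS 8

    struct node {
        long key;
        long value;
        struct node *_Atomic next;   /* readers follow this without locks */
    };

    static struct node *_Atomic buckets[NBUCKETS];

    /* Read side: no locks, just acquire loads while walking the chain. */
    static struct node *ht_lookup(long key)
    {
        struct node *n = atomic_load_explicit(&buckets[key % NBUCKETS],
                                              memory_order_acquire);
        while (n) {
            if (n->key == key)
                return n;
            n = atomic_load_explicit(&n->next, memory_order_acquire);
        }
        return NULL;
    }

    /* Write side (single writer assumed): initialize the node fully, then
     * publish it with a release store so readers never see a half-built node. */
    static void ht_insert(long key, long value)
    {
        struct node *n = malloc(sizeof(*n));
        n->key = key;
        n->value = value;
        atomic_store_explicit(&n->next,
                              atomic_load_explicit(&buckets[key % NBUCKETS],
                                                   memory_order_relaxed),
                              memory_order_relaxed);
        atomic_store_explicit(&buckets[key % NBUCKETS], n, memory_order_release);
    }

    int main(void)
    {
        ht_insert(42, 4200);
        struct node *n = ht_lookup(42);
        if (n)
            printf("42 -> %ld\n", n->value);
        return 0;
    }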


Resizable, Scalable, Concurrent Hash Tables Via Relativistic Programming, Josh Triplett, Paul E. McKenney, Jonathan Walpole Jun 2011

Resizable, Scalable, Concurrent Hash Tables Via Relativistic Programming, Josh Triplett, Paul E. McKenney, Jonathan Walpole

Computer Science Faculty Publications and Presentations

Presentation focusing on software synchronization, thread locking, transactional memory, and relativistic programming. Hash table algorithms are presented with examples of relativistic list insertion and removal, and related data structures. Existing approaches are compared with new methodologies, and future work with relativistic data structures is discussed.


A Relativistic Enhancement To Software Transactional Memory, Philip William Howard, Jonathan Walpole May 2011

A Relativistic Enhancement To Software Transactional Memory, Philip William Howard, Jonathan Walpole

Computer Science Faculty Publications and Presentations

Relativistic Programming is a technique that allows low overhead, linearly-scalable concurrent reads. It also allows joint access parallelism between readers and a writer. Unfortunately, it has so far been limited to a single writer, so it does not scale on the write side. Software Transactional Memory (STM) is a technique that allows programs to take advantage of disjoint access parallelism on both the read side and the write side. Unfortunately, STM systems have higher overhead than many other synchronization mechanisms, so although STM scales, it starts from a lower baseline. We propose combining relativistic programming and software transactional memory in a way …


Generalized Construction Of Scalable Concurrent Data Structures Via Relativistic Programming, Josh Triplett, Paul E. McKenney, Philip W. Howard, Jonathan Walpole Mar 2011

Generalized Construction Of Scalable Concurrent Data Structures Via Relativistic Programming, Josh Triplett, Paul E. McKenney, Philip W. Howard, Jonathan Walpole

Computer Science Faculty Publications and Presentations

We present relativistic programming, a concurrent programming model based on shared addressing, which supports efficient, scalable operation on either uniform shared-memory or distributed shared-memory systems. Relativistic programming provides a strong causal ordering property, allowing a series of read operations to appear as an atomic transaction that occurs entirely between two ordered write operations. This preserves the simple immutable-memory programming model available via mutual exclusion or transactional memory. Furthermore, relativistic programming provides joint-access parallelism, allowing readers to run concurrently with a writer on the same data. We demonstrate a generalized construction technique for concurrent data structures based on relativistic programming, …


Scalable Correct Memory Ordering Via Relativistic Programming, Josh Triplett, Philip William Howard, Paul E. McKenney, Jonathan Walpole Mar 2011

Scalable Correct Memory Ordering Via Relativistic Programming, Josh Triplett, Philip William Howard, Paul E. McKenney, Jonathan Walpole

Computer Science Faculty Publications and Presentations

We propose and document a new concurrent programming model, relativistic programming. This model allows readers to run concurrently with writers, without blocking or using expensive synchronization. Relativistic programming builds on existing synchronization primitives that allow writers to wait for current readers to finish with minimal reader overhead. Our methodology models data structures as graphs, and reader algorithms as traversals of these graphs; from this foundation we show how writers can implement arbitrarily strong ordering guarantees for the visibility of their writes, up to and including total ordering.
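
To make the write-side ordering idea concrete, here is a minimal C sketch under assumptions not taken from the paper: a writer publishes update A, waits for all pre-existing readers to finish, and only then publishes update B, so no reader can observe B without also observing A. The wait_for_readers() function is a toy stand-in for a grace-period primitive such as the Linux kernel's synchronize_rcu(), and the global reader counter is purely illustrative.

    /* Sketch of the write-side ordering idiom: publish A, wait for all
     * pre-existing readers, then publish B. Any reader that can observe B
     * therefore began after A was already published. */
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int readers;     /* number of in-flight read-side sections */
    static atomic_int a, b;        /* two writes whose visibility we order   */

    static void read_lock(void)   { atomic_fetch_add(&readers, 1); }
    static void read_unlock(void) { atomic_fetch_sub(&readers, 1); }

    /* Toy grace period: spin until every active reader has finished.
     * Real implementations avoid a global counter; this is illustrative. */
    static void wait_for_readers(void)
    {
        while (atomic_load(&readers) != 0)
            ;  /* spin */
    }

    static void writer(void)
    {
        atomic_store(&a, 1);       /* update A */
        wait_for_readers();        /* ordering point (grace period) */
        atomic_store(&b, 1);       /* update B */
    }

    static void reader(void)
    {
        read_lock();
        int saw_a = atomic_load(&a);
        int saw_b = atomic_load(&b);
        read_unlock();
        /* The grace period rules out observing B without A. */
        printf("reader observed a=%d b=%d\n", saw_a, saw_b);
    }

    int main(void)
    {
        reader();    /* observes a=0 b=0 */
        writer();
        reader();    /* observes a=1 b=1 */
        return 0;
    }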


Relativistic Red-Black Trees, Philip William Howard, Jonathan Walpole Jan 2011

Relativistic Red-Black Trees, Philip William Howard, Jonathan Walpole

Computer Science Faculty Publications and Presentations

Operating system performance and scalability on shared-memory many-core systems depends critically on efficient access to shared data structures. Scalability has proven difficult to achieve for many data structures. In this paper we present a novel and highly scalable concurrent red-black tree. Red-black trees are widely used in operating systems, but typically exhibit poor scalability. Our red-black tree has linear read scalability, uncontended read performance that is at least 25% faster than other known approaches, and deterministic lookup times for a given tree size, making it suitable for real-time applications.


The Ordering Requirements Of Relativistic And Reader-Writer Locking Approaches To Shared Data Access, Philip William Howard, Josh Triplett, Jonathan Walpole, Paul E. McKenney Jan 2011

The Ordering Requirements Of Relativistic And Reader-Writer Locking Approaches To Shared Data Access, Philip William Howard, Josh Triplett, Jonathan Walpole, Paul E. McKenney

Computer Science Faculty Publications and Presentations

The semantics of reader-writer locks allow read-side concurrency. Unfortunately, the locking primitives serialize access to the lock variable to an extent that little or no concurrency is realized in practice for small critical sections. Relativistic programming is a methodology that also allows read-side concurrency. Relativistic programming uses different ordering constraints than reader-writer locking. The different ordering constraints allow relativistic readers to proceed without synchronization, so relativistic readers scale even for very short critical sections. In this paper we explore the differences between the ordering constraints for reader-writer locking and relativistic programs. We show how and why the different ordering …
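
The contrast in read-side cost can be seen in a small sketch of my own, not taken from the paper: the reader-writer-lock path must perform atomic read-modify-write operations on a shared lock word on every read, while a relativistic-style read path issues only loads. The function names and the use of a bare counter as the shared data are illustrative assumptions.

    /* Contrast of read-side paths: reader-writer locking vs. a relativistic-
     * style read. The rwlock reader updates the shared lock word (a store
     * all readers contend on); the relativistic reader issues only loads. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
    static long counter_rw;                 /* protected by 'lock'            */
    static atomic_long counter_rp;          /* read relativistically (loads)  */

    static long read_with_rwlock(void)
    {
        pthread_rwlock_rdlock(&lock);       /* atomic RMW on the lock word */
        long v = counter_rw;
        pthread_rwlock_unlock(&lock);       /* second RMW on the lock word */
        return v;
    }

    static long read_relativistic(void)
    {
        /* No stores at all on the read side, so readers scale with core
         * count; writers take responsibility for ordering, e.g. by waiting
         * for pre-existing readers before reclaiming memory. */
        return atomic_load_explicit(&counter_rp, memory_order_acquire);
    }

    int main(void)
    {
        counter_rw = 1;
        atomic_store(&counter_rp, 1);
        printf("%ld %ld\n", read_with_rwlock(), read_relativistic());
        return 0;
    }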


Is Parallel Programming Hard, And If So, Why?, Paul E. McKenney, Maged M. Michael, Manish Gupta, Philip William Howard, Josh Triplett, Jonathan Walpole Feb 2009

Is Parallel Programming Hard, And If So, Why?, Paul E. McKenney, Maged M. Michael, Manish Gupta, Philip William Howard, Josh Triplett, Jonathan Walpole

Computer Science Faculty Publications and Presentations

Of the 200+ parallel-programming languages and environments created in the 1990s, almost all are now defunct. Given that parallel systems are now well within the budget of the typical hobbyist or graduate student, it is not unreasonable to expect a new cohort in excess of several thousand parallel languages and environments to appear in the 2010s. If this expected new cohort is to have more practical impact than did its 1990s counterpart, a robust and widely applicable framework will be required that encompasses exactly what, if anything, is hard about parallel programming. This paper revisits the fundamental precepts of concurrent …


Programmer Friendly Refactoring Tools, Emerson Murphy-Hill Feb 2009

Programmer Friendly Refactoring Tools, Emerson Murphy-Hill

Dissertations and Theses

Tools that perform semi-automated refactoring are currently under-utilized by programmers. If more programmers adopted refactoring tools, software projects could make enormous productivity gains. However, as more advanced refactoring tools are designed, a great chasm widens between how the tools must be used and how programmers want to use them. This dissertation begins to bridge this chasm by exposing usability guidelines to direct the design of the next generation of programmer-friendly refactoring tools, so that refactoring tools fit the way programmers behave, not vice-versa.


Computational Techniques For Reducing Spectra Of The Giant Planets In Our Solar System, Holly L. Grimes Jan 2009

Computational Techniques For Reducing Spectra Of The Giant Planets In Our Solar System, Holly L. Grimes

Dissertations and Theses

The dynamic atmospheres of Jupiter, Saturn, Uranus, and Neptune provide a rich source of meteorological phenomena for scientists to study. To investigate these planets, scientists obtain spectral images of these bodies using various instruments including the Cooled Mid-Infrared Camera and Spectrometer (COMICS) at the Subaru Telescope Facility at Mauna Kea, Hawaii. These spectral images are two-dimensional arrays of double precision floating point values that have been read from a detector array. Such images must be reduced before the information they contain can be analyzed. The reduction process for spectral images from COMICS involves several steps:

1. Sky subtraction: the …
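
As a toy illustration of the first reduction step listed above, sky subtraction amounts to an element-wise subtraction of a sky (background) frame from the object frame. The dimensions, values, and function name below are invented for illustration; the real COMICS pipeline operates on full detector-sized double-precision arrays.

    /* Toy illustration of sky subtraction: the spectral images are 2-D
     * arrays of doubles, and a sky frame is subtracted pixel by pixel. */
    #include <stdio.h>

    #define ROWS 3
    #define COLS 4

    static void sky_subtract(double obj[ROWS][COLS], const double sky[ROWS][COLS])
    {
        for (int r = 0; r < ROWS; r++)
            for (int c = 0; c < COLS; c++)
                obj[r][c] -= sky[r][c];   /* remove background emission */
    }

    int main(void)
    {
        double object[ROWS][COLS] = {{5, 6, 7, 8}, {5, 9, 5, 5}, {6, 6, 6, 6}};
        double sky[ROWS][COLS]    = {{5, 5, 5, 5}, {5, 5, 5, 5}, {5, 5, 5, 5}};

        sky_subtract(object, sky);
        for (int r = 0; r < ROWS; r++) {
            for (int c = 0; c < COLS; c++)
                printf("%5.1f", object[r][c]);
            printf("\n");
        }
        return 0;
    }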


Graphical User Interfaces As Updatable Views, James Felger Terwilliger Jan 2009

Graphical User Interfaces As Updatable Views, James Felger Terwilliger

Dissertations and Theses

In contrast to a traditional setting where users express queries against the database schema, we assert that the semantics of data can often be understood by viewing the data in the context of the user interface (UI) of the software tool used to enter the data. That is, we believe that users will understand the data in a database by seeing the labels, dropdown menus, tool tips, help text, control contents, and juxtaposition or arrangement of controls that are built in to the user interface. Our goal is to allow domain experts with little technical skill to understand and query …


Irrelevance, Polymorphism, And Erasure In Type Theory, Richard Nathan Mishra-Linger Nov 2008

Irrelevance, Polymorphism, And Erasure In Type Theory, Richard Nathan Mishra-Linger

Dissertations and Theses

Dependent type theory is a proven technology for verified functional programming in which programs and their correctness proofs may be developed using the same rules in a single formal system. In practice, large portions of programs developed in this way have no computational relevance to the ultimate result of the program and should therefore be removed prior to program execution. In previous work on identifying and removing irrelevant portions of programs, computational irrelevance is usually treated as an intrinsic property of program expressions. We find that such an approach forces programmers to maintain two copies of commonly used datatypes: a …


Window Queries Over Data Streams, Jin Li Oct 2008

Window Queries Over Data Streams, Jin Li

Dissertations and Theses

Evaluating queries over data streams has become an appealing way to support various stream-processing applications. Window queries are commonly used in many stream applications. In a window query, certain query operators, especially blocking operators and stateful operators, appear in their windowed versions. Previous work on evaluating window queries typically requires ordered streams; this ordering requirement limits the implementation of window operators and also carries performance penalties. This thesis presents efficient and flexible algorithms for evaluating window queries. We first present a new data model for streams, progressing streams, that separates stream progress from physical-arrival order. Then, we …
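
The separation of stream progress from arrival order can be sketched with a tumbling-window count: tuples may arrive out of timestamp order, and a window's result is emitted only when a progress indication says the window can receive no more tuples. This is a simplified sketch of the general idea, not the thesis's data model; the window width, function names, and tiny input are assumptions.

    /* Sketch of a tumbling-window count driven by stream progress rather
     * than arrival order: tuples may arrive out of timestamp order, and a
     * window is emitted only when progress says no earlier tuples remain. */
    #include <stdio.h>

    #define WINDOW  10       /* tumbling window width in time units */
    #define MAXWIN  16

    static long counts[MAXWIN];
    static int  emitted_upto;            /* first window id not yet emitted */

    static void on_tuple(long ts)        /* a data tuple with timestamp ts  */
    {
        counts[ts / WINDOW]++;
    }

    static void on_progress(long t)      /* "no tuple with timestamp < t remains" */
    {
        while ((long)(emitted_upto + 1) * WINDOW <= t) {
            printf("window [%d,%d): count=%ld\n",
                   emitted_upto * WINDOW, (emitted_upto + 1) * WINDOW,
                   counts[emitted_upto]);
            emitted_upto++;
        }
    }

    int main(void)
    {
        /* Out-of-order arrivals; correctness comes from progress, not order. */
        long ts[] = {3, 12, 7, 1, 15, 9};
        for (unsigned i = 0; i < sizeof ts / sizeof ts[0]; i++)
            on_tuple(ts[i]);
        on_progress(10);   /* window [0,10) is complete: count=4  */
        on_progress(20);   /* window [10,20) is complete: count=2 */
        return 0;
    }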


Semantic Components: A Model For Enhancing Retrieval Of Domain-Specific Information, Susan Loucette Price Mar 2008

Semantic Components: A Model For Enhancing Retrieval Of Domain-Specific Information, Susan Loucette Price

Dissertations and Theses

Despite the success of general Internet search engines, information retrieval remains an incompletely solved problem. Our research focuses on supporting domain experts when they search domain-specific libraries to satisfy targeted information needs. The semantic components model introduces a schema specific to a particular document collection. A semantic component schema consists of a two-level hierarchy, document classes and semantic components. A document class represents a document grouping, such as topic type or document purpose. A semantic component is a characteristic type of information that occurs in a particular document class and represents an important aspect of the document’s main topic. …


What Is RCU, Fundamentally?, Paul E. McKenney, Jonathan Walpole Dec 2007

What Is RCU, Fundamentally?, Paul E. McKenney, Jonathan Walpole

Computer Science Faculty Publications and Presentations

Read-copy update (RCU) is a synchronization mechanism that was added to the Linux kernel in October of 2002. RCU achieves scalability improvements by allowing reads to occur concurrently with updates. In contrast with conventional locking primitives that ensure mutual exclusion among concurrent threads regardless of whether they be readers or updaters, or with reader-writer locks that allow concurrent reads but not in the presence of updates, RCU supports concurrency between a single updater and multiple readers. RCU ensures that reads are coherent by maintaining multiple versions of objects and ensuring that they are not freed up until all pre-existing read-side …
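
The pattern the abstract describes, keeping old versions alive until pre-existing readers have finished, can be sketched as follows. In the Linux kernel the read side uses rcu_read_lock() and rcu_dereference() and the update side uses rcu_assign_pointer() and synchronize_rcu(); the sketch below substitutes C11 atomics and an empty grace_period() stub so it stays self-contained, and the config structure is an invented example.

    /* Sketch of the core read-copy-update pattern: readers traverse a
     * published pointer with no locks; an updater copies the current
     * version, modifies the copy, publishes it, waits for pre-existing
     * readers (a grace period), and only then frees the old version. */
    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct config { int a, b; };

    static struct config *_Atomic current;

    static void grace_period(void)   /* stand-in for synchronize_rcu() */
    {
        /* Real RCU waits here until every reader that might still hold
         * the old pointer has left its read-side critical section. */
    }

    static void reader(void)
    {
        /* rcu_read_lock() would mark the critical section; the dereference
         * is just an ordered load of the published pointer. */
        struct config *c = atomic_load_explicit(&current, memory_order_acquire);
        printf("reader sees a=%d b=%d\n", c->a, c->b);
    }

    static void update(int a, int b)
    {
        struct config *old = atomic_load_explicit(&current, memory_order_relaxed);
        struct config *new = malloc(sizeof(*new));
        *new = *old;                 /* copy    */
        new->a = a;                  /* update  */
        new->b = b;
        atomic_store_explicit(&current, new, memory_order_release); /* publish */
        grace_period();              /* wait for pre-existing readers */
        free(old);                   /* no reader can still hold 'old' */
    }

    int main(void)
    {
        struct config *init = malloc(sizeof(*init));
        init->a = 1; init->b = 2;
        atomic_store(&current, init);

        reader();
        update(3, 4);
        reader();
        return 0;
    }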


DirectFlow: A Domain-Specific Language For Information-Flow Systems, Andrew P. Black, Chuan-Kai Lin Jan 2007

DirectFlow: A Domain-Specific Language For Information-Flow Systems, Andrew P. Black, Chuan-Kai Lin

Computer Science Faculty Publications and Presentations

Programs that process streams of information are commonly built by assembling reusable information-flow components. In some systems the components must be chosen from a pre-defined set of primitives; in others the programmer can create new custom components using a general-purpose programming language. Neither approach is ideal: restricting programmers to a set of primitive components limits the expressivity of the system, while allowing programmers to define new components in a general-purpose language makes it difficult or impossible to reason about the composite system. We advocate defining information-flow components in a domain-specific language (DSL) that enables us to infer the properties of …


Gridfields: Model-Driven Data Transformation In The Physical Sciences, Bill Howe Dec 2006

Gridfields: Model-Driven Data Transformation In The Physical Sciences, Bill Howe

Dissertations and Theses

Scientists' ability to generate and store simulation results is outpacing their ability to analyze them via ad hoc programs. We observe that these programs exhibit an algebraic structure that can be used to facilitate reasoning and improve performance. In this dissertation, we present a formal data model that exposes this algebraic structure, then implement the model, evaluate it, and use it to express, optimize, and reason about data transformations in a variety of scientific domains.

Simulation results are defined over a logical grid structure that allows a continuous domain to be represented discretely in the computer. Existing approaches for manipulating …


Efficient Support For Application-Specific Video Adaptation, Jie Huang Oct 2006

Efficient Support For Application-Specific Video Adaptation, Jie Huang

Dissertations and Theses

As video applications become more diverse, video must be adapted in different ways to meet the requirements of different applications when there are insufficient resources. In this dissertation, we address two sorts of requirements that cannot be addressed by existing video adaptation technologies: (i) accommodating large variations in resolution and (ii) collecting video effectively in a multi-hop sensor network. In addition, we also address requirements for implementing video adaptation in a sensor network.

Accommodating large variation in resolution is required by the existence of display devices with widely disparate screen sizes. Existing resolution adaptation technologies usually aim at adapting video …


Addressing Cheating And Workload Characterization In Online Games, Christopher Chambers Jan 2006

Addressing Cheating And Workload Characterization In Online Games, Christopher Chambers

Dissertations and Theses

The Internet has enabled the popular pastime of playing video games to grow rapidly by connecting game players in disparate locations. However, with popularity have come the two challenges of hosting a large number of users and detecting cheating among users. For reasons of control, security, and ease of development, the most popular system for hosting on-line games is the client-server architecture. This is also the most expensive and least scalable architecture for the game publisher, which drives hosting costs upwards with the success of the game. In addition to the expense of hosting, as a particular game grows …


Can Infopipes Facilitate Reuse In A Traffic Application?, Emerson Murphy-Hill, Chuan-Kai Lin, Andrew P. Black, Jonathan Walpole Oct 2005

Can Infopipes Facilitate Reuse In A Traffic Application?, Emerson Murphy-Hill, Chuan-Kai Lin, Andrew P. Black, Jonathan Walpole

Computer Science Faculty Publications and Presentations

Infopipes are presented as reusable building blocks for streaming applications. To evaluate this claim, we have built a significant traffic application in Smalltalk using Infopipes. This poster presents a traffic problem and solution, a short introduction to Infopipes, and the types of reuse Infopipes facilitate in our implementation.


TeaBag: A Debugger For Curry, Stephen Lee Johnson Jul 2004

TeaBag: A Debugger For Curry, Stephen Lee Johnson

Dissertations and Theses

This thesis describes TeaBag, which is a debugger for functional logic computations. TeaBag is an accessory of a virtual machine currently under development. A distinctive feature of this machine is its operational completeness of computations, which places novel demands on a debugger. This thesis describes the features of TeaBag, in particular the handling of non-determinism, the ability to control nondeterministic steps, to remove context information, to toggle eager evaluation, and to set breakpoints on both functions and terms. This thesis also describes TeaBag's architecture and its interaction with the associated virtual machine. Finally, some debugging sessions of defective programs are …


Infrastructure For Performance Tuning MPI Applications, Kathryn Marie Mohror Jan 2004

Infrastructure For Performance Tuning MPI Applications, Kathryn Marie Mohror

Dissertations and Theses

Clusters of workstations are becoming increasingly popular as a low-budget alternative for supercomputing power. In these systems, message-passing is often used to allow the separate nodes to act as a single computing machine. Programmers of such systems face a daunting challenge in understanding the performance bottlenecks of their applications. This is largely due to the vast amount of performance data that is collected, and the time and expertise necessary to use traditional parallel performance tools to analyze that data.

The goal of this project is to increase the level of performance tool support for message-passing application programmers on clusters of workstations. …


PPerfGrid: A Grid Services-Based Tool For The Exchange Of Heterogeneous Parallel Performance Data, John Jared Hoffman Jan 2004

PPerfGrid: A Grid Services-Based Tool For The Exchange Of Heterogeneous Parallel Performance Data, John Jared Hoffman

Dissertations and Theses

This thesis details the approach taken in developing PPerfGrid. Section 2 discusses other research related to this project. Section 3 provides general background on the technologies utilized in PPerfGrid, focusing on the components that make up the Grid services architecture. Section 4 provides a description of the architecture of PPerfGrid. Section 5 details the implementation of PPerfGrid. Section 6 presents tests designed to measure the overhead and scalability of the PPerfGrid application. Section 7 suggests future work, and Section 8 concludes the thesis.


Using Dynamic Optimization For Control Of Real Rate CPU Resource Management Applications, Varin Vahia, Ashvin Goel, David Steere, Jonathan Walpole, Molly H. Shor Dec 2003

Using Dynamic Optimization For Control Of Real Rate CPU Resource Management Applications, Varin Vahia, Ashvin Goel, David Steere, Jonathan Walpole, Molly H. Shor

Computer Science Faculty Publications and Presentations

In this paper we design a proportional-period optimal controller for allocating CPU to real-rate multimedia applications on a general-purpose computer system. We model this computer system problem in state-space form. We design a controller based on dynamic-optimization LQR tracking techniques to minimize short-term and long-term time deviation from the current timestamp, as well as CPU usage. Preliminary results on an experimental setup are encouraging.


Adaptive Live Video Streaming By Priority Drop, Jie Huang, Charles Krasic, Jonathan Walpole Jul 2003

Adaptive Live Video Streaming By Priority Drop, Jie Huang, Charles Krasic, Jonathan Walpole

Computer Science Faculty Publications and Presentations

In this paper we explore the use of Priority-progress streaming (PPS) for video surveillance applications. PPS is an adaptive streaming technique for the delivery of continuous media over variable bit-rate channels. It is based on the simple idea of reordering media components within a time window into priority order before transmission. The main concern when using PPS for live video streaming is the time delay introduced by reordering. In this paper we describe how PPS can be extended to support live streaming and show that the delay inherent in the approach can be tuned to satisfy a wide range of …
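
The reordering step at the heart of PPS can be sketched simply: within one time window, media units are sorted into priority order before transmission, so that when the channel budget runs out it is the lowest-priority units that are dropped. The structure fields, window contents, and byte budget below are invented for illustration and are not taken from the paper.

    /* Sketch of priority reordering within one time window: sort units by
     * priority, then send until the byte budget for the window runs out,
     * dropping whatever no longer fits (priority drop). */
    #include <stdio.h>
    #include <stdlib.h>

    struct unit { int id; int priority; int bytes; };   /* one media component */

    static int by_priority_desc(const void *a, const void *b)
    {
        return ((const struct unit *)b)->priority
             - ((const struct unit *)a)->priority;
    }

    static void send_window(struct unit *w, int n, int byte_budget)
    {
        qsort(w, n, sizeof w[0], by_priority_desc);     /* reorder within window */
        for (int i = 0; i < n; i++) {
            if (w[i].bytes > byte_budget) {
                printf("drop unit %d (priority %d)\n", w[i].id, w[i].priority);
                continue;                                /* priority drop */
            }
            byte_budget -= w[i].bytes;
            printf("send unit %d (priority %d)\n", w[i].id, w[i].priority);
        }
    }

    int main(void)
    {
        struct unit window[] = {
            { 0, 1, 400 },   /* e.g. an enhancement layer: low priority */
            { 1, 9, 600 },   /* e.g. a base-layer frame: high priority  */
            { 2, 5, 500 },
        };
        send_window(window, 3, 1200);   /* only ~1200 bytes fit this window */
        return 0;
    }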


Under The Plastic: A Quantitative Look At DVD Video Encoding And Its Impact On Video Modeling, Wu-Chi Feng, Jin Choi, Wu-Chang Feng, Jonathan Walpole Jan 2003

Under The Plastic: A Quantitative Look At DVD Video Encoding And Its Impact On Video Modeling, Wu-Chi Feng, Jin Choi, Wu-Chang Feng, Jonathan Walpole

Computer Science Faculty Publications and Presentations

In this paper, we examine the DVD encoding process and the implications this process has for video modeling and network traffic analysis. We have assembled a system that allows us to extract the video data from DVDs as they were encoded for distribution. Analyzing the resulting video trace data, we describe how DVD encodings have evolved over time. In addition, our findings show that the underlying video content is fundamentally different from that produced by basic consumer video capture boards. We demonstrate how this affects current video modeling proposals and their effect on network traffic characterization. This research is based …


A Performance Study Of LAM And MPICH On An SMP Cluster, Brian Patrick Kearns Dec 2002

A Performance Study Of LAM And MPICH On An SMP Cluster, Brian Patrick Kearns

Dissertations and Theses

Many universities and research laboratories have developed low cost clusters, built from Commodity-Off-The-Shelf (COTS) components and running mostly free software. Research has shown that these types of systems are well-equipped to handle many problems requiring parallel processing. The primary components of clusters are hardware, networking, and system software. An important system software consideration for clusters is the choice of the message passing library.

MPI (Message Passing Interface) has arguably become the most widely used message passing library on clusters and other parallel architectures, due in part to its existence as a standard. As a standard, MPI is open for anyone …


Content Aware Request Distribution For High Performance Web Service: A Performance Study, Robert M. Jones Jul 2002

Content Aware Request Distribution For High Performance Web Service: A Performance Study, Robert M. Jones

Dissertations and Theses

The World Wide Web is becoming a basic infrastructure for a variety of services, and the increases in audience size and client network bandwidth create service demands that are outpacing server capacity. Web clusters are one solution to this need for high-performance, highly available web server systems. We are interested in load distribution techniques, specifically Layer-7 algorithms that are content-aware. Layer-7 algorithms allow distribution control based on the specific content requested, which is advantageous for a system that offers highly heterogeneous services. We examine the performance of the Client Aware Policy (CAP) on a Linux/Apache web cluster consisting of …