Computer Engineering Commons

Computer Sciences

Portland State University

Parallel processing (Electronic computers)

Articles 1 - 9 of 9

Full-Text Articles in Computer Engineering

Resizable, Scalable, Concurrent Hash Tables Via Relativistic Programming, Josh Triplett, Paul E. McKenney, Jonathan Walpole Jun 2011

Computer Science Faculty Publications and Presentations

Presentation focusing on software synchronization, thread locking, transactional memory, and relativistic programming. Hash table algorithms are presented, with examples of relativistic list insertion and removal and of related data structures. Existing approaches are compared with the new methodologies, and future work on relativistic data structures is discussed.
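
To make the read-side technique concrete, here is a minimal sketch of relativistic list insertion using C11 atomics; it illustrates the publish/subscribe idea rather than reproducing the authors' implementation (in the Linux kernel the analogous primitives are rcu_assign_pointer() and rcu_dereference()).

```c
/* Sketch of relativistic list insertion using C11 atomics (not the
 * authors' code). The writer initializes the node fully, then
 * publishes it with a release store; readers traverse with acquire
 * loads and never take a lock. */
#include <stdatomic.h>
#include <stdlib.h>

struct node {
    int key;
    struct node *_Atomic next;
};

static struct node *_Atomic head;

/* Writer: build the node, then publish it atomically. Readers can
 * never observe a half-initialized node. */
void insert(int key)
{
    struct node *n = malloc(sizeof *n);
    n->key = key;
    atomic_init(&n->next, atomic_load_explicit(&head, memory_order_relaxed));
    atomic_store_explicit(&head, n, memory_order_release);
}

/* Reader: lock-free traversal; no shared lock word is ever written,
 * so concurrent lookups scale with the number of cores. */
int contains(int key)
{
    struct node *p = atomic_load_explicit(&head, memory_order_acquire);
    for (; p != NULL; p = atomic_load_explicit(&p->next, memory_order_acquire))
        if (p->key == key)
            return 1;
    return 0;
}
```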


Generalized Construction Of Scalable Concurrent Data Structures Via Relativistic Programming, Josh Triplett, Paul E. McKenney, Philip W. Howard, Jonathan Walpole Mar 2011

Computer Science Faculty Publications and Presentations

We present relativistic programming, a concurrent programming model based on shared addressing, which supports efficient, scalable operation on either uniform shared-memory or distributed shared-memory systems. Relativistic programming provides a strong causal ordering property, allowing a series of read operations to appear as an atomic transaction that occurs entirely between two ordered write operations. This preserves the simple immutable-memory programming model available via mutual exclusion or transactional memory. Furthermore, relativistic programming provides joint-access parallelism, allowing readers to run concurrently with a writer on the same data. We demonstrate a generalized construction technique for concurrent data structures based on relativistic programming, …
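
One ingredient of such a construction is the writer-side "wait for pre-existing readers" step, which orders memory reclamation after all reads of the old version. Below is a minimal sketch in which a single shared reader counter stands in for real grace-period machinery (such as the kernel's synchronize_rcu()); a real relativistic implementation avoids this counter precisely because readers writing a shared word would not scale.

```c
/* Sketch of writer-side ordering: unlink, wait for pre-existing
 * readers, then reclaim. The atomic counter is for illustration
 * only and is not how scalable implementations track readers. */
#include <stdatomic.h>
#include <stdlib.h>
#include <sched.h>

struct node { int key; struct node *next; };

static atomic_int active_readers;

void read_lock(void)   { atomic_fetch_add(&active_readers, 1); }
void read_unlock(void) { atomic_fetch_sub(&active_readers, 1); }

/* Writer: spin until every reader that might still see the old
 * version has finished its read-side critical section. */
static void wait_for_readers(void)
{
    while (atomic_load(&active_readers) != 0)
        sched_yield();
}

/* Reclaim a node the caller has already unlinked from the structure:
 * new readers can no longer reach it, and this waits out the old
 * readers before freeing. */
void retire(struct node *victim)
{
    wait_for_readers();
    free(victim);
}
```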


The Ordering Requirements Of Relativistic And Reader-Writer Locking Approaches To Shared Data Access, Philip William Howard, Josh Triplett, Jonathan Walpole, Paul E. McKenney Jan 2011

Computer Science Faculty Publications and Presentations

The semantics of reader-writer locks allow read-side concurrency. Unfortunately, the locking primitives serialize access to the lock variable to such an extent that little or no concurrency is realized in practice for small critical sections. Relativistic programming is a methodology that also allows read-side concurrency. Relativistic programming uses different ordering constraints than reader-writer locking. The different ordering constraints allow relativistic readers to proceed without synchronization, so relativistic readers scale even for very short critical sections. In this paper we explore the differences between the ordering constraints for reader-writer locking and relativistic programs. We show how and why the different ordering …
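
The read-side serialization referred to above is visible even in a trivial pthreads example: acquiring and releasing the lock each perform an atomic read-modify-write on the shared lock word, so for short critical sections concurrent readers contend on that one cache line even though their reads never conflict. A hedged sketch (benchmark harness omitted):

```c
/* Why short read-side critical sections do not scale under a
 * reader-writer lock: every reader must update the shared lock word. */
#include <pthread.h>

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
static int shared_value;

int read_value(void)
{
    int v;
    pthread_rwlock_rdlock(&lock);  /* atomic RMW on the lock word */
    v = shared_value;              /* the actual critical section is tiny */
    pthread_rwlock_unlock(&lock);  /* another atomic RMW */
    return v;
}

void write_value(int v)
{
    pthread_rwlock_wrlock(&lock);
    shared_value = v;
    pthread_rwlock_unlock(&lock);
}
```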


Infrastructure For Performance Tuning MPI Applications, Kathryn Marie Mohror Jan 2004

Dissertations and Theses

Clusters of workstations are becoming increasingly popular as a low-budget alternative for supercomputing power. In these systems, message-passing is often used to allow the separate nodes to act as a single computing machine. Programmers of such systems face a daunting challenge in understanding the performance bottlenecks of their applications. This is largely due to the vast amount of performance data that is collected, and the time and expertise necessary to use traditional parallel performance tools to analyze that data.

The goal of this project is to increase the level of performance tool support for message-passing application programmers on clusters of workstations. …
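
For context, the message-passing programs such tools target look like the following minimal MPI ping-pong in C; this is an illustrative example, not code from the thesis. Compile with mpicc and run with mpirun -np 2.

```c
/* Minimal MPI ping-pong between two ranks: rank 0 sends a token to
 * rank 1 and receives it back. Illustrative only. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, token = 42;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 0 got the token back: %d\n", token);
    } else if (rank == 1) {
        MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```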


PPerfGrid: A Grid Services-Based Tool For The Exchange Of Heterogeneous Parallel Performance Data, John Jared Hoffman Jan 2004

Dissertations and Theses

This thesis details the approach taken in developing PPerfGrid. Section 2 discusses other research related to this project. Section 3 provides general background on the technologies utilized in PPerfGrid, focusing on the components that make up the Grid services architecture. Section 4 provides a description of the architecture of PPerfGrid. Section 5 details the implementation of PPerfGrid. Section 6 presents tests designed to measure the overhead and scalability of the PPerfGrid application. Section 7 suggests future work, and Section 8 concludes the thesis.


A Performance Study Of LAM And MPICH On An SMP Cluster, Brian Patrick Kearns Dec 2002

Dissertations and Theses

Many universities and research laboratories have developed low cost clusters, built from Commodity-Off-The-Shelf (COTS) components and running mostly free software. Research has shown that these types of systems are well-equipped to handle many problems requiring parallel processing. The primary components of clusters are hardware, networking, and system software. An important system software consideration for clusters is the choice of the message passing library.

MPI (Message Passing Interface) has arguably become the most widely used message passing library on clusters and other parallel architectures, due in part to its existence as a standard. As a standard, MPI is open for anyone …
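
The kind of measurement used to compare implementations such as LAM and MPICH can be sketched as a small latency microbenchmark: time many round trips of a one-byte message between two ranks and report the average one-way latency. This is an illustrative sketch, not the thesis's actual benchmark.

```c
/* Latency microbenchmark sketch: average one-way latency over many
 * round trips of a 1-byte message. Run with two MPI ranks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int reps = 10000;
    char byte = 0;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    if (rank == 0)   /* each rep is one round trip = two one-way hops */
        printf("avg one-way latency: %g us\n",
               (MPI_Wtime() - t0) / (2.0 * reps) * 1e6);

    MPI_Finalize();
    return 0;
}
```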


Querying Geographically Dispersed, Heterogeneous Data Stores: The PPerfXchange Approach, Matthew Edward Colgrove Jan 2002

Dissertations and Theses

This thesis details PPerfXchange’s approach for querying geographically dispersed heterogeneous data stores. While elements of PPerfXchange’s method have been implemented for other application areas, PPerfXchange shows how these elements can be applied to parallel performance analysis. The accomplishments of this thesis are:

  • The design of an architecture for PPerfXchange, giving a uniform method to query heterogeneous data stores;
  • A proof of concept prototype implementation of PPerfXchange including a partial implementation of an XQuery processor and a relational database virtual XML document; and
  • Evaluation of PPerfXchange using example parallel performance analysis data.


Towards Comparative Profiling Of Parallel Applications With PPerfDB, Christian Leland Hansen Jan 2001

Dissertations and Theses

Due to the complex nature of parallel programming, it is difficult to diagnose and solve performance related problems. Knowledge of program behavior is obtained experimentally, with repeated runs of a slightly modified version of the application or the same code in different environments. In these circumstances, comparative performance analysis can provide meaningful insights into the subtle effects of system and code changes on parallel program behavior by highlighting the difference in performance results across executions.

I have designed and implemented modules which extend the PPerfDB performance tool to allow access to existing performance data generated by several commonly used tracing …
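
The core of comparative analysis can be pictured as aligning per-function measurements from two executions and reporting the deltas. A toy sketch, with hypothetical function names and timings, not PPerfDB's actual mechanism:

```c
/* Toy comparative profiling: report the timing delta for every
 * function present in both profiles. Data is made up. */
#include <stdio.h>
#include <string.h>

struct sample { const char *func; double seconds; };

void diff_profiles(const struct sample *a, int na,
                   const struct sample *b, int nb)
{
    for (int i = 0; i < na; i++)
        for (int j = 0; j < nb; j++)
            if (strcmp(a[i].func, b[j].func) == 0)
                printf("%-12s %+.3f s\n", a[i].func,
                       b[j].seconds - a[i].seconds);
}

int main(void)
{
    struct sample run1[] = { { "MPI_Send", 1.20 }, { "compute", 4.75 } };
    struct sample run2[] = { { "MPI_Send", 1.95 }, { "compute", 4.60 } };
    diff_profiles(run1, 2, run2, 2);  /* highlights where run2 regressed */
    return 0;
}
```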


Scheduling Of Parallel Jobs On Dynamic, Heterogeneous Networks, Dan Clark, Jeremy Casas, Steve Otto, Robert Prouty, Jonathan Walpole Jan 1995

Computer Science Faculty Publications and Presentations

In using a shared network of workstations for parallel processing, it is important to consider not only heterogeneity and differences in processing power between the workstations but also the dynamics of the system as a whole. In such a computing environment, where the use of resources varies as other applications consume and release resources, intelligent scheduling of the parallel jobs onto the available resources is essential to maximize resource utilization. Despite this realization, however, few systems are available that provide an infrastructure for the easy development and testing of these intelligent schedulers. In this paper, an infrastructure is presented …
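
A toy illustration of the kind of decision such a scheduler makes: place the next job on the node currently reporting the lowest load. Node names and load values here are hypothetical, and a real scheduler must also react as other applications consume and release resources.

```c
/* Toy load-aware placement: pick the least-loaded node for the next
 * job. All data is hypothetical. */
#include <stdio.h>

struct node_info { const char *name; double load; };

int pick_least_loaded(const struct node_info *nodes, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (nodes[i].load < nodes[best].load)
            best = i;
    return best;
}

int main(void)
{
    struct node_info cluster[] = {
        { "ws01", 0.82 }, { "ws02", 0.15 }, { "ws03", 0.47 },
    };
    int n = sizeof cluster / sizeof cluster[0];
    int chosen = pick_least_loaded(cluster, n);
    printf("schedule next job on %s (load %.2f)\n",
           cluster[chosen].name, cluster[chosen].load);
    return 0;
}
```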