Open Access. Powered by Scholars. Published by Universities.®

Physical Sciences and Mathematics Commons

Computer Sciences

Portland State University

Parallel processing (Electronic computers)

Articles 1 - 13 of 13

Full-Text Articles in Physical Sciences and Mathematics

Resizable, Scalable, Concurrent Hash Tables Via Relativistic Programming, Josh Triplett, Paul E. McKenney, Jonathan Walpole Jun 2011

Computer Science Faculty Publications and Presentations

Presentation focusing on software synchronization, thread locking, transactional memory, and relativistic programming. Hash table algorithms are presented with examples of relativistic list insertion and removal, and related data structures. Existing approaches are compared with new methodologies, and future work on relativistic data structures is outlined.
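
As a rough illustration of the relativistic list manipulation the presentation walks through, the sketch below inserts into and removes from a single hash-table bucket using C11 atomics. The primitives here (publishing via a release store, traversing via acquire loads, and a placeholder wait_for_readers() standing in for a grace period) are our own stand-ins, not the presentation's code.

    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        int key;
        struct node *_Atomic next;   /* readers traverse this without locks */
    };

    static struct node *_Atomic bucket;  /* one hash-table bucket */

    /* Insert at head: initialize the node fully, then publish it with a
       release store so concurrent readers never see a half-built node. */
    static void bucket_insert(int key)
    {
        struct node *n = malloc(sizeof(*n));
        n->key = key;
        atomic_store_explicit(&n->next,
                              atomic_load_explicit(&bucket, memory_order_relaxed),
                              memory_order_relaxed);
        atomic_store_explicit(&bucket, n, memory_order_release);
    }

    /* Placeholder: a real relativistic implementation waits here until every
       reader that might still hold a reference to the removed node finishes. */
    static void wait_for_readers(void) { }

    /* Remove the first node with the given key, then reclaim it only after
       the (hypothetical) grace period has elapsed. */
    static void bucket_remove(int key)
    {
        struct node *_Atomic *pp = &bucket;
        struct node *n;
        while ((n = atomic_load_explicit(pp, memory_order_acquire)) != NULL) {
            if (n->key == key) {
                atomic_store_explicit(pp,
                    atomic_load_explicit(&n->next, memory_order_relaxed),
                    memory_order_release);       /* unlink: new readers miss n */
                wait_for_readers();              /* grace period */
                free(n);                         /* now safe to reclaim */
                return;
            }
            pp = &n->next;
        }
    }

    int main(void)
    {
        bucket_insert(1);
        bucket_insert(2);
        bucket_remove(1);
        for (struct node *n = atomic_load(&bucket); n; n = atomic_load(&n->next))
            printf("key %d\n", n->key);
        return 0;
    }

Because the removed node is unlinked before it is freed, a concurrent reader either finds it or misses it but never touches reclaimed memory once a real grace-period mechanism replaces the placeholder.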


Generalized Construction Of Scalable Concurrent Data Structures Via Relativistic Programming, Josh Triplett, Paul E. McKenney, Philip W. Howard, Jonathan Walpole Mar 2011

Computer Science Faculty Publications and Presentations

We present relativistic programming, a concurrent programming model based on shared addressing, which supports efficient, scalable operation on either uniform shared-memory or distributed shared-memory systems. Relativistic programming provides a strong causal ordering property, allowing a series of read operations to appear as an atomic transaction that occurs entirely between two ordered write operations. This preserves the simple immutable-memory programming model available via mutual exclusion or transactional memory. Furthermore, relativistic programming provides joint-access parallelism, allowing readers to run concurrently with a writer on the same data. We demonstrate a generalized construction technique for concurrent data structures based on relativistic programming, …
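
A minimal sketch of the general writer-side recipe the abstract alludes to (copy, modify the copy, publish, wait for pre-existing readers, reclaim), using hypothetical stand-in primitives rather than the paper's own construction:

    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct config { int threshold; int limit; };   /* multi-field shared object */

    static struct config *_Atomic shared_cfg;

    static void wait_for_readers(void) { /* placeholder for a grace period */ }

    /* Relativistic update: never modify the published object in place.
       Build a complete new version, publish it with a single pointer store,
       then reclaim the old version once no reader can still reference it. */
    static void update_config(int threshold, int limit)
    {
        struct config *old_cfg = atomic_load_explicit(&shared_cfg, memory_order_acquire);
        struct config *new_cfg = malloc(sizeof(*new_cfg));
        if (old_cfg)
            *new_cfg = *old_cfg;            /* copy */
        new_cfg->threshold = threshold;     /* modify the private copy */
        new_cfg->limit = limit;
        atomic_store_explicit(&shared_cfg, new_cfg, memory_order_release); /* publish */
        wait_for_readers();                 /* wait for readers of the old version */
        free(old_cfg);                      /* reclaim */
    }

    /* Read side: one acquire load yields a pointer to one complete version;
       no locks, and the writer may run concurrently. */
    static void reader(void)
    {
        struct config *c = atomic_load_explicit(&shared_cfg, memory_order_acquire);
        if (c)
            printf("threshold=%d limit=%d\n", c->threshold, c->limit);
    }

    int main(void)
    {
        update_config(10, 100);
        update_config(20, 200);
        reader();
        return 0;
    }

Publishing an entire new version with a single pointer store is what lets a reader's sequence of field reads appear to fall wholly between two writer updates, the snapshot-like behavior the causal ordering property describes.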


The Ordering Requirements Of Relativistic And Reader-Writer Locking Approaches To Shared Data Access, Philip William Howard, Josh Triplett, Jonathan Walpole, Paul E. McKenney Jan 2011

Computer Science Faculty Publications and Presentations

The semantics of reader-writer locks allow read-side concurrency. Unfortunately, the locking primitives serialize access to the lock variable to an extent that little or no concurrency is realized in practice for small critical sections. Relativistic programming is a methodology that also allows read-side concurrency. Relativistic programming uses different ordering constraints than reader-writer locking. The different ordering constraints allow relativistic readers to proceed without synchronization, so relativistic readers scale even for very short critical sections. In this paper we explore the differences between the ordering constraints for reader-writer locking and relativistic programs. We show how and why the different ordering …
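
To make the contrast concrete, here is a hedged sketch (our own, not the paper's) of the two read paths. The reader-writer-lock reader must atomically update the shared lock word on entry and exit, which serializes readers on that cache line; the relativistic reader performs only loads.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;
    static int locked_value;                 /* protected by rwlock */
    static _Atomic int published_value;      /* read relativistically */

    /* Reader-writer locking: even though readers may run concurrently,
       each rdlock/unlock pair performs atomic read-modify-write operations
       on the shared lock word, so short critical sections bottleneck there. */
    static int read_with_rwlock(void)
    {
        pthread_rwlock_rdlock(&rwlock);
        int v = locked_value;
        pthread_rwlock_unlock(&rwlock);
        return v;
    }

    /* Relativistic read: the reader writes no shared state at all, so readers
       scale regardless of how short the critical section is.  The ordering
       burden shifts to the writer (publish, then wait for readers). */
    static int read_relativistically(void)
    {
        return atomic_load_explicit(&published_value, memory_order_acquire);
    }

    int main(void)
    {
        locked_value = 42;
        atomic_store(&published_value, 42);
        printf("%d %d\n", read_with_rwlock(), read_relativistically());
        return 0;
    }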


Goal-Oriented Job Scheduling For Parallel Computer Systems, Sangsuree Vasupongayya Dec 2007

Dissertations and Theses

System administrators for parallel computers face many difficulties when managing job scheduling systems. First, current production job schedulers expose many parameters; these appear to offer flexibility, but configuring and tuning them is highly challenging. Second, fair share is an important scheduling goal, but it is not clear what kind of fair share can be expected under current schedulers or how fair share impacts scheduling performance. Third, several job runtime prediction methods have been proposed to improve inaccurate user-estimated runtimes, but these methods can under-estimate runtimes by a large amount and it is not clear whether they are practical for use …
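
The dissertation's goal-oriented approach replaces ad hoc parameter tuning with explicitly stated objectives such as fair share. As a toy illustration of the kind of fair-share bookkeeping involved, the snippet below ranks a job by its owner's share deficit and its wait time; the weights and formula are hypothetical, not taken from the dissertation.

    #include <stdio.h>

    /* Toy priority: favor users who have received less than their fair share
       of CPU time, and jobs that have waited longer.  The 0.5 weighting and
       the linear combination are illustrative only. */
    static double job_priority(double user_usage,    /* CPU-hours consumed */
                               double user_share,    /* CPU-hours entitled to */
                               double wait_hours)    /* time spent in the queue */
    {
        double deficit = user_share - user_usage;    /* > 0 means under-served */
        return deficit + 0.5 * wait_hours;
    }

    int main(void)
    {
        /* A lightly used account outranks a heavy user for the same wait. */
        printf("light user: %.1f\n", job_priority(10.0, 50.0, 4.0));
        printf("heavy user: %.1f\n", job_priority(90.0, 50.0, 4.0));
        return 0;
    }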


RCU Semantics: A First Attempt, Paul E. McKenney, Jonathan Walpole Jan 2005

Computer Science Faculty Publications and Presentations

There is not yet a formal statement of RCU (read-copy update) semantics. While this lack has thus far not been an impediment to adoption and use of RCU, it is quite possible that formal semantics would point the way towards tools that automatically validate uses of RCU or that permit RCU algorithms to be automatically generated by a parallel compiler. This paper is a first attempt to supply a formal definition of RCU. Or at least a semi-formal definition: although RCU does not yet wear a tux (though it does run in Linux), at least it might yet wear some …
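
The informal guarantee the paper sets out to formalize can be stated against the canonical update pattern: once the grace-period wait returns, no reader that began before the pointer switch can still hold a reference to the old data. The sketch below restates that pattern with no-op stand-ins for the RCU primitives so it compiles on its own; it is not the paper's formalization.

    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Single-threaded stand-ins so the pattern compiles standalone.  In real
       RCU these delimit read-side critical sections and wait for a grace
       period, respectively. */
    #define rcu_read_lock()     ((void)0)
    #define rcu_read_unlock()   ((void)0)
    #define synchronize_rcu()   ((void)0)

    struct foo { int a; };
    static struct foo *_Atomic gp;           /* RCU-protected global pointer */

    static void reader(void)
    {
        rcu_read_lock();
        struct foo *p = atomic_load_explicit(&gp, memory_order_acquire);
        if (p)
            printf("a = %d\n", p->a);        /* p stays valid until unlock */
        rcu_read_unlock();
    }

    static void updater(int a)
    {
        struct foo *newp = malloc(sizeof(*newp));
        newp->a = a;
        struct foo *oldp = atomic_exchange_explicit(&gp, newp, memory_order_acq_rel);
        /* The semantics being formalized: after this call returns, any reader
           still inside a read-side critical section started after the exchange
           above, so none of them can reference oldp. */
        synchronize_rcu();
        free(oldp);
    }

    int main(void)
    {
        updater(1);
        reader();
        updater(2);
        reader();
        return 0;
    }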


PPerfGrid: A Grid Services-Based Tool For The Exchange Of Heterogeneous Parallel Performance Data, John Jared Hoffman Jan 2004

Dissertations and Theses

This thesis details the approach taken in developing PPerfGrid. Section 2 discusses other research related to this project. Section 3 provides general background on the technologies utilized in PPerfGrid, focusing on the components that make up the Grid services architecture. Section 4 provides a description of the architecture of PPerfGrid. Section 5 details the implementation of PPerfGrid. Section 6 presents tests designed to measure the overhead and scalability of the PPerfGrid application. Section 7 suggests future work, and Section 8 concludes the thesis.


Infrastructure For Performance Tuning MPI Applications, Kathryn Marie Mohror Jan 2004

Dissertations and Theses

Clusters of workstations are becoming increasingly popular as a low-budget alternative for supercomputing power. In these systems, message-passing is often used to allow the separate nodes to act as a single computing machine. Programmers of such systems face a daunting challenge in understanding the performance bottlenecks of their applications. This is largely due to the vast amount of performance data that is collected, and the time and expertise necessary to use traditional parallel performance tools to analyze that data.

The goal of this project is to increase the level of performance tool support for message-passing application programmers on clusters of workstations. …
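
One common way to build such tool support is the MPI profiling interface (PMPI): every MPI call has a PMPI_ alias, so a tool can interpose its own definition, record a measurement, and forward to the real implementation. The wrapper below is a generic sketch of that technique (MPI-3 prototypes assumed), not the specific instrumentation developed in the thesis.

    #include <mpi.h>
    #include <stdio.h>

    /* Interposed MPI_Send: the linker resolves the application's calls to this
       definition, which times the operation and forwards to PMPI_Send. */
    static double send_time_total = 0.0;
    static long   send_count = 0;

    int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm)
    {
        double t0 = MPI_Wtime();
        int rc = PMPI_Send(buf, count, datatype, dest, tag, comm);
        send_time_total += MPI_Wtime() - t0;
        send_count++;
        return rc;
    }

    /* Report per-process totals when the application shuts MPI down. */
    int MPI_Finalize(void)
    {
        int rank;
        PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("rank %d: %ld sends, %.6f s total\n",
               rank, send_count, send_time_total);
        return PMPI_Finalize();
    }

A wrapper like this is compiled into its own object file or library and linked ahead of the MPI library, so it intercepts the application's calls without requiring source changes.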


A Performance Study Of LAM And MPICH On An SMP Cluster, Brian Patrick Kearns Dec 2002

Dissertations and Theses

Many universities and research laboratories have developed low cost clusters, built from Commodity-Off-The-Shelf (COTS) components and running mostly free software. Research has shown that these types of systems are well-equipped to handle many problems requiring parallel processing. The primary components of clusters are hardware, networking, and system software. An important system software consideration for clusters is the choice of the message passing library.

MPI (Message Passing Interface) has arguably become the most widely used message passing library on clusters and other parallel architectures, due in part to its existence as a standard. As a standard, MPI is open for anyone …
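
A study like this typically rests on microbenchmarks of the message-passing layer. The ping-pong test below measures average round-trip time between two ranks and runs unchanged under either LAM or MPICH; it is a generic illustration, not the benchmark suite used in the thesis.

    #include <mpi.h>
    #include <stdio.h>

    #define REPS      1000
    #define MSG_BYTES 1024

    int main(int argc, char **argv)
    {
        char buf[MSG_BYTES] = {0};
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Rank 0 bounces a fixed-size message off rank 1 and reports the
           average round-trip time; run with exactly two processes. */
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < REPS; i++) {
            if (rank == 0) {
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double elapsed = MPI_Wtime() - t0;

        if (rank == 0)
            printf("%d-byte round trip: %.2f us average\n",
                   MSG_BYTES, elapsed / REPS * 1e6);

        MPI_Finalize();
        return 0;
    }

Launching the same binary with two processes (for example, mpirun -np 2) under each implementation gives a directly comparable latency figure.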


Querying Geographically Dispersed, Heterogeneous Data Stores: The PPerfXchange Approach, Matthew Edward Colgrove Jan 2002

Dissertations and Theses

This thesis details PPerfXchange’s approach for querying geographically dispersed heterogeneous data stores. While elements of PPerfXchange’s method have been implemented for other application areas, PPerfXchange shows how these elements can be applied to parallel performance analysis. The accomplishments of this thesis are:

  • The design of an architecture for PPerfXchange, giving a uniform method to query heterogeneous data stores;
  • A proof of concept prototype implementation of PPerfXchange including a partial implementation of an XQuery processor and a relational database virtual XML document; and
  • Evaluation of PPerfXchange using example parallel performance analysis data.
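
The "relational database virtual XML document" idea can be pictured as rendering each relational tuple as XML elements on demand, so an XQuery processor can treat the table as a document without materializing it. The sketch below fakes that mapping for a hypothetical performance-metrics table; the element and column names are invented for illustration and are not PPerfXchange's schema.

    #include <stdio.h>

    /* A row from a hypothetical relational table of performance measurements. */
    struct metric_row {
        const char *process;
        const char *metric;
        double      value;
    };

    /* Render one tuple as an XML fragment.  A virtual XML document layer
       would produce fragments like this lazily as the query engine asks
       for them, instead of exporting the whole table up front. */
    static void emit_row(const struct metric_row *r)
    {
        printf("  <measurement>\n"
               "    <process>%s</process>\n"
               "    <metric>%s</metric>\n"
               "    <value>%.3f</value>\n"
               "  </measurement>\n",
               r->process, r->metric, r->value);
    }

    int main(void)
    {
        struct metric_row rows[] = {
            { "rank0", "mpi_send_time", 1.25 },
            { "rank1", "mpi_send_time", 1.31 },
        };
        printf("<measurements>\n");
        for (unsigned i = 0; i < sizeof rows / sizeof rows[0]; i++)
            emit_row(&rows[i]);
        printf("</measurements>\n");
        return 0;
    }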


Towards Comparative Profiling Of Parallel Applications With PPerfDB, Christian Leland Hansen Jan 2001

Dissertations and Theses

Due to the complex nature of parallel programming, it is difficult to diagnose and solve performance related problems. Knowledge of program behavior is obtained experimentally, with repeated runs of a slightly modified version of the application or the same code in different environments. In these circumstances, comparative performance analysis can provide meaningful insights into the subtle effects of system and code changes on parallel program behavior by highlighting the difference in performance results across executions.

I have designed and implemented modules which extend the PPerfDB performance tool to allow access to existing performance data generated by several commonly used tracing …
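
At its core, comparative analysis aligns the same metric across two executions and reports the difference. A toy version of that step (not PPerfDB's implementation) is sketched below; the function names and timings are made up.

    #include <stdio.h>

    /* Per-function inclusive time, in seconds, from one execution. */
    struct profile_entry {
        const char *function;
        double      seconds;
    };

    /* Print, for each function, how run B differs from run A.  Real
       comparative profiling must also align differing function sets and
       normalize for problem size; this shows only the core idea. */
    static void compare(const struct profile_entry *a,
                        const struct profile_entry *b, int n)
    {
        for (int i = 0; i < n; i++) {
            double delta = b[i].seconds - a[i].seconds;
            printf("%-16s %+.2f s (%+.1f%%)\n",
                   a[i].function, delta, 100.0 * delta / a[i].seconds);
        }
    }

    int main(void)
    {
        struct profile_entry run_a[] = { {"exchange_halo", 4.0}, {"compute", 10.0} };
        struct profile_entry run_b[] = { {"exchange_halo", 6.5}, {"compute",  9.8} };
        compare(run_a, run_b, 2);
        return 0;
    }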


MIST: PVM With Transparent Migration And Checkpointing, Jeremy Casas, Dan Clark, Phil Galbiati, Ravi Konuru, Steve Otto, Robert Prouty, Jonathan Walpole May 1995

Computer Science Faculty Publications and Presentations

We are currently involved in research to enable PVM to take advantage of shared networks of workstations (NOWs) more effectively. In such a computing environment, it is important to utilize workstations unobtrusively and recover from machine failures. Towards this goal, we have enhanced PVM with transparent task migration, checkpointing, and global scheduling. These enhancements are part of the MIST project which takes an open systems approach in developing a cohesive, distributed parallel computing environment. This open systems approach promotes plug-and-play integration of independently developed modules, such as Condor, DQS, AVS, Prospero, XPVM, PIOUS, Ptools, etc. Transparent task migration, in …
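
For context, this is the shape of the PVM programs MIST makes migratable and checkpointable: a master spawns tasks across the virtual machine and exchanges packed messages with them. The fragment below is ordinary PVM 3 usage (assuming a worker binary named "worker"), not MIST-specific code.

    #include <pvm3.h>
    #include <stdio.h>

    #define NTASKS     4
    #define TAG_WORK   1
    #define TAG_RESULT 2

    /* Master side of a plain PVM program: spawn workers anywhere in the
       virtual machine, send each a work item, and collect the results.
       MIST adds transparent migration and checkpointing underneath programs
       written exactly like this. */
    int main(void)
    {
        int tids[NTASKS];
        int mytid = pvm_mytid();

        int spawned = pvm_spawn("worker", NULL, PvmTaskDefault, "", NTASKS, tids);

        for (int i = 0; i < spawned; i++) {
            pvm_initsend(PvmDataDefault);
            pvm_pkint(&i, 1, 1);                 /* work item: just an index */
            pvm_send(tids[i], TAG_WORK);
        }

        for (int i = 0; i < spawned; i++) {
            double result;
            pvm_recv(-1, TAG_RESULT);            /* any worker, result tag */
            pvm_upkdouble(&result, 1, 1);
            printf("result %d: %f\n", i, result);
        }

        printf("master tid %d done\n", mytid);
        pvm_exit();
        return 0;
    }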


Scheduling Of Parallel Jobs On Dynamic, Heterogeneous Networks, Dan Clark, Jeremy Casas, Steve Otto, Robert Prouty, Jonathan Walpole Jan 1995

Computer Science Faculty Publications and Presentations

In using a shared network of workstations for parallel processing, it is important to consider not only heterogeneity and differences in processing power between the workstations but also the dynamics of the system as a whole. In such a computing environment, where the use of resources varies as other applications consume and release resources, intelligent scheduling of the parallel jobs onto the available resources is essential to maximize resource utilization. Despite this realization, however, there are few systems available that provide an infrastructure for the easy development and testing of these intelligent schedulers. In this paper, an infrastructure is presented …
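
As a small, hypothetical example of the kind of decision such schedulers make, the snippet below weighs a workstation's relative speed against its current load when choosing where to place the next task. The figure of merit and the fields are illustrative only, not part of the paper's infrastructure.

    #include <stdio.h>

    /* One workstation in a shared, heterogeneous pool. */
    struct node {
        const char *name;
        double speed;      /* relative processing power (1.0 = baseline) */
        double load_avg;   /* current load from other users' applications */
    };

    /* Pick the node offering the most leftover capacity right now.
       The speed / (1 + load) figure of merit is illustrative only. */
    static const struct node *pick_node(const struct node *nodes, int n)
    {
        const struct node *best = &nodes[0];
        for (int i = 1; i < n; i++) {
            double best_cap = best->speed / (1.0 + best->load_avg);
            double cap      = nodes[i].speed / (1.0 + nodes[i].load_avg);
            if (cap > best_cap)
                best = &nodes[i];
        }
        return best;
    }

    int main(void)
    {
        struct node pool[] = {
            { "fast-but-busy", 2.0, 3.0 },   /* powerful, heavily shared */
            { "slow-but-idle", 1.0, 0.1 },   /* modest, currently free   */
        };
        printf("schedule next task on: %s\n", pick_node(pool, 2)->name);
        return 0;
    }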


Concurrency Control, Version Management And Transactions In Advanced Database Systems, Jonathan Walpole, Muntuck Yap Feb 1991

Computer Science Faculty Publications and Presentations

This document constitutes the final deliverable for the research project titled “An Investigation of Selected Issues in Transaction Mechanism Design for Object Oriented Databases.” The document describes our ideas for extending the traditional transaction concept for use in object oriented databases, and concentrates specifically on providing an underlying model to support the concurrency control and version management aspects of the problem. The ideas presented here are not however restricted to the domain of object oriented databases. They are more generally applicable to database systems that require flexibility in their versioning and concurrency control policies.

In this document we define a …
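
One common substrate for flexible versioning and concurrency control is to keep each object as a chain of timestamped versions, so readers select the version visible to their transaction while writers append new ones. The sketch below shows only that data-structure skeleton; it is a generic illustration, not the model defined in the report.

    #include <stdio.h>
    #include <stdlib.h>

    /* One committed version of an object, stamped with the transaction
       timestamp that created it.  Newer versions sit at the head. */
    struct version {
        long            commit_ts;
        int             value;
        struct version *prev;
    };

    struct object { struct version *latest; };

    /* A writer transaction appends a new version; older versions remain
       available to readers with earlier snapshot timestamps. */
    static void write_version(struct object *o, long ts, int value)
    {
        struct version *v = malloc(sizeof(*v));
        v->commit_ts = ts;
        v->value = value;
        v->prev = o->latest;
        o->latest = v;
    }

    /* A reader sees the newest version committed at or before its snapshot. */
    static const struct version *read_version(const struct object *o, long snapshot_ts)
    {
        for (const struct version *v = o->latest; v; v = v->prev)
            if (v->commit_ts <= snapshot_ts)
                return v;
        return NULL;
    }

    int main(void)
    {
        struct object o = { NULL };
        write_version(&o, 10, 100);
        write_version(&o, 20, 200);
        const struct version *v = read_version(&o, 15);  /* snapshot between commits */
        if (v)
            printf("snapshot 15 sees value %d\n", v->value);
        return 0;
    }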