Open Access. Powered by Scholars. Published by Universities.®

Engineering Commons


Electrical and Computer Engineering

Portland State University

Dissertations and Theses


Multiprocessors


Articles 1 - 5 of 5

Full-Text Articles in Engineering

The Multiple Pool Migrating Worker Paradigm : A Distributed System Framework And Model, Cynthia Ann Stanley Dec 1996

A network of workstations (NOW) can provide an inexpensive and effective distributed processing platform. The purpose of this thesis is twofold: first, to provide a methodology for distributed computing on a NOW, and second, to provide a model to predict and monitor performance. The Multiple Pool-Migrating Worker (MPMW) paradigm uses multiple job pools to divide up tasks and migrating workers to balance the workload. The MPMW paradigm is a quick and efficient way to implement problems with distributed processing without extensive knowledge of parallel programming. A model describing the MPMW paradigm is developed using queuing theory and Mean Value Analysis techniques. …
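The abstract names Mean Value Analysis (MVA) as the modeling technique. As a rough illustration, the standard exact MVA recursion for a closed queueing network can be sketched as below; the service demands and station roles are made-up examples, not parameters from the thesis's actual MPMW model.

```python
# Exact Mean Value Analysis (MVA) for a closed queueing network.
# Inputs are hypothetical: per-station service demands in seconds per visit.

def mva(demands, n_jobs):
    """Return (throughput, per-station mean queue lengths) for n_jobs customers."""
    m = len(demands)
    q = [0.0] * m                      # mean queue length seen at each station
    x = 0.0
    for n in range(1, n_jobs + 1):
        # Response time at station i: service demand times (1 + queue on arrival)
        r = [demands[i] * (1.0 + q[i]) for i in range(m)]
        x = n / sum(r)                 # system throughput (jobs/second)
        q = [x * r[i] for i in range(m)]
    return x, q

# Example: two worker pools plus a shared network link (illustrative demands)
throughput, queues = mva([0.05, 0.08, 0.02], n_jobs=10)
```

With a single station the recursion saturates at the bottleneck rate 1/D, which is a quick sanity check on the implementation.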


Performance Evaluation Of Specialized Hardware For Fast Global Operations On Distributed Memory Multicomputers, Rajesh Madukkarumukumana Sankaran Oct 1995

Workstation cluster multicomputers are increasingly being applied to solving scientific problems that require massive computing power. Parallel Virtual Machine (PVM) is a popular message-passing model used to program these clusters. One of the major performance-limiting factors for cluster multicomputers is their inefficiency in performing parallel program operations involving collective communications. These operations include synchronization, global reduction, broadcast/multicast operations, and orderly access to shared global variables. Hall has demonstrated that a secondary network with wide tree topology and centralized coordination processors (COP) could improve the performance of global operations on a variety of distributed architectures [Hall94a]. My hypothesis was that …
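The global reduction that such a secondary network accelerates in hardware is, in software, a pairwise tree combine completing in ceil(log2 P) communication steps. The sketch below simulates that pattern in one process; it illustrates the operation class, not Hall's COP design.

```python
# Software sketch of a tree-based global reduction: each round halves the
# number of participating nodes, so P values combine in ceil(log2 P) steps.
# Message passing is modeled as list operations, not real communication.

def tree_reduce(values, op):
    """Combine one value per node pairwise up a binary tree.
    Returns (result, number_of_communication_steps)."""
    steps = 0
    while len(values) > 1:
        paired = [op(values[i], values[i + 1]) for i in range(0, len(values) - 1, 2)]
        if len(values) % 2:            # an odd node carries its value forward
            paired.append(values[-1])
        values = paired
        steps += 1
    return values[0], steps

total, steps = tree_reduce(list(range(16)), lambda a, b: a + b)
# 16 nodes -> sum 120 in 4 steps
```

The same skeleton covers the other reductions the abstract mentions (MAX, MIN) by swapping the combining operator.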


Hardware For Fast Global Operations On Distributed Memory Multicomputers And Multiprocessors, Douglas V. Hall Jan 1995

"Grand Challenge" problems such as climate modeling to predict droughts and human genome mapping to predict and possibly cure diseases such as cancer require massive computing power. Three kinds of computer systems currently used in attempts to solve these problems are "Big Iron" multicomputers such as the Intel Paragon, workstation cluster multicomputers, and distributed shared memory multiprocessors such as the Cray T3D. Machines such as these are inefficient in executing some or all of a set of global program operations which are important in many of the "Grand Challenge" programs. These operations include synchronization, reduction, MAX, MIN, one-to-all broadcasting, all-to-all …
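For the broadcast-style global operations listed above, a common software baseline is recursive doubling: after log2(P) exchange rounds every node holds the global result, so no separate broadcast phase is needed. The sketch below shows this for MAX; it is a generic illustration under the assumption of a power-of-two node count, not the dissertation's hardware scheme.

```python
# Recursive-doubling all-reduce sketch: in round k each node exchanges with
# its partner at distance 2**k; after log2(P) rounds all nodes agree on the
# global MAX. Simulated in one process; exchanges are modeled as indexing.

def allreduce_max(node_values):
    p = len(node_values)               # assumed to be a power of two
    vals = list(node_values)
    dist = 1
    rounds = 0
    while dist < p:
        # each node i combines with its partner i XOR dist
        vals = [max(vals[i], vals[i ^ dist]) for i in range(p)]
        dist *= 2
        rounds += 1
    return vals, rounds

vals, rounds = allreduce_max([3, 9, 1, 7])
# every node ends holding 9 after 2 rounds
```

Synchronization (a barrier) is the same pattern with a trivial combining operator, which is why specialized hardware for one tends to help the whole family of operations.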


Methodology For Accurate Speedup Prediction, Aruna Chittor Dec 1994

The effective use of computational resources requires a good understanding of parallel architectures and algorithms. The effects of both the parallel architecture and the parallel application on system performance become more complex as the number of processors increases. We address this issue in this thesis and develop a methodology to predict the overall execution time of a parallel application as a function of the system and problem size by combining simple analysis with a few experimental results. We show that runtimes and speedup can be predicted more accurately by analyzing the functional forms of the sequential …
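The flavor of this approach can be sketched as follows: pick a candidate functional form for runtime, fit its coefficients to a few measured runs, and extrapolate speedup. The form T(p) = a + b/p and the sample data below are assumptions for illustration; the thesis derives its forms from analysis of the actual application.

```python
# Hypothetical speedup-prediction sketch: least-squares fit of the runtime
# model T(p) = a + b/p (serial part + perfectly parallel part) to a few
# measured runs, then extrapolation of speedup S(p) = T(1) / T(p).

def fit_runtime(measurements):
    """measurements: list of (processors, runtime). Fits t = a + b * (1/p)."""
    xs = [1.0 / p for p, _ in measurements]
    ts = [t for _, t in measurements]
    n = len(xs)
    sx, st = sum(xs), sum(ts)
    sxx = sum(x * x for x in xs)
    sxt = sum(x * t for x, t in zip(xs, ts))
    b = (n * sxt - sx * st) / (n * sxx - sx * sx)
    a = (st - b * sx) / n
    return a, b                        # serial seconds, parallelizable seconds

def predicted_speedup(a, b, p):
    return (a + b) / (a + b / p)       # T(1) / T(p)

# Three made-up measurements: (processors, seconds)
a, b = fit_runtime([(1, 10.0), (2, 6.0), (4, 4.0)])
```

With these sample points the fit recovers a = 2, b = 8, predicting that speedup flattens toward (a + b) / a = 5 no matter how many processors are added, which is the Amdahl-style behavior such models capture.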


Multiplexed Pipelining : A Cost Effective Loop Transformation Technique, Satish Pai Jan 1992

Parallel processing has gained increasing importance over the last few years. A key aim of parallel processing is to improve the execution times of scientific programs by mapping them onto many processors. Loops form an important part of most computational programs and must be processed efficiently to achieve good execution times. Important examples of such programs include graphics algorithms, matrix operations (which are used in signal processing and image processing applications), particle simulation, and other scientific applications. Pipelining uses overlapped parallelism to reduce execution time efficiently.
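The overlap that pipelining exploits can be sketched with a simple schedule: with S stages and N iterations, a full pipeline finishes in N + S - 1 cycles rather than N * S. The stage/cycle bookkeeping below is a generic illustration, not the multiplexed-pipelining transformation itself.

```python
# Sketch of overlapped (pipelined) loop execution. Iteration i enters the
# pipeline at cycle i and occupies stage (c - i) in cycle c, so iterations
# overlap: 6 iterations of a 3-stage loop take 8 cycles, not 18.

def pipeline_schedule(n_iters, n_stages):
    """Return a list of cycles; each cycle lists the (iteration, stage)
    pairs that are active during that cycle."""
    cycles = []
    for c in range(n_iters + n_stages - 1):
        active = [(i, c - i) for i in range(n_iters) if 0 <= c - i < n_stages]
        cycles.append(active)
    return cycles

sched = pipeline_schedule(n_iters=6, n_stages=3)
```

Note that in any cycle each stage is used by at most one iteration, which is the resource constraint a loop transformation must respect when overlapping iterations.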