Physical Sciences and Mathematics Commons

Articles 1 - 8 of 8

Full-Text Articles in Physical Sciences and Mathematics

MPJ: MPI-Like Message Passing For Java, Bryan Carpenter, Vladimir Getov, Glenn Judd, Anthony Skjellum, Geoffrey C. Fox Jan 2000

Northeast Parallel Architecture Center

Recently, there has been considerable interest in using Java for parallel programming. Efforts have been hindered by the lack of standard Java parallel programming APIs. To alleviate this problem, various groups started projects to develop Java message-passing systems modeled on the successful Message Passing Interface (MPI). Official MPI bindings are currently defined only for C, Fortran, and C++, so early MPI-like environments for Java have been divergent. This paper relates an effort undertaken by a working group of the Java Grande Forum to reach consensus on an MPI-like API and so enhance the viability of parallel programming in Java.


mpiJava: An Object-Oriented Java Interface To MPI, Mark Baker, Bryan Carpenter, Geoffrey C. Fox, Sung Hoon Ko Jan 1999

Northeast Parallel Architecture Center

A basic prerequisite for parallel programming is a good communication API. The recent interest in using Java for scientific and engineering applications has led to several international efforts to produce a message-passing interface to support parallel computation. In this paper we describe and then discuss the syntax, functionality and performance of one such interface, mpiJava, an object-oriented Java interface to MPI. We first discuss the design of the mpiJava API and the issues associated with its development. We then briefly outline the steps necessary to 'port' mpiJava onto a range of operating systems, including Windows NT, …
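
The abstract refers to the syntax of mpiJava without reproducing it, so the following is only an illustrative sketch of the kind of point-to-point exchange such an interface supports. The class and method names (MPI.Init, COMM_WORLD, Send, Recv) follow the general mpiJava convention of mirroring the standard MPI bindings, but the exact signatures are assumptions that should be checked against the paper and the mpiJava specification.

    import mpi.*;   // mpiJava package; assumed to be available on the classpath

    public class HelloMPI {
        public static void main(String[] args) throws Exception {
            MPI.Init(args);                       // start the MPI environment
            int rank = MPI.COMM_WORLD.Rank();     // this process's id
            int size = MPI.COMM_WORLD.Size();     // total number of processes

            int[] buf = new int[1];
            if (rank == 0) {
                buf[0] = 42;
                // send one int from rank 0 to rank 1 with message tag 99
                MPI.COMM_WORLD.Send(buf, 0, 1, MPI.INT, 1, 99);
            } else if (rank == 1) {
                MPI.COMM_WORLD.Recv(buf, 0, 1, MPI.INT, 0, 99);
                System.out.println("rank 1 of " + size + " received " + buf[0]);
            }
            MPI.Finalize();                       // shut down cleanly
        }
    }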


Thoughts On The Structure Of An MPJ Reference Implementation, Mark Baker, Bryan Carpenter Jan 1999

Northeast Parallel Architecture Center

We sketch a proposed reference implementation for MPJ, the Java Grande Forum's MPI-like message-passing API [9, 3]. The proposal relies heavily on RMI and Jini for finding computational resources, creating slave processes, and handling failures. User-level communication is implemented efficiently, directly on top of Java sockets.
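
The abstract only names the building blocks (RMI and Jini for resource discovery and process management, Java sockets for data transfer). Purely as a sketch of what user-level communication built directly on Java sockets can look like, and not as a description of the actual MPJ device layer, here is a minimal framed message channel; the class name, framing format, and method names are invented for illustration.

    import java.io.*;
    import java.net.*;

    // Sketch of a user-level message channel over a Java socket:
    // each message is framed as a tag, a length, and a byte payload.
    class SocketChannelSketch {
        private final DataInputStream in;
        private final DataOutputStream out;

        SocketChannelSketch(Socket s) throws IOException {
            s.setTcpNoDelay(true);    // avoid batching small messages
            in  = new DataInputStream(new BufferedInputStream(s.getInputStream()));
            out = new DataOutputStream(new BufferedOutputStream(s.getOutputStream()));
        }

        void send(int tag, byte[] payload) throws IOException {
            out.writeInt(tag);            // message tag
            out.writeInt(payload.length); // payload length in bytes
            out.write(payload);           // payload itself
            out.flush();
        }

        byte[] recv(int expectedTag) throws IOException {
            int tag = in.readInt();
            int len = in.readInt();
            byte[] payload = new byte[len];
            in.readFully(payload);
            if (tag != expectedTag)
                throw new IOException("unexpected tag " + tag);
            return payload;
        }
    }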


mpiJava 1.2: API Specification, Bryan Carpenter, Geoffrey C. Fox, Sung-Hoon Ko, Sang Lim Jan 1999

Northeast Parallel Architecture Center

This document defines the API of mpiJava, a Java language binding for MPI 1.1. The document is not a standalone specification of the behaviour of MPI--it is meant to be read in conjunction with the MPI standard document [2]. Subsections are laid out in the same way as in the standard document, to allow cross-referencing. Where the mpiJava binding makes no significant change to a particular section of the standard document, we will just note here that there are no special issues for the Java binding. This does not mean that the corresponding section of the standard is irrelevant to …
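
As an illustration of the style of binding such a specification defines, here is a hedged sketch of a collective reduction written in the mpiJava idiom. The offset-and-count argument layout and the names MPI.DOUBLE and MPI.SUM are assumptions to be confirmed against the specification document itself.

    import mpi.*;   // mpiJava package, per the mpiJava 1.2 specification

    public class SumExample {
        public static void main(String[] args) throws Exception {
            MPI.Init(args);
            int rank = MPI.COMM_WORLD.Rank();

            // Each process contributes one value; rank 0 receives the global sum.
            double[] mine  = { rank + 1.0 };
            double[] total = new double[1];
            MPI.COMM_WORLD.Reduce(mine, 0, total, 0, 1, MPI.DOUBLE, MPI.SUM, 0);

            if (rank == 0)
                System.out.println("sum over all ranks = " + total[0]);
            MPI.Finalize();
        }
    }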


A Multithreaded Message-Passing System For High Performance Distributed Computing Applications, Sung-Yong Park, Joohan Lee, Salim Hariri Jan 1998

Electrical Engineering and Computer Science - All Scholarship

High Performance Distributed Computing (HPDC) applications require low-latency, high-throughput communication services, and different applications have different Quality of Service (QoS) requirements (e.g., bandwidth, flow/error-control algorithms). The communication services provided by traditional message-passing systems are fixed and thus cannot be changed to meet the requirements of different HPDC applications. NYNET (an ATM wide-area network testbed in New York state) Communication System (NCS) is a multithreaded message-passing system developed at Syracuse University that provides high-performance and flexible communication services. In this paper, we give an overview of the general architecture of NCS and describe how its communication services are implemented. …
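
The abstract does not describe NCS's internals, so the following is only a generic sketch of how a multithreaded message-passing runtime can overlap the receiving and handling of messages; all class and method names are invented and do not come from NCS.

    import java.util.concurrent.*;

    // Illustrative sketch: a dedicated receiver thread takes incoming messages
    // off a queue and hands them to a pool of worker threads, so communication
    // and message handling can proceed concurrently.
    class MultithreadedReceiverSketch {
        private final BlockingQueue<byte[]> inbox = new LinkedBlockingQueue<>();
        private final ExecutorService handlers = Executors.newFixedThreadPool(4);

        // Called by the transport layer when a message arrives.
        void deliver(byte[] message) {
            inbox.add(message);
        }

        void start() {
            Thread receiver = new Thread(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        byte[] msg = inbox.take();           // block until a message arrives
                        handlers.submit(() -> handle(msg));  // process it on a worker thread
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            receiver.setDaemon(true);
            receiver.start();
        }

        private void handle(byte[] msg) {
            System.out.println("handled message of " + msg.length + " bytes");
        }
    }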


Standardization Of A Communication Middleware For High-Performance Real-Time Systems, Arkady Kanevsky, Anthony Skjellum, Jerrell Watts Jan 1997

Electrical Engineering and Computer Science - All Scholarship

The last several years have seen the emergence of standardization activities for real-time systems, including standardization of operating systems (the POSIX series of standards [1]), of communication for distributed (POSIX.21 [10]) and parallel (MPI/RT [5]) systems, and of real-time object management (real-time CORBA [9]). This article describes the ongoing standardization work and implementation of communication middleware for high-performance real-time computing. The real-time message passing interface (MPI/RT) advances the non-real-time high-performance communication standard, the Message Passing Interface (MPI), emphasizing changes that enable and support real-time communication, and is targeted at embedded, fault-tolerant and other real-time systems. MPI/RT is the only communication middleware layer …


A Library-Based Approach To Task Parallelism In A Data-Parallel Language, Ian Foster, David R. Kohr, Rakesh Krishnaiyer, Alok Choudhary Jan 1996

College of Engineering and Computer Science - Former Departments, Centers, Institutes and Projects

The data-parallel language High Performance Fortran (HPF) does not allow efficient expression of mixed task/data-parallel computations or the coupling of separately compiled data-parallel modules. In this paper, we show how these common parallel program structures can be represented, with only minor extensions to the HPF model, by using a coordination library based on the Message Passing Interface (MPI). This library allows data-parallel tasks to exchange distributed data structures using calls to simple communication functions. We present microbenchmark results that characterize the performance of this library and that quantify the impact of optimizations that allow reuse of communication schedules in common …


The Design And Evolution Of Zipcode, Anthony Skjellum, Steven G. Smith, Nathan E. Doss, Alvin Leung Jan 1994

Northeast Parallel Architecture Center

Zipcode is a message-passing and process-management system that was designed for multicomputers and homogeneous networks of computers in order to support libraries and large-scale multicomputer software. The system has evolved significantly over the last five years, based on our experiences and identified needs. Features of Zipcode that were originally unique to it were its simultaneous support of static process groups, communication contexts, and virtual topologies, which together form the "mailer" data structure. Point-to-point and collective operations reference the underlying group and use contexts to avoid mixing up messages. Recently, we have added "gather-send" and "receive-scatter" semantics, based on persistent Zipcode "invoices," both …