Open Access. Powered by Scholars. Published by Universities.®

Systems Architecture Commons

Portland State University

Articles 1 - 30 of 68

Full-Text Articles in Systems Architecture

Occam Manual, Martin Zwick Jan 2021

Systems Science Faculty Publications and Presentations

Occam is a Discrete Multivariate Modeling (DMM) tool based on the methodology of Reconstructability Analysis (RA). It is typically used to analyze problems involving large numbers of discrete variables. Models consisting of one or more components are developed and then evaluated for fit and statistical significance. Occam can search the lattice of all possible models, or it can do detailed analysis on a specific model.

In Variable-Based Modeling (VBM), model components are collections of variables. In State-Based Modeling (SBM), components identify one or more specific states or substates.

Occam provides a web-based interface, which …
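
The model evaluation the manual describes can be pictured with a small example. The following sketch (plain Python/NumPy, not the Occam API; all names and data are illustrative) scores one candidate variable-based model of a three-variable distribution: it fits the maximum-entropy distribution matching the model's component margins via iterative proportional fitting, then reports the information lost relative to the data.

    # A minimal sketch of RA model evaluation, under stated assumptions.
    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def marginal(p, kept):
        # sum out every variable not named in the component `kept`
        return p.sum(axis=tuple(i for i in range(p.ndim) if i not in kept))

    def ipf(p_obs, components, iters=200):
        """Maximum-entropy fit matching the observed margins of each component."""
        q = np.full_like(p_obs, 1.0 / p_obs.size)
        for _ in range(iters):
            for comp in components:
                target, current = marginal(p_obs, comp), marginal(q, comp)
                ratio = np.where(current > 0,
                                 target / np.where(current > 0, current, 1), 0)
                shape = [s if i in comp else 1 for i, s in enumerate(q.shape)]
                q = q * ratio.reshape(shape)
        return q

    rng = np.random.default_rng(0)
    p = rng.random((2, 2, 2)); p /= p.sum()   # toy 3-variable data (A, B, C)
    q = ipf(p, [(0, 1), (1, 2)])              # candidate model AB:BC
    print("information lost (bits):", entropy(q) - entropy(p))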


Joint Lattice Of Reconstructability Analysis And Bayesian Network General Graphs, Marcus Harris, Martin Zwick Jul 2020

Systems Science Faculty Publications and Presentations

This paper integrates the structures considered in Reconstructability Analysis (RA) and those considered in Bayesian Networks (BN) into a joint lattice of probabilistic graphical models. This integration and associated lattice visualizations are done in this paper for four variables, but the approach can easily be expanded to more variables. The work builds on the RA work of Klir (1985), Krippendorff (1986), and Zwick (2001), and the BN work of Pearl (1985, 1987, 1988, 2000), Verma (1990), Heckerman (1994), Chickering (1995), Andersson (1997), and others. The RA four variable lattice and the BN four variable lattice partially overlap: there are ten …


Reconstructability Analysis & Its Occam Implementation, Martin Zwick Jul 2020

Systems Science Faculty Publications and Presentations

This talk will describe Reconstructability Analysis (RA), a probabilistic graphical modeling methodology deriving from the 1960s work of Ross Ashby and developed in the systems community in the 1980s and afterwards. RA, based on information theory and graph theory, resembles and partially overlaps Bayesian networks (BN) and log-linear techniques, but also has some unique capabilities. (A paper explaining the relationship between RA and BN will be given in this special session.) RA is designed for exploratory modeling although it can also be used for confirmatory hypothesis testing. In RA modeling, one either predicts some DV from a set of IVs …


Functional Programming For Systems Software: Implementing Baremetal Programs In Habit, Donovan Ellison Jul 2020

University Honors Theses

Programming in a baremetal environment, directly on top of hardware with very little to help manage memory or ensure safety, can be dangerous even for experienced programmers. Programming languages can ease the burden on developers and sometimes take care of entire sets of errors. This is not the case for a language like C that will do almost anything you want, for better or worse. To operate in a baremetal environment often requires direct control over memory, but it would be nice to have that capability without sacrificing safety guarantees. Rust is a new language that aims to fit this …


Enhancing Value-Based Healthcare With Reconstructability Analysis: Predicting Cost Of Care In Total Hip Replacement, Cecily Corrine Froemke, Martin Zwick Nov 2018

Systems Science Faculty Publications and Presentations

Legislative reforms aimed at slowing growth of US healthcare costs are focused on achieving greater value per dollar. To increase value, healthcare providers must not only provide high-quality care but also deliver that care at a sustainable cost. Predicting risks that may lead to poor outcomes and higher costs enables providers to augment decision making for optimizing patient care and informs the risk stratification necessary in emerging reimbursement models. Healthcare delivery systems are looking at their high-volume service lines and identifying variation in cost and outcomes in order to determine the patient factors that are driving this variation and …


Introduction To Reconstructability Analysis, Martin Zwick Jul 2018

Systems Science Faculty Publications and Presentations

This talk will introduce Reconstructability Analysis (RA), a data modeling methodology deriving from the 1960s work of Ross Ashby and developed in the systems community in the 1980s and afterwards. RA, based on information theory and graph theory, is a member of the family of methods known as ‘graphical models,’ which also include Bayesian networks and log-linear techniques. It is designed for exploratory modeling, although it can also be used for confirmatory hypothesis testing. RA can discover high ordinality and nonlinear interactions that are not hypothesized in advance. Its conceptual framework illuminates the relationships between wholes and parts, a subject …


Preliminary Results Of Bayesian Networks And Reconstructability Analysis Applied To The Electric Grid, Marcus Harris, Martin Zwick Jul 2018

Systems Science Faculty Publications and Presentations

Reconstructability Analysis (RA) is an analytical approach developed in the systems community that combines graph theory and information theory. Graph theory provides the structure of relations (the model of the data) between variables, and information theory characterizes the strength and the nature of the relations. RA has three primary approaches to modeling data: variable-based (VB) models without loops (acyclic graphs), VB models with loops (cyclic graphs), and state-based models (nearly always cyclic, with individual states specifying model constraints). These models can be either directed or neutral. Directed models focus on a single response variable whereas neutral models focus on all relations …


Beyond Spatial Autocorrelation: A Novel Approach Using Reconstructability Analysis, David Percy, Martin Zwick Jul 2018

Systems Science Faculty Publications and Presentations

Raster data are digital representations of spatial phenomena that are organized into rows and columns that typically have the same dimensions in each direction. They are used to represent image data at any scale. Common raster data are medical images, satellite data, and photos generated by modern smartphones.

Satellites capture reflectance data in specific bands of wavelength that correspond to red, green, blue, and often some infrared and thermal bands. These composite vectors can then be classified into actual land use categories such as forest or water using automated techniques. These classifications are verified on the ground using hand-held sensors. …
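
As a toy illustration of the classification step just described (hypothetical reflectance values and classes, not the paper's data or method), a nearest-centroid rule can assign each pixel's band vector to a land-use class learned from ground-truthed training pixels:

    # Illustrative only: band values, classes, and the classifier are made up.
    import numpy as np

    # training pixels: columns are (red, green, blue, near-infrared) reflectance
    train = {
        "forest": np.array([[0.05, 0.09, 0.04, 0.60], [0.06, 0.10, 0.05, 0.55]]),
        "water":  np.array([[0.02, 0.04, 0.06, 0.01], [0.03, 0.05, 0.07, 0.02]]),
    }
    centroids = {cls: px.mean(axis=0) for cls, px in train.items()}

    def classify(pixel):
        # assign the pixel to the class with the nearest band centroid
        return min(centroids, key=lambda c: np.linalg.norm(pixel - centroids[c]))

    scene = np.array([[0.05, 0.08, 0.05, 0.58], [0.02, 0.05, 0.06, 0.02]])
    print([classify(px) for px in scene])   # ['forest', 'water']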


Reconstructability & Dynamics Of Elementary Cellular Automata, Martin Zwick Jul 2018

Systems Science Faculty Publications and Presentations

Reconstructability analysis (RA) is a method to determine whether a multivariate relation, defined set- or information-theoretically, is decomposable with or without loss into lower-ordinality relations. Set-theoretic RA (SRA) is used to characterize the mappings of elementary cellular automata. The decomposition possible for each mapping without loss is a better predictor of chaos than the λ parameter (Walker & Ashby, Langton), and non-decomposable mappings tend to produce chaos. SRA yields not only the simplest lossless structure but also a vector of losses for all structures, indexed by the parameter τ. These losses are analogous to transmissions in information-theoretic RA (IRA). IRA …
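
For readers unfamiliar with the objects under analysis, the sketch below (plain Python, not the paper's SRA code) builds the eight-entry mapping of an elementary cellular automaton and computes Langton's λ, the fraction of neighborhoods mapped to the non-quiescent state:

    def rule_table(rule_number):
        """Map each (left, center, right) neighborhood to the next cell state."""
        return {(l, c, r): (rule_number >> (l * 4 + c * 2 + r)) & 1
                for l in (0, 1) for c in (0, 1) for r in (0, 1)}

    def langton_lambda(table, quiescent=0):
        return sum(1 for v in table.values() if v != quiescent) / len(table)

    def step(row, table):
        """Advance one time step on a circular lattice."""
        n = len(row)
        return [table[(row[i - 1], row[i], row[(i + 1) % n])] for i in range(n)]

    table = rule_table(110)          # rule 110, a classically complex mapping
    print(langton_lambda(table))     # 0.625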


Statistical Analysis Of Network Change, Teresa D. Schmidt, Martin Zwick Feb 2018

Systems Science Faculty Publications and Presentations

Networks are rarely subjected to hypothesis tests for difference, but when they are inferred from datasets of independent observations, statistical testing is feasible. To demonstrate, a healthcare provider network is tested for significant change after an intervention using Medicaid claims data. First, the network is inferred for each time period with (1) partial least squares (PLS) regression and (2) reconstructability analysis (RA). Second, network distance (i.e., change between time periods) is measured as the mean absolute difference in (1) coefficient matrices for PLS and (2) calculated probability distributions for RA. Third, the network distance is compared against a reference distribution …
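
The three steps read naturally as a permutation test. The sketch below is a minimal illustration, with a stand-in infer_network (a correlation matrix rather than the paper's PLS or RA inference); observation rows are assumed independent, as the abstract requires:

    import numpy as np

    def infer_network(obs):
        # stand-in for PLS or RA inference over independent observations
        return np.corrcoef(obs, rowvar=False)

    def distance(a, b):
        # mean absolute difference between the two inferred networks
        return np.abs(a - b).mean()

    def network_change_test(obs1, obs2, n_perm=1000, seed=0):
        rng = np.random.default_rng(seed)
        observed = distance(infer_network(obs1), infer_network(obs2))
        pooled, n1 = np.vstack([obs1, obs2]), len(obs1)
        count = 0
        for _ in range(n_perm):
            idx = rng.permutation(len(pooled))   # reshuffle period labels
            count += distance(infer_network(pooled[idx[:n1]]),
                              infer_network(pooled[idx[n1:]])) >= observed
        return observed, (count + 1) / (n_perm + 1)   # distance, p-value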


Ideas & Graphs, Martin Zwick Oct 2017

Systems Science Faculty Publications and Presentations

A graph can specify the skeletal structure of an idea, onto which meaning can be added by interpreting the structure.

This paper considers graphs (but not hypergraphs) consisting of four nodes, and suggests meanings that can be associated with several different directed and undirected graphs.

Drawing on Bennett's "systematics," specifically on the Tetrad that systematics offers as a model of 'activity,' the analysis here shows that the Tetrad is a versatile model of problem-solving, regulation and control, and other processes.


Mining Data On Traumatic Brain Injury With Reconstructability Analysis, Martin Zwick, Nancy Carney, Rosemary Nettleton Jan 2017

Systems Science Faculty Publications and Presentations

This paper reports the analysis of data on traumatic brain injury using a probabilistic graphical modeling technique known as reconstructability analysis (RA). The analysis shows the flexibility, power, and comprehensibility of RA modeling, which is well-suited for mining biomedical data. One finding of the analysis is that education is a confounding variable for the Digit Symbol Test in discriminating the severity of concussion; another - and anomalous - finding is that previous head injury predicts improved performance on the Reaction Time test. This analysis was exploratory, so its findings require follow-on confirmatory tests of their generalizability.


Exploratory Data Modeling Of Traumatic Brain Injury, Martin Zwick Jun 2015

Systems Science Faculty Publications and Presentations

A short presentation of an analysis of data from Dr. Megan Preece on traumatic brain injury, the first in a series of planned secondary analyses of multiple TBI data sets. The analysis employs the systems methodology of reconstructability analysis (RA), utilizing both variable-based and state-based models and both neutral and directed models. The presentation explains RA and illustrates the results it can obtain. Unlike the confirmatory approach standard to most data analyses, this methodology is designed for exploratory modeling. It thus allows the discovery of unanticipated associations among variables, including multi-variable interaction effects of unknown form. It offers the opportunity for …


Hardware/Software Interface Assurance With Conformance Checking, Li Lei Jun 2015

Dissertations and Theses

Hardware/Software (HW/SW) interfaces are pervasive in modern computer systems. Most HW/SW interfaces are implemented by devices and their device drivers. Unfortunately, HW/SW interfaces are unreliable and insecure due to their intrinsic complexity and error-prone nature. Moreover, assuring HW/SW interface reliability and security is challenging. First, at the post-silicon validation stage, HW/SW integration validation is largely an ad-hoc and time-consuming process. Second, at the system deployment stage, transient hardware failures and malicious attacks make HW/SW interfaces vulnerable even after intensive testing and validation. In this dissertation, we present a comprehensive solution for HW/SW interface assurance over the system life cycle. …


Using Acl2 To Verify Loop Pipelining In Behavioral Synthesis, Disha Puri, Sandip Ray, Kecheng Hao, Fei Xie Jan 2014

Civil and Environmental Engineering Faculty Publications and Presentations

Behavioral synthesis involves compiling an Electronic System-Level (ESL) design into its Register-Transfer Level (RTL) implementation. Loop pipelining is one of the most critical and complex transformations employed in behavioral synthesis. Certifying the loop pipelining algorithm is challenging because there is a huge semantic gap between the input sequential design and the output pipelined implementation, making it infeasible to verify their equivalence with automated sequential equivalence checking techniques. We discuss our ongoing effort using ACL2 to certify the loop pipelining transformation. The completion of the proof is work in progress. However, some of the insights developed so far may already be of …


On The Effect Of Heterogeneity On The Dynamics And Performance Of Dynamical Networks, Alireza Goudarzi Jan 2012

Dissertations and Theses

The high cost of processor fabrication plants and approaching physical limits have started a new wave of research in alternative computing paradigms. As an alternative to top-down manufactured silicon-based computers, research in computing directly with natural and physical systems has recently gained a great deal of interest. A branch of this research promotes the idea that any physical system with sufficiently complex dynamics is able to perform computation. The power of networks in representing complex interactions between many parts makes them a suitable choice for modeling physical systems. Many studies used networks with a homogeneous structure to describe the computational …


A Data-Descriptive Feedback Framework For Data Stream Management Systems, Rafael J. Fernández Moctezuma Jan 2012

Dissertations and Theses

Data Stream Management Systems (DSMSs) provide support for continuous query evaluation over data streams. Data streams provide processing challenges due to their unbounded nature and varying characteristics, such as rate and density fluctuations. DSMSs need to adapt stream processing to these changes within certain constraints, such as available computational resources and minimum latency requirements in producing results. The proposed research develops an inter-operator feedback framework, where opportunities for run-time adaptation of stream processing are expressed in terms of descriptions of substreams and actions applicable to the substreams, called feedback punctuations. Both the discovery of adaptation opportunities and the exploitation of …


The Basic Scheme For The Evaluation Of Functional Logic Programs, Arthur Peters Jan 2012

Dissertations and Theses

Functional logic languages provide a powerful programming paradigm combining the features of functional languages and logic languages. However, current implementations of functional logic languages are complex, slow, or both. This thesis presents a scheme, called the Basic Scheme, for compiling and executing functional logic languages based on non-deterministic graph rewriting. This thesis also describes the implementation and optimization of a prototype of the Basic Scheme. The prototype is simple and performs well compared to other current implementations.


Hardware Acceleration Of Inference Computing: The Numenta Htm Algorithm, Dan Hammerstrom May 2011

Systems Science Friday Noon Seminar Series

In this presentation I will describe the latest version of the Numenta HTM Cortical Learning Algorithm and why it is interesting for doing research into radical new computer architectures. Then I will discuss the hardware acceleration research we are doing, and briefly look at some preliminary applications development.


Generalized Construction Of Scalable Concurrent Data Structures Via Relativistic Programming, Josh Triplett, Paul E. Mckenney, Philip W. Howard, Jonathan Walpole Mar 2011

Computer Science Faculty Publications and Presentations

We present relativistic programming, a concurrent programming model based on shared addressing, which supports efficient, scalable operation on either uniform shared-memory or distributed shared-memory systems. Relativistic programming provides a strong causal ordering property, allowing a series of read operations to appear as an atomic transaction that occurs entirely between two ordered write operations. This preserves the simple immutable-memory programming model available via mutual exclusion or transactional memory. Furthermore, relativistic programming provides joint-access parallelism, allowing readers to run concurrently with a writer on the same data. We demonstrate a generalized construction technique for concurrent data structures based on relativistic programming, …


Scalable Correct Memory Ordering Via Relativistic Programming, Josh Triplett, Philip William Howard, Paul E. Mckenney, Jonathan Walpole Mar 2011

Computer Science Faculty Publications and Presentations

We propose and document a new concurrent programming model, relativistic programming. This model allows readers to run concurrently with writers, without blocking or using expensive synchronization. Relativistic programming builds on existing synchronization primitives that allow writers to wait for current readers to finish with minimal reader overhead. Our methodology models data structures as graphs, and reader algorithms as traversals of these graphs; from this foundation we show how writers can implement arbitrarily strong ordering guarantees for the visibility of their writes, up to and including total ordering.
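
The read/publish discipline can be suggested in a few lines. The following is a conceptual Python analogue, not the authors' implementation: readers follow a single published reference to an immutable snapshot and never block, while a writer builds a new version privately and publishes it with one atomic reference swap.

    import threading

    class RelativisticSet:
        """Conceptual analogue: readers are lock-free, writers are serialized."""
        def __init__(self):
            self._snapshot = frozenset()         # immutable, safe to share
            self._write_lock = threading.Lock()  # excludes writers, not readers

        def read(self):
            # one reference fetch; the object it names never mutates
            return self._snapshot

        def insert(self, item):
            with self._write_lock:
                # build the new version aside, then publish atomically
                self._snapshot = self._snapshot | {item}

    # Concurrent readers always see some consistent version, possibly one
    # write behind: the relativistic guarantee, without reader-side locks.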


A Comparison Of Relativistic And Reader-Writer Locking Approaches To Shared Data Access, Philip William Howard, Josh Triplett, Jonathan Walpole Feb 2011

Computer Science Faculty Publications and Presentations

This paper explores the relationship between reader-writer locking and relativistic programming approaches to managing accesses to shared data. It demonstrates that by placing certain restrictions on writers, relativistic programming allows more concurrency than reader-writer locking while still providing the same isolation guarantees. Relativistic programming also allows for a straightforward model for reasoning about the correctness of programs that allow concurrent read-write accesses.


The Ordering Requirements Of Relativistic And Reader-Writer Locking Approaches To Shared Data Access, Philip William Howard, Josh Triplett, Jonathan Walpole, Paul E. Mckenney Jan 2011

Computer Science Faculty Publications and Presentations

The semantics of reader-writer locks allow read-side concurrency. Unfortunately, the locking primitives serialize access to the lock variable to an extent that little or no concurrency is realized in practice for small critical sections. Relativistic programming is a methodology that also allows read-side concurrency. Relativistic programming uses different ordering constraints than reader-writer locking. The different ordering constraints allow relativistic readers to proceed without synchronization, so relativistic readers scale even for very short critical sections. In this paper we explore the differences between the ordering constraints for reader-writer locking and relativistic programs. We show how and why the different ordering …


Random Automata Networks: Why Playing Dice Is Not A Vice, Christof Teuscher Dec 2010

Systems Science Friday Noon Seminar Series

Random automata networks consist of a set of simple compute nodes interacting with each other. In this generic model, one or multiple model parameters, such as the node interactions and/or the compute functions, are chosen at random. Random Boolean Networks (RBNs) are a particular case of discrete dynamical automata networks where both time and states are discrete. While traditional RBNs are generally credited to Stuart Kauffman (1969), who introduced them as simplified models of gene regulation, Alan Turing proposed unorganized machines as early as 1948. In this talk I will start with Alan Turing's early work on unorganized machines, …


Scalable Event Tracking On High-End Parallel Systems, Kathryn Marie Mohror Jan 2010

Dissertations and Theses

Accurate performance analysis of high end systems requires event-based traces to correctly identify the root cause of a number of the complex performance problems that arise on these highly parallel systems. These high-end architectures contain tens to hundreds of thousands of processors, pushing application scalability challenges to new heights. Unfortunately, the collection of event-based data presents scalability challenges itself: the large volume of collected data increases tool overhead, and results in data files that are difficult to store and analyze. Our solution to these problems is a new measurement technique called trace profiling that collects the information needed to diagnose …


Pvw: Designing Virtual World Server Infrastructure, Francis Chang, C. Mic Bowman, Wu-Chi Feng Jan 2010

Computer Science Faculty Publications and Presentations

This paper presents a high-level overview of PVW (Partitioned Virtual Worlds), a distributed system architecture for the management of virtual worlds. PVW is designed to support arbitrarily large and complex virtual worlds while accommodating dynamic and highly variable user population and content distribution density. The PVW approach enables the task of simulating and managing the virtual world to be distributed over many servers by spatially partitioning the environment into a hierarchical structure. This structure is useful both for balancing the simulation load across many nodes and for features such as geometric simplification and distribution of dynamic content.
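
A hypothetical sketch of the hierarchical partitioning idea (the specific splitting policy is not from the paper): a quadtree region splits whenever its user population exceeds one server node's capacity, so densely populated areas are spread over more nodes.

    class Region:
        """One server-managed cell of the world; splits under load."""
        def __init__(self, x, y, size, capacity=64):
            self.x, self.y, self.size, self.capacity = x, y, size, capacity
            self.users, self.children = [], None

        def add_user(self, ux, uy):
            if self.children:
                self._child_for(ux, uy).add_user(ux, uy)
                return
            self.users.append((ux, uy))
            if len(self.users) > self.capacity:
                self._split()

        def _split(self):
            h = self.size / 2
            self.children = [Region(self.x + dx * h, self.y + dy * h, h,
                                    self.capacity)
                             for dx in (0, 1) for dy in (0, 1)]
            for ux, uy in self.users:            # re-home existing users
                self._child_for(ux, uy).add_user(ux, uy)
            self.users = []

        def _child_for(self, ux, uy):
            dx = int(ux >= self.x + self.size / 2)
            dy = int(uy >= self.y + self.size / 2)
            return self.children[dx * 2 + dy]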


Xpu: A Distributed Architecture For Metaverses, Francis Chang, C. Mic Bowman, Wu-Chi Feng Jan 2010

Computer Science Faculty Publications and Presentations

A significant problem of designing 3D virtual worlds (such as metaverses) is developing a scalable architecture that can manage millions of simultaneous users in an interactive 3D environment. This paper presents XPU (Extremely Partitioned Universe), a hierarchical client-server architecture for developing highly scalable metaverses. This design addresses the problem of dynamically partitioning the world to manage network and computing resources.


Is Parallel Programming Hard, And If So, Why?, Paul E. Mckenney, Maged M. Michael, Manish Gupta, Philip William Howard, Josh Triplett, Jonathan Walpole Feb 2009

Computer Science Faculty Publications and Presentations

Of the 200+ parallel-programming languages and environments created in the 1990s, almost all are now defunct. Given that parallel systems are now well within the budget of the typical hobbyist or graduate student, it is not unreasonable to expect a new cohort in excess of several thousand parallel languages and environments to appear in the 2010s. If this expected new cohort is to have more practical impact than did its 1990s counterpart, a robust and widely applicable framework will be required that encompasses exactly what, if anything, is hard about parallel programming. This paper revisits the fundamental precepts of concurrent …


A Pattern Language For Extensible Program Representation, Andrew P. Black, Daniel Vainsencher Oct 2006

Computer Science Faculty Publications and Presentations

For the last 15 years, implementors of multiple view programming environments have sought a single code model that would form a suitable basis for all of the program analyses and tools that might be applied to the code. They have been unsuccessful. The consequences are a tendency to build monolithic, single-purpose tools, each of which implements its own specialized analyses and optimized representation. This restricts the availability of the analyses, and also limits the reusability of the representation by other tools. Unintegrated tools also produce inconsistent views, which reduce the value of multiple views. This article describes a set of …


Application Of Information-Theoretic Data Mining Techniques In A National Ambulatory Practice Outcomes Research Network, Adam Wright, Thomas N. Ricciardi, Martin Zwick Oct 2005

Systems Science Faculty Publications and Presentations

The Medical Quality Improvement Consortium (MQIC) data warehouse contains de-identified data on more than 3.6 million patients, including their problem lists, test results, procedures, and medication lists. This study uses reconstructability analysis (RA), an information-theoretic data mining technique, on the MQIC data warehouse to empirically identify risk factors for various complications of diabetes, including myocardial infarction and microalbuminuria. The risk factors identified match those reported in the literature, demonstrating the utility of the MQIC data warehouse for outcomes research and of RA as a technique for mining clinical data warehouses.