Open Access. Powered by Scholars. Published by Universities.®
Physical Sciences and Mathematics Commons™
- Keyword
- Data mining (6)
- Information Theory (6)
- Multivariate analysis -- Discrete multivariate modeling (6)
- Probabilistic graphical modeling (6)
- Reconstructability Analysis (6)
- Finite element method (4)
- Galerkin methods (3)
- Inequalities (Mathematics) (3)
- Numerical analysis (3)
- Algorithms (2)
- Bayesian field theory (2)
- Computer vision (2)
- Discontinuous functions (2)
- Machine learning (2)
- Attractors (Mathematics) (1)
- Autocorrelation (Statistics) (1)
- Boolean dynamics (1)
- Boundary value problems (1)
- Cluster analysis -- Algorithms (1)
- Cost of medical care -- Models (1)
- Data structures (Statistics) (1)
- Diameter (Geometry) (1)
- Dirichlet problem (1)
- Dislocation density (1)
- Dislocations in crystals (1)
- Finite differences (1)
- Forest management (1)
- Gaussian processes (1)
- Geographic information systems (1)
- Harmonic functions (1)
- Publication
- Mathematics and Statistics Faculty Publications and Presentations (10)
- Portland Institute for Computational Science Publications (6)
- Systems Science Faculty Publications and Presentations (6)
- Dissertations and Theses (1)
- Engineering and Technology Management Faculty Publications and Presentations (1)
Articles 1 - 24 of 24
Full-Text Articles in Physical Sciences and Mathematics
Dispersion Analysis Of HDG Methods, Jay Gopalakrishnan, Manuel Solano, Felipe Vargas
Mathematics and Statistics Faculty Publications and Presentations
This work presents a dispersion analysis of the Hybrid Discontinuous Galerkin (HDG) method. Considering the Helmholtz system, we quantify the discrepancies between the exact and discrete wavenumbers. In particular, we obtain an analytic expansion of the wavenumber error for the lowest order Single Face HDG (SFH) method. The expansion shows that the SFH method exhibits wavenumber-error convergence rates comparable to those of the mixed hybrid Raviart–Thomas method. In addition, we observe the same behavior for the higher order cases in numerical experiments.
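The kind of wavenumber-error analysis described above can be illustrated on a much simpler scheme. The sketch below is not the HDG method itself: it applies the standard central-difference discretization to the 1-D Helmholtz operator, derives the discrete wavenumber from its dispersion relation, and checks the second-order convergence of the wavenumber error numerically.

```python
import math

# Applying the central difference to the plane wave exp(i k x) gives the
# discrete dispersion relation  k_h^2 = 2 (1 - cos(k h)) / h^2,
# so the discrete wavenumber error |k_h - k| is O(h^2) for this scheme.
def discrete_wavenumber(k, h):
    return math.sqrt(2.0 * (1.0 - math.cos(k * h))) / h

k = 2.0 * math.pi
errors = [abs(discrete_wavenumber(k, h) - k) for h in (0.1, 0.05, 0.025)]
rates = [math.log2(errors[j] / errors[j + 1]) for j in range(2)]
print(rates)  # each observed rate is close to 2 (second-order accuracy)
```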
Enhancing Value-Based Healthcare With Reconstructability Analysis: Predicting Cost Of Care In Total Hip Replacement, Cecily Corrine Froemke, Martin Zwick
Systems Science Faculty Publications and Presentations
Legislative reforms aimed at slowing the growth of US healthcare costs are focused on achieving greater value per dollar. To increase value, healthcare providers must not only provide high quality care, but deliver this care at a sustainable cost. Predicting risks that may lead to poor outcomes and higher costs enables providers to augment decision making for optimizing patient care and informs the risk stratification necessary in emerging reimbursement models. Healthcare delivery systems are looking at their high volume service lines and identifying variation in cost and outcomes in order to determine the patient factors that are driving this variation and …
The DPG-Star Method, Leszek Demkowicz, Jay Gopalakrishnan, Brendan Keith
Portland Institute for Computational Science Publications
This article introduces the DPG-star (from now on, denoted DPG*) finite element method. It is a method that is in some sense dual to the discontinuous Petrov–Galerkin (DPG) method. The DPG methodology can be viewed as a means to solve an overdetermined discretization of a boundary value problem. In the same vein, the DPG* methodology is a means to solve an underdetermined discretization. These two viewpoints are developed by embedding the same operator equation into two different saddle-point problems. The analyses of the two problems have many common elements. Comparisons to other methods in the literature round out the newly …
Spatial Factor Models For High-Dimensional And Large Spatial Data: An Application In Forest Variable Mapping, Daniel Taylor-Rodríguez, Andrew O. Finley, Abhirup Datta, Chad Babcock, Hans-Erik Andersen, Bruce D. Cook, Douglas C. Morton, Sudipto Banerjee
Mathematics and Statistics Faculty Publications and Presentations
Gathering information about forest variables is an expensive and arduous activity. As such, directly collecting the data required to produce high-resolution maps over large spatial domains is infeasible. Next generation collection initiatives of remotely sensed Light Detection and Ranging (LiDAR) data are specifically aimed at producing complete-coverage maps over large spatial domains. Given that LiDAR data and forest characteristics are often strongly correlated, it is possible to make use of the former to model, predict, and map forest variables over regions of interest. This entails dealing with the high-dimensional (∼10²) spatially dependent LiDAR outcomes over a large number …
Keyword-Based Patent Citation Prediction Via Information Theory, Farshad Madani, Martin Zwick, Tugrul U. Daim
Engineering and Technology Management Faculty Publications and Presentations
Patent citation shows how a technology impacts other inventions, so the number of patent citations (backward citations) is used in many technology prediction studies. Current prediction methods use patent citations, but since it may take a long time until a patent is cited by other inventors, identifying impactful patents from their citations alone is not effective. The prediction method offered in this article predicts patent citations based on the content of patents. In this research, Reconstructability Analysis (RA), which is based on information theory and graph theory, is applied to predict patent citations based on keywords extracted from …
A Spacetime DPG Method For The Wave Equation In Multiple Dimensions, Jay Gopalakrishnan, Paulina Sepulveda
Portland Institute for Computational Science Publications
A spacetime discontinuous Petrov-Galerkin (DPG) method for the linear wave equation is presented. This method is based on a weak formulation that uses a broken graph space. The well-posedness of this formulation is established using a previously presented abstract framework. One of the main tasks in the verification of the conditions of this framework is proving a density result. This is done in detail for a simple domain in arbitrary dimensions. The DPG method based on the weak formulation is then studied theoretically and numerically. Error estimates and numerical results are presented for triangular, rectangular, tetrahedral, and hexahedral meshes of …
Ideals, Big Varieties, And Dynamic Networks, Ian H. Dinwoodie
Mathematics and Statistics Faculty Publications and Presentations
The advantage of using algebraic geometry over enumeration for describing sets related to attractors in large dynamic networks from biology is advocated. Examples illustrate the gains.
Connection And Curvature In Crystals With Non-Constant Dislocation Density, Marek Z. Elżanowski, Gareth P. Parry
Mathematics and Statistics Faculty Publications and Presentations
Given a smooth defective solid crystalline structure defined by linearly independent ‘lattice’ vector fields, the Burgers vector construction characterizes some aspect of the ‘defectiveness’ of the crystal by virtue of its interpretation in terms of the closure failure of appropriately defined paths in the material, and this construction partly determines the distribution of dislocations in the crystal. In the case that the topology of the body manifold M is trivial (e.g., a smooth crystal defined on an open set in R²), it would seem at first glance that there is no corresponding construction that leads to the notion of a …
Preliminary Results Of Bayesian Networks And Reconstructability Analysis Applied To The Electric Grid, Marcus Harris, Martin Zwick
Systems Science Faculty Publications and Presentations
Reconstructability Analysis (RA) is an analytical approach developed in the systems community that combines graph theory and information theory. Graph theory provides the structure of relations (model of the data) between variables and information theory characterizes the strength and the nature of the relations. RA has three primary approaches to model data: variable based (VB) models without loops (acyclic graphs), VB models with loops (cyclic graphs) and state-based models (nearly always cyclic, individual states specifying model constraints). These models can either be directed or neutral. Directed models focus on a single response variable whereas neutral models focus on all relations …
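The information-theoretic side of RA can be illustrated with the transmission (mutual information) between two discrete variables, a toy version of the strength measure mentioned above. The `transmission` helper below is an illustration written for this listing, not part of any RA software package.

```python
import math
from collections import Counter

def transmission(pairs):
    """Mutual information T(A:B) in bits between two discrete variables,
    estimated from a list of (a, b) observations."""
    n = len(pairs)
    pab = Counter(pairs)                    # joint frequencies
    pa = Counter(a for a, _ in pairs)       # marginal of A
    pb = Counter(b for _, b in pairs)       # marginal of B
    t = 0.0
    for (a, b), c in pab.items():
        p = c / n
        # p(a,b) * log2( p(a,b) / (p(a) p(b)) )
        t += p * math.log2(p * n * n / (pa[a] * pb[b]))
    return t

# Perfectly dependent binary variables carry 1 bit of transmission.
data = [(0, 0), (1, 1)] * 50
print(transmission(data))  # → 1.0
```

A transmission near zero would indicate that the relation between the two variables decomposes losslessly into their independent margins, which is the kind of question RA asks of higher-ordinality relations.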
Beyond Spatial Autocorrelation: A Novel Approach Using Reconstructability Analysis, David Percy, Martin Zwick
Systems Science Faculty Publications and Presentations
Raster data are digital representations of spatial phenomena organized into rows and columns of cells that typically have the same dimensions in each direction. They are used to represent image data at any scale. Common examples of raster data are medical images, satellite data, and photos generated by modern smartphones.
Satellites capture reflectance data in specific bands of wavelength that correspond to red, green, blue, and often some infrared and thermal bands. These composite vectors can then be classified into actual land use categories such as forest or water using automated techniques. These classifications are verified on the ground using hand-held sensors. …
Reconstructability & Dynamics Of Elementary Cellular Automata, Martin Zwick
Systems Science Faculty Publications and Presentations
Reconstructability analysis (RA) is a method to determine whether a multivariate relation, defined set- or information-theoretically, is decomposable with or without loss into lower ordinality relations. Set-theoretic RA (SRA) is used to characterize the mappings of elementary cellular automata. The decomposition possible for each mapping without loss is a better predictor of chaos than the λ parameter (Walker & Ashby, Langton), and non-decomposable mappings tend to produce chaos. SRA yields not only the simplest lossless structure but also a vector of losses for all structures, indexed by the parameter τ. These losses are analogous to transmissions in information-theoretic RA (IRA). IRA …
Introduction To Reconstructability Analysis, Martin Zwick
Systems Science Faculty Publications and Presentations
This talk will introduce Reconstructability Analysis (RA), a data modeling methodology deriving from the 1960s work of Ross Ashby and developed in the systems community in the 1980s and afterwards. RA, based on information theory and graph theory, is a member of the family of methods known as ‘graphical models,’ which also include Bayesian networks and log-linear techniques. It is designed for exploratory modeling, although it can also be used for confirmatory hypothesis testing. RA can discover high ordinality and nonlinear interactions that are not hypothesized in advance. Its conceptual framework illuminates the relationships between wholes and parts, a subject …
Cox Processes For Counting By Detection, Purnima Rajan, Yongming Ma, Bruno Jedynak
Portland Institute for Computational Science Publications
In this work, doubly stochastic Poisson (Cox) processes and convolutional neural net (CNN) classifiers are used to estimate the number of instances of an object in an image. Poisson processes are well suited to model events that occur randomly in space, such as the location of objects in an image or the enumeration of objects in a scene. The proposed algorithm selects a subset of bounding boxes in the image domain, then queries them for the presence of the object of interest by running a pre-trained CNN classifier. The resulting observations are then aggregated, and a posterior distribution over the …
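Under conjugacy assumptions, the aggregation step described above has a simple closed form. The sketch below replaces the paper's spatial Cox process and CNN classifier with a plain Gamma-Poisson model and hypothetical per-region detection counts, only to show how observed counts update a posterior over the object rate; it is not the authors' algorithm.

```python
# If the count in each of m equally sized regions is Poisson with an
# unknown rate, a Gamma(alpha, beta) prior on that rate is conjugate:
# after observing counts c_1..c_m the posterior is
# Gamma(alpha + sum(c), beta + m).
def gamma_poisson_update(alpha, beta, counts):
    return alpha + sum(counts), beta + len(counts)

alpha0, beta0 = 1.0, 1.0       # weak prior on the rate
detections = [3, 4, 2, 5, 4]   # hypothetical per-region detection counts
a, b = gamma_poisson_update(alpha0, beta0, detections)
print(a / b)                   # posterior mean rate → 19/6 ≈ 3.167
```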
Space-Time Discretizations Using Constrained First-Order System Least Squares (CFOSLS), Kirill Voronin, Chak Shing Lee, Martin Neumüller, Paulina Sepulveda, Panayot S. Vassilevski
Portland Institute for Computational Science Publications
This paper studies finite element discretizations for three types of time-dependent PDEs, namely the heat equation, scalar conservation law, and wave equation, which we reformulate as first order systems in a least-squares setting subject to a space-time conservation constraint (coming from the original PDE). Available piecewise polynomial finite element spaces in (n + 1) dimensions for functional spaces from the (n + 1)-dimensional de Rham sequence for n = 3, 4 are used for the implementation of the method. Computational results illustrating the error behavior, iteration counts and performance of block-diagonal and monolithic geometric multigrid preconditioners are …
Gaussian Processes With Context-Supported Priors For Active Object Localization, Bruno Jedynak
Portland Institute for Computational Science Publications
We devise an algorithm using a Bayesian optimization framework in conjunction with contextual visual data for the efficient localization of objects in still images. Recent research has demonstrated substantial progress in object localization and related tasks for computer vision. However, many current state-of-the-art object localization procedures still suffer from inaccuracy and inefficiency, in addition to failing to provide a principled and interpretable system amenable to high-level vision tasks. We address these issues with the current research.
Our method encompasses an active search procedure that uses contextual data to generate initial bounding-box proposals for a target object. We train a convolutional …
A New Finite Difference Time Domain Method To Solve Maxwell's Equations, Timothy P. Meagher
Dissertations and Theses
We have constructed a new finite-difference time-domain (FDTD) method in this project. Our new algorithm focuses on the most important and more challenging transverse electric (TE) case. In this case, the electric field is discontinuous across the interface between different dielectric media. We use an electric permittivity that stays constant in each medium, and a magnetic permittivity that is constant in the whole domain. To handle the interface between different media, we introduce new effective permittivities that incorporate the electromagnetic field boundary conditions. That is, across the interface between two different media, the tangential component, Er(x,y), …
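For orientation, the standard 1-D Yee leapfrog update that FDTD methods build on can be sketched as follows. This toy vacuum version in normalized units omits the interface treatment and effective permittivities that are the thesis's actual contribution; the grid size, step count, and source location are arbitrary choices for illustration.

```python
import math

def fdtd_1d(nx=200, nt=400, courant=0.5):
    """Leapfrog Yee update in 1-D vacuum (normalized units):
    E and H live on staggered grids and are advanced alternately."""
    ez = [0.0] * nx          # electric field at integer grid points
    hy = [0.0] * (nx - 1)    # magnetic field at half grid points
    for n in range(nt):
        for i in range(nx - 1):        # H update from the curl of E
            hy[i] += courant * (ez[i + 1] - ez[i])
        for i in range(1, nx - 1):     # E update from the curl of H
            ez[i] += courant * (hy[i] - hy[i - 1])
        # soft Gaussian source injected at one grid point
        ez[nx // 4] += math.exp(-((n - 30) ** 2) / 100.0)
    return ez

field = fdtd_1d()
print(max(abs(v) for v in field) > 0)  # the pulse has entered the grid
```

With a Courant number of 0.5 the scheme is within its 1-D stability limit, so the field values stay bounded.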
The Auxiliary Space Preconditioner For The De Rham Complex, Jay Gopalakrishnan, Martin Neumüller, Panayot S. Vassilevski
Portland Institute for Computational Science Publications
We generalize the construction and analysis of auxiliary space preconditioners to the n-dimensional finite element subcomplex of the de Rham complex. These preconditioners are based on a generalization of a decomposition of Sobolev space functions into a regular part and a potential. A discrete version is easily established using the tools of finite element exterior calculus. We then discuss the four-dimensional de Rham complex in detail. By identifying forms in four dimensions (4D) with simple proxies, form operations are written out in terms of familiar algebraic operations on matrices, vectors, and scalars. This provides the basis for our implementation of …
A New Method For Multi-Bit And Qudit Transfer Based On Commensurate Waveguide Arrays, Jovan Petrovic, J. J. P. Veerman
Mathematics and Statistics Faculty Publications and Presentations
Faithful state transfer is an important requirement in the construction of classical and quantum computers. While high-speed transfer is realized by optical-fibre interconnects, its implementation in integrated optical circuits is affected by cross-talk. The cross-talk between densely packed optical waveguides limits the transfer fidelity and distorts the signal in each channel, thus severely impeding the parallel transfer of states such as classical registers, multiple qubits and qudits. Here, we leverage suitably engineered cross-talk between waveguides to achieve parallel transfer on an optical chip. Waveguide coupling coefficients are designed to yield commensurate eigenvalues of the array and …
Statistical Analysis Of Network Change, Teresa D. Schmidt, Martin Zwick
Systems Science Faculty Publications and Presentations
Networks are rarely subjected to hypothesis tests for difference, but when they are inferred from datasets of independent observations statistical testing is feasible. To demonstrate, a healthcare provider network is tested for significant change after an intervention using Medicaid claims data. First, the network is inferred for each time period with (1) partial least squares (PLS) regression and (2) reconstructability analysis (RA). Second, network distance (i.e., change between time periods) is measured as the mean absolute difference in (1) coefficient matrices for PLS and (2) calculated probability distributions for RA. Third, the network distance is compared against a reference distribution …
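The three steps above can be sketched with a toy permutation test. Here `infer_slope` is a hypothetical stand-in for the PLS or RA network inference (it fits a single least-squares coefficient and returns it as a 1x1 "coefficient matrix"), and the reference distribution is built by permuting observations across the two periods.

```python
import random

def infer_slope(obs):
    """Hypothetical network inference: one least-squares coefficient
    from (x, y) observations, returned as a 1x1 coefficient matrix."""
    num = sum(x * y for x, y in obs)
    den = sum(x * x for x, _ in obs)
    return [[num / den]]

def network_distance(m1, m2):
    """Step 2: mean absolute difference between coefficient matrices."""
    cells = [(a, b) for r1, r2 in zip(m1, m2) for a, b in zip(r1, r2)]
    return sum(abs(a - b) for a, b in cells) / len(cells)

def permutation_pvalue(obs1, obs2, infer, n_perm=500, seed=0):
    """Step 3: compare the observed distance to a reference
    distribution obtained by permuting observations across periods."""
    rng = random.Random(seed)
    d_obs = network_distance(infer(obs1), infer(obs2))
    pooled = list(obs1) + list(obs2)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        d = network_distance(infer(pooled[:len(obs1)]),
                             infer(pooled[len(obs1):]))
        if d >= d_obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)

before = [(x, float(x)) for x in range(1, 21)]    # slope +1
after = [(x, float(-x)) for x in range(1, 21)]    # slope -1: a real change
print(permutation_pvalue(before, after, infer_slope))  # small p-value
```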
Clustering And Multifacility Location With Constraints Via Distance Function Penalty Methods And DC Programming, Mau Nam Nguyen, Thai An Nguyen, Sam Reynolds, Tuyen Tran
Mathematics and Statistics Faculty Publications and Presentations
This paper is a continuation of our effort in using mathematical optimization involving DC programming for clustering and multifacility location. We study a penalty method based on distance functions and apply it to a number of problems in clustering and multifacility location in which the centers to be found must lie in given constraint sets. We also provide several numerical examples to test our method.
On The Girth And Diameter Of Generalized Johnson Graphs, Louis Anthony Agong, Carmen Amarra, John Caughman, Ari J. Herman, Taiyo S. Terada
Mathematics and Statistics Faculty Publications and Presentations
Let v > k > i be non-negative integers. The generalized Johnson graph, J(v,k,i), is the graph whose vertices are the k-subsets of a v-set, where vertices A and B are adjacent whenever |A∩B|= i. In this article, we derive general formulas for the girth and diameter of J(v,k,i). Additionally, we provide a formula for the distance between any two vertices A and B in terms of the cardinality of their intersection.
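The definition above is easy to turn into code. A minimal brute-force sketch builds J(v,k,i) and computes its diameter by breadth-first search; as a check, J(5,2,0) is the Petersen graph, which has 10 vertices and diameter 2.

```python
from itertools import combinations
from collections import deque

def generalized_johnson_graph(v, k, i):
    """Build J(v,k,i): vertices are the k-subsets of {0,...,v-1};
    A and B are adjacent whenever |A ∩ B| = i."""
    verts = [frozenset(c) for c in combinations(range(v), k)]
    adj = {A: [B for B in verts if B != A and len(A & B) == i]
           for A in verts}
    return verts, adj

def eccentricity(adj, src):
    """Largest BFS distance from src to any other vertex."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return max(dist.values())

def diameter(verts, adj):
    return max(eccentricity(adj, A) for A in verts)

verts, adj = generalized_johnson_graph(5, 2, 0)  # the Petersen graph
print(len(verts), diameter(verts, adj))  # → 10 2
```

This brute-force construction is exponential in v, so it is only practical for small parameters; the article's formulas give girth and diameter directly.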
BootCMatch: A Software Package For Bootstrap AMG Based On Graph Weighted Matching, Pasqua D'Ambra, Salvatore Filippone, Panayot S. Vassilevski
Mathematics and Statistics Faculty Publications and Presentations
This article has two main objectives: one is to describe some extensions of an adaptive Algebraic Multigrid (AMG) method of the form previously proposed by the first and third authors, and the second is to present a new software framework, named BootCMatch, which implements all the components needed to build and apply the described adaptive AMG both as a stand-alone solver and as a preconditioner in a Krylov method. The adaptive AMG presented is meant to handle general symmetric positive definite (SPD) sparse linear systems, without assuming any a priori information about the problem and its origin; the …
Intensity Inhomogeneity Correction Of SD-OCT Data Using Macular Flatspace, Andrew Lang, Aaron Carass, Bruno M. Jedynak, Sharon D. Solomon, Peter A. Calabresi, Jerry L. Prince
Mathematics and Statistics Faculty Publications and Presentations
Images of the retina acquired using optical coherence tomography (OCT) often suffer from intensity inhomogeneity problems that degrade both the quality of the images and the performance of automated algorithms utilized to measure structural changes. This intensity variation has many causes, including off-axis acquisition, signal attenuation, multi-frame averaging, and vignetting, making it difficult to correct the data in a fundamental way. This paper presents a method for inhomogeneity correction by acting to reduce the variability of intensities within each layer. In particular, the N3 algorithm, which is popular in neuroimage analysis, is adapted to work for OCT data. N3 works …
High-Order Method For Evaluating Derivatives Of Harmonic Functions In Planar Domains, Jeffrey S. Ovall, Samuel E. Reynolds
Mathematics and Statistics Faculty Publications and Presentations
We propose a high-order integral equation based method for evaluating interior and boundary derivatives of harmonic functions in planar domains that are specified by their Dirichlet data.