Open Access. Powered by Scholars. Published by Universities.®


Articles 1 - 15 of 15

Full-Text Articles in Physical Sciences and Mathematics

χ2 Tests For The Choice Of The Regularization Parameter In Nonlinear Inverse Problems, J. L. Mead, C. C. Hammerquist Oct 2013


Jodi Mead

We address discrete nonlinear inverse problems with weighted least squares and Tikhonov regularization. Regularization is a way to add more information to the problem when it is ill-posed or ill-conditioned. However, it is still an open question as to how to weight this information. The discrepancy principle considers the residual norm to determine the regularization weight or parameter, while the χ2 method [J. Mead, J. Inverse Ill-Posed Probl., 16 (2008), pp. 175–194; J. Mead and R. A. Renaut, Inverse Problems, 25 (2009), 025002; J. Mead, Appl. Math. Comput., 219 (2013), pp. 5210–5223; R. A. Renaut, I. Hnetynkova, and J. L. …
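The discrepancy principle named here can be illustrated in the linear case: choose the regularization parameter so that the residual norm matches the noise level. Below is a minimal sketch under simplifying assumptions (linear Tikhonov problem, known noise norm, a hypothetical Vandermonde test matrix, and a plain grid search; the paper treats the nonlinear setting and the χ2 alternative):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Minimize ||Ax - b||^2 + lam^2 ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

def discrepancy_lambda(A, b, noise_norm, lams):
    """Grid search: pick the lam whose residual norm is closest to the noise level."""
    res = [np.linalg.norm(A @ tikhonov_solve(A, b, l) - b) for l in lams]
    return lams[int(np.argmin(np.abs(np.array(res) - noise_norm)))]

# Hypothetical ill-conditioned test problem (Vandermonde matrix, not from the paper).
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 20), 10, increasing=True)
x_true = rng.standard_normal(10)
noise = 1e-3 * rng.standard_normal(20)
b = A @ x_true + noise
lam = discrepancy_lambda(A, b, np.linalg.norm(noise), np.logspace(-8, 0, 50))
x_est = tikhonov_solve(A, b, lam)
```

The χ2 method discussed in the abstract replaces this residual-matching criterion with a statistical test on the optimal value of the cost functional itself.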


Discontinuous Parameter Estimates With Least Squares Estimators, J. L. Mead Jan 2013


We discuss weighted least squares estimates of ill-conditioned linear inverse problems where weights are chosen to be inverse error covariance matrices. The least squares estimator is the maximum likelihood estimate for normally distributed data and parameters, but here we do not assume particular probability distributions. Weights for the estimator are found by ensuring its minimum follows a χ2 distribution. Previous work with this approach has shown that it is competitive with regularization methods such as the L-curve and Generalized Cross Validation (GCV) [20]. In this work we extend the method to find diagonal weighting matrices, rather than a scalar regularization parameter. …
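The statistical fact underlying "ensuring its minimum follows a χ2 distribution" can be checked empirically in the simplest setting: for correctly weighted data noise, the weighted least squares cost at its minimizer follows a χ2 distribution with m − n degrees of freedom. A sketch with a hypothetical random test problem (i.i.d. noise with known sigma; the paper's setting is more general):

```python
import numpy as np

# Monte Carlo check: the weighted least squares cost at its minimum
# is chi^2 distributed with m - n degrees of freedom.
rng = np.random.default_rng(1)
m, n, sigma = 50, 5, 0.1
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)

mins = []
for _ in range(500):
    b = A @ x_true + sigma * rng.standard_normal(m)
    x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    r = A @ x_hat - b
    mins.append((r @ r) / sigma**2)   # weighted cost at the minimum

print(np.mean(mins))  # theory: E[chi^2_{m-n}] = m - n = 45
```

Turning this around — tuning unknown weights until the observed minimum is consistent with its χ2 expectation — is the idea the abstract extends to diagonal weighting matrices.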


Mathematics Colloquium: Inverse Problems And Uncertainty Quantification, Jodi Mead Apr 2012


Combining physical or mathematical models with observational data often results in an ill-posed inverse problem. Regularization is typically used to solve ill-posed problems, and it can be viewed as adding statistical or probability information to the problem in the form of uncertainties. These uncertainties occur in the model, parameters or measurements, and quantifying them is a challenge. The Bayesian interpretation of uncertainties formalizes the process of updating prior beliefs with observational data; however, it can be computationally demanding. Alternatively, we develop one of the simplest forms of regularization, least squares, to estimate prior uncertainty and call it the chi-squared method. …


A Priori Weighting For Parameter Estimation, Jodi Mead Jun 2011


We propose a new approach to weighting initial parameter misfits in a least squares optimization problem for linear parameter estimation. Parameter misfit weights are found by solving an optimization problem which ensures the penalty function has the properties of a χ2 random variable with n degrees of freedom, where n is the number of data. This approach differs from others in that weights found by the proposed algorithm vary along a diagonal matrix rather than remain constant. In addition, it is assumed that data and parameters are random, but not necessarily normally distributed. The proposed algorithm successfully solved three benchmark …


Accuracy, Resolution And Stability Properties Of A Modified Chebyshev Method, Jodi Mead, Rosemary A. Renaut Jul 2010


While the Chebyshev pseudospectral method provides a spectrally accurate method, integration of partial differential equations with spatial derivatives of order M requires time steps of approximately O(N−2M) for stable explicit solvers. Theoretically, time steps may be increased to O(N−M) with the use of an α-dependent mapped method introduced by Kosloff and Tal-Ezer [J. Comput. Phys., 104 (1993), pp. 457–469]. Our analysis focuses on the utilization of this method for reasonable practical choices for N, namely N ≲ 30, as may be needed for two- or three-dimensional modeling. Results presented confirm that spectral accuracy with increasing N is …
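The O(N−2) step-size restriction for first-order derivatives comes from entries and eigenvalues of the Chebyshev differentiation matrix growing like O(N2), which is what the Kosloff–Tal-Ezer map is designed to relax. A sketch of the standard (unmapped) matrix, following Trefethen's well-known construction, illustrating the O(N2) growth of the corner entry D[0,0] = (2N2 + 1)/6:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on the N+1 Gauss-Lobatto points
    x_j = cos(pi j / N) (Trefethen's construction)."""
    if N == 0:
        return np.zeros((1, 1)), np.ones(1)
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))   # diagonal entries via negative row sums
    return D, x

# The corner entry (2N^2 + 1)/6 grows like N^2, driving the O(N^-2)
# explicit time-step restriction that the mapped method relaxes.
for N in (8, 16, 32):
    D, x = cheb(N)
    print(N, D[0, 0])
```

The mapped method replaces the Gauss-Lobatto points by images under an α-dependent arcsine map, which spreads the clustered boundary nodes and reduces this growth; that variant is not implemented here.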


The Shallow Water Equations In Lagrangian Coordinates, J. L. Mead May 2010


Recent advances in the collection of Lagrangian data from the ocean and results about the well-posedness of the primitive equations have led to a renewed interest in solving flow equations in Lagrangian coordinates. We do not take the view that solving in Lagrangian coordinates equates to solving on a moving grid that can become twisted or distorted. Rather, the grid in Lagrangian coordinates represents the initial position of particles, and it does not change with time. However, using Lagrangian coordinates results in solving a highly nonlinear partial differential equation. The nonlinearity is mainly due to the Jacobian of the coordinate …


Towards Regional Assimilation Of Lagrangian Data: The Lagrangian Form Of The Shallow Water Reduced Gravity Model And Its Inverse, J. L. Mead, A. F. Bennett May 2010


Variational data assimilation for Lagrangian geophysical fluid dynamics is introduced. Lagrangian coordinates add numerical difficulties into an already difficult subject, but also offer certain distinct advantages over Eulerian coordinates. First, float position and depth are defined by linear measurement functionals. Second, Lagrangian or ‘comoving’ open domains are conveniently expressed in Lagrangian coordinates. The attraction of such open domains is that they lead to well-posed prediction problems [Bennett and Chua (1999)] and hence efficient inversion algorithms. Eulerian and Lagrangian solutions of the inviscid forward problem in a doubly periodic domain, with North Atlantic mesoscales, are compared and found to be in …


An Iterated Pseudospectral Method For Functional Partial Differential Equations, J. Mead, B. Zubik-Kowal May 2010


Chebyshev pseudospectral spatial discretization preconditioned by the Kosloff and Tal-Ezer transformation [10] is applied to hyperbolic and parabolic functional equations. A Jacobi waveform relaxation method is then applied to the resulting semi-discrete functional systems, and the result is a simple system of ordinary differential equations d/dt U^(k+1)(t) = M_α U^(k+1)(t) + f(t, U^k_t). Here M_α is a diagonal matrix, k is the index of the waveform relaxation iterations, U^k_t is a functional argument computed from the previous iterate, and the function f, like the matrix M_α, depends on the process of semi-discretization. This waveform relaxation splitting has the advantage of straightforward, direct application …
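The splitting described above can be demonstrated on a plain linear system u' = Au: take M = diag(A), integrate each waveform iterate over the whole time window, and feed the previous iterate into the off-diagonal coupling term. A minimal sketch (forward Euler in time and a hypothetical 2×2 system; the paper's setting involves functional delay arguments and the pseudospectral matrix M_α):

```python
import numpy as np

# Jacobi waveform relaxation for u' = A u, u(0) = u0:
# iterate d/dt u^{k+1} = M u^{k+1} + (A - M) u^k with M = diag(diag(A)),
# integrating each sweep over the whole time window.
A = np.array([[-2.0, 1.0], [1.0, -2.0]])
M = np.diag(np.diag(A))
N_off = A - M
u0 = np.array([1.0, 0.0])
T, steps = 1.0, 1000
dt = T / steps

u_prev = np.tile(u0, (steps + 1, 1))   # iterate 0: constant in time
for k in range(20):                    # waveform relaxation sweeps
    u = np.empty_like(u_prev)
    u[0] = u0
    for i in range(steps):             # forward Euler across the window
        u[i + 1] = u[i] + dt * (M @ u[i] + N_off @ u_prev[i])
    u_prev = u

# Exact solution from the eigenpairs (-1, [1,1]) and (-3, [1,-1]) of A.
u_exact = 0.5 * np.exp(-T) * np.array([1.0, 1.0]) \
        + 0.5 * np.exp(-3 * T) * np.array([1.0, -1.0])
print(u_prev[-1], u_exact)
```

Because the implicit part M is diagonal, each component's ODE decouples within a sweep, which is the property that makes the splitting attractive for parallel computation.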


Assimilation Of Simulated Float Data In Lagrangian Coordinates, J. L. Mead May 2010


We implement an approach for the accurate assimilation of Lagrangian data into regional general ocean circulation models. The forward model is expressed in Lagrangian coordinates and simulated float data are incorporated into the model via four dimensional variational data assimilation. We show that forward solutions computed in Lagrangian coordinates are reliable for time periods of up to 100 days with phase speeds of 1 m/s and deformation radius of 35 km. The position and depth of simulated floats are assimilated into the viscous, Lagrangian shallow water equations. The weights for the errors in the model and data are varied and …


Least Squares Problems With Inequality Constraints As Quadratic Constraints, Jodi Mead, Rosemary A. Renaut Apr 2010


Linear least squares problems with box constraints are commonly solved to find model parameters within bounds based on physical considerations. Common algorithms include Bounded Variable Least Squares (BVLS) and the Matlab function lsqlin. Here, the goal is to find solutions to ill-posed inverse problems that lie within box constraints. To do this, we formulate the box constraints as quadratic constraints, and solve the corresponding unconstrained regularized least squares problem. Using box constraints as quadratic constraints is an efficient approach because the optimization problem has a closed form solution.
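One way to read "box constraints as quadratic constraints" is to inscribe an ellipsoid in the box, ||D(x − c)|| ≤ 1 with c the box center and D scaling each axis by its half-width, and then use the Tikhonov-form closed solution, picking the multiplier by a simple search. The sketch below (hypothetical names, full column rank assumed, geometric bisection rather than the paper's algorithm) follows that idea only loosely:

```python
import numpy as np

def solve_reg(A, b, D, c, lam):
    """Closed-form minimizer of ||Ax - b||^2 + lam ||D(x - c)||^2."""
    return np.linalg.solve(A.T @ A + lam * (D.T @ D),
                           A.T @ b + lam * (D.T @ D) @ c)

def box_as_quadratic(A, b, lo, hi):
    """Least squares under the ellipsoid ||D(x - c)|| <= 1 inscribed in the
    box lo <= x <= hi, so any feasible point also satisfies the box."""
    c = (lo + hi) / 2.0
    D = np.diag(2.0 / (hi - lo))
    g = lambda lam: np.linalg.norm(D @ (solve_reg(A, b, D, c, lam) - c))
    if g(0.0) <= 1.0:                  # unconstrained solution already feasible
        return solve_reg(A, b, D, c, 0.0)
    lam_lo, lam_hi = 1e-12, 1e12       # g is decreasing in lam
    for _ in range(100):               # geometric bisection on lam
        mid = np.sqrt(lam_lo * lam_hi)
        if g(mid) > 1.0:
            lam_lo = mid
        else:
            lam_hi = mid
    return solve_reg(A, b, D, c, lam_hi)

# Hypothetical example: the unconstrained solution (2, 2) lies outside the box.
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
x = box_as_quadratic(np.eye(2), np.array([2.0, 2.0]), lo, hi)
print(x)
```

Each trial value of lam costs only one linear solve, which is the efficiency argument the abstract makes for the quadratic-constraint formulation.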

The effectiveness of the proposed algorithm is investigated through solving three …


A Newton Root-Finding Algorithm For Estimating The Regularization Parameter For Solving Ill-Conditioned Least Squares Problems, Jodi Mead, Rosemary Renaut Apr 2010


We discuss the solution of numerically ill-posed overdetermined systems of equations using Tikhonov a-priori-based regularization. When the noise distribution on the measured data is available to appropriately weight the fidelity term, and the regularization is assumed to be weighted by inverse covariance information on the model parameters, the underlying cost functional becomes a random variable that follows a χ2 distribution. The regularization parameter can then be found so that the optimal cost functional has this property. Under this premise a scalar Newton root-finding algorithm for obtaining the regularization parameter is presented. The algorithm, which uses the singular value decomposition of …
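The idea can be sketched in the linear, white-noise case: via the SVD of A, the Tikhonov functional evaluated at its minimizer has a closed form that is increasing and concave in the parameter, so a scalar Newton iteration on J(lam) = m (its χ2 expectation) converges. The sketch below uses hypothetical names and a moderately conditioned random test problem, and simplifies the covariance weighting to a single scalar sigma; the paper's algorithm differs in detail:

```python
import numpy as np

def tik_cost(A, b, sigma, lam):
    """Weighted Tikhonov cost (||Ax - b||^2 + lam ||x||^2)/sigma^2 at its minimizer."""
    n = A.shape[1]
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    return (np.sum((A @ x - b) ** 2) + lam * np.sum(x ** 2)) / sigma**2, x

def newton_lambda(A, b, sigma, target, lam=1.0, iters=100):
    """Newton iteration on J(lam) = target using the SVD closed form
    J(lam) = (sum_i lam beta_i^2 / (s_i^2 + lam) + residual tail)/sigma^2."""
    m, n = A.shape
    U, s, Vt = np.linalg.svd(A)
    beta = U.T @ b
    tail = np.sum(beta[n:] ** 2)          # data component outside range(A)
    b1 = beta[:n]
    for _ in range(iters):
        J = (lam * np.sum(b1**2 / (s**2 + lam)) + tail) / sigma**2
        dJ = np.sum((s * b1 / (s**2 + lam)) ** 2) / sigma**2
        lam = max(lam + (target - J) / dJ, 1e-14)   # keep lam positive
    return lam

# Hypothetical test problem; target = m, the expectation of a chi^2_m variable.
rng = np.random.default_rng(3)
m, n, sigma = 15, 12, 1e-2
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + sigma * rng.standard_normal(m)
lam = newton_lambda(A, b, sigma, target=m)
J_val, x_hat = tik_cost(A, b, sigma, lam)
print(lam, J_val)   # J_val should sit at its chi^2 expectation, m = 15
```

Since J is concave and increasing in lam, the Newton iterates approach the root monotonically once below it, which is why a plain scalar iteration suffices here.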


Solution Of A Nonlinear System For Uncertainty Quantification In Inverse Problems, Jodi Mead Apr 2010


No abstract provided.


Non-Smooth Solutions To Least Squares Problems, Jodi Mead Sep 2009


In an attempt to overcome the ill-posedness or ill-conditioning of inverse problems, regularization methods are implemented by introducing assumptions on the solution. Common regularization methods include total variation, L-curve, Generalized Cross Validation (GCV), and the discrepancy principle. It is generally accepted that all of these approaches except total variation unnecessarily smooth solutions, mainly because the regularization operator is in an L2 norm. Alternatively, statistical approaches to ill-posed problems typically involve specifying a priori information about the parameters in the form of Bayesian inference. These approaches can be more accurate than typical regularization methods because the regularization term is weighted with …


Pseudospectral Iterated Method For Differential Equations With Delay Terms, Jodi Mead, Barbara Zubik-Kowal Dec 2003


New efficient numerical methods for hyperbolic and parabolic partial differential equations with delay terms are investigated. These equations model the development of cancer cells in the human body. Our goal is to study numerical methods which can be applied in a parallel computing environment. We apply our new numerical method to the delay partial differential equations and analyse the error of the method. Numerical experiments confirm our theoretical results.


Stability Of A Pivoting Strategy For Parallel Gaussian Elimination, Jodi Mead, R. Renaut, B. Welfert May 2001


Gaussian elimination with partial pivoting achieved by adding the pivot row to the kth row at step k was introduced by Onaga and Takechi in 1986 as a means of reducing communications in parallel implementations. In this paper it is shown that the growth factor of this partial pivoting algorithm is bounded above by 3^(n−1), as compared to 2^(n−1) for standard partial pivoting. A bound close to this, 3^(n−2), is attainable for a class of near-singular matrices. Moreover, for the same matrices the growth factor is small under partial pivoting.
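For the standard algorithm, the 2^(n−1) bound is classical and is attained by Wilkinson's example. A short growth-factor computation for standard partial pivoting (the pivot-row-addition variant analyzed in the paper is not implemented here):

```python
import numpy as np

def growth_factor_partial_pivoting(A):
    """Growth factor max_k max|a_ij^(k)| / max|a_ij| of Gaussian elimination
    with standard partial pivoting (row swaps)."""
    U = A.astype(float).copy()
    n = U.shape[0]
    g = np.max(np.abs(U))
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))   # pivot: largest entry in column k
        U[[k, p]] = U[[p, k]]                 # row swap
        for i in range(k + 1, n):
            U[i, k:] -= (U[i, k] / U[k, k]) * U[k, k:]
        g = max(g, np.max(np.abs(U)))         # track growth over all stages
    return g / np.max(np.abs(A))

# Wilkinson's matrix: 1 on the diagonal, -1 below it, 1 in the last column.
# Its last-column entries double at every elimination step, attaining 2^(n-1).
n = 6
W = np.tril(-np.ones((n, n)), -1) + np.eye(n)
W[:, -1] = 1.0
print(growth_factor_partial_pivoting(W))   # → 32.0, i.e. 2^(n-1)
```

The 3^(n−1) bound for the pivot-row-addition variant arises the same way, except each update can add rather than subtract the pivot row, allowing entries to triple per step in the worst case.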