Physical Sciences and Mathematics Commons

Articles 1 - 3 of 3

Full-Text Articles in Physical Sciences and Mathematics

Data Compression Based On The Cubic B-Spline Wavelet With Uniform Two-Scale Relation, S. K. Yang, C. H. Cooke, Jan 1996

Mathematics & Statistics Faculty Publications

The aim of this paper is to investigate the artificial compression that can be achieved using an interval multiresolution analysis based on a semiorthogonal cubic B-spline wavelet. The Chui-Quak [1] spline multiresolution analysis for the finite interval has been modified [2] so that it is characterized by natural spline projection and a uniform two-scale relation. The strengths and weaknesses of the semiorthogonal wavelet for artificial compression, and for data smoothing by thresholding of wavelet coefficients, are indicated.
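
As a rough illustration of the thresholding idea, the sketch below hard-thresholds the detail coefficients of a wavelet decomposition and reports how many coefficients survive. It is not the paper's construction: it substitutes PyWavelets' biorthogonal cubic-spline wavelet 'bior3.3' for the modified Chui-Quak semiorthogonal interval wavelet, and the test signal, decomposition level, and threshold value are invented for the example.

    import numpy as np
    import pywt

    # Synthetic test data (illustrative only).
    x = np.linspace(0.0, 1.0, 1024)
    signal = np.sin(8 * np.pi * x) + 0.05 * np.random.randn(x.size)

    # Multiresolution decomposition with a biorthogonal cubic-spline wavelet,
    # standing in here for the modified semiorthogonal interval wavelet.
    coeffs = pywt.wavedec(signal, 'bior3.3', level=5)

    # "Artificial compression": hard-threshold the detail coefficients so that
    # small ones are set to zero and need not be stored.
    thresh = 0.1
    compressed = [coeffs[0]] + [pywt.threshold(d, thresh, mode='hard') for d in coeffs[1:]]

    kept = sum(int(np.count_nonzero(c)) for c in compressed)
    total = sum(c.size for c in coeffs)
    print(f"coefficients retained: {kept} of {total}")

    # Reconstruct and inspect the resulting smoothing/compression error.
    reconstructed = pywt.waverec(compressed, 'bior3.3')[:signal.size]
    print("max reconstruction error:", float(np.max(np.abs(reconstructed - signal))))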


An Efficient Runge-Kutta (4,5) Pair, P. Bogacki, L. F. Shampine, Jan 1996

Mathematics & Statistics Faculty Publications

A pair of explicit Runge-Kutta formulas of orders 4 and 5 is derived. It is significantly more efficient than the Fehlberg and Dormand-Prince pairs, and by standard measures it is of at least as high quality. There are two independent estimates of the local error. The local error of the interpolant is, to leading order, a problem-independent function of the local error at the end of the step.
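
For readers unfamiliar with embedded pairs, the sketch below shows how such a pair yields a solution, a built-in local error estimate, and adaptive step-size control. It uses the widely known Bogacki-Shampine 3(2) pair as a compact stand-in; the (4,5) pair derived in the paper has its own, longer tableau, two independent error estimates, and an interpolant, none of which are reproduced here.

    import numpy as np

    def bs32_step(f, t, y, h):
        """One step of the Bogacki-Shampine 3(2) pair: returns (y_new, error_estimate)."""
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + 3 * h / 4, y + 3 * h / 4 * k2)
        y3 = y + h * (2 / 9 * k1 + 1 / 3 * k2 + 4 / 9 * k3)   # 3rd-order solution
        k4 = f(t + h, y3)                  # stage at the step's end (reusable next step)
        y2 = y + h * (7 / 24 * k1 + 1 / 4 * k2 + 1 / 3 * k3 + 1 / 8 * k4)  # 2nd-order
        return y3, np.abs(y3 - y2)         # difference of the two formulas = error estimate

    def integrate(f, t0, y0, t_end, tol=1e-6, h=0.1):
        t, y = t0, y0
        while t < t_end:
            h = min(h, t_end - t)
            y_new, err = bs32_step(f, t, y, h)
            if np.all(err <= tol):         # accept the step only if the estimate is small
                t, y = t + h, y_new
            # standard step-size controller: the error estimate behaves like O(h^3)
            h *= min(5.0, max(0.1, 0.9 * (tol / max(float(np.max(err)), 1e-16)) ** (1 / 3)))
        return t, y

    # Example: y' = -y, y(0) = 1; compare with exp(-2).
    t, y = integrate(lambda t, y: -y, 0.0, np.array([1.0]), 2.0)
    print(y[0], np.exp(-2.0))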


A Family Of Parallel Runge-Kutta Pairs, P. Bogacki, Jan 1996

Mathematics & Statistics Faculty Publications

The increasing availability of parallel computers has recently spurred a substantial amount of research on designing explicit Runge-Kutta methods to be implemented on such computers. Here, we discuss a family of methods that require fewer processors than presently available methods while still achieving a similar speed-up. In particular, (5,6) and (6,7) pairs are derived that require the minimum number of function evaluations on two and three processors, respectively.
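
The sketch below illustrates only the scheduling idea behind such parallel methods: stages of an explicit method whose tableau rows do not reference one another can be evaluated concurrently on separate processors. The 4-stage tableau is a hypothetical example chosen to show one parallel level; the coefficients of the paper's (5,6) and (6,7) pairs are not reproduced here.

    import numpy as np

    def stage_levels(A):
        """Group the stages of an explicit RK tableau into levels; all stages in a
        level depend only on earlier levels, so they can run concurrently."""
        levels = []
        for i in range(A.shape[0]):
            deps = (levels[j] for j in range(i) if A[i, j] != 0.0)
            levels.append(1 + max(deps, default=-1))
        return levels

    # Hypothetical tableau (illustration only): stages 1 and 2 both depend only
    # on stage 0, so they form one parallel level that needs two processors.
    A = np.array([
        [0.0, 0.0, 0.0, 0.0],
        [0.5, 0.0, 0.0, 0.0],
        [0.5, 0.0, 0.0, 0.0],
        [0.0, 0.5, 0.5, 0.0],
    ])
    print(stage_levels(A))  # [0, 1, 1, 2]: 4 stages, but only 3 sequential rounds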