Open Access. Powered by Scholars. Published by Universities.®

2002

Statistics and Probability

Articles 1 - 30 of 173

Full-Text Articles in Physical Sciences and Mathematics

Invariant Sets And Inverse Limits, William Thomas Ingram Dec 2002

Mathematics and Statistics Faculty Research & Creative Works

In this paper we investigate the nature of inverse limits from the point of view of invariant sets. We then introduce a special class of examples of inverse limits on [0,1] using Markov bonding maps determined by members of the group of permutations on n elements.


Recurrent Events Analysis In The Presence Of Time Dependent Covariates And Dependent Censoring, Maja Miloslavsky, Sunduz Keles, Mark J. Van Der Laan, Steve Butler Dec 2002

U.C. Berkeley Division of Biostatistics Working Paper Series

Recurrent events models have lately received a lot of attention in the literature. The majority of approaches discussed show the consistency of parameter estimates under the assumption that censoring is independent of the recurrent events process of interest conditional on the covariates included in the model. We provide an overview of available recurrent events analysis methods, and present an inverse probability of censoring weighted estimator for the regression parameters in the Andersen-Gill model that is commonly used for recurrent event analysis. This estimator remains consistent under informative censoring if the censoring mechanism is estimated consistently, and generally improves on the …


Construction Of Counterfactuals And The G-Computation Formula, Zhuo Yu, Mark J. Van Der Laan Dec 2002

U.C. Berkeley Division of Biostatistics Working Paper Series

Robins' causal inference theory assumes the existence of treatment-specific counterfactual variables so that the observed data augmented by the counterfactual data will satisfy a consistency and a randomization assumption. Gill and Robins [2001] show that the consistency and randomization assumptions do not add any restrictions to the observed data distribution. In particular, they provide a construction of counterfactuals as a function of the observed data distribution. In this paper we provide a construction of counterfactuals as a function of the observed data itself. Our construction provides a new statistical tool for estimation of counterfactual distributions. Robins [1987b] shows that the …
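
For orientation, the point-treatment version of the G-computation formula (standard in this literature, stated here as background rather than as the construction of this particular paper) identifies the counterfactual distribution from the observed data distribution:

$$
P(Y_a = y) \;=\; \sum_{w} P(Y = y \mid A = a, W = w)\,P(W = w),
$$

where $W$ denotes baseline covariates, $A$ the treatment, $Y$ the outcome, and $Y_a$ the counterfactual outcome under treatment $a$; the identity rests on exactly the consistency ($Y = Y_A$) and randomization ($Y_a \perp A \mid W$) assumptions mentioned in the abstract.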


Resolvability In Graphs, Varaporn Saenpholphat Dec 2002

Dissertations

The distance d(u, v) between two vertices u and v in a connected graph G is the length of a shortest u–v path in G. For an ordered set W = {w1, w2, …, wk} of vertices in G and a vertex v of G, the code of v with respect to W is the k-vector cW(v) = (d(v, w1), d(v, w2), …, d …
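
For readers who want a concrete handle on these definitions, here is a minimal Python sketch (illustrative only, not from the dissertation; the adjacency-list representation and function names are assumptions):

```python
from collections import deque

def distances_from(graph, source):
    """BFS distances d(source, v) in an unweighted connected graph
    given as a dict: vertex -> list of neighbours."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def code(graph, v, W):
    """k-vector c_W(v) = (d(v, w1), ..., d(v, wk)) for an ordered set W."""
    dist = distances_from(graph, v)
    return tuple(dist[w] for w in W)

def resolves(graph, W):
    """W resolves G when distinct vertices receive distinct codes."""
    codes = [code(graph, v, W) for v in graph]
    return len(set(codes)) == len(codes)

# Example: a path on four vertices; the single endpoint 0 resolves it.
P4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(code(P4, 2, (0, 3)))   # (2, 1)
print(resolves(P4, (0,)))    # True
```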


Robust Residuals And Diagnostics In Autoregressive Time Series, Kirk W. Anderson Dec 2002

Dissertations

One of the goals of model diagnostics is outlier detection. In particular, we would like to use the residuals, appropriately standardized, to “flag” outliers. Hopefully, our (robust) procedure has yielded a fit that resists undue influence by outlying points, while simultaneously drawing attention to these interesting points via residual analysis. In this study we consider several different methods of standardizing the residuals resulting from autoregression. A large sample approximation for the variance of rank-based first order autoregressive time series residuals is developed. This provides studentized residuals, specific to the time series model and estimation procedure. Simulation studies are presented that …


New Statistical Methods For The Estimation Of The Mean And Standard Deviation From Normally Distributed Censored Samples, Abou El-Makarim Abd El-Alim Aboueissa Dec 2002

Dissertations

The main objective of this dissertation is to estimate the mean μ and standard deviation σ of a normal population from left-censored samples. We have developed new methods for calculating estimates of the mean and standard deviation of a normal population from left-censored samples. Some of these methods are based on traditional estimating procedures. A new method of obtaining the Cohen maximum likelihood estimates for μ and σ without the aid of an auxiliary table will be introduced. This new method will be used to extend Cohen's table for estimating the Cohen λ-parameter that is required for calculating the maximum likelihood …
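
As context for the censored-sample estimation problem, the following is a generic maximum likelihood sketch in Python (not the dissertation's new tabulation-free method; the simulated detection limit and names are illustrative):

```python
import numpy as np
from scipy import stats, optimize

def censored_normal_mle(observed, n_censored, detection_limit):
    """ML estimates of (mu, sigma) for a normal sample in which
    n_censored values fell below a known detection limit (left censoring).
    Censored points contribute log Phi((DL - mu)/sigma) to the
    log-likelihood; observed points contribute the normal log-density."""
    observed = np.asarray(observed, dtype=float)

    def negloglik(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)          # keep sigma positive
        ll_obs = stats.norm.logpdf(observed, mu, sigma).sum()
        ll_cens = n_censored * stats.norm.logcdf((detection_limit - mu) / sigma)
        return -(ll_obs + ll_cens)

    start = np.array([observed.mean(), np.log(observed.std(ddof=1))])
    res = optimize.minimize(negloglik, start, method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])      # (mu_hat, sigma_hat)

# Illustration on simulated data censored at DL = 9.
rng = np.random.default_rng(0)
full = rng.normal(10, 2, size=100)
dl = 9.0
obs = full[full >= dl]
print(censored_normal_mle(obs, n_censored=(full < dl).sum(), detection_limit=dl))
```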


On Choosing And Bounding Probability Metrics, Alison L. Gibbs, Francis E. Su Dec 2002

All HMC Faculty Publications and Research

When studying convergence of measures, an important issue is the choice of probability metric. We provide a summary and some new results concerning bounds among some important probability metrics/distances that are used by statisticians and probabilists. Knowledge of other metrics can provide a means of deriving bounds for another one in an applied problem. Considering other metrics can also provide alternate insights. We also give examples that show that rates of convergence can strongly depend on the metric chosen. Careful consideration is necessary when choosing a metric.
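
As a small illustration of the kind of comparison the paper surveys, the sketch below computes two of the metrics on a pair of discrete distributions; the standard inequality that the Kolmogorov distance never exceeds the total variation distance is visible in the output (the example distributions are made up):

```python
import numpy as np

def total_variation(p, q):
    """Total variation distance between two pmfs on the same support."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

def kolmogorov(p, q):
    """Kolmogorov (uniform CDF) distance between the same two pmfs."""
    return np.abs(np.cumsum(p) - np.cumsum(q)).max()

# Two distributions on {0, 1, 2, 3}.
p = [0.25, 0.25, 0.25, 0.25]
q = [0.40, 0.10, 0.40, 0.10]

print(total_variation(p, q))  # 0.30
print(kolmogorov(p, q))       # 0.15  <= total variation, as the bound requires
```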


On Rank-Based Considerations For Generalized Linear Models And Generalized Estimating Equation Models, Diana R. Cucos Dec 2002

Dissertations

This study discusses rank-based robust methods for the estimation of parameters and hypothesis testing in the generalized linear models (GLM) and generalized estimating equations (GEE) setting. The robust estimates are obtained by minimizing a Wilcoxon drop in dispersion function for linear or nonlinear regression models. In addition, diagnostic tools for outliers and influential observations are developed. These models are generalizations of linear and nonlinear models. They allow for both nonlinear mean functions and heteroscedasticity of their random errors. This makes them quite useful in practice. Rank-based inference has been developed for linear models over the last thirty years. This inference …
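
The core ingredient, a Wilcoxon (Jaeckel-type) dispersion function for a linear model, can be sketched as follows (a simplified illustration of the rank-based objective only, not the GLM/GEE extension developed in the dissertation; the data and names are invented):

```python
import numpy as np
from scipy import optimize, stats

def wilcoxon_dispersion(beta, X, y):
    """Jaeckel's dispersion with Wilcoxon scores:
    D(beta) = sum_i a(R(e_i)) * e_i,  a(i) = sqrt(12) * (i/(n+1) - 0.5)."""
    e = y - X @ beta
    n = len(e)
    ranks = stats.rankdata(e)
    scores = np.sqrt(12.0) * (ranks / (n + 1) - 0.5)
    return np.sum(scores * e)

def rank_fit(X, y):
    """Rank-based (Wilcoxon) estimate of the slope coefficients."""
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]   # least-squares start
    res = optimize.minimize(wilcoxon_dispersion, beta0, args=(X, y),
                            method="Nelder-Mead")
    return res.x

# Illustration with heavy-tailed errors, where the rank fit is robust.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -1.0]) + rng.standard_t(df=2, size=200)
print(rank_fit(X, y))   # close to (2, -1) despite the outliers
```

Because the Wilcoxon dispersion is invariant to an intercept shift, only the slope coefficients are estimated here; an intercept would be recovered separately, for example from the median of the residuals.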


Multi-Level Decomposition Of Probabilistic Relations, Stanislaw Grygiel, Martin Zwick, Marek Perkowski Dec 2002

Systems Science Faculty Publications and Presentations

Two methods of decomposition of probabilistic relations are presented in this paper. They consist of splitting relations (blocks) into pairs of smaller blocks related to each other by new variables, generated in such a way as to minimize a cost function that depends on the size and structure of the result. The decomposition is repeated iteratively until a stopping criterion is met. The topology and contents of the resulting structure develop dynamically in the decomposition process and reflect relationships hidden in the data.


Quantitative Evaluation Of Hiv Prevention Programs, Edward Kaplan, Ron Brookmeyer Nov 2002

Ron Brookmeyer

How successful are HIV prevention programs? Which HIV prevention programs are most cost effective? Which programs are worth expanding and which should be abandoned altogether? This book addresses the quantitative evaluation of HIV prevention programs, assessing for the first time several different quantitative methods of evaluation.


Prevention Of Inhalational Anthrax In The U.S. Outbreak, Ron Brookmeyer, Natalie Blades Nov 2002

Ron Brookmeyer

No abstract provided.


Analysis Of Longitudinal Marginal Structural Models , Jennifer F. Bryan, Zhuo Yu, Mark J. Van Der Laan Nov 2002

U.C. Berkeley Division of Biostatistics Working Paper Series

In this article we construct and study estimators of the causal effect of a time-dependent treatment on survival in longitudinal studies. We employ a particular marginal structural model (MSM), and follow a general methodology for constructing estimating functions in censored data models. The inverse probability of treatment weighted (IPTW) estimator is used as an initial estimator and the corresponding treatment-orthogonalized, one-step estimator is consistent and asymptotically linear when the treatment mechanism is consistently estimated. We extend these methods to handle informative censoring. A simulation study demonstrates that the treatment-orthogonalized, one-step estimator is superior to the IPTW estimator in terms …
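
The weighting idea can be illustrated in the simplest point-treatment setting (a hedged sketch of IPTW for a marginal structural model, far simpler than the longitudinal estimators studied in the paper; the simulated data, coefficients, and use of statsmodels are all assumptions):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
W = rng.normal(size=n)                              # baseline confounder
pA = 1 / (1 + np.exp(-0.8 * W))                     # treatment depends on W
A = rng.binomial(1, pA)
Y = 1.0 * A + 2.0 * W + rng.normal(size=n)          # true treatment effect = 1

# Estimate the treatment mechanism and form stabilized IPT weights.
ps_model = sm.Logit(A, sm.add_constant(W)).fit(disp=0)
ps = ps_model.predict(sm.add_constant(W))
sw = np.where(A == 1, A.mean() / ps, (1 - A.mean()) / (1 - ps))

# Weighted regression of Y on A alone targets the marginal (causal) effect.
msm = sm.WLS(Y, sm.add_constant(A), weights=sw).fit()
print(msm.params[1])   # approximately 1, unlike the unweighted, unadjusted fit
```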


Association Between Mean Residual Life (Mrl) And Failure Rate Functions For Continuous And Discrete Lifetime Distributions, Leonid Bekker Nov 2002

FIU Electronic Theses and Dissertations

The purpose of this study was to correct some mistakes in the literature and derive a necessary and sufficient condition for the MRL to follow the roller-coaster pattern of the corresponding failure rate function. It was also desired to find the conditions under which the discrete failure rate function has an upside-down bathtub shape if the corresponding MRL function has a bathtub shape. The study showed that if the discrete MRL has a bathtub shape, then under some conditions the corresponding failure rate function has an upside-down bathtub shape. The study also corrected some mistakes in the proofs of Tang, Lu and Chew …
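
For the continuous case, the standard identities linking the mean residual life m(t) = E[T − t | T > t], the failure rate h(t), and the survival function S(t) are stated below as background (the thesis itself concerns the discrete case and the shape relationships):

$$
h(t) \;=\; \frac{1 + m'(t)}{m(t)}, \qquad
S(t) \;=\; \frac{m(0)}{m(t)}\,\exp\!\left(-\int_0^{t}\frac{du}{m(u)}\right),
$$

so each of the two functions determines the lifetime distribution.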


Jmasm4: Critical Values For Four Nonparametric And/Or Distribution-Free Tests Of Location For Two Independent Samples, Bruce R. Fay Nov 2002

Journal of Modern Applied Statistical Methods

Researchers engaged in computer-intensive studies may need exact critical values, especially for sample sizes and alpha levels not normally found in published tables, as well as the ability to control ‘best-fit’ criteria. They may also benefit from the ability to directly generate these values rather than having to create lookup tables. Fortran 90 programs generate ‘best-conservative’ (bc) and ‘best-fit’ (bf) critical values with associated probabilities for the Kolmogorov-Smirnov test of general differences (bc), Rosenbaum’s test of location (bc), Tukey’s quick test (bc and bf), and the Wilcoxon rank-sum test (bc).


Constructive Criticism, Ronald C. Serlin Nov 2002

Journal of Modern Applied Statistical Methods

Attempts to attain knowledge as certified true belief have failed to circumvent Hume’s injunction against induction. Theories must be viewed as unprovable, improbable, and undisprovable. The empirical basis is fallible, and yet the method of conjectures and refutations is untouched by Hume’s insights. The implication for statistical methodology is that the requisite severity of testing is achieved through the use of robust procedures, whose assumptions have not been shown to be substantially violated, to test predesignated range null hypotheses. Nonparametric range null hypothesis tests need to be developed to examine whether or not effect sizes or measures of association, as …


Extensions Of The Concept Of Exchangeability And Their Applications, Phillip I. Good Nov 2002

Journal of Modern Applied Statistical Methods

Permutation tests provide exact p-values in a wide variety of practical testing situations. But permutation tests rely on the assumption of exchangeability, that is, under the hypothesis, the joint distribution of the observations is invariant under permutations of the subscripts. Observations are exchangeable if they are independent, identically distributed (i.i.d.), or if they are jointly normal with identical covariances. The range of applications of these exact, powerful, distribution-free tests can be enlarged through exchangeability-preserving transforms, asymptotic exchangeability, partial exchangeability, and weak exchangeability. Original exact tests for comparing the slopes of two regression lines and for the analysis of …
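
A minimal example of the exchangeability idea in action is the two-sample permutation test sketched below (a generic illustration in Python, not code from the article; the data are made up):

```python
import numpy as np

def permutation_test(x, y, n_perm=10_000, rng=None):
    """Two-sample permutation test for a difference in means.
    Exchangeability under H0 justifies permuting the group labels."""
    rng = np.random.default_rng(rng)
    x, y = np.asarray(x, float), np.asarray(y, float)
    observed = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[:len(x)].mean() - pooled[len(x):].mean()
        count += abs(diff) >= abs(observed)
    return (count + 1) / (n_perm + 1)     # add-one correction keeps the test valid

x = [12.1, 9.8, 11.4, 10.7, 13.0]
y = [ 8.9, 9.5, 10.1,  8.3,  9.0]
print(permutation_test(x, y, rng=0))      # small p-value: the groups differ
```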


Within Groups Multiple Comparisons Based On Robust Measures Of Location, Rand R. Wilcox, H. J. Keselman Nov 2002

Journal of Modern Applied Statistical Methods

Consider the problem of performing all pair-wise comparisons among J dependent groups based on measures of location associated with the marginal distributions. It is well known that the standard error of the sample mean can be large relative to other estimators when outliers are common. Two general strategies for addressing this problem are to trim a fixed proportion of observations or empirically check for outliers and remove (or down-weight) any that are found. However, simply applying conventional methods for means to the data that remain results in using the wrong standard error. Methods that address this problem have been proposed, …


A Comparison Of The D’Agostino S_U Test To The Triples Test For Testing Of Symmetry Versus Asymmetry As A Preliminary Test To Testing The Equality Of Means, Kimberly T. Perry, Michael R. Stoline Nov 2002

Journal of Modern Applied Statistical Methods

This paper evaluates the D’Agostino SU test and the Triples test for testing symmetry versus asymmetry. These procedures are evaluated as preliminary tests in the selection of the most appropriate procedure for testing the equality of means with two independent samples under a variety of symmetric and asymmetric sampling situations. Key words: symmetry; asymmetry; preliminary testing.


Adaptive Tests For Ordered Categorical Data, Vance W. Berger, Anastasia Ivanova Nov 2002

Journal of Modern Applied Statistical Methods

Consider testing for independence against stochastic order in an ordered 2xJ contingency table, under product multinomial sampling. In applications one may wish to exploit prior information concerning the direction of the treatment effect, yet ultimately end up with a testing procedure with good frequentist properties. As such, a reasonable objective may be to simultaneously maximize power at a specified alternative and ensure reasonable power for all other alternatives of interest. For this objective, none of the available testing approaches are completely satisfactory. A new class of admissible adaptive tests is derived. Each test in this class strictly preserves the Type …


Determining Predictor Importance In Multiple Regression Under Varied Correlational And Distributional Conditions, Tiffany A. Whittaker, Rachel T. Fouladi, Natasha J. Williams Nov 2002

Journal of Modern Applied Statistical Methods

This study examines the performance of eight methods of predictor importance under varied correlational and distributional conditions. The proportion of times a method correctly identified the dominant predictor was recorded. Results indicated that the new methods of importance proposed by Budescu (1993) and Johnson (2000) outperformed commonly used importance methods.


Simulation Study Of Chemical Inhibition Modeling, Pali Sen, Mary Anderson Nov 2002

Journal of Modern Applied Statistical Methods

The combined effects of the activities of different chemicals are of interest in this study. We simulate synthetic data, fit the experimental data with three models, and estimate the parameters. We assess the fit of the synthetic data and the experimental data by comparing the coefficients of variation for the parameter estimates and identify the best model for the inhibition process.


A Longitudinal Follow-Up Of Discrete Mass At Zero With Gap, Joseph L. Musial, Patrick D. Bridge, Nicol R. Shamey Nov 2002

Journal of Modern Applied Statistical Methods

The first part of this paper discusses a five-year systematic review of the Journal of Consulting and Clinical Psychology following the landmark power study conducted by Sawilowsky and Hillman (1992). The second part discusses a five-year longitudinal follow-up of a radically nonnormal population distribution: discrete mass at zero with gap. This distribution was based upon a real dataset.


Some Reflections On Significance Testing, Thomas R. Knapp Nov 2002

Journal of Modern Applied Statistical Methods

This essay presents a variation on a theme from my article “The use of tests of statistical significance”, which appeared in the Spring, 1999, issue of Mid-Western Educational Researcher.


Twenty Nonparametric Statistics And Their Large Sample Approximations, Gail F. Fahoome Nov 2002

Journal of Modern Applied Statistical Methods

Nonparametric procedures are often more powerful than classical tests for real-world data, which are rarely normally distributed. However, there are difficulties in using these tests. Computational formulas are scattered throughout the literature, and there is a lack of availability of tables and critical values. The computational formulas for twenty commonly employed nonparametric tests that have large-sample approximations for the critical value are brought together. Because there is no generally agreed upon lower limit for the sample size, Monte Carlo methods were used to determine the smallest sample size that can be used with the respective large-sample approximation. The statistics …
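
The Monte Carlo approach described here can be sketched for one of the statistics, the Wilcoxon rank-sum, as follows (an illustrative reconstruction in Python, not the article's own code; the sample sizes and replication count are arbitrary):

```python
import numpy as np
from scipy import stats

def mc_critical_value(n1, n2, alpha=0.05, reps=100_000, seed=0):
    """Monte Carlo critical values for the two-sided Wilcoxon rank-sum
    statistic (sum of ranks in sample 1) under H0 of identical continuous
    distributions, alongside the usual large-sample normal approximation."""
    rng = np.random.default_rng(seed)
    n = n1 + n2
    ws = np.empty(reps)
    for r in range(reps):
        ranks = stats.rankdata(rng.normal(size=n))   # any continuous H0 works
        ws[r] = ranks[:n1].sum()
    lower, upper = np.quantile(ws, [alpha / 2, 1 - alpha / 2])
    mu = n1 * (n + 1) / 2
    sigma = np.sqrt(n1 * n2 * (n + 1) / 12)
    approx = (mu - 1.96 * sigma, mu + 1.96 * sigma)
    return (lower, upper), approx

print(mc_critical_value(8, 10))   # simulated vs. large-sample cut-offs
```

Comparing the simulated cut-offs with the normal-approximation cut-offs over a grid of small sample sizes is one way to locate the smallest sample size at which the approximation is acceptable.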


A Test Of Symmetry, Abdul R. Othman, H. J. Keselman, Rand R. Wilcox, Katherine Fradette, A. R. Padmanabhan Nov 2002

Journal of Modern Applied Statistical Methods

When data are nonnormal in form, classical procedures for assessing treatment group equality are prone to distortions in rates of Type I error and power to detect effects. Replacing the usual means with trimmed means reduces rates of Type I error and increases sensitivity to detect effects. If data are skewed, say to the right, then it has been postulated that asymmetric trimming, to the right, should provide better control of Type I error rates and greater power to detect effects than symmetric trimming from both tails of the data distribution. Keselman, Wilcox, Othman and Fradette (2002) found that Babu, …


Trimming, Transforming Statistics, And Bootstrapping: Circumventing The Biasing Effects Of Heteroscedasticity And Nonnormality, H. J. Keselman, Rand R. Wilcox, Abdul R. Othman, Katherine Fradette Nov 2002

Journal of Modern Applied Statistical Methods

Researchers can adopt different measures of central tendency and test statistics to examine the effect of a treatment variable across groups (e.g., means, trimmed means, M-estimators, and medians). Recently developed statistics are compared with respect to their ability to control Type I errors when data were nonnormal and heterogeneous and the design was unbalanced: (1) a preliminary test for symmetry that determines whether data should be trimmed symmetrically or asymmetrically, (2) two different transformations to eliminate skewness, (3) assessment of statistical significance with a bootstrap methodology, and (4) statistics that use a robust measure of the typical …
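
One of the ingredients, a symmetric trimmed mean with a Winsorized-variance standard error, can be sketched as below (a Yuen-type formulation offered only as an illustration of the robust location measures involved, not the full set of statistics compared in the article; the data are made up):

```python
import numpy as np
from scipy import stats

def trimmed_mean_se(x, prop=0.20):
    """20% symmetric trimmed mean with a Yuen-type standard error
    based on the Winsorized sample variance."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    g = int(np.floor(prop * n))
    h = n - 2 * g                                   # untrimmed count
    tmean = stats.trim_mean(x, prop)
    winsorized = np.concatenate([[x[g]] * g, x[g:n - g], [x[n - g - 1]] * g])
    wvar = winsorized.var(ddof=1)
    se = np.sqrt(wvar * (n - 1) / (h * (h - 1)))
    return tmean, se

# A sample with one gross outlier: the trimmed mean shrugs it off.
x = [4.1, 5.0, 4.7, 5.3, 4.9, 5.1, 4.8, 5.2, 4.6, 35.0]
print(np.mean(x))           # pulled toward the outlier
print(trimmed_mean_se(x))   # close to the bulk of the data
```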


The Statistical Modeling Of The Fertility Of Chinese Women, Dudley L. Poston Jr. Nov 2002

Journal of Modern Applied Statistical Methods

This article is concerned with the statistical modeling of children ever born (CEB) fertility data. It is shown that in a low fertility population, such as China, the use of linear regression approaches to model CEB is statistically inappropriate because the distribution of the CEB variable is often heavily skewed with a long right tail. For five sub-groups of Chinese women, their fertility is modeled using Poisson, negative binomial, and ordinary least squares (OLS) regression models. It is shown that in almost all instances there would have been major errors of statistical inference had the interpretations of the results been …
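
The modeling comparison can be sketched with simulated count data (illustrative only; the fertility-style data, covariate, and coefficients here are invented, and statsmodels is assumed as the fitting library):

```python
import numpy as np
import statsmodels.api as sm

# Simulated children-ever-born style counts: skewed, many small values.
rng = np.random.default_rng(3)
n = 1000
educ = rng.integers(0, 13, size=n)                  # years of schooling
mu = np.exp(1.2 - 0.08 * educ)                      # low-fertility regime
ceb = rng.poisson(mu)

X = sm.add_constant(educ)
poisson_fit = sm.GLM(ceb, X, family=sm.families.Poisson()).fit()
nb_fit = sm.GLM(ceb, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
ols_fit = sm.OLS(ceb, X).fit()

# The count models respect the non-negative, skewed outcome; OLS does not.
print(poisson_fit.params, nb_fit.params, ols_fit.params)
```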


Robust Estimation Of Multivariate Failure Data With Time-Modulated Frailty, Pingfu Fu, J. Sunil Rao, Jiming Jiang Nov 2002

Journal of Modern Applied Statistical Methods

A time-modulated frailty model is proposed for analyzing multivariate failure data. The effect of frailties, which may not be constant over time, is discussed. We assume a parametric model for the baseline hazard, but avoid the parametric assumption for the frailty distribution. The well-known connection between survival times and the Poisson regression model is used. The parameters of interest are estimated by generalized estimating equations (GEE) or by penalized GEE. Simulation studies show that the procedure is successful in detecting the effect of time-modulated frailty. The method is also applied to a placebo controlled randomized clinical trial of gamma interferon, a …


Double Median Ranked Set Sample: Comparing To Other Double Ranked Samples For Mean And Ratio Estimators, Hani M. Samawi, Eman M. Tawalbeh Nov 2002

Journal of Modern Applied Statistical Methods

Double median ranked set sample (DMRSS) and its properties for estimating the population mean, when the underlying distribution is assumed to be symmetric about its mean, are introduced. Also, the performance of DMRSS with respect to other ranked set samples and double ranked set samples, for estimating the population mean and ratio, is considered. Real data that consist of heights and diameters of 399 trees are used to illustrate the procedure. The analysis and simulation indicate that using DMRSS for estimating the population mean is more efficient than using the other ranked samples and double ranked samples schemes except in …


On The Misuse Of Confidence Intervals For Two Means In Testing For The Significance Of The Difference Between The Means, George W. Ryan, Steven D. Leadbetter Nov 2002

Journal of Modern Applied Statistical Methods

Comparing individual confidence intervals of two population means is an incorrect procedure for determining the statistical significance of the difference between the means. We show conditions where confidence intervals for the means from two independent samples overlap and the difference between the means is in fact significant.
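
A worked numerical example of the phenomenon (the summary statistics are invented, offered only to illustrate the point of the article):

```python
import numpy as np
from scipy import stats

# Summary statistics for two independent samples: the individual 95%
# confidence intervals overlap, yet the two-sample t-test is significant.
m1, s1, n1 = 10.0, 3.0, 40
m2, s2, n2 = 11.5, 3.0, 40

def ci(m, s, n, level=0.95):
    half = stats.t.ppf((1 + level) / 2, n - 1) * s / np.sqrt(n)
    return m - half, m + half

print(ci(m1, s1, n1))    # about ( 9.04, 10.96)
print(ci(m2, s2, n2))    # about (10.54, 12.46)  -> the intervals overlap
print(stats.ttest_ind_from_stats(m1, s1, n1, m2, s2, n2))   # p ~ 0.028 < 0.05

# Roughly: the CIs fail to overlap only when |m1 - m2| exceeds a multiple of
# (se1 + se2), whereas the test needs only a multiple of sqrt(se1**2 + se2**2),
# which is strictly smaller; hence overlap does not imply non-significance.
```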