Open Access. Powered by Scholars. Published by Universities.®

Statistics and Probability

Articles 10711 - 10740 of 12731

Full-Text Articles in Physical Sciences and Mathematics

Seasonal Analyses Of Air Pollution And Mortality In 100 U.S. Cities, Roger D. Peng, Francesca Dominici, Roberto Pastor-Barriuso, Scott L. Zeger, Jonathan M. Samet May 2004

Johns Hopkins University, Dept. of Biostatistics Working Papers

Time series models relating short-term changes in air pollution levels to daily mortality counts typically assume that the effects of air pollution on the log relative rate of mortality do not vary with time. However, these short-term effects might plausibly vary by season. Changes in the sources of air pollution and meteorology can result in changes in characteristics of the air pollution mixture across seasons. The authors develop Bayesian semi-parametric hierarchical models for estimating time-varying effects of pollution on mortality in multi-site time series studies. The methods are applied to the updated National Morbidity and Mortality Air Pollution Study database …


Semiparametric Regression Analysis Of Mean Residual Life With Censored Survival Data, Ying Qing Chen, Su-Chun Cheng May 2004

U.C. Berkeley Division of Biostatistics Working Paper Series

As a function of time t, mean residual life is the remaining life expectancy of a subject given survival up to t. The proportional mean residual life model, proposed by Oakes & Dasu (1990), provides an alternative to the Cox proportional hazards model to study the association between survival times and covariates. In the presence of censoring, we develop semiparametric inference procedures for the regression coefficients of the Oakes-Dasu model using martingale theory for counting processes. We also present simulation studies and an application to the Veterans' Administration lung cancer data.
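As a quick illustration of the quantity being modeled (not taken from the paper, and ignoring censoring): the mean residual life at time t is m(t) = E[T − t | T > t], and an empirical version for uncensored data is just the average remaining life among subjects surviving past t.

```python
import numpy as np

def empirical_mrl(times, t):
    """Empirical mean residual life: average remaining life among
    subjects still alive at time t (uncensored data only)."""
    times = np.asarray(times, dtype=float)
    alive = times[times > t]
    if alive.size == 0:
        return 0.0
    return float(np.mean(alive - t))

# The exponential distribution is memoryless, so its MRL is constant
# and equal to its mean; a large sample should recover that.
rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=100_000)
mrl_at_1 = empirical_mrl(sample, 1.0)
```

The Oakes-Dasu model then relates covariates to m(t) proportionally, in the same way the Cox model relates them to the hazard.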


Semiparametric Models And Estimation Procedures For Binormal ROC Curves With Multiple Biomarkers, Debashis Ghosh May 2004

The University of Michigan Department of Biostatistics Working Paper Series

In diagnostic medicine, there is great interest in developing strategies for combining biomarkers in order to optimize classification accuracy. A popular model that has been used for receiver operating characteristic (ROC) curve modelling when one biomarker is available is the binormal model. Extension of the model to accommodate multiple biomarkers has not been considered in this literature. Here, we consider a multivariate binormal framework for combining biomarkers using copula functions that leads to a natural multivariate extension of the binormal model. Estimation in this model will be done using rank-based procedures. We show that the Van der Waerden rank score …


Nonparametric And Semiparametric Inference For Models Of Tumor Size And Metastasis, Debashis Ghosh May 2004

The University of Michigan Department of Biostatistics Working Paper Series

There has been some recent work in the statistical literature for modelling the relationship between the size of primary cancers and the occurrences of metastases. While nonparametric methods have been proposed for estimation of the tumor size distribution at which metastatic transition occurs, their asymptotic properties have not been studied. In addition, no testing or regression methods are available so that potential confounders and prognostic factors can be adjusted for. We develop a unified approach to nonparametric and semiparametric analysis of modelling tumor size-metastasis data in this article. An equivalence between the models considered by previous authors with survival data …


Model Checking Techniques For Regression Models In Cancer Screening, Debashis Ghosh May 2004

The University of Michigan Department of Biostatistics Working Paper Series

There has been much work on developing statistical procedures for associating tumor size with the probability of detecting a metastasis. Recently, Ghosh (2004) developed a unified statistical framework in which equivalences with censored data structures and models for tumor size and metastasis were examined. Based on this framework, we consider model checking techniques for semiparametric regression models in this paper. The procedures are for checking the additive hazards model. Goodness of fit methods are described for assessing functional form of covariates as well as the additive hazards assumption. The finite-sample properties of the methods are assessed using simulation studies.


Binary Isotonic Regression Procedures, With Application To Cancer Biomarkers, Debashis Ghosh, Moulinath Banerjee, Pinaki Biswas May 2004

The University of Michigan Department of Biostatistics Working Paper Series

There is considerable interest in the development and characterization of new biomarkers for screening large populations for disease. In much of the literature on diagnostic testing, increased levels of a biomarker correlate with increased disease risk. However, parametric forms are typically used to associate these quantities. In this article, we specify a monotonic relationship between biomarker levels and disease risk. This leads to consideration of a nonparametric regression model for a single biomarker. Estimation results using isotonic regression-type estimators and asymptotic results are given. We also discuss confidence set estimation in this setting and propose three procedures for …
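The isotonic-regression machinery involved can be sketched with the generic pool-adjacent-violators algorithm (PAVA); this is a standard least-squares monotone fit, not the authors' specific estimator.

```python
import numpy as np

def pava(y, w=None):
    """Pool-adjacent-violators algorithm: least-squares non-decreasing
    fit to y, with optional weights w. Adjacent blocks are merged
    (weighted-averaged) whenever they violate monotonicity."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    means, weights, counts = [], [], []
    for yi, wi in zip(y, w):
        means.append(yi); weights.append(wi); counts.append(1)
        while len(means) > 1 and means[-2] > means[-1]:
            wm = weights[-2] + weights[-1]
            mm = (weights[-2] * means[-2] + weights[-1] * means[-1]) / wm
            cm = counts[-2] + counts[-1]
            means[-2:] = [mm]; weights[-2:] = [wm]; counts[-2:] = [cm]
    return np.repeat(means, counts)

# Binary outcomes ordered by biomarker level: the fitted values give a
# monotone nonparametric estimate of disease risk in the biomarker.
risk = pava([0, 1, 0, 0, 1, 1, 0, 1, 1, 1])
```

The fit is piecewise constant and preserves the overall mean of the responses.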


Performance Of The Kenward-Roger Method When The Covariance Structure Is Selected Using AIC And BIC, Elisa Valderas Gomez May 2004

Theses and Dissertations

Linear mixed models are frequently used to analyze data with random effects and/or repeated measures. A common approach to such analyses requires choosing a covariance structure. Information criteria, such as AIC and BIC, are often used by statisticians to help with this task. However, these criteria do not always point to the true covariance structure and therefore the wrong covariance structure is sometimes chosen. Once this step is complete, Wald statistics are used to test fixed effects. Degrees of freedom for these statistics are not known. However, there are approximation methods, such as Kenward and Roger (KR) and Satterthwaite (SW) …
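The selection step can be sketched as follows: fit candidate covariance structures and pick the one with the lower information criterion. This is a simplified moment-based sketch with a known zero mean, not the REML-based procedure a mixed-model analysis would actually use.

```python
import numpy as np
from scipy.stats import multivariate_normal

def aic_bic(loglik, n_params, n_subjects):
    """AIC = 2k - 2 logL; BIC = k log(n) - 2 logL."""
    return 2 * n_params - 2 * loglik, n_params * np.log(n_subjects) - 2 * loglik

rng = np.random.default_rng(1)
n, t, rho = 200, 4, 0.6
# Repeated measures with a compound-symmetry (CS) covariance, mean zero.
cs_cov = (1 - rho) * np.eye(t) + rho * np.ones((t, t))
data = rng.multivariate_normal(np.zeros(t), cs_cov, size=n)

# Candidate 1: independence (one parameter, sigma^2); moment estimate.
s2_hat = data.var()
ll_ind = multivariate_normal(np.zeros(t), s2_hat * np.eye(t)).logpdf(data).sum()

# Candidate 2: compound symmetry (two parameters, sigma^2 and rho).
corr = np.corrcoef(data, rowvar=False)
rho_hat = (corr.sum() - t) / (t * (t - 1))  # average off-diagonal correlation
cs_hat = s2_hat * ((1 - rho_hat) * np.eye(t) + rho_hat * np.ones((t, t)))
ll_cs = multivariate_normal(np.zeros(t), cs_hat).logpdf(data).sum()

aic_ind, bic_ind = aic_bic(ll_ind, 1, n)
aic_cs, bic_cs = aic_bic(ll_cs, 2, n)
```

With strong within-subject correlation, both criteria favor the compound-symmetry structure despite its extra parameter; the thesis studies what happens downstream when this choice is wrong.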


The "Fair" Triathlon: Equating Standard Deviations Using Non-Linear Bayesian Models, Steven Mckay Curtis May 2004

Theses and Dissertations

The Ironman triathlon was created in 1978 by combining events with the longest distances for races then contested in Hawaii in swimming, cycling, and running. The Half Ironman triathlon was formed using half the distances of each of the events in the Ironman. The Olympic distance triathlon was created by combining events with the longest distances for races sanctioned by the major federations for swimming, cycling, and running. The relative importance of each event in overall race outcome was not given consideration when determining the distances of each of the races in modern triathlons. Thus, there is a general belief …


Validation Of Criteria Used To Predict Warfarin Dosing Decisions, Nicole Thomas May 2004

Theses and Dissertations

People at risk for blood clots are often treated with anticoagulants; warfarin is one such anticoagulant. The dose's effect is measured by comparing the time for blood to clot to a control time, reported as an INR value. Previous anticoagulant studies have addressed agreement between fingerstick (POC) devices and the standard laboratory; however, these studies rely on mathematical formulas as criteria for clinical evaluations, i.e., clinical evaluation vs. precision and bias. Fourteen such criteria were found in the literature. There exists little consistency among these criteria for assessing clinical agreement; furthermore, whether these methods of assessing agreement are reasonable estimates of …


On Corrected Score Approach For Proportional Hazards Model With Covariate Measurement Error, Xiao Song, Yijian Huang May 2004

UW Biostatistics Working Paper Series

In the presence of covariate measurement error with the proportional hazards model, several functional modeling methods have been proposed. These include the conditional score estimator (Tsiatis and Davidian, 2001), the parametric correction estimator (Nakamura, 1992) and the nonparametric correction estimator (Huang and Wang, 2000, 2003) in the order of weaker assumptions on the error. Although they are all consistent, each suffers from potential difficulties with small samples and substantial measurement error. In this article, upon noting that the conditional score and parametric correction estimators are asymptotically equivalent in the case of normal error, we investigate their relative finite sample performance …


Meta-Analysis Of Results And Individual Patient Data In Epidemiological Studies, Aurelio Tobías, Marc Saez, Manolis Kogevinas May 2004

Journal of Modern Applied Statistical Methods

Epidemiological information can be aggregated by combining results through a meta-analysis technique, or by pooling and analyzing primary data. Common approaches to analyzing pooled studies are described through an example on the effect of occupational exposure to wood dust on sinonasal cancer. Results were combined applying a meta-analysis technique. Alternatively, primary data from all studies were pooled and re-analyzed using mixed effect models. The combination of individual information rather than results is desirable to facilitate interpretation of epidemiological findings, leading also to more precise estimates and more powerful statistical tests for study heterogeneity.
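The results-combination arm can be sketched with standard inverse-variance fixed-effect pooling (generic formulas; the study estimates below are hypothetical, not the wood-dust data).

```python
import numpy as np

def fixed_effect_pool(log_rr, se):
    """Inverse-variance fixed-effect pooling of study-level log relative
    risks; returns the pooled estimate, its SE, and Cochran's Q."""
    log_rr = np.asarray(log_rr, dtype=float)
    se = np.asarray(se, dtype=float)
    w = 1.0 / se ** 2  # each study weighted by its precision
    pooled = np.sum(w * log_rr) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    q = np.sum(w * (log_rr - pooled) ** 2)  # heterogeneity statistic
    return float(pooled), float(pooled_se), float(q)

# Hypothetical study estimates (log relative risks) and standard errors.
pooled, pooled_se, q = fixed_effect_pool([0.9, 1.2, 0.7], [0.3, 0.4, 0.25])
```

Pooling individual data instead, as the article advocates, replaces these two-stage formulas with a single mixed-effects model fitted to all subjects.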


Statistical Pronouncements Iii, Jmasm Editors May 2004

Journal of Modern Applied Statistical Methods

No abstract provided.


Sub-Supersolution Method For Quasilinear Parabolic Variational Inequalities, Siegfried Carl, Vy Khoi Le May 2004

Mathematics and Statistics Faculty Research & Creative Works

This paper is about a systematic attempt to apply the sub-supersolution method to parabolic variational inequalities. We define appropriate concepts of sub-supersolutions and derive existence, comparison, and extremity results for such inequalities.


Multivariate Location: Robust Estimators And Inference, Rand R. Wilcox, H. J. Keselman May 2004

Journal of Modern Applied Statistical Methods

The sample mean can have poor efficiency relative to various alternative estimators under arbitrarily small departures from normality. In the multivariate case, (affine equivariant) estimators have been proposed for dealing with this problem, but a comparison of various estimators by Massé and Plante (2003) indicated that the small-sample efficiency of some recently derived methods is rather poor. This article reports that a skipped mean, where outliers are removed via a projection-type outlier detection method, is found to be more satisfactory. The more obvious method for computing a confidence region based on the skipped estimator (using a slight modification of the …
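The idea of a skipped estimator can be sketched as follows, using a simple coordinatewise MAD rule in place of the projection-type outlier detection the article actually studies (the cutoff constant is illustrative).

```python
import numpy as np

def skipped_mean(x, c=2.24):
    """Skipped mean: drop a point if any coordinate lies more than
    c robust-SDs (1.4826 * MAD) from the marginal median, then take
    the ordinary mean of the points that remain. (A coordinatewise
    stand-in for the projection-type rule in the article.)"""
    x = np.atleast_2d(np.asarray(x, dtype=float))
    med = np.median(x, axis=0)
    mad = 1.4826 * np.median(np.abs(x - med), axis=0)
    keep = np.all(np.abs(x - med) <= c * mad, axis=1)
    return x[keep].mean(axis=0)

# One gross outlier among five well-behaved bivariate points: the
# outlier is flagged and the mean of the remaining points is near zero.
m = skipped_mean([[0.1, 0.0], [-0.2, 0.1], [0.0, -0.1],
                  [0.2, 0.2], [-0.1, -0.2], [10.0, 10.0]])
```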


A Comparison Of Methods For Longitudinal Analysis With Missing Data, James Algina, H. J. Keselman May 2004

Journal of Modern Applied Statistical Methods

In a longitudinal two-group randomized trials design, also referred to as randomized parallel-groups design or split-plot repeated measures design, the important hypothesis of interest is whether there are differential rates of change over time, that is, whether there is a group by time interaction. Several analytic methods have been presented in the literature for testing this important hypothesis when data are incomplete. We studied these methods for the case in which the missing data pattern is non-monotone. In agreement with earlier work on monotone missing data patterns, our results on bias, sampling variability, Type I error and power support the …


A Rank-Based Estimation Procedure For Linear Models With Clustered Data, Suzanne R. Dubnicka May 2004

Journal of Modern Applied Statistical Methods

A rank method is presented for estimating regression parameters in the linear model when observations are correlated. This correlation is accounted for by including a random effect term in the linear model. A method is proposed that makes few assumptions about the random effect and error distribution. The main goal of this article is to determine the distributions for which this method performs well relative to existing methods.


Quantifying The Proportion Of Cases Attributable To An Exposure, Camil Fuchs, Vance W. Berger May 2004

Journal of Modern Applied Statistical Methods

The attributable fraction and the average attributable fractions, which are commonly used to assess the relative effect of several exposures on the prevalence of a disease, do not represent the proportion of cases caused by each exposure. Furthermore, the sum of attributable fractions over all exposures generally exceeds not only the attributable fraction for all exposures taken together, but also 100%. Other measures are discussed here, including the directly attributable fraction and the confounding fraction, that may be more suitable for defining the fraction directly attributable to an exposure.
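The "sum exceeds 100%" point is easy to see numerically with Levin's population attributable fraction (a standard formula, not specific to this article; the prevalence and relative risks below are hypothetical).

```python
def population_attributable_fraction(p_exposed, rr):
    """Levin's population attributable fraction:
    PAF = p(RR - 1) / (1 + p(RR - 1)),
    where p is the exposure prevalence and RR the relative risk."""
    excess = p_exposed * (rr - 1.0)
    return excess / (1.0 + excess)

# Two hypothetical exposures, each with prevalence 0.5 and RR = 4:
# each PAF is 0.6, so the two fractions sum to 1.2, past 100%.
paf_a = population_attributable_fraction(0.5, 4.0)
paf_b = population_attributable_fraction(0.5, 4.0)
```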


On Polynomial Transformations For Simulating Multivariate Non-Normal Distributions, Todd C. Headrick May 2004

Journal of Modern Applied Statistical Methods

Procedures are introduced and discussed for increasing the computational and statistical efficiency of polynomial transformations used in Monte Carlo or simulation studies. Comparisons are also made between polynomials of order three and five in terms of (a) computational and statistical efficiency, (b) the skew and kurtosis boundary, and (c) boundaries for Pearson correlations. It is also shown how ranked data can be simulated for specified Spearman correlations and sample sizes. Potential consequences of nonmonotonic transformations on rank correlations are also discussed.
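A third-order polynomial transformation of this kind (the Fleishman power method) can be sketched as follows; the coefficients here are illustrative, not solved for a target skew and kurtosis.

```python
import numpy as np

def poly_transform(z, a, b, c, d):
    """Third-order power-method transform of standard normal variates:
    y = a + b z + c z^2 + d z^3."""
    return a + b * z + c * z ** 2 + d * z ** 3

rng = np.random.default_rng(2)
z = rng.standard_normal(200_000)
# Choosing a = -c keeps the mean of y at (approximately) zero; a
# positive c induces positive skew.
y = poly_transform(z, -0.1, 0.9, 0.1, 0.02)

sample_skew = float(((y - y.mean()) ** 3).mean() / y.std() ** 3)
```

In practice the coefficients are solved numerically so that y matches specified moments; fifth-order polynomials, compared in the article, widen the reachable skew-kurtosis region.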


An Alternative Q Chart Incorporating A Robust Estimator Of Scale, Michael B. C. Khoo May 2004

Journal of Modern Applied Statistical Methods

To overcome the shortcomings of the classical control charts in short-runs production, Quesenberry (1991 & 1995a–d) proposed Q charts for attributes and variables data. An approach is proposed to enhance the performance of a variable Q chart based on individual measurements using a robust estimator of scale. Monte Carlo simulations are conducted to show that the proposed robust Q chart is superior to the present Q chart.
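The robust-scale idea can be sketched for a simple individuals chart: replace the usual mean and moving-range sigma estimate with the median and an MAD-based scale, so a gross outlier does not inflate the limits (a generic sketch, not Quesenberry's Q statistic).

```python
import numpy as np

def robust_limits(x, k=3.0):
    """Individuals-chart control limits from the median and an
    MAD-based scale (1.4826 * MAD is consistent for sigma under
    normality), rather than the outlier-sensitive mean and range."""
    x = np.asarray(x, dtype=float)
    center = float(np.median(x))
    scale = 1.4826 * float(np.median(np.abs(x - center)))
    return center - k * scale, center, center + k * scale

# The limits stay tight despite the aberrant observation 15.0,
# so that observation falls outside the upper control limit.
lcl, center, ucl = robust_limits([10.1, 9.9, 10.0, 10.2, 9.8, 15.0])
```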


Beta-Normal Distribution: Bimodality Properties And Application, Felix Famoye, Carl Lee, Nicholas Eugene May 2004

Journal of Modern Applied Statistical Methods

The beta-normal distribution is characterized by four parameters that jointly describe the location, the scale and the shape properties. The beta-normal distribution can be unimodal or bimodal. This paper studies the bimodality properties of the beta-normal distribution. The region of bimodality in the parameter space is obtained. The beta-normal distribution is applied to fit a numerical bimodal data set. The beta-normal fits are compared with the fits of mixture-normal distribution through simulation.
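The beta-normal density (Eugene, Lee & Famoye, 2002) composes the beta density with the normal CDF. A quick numerical check of bimodality for small, equal shape parameters (grid and parameter values chosen for illustration, not from the article's bimodality-region derivation):

```python
import numpy as np
from scipy.stats import norm
from scipy.special import beta as beta_fn

def beta_normal_pdf(x, a, b, mu=0.0, sigma=1.0):
    """Beta-normal density:
    f(x) = Phi(z)^(a-1) * (1 - Phi(z))^(b-1) * phi(z) / (sigma * B(a, b)),
    where z = (x - mu) / sigma."""
    z = (x - mu) / sigma
    p = norm.cdf(z)
    return p ** (a - 1) * (1 - p) ** (b - 1) * norm.pdf(z) / (sigma * beta_fn(a, b))

# Small equal shape parameters push mass toward both tails, producing
# two symmetric modes with a dip at the center.
xs = np.linspace(-4.0, 4.0, 2001)
f = beta_normal_pdf(xs, 0.1, 0.1)
# Count strict interior local maxima as a crude bimodality check.
n_modes = int(np.sum((f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])))
```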


Respondent-Generated Intervals (Rgi) For Recall In Sample Surveys, S. James Press May 2004

Journal of Modern Applied Statistical Methods

Respondents are asked for both a basic response to a recall-type question, their usage quantity, and for lower and upper bounds of the (Respondent-Generated) interval in which their true values might possibly lie. A Bayesian hierarchical model for estimating the population mean and its variance is presented.


Estimation Using Bivariate Extreme Ranked Set Sampling With Application To The Bivariate Normal Distribution, Mohammad Fraiwan Al-Saleh, Hani M. Samawi May 2004

Journal of Modern Applied Statistical Methods

In this article, the procedure of bivariate extreme ranked set sampling (BVERSS) is introduced and investigated as a procedure for obtaining more accurate samples for estimating the parameters of bivariate populations. This procedure takes its strength from the advantages of bivariate ranked set sampling (BVRSS) over usual ranked set sampling (RSS) in dealing with two characteristics simultaneously, and the advantages of extreme ranked set sampling (ERSS) over usual RSS in reducing ranking errors and hence in being more applicable. The BVERSS procedure will be applied to the case of the parameters of the bivariate normal distribution. Illustration using real …


Kernel-Based Estimation Of P(X Less Than Y)With Paired Data, Omar M. Eidous, Ayman Baklizi May 2004

Journal of Modern Applied Statistical Methods

A point estimator of P(X < Y) was considered. A nonparametric estimator for P(X < Y) was developed using the kernel density estimator of the joint distribution of X and Y, which may be dependent. The resulting estimator was found to be similar to the estimator based on the sign statistic; however, it assigns smooth continuous scores to each pair of observations rather than the zero-or-one scores of the sign statistic. The asymptotic equivalence of the sign statistic and the proposed estimator is shown, and a simulation study is conducted to investigate the performance of the proposed estimator. Results indicate that …
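The smooth-score idea can be sketched with a Gaussian kernel: each pair contributes Φ((y_i − x_i)/h) in place of the sign statistic's 0/1 score (a generic sketch with a normal-reference bandwidth; the article's exact construction may differ).

```python
import numpy as np
from scipy.stats import norm

def kernel_p_x_less_y(x, y, h=None):
    """Smooth analogue of the sign statistic for paired data: each
    pair contributes Phi((y_i - x_i) / h) instead of the indicator
    1{x_i < y_i}."""
    d = np.asarray(y, dtype=float) - np.asarray(x, dtype=float)
    if h is None:
        h = 1.06 * d.std(ddof=1) * d.size ** (-1 / 5)  # normal-reference rule
    return float(np.mean(norm.cdf(d / h)))

# Paired, dependent data with y shifted upward: P(X < Y) is about 0.69.
rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 500)
y = x + rng.normal(0.5, 1.0, 500)
est = kernel_p_x_less_y(x, y)
```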


Some Improvements In Kernel Estimation Using Line Transect Sampling, Omar M. Eidous May 2004

Journal of Modern Applied Statistical Methods

Kernel estimation provides a nonparametric estimate of the probability density function from which a set of data is drawn. This article proposes a method for choosing a reference density in the bandwidth calculation for the kernel estimator under line transect sampling. The method is based on testing the shoulder condition: if the shoulder condition appears valid, the half-normal density is used as the reference, while if it does not, the exponential density is used. Accordingly, the performance of the resulting estimator is studied under a wide range of underlying models using simulation techniques. The results …


A Generalized Quasi-Likelihood Model Application To Modeling Poverty Of Asian American Women, Jeffrey R. Wilson May 2004

Journal of Modern Applied Statistical Methods

A generalized quasi-likelihood function that does not require the assumption of an underlying distribution when jointly modeling the mean and the variance is introduced to examine poverty among Asian American women living on the West Coast of the United States, using data from the U.S. Census Bureau.


A Visually Adaptive Bayesian Model In Wavelet Regression, Dongfeng Wu May 2004

Journal of Modern Applied Statistical Methods

The implementation of a Bayesian approach to wavelet regression that corresponds to the human visual system is examined. Most existing research in this area assumes non-informative priors, that is, a prior with mean zero. A new way is offered to implement prior information that mimics a visual inspection of noisy data, obtaining a first impression about the shape of the function and yielding a prior with non-zero mean. This visually adaptive Bayesian (VAB) prior has a simple structure, intuitive interpretation, and is easy to implement. Skorohod topology is suggested as a more appropriate measure in signal recovery than …


Estimation Of Multiple Linear Functional Relationships, Amjad D. Al-Nasser May 2004

Journal of Modern Applied Statistical Methods

This article deals with multiple linear functional relationship models. Two robust estimation procedures are proposed to estimate the model, based on Generalized Maximum Entropy (GME) and Partial Least Squares (PLS). They are distribution-free and do not rely (so much) on classical assumptions. The experiments showed that the GME approach outperforms the PLS approach in terms of mean squared error (MSE). Empirical examples are studied.


Validation Studies: Matters Of Dimensionality, Accuracy, And Parsimony With Predictive Discriminant Analysis And Factor Analysis, David A. Walker May 2004

Journal of Modern Applied Statistical Methods

Two studies were used as examples examining issues of dimensionality, accuracy, and parsimony in educational research via predictive discriminant analysis and factor analysis. Using a two-group problem, Study 1 examined how accurately group membership could be predicted from subjects' test scores. Study 2 examined the dimensionality structure of an instrument and whether it produced constructs that would measure theorized domains.


A Test-Retest Transition Matrix: A Modification Of Mcnemar’S Test, J. Wanzer Drane, W. Gregory Thatcher May 2004

Journal of Modern Applied Statistical Methods

McNemar introduced what is known today as a test for symmetry in a two-by-two contingency table. The logic of the test is based on a sample of matched pairs with a dichotomous response. In our example, the sample consists of the scores before and after an education program and the responses before and after the program. Each pair of scores is from only one person. The pretest divides the group of responders according to their answers to a dichotomous question. The posttest divides the two groups into two groups of like labels. The result is a two by …
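McNemar's statistic uses only the two discordant cells of that table; a minimal sketch with hypothetical pretest/posttest counts:

```python
from scipy.stats import chi2

def mcnemar(table):
    """McNemar's chi-square for a 2x2 pretest/posttest table: only the
    discordant cells b and c carry information about marginal change.
    Statistic: (b - c)^2 / (b + c), referred to chi-square with 1 df."""
    b, c = table[0][1], table[1][0]
    stat = (b - c) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)

# Rows: pretest yes/no; columns: posttest yes/no (hypothetical counts).
stat, p = mcnemar([[30, 5], [15, 50]])
```

Here 15 responders changed in one direction against 5 in the other, giving (5 − 15)²/20 = 5.0 and a p-value near 0.025.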


Jmasm11: Comparing Two Small Binomial Proportions, James F. Reed Iii May 2004

Journal of Modern Applied Statistical Methods

A large volume of research has focused on comparing the difference between two small binomial proportions. Statisticians recognize that Fisher’s Exact test and Yates chi-square test are excessively conservative. Likewise, many statisticians feel that Pearson’s Chi-square or the likelihood statistic may be inappropriate for small samples. Viable alternatives exist.
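The conservatism at issue is easy to see numerically (the table counts below are hypothetical):

```python
import numpy as np
from scipy.stats import fisher_exact, chi2_contingency

# Two small samples: 2/10 successes versus 8/10 successes.
table = np.array([[2, 8], [8, 2]])

odds_ratio, p_fisher = fisher_exact(table)
chi2_yates, p_yates, _, _ = chi2_contingency(table, correction=True)
chi2_pearson, p_pearson, _, _ = chi2_contingency(table, correction=False)
```

With these counts the Yates-corrected p-value (about 0.025) exceeds Fisher's exact p-value (about 0.023), and both are far above the uncorrected Pearson p-value (about 0.007), illustrating the ordering that motivates the alternatives the article surveys.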