Open Access. Powered by Scholars. Published by Universities.®

Statistical Models Commons

2011

Statistical Methodology


Articles 1 - 16 of 16

Full-Text Articles in Statistical Models

Assessing Association For Bivariate Survival Data With Interval Sampling: A Copula Model Approach With Application To Aids Study, Hong Zhu, Mei-Cheng Wang Nov 2011


Johns Hopkins University, Dept. of Biostatistics Working Papers

In disease surveillance systems or registries, bivariate survival data are typically collected under interval sampling. Interval sampling refers to a situation in which entry into a registry occurs at the time of the first failure event (e.g., HIV infection) within a calendar time interval, the time of the initiating event (e.g., birth) is retrospectively identified for all cases in the registry, and the second failure event (e.g., death) is subsequently observed during follow-up. Sampling bias is induced by this selection process, since the data are collected conditional on the first failure event occurring within the time interval. Consequently, the …


Depicting Estimates Using The Intercept In Meta-Regression Models: The Moving Constant Technique, Blair T. Johnson Dr., Tania B. Huedo-Medina Dr. Oct 2011


CHIP Documents

In any scientific discipline, the ability to portray research patterns graphically often aids greatly in interpreting a phenomenon. In part to depict phenomena, the statistics and capabilities of meta-analytic models have grown increasingly sophisticated. Accordingly, this article details how to move the constant in weighted meta-analysis regression models (viz., “meta-regression”) to illuminate the patterns in such models across a range of complexities. Although it is commonly ignored in practice, the constant (or intercept) in such models can be indispensable when it is not relegated to its usual static role. The moving constant technique makes possible estimates and confidence intervals at …
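The recentering idea behind the moving constant technique can be sketched in a few lines: shifting the moderator by a chosen value x0 makes the intercept of the weighted regression equal the model estimate (with its standard error) at x0. A minimal NumPy sketch, assuming a fixed-effect meta-regression with inverse-variance weights (the function name and inputs are illustrative, not the article's own code):

```python
import numpy as np

def estimate_at(x, y, v, x0):
    """Fixed-effect meta-regression of effect sizes y on moderator x,
    with within-study variances v. Recentering x at x0 ("moving the
    constant") makes the intercept the model estimate at x = x0."""
    w = 1.0 / v                                   # inverse-variance weights
    X = np.column_stack([np.ones_like(x), x - x0])
    XtWX = X.T @ (w[:, None] * X)
    beta = np.linalg.solve(XtWX, X.T @ (w * y))
    se = np.sqrt(np.linalg.inv(XtWX)[0, 0])       # SE of the moved intercept
    return beta[0], se                            # estimate and SE at x0
```

Calling the function over a grid of x0 values and plotting the resulting estimates with 95% intervals (est ± 1.96·se) reproduces the kind of depiction the article describes.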


Effectively Selecting A Target Population For A Future Comparative Study, Lihui Zhao, Lu Tian, Tianxi Cai, Brian Claggett, L. J. Wei Aug 2011


Harvard University Biostatistics Working Paper Series

When comparing a new treatment with a control in a randomized clinical study, the treatment effect is generally assessed by evaluating a summary measure over a specific study population. The success of the trial heavily depends on the choice of such a population. In this paper, we show a systematic, effective way to identify a promising population, for which the new treatment is expected to have a desired benefit, using the data from a current study involving similar comparator treatments. Specifically, with the existing data we first create a parametric scoring system using multiple covariates to estimate subject-specific treatment differences. …


A Study Of Missing Data Imputation And Predictive Modeling Of Strength Properties Of Wood Composites, Yan Zeng Aug 2011


Masters Theses

Problem: Real-time process and destructive test data were collected from a wood composite manufacturer in the U.S. to develop real-time predictive models of two key strength properties (Modulus of Rupture (MOR) and Internal Bond (IB)) of a wood composite manufacturing process. Sensor malfunctions and data “send/retrieval” problems led to null fields in the company’s data warehouse, resulting in information loss. Many manufacturers attempt to build accurate predictive models by excluding entire records with null fields or by substituting summary statistics such as the mean or median for the null fields. However, predictive model errors in validation may be higher …
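The two naive strategies the abstract mentions, dropping incomplete records versus filling nulls with a column summary, are easy to state in code. A hypothetical NumPy sketch (not the thesis's own implementation):

```python
import numpy as np

def drop_incomplete(X):
    """Listwise deletion: keep only rows with no null (NaN) fields."""
    return X[~np.isnan(X).any(axis=1)]

def impute_median(X):
    """Replace each NaN with the median of its column."""
    X = X.astype(float).copy()
    med = np.nanmedian(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = med[cols]
    return X
```

Both approaches discard or distort information, which is why validation errors of models built on such data can be higher than expected.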


On The Covariate-Adjusted Estimation For An Overall Treatment Difference With Data From A Randomized Comparative Clinical Trial, Lu Tian, Tianxi Cai, Lihui Zhao, L. J. Wei Jul 2011


Harvard University Biostatistics Working Paper Series

No abstract provided.


A Unified Approach To Non-Negative Matrix Factorization And Probabilistic Latent Semantic Indexing, Karthik Devarajan, Guoli Wang, Nader Ebrahimi Jul 2011


COBRA Preprint Series

Non-negative matrix factorization (NMF) by the multiplicative updates algorithm is a powerful machine learning method for decomposing a high-dimensional nonnegative matrix V into two matrices, W and H, each with nonnegative entries, V ~ WH. NMF has been shown to yield a parts-based, sparse representation of the data. The nonnegativity constraints in NMF allow only additive combinations of the data, which enables it to learn parts that have distinct physical representations in reality. In the last few years, NMF has been successfully applied in a variety of areas such as natural language processing, information retrieval, image processing, speech recognition …
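The multiplicative-updates algorithm mentioned above is short enough to sketch directly. This illustrative NumPy version minimizes the Euclidean loss ‖V − WH‖² (the classic Lee–Seung updates); the paper's unified treatment also covers divergence-based losses, which this sketch does not:

```python
import numpy as np

def nmf(V, k, n_iter=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for V ~ WH under squared error.
    W and H stay nonnegative because each update multiplies the current
    value by a ratio of nonnegative quantities."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H
```

The small `eps` guards against division by zero; each update is guaranteed not to increase the loss.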


Estimating Subject-Specific Treatment Differences For Risk-Benefit Assessment With Competing Risk Event-Time Data, Brian Claggett, Lihui Zhao, Lu Tian, Davide Castagno, L. J. Wei Mar 2011


Harvard University Biostatistics Working Paper Series

No abstract provided.


Some Problems And Solutions In The Experimental Science Of Technology: The Proper Use And Reporting Of Statistics In Computational Intelligence, With An Experimental Design From Computational Ethnomusicology, Mehmet Vurkaç Feb 2011


Systems Science Friday Noon Seminar Series

Statistics is the meta-science that lends validity and credibility to the scientific method. However, as a complex and advanced science in its own right, statistics is often misunderstood and misused by scientists, engineers, medical and legal professionals, and others. In the area of Computational Intelligence (CI), there have been numerous misuses of statistical techniques leading to the publishing of insupportable results, which, in addition to being a problem in itself, has also contributed to a degree of rift between the Statistics/Statistical Learning community and the Machine Learning/Computational Intelligence community. This talk surveys a number of misuses of statistical inference in CI settings, …


Multilevel Latent Class Models With Dirichlet Mixing Distribution, Chong-Zhi Di, Karen Bandeen-Roche Jan 2011


Chongzhi Di

Latent class analysis (LCA) and latent class regression (LCR) are widely used for modeling multivariate categorical outcomes in social sciences and biomedical studies. Standard analyses assume data of different respondents to be mutually independent, excluding application of the methods to familial and other designs in which participants are clustered. In this paper, we consider multilevel latent class models, in which sub-population mixing probabilities are treated as random effects that vary among clusters according to a common Dirichlet distribution. We apply the Expectation-Maximization (EM) algorithm for model fitting by maximum likelihood (ML). This approach works well, but is computationally intensive when …
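The data-generating mechanism described here, cluster-level mixing probabilities drawn from a common Dirichlet, is straightforward to simulate, which is also a useful check on any fitting code. An illustrative sketch (the Dirichlet parameters and class-conditional item probabilities are made up, and this is not the authors' EM code):

```python
import numpy as np

def simulate_multilevel_lca(n_clusters, n_per_cluster, alpha, item_probs, seed=0):
    """Each cluster draws its own class-mixing probabilities pi_c from a
    common Dirichlet(alpha); each member draws a latent class from pi_c
    and binary items from class-specific Bernoulli probabilities."""
    rng = np.random.default_rng(seed)
    K, J = item_probs.shape               # K latent classes, J binary items
    ys, classes = [], []
    for _ in range(n_clusters):
        pi_c = rng.dirichlet(alpha)                      # cluster-level random effect
        z = rng.choice(K, size=n_per_cluster, p=pi_c)    # latent class per subject
        y = (rng.random((n_per_cluster, J)) < item_probs[z]).astype(int)
        ys.append(y)
        classes.append(z)
    return np.vstack(ys), np.concatenate(classes)
```

Because pi_c varies by cluster, responses within a cluster are dependent even after marginalizing over the latent classes, which is exactly what the standard (single-level) LCA independence assumption rules out.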


Likelihood Ratio Testing For Admixture Models With Application To Genetic Linkage Analysis, Chong-Zhi Di, Kung-Yee Liang Jan 2011


Chongzhi Di

We consider likelihood ratio tests (LRT) and their modifications for homogeneity in admixture models. The admixture model is a special case of the two-component mixture model in which one component is indexed by an unknown parameter while the parameter value of the other component is known. It has been widely used in genetic linkage analysis under heterogeneity, in which the kernel distribution is binomial. For such models, it has long been recognized that testing for homogeneity is nonstandard and the LRT statistic does not converge to a conventional χ² distribution. In this paper, we investigate the asymptotic behavior of the LRT for …


Rejoinder: Estimation Issues For Copulas Applied To Marketing Data, Peter Danaher, Michael Smith Dec 2010


Michael Stanley Smith

Estimating copula models using Bayesian methods presents some subtle challenges, ranging from specification of the prior to computational tractability. There is also some debate about what is the most appropriate copula to employ from those available. We address these issues here and conclude by discussing further applications of copula models in marketing.


Forecasting Television Ratings, Peter Danaher, Tracey Dagger, Michael Smith Dec 2010


Michael Stanley Smith

Despite the state of flux in media today, television remains the dominant player globally for advertising spend. Since television advertising time is purchased on the basis of projected future ratings, and ad costs have skyrocketed, there is increasing pressure to forecast television ratings accurately. Previous forecasting methods are generally not very reliable, and many have not been validated; more distressingly, none have been tested in today’s multichannel environment. In this study we compare 8 different forecasting models, ranging from a naïve empirical method to a state-of-the-art Bayesian model-averaging method. Our data come from a recent time period, 2004-2008 in …


Windows Executable For Gaussian Copula With Nbd Margins, Michael S. Smith Dec 2010


Michael Stanley Smith

This is an example 32-bit Windows program to estimate a Gaussian copula model with NBD margins. The margins are estimated first using MLE, and the copula second using Bayesian MCMC. The model was discussed in Danaher & Smith (2011; Marketing Science) as example 4 (section 4.2).
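For readers without access to the executable, the model's structure can be illustrated by simulation: draw correlated Gaussians, push them through the normal CDF to obtain dependent uniforms, then invert NBD (negative binomial) quantile functions. A hypothetical SciPy sketch (the parameter values are made up, and this is simulation only, not the authors' MLE/MCMC estimation code):

```python
import numpy as np
from scipy import stats

def sample_nbd_gauss_copula(n, rho, nbd_params, seed=0):
    """Simulate bivariate counts with NBD margins joined by a Gaussian
    copula with correlation rho. nbd_params = [(r1, p1), (r2, p2)] in
    scipy's nbinom (number of successes, success probability) form."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = stats.norm.cdf(z)                        # dependent Uniform(0,1) pairs
    cols = [stats.nbinom.ppf(u[:, j], r, p).astype(int)
            for j, (r, p) in enumerate(nbd_params)]
    return np.column_stack(cols)
```

The inversion step is what lets the copula join discrete margins; estimation with discrete margins is subtler, which is one reason the program uses Bayesian MCMC for the copula parameters.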


Modeling Multivariate Distributions Using Copulas: Applications In Marketing, Peter J. Danaher, Michael S. Smith Dec 2010


Michael Stanley Smith

In this research we introduce a new class of multivariate probability models to the marketing literature. Known as “copula models”, they have a number of attractive features. First, they permit the combination of any univariate marginal distributions, which need not come from the same distributional family. Second, a particular class of copula models, called “elliptical copulas”, has the property that it increases in complexity at a much slower rate than existing multivariate probability models as the number of dimensions increases. Third, they are very general, encompassing a number of existing multivariate models, and provide a framework for generating many more. …


Bicycle Commuting In Melbourne During The 2000s Energy Crisis: A Semiparametric Analysis Of Intraday Volumes, Michael S. Smith, Goeran Kauermann Dec 2010


Michael Stanley Smith

Cycling is attracting renewed attention as a mode of transport in western urban environments, yet the determinants of usage are poorly understood. In this paper we investigate some of these using intraday bicycle volumes collected via induction loops located at ten bike paths in the city of Melbourne, Australia, between December 2005 and June 2008. The data are hourly counts at each location, with temporal and spatial disaggregation allowing for the impact of meteorology to be measured accurately for the first time. Moreover, during this period petrol prices varied dramatically and the data also provide a unique opportunity to assess …


The Generalized Shrinkage Estimator For The Analysis Of Functional Connectivity Of Brain Signals, Mark Fiecas, Hernando Ombao Dec 2010


Mark Fiecas

We develop a new statistical method for estimating functional connectivity between neurophysiological signals represented by a multivariate time series. We use partial coherence as the measure of functional connectivity. Partial coherence identifies the frequency bands that drive the direct linear association between any pair of channels. To estimate partial coherence, one would first need an estimate of the spectral density matrix of the multivariate time series. Parametric estimators of the spectral density matrix provide good frequency resolution but could be sensitive when the parametric model is misspecified. Smoothing-based nonparametric estimators are robust to model misspecification and are consistent but may …
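Once an estimate of the spectral density matrix at a given frequency is in hand, partial coherence follows from its inverse: with G = S⁻¹, the partial coherence between channels j and k is |G_jk|² / (G_jj · G_kk). A minimal NumPy sketch of that final step (the spectral estimation itself, including the shrinkage in the title, is not shown):

```python
import numpy as np

def partial_coherence(S):
    """Partial coherence from a (Hermitian, invertible) spectral density
    matrix S at one frequency: with G = S^{-1}, the partial coherence
    between channels j and k is |G_jk|^2 / (G_jj * G_kk)."""
    G = np.linalg.inv(S)
    d = np.real(np.diag(G))            # diagonal of G is real and positive
    PC = np.abs(G) ** 2 / np.outer(d, d)
    np.fill_diagonal(PC, 1.0)
    return PC
```

Because the formula inverts S, a poorly conditioned spectral estimate ruins the result, which is precisely why a well-behaved (e.g., shrinkage) estimator of the spectral matrix matters.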