Open Access. Powered by Scholars. Published by Universities.®


2010

Statistics and Probability


Articles 31 - 60 of 531

Full-Text Articles in Physical Sciences and Mathematics

Reliability Measures Of A Three-State Complex System: A Copula Approach, Mangey Ram Dec 2010


Applications and Applied Mathematics: An International Journal (AAM)

Improvements in reliability and production play a very important role in system design. The two key factors considered in predicting system reliability are the failure distribution of the components and the system configuration. This research discusses the mathematical modeling of a highly reliable complex system that can be in three states: normal, partially failed (degraded), and completely failed. Partial failure is due to the partial failure of internal components or redundancies, while complete failure is due to a catastrophic failure of the system. Repair rates are general functions of the time spent. All the transition rates are constant except …


Spatial Epidemiology Of Birth Defects In The United States And The State Of Utah Using Geographic Information Systems And Spatial Statistics, Samson Y. Gebreab Dec 2010


All Graduate Theses and Dissertations, Spring 1920 to Summer 2023

Oral clefts are the most common form of birth defects in the United States (US), and the State of Utah has among the highest prevalence of oral clefts in the nation. The overall objective of this dissertation was to examine the spatial distribution of oral clefts and their linkage with a broad range of demographic, behavioral, social, economic, and environmental risk factors through the application of Geographic Information Systems (GIS) and spatial statistics. Using innovative linked micromaps plots, we investigated the geographic patterns of oral cleft occurrence from 1998 to 2002 and their relationships with maternal smoking rates and proportion …


An Investigation Of Process Parameters To Optimize The Fiber Diameter Of Electrospun Vascular Scaffolds Through Experimental Design, Steffi Wong Dec 2010


Biomedical Engineering

No abstract provided.


The Determinants Of Colorectal Cancer Survival Disparities In Nevada, Lucas N. Wassira Dec 2010


UNLV Theses, Dissertations, Professional Papers, and Capstones

Different population groups across Nevada and throughout the United States suffer disproportionately from colorectal cancer and its after-effects. Overcoming cancer health disparities is important for lessening the burden of cancer. There has been an overall decline in the incidence of and mortality from colorectal cancer (CRC). This is likely due, in part, to the increasing use of screening procedures such as Fecal Occult Blood Test (FOBT) and/or endoscopy, which can reduce the risk of CRC mortality by fifty percent. Nevertheless, screening procedures are routinely used by only fifty percent of Americans aged fifty years and older. Despite overall mortality decreasing …


Poisson Process Monitoring, Test And Comparison, Qing Chen Dec 2010


UNLV Theses, Dissertations, Professional Papers, and Capstones

The task of determining whether a sudden change has occurred in the generative parameters of a time series has applications in many areas. In this thesis, we aim to monitor the change point of a Poisson process by a method characterized by a forward-backward testing algorithm and several overall error-control mechanisms. Applying this proposed method, we conclude that Mount Etna is not a simple Poissonian volcano, because two different regimes, divided by the change point of January 30th, 1974, are identified. The validation procedures, used in a complementary fashion by the formal hypothesis tests and graphical method, will …
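
At the heart of any such procedure is a maximum-likelihood search for the most plausible split of the event record into two Poisson regimes. The sketch below illustrates that core step on per-interval event counts; it is an illustrative simplification, not the thesis's forward-backward algorithm, and all function names are hypothetical.

```python
import math

def poisson_loglik(counts, rate):
    # Log-likelihood of i.i.d. Poisson counts at a given rate.
    if rate == 0:
        return 0.0 if sum(counts) == 0 else float("-inf")
    return sum(c * math.log(rate) - rate - math.lgamma(c + 1) for c in counts)

def best_change_point(counts):
    """Return (k, log-likelihood) for the split index k that maximizes the
    two-regime Poisson likelihood, with each regime's rate set to its MLE
    (the segment's sample mean)."""
    best_k, best_ll = None, float("-inf")
    for k in range(1, len(counts)):
        left, right = counts[:k], counts[k:]
        ll = (poisson_loglik(left, sum(left) / len(left)) +
              poisson_loglik(right, sum(right) / len(right)))
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k, best_ll
```

For counts generated from two well-separated rates, the returned split index recovers the true change point.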


Modeling Longitudinal Data Using A Pair-Copula Decomposition Of Serial Dependence, Michael S. Smith, Aleksey Min, Carlos Almeida, Claudia Czado Nov 2010


Michael Stanley Smith

Copulas have proven to be very successful tools for the flexible modelling of cross-sectional dependence. In this paper we express the dependence structure of continuous-valued time series data using a sequence of bivariate copulas. This corresponds to a type of decomposition recently called a ‘vine’ in the graphical models literature, where each copula is entitled a ‘pair-copula’. We propose a Bayesian approach for the estimation of this dependence structure for longitudinal data. Bayesian selection ideas are used to identify any independence pair-copulas, with the end result being a parsimonious representation of a time-inhomogeneous Markov process of varying order. Estimates are …


Putting Artists On The Map: A Five Part Study Of Greater Cleveland Artists' Location Decisions - Part 2: Profiles Of Artist Neighborhoods, Mark Salling, Gregory Soltis, Charles Post, Sharon Bliss, Ellen Cyran Nov 2010


All Maxine Goodman Levin School of Urban Affairs Publications

A series of reports detailing the residential and work space location preferences of Cuyahoga County's artists.


Asymptotic Theory For Cross-Validated Targeted Maximum Likelihood Estimation, Wenjing Zheng, Mark J. Van Der Laan Nov 2010


U.C. Berkeley Division of Biostatistics Working Paper Series

We consider a targeted maximum likelihood estimator of a path-wise differentiable parameter of the data generating distribution in a semi-parametric model based on observing n independent and identically distributed observations. The targeted maximum likelihood estimator (TMLE) uses V-fold sample splitting for the initial estimator in order to make the TMLE maximally robust in its bias reduction step. We prove a general theorem that states asymptotic efficiency (and thereby regularity) of the targeted maximum likelihood estimator when the initial estimator is consistent and a second order term converges to zero in probability at a rate faster than the square root of …
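
The V-fold sample splitting referred to above is mechanically simple; the sketch below (a hypothetical helper, not code from the paper) constructs the V training/validation index pairs used for the initial estimator.

```python
def v_fold_splits(n, V):
    """Return V (training, validation) index pairs for V-fold sample
    splitting: each observation appears in exactly one validation fold,
    and the corresponding training set is its complement."""
    folds = [list(range(i, n, V)) for i in range(V)]
    return [(sorted(set(range(n)) - set(fold)), fold) for fold in folds]
```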


A Bayesian Shared Component Model For Genetic Association Studies, Juan J. Abellan, Carlos Abellan, Juan R. Gonzalez Nov 2010


COBRA Preprint Series

We present a novel approach to address genome association studies between single nucleotide polymorphisms (SNPs) and disease. We propose a Bayesian shared component model to tease out the genotype information that is common to cases and controls from the information that is specific to cases only. This allows us to detect the SNPs that show the strongest association with the disease. The model can be applied to case-control studies with more than one disease. In fact, we illustrate the use of this model with a dataset of 23,418 SNPs from a case-control study by The Wellcome Trust Case Control Consortium (2007) …


Minimum Description Length And Empirical Bayes Methods Of Identifying Snps Associated With Disease, Ye Yang, David R. Bickel Nov 2010


COBRA Preprint Series

The goal of determining which of hundreds of thousands of SNPs are associated with disease poses one of the most challenging multiple testing problems. Using the empirical Bayes approach, the local false discovery rate (LFDR) estimated using popular semiparametric models has enjoyed success in simultaneous inference. However, the estimated LFDR can be biased because the semiparametric approach tends to overestimate the proportion of the non-associated single nucleotide polymorphisms (SNPs). One of the negative consequences is that, like conventional p-values, such LFDR estimates cannot quantify the amount of information in the data that favors the null hypothesis of no disease-association.

We …
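
For readers unfamiliar with the LFDR: under the standard two-groups model it is lfdr(z) = π0·f0(z)/f(z), the posterior probability that a SNP with test statistic z is null. The sketch below is a deliberately crude, histogram-based illustration of that formula, not the semiparametric or MDL estimators discussed above; all names are hypothetical.

```python
import math

def lfdr_estimates(z, pi0=1.0, bins=20):
    """Crude local false discovery rate estimates under the two-groups
    model, lfdr(z) = pi0 * f0(z) / f(z), with f0 the standard normal
    density and f estimated by a histogram of all statistics.
    pi0 = 1.0 is the conservative upper bound on the null proportion."""
    lo, hi = min(z), max(z)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in z:
        counts[min(int((v - lo) / width), bins - 1)] += 1

    def f_hat(v):  # histogram density estimate of the mixture f
        return counts[min(int((v - lo) / width), bins - 1)] / (len(z) * width)

    def f0(v):     # standard normal null density
        return math.exp(-v * v / 2) / math.sqrt(2 * math.pi)

    return [min(1.0, pi0 * f0(v) / max(f_hat(v), 1e-12)) for v in z]
```

Statistics near zero get lfdr estimates near one (consistent with the null), while extreme statistics get estimates near zero.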


Cost-Efficient Variable Selection Using Branching Lars, Li Hua Yue Nov 2010


Electronic Thesis and Dissertation Repository

Variable selection is a difficult problem in statistical model building. Identification of cost-efficient diagnostic factors is very important to health researchers, but most variable selection methods do not take into account the cost of collecting data for the predictors. Our focus is the trade-off between statistical significance and the cost of collecting data for the statistical model. A Branching LARS (BLARS) procedure has been developed that can select and estimate the important predictors to build a model that is not only good at prediction but also cost-efficient. The BLARS method is an extension of the LARS variable selection method to incorporate …


Observational Study And Individualized Antiretroviral Therapy Initiation Rules For Reducing Cancer Incidence In Hiv-Infected Patients, Romain Neugebauer, Michael J. Silverberg, Mark J. Van Der Laan Nov 2010


U.C. Berkeley Division of Biostatistics Working Paper Series

Targeted Maximum Likelihood Learning (TMLL) has been proposed as a general estimation methodology that can, in particular, be applied to draw causal inferences based on marginal structural modeling with observational data using either a point treatment approach (all confounders are assumed not to be affected by the exposure(s) of interest) or a longitudinal data approach (some confounders may be affected by one of the exposures of interest). While formal development of TMLL has included road maps for applications in longitudinal data approaches, real-life implementations have been restricted to studies based on a point treatment approach. In this article, we illustrate …


Inferential Methods For High-Throughput Methylation Data, Maria Capparuccini Nov 2010


Theses and Dissertations

The role of abnormal DNA methylation in the progression of disease is a growing area of research that relies upon the establishment of sound statistical methods. The common method for declaring that there is differential methylation between two groups at a given CpG site, as summarized by the difference between proportions methylated, Δβ = β1 − β2, has been through use of a Filtered Two Sample t-test, using the recommended filter of 0.17 (Bibikova et al., 2006b). In this dissertation, we performed a re-analysis of the data used in recommending the threshold by fitting a mixed-effects ANOVA model. It was determined that the 0.17 filter …
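
The filtering rule under re-examination can be sketched directly: a site is passed to the two-sample t-test only when the absolute difference in mean methylation proportions clears the threshold. This is a simplified illustration with a Welch-type statistic and hypothetical names, not the dissertation's mixed-effects ANOVA analysis.

```python
import statistics

def filtered_welch_t(group1, group2, threshold=0.17):
    """Return (passes_filter, t_statistic) for one CpG site.
    group1/group2 are per-sample methylation proportions (beta values);
    the site is tested only when |b1 - b2| >= threshold."""
    b1, b2 = statistics.mean(group1), statistics.mean(group2)
    if abs(b1 - b2) < threshold:
        return False, None  # filtered out: not tested
    v1 = statistics.variance(group1) / len(group1)
    v2 = statistics.variance(group2) / len(group2)
    t = (b1 - b2) / (v1 + v2) ** 0.5  # Welch-type t statistic
    return True, t
```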


Survival Analysis Of Microarray Data With Microarray Measurement Subject To Measurement Error, Juan Xiong Nov 2010


Electronic Thesis and Dissertation Repository

Microarray technology is essentially a measurement tool for measuring the expression of genes, and this measurement is subject to measurement error. Gene expressions can be employed as predictors of patient survival, but the measurement error involved in gene expression is often ignored in the analysis of microarray data in the literature. Efforts are needed to establish statistical methods for analyzing microarray data without ignoring the error in gene expression. A typical microarray data set has a large number of genes, far exceeding the sample size. Proper selection of survival-relevant genes contributes to an accurate prediction model. We study the …


Improving The Power Of Chronic Disease Surveillance By Incorporating Residential History, Justin Manjourides, Marcello Pagano Nov 2010


Harvard University Biostatistics Working Paper Series

No abstract provided.


Power And Sample Size For Three-Level Cluster Designs, Tina Cunningham Nov 2010


Theses and Dissertations

Over the past few decades, Cluster Randomized Trials (CRT) have become a design of choice in many research areas. One of the most critical issues in planning a CRT is to ensure that the study design is sensitive enough to capture the intervention effect. The assessment of power and sample size in such studies is often faced with many challenges due to several methodological difficulties. While studies on power and sample size for cluster designs with one and two levels are abundant, the evaluation of required sample size for three-level designs has been generally overlooked. First, the nesting effect introduces …


Exploring Evaluation In School Districts: School District Evaluators And Their Practice, Susan Hibbard Nov 2010


USF Tampa Graduate Theses and Dissertations

This study explored the evaluation practices of internal evaluators in public school districts in a large southern state. The individuals who conduct evaluations in school districts as internal evaluators were identified, and background information was collected. Their education and training in evaluation were investigated, along with the types of evaluations they typically conduct. Respondents (n = 134) revealed that conducting evaluations was a secondary role and part of their main job responsibilities. The types of evaluations carried out and the ways in which evaluation was practiced were revealed. A descriptive framework of the individuals who conduct evaluations in school districts …


A Flexible Method For Testing Independence In Two-Way Contingency Tables, Peyman Jafari, Noori Akhtar-Danesh, Zahra Bagheri Nov 2010


Journal of Modern Applied Statistical Methods

A flexible approach for testing association in two-way contingency tables is presented. It is simple, does not assume a specific form for the association and is applicable to tables with nominal-by-nominal, nominal-by-ordinal, and ordinal-by-ordinal classifications.
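
For comparison, the classical benchmark for such a test is Pearson's chi-square test of independence, which for an r × c table is a short computation (a standard textbook statistic, not the paper's flexible method):

```python
def chi_square_independence(table):
    """Pearson chi-square statistic and degrees of freedom for an
    r x c contingency table, given as a list of rows of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n  # under independence
            stat += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df
```

The statistic is then referred to a chi-square distribution with df degrees of freedom.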


The Not-So-Quiet Revolution: Cautionary Comments On The Rejection Of Hypothesis Testing In Favor Of A “Causal” Modeling Alternative, Daniel H. Robinson, Joel R. Levin Nov 2010


Journal of Modern Applied Statistical Methods

Rodgers (2010) recently applauded a revolution involving the increased use of statistical modeling techniques. It is argued that such use may have a downside, citing empirical evidence in educational psychology that modeling techniques are often applied in cross-sectional, correlational studies to produce unjustified causal conclusions and prescriptive statements.


Statistical And Mathematical Modeling Versus Nhst? There’S No Competition!, Joseph Lee Rodgers Nov 2010


Journal of Modern Applied Statistical Methods

Some of Robinson & Levin’s critique of Rodgers (2010) is cogent, helpful, and insightful – although limiting. Recent methodology has advanced through the development of structural equation modeling, multi-level modeling, missing data methods, hierarchical linear modeling, categorical data analysis, as well as the development of many dedicated and specific behavioral models. These methodological approaches are based on a revised epistemological system, and have emerged naturally, without the need for task forces, or even much self-conscious discussion. The original goal was neither to develop nor promote a modeling revolution. That has occurred; I documented its development and its status. Two organizing …


Recommended Sample Size For Conducting Exploratory Factor Analysis On Dichotomous Data, Robert H. Pearson, Daniel J. Mundform Nov 2010


Journal of Modern Applied Statistical Methods

Minimum sample sizes are recommended for conducting exploratory factor analysis on dichotomous data. A Monte Carlo simulation was conducted, varying the level of communalities, number of factors, variable-to-factor ratio and dichotomization threshold. Sample sizes were identified based on congruence between rotated population and sample factor loadings.
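
Congruence between rotated population and sample loadings is conventionally measured with Tucker's congruence coefficient; a minimal sketch of that criterion follows (the Monte Carlo design itself is not reproduced here).

```python
import math

def congruence_coefficient(loadings_a, loadings_b):
    """Tucker's coefficient of congruence between two factor-loading
    vectors: phi = sum(a*b) / sqrt(sum(a^2) * sum(b^2)). Values near 1
    indicate a well-recovered factor; 0.95+ is a common benchmark."""
    num = sum(a * b for a, b in zip(loadings_a, loadings_b))
    den = math.sqrt(sum(a * a for a in loadings_a) *
                    sum(b * b for b in loadings_b))
    return num / den
```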


Notes On Hypothesis Testing Under A Single-Stage Design In Phase Ii Trial, Kung-Jong Lui Nov 2010


Journal of Modern Applied Statistical Methods

A primary objective of a phase II trial is to determine whether future development is warranted for a new treatment, based on whether it has sufficient activity against a specified type of tumor. Limitations exist in the commonly-used hypothesis setting and the standard test procedure for a phase II trial. This study reformulates the hypothesis setting to mirror the clinical decision process in practice. Under the proposed hypothesis setting, the critical points and the minimum required sample size for a desired power of finding a superior treatment at a given α-level are presented. An example is provided to illustrate how …


Effect Of Measurement Errors On The Separate And Combined Ratio And Product Estimators In Stratified Random Sampling, Housila P. Singh, Namrata Karpe Nov 2010


Journal of Modern Applied Statistical Methods

Separate and combined ratio, product and difference estimators are introduced for population mean μY of a study variable Y using auxiliary variable X in stratified sampling when the observations are contaminated with measurement errors. The bias and mean squared error of the proposed estimators have been derived under large sample approximation and their properties are analyzed. Generalized versions of these estimators are given along with their properties.
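
In the error-free case, the combined ratio estimator of the kind studied above has the familiar form (ȳ_st / x̄_st)·X̄. The sketch below shows that baseline computation, ignoring the measurement-error contamination that is the paper's actual focus; names are hypothetical.

```python
def combined_ratio_estimate(strata, X_bar):
    """Combined ratio estimator of the population mean of Y in stratified
    sampling. strata: list of (weight, y_sample, x_sample) per stratum,
    where weight is the stratum's population share; X_bar: known
    population mean of the auxiliary variable X."""
    y_st = sum(w * sum(y) / len(y) for w, y, _ in strata)  # stratified mean of Y
    x_st = sum(w * sum(x) / len(x) for w, _, x in strata)  # stratified mean of X
    return (y_st / x_st) * X_bar
```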


Use Of Two Variables Having Common Mean To Improve The Bar-Lev, Bobovitch And Boukai Randomized Response Model, Oluseun Odumade, Sarjinder Singh Nov 2010


Journal of Modern Applied Statistical Methods

A new method to improve the randomized response model due to Bar-Lev, Bobovitch and Boukai (2004) is suggested. It has been observed that if two sensitive (or non-sensitive) variables exist that are related to the main sensitive study variable, then those variables can be used to construct ratio-type adjustments to the usual estimator of the population mean of a sensitive variable due to Bar-Lev, Bobovitch and Boukai (2004). The relative efficiency of the proposed estimators is studied with respect to the Bar-Lev, Bobovitch and Boukai (2004) model under different situations.


Incidence And Prevalence For A Triply Censored Data, Hilmi F. Kittani Nov 2010


Journal of Modern Applied Statistical Methods

The model introduced for the natural history of a progressive disease has four disease states which are expressed as a joint distribution of three survival random variables. Covariates are included in the model using Cox’s proportional hazards model with necessary assumptions needed. Effects of the covariates are estimated and tested. Formulas for incidence in the preclinical, clinical and death states are obtained, and prevalence formulas are obtained for the preclinical and clinical states. Estimates of the sojourn times in the preclinical and clinical states are obtained.


Robust Estimators In Logistic Regression: A Comparative Simulation Study, Sanizah Ahmad, Norazan Mohamed Ramli, Habshah Midi Nov 2010


Journal of Modern Applied Statistical Methods

The maximum likelihood estimator (MLE) is commonly used to estimate the parameters of logistic regression models due to its efficiency under a parametric model. However, evidence has shown that outliers have an undue effect on the MLE's parameter estimates. Robust methods have been put forward to rectify this problem. This article examines the performance of the MLE and four existing robust estimators under different outlier patterns, which are investigated using real data sets and Monte Carlo simulation.


A General Class Of Chain-Type Estimators In The Presence Of Non-Response Under Double Sampling Scheme, Sunil Kumar, Housila P. Singh, Sandeep Bhougal Nov 2010


Journal of Modern Applied Statistical Methods

A general class of chain ratio-type estimators for estimating the population mean of a study variable is examined in the presence of non-response under a double sampling scheme using a factor-type estimator (FTE). Properties of the suggested estimators are studied and compared to those of existing estimators. An empirical study is carried out to demonstrate the performance of the suggested estimators; empirical results support the theoretical study.


Maximum Downside Semi Deviation Stochastic Programming For Portfolio Optimization Problem, Anton Abdulbasah Kamil, Khlipah Ibrahim Nov 2010


Journal of Modern Applied Statistical Methods

Portfolio optimization is an important research field in financial decision making. The chief characteristic of such optimization problems is the uncertainty of future returns. Probabilistic methods are used alongside optimization techniques. Markowitz (1952, 1959) introduced the concept of risk into the problem and used a mean-variance model to identify risk with the volatility (variance) of the random objective. The mean-risk optimization paradigm has since been expanded extensively, both theoretically and computationally. Single-stage and two-stage stochastic programming models with recourse are presented for risk-averse investors with the objective of minimizing the maximum downside semideviation. The models employ the …
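
The Markowitz building block cited above, identifying risk with the variance of portfolio return, reduces to a simple quadratic form; the downside-semideviation models in the paper replace exactly this risk term. A minimal sketch with hypothetical names, not the paper's model:

```python
def portfolio_mean_variance(weights, means, cov):
    """Mean and variance of the portfolio return r = w'R for asset
    weights w, expected returns `means`, and covariance matrix `cov`
    (variance is the Markowitz risk measure)."""
    mean = sum(w * m for w, m in zip(weights, means))
    var = sum(wi * wj * cov[i][j]
              for i, wi in enumerate(weights)
              for j, wj in enumerate(weights))
    return mean, var
```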


On Bayesian Shrinkage Setup For Item Failure Data Under A Family Of Life Testing Distribution, Gyan Prakash Nov 2010


Journal of Modern Applied Statistical Methods

Properties of the Bayes shrinkage estimator for the parameter of a family of probability density functions are studied when item failure data are available. Symmetric and asymmetric loss functions are considered for two different prior distributions. In addition, the Bayes estimates of the reliability function and hazard rate are obtained and their properties are studied.


Empirical Characteristic Function Approach To Goodness Of Fit Tests For The Logistic Distribution Under Srs And Rss, M. T. Alodat, S. A. Al-Subh, Kamaruzaman Ibrahim, Abdul Aziz Jemain Nov 2010


Journal of Modern Applied Statistical Methods

The integral of the squared modulus of the difference between the empirical characteristic function and the characteristic function of the hypothesized distribution was used by Wong and Sim (2000) to test goodness of fit. A weighted version of the Wong and Sim (2000) statistic under ranked set sampling, a sampling technique introduced by McIntyre (1952), is examined. Simulations show that the ranked set sampling counterpart of Wong and Sim (2000) is more powerful.
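
The statistic described above, the integrated squared modulus of the difference between the empirical characteristic function and the hypothesized one, can be approximated on a grid. The sketch below does this for the standard logistic distribution under simple random sampling; it illustrates the statistic's form only, not the weighted ranked-set-sampling version examined in the paper, and all names are hypothetical.

```python
import cmath
import math

def logistic_cf(t):
    # Characteristic function of the standard logistic distribution.
    if t == 0:
        return 1.0
    return math.pi * t / math.sinh(math.pi * t)

def ecf_gof_statistic(sample, t_max=5.0, steps=200):
    """Midpoint-rule approximation over [-t_max, t_max] of the integral of
    |empirical CF - logistic CF|^2; larger values indicate worse fit."""
    n = len(sample)
    dt = 2 * t_max / steps
    total = 0.0
    for k in range(steps):
        t = -t_max + (k + 0.5) * dt
        ecf = sum(cmath.exp(1j * t * x) for x in sample) / n
        total += abs(ecf - logistic_cf(t)) ** 2 * dt
    return total
```

A sample shifted away from zero disagrees with the standard logistic CF more than a centered one, so its statistic is larger.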