Open Access. Powered by Scholars. Published by Universities.®

Physical Sciences and Mathematics Commons

Articles 1 - 8 of 8

Full-Text Articles in Physical Sciences and Mathematics

Regularization Methods For Predicting An Ordinal Response Using Longitudinal High-Dimensional Genomic Data, Jiayi Hou Nov 2013

Theses and Dissertations

Ordinal scales are commonly used to measure health status and disease-related outcomes in hospital settings as well as in translational medical research. Notable examples include cancer staging, a five-category ordinal scale indicating tumor size, node involvement, and likelihood of metastasis, and the Glasgow Coma Scale (GCS), an ordinal measure that gives a reliable and objective assessment of a patient's conscious state. In addition, repeated measurements are common in clinical practice for tracking and monitoring the progression of complex diseases. Classical ordinal modeling methods based on the likelihood approach have contributed to the analysis of data in …
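As a minimal sketch of the kind of model this abstract describes, the following fits an L1-penalized cumulative-logit (proportional odds) model to simulated data. The data, the penalty weight lam, the cutpoint reparameterization, and the optimizer are all illustrative assumptions, not the thesis's actual method.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy data: n samples, p predictors, K = 3 ordered outcome categories.
n, p, K = 200, 5, 3
X = rng.normal(size=(n, p))
true_beta = np.array([1.5, 0.0, -1.0, 0.0, 0.0])   # sparse true effects
eta = X @ true_beta + rng.logistic(size=n)
y = np.digitize(eta, bins=[-1.0, 1.0])             # categories 0, 1, 2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neg_loglik(params, lam=0.1):
    """L1-penalized cumulative-logit negative log-likelihood."""
    raw, beta = params[:K - 1], params[K - 1:]
    # Cutpoints kept increasing by accumulating exponentiated gaps.
    alpha = np.cumsum(np.concatenate([[raw[0]], np.exp(raw[1:])]))
    cum = sigmoid(alpha[None, :] - (X @ beta)[:, None])   # P(Y <= j | x)
    cum = np.hstack([np.zeros((n, 1)), cum, np.ones((n, 1))])
    probs = cum[np.arange(n), y + 1] - cum[np.arange(n), y]
    return -np.sum(np.log(np.clip(probs, 1e-12, None))) + lam * np.abs(beta).sum()

init = np.zeros(K - 1 + p)
fit = minimize(neg_loglik, init, method="Powell")
beta_hat = fit.x[K - 1:]                            # penalized coefficient estimates
```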


Review And Extension For The O’Brien Fleming Multiple Testing Procedure, Hanan Hammouri Nov 2013

Theses and Dissertations

O'Brien and Fleming (1979) proposed a straightforward and useful multiple testing procedure (a group sequential testing procedure) for comparing two treatments in clinical trials where subject responses are dichotomous (e.g., success and failure). O'Brien and Fleming stated that their group sequential testing procedure has the same Type I error rate and power as a fixed one-stage chi-square test, but offers the opportunity to terminate the trial early when one treatment is clearly performing better than the other. We studied and tested the O'Brien and Fleming procedure, specifically correcting the originally proposed critical values. Furthermore, we updated the O’Brien …
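The O'Brien-Fleming rule can be illustrated with a small Monte Carlo sketch: with K looks, stop and reject at look k when |Z_k| ≥ c·√(K/k), so the early boundary is much stricter than the final one. The sample sizes and simulation setup below are hypothetical; c is the standard tabulated final-look value for two looks at two-sided α = 0.05.

```python
import numpy as np

rng = np.random.default_rng(1)

# At look k of K, stop and reject if |Z_k| >= c * sqrt(K / k):
# roughly 2.80 at look 1 and 1.98 at look 2 for this design.
K = 2
c = 1.977                 # tabulated final-look boundary, two-sided alpha = 0.05
n_per_look, n_sim = 500, 20000

rejections = 0
for _ in range(n_sim):
    data = rng.normal(size=K * n_per_look)         # data under H0
    for k in range(1, K + 1):
        m = k * n_per_look
        z = data[:m].sum() / np.sqrt(m)            # cumulative z-statistic
        if abs(z) >= c * np.sqrt(K / k):
            rejections += 1
            break

type1 = rejections / n_sim                         # should be near 0.05
```

The estimated overall Type I error stays near the nominal 0.05 even though two tests are performed, which is the point of the boundary's shape.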


Response Adaptive Design Using Auxiliary And Primary Outcomes, Shuxian Sinks Nov 2013

Theses and Dissertations

Response adaptive designs aim to allocate more patients to better treatments without undermining the validity or the integrity of the trial. The efficiency of a response adaptive design depends on the immediacy of the primary response (e.g., death, remission), since adaptation requires outcomes that can be observed quickly or immediately. This presents difficulties for survival studies, which may require long durations to observe the primary endpoint. Therefore, we introduce auxiliary endpoints to assist the adaptation with the primary endpoint, where an auxiliary endpoint is generally defined as any measurement that is positively associated with the primary endpoint. Our proposed design (referred to …
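A classic response adaptive rule, the randomized play-the-winner urn, sketches the basic allocation idea (the proposed design in the abstract, which also uses auxiliary endpoints, is different); the response rates below are hypothetical.

```python
import random

random.seed(2)

# Randomized play-the-winner urn: a success adds a ball of the same
# arm's colour, a failure adds one for the rival arm, so allocation
# drifts toward the better-performing treatment.
p_success = {"A": 0.7, "B": 0.4}                   # hypothetical response rates
urn = ["A", "B"]
assigned = {"A": 0, "B": 0}

for _ in range(1000):
    arm = random.choice(urn)
    assigned[arm] += 1
    if random.random() < p_success[arm]:
        urn.append(arm)                            # success: reward this arm
    else:
        urn.append("B" if arm == "A" else "A")     # failure: reward the rival

# With these rates the limiting share for A is q_B / (q_A + q_B) = 2/3.
share_A = assigned["A"] / 1000
```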


The Estimation And Evaluation Of Optimal Thresholds For Two Sequential Testing Strategies, Amber R. Wilk Jul 2013

Theses and Dissertations

Many continuous medical tests rely on a threshold for diagnosis. There are two sequential testing strategies of interest: Believe the Positive (BP) and Believe the Negative (BN). BP classifies a patient as positive if either the first test exceeds a threshold θ1, or the first test is negative and the second test exceeds θ2. BN classifies a patient as positive only if the first test exceeds a threshold θ3 and the second test exceeds θ4. Threshold pairs θ = (θ1, θ2) or (θ3, θ4), depending on the strategy, are defined as optimal if they maximize …
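The two decision rules translate directly into code; the thresholds and test values used below are hypothetical.

```python
# In BP the second test is only needed when the first is negative;
# in BN it is only needed when the first is positive.

def believe_the_positive(t1, t2, theta1, theta2):
    """Positive if test 1 exceeds theta1, or test 1 is negative
    (<= theta1) and test 2 exceeds theta2."""
    return t1 > theta1 or (t1 <= theta1 and t2 > theta2)

def believe_the_negative(t1, t2, theta3, theta4):
    """Positive only if test 1 exceeds theta3 and test 2 exceeds theta4."""
    return t1 > theta3 and t2 > theta4
```

BP raises sensitivity at the cost of specificity (two chances to be called positive), while BN does the reverse, which is why the optimal threshold pair depends on the strategy.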


Choosing The Cut Point For A Restricted Mean In Survival Analysis, A Data Driven Method, Emily H. Sheldon Apr 2013

Theses and Dissertations

Survival analysis generally uses the median survival time as a common summary statistic. While the median possesses the desirable characteristic of being unbiased, there are times when it is not the best statistic to describe the data at hand. Royston and Parmar (2011) argue that the restricted mean survival time should be the summary statistic used when the proportional hazards assumption is in doubt. Work on restricted means dates back to 1949, when J.O. Irwin developed a calculation for the standard error of the restricted mean using Greenwood’s formula. Since then the development of the restricted mean has …
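The restricted mean is the area under the Kaplan-Meier survival curve up to the chosen cut point τ; a small sketch with hypothetical data:

```python
import numpy as np

# Hypothetical survival data: 1 = death observed, 0 = censored.
times  = np.array([2., 3., 3., 5., 7., 8., 11., 14.])
events = np.array([1,  1,  0,  1,  1,  0,  1,   1 ])

def km_restricted_mean(times, events, tau):
    """Area under the Kaplan-Meier step function from 0 to tau."""
    order = np.argsort(times, kind="stable")
    t, d = times[order], events[order]
    at_risk, surv = len(t), 1.0
    curve = [(0.0, 1.0)]
    for i in range(len(t)):
        if d[i] == 1:
            surv *= (at_risk - 1) / at_risk        # KM product-limit step
            curve.append((t[i], surv))
        at_risk -= 1
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for ti, si in curve[1:]:
        if ti >= tau:
            break
        area += (ti - prev_t) * prev_s             # rectangle under each step
        prev_t, prev_s = ti, si
    return area + (tau - prev_t) * prev_s          # last partial rectangle

rmst = km_restricted_mean(times, events, tau=10.0)  # 6.925 for this data
```

The choice of τ is exactly the "cut point" the thesis addresses: different τ values can give materially different restricted means for the same curve.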


Detecting And Correcting Batch Effects In High-Throughput Genomic Experiments, Sarah Reese Apr 2013

Theses and Dissertations

Batch effects are due to probe-specific systematic variation between groups of samples (batches) resulting from experimental features that are not of biological interest. Principal components analysis (PCA) is commonly used as a visual tool to determine whether batch effects exist after applying a global normalization method. However, PCA yields linear combinations of the variables that contribute maximum variance and thus will not necessarily detect batch effects if they are not the largest source of variability in the data. We present an extension of principal components analysis to quantify the existence of batch effects, called guided PCA (gPCA). We describe a …
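The idea can be sketched in simplified form (this is not the exact gPCA statistic from the thesis): compare the variance captured along a batch-guided direction against the leading unguided principal component; a ratio near 1 suggests batch is a dominant source of variability.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated expression matrix: 40 samples x 200 probes, two batches with
# a probe-specific systematic shift (the "batch effect").
n, p = 40, 200
batch = np.repeat([0, 1], n // 2)
Y = rng.normal(size=(n, p))
Y[batch == 1] += rng.normal(0.0, 1.0, size=p)      # batch-specific offsets
Y -= Y.mean(axis=0)                                # column-center

# Unguided PCA: variance along the leading right singular vector of Y.
_, _, Vt = np.linalg.svd(Y, full_matrices=False)
var_unguided = np.var(Y @ Vt[0])

# Guided step (simplified, after the spirit of gPCA): let the batch
# indicator matrix choose the direction via the batch-sum profiles.
A = np.stack([(batch == b).astype(float) for b in (0, 1)], axis=1)
_, _, Vg = np.linalg.svd(A.T @ Y, full_matrices=False)
var_guided = np.var(Y @ Vg[0])

delta = var_guided / var_unguided   # in (0, 1]; near 1 => batch dominates
```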


Characterization Of A Weighted Quantile Score Approach For Highly Correlated Data In Risk Analysis Scenarios, Caroline Carrico Mar 2013

Theses and Dissertations

In risk evaluation, the effect of mixtures of environmental chemicals on a common adverse outcome is of interest. However, because of the high dimensionality and the inherent correlations among chemicals that occur together, traditional methods (e.g., ordinary or logistic regression) are unsuitable. We extend and characterize a weighted quantile score (WQS) approach to estimating an index for a set of highly correlated components. In the case of environmental chemicals, we use the WQS to identify “bad actors” and estimate body burden. The accuracy of the WQS was evaluated through extensive simulation studies in terms of validity (ability of the WQS …
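A stripped-down WQS fit can be sketched as follows: score each component into quartiles, constrain the weights to the simplex, and estimate them by minimizing squared error. The data-generating setup and the softmax parameterization of the weights are illustrative assumptions, not the thesis's estimation procedure.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Hypothetical mixture: four correlated exposures, two true "bad actors".
n = 300
z = rng.normal(size=(n, 1))
X = z + rng.normal(scale=0.5, size=(n, 4))         # shared factor -> correlation
# Score each component into quartiles 0..3.
Q = np.array([np.searchsorted(np.quantile(x, [.25, .5, .75]), x) for x in X.T]).T
true_w = np.array([0.6, 0.4, 0.0, 0.0])
y = 1.0 + 2.0 * (Q @ true_w) + rng.normal(size=n)

def loss(params):
    """Squared error; weights kept on the simplex via a stable softmax."""
    a = params[:4] - params[:4].max()
    w = np.exp(a); w /= w.sum()
    b0, b1 = params[4], params[5]
    return np.sum((y - b0 - b1 * (Q @ w)) ** 2)

fit = minimize(loss, np.zeros(6), method="Powell")
a = fit.x[:4] - fit.x[:4].max()
w_hat = np.exp(a); w_hat /= w_hat.sum()            # estimated index weights
```

The estimated weights put most of their mass on the two components that truly drive the outcome, which is how the index flags "bad actors" despite the correlation.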


Accounting For Model Uncertainty In Linear Mixed-Effects Models, Adam Sima Feb 2013

Theses and Dissertations

Standard statistical decision-making tools, such as inference, confidence intervals, and forecasting, are contingent on the assumption that the statistical model used in the analysis is the true model. In linear mixed-effects models, ignoring model uncertainty results in an underestimation of the residual variance, contributing to hypothesis tests with larger-than-nominal Type I error rates and confidence intervals with smaller-than-nominal coverage probabilities. A novel utilization of the generalized degrees of freedom developed by Zhang et al. (2012) is used to adjust the estimate of the residual variance for model uncertainty. Additionally, the general global linear approximation is extended to …
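The underestimation that motivates the thesis can be demonstrated with a simple simulation (ordinary regression rather than a mixed model, for brevity): selecting the best-fitting of many candidate predictors and then estimating σ² from the selected model, with no adjustment for the search, biases the estimate downward.

```python
import numpy as np

rng = np.random.default_rng(5)

# y is pure noise, yet choosing the candidate predictor with the
# smallest RSS and using the naive degrees of freedom understates
# the residual variance, because the selection step is ignored.
n, n_candidates, n_sim, true_sigma2 = 30, 20, 500, 1.0
bias = []
for _ in range(n_sim):
    y = rng.normal(0.0, np.sqrt(true_sigma2), size=n)
    Xc = rng.normal(size=(n, n_candidates))        # irrelevant predictors
    rss = []
    for j in range(n_candidates):
        X = np.column_stack([np.ones(n), Xc[:, j]])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss.append(np.sum((y - X @ beta) ** 2))
    # Naive df = n - 2 pretends the model was fixed in advance.
    sigma2_hat = min(rss) / (n - 2)
    bias.append(sigma2_hat - true_sigma2)

mean_bias = float(np.mean(bias))                   # clearly negative
```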