Informatics And Statistics For Analyzing 2-D Gel Electrophoresis Images, 2010 Imperial College London
Informatics And Statistics For Analyzing 2-D Gel Electrophoresis Images, Andrew W. Dowsey, Jeffrey S. Morris, Howard G. Gutstein, Guang Z. Yang
Whilst recent progress in ‘shotgun’ peptide separation by integrated liquid chromatography and mass spectrometry (LC/MS) has enabled its use as a sensitive analytical technique, proteome coverage and reproducibility are still limited, and obtaining enough replicate runs for biomarker discovery is a challenge. For these reasons, recent research demonstrates the continuing need for protein separation by two-dimensional gel electrophoresis (2-DE). However, with traditional 2-DE informatics, the digitized images are reduced to symbolic data through spot detection and quantification before proteins are compared for differential expression by spot matching. Recently, a more robust and automated paradigm has emerged where gels are directly …
Bayesian Random Segmentation Models To Identify Shared Copy Number Aberrations For Array Cgh Data, 2010 Texas A&M University
Bayesian Random Segmentation Models To Identify Shared Copy Number Aberrations For Array Cgh Data, Veerabhadran Baladandayuthapani, Yuan Ji, Rajesh Talluri, Luis E. Nieto-Barajas, Jeffrey S. Morris
Array-based comparative genomic hybridization (aCGH) is a high-resolution high-throughput technique for studying the genetic basis of cancer. The resulting data consists of log fluorescence ratios as a function of the genomic DNA location and provides a cytogenetic representation of the relative DNA copy number variation. Analysis of such data typically involves estimation of the underlying copy number state at each location and segmenting regions of DNA with similar copy number states. Most current methods proceed by modeling a single sample/array at a time, and thus fail to borrow strength across multiple samples to infer shared regions of copy number aberrations. …
Code For Fitting Bdsacgh, 2010 UT MD Anderson Cancer Center
Code For Fitting Bdsacgh, Veera Baladandayuthapani
No abstract provided.
R Package For Bayesian Ensemble Methods For Survival Prediction In Gene Expression Data, 2010 UT MD Anderson Cancer Center
R Package For Bayesian Ensemble Methods For Survival Prediction In Gene Expression Data, Veera Baladandayuthapani
This is the R package for the methods described in "Bayesian ensemble methods for survival prediction in gene expression data" by Vinicius Bonato, Veerabhadran Baladandayuthapani, Kim-Anh Do, Bradley M. Broom, Erik P. Sulman, and Kenneth D. Aldape, submitted to Bioinformatics (2010).
Bayesian Random Segmentation Models To Identify Shared Copy Number Aberrations For Array Cgh Data, 2010 UT MD Anderson Cancer Center
Bayesian Random Segmentation Models To Identify Shared Copy Number Aberrations For Array Cgh Data, Veera Baladandayuthapani
No abstract provided.
Identification Of Ovarian Cancer Symptoms In Health Insurance Claims Data., 2010 University of Washington
Identification Of Ovarian Cancer Symptoms In Health Insurance Claims Data., Paula Diehr, Sean Devlin
Background: Women with ovarian cancer have reported abdominal/pelvic pain, bloating, difficulty eating or feeling full quickly, and urinary frequency/urgency prior to diagnosis. We explored these findings in a general population using a dataset of insured women aged 40–64 and investigated the potential effectiveness of a routine review of claims data as a prescreen to identify women at high risk for ovarian cancer. Methods: Data from a large Washington State health insurer were merged with the Seattle-Puget Sound Surveillance, Epidemiology and End Results (SEER) cancer registry for 2000–2004. We estimated the prevalence of symptoms in the 36 months prior to diagnosis …
Targeted Maximum Likelihood Estimation Of The Parameter Of A Marginal Structural Model, 2010 Johns Hopkins University
Targeted Maximum Likelihood Estimation Of The Parameter Of A Marginal Structural Model, Michael Rosenblum, Mark J. Van Der Laan
Targeted maximum likelihood estimation is a versatile tool for estimating parameters in semiparametric and nonparametric models. We work through an example applying targeted maximum likelihood methodology to estimate the parameter of a marginal structural model. In the case we consider, we show how this can be easily done by clever use of standard statistical software. We point out differences between targeted maximum likelihood estimation and other approaches (including estimating function based methods). The application we consider is to estimate the effect of adherence to antiretroviral medications on virologic failure in HIV positive individuals.
Discrete Nonparametric Algorithms For Outlier Detection With Genomic Data, 2010 Penn State University
Discrete Nonparametric Algorithms For Outlier Detection With Genomic Data, Debashis Ghosh
In high-throughput studies involving genetic data such as from gene expression microarrays, differential expression analysis between two or more experimental conditions has been a very common analytical task. Much of the resulting literature on multiple comparisons has paid relatively little attention to the choice of test statistic. In this article, we focus on the issue of choice of test statistic based on a special pattern of differential expression. The approach here is based on recasting multiple comparisons procedures for assessing outlying expression values. A major complication is that the resulting p-values are discrete; some theoretical properties of sequential testing …
Detecting Outlier Genes From High-Dimensional Data: A Fuzzy Approach, 2010 Penn State University
Detecting Outlier Genes From High-Dimensional Data: A Fuzzy Approach, Debashis Ghosh
A recent finding in cancer research has been the characterization of previously undiscovered chromosomal abnormalities in several types of solid tumors. This was found based on analyses of high-throughput data from gene expression microarrays and motivated the development of so-called 'outlier' tests for differential expression. One statistical issue was the potential discreteness of the test statistics. Using ideas from fuzzy set theory, we develop fuzzy outlier detection algorithms that have links to ideas in multiple comparisons. Two- and K-sample extensions are considered. The methodology is illustrated by application to two microarray studies.
Links Between Analysis Of Surrogate Endpoints And Endogeneity, 2010 Penn State University
Links Between Analysis Of Surrogate Endpoints And Endogeneity, Debashis Ghosh, Jeremy M. Taylor, Michael R. Elliott
There has been substantive interest in the assessment of surrogate endpoints in medical research. These are measures which could potentially replace 'true' endpoints in clinical trials and lead to studies that require less follow-up. Recent research in the area has focused on assessments using causal inference frameworks. Beginning with a simple model for associating the surrogate and true endpoints in the population, we approach the problem as one of endogenous covariates. An instrumental variables estimator and a general two-stage algorithm are proposed. Existing surrogacy frameworks are then evaluated in the context of the model. A numerical example is used to illustrate …
Meta-Analysis For Surrogacy: Accelerated Failure Time Models And Semicompeting Risks Modelling, 2010 Penn State University
Meta-Analysis For Surrogacy: Accelerated Failure Time Models And Semicompeting Risks Modelling, Debashis Ghosh, Jeremy M. Taylor, Daniel J. Sargent
There has been great recent interest in the medical and statistical literature in the assessment and validation of surrogate endpoints as proxies for clinical endpoints in medical studies. More recently, authors have focused on using meta-analytical methods for quantification of surrogacy. In this article, we extend existing procedures for analysis based on the accelerated failure time model to this setting. An advantage of this approach relative to the proportional hazards model is that it allows for analysis in the semicompeting risks setting, where we constrain the surrogate endpoint to occur before the true endpoint. A novel principal components procedure is …
Spline-Based Models For Predictiveness Curves, 2010 Penn State University
Spline-Based Models For Predictiveness Curves, Debashis Ghosh, Michael Sabel
A biomarker is defined to be a biological characteristic that is objectively measured and evaluated as an indicator of normal biologic processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention. The use of biomarkers in cancer has been advocated for a variety of purposes, which include use as surrogate endpoints, early detection of disease, proxies for environmental exposure and risk prediction. We deal with the latter issue in this paper. Several authors have proposed use of the predictiveness curve for assessing the capacity of a biomarker for risk prediction. For most situations, it is reasonable to assume monotonicity of …
Combining Multiple Models With Survival Data: The Phase Algorithm, 2010 Penn State University
Combining Multiple Models With Survival Data: The Phase Algorithm, Debashis Ghosh, Zheng Yuan
In many scientific studies, one common goal is to develop good prediction rules based on a set of available measurements. This paper proposes a model averaging methodology using proportional hazards regression models to construct new estimators of predicted survival probabilities. A screening step based on an adaptive searching algorithm is used to handle large numbers of covariates. The finite-sample properties of the proposed methodology are assessed using simulation studies. An application of the method to a cancer biomarker study is also given.
Reordered Subsets Reconstruction Of Proton Computed Tomography, 2010 California State University, San Bernardino
Reordered Subsets Reconstruction Of Proton Computed Tomography, Wenzhe Xue
Theses Digitization Project
This project investigates the improvement of iterative reconstruction using reordered subsets. Block-iterative projection and ordered-subsets reconstruction algorithms are developed to improve the performance of image reconstruction. Contains source code.
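As a rough illustration of the block-iterative projection family the project builds on, the sketch below runs an ordered-subsets Kaczmarz (ART) loop on a toy linear system. The system matrix, subset ordering, and relaxation parameter are invented for illustration and are not taken from the project's source code.

```python
# Hedged sketch of ordered-subsets Kaczmarz/ART: cycle through row
# subsets (blocks), projecting the estimate onto each row's hyperplane.
import numpy as np

def os_art(A, b, subsets, n_iter=200, relax=1.0):
    """Iteratively solve A x ~= b, visiting rows subset by subset."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for rows in subsets:            # one ordered subset per sub-iteration
            for i in rows:              # Kaczmarz projection for row i
                a = A[i]
                x += relax * (b[i] - a @ x) / (a @ a) * a
    return x

# Toy 4x3 consistent system; subsets interleave rows rather than
# taking them in their natural order.
A = np.array([[2., 0., 1.], [0., 1., 1.], [1., 1., 0.], [1., 0., 2.]])
b = A @ np.array([1., 2., 3.])
x = os_art(A, b, subsets=[[0, 2], [1, 3]])
print(np.round(x, 3))  # approaches [1. 2. 3.]
```

For a consistent system the iterates converge to the solution; in tomographic practice the subsets group projection rays by angle, which is what makes the reordering matter.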
Cellular Automata Rules Generator For Microbial Communities, 2010 California State University, San Bernardino
Cellular Automata Rules Generator For Microbial Communities, Melissa Marie Quintana
Theses Digitization Project
Currently there is a need for a method that extracts the cellular automata rules which simulate the growth patterns of microbial communities found within extreme environments. The purpose of this project is to provide a visual representation as program output so that the rules and the radius of effect can be estimated. Contains source code.
Authentication Of Biometric Features Using Texture Coding For Id Cards, 2010 Technological University Dublin
Authentication Of Biometric Features Using Texture Coding For Id Cards, Jonathan Blackledge, Eugene Coyle
The use of image based information exchange has grown rapidly over the years in terms of both e-to-e image storage and transmission and in terms of maintaining paper documents in electronic form. Further, with the dramatic improvements in the quality of COTS (Commercial-Off-The-Shelf) printing and scanning devices, the ability to counterfeit electronic and printed documents has become a widespread problem. Consequently, there has been an increasing demand to develop digital watermarking techniques which can be applied to both electronic and printed images (and documents) that can be authenticated, prevent unauthorized copying of their content and, in the case of printed …
To Live And Die In Ca, 2010 California State University, San Bernardino
To Live And Die In Ca, Jane Frances Curnutt
Theses Digitization Project
This thesis investigates the nature of elementary cellular automata to better understand the relationship between the models they support and the biological organisms that create the mats and soil crusts found in extreme environments here on Earth. Cellular automata have been used to study growth and patterns in forests, arid desert environments, predator-prey problems, and sea shells. They have also been used to study areas as diverse as epidemiology and linguistics, and have served as the core of computer games. This investigation has led to the development of a graphical grammar for simple cellular automata, using L-systems, a grammar …
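For readers unfamiliar with the elementary cellular automata the thesis studies, a minimal sketch follows: each cell's next state depends only on its 3-cell neighborhood, looked up in an 8-bit rule table. The rule number (30) and grid size are illustrative, not the thesis's own choices, and this is not its L-system grammar.

```python
# Minimal elementary cellular automaton: the 3-bit neighborhood
# (left, center, right) indexes into the bits of the rule number.

def step(cells, rule=30):
    """Advance one generation with wrap-around boundaries."""
    n = len(cells)
    nxt = []
    for i in range(n):
        # Pack the neighborhood into a 3-bit index 0..7.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        nxt.append((rule >> idx) & 1)
    return nxt

# A single live cell grows the familiar Rule 30 triangle.
row = [0] * 15
row[7] = 1
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

The same loop, with a larger radius of effect and empirically fitted rule tables, is the kind of model such theses compare against observed microbial mat patterns.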
Measuring The Hiv/Aids Epidemic: Approaches And Challenges, 2009 University of California, Los Angeles
Measuring The Hiv/Aids Epidemic: Approaches And Challenges, Ron Brookmeyer
In this article, the author reviews current approaches and methods for measuring the scope of the human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) epidemic and their strengths and weaknesses. In recent years, various public health agencies have revised statistical estimates of the scope of the HIV/AIDS pandemic. The author considers the reasons underlying these revisions. New sources of data for estimating HIV prevalence have become available, such as nationally representative probability-based surveys. New technologies such as biomarkers that indicate when persons became infected are now used to determine HIV incidence rates. The author summarizes the main sources of errors and …
On The Statistical Accuracy Of Biomarker Assays Of Hiv Incidence, 2009 University of California, Los Angeles
On The Statistical Accuracy Of Biomarker Assays Of Hiv Incidence, Ron Brookmeyer
Objective: To evaluate the statistical accuracy of estimates of current HIV incidence rates from cross-sectional surveys, and to identify characteristics of assays that improve accuracy.
Methods: Performed mathematical and statistical analysis of the cross-sectional estimator of HIV incidence to evaluate bias and variance. Developed probability models to evaluate impact of long tails of the window period distribution on accuracy.
Results: The standard cross-sectional estimate of the HIV incidence rate estimates a time-lagged incidence, where the lag time, called the shadow, depends on the mean and the coefficient of variation of the window periods. Equations show how the shadow increases with the …
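The cross-sectional estimator discussed above has a simple form: incidence is approximately the count testing "recent" on the assay divided by the product of the number of HIV-negatives and the mean window period. The sketch below encodes that relation; the survey counts and the 0.5-year mean window period are invented for illustration.

```python
# Hedged sketch of the standard cross-sectional (biomarker) incidence
# estimator: lambda ~= R / (N_neg * mu), with R the assay-recent count,
# N_neg the HIV-negative count, and mu the mean window period (years).

def cross_sectional_incidence(n_recent, n_negative, mean_window_years):
    """Estimated incidence rate in infections per person-year at risk."""
    return n_recent / (n_negative * mean_window_years)

# Illustrative survey: 25 assay-recent results among 4,000 HIV-negatives,
# assuming a mean window period of 0.5 years (~180 days).
rate = cross_sectional_incidence(25, 4000, 0.5)
print(f"{rate:.4f} per person-year")  # 0.0125
```

The paper's point is that this quantity is not current incidence but incidence lagged by the shadow, so misspecifying the window period distribution, especially its long tail, biases the estimate.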
Semiparametric Analysis Of Recurrent Events: Artificial Censoring, Truncation, Pairwise Estimation And Inference, 2009 Penn State University
Semiparametric Analysis Of Recurrent Events: Artificial Censoring, Truncation, Pairwise Estimation And Inference, Debashis Ghosh
The analysis of recurrent failure time data from longitudinal studies can be complicated by the presence of dependent censoring. There has been a substantive literature that has developed based on an artificial censoring device. We explore in this article the connection between this class of methods with truncated data structures. In addition, a new procedure is developed for estimation and inference in a joint model for recurrent events and dependent censoring. Estimation proceeds using a mixed U-statistic based estimating function approach. New resampling-based methods for variance estimation and model checking are also described. The methods are illustrated by application to …