Statistical Methodology Commons

Articles 1 - 30 of 31

Full-Text Articles in Statistical Methodology

Variable Importance Analysis With The Multipim R Package, Stephan J. Ritter, Nicholas P. Jewell, Alan E. Hubbard Jul 2011

U.C. Berkeley Division of Biostatistics Working Paper Series

We describe the R package multiPIM, including statistical background, functionality and user options. The package is for variable importance analysis, and is meant primarily for analyzing data from exploratory epidemiological studies, though it could certainly be applied in other areas as well. The approach taken to variable importance comes from the causal inference field, and is different from approaches taken in other R packages. By default, multiPIM uses a double robust targeted maximum likelihood estimator (TMLE) of a parameter akin to the attributable risk. Several regression methods/machine learning algorithms are available for estimating the nuisance parameters of the models, including …


Targeted Maximum Likelihood Estimation: A Gentle Introduction, Susan Gruber, Mark J. Van Der Laan Aug 2009

U.C. Berkeley Division of Biostatistics Working Paper Series

This paper provides a concise introduction to targeted maximum likelihood estimation (TMLE) of causal effect parameters. The interested analyst should gain sufficient understanding of TMLE from this introductory tutorial to be able to apply the method in practice. A program written in R is provided. This program implements a basic version of TMLE that can be used to estimate the effect of a binary point treatment on a continuous or binary outcome.
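
As a companion to the description above, here is a minimal sketch of basic TMLE for the additive effect of a binary point treatment on a binary outcome, run on simulated data. It illustrates the algorithm only and is not the R program distributed with the paper; the parametric working models and variable names are assumptions made for brevity.

```r
## Minimal TMLE sketch: binary treatment A, binary outcome Y, covariate W.
set.seed(1)
n <- 500
W <- rnorm(n)
A <- rbinom(n, 1, plogis(0.4 * W))
Y <- rbinom(n, 1, plogis(-0.5 + A + 0.7 * W))

## Step 1: initial estimate of the outcome regression Q(A, W) = E[Y | A, W]
Qfit <- glm(Y ~ A + W, family = binomial)
Q1W  <- predict(Qfit, newdata = data.frame(A = 1, W = W), type = "response")
Q0W  <- predict(Qfit, newdata = data.frame(A = 0, W = W), type = "response")
QAW  <- ifelse(A == 1, Q1W, Q0W)

## Step 2: estimate the treatment mechanism g(W) = P(A = 1 | W)
g1W <- predict(glm(A ~ W, family = binomial), type = "response")

## Step 3: targeting step -- fit the fluctuation parameter epsilon by logistic
## regression of Y on the "clever covariate" H, with offset logit(Q(A, W))
H   <- ifelse(A == 1, 1 / g1W, -1 / (1 - g1W))
eps <- coef(glm(Y ~ -1 + H + offset(qlogis(QAW)), family = binomial))

## Step 4: update the initial fit and plug in
Q1W.star <- plogis(qlogis(Q1W) + eps / g1W)
Q0W.star <- plogis(qlogis(Q0W) - eps / (1 - g1W))
mean(Q1W.star - Q0W.star)      # targeted estimate of the additive effect
```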


Shrinkage Estimation Of Expression Fold Change As An Alternative To Testing Hypotheses Of Equivalent Expression, Zahra Montazeri, Corey M. Yanofsky, David R. Bickel Aug 2009

COBRA Preprint Series

Research on analyzing microarray data has focused on the problem of identifying differentially expressed genes to the neglect of the problem of how to integrate evidence that a gene is differentially expressed with information on the extent of its differential expression. Consequently, researchers currently prioritize genes for further study either on the basis of volcano plots or, more commonly, according to simple estimates of the fold change after filtering the genes with an arbitrary statistical significance threshold. While the subjective and informal nature of the former practice precludes quantification of its reliability, the latter practice is equivalent to using a …


Loss-Based Estimation With Evolutionary Algorithms And Cross-Validation, David Shilane, Richard H. Liang, Sandrine Dudoit Nov 2007

U.C. Berkeley Division of Biostatistics Working Paper Series

Many statistical inference methods rely upon selection procedures to estimate a parameter of the joint distribution of explanatory and outcome data, such as the regression function. Within the general framework for loss-based estimation of Dudoit and van der Laan, this project proposes an evolutionary algorithm (EA) as a procedure for risk optimization. We also analyze the size of the parameter space for polynomial regression under interaction constraints along with constraints on either the polynomial or variable degree.
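
The following toy sketch conveys the idea of using an evolutionary algorithm for cross-validated risk optimization over subsets of polynomial basis functions. The data, candidate basis, fitness definition, and tuning constants are invented for illustration and are not the authors' proposal.

```r
## Toy EA: search over 0/1 inclusion vectors for polynomial basis terms,
## with V-fold cross-validated squared-error risk as the fitness.
set.seed(2)
n  <- 200
x1 <- runif(n); x2 <- runif(n)
y  <- 1 + 2 * x1 - 3 * x2^2 + rnorm(n, sd = 0.3)

basis <- cbind(x1 = x1, x2 = x2, x1.2 = x1^2, x2.2 = x2^2,
               x1.x2 = x1 * x2, x1.3 = x1^3, x2.3 = x2^3)
p <- ncol(basis)
V <- 5
folds <- sample(rep(1:V, length.out = n))

cv.risk <- function(keep) {
  if (sum(keep) == 0) return(mean((y - mean(y))^2))
  X <- basis[, keep == 1, drop = FALSE]
  mean(sapply(1:V, function(v) {
    fit  <- lm.fit(cbind(1, X[folds != v, , drop = FALSE]), y[folds != v])
    pred <- cbind(1, X[folds == v, , drop = FALSE]) %*% fit$coefficients
    mean((y[folds == v] - pred)^2)
  }))
}

## simple (mu + lambda) loop: truncation selection plus bit-flip mutation
pop <- matrix(rbinom(20 * p, 1, 0.5), nrow = 20)
for (gen in 1:30) {
  fitness  <- apply(pop, 1, cv.risk)
  parents  <- pop[order(fitness)[1:10], , drop = FALSE]
  flips    <- matrix(rbinom(10 * p, 1, 0.1), nrow = 10)
  children <- (parents + flips) %% 2
  pop      <- rbind(parents, children)
}
best <- pop[which.min(apply(pop, 1, cv.risk)), ]
colnames(basis)[best == 1]        # selected basis functions
```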


Survival Analysis With Large Dimensional Covariates: An Application In Microarray Studies, David A. Engler, Yi Li Jul 2007

Harvard University Biostatistics Working Paper Series

Use of microarray technology often leads to high-dimensional, low-sample-size data settings. Over the past several years, a variety of novel approaches have been proposed for variable selection in this context. However, only a small number of these have been adapted for time-to-event data where censoring is present. Among standard variable selection methods shown both to have good predictive accuracy and to be computationally efficient is the elastic net penalization approach. In this paper, adaptation of the elastic net approach is presented for variable selection both under the Cox proportional hazards model and under an accelerated failure time …
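
For a concrete starting point, the sketch below shows elastic net variable selection under the Cox model using the glmnet package, one widely used implementation. The simulated data, censoring mechanism, and tuning choices (the alpha mixing parameter, lambda by cross-validation) are illustrative assumptions, not the settings of the paper.

```r
## Elastic net under the Cox model via glmnet on simulated survival data.
library(glmnet)
library(survival)

set.seed(3)
n <- 100; p <- 1000                       # many more covariates than subjects
x <- matrix(rnorm(n * p), n, p)
t.true <- rexp(n, rate = exp(0.5 * x[, 1] - 0.5 * x[, 2]))
cens   <- rexp(n, rate = 0.2)
y <- Surv(pmin(t.true, cens), as.numeric(t.true <= cens))

## alpha in (0, 1) mixes the lasso (alpha = 1) and ridge (alpha = 0) penalties;
## lambda is chosen by cross-validated partial-likelihood deviance
cvfit <- cv.glmnet(x, y, family = "cox", alpha = 0.5)
b <- as.matrix(coef(cvfit, s = "lambda.min"))
rownames(b)[b != 0]                       # covariates retained in the model
```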


Multiple Testing Procedures And Applications To Genomics, Merrill D. Birkner, Katherine S. Pollard, Mark J. Van Der Laan, Sandrine Dudoit Jan 2005

U.C. Berkeley Division of Biostatistics Working Paper Series

This chapter proposes widely applicable resampling-based single-step and stepwise multiple testing procedures (MTP) for controlling a broad class of Type I error rates, in testing problems involving general data generating distributions (with arbitrary dependence structures among variables), null hypotheses, and test statistics (Dudoit and van der Laan, 2005; Dudoit et al., 2004a,b; van der Laan et al., 2004a,b; Pollard and van der Laan, 2004; Pollard et al., 2005). Procedures are provided to control Type I error rates defined as tail probabilities for arbitrary functions of the numbers of Type I errors, V_n, and rejected hypotheses, R_n. These error rates include: …
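
One familiar member of this class is the resampling-based single-step maxT procedure for family-wise error rate control. The sketch below is a simplified illustration on simulated data, using the nonparametric bootstrap with centered test statistics; it is not the authors' general algorithm for arbitrary Type I error rates.

```r
## Single-step maxT: compare each observed t-statistic with the bootstrap
## distribution of the maximum of the centered t-statistics.
set.seed(4)
m <- 200; n <- 30
X <- matrix(rnorm(n * m), n, m)            # n observations on m variables
X[, 1:10] <- X[, 1:10] + 1                 # ten truly non-null variables

tstat <- function(M) apply(M, 2, function(v) mean(v) / (sd(v) / sqrt(nrow(M))))
t.obs <- tstat(X)

B <- 1000
max.null <- replicate(B, {
  Xb <- X[sample(n, replace = TRUE), ]
  Xb <- scale(Xb, center = colMeans(X), scale = FALSE)  # center at observed means
  max(abs(tstat(Xb)))
})

adj.p <- sapply(abs(t.obs), function(t) mean(max.null >= t))  # adjusted p-values
which(adj.p <= 0.05)                       # hypotheses rejected at FWER 0.05
```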


Robust Inferences For Covariate Effects On Survival Time With Censored Linear Regression Models, Larry Leon, Tianxi Cai, L. J. Wei Jan 2005

Harvard University Biostatistics Working Paper Series

Various inference procedures for linear regression models with censored failure times have been studied extensively. Recent developments on efficient algorithms to implement these procedures enhance the practical usage of such models in survival analysis. In this article, we present robust inferences for certain covariate effects on the failure time in the presence of "nuisance" confounders under a semiparametric, partial linear regression setting. Specifically, the estimation procedures for the regression coefficients of interest are derived from a working linear model and are valid even when the function of the confounders in the model is not correctly specified. The new proposals are …


Semiparametric Regression In Capture-Recapture Modelling, O. Gimenez, C. Barbraud, Ciprian M. Crainiceanu, S. Jenouvrier, B.T. Morgan Dec 2004

Johns Hopkins University, Dept. of Biostatistics Working Papers

Capture-recapture models were developed to estimate survival using data arising from marking and monitoring wild animals over time. Variation in the survival process may be explained by incorporating relevant covariates. We develop nonparametric and semiparametric regression models for estimating survival in capture-recapture models. A fully Bayesian approach using MCMC simulations was employed to estimate the model parameters. The work is illustrated by a study of Snow petrels, in which survival probabilities are expressed as nonlinear functions of a climate covariate, using data from a 40-year study on marked individuals, nesting at Petrels Island, Terre Adelie.


A Bayesian Mixture Model Relating Dose To Critical Organs And Functional Complication In 3D Conformal Radiation Therapy, Tim Johnson, Jeremy Taylor, Randall K. Ten Haken, Avraham Eisbruch Nov 2004

The University of Michigan Department of Biostatistics Working Paper Series

A goal of radiation therapy is to deliver maximum dose to the target tumor while minimizing complications due to irradiation of critical organs. Technological advances in 3D conformal radiation therapy have allowed great strides in realizing this goal; however, complications may still arise. Critical organs may be adjacent to tumors or in the path of the radiation beam. Several mathematical models have been proposed that describe a relationship between dose and observed functional complication; however, only a few published studies have successfully fit these models to data using modern statistical methods which make efficient use of the data. One complication …


Data Adaptive Estimation Of The Treatment Specific Mean, Yue Wang, Oliver Bembom, Mark J. Van Der Laan Oct 2004

U.C. Berkeley Division of Biostatistics Working Paper Series

An important problem in epidemiology and medical research is the estimation of the causal effect of a treatment action at a single point in time on the mean of an outcome, possibly within strata of the target population defined by a subset of the baseline covariates. Current approaches to this problem are based on marginal structural models, i.e., parametric models for the marginal distribution of counterfactual outcomes as a function of treatment and effect modifiers. The various estimators developed in this context furthermore each depend on a high-dimensional nuisance parameter whose estimation currently also relies on parametric models. Since misspecification …
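
For orientation, the sketch below computes two standard estimators of the treatment-specific mean for a binary point treatment on simulated data: a simple IPTW estimator and a parametric G-computation estimator. Both rely on the kind of parametric working models that the paper's data-adaptive approach is designed to avoid; the data-generating model here is invented.

```r
## Two textbook estimators of the treatment-specific mean E[Y_1].
set.seed(5)
n <- 1000
W <- rnorm(n)
A <- rbinom(n, 1, plogis(0.8 * W))
Y <- 2 + A + 0.5 * W + rnorm(n)            # true E[Y_1] = 3

## IPTW: weight treated outcomes by the inverse of P(A = 1 | W)
g1 <- predict(glm(A ~ W, family = binomial), type = "response")
mu1.iptw <- mean(A * Y / g1)

## G-computation: average the predicted outcome under A = 1 over all subjects
Qfit <- glm(Y ~ A + W)
mu1.gcomp <- mean(predict(Qfit, newdata = data.frame(A = 1, W = W)))

c(IPTW = mu1.iptw, Gcomp = mu1.gcomp)
```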


History-Adjusted Marginal Structural Models And Statically-Optimal Dynamic Treatment Regimes, Mark J. Van Der Laan, Maya L. Petersen Sep 2004

U.C. Berkeley Division of Biostatistics Working Paper Series

Marginal structural models (MSM) provide a powerful tool for estimating the causal effect of a treatment. These models, introduced by Robins, model the marginal distributions of treatment-specific counterfactual outcomes, possibly conditional on a subset of the baseline covariates. Marginal structural models are particularly useful in the context of longitudinal data structures, in which each subject's treatment and covariate history are measured over time, and an outcome is recorded at a final time point. However, the utility of these models for some applications has been limited by their inability to incorporate modification of the causal effect of treatment by time-varying covariates. …


Studying Effects Of Primary Care Physicians And Patients On The Trade-Off Between Charges For Primary Care And Specialty Care Using A Hierarchical Multivariate Two-Part Model, John W. Robinson, Scott L. Zeger, Christopher B. Forrest Aug 2004

Johns Hopkins University, Dept. of Biostatistics Working Papers

Objective. To examine effects of primary care physicians (PCPs) and patients on the association between charges for primary care and specialty care in a point-of-service (POS) health plan.

Data Source. Claims from 1996 for 3,308 adult male POS plan members, each of whom was assigned to one of the 50 family practitioner-PCPs with the largest POS plan member-loads.

Study Design. A hierarchical multivariate two-part model was fitted using a Gibbs sampler to estimate PCPs' effects on patients' annual charges for two types of services, primary care and specialty care, the associations among PCPs' effects, and within-patient associations between charges for …


A Hierarchical Multivariate Two-Part Model For Profiling Providers' Effects On Healthcare Charges, John W. Robinson, Scott L. Zeger, Christopher B. Forrest Aug 2004

Johns Hopkins University, Dept. of Biostatistics Working Papers

Procedures for analyzing and comparing healthcare providers' effects on health services delivery and outcomes have been referred to as provider profiling. In a typical profiling procedure, patient-level responses are measured for clusters of patients treated by providers that, in turn, can be regarded as statistically exchangeable. Thus, a hierarchical model naturally represents the structure of the data. When provider effects on multiple responses are profiled, a multivariate model, rather than a series of univariate models, can capture associations among responses at both the provider and patient levels. When responses are in the form of charges for healthcare services and sampled …
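
A stripped-down, non-hierarchical two-part model conveys the basic structure: one part models whether a patient incurs any charges, the other models the (log) amount given any use. The sketch below uses invented variable names and simulated data, and omits the provider-level hierarchy and multivariate structure that are the paper's contribution.

```r
## Simplified two-part model for annual charges.
set.seed(6)
n <- 2000
age     <- rnorm(n, 45, 10)
any.use <- rbinom(n, 1, plogis(-1 + 0.03 * (age - 45)))
charges <- ifelse(any.use == 1, exp(rnorm(n, 6 + 0.01 * (age - 45), 1)), 0)

part1 <- glm(I(charges > 0) ~ age, family = binomial)   # probability of any charges
part2 <- lm(log(charges) ~ age, subset = charges > 0)   # amount, given any charges

## expected charges combine the two parts:
## E[charges | age] = P(charges > 0 | age) * E[charges | charges > 0, age]
newd  <- data.frame(age = 50)
p.use <- predict(part1, newd, type = "response")
m.log <- predict(part2, newd)
s2    <- summary(part2)$sigma^2
p.use * exp(m.log + s2 / 2)               # lognormal mean on the dollar scale
```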


Non-Parametric Estimation Of Roc Curves In The Absence Of A Gold Standard, Xiao-Hua Zhou, Pete Castelluccio, Chuan Zhou Jul 2004

UW Biostatistics Working Paper Series

In evaluation of diagnostic accuracy of tests, a gold standard on the disease status is required. However, in many complex diseases, it is impossible or unethical to obtain such a gold standard. If an imperfect standard is used as if it were a gold standard, the estimated accuracy of the tests would be biased. This type of bias is called imperfect gold standard bias. In this paper we develop a maximum likelihood (ML) method for estimating ROC curves and their areas for ordinal-scale tests in the absence of a gold standard. Our simulation study shows the proposed estimates for the …


A Graph Theoretic Approach To Testing Associations Between Disparate Sources Of Functional Genomic Data, Raji Balasubramanian, Thomas Laframboise, Denise Scholtens, Robert Gentleman Jun 2004

Bioconductor Project Working Papers

The last few years have seen the advent of high-throughput technologies to analyze various properties of the transcriptome and proteome of several organisms. The congruency of these different data sources, or lack thereof, can shed light on the mechanisms that govern cellular function. A central challenge for bioinformatics research is to develop a unified framework for combining the multiple sources of functional genomics information and testing associations between them, thus obtaining a robust and integrated view of the underlying biology.

We present a graph theoretic approach to test the significance of the association between multiple disparate sources of functional genomics …
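
As a rough illustration of testing association between two graphs defined on the same set of genes, the sketch below counts shared edges and assesses significance by permuting node labels. It is a toy version only and does not reproduce the specific statistics or framework developed in the paper; all data are simulated.

```r
## Toy graph-association test: shared edges vs. a node-permutation null.
set.seed(10)
p <- 50                                    # number of genes (nodes)
G1 <- matrix(rbinom(p * p, 1, 0.1), p, p)  # e.g. protein-interaction graph
G2 <- matrix(rbinom(p * p, 1, 0.1), p, p)  # e.g. co-expression graph
G1[lower.tri(G1, diag = TRUE)] <- 0; G1 <- G1 + t(G1)   # symmetric adjacency
G2[lower.tri(G2, diag = TRUE)] <- 0; G2 <- G2 + t(G2)

shared.edges <- function(A, B) sum(A * B) / 2

obs  <- shared.edges(G1, G2)
null <- replicate(2000, {
  perm <- sample(p)                        # random relabelling of the nodes
  shared.edges(G1, G2[perm, perm])
})
c(observed = obs, p.value = mean(null >= obs))
```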


Resampling Methods For Estimating Functions With U-Statistic Structure, Wenyu Jiang, Jack Kalbfleisch Apr 2004

The University of Michigan Department of Biostatistics Working Paper Series

Suppose that inference about parameters of interest is to be based on an unbiased estimating function that is a U-statistic of degree 1 or 2. We define suitable studentized versions of such estimating functions and consider asymptotic approximations as well as an estimating function bootstrap (EFB) method based on resampling the estimated terms in the estimating functions. These methods are justified asymptotically and lead to confidence intervals produced directly from the studentized estimating functions. Particular examples in this class of estimating functions arise in L1 estimation as well as Wilcoxon rank regression and other related estimation problems. The proposed methods are …


Loss-Based Cross-Validated Deletion/Substitution/Addition Algorithms In Estimation, Sandra E. Sinisi, Mark J. Van Der Laan Mar 2004

U.C. Berkeley Division of Biostatistics Working Paper Series

In van der Laan and Dudoit (2003) we propose and theoretically study a unified loss function based statistical methodology, which provides a road map for estimation and performance assessment. Given a parameter of interest which can be described as the minimizer of the population mean of a loss function, the road map involves as important ingredients cross-validation for estimator selection and minimizing over subsets of basis functions the empirical risk of the subset-specific estimator of the parameter of interest, where the basis functions correspond to a parameterization of a specified subspace of the complete parameter space. In this article we …


Kernel Estimation Of Rate Function For Recurrent Event Data, Chin-Tsang Chiang, Mei-Cheng Wang, Chiung-Yu Huang Dec 2003

Johns Hopkins University, Dept. of Biostatistics Working Papers

Recurrent event data are largely characterized by the rate function, but smoothing techniques for estimating the rate function have never been rigorously developed or studied in the statistical literature. This paper considers the moment and least squares methods for estimating the rate function from recurrent event data. With an independent censoring assumption on the recurrent event process, we study statistical properties of the proposed estimators and propose bootstrap procedures for the bandwidth selection and for the approximation of confidence intervals in the estimation of the occurrence rate function. It is identified that the moment method without resmoothing via a smaller bandwidth …
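
The sketch below implements one simple moment-type kernel estimate of the occurrence rate function: kernel-smoothed event counts divided by the number of subjects still under observation. The simulated Poisson-process data, Gaussian kernel, and fixed bandwidth are assumptions for illustration; the paper's estimators and bandwidth selection procedures are more refined.

```r
## Kernel-smoothed occurrence rate for recurrent event data.
set.seed(7)
n.sub <- 100
cens  <- runif(n.sub, 2, 5)                          # censoring times
events <- lapply(cens, function(ci) {                # rate-1.5 Poisson process
  tt <- cumsum(rexp(50, rate = 1.5))
  tt[tt <= ci]
})

rate.hat <- function(t, h = 0.3) {
  smoothed <- sum(sapply(events, function(tt) sum(dnorm((t - tt) / h) / h)))
  at.risk  <- sum(cens >= t)                         # subjects under observation
  smoothed / at.risk
}

tgrid <- seq(0.5, 3, by = 0.1)
plot(tgrid, sapply(tgrid, rate.hat), type = "l",
     xlab = "t", ylab = "estimated rate")            # true rate is 1.5
```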


Unified Cross-Validation Methodology For Selection Among Estimators And A General Cross-Validated Adaptive Epsilon-Net Estimator: Finite Sample Oracle Inequalities And Examples, Mark J. Van Der Laan, Sandrine Dudoit Nov 2003

U.C. Berkeley Division of Biostatistics Working Paper Series

In Part I of this article we propose a general cross-validation criterion for selecting among a collection of estimators of a particular parameter of interest based on n i.i.d. observations. It is assumed that the parameter of interest minimizes the expectation (w.r.t. the distribution of the observed data structure) of a particular loss function of a candidate parameter value and the observed data structure, possibly indexed by a nuisance parameter. The proposed cross-validation criterion is defined as the empirical mean over the validation sample of the loss function at the parameter estimate based on the training sample, averaged over …
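
A concrete instance of this criterion, for squared-error loss and a small collection of candidate estimators (polynomial regressions of increasing degree), looks like the following; the simulated data and choice of V = 10 folds are illustrative assumptions.

```r
## V-fold cross-validation selector under squared-error loss.
set.seed(8)
n <- 300
x <- runif(n)
y <- sin(2 * pi * x) + rnorm(n, sd = 0.4)
dat <- data.frame(x = x, y = y)

candidates <- 1:8                          # polynomial regressions of degree d
V <- 10
folds <- sample(rep(1:V, length.out = n))

cv.risk <- sapply(candidates, function(d) {
  mean(sapply(1:V, function(v) {
    fit <- lm(y ~ poly(x, d), data = dat[folds != v, ])
    mean((dat$y[folds == v] - predict(fit, dat[folds == v, ]))^2)
  }))
})

candidates[which.min(cv.risk)]             # the cross-validation selector
```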


Semi-Parametric Box-Cox Power Transformation Models For Censored Survival Observations, Tianxi Cai, Lu Tian, L. J. Wei Oct 2003

Harvard University Biostatistics Working Paper Series

No abstract provided.


Statistical Inferences Based On Non-Smooth Estimating Functions, Lu Tian, Jun S. Liu, Mary Zhao, L. J. Wei Oct 2003

Harvard University Biostatistics Working Paper Series

No abstract provided.


Maximum Likelihood Estimation Of Ordered Multinomial Parameters, Nicholas P. Jewell, Jack Kalbfleisch Oct 2003

The University of Michigan Department of Biostatistics Working Paper Series

The pool-adjacent-violators algorithm (Ayer et al., 1955) has long been known to give the maximum likelihood estimator of a series of ordered binomial parameters, based on an independent observation from each distribution (see Barlow et al., 1972). This result has immediate application to estimation of a survival distribution based on current survival status at a set of monitoring times. This paper considers an extended problem of maximum likelihood estimation of a series of ‘ordered’ multinomial parameters p_i = (p_{1i}, p_{2i}, ..., p_{mi}) for 1 <= i <= k, where ordered means that p_{j1} <= p_{j2} <= ... <= p_{jk} for each j with 1 <= j <= m-1. The data consist of k independent observations X_1, ..., X_k, where X_i has a multinomial distribution with probability parameter p_i and known index n_i >= 1. By making use of variants of the pool-adjacent-violators algorithm, …
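
For the binomial special case mentioned above, a bare-bones pool-adjacent-violators routine looks like the following; the implementation and example data are illustrative and do not reproduce the paper's multinomial extension.

```r
## Weighted pool-adjacent-violators: repeatedly pool adjacent pairs that
## violate the ordering, then read off one fitted value per observation.
pava <- function(x, w) {
  blocks <- as.list(seq_along(x))          # indices pooled into each block
  vals <- x; wts <- w
  repeat {
    viol <- which(diff(vals) < 0)          # adjacent order violations
    if (length(viol) == 0) break
    i <- viol[1]                           # pool the first violating pair
    pooled <- (wts[i] * vals[i] + wts[i + 1] * vals[i + 1]) / (wts[i] + wts[i + 1])
    blocks[[i]] <- c(blocks[[i]], blocks[[i + 1]])
    vals[i] <- pooled; wts[i] <- wts[i] + wts[i + 1]
    blocks <- blocks[-(i + 1)]; vals <- vals[-(i + 1)]; wts <- wts[-(i + 1)]
  }
  fitted <- numeric(length(x))
  for (b in seq_along(blocks)) fitted[blocks[[b]]] <- vals[b]
  fitted
}

## Example: X_i successes out of n_i = 10 trials at k = 5 monitoring times
X  <- c(2, 1, 6, 5, 9)
ni <- rep(10, 5)
pava(X / ni, ni)                           # 0.15 0.15 0.55 0.55 0.90
```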


Locally Efficient Estimation Of Nonparametric Causal Effects On Mean Outcomes In Longitudinal Studies, Romain Neugebauer, Mark J. Van Der Laan Jul 2003

U.C. Berkeley Division of Biostatistics Working Paper Series

Marginal Structural Models (MSM) have been introduced by Robins (1998a) as a powerful tool for causal inference as they directly model causal curves of interest, i.e. mean treatment-specific outcomes possibly adjusted for baseline covariates. Two estimators of the corresponding MSM parameters of interest have been proposed, see van der Laan and Robins (2002): the Inverse Probability of Treatment Weighted (IPTW) and the Double Robust (DR) estimators. A parametric MSM approach to causal inference has been favored since the introduction of MSM. It relies on correct specification of a parametric MSM to consistently estimate the parameter of interest using the IPTW …


Resampling-Based Multiple Testing: Asymptotic Control Of Type I Error And Applications To Gene Expression Data, Katherine S. Pollard, Mark J. Van Der Laan Jun 2003

U.C. Berkeley Division of Biostatistics Working Paper Series

We define a general statistical framework for multiple hypothesis testing and show that the correct null distribution for the test statistics is obtained by projecting the true distribution of the test statistics onto the space of mean zero distributions. For common choices of test statistics (based on an asymptotically linear parameter estimator), this distribution is asymptotically multivariate normal with mean zero and the covariance of the vector influence curve for the parameter estimator. This test statistic null distribution can be estimated by applying the non-parametric or parametric bootstrap to correctly centered test statistics. We prove that this bootstrap estimated null …


Maximization By Parts In Likelihood Inference, Peter Xuekun Song, Yanqin Fan, Jack Kalbfleisch Jun 2003

The University of Michigan Department of Biostatistics Working Paper Series

This paper presents and examines a new algorithm for solving a score equation for the maximum likelihood estimate in certain problems of practical interest. The method circumvents the need to compute second-order derivatives of the full likelihood function. It exploits the structure of certain models that yield a natural decomposition of a very complicated likelihood function. In this decomposition, the first part is a log likelihood from a simply analyzed model and the second part is used to update estimates from the first. Convergence properties of this fixed point algorithm are examined and asymptotics are derived for estimators obtained …


Double Robust Estimation In Longitudinal Marginal Structural Models, Zhuo Yu, Mark J. Van Der Laan Jun 2003

U.C. Berkeley Division of Biostatistics Working Paper Series

Consider estimation of causal parameters in a marginal structural model for the discrete intensity of the treatment specific counting process (e.g. hazard of a treatment specific survival time) based on longitudinal observational data on treatment, covariates and survival. We assume the sequential randomization assumption (SRA) on the treatment assignment mechanism and the so called experimental treatment assignment assumption which is needed to identify the causal parameters from the observed data distribution. Under SRA, the likelihood of the observed data structure factorizes in the auxiliary treatment mechanism and the partial likelihood consisting of the product over time of conditional distributions of …


A Bootstrap Confidence Interval Procedure For The Treatment Effect Using Propensity Score Subclassification, Wanzhu Tu, Xiao-Hua Zhou May 2003

UW Biostatistics Working Paper Series

In the analysis of observational studies, propensity score subclassification has been shown to be a powerful method for adjusting unbalanced covariates for the purpose of causal inferences. One practical difficulty in carrying out such an analysis is to obtain a correct variance estimate for such inferences, while reducing bias in the estimate of the treatment effect due to an imbalance in the measured covariates. In this paper, we propose a bootstrap procedure for the inferences concerning the average treatment effect; our bootstrap method is based on an extension of Efron’s bias-corrected accelerated (BCa) bootstrap confidence interval to a two-sample problem. …
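
The sketch below shows the kind of point estimate being bootstrapped, propensity score subclassification with five subclasses, together with a BCa interval from the boot package. The subclassification details and the paper's two-sample extension of the BCa interval are not reproduced here, and the data are simulated.

```r
## Propensity score subclassification estimate with a bootstrap BCa interval.
library(boot)

set.seed(11)
n <- 600
W <- rnorm(n)
A <- rbinom(n, 1, plogis(0.8 * W))
Y <- 1 + A + W + rnorm(n)
dat <- data.frame(W = W, A = A, Y = Y)

ps.subclass <- function(d, idx) {
  d <- d[idx, ]
  ps <- predict(glm(A ~ W, family = binomial, data = d), type = "response")
  strata <- cut(ps, quantile(ps, 0:5 / 5), include.lowest = TRUE)
  diffs <- tapply(seq_len(nrow(d)), strata, function(i)
    mean(d$Y[i][d$A[i] == 1]) - mean(d$Y[i][d$A[i] == 0]))
  sizes <- table(strata)
  sum(diffs * sizes / sum(sizes))          # subclass-size weighted difference
}

b <- boot(dat, ps.subclass, R = 1000)
boot.ci(b, type = "bca")                   # BCa interval for the treatment effect
```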


Checking Assumptions In Latent Class Regression Models Via A Markov Chain Monte Carlo Estimation Approach: An Application To Depression And Socio-Economic Status, Elizabeth Garrett, Richard Miech, Pamela Owens, William W. Eaton, Scott L. Zeger Jan 2003

Johns Hopkins University, Dept. of Biostatistics Working Papers

Latent class regression models are useful tools for assessing associations between covariates and latent variables. However, evaluation of key model assumptions cannot be performed using methods from standard regression models due to the unobserved nature of latent outcome variables. This paper presents graphical diagnostic tools to evaluate whether or not latent class regression models adhere to standard assumptions of the model: conditional independence and non-differential measurement. An integral part of these methods is the use of a Markov Chain Monte Carlo estimation procedure. Unlike standard maximum likelihood implementations for latent class regression model estimation, the MCMC approach allows us to …


Bivariate Current Status Data, Mark J. Van Der Laan, Nicholas P. Jewell Sep 2002

U.C. Berkeley Division of Biostatistics Working Paper Series

In many applications, it is often of interest to estimate a bivariate distribution of two survival random variables. Complete observation of such random variables is often not possible. If one only observes whether or not each of the individual survival times exceeds a common observed monitoring time C, then the data structure is referred to as bivariate current status data (Wang and Ding, 2000). For such data, we show that the identifiable part of the joint distribution is represented by three univariate cumulative distribution functions, namely the two marginal cumulative distribution functions, and the bivariate cumulative distribution function evaluated on the …


A New Partitioning Around Medoids Algorithm, Mark J. Van Der Laan, Katherine S. Pollard, Jennifer Bryan Feb 2002

U.C. Berkeley Division of Biostatistics Working Paper Series

Kaufman & Rousseeuw (1990) proposed the clustering algorithm Partitioning Around Medoids (PAM), which maps a distance matrix into a specified number of clusters. A particularly nice property is that PAM allows clustering with respect to any specified distance metric. In addition, the medoids are robust representations of the cluster centers, which is particularly important in the common context that many elements do not belong well to any cluster. Based on our experience in clustering gene expression data, we have noticed that PAM does have problems recognizing relatively small clusters in situations where good partitions around medoids clearly exist. In this …
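
For reference, standard PAM as implemented in the cluster package is shown below on simulated data with clusters of very unequal sizes; the paper's modified algorithm addresses the small-cluster problem described above and is not part of this call, and the data are invented.

```r
## Standard PAM applied to a distance matrix on simulated "expression" profiles.
library(cluster)

set.seed(12)
x <- rbind(matrix(rnorm(40 * 10, mean = 0), 40, 10),
           matrix(rnorm(15 * 10, mean = 3), 15, 10),
           matrix(rnorm( 5 * 10, mean = 6),  5, 10))

d   <- dist(x)                             # any dissimilarity matrix can be used
fit <- pam(d, k = 3, diss = TRUE)          # partition around 3 medoids

table(fit$clustering, rep(1:3, c(40, 15, 5)))   # recovered vs. true groups
fit$medoids                                # indices of the medoid observations
```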