Open Access. Powered by Scholars. Published by Universities.®

Statistics and Probability Commons


12,599 Full-Text Articles 19,877 Authors 6,525,218 Downloads 286 Institutions

All Articles in Statistics and Probability


12,599 full-text articles. Page 404 of 431.

An Analog Experiment Comparing Goal-Free Evaluation And Goal Achievement Evaluation Utility, Brandon W. Youker 2011 Western Michigan University


Dissertations

Goal-free evaluation (GFE) is the process of determining the merit of an evaluand independent of the stated or implied goals and objectives, whereas goal achievement evaluation (GAE), the most rudimentary form of goal-based evaluation, determines merit according to the evaluand’s level of accomplishment with regard to its goals. This study examines the utility of GAE and GFE from the perspective of the evaluation’s intended users. In the study, two evaluation teams, goal achievement and goal-free, independently and simultaneously evaluated the same human service program. Each team produced a final evaluation report, which was read by the evaluation’s users, who …


A Guide To Defining And Implementing Protocols For The Welfare Assessment Of Laboratory Animals: Eleventh Report Of The Bvaawf/Frame/Rspca/Ufaw Joint Working Group On Refinement, P. Hawkins, D. B. Morton, O. Burman, N. Dennison, P. Honess, M. Jennings, S. Lane, V. Middleton, J. V. Roughan, S. Wells, K. Westwood 2011 Royal Society for the Prevention of Cruelty to Animals


Research Methodology and Laboratory Animals Collection

The refinement of husbandry and procedures to reduce animal suffering and improve welfare is an essential component of humane science. Successful refinement depends upon the ability to assess animal welfare effectively, and detect any signs of pain or distress as rapidly as possible, so that any suffering can be alleviated. This document provides practical guidance on setting up and operating effective protocols for the welfare assessment of animals used in research and testing. It sets out general principles for more objective observation of animals, recognizing and assessing indicators of pain or distress and tailoring these to individual projects. Systems for …


Clustering With Exclusion Zones: Genomic Applications, Mark Segal, Yuanyuan Xiao, Fred Huffer 2010 University of California, San Francisco


Mark R Segal

Methods for formally evaluating the clustering of events in space or time, notably the scan statistic, have been richly developed and widely applied. In order to utilize the scan statistic and related approaches, it is necessary to know the extent of the spatial or temporal domains wherein the events arise. Implicit in their usage is that these domains have no “holes”—hereafter “exclusion zones”—regions in which events a priori cannot occur. However, in many contexts, this requirement is not met. When the exclusion zones are known, it is straightforward to correct the scan statistic for their occurrence by simply adjusting the …
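
For readers unfamiliar with the scan statistic, the sketch below is a Monte Carlo illustration of assessing clustering on a one-dimensional domain that contains exclusion zones: the null distribution is obtained by placing events uniformly on the allowed region only. It is not the authors' analytic adjustment, and all data, window widths, and zone boundaries are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

def scan_stat(events, domain_len, w=5.0, step=0.5):
    """Scan statistic: the maximum number of events in any window of width w."""
    starts = np.arange(0.0, domain_len - w + step, step)
    return max(((events >= s) & (events < s + w)).sum() for s in starts)

def sample_allowed(n, domain_len, exclusions):
    """Place n events uniformly on the domain, excluding the exclusion zones."""
    pts = []
    while len(pts) < n:
        x = rng.uniform(0, domain_len, size=n)
        keep = ~np.any([(x >= a) & (x < b) for a, b in exclusions], axis=0)
        pts.extend(x[keep].tolist())
    return np.array(pts[:n])

domain_len = 100.0
exclusions = [(20.0, 35.0), (60.0, 70.0)]   # regions where events a priori cannot occur
observed = np.array([5.1, 6.0, 6.3, 7.2, 7.9, 41.0, 55.5, 80.2, 81.0, 81.4, 82.3])

obs = scan_stat(observed, domain_len)
null = [scan_stat(sample_allowed(observed.size, domain_len, exclusions), domain_len)
        for _ in range(2000)]
print(f"scan statistic = {obs}, Monte Carlo p-value = {np.mean(np.array(null) >= obs):.3f}")
```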


Rejoinder: Estimation Issues For Copulas Applied To Marketing Data, Peter Danaher, Michael Smith 2010 Melbourne Business School


Michael Stanley Smith

Estimating copula models using Bayesian methods presents some subtle challenges, ranging from specification of the prior to computational tractability. There is also some debate about which of the available copulas is the most appropriate to employ. We address these issues here and conclude by discussing further applications of copula models in marketing.


Forecasting Television Ratings, Peter Danaher, Tracey Dagger, Michael Smith 2010 Monash University


Michael Stanley Smith

Despite the state of flux in media today, television remains the dominant player globally for advertising spend. Since television advertising time is purchased on the basis of projected future ratings, and ad costs have skyrocketed, there is increasing pressure to forecast television ratings accurately. Previous forecasting methods are not generally very reliable and many have not been validated, but more distressingly, none have been tested in today’s multichannel environment. In this study we compare 8 different forecasting models, ranging from a naïve empirical method to a state-of-the-art Bayesian model-averaging method. Our data come from a recent time period, 2004-2008 in …


Cross-Validated Targeted Minimum-Loss-Based Estimation, Wenjing Zheng, Mark van der Laan 2010 University of California - Berkeley


Wenjing Zheng

No abstract provided.


Accurately Sized Test Statistics With Misspecified Conditional Homoskedasticity, Douglas Steigerwald, Jack Erb 2010 University of California, Santa Barbara


Douglas G. Steigerwald

We study the finite-sample performance of test statistics in linear regression models where the error dependence is of unknown form. With an unknown dependence structure there is traditionally a trade-off between the maximum lag over which the correlation is estimated (the bandwidth) and the amount of heterogeneity in the process. When allowing for heterogeneity, through conditional heteroskedasticity, the correlation at far lags is generally omitted and the resultant inflation of the empirical size of test statistics has long been recognized. To allow for correlation at far lags we study test statistics constructed under the possibly misspecified assumption of conditional homoskedasticity. …
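
As a hedged illustration of the "bandwidth" referred to above, here is a generic Bartlett-kernel (Newey-West style) long-run variance estimate for a scalar series; it is not the authors' construction, and the AR(1) errors are simulated only to show how a short bandwidth omits correlation at far lags.

```python
import numpy as np

def bartlett_lrv(u, bandwidth):
    """Bartlett-kernel long-run variance of a scalar series u:
    gamma_0 + 2 * sum_{l<=B} (1 - l/(B+1)) * gamma_l, where `bandwidth`
    is the maximum lag over which correlation is estimated."""
    u = np.asarray(u, dtype=float) - np.mean(u)
    n = u.size
    lrv = np.dot(u, u) / n
    for lag in range(1, bandwidth + 1):
        gamma = np.dot(u[lag:], u[:-lag]) / n
        lrv += 2.0 * (1.0 - lag / (bandwidth + 1)) * gamma
    return lrv

# AR(1) errors with strong persistence: too small a bandwidth understates
# the variance used to size the test statistic.
rng = np.random.default_rng(6)
e = np.zeros(2000)
for t in range(1, e.size):
    e[t] = 0.7 * e[t - 1] + rng.standard_normal()
for B in (2, 10, 40):
    print(f"bandwidth {B:2d}: long-run variance estimate {bartlett_lrv(e, B):.2f}")
```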


The Underground Economy Of Fake Antivirus Software, Douglas Steigerwald, Brett Stone-Gross, Ryan Abman, Richard Kemmerer, Christopher Kruegel, Giovanni Vigna 2010 University of California, Santa Barbara


Douglas G. Steigerwald

Fake antivirus (AV) programs have been utilized to defraud millions of computer users into paying as much as one hundred dollars for a phony software license. As a result, fake AV software has evolved into one of the most lucrative criminal operations on the Internet. In this paper, we examine the operations of three large-scale fake AV businesses, lasting from three months to more than two years. More precisely, we present the results of our analysis on a trove of data obtained from several backend servers that the cybercriminals used to drive their scam operations. Our investigations reveal that these …


An Autoregressive Approach To House Price Modeling, Chaitra Nagaraja, Lawrence Brown, Linda Zhao 2010 Fordham University


Chaitra H Nagaraja

No abstract provided.


Extracting Information From Functional Connectivity Maps Via Function-On-Scalar Regression, Philip T. Reiss, Maarten Mennes, Eva Petkova, Lei Huang, Matthew J. Hoptman, Bharat B. Biswal, Stanley J. Colcombe, Xi-Nian Zuo, Michael P. Milham 2010 New York University


Lei Huang

Functional connectivity of an individual human brain is often studied by acquiring a resting state functional magnetic resonance imaging scan, and mapping the correlation of each voxel's BOLD time series with that of a seed region. As large collections of such maps become available, including multisite data sets, there is an increasing need for ways to distill the information in these maps in a readily visualized form. Here we propose a two-step analytic strategy. First, we construct connectivity-distance profiles, which summarize the connectivity of each voxel in the brain as a function of distance from the seed, a functional relationship …
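
The sketch below illustrates, on fabricated data, the two ingredients the abstract describes: a seed-based correlation map and a connectivity-distance profile obtained by averaging correlations within distance bins. The array names, seed definition, and 10 mm bin width are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(4)

# Fabricated example: BOLD time series for V voxels (T time points each),
# voxel coordinates in mm, and a seed region taken to be the first 50 voxels.
T, V = 200, 5000
bold = rng.standard_normal((V, T))
coords = rng.uniform(0, 100, size=(V, 3))
seed_ts = bold[:50].mean(axis=0)
seed_center = coords[:50].mean(axis=0)

# Step 1: connectivity map = correlation of each voxel's time series with the seed.
bold_c = bold - bold.mean(axis=1, keepdims=True)
seed_c = seed_ts - seed_ts.mean()
corr = (bold_c @ seed_c) / (np.linalg.norm(bold_c, axis=1) * np.linalg.norm(seed_c))

# Step 2: connectivity-distance profile = mean correlation within 10 mm distance bins.
dist = np.linalg.norm(coords - seed_center, axis=1)
bins = np.arange(0, dist.max() + 10, 10)
idx = np.digitize(dist, bins)
profile = np.array([corr[idx == k].mean() if np.any(idx == k) else np.nan
                    for k in range(1, len(bins))])
print(profile.round(3))
```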


Modelling The Impact Of Personality On Individual Performance Behavior With A Time-Varying Mixture Of Monotonic Random Effects, Sally A. Wood, Edward J. Cripps, Robert E. Wood, John Lau 2010 University of Western Australia


Sally Wood

A method is presented for flexibly modelling longitudinal data that provides insight into a central question in psychology theory: the dependency between personality classification and individual performance behavior. Flexibility is achieved by assuming the regression coefficients of random effects models are generated from a time-varying mixture of an unknown but finite number of processes, where the weights attached to the number of processes are parameterised to depend upon an individual’s personality classification. For a given number of mixture components the component processes are constrained distributions and the weights attached to them depend upon time. The method is made robust …


A Practical Ad-Hoc Adjustment To The Simes P-Value, Chris Lloyd 2010 Melbourne Business School


Chris J. Lloyd

The Simes P-value is more powerful than Bonferroni but still suffers from some conservatism when the tests are correlated. Based on a massive simulation study, I develop a formula that corrects for this conservatism. It requires the number of experimental arms, which is known. It also requires the correlation and skewness of the underlying test statistics, which will need analytic approximation in practice.
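
For reference, here is a minimal sketch of the standard Bonferroni and Simes combined P-values (the uncorrected quantities the abstract starts from, not the author's adjusted formula); the example P-values are made up.

```python
import numpy as np

def bonferroni(pvals):
    """Bonferroni combined P-value: m * min(p), capped at 1."""
    p = np.asarray(pvals, dtype=float)
    return min(1.0, p.size * p.min())

def simes(pvals):
    """Simes combined P-value: min over i of m * p_(i) / i for ordered p_(i)."""
    p = np.sort(np.asarray(pvals, dtype=float))
    m = p.size
    return min(1.0, (m * p / np.arange(1, m + 1)).min())

# Three experimental arms compared with a control (hypothetical values).
pvals = [0.04, 0.045, 0.05]
print(f"Bonferroni combined P-value: {bonferroni(pvals):.3f}")  # 0.120
print(f"Simes combined P-value:      {simes(pvals):.3f}")       # 0.050
```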


Computing Highly Accurate Confidence Limits From Discrete Data Using Importance Sampling, Chris Lloyd 2010 Melbourne Business School


Chris J. Lloyd

For discrete parametric models, approximate confidence limits perform poorly from a strict frequentist perspective. In principle, exact and optimal confidence limits can be computed using the formula of Buehler (1957) and Lloyd and Kabaila (2003). So-called profile upper limits (Kabaila & Lloyd, 2001) are closely related to Buehler limits and have extremely good properties. Both profile and Buehler limits depend on the probability of a certain tail set as a function of the unknown parameters. Unfortunately, this probability surface is not computable for realistic models. In this paper, importance sampling is used to estimate the surface and hence the confidence limits. …
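
As a generic illustration of the importance-sampling idea (not the paper's Buehler-limit computation), the sketch below estimates a small binomial tail-set probability by sampling from a tilted binomial and reweighting by the likelihood ratio; all parameter values are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def tail_prob_is(n, theta, k, q, n_sims=100_000):
    """Estimate P(X >= k) for X ~ Binomial(n, theta) by importance sampling
    from Binomial(n, q), with q chosen so the tail set {X >= k} is hit often.
    Each draw is weighted by the likelihood ratio f_theta(x) / f_q(x)."""
    x = rng.binomial(n, q, size=n_sims)
    w = np.exp(stats.binom.logpmf(x, n, theta) - stats.binom.logpmf(x, n, q))
    return np.mean((x >= k) * w)

n, theta, k = 50, 0.05, 12
exact = stats.binom.sf(k - 1, n, theta)       # exact tail probability, for comparison
approx = tail_prob_is(n, theta, k, q=k / n)   # importance-sampled estimate
print(f"exact {exact:.3e}  IS estimate {approx:.3e}")
```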


Ranking Of Provinces In Iran According To Socio-Economic Indices, Jalil Khodaparast Shirazi, Reza Moosavi Mohseni, A. R. Rahmansetayesh 2010 Auckland University of Technology


Reza Moosavi Mohseni

Some parts of a country may earn lower income through business activities than other parts. When this is accompanied by a lack of social income, owing to poorer access to the products and services provided by the government, those areas can come to lag seriously behind the rest of the country. The first step in preventing such a problem is recognizing the present situation, and the second is planning to reach an appropriate one. This article applies socioeconomic indices to assess the current condition in Fars province …


Windows Executable For Gaussian Copula With Nbd Margins, Michael S. Smith 2010 Melbourne Business School


Michael Stanley Smith

This is an example 32-bit Windows program to estimate a Gaussian copula model with NBD margins. The margins are estimated first using MLE, and the copula second using Bayesian MCMC. The model was discussed in Danaher & Smith (2011; Marketing Science) as example 4 (section 4.2).
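
The executable itself is not reproduced here, but the sketch below illustrates the two-stage idea the abstract describes: fit each NBD margin by MLE, then estimate the Gaussian copula dependence on the transformed scale. For brevity it substitutes a simple plug-in correlation (via a randomised probability integral transform) for the Bayesian MCMC second stage; the simulated data and starting values are made up.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)

def fit_nbd(x):
    """MLE of a negative binomial (NBD) margin, parameterised as (r, p)."""
    def nll(params):
        r, p = params
        if r <= 0 or not (0 < p < 1):
            return np.inf
        return -stats.nbinom.logpmf(x, r, p).sum()
    return optimize.minimize(nll, x0=[1.0, 0.5], method="Nelder-Mead").x

def randomized_pit(x, r, p):
    """Randomised probability integral transform for a discrete margin:
    draw uniformly between F(x-1) and F(x) to break ties."""
    lo = stats.nbinom.cdf(x - 1, r, p)
    hi = stats.nbinom.cdf(x, r, p)
    return lo + rng.uniform(size=x.shape) * (hi - lo)

# Hypothetical dependent count data simulated from the model itself.
rho = 0.5
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=500)
u = stats.norm.cdf(z)
x1 = stats.nbinom.ppf(u[:, 0], 2, 0.4).astype(int)
x2 = stats.nbinom.ppf(u[:, 1], 3, 0.6).astype(int)

# Stage 1: estimate each NBD margin by MLE.
r1, p1 = fit_nbd(x1)
r2, p2 = fit_nbd(x2)

# Stage 2: transform to normal scores and estimate the Gaussian copula
# correlation (a plug-in stand-in for the Bayesian MCMC step).
z1 = stats.norm.ppf(randomized_pit(x1, r1, p1))
z2 = stats.norm.ppf(randomized_pit(x2, r2, p2))
print("estimated copula correlation:", np.corrcoef(z1, z2)[0, 1])
```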


Modeling Multivariate Distributions Using Copulas: Applications In Marketing, Peter J. Danaher, Michael S. Smith 2010 Melbourne Business School


Michael Stanley Smith

In this research we introduce a new class of multivariate probability models to the marketing literature. Known as “copula models”, they have a number of attractive features. First, they permit the combination of any univariate marginal distributions, which need not come from the same distributional family. Second, a particular class of copula models, called “elliptical copulas”, has the property that complexity increases at a much slower rate than for existing multivariate probability models as the number of dimensions grows. Third, they are very general, encompassing a number of existing multivariate models, and provide a framework for generating many more. …
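
To illustrate the first feature, combining marginals from different families, here is a small simulation from a bivariate Gaussian copula with a gamma margin and a Poisson margin; the margin parameters and correlation are arbitrary choices, not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulate from a bivariate Gaussian copula with mixed margins, e.g. a
# continuous spend amount and a discrete visit count.
rho, n = 0.6, 10_000
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
u = stats.norm.cdf(z)                              # uniforms with Gaussian dependence
spend = stats.gamma.ppf(u[:, 0], a=2, scale=30)    # gamma margin
visits = stats.poisson.ppf(u[:, 1], mu=3)          # Poisson margin

print("Spearman correlation:", stats.spearmanr(spend, visits)[0])
```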


Bicycle Commuting In Melbourne During The 2000s Energy Crisis: A Semiparametric Analysis Of Intraday Volumes, Michael S. Smith, Goeran Kauermann 2010 Melbourne Business School


Michael Stanley Smith

Cycling is attracting renewed attention as a mode of transport in western urban environments, yet the determinants of usage are poorly understood. In this paper we investigate some of these using intraday bicycle volumes collected via induction loops located at ten bike paths in the city of Melbourne, Australia, between December 2005 and June 2008. The data are hourly counts at each location, with temporal and spatial disaggregation allowing for the impact of meteorology to be measured accurately for the first time. Moreover, during this period petrol prices varied dramatically and the data also provide a unique opportunity to assess …


The Generalized Shrinkage Estimator For The Analysis Of Functional Connectivity Of Brain Signals, Mark Fiecas, Hernando Ombao 2010 Brown University


Mark Fiecas

We develop a new statistical method for estimating functional connectivity between neurophysiological signals represented by a multivariate time series. We use partial coherence as the measure of functional connectivity. Partial coherence identifies the frequency bands that drive the direct linear association between any pair of channels. To estimate partial coherence, one would first need an estimate of the spectral density matrix of the multivariate time series. Parametric estimators of the spectral density matrix provide good frequency resolution but could be sensitive when the parametric model is misspecified. Smoothing-based nonparametric estimators are robust to model misspecification and are consistent but may …
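
A minimal nonparametric sketch of the quantity being estimated: partial coherence computed from the inverse of a Welch-type spectral density matrix estimate. It does not implement the paper's generalized shrinkage estimator, and the three-channel example data are synthetic.

```python
import numpy as np
from scipy import signal

def partial_coherence(x, fs=1.0, nperseg=256):
    """Partial coherence between all channel pairs of a multivariate time
    series x (channels x time), via Welch estimates of the spectral matrix."""
    p, _ = x.shape
    f, _ = signal.csd(x[0], x[0], fs=fs, nperseg=nperseg)
    S = np.empty((f.size, p, p), dtype=complex)      # spectral density matrix S(f)
    for i in range(p):
        for j in range(p):
            _, S[:, i, j] = signal.csd(x[i], x[j], fs=fs, nperseg=nperseg)
    # Partial coherence from the inverse spectral matrix G = S^{-1}:
    # |G_ij|^2 / (G_ii * G_jj).
    G = np.linalg.inv(S)
    d = np.abs(np.einsum("fii->fi", G))
    return f, np.abs(G) ** 2 / (d[:, :, None] * d[:, None, :])

# Synthetic 3-channel example: channel 1 is directly driven by channel 0.
rng = np.random.default_rng(3)
x = rng.standard_normal((3, 2048))
x[1] += 0.7 * x[0]
f, pcoh = partial_coherence(x)
print("mean partial coherence (0,1):", pcoh[:, 0, 1].mean())
```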


National Estimates Of The Prevalence Of Alzheimer's Disease In The United States, Ron Brookmeyer, Denis Evans, Liesi Hebert, Kenneth Langa, Steven Heeringa, Brenda Plassman, Kenneth Kukull 2010 University of California, Los Angeles


Ron Brookmeyer

Several methods of estimating prevalence of dementia are presented in this article. For both Brookmeyer and the Chicago Health and Aging project (CHAP), the estimates of prevalence are derived statistically, forward calculating from incidence and survival figures. The choice of incidence rates on which to build the estimates may be critical. Brookmeyer used incidence rates from several published studies, whereas the CHAP investigators applied the incidence rates observed in their own cohort. The Aging, Demographics, and Memory Study (ADAMS) and the East Boston Senior Health Project (EBSHP) were sample surveys designed to ascertain the prevalence of Alzheimer’s disease and dementia. …
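
As a toy illustration of forward calculation of prevalence from incidence and survival, here is a sketch with entirely hypothetical rates; it ignores competing mortality among the disease-free and population structure, so it is not either study's method.

```python
import numpy as np

# Toy forward calculation: prevalence at age a is the sum over earlier onset
# ages of (annual incidence at onset) x (probability of surviving to age a).
ages = np.arange(65, 96)                       # ages 65..95
incidence = 0.002 * 1.14 ** (ages - 65)        # hypothetical annual incidence
survival = lambda years: 0.85 ** years         # hypothetical post-onset survival

prevalence = np.zeros_like(ages, dtype=float)
for i, a in enumerate(ages):
    onset = ages[: i + 1]
    prevalence[i] = np.sum(incidence[: i + 1] * survival(a - onset))

print("toy prevalence at age 85: {:.1%}".format(prevalence[ages == 85][0]))
```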


Statistical Considerations In Determining Hiv Incidence From Changes In Hiv Prevalence, Ron Brookmeyer, Jacob Konikoff 2010 University of California, Los Angeles


Ron Brookmeyer

The development of methods for estimating HIV incidence is critical for tracking the epidemic and for designing, targeting and evaluating HIV prevention efforts. One method for estimating incidence is based on changes in HIV prevalence. That method is attracting increased attention because national population-based HIV prevalence surveys, such as Demographic and Health Surveys, are being conducted throughout the world. Here, we consider some statistical issues associated with estimating HIV incidence from two population-based HIV prevalence surveys conducted at two different points in time. We show that the incidence estimator depends on the relative survival rate. We evaluate the sensitivity of …

