Articles 1 - 30 of 52
Full-Text Articles in Statistical Methodology
Inequality In Treatment Benefits: Can We Determine If A New Treatment Benefits The Many Or The Few?, Emily Huang, Ethan Fang, Daniel Hanley, Michael Rosenblum
Johns Hopkins University, Dept. of Biostatistics Working Papers
The primary analysis in many randomized controlled trials focuses on the average treatment effect and does not address whether treatment benefits are widespread or limited to a select few. This problem affects many disease areas, since it stems from how randomized trials, often the gold standard for evaluating treatments, are designed and analyzed. Our goal is to learn about the fraction who benefit from a treatment, based on randomized trial data. We consider the case where the outcome is ordinal, with binary outcomes as a special case. In general, the fraction who benefit is a non-identifiable parameter, and the best …
Niche-Based Modeling Of Japanese Stiltgrass (Microstegium Vimineum) Using Presence-Only Information, Nathan Bush
Masters Theses
The Connecticut River watershed is experiencing a rapid invasion of aggressive non-native plant species, which threaten watershed function and structure. Volunteer-based monitoring programs such as the University of Massachusetts’ OutSmart Invasive Species Project, the Early Detection and Distribution Mapping System (EDDMapS), and the Invasive Plant Atlas of New England (IPANE) have gathered valuable invasive plant data. These programs provide a unique opportunity for researchers to model invasive plant species utilizing citizen-sourced data. This study took advantage of these large data sources to model invasive plant distribution, to determine the environmental and biophysical predictors most influential in dispersion, and to identify …
Gis-Integrated Mathematical Modeling Of Social Phenomena At Macro- And Micro- Levels—A Multivariate Geographically-Weighted Regression Model For Identifying Locations Vulnerable To Hosting Terrorist Safe-Houses: France As Case Study, Elyktra Eisman
FIU Electronic Theses and Dissertations
Adaptability and invisibility are hallmarks of modern terrorism, and keeping pace with its dynamic nature presents a serious challenge for societies throughout the world. Innovations in computer science have incorporated applied mathematics to develop a wide array of predictive models to support the variety of approaches to counterterrorism. Predictive models are usually designed to forecast the location of attacks. Although this may protect individual structures or locations, it does not reduce the threat—it merely changes the target. While predictive models dedicated to events or social relationships receive much attention where the mathematical and social science communities intersect, models dedicated to …
Variable Selection In Single Index Varying Coefficient Models With Lasso, Peng Wang
Doctoral Dissertations
The single index varying coefficient model is an attractive statistical model due to its ability to reduce dimension and its ease of interpretation. There are many theoretical studies and practical applications of it, but typically without variable selection features, and no public software is available for fitting it. Here we propose a new algorithm to fit the single index varying coefficient model and to carry out variable selection in the index part with LASSO. The core idea is a two-step scheme which alternates between estimating coefficient functions and selecting-and-estimating the single index. Both in simulation and in application to a Geoscience dataset, we …
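The alternating scheme can be illustrated with a toy proximal-gradient sketch. This is not the dissertation's algorithm: for simplicity the coefficient function is taken as the known identity beta(u) = u, which reduces the index update to an ordinary LASSO step, and all data and settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = beta(x' theta) * z + noise, with beta(u) = u assumed known
# (a simplification; in the actual model beta is estimated by smoothing).
n, p = 500, 4
theta_true = np.array([1.0, 0.5, 0.0, 0.0])   # sparse index: two irrelevant components
X = rng.standard_normal((n, p))
z = rng.standard_normal(n)
y = (X @ theta_true) * z + 0.1 * rng.standard_normal(n)

def soft_threshold(v, lam):
    # Proximal operator of the L1 penalty: produces exact zeros in theta
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# With beta(u) = u the model is linear in theta with design D = X * z,
# so the LASSO index step is plain proximal-gradient (ISTA) iteration.
D = X * z[:, None]
theta = np.zeros(p)
step, lam = 0.2, 0.05
for _ in range(500):
    grad = -D.T @ (y - D @ theta) / n
    theta = soft_threshold(theta - step * grad, step * lam)

print(theta)  # irrelevant index components are shrunk to (near) zero
```

The soft-thresholding step is what performs the selection: components of the index whose contribution is weaker than the penalty are set exactly to zero.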
Threat Analysis, Countermeasures And Design Strategies For Secure Computation In Nanometer Cmos Regime, Raghavan Kumar
Doctoral Dissertations
Advancements in CMOS technologies have led to an era of the Internet of Things (IoT), where devices have the ability to communicate with each other apart from their computational power. As more and more sensitive data is processed by embedded devices, the trend towards lightweight and efficient cryptographic primitives has gained significant momentum. Achieving perfect security in silicon is extremely difficult, as traditional cryptographic implementations are vulnerable to various active and passive attacks. There is also a threat in the form of "hardware Trojans" inserted into the supply chain by untrusted third-party manufacturers for economic incentives. Apart …
A Novel Method For Assessing Co-Monotonicity: An Interplay Between Mathematics And Statistics With Applications, Danang T. Qoyyimi
Electronic Thesis and Dissertation Repository
Numerous problems in econometrics, insurance, reliability engineering, and statistics rely on the assumption that certain functions are monotonic, which may or may not be true in real-life scenarios. To satisfy this requirement, from the theoretical point of view, researchers frequently model the underlying phenomena using parametric and semi-parametric families of functions, thus effectively specifying the required shapes of the functions. To tackle these problems in a non-parametric way, when the shape cannot be specified explicitly but only estimated approximately, we suggest indices for measuring the lack of monotonicity in functions. We investigate properties of these indices and offer convenient …
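One simple way to quantify lack of monotonicity is the share of downward movement among all movement in a function's increments. This is an illustrative index, not necessarily the one proposed in the thesis:

```python
import numpy as np

def lack_of_monotonicity(f_vals):
    """Illustrative index in [0, 1]: 0 for nondecreasing sequences,
    1 for nonincreasing ones, intermediate values otherwise.
    Computed from increments of f evaluated on an ordered grid."""
    d = np.diff(np.asarray(f_vals, dtype=float))
    total = np.sum(np.abs(d))
    if total == 0:                    # constant function: no violation
        return 0.0
    return np.sum(np.abs(d[d < 0])) / total

x = np.linspace(0, 1, 101)
print(lack_of_monotonicity(x**2))            # 0.0 for a monotone function
print(lack_of_monotonicity(np.sin(6 * x)))   # strictly between 0 and 1
```

An index of this form is non-parametric in the sense the abstract describes: it requires only function evaluations on a grid, not a specified parametric shape.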
Flexible Penalized Regression For Functional Data...And Other Complex Data Objects, Philip T. Reiss
Philip T. Reiss
No abstract provided.
An Omnibus Nonparametric Test Of Equality In Distribution For Unknown Functions, Alexander Luedtke, Marco Carone, Mark Van Der Laan
Alex Luedtke
We present a novel family of nonparametric omnibus tests of the hypothesis that two unknown but estimable functions are equal in distribution when applied to the observed data structure. We developed these tests, which represent a generalization of the maximum mean discrepancy tests described in Gretton et al. [2006], using recent developments from the higher-order pathwise differentiability literature. Despite their complex derivation, the associated test statistics can be expressed rather simply as U-statistics. We study the asymptotic behavior of the proposed tests under the null hypothesis and under both fixed and local alternatives. We provide examples to which our tests …
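The maximum mean discrepancy of Gretton et al. [2006], which these tests generalize, already has the U-statistic form the abstract mentions. A minimal sketch of the unbiased (U-statistic) estimate of squared MMD with a Gaussian kernel follows; the bandwidth choice and the one-dimensional data are illustrative assumptions.

```python
import numpy as np

def mmd2_ustat(x, y, bw=1.0):
    """Unbiased (U-statistic) estimate of squared MMD between samples x and y,
    using a Gaussian kernel; bw is an illustrative bandwidth choice."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / bw**2)
    m, n = len(x), len(y)
    kxx, kyy, kxy = k(x, x), k(y, y), k(x, y)
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))   # i != j pairs only
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * kxy.mean()

rng = np.random.default_rng(1)
x = rng.standard_normal(200)
same = rng.standard_normal(200)           # drawn from the same distribution as x
shifted = rng.standard_normal(200) + 1.0  # mean-shifted alternative
print(mmd2_ustat(x, shifted), mmd2_ustat(x, same))
```

Excluding the diagonal (i = j) terms is what makes the estimator an unbiased U-statistic: the statistic is near zero under equality in distribution and clearly positive under the shifted alternative.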
Probabilistic Reasoning In Cosmology, Yann Benétreau-Dupin
Electronic Thesis and Dissertation Repository
Cosmology raises novel philosophical questions regarding the use of probabilities in inference. This work aims at identifying and assessing lines of arguments and problematic principles in probabilistic reasoning in cosmology.
The first, second, and third papers deal with the intersection of two distinct problems: accounting for selection effects, and representing ignorance or indifference in probabilistic inferences. These two problems meet in the cosmology literature when anthropic considerations are used to predict cosmological parameters by conditionalizing the distribution of, e.g., the cosmological constant on the number of observers it allows for. However, uniform probability distributions usually appealed to in such arguments …
Multiple Ratio Imputation Via A New Application Of The EMB Algorithm (Masayoshi Takahashi), Masayoshi Takahashi
Masayoshi Takahashi
No abstract provided.
Preparedness Of Hospitals In The Republic Of Ireland For An Influenza Pandemic, An Infection Control Perspective, Mary Reidy, Fiona Ryan, Dervla Hogan, Seán Lacey, Claire Buckley
Department of Mathematics Publications
When an influenza pandemic occurs, most of the population is susceptible and attack rates can range as high as 40–50%. The most important failure in pandemic planning is the lack of standards or guidelines regarding what it means to be ‘prepared’. The aim of this study was to assess the preparedness of acute hospitals in the Republic of Ireland for an influenza pandemic from an infection control perspective.
C-Learning: A New Classification Framework To Estimate Optimal Dynamic Treatment Regimes, Baqun Zhang, Min Zhang
The University of Michigan Department of Biostatistics Working Paper Series
Personalizing treatment to accommodate patient heterogeneity and the evolving nature of a disease over time has received considerable attention lately. A dynamic treatment regime is a set of decision rules, each corresponding to a decision point, that determine the next treatment based on each individual’s own available characteristics and treatment history up to that point. We show that identifying the optimal dynamic treatment regime can be recast as a sequential classification problem and is equivalent to sequentially minimizing a weighted expected misclassification error. This general classification perspective targets the exact goal of optimally individualizing treatments and is new and fundamentally …
Evaluating The Effects Of Standardized Patient Care Pathways On Clinical Outcomes, Anna V. Romanova
Doctoral Dissertations
The main focus of this study is to create a standardized approach to evaluating the impact of the patient care pathways across all major disease categories and key outcome measures in a hospital setting when randomized clinical trials are not feasible. Toward this goal I identify statistical methods, control factors, and adjustments that can correct for potential confounding in observational studies. I investigate the efficiency of existing bias correction methods under varying conditions of imbalanced samples through a Monte Carlo simulation. The simulation results are then utilized in a case study for one of the largest primary diagnosis areas, chronic …
Supervised Classification Using Copula And Mixture Copula, Sumen Sen
Mathematics & Statistics Theses & Dissertations
Statistical classification is a field of study that has developed significantly since the 1960s. This research has a vast area of applications. For example, pattern recognition has been applied to automatic character recognition, medical diagnostics, and most recently to data mining. The classical discrimination rule assumes normality. However, in many situations this assumption is questionable. In fact, for some data the pattern vector is a mixture of discrete and continuous random variables. In this dissertation, we use copula densities to model class conditional distributions. Such types of densities are useful when the marginal densities of a pattern vector are not normally …
Nonparametric Methods For Doubly Robust Estimation Of Continuous Treatment Effects, Edward Kennedy, Zongming Ma, Matthew Mchugh, Dylan Small
Edward H. Kennedy
Continuous treatments (e.g., doses) arise often in practice, but available causal effect estimators require either parametric models for the effect curve or else consistent estimation of a single nuisance function. We propose a novel doubly robust kernel smoothing approach, which requires only mild smoothness assumptions on the effect curve and allows for misspecification of either the treatment density or outcome regression. We derive asymptotic properties and also discuss an approach for data-driven bandwidth selection. The methods are illustrated via simulation and in a study of the effect of nurse staffing on hospital readmissions penalties.
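A simulation sketch of the pseudo-outcome idea behind doubly robust estimation for a continuous treatment: construct a transformed outcome whose conditional mean given the treatment level equals the effect curve, then kernel-smooth it. For clarity the nuisance functions are plugged in at their true values (in practice they are estimated), and the data-generating choices are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
X = rng.standard_normal(n)
A = X + rng.standard_normal(n)                 # continuous treatment (a "dose")
Y = A + X + 0.1 * rng.standard_normal(n)       # true effect curve: theta(a) = a

def norm_pdf(t, mu, var):
    return np.exp(-0.5 * (t - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

# True nuisance functions, plugged in for illustration (normally estimated):
pi = norm_pdf(A, X, 1.0)        # conditional treatment density pi(A | X)
w = norm_pdf(A, 0.0, 2.0)       # marginal treatment density
mu = A + X                      # outcome regression E[Y | X, A]
m = A                           # integral of mu(x, A) over P(x); here E[X] = 0

# Doubly robust pseudo-outcome: E[xi | A = a] equals the effect curve theta(a)
xi = (Y - mu) * w / pi + m

def nw_smooth(a0, bw=0.3):
    # Nadaraya-Watson kernel regression of xi on A at the point a0
    k = np.exp(-0.5 * ((A - a0) / bw) ** 2)
    return np.sum(k * xi) / np.sum(k)

print(nw_smooth(1.0))   # approximately theta(1) = 1
```

The appeal of the pseudo-outcome construction is that the final step is an ordinary one-dimensional smoothing problem, so only mild smoothness of the effect curve is required.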
A Study Of Missing Data Imputation In Official Statistics: Multiple Imputation And Single Imputation (Masayoshi Takahashi), Masayoshi Takahashi
Masayoshi Takahashi
No abstract provided.
Set-Based Tests For Genetic Association In Longitudinal Studies, Zihuai He, Min Zhang, Seunggeun Lee, Jennifer A. Smith, Xiuqing Guo, Walter Palmas, Sharon L.R. Kardia, Ana V. Diez Roux, Bhramar Mukherjee
Jennifer McMahon
Genetic association studies with longitudinal markers of chronic diseases (e.g., blood pressure, body mass index) provide a valuable opportunity to explore how genetic variants affect traits over time by utilizing the full trajectory of longitudinal outcomes. Since these traits are likely influenced by the joint effect of multiple variants in a gene, a joint analysis of these variants considering linkage disequilibrium (LD) may help to explain additional phenotypic variation. In this article, we propose a longitudinal genetic random field model (LGRF), to test the association between a phenotype measured repeatedly during the course of an observational study and a set …
A Study Of The Parametric And Nonparametric Linear-Circular Correlation Coefficient, Robin Tu
Statistics
Circular statistics are specialized statistical methods that deal specifically with directional data. Angular data require specialized techniques due to the modulo 2π (in radians) or modulo 360° (in degrees) nature of angles.
Correlation, typically in terms of Pearson’s correlation coefficient, is a measure of association between two linear random variables x and y. In this paper, the specific circular technique of the parametric and nonparametric linear-circular correlation coefficient will be explored where correlation is no longer between two linear variables x and y, but between a linear random variable x and circular random variable θ.
A simulation …
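The parametric linear-circular correlation described above is commonly computed from the Pearson correlations of x with cos(θ) and sin(θ) (Mardia's R²). A minimal sketch, with illustrative data:

```python
import numpy as np

def linear_circular_r2(x, theta):
    """Squared linear-circular correlation between a linear variable x and an
    angular variable theta, combining the Pearson correlations of x with
    cos(theta) and sin(theta)."""
    c, s = np.cos(theta), np.sin(theta)
    rxc = np.corrcoef(x, c)[0, 1]
    rxs = np.corrcoef(x, s)[0, 1]
    rcs = np.corrcoef(c, s)[0, 1]
    return (rxc**2 + rxs**2 - 2 * rxc * rxs * rcs) / (1 - rcs**2)

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
print(linear_circular_r2(np.cos(theta), theta))   # 1.0: perfect association
rng = np.random.default_rng(3)
print(linear_circular_r2(rng.standard_normal(100), theta))  # near 0
```

Because the angle enters only through cos(θ) and sin(θ), the statistic respects the modulo-2π structure that an ordinary Pearson correlation with θ itself would ignore.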
The Effects Of Quantitative Easing In The United States: Implications For Future Central Bank Policy Makers, Matthew Q. Rubino
Senior Honors Projects, 2010-2019
The purpose of this thesis is to examine the effects of the Federal Reserve’s recent bond buying programs, specifically Quantitative Easing 1, Quantitative Easing 2, Operation Twist (or the Fed’s Maturity Extension Program), and Quantitative Easing 3. In this study, I provide a picture of the economic landscape leading up to the deployment of the programs, an overview of quantitative easing including each program’s respective objectives, and how and why the Fed decided to implement the programs. Using empirical analysis, I measure each program’s effectiveness by applying four models including a yield curve model, an inflation model, a money supply …
The Effects Of A Planned Missingness Design On Examinee Motivation And Psychometric Quality, Matthew S. Swain
Dissertations, 2014-2019
Assessment practitioners in higher education face increasing demands to collect assessment and accountability data to make important inferences about student learning and institutional quality. The validity of these high-stakes decisions is jeopardized, particularly in low-stakes testing contexts, when examinees do not expend sufficient motivation to perform well on the test. This study introduced planned missingness as a potential solution. In planned missingness designs, data on all items are collected but each examinee only completes a subset of items, thus increasing data collection efficiency, reducing examinee burden, and potentially increasing data quality. The current scientific reasoning test served as the Long …
Examining The Performance Of The Metropolis-Hastings Robbins-Monro Algorithm In The Estimation Of Multilevel Multidimensional Irt Models, Bozhidar M. Bashkov
Dissertations, 2014-2019
The purpose of this study was to review the challenges that exist in the estimation of complex (multidimensional) models applied to complex (multilevel) data and to examine the performance of the recently developed Metropolis-Hastings Robbins-Monro (MH-RM) algorithm (Cai, 2010a, 2010b), designed to overcome these challenges and implemented in both commercial and open-source software programs. Unlike other methods, which either rely on high-dimensional numerical integration or approximation of the entire multidimensional response surface, MH-RM makes use of Fisher’s Identity to employ stochastic imputation (i.e., data augmentation) via the Metropolis-Hastings sampler and then apply the stochastic approximation method of Robbins and Monro …
Do Footprint-Based Cafe Standards Make Car Models Bigger?, Brianna Marie Jean
Economics
Corporate Average Fuel Economy (CAFE) standards have historically been set equal across all manufacturer fleets of the same type. Concerns about varying costs across firms and safety implications of standards that are set homogeneously across firms and models resulted in a policy shift towards footprint-based standards. Under this type of standard, individual car models face targets based on the size of the area between the wheelbase and wheel track, so that larger models face less stringent standards, and manufacturers who make, on average, larger cars will face a lighter fleet standard. Theoretical models have shown that this type of policy …
Scientific Awareness At Ursinus College, Frank G. Devone
Mathematics Honors Papers
Ursinus College prides itself on creating well-rounded students, and recent initiatives, such as the Fellowships in the Ursinus Transition to the Undergraduate Research Experience Program and the Center for Science and the Common Good suggest that science is a vital part of the Ursinus liberal arts mission. A scientific awareness pilot survey was administered to a sample of Ursinus students drawn from the Class of 2014 and students residing at Ursinus during summer 2014. Experience and data collected from this pilot were used to create a final survey which was made available to all students at Ursinus College. The survey …
Adaptive Enrichment Designs For Randomized Trials With Delayed Endpoints, Using Locally Efficient Estimators To Improve Precision, Michael Rosenblum, Tianchen Qian, Yu Du, Huitong Qiu
Johns Hopkins University, Dept. of Biostatistics Working Papers
Adaptive enrichment designs involve preplanned rules for modifying enrollment criteria based on accrued data in an ongoing trial. For example, enrollment of a subpopulation where there is sufficient evidence of treatment efficacy, futility, or harm could be stopped, while enrollment for the remaining subpopulations is continued. Most existing methods for constructing adaptive enrichment designs are limited to situations where patient outcomes are observed soon after enrollment. This is a major barrier to the use of such designs in practice, since for many diseases the outcome of most clinical importance does not occur shortly after enrollment. We propose a new class …
Relationship Between High School Math Course Selection And Retention Rates At Otterbein University, Lauren A. Fisher
Undergraduate Honors Thesis Projects
Binary logistic regression was used to study the relationship between high school math course selection and retention rates at Otterbein University. Graduation rates from postsecondary institutions are low in the United States and, more specifically, at Otterbein. This study is important in helping to determine what can raise retention rates, and ultimately, graduation rates. It directs focus toward high school math course selection and what should be changed before entering a post-secondary institution. Otterbein will have a better idea of what type of students to recruit and which students may be good candidates with some extra help. Recruiting is expensive, …
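The model class used here, binary logistic regression, can be sketched on synthetic data. The course coding and retention outcomes below are hypothetical stand-ins, not the study's records; the fit uses Newton-Raphson, a standard way to maximize the logistic log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical coding of highest high-school math course completed,
# 1 (algebra) through 4 (calculus); outcome: retained (1) or not (0).
n = 1000
course = rng.integers(1, 5, size=n).astype(float)
p_true = 1 / (1 + np.exp(-(-1.0 + 0.6 * course)))   # assumed true model
retained = rng.binomial(1, p_true)

# Fit binary logistic regression by Newton-Raphson
Xd = np.column_stack([np.ones(n), course])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-Xd @ beta))
    W = p * (1 - p)                                  # IRLS weights
    beta += np.linalg.solve(Xd.T @ (Xd * W[:, None]), Xd.T @ (retained - p))

odds_ratio = np.exp(beta[1])   # multiplicative change in retention odds per course level
print(beta, odds_ratio)        # positive slope: more math, higher retention odds
```

The exponentiated slope is the usual quantity reported from such a model: the factor by which the odds of retention change for each additional level of math course taken.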
Global Network Inference From Ego Network Samples: Testing A Simulation Approach, Jeffrey A. Smith
Department of Sociology: Faculty Publications
Network sampling poses a radical idea: that it is possible to measure global network structure without the full population coverage assumed in most network studies. Network sampling is only useful, however, if a researcher can produce accurate global network estimates. This article explores the practicality of making network inference, focusing on the approach introduced in Smith (2012). The method uses sampled ego network data and simulation techniques to make inference about the global features of the true, unknown network. The validity check here includes more difficult scenarios than previous tests, including those that go beyond the initial scope conditions of …
Addressing The Zeros Problem: Regression Models For Outcomes With A Large Proportion Of Zeros, With An Application To Trial Outcomes, Theodore Eisenberg, Thomas Eisenberg, Martin T. Wells, Min Zhang
Cornell Law Faculty Publications
In law‐related and other social science contexts, researchers need to account for data with an excess number of zeros. In addition, dollar damages in legal cases also often are skewed. This article reviews various strategies for dealing with this data type. Tobit models are often applied to deal with the excess number of zeros, but these are more appropriate in cases of true censoring (e.g., when all negative values are recorded as zeros) and less appropriate when zeros are in fact often observed as the amount awarded. Heckman selection models are another methodology that is applied in this setting, yet …
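One standard strategy in this literature is a two-part (hurdle) model: a model for whether the outcome is zero, and a separate model for the magnitude of the positives. A minimal sketch without covariates, on synthetic award data; the lognormal assumption for positive damages is illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic outcomes with a large proportion of exact zeros (no award)
# and skewed positive values (lognormal damages) otherwise.
n = 2000
any_award = rng.binomial(1, 0.4, size=n)
damages = any_award * rng.lognormal(mean=10.0, sigma=1.0, size=n)

# Part 1: probability of a nonzero outcome
p_hat = (damages > 0).mean()

# Part 2: model the positives on the log scale. These zeros are observed
# outcomes, not censored values, which is why a Tobit model fits poorly here.
logs = np.log(damages[damages > 0])
mu_hat, s2_hat = logs.mean(), logs.var(ddof=1)

# Combine: E[Y] = P(Y > 0) * E[Y | Y > 0] under the lognormal assumption
mean_hat = p_hat * np.exp(mu_hat + s2_hat / 2)
print(mean_hat, damages.mean())   # the two mean estimates should be close
```

Splitting the model this way lets the zero process and the skewed magnitude process have separate parameters, which is exactly the flexibility the excess-zeros problem calls for.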
Surrogate Markers For Time-Varying Treatments And Outcomes, Jesse Hsu, Edward Kennedy, Jason Roy, Alisa Stephens-Shields, Dylan Small, Marshall Joffe
Edward H. Kennedy
A surrogate marker is a variable commonly used in clinical trials to guide treatment decisions when the outcome of ultimate interest is not available. A good surrogate marker is one where the treatment effect on the surrogate is a strong predictor of the effect of treatment on the outcome. We review the situation when there is one treatment delivered at baseline, one surrogate measured at one later time point, and one ultimate outcome of interest and discuss new issues arising when variables are time-varying. Most of the literature on surrogate markers has only considered simple settings with one treatment, one …
Best Practice Recommendations For Data Screening, Justin A. Desimone, Peter D. Harms, Alice J. Desimone
Department of Management: Faculty Publications
Survey respondents differ in their levels of attention and effort when responding to items. There are a number of methods researchers may use to identify respondents who fail to exert sufficient effort in order to increase the rigor of analysis and enhance the trustworthiness of study results. Screening techniques are organized into three general categories, which differ in impact on survey design and potential respondent awareness. Assumptions and considerations regarding appropriate use of screening techniques are discussed along with descriptions of each technique. The utility of each screening technique is a function of survey design and administration. Each technique has …
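One screening technique in this family flags insufficient-effort responding via the "longstring" index, the longest run of identical consecutive answers. A minimal sketch; the cutoff value is an assumption that in practice depends on the survey's design.

```python
def longstring(responses):
    """Length of the longest run of identical consecutive responses;
    long runs can indicate inattentive 'straight-lining'."""
    best = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

survey = [
    [3, 4, 2, 5, 1, 3, 4, 2, 5, 3],   # varied responses
    [4, 4, 4, 4, 4, 4, 4, 4, 4, 4],   # straight-liner
]
threshold = 6                          # illustrative cutoff, survey-dependent
flags = [longstring(r) >= threshold for r in survey]
print(flags)   # [False, True]
```

A post hoc index like this leaves the survey design untouched and is invisible to respondents, which is one of the categories of screening technique the article distinguishes.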
Review Of Naked Statistics: Stripping The Dread From Data By Charles Wheelan, Michael T. Catalano
Numeracy
Wheelan, Charles. Naked Statistics: Stripping the Dread from Data (New York, NY, W. W. Norton & Company, 2014). 282 pp. ISBN 978-0-393-07195-5
In his review of What Numbers Say and The Numbers Game, Rob Root (Numeracy 3(1): 9) writes “Popular books on quantitative literacy need to be easy to read, reasonably comprehensive in scope, and include examples that are thought-provoking and memorable.” Wheelan’s book certainly meets this description, and should be of interest to both the general public and those with a professional interest in numeracy. A moderately diligent learner can get a decent understanding of basic statistics …