Open Access. Powered by Scholars. Published by Universities.®

Physical Sciences and Mathematics Commons


Applied Statistics

Selected Works

Articles 31 - 60 of 337

Full-Text Articles in Physical Sciences and Mathematics

Section 12 Of The Canada Evidence Act And The Deliberations Of Simulated Juries, Valerie P. Hans, Anthony N. Doob Jun 2015

Valerie P. Hans

In the past, there have been three major approaches to the experimental investigation of the jury. First, juror selection research involves the study of the relation between verdicts or leniency toward certain classes of defendants and the characteristics of potential jurors. The second class of research is group study, in which the amount and style of individual participation are observed within the context of simulated jury deliberations (e.g., Strodtbeck, James and Hawkins, 1957). Finally, experimental psychology has made another contribution to the study of the jury; numerous researchers have conducted experimental studies employing legal stimulus materials. Typically, in such a …


How Much Justice Hangs In The Balance? A New Look At Hung Jury Rates, Paula Hannaford-Agor, Valerie P. Hans, G. Thomas Munsterman Jun 2015

Valerie P. Hans

Reports of apparent increases in the number of hung juries in some jurisdictions have caused concern among policy makers. A 1995 report by the California District Attorneys Association cited hung jury rates in 1994 that exceeded 15 percent in some jurisdictions (the rates varied from 3 to 23 percent across the nine counties for which data were available). In 1996, the District of Columbia Superior Court reported a higher-than-expected hung jury rate of 11 percent. Why juries hang at these rates isn't clear, but some commentators have claimed that hung juries are the product of eccentric or nullifying holdout jurors …


The Timing Of Opinion Formation By Jurors In Civil Cases: An Empirical Examination, Paula Hannaford-Agor, Valerie P. Hans, Nicole L. Mott, G. Thomas Munsterman Jun 2015

Valerie P. Hans

The question of when and how jurors form opinions about evidence presented at trial has been the focus of seemingly endless speculation. For lawyers, the question is how to capture the attention and approval of the jury at the earliest possible point in the trial. Their goal is to maximize the persuasiveness of their arguments--or at least to minimize the persuasiveness of those of the opposing side. Judges, in contrast, are more concerned about prejudgment. They regularly admonish jurors to suspend judgment until after all the evidence has been presented and after the jurors have been instructed on the law. …


An Analysis Of Public Attitudes Toward The Insanity Defense, Valerie P. Hans Jun 2015

Valerie P. Hans

Results from a public opinion survey of knowledge, attitudes, and support for the insanity defense indicate that people dislike the insanity defense for both retributive and utilitarian reasons: they want insane law-breakers punished, and they believe that insanity defense procedures fail to protect the public. However, people vastly overestimate the use and success of the insanity plea. Several attitudinal and demographic variables that other researchers have found to be associated with people's support for the death penalty and perceptions of criminal sentencing are also related to support for the insanity defense. Implications for public policy are discussed.


Statistics In The Jury Box: How Jurors Respond To Mitochondrial DNA Match Probabilities, David H. Kaye, Valerie P. Hans, B. Michael Dann, Erin J. Farley, Stephanie Albertson Jun 2015

Valerie P. Hans

This article describes parts of an unusually realistic experiment on the comprehension of expert testimony on mitochondrial DNA (mtDNA) sequencing in a criminal trial for robbery. Specifically, we examine how jurors who responded to summonses for jury duty evaluated portions of videotaped testimony involving probabilities and statistics. Although some jurors showed susceptibility to classic fallacies in interpreting conditional probabilities, the jurors as a whole were not overwhelmed by a 99.98% exclusion probability that the prosecution presented. Cognitive errors favoring the defense were more prevalent than ones favoring the prosecution. These findings lend scant support to the legal argument that mtDNA …
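As an aside that is not drawn from the study itself, the sketch below illustrates the kind of conditional-probability reasoning at issue: a 99.98% exclusion probability (a 0.02% random-match probability) only yields a probability that the defendant is the source once it is combined with a prior, and transposing the conditional gives a very different answer. The prior values and the simple two-outcome model are assumptions made purely for illustration.

```python
# Illustrative only: how a 99.98% exclusion probability (a 0.02% random-match
# probability) translates into a source probability under Bayes' rule.
# The priors below are hypothetical, not values from the study.

def posterior_source_probability(prior, random_match_prob):
    """P(defendant is source | mtDNA match), assuming the test always matches
    the true source and matches a non-source with random_match_prob."""
    return prior / (prior + (1.0 - prior) * random_match_prob)

random_match_prob = 1 - 0.9998          # 0.02% chance a random person matches

for prior in (0.5, 0.1, 0.001):         # hypothetical prior beliefs
    post = posterior_source_probability(prior, random_match_prob)
    print(f"prior={prior:>6}: P(source | match) = {post:.4f}")

# The "transposition" fallacy instead equates P(match | not source) with
# P(not source | match); the two differ sharply when the prior is low.
```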


Nullification At Work? A Glimpse From The National Center For State Courts Study Of Hung Juries, Paula Hannaford-Agor, Valerie P. Hans Jun 2015

Valerie P. Hans

In recent years, the criminal justice community has become increasingly concerned about the possibility that jury nullification is the underlying motivation for increasing numbers of acquittals and mistrials due to jury deadlock in felony jury trials. In this Article, the authors discuss the inherent difficulty in defining jury nullification and identifying its occurrence in actual trials. They review the evolution in public and legal opinion about the legitimacy of jury nullification and contemporary judicial responses to perceived instances of jury nullification. Finally, the authors examine the possible presence of jury nullification through empirical analysis of data collected from 372 felony …


Permitting Jury Discussions During Trial: Impact Of The Arizona Reform, Paula Hannaford-Agor, Valerie P. Hans, G. Thomas Munsterman Jun 2015

Valerie P. Hans

A field experiment tested the effect of an Arizona civil jury reform that allows jurors to discuss evidence among themselves during the trial. Judges, jurors, attorneys, and litigants completed questionnaires in trials randomly assigned to either a Trial Discussions condition, in which jurors were permitted to discuss the evidence during trial, or a No Discussions condition, in which jurors were prohibited from discussing evidence during trial according to traditional admonitions. Judicial agreement with jury verdicts did not differ between conditions. Permitting jurors to discuss the evidence did affect the degree of certainty that jurors reported about their preferences at the …


Examining The Literature On “Networks In Space And In Time.” An Introduction, Luca De Benedictis, Prosperina Vitale, Stanley Wasserman Mar 2015

Luca De Benedictis

The Network Science special issue “Networks in space and in time: methods and applications” contributes to the debate on contextual analysis in network science. It includes seven research papers that shed light on the analysis of network phenomena studied within geographic space and across temporal dimensions. In these papers, methodological issues as well as specific applications are described from different fields. We take the seven papers, study their citations and texts, and relate them to the broader literature. By exploiting the bibliographic information and the textual data of these seven documents, citation analysis and lexical correspondence analysis allow us …


Judge-Jury Agreement In Criminal Cases: A Partial Replication Of Kalven And Zeisel's The American Jury, Theodore Eisenberg, Paula L. Hannaford-Agor, Valerie P. Hans, Nicole L. Waters, G. Thomas Munsterman, Stewart J. Schwab, Martin T. Wells Feb 2015

Stewart J Schwab

This study uses a new criminal case data set to partially replicate Kalven and Zeisel's classic study of judge-jury agreement. The data show essentially the same rate of judge-jury agreement as did Kalven and Zeisel for cases tried almost 50 years ago. This study also explores judge-jury agreement as a function of evidentiary strength (as reported by both judges and juries), evidentiary complexity (as reported by both judges and juries), legal complexity (as reported by judges), and locale. Regardless of which adjudicator's view of evidentiary strength is used, judges tend to convict more than juries in cases of "middle" evidentiary …


An Empirical Analysis Of CEO Employment Contracts: What Do Top Executives Bargain For?, Stewart J. Schwab, Randall S. Thomas Feb 2015

Stewart J Schwab

No abstract provided.


How Employment-Discrimination Plaintiffs Fare In The Federal Courts Of Appeals, Kevin M. Clermont, Theodore Eisenberg, Stewart J. Schwab Feb 2015

Stewart J Schwab

Employment-discrimination plaintiffs swim against the tide. Compared to the typical plaintiff, they win a lower proportion of cases during pretrial and after trial. Then, many of their successful cases are appealed. On appeal, they have a harder time upholding their successes, as well as a harder time reversing adverse outcomes. This tough story does not describe some tiny corner of the litigation world. Employment-discrimination cases constitute an increasing fraction of the federal civil docket, now reigning as the largest single category of cases at nearly 10 percent. In this article, we use official government data to describe the appellate phase of this …


Using The Bootstrap For Estimating The Sample Size In Statistical Experiments, Maher Qumsiyeh Feb 2015

Maher Qumsiyeh

Efron’s (1979) bootstrap has been shown to be an effective method for statistical estimation and testing. It provides better estimates than normal approximations for studentized means, least squares estimates, and many other statistics of interest. It can be used to select the active factors (those that have an effect on the response) in experimental designs. This article shows that the bootstrap can be used to determine the sample size, or the number of runs, required to achieve a certain confidence level in statistical experiments.
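The listing gives only the abstract; as a rough illustration of the general idea, and not the authors' procedure, the Python sketch below resamples a pilot dataset to gauge how many runs are needed before a percentile bootstrap confidence interval for the mean reaches a target width. The pilot data, target width, and confidence level are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
pilot = rng.exponential(scale=2.0, size=30)       # hypothetical pilot data

def bootstrap_ci_width(sample, n, level=0.95, n_boot=2000):
    """Width of a percentile bootstrap CI for the mean at sample size n,
    treating the pilot sample as a stand-in for the population."""
    means = np.empty(n_boot)
    for b in range(n_boot):
        means[b] = rng.choice(sample, size=n, replace=True).mean()
    lo, hi = np.quantile(means, [(1 - level) / 2, 1 - (1 - level) / 2])
    return hi - lo

target_width = 1.0                                # assumed precision goal
for n in (20, 40, 80, 160, 320):
    width = bootstrap_ci_width(pilot, n)
    print(f"n={n:4d}  approx. 95% CI width = {width:.3f}")
    if width <= target_width:
        print(f"-> roughly n = {n} runs would meet the target width")
        break
```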


Comparison Of Re-Sampling Methods To Generalized Linear Models And Transformations In Factorial And Fractional Factorial Designs, Maher Qumsiyeh, Gerald Shaughnessy Feb 2015

Maher Qumsiyeh

Experimental situations in which observations are not normally distributed frequently occur in practice. A common situation occurs when responses are discrete in nature, for example, counts. One way to analyze such experimental data is to use a transformation for the responses; another is to use a link function based on a generalized linear model (GLM) approach. Re-sampling is employed as an alternative method to analyze non-normal, discrete data. Results are compared to those obtained by the previous two methods.
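Purely as an illustration of the three approaches being compared, and not of the authors' simulations, the sketch below fits a square-root transformation model, a Poisson GLM with a log link, and a bootstrap re-sampling analysis to the same simulated count responses from a small factorial design; the design, effect sizes, and replicate counts are arbitrary.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated 2x2 factorial with Poisson (count) responses, 5 replicates per cell.
design = pd.DataFrame([(a, b) for a in (0, 1) for b in (0, 1)
                       for _ in range(5)], columns=["A", "B"])
mu = np.exp(1.0 + 0.6 * design["A"] + 0.1 * design["B"])   # assumed true model
design["y"] = rng.poisson(mu)

# (1) Transformation approach: ordinary least squares on sqrt(y).
ols_fit = smf.ols("np.sqrt(y) ~ A + B", data=design).fit()

# (2) GLM approach: Poisson family with the canonical log link.
glm_fit = smf.glm("y ~ A + B", data=design,
                  family=sm.families.Poisson()).fit()

# (3) Re-sampling approach: bootstrap the estimated effect of factor A.
boot_effects = []
for _ in range(500):
    resampled = design.sample(frac=1.0, replace=True)
    fit = smf.glm("y ~ A + B", data=resampled,
                  family=sm.families.Poisson()).fit()
    boot_effects.append(fit.params["A"])

print(ols_fit.params)
print(glm_fit.params)
print("bootstrap 95% CI for A effect:", np.percentile(boot_effects, [2.5, 97.5]))
```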


Marginal Structural Models: An Application To Incarceration And Marriage During Young Adulthood, Valerio Bacak, Edward Kennedy Jan 2015

Edward H. Kennedy

Advanced methods for panel data analysis are commonly used in research on family life and relationships, but the fundamental issue of simultaneous time-dependent confounding and mediation has received little attention. In this article the authors introduce inverse-probability-weighted estimation of marginal structural models, an approach to causal analysis that (unlike conventional regression modeling) appropriately adjusts for confounding variables on the causal pathway linking the treatment with the outcome. They discuss the need for marginal structural models in social science research and describe their estimation in detail. Substantively, the authors contribute to the ongoing debate on the effects of incarceration on marriage …
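As a minimal sketch of the estimation approach described, the code below computes stabilized inverse-probability-of-treatment weights from a logistic propensity model and fits a weighted outcome regression. It uses a single-time-point toy dataset rather than the article's longitudinal incarceration-and-marriage data, so the variable names and effect sizes are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000

# Hypothetical point-treatment setting: confounder L, binary treatment A, outcome Y.
df = pd.DataFrame({"L": rng.normal(size=n)})
p_treat = 1 / (1 + np.exp(-(-0.5 + 0.8 * df["L"])))
df["A"] = rng.binomial(1, p_treat)
df["Y"] = 1.0 * df["A"] + 0.7 * df["L"] + rng.normal(size=n)

# Propensity model for treatment given the confounder.
ps = smf.logit("A ~ L", data=df).fit(disp=False).predict(df)

# Stabilized IPT weights: marginal treatment probability over conditional.
p_marginal = df["A"].mean()
df["w"] = np.where(df["A"] == 1, p_marginal / ps, (1 - p_marginal) / (1 - ps))

# Weighted regression of Y on A alone approximates the marginal structural
# model E[Y(a)] = b0 + b1 * a in the weighted pseudo-population.
msm = smf.wls("Y ~ A", data=df, weights=df["w"]).fit()
print(msm.params)   # b1 should be near the assumed true effect of 1.0
```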


Negative Binomial Regression, 2nd Ed, 2nd Print, Errata And Comments, Joseph Hilbe Jan 2015

Joseph M Hilbe

Errata and comments for the second printing of NBR2 (2nd edition). All errata from the first printing have been corrected. Some new and added text is included as well.


Bayesian Function-On-Function Regression For Multi-Level Functional Data, Mark J. Meyer, Brent A. Coull, Francesco Versace, Paul Cinciripini, Jeffrey S. Morris Jan 2015

Jeffrey S. Morris

Medical and public health research increasingly involves the collection of more and more complex and high-dimensional data. In particular, functional data, where the unit of observation is a curve or set of curves that are finely sampled over a grid, is frequently obtained. Moreover, researchers often sample multiple curves per person resulting in repeated functional measures. A common question is how to analyze the relationship between two functional variables. We propose a general function-on-function regression model for repeatedly sampled functional data, presenting a simple model as well as a more extensive mixed model framework, along with multiple functional posterior …


Functional Regression, Jeffrey S. Morris Jan 2015

Jeffrey S. Morris

Functional data analysis (FDA) involves the analysis of data whose ideal units of observation are functions defined on some continuous domain, and the observed data consist of a sample of functions taken from some population, sampled on a discrete grid. Ramsay and Silverman's 1997 textbook sparked the development of this field, which has accelerated in the past 10 years to become one of the fastest growing areas of statistics, fueled by the growing number of applications yielding this type of data. One unique characteristic of FDA is the need to combine information both across and within functions, which Ramsay and …


Ordinal Probit Wavelet-Based Functional Models For eQTL Analysis, Mark J. Meyer, Jeffrey S. Morris, Craig P. Hersh, Jarret D. Morrow, Christoph Lange, Brent A. Coull Jan 2015

Jeffrey S. Morris

Current methods for conducting expression Quantitative Trait Loci (eQTL) analysis are limited in scope to pairwise association testing between a single nucleotide polymorphism (SNP) and an expression probe set in a region around a gene of interest, thus ignoring the inherent between-SNP correlation. To determine association, p-values are then typically adjusted using the plug-in False Discovery Rate. As many SNPs are interrogated in the region and multiple probe sets are taken, the current approach requires the fitting of a large number of models. We propose to remedy this by introducing a flexible function-on-scalar regression that models the genome as a functional outcome. The …


The Number Of Subjects Per Variable Required In Linear Regression Analyses, Peter Austin, Ewout Steyerberg Jan 2015

Peter Austin

Objectives: To determine the number of independent variables that can be included in a linear regression model.

Study Design and Setting: We used a series of Monte Carlo simulations to examine the impact of the number of subjects per variable (SPV) on the accuracy of estimated regression coefficients and standard errors, on the empirical coverage of estimated confidence intervals, and on the accuracy of the estimated R² of the fitted model.

Results: A minimum of approximately two SPV tended to result in estimation of regression coefficients with relative bias of less than 10%. Furthermore, with this minimum number of SPV, …
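A toy Monte Carlo in the spirit of the design described (not the authors' actual simulation settings) might look like the following, tracking the relative bias of an ordinary least squares coefficient as the subjects-per-variable ratio varies; the number of predictors, true coefficient value, and error distribution are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n_vars, true_beta, n_sims = 10, 0.5, 2000

for spv in (2, 5, 10, 20):                   # subjects per variable
    n = spv * n_vars
    estimates = np.empty(n_sims)
    for s in range(n_sims):
        X = rng.normal(size=(n, n_vars))
        y = X @ np.full(n_vars, true_beta) + rng.normal(size=n)
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        estimates[s] = beta_hat[0]           # track the first coefficient
    rel_bias = (estimates.mean() - true_beta) / true_beta
    print(f"SPV={spv:3d}: relative bias of beta_1 = {rel_bias:+.3%}")
```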


Moving Towards Best Practice When Using Inverse Probability Of Treatment Weighting (IPTW) Using The Propensity Score To Estimate Causal Treatment Effects In Observational Studies, Peter Austin, Elizabeth Stuart Jan 2015

Peter Austin

The propensity score is defined as a subject’s probability of treatment selection, conditional on observed baseline covariates. Weighting subjects by the inverse probability of treatment received creates a synthetic sample in which treatment assignment is independent of measured baseline covariates. Inverse probability of treatment weighting (IPTW) using the propensity score allows one to obtain unbiased estimates of average treatment effects. However, these estimates are only valid if there are no residual systematic differences in observed baseline characteristics between treated and control subjects in the sample weighted by the estimated inverse probability of treatment. We report on a systematic literature review, in …
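To illustrate the balance diagnostic the abstract emphasizes (checking for residual systematic differences in baseline covariates in the weighted sample), the sketch below computes weighted standardized differences after IPTW on simulated data. The covariates, propensity model, and ATE-style weights are assumptions for the example, not a reproduction of the review's recommendations.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 5000
df = pd.DataFrame({"age": rng.normal(60, 10, n),
                   "severity": rng.normal(0, 1, n)})
logit_p = -4 + 0.05 * df["age"] + 0.8 * df["severity"]
df["treated"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

ps = smf.logit("treated ~ age + severity", data=df).fit(disp=False).predict(df)
df["w"] = np.where(df["treated"] == 1, 1 / ps, 1 / (1 - ps))    # ATE weights

def weighted_standardized_difference(x, treated, w):
    """Weighted standardized difference in means between treatment groups."""
    t, c = treated == 1, treated == 0
    m_t, m_c = np.average(x[t], weights=w[t]), np.average(x[c], weights=w[c])
    v_t = np.average((x[t] - m_t) ** 2, weights=w[t])
    v_c = np.average((x[c] - m_c) ** 2, weights=w[c])
    return (m_t - m_c) / np.sqrt((v_t + v_c) / 2)

for cov in ("age", "severity"):
    d = weighted_standardized_difference(df[cov].to_numpy(),
                                         df["treated"].to_numpy(),
                                         df["w"].to_numpy())
    print(f"{cov:>9}: weighted standardized difference = {d:+.3f}")
```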


On The Interpretation Of Multi-Year Estimates Of The American Community Survey As Period Estimates, Chaitra Nagaraja, Tucker Mcelroy Dec 2014

Chaitra H Nagaraja

The rolling sample methodology of the American Community Survey introduces temporal distortions, resulting in Multi-Year Estimates that measure aggregate activity over three or five years. This paper introduces a novel, nonparametric method for quantifying the impact of viewing multi-year estimates as functions of single-year estimates belonging to the same time span. The method is based on examining the changes to confidence interval coverage. As an application of primary interest, the interpretation of a multi-year estimate as the simple average of single-year estimates is a viewpoint that underpins the published estimates of sampling variability. Therefore it is vital to ascertain the …


Financial Statement Fraud Detection Using Supervised Learning Methods (Ph.D. Dissertation), Adrian Gepp Dec 2014

Adrian Gepp

No abstract provided.


Promoting Similarity Of Model Sparsity Structures In Integrative Analysis Of Cancer Genetic Data, Shuangge Ma Dec 2014

Shuangge Ma

In profiling studies, the analysis of a single dataset often leads to unsatisfactory results because of the small sample size. Multi-dataset analysis utilizes information across multiple independent datasets and outperforms single-dataset analysis. Among the available multi-dataset analysis methods, integrative analysis methods aggregate and analyze raw data and outperform meta-analysis methods, which analyze multiple datasets separately and then pool summary statistics. In this study, we conduct integrative analysis and marker selection under the heterogeneity structure, which allows different datasets to have overlapping but not necessarily identical sets of markers. Under certain scenarios, it is reasonable to expect some similarity of identified …


Predicting Financial Distress: A Comparison Of Survival Analysis And Decision Tree Techniques, Adrian Gepp, Kuldeep Kumar Dec 2014

Adrian Gepp

Financial distress and then the consequent failure of a business is usually an extremely costly and disruptive event. Statistical financial distress prediction models attempt to predict whether a business will experience financial distress in the future. Discriminant analysis and logistic regression have been the most popular approaches, but there is also a large number of alternative cutting-edge data mining techniques that can be used. In this paper, a semi-parametric Cox survival analysis model and non-parametric CART decision trees have been applied to financial distress prediction and compared with each other as well as the most popular approaches. This …
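For readers who want to see what the two model classes look like in code, the hypothetical sketch below fits a semi-parametric Cox model (via the lifelines package) and a CART-style decision tree (via scikit-learn) to simulated firm data; the predictors, distress definition, and five-year window are invented and do not come from the paper.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter                 # semi-parametric Cox model
from sklearn.tree import DecisionTreeClassifier   # CART-style decision tree

rng = np.random.default_rng(5)
n = 1000

# Hypothetical firm-level predictors and time-to-distress outcome.
firms = pd.DataFrame({"leverage": rng.uniform(0, 1, n),
                      "liquidity": rng.uniform(0, 2, n)})
hazard = 0.05 * np.exp(2.0 * firms["leverage"] - 0.5 * firms["liquidity"])
firms["time"] = rng.exponential(1 / hazard)
firms["distressed"] = (firms["time"] < 5).astype(int)    # 5-year window
firms.loc[firms["distressed"] == 0, "time"] = 5          # censor at 5 years

# Survival-analysis approach: Cox proportional hazards on time to distress.
cph = CoxPHFitter()
cph.fit(firms, duration_col="time", event_col="distressed")
print(cph.summary[["coef", "p"]])

# Decision-tree approach: classify distressed vs. not within the window.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(firms[["leverage", "liquidity"]], firms["distressed"])
print("tree training accuracy:",
      tree.score(firms[["leverage", "liquidity"]], firms["distressed"]))
```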


Case Studies In Evaluating Time Series Prediction Models Using The Relative Mean Absolute Error, Nicholas G. Reich, Justin Lessler, Krzysztof Sakrejda, Stephen A. Lauer, Sopon Iamsirithaworn, Derek A T Cummings Dec 2014

Nicholas G Reich

Statistical prediction models inform decision-making processes in many real-world settings. Prior to using predictions in practice, one must rigorously test and validate candidate models to ensure that the proposed predictions have sufficient accuracy to be used in practice. In this paper, we present a framework for evaluating time series predictions that emphasizes computational simplicity and an intuitive interpretation using the relative mean absolute error metric. For a single time series, this metric enables comparisons of candidate model predictions against naive reference models, a method that can provide useful and standardized performance benchmarks. Additionally, in applications with multiple time series, this …
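A minimal implementation of the metric described, the relative mean absolute error of a candidate forecast against a naive reference, might look like the following; the observed series, candidate forecast, and persistence reference values are made up for the example.

```python
import numpy as np

def relative_mae(observed, predicted, reference):
    """Relative MAE: candidate model MAE divided by reference model MAE.
    Values below 1 indicate the candidate beats the naive reference."""
    observed, predicted, reference = map(np.asarray, (observed, predicted, reference))
    return np.mean(np.abs(observed - predicted)) / np.mean(np.abs(observed - reference))

# Hypothetical monthly case counts, a candidate forecast, and a naive
# "persistence" reference that carries the previous observation forward.
observed  = np.array([120, 135, 150, 160, 140, 130])
candidate = np.array([118, 140, 149, 155, 148, 128])
naive     = np.array([110, 120, 135, 150, 160, 140])   # prior month's value

print(f"relative MAE = {relative_mae(observed, candidate, naive):.3f}")
```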


Simulating Univariate And Multivariate Nonnormal Distributions Through The Method Of Percentiles, Jennifer Koran, Todd C. Headrick, Tzu Chun Kuo Dec 2014

Todd Christopher Headrick

This article derives a standard normal-based power method polynomial transformation for Monte Carlo simulation studies, approximating distributions, and fitting distributions to data based on the method of percentiles. The proposed method is used primarily when (1) conventional (or L) moment-based estimators such as skew (or L-skew) and kurtosis (or L-kurtosis) are unknown or (2) data are unavailable but percentiles are known (e.g., standardized test score reports). The proposed transformation also has the advantage that solutions to polynomial coefficients are available in simple closed form and thus obviates numerical equation solving. A procedure is also described for simulating power method …
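As an illustration of the forward direction only, the sketch below applies a third-order standard normal-based power method polynomial with arbitrary coefficients (not values derived in the article) and reads off the percentile-based summaries that the method of percentiles works from; the closed-form coefficient solutions the article derives are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)

# Arbitrary illustrative coefficients for Y = c0 + c1*Z + c2*Z^2 + c3*Z^3,
# where Z is standard normal; these are NOT values from the article.
c0, c1, c2, c3 = 0.0, 0.9, 0.2, 0.05

z = rng.standard_normal(100_000)
y = c0 + c1 * z + c2 * z**2 + c3 * z**3     # power method transformation

# Percentile-based summaries of the kind the method of percentiles matches.
p10, p50, p90 = np.percentile(y, [10, 50, 90])
print(f"median                 = {p50:.3f}")
print(f"inter-decile range     = {p90 - p10:.3f}")
print(f"left/right tail spread = {(p50 - p10) / (p90 - p50):.3f}")
```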


Optimal Full Matching For Survival Outcomes: A Method That Merits More Widespread Use, Peter Austin, Elizabeth Stuart Dec 2014

Peter Austin

Matching on the propensity score is a commonly used analytic method for estimating the effects of treatments on outcomes. Commonly used propensity score matching methods include nearest neighbor matching and nearest neighbor caliper matching. Rosenbaum (1991) proposed an optimal full matching approach, in which matched strata are formed consisting of either one treated subject and at least one control subject or one control subject and at least one treated subject. Full matching has been used rarely in the applied literature. Furthermore, its performance for use with survival outcomes has not been rigorously evaluated. We propose a method to use full …
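Optimal full matching is usually carried out with specialized software (for example, the R optmatch package), so the sketch below instead implements the simpler nearest-neighbor caliper matching on the propensity score that the abstract names as a common comparator; the data, logistic propensity model, and 0.2-standard-deviation caliper on the logit scale are assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 1000
df = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})
df["treated"] = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * df["x1"] + 0.5 * df["x2"]))))

# Propensity score from a logistic model; match on its logit, as is common.
ps = smf.logit("treated ~ x1 + x2", data=df).fit(disp=False).predict(df)
df["lps"] = np.log(ps / (1 - ps))
caliper = 0.2 * df["lps"].std()          # assumed caliper width

treated = df[df["treated"] == 1]
controls = df[df["treated"] == 0].copy()
pairs = []
for idx, row in treated.iterrows():      # greedy nearest-neighbor matching
    if controls.empty:
        break
    dist = (controls["lps"] - row["lps"]).abs()
    best = dist.idxmin()
    if dist[best] <= caliper:            # only accept matches within caliper
        pairs.append((idx, best))
        controls = controls.drop(best)   # matching without replacement

print(f"matched {len(pairs)} of {len(treated)} treated subjects")
```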


Reconciling Experimental Incoherence With Real-World Coherence In Punitive Damages, Theodore Eisenberg, Jeffrey J. Rachlinski, Martin T. Wells Dec 2014

Jeffrey J. Rachlinski

Experimental evidence generated in controlled laboratory studies suggests that the legal system in general, and punitive damages awards in particular, should display an incoherent pattern. According to the prediction, inexperienced decisionmakers, such as juries, should fail to convert their qualitative judgments of defendants' conduct into consistent, meaningful dollar amounts. This Article tests this prediction and finds modest support for the thesis that experience across different types of cases will lead to greater consistency in awards. Despite this support, numerous studies of damage awards in real cases detect a generally sensible pattern of damage awards. This Article tries to reconcile the …


Implicit Racial Attitudes Of Death Penalty Lawyers, Theodore Eisenberg, Sheri Lynn Johnson Dec 2014

Sheri Lynn Johnson

Defense attorneys commonly suspect that the defendant's race plays a role in prosecutors' decisions to seek the death penalty, especially when the victim of the crime was white. When the defendant is convicted of the crime and sentenced to death, it is equally common for such attorneys to question the racial attitudes of the jury. These suspicions are not merely partisan conjectures; ample historical, statistical, and anecdotal evidence supports the inference that race matters in capital cases. Even the General Accounting Office of the United States concludes as much. Despite McCleskey v. Kemp, in which the United States Supreme Court …


The Effects Of Intent: Do We Know How Legal Standards Work?, Theodore Eisenberg, Sheri Lynn Johnson Dec 2014

Sheri Lynn Johnson

No one knows how the intent standard works in racial discrimination cases, though many have speculated. To test the speculation, this study examines how the intent standard actually operates. Its findings cast doubt on whether we really know how any legal standard functions.