Bivariate Analogs Of The Wilcoxon–Mann–Whitney Test And The Patel–Hoel Method For Interactions, 2020 University of Southern California

#### Bivariate Analogs Of The Wilcoxon–Mann–Whitney Test And The Patel–Hoel Method For Interactions, Rand Wilcox

*Journal of Modern Applied Statistical Methods*

A fundamental way of characterizing how two independent groups compare is in terms of the probability that a randomly sampled observation from the first group is less than a randomly sampled observation from the second group. The paper suggests a bivariate analog of this probability and investigates methods for computing confidence intervals. An interaction for a two-by-two design is investigated as well.
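The univariate probability underlying this comparison can be estimated directly by comparing all pairs across the two samples. A minimal sketch in Python (ties counted as 1/2, a common convention; this illustrates the baseline quantity, not the paper's bivariate analog):

```python
import itertools

def prob_less(x, y):
    """Estimate p = P(X < Y) from two independent samples by comparing
    every pair (a, b) with a from the first sample and b from the second.
    Ties are counted as 1/2 (a common convention, assumed here)."""
    pairs = list(itertools.product(x, y))
    wins = sum(1.0 if a < b else 0.5 if a == b else 0.0 for a, b in pairs)
    return wins / len(pairs)
```

For example, `prob_less([1, 2, 3], [2, 3, 4])` returns 7/9, reflecting that the second sample tends to be larger.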

Regression When There Are Two Covariates: Some Practical Reasons For Considering Quantile Grids, 2020 University of Southern California

#### Regression When There Are Two Covariates: Some Practical Reasons For Considering Quantile Grids, Rand Wilcox

*Journal of Modern Applied Statistical Methods*

When dealing with the association between some random variable and two covariates, extensive experience with smoothers indicates that often a linear model poorly reflects the nature of the association. A simple approach via quantile grids that reflects the nature of the association is given. The two main goals are to illustrate this approach can make a practical difference, and to describe R functions for applying it. Included are comments on dealing with more than two covariates.
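The simplest version of a quantile grid splits each covariate at its median and summarizes the outcome within each of the four resulting cells. A minimal sketch of that idea (the paper's R functions are more general; the function name here is illustrative):

```python
import statistics

def quantile_grid(x1, x2, y):
    """Summarize the association between y and two covariates with a 2x2
    quantile grid: split each covariate at its sample median and report
    the mean of y within each cell."""
    m1, m2 = statistics.median(x1), statistics.median(x2)
    cells = {}
    for a, b, yi in zip(x1, x2, y):
        key = (a > m1, b > m2)  # which quadrant of the grid the point falls in
        cells.setdefault(key, []).append(yi)
    return {k: statistics.fmean(v) for k, v in cells.items()}
```

Comparing the cell means against a fitted plane is one quick way to see when a linear model poorly reflects the association.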

Assessing The Accuracy Of Approximate Confidence Intervals Proposed For The Mean Of Poisson Distribution, 2020 Velayat University of Iranshahr, Iran

#### Assessing The Accuracy Of Approximate Confidence Intervals Proposed For The Mean Of Poisson Distribution, Alireza Shirvani, Malek Fathizadeh

*Journal of Modern Applied Statistical Methods*

The Poisson distribution is a standard model for analyzing count data. Because the distribution is discrete, constructing exact confidence intervals for its mean is difficult, and several approximate confidence intervals have been proposed instead. The purpose of this study is to compare these proposed confidence intervals simultaneously according to three criteria: average coverage probability, exact confidence coefficient, and average confidence interval length.
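Because the distribution is discrete, the coverage probability of any approximate interval can be computed exactly by summing the Poisson pmf over the counts whose interval covers the true mean. A sketch using the Wald interval for a single Poisson count (one standard approximate interval; the intervals the study compares may differ):

```python
import math

def wald_interval(x, z=1.96):
    """Wald (normal-approximation) interval for a Poisson mean from one count x."""
    half = z * math.sqrt(x)
    return (x - half, x + half)

def coverage(mu, z=1.96, x_max=200):
    """Exact coverage probability of the Wald interval at true mean mu,
    summing the Poisson pmf over all counts whose interval contains mu."""
    pmf = math.exp(-mu)  # P(X = 0)
    total = 0.0
    for x in range(x_max + 1):
        lo, hi = wald_interval(x, z)
        if lo <= mu <= hi:
            total += pmf
        pmf *= mu / (x + 1)  # recurrence: P(X = x+1) = P(X = x) * mu / (x + 1)
    return total
```

At mu = 5 the Wald interval covers the true mean only about 87% of the time, well below the nominal 95%; this is the kind of deficiency such comparisons reveal.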

Regression Modeling And Prediction By Individual Observations Versus Frequency, 2020 GfK North America

#### Regression Modeling And Prediction By Individual Observations Versus Frequency, Stan Lipovetsky

*Journal of Modern Applied Statistical Methods*

A regression model built from a dataset can sometimes show a low quality of fit and poor predictions of individual observations. However, using the frequencies of possible combinations of the predictors and the outcome, the same models with the same parameters may yield a high quality of fit and precise predictions for the frequencies of outcome occurrence. Linear and logistic regressions are used to give an explicit exposition of the results of regression modeling and prediction.
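The equivalence between the two representations can be seen in the simplest case: fitting a line by least squares to the distinct (predictor, outcome) combinations, weighted by their frequencies, reproduces the ordinary least-squares fit to the expanded individual-level data. A sketch for simple linear regression (the paper's setting is more general):

```python
def wls_line(xs, ys, ws=None):
    """Fit y = a + b*x by (frequency-)weighted least squares.
    With ws holding counts of repeated (x, y) combinations, the fit
    equals ordinary least squares on the expanded individual-level data."""
    if ws is None:
        ws = [1.0] * len(xs)
    W = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / W
    my = sum(w * y for w, y in zip(ws, ys)) / W
    sxy = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    sxx = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    b = sxy / sxx
    return my - b * mx, b
```

Fitting the individual observations and fitting the frequency table of distinct combinations yield identical coefficients.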

Analytical Closed-Form Solution For General Factor With Many Variables, 2020 GfK North America

#### Analytical Closed-Form Solution For General Factor With Many Variables, Stan Lipovetsky, Vladimir Manewitsch

*Journal of Modern Applied Statistical Methods*

The factor analytic triad method for a one-factor solution gives an explicit analytical form for a common latent factor built from three variables. The current work derives an analytical closed-form solution for a general latent factor in the multivariate case. The results can support both the theoretical description and the practical application of latent variable modeling, especially for big data, because the analytical closed-form solution is not affected by data dimensionality.

Extension Of First Passage Probability, 2020 University of Windsor

#### Extension Of First Passage Probability, Yiping Zhang

*Major Papers*

In this paper, we consider the extension of first passage probability. First, we present the first, second, third, and generally k-th passage probability of a Markov Chain moving from one state to another state through step-by-step calculation and two other matrix-version methods. Similarly, we compute the first passage probability of a Markov Chain moving from one state to multiple states. In all discussions, we take into account the situations that one state moves to a different state and returns to itself. Also, we find the mean number of steps needed from one state to another state in a Markov Chain ...
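The step-by-step calculation mentioned above follows a standard recursion: the chain first reaches state j from state i at step n if it moves to some intermediate state k != j and then first reaches j from k in n - 1 steps. A minimal sketch (one of the approaches the paper discusses; the matrix-version methods are not shown):

```python
def first_passage(P, i, j, n):
    """Probability that a Markov chain with transition matrix P, started
    in state i, first reaches state j at exactly step n, via the recursion
    f_ij(n) = sum_{k != j} P[i][k] * f_kj(n - 1), with f_ij(1) = P[i][j]."""
    if n == 1:
        return P[i][j]
    return sum(P[i][k] * first_passage(P, k, j, n - 1)
               for k in range(len(P)) if k != j)
```

For a symmetric two-state chain with all transition probabilities 0.5, the first passage probability from state 0 to state 1 at step n is 0.5**n.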

Generalized Matrix Decomposition Regression: Estimation And Inference For Two-Way Structured Data, 2019 University of Washington

#### Generalized Matrix Decomposition Regression: Estimation And Inference For Two-Way Structured Data, Yue Wang, Ali Shojaie, Tim Randolph, Jing Ma

*UW Biostatistics Working Paper Series*

Analysis of two-way structured data, i.e., data with structures among both variables and samples, is becoming increasingly common in ecology, biology and neuroscience. Classical dimension-reduction tools, such as the singular value decomposition (SVD), may perform poorly for two-way structured data. The generalized matrix decomposition (GMD, Allen et al., 2014) extends the SVD to two-way structured data and thus constructs singular vectors that account for both structures. While the GMD is a useful dimension-reduction tool for exploratory analysis of two-way structured data, it is unsupervised and cannot be used to assess the association between such data and an outcome of ...

Statistical Inference For Networks Of High-Dimensional Point Processes, 2019 University of Washington - Seattle Campus

#### Statistical Inference For Networks Of High-Dimensional Point Processes, Xu Wang, Mladen Kolar, Ali Shojaie

*UW Biostatistics Working Paper Series*

Fueled in part by recent applications in neuroscience, high-dimensional Hawkes processes have become a popular tool for modeling the network of interactions among multivariate point process data. While evaluating the uncertainty of network estimates is critical in scientific applications, existing methodological and theoretical work has focused only on estimation. To bridge this gap, this paper proposes a high-dimensional statistical inference procedure with theoretical guarantees for multivariate Hawkes processes. Key to this inference procedure is a new concentration inequality on the first- and second-order statistics for integrated stochastic processes, which summarize the entire history of the process. We apply this ...

Coverage Properties Of Weibull Prediction Interval Procedures To Contain A Future Number Of Failures, 2019 Iowa State University

#### Coverage Properties Of Weibull Prediction Interval Procedures To Contain A Future Number Of Failures, Fanqi Meng, William Q. Meeker

*Statistics Publications*

Prediction intervals are needed to quantify prediction uncertainty in, for example, warranty prediction and prediction of other kinds of field failures. Naïve prediction intervals (also known as intervals from the “plug-in method”) ignore the uncertainty in parameter estimates. Simulation-based calibration methods can be used to improve the accuracy of prediction interval coverage probabilities. This article investigates the finite-sample coverage probabilities for naive and calibrated prediction interval procedures for the number of future failures, based on the failure-time information obtained before a censoring time. We have designed and conducted a simulation experiment over combinations of factors with levels covering the ranges ...
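A naive (plug-in) interval can be sketched as follows: each of the units still unfailed at the censoring time fails by a future time with a conditional probability computed from the Weibull CDF at the point estimates, so the number of future failures is treated as binomial and the interval is formed from binomial quantiles. The names and setup below are illustrative assumptions, not the paper's notation; the plug-in step is exactly where parameter uncertainty is ignored:

```python
import math

def weibull_cdf(t, shape, scale):
    return 1.0 - math.exp(-((t / scale) ** shape))

def naive_prediction_interval(n_at_risk, shape_hat, scale_hat, t_c, t_w, alpha=0.05):
    """Naive ("plug-in") prediction interval for the number of failures among
    n_at_risk unfailed units between censoring time t_c and future time t_w,
    treating the ML estimates shape_hat, scale_hat as the true parameters."""
    # conditional probability that a unit surviving t_c fails by t_w
    s_c = 1.0 - weibull_cdf(t_c, shape_hat, scale_hat)
    p = (weibull_cdf(t_w, shape_hat, scale_hat)
         - weibull_cdf(t_c, shape_hat, scale_hat)) / s_c
    # binomial cdf, then approximate lower/upper quantiles
    cdf, acc = [], 0.0
    for k in range(n_at_risk + 1):
        acc += math.comb(n_at_risk, k) * p ** k * (1 - p) ** (n_at_risk - k)
        cdf.append(acc)
    lo = next(k for k, c in enumerate(cdf) if c >= alpha / 2)
    hi = next(k for k, c in enumerate(cdf) if c >= 1 - alpha / 2)
    return lo, hi
```

Calibration replaces the plug-in quantiles with simulation-adjusted ones so the interval's actual coverage is closer to nominal.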

The Estimation Of Missing Values In Rectangular Lattice Designs, 2019 University of Nigeria - Nsukka

#### The Estimation Of Missing Values In Rectangular Lattice Designs, Emmanuel Ogochukwu Ossai, Abimibola Victoria Oladugba

*Journal of Modern Applied Statistical Methods*

Algebraic expressions for estimating one or more missing observations in rectangular lattice designs with repetition were derived by minimizing the residual sum of squares. Results showed that the estimated values closely approximated the actual values.

Optimal Design For A Causal Structure, 2019 University of Nebraska-Lincoln

#### Optimal Design For A Causal Structure, Zaher Kmail

*Dissertations and Theses in Statistics*

Linear models and mixed models are important statistical tools. But in many natural phenomena, there is more than one endogenous variable involved and these variables are related in a sophisticated way. Structural Equation Modeling (SEM) is often used to model the complex relationships between the endogenous and exogenous variables. It was first implemented in research to estimate the strength and direction of direct and indirect effects among variables and to measure the relative magnitude of each causal factor.

Historically, traditional optimal design theory focuses on univariate linear, nonlinear, and mixed models. There is no current literature on the subject of ...

Prediction Of High School Graduation With Decision Trees, 2019 Missouri State University

#### Prediction Of High School Graduation With Decision Trees, Andrea M. Lee

*MSU Graduate Theses*

Having worked as an educator for the past fourteen years, I am always looking at data and determining ways to help our students. Graduation status is one area of interest. I wanted to apply statistical methods to try to find early indicators of students who may drop out, so that early intervention could be provided to those students. With early intervention, we may be able to lower our dropout rate. While studying different methods of pattern recognition, I found that the decision tree method in machine learning was best suited to the data I had collected. Decision trees ...
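The core computation a decision-tree learner repeats is choosing the split that best separates the classes. A teaching sketch of that step for one numeric feature, using Gini impurity (a standard criterion; this is not the specific software used in the thesis):

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for c in labels:
        counts[c] = counts.get(c, 0) + 1
    return 1.0 - sum((m / n) ** 2 for m in counts.values())

def best_split(x, y):
    """Find the threshold on one numeric feature minimizing the weighted
    Gini impurity of the two child nodes -- the step a tree learner
    repeats recursively. A teaching sketch, not a full CART implementation."""
    best_t, best_score = None, float("inf")
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score
```

`best_split([1, 2, 3, 4], [0, 0, 1, 1])` returns threshold 2 with impurity 0.0: the split x <= 2 separates the two classes perfectly.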

Information Order In Monotone Decision Problems Under Uncertainty, 2019 Iowa State University

#### Information Order In Monotone Decision Problems Under Uncertainty, Jian Li, Junjie Zhou

*Economics Working Papers*

This paper examines the robustness of Lehmann’s ranking of experiments (Lehmann, 1988) for decision makers who are uncertainty-averse à la Cerreia-Vioglio et al. (2011). We show that, assuming commitment, for all uncertainty-averse indices satisfying some mild assumptions, Lehmann’s informativeness ranking is equivalent to the induced uncertainty-averse value ranking of experiments for all agents with single-crossing vNM utility indices (Theorem 1). Moreover, Lehmann’s ranking can also be detected by varying the uncertainty-averse indices for a fixed finite collection of vNM utility indices (Theorem 2). Our findings suggest that Lehmann’s ranking can be a useful enrichment of Blackwell’s ...

Interpreting Patient Reported Outcomes In Orthopaedic Surgery: A Systematic Review, 2019 Western University

#### Interpreting Patient Reported Outcomes In Orthopaedic Surgery: A Systematic Review, Shgufta Docter, Zina Fathalla, Michael Lukacs, Michaela Khan, Morgan Jennings, Shu-Hsuan Liu, Dong Zi, Dianne Bryant

*Western Research Forum*

**Background:** Reporting methods of patient reported outcome measures (PROMs) vary in the orthopaedic surgery literature. While most studies report statistical significance, the interpretation of results would be improved if authors reported confidence intervals (CIs), the minimal clinically important difference (MCID), and the number needed to treat (NNT).

**Objective:** To assess the quality and interpretability of reporting the results of PROMs. To evaluate reporting, we will assess the proportion of studies that reported (1) 95% CIs, (2) the MCID, and (3) the NNT. To evaluate interpretation, we will assess the proportion of studies that discussed results using the MCID or the effect sizes and how ...

Measure Of Departure From Marginal Average Point-Symmetry For Two-Way Contingency Tables, 2019 Tokyo University of Science

#### Measure Of Departure From Marginal Average Point-Symmetry For Two-Way Contingency Tables, Kiyotaka Iki, Sadao Tomizawa

*Journal of Modern Applied Statistical Methods*

For the analysis of two-way contingency tables with ordered categories, Yamamoto, Tahata, Suzuki, and Tomizawa (2011) considered a measure to represent the degree of departure from marginal point-symmetry. The maximum value of that measure cannot distinguish two kinds of marginal complete asymmetry with respect to the midpoint. A measure is proposed which can distinguish these two kinds of marginal asymmetry with respect to the midpoint. A large-sample confidence interval for the proposed measure is also given.

The Impact Of Equating On Detection Of Treatment Effects, 2019 University of Alabama

#### The Impact Of Equating On Detection Of Treatment Effects, Youn-Jeng Choi, Seohyun Kim, Allan S. Cohen, Zhenqiu Lu

*Journal of Modern Applied Statistical Methods*

Equating makes it possible to compare performances on different forms of a test. Three different equating methods (baseline selection, subgroup, and subscore equating) using common-item item response theory equating were examined for their impact on detection of treatment effects in multilevel models.

Upper Record Values From Extended Exponential Distribution, 2019 Central University Haryana, Mahendergarh, India

#### Upper Record Values From Extended Exponential Distribution, Devendra Kumar, Sanku Dey

*Journal of Modern Applied Statistical Methods*

Some recurrence relations are established for the single and product moments of upper record values from the extended exponential distribution proposed by Nadarajah and Haghighi (2011) as an alternative to the gamma, Weibull, and exponentiated exponential distributions. Recurrence relations for negative moments and quotient moments of upper record values are also obtained. Using the relations for single and product moments, the means, variances, and covariances of upper record values from samples of sizes up to 10 are tabulated for various values of the shape and scale parameters. A characterization of this distribution based on conditional moments of record values is ...

The Andersen Likelihood Ratio Test With A Random Split Criterion Lacks Power, 2019 University College of Teacher Education Styria

#### The Andersen Likelihood Ratio Test With A Random Split Criterion Lacks Power, Georg Krammer

*Journal of Modern Applied Statistical Methods*

The Andersen LRT uses sample characteristics as split criteria for evaluating Rasch model fit or for theory-driven hypothesis testing for a test. The power and Type I error of a random split criterion were evaluated with a simulation study. Results consistently show that a random split criterion lacks power.

Weighted Version Of Generalized Inverse Weibull Distribution, 2019 University of Kashmir, Srinagar, India

#### Weighted Version Of Generalized Inverse Weibull Distribution, Sofi Mudiasir, S. P. Ahmad

*Journal of Modern Applied Statistical Methods*

Weighted distributions are used in many fields, such as medicine, ecology, and reliability. A weighted version of the generalized inverse Weibull distribution, known as weighted generalized inverse Weibull distribution (WGIWD), is proposed. Basic properties including mode, moments, moment generating function, skewness, kurtosis, and Shannon’s entropy are studied. The usefulness of the new model was demonstrated by applying it to a real-life data set. The WGIWD fits better than its submodels, such as length biased generalized inverse Weibull (LGIW), generalized inverse Weibull (GIW), inverse Weibull (IW) and inverse exponential (IE) distributions.

Calibration Of Measurements, 2019 University of British Columbia

#### Calibration Of Measurements, Edward Kroc, Bruno D. Zumbo

*Journal of Modern Applied Statistical Methods*

Traditional notions of measurement error typically rely on a strong mean-zero assumption on the expectation of the errors conditional on an unobservable “true score” (classical measurement error) or on the data themselves (Berkson measurement error). Weakly calibrated measurements for an unobservable true quantity are defined based on a weaker mean-zero assumption, giving rise to a measurement model of differential error. Applications show it retains many attractive features of estimation and inference when performing a naive data analysis (i.e. when performing an analysis on the error-prone measurements themselves), and other interesting properties not present in the classical or Berkson cases ...