Open Access. Powered by Scholars. Published by Universities.®

Physical Sciences and Mathematics Commons


Articles 1 - 27 of 27

Full-Text Articles in Physical Sciences and Mathematics

A New Method To Determine The Posterior Distribution Of Coefficient Alpha, John Mart V. Delosreyes Oct 2023

Psychology Theses & Dissertations

The behavioral and social sciences focus on non-physical, psychological constructs (i.e., constructs). These constructs are measured indirectly, using measurement instruments whose questions capture the manifestations of the constructs. The indirect nature of measuring constructs creates a need to ensure that measurement instruments are reliable. The most popular statistic used to estimate reliability is coefficient alpha, as it is easy to compute and has properties that make it desirable to use. Coefficient alpha’s popularity has resulted in a wide breadth of research into its qualities. Notably, research on coefficient alpha’s distribution has led to developments …
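Coefficient alpha's ease of computation can be illustrated with a minimal Python sketch (hypothetical item-response data; not part of the dissertation):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient (Cronbach's) alpha for a respondents-by-items score matrix."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 200 respondents answering 5 Likert-type items
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
scores = np.clip(np.rint(3 + latent + rng.normal(scale=0.8, size=(200, 5))), 1, 5)
print(round(cronbach_alpha(scores), 3))
```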


Parametric And Reliability Estimation Of The Kumaraswamy Generalized Distribution Based On Record Values, Mohd. Arshad, Qazi J. Azhad Jan 2022

Journal of Modern Applied Statistical Methods

A general family of distributions, namely the Kumaraswamy generalized (Kw-G) family of distributions, is considered for estimation of the unknown parameters and the reliability function based on record data from a Kw-G distribution. The maximum likelihood estimators (MLEs) of the unknown parameters and the reliability function are derived, along with their confidence intervals. A Bayesian study is carried out under symmetric and asymmetric loss functions to obtain the Bayes estimators of the unknown parameters and the reliability function. Future record values are predicted using Bayesian and non-Bayesian approaches, illustrated with numerical examples and a Monte Carlo simulation.
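For reference, the Kw-G family is usually built from a baseline cdf G(x) with pdf g(x) as follows (standard form of the family; the article's notation and parameterization may differ):

F(x; a, b) = 1 - [1 - G(x)^a]^b,   f(x; a, b) = a b g(x) G(x)^{a-1} [1 - G(x)^a]^{b-1},   a, b > 0.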


Parameter Estimation In Weighted Rayleigh Distribution, M. Ajami, S. M. A. Jahanshahi Dec 2017

Journal of Modern Applied Statistical Methods

A weighted model based on the Rayleigh distribution is proposed, and the statistical and reliability properties of this model are presented. Several non-Bayesian and Bayesian methods are used to estimate the β parameter of the proposed model. The Bayes estimators are obtained under the symmetric (squared error) and asymmetric (linear exponential) loss functions using non-informative and reciprocal gamma priors. The performance of the estimators is assessed on the basis of their biases and relative risks under the two above-mentioned loss functions. A simulation study is conducted to evaluate the ability of the considered estimation methods. The suitability of the proposed model …
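As background, a weighted distribution attaches a weight function w(x) to a baseline density; with a Rayleigh baseline under one common parameterization this reads (the article's specific weight function is not reproduced here):

f_w(x; β) = w(x) f(x; β) / E[w(X)],   f(x; β) = (x / β²) exp(-x² / (2β²)),   x > 0.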


Prediction Of Remaining Life Of Power Transformers Based On Left Truncated And Right Censored Lifetime Data, Yili Hong, William Q. Meeker, James D. Mccalley Jun 2017

James McCalley

Prediction of the remaining life of high-voltage power transformers is an important issue for energy companies because of the need for planning maintenance and capital expenditures. Lifetime data for such transformers are complicated because transformer lifetimes can extend over many decades and transformer designs and manufacturing practices have evolved. We were asked to develop statistically-based predictions for the lifetimes of an energy company’s fleet of high-voltage transmission and distribution transformers. The company’s data records begin in 1980, providing information on installation and failure dates of transformers. Although the dataset contains many units that were installed before 1980, there is no …
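As a generic sketch of how left truncation and right censoring enter a parametric likelihood (standard construction, not necessarily the authors' exact formulation), each unit is conditioned on having survived to the time τ_i at which it entered observation:

L(θ) = \prod_{failed i} [ f(t_i; θ) / S(τ_i; θ) ] \prod_{censored i} [ S(c_i; θ) / S(τ_i; θ) ],

where f and S are the lifetime density and survival function, t_i is an observed failure time, and c_i a right-censoring time.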


Resolving The Issue Of How Reliability Is Related To Statistical Power: Adhering To Mathematical Definitions, Donald W. Zimmerman, Bruno D. Zumbo Nov 2015

Journal of Modern Applied Statistical Methods

Reliability in classical test theory is a population-dependent concept, defined as a ratio of true-score variance and observed-score variance, where observed-score variance is a sum of true and error components. On the other hand, the power of a statistical significance test is a function of the total variance, irrespective of its decomposition into true and error components. For that reason, the reliability of a dependent variable is a function of the ratio of true-score variance and observed-score variance, whereas statistical power is a function of the sum of the same two variances. Controversies about how reliability is related to statistical …
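In standard classical-test-theory notation, the two quantities contrasted here are

ρ_XX' = σ_T² / (σ_T² + σ_E²) = σ_T² / σ_X²,

whereas the noncentrality parameter that drives statistical power involves the total variance σ_X² = σ_T² + σ_E² itself.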


Estimation Of Multi Component Systems Reliability In Stress-Strength Models, Adil H. Khan, T R. Jan Nov 2014

Journal of Modern Applied Statistical Methods

In a system with standby redundancy, there are a number of components, only one of which works at a time while the others remain on standby. When the impact of a stress exceeds the strength of the active component for the first time, it fails and another component from the standbys, if any remain, is activated and faces impacts of stress not necessarily identical to those faced by the preceding component; the system fails when all the components have failed. Sriwastav and Kakaty (1981) assumed that the components' stress-strengths are similarly distributed. However, in general the stress distributions will …
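The single-component stress-strength building block, in standard notation with strength X and stress Y (the paper's multicomponent standby expressions extend this but are not reproduced here), is

R = P(Y < X) = \int_0^\infty F_Y(x) f_X(x) dx.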


Discrete Generalized Burr-Type Xii Distribution, B. A. Para, T. R. Jan Nov 2014

Journal of Modern Applied Statistical Methods

A discrete analogue of the generalized Burr-type XII distribution is introduced using a general approach to discretizing a continuous distribution. It may be worth exploring the possibility of developing a discrete version of the six-parameter generalized Burr-type XII distribution for use in modeling discrete data. This distribution is suggested as a suitable reliability model for fitting a range of discrete lifetime data, as it is shown that its hazard rate function can attain a monotonically increasing (decreasing) shape for certain values of the parameters. The equivalence of the discrete generalized Burr-type XII (DGBD-XII) and continuous generalized Burr-type XII (GBD-XII) distributions has been established. …
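A common general approach to discretizing a continuous lifetime distribution, presumably the one meant here, keeps the survival function S of the continuous variable on the integers:

P(X = k) = S(k) - S(k + 1),   k = 0, 1, 2, …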


Performance Modeling And Optimization Techniques For Heterogeneous Computing, Supada Laosooksathit Jan 2014

Doctoral Dissertations

Since Graphics Processing Units (GPUs) have increasingly gained popularity among non-graphics and computational applications, known as General-Purpose computation on GPU (GPGPU), GPUs have been deployed in many clusters, including the world's fastest supercomputer. However, to get the most efficiency out of a GPU system, one should consider both the performance and the reliability of the system.

This dissertation makes four major contributions. First, a two-level checkpoint/restart protocol is proposed that aims to reduce checkpoint and recovery costs with a latency-hiding strategy in a system consisting of a CPU (Central Processing Unit) and a GPU. The experimental results and analysis reveal some benefits, …


Bootstrap Interval Estimation Of Reliability Via Coefficient Omega, Miguel A. Padilla, Jasmin Divers May 2013

Journal of Modern Applied Statistical Methods

Three different bootstrap confidence intervals (CIs) for coefficient omega were investigated. The CIs were assessed through a simulation study with conditions not previously investigated. All methods performed well; however, the normal theory bootstrap (NTB) CI had the best performance because it had more consistent acceptable coverage under the simulation conditions investigated.
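A minimal sketch of the percentile-bootstrap variant of such an interval, with coefficient omega computed from a one-factor model via scikit-learn (illustrative only; the authors' normal theory bootstrap interval and their omega estimator are constructed differently):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def coefficient_omega(items: np.ndarray) -> float:
    """Omega from a one-factor model: (sum of loadings)^2 over implied total variance."""
    fa = FactorAnalysis(n_components=1).fit(items)
    loadings = fa.components_.ravel()
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + fa.noise_variance_.sum())

def percentile_bootstrap_ci(items, stat, n_boot=1000, alpha=0.05, seed=0):
    """Resample respondents (rows) with replacement and take empirical quantiles."""
    rng = np.random.default_rng(seed)
    n = items.shape[0]
    boots = [stat(items[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

# Hypothetical data: 300 respondents, 6 items loading on one factor
rng = np.random.default_rng(1)
factor = rng.normal(size=(300, 1))
X = factor @ np.full((1, 6), 0.7) + rng.normal(scale=0.6, size=(300, 6))
print(coefficient_omega(X), percentile_bootstrap_ci(X, coefficient_omega))
```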


Stress-Lifetime Joint Distribution Model For Performance Degradation Failure, Quan Sun, Yanzhen Tang, Jing Feng, Paul Kvam Dec 2012

Department of Math & Statistics Faculty Publications

The high-energy-density, self-healing metallized film pulse capacitor is used in the power conditioning systems of many kinds of laser facilities under several stress levels, such as 23 kV, 30 kV, and 35 kV; the reliability performance and maintenance costs of these systems are affected by the reliability of the capacitors. Because of cost and time restrictions, assessing the reliability of highly reliable capacitors under a given stress level as soon as possible is a challenge. Accelerated degradation testing provides a way to predict their lifetime and reliability effectively. A model called the stress-lifetime joint distribution model and an analysis method based on …
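For background only, lifetime extrapolation from an elevated voltage stress V to use conditions is often done with an inverse power law acceleration relationship such as

L(V) = C V^{-n},   C, n > 0;

this is a generic assumption for illustration and not necessarily the joint model proposed in the article.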


Probabilistic Inferences For The Sample Pearson Product Moment Correlation, Jeffrey R. Harring, John A. Wasko Nov 2011

Journal of Modern Applied Statistical Methods

Fisher’s correlation transformation is commonly used to draw inferences regarding the reliability of tests comprised of dichotomous or polytomous items. It is illustrated theoretically and empirically that omitting test length and difficulty results in inflated Type I error. An empirically unbiased correction is introduced within the transformation that is applicable under any test conditions.
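For reference, Fisher's transformation of a sample correlation r computed from n pairs is

z = arctanh(r) = (1/2) ln[(1 + r) / (1 - r)],

which is approximately normal with variance 1/(n - 3) under the usual bivariate-normal assumptions.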


Adjusted Empirical Likelihood Models With Estimating Equations For Accelerated Life Tests, Ni Wang, Jye-Chyi Lu, Di Chen, Paul H. Kvam Jan 2011

Department of Math & Statistics Faculty Publications

This article proposes an adjusted empirical likelihood estimation (AMELE) method to model and analyze accelerated life testing data. This approach flexibly and rigorously incorporates distribution assumptions and regression structures by estimating equations within a semiparametric estimation framework. An efficient method is provided to compute the empirical likelihood estimates, and asymptotic properties are studied. Real-life examples and numerical studies demonstrate the advantage of the proposed methodology.
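The estimating-equation core of empirical likelihood, in generic notation, is the constrained program

max \prod_{i=1}^{n} p_i   subject to   p_i ≥ 0,   \sum_i p_i = 1,   \sum_i p_i g(X_i, θ) = 0;

the adjusted version modifies this program (typically by appending an artificial observation so the constraints are always feasible), with details as in the article.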


Multi-Cause Degradation Path Model: A Case Study On Rubidium Lamp Degradation, Sun Quan, Paul H. Kvam Jan 2011

Department of Math & Statistics Faculty Publications

At the core of satellite rubidium standard clocks is the rubidium lamp, which is a critical piece of equipment in a satellite navigation system. There are many challenges in understanding and improving the reliability of the rubidium lamp, including the extensive lifetime requirement and the dearth of samples available for destructive life tests. Experimenters rely on degradation experiments to assess the lifetime distribution of highly reliable products that seem unlikely to fail under the normal stress conditions, because degradation data can provide extra information about product reliability. Based on recent research on the rubidium lamp, this article presents a multi‐cause …


Application Of The Truncated Skew Laplace Probability Distribution In Maintenance System, Gokarna R. Aryal, Chris P. Tsokos Nov 2009

Journal of Modern Applied Statistical Methods

A random variable X is said to have the skew-Laplace probability distribution if its pdf is given by f(x) = 2g(x)G(λx), where g(·) and G(·), respectively, denote the pdf and the cdf of the Laplace distribution. When the skew-Laplace distribution is truncated on the left at 0, it is called the truncated skew Laplace (TSL) distribution. This article compares the TSL distribution with the two-parameter gamma model and the hypoexponential model, and studies an application of the subject model in a maintenance system.
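Spelling out the truncation, and assuming the standard (location 0, scale 1) Laplace baseline, the TSL density is the skew-Laplace density renormalized on [0, ∞):

f_TSL(x) = 2 g(x) G(λx) / \int_0^\infty 2 g(t) G(λt) dt,   x ≥ 0,   with g(x) = (1/2) e^{-|x|}.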


Beyond Kappa: Estimating Inter-Rater Agreement With Nominal Classifications, Nol Bendermacher, Pierre Souren May 2009

Journal of Modern Applied Statistical Methods

Cohen’s Kappa and a number of related measures can all be criticized for their definition of correction for chance agreement. A measure is introduced that derives the corrected proportion of agreement directly from the data, thereby overcoming objections to Kappa and its related measures.
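For reference, Cohen's Kappa and the chance correction at issue are

κ = (p_o - p_e) / (1 - p_e),

where p_o is the observed proportion of agreement and p_e is the agreement expected by chance from the raters' marginal proportions.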


Which Is The Best Parametric Statistical Method For Analyzing Delphi Data?, Hiral A. Shah, Sema A. Kalaian May 2009

Journal of Modern Applied Statistical Methods

This study compares three parametric statistical methods, the coefficient of variation, the Pearson correlation coefficient, and the F-test, for assessing reliability in a Delphi study that involved more than 100 participants. The results of this study indicated that the coefficient of variation was the best procedure for assessing reliability in such a study.
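For reference, the coefficient of variation of a panel's ratings is simply

CV = s / x̄,

the sample standard deviation divided by the sample mean; in Delphi studies a decreasing CV across rounds is commonly read as increasing consensus.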


Estimating How Many Observations Are Needed To Obtain A Required Level Of Reliability, David A. Walker May 2008

Journal of Modern Applied Statistical Methods

This article provides a detailed table containing estimates of how many observations are needed to obtain an increased reliability coefficient in situations such as observational data collection in the classroom. An SPSS program is provided for users to analyze situations in which an initial reliability value has been obtained and the user wants to determine how many more observations are needed to reach a required level of reliability.
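The standard tool for this kind of question is the Spearman-Brown prophecy formula; the sketch below assumes that is what underlies such a table and is not the article's SPSS code:

```python
import math

def observations_needed(current_rel: float, target_rel: float, current_obs: int) -> int:
    """Spearman-Brown prophecy: factor by which the number of observations must grow
    for reliability to rise from current_rel to target_rel (assumed approach)."""
    k = (target_rel * (1 - current_rel)) / (current_rel * (1 - target_rel))
    return math.ceil(k * current_obs)

# Example: 5 classroom observations with reliability .60; how many for .80?
print(observations_needed(0.60, 0.80, 5))  # k is about 2.67, so 14 observations
```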


Examining Cronbach Alpha, Theta, Omega Reliability Coefficients According To Sample Size, Ilker Ercan, Berna Yazici, Deniz Sigirli, Bulent Ediz, Ismet Kan May 2007

Journal of Modern Applied Statistical Methods

How different reliability coefficients vary with sample size is examined. It is concluded that the estimates obtained with the Cronbach alpha and theta coefficients are not related to sample size; even estimates obtained from small samples can represent the population parameter. The omega coefficient, however, requires large sample sizes.
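For reference, the theta coefficient examined here is usually defined from the first eigenvalue λ₁ of the p × p item correlation matrix (standard formula, stated as background):

θ = (p / (p - 1)) (1 - 1/λ₁).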


Ordinal Versions Of Coefficients Alpha And Theta For Likert Rating Scales, Bruno D. Zumbo, Anne M. Gadermann, Cornelia Zeisser May 2007

Journal of Modern Applied Statistical Methods

Two new reliability indices, ordinal coefficient alpha and ordinal coefficient theta, are introduced. A simulation study was conducted in order to compare the new ordinal reliability estimates to each other and to coefficient alpha with Likert data. Results indicate that ordinal coefficients alpha and theta are consistently suitable estimates of the theoretical reliability, regardless of the magnitude of the theoretical reliability, the number of scale points, and the skewness of the scale point distributions. In contrast, coefficient alpha is in general a negatively biased estimate of reliability. The use of ordinal coefficients alpha and theta as alternatives to coefficient alpha …
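As background, standardized alpha computed from a correlation matrix with average off-diagonal correlation r̄ is

α = p r̄ / (1 + (p - 1) r̄);

ordinal alpha is, roughly, this quantity computed from the polychoric rather than the Pearson correlation matrix (a sketch of the idea, not the authors' full derivation).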


Statistical Models For Hot Electron Degradation In Nano-Scaled Mosfet Devices, Suk Joo Bae, Seong-Joon Kim, Way Kuo, Paul H. Kvam Jan 2007

Department of Math & Statistics Faculty Publications

In a MOS structure, the generation of hot carrier interface states is a critical feature of the item's reliability. On the nano-scale, there are problems with degradation in transconductance, shift in threshold voltage, and decrease in drain current capability. Quantum mechanics has been used to relate this decrease to degradation and device failure. Although the lifetime and degradation of a device are typically used to characterize its reliability, in this paper we model the distribution of hot-electron activation energies, which has appeal because it exhibits a two-point discrete mixture of logistic distributions. The logistic mixture presents computational problems that are …
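A two-component logistic mixture density has the generic form (standard notation; presumably the kind of mixture meant here):

f(x) = π f_L(x; μ₁, s₁) + (1 - π) f_L(x; μ₂, s₂),   f_L(x; μ, s) = e^{-(x-μ)/s} / [s (1 + e^{-(x-μ)/s})²].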


A Nonlinear Random Coefficients Model For Degradation Testing, Suk Joo Bae, Paul H. Kvam Jan 2004

Department of Math & Statistics Faculty Publications

As an alternative to traditional life testing, degradation tests can be effective in assessing product reliability when measurements of degradation leading to failure can be observed. This article presents a degradation model for highly reliable light displays, such as plasma display panels and vacuum fluorescent displays (VFDs). Standard degradation models fail to capture the burn-in characteristics of VFDs, when emitted light actually increases up to a certain point in time before it decreases (or degrades) continuously. Random coefficients are used to model this phenomenon in a nonlinear way, which allows for a nonmonotonic degradation path. In many situations, the relative …
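A generic nonlinear random-coefficients degradation model has the form (a sketch of the standard setup, not the article's specific mean function):

y_ij = η(t_ij; θ_i) + ε_ij,   θ_i ~ N(μ_θ, Σ_θ),   ε_ij ~ N(0, σ²),

where η is a nonlinear function of time that can rise during burn-in before decaying, and the random unit-specific parameters θ_i capture unit-to-unit variation.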


The Q-Sort Method: Assessing Reliability And Construct Validity Of Questionnaire Items At A Pre-Testing Stage, Abraham Y. Nahm, S. Subba Rao, Luis E. Solis-Galvan, T. S. Ragu-Nathan May 2002

Journal of Modern Applied Statistical Methods

This paper describes the Q-sort, which is a method of assessing reliability and construct validity of questionnaire items at a pre-testing stage. The method uses Cohen's Kappa and Moore and Benbasat's Hit Ratio in assessing the questionnaire.


Nonparametric Estimation Of A Distribution Subject To A Stochastic Precedence Constraint, Miguel A. Arcones, Paul H. Kvam, Francisco J. Samaniego Jan 2002

Department of Math & Statistics Faculty Publications

For any two random variables X and Y with distributions F and G defined on [0,∞), X is said to stochastically precede Y if P(X ≤ Y) ≥ 1/2. For independent X and Y, stochastic precedence (denoted by X sp Y) is equivalent to E[G(X−)] ≤ 1/2. The applicability of stochastic precedence in various statistical contexts, including reliability modeling, tests for distributional equality versus various alternatives, and the relative performance of comparable tolerance bounds, is discussed. The problem of estimating the underlying distribution(s) of experimental data under the assumption that they obey a …
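The stated equivalence follows from a one-line conditioning argument: for independent X and Y,

P(X ≤ Y) = E[P(Y ≥ X | X)] = E[1 - G(X−)],

so P(X ≤ Y) ≥ 1/2 holds exactly when E[G(X−)] ≤ 1/2.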


Nonparametric Estimation Of The Survival Function Based On Censored Data With Additional Observations From The Residual Distribution, Paul Kvam, Harshinder Singh, Ram C. Tiwari Jan 1999

Department of Math & Statistics Faculty Publications

We derive the nonparametric maximum likelihood estimator (NPMLE) of the distribution of the test items using a random, right-censored sample combined with an additional right-censored, residual-lifetime sample in which only lifetimes past a known, fixed time are collected. This framework is suited for samples for which individual test data are combined with left-truncated and randomly censored data from an operating environment. The NPMLE of the survival function using the combined sample is identical to the Kaplan-Meier product-limit estimator only up to the time at which the test items corresponding to the residual sample were known to survive. The limiting distribution …
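For reference, the Kaplan-Meier product-limit estimator mentioned here has the standard form

Ŝ(t) = \prod_{i: t_(i) ≤ t} (1 - d_i / n_i),

where the t_(i) are the ordered observed failure times, d_i the number of failures at t_(i), and n_i the number at risk just before t_(i).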


Reliability Trend Analyses With Statistical Confidence Limits Using The Luke Reliability Trend Chart, Stephen R. Luke Jan 1993

Engineering Management & Systems Engineering Theses & Dissertations

In electronic systems, it is interesting to understand exactly how reliability changes with time. Dynamic performance changes when a system passes from the infant-mortality stage into the useful-life phase and when the system passes from the useful-life phase into the wearout phase. Dynamic performance also changes when the system is redesigned or when the system is acted on by a number of other outside forces, such as a change in maintenance policy, escalation of alignment problems, or a change in training program. It is important to know when a system is changing dynamically in order to assess design, policy …


Nonparametric Confidence Intervals For The Reliability Of Real Systems Calculated From Component Data, Jean Spooner May 1987

All Graduate Theses and Dissertations, Spring 1920 to Summer 2023

A methodology that calculates a point estimate and confidence intervals for system reliability directly from component failure data is proposed and evaluated. This is a nonparametric approach that does not require the component times to failure to follow a known reliability distribution.

The proposed methods have accuracy similar to that of the traditional parametric approaches, can be used when the distribution of component reliability is unknown or there is a limited amount of sample component data, are simpler to compute, and use fewer computer resources. Depuy et al. (1982) studied several parametric approaches to calculating confidence intervals on system reliability. The test …
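A bootstrap sketch in the same nonparametric spirit, assuming a series system of independent components (illustrative only; the thesis's specific interval construction may differ):

```python
import numpy as np

def system_reliability(component_samples, t):
    """Series system: product of empirical component reliabilities at mission time t."""
    return np.prod([np.mean(times > t) for times in component_samples])

def bootstrap_ci(component_samples, t, n_boot=5000, alpha=0.10, seed=0):
    """Percentile bootstrap: resample each component's failure data with replacement."""
    rng = np.random.default_rng(seed)
    est = []
    for _ in range(n_boot):
        resampled = [rng.choice(times, size=len(times), replace=True)
                     for times in component_samples]
        est.append(system_reliability(resampled, t))
    return np.quantile(est, [alpha / 2, 1 - alpha / 2])

# Hypothetical component failure-time samples (hours) for a 3-component series system
rng = np.random.default_rng(42)
data = [rng.weibull(1.5, 30) * 1000, rng.weibull(2.0, 25) * 1500, rng.weibull(1.2, 40) * 900]
print(system_reliability(data, 200.0), bootstrap_ci(data, 200.0))
```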


A Monte Carlo Comparison Of Nonparametric Reliability Estimators, Jia-Jinn Yueh Jan 1973

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

It is very difficult to construct a reliability model for a complex system. However, the reliability model for a series configuration is relatively simple. In the simplest case in which the components are mutually independent, the system reliability can be represented as follows:

R_s(x) = \prod_{i=1}^{n} R_i(x),

where R_i(x) is the reliability of the ith component. It is also known that, for large systems to attain even moderate levels of system reliability, the component reliabilities must be high.
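A quick illustration of that last point, under the additional assumption of equal component reliabilities: the series formula gives R_s(x) = R(x)^n, so R(x) = R_s(x)^{1/n}; for a 100-component series system with target R_s = 0.90, each component must have R ≈ 0.90^{1/100} ≈ 0.9989.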

Extreme Value Theory indicates that under very general conditions, the initial form of the distribution function …