
Articles 1 - 30 of 42

Full-Text Articles in Physical Sciences and Mathematics

Quantum Computing Simulation Of The Hydrogen Molecule System With Rigorous Quantum Circuit Derivations, Yili Zhang Aug 2022

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

Quantum computing has emerged as a promising technology over the past few decades. It uses programmable quantum devices to perform computation and can solve certain complex problems in a feasible time that is unattainable with classical computers. Simulating quantum chemical systems is one of the most active research areas in quantum computing. However, because the technology and its concepts are still new, much of the literature is not accessible to newcomers in the field and can be ambiguous for practitioners because details are missing.

This report provides a rigorous derivation of simulating quantum chemistry …
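
As a toy illustration of the variational idea that underlies such simulations, the sketch below minimizes the expectation value of a small Hermitian matrix over a one-parameter state; the 2×2 "Hamiltonian" is a hypothetical stand-in, not the molecular hydrogen Hamiltonian derived in the report.

```python
# Minimal sketch of the variational principle behind quantum simulation of
# chemistry: minimize <psi(theta)|H|psi(theta)> over a parameterized state.
# The 2x2 Hamiltonian below is a hypothetical toy example, not the H2
# Hamiltonian derived in the report.
import numpy as np
from scipy.optimize import minimize_scalar

H = np.array([[0.5, 0.2],
              [0.2, -0.8]])          # toy Hermitian "Hamiltonian"

def energy(theta):
    psi = np.array([np.cos(theta), np.sin(theta)])   # one-parameter ansatz
    return psi @ H @ psi                              # expectation value

result = minimize_scalar(energy, bounds=(0, np.pi), method="bounded")
print("variational ground-state energy:", result.fun)
print("exact ground-state energy:      ", np.linalg.eigvalsh(H).min())
```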


Examining Quadratic Relationships Between Traits And Methods In Two Multitrait-Multimethod Models, Fredric A. Hintz May 2018

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

Psychological researchers are interested in the validity of the measures they use, and the multitrait-multimethod design is one of the most frequently employed methods for examining validity. Confirmatory factor analysis is now a commonly used analytic tool for examining multitrait-multimethod data, where an underlying mathematical model is fit to the data and the amount of variance due to the trait and method factors is estimated. While most contemporary confirmatory factor analysis methods for examining multitrait-multimethod data do not allow relationships between the trait and method factors, a few recently proposed models allow for the examination of linear relationships between traits …
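
A minimal simulation sketch of the additive trait-plus-method factor structure that multitrait-multimethod confirmatory factor models decompose; the loadings, sample size, and numbers of traits and methods are hypothetical illustration values, not the models examined in the report.

```python
# Sketch of the additive trait + method factor structure underlying
# multitrait-multimethod CFA models; all loadings and sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n, n_traits, n_methods = 500, 3, 2

traits = rng.normal(size=(n, n_traits))     # latent trait scores
methods = rng.normal(size=(n, n_methods))   # latent method scores

# each observed variable measures one trait with one method
observed = np.empty((n, n_traits * n_methods))
for t in range(n_traits):
    for m in range(n_methods):
        j = t * n_methods + m
        observed[:, j] = (0.7 * traits[:, t] + 0.4 * methods[:, m]
                          + rng.normal(scale=0.5, size=n))

# correlations are higher among variables sharing a trait or a method
print(np.corrcoef(observed, rowvar=False).round(2))
```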


Prediction Of Stress Increase In Unbonded Tendons Using Sparse Principal Component Analysis, Eric Mckinney Aug 2017

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

While internal and external unbonded tendons are widely utilized in concrete structures, an analytic solution for the increase in unbonded tendon stress, Δf_ps, is challenging because of the lack of bond between strand and concrete. Moreover, most analysis methods do not provide high correlation due to the limited available test data. In this thesis, Principal Component Analysis (PCA) and Sparse Principal Component Analysis (SPCA) are employed on different sets of candidate variables drawn from the material and sectional properties in the database compiled by Maguire et al. [18]. Predictions of Δf_ps are made via Principal Component Regression models, and the method …
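
A minimal sketch of principal component regression (PCA followed by least squares) with scikit-learn, assuming simulated data in place of the material and sectional properties and measured stress increases used in the report.

```python
# Sketch of principal component regression: reduce the candidate predictors
# with PCA, then regress the response on the retained components.
# The random data stand in for the report's variables.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 10))                                 # candidate predictors
y = X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.5, size=80)    # response

pcr = make_pipeline(PCA(n_components=3), LinearRegression())
pcr.fit(X, y)
print("R^2 on training data:", pcr.score(X, y))
```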


Comparison Of Survival Curves Between Cox Proportional Hazards, Random Forests, And Conditional Inference Forests In Survival Analysis, Brandon Weathers May 2017

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

Survival analysis methods are a mainstay of the biomedical fields but are finding increasing use in other disciplines, including finance and engineering. A widely used tool in survival analysis is the Cox proportional hazards regression model. Under this model, all the predicted survivor curves have the same basic shape, which may not be a good approximation to reality. In contrast, Random Survival Forests do not make the proportional hazards assumption and have the flexibility to model survivor curves of quite different shapes for different groups of subjects. We applied both techniques to a number of publicly available …
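
A minimal sketch of fitting a Cox proportional hazards model and predicting survivor curves with the lifelines package; the bundled Rossi recidivism data set is a stand-in for the publicly available data sets analyzed in the report.

```python
# Sketch of a Cox proportional hazards fit and predicted survivor curves
# with lifelines; the Rossi recidivism data are an illustrative stand-in.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

rossi = load_rossi()                      # columns: week, arrest, covariates
cph = CoxPHFitter()
cph.fit(rossi, duration_col="week", event_col="arrest")

# every predicted curve is a power of the same baseline survival function,
# which is the proportional hazards restriction discussed above
X_new = rossi.drop(columns=["week", "arrest"]).iloc[:5]
print(cph.predict_survival_function(X_new).head())
```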


Statistical Methods For Assessing Individual Oocyte Viability Through Gene Expression Profiles, Michael O. Bishop May 2017

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

Utah State University, 2017. Major Professor: Dr. John R. Stevens. Department: Mathematics and Statistics.

Oocytes are the precursor cells of the female gamete, or egg. Although reproduction varies from species to species, the oocyte maturation process is fairly similar in humans and most domesticated animals. As an oocyte matures, various processes take place, all of which affect the viability of the individual oocyte. Barring outside damage that may come to the oocyte, one of the primary reasons …


A Comparison Of Statistical Methods Relating Pairwise Distance To A Binary Subject-Level Covariate, Rachael Stone May 2017

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

A community ecologist provided a motivating data set involving an animal species with two behavior groups, along with a pairwise genetic distance matrix among individuals. Many community ecologists have analyzed similar data sets with the Hopkins method, testing for an association between the subject-level covariate (behavior group) and the pairwise distances. This ecologist wanted to know whether results from the Hopkins method would be meaningful. That question inspired this thesis work, in which a different data set was used for confidentiality reasons. Multiple methods (the Hopkins method, ADONIS, ANOSIM, and distance regression) were used …
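
A minimal sketch of a permutation test relating a pairwise distance matrix to a binary subject-level covariate, in the spirit of the ADONIS/ANOSIM comparisons above; the simulated distances and the chosen statistic are hypothetical, not the thesis data or methods.

```python
# Permutation test: compare mean between-group distance to mean within-group
# distance, then re-randomize the group labels to obtain a null distribution.
import numpy as np

rng = np.random.default_rng(2)
n = 40
groups = np.repeat([0, 1], n // 2)
points = rng.normal(size=(n, 5)) + groups[:, None] * 0.8
dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

def between_minus_within(d, g):
    same = g[:, None] == g[None, :]
    off_diag = ~np.eye(len(g), dtype=bool)
    return d[~same].mean() - d[same & off_diag].mean()

observed = between_minus_within(dist, groups)
perms = [between_minus_within(dist, rng.permutation(groups)) for _ in range(999)]
p_value = (1 + sum(p >= observed for p in perms)) / 1000
print(f"observed statistic = {observed:.3f}, permutation p-value = {p_value:.3f}")
```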


Collecting, Analyzing And Interpreting Bivariate Data From Leaky Buckets: A Project-Based Learning Unit, Florence Funmilayo Obielodan May 2011

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

Despite the significance of and the emphasis placed on mathematics as a subject and field of study, fostering the right attitude to improve students' understanding and performance is still a challenge. Previous studies have shown that the problem cuts across nations around the world, developing and developed countries alike. Teachers and educators of the subject have a responsibility to continuously develop innovative pedagogical approaches that will enhance students' interest and performance. Teaching approaches that emphasize real-life applications of the subject have become imperative. It is believed that this will stimulate learners' interest in the subject as they will be able …


Probe-Level Statistical Models For Differential Expression Of Genes In Bovine Nt Studies, Jason L. Bell Jan 2009

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

A brief introduction to microarray technology and its uses is given. This technology is commonly used in agricultural research, including research in nuclear transfer, which motivated this study. Three classes of statistical models are compared: probeset-level, weighted probeset-level and probe-level.

The different statistical models are compared on 3 spike-in experiments to assess their relative performance in identifying differentially expressed genes. A novel nested factorial model was found to outperform all other models compared in this study in one spike-in experiment, and was found to be competitive in its performance relative to the other models on the other spike-in …
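
As a rough illustration of what a probe-level model is, the sketch below fits a linear model to individual probe intensities within one probeset using statsmodels; the formula and simulated intensities are hypothetical and simpler than the nested factorial model of the report.

```python
# Probe-level idea: model every probe's intensity directly (treatment and
# probe effects) rather than first summarizing probes into one probeset value.
# The formula and data below are hypothetical illustration values.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
probes, arrays = 11, 6
df = pd.DataFrame({
    "probe": np.tile(np.arange(probes), arrays),
    "treatment": np.repeat(["control", "nt"], probes * arrays // 2),
    "log_intensity": rng.normal(8, 1, probes * arrays),
})
fit = smf.ols("log_intensity ~ C(treatment) + C(probe)", data=df).fit()
print(fit.summary().tables[1])
```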


Comparison Of Random Forests And Cforest: Variable Importance Measures And Prediction Accuracies, Rong Xia Jan 2009

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

Random forests are ensembles of trees that give accurate predictions for regression, classification and clustering problems. The CART tree, the base learner employed by random forests, has been criticized because of bias in the selection of splitting variables. The performance of random forests is suspect due to this criticism. A new implementation of random forests, Cforest, which is claimed to outperform random forests in both predictive power and variable importance measures, was developed based on Ctree, an implementation of conditional inference trees.

We address the underlying mechanism of random forests and Cforest in this report. Comparison of random …
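
A minimal sketch contrasting impurity-based variable importance with permutation importance for a random forest, using scikit-learn rather than the R randomForest/party (Cforest) implementations compared in the report.

```python
# Two variable importance measures for the same forest: impurity-based
# importances (prone to the selection bias discussed above) and permutation
# importances computed by shuffling one feature at a time.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=8, n_informative=3,
                           random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

print("impurity-based importances:", forest.feature_importances_.round(3))
perm = permutation_importance(forest, X, y, n_repeats=20, random_state=0)
print("permutation importances:   ", perm.importances_mean.round(3))
```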


Computer Program Generation Of Extreme Value Distribution Data, Stephen (Wan-Tsing) Lei Jan 1986

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

The application of the Monte Carlo method to estimation for the Gumbel extreme value distribution was studied. The Gumbel extreme value distribution is used to estimate the flood flow of a specific return period for the design of flood mitigation projects. This paper is a programming effort (1) to estimate the parameters of the Gumbel distribution from observed data and (2) to provide a random variate generating subroutine to generate random samples and order statistics of a Gumbel-distributed random variable. The mean squared error is used to measure the accuracy of the estimation method. Finally, an example of the use …
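
A minimal sketch of the two programming tasks described above using scipy.stats: estimating Gumbel parameters from observed data and generating random samples (sorted to give order statistics). The simulated "annual peak flows" and the 100-year return period are illustrative assumptions.

```python
# (1) fit Gumbel parameters to observed data; (2) generate random samples,
# which can be sorted to give order statistics.  Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
flows = stats.gumbel_r.rvs(loc=100.0, scale=25.0, size=50, random_state=rng)

loc_hat, scale_hat = stats.gumbel_r.fit(flows)       # parameter estimation
print(f"estimated location {loc_hat:.1f}, scale {scale_hat:.1f}")

# flood flow for a 100-year return period: quantile with exceedance prob 1/100
q100 = stats.gumbel_r.ppf(1 - 1 / 100, loc=loc_hat, scale=scale_hat)
print(f"estimated 100-year flow: {q100:.1f}")

sample = np.sort(stats.gumbel_r.rvs(loc=loc_hat, scale=scale_hat, size=50,
                                    random_state=rng))   # order statistics
print(sample[:5].round(1), "...")
```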


Monte Carlo Simulation Of The Game Of Twenty-One, Douglas E. Loer Jan 1985

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

The purpose of this paper is to demonstrate the application of computer simulation to the game of Twenty-One to predict a player's expected return from the game. Twenty-One has traditionally been one of the most popular casino games and has attracted much effort to accurately estimate the house's true advantage. Probability theory has been tried, but the thousands of different combinations of cards possible in all hands throughout the entire pack make it practically impossible to apply probability theory without overlooking some possibilities. For this reason, Twenty-One is a perfect candidate for simulation. By blocking several simulations, normal theory can …
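
A heavily simplified Monte Carlo sketch of the idea: simulate many hands and average the outcomes to estimate the expected return. The rules assumed below (infinite deck, both player and dealer draw to 17, no doubling, splitting, or blackjack bonus) are a toy approximation, not the game or strategies analyzed in the report.

```python
# Toy Monte Carlo estimate of expected return per unit bet in a heavily
# simplified Twenty-One: infinite deck, both sides draw to 17, no side rules.
import random

CARDS = [2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10, 11]  # ace counted as 11

def draw_hand():
    total, aces = 0, 0
    while total < 17:
        card = random.choice(CARDS)
        total += card
        aces += card == 11
        while total > 21 and aces:          # demote an ace from 11 to 1
            total -= 10
            aces -= 1
    return total

def play_hand():
    player, dealer = draw_hand(), draw_hand()
    if player > 21:                          # player busts first and loses
        return -1
    if dealer > 21 or player > dealer:
        return 1
    return 0 if player == dealer else -1

n = 100_000
expected_return = sum(play_hand() for _ in range(n)) / n
print(f"estimated expected return per unit bet: {expected_return:+.3f}")
```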


Nonparametric Analysis Of Right Censored Data With Multiple Comparisons, Hwei-Weng Shih Jan 1982

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

This report demonstrates the use of a computer program, written in FORTRAN for the Burroughs B6800 computer at Utah State University, to perform Breslow's (1970) generalization of the Kruskal-Wallis test for right-censored data. A pairwise multiple comparison procedure using Bonferroni's inequality is also introduced and demonstrated. Comparisons are also made with a parametric F test and the original Kruskal-Wallis test. Application of these techniques to two data sets indicates that there is little difference among the procedures, with the F test being slightly more liberal (too many differences) and the Kruskal-Wallis test corrected for ties being slightly more conservative …
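
A minimal sketch of the uncensored analogue of this procedure: an overall Kruskal-Wallis test followed by Bonferroni-adjusted pairwise comparisons, using scipy on simulated groups. The report's FORTRAN program applies Breslow's (1970) generalization of the same idea to right-censored data.

```python
# Overall Kruskal-Wallis test, then pairwise comparisons adjusted with
# Bonferroni's inequality (uncensored illustration on simulated groups).
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
groups = {"A": rng.exponential(1.0, 30),
          "B": rng.exponential(1.4, 30),
          "C": rng.exponential(2.0, 30)}

h, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

pairs = list(combinations(groups, 2))
for g1, g2 in pairs:
    _, p_pair = stats.mannwhitneyu(groups[g1], groups[g2])
    # Bonferroni: multiply each pairwise p-value by the number of comparisons
    print(f"{g1} vs {g2}: adjusted p = {min(1.0, p_pair * len(pairs)):.4f}")
```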


Factorial Analysis Of Variance And Covariance On A Minicomputer, Ladonna Black Kemmerle Jan 1980

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

Statistical analysis of large data sets is commonly performed on computers using one of the many available programs. Most of these programs have been written for computers with internal storage large enough to handle nearly any data set. Recently, however, there has been a trend to computers with more limited storage capabilities. New programs must be written or old programs adapted so that large data sets may also be analyzed on these smaller machines.

This report describes a program to analyze data from a balanced experiment of crossed and/or nested design. It was written for the Data General Nova minicomputer …


The Evolution Of Ibm's Information Management System-- A Significant Data Base/Data Communications Software Product, Brent W. Anderson Jan 1979

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

In the early 1970s it was said that data base management systems (DBMS) would be to the 70s what COBOL was to the 60s. Clearly, recognition of the need to manage and effectively utilize data has resulted in significant efforts to develop computer hardware and software to meet this great challenge.


A Discussion Of An Empirical Bayes Multiple Comparison Technique, Donna Baranowski Jan 1979

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

This paper considers the application and comparison of Bayesian and non-Bayesian multiple comparison techniques applied to sets of chemical analysis data. Suggestions are also made as to which methods should be used.


Estimation Of Μy Using The General Regression Model (In Sampling), Michael R. Manieri Jan 1978

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

The methods of ratio and regression estimators discussed by Cochran (1977) are given as background material and extended to the estimation of μy, the population mean of the Y's, using a general regression model.

The propagation of error technique given by Deming (1948) is used as an approximation to find the variance of the estimator of μy.

Examples are given for each of the various models. Variances of μy are calculated and compared.
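
A minimal numpy sketch of the classical ratio and linear regression estimators of the population mean of Y that the report takes as background, assuming a simulated population and a known population mean for the auxiliary variable X.

```python
# Ratio and regression estimators of the population mean of Y using an
# auxiliary variable X with known population mean (Cochran-style background);
# the population below is simulated for illustration.
import numpy as np

rng = np.random.default_rng(6)
N, n = 10_000, 100
x_pop = rng.gamma(4.0, 2.0, N)
y_pop = 3.0 + 1.5 * x_pop + rng.normal(0, 2.0, N)

idx = rng.choice(N, size=n, replace=False)
x, y = x_pop[idx], y_pop[idx]
x_bar_pop = x_pop.mean()                      # assumed known

ratio_est = y.mean() / x.mean() * x_bar_pop   # ratio estimator of mu_y
b = np.cov(x, y)[0, 1] / np.var(x, ddof=1)    # least-squares slope
regression_est = y.mean() + b * (x_bar_pop - x.mean())

print(f"sample mean {y.mean():.2f}, ratio {ratio_est:.2f}, "
      f"regression {regression_est:.2f}, true mean {y_pop.mean():.2f}")
```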


Factor Analysis Method, Stephen Hauwah Kan Jan 1978

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

The logical steps performed when doing a factor analysis can be classified into three operations. The first step concerns the exact mode of analysis and involves the type of centering, scaling and formation of sums of squares. The second step involves extraction of initial factors. The algebraic basis of the factors is rotated in the last step to obtain a more easily interpreted set of factors. At each step several different methods have been suggested and appear in the literature. Two primary modes of factor analysis are commonly used and they are denoted as R-mode and …
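
A minimal numpy sketch of the first two steps for an R-mode analysis (center and scale the variables, then extract initial factors from the eigenstructure of the correlation matrix); the rotation step is omitted and the data are simulated.

```python
# Step 1: center and scale (work with the correlation matrix).
# Step 2: extract initial factors from its eigenstructure.
# Step 3 (rotation) is omitted in this sketch; data are simulated.
import numpy as np

rng = np.random.default_rng(7)
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 6)) + rng.normal(scale=0.7, size=(200, 6))

Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # step 1
R = np.corrcoef(Z, rowvar=False)

eigvals, eigvecs = np.linalg.eigh(R)               # step 2
order = np.argsort(eigvals)[::-1]
k = 2                                              # number of retained factors
loadings = eigvecs[:, order[:k]] * np.sqrt(eigvals[order[:k]])
print("initial factor loadings:\n", loadings.round(2))
```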


Comparison Of The Fisher's Method Of Randomization With Other Tests Based On Ranks And The F-Test, Francisco J. González Jan 1978

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

Classical statistical inference methods (parametric methods) have a common denominator, i.e., a population parameter (μ, σ, ρ) about which we wish to draw inferences from a random sample. Point estimators of the parameters (X̄, S, R) are selected. Their sampling distribution is used to construct hypothesis testing decision rules or confidence interval formulas. This is the reason for calling this method of obtaining inferences a parametric method. They are based on knowing the distribution of the population random variable, from which the sampling distribution of the point estimator is determined. In addition, it is generally assumed that the population, …
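
A minimal sketch of Fisher's randomization (permutation) test for a two-sample mean difference alongside the parametric t-test and a rank-based test, using simulated samples; the specific designs and data compared in the report are not reproduced here.

```python
# Randomization test: permute the pooled observations over the two groups and
# compare the observed mean difference to the permutation distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
a = rng.normal(0.0, 1.0, 15)
b = rng.normal(0.8, 1.0, 15)

observed = a.mean() - b.mean()
pooled = np.concatenate([a, b])
perm_diffs = []
for _ in range(9999):
    perm = rng.permutation(pooled)
    perm_diffs.append(perm[:15].mean() - perm[15:].mean())
p_perm = np.mean(np.abs(perm_diffs) >= abs(observed))

print(f"randomization test p = {p_perm:.4f}")
print(f"two-sample t-test  p = {stats.ttest_ind(a, b).pvalue:.4f}")
print(f"Mann-Whitney       p = {stats.mannwhitneyu(a, b).pvalue:.4f}")
```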


Specific Hypotheses In Linear Models And Their Power Function In Unbalanced Data, Seyed Mohtaba Taheri Jan 1977

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

A hypothesis is a statement or claim about the state of nature. Scientific investigators, market researchers, and governmental decision makers, among others, will often have hypotheses about a particular facet of nature, hypotheses that need verification or rejection for one purpose or another. Statisticians concerned with testing hypotheses using unbalanced data on the basis of linear models have discussed the difficulties involved for many years, but, probably because the problems are not easily resolved, there is as yet no satisfactory solution to these problems.


An Empirical Comparison Of Confidence Interval For Relative Potency, Catherine H. Lung Jan 1976

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

Biological assays are essentially biological experiments. To compare the potencies of treatments on an agreed scale is generally of more interest than to compare the magnitude of effects of different treatments.

The relative potency, R = a/b, is defined as the ratio of the means of two equally effective doses, where a is the mean of A and b is the mean of B. It is an estimate of the potency of one preparation, A, relative to that of the other, B.

Different procedures have been proposed to obtain the values of R and its confidence interval. Three of these …
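
One classical procedure for such an interval is Fieller's theorem for a ratio of two means; whether it is among the three procedures compared in the report is not stated above, so the sketch below (assuming independent estimates and simulated doses) is purely illustrative.

```python
# Fieller-type confidence interval for a ratio of two means, assuming the two
# mean estimates are independent; data and degrees of freedom are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
dose_a = rng.normal(10.0, 2.0, 12)     # equally effective doses, preparation A
dose_b = rng.normal(14.0, 2.0, 12)     # preparation B

a, b = dose_a.mean(), dose_b.mean()
va = dose_a.var(ddof=1) / len(dose_a)
vb = dose_b.var(ddof=1) / len(dose_b)
t = stats.t.ppf(0.975, df=len(dose_a) + len(dose_b) - 2)

# limits are the roots in R of (a - R*b)^2 = t^2 * (va + R^2 * vb)
disc = np.sqrt((a * b) ** 2 - (b**2 - t**2 * vb) * (a**2 - t**2 * va))
lower = (a * b - disc) / (b**2 - t**2 * vb)
upper = (a * b + disc) / (b**2 - t**2 * vb)
print(f"R = {a / b:.3f}, 95% Fieller interval = ({lower:.3f}, {upper:.3f})")
```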


Linear Comparisons In Multivariate Analysis Of Variance, Hsin-Ming Tzeng Jan 1976

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

The analysis of variance was created by Ronald Fisher in 1923. It is the most widely used and fundamentally useful approach for studying differences among treatment averages.


The Computation Of Eigenvalues And Eigenvectors Of An Nxn Real General Matrix, Yeh-Hao Ma Jan 1975

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

The eigenvalues of the matrix eigenproblem Ax = λx are computed by the QR double-step method and the eigenvectors by the inverse power method.

The matrix A is preliminarily scaled by an equilibration and normalization procedure. The scaled matrix is then reduced to upper Hessenberg form by Householder's method. The QR double-step iteration is performed on the upper Hessenberg matrix. After all the eigenvalues are found, the inverse power method is applied to the upper Hessenberg matrix to obtain the corresponding eigenvectors.

The program consists of five subroutines and is able to find the real and/or complex eigenvalues and eigenvectors of an n×n real matrix.
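
A minimal numpy sketch of the inverse power (inverse iteration) step described above: given an approximate eigenvalue, repeatedly solve a shifted linear system to recover the corresponding eigenvector. Here numpy supplies the eigenvalue estimate in place of the QR double-step iteration, and the matrix is a small random example.

```python
# Inverse iteration: with an approximate eigenvalue mu, repeatedly solve
# (A - mu*I) v_new = v and normalize; v converges to the eigenvector whose
# eigenvalue is closest to mu.  numpy provides mu here instead of QR steps.
import numpy as np

rng = np.random.default_rng(10)
A = rng.normal(size=(5, 5))

eigs = np.linalg.eigvals(A)
mu = eigs[np.argmin(np.abs(eigs.imag))].real + 1e-6   # a real eigenvalue, slightly shifted
v = rng.normal(size=5)
for _ in range(20):
    v = np.linalg.solve(A - mu * np.eye(5), v)
    v /= np.linalg.norm(v)

lam = v @ A @ v                                        # Rayleigh quotient
print("residual ||Av - lambda*v||:", np.linalg.norm(A @ v - lam * v))
```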


Program For Missing Data In The Multivariate Normal Distribution, Chi-Ping Lu Jan 1975

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

Missing data can often cause many problems in research work. Therefore, in order to carry out an analysis, some procedure for obtaining estimates in the presence of missing data should be applied. Various theories and techniques have been developed for different types of problems.

Analysis of the multivariate normal distribution with missing data is one of the areas studied. It was discussed earlier by Wilks (1932), Lord (1955), Edgett (1956) and Hartley (1958), who established some basic concepts and an outline of an approach to estimation.

In the last ten years, A. A. Afifi and R. M. Elashoff also have contributed …


Discriminant Function Analysis, Kuo Hsiung Su Jan 1975

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

The technique of discriminant function analysis was originated by R. A. Fisher and first applied by Barnard (1935). Two very useful summaries of the recent work in this technique can be found in Hodges (1950) and in Tatsuoka and Tiedeman (1954). The techniques have been used primarily in the fields of anthropology, psychology, biology, medicine, and education, and have only begun to be applied to other fields in recent years.

Classification and discriminant function analyses are two phases in the attempt to predict which of several populations an observation might be a member of, on the basis of multivariate measurements. Both …
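
A minimal sketch of the two phases using scikit-learn: estimate a linear discriminant function from multivariate measurements, then classify observations; the iris data are a stand-in for the application areas listed above.

```python
# Fit a linear discriminant function on multivariate measurements, then
# classify a new observation; iris is an illustrative stand-in data set.
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
lda = LinearDiscriminantAnalysis().fit(X, y)

print(f"classification accuracy on training data: {lda.score(X, y):.3f}")
print("predicted group for one new observation:",
      lda.predict([[5.9, 3.0, 5.1, 1.8]]))
```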


An Evaluation Of Bartlett's Chi-Square Approximation For The Determinant Of A Matrix Of Sample Zero-Order Correlation Coefficients, Stephen M. Hattori Jan 1975

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

The single equation least-squares regression model has been extensively studied by economists and statisticians alike in order to determine the problems which arise when particular assumptions are violated. Much literature is available in terms of the properties and limitations of the model. However, on the multicollinearity problem, there has been little research, and consequently, limited literature is available when the problem is encountered. Farrar & Glauber (1967) present a collection of techniques to use in order to detect or diagnose the occurrence of multicollinearity within a regression analysis. They attempt to define multicollinearity in terms of departures from a hypothesized …
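
For reference, Bartlett's chi-square approximation for the determinant of a sample correlation matrix R is χ² ≈ −[n − 1 − (2p + 5)/6] ln|R| with p(p − 1)/2 degrees of freedom, testing whether all population zero-order correlations are zero. The sketch below evaluates it on simulated data with one deliberately collinear variable.

```python
# Bartlett's chi-square approximation based on the determinant of the sample
# correlation matrix; strong multicollinearity drives |R| toward zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n, p = 100, 4
X = rng.normal(size=(n, p))
X[:, 3] = 0.8 * X[:, 0] + 0.2 * rng.normal(size=n)   # induce collinearity

R = np.corrcoef(X, rowvar=False)
chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
df = p * (p - 1) / 2
print(f"|R| = {np.linalg.det(R):.4f}, chi2 = {chi2:.1f}, "
      f"p-value = {stats.chi2.sf(chi2, df):.2e}")
```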


Multivariate Analysis Of Variance For Simple Designs, Yin-Yin Chen Jan 1975

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

The analysis of variance is a well known tool for testing how treatments change the average response of experimental units. The essence of the procedure is to compare the variation among means of groups of units subjected to the same treatment with the within treatment variation. If the variation among means is large with respect to the within group variation we are likely to conclude that the treatments caused the variation and hence we say the treatments cause some change in the group means.

The usual analysis of variance checks how far apart the group means are in a single …
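
A tiny worked example of that comparison in the univariate case, using scipy's one-way ANOVA on simulated groups; the F statistic is the ratio of between-group to within-group variation, and the report extends this idea to multivariate responses.

```python
# One-way ANOVA: F compares variation among group means to variation within
# groups; simulated data, univariate illustration of the idea described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
group_means = [10.0, 10.5, 12.0]
groups = [rng.normal(m, 1.0, 20) for m in group_means]

f, p = stats.f_oneway(*groups)
print(f"F = {f:.2f}, p = {p:.4f}")
```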


Principal Component Factor Analysis, Kuang-Ming Chu Jan 1974

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

The principal-factor solution is probably the most widely used technique in factor analysis and a relatively straightforward method to determine the minimum number of independent dimensions needed to account for most of the variance in the original set of variables.

The principal components approach to parsimony was first proposed by Karl Pearson (1901), who studied the problem for the case of nonstochastic variables and in a different context. Hotelling provided the full development of the method (1933), and Thomson (1947) was the first to apply it to principal factor analysis.

This method was first developed to deal with …


Matrix Norms, I-Hui C. Cheng Jan 1974

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

In many situations it is very useful to have a single nonnegative real number to be, in some sense, the measure of the size of a vector or a matrix. As a matter of fact we do a similar thing with scalars: we let |λ| represent the familiar absolute value or modulus of λ. For a vector x ∈ Cⁿ, one way of assigning magnitude is the usual definition of length, ||x|| = (Σᵢ |xᵢ|²)^(1/2), which is called the Euclidean norm of x. In this case, length gives an overall estimate of the …
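
A minimal numpy sketch of the norms discussed above: the Euclidean norm of a vector and two common ways of assigning a single nonnegative size to a matrix.

```python
# Vector Euclidean norm and two common matrix norms computed with numpy.
import numpy as np

x = np.array([3.0, -4.0])
print(np.linalg.norm(x))                 # Euclidean norm: (sum |x_i|^2)^(1/2) = 5

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
print(np.linalg.norm(A, "fro"))          # Frobenius norm
print(np.linalg.norm(A, 2))              # spectral norm (largest singular value)
```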


The Evaluation Of Glasser's Maximum Likelihood Method On Missing Data In Regression, Gayle M. Yamasaki Jan 1973

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

Missing data in regression is often a problem to research workers because standard regression methods are applicable only to complete data sets. At present there are three general methods for solving the problem of missing data.

The first, the reduced data method, reduces the incomplete data set to a complete data set before analysis. Although this method is very simple to apply, substantial amounts of information are sometimes lost when data are eliminated. This results in less precise estimates of the regression parameters.

The second method, generalized least squares, estimates the missing values through least squares techniques, thus obtaining a …


Integer Programming By Cutting Planes Methods, Sung-Yen Wu Jan 1973

All Graduate Plan B and other Reports, Spring 1920 to Spring 2023

Linear programming is a relatively new, very important branch of modern mathematics and is about twenty-five years old.

In this day and age, most planners and decision makers will acknowledge that some linear optimization problems are worth the expense and trouble to solve. Using the linear programming technique as a decision-making tool, planners are able to greatly reduce cost or increase profit for any project under consideration.

Since Dr. George B. Dantzig published his first paper on the simplex method in 1947, progress in that field has been rapid. Although the first applications were military in nature, it …