Open Access. Powered by Scholars. Published by Universities.®


Utah State University

Mathematics and Statistics Faculty Publications


Articles 1 - 2 of 2

Full-Text Articles in Physical Sciences and Mathematics

Convergence Rates For Empirical Estimation Of Binary Classification Bounds, Salimeh Yasaei Sekeh, Morteza Noshad, Kevin R. Moon, Alfred O. Hero Nov 2019


Bounding the best achievable error probability for binary classification problems is relevant to many applications including machine learning, signal processing, and information theory. Many bounds on the Bayes binary classification error rate depend on information divergences between the pair of class distributions. Recently, the Henze–Penrose (HP) divergence has been proposed for bounding classification error probability. We consider the problem of empirically estimating the HP divergence from random samples. We derive a bound on the convergence rate for the Friedman–Rafsky (FR) estimator of the HP divergence, which is related to a multivariate runs statistic for testing between two distributions. The FR estimator is …
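The FR estimator mentioned in the abstract can be sketched in a few lines: build a Euclidean minimum spanning tree over the pooled sample and count the "dichotomous" edges joining points from the two samples. This is a minimal illustration assuming the common plug-in form D̂ = 1 − R(m+n)/(2mn), where R is the cross-edge count; the function name and the clipping to [0, 1] are our own choices, not from the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def fr_hp_divergence(X, Y):
    """Friedman-Rafsky plug-in estimate of the Henze-Penrose divergence.

    Builds the Euclidean MST over the pooled sample and counts the
    edges that connect a point of X to a point of Y (the runs statistic).
    """
    m, n = len(X), len(Y)
    Z = np.vstack([X, Y])
    labels = np.concatenate([np.zeros(m), np.ones(n)])
    # Dense pairwise distance matrix; fine for modest sample sizes.
    D = squareform(pdist(Z))
    mst = minimum_spanning_tree(D).tocoo()
    # R = number of MST edges whose endpoints come from different samples.
    R = int(np.sum(labels[mst.row] != labels[mst.col]))
    # Plug-in estimate, clipped to the valid range [0, 1].
    return float(np.clip(1.0 - R * (m + n) / (2.0 * m * n), 0.0, 1.0))
```

For two well-separated samples almost no MST edges cross, so the estimate is near 1; for samples drawn from the same distribution, roughly 2mn/(m+n) edges cross and the estimate is near 0.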


Ensemble Estimation Of Information Divergence, Kevin R. Moon, Kumar Sricharan, Kristjan Greenewald, Alfred O. Hero III Jul 2018


Recent work has focused on the problem of nonparametric estimation of information divergence functionals between two continuous random variables. Many existing approaches require either restrictive assumptions about the density support set or difficult calculations at the support set boundary, which must be known a priori. The mean squared error (MSE) convergence rate of a leave-one-out kernel density plug-in divergence functional estimator is derived for general bounded density support sets, where knowledge of the support boundary, and therefore boundary correction, is not required. The theory of optimally weighted ensemble estimation is generalized to derive a divergence estimator that achieves the …
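The core idea behind weighted ensemble estimation is to combine plug-in estimates computed at several bandwidths with weights that sum to one and cancel the leading bias terms. The sketch below is a hypothetical simplification: it solves the constraints exactly with one bandwidth per constraint, whereas the paper's optimally weighted ensemble solves a convex program that also controls variance.

```python
import numpy as np

def ensemble_weights(bandwidths, n_bias_terms):
    """Weights w with sum(w) = 1 that cancel bias terms proportional
    to h**j for j = 1..n_bias_terms.

    Illustrative only: requires len(bandwidths) == n_bias_terms + 1 so
    the Vandermonde system has a unique solution (distinct bandwidths).
    """
    h = np.asarray(bandwidths, dtype=float)
    if len(h) != n_bias_terms + 1:
        raise ValueError("need one bandwidth per constraint")
    # Row 0 enforces sum(w) = 1; row j enforces sum(w * h**j) = 0.
    A = np.vstack([h**j for j in range(n_bias_terms + 1)])
    b = np.zeros(n_bias_terms + 1)
    b[0] = 1.0
    return np.linalg.solve(A, b)
```

If each base estimate behaves like theta + c1*h + c2*h**2 + o(h**2), the weighted combination recovers theta with the low-order bias terms removed, which is what drives the improved MSE rate.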