
Ole J Mengshoel

Expectation Maximization

Articles 1 - 3 of 3

Full-Text Articles in Physical Sciences and Mathematics

Scaling Bayesian Network Parameter Learning With Expectation Maximization Using MapReduce, Erik B. Reed, Ole J. Mengshoel, Nov 2012

Bayesian network (BN) parameter learning from incomplete data can be a computationally expensive task. Applying the EM algorithm to learn BN parameters is, unfortunately, susceptible to local optima and prone to premature convergence. We develop and experiment with two methods for improving EM parameter learning by using MapReduce: Age-Layered Expectation Maximization (ALEM) and Multiple Expectation Maximization (MEM). Leveraging MapReduce for distributed machine learning, these algorithms (i) operate on a (potentially large) population of BNs and (ii) partition the data set, as is traditionally done in MapReduce-based machine learning. For example, we achieved gains using the Hadoop implementation …
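The paper itself uses a Hadoop implementation; purely as a hypothetical illustration of the MEM idea, the sketch below runs a small population of independently seeded EM instances on a toy two-component Bernoulli mixture (standing in for a BN with a single hidden root) and keeps the highest-likelihood run. The toy model, function names, and parameters are illustrative assumptions, not the authors' code.

    # Illustrative sketch of Multiple Expectation Maximization (MEM): a population
    # of independently seeded EM runs (sequential stand-ins for map tasks) on a toy
    # 2-component Bernoulli mixture; the "reduce" step keeps the best run.
    import math
    import random

    def em_run(data, seed, iters=50):
        """One independently seeded EM run on a toy 2-component Bernoulli mixture."""
        rng = random.Random(seed)
        pi = rng.uniform(0.2, 0.8)                # P(H = 1)
        theta = [rng.random(), rng.random()]      # P(X = 1 | H = h)
        ll = float("-inf")
        for _ in range(iters):
            n1 = n1x = n0x = ll = 0.0
            for x in data:                        # E-step: posterior over hidden H
                p1 = pi * (theta[1] if x else 1 - theta[1])
                p0 = (1 - pi) * (theta[0] if x else 1 - theta[0])
                z = p0 + p1
                ll += math.log(z)
                r1 = p1 / z
                n1 += r1
                n1x += r1 * x
                n0x += (1 - r1) * x
            pi = n1 / len(data)                   # M-step
            theta = [n0x / (len(data) - n1), n1x / n1]
        return ll, pi, theta

    def mem(data, n_runs=8):
        """'Map' phase: one EM run per task; 'reduce' phase: keep the best run."""
        results = [em_run(data, seed) for seed in range(n_runs)]
        return max(results, key=lambda r: r[0])

    if __name__ == "__main__":
        gen = random.Random(42)
        data = [int(gen.random() < (0.9 if gen.random() < 0.3 else 0.2))
                for _ in range(1000)]
        ll, pi, theta = mem(data)
        print(f"best log-likelihood {ll:.1f}, P(H=1)={pi:.2f}, "
              f"P(X=1|H)={[round(t, 2) for t in theta]}")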


MapReduce For Bayesian Network Parameter Learning Using The EM Algorithm, Aniruddha Basak, Irina Brinster, Ole J. Mengshoel, Nov 2012

This work applies the distributed computing framework MapReduce to Bayesian network parameter learning from incomplete data. We formulate the classical Expectation Maximization (EM) algorithm within the MapReduce framework. We analyze, both analytically and experimentally, the speed-up that can be obtained by means of MapReduce. We present details of the MapReduce formulation of EM, report speed-ups versus the sequential case, and carefully compare various Hadoop cluster configurations in experiments with Bayesian networks of different sizes and structures.
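As an illustrative sketch only (plain Python, not the paper's Hadoop formulation), the following shows how one EM iteration can be split MapReduce-style: each map task computes expected sufficient statistics for its shard of the incomplete data, and the reduce task sums the counts and performs the M-step. The toy model, a single hidden binary root H with one observed binary child X, is an assumption made for brevity; the papers' networks are much larger.

    # One MapReduce "job" per EM iteration: map = per-shard expected counts,
    # reduce = aggregate counts and re-estimate the conditional probability tables.
    import math
    import random

    def e_step_map(shard, prior_h1, p_x1_given_h):
        """Map task: expected sufficient statistics for one shard of the data."""
        n_h1 = n_h1_x1 = n_h0_x1 = ll = 0.0
        for x in shard:
            p1 = prior_h1 * (p_x1_given_h[1] if x else 1 - p_x1_given_h[1])
            p0 = (1 - prior_h1) * (p_x1_given_h[0] if x else 1 - p_x1_given_h[0])
            z = p0 + p1
            ll += math.log(z)
            r1 = p1 / z                       # posterior P(H = 1 | x)
            n_h1 += r1
            n_h1_x1 += r1 * x
            n_h0_x1 += (1 - r1) * x
        return len(shard), n_h1, n_h1_x1, n_h0_x1, ll

    def m_step_reduce(partials):
        """Reduce task: sum the shard counts, then re-estimate the parameters."""
        n = sum(p[0] for p in partials)
        n_h1 = sum(p[1] for p in partials)
        n_h1_x1 = sum(p[2] for p in partials)
        n_h0_x1 = sum(p[3] for p in partials)
        ll = sum(p[4] for p in partials)
        return n_h1 / n, [n_h0_x1 / (n - n_h1), n_h1_x1 / n_h1], ll

    if __name__ == "__main__":
        rng = random.Random(1)
        data = [int(rng.random() < (0.9 if rng.random() < 0.3 else 0.2))
                for _ in range(4000)]
        shards = [data[i::8] for i in range(8)]     # 8 "mappers"
        prior_h1, p_x1_given_h = 0.5, [0.4, 0.6]
        for _ in range(30):                         # one MapReduce job per EM iteration
            partials = [e_step_map(s, prior_h1, p_x1_given_h) for s in shards]
            prior_h1, p_x1_given_h, ll = m_step_reduce(partials)
        print(f"P(H=1)={prior_h1:.2f}  "
              f"P(X=1|H)={[round(p, 2) for p in p_x1_given_h]}  ll={ll:.1f}")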


Age-Layered Expectation Maximization For Parameter Learning In Bayesian Networks, Avneesh Saluja, Priya Sundararajan, Ole J. Mengshoel, Apr 2012

The expectation maximization (EM) algorithm is a popular algorithm for parameter estimation in models with hidden variables. However, the algorithm has several non-trivial limitations, a significant one being variation in the eventual solutions found, due to convergence to local optima. Several techniques have been proposed to allay this problem, for example initializing EM from multiple random starting points and selecting the highest-likelihood result among all runs. In this work, we (a) show that this method can be computationally very expensive for difficult Bayesian networks, and (b) in response propose an age-layered EM approach (ALEM) that efficiently discards less promising …
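A simplified reading, not the paper's exact algorithm: the sketch below keeps only the core idea of discarding less promising runs. A pool of randomly initialized EM runs on the same toy Bernoulli mixture is advanced a few iterations per round, and after each round the runs whose log-likelihood lags the rest are pruned, so computation concentrates on the promising restarts. All names and constants are illustrative assumptions.

    # Simplified age-layered-style EM: advance a pool of restarts in rounds and
    # prune the runs with the lowest log-likelihood after each round.
    import math
    import random

    def em_step(data, pi, theta):
        """One EM iteration for a toy 2-component Bernoulli mixture."""
        n1 = n1x = n0x = ll = 0.0
        for x in data:
            p1 = pi * (theta[1] if x else 1 - theta[1])
            p0 = (1 - pi) * (theta[0] if x else 1 - theta[0])
            z = p0 + p1
            ll += math.log(z)
            r1 = p1 / z
            n1 += r1
            n1x += r1 * x
            n0x += (1 - r1) * x
        return n1 / len(data), [n0x / (len(data) - n1), n1x / n1], ll

    def pruned_restart_em(data, pool_size=16, rounds=10, steps_per_round=5, keep_frac=0.5):
        """Run a pool of restarts, pruning the least promising ones each round."""
        rng = random.Random(0)
        pool = [(rng.uniform(0.2, 0.8), [rng.random(), rng.random()], float("-inf"))
                for _ in range(pool_size)]
        for _ in range(rounds):
            advanced = []
            for pi, theta, _ in pool:
                ll = float("-inf")
                for _ in range(steps_per_round):   # age each surviving run a bit more
                    pi, theta, ll = em_step(data, pi, theta)
                advanced.append((pi, theta, ll))
            advanced.sort(key=lambda r: r[2], reverse=True)
            pool = advanced[:max(1, int(len(advanced) * keep_frac))]
        return pool[0]

    if __name__ == "__main__":
        rng = random.Random(7)
        data = [int(rng.random() < (0.85 if rng.random() < 0.4 else 0.15))
                for _ in range(1500)]
        pi, theta, ll = pruned_restart_em(data)
        print(f"P(H=1)={pi:.2f}  P(X=1|H)={[round(t, 2) for t in theta]}  ll={ll:.1f}")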