Probabilistic inference is the process of updating a priori uncertainty with new information, such as new data samples, and is a cornerstone of modern machine learning and applied statistics. Using a probabilistic approach, the practitioner can incorporate various notions of prior uncertainty, handle samples with missing data, and provide confidence estimates for model predictions. Divergence methods define the posterior distribution as the minimizer of a divergence to a prior distribution, subject to additional constraints defined by the observed data and other new information. The workshop aims to explore divergence methods for probabilistic inference, considering issues such as the choice of divergence, the choice of constraints, efficient and scalable inference, and applications to big data.
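As one concrete illustration (a minimal sketch in notation of our own choosing, not drawn from the workshop description): taking the divergence to be the Kullback-Leibler divergence and encoding the observed data $x$ through an expected log-likelihood term gives the familiar variational characterization

$$
q^{\star} \;=\; \underset{q}{\arg\min}\;\; \mathrm{KL}\!\left( q(\theta) \,\middle\|\, p(\theta) \right) \;-\; \mathbb{E}_{q}\!\left[ \log p(x \mid \theta) \right],
$$

whose minimizer over all distributions is exactly the Bayes posterior $p(\theta \mid x)$; substituting other divergences or other constraint sets yields the broader family of inference procedures the workshop considers.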

Researchers in various sub-fields of machine learning and statistics such as natural language processing, collaborative filtering and neuroinformatics regularly employ divergence methods for probabilistic inference. The goal of this workshop is to provide a venue for researchers in these disparate fields to interact, map out the state of the art in the field, and encourage discussion to stimulate new theoretical and practical developments.
