
Handbook of Mixture Analysis [electronic resource] / edited by Sylvia Frühwirth-Schnatter, Gilles Celeux, Christian P. Robert.

Contributor(s): Frühwirth-Schnatter, Sylvia, 1959- | Celeux, Gilles | Robert, Christian P., 1961-
Material type: Book
Publisher: Milton : Chapman and Hall/CRC, 2018
Description: 1 online resource (522 p.)
ISBN: 9780429508240; 0429508247; 9780429055911; 0429055919; 9780429508868; 0429508867; 9780429509483; 0429509480
Subject(s): COMPUTERS / Machine Theory | MATHEMATICS / Probability & Statistics / General | Mixture distributions (Probability theory) | Distribution (Probability theory)
DDC classification: 519.24
Online resources: Taylor & Francis | OCLC metadata license agreement
Contents:
Cover; Half Title; Title Page; Copyright Page; Table of Contents; Preface; Editors; Contributors; List of Symbols
I: Foundations and Methods
1: Introduction to Finite Mixtures; 1.1 Introduction and Motivation; 1.1.1 Basic formulation; 1.1.2 Likelihood; 1.1.3 Latent allocation variables; 1.1.4 A little history; 1.2 Generalizations; 1.2.1 Infinite mixtures; 1.2.2 Continuous mixtures; 1.2.3 Finite mixtures with nonparametric components; 1.2.4 Covariates and mixtures of experts; 1.2.5 Hidden Markov models; 1.2.6 Spatial mixtures; 1.3 Some Technical Concerns; 1.3.1 Identifiability; 1.3.2 Label switching; 1.4 Inference; 1.4.1 Frequentist inference, and the role of EM; 1.4.2 Bayesian inference, and the role of MCMC; 1.4.3 Variable number of components; 1.4.4 Modes versus components; 1.4.5 Clustering and classification; 1.5 Concluding Remarks; Bibliography
2: EM Methods for Finite Mixtures; 2.1 Introduction; 2.2 The EM Algorithm; 2.2.1 Description of EM for finite mixtures; 2.2.2 EM as an alternating-maximization algorithm; 2.3 Convergence and Behavior of EM; 2.4 Cousin Algorithms of EM; 2.4.1 Stochastic versions of the EM algorithm; 2.4.2 The Classification EM algorithm; 2.5 Accelerating the EM Algorithm; 2.6 Initializing the EM Algorithm; 2.6.1 Random initialization; 2.6.2 Hierarchical initialization; 2.6.3 Recursive initialization; 2.7 Avoiding Spurious Local Maximizers; 2.8 Concluding Remarks; Bibliography
3: An Expansive View of EM Algorithms; 3.1 Introduction; 3.2 The Product-of-Sums Formulation; 3.2.1 Iterative algorithms and the ascent property; 3.2.2 Creating a minorizing surrogate function; 3.3 Likelihood as a Product of Sums; 3.4 Non-standard Examples of EM Algorithms; 3.4.1 Modes of a density; 3.4.2 Gradient maxima; 3.4.3 Two-step EM; 3.5 Stopping Rules for EM Algorithms; 3.6 Concluding Remarks; Bibliography
4: Bayesian Mixture Models: Theory and Methods; 4.1 Introduction; 4.2 Bayesian Mixtures: From Priors to Posteriors; 4.2.1 Models and representations; 4.2.2 Impact of the prior distribution; 4.2.2.1 Conjugate priors; 4.2.2.2 Improper and non-informative priors; 4.2.2.3 Data-dependent priors; 4.2.2.4 Priors for overfitted mixtures; 4.3 Asymptotic Properties of the Posterior Distribution in the Finite Case; 4.3.1 Posterior concentration around the marginal density; 4.3.2 Recovering the parameters in the well-behaved case; 4.3.3 Boundary parameters: overfitted mixtures; 4.3.4 Asymptotic behaviour of posterior estimates of the number of components; 4.4 Concluding Remarks; Bibliography
5: Computational Solutions for Bayesian Inference in Mixture Models; 5.1 Introduction; 5.2 Algorithms for Posterior Sampling; 5.2.1 A computational problem? Which computational problem?; 5.2.2 Gibbs sampling; 5.2.3 Metropolis-Hastings schemes; 5.2.4 Reversible jump MCMC; 5.2.5 Sequential Monte Carlo; 5.2.6 Nested sampling; 5.3 Bayesian Inference in the Model-Based Clustering Context; 5.4 Simulation Studies
Summary: Mixture models have been around for over 150 years and are found in many branches of statistical modelling as a versatile and multifaceted tool. They can be applied to a wide range of data: univariate or multivariate, continuous or categorical, cross-sectional, time series, networks, and much more. Mixture analysis is a very active research topic in statistics and machine learning, with new developments in methodology and applications taking place all the time. The Handbook of Mixture Analysis is a timely publication, presenting a broad overview of the methods and applications of this important field of research. It covers a wide array of topics, including the EM algorithm, Bayesian mixture models, model-based clustering, high-dimensional data, hidden Markov models, and applications in finance, genomics, and astronomy.
Features:
- Provides a comprehensive overview of the methods and applications of mixture modelling and analysis
- Divided into three parts: Foundations and Methods; Mixture Modelling and Extensions; and Selected Applications
- Contains many worked examples using real data, together with computational implementation, to illustrate the methods described
- Includes contributions from the leading researchers in the field
The Handbook of Mixture Analysis is targeted at graduate students and young researchers new to the field. It will also be an important reference for anyone working in this field, whether they are developing new methodology or applying the models to real scientific problems.
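As a minimal illustration of the handbook's subject matter (a sketch written for this record, not code drawn from the book), the standard EM updates for a two-component univariate Gaussian mixture can be written in a few lines of NumPy; the function name and quantile-based initialization below are illustrative choices, not the book's:

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=200):
    """Fit a two-component univariate Gaussian mixture by EM.

    Returns (weights, means, variances) after n_iter EM updates.
    """
    # Deterministic initialization: weights equal, means at the
    # 25th/75th percentiles, both variances set to the overall variance.
    w = np.array([0.5, 0.5])
    mu = np.quantile(x, [0.25, 0.75])
    var = np.array([x.var(), x.var()])

    for _ in range(n_iter):
        # E step: responsibilities r[i, k] proportional to
        # w_k * Normal(x_i | mu_k, var_k), normalized over k.
        diff = x[:, None] - mu[None, :]
        dens = np.exp(-0.5 * diff**2 / var) / np.sqrt(2 * np.pi * var)
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)

        # M step: weighted maximum-likelihood updates.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu)**2).sum(axis=0) / nk

    return w, mu, var
```

On well-separated simulated data (e.g. a 30/70 mix of Normal(-3, 1) and Normal(3, 1)), the fitted weights and means should land close to the generating values; the handbook's Chapters 2 and 3 discuss why EM monotonically increases the likelihood and how initialization and spurious local maximizers complicate this picture in practice.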
No physical items for this record

Description based upon print version of record.

5.4.1 Known number of components


OCLC-licensed vendor bibliographic record.
