The expectation-maximization (EM) algorithm is an iterative method for estimating the parameters of statistical models that involve unobserved latent variables. Each iteration alternates between an expectation (E) step, which computes the expected log-likelihood under the distribution of the latent variables implied by the current parameter estimates, and a maximization (M) step, which finds new parameter estimates that maximize that expected log-likelihood. The updated parameters then determine the distribution of the latent variables used in the next E step.
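To make the alternation concrete, the sketch below runs EM on a two-component one-dimensional Gaussian mixture; the mixture model, initialization, and synthetic data are illustrative assumptions of this sketch, not details taken from the text above.

```python
# Minimal EM sketch for a two-component 1-D Gaussian mixture.
# Model choice, initialization, and data are illustrative assumptions.
import numpy as np

def em_gaussian_mixture(x, n_iter=50):
    """Estimate means, variances, and mixing weights of a 2-component mixture."""
    # Crude initialization from the data (an assumption of this sketch).
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()], dtype=float)
    pi = np.array([0.5, 0.5])

    for _ in range(n_iter):
        # E-step: responsibility of each component for each point,
        # computed from the current parameter estimates.
        dens = (pi / np.sqrt(2 * np.pi * var)) * \
               np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        resp = dens / dens.sum(axis=1, keepdims=True)

        # M-step: parameters that maximize the expected log-likelihood
        # under the responsibilities just computed.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)

    return mu, var, pi

# Example usage on synthetic data drawn from two known Gaussians.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
print(em_gaussian_mixture(data))
```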
Stanford University
Winter 2023
This comprehensive course covers machine learning principles spanning supervised, unsupervised, and reinforcement learning. Topics include neural networks, support vector machines, the bias-variance tradeoff, and many real-world applications. It requires a background in computer science, probability, multivariable calculus, and linear algebra.