
COS 324: Introduction to Machine Learning

Spring 2019

Princeton University

This introductory course focuses on machine learning, probabilistic reasoning, and decision-making in uncertain environments. Blending theory and practice, the course examines how systems can learn from experience and manage real-world uncertainty.

Course Page

Overview

This course provides a broad introduction to machine learning, probabilistic reasoning, and decision-making in uncertain environments. It should be of interest to undergraduate students in computer science, applied mathematics, the sciences, and engineering, as well as to beginning graduate students seeking an introduction to the tools of machine learning and probabilistic reasoning, with applications to data-intensive problems in the applied, natural, and social sciences.

For students with interests in the fundamentals of machine learning and probabilistic artificial intelligence, this course will address three central, related questions in the design and engineering of intelligent systems. How can a system process its perceptual inputs in order to obtain a reasonable picture of the world? How can we build programs that learn from experience? How can we design systems to deal with the inherent uncertainty in the real world?

Our approach to these questions will be both theoretical and practical. We will develop a mathematical underpinning for the methods of machine learning and probabilistic reasoning. We will look at a variety of successful algorithms and applications. We will also discuss the motivations behind the algorithms, and the properties that determine whether or not they will work well for a particular task.

Prerequisites

Students should be comfortable writing non-trivial programs in Python, and should have a background in basic probability theory along with some mathematical sophistication, including calculus and linear algebra.

Textbooks and other notes

There is no required textbook for the course. This course has its own notes that are considered the required reading. Nevertheless, people learn in different ways and seeing the material presented in different formats can be valuable. To that end, additional optional material is linked on the course website and several books provide useful additional reading:

  • Kevin Murphy. Machine Learning: A Probabilistic Perspective. MIT Press. 2012.
  • Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer. 2006.
  • David J.C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press. 2003. Freely available online at http://www.inference.org.uk/itila/book.html.
  • Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Springer. 2001. Freely available online at http://www-stat.stanford.edu/~tibs/ElemStatLearn/.
  • Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani. An Introduction to Statistical Learning. Springer. 2013. Freely available online at http://www-bcf.usc.edu/~gareth/ISL/.
  • Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. Second edition. MIT Press. 2018. Freely available online at http://incompleteideas.net/book/the-book-2nd.html.

Other courses in Machine Learning

CS 224W: Machine Learning with Graphs

Winter 2023

Stanford University

CS 229: Machine Learning

Winter 2023

Stanford University

CS 228: Probabilistic Graphical Models

Winter 2023

Stanford University

CS 246: Mining Massive Data Sets

Spring 2023

Stanford University

Courseware availability

Course notes available on the Schedule page

No videos available

Assignments available on the Assignments page

No other materials available