Dimensionality reduction is the transformation of data from a high-dimensional space into a low-dimensional space, ideally so that the low-dimensional representation retains meaningful properties of the original data. It is used in fields that work with large numbers of observations and/or variables. Approaches can be divided into linear and nonlinear methods, and into feature selection and feature extraction. Dimensionality reduction can be used for noise reduction, data visualization, cluster analysis, or as an intermediate step to facilitate other analyses.
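As a concrete illustration, a linear method such as principal component analysis can project high-dimensional data down to two dimensions for visualization. The sketch below is a minimal example, assuming scikit-learn and its bundled digits dataset are available; it is not tied to any of the courses listed on this page.

```python
# Minimal sketch of linear dimensionality reduction with PCA (assumes scikit-learn).
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, y = load_digits(return_X_y=True)   # 1797 samples, 64 pixel features each
pca = PCA(n_components=2)             # keep only the top two principal components
X_2d = pca.fit_transform(X)           # project every sample into the 2-D space

print(X.shape, "->", X_2d.shape)          # (1797, 64) -> (1797, 2)
print(pca.explained_variance_ratio_)      # fraction of variance captured by each component
```

The two resulting columns can be plotted directly, which is the typical visualization use case mentioned above.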
Stanford University
Spring 2022
CS 168 provides a comprehensive introduction to modern algorithm concepts, covering hashing, dimension reduction, linear programming, gradient descent, and regression. It emphasizes both theoretical understanding and practical application, with each topic complemented by a mini-project. It is suitable for students who have taken CS107 and CS161.
Stanford University
Spring 2023
This course focuses on data mining and machine learning algorithms for large-scale data analysis. The emphasis is on parallel algorithms built on tools such as MapReduce and Spark. Topics include frequent itemsets, locality-sensitive hashing, clustering, link analysis, and large-scale supervised machine learning. Familiarity with Java, Python, basic probability theory, linear algebra, and algorithmic analysis is required.
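One of the topics listed above, locality-sensitive hashing, can be sketched in a few lines of plain Python using MinHash signatures. This is an illustrative sketch only; the example documents, number of hash functions, and hash family are assumptions, not course material.

```python
# Illustrative MinHash sketch for locality-sensitive hashing of sets.
import random

def minhash_signature(items, num_hashes=64, seed=0):
    """For each of num_hashes salted hash functions, record the minimum hash over the set."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(32) for _ in range(num_hashes)]
    return [min(hash((salt, item)) for item in items) for salt in salts]

def estimated_jaccard(sig_a, sig_b):
    """The fraction of matching signature positions approximates the Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

doc_a = set("the quick brown fox jumps over the lazy dog".split())
doc_b = set("the quick brown fox leaps over a lazy dog".split())

sig_a = minhash_signature(doc_a)
sig_b = minhash_signature(doc_b)
print(estimated_jaccard(sig_a, sig_b))  # close to the true Jaccard similarity of the word sets
```

Comparing short fixed-length signatures instead of the full sets is what makes near-duplicate detection feasible at large scale.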
Carnegie Mellon University
Spring 2018
A comprehensive exploration of machine learning theory and practical algorithms. Covers a broad spectrum of topics, including decision tree learning, neural networks, statistical learning, and reinforcement learning. Encourages hands-on learning via programming assignments.