Hyperparameter optimization is the process of finding the hyperparameter settings of a machine learning algorithm that yield the best performance. Unlike model parameters such as weights, which are learned during training, hyperparameters (for example the learning rate, regularization strength, or network depth) are set before training begins. The search aims to find the configuration that minimizes a predefined loss function; cross-validation is commonly used to estimate generalization performance and choose among candidate settings.
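The process above can be sketched with a minimal grid search over a single hyperparameter, scored by k-fold cross-validation. This is an illustrative example, not a reference implementation: it assumes ridge regression with a closed-form solution, a synthetic dataset, and a hypothetical grid of regularization strengths.

```python
import numpy as np

def kfold_cv_score(X, y, lam, k=5):
    """Mean validation MSE of ridge regression across k folds."""
    n = len(X)
    idx = np.arange(n)
    folds = np.array_split(idx, k)
    errors = []
    for val in folds:
        train = np.setdiff1d(idx, val)
        Xt, yt = X[train], y[train]
        # Closed-form ridge solution: w = (X'X + lam*I)^-1 X'y
        w = np.linalg.solve(Xt.T @ Xt + lam * np.eye(X.shape[1]), Xt.T @ yt)
        pred = X[val] @ w
        errors.append(np.mean((pred - y[val]) ** 2))
    return float(np.mean(errors))

# Synthetic data (assumed here purely for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + rng.normal(scale=0.1, size=100)

# Grid search: pick the hyperparameter with the lowest cross-validated loss.
grid = [0.01, 0.1, 1.0, 10.0]
best_lam = min(grid, key=lambda lam: kfold_cv_score(X, y, lam))
print("best lambda:", best_lam)
```

In practice the same loop structure extends to multiple hyperparameters (as a Cartesian product of grids) or is replaced by random or Bayesian search when the grid grows too large.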
Stanford University
Fall 2022
An in-depth course focused on building neural networks and leading successful machine learning projects. It covers convolutional networks, RNNs, LSTMs, Adam, Dropout, BatchNorm, Xavier/He initialization, and more. Students are expected to have basic computer science skills and familiarity with probability theory and linear algebra.
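Of the techniques listed, the initialization schemes are easy to illustrate directly. The sketch below shows the standard Xavier/Glorot and He weight initializations for a fully connected layer; the layer sizes are arbitrary assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

def xavier_init(fan_in, fan_out):
    # Xavier/Glorot: variance 2 / (fan_in + fan_out),
    # designed to keep activation variance stable for tanh/sigmoid units.
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_in, fan_out))

def he_init(fan_in, fan_out):
    # He: variance 2 / fan_in, which compensates for ReLU
    # zeroing out roughly half of the activations.
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

# Example layer sizes (assumed for illustration).
W_xavier = xavier_init(256, 128)
W_he = he_init(256, 128)
print(W_xavier.std(), W_he.std())
```

The printed sample standard deviations should be close to the target values sqrt(2/384) and sqrt(2/256) respectively.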
Stanford University
Spring 2022
This is a deep dive into the details of deep learning architectures for visual recognition tasks. The course teaches students to implement and train their own neural networks and to understand state-of-the-art computer vision research. It requires Python proficiency and familiarity with calculus, linear algebra, probability, and statistics.