Adam

Stochastic gradient descent is an iterative method for optimizing an objective function with smoothness properties. It replaces the actual gradient, computed from the entire data set, with an estimate computed from a randomly selected subset of the data. This reduces the computational burden and allows faster iterations, at the cost of a lower convergence rate. Adam builds on this idea, adapting per-parameter step sizes using running estimates of the first and second moments of the gradient.
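
To make the update concrete, below is a minimal NumPy sketch of one Adam-style minibatch step: the gradient is estimated from a random subset of the data and the parameters are adjusted using the standard Adam moment estimates with bias correction. The toy regression data, batch size, and hyperparameter values are illustrative assumptions, not part of the course material.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient and its square,
    bias-corrected, then a scaled parameter step."""
    m = beta1 * m + (1 - beta1) * grad       # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2  # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)             # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minibatch loop: estimate the gradient from a random subset of the data,
# then apply the Adam update (toy linear-regression example, assumed for illustration).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 5)), rng.normal(size=1000)
theta = np.zeros(5)
m, v = np.zeros(5), np.zeros(5)
for t in range(1, 201):
    idx = rng.choice(len(X), size=32, replace=False)    # random minibatch
    xb, yb = X[idx], y[idx]
    grad = 2 * xb.T @ (xb @ theta - yb) / len(idx)      # gradient of mean squared error
    theta, m, v = adam_step(theta, grad, m, v, t)
```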

1 course covers this concept

CS231n: Deep Learning for Computer Vision

Stanford University

Spring 2022

This is a deep dive into the details of deep learning architectures for visual recognition tasks. The course teaches students to implement and train their own neural networks and to understand state-of-the-art computer vision research. It requires Python proficiency and familiarity with calculus, linear algebra, probability, and statistics.
