Batch Normalization

Batch normalization is a technique used to improve the training of artificial neural networks by normalizing the inputs of each layer. It was proposed in 2015 and, while widely adopted, the reason for its effectiveness is still debated: the original authors attributed it to a reduction in internal covariate shift, whereas later work suggests it improves performance by smoothing the objective function. It has also been found that batch normalization can cause gradient explosion in deep networks at initialization, an effect that is mitigated by the skip connections in residual networks.
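
The normalization step itself is simple to express. Below is a minimal NumPy sketch of the training-time forward pass; the function name, parameters, and toy data are illustrative rather than taken from either course's materials, and a real implementation would also track running mean and variance estimates for use at inference time.

    import numpy as np

    def batch_norm_forward(x, gamma, beta, eps=1e-5):
        # x: (batch_size, num_features) activations entering a layer
        # gamma, beta: (num_features,) learned scale and shift parameters
        # Per-feature statistics computed over the mini-batch dimension.
        mu = x.mean(axis=0)
        var = x.var(axis=0)
        # Normalize to zero mean and unit variance, then scale and shift
        # with the learned parameters so the layer keeps its expressive power.
        x_hat = (x - mu) / np.sqrt(var + eps)
        return gamma * x_hat + beta

    # Toy mini-batch: 32 samples, 4 features, deliberately off-scale.
    x = np.random.randn(32, 4) * 5.0 + 3.0
    out = batch_norm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
    print(out.mean(axis=0))  # close to 0 for every feature
    print(out.std(axis=0))   # close to 1 for every feature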

2 courses cover this concept

CS 230 Deep Learning

Stanford University

Fall 2022

An in-depth course focused on building neural networks and leading successful machine learning projects. It covers Convolutional Networks, RNNs, LSTM, Adam, Dropout, BatchNorm, Xavier/He initialization, and more. Students are expected to have basic computer science skills, knowledge of probability theory, and familiarity with linear algebra.


CS231n: Deep Learning for Computer Vision

Stanford University

Spring 2022

This is a deep dive into the details of deep learning architectures for visual recognition tasks. The course gives students the ability to implement and train their own neural networks and to understand state-of-the-art computer vision research. It requires Python proficiency and familiarity with calculus, linear algebra, probability, and statistics.
