Batch normalization (also known as batch norm) is a technique for improving the speed, performance, and stability of artificial neural networks.
It normalizes the inputs of each layer by re-centering and re-scaling them over the current mini-batch. With the layer inputs standardized in this way, gradient descent typically converges faster and tolerates higher learning rates.
For a mini-batch $B = \{x_1, \ldots, x_m\}$, batch normalization computes

$\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad \sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m} (x_i - \mu_B)^2,$

$\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad y_i = \gamma \hat{x}_i + \beta,$

where $\epsilon$ is a small constant added for numerical stability, and $\gamma$ and $\beta$ are learnable scale and shift parameters.
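As a rough illustration, the following NumPy sketch applies the training-time transform above to a mini-batch of feature vectors. The function name batch_norm, the epsilon value, and the choice of gamma and beta are illustrative assumptions, not part of any particular library.

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: mini-batch of shape (m, d); statistics are computed per feature over the batch
    mu = x.mean(axis=0)                    # per-feature mean, mu_B
    var = x.var(axis=0)                    # per-feature variance, sigma_B^2
    x_hat = (x - mu) / np.sqrt(var + eps)  # re-center and re-scale
    return gamma * x_hat + beta            # learnable scale (gamma) and shift (beta)

# Example: normalize a batch of 4 samples with 3 features each
x = np.random.randn(4, 3) * 10 + 5
y = batch_norm(x, gamma=np.ones(3), beta=np.zeros(3))

At inference time, implementations typically replace the per-batch statistics with running averages of the mean and variance collected during training.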