PadhAI-Deep Learning Course
Apr 14, 2020
This article is the index page for the articles covering the modules of the Deep Learning course.
- Expert Systems
- 6 Jars of Machine Learning
- Vectors and Matrices
- MP Neuron
- Perceptron
- Sigmoid Neuron and Gradient Descent Part 1
- Sigmoid Neuron and Gradient Descent Part 2
- Mathematics behind the parameters update rule
- Basics: Probability Theory
- Information Theory
- Sigmoid Neuron and Cross-Entropy
- Representation Power of Functions
- Feedforward Neural Networks Part 1
- Feedforward Neural Networks Part 2
- Backpropagation (light math)
- Backpropagation (vectorized)
- Optimization Algorithms (Part 1)
- Optimization Algorithms (Part 2)
- Activation Functions and Initialization Methods
- Bias-variance Tradeoff
- Regularization Methods
- The Convolutional Operation
- Convolutional Neural Networks
- CNN Architectures
- CNN Architectures Part 2
- Visualizing CNNs
- Batch Normalization
- Dropout Technique
- Sequence Learning Problems
- How to model sequence learning problems?
- Data and Tasks for Sequence Classification
- Data and Tasks for Sequence Labeling
- Data Modeling for Recurrent Neural Networks
- The loss function for Recurrent Neural Networks
- Learning Algorithm for Recurrent Neural Networks
- Evaluation metrics for sequence-based tasks
- Vanishing and Exploding gradients
- The whiteboard analogy to deal with vanishing and exploding gradients
- LSTMs and GRUs
References: PadhAI