Feedforward Neural Networks — Part 1

Parveen Khurana
16 min read · Jan 21, 2020

This article covers the content discussed in the Feedforward Neural Networks module of the Deep Learning course and all the images are taken from the same module.

So far, we have discussed the MP Neuron, Perceptron, and Sigmoid Neuron models, none of which can deal with non-linear data. Then, in the last article, we saw the Universal Approximation Theorem (UAT), which says that a deep neural network can approximate the relationship between the input and the output no matter how complex that relationship is.

In this article, we discuss the Feedforward neural network and where it stands with respect to the 6 jars of ML.

We will now start dealing with multi-class classification as well, and we will finally be able to handle non-linearly separable data. We will also discuss task-specific loss functions, not just the squared error loss.
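To make "task-specific loss" concrete, here is a minimal pure-Python sketch (not from the course module; the example vectors are illustrative) contrasting multi-class cross-entropy with squared error. Cross-entropy only looks at the probability the model assigns to the true class, which is what makes it a natural fit for classification:

```python
import math

def cross_entropy(y_true, y_pred):
    """Multi-class cross-entropy: -log of the probability
    assigned to the true class (y_true is one-hot)."""
    return -sum(t * math.log(p) for t, p in zip(y_true, y_pred) if t > 0)

def squared_error(y_true, y_pred):
    """Squared error loss, summed over all classes."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred))

# True class is index 2; the model assigns it probability 0.7
y_true = [0.0, 0.0, 1.0, 0.0]
y_pred = [0.1, 0.1, 0.7, 0.1]

ce = cross_entropy(y_true, y_pred)   # -log(0.7) ≈ 0.357
se = squared_error(y_true, y_pred)   # 0.01 + 0.01 + 0.09 + 0.01 = 0.12
```

Note how cross-entropy blows up as the probability of the true class goes to zero, while squared error stays bounded; this gives much stronger gradients when the model is confidently wrong.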

Data and Tasks:

Let’s look at what data and tasks DNNs have been used for:

First is the MNIST dataset; the task here is that given an image of a handwritten digit, we have to identify which of the 10 digits (0 to 9) it belongs to.
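The 10-class setup above can be sketched as a single forward pass through a small feedforward network: a flattened 28×28 input, one hidden layer, and a softmax output giving a probability for each digit. This is a minimal pure-Python illustration, not the course's implementation; the layer sizes and random weights are assumptions for the example:

```python
import math
import random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def softmax(z):
    # Subtract the max for numerical stability before exponentiating
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def dense(x, W, b, activation=None):
    """Fully connected layer: z_j = sum_i W[j][i] * x[i] + b[j]."""
    z = [sum(w * xi for w, xi in zip(row, x)) + bj for row, bj in zip(W, b)]
    return [activation(v) for v in z] if activation else z

random.seed(0)
n_in, n_hidden, n_out = 784, 32, 10  # 28x28 pixels -> 32 hidden units -> 10 digits

W1 = [[random.gauss(0, 0.01) for _ in range(n_in)] for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
W2 = [[random.gauss(0, 0.01) for _ in range(n_hidden)] for _ in range(n_out)]
b2 = [0.0] * n_out

x = [0.5] * n_in                      # placeholder for a flattened MNIST image
h = dense(x, W1, b1, sigmoid)         # hidden layer with sigmoid activation
probs = softmax(dense(h, W2, b2))     # probability distribution over the 10 digits
```

The predicted digit is simply the index with the highest probability; training (which we get to later in the module) adjusts `W1, b1, W2, b2` to make that index match the true label.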
