Sequence Learning Problems

Parveen Khurana
10 min read · Jul 6, 2019

In all of the networks that we have covered so far (Fully Connected Neural Networks (FCNN), Convolutional Neural Networks (CNN)):

  • the output at any time step is independent of the previous inputs/outputs
  • the input is always of a fixed length/size

Say the neural network below is used to link a patient’s biological parameters to health risk (essentially a classifier predicting whether the patient is at health risk or not); then the model’s output for patient 2 would not be linked in any way to the model’s output for patient 1.

And all the patients/cases passed as input to this model would have the same number of parameters (height, weight, …, sugar, etc.).

Fully Connected Neural Network (say all inputs are numeric and are standardized before being passed to the model)
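To make this independence concrete, here is a minimal NumPy sketch (not from the article); the layer sizes, weight names, and the sigmoid readout are illustrative assumptions. Each patient’s standardized feature vector is pushed through the same fixed weights, and nothing from one forward pass carries over to the next.

```python
import numpy as np

# Illustrative sizes: 5 standardized parameters (height, weight, ..., sugar)
# feeding one hidden layer and a single "at risk / not at risk" output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_risk(x):
    """Forward pass for ONE patient; depends only on x and the fixed weights."""
    h = np.tanh(x @ W1 + b1)
    return sigmoid(h @ W2 + b2)

patient_1 = rng.normal(size=5)   # standardized biological parameters
patient_2 = rng.normal(size=5)

# The two calls share no state: the result for patient 2 is the same
# whether or not patient 1 was processed first.
print(predict_risk(patient_1), predict_risk(patient_2))
```

Swapping the order of the two calls, or dropping one of them entirely, leaves the other prediction unchanged.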

Similarly, for a CNN (say, image classification), whether the output for input 1 was apple/bus/car/any other class, it would have no impact on the output for input 2.

Let’s say all the input images are of size 30 x 30 (or, if they are of a different size, we can rescale them to the required/appropriate dimensions) so that all inputs have the same size.
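As a small sketch of this fixed-size constraint (the file names and the 30 x 30 target here are assumptions for illustration, using Pillow for the rescaling), every image can be brought to the same dimensions before it is handed to the network:

```python
import numpy as np
from PIL import Image  # Pillow

TARGET = (30, 30)  # the fixed spatial size the CNN expects

def load_as_fixed_size(path):
    """Open an image, rescale it to 30 x 30 if needed, and return an array."""
    img = Image.open(path).convert("RGB")
    if img.size != TARGET:
        img = img.resize(TARGET)  # bring differently sized inputs to 30 x 30
    return np.asarray(img, dtype=np.float32) / 255.0

# Hypothetical files: whatever class image_1 turns out to be (apple, bus, car, ...)
# has no bearing on the prediction the network makes for image_2.
batch = np.stack([load_as_fixed_size(p) for p in ["image_1.jpg", "image_2.jpg"]])
print(batch.shape)  # (2, 30, 30, 3) -- every input now has the same size
```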

In general, we can say that for fully connected neural networks and convolutional neural networks, the output at time step “t” is independent of any of the previous inputs/outputs.
