Sequence Learning Problems
In all of the networks we have covered so far (Fully Connected Neural Networks (FCNNs) and Convolutional Neural Networks (CNNs)):
- the output at any time step is independent of the previous inputs/outputs
- the input is always of a fixed length/size
Say a neural network is used to link a patient’s biological parameters to health risk (essentially a classifier predicting whether the patient is at risk or not). Then the model’s output for patient 2 would not be linked in any way to the model’s output for patient 1.
And all the patients/cases fed as input to this model would have the same number of parameters (height, weight, …, sugar, …etc.)
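The two properties above can be seen in a minimal NumPy sketch of such a risk classifier. The layer sizes and the five-feature patient vector are illustrative assumptions, not part of the original setup; the point is that each prediction depends only on the current input:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical fixed-length feature vector per patient (5 features here,
# standing in for height, weight, ..., sugar, etc.).
W1, b1 = rng.normal(size=(5, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def predict_risk(x):
    """One feedforward pass; depends only on the current input x."""
    h = np.maximum(x @ W1 + b1, 0)           # ReLU hidden layer
    return 1 / (1 + np.exp(-(h @ W2 + b2)))  # sigmoid risk score

patient1 = rng.normal(size=5)
patient2 = rng.normal(size=5)

# The score for patient 2 is the same whether or not patient 1 was seen first.
p2_alone = predict_risk(patient2)
predict_risk(patient1)
p2_after = predict_risk(patient2)
```

Because the network holds no state between calls, `p2_alone` and `p2_after` are identical.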
Similarly, for a CNN (say, image classification), whether the output for input 1 was apple/bus/car/<any class> has no impact on the output for input 2.
Let’s say all the input images are of size 30 × 30 (and if an image is of a different size, we can rescale it to the required dimensions) so that all inputs have the same size.
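As a sketch of that rescaling step, here is a simple nearest-neighbour resize in NumPy (the function name and the nearest-neighbour choice are illustrative; in practice a library resize with interpolation would be used):

```python
import numpy as np

def rescale(img, size=(30, 30)):
    """Nearest-neighbour rescale of a 2-D image array to a fixed size."""
    h, w = img.shape
    rows = np.arange(size[0]) * h // size[0]  # source row for each target row
    cols = np.arange(size[1]) * w // size[1]  # source column for each target column
    return img[rows[:, None], cols]

# Images of different original sizes all become 30 x 30 before entering the CNN.
a = rescale(np.ones((45, 60)))
b = rescale(np.ones((128, 96)))
```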
In general, we can say that for fully connected and convolutional neural networks, the output at time step “t” is independent of any of the previous inputs/outputs.
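The fixed-size constraint also shows up directly in the arithmetic: a dense layer’s weight matrix fixes the input length at construction time, so an input of any other length cannot even be multiplied through. A small sketch (the sizes are illustrative):

```python
import numpy as np

# The weight matrix fixes the expected input length (here 5).
W = np.zeros((5, 3))

def forward(x):
    return x @ W

ok = forward(np.ones(5)).shape  # matches the fixed input size
try:
    forward(np.ones(7))         # wrong length: matmul shape mismatch
    mismatch_raised = False
except ValueError:
    mismatch_raised = True
```

This is why variable-length inputs (sentences, time series) need a different kind of architecture.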