Data and Tasks jar for Sequence Classification — Recurrent Neural Networks (RNNs)
In the last article, we touched upon the basics of RNNs and discussed how RNNs inherently cover all the desired properties of an ideal network for addressing sequence-based problems.


In this article, the Data and Tasks jar (from the 6 jars of Machine Learning) specific to Recurrent Neural Networks is discussed.
Data and Tasks
RNNs are typically used for 3 types of tasks:

Sequence Classification:
- Here the complete sequence is ingested as the input
- The model produces one output at the end — for example, whether the sequence conveys positive or negative sentiment, or whether a video-based sequence represents a specific class (example: the “Surya namaskar” pose)
- The input sequence might have “n” tokens/words/video frames, but the model produces a single output
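The “n inputs, one output” pattern above can be sketched with a vanilla RNN in NumPy. This is only an illustrative sketch with hypothetical, randomly initialized weights and made-up dimensions, not the article’s exact architecture: the hidden state is updated at every timestep, and only the final hidden state is projected to a single class distribution.

```python
import numpy as np

# Minimal vanilla-RNN sketch for sequence classification (hypothetical
# sizes and random weights, for illustration only): the model reads all
# n timesteps and produces ONE output from the final hidden state.

rng = np.random.default_rng(0)
d_in, d_hid, n_classes, n_steps = 4, 8, 2, 5  # hypothetical sizes

W_xh = rng.normal(scale=0.1, size=(d_hid, d_in))       # input -> hidden
W_hh = rng.normal(scale=0.1, size=(d_hid, d_hid))      # hidden -> hidden
W_hy = rng.normal(scale=0.1, size=(n_classes, d_hid))  # hidden -> output

def classify_sequence(x_seq):
    """x_seq: (n_steps, d_in). Returns ONE class-probability vector."""
    h = np.zeros(d_hid)
    for x_t in x_seq:                       # ingest the whole sequence
        h = np.tanh(W_xh @ x_t + W_hh @ h)  # recurrent update
    logits = W_hy @ h                       # single output at the end
    p = np.exp(logits - logits.max())
    return p / p.sum()

probs = classify_sequence(rng.normal(size=(n_steps, d_in)))
print(probs.shape)  # one probability vector for the whole sequence
```

In practice the weights would be learned (e.g. with backpropagation through time), but the forward pass above captures the key property: however long the input, there is exactly one output.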
Sequence Labeling:
- For every word in the sequence, the idea is to attach a label — say, a part-of-speech tag (whether the word is a noun, verb, adjective, and so on) or a named-entity tag
- Here an output is produced for each word in the input sequence: for “n” words in the input, the model produces “n” outputs
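The contrast with sequence classification can be made concrete with the same kind of NumPy sketch (again with hypothetical sizes and random, untrained weights): the recurrent update is identical, but a label distribution is emitted at every timestep, so n inputs yield n outputs.

```python
import numpy as np

# Minimal vanilla-RNN sketch for sequence labeling (illustrative sizes,
# random weights): unlike sequence classification, the model emits one
# label distribution at EVERY timestep.

rng = np.random.default_rng(1)
d_in, d_hid, n_labels, n_steps = 4, 8, 3, 5  # hypothetical sizes

W_xh = rng.normal(scale=0.1, size=(d_hid, d_in))
W_hh = rng.normal(scale=0.1, size=(d_hid, d_hid))
W_hy = rng.normal(scale=0.1, size=(n_labels, d_hid))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def label_sequence(x_seq):
    """x_seq: (n_steps, d_in). Returns (n_steps, n_labels):
    one label distribution per word/frame."""
    h = np.zeros(d_hid)
    outputs = []
    for x_t in x_seq:
        h = np.tanh(W_xh @ x_t + W_hh @ h)  # same recurrent update
        outputs.append(softmax(W_hy @ h))   # but an output at each step
    return np.stack(outputs)

y = label_sequence(rng.normal(size=(n_steps, d_in)))
print(y.shape)  # n outputs for n inputs
```

For part-of-speech tagging, each row of the result would be argmax-ed into a tag such as noun, verb, or adjective.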
Named entity recognition:
This refers to assigning a label (NE or not) to each word in the sequence, marking people names, location names, and organization names. Sometimes dates, numbers, and so on are also considered named entities, depending on the application.
So, for each word the model provides a label indicating whether it is a named entity (NE) or not. For example, if the input is “Ram went to Delhi yesterday”, the named entities are marked with the “NE” label in the snippet below.
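The expected word-by-word labeling can be illustrated with a toy lookup-based tagger (a stand-in for the RNN, just to show the label scheme — the `gazetteer` set of known entity names is hypothetical):

```python
# Toy named-entity tagger: a dictionary lookup stands in for the RNN,
# purely to illustrate the per-word NE / O (not an entity) labeling.
gazetteer = {"Ram", "Delhi"}  # hypothetical list of known entity names

sentence = "Ram went to Delhi yesterday".split()
labels = ["NE" if word in gazetteer else "O" for word in sentence]

for word, label in zip(sentence, labels):
    print(f"{word}\t{label}")
```

A trained sequence-labeling RNN would replace the lookup, predicting the NE/O label for each word from context rather than from a fixed list.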