Binary text classification.
The solution is implemented as several models combined into an ensemble. With the chosen architecture,
we were able to achieve batch processing.
The architecture is built as follows:
splitter
```
input [
  {
    name: "text"
    data_type: TYPE_STRING
    dims: [ 1 ]
  }
]
```
As input, the first model takes a text.
Texts may be of different lengths and may contain different numbers of sentences.
The model splits the text into sentences.
The first model's output is an array of that text's sentences.
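A minimal sketch of what such a splitter could look like with Triton's Python backend (the tensor names `text` and `sentences` follow the description above; the naive split rule is an assumption, not the original code):

```python
import numpy as np
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    """Sketch of the splitter: one input text -> array of its sentences."""

    def execute(self, requests):
        responses = []
        for request in requests:
            # TYPE_STRING tensors arrive as numpy object arrays of bytes.
            text = pb_utils.get_input_tensor_by_name(request, "text").as_numpy()[0]
            text = text.decode("utf-8") if isinstance(text, bytes) else text
            # Naive split on '.'; the real splitting rule is an assumption.
            sentences = [s.strip() for s in text.split(".") if s.strip()]
            out = pb_utils.Tensor("sentences", np.array(sentences, dtype=np.object_))
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses
```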
batcher
As input, the model takes an array of sentences, with batch size 128.
Inside the model, a batch of the optimal length is assembled. The batch has the following structure:
`[[sent1_text1, ..., sentN_text1], [sent1_text2, ..., sentN_text2], ..., [sent1_textN, ...]]`
The model itself triggers the next stage, the classifier.
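Triggering the next stage from inside a model is what Triton's BLS (Business Logic Scripting) API provides; a hedged sketch of such a call, assuming the classifier is deployed as a model named `classifier` with tensors `sentences` and `scores`:

```python
import numpy as np
import triton_python_backend_utils as pb_utils


def run_classifier(batch_of_sentences):
    """Hypothetical helper: forward the assembled batch to the classifier
    model via BLS and return its scores."""
    infer_request = pb_utils.InferenceRequest(
        model_name="classifier",
        requested_output_names=["scores"],
        inputs=[pb_utils.Tensor("sentences",
                                np.array(batch_of_sentences, dtype=np.object_))],
    )
    infer_response = infer_request.exec()
    if infer_response.has_error():
        raise pb_utils.TritonModelException(infer_response.error().message())
    return pb_utils.get_output_tensor_by_name(infer_response, "scores").as_numpy()
```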
classifier
As input, it takes an array of sentences with batch size -1 (variable length), collected in the batcher.
As output, the model returns an array of the same length as the input, with a score for each sentence.
The output is returned back to the batcher model.
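For the classifier's contract (variable-length input, one score per sentence), a minimal Python-backend sketch; the dummy scoring is a placeholder for the real model, not the original implementation:

```python
import numpy as np
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    """Sketch of the classifier: N sentences in -> N scores out."""

    def execute(self, requests):
        responses = []
        for request in requests:
            sentences = pb_utils.get_input_tensor_by_name(
                request, "sentences").as_numpy()
            # Placeholder: a real model would run inference here. The only
            # contract is that len(scores) == len(sentences).
            scores = np.zeros(len(sentences), dtype=np.float32)
            out = pb_utils.Tensor("scores", scores)
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses
```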
Code to collect the batch in the batcher model:
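A hedged sketch of how such batch collection could look in a Python-backend batcher, assuming the input tensor is named `sentences`: the sentences from every pending request are flattened into one batch, and per-request counts are kept so the scores can be routed back later.

```python
import numpy as np
import triton_python_backend_utils as pb_utils


def collect_batch(requests):
    """Hypothetical gather step: flatten all requests' sentences into one
    batch and record how many sentences each request contributed."""
    batch, counts = [], []
    for request in requests:
        sentences = pb_utils.get_input_tensor_by_name(
            request, "sentences").as_numpy()
        batch.extend(sentences.tolist())
        counts.append(len(sentences))
    return np.array(batch, dtype=np.object_), counts
```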
Code to take the batch apart back into per-request responses:
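And the reverse step, sketched under the same assumptions: slice the classifier's flat score array back into one response per original request, using the counts recorded during collection.

```python
import numpy as np
import triton_python_backend_utils as pb_utils


def scatter_scores(counts, scores):
    """Hypothetical scatter step: split the batched score array back
    into one InferenceResponse per original request."""
    responses = []
    offset = 0
    for count in counts:
        per_request = np.asarray(scores[offset:offset + count], dtype=np.float32)
        offset += count
        out = pb_utils.Tensor("scores", per_request)
        responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
    return responses
```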
So the question is:
Is this type of batching a bug or a feature? Are we using it correctly? How would you design the architecture of such a model?