Many test users have expressed the desire to get an estimate of how long they will have to wait until their corpus is done processing. Can we guess this somehow? How long it takes depends on:
place in the queue
total corpus size (amount of files, file sizes, total size)
which annotators are run (stanza, swener and compound analysis take the longest)
how clean the data is (a lot of OCR trash often increases processing time)
texts containing many long strings take longer to process
...?
If we have a time estimate per corpus we should be able to get an estimate for a queued corpus.
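One way to sketch this: give each corpus a rough processing estimate from the factors above, then sum the estimates of everything ahead of it in the queue. The annotator names below match those mentioned in this issue, but all weights, the noise factor, and the function names are invented placeholders, not measured values:

```python
# Hypothetical per-annotator costs in seconds per MB of input.
# These numbers are placeholders; real weights would have to be
# fitted from logged processing times per corpus.
ANNOTATOR_WEIGHTS = {
    "stanza": 12.0,      # slow annotators from the list above
    "swener": 8.0,
    "compound": 6.0,
    "tokenize": 0.5,
}

def estimate_corpus_seconds(size_mb, annotators, noise_factor=1.0):
    """Rough per-corpus estimate: total size times the summed cost of
    the selected annotators, scaled up for noisy (e.g. OCR) input."""
    base = sum(ANNOTATOR_WEIGHTS.get(a, 1.0) for a in annotators)
    return size_mb * base * noise_factor

def estimate_wait_seconds(queue, position):
    """Sum the estimates of all corpora ahead of `position` in the queue."""
    return sum(
        estimate_corpus_seconds(c["size_mb"], c["annotators"],
                                c.get("noise", 1.0))
        for c in queue[:position]
    )

queue = [
    {"size_mb": 10, "annotators": ["stanza", "swener"]},
    {"size_mb": 50, "annotators": ["tokenize"], "noise": 1.5},
]
print(estimate_wait_seconds(queue, 2))  # wait time for a corpus queued third
```

Even a crude linear model like this could be calibrated over time by comparing its predictions against actual logged processing times.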