There are big differences in log message verbosity between different models.
This is probably because Annif doesn't filter the messages in any way, but directly passes through everything the underlying process produces (at least this seems to be the case for Omikuji).
In a production environment this can cause at least two distinct problems:
- Upon container start, the Maui model is mostly silent, but Omikuji floods the log with all kinds of messages. While this is not dangerous, it can look daunting to someone new to the project, and it is easy to overlook while debugging other bugs, which can cast suspicion on the wrong candidates.
- When training the models, Omikuji floods warnings whose number depends on the size of the data set. This can cause the training to fail by exceeding the log size rather than because of a real error, instead of continuing regardless of the warnings. It would be better if these warnings were aggregated, and an explicit "maximum allowed warnings" threshold generated an error, rather than the log overflow doing so.
There are probably more problems, but in general it would be nice if Annif provided some universal log messages / processing steps that were easy and coherent to follow, and then gave the user the ability to enable deeper logging levels, allowing more streamlined access to the original warnings etc. produced by the underlying models.
> Upon container start, the Maui model is mostly silent, but Omikuji floods all kinds of messages; while this is not dangerous it can look daunting for someone new to the project and sometimes it is easy to forget while debugging other bugs, which can cause doubts targeting to wrong candidates
Related to this: when Annif is starting, especially in a container environment where one is just watching a log stream, there is no clear indication that all models have been loaded and Annif is ready to serve requests. It can also happen that the Pod or container running Annif is killed for some reason, and I don't think this is visible in the logs.
There could be an info message from Annif like `Model(s) loaded.` when all models are ready, if not a message for every individual model.