Difference in Question And Answer prediction score while using HuggingFaceTransformer pipeline and FARMReader pipeline #2257
Unanswered · karndeepsingh asked this question in Questions
Replies: 2 comments 9 replies
-
Hi @karndeepsingh, it seems that you are using different questions in Haystack and in Transformers, which is probably the reason for the large discrepancy you are observing.
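A quick sanity check before comparing scores is to verify that both pipelines receive byte-identical inputs. This is a minimal sketch with illustrative placeholder strings (the variable names and example texts are assumptions, not taken from the discussion):

```python
# Hypothetical strings standing in for the inputs sent to each pipeline.
haystack_query = "Who signed the agreement?"
transformers_question = "Who signed the agreement?"
haystack_context = "The agreement was signed by Alice."
transformers_context = "The agreement was signed by Alice."

def same_input(a: str, b: str) -> bool:
    # Whitespace differences alone can shift QA scores noticeably,
    # so strip before comparing.
    return a.strip() == b.strip()

assert same_input(haystack_query, transformers_question), "queries differ"
assert same_input(haystack_context, transformers_context), "contexts differ"
```

If either assertion fails, the score gap may simply reflect different inputs rather than a problem with the model conversion.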
-
```python
query_doc_list = [
    {
        "question": <QUESTION_1>,
        "docs": <LIST_OF_DOCUMENTS>
    },
    {
        "question": <QUESTION_2>,
        "docs": <LIST_OF_DOCUMENTS>
    },
    ...
]
```
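For context, a structure of this shape can be assembled from parallel lists of questions and candidate documents. Below is a minimal sketch with placeholder questions and documents (all strings are illustrative, and the `reader.predict_batch` call shown in the comment assumes the batch API of older Haystack versions, so treat the method name as an assumption):

```python
# Illustrative placeholder questions and per-question candidate documents.
questions = ["What is the capital of France?", "Who wrote Hamlet?"]
docs = [
    ["Paris is the capital of France."],
    ["Hamlet is a tragedy written by William Shakespeare."],
]

# Build the batch payload in the shape shown above.
query_doc_list = [{"question": q, "docs": d} for q, d in zip(questions, docs)]

# With a Haystack FARMReader this could then (assumption: older Haystack API)
# be passed as:
# predictions = reader.predict_batch(query_doc_list=query_doc_list, top_k=1)
```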
-
Hello,
I trained a Question Answering model using FARMReader, and when I run inference through the pipeline provided in Haystack, I get good scores. Following are the results.
After converting the trained model into a Hugging Face Transformers model file and loading it with the Hugging Face Transformers question-answering pipeline, I do not get the same scores for the same input queries. Following are the results:
The difference in both score and answer is clearly large.
Are some weights being lost while converting to the Hugging Face Transformers format?
I want to deploy the model on a server, and since Haystack does not support a batch mode for supplying a list of queries to the model at once, I have to use the Hugging Face pipeline. However, the converted model seems to be losing information and no longer predicts the way it did with FARMReader.
Please help me resolve this issue. Thanks.