
Improve AllenNLPProcessor's performance #7

Merged
merged 1 commit into asyml:main from improve-allen-efficiency on Apr 5, 2021

Conversation

@DianLiI (Collaborator) commented on Apr 5, 2021

This PR fixes [#4].

Description of changes

Reduce running time by enabling batch inference.
The processor is still a single-pack processor, but all sentences in one DataPack are now processed in a single batch (via allennlp.Predictor).
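
For reference, the kind of batched call this change relies on can be sketched as below. This is a minimal illustration, not the processor's actual code: the model archive URL, the `"sentence"` input key, and the sample sentences are placeholders, and it assumes AllenNLP's `Predictor.predict_batch_json`, which runs several inputs through the model together instead of one call per sentence.

```python
# Minimal sketch (not the processor's actual code): batch inference with
# allennlp.predictors.Predictor instead of one predict() call per sentence.
from allennlp.predictors import Predictor

# Example model archive (placeholder); any AllenNLP predictor works the same way.
predictor = Predictor.from_path(
    "https://storage.googleapis.com/allennlp-public-models/"
    "structured-prediction-srl-bert.2020.12.15.tar.gz"
)

# All sentences from a single DataPack, collected up front.
sentences = [
    "Forte is a toolkit for building NLP pipelines.",
    "Batching sentences reduces per-call overhead.",
]

# One batched call replaces a Python-level loop of single predictions.
results = predictor.predict_batch_json([{"sentence": s} for s in sentences])
for sentence, result in zip(sentences, results):
    print(sentence, "->", list(result.keys()))
```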

Possible influences of this PR.

N/A

Test Conducted

Tested on a pipeline containing AllenNLPProcessor.

@DianLiI DianLiI requested a review from hunterhector April 5, 2021 19:37
@hunterhector (Member) left a comment


LGTM, unfortunately we haven't moved the test cases over here to test it.

@hunterhector hunterhector merged commit 9cb5724 into asyml:main Apr 5, 2021
@DianLiI DianLiI deleted the improve-allen-efficiency branch April 5, 2021 20:15