add the ability to have multi-round conversation with llama #6758
Conversation
Add the ability to have multi-round conversations with the LLM. This will be helpful for testing long context lengths.

Differential Revision: [D65771122](https://our.internmc.facebook.com/intern/diff/D65771122/)

[ghstack-poisoned]
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/6758

Note: Links to docs will display an error until the doc builds have completed. ✅ No failures as of commit 6709d4a with merge base 793f17e. This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D65771122
""" | ||
exit_prompt = "exit" | ||
tokens = [] | ||
prompt = input("Me: ") |
Suggested change:

    prompt = input("Me: ")
    print('You are now chatting with the LLM. Input "exit" to quit.')
    prompt = input("Me: ")
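The suggested banner-plus-prompt pattern fits into a loop like the one this PR adds. Below is a minimal sketch of such a multi-round loop; `generate_fn`, `read_input`, and `write_output` are hypothetical stand-ins (not the actual runner API) so the I/O can be swapped out:

```python
def chat_loop(generate_fn, read_input=input, write_output=print, exit_prompt="exit"):
    """Multi-round conversation loop.

    Prior tokens are carried across turns so each new prompt extends the
    same context window, which is what makes this useful for testing long
    context lengths. `generate_fn(tokens, prompt)` is assumed to return
    the updated token list and the model's reply text.
    """
    tokens = []
    write_output(f'You are now chatting with the LLM. Input "{exit_prompt}" to quit.')
    prompt = read_input("Me: ")
    while prompt != exit_prompt:
        tokens, reply = generate_fn(tokens, prompt)
        write_output(f"LLM: {reply}")
        prompt = read_input("Me: ")
```

Keeping the accumulated `tokens` outside the loop body is the part that turns a single-shot runner into a multi-round one.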
Merged bde545f into gh/helunwencser/74/base
* update llama runner to decode single token

Pull Request resolved: #6703

Right now, we don't print the generated response in the eager runner until all tokens are generated. This is not a good experience, as we have to wait for all tokens before seeing any of the response. This PR updates the runner to decode each new token immediately after it is generated.

ghstack-source-id: 252924039
Differential Revision: [D65578306](https://our.internmc.facebook.com/intern/diff/D65578306/)

* add the ability to have multi-round conversation with llama

Add the ability to have multi-round conversations with the LLM. This will be helpful for testing long context lengths.

Differential Revision: [D65771122](https://our.internmc.facebook.com/intern/diff/D65771122/)
ghstack-source-id: 252934165
Pull Request resolved: #6758

---------

Co-authored-by: Lunwen He <[email protected]>
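The single-token decoding change described in the commit message above can be sketched roughly as follows; `next_token_fn`, `decode_fn`, and `emit` are hypothetical stand-ins for the model's sampling call, the tokenizer decode, and the output sink, not the actual executorch API:

```python
def stream_generate(next_token_fn, decode_fn, prompt_tokens, max_new_tokens, eos_id, emit=print):
    """Generate tokens one at a time, decoding and emitting each token
    as soon as it is sampled instead of waiting for the full response."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        tok = next_token_fn(tokens)
        if tok == eos_id:
            break
        tokens.append(tok)
        # Decode just the new token and show it immediately.
        emit(decode_fn([tok]))
    return tokens
```

The key difference from the old behavior is that `decode_fn` is called per token inside the loop, rather than once on the full sequence after generation finishes.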
Stack from ghstack (oldest at bottom):
Add the ability to have multi-round conversations with the LLM. This will be helpful for testing long context lengths.
Differential Revision: D65771122