Hi Author, I recently read your paper on ChemReasoner and I think it is a wonderful paper. However, I would like to know more about the following points that you did not mention in your paper.
How was the LLM trained: through LoRA, prompt engineering, or RAG?
What was the training data for the LLM?
Thanks!
Hi,
The LLM was NOT trained at all. It was mostly run in a zero-shot fashion. All the prompts and questions used in the paper are available in the repository.
We are cleaning up the repository in the coming weeks to make the prompts and data easier to trace. If you provide a contact, we can notify you once it is updated.
Sutanay
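
For clarity, here is a minimal sketch of what zero-shot usage means in this context: the pretrained model is queried directly with a prompt, with no fine-tuning (no LoRA adapters, no additional training data). The model name and prompt text below are illustrative placeholders, not the exact ones used in ChemReasoner; see the repository for the actual prompts.

```python
# Minimal zero-shot prompting sketch (illustrative only).
# The model identifier and prompt are placeholders, not the paper's exact setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Suggest candidate catalysts for CO2 conversion to methanol "
    "and briefly explain your reasoning."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model identifier
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,
)

print(response.choices[0].message.content)
```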