I am trying to use rptoolkit with the same samples in raw_txt_input, but I am getting:
REALLY BAD EXCEPTION ENCOUNTERED: 'NoneType' object is not subscriptable
TypeError: 'NoneType' object is not subscriptable
And no stories are generated in the output_final files.
Here is my config.yaml (I tested these models with OpenAI-compatible Ollama calls, like here, and everything works fine on that front):
================== ALL DATA WRITTEN!! HERE ARE YOUR STATS: ==================
Total stories generated: 0
Stories that are at least OK across the board, but might slightly flawed ('good' and above, according to the AI rater): 0
Stories that are highly rated by the AI across the board ('incredible' and above, according to the AI rater.): 0
Total tokens of all stories (roughly equivalent to the number of training tokens): 0
Time taken: 121.87675976753235 seconds
ShareGPT-format .json export is created, and the full dataset is also available in the final_outputs folder.
Hmm... No stories were generated. Check the logs for more information, and consider creating an issue if this is unexpected. If you do make an issue, please include your input data and the logs!
=============================================================================
This looks like an issue with the model that you're running RPToolkit with -- it's getting the output format of the early steps wrong, so the regex parsing is throwing an error. I really need to improve the error logging to make this clearer. Either way, to fix this, consider running the pipeline with a larger or smarter model. For instance, in the past I have used Llama 3 70B for the early steps of this pipeline. I don't think that's the minimum requirement, but starting there and working down can't hurt. Also, different finetunes differ in intelligence; finding one that follows formats well is key.
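For context, here is a minimal sketch of how this kind of failure typically arises. The pattern and function name below are hypothetical, for illustration only (not RPToolkit's actual regex or code): when the model's reply doesn't match the expected format, `re.search` returns None, and indexing that None raises exactly the TypeError in the logs.

```python
import re

def parse_rating_unsafe(model_output: str) -> str:
    # Hypothetical example: extract a field the pipeline expects,
    # e.g. a line like "RATING: good" in the model's reply.
    match = re.search(r"RATING:\s*(\w+)", model_output)
    # If the model ignores the format, re.search returns None, and
    # match[1] raises: TypeError: 'NoneType' object is not subscriptable
    return match[1]

print(parse_rating_unsafe("RATING: good"))  # works as expected

try:
    parse_rating_unsafe("The story was nice, I liked it!")
except TypeError as e:
    print(e)  # 'NoneType' object is not subscriptable
```

So a model that follows the prompt's output format reliably makes the error disappear, which is why swapping in a smarter model usually fixes it.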
Hi, @e-p-armstrong, thanks for the repo!
FULL OUTPUT (pasted on pastejustit for better visual clarity):
https://pastejustit.com/j36pmf1imm
Please let me know whether I am missing something that may be causing this issue.