Hi, after following the steps in the README I am seeing this warning while training the dam model for English. After it, I get division-by-zero errors.
I am using Reta-LLM, and what causes the errors is the index_pipeline.py script.
[WARNING|trainer.py:2013] ... >> There seems to be not a single sample in your epoch_iterator, stopping training at step 0! This is expected if you're using an IterableDataset and set num_steps (X) higher than the number of available samples.
I have tried changing num_steps to 1, 10, and 100000. All gave me the same errors.
Thank you for your attention to our project, and sorry for this error.
We found that this error occurs when the number of examples (9 in your run) is smaller than the batch size (64).
We have fixed it by clamping the batch size to the number of examples, i.e. taking the minimum of the two.
I hope this fixes your bug. If you have any other problems, please let us know!
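For reference, the fix described above amounts to never letting the batch size exceed the dataset size, so the trainer always sees at least one full batch. A minimal sketch (the function name is illustrative, not the actual Reta-LLM code):

```python
def effective_batch_size(batch_size: int, num_examples: int) -> int:
    """Clamp the configured batch size to the number of training examples.

    With an IterableDataset, a batch size larger than the dataset yields
    zero complete batches, so training stops at step 0 and downstream
    averaging divides by zero. Clamping guarantees at least one batch.
    """
    return min(batch_size, num_examples)


# With 9 examples and a configured batch size of 64 (as in this issue),
# the clamped batch size is 9 instead of 64.
print(effective_batch_size(64, 9))   # -> 9
print(effective_batch_size(64, 100)) # -> 64
```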