Hello author, I'm using the ImageNet-mini dataset and training on a single GPU. I saw your earlier answer to a question: for single-GPU training, the lr should be reduced to 1/8 of the original, i.e. set --nproc_per_node=1 and --lr 2.5e-4. But with exactly those settings, and without any code changes, I run into problems of varying severity at batchsize=128, batchsize=64, batchsize=32, and batchsize=16.
Which model is this?
Glad to receive your reply! The imagenet-mini dataset I was using (https://www.kaggle.com/datasets/ifigotin/imagenetmini-1000) has too few samples per class, so the dataset itself turned out to be the problem. After switching to a different dataset the issue was resolved. The loss is still fairly large early on, but it comes down in later epochs. My settings are --nproc_per_node=2, --lr 5e-4, -b 32.
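The lr values in this thread follow the common linear-scaling rule: the learning rate is scaled in proportion to the number of GPUs (i.e. the total batch size). Below is a minimal sketch of that rule; the assumed baseline (8 GPUs at lr 2e-3) is inferred from the 1/8 reduction mentioned above and is not confirmed from the repo.

```python
def scaled_lr(base_lr: float, base_gpus: int, num_gpus: int) -> float:
    """Scale the learning rate linearly with the number of GPUs,
    keeping lr proportional to the total (global) batch size."""
    return base_lr * num_gpus / base_gpus

# Assumed original recipe: 8 GPUs at lr 2e-3.
print(scaled_lr(2e-3, 8, 1))  # 1 GPU  -> 2.5e-4, matching --lr 2.5e-4 above
print(scaled_lr(2e-3, 8, 2))  # 2 GPUs -> 5e-4,   matching --lr 5e-4 above
```

This matches both settings quoted in the thread: --nproc_per_node=1 with --lr 2.5e-4, and --nproc_per_node=2 with --lr 5e-4.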
Hello, I'd like to ask: is the GhostNet ImageNet pretrained model the one that ships in the model folder?