This part of the documentation seems to have a problem:
```python
# Choose an optimizer
optimizer = torch.optim.Adam(...)
# Choose one or more of the dynamic learning rate schedulers mentioned above
scheduler1 = torch.optim.lr_scheduler....
scheduler2 = torch.optim.lr_scheduler....
...
schedulern = torch.optim.lr_scheduler....
# Training
for epoch in range(100):
    train(...)
    validate(...)
    optimizer.step()
# The learning rate should only be adjusted after the optimizer has updated the parameters
# The scheduler's adjustment is performed at the end of each epoch
scheduler1.step()
...
schedulern.step()
```
In the documentation's example, the scheduler calls are only reached after all 100 epochs of training have completed. At that point training is already over, so the dynamic learning rate adjustment never takes effect during training.
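A quick way to see this (a minimal sketch added here for illustration, with a hypothetical toy model; it is not taken from the documentation): when `scheduler.step()` is only called after the loop, the learning rate reported by the optimizer stays at its initial value for every epoch.

```python
import torch
from torch import nn, optim
from torch.optim.lr_scheduler import ExponentialLR

# Hypothetical toy setup, only used to inspect the learning rate during training.
model = nn.Linear(10, 1)
optimizer = optim.SGD(model.parameters(), lr=0.01)
scheduler = ExponentialLR(optimizer, gamma=0.9)

for epoch in range(5):
    optimizer.zero_grad()
    model(torch.randn(4, 10)).sum().backward()
    optimizer.step()
    # Prints 0.01 on every epoch: the scheduler never runs during training.
    print(epoch, optimizer.param_groups[0]["lr"])

scheduler.step()  # only called here, after training has already finished
```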
One of the official PyTorch examples is as follows:
```python
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = ExponentialLR(optimizer, gamma=0.9)

for epoch in range(20):
    for input, target in dataset:
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        optimizer.step()
    scheduler.step()
```
As you can see, there are two nested for loops, and scheduler.step() sits inside the outer (epoch) loop, right after the inner loop over the dataset, so the learning rate is adjusted once per epoch.
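For reference, here is a minimal runnable sketch of how the documentation example could be corrected, following the same pattern as the official snippet above. The toy model, data, and the specific schedulers (ExponentialLR, MultiStepLR) are placeholders chosen for illustration, not part of the original documentation:

```python
import torch
from torch import nn, optim
from torch.optim.lr_scheduler import ExponentialLR, MultiStepLR

# Hypothetical toy model and data, just to make the sketch runnable.
model = nn.Linear(10, 1)
inputs = torch.randn(32, 10)
targets = torch.randn(32, 1)
loss_fn = nn.MSELoss()

optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# One or more schedulers attached to the same optimizer
scheduler1 = ExponentialLR(optimizer, gamma=0.9)
scheduler2 = MultiStepLR(optimizer, milestones=[30, 80], gamma=0.1)

for epoch in range(100):
    # Training step (stands in for the documentation's train(...) / validate(...))
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

    # scheduler.step() belongs *inside* the epoch loop, after optimizer.step(),
    # so the learning rate is adjusted once per epoch rather than only once
    # after training has finished.
    scheduler1.step()
    scheduler2.step()
```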
You are right. Feel free to open a PR with the fix, and please also update https://github.com/datawhalechina/thorough-pytorch/blob/main/source/%E7%AC%AC%E5%85%AD%E7%AB%A0/6.2%20%E5%8A%A8%E6%80%81%E8%B0%83%E6%95%B4%E5%AD%A6%E4%B9%A0%E7%8E%87.md at the same time. Thanks.