
How to view training results #15

Open
zuti666 opened this issue Mar 30, 2023 · 5 comments


@zuti666

zuti666 commented Mar 30, 2023

After training finishes, how do I view and use the resulting logs and models? Are the logs meant to be viewed with TensorBoard, and how do I load a saved model and test it to see how well it performs?

@JobethLi

JobethLi commented Apr 6, 2023

base_runner.py has a restore function for loading the model. The env_runner's init first checks whether the run_dir directory that stores the saved model exists; if it does, restore is called to load the model. After env_runner's init you can call env_runner.eval directly to evaluate the loaded model.
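
For reference, here is a minimal sketch of that flow, assuming the common MAPPO-style runner layout (a config dict passed to the runner, whose init restores the saved actor/critic weights from the existing run_dir, and an eval() method that rolls out the restored policy). The import path, class name, and the --use_eval flag below are assumptions and may need adjusting for light_mappo:

```python
import torch

from config import get_config                    # repo's argument parser (assumed name)
from runner.shared.env_runner import EnvRunner   # assumed import path / class name


def evaluate_saved_model(envs, eval_envs, num_agents, run_dir):
    """Rebuild the runner on an existing run_dir and evaluate the restored policy."""
    parser = get_config()
    all_args = parser.parse_args(["--use_eval"])  # flag name is an assumption
    config = {
        "all_args": all_args,
        "envs": envs,
        "eval_envs": eval_envs,
        "num_agents": num_agents,
        "device": torch.device("cpu"),
        "run_dir": run_dir,
    }
    # As described above: when run_dir already holds saved models, the runner's
    # init calls restore() to load the actor (and critic) weights.
    runner = EnvRunner(config)
    # eval() rolls out the restored policy on eval_envs and logs the returns.
    runner.eval(total_num_steps=0)
```

On the logging question above: if the logs directory contains TensorBoard event files (the usual case for this family of codebases when wandb is disabled), `tensorboard --logdir <path-to-logs>` should display the training curves.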

@zuti666
Author

zuti666 commented Apr 9, 2023

However, I found that env_runner.eval() does not implement the case where the action space is continuous. How should I test the model's performance in that situation?

@JobethLi

JobethLi commented Apr 12, 2023

The author of light_mappo probably has not implemented the eval part for continuous environments yet; you could refer to how env_runner.collect handles continuous actions. Try removing the early return and setting eval_actions_env = eval_actions, then check in debug mode whether eval_actions_env actually holds continuous actions.
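
To make that concrete, here is an illustrative sketch of the conversion eval() needs, modeled on how collect() typically handles each action-space type in this kind of MAPPO code. The helper name and the Box check are mine, not the repo's; the point is that a continuous action space needs no one-hot encoding, so the sampled actions can be passed through unchanged:

```python
import numpy as np


def actions_to_env(action_space, eval_actions):
    """Convert sampled actions into the format the vectorized env expects."""
    space_type = action_space.__class__.__name__
    if space_type == "Discrete":
        # Discrete actions get one-hot encoded before being stepped in the env.
        return np.squeeze(np.eye(action_space.n)[eval_actions], 2)
    if space_type == "Box":
        # Continuous action space: the env consumes the raw action values,
        # so eval_actions_env is simply eval_actions (no early return needed).
        return eval_actions
    raise NotImplementedError(f"Unhandled action space: {space_type}")
```

Inside env_runner.eval(), this corresponds to replacing the unhandled branch with eval_actions_env = eval_actions and letting the rest of the eval loop step the eval envs as usual (MultiDiscrete, if used, would get the same per-dimension one-hot concatenation as in collect()).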

@bitbjt

bitbjt commented Sep 19, 2023

Hi, I've run into the same problem with a continuous action space as well. Did you manage to solve it?

@qiyunying

> base_runner.py has a restore function for loading the model. The env_runner's init first checks whether the run_dir directory that stores the saved model exists; if it does, restore is called to load the model. After env_runner's init you can call env_runner.eval directly to evaluate the loaded model.

Hi, after testing this way I found that the evaluation performance is much worse than during training. What could be the problem?
