
Why is there no speed improvement on the NX development board after pruning? #109

Open
WEI-6 opened this issue Jun 30, 2022 · 0 comments

Comments


WEI-6 commented Jun 30, 2022

The pruned ONNX model was converted to an engine file with the Python version of TensorRT, but it runs on the NX at the same speed as the engine generated from the original PT file. What also puzzles me is that the engine built from the pruned model is actually quite a bit larger than the engine converted directly from the PT file.
My pipeline is: first train with the original yolov5 v4 code, then switch to the v6 code you provided for sparsity training, feed the model into the prune code for pruning, fine-tune with the generated cfg and pt, export to ONNX, and finally build the engine with the Python TensorRT API and run inference in C++ (sketch of the engine-build step below). If there is anything wrong with this workflow, I would greatly appreciate your advice.
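For reference, this is roughly the ONNX-to-engine step I mean, a minimal sketch using the TensorRT Python API; the file paths and the FP16 flag are my own choices, not something from this repo:

```python
# Minimal sketch: build a TensorRT engine from the pruned ONNX export.
# Paths are placeholders; FP16 is enabled because on Jetson boards the
# precision mode often affects latency more than channel pruning alone.
import tensorrt as trt

ONNX_PATH = "yolov5_pruned.onnx"      # hypothetical path to the pruned ONNX model
ENGINE_PATH = "yolov5_pruned.engine"  # hypothetical output path

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse the ONNX file and surface any parser errors.
with open(ONNX_PATH, "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)

# Serialize the optimized engine to disk for the C++ inference side to load.
serialized = builder.build_serialized_network(network, config)
with open(ENGINE_PATH, "wb") as f:
    f.write(serialized)
```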
