Thanks for participating in the TVM community! We use https://discuss.tvm.ai for general usage questions and discussions. The issue tracker is used for actionable items such as feature proposal discussions, roadmaps, and bug tracking. You are always welcome to post on the forum first 😸
Issues that are inactive for a period of time may get closed. We adopt this policy so that we won't lose track of actionable issues that may fall to the bottom of the pile. Feel free to open a new issue if you feel there is an additional problem that needs attention after an old one gets closed.
Expected behavior
Show the tuning log and start auto-scheduling.
Actual behavior
Tuning fails with the following error:
File "e2e_auto_tir.py", line 254, in <module>
main()
File "e2e_auto_tir.py", line 195, in main
db = ms.relax_integration.tune_relax(
File "/home//mlc-relax/relax/python/tvm/meta_schedule/relax_integration.py", line 255, in tune_relax
return tune_tasks(
File "/home//mlc-relax/relax/python/tvm/meta_schedule/tune.py", line 118, in tune_tasks
task_scheduler.tune(
File "/home//mlc-relax/relax/python/tvm/meta_schedule/task_scheduler/task_scheduler.py", line 132, in tune
_ffi_api.TaskSchedulerTune( # type: ignore # pylint: disable=no-member
File "/home//mlc-relax/relax/python/tvm/_ffi/_ctypes/packed_func.py", line 238, in __call__
raise get_last_ffi_error()
tvm.error.InternalError: Traceback (most recent call last):
[bt] (8) /home//mlc-relax/relax/build/libtvm.so(tvm::meta_schedule::TaskSchedulerNode::Tune(tvm::runtime::Array<tvm::meta_schedule::TuneContext, void>, tvm::runtime::Array<tvm::FloatImm, void>, int, int, int, tvm::meta_schedule::Builder, tvm::meta_schedule::Runner, tvm::runtime::Array<tvm::meta_schedule::MeasureCallback, void>, tvm::runtime::Optional<tvm::meta_schedule::Database>, tvm::runtime::Optional<tvm::meta_schedule::CostModel>)+0x62b) [0x7ff172abcaab]
[bt] (7) /home//mlc-relax/relax/build/libtvm.so(tvm::meta_schedule::PostOrderApplyNode::GenerateDesignSpace(tvm::IRModule const&)+0xe93) [0x7ff172a98ed3]
[bt] (6) /home//mlc-relax/relax/build/libtvm.so(tvm::meta_schedule::MultiLevelTilingNode::Apply(tvm::tir::Schedule const&, tvm::tir::BlockRV const&)+0x496) [0x7ff172a174a6]
[bt] (5) /home//mlc-relax/relax/build/libtvm.so(tvm::meta_schedule::MultiLevelTilingNode::ApplySubRules(std::vector<tvm::meta_schedule::State, std::allocator<tvm::meta_schedule::State> >)+0x24a) [0x7ff172a1d9ea]
[bt] (4) /home//mlc-relax/relax/build/libtvm.so(tvm::meta_schedule::MultiLevelTilingNode::AddWriteReuse(tvm::meta_schedule::State) const+0x1c2) [0x7ff172a1c032]
[bt] (3) /home//mlc-relax/relax/build/libtvm.so(tvm::runtime::Optional<tvm::runtime::Array<tvm::Integer, void> > tvm::tir::GetAnn<tvm::runtime::Array<tvm::Integer, void> >(tvm::tir::StmtSRef const&, tvm::runtime::String const&)+0x26c) [0x7ff172a20e7c]
[bt] (2) /home//mlc-relax/relax/build/libtvm.so(tvm::runtime::Array<tvm::Integer, void> tvm::runtime::Downcast<tvm::runtime::Array<tvm::Integer, void>, tvm::runtime::ObjectRef>(tvm::runtime::ObjectRef)+0x165) [0x7ff172a20bd5]
[bt] (1) /home//mlc-relax/relax/build/libtvm.so(tvm::runtime::detail::LogFatal::Entry::Finalize()+0x3d) [0x7ff1725097dd]
[bt] (0) /home//mlc-relax/relax/build/libtvm.so(tvm::runtime::Backtrace[abi:cxx11]()+0x2c) [0x7ff174821ecc]
File "/home//mlc-relax/relax/include/tvm/runtime/object.h", line 920
InternalError: Check failed: (ref->template IsInstance()) is false: Downcast from runtime.String to Array failed.
Environment
Ubuntu 20.04, latest mlc-relax.
Steps to reproduce
1. Change "tar" to "ndk" in the function tvm.meta_schedule.testing.custom_builder_runner.run_module_via_rpc.
2. Run relax_example/e2e_auto_tir.py as:
python3 e2e_auto_tir.py --workload=resnet_18 --target="opencl -device=mali"
The goal is simply to tune for an OpenCL mobile GPU (Mali).
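For context, the InternalError above comes from a runtime type downcast: GetAnn reads a block annotation expecting an Array of Integer, but on this schedule the stored value is a String. The sketch below is an illustrative, TVM-free Python analogue of that failure pattern (the function name and message format here are hypothetical, not TVM's actual API); it only shows why a string-valued annotation cannot be read back as an integer array.

```python
def downcast_to_int_array(ann_value):
    """Illustrative analogue of TVM's Downcast<Array<Integer>>.

    TVM raises InternalError from C++ when an annotation stored as a
    String is read back as an Array<Integer>; here we mimic that with
    a TypeError on any value that is not a list of ints.
    """
    if not isinstance(ann_value, list) or not all(
        isinstance(v, int) for v in ann_value
    ):
        raise TypeError(
            f"Downcast from {type(ann_value).__name__} to Array failed."
        )
    return ann_value


# An annotation holding a list of ints downcasts fine...
assert downcast_to_int_array([4, 8]) == [4, 8]

# ...but one holding a string (as in this bug) does not.
try:
    downcast_to_int_array("4,8")
except TypeError as e:
    print(e)  # Downcast from str to Array failed.
```

This suggests the schedule rule (MultiLevelTilingNode::AddWriteReuse) and whatever wrote the annotation disagree about the annotation's type on this target, though the exact annotation key involved is not visible in the trace.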
Triage
Please refer to the list of label tags here to find the relevant tags and add them below in a bullet format (example below).
- needs-triage