This issue serves as a place for discussing the various ways and ideas for benchmarking runwasi, the wasm shims project, as proposed by @ipuustin.
One idea is to write a simple wasm program (e.g. Fibonacci) and execute it in runwasi, alongside a native equivalent executing in runc. This provides a baseline for comparing the performance of a WASI program against a native runc process. It is not meant to benchmark the performance of WASI in general.
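A minimal sketch of such a workload, assuming an iterative Fibonacci and a command-line argument for the input size (the program itself is hypothetical, not part of runwasi): the same source can be compiled natively for the runc baseline and with `--target wasm32-wasi` for the runwasi shim.

```rust
// Iterative Fibonacci: a CPU-bound workload small enough to compare
// startup + compute cost between a native runc process and a wasm module.
fn fib(n: u64) -> u64 {
    let (mut a, mut b) = (0u64, 1u64);
    for _ in 0..n {
        let next = a + b;
        a = b;
        b = next;
    }
    a
}

fn main() {
    // Input size comes from argv so the same binary can drive
    // different workload sizes; defaults to 40 if none is given.
    let n: u64 = std::env::args()
        .nth(1)
        .and_then(|s| s.parse().ok())
        .unwrap_or(40);
    println!("fib({}) = {}", n, fib(n));
}
```

Compiling once with `cargo build --release` and once with `cargo build --release --target wasm32-wasi`, then timing both under their respective runtimes, would give the baseline comparison described above.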
With the baseline benchmark in place, we can observe the performance difference across version increments. For example, we can measure how much speed increases or decreases between version 0.2 and 0.3.
Another benchmarking idea is testing how densely we can pack wasm pods onto a node. It is often advertised that wasm modules improve CPU utilization and thus increase the density of pods that can run per node. We can verify this claim by pushing the containerd runtime to the extreme, running thousands of pods at the same time.
Feel free to add ideas and thoughts on this topic! Any suggestion is welcome 🙏
Wanted to reopen this issue because I think #126 does not fully address the scope described above. A few things that would make runwasi's benchmarking story better:
- `cargo bench` (#612)