[MacOS]: Performance WindUp (~28 seconds) vs Kantra (~237 seconds) on sample app jakartaee-duke with target jakarta-ee #96
Comments
Another run with Kantra on Mac M1 Max after bumping the podman machine to have more resources
Run #1 (assume some time went to fetching images, as the VM was destroyed and recreated)
Run #2 (in theory the images have already been pulled, so this should be a quicker run)
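For anyone reproducing this, a minimal sketch of bumping the podman machine resources; the CPU and memory values here are illustrative, not the exact ones used in these runs:

```sh
# The machine must be stopped before its resources can be changed.
podman machine stop
podman machine set --cpus 8 --memory 8192
podman machine start

# Confirm the new CPU/memory/disk settings.
podman machine inspect
```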
Hey,
I notice the podman machine only has 4 GB of memory; can we run with that a little higher?
I also notice the CPU count is 4. Is that the max that WindUp was running with as well?
… On Oct 21, 2023, at 8:03 AM, John Matthews ***@***.***> wrote:
"DiskSize": 100, "Memory": 4096
Yeah, I posted another run with higher CPUs and memory after the initial one, thinking maybe it was due to low resources, but I didn't see an improvement. See the prior comment where I shared the specs and output.
Yeah, I saw that after reading the first email 😭 I wonder if we are having an issue with volume-mount file I/O when fetching code snips. I have a sneaking suspicion that this could be pretty bad performance-wise; I don't think we are trying to do anything smart like batching up file reads, etc. @pranavgaikwad @eemcmullan let me know what y'all think?
@shawn-hurley I would like to get to the bottom of what exactly is happening in John's case here. While I don't deny the I/O bit, there are several other areas that I think account for the majority of the time spent; I/O is a small percentage compared to that. I use Jaeger very often when I run analysis (we have the tracer in the analyzer), and the common thing I have observed in most of the testing I have done so far is that there are two major factors:
Anyway, I am curious to know what exactly is happening in this case.
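For reference, a minimal sketch of collecting such traces locally; the analyzer flag names below are assumptions and may differ from the analyzer build being used:

```sh
# Run a local Jaeger all-in-one instance: 16686 serves the UI,
# 14268 is the collector's HTTP endpoint.
podman run -d --name jaeger -p 16686:16686 -p 14268:14268 \
  jaegertracing/all-in-one:latest

# Hypothetical analyzer invocation pointing the tracer at the collector;
# these flag names are assumptions, not confirmed options.
konveyor-analyzer --enable-jaeger \
  --jaeger-endpoint http://localhost:14268/api/traces # other args elided
```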
That is good to know; where is the bulk of the time spent in the conditions?
@shawn-hurley See two traces, one on Linux and the other on Mac (trace screenshots not reproduced here). Notice the difference in the averages between the two. The main reason seems to be the overhead of mounting the volumes via podman machine.
A quick test to be sure would be to build an image so that you don't have to mount anything, and then run the analysis.
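As a rough illustration of that test, a sketch that bakes the source into an image so the analysis reads from the VM's own filesystem instead of a host mount; the base image name and paths are assumptions, not the project's actual images:

```sh
# Hypothetical Containerfile: copy the app source into the image so no
# host volume mount (and its virtiofs/9p overhead) is involved at runtime.
cat > Containerfile <<'EOF'
FROM quay.io/konveyor/analyzer-lsp:latest
COPY jakartaee-duke /analysis/source
EOF
podman build -t mount-free-analysis -f Containerfile .
```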
Below was run on macOS using Kantra (MacBook Pro M1 Max)
containers/podman#16994 (comment) Just to note: it looks like podman machine on Mac has a known problem with mounting and file I/O. Do we think we can wait for this to be fixed in the underlying layer, or not?
FYI: I have similar (performance) problems with Docker (on Apple Silicon). |
Tracking issue: coreos/fedora-coreos-tracker#1548. I feel that we should wait until the above is done before researching further.
Closing this issue via #127. While the performance may not be one-to-one, this drastically reduces the time we spend on file I/O. If we are still seeing drastic issues, we can add to #121, as I have a feeling we are going to see a difference in network load on subsequent runs because we don't have a shared .m2, while WindUp does. Let me know if we feel this should be reopened because this explanation is not sufficient.
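For context on the .m2 point, one way a shared Maven cache across container runs could be approximated is with a named volume; the mount path and whether the analysis container resolves dependencies under /root/.m2 are assumptions here:

```sh
# Hypothetical sketch: persist the Maven repository in a named volume so
# subsequent runs reuse downloaded artifacts instead of re-fetching them.
podman volume create m2-cache
podman run -v m2-cache:/root/.m2 <analysis-image> # remaining args elided
```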
I re-ran today using :latest with #127.

```
$ time ./kantra analyze -i coolstuff-javaee -t "quarkus" -t "jakarta-ee" -t "cloud-readiness" -o ./out2
0.13s user 0.18s system 0% cpu 3:19.08 total
```
I am seeing a noticeable performance difference between running Kantra and WindUp on the same source application and target on my Mac M1 Max (64 GB).
Sample app: https://github.com/ivargrimstad/jakartaee-duke/tree/start-tutorial
Environment: Mac M1 Max 64GB
WindUp
Run #1
Run #2
Kantra
Run #1
Run #2
Notes
I ran Kantra twice, thinking perhaps the first run had some delays pulling images, so the second run was invoked soon after, on the assumption that the images had already been pulled (running on the same podman machine VM)