
Garbage Collection goes haywire - affecting framerate #101

Open

hugeblank opened this issue May 28, 2021 · 13 comments

@hugeblank

I'm using v1.2.15 of Just Map Unlimited on 1.16.5. Without the mod, after logging in and letting everything stabilize, I get no jitter and memory usage seldom passes 30%. With the mod I get jitter every second, and memory usage spikes up to 50% before dropping back down to ~20%.

Without mod: [image]

With mod: [image]

The pack where I'm experiencing this also includes Sodium and Lithium, both of which may have an impact on the issue. I have yet to test without those mods, but will do so if necessary.

@magicus
Contributor

magicus commented Sep 13, 2021

I'm also seeing this. I can confirm that the allocation pressure is huge with only JustMap installed, no other mods (except Fabric API which is required).

Some code path is allocating far more than it should.

Is this a regression for JustMap? (I just found out about it, and it looks really, really good, but this is a complete show-stopper)

@magicus
Contributor

magicus commented Sep 13, 2021

I ran JFR to analyze the situation. MapProcessor.loopPos() allocated 6 GB (!) during the 1-minute run, so this is the culprit. Unfortunately, obfuscation makes further detective work harder. The object being allocated so massively is net.minecraft.class_2338. I think this is BlockPos, but I'm not really sure how to look this up in the deobf mappings.

@Bulldog83
Owner

I don't like some of the current mod's implementation. I want to completely rewrite it, but I don't have time right now.

@magicus
Contributor

magicus commented Sep 13, 2021

I just discovered the mod and I'd really like to start using it. It ticks all the boxes: open source, Fabric, vanilla feel, great-looking UI, etc. But I have a slow machine and the performance issues are a real problem, so I'll try to see if I can get to the bottom of it. It seems like it could be a trivial fix once the cause is fully understood.

@magicus
Contributor

magicus commented Sep 13, 2021

The culprit is pos.down(). This creates a new BlockPos via the offset() method. And if we're looping that over lots of Y-values for all X/Z pairs...
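
For illustration, here is a minimal sketch (hypothetical code, not taken from MapProcessor) of the kind of column scan being described: every call to pos.down() hands back a brand-new immutable BlockPos, so a full-height scan for each X/Z pair churns out thousands of short-lived objects.

```java
// Hypothetical illustration of the allocation-heavy pattern; not the actual mod code.
import net.minecraft.util.math.BlockPos;
import net.minecraft.world.World;

public class ColumnScanAllocating {
    // Scans downward from startY and returns the Y of the first non-air block.
    static int findTopSolidY(World world, int x, int z, int startY) {
        BlockPos pos = new BlockPos(x, startY, z);
        while (pos.getY() > 0) {
            if (!world.getBlockState(pos).isAir()) {
                return pos.getY();
            }
            pos = pos.down(); // allocates a brand-new BlockPos on every iteration
        }
        return 0;
    }
}
```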

@magicus
Contributor

magicus commented Sep 13, 2021

There's a public BlockPos.Mutable class with a move method. I bet that will work tons better in this context. :) I'll see if I can implement a fix.
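
As a hedged sketch of what that could look like (again hypothetical, not the actual patch): a single BlockPos.Mutable is reused for the whole column, and move() shifts it in place instead of allocating a new object per step.

```java
// Hypothetical sketch of the same scan using BlockPos.Mutable; not the actual fix.
import net.minecraft.util.math.BlockPos;
import net.minecraft.util.math.Direction;
import net.minecraft.world.World;

public class ColumnScanMutable {
    static int findTopSolidY(World world, int x, int z, int startY) {
        BlockPos.Mutable pos = new BlockPos.Mutable(x, startY, z);
        while (pos.getY() > 0) {
            if (!world.getBlockState(pos).isAir()) {
                return pos.getY();
            }
            pos.move(Direction.DOWN); // mutates in place, no per-step allocation
        }
        return 0;
    }
}
```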

@magicus
Contributor

magicus commented Sep 13, 2021

I have a fix in a local branch that drastically reduces the amount of allocation done by the mod, so it barely rises above the baseline (compared to allocating >20x as much as the vanilla game itself).

However, I'm beginning to suspect this is the wrong approach. We're still spending something like 20% of the CPU time updating chunk data. The underlying problem seems to be that the chunk data gets updated far too often: over and over again, for no apparent reason. The allocation fixes I've done might be worth it in their own right anyway, but I'd like to find and fix the core issue. If done properly, the allocation excess might just turn out to be a symptom rather than a bug in itself.

@magicus
Contributor

magicus commented Sep 13, 2021

My prime suspect at this time is that the cached chunks are purged too aggressively in WorldData.clearCache. I'm not sure I follow the logic 100%, but it seems that all chunks are deleted after 5000 ms, so they continuously get recreated. Commenting out the purging of the cache from WorldManager.update resolved both the performance and memory allocation problems, as far as I can tell. On the other hand, never clearing the cache is likely to be bad as well...
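
For reference, the behaviour I'm describing boils down to something like the following sketch (class and field names are invented; only the 5000 ms timeout comes from the actual code): every cached chunk older than the timeout is dropped on each update pass, so under normal play everything gets evicted and rebuilt continuously.

```java
// Hypothetical illustration of the time-based purge behaviour; not WorldData.clearCache itself.
import java.util.Iterator;
import java.util.Map;

class ChunkCacheSketch {
    private static final long PURGE_TIMEOUT_MS = 5000;

    // Removes every cached chunk that has not been touched within the timeout window.
    static void clearCache(Map<Long, CachedChunk> chunks, long now) {
        Iterator<Map.Entry<Long, CachedChunk>> it = chunks.entrySet().iterator();
        while (it.hasNext()) {
            CachedChunk chunk = it.next().getValue();
            if (now - chunk.lastTouched > PURGE_TIMEOUT_MS) {
                it.remove(); // evicted after 5 s, then re-created on the next map update
            }
        }
    }

    static class CachedChunk {
        long lastTouched; // timestamp of the last time this chunk was used
    }
}
```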

@magicus
Contributor

magicus commented Sep 14, 2021

Chunks are scheduled for re-processing once chunkUpdateInterval ms have passed. The default value is 1000, meaning all loaded chunks are re-calculated (which is expensive) every second. On my not-so-fast machine this causes a build-up in the processing queue, which slows things down further: when enqueuing, we first check whether the chunk is already in the queue, which is a linear O(n) operation. Since that check is done for every chunk enqueued, the total cost is O(n^2), and this really hurts once the queue starts growing.
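
To make the complexity argument concrete, here is a hedged sketch (not JustMap's actual queue code) of the two enqueue strategies: a contains() check on a plain queue is O(n) per call, while tracking membership in a HashSet alongside the queue makes the duplicate check O(1).

```java
// Hypothetical sketch of the enqueue patterns discussed above; not the mod's actual code.
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;

class UpdateQueueSketch {
    private final Queue<Long> slowQueue = new ArrayDeque<>();

    // O(n) per call: contains() walks the whole deque, so n enqueues cost O(n^2) total.
    void enqueueSlow(long chunkId) {
        if (!slowQueue.contains(chunkId)) {
            slowQueue.add(chunkId);
        }
    }

    private final Queue<Long> fastQueue = new ArrayDeque<>();
    private final Set<Long> queued = new HashSet<>();

    // O(1) per call: the HashSet answers the membership question directly.
    void enqueueFast(long chunkId) {
        if (queued.add(chunkId)) {
            fastQueue.add(chunkId);
        }
    }

    Long dequeueFast() {
        Long chunkId = fastQueue.poll();
        if (chunkId != null) {
            queued.remove(chunkId);
        }
        return chunkId;
    }
}
```

A LinkedHashSet alone would also do the job if FIFO iteration order is all the queue needs.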

The good news is that there is an immediate workaround: go to settings/optimizations and change chunkUpdateInterval to 5000 (the maximum allowed). This helped a lot on my machine. In my test builds I've set it to 60000 (meaning chunks are only refreshed once a minute), but that is not allowed in the official build. Also set chunkLevelUpdateInterval to 10000.

@hugeblank You might be interested in the workaround.

@magicus
Contributor

magicus commented Sep 14, 2021

(But even with this workaround, JustMap still consumes about 25% of the CPU cycles in Minecraft, and 38% of the allocation pressure. So it needs to be fixed properly. But at least it's playable.)

@magicus
Contributor

magicus commented Nov 10, 2021

I have now started working on a completely new backend for the map data, which should not have any lag issues like those reported in this bug. A preview is available in draft PR #107.

@hugeblank
Author

@magicus you are legendary. I'll take a look later.

@magicus
Contributor

magicus commented Nov 10, 2021

@hugeblank Be aware that the preview PR is just that, at the moment. The map looks awful, and it is likely to crash sooner rather than later. (But it is faster! :-))
