Hot Reload Hang on MacOS #79
Comments
Interesting, I have seen this previously when the "animate" prop was set on the canvas element. Clearing the duk context and release pool on a reload appeared to resolve the issues I was seeing, but I'm not using animated draws. I'll be back at my desk shortly, so I'll fire this up on Mac and take a look.
Ahh right, you mentioned those fixes before. Yea, they sound like a good idea to me. I just noticed that I can accelerate the lock-up by resizing the window a bunch in the gain plugin. Just making a note of that here.
Yeah. I seem to remember it appeared to be mostly antagonised by the … I'm just on a separate thing at the moment but I will for sure take a look at this today. I'll let you know if I get anywhere with it.
Yea, I think you're right. I don't think it was something you introduced, I think the finalizer stuff I wrote is probably buggy and not always removing lambdas correctly. If you have time I would definitely appreciate the look! I'll try to put some time into it later this week as well.
Yeah no worries. I'll take a look today/tomorrow for sure. Heh, last time, working it all through under gdb made my brain melt...
:D Yea, this might be a gnarly one. Thank you!
Heh. Writing a little level meter component at the minute and getting freezes and bad performance... I have a sneaking suspicion this has just bitten me. Into the bug dive I go! (I might take a good hard look and profile over the …)
😁 good luck! I haven't had a chance to dig in yet; hope it's not too gnarly |
Looking through this at the moment. At a first pass, it almost looks like there is a significant lag in the call to the finalizers. From the Duktape docs:

> "Finalizers are executed for unreachable objects detected by reference counting or mark-and-sweep. The finalizer may not execute immediately, however, not even when reference counting detects that the object became unreachable."

I wonder if it literally is taking this long for the finalizers to be called after garbage collection. Hence our filling up of the pool, which does eventually get cleared, though clearing blocks the UI/message thread.

So, high-level thoughts:

- Can we increase the GC rate?
- Can we reduce the number of LambdaHelpers being created for repeated Canvas redraws?
- If this lag is unavoidable, should the release pool live on some background thread which can be incrementally cleared without blocking the message loop? Might be hard if we have to use a mutex to protect the pool... we'll still block.

This comment is more me thinking aloud than anything at the moment; will continue digging.
Based on a look under the debugger, it does look like the rate at which we push lambdas into the pool when painting on a timer is far higher than the rate of GC. We push enough in there for the loops over the pool to significantly block the UI thread. Effectively an O(N²) lookup, I guess, i.e. on garbage collect, for every lambda to be cleared we iterate the entire vector in the worst case. A first experiment might be trying a different data structure for the pool to reduce the lookup cost. An unordered_set of pointers might be worth a try, preallocating some sensible amount of memory? Failing that, maybe we need a different scheme for the canvas onDraw property function invocations.
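To illustrate the idea, here's a minimal stdlib-only sketch of a pool backed by an `unordered_set` so that each finalizer callback removes its lambda in O(1) average time instead of scanning a vector. The names (`LambdaReleasePool`, `add`/`remove`) are illustrative, not the actual react-juce implementation:

```cpp
#include <cstddef>
#include <unordered_set>

// Placeholder for the real helper, which would wrap a std::function.
struct LambdaHelper {};

class LambdaReleasePool {
public:
    LambdaReleasePool() { pool.reserve(1024); } // preallocate buckets up front

    void add(LambdaHelper* h) { pool.insert(h); }

    // Called from the Duktape finalizer: hash lookup + erase, O(1) average,
    // versus a full vector scan per finalized lambda (O(N) each, O(N^2) total).
    void remove(LambdaHelper* h)
    {
        pool.erase(h);
        delete h;
    }

    std::size_t size() const { return pool.size(); }

private:
    std::unordered_set<LambdaHelper*> pool;
};
```

The key property is that erasure cost no longer grows with the number of pending lambdas, so a burst of timer-driven redraws can't turn the GC sweep into a quadratic stall on the message thread.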
So yeah, a quick hack switching to … I might think about this more though and work out if anything nicer can be done. That's a lot of small allocations and deallocations flying around. Any thoughts on this one? I'm going to do some general profiling around canvas timer draws this eve so will let you know what hotspots I find.
Might also be interesting to manually call duk_gc after invoking the lambdas here. I wonder if the "light function" concept is also worth some investigation, to avoid creating JS function objects in some cases which appear to require periodic garbage collection to destroy rather than being immediately released via ref counting.
https://wiki.duktape.org/howtolightfuncs — I need to read into this all in more detail...
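For reference, the manual-GC idea sketched against the public Duktape API would look something like this (untested fragment; `flushPendingFinalizers` is a hypothetical name, but `duk_gc` is the documented call, and the Duktape API docs note you may want to invoke it twice so that objects with finalizers are actually collected):

```c
#include "duktape.h"

/* Force a collection pass after invoking the lambdas, rather than
   waiting for mark-and-sweep to get around to our finalizers. */
static void flushPendingFinalizers(duk_context* ctx)
{
    duk_gc(ctx, 0);
    duk_gc(ctx, 0); /* second pass frees objects whose finalizers just ran */
}
```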
Also interesting: svaarala/duktape#1872
Sounds very much like we might be seeing similar behaviour. I suspect GC with all these function objects being created is causing performance to drop off a cliff here. Having a bit of trouble getting anything useful out of perf at the moment. Will try Mac in a second and maybe use the gain plugin example, as there's less going on in there than this app. I might try disabling mark-and-sweep and manually invoking gc inside readVarFromDukStack just to see what happens to the performance.
I can drop the unordered_map fix over as it does fix the … In my particular context this hack seems to further improve UI response. To be expected with stateless components till this optimisation is made, I expect.
I'm also wondering whether we can avoid pushing a new canvas context object to the duk stack for every single invocation, and whether we can instead store a canvas context object permanently in the global namespace or similar... I'd be interested to see what happens if this approach is possible. There are also a few other performance improvements to be had, like the …
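One possible home for a long-lived canvas context is Duktape's global stash, which is reachable only from C. A rough, untested fragment of the idea (the `"canvasCtx"` key is a hypothetical name; `duk_push_global_stash`, `duk_put_prop_string`, and `duk_get_prop_string` are real Duktape API calls):

```c
/* Once, at setup: create the context object and stash it. */
duk_push_global_stash(ctx);
duk_push_object(ctx);                      /* the canvas context object */
duk_put_prop_string(ctx, -2, "canvasCtx"); /* stash.canvasCtx = obj */
duk_pop(ctx);

/* Per draw call: fetch the stashed object instead of building a new one. */
duk_push_global_stash(ctx);
duk_get_prop_string(ctx, -1, "canvasCtx");
/* ... pass it to the onDraw invocation, then pop ... */
```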
Sorry I'm spamming you a bit, but I've been profiling so I'll register findings here.

- Rendering text via strokeText and fillText on the canvas object is currently a hotspot. Removing that made a fair amount of difference. I've got some text overlaid on this meter. I should probably have two components with one using a higher z-order; I'm not sure how z-order is intended to be controlled via component props at this stage? Is this still a thing to be implemented @nick-thompson? Relative z-orderings on views?
- The canvas …
- Hotspot is various locations in …

Enabling these badboys (at least on 32-bit ARM) has also helped nicely:

- DUK_USE_LIGHTFUNC_BUILTINS
OK... I'm clearly also redrawing way too many things when I should just be drawing my meter... some prop/state lifecycle thing happening I'm yet to catch/grasp.

EDIT: Yep. Removed an unrelated component which draws an image to test. Performance back up... Silly me. Well, I've various performance fixes to send over to you off the back of this regardless (and probably some useful tips for docs). This has been most enlightening. So, summary of points after all this spew @nick-thompson.
After changing some of these things, performance drawing a very busy meter on an embedded device is pretty darn good, so I think there are strong hopes with regard to #28.
@JoshMarler you're the man, this is so awesome. I'll try to address everything in a few high level thoughts, to start:
High level, I'm thinking basically that there are two types of functions that we push to javascript: persistent functions via … I need to think on this for a bit, but I feel like there's likely an elegant solution there... Thanks a million for the detail here @JoshMarler, I'll write back with more thoughts soon.
No problem! It's been massively valuable to dig into this. I found the causes for my wasted reconciles. Indeed, I reckon the …

The idea around CanvasContext is interesting, I totally agree. I wondered about the whole "light function" thing and whether that is something we could somehow leverage, but I'm still not sure yet. I did briefly experiment with having a single …

I'll muse over the "transient" function objects idea too once I've gotten other tasks out of the way... that does feel like the right approach, maybe even coupled with the "light-function" facility. If we can distinguish these "transient" function objects, it feels like light-functions might be a very good idea; I suspect they could improve performance for this sort of thing dramatically (if there is indeed no associated GC etc). If we could work out some way for things like canvas …

I'll get the unordered_set fix over to you and deal with #5 as a start.
@nick-thompson, just bumping this in case you missed it: any thoughts on the z-order thing? I've managed to get …
@nick-thompson, PR with fix for the freeze here: #81. Will follow up with a PR for that.
Ahh yea, sorry, I skipped right over the …
Implementing a proper …

Definitely happy to have you tackle #5 and #4. Same PR is cool with me. For #5 I want to dig into the React HostConfig a little more to see if there really is no "commit" callback, as if to say "we've now applied all property updates, commit the update transaction." That certainly seems like the ideal time to do the layout. If not, I think we should use something like https://docs.juce.com/master/classAsyncUpdater.html to coalesce the updates. For example, if I set …

I'll take a look at the PR now, thanks so much for digging into it. I want to spend some time as well playing with Duktape, lightfuncs, and exploring this GC thing and "transient" function stack objects. I'll write back if I find anything.
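The coalescing idea can be sketched with a stdlib-only dirty-flag pattern: several property updates in a row mark the layout as dirty, and the expensive layout pass runs once when the message loop next services the update. In the real project this role would be played by juce::AsyncUpdater (triggerAsyncUpdate / handleAsyncUpdate); the class and member names below are illustrative, not from the codebase:

```cpp
#include <string>
#include <unordered_map>

class CoalescedLayout {
public:
    void setProperty(const std::string& name, int value)
    {
        props[name] = value;
        dirty = true; // analogous to triggerAsyncUpdate(): cheap, repeatable
    }

    // Called once per message-loop tick, like handleAsyncUpdate().
    void flush()
    {
        if (!dirty)
            return;
        ++layoutPasses; // the expensive layout runs here, once per batch
        dirty = false;
    }

    int layoutPasses = 0;

private:
    std::unordered_map<std::string, int> props;
    bool dirty = false;
};
```

Setting width, height, and flexGrow back-to-back would then cost one layout pass instead of three, which is exactly the win a "commit" callback (if HostConfig offers one) would give for free.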
This fix resolves #79. The freeze was caused by an O(N²) lookup on the LambdaReleasePool vector when GC kicked in and triggered our LambdaReleasePool finalizers. Note this also bumps JUCE from 5.4.1 to 5.4.2, as the std::hash specialisation for juce::Uuid was added to the Uuid header in 5.4.2.
I'm finding that during a hot reload on MacOS, Xcode just hangs in a permanent loop and the project freezes.
Dropping the debugger in shows that we're stuck in the Duktape finalizer callstack searching through the lambdaReleasePool, but it seems like there are a million things to be finalized. It looks like there indeed might be a bug with the lambdaReleasePool, because it's huge, but I'm not sure that's exactly what's hanging up Xcode here. @JoshMarler have you seen this one?