Added bulk object allocator (memory pool) for syntax nodes #173
base: master
Conversation
…lso very importantly memory fragmentation). Added StringCache.pas. Changed attributes to use the string cache. Note: this requires a define, USESTRINGCACHE, which is off by default. The cache is not threadsafe.
…ionary of attributes, but attributes no longer work as a dictionary.)
…en threadsafe, any individual instance has get/add method contents wrapped in a lock, to lock the internal structures. The instance is never cleared when the count of objects using it drops to 0, since that's a bit more complex to get right (should only clear while the refcount is 0, but have to lock that to ensure it's not changed while clearing is happening, which severely slows inc/decrementing the count. I couldn't think of a good compare-lock-exchange algorithm. So just don't bother; it's never cleared while alive, when threadsafe.)
…both operations; now holds it for both. Safe to enter a CS twice.)
This reverts commit 95233e9.
… This is off by default, but when on will reduce memory fragmentation at the cost of slightly higher permanent memory usage through the lifetime of the process
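The commits above describe the StringCache idea: attribute strings are looked up in a shared cache so identical text is stored once, and the threadsafe variant wraps get/add in a lock and is never cleared while alive. The following is only a minimal sketch of that idea, assuming a TDictionary keyed on the string itself; the actual StringCache.pas in this pull request may differ in detail.

```pascal
unit StringCacheSketch;

// Minimal sketch only; the shipped StringCache.pas may differ in detail.

interface

uses
  System.Generics.Collections, System.SyncObjs;

type
  TStringCacheSketch = class
  private
    FLock: TCriticalSection;
    FStrings: TDictionary<string, string>;
  public
    constructor Create;
    destructor Destroy; override;
    // Returns the cached copy of Value, adding it on first sight, so every
    // attribute holding the same text shares a single heap allocation.
    function Get(const Value: string): string;
  end;

implementation

constructor TStringCacheSketch.Create;
begin
  inherited Create;
  FLock := TCriticalSection.Create;
  FStrings := TDictionary<string, string>.Create;
end;

destructor TStringCacheSketch.Destroy;
begin
  FStrings.Free;
  FLock.Free;
  inherited;
end;

function TStringCacheSketch.Get(const Value: string): string;
begin
  // The threadsafe variant wraps get/add in a lock; the cache itself is
  // never cleared while the instance is alive.
  FLock.Enter;
  try
    if not FStrings.TryGetValue(Value, Result) then
    begin
      FStrings.Add(Value, Value);
      Result := Value;
    end;
  finally
    FLock.Leave;
  end;
end;

end.
```

Typical use would then be along the lines of Attr.Value := Cache.Get(RawValue) (illustrative names), so repeated attribute values end up sharing one string instance.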
Thanks, @vintagedave! But FastMM uses a memory pool too, doesn't it? I mean, do you have any benchmark that shows that your memory management is better than FastMM? It's a very sensitive area, we have to be very careful.
That's tricky. I have empirical evidence, which is memory stability in the IDE. I haven't done a full test, which would require tracking all FastMM allocations and free areas of memory. FastMM cannot ever optimize for a specific class. It can only optimize by size, and it does have pools per memory size (small, medium, and I think it goes straight to VirtualAlloc for large.) My suspicion is that the fragmentation is occurring at a higher level, eg it may be releasing entire blocks back to the OS and then re-acquiring them, so the fragmentation is at that level. It will always free memory, whereas this code does not - it is a static pool that never shrinks, which means fragmentation cannot occur because there is no repeated releasing and reacquisition. For my purposes (IDE plugins) this works and seems to be important (there is a noticeable stability increase in the IDE, especially in older versions), so I will keep this in a private fork even if it's not merged. Personally I suspect this would benefit FixInsight too, if you parse large (say, thousand-unit-plus) projects. Note that it's not enabled by default, and if merged in, no-one will notice any change, because unless it is built with the define enabled there is no code change.
I'm OK to merge this. I just want to be 150% sure, that's why I'm asking. As far as I understand, this pull request includes StringCache, so if I merge this, the previous pull request is not needed. Right? Let's wait a couple of days so everyone can share their opinion. @Wosi and @sglienke use DelphiAST intensively, so I want them to be happy too.
Yes, I branched from my string cache branch, since it's part 2 of the …
To be honest, using a simple object pool, which you request objects from and then put them back into, would have been enough. If you preallocate n objects, imo you have the same effect. Messing with InitInstance and allocating memory yourself for those objects to use seems unnecessary overkill and a possible source of errors (is it thread-safe? I don't think so).
It's not threadsafe, no. Of course, it's not used unless you turn it on - when compiled with the define left undefined, the code is absolutely unchanged from the current code. The main issue with reusing objects in an object pool, instead of a memory pool, is ensuring that they are initialised correctly. With this technique, the constructor etc runs as normal and the memory is zeroed. With a reused object, you need to ensure that every time one is reused, all fields are reset. Without an extensive code inspection I'm not confident of that in DelphiAST, since the code relies on new objects being in the state they are constructed in, ie it assumes defaults without setting everything explicitly.
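To make the "constructor runs as normal, memory is zeroed" point concrete, here is a hedged sketch of per-class allocation hooks backed by a grow-only pool. TNodePoolSketch and TSyntaxNodeSketch are illustrative names, not this branch's actual code, and the real implementation may differ.

```pascal
unit NodePoolSketch;

// A hedged sketch, not the branch's actual code: a grow-only pool per node
// class, plugged in via NewInstance/FreeInstance.

interface

uses
  System.Generics.Collections;

type
  TNodePoolSketch = class
  private
    FItemSize: Integer;
    FFreeList: TStack<Pointer>;
    procedure GrowBlock;
  public
    constructor Create(ItemSize: Integer);
    function Acquire: Pointer;
    procedure Release(P: Pointer);
  end;

  TSyntaxNodeSketch = class
  public
    class function NewInstance: TObject; override;
    procedure FreeInstance; override;
  end;

implementation

var
  NodePool: TNodePoolSketch; // deliberately lives for the whole process

constructor TNodePoolSketch.Create(ItemSize: Integer);
begin
  inherited Create;
  FItemSize := ItemSize;
  FFreeList := TStack<Pointer>.Create;
end;

procedure TNodePoolSketch.GrowBlock;
const
  ItemsPerBlock = 1024;
var
  Block: PByte;
  I: Integer;
begin
  // One bulk allocation; the block is never released, so repeated node
  // alloc/free cycles do not hand memory back to FastMM or the OS.
  GetMem(Block, FItemSize * ItemsPerBlock);
  for I := 0 to ItemsPerBlock - 1 do
    FFreeList.Push(Block + I * FItemSize);
end;

function TNodePoolSketch.Acquire: Pointer;
begin
  if FFreeList.Count = 0 then
    GrowBlock;
  Result := FFreeList.Pop;
end;

procedure TNodePoolSketch.Release(P: Pointer);
begin
  FFreeList.Push(P);
end;

class function TSyntaxNodeSketch.NewInstance: TObject;
begin
  // InitInstance zeroes the raw memory and sets up the VMT, so constructors
  // and default field values behave exactly as for a heap-allocated object.
  Result := InitInstance(NodePool.Acquire);
end;

procedure TSyntaxNodeSketch.FreeInstance;
begin
  CleanupInstance;                  // finalize managed fields (strings, etc.)
  NodePool.Release(Pointer(Self));  // memory returns to the pool, not FastMM
end;

initialization
  NodePool := TNodePoolSketch.Create(TSyntaxNodeSketch.InstanceSize);

end.
```

A pool like this has to be per concrete class, since descendants have different instance sizes, which matches the "memory pool for each syntax node class" description of this branch.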
Resetting fields is done by calling … But to me this would be the cleaner approach, and easier to maintain when moving to a more typed node tree in the future, because then you don't need to put that custom allocation code into every node class.
Hmm. Well, I could certainly rework this to do that instead. Would that be a better option?
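For reference, the simpler alternative suggested above looks roughly like the following: a generic pool that preallocates n instances and hands them out for reuse. TObjectPoolSketch is an illustrative name, not code from this pull request.

```pascal
unit ObjectPoolSketch;

// Illustration only (not code from this pull request): a generic object pool
// that preallocates n instances and hands them out for reuse.

interface

uses
  System.Generics.Collections;

type
  TObjectPoolSketch<T: class, constructor> = class
  private
    FFree: TStack<T>;
  public
    constructor Create(Preallocate: Integer);
    destructor Destroy; override;
    function Acquire: T;
    procedure Release(Instance: T);
  end;

implementation

constructor TObjectPoolSketch<T>.Create(Preallocate: Integer);
var
  I: Integer;
begin
  inherited Create;
  FFree := TStack<T>.Create;
  for I := 1 to Preallocate do
    FFree.Push(T.Create);
end;

destructor TObjectPoolSketch<T>.Destroy;
begin
  while FFree.Count > 0 do
    FFree.Pop.Free;
  FFree.Free;
  inherited;
end;

function TObjectPoolSketch<T>.Acquire: T;
begin
  if FFree.Count > 0 then
    Result := FFree.Pop
  else
    Result := T.Create; // grow on demand if the preallocation runs out
end;

procedure TObjectPoolSketch<T>.Release(Instance: T);
begin
  // The correctness concern raised above: all fields must be returned to
  // their freshly-constructed state before the instance is handed out again.
  FFree.Push(Instance);
end;

end.
```

The trade-off discussed above is the Release step: with the memory-pool approach the constructor and zero-initialisation run for every allocation, whereas here every field has to be reliably reset between uses.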
…into syntaxnode-allocation
Conflicts, now fixed:
Source/DelphiAST.Classes.pas
Source/DelphiAST.Writer.pas
Source/SimpleParser/SimpleParser.Lexer.pas
I have been looking into minimizing heap allocations by refactoring several methods in TPasSyntaxTreeBuilder. So this can be closed imo, as the changes are quite outdated by now.
This branch implements a memory pool for each syntax node class. It means that nodes are allocated not from FastMM, but from a pool. When freed, the memory is returned to the pool, but the large allocation is not freed; in other words, repeated allocations and frees will not fragment memory. This comes at the cost of slightly higher memory usage for the life of the process (typically a few hundred kilobytes.)
This is turned off by default, and when off has absolutely no effect on the code.
This is part of my quest to reduce memory fragmentation in pre-XE8 IDEs for Navigator and Bookmarks. They re-parse files with almost every change (after a small delay), and over many hours this can result in many, many allocations and frees of the syntax node classes. Since other memory operations occur in between these, and since the pre-XE8 IDE has a small address space (2GB), further exacerbated by the .NET allocator sitting in the same process, fragmentation can occur.
This is not necessary for occasional use of DelphiAST, eg running it once. It may help significantly when it is run many times in a row, as in the Navigator/Bookmarks use case, or when parsing several thousand files in a large project.
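To illustrate the "no effect when off" point: the pool hooks only exist when the define is set, along these lines. USENODEPOOL is a placeholder name here, not necessarily the define this branch actually uses.

```pascal
type
  TSyntaxNode = class(TObject)
  public
  {$IFDEF USENODEPOOL}
    // Compiled in only when the define is enabled; without it, this
    // declaration is identical to the existing code.
    class function NewInstance: TObject; override;
    procedure FreeInstance; override;
  {$ENDIF}
    // ... existing members unchanged ...
  end;
```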