Split 'go' into with/without attrs code paths #19
This is the first of two pull requests that do some internal rewiring. This first change improves BigTable performance considerably, with no negative effect on the other benchmarks: `bigTable/Utf8` runs 25% faster. If the BigTable benchmark is a useful test, that is not a trivial improvement, and it might make the Blaze benchmarks page even more impressive.
The change is simple: begin `renderXXX` with the assumption that an element has no attributes, then proceed with the original algorithm when encountering an `Add(Custom)?Attribute`.
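To make the idea concrete, here is a minimal sketch of the two-path structure over a deliberately simplified markup type. The `Markup` constructors and the `renderNoAttrs`/`renderWithAttrs` names are hypothetical, not the actual blaze-markup internals, and the sketch omits escaping; it only illustrates starting on a no-attributes path and falling back to attribute threading when an attribute constructor appears.

```haskell
{-# LANGUAGE LambdaCase #-}
module Main (main) where

import qualified Data.ByteString.Builder as B
import           System.IO (stdout)

-- Simplified stand-in for blaze's markup type (hypothetical constructors).
data Markup
  = Element B.Builder Markup              -- tag name, children
  | AddAttr B.Builder B.Builder Markup    -- attribute key, value, wrapped markup
  | Content B.Builder                     -- raw text (no escaping in this sketch)
  | Append Markup Markup
  | Empty

str :: String -> B.Builder
str = B.stringUtf8

-- Fast path: assume the element carries no attributes.  Only when an AddAttr
-- constructor shows up do we switch to the attribute-threading path.
renderNoAttrs :: Markup -> B.Builder
renderNoAttrs = \case
  Element tag kids -> str "<" <> tag <> str ">"
                   <> renderNoAttrs kids
                   <> str "</" <> tag <> str ">"
  AddAttr k v rest -> renderWithAttrs (str " " <> k <> str "=\"" <> v <> str "\"") rest
  Content c        -> c
  Append a b       -> renderNoAttrs a <> renderNoAttrs b
  Empty            -> mempty

-- Original-style path: thread the accumulated attributes down to the element
-- they belong to.  Children start over on the fast path.
renderWithAttrs :: B.Builder -> Markup -> B.Builder
renderWithAttrs attrs = \case
  Element tag kids -> str "<" <> tag <> attrs <> str ">"
                   <> renderNoAttrs kids
                   <> str "</" <> tag <> str ">"
  AddAttr k v rest -> renderWithAttrs (attrs <> str " " <> k <> str "=\"" <> v <> str "\"") rest
  Content c        -> c
  Append a b       -> renderWithAttrs attrs a <> renderWithAttrs attrs b
  Empty            -> mempty

-- Tiny usage example: one cell with an attribute, one without.
-- Prints: <td class="wide">1</td><td>2</td>
main :: IO ()
main = B.hPutBuilder stdout . (<> str "\n") . renderNoAttrs $
  Append (AddAttr (str "class") (str "wide") (Element (str "td") (Content (str "1"))))
         (Element (str "td") (Content (str "2")))
```

In a table-heavy document like BigTable, most elements never hit the `AddAttr` case, so the renderer spends almost all of its time on the path that never builds or inspects an attribute accumulator.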
As mentioned above, this suggestion precedes another pull request that will yield almost identical (within 3%) performance while promoting CSS styles and classes to the same syntactic sweetness as attributes. I initially split the `go` function because I couldn't find any other way to make my CSS additions an 'optional' package; I thought I'd just make them an optional part of the core algorithm. It turns out that the same technique (treating attributes as an 'optional' feature) has a pleasant effect on the original algorithm.
I have one lingering question (answer: see the next comment) about the benchmark results: why does this modification have such a drastic effect on the Utf8 benchmarks (including wideTable and basic) and not the others? I'd guess the reason is hiding somewhere in GHC's optimizations, but it would be nice to know. I'd expect to see a similar decrease in execution time (in ms, not %) across the other modules.