[Console] Fix performance bottleneck for large JSON payloads #57668
Conversation
The legacy_core_editor implementation was calculating the current editor line count by calling .split('\n').length on the entire buffer, which was very inefficient in a tight loop and caused a performance regression. Now we use the cached line count provided by the underlying editor implementation.
Probably was not a performance issue, just unnecessary steps. Not sure that this function is even being used.
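For context, a minimal sketch of the pattern being replaced; the function names and object shapes below are illustrative, not the actual Console or Ace source:

```ts
// Sketch only: illustrates the pattern described above, not the real Console code.

// Before: each call re-reads the entire buffer and splits it on '\n',
// so computing the line count is O(buffer size). Inside a tight loop
// (e.g. while iterating tokens), this dominates the runtime.
function lineCountViaSplit(editor: { getValue(): string }): number {
  return editor.getValue().split('\n').length;
}

// After: Ace's EditSession already tracks the number of rows, so the
// cached value can be read directly instead of being recomputed.
function lineCountFromSession(session: { getLength(): number }): number {
  return session.getLength();
}
```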
Pinging @elastic/es-ui (Team:Elasticsearch UI)
💚 Build Succeeded
LGTM! Tested locally and works great with the large body.
Left a comment about variable naming that would help readers get into the context of Console 😊
function tokenTest(tokenList, prefix, data) {
  if (data && typeof data !== 'string') {
    data = JSON.stringify(data, null, 3);
3? really? 😊
  position: { lineNumber: 1, column: 1 },
});
const ret = [];
let t = iter.getCurrentToken();
Same here: let token = ...
  editor: coreEditor,
  position: { lineNumber: 1, column: 1 },
});
const ret = [];
Wouldn't it be better to name it? What exactly do we expect to return?
…#57668)
* Fix Console performance bug for large request bodies. The legacy_core_editor implementation was calculating the current editor line count by .split('\n').length on the entire buffer, which was very inefficient in a tight loop and caused a performance regression. Now we use the cached line count provided by the underlying editor implementation.
* Fix performance regression inside of the ace token_provider implementation.
* Clean up another unnecessary use of getValue().split(..).length. Probably was not a performance issue, just taking unnecessary steps. Not sure that this function is even being used.
…#57668) * Fix Console performance bug for large request bodies
# Conflicts:
# src/legacy/core_plugins/console/public/np_ready/application/models/legacy_core_editor/__tests__/input.test.js
# src/legacy/core_plugins/console/public/np_ready/application/models/legacy_core_editor/__tests__/input_tokenization.test.js
…#57683) * Fix Console performance bug for large request bodies
# Conflicts:
# src/legacy/core_plugins/console/public/np_ready/application/models/legacy_core_editor/__tests__/input.test.js
# src/legacy/core_plugins/console/public/np_ready/application/models/legacy_core_editor/__tests__/input_tokenization.test.js
…#57682) * Fix Console performance bug for large request bodies
Summary
Fix #57431
Notes for reviewers
The bulk of the changes are whitespace changes 🤦🏼‍♂️ in a test file that was updated to check that the line count is reported correctly.
The rest should be in legacy_core_editor.ts, sense_editor.ts and token_provider.ts. Specifically, we are now using .getLength, which returns a cached value of the line count: https://github.com/ajaxorg/ace/blob/d35142c47b5549ab5714848f373b98a4462d2bdd/lib/ace/edit_session.js#L1092
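For illustration, the kind of assertion the updated tests add might look roughly like this. This is a sketch that assumes the brace build of Ace and a Jest-style test environment; it is not the actual code in input.test.js:

```ts
import ace from 'brace'; // assumption: Console's legacy editor is built on brace/Ace

it('reports the cached line count of the current buffer', () => {
  const editor = ace.edit(document.createElement('div'));
  editor.setValue('{\n  "query": { "match_all": {} }\n}', 1);
  // getLength() returns Ace's cached row count without re-splitting the buffer.
  expect(editor.getSession().getLength()).toBe(3);
});
```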
How to test
See the original issue for details. Thanks @LucaWintergerst for reporting!
Also attempt to edit text at the bottom of the large body. This will cause the autocomplete token iterator to traverse a large part of the body.
There should now be no noticeable delay when typing or moving the cursor around.
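If you need a large body to test with, something like the following generates a multi-megabyte request to paste into Console. This is an illustrative script, not the repro from the linked issue; the index name and document shape are arbitrary:

```ts
// Generates a large JSON request body for manual testing in Console.
const largeDoc = {
  values: Array.from({ length: 50000 }, (_, i) => ({ id: i, text: `value-${i}` })),
};

// Paste the printed request into Console, then try editing near the bottom of the body.
console.log(`PUT /console-perf-test/_doc/1\n${JSON.stringify(largeDoc, null, 2)}`);
```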
Release Note
We fixed a Console performance regression that caused a significant slowdown for large request bodies.