Is your feature request related to a problem? Please describe.
The OpenSearch Dashboards server supports compressed traffic, and the OpenSearch core APIs support compression. The OpenSearch JS client supports compression, but OpenSearch Dashboards does not currently hook into the available compression algorithm.
Describe the solution you'd like
Hook into the client's compression algorithm (currently only gzip) and allow the feature to be configurable at the cluster admin level (via the opensearch_dashboards.yml config), as sketched below. Ideally this would improve request-response time overall by compressing traffic on the client side. It should also improve resource management in cases where the physical limitations of a machine prevent a successful response (test runners being one example).
Requirements:
- Benchmark before enabling compression
- Implementation
- Benchmark after enabling compression
- Update relevant workflows and CI
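A rough sketch of the configuration wiring (not the actual Dashboards internals; the config shape and `createClient` helper are illustrative, while `opensearch.compression: true` is the flag from the referenced commit):

```ts
// Minimal sketch, not the actual Dashboards wiring: map an
// `opensearch.compression: true` flag from opensearch_dashboards.yml onto the
// OpenSearch JS client, which currently only supports gzip.
import { Client } from '@opensearch-project/opensearch';

// Illustrative config shape; the real Dashboards config schema differs.
interface OpenSearchClientConfig {
  hosts: string[];
  compression?: boolean; // `opensearch.compression` in opensearch_dashboards.yml
}

export function createClient(config: OpenSearchClientConfig): Client {
  return new Client({
    node: config.hosts,
    // Enable gzip compression of traffic only when the admin has opted in.
    compression: config.compression ? 'gzip' : undefined,
  });
}
```

Since the JS client only supports gzip today, a boolean flag in opensearch_dashboards.yml can simply map to the client's `'gzip'` compression option.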
Describe alternatives you've considered
n/a
Additional context
Compression on the sample data indices could be an improvement. Better documentation on Node API compression support would also help (saved objects already support compression).
Hooking into the existing benchmark project can assist with this.
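For a quick before/after comparison prior to wiring into the benchmark project, something like the following sketch could be used (it assumes a local dev cluster at http://localhost:9200 with security disabled and the sample web logs index installed; the index name and query size are illustrative):

```ts
// Rough before/after latency check, assuming a local dev cluster at
// http://localhost:9200 and the sample web logs index installed.
// This is a sketch, not the benchmark project integration.
import { Client } from '@opensearch-project/opensearch';

async function timeSearch(useCompression: boolean): Promise<number> {
  const client = new Client({
    node: 'http://localhost:9200',
    compression: useCompression ? 'gzip' : undefined,
  });
  const start = Date.now();
  await client.search({
    index: 'opensearch_dashboards_sample_data_logs',
    body: { query: { match_all: {} }, size: 1000 },
  });
  return Date.now() - start;
}

(async () => {
  console.log(`uncompressed: ${await timeSearch(false)} ms`);
  console.log(`gzip:         ${await timeSearch(true)} ms`);
})();
```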
Audit that requests and responses are correctly compressed and decompressed. For example, the concurrent search feature tends to not support the total max hits header. Example of an implementation of the solution above: https://github.com/opensearch-project/OpenSearch-Dashboards/pull/5223/files
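One way to spot-check the wire format during such an audit is to bypass the client and inspect the raw response headers (a sketch; assumes a local cluster at http://localhost:9200 with the security plugin disabled):

```ts
// Spot-check that OpenSearch actually gzips responses when asked.
// Assumes a local cluster at http://localhost:9200 with security disabled.
import * as http from 'http';

http.get(
  {
    host: 'localhost',
    port: 9200,
    path: '/_search?size=100',
    headers: { 'accept-encoding': 'gzip' },
  },
  (res) => {
    // Expect `content-encoding: gzip` when compression is negotiated.
    console.log('status:', res.statusCode);
    console.log('content-encoding:', res.headers['content-encoding']);
    res.resume();
  }
);
```

The JS client decompresses responses transparently, so header-level checks like this are the simplest way to confirm gzip is actually in use on the wire.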
We saw an issue on 2.11 where the total disk allocation was hit easily, and we want to be more conscious of that limit. Ideally, the more aware we are of the resources needed to run OpenSearch Dashboards, the fewer resources folks need to run the application. We will have to think about whether the engine serving the request and generating the response fails gracefully when compression is not supported.
Do we need to think about how plugins can hook into a compression framework? For example, if OpenSearch compression is still Lucene based, then document IDs are likely generated specifically to support compression.
kavilla added a commit to kavilla/OpenSearch-Dashboards-1 that referenced this issue on May 2, 2024:
Support compressing traffic with `opensearch.compression: true`.
Also, set compression in the Node server and OpenSearch traffic for CI.
Issue resolved: opensearch-project#5296
Signed-off-by: Kawika Avilla <[email protected]>