
Optimized Privilege Evaluation [DRAFT] #4380

Draft
wants to merge 8 commits into main

Conversation

@nibix (Collaborator) commented May 30, 2024

Description

This is a preview of a possible implementation for #3870: Optimized Privilege Evaluation.

The major purpose is to give an initial impression of the approach and to facilitate review. However, more implementation work will be necessary.

  • Category: Enhancement
  • Why these changes are required?

Performance tests indicate that the OpenSearch security layer adds a noticeable overhead to the indexing throughput of an OpenSearch cluster. The overhead may vary depending on the number of indices, the use of aliases, the number of roles and the size of the user object. The goal of these changes is to improve privilege evaluation performance and to make it less dependent on the number of indices, etc.

  • What is the old behavior before changes and new behavior after changes?

The main behavior will not change. However, I would like to discuss whether we can use this opportunity to get rid of special behaviors that seem obscure and mostly useless.

Issues Resolved

Testing

  • At the moment, there is a unit test at org.opensearch.security.privileges.ActionPrivilegesTest
  • However, as this is a draft, it is likely that some of the existing integration tests will fail.

Check List

  • [ ] New functionality includes testing
  • New functionality has been documented
  • Commits are signed per the DCO using --signoff

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

@nibix (Collaborator, Author) commented May 30, 2024

Please have a look at the approaches. I would be very interested in your opinions.

As mentioned above, the implementation is not complete yet. The code contains a couple of TODO comments to indicate what work needs to be done.

I would also like to discuss whether a couple of things are really necessary, or whether there might be a chance to simplify the implementation by abolishing them.

These are:

  • At the moment, only RoleV7 and ActionGroupV7 configurations are supported by the class ActionPrivileges. I am wondering whether there are any plans to get rid of the *V6 configurations at some point in time. The designation V6 stems from ODFE's support for Elasticsearch 6. Are there really still OpenSearch users on this configuration? Additionally, the use of two impl classes per config type makes many unsafe casts of the generic SecurityDynamicConfiguration class necessary. If there were only one impl class per config type, the APIs could be designed in a much safer way (see the sketch after this list).
  • In config.yml, the semantics of roles.yml can be changed by setting multi_rolespan_enabled to false. The OpenSearch docs do not mention this flag. In my perception, there is no real use for this setting except maintaining backward compatibility. However, for OpenSearch, the default has always been true since its inception. Are there really users who have set it to false?
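To illustrate the casting problem mentioned above, the pattern looks roughly like this (a sketch; the repository accessor is assumed for illustration, while SecurityDynamicConfiguration, CType and RoleV7 are existing types):

// The configuration arrives as a generic container without a concrete type
// parameter, so callers must cast based on runtime knowledge of the version:
SecurityDynamicConfiguration<?> config = configurationRepository.getConfiguration(CType.ROLES);

@SuppressWarnings("unchecked")
SecurityDynamicConfiguration<RoleV7> roles = (SecurityDynamicConfiguration<RoleV7>) config;
// If the cluster actually holds a V6 config, this cast fails only at runtime.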

@nibix (Collaborator, Author) commented May 30, 2024

I have also started to work on the micro benchmarks as discussed in #3903. The generally accepted standard for micro benchmarks in Java is the JMH framework. However, this is licensed as GPL v2 with classpath exception: https://github.com/openjdk/jmh/blob/master/LICENSE Is the inclusion of a dependency with such a license acceptable in OpenSearch?
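For reference, a JMH benchmark of the evaluator would look roughly like the following (a minimal sketch; only the JMH annotations are the framework's real API, the benchmarked names and setup are illustrative):

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
public class PrivilegesEvaluationBenchmark {

    private PrivilegesEvaluator subject;
    private PrivilegesEvaluationContext context;

    @Setup
    public void setUp() {
        // build the evaluator and a representative request context here
    }

    @Benchmark
    public PrivilegesEvaluatorResponse evaluate() {
        // JMH measures the throughput of this method after JVM warmup
        return subject.evaluate(context);
    }
}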

@peternied (Member) commented:

> I have also started to work on the micro benchmarks as discussed in #3903. The generally accepted standard for micro benchmarks in Java is the JMH framework. However, this is licensed as GPL v2 with classpath exception: https://github.com/openjdk/jmh/blob/master/LICENSE Is the inclusion of a dependency with such a license acceptable in OpenSearch?

@cwperks Can you look into this?

@peternied (Member) left a comment

Thanks for the great work here @nibix, looking forward to seeing the micro-benchmark numbers and to seeing the tests start passing again :D

I managed to do a high-level pass over the changes, but notably did not look in depth at ActionPrivileges and FlattenedActionGroups.

Inline review comment on a new test file (@@ -0,0 +1,588 @@):

Let's move these tests under the newer integrationTest framework.

Inline review comment on public CheckTable<String, String> getCheckTable():

Let's not bleed this data type outside of the class / isn't this table another form of the 'reason' or the toString?

@cwperks (Member) commented May 30, 2024

> @cwperks Can you look into this?

We're looking into this and will get back with an answer.

@DarshitChanpura (Member) commented:

> However, this is licensed as GPL v2 with classpath exception: https://github.com/openjdk/jmh/blob/master/LICENSE Is the inclusion of a dependency with such a license acceptable in OpenSearch?

The feedback we received was that the code can be used only for internal operations. Since the JMH usage would be part of the open-source security plugin, my understanding is that this is not approved.

codecov bot commented Jun 28, 2024

Codecov Report

Attention: Patch coverage is 73.42657% with 532 lines in your changes missing coverage. Please review.

Project coverage is 67.96%. Comparing base (830b341) to head (3ed06c7).
Report is 1 commit behind head on main.

Files with missing lines Patch % Lines
...search/security/configuration/DlsFlsValveImpl.java 52.45% 72 Missing and 25 partials ⚠️
...ensearch/security/privileges/ActionPrivileges.java 84.63% 40 Missing and 25 partials ⚠️
...privileges/dlsfls/AbstractRuleBasedPrivileges.java 81.72% 33 Missing and 20 partials ⚠️
...security/configuration/DlsFlsFilterLeafReader.java 43.90% 42 Missing and 4 partials ⚠️
...g/opensearch/security/privileges/IndexPattern.java 54.25% 38 Missing and 5 partials ⚠️
...earch/security/privileges/dlsfls/FieldMasking.java 75.64% 32 Missing and 6 partials ⚠️
...ch/security/privileges/dlsfls/FieldPrivileges.java 78.98% 19 Missing and 10 partials ⚠️
...earch/security/privileges/PrivilegesEvaluator.java 71.11% 16 Missing and 10 partials ⚠️
...urity/privileges/dlsfls/FlsStoredFieldVisitor.java 44.82% 14 Missing and 2 partials ⚠️
...security/privileges/dlsfls/DocumentPrivileges.java 68.08% 15 Missing ⚠️
... and 23 more
Additional details and impacted files


@@            Coverage Diff             @@
##             main    #4380      +/-   ##
==========================================
- Coverage   67.98%   67.96%   -0.02%     
==========================================
  Files         310      325      +15     
  Lines       20924    21916     +992     
  Branches     3318     3489     +171     
==========================================
+ Hits        14225    14896     +671     
- Misses       4951     5243     +292     
- Partials     1748     1777      +29     
Files with missing lines Coverage Δ
...ty/configuration/ConfigurationLoaderSecurity7.java 70.24% <100.00%> (+0.24%) ⬆️
...ecurity/configuration/ConfigurationRepository.java 72.72% <100.00%> (ø)
...urity/configuration/PrivilegesInterceptorImpl.java 59.77% <100.00%> (+2.45%) ⬆️
...va/org/opensearch/security/configuration/Salt.java 100.00% <ø> (ø)
...ty/configuration/SecurityIndexSearcherWrapper.java 94.82% <100.00%> (+0.09%) ⬆️
...org/opensearch/security/filter/SecurityFilter.java 65.87% <100.00%> (-0.48%) ⬇️
...ch/security/privileges/PitPrivilegesEvaluator.java 96.15% <100.00%> (-0.15%) ⬇️
...urity/privileges/RestLayerPrivilegesEvaluator.java 93.75% <100.00%> (-0.37%) ⬇️
.../opensearch/security/securityconf/ConfigModel.java 100.00% <ø> (ø)
...pensearch/security/securityconf/ConfigModelV7.java 73.24% <ø> (+5.69%) ⬆️
... and 35 more

... and 10 files with indirect coverage changes

@nibix nibix force-pushed the optimized-privilege-evaluation branch from b5ff5c8 to 004df3b on July 4, 2024 09:05
@nibix (Collaborator, Author) commented Jul 4, 2024

@cwperks @peternied @DarshitChanpura @scrawfor99

Just FYI:

I worked a bit on the micro benchmarking part of this issue. As JMH was out due to its license, I reviewed other frameworks. It is remarkable that in most cases the descriptions of these frameworks say "rather use JMH instead of this framework".

Anyway, I tried out https://github.com/noconnor/JUnitPerf because the idea of reusing the JUnit infrastructure seemed nice. The big downside of JUnitPerf is that it does not work well with parameterized JUnit tests.

See here for an example:

https://github.com/opensearch-project/security/blob/004df3bbdc69514f0c95acd2a1653a01e71758b9/src/performanceTest/java/org/opensearch/security/privileges/PrivilegesEvaluatorPerformanceTest.java

The high number of very similar methods is caused by the lack of parameter support; in the end, we need to test quite a few different dimensions (like the number of indices, the number of roles, etc.) on the same operation.

As I was really keen on getting some broader results, I went down the "roll your own" path and quickly threw together some naive micro benchmarking code. This is just a temporary thing, and thus very messy, but it gives me some numbers. See here:

https://github.com/opensearch-project/security/blob/004df3bbdc69514f0c95acd2a1653a01e71758b9/src/performanceTest/java/org/opensearch/security/privileges/PrivilegesEvaluatorPeformanceTest2.java
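The core idea of such a hand-rolled harness is just a warm-up phase followed by a timed loop (a sketch of the general approach, not the exact code in the linked file; run() and the durations are placeholders):

// Warm up first, so that JIT compilation does not distort the measurement.
long warmupEnd = System.nanoTime() + TimeUnit.SECONDS.toNanos(10);
while (System.nanoTime() < warmupEnd) {
    run(); // the operation under test, e.g. one privilege evaluation
}

// Then count how many operations fit into a fixed time window.
long count = 0;
long start = System.nanoTime();
long measureEnd = start + TimeUnit.SECONDS.toNanos(30);
while (System.nanoTime() < measureEnd) {
    run();
    count++;
}
double opsPerSecond = count / ((System.nanoTime() - start) / 1_000_000_000.0);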

So, I ran some tests, and here are some preliminary results.

Micro benchmark test results

Disclaimer

Generally, the real world meaningfulness of micro benchmarks is limited. On a full real cluster, this can look totally different due to:

  • Proportion of effects to other time consuming operations
  • Effects caused by garbage collection, thread synchronization or JIT
  • Different hardware which can sustain constant CPU load much better than the consumer system I used to run the benchmarks

On the other hand, micro benchmarks make some tests so much easier. For micro benchmarking, a Metadata instance with 100000 indices can be mocked within a few seconds, whereas creating that many indices on a real cluster would take much, much longer.

Full cluster benchmarks are also coming up, but these are still in the works.

Scope

The micro benchmarks were applied to the following code:

try {
    PrivilegesEvaluationContext context = subject.createContext(
        user(user),
        requestParameters.action,
        requestParameters.getRequest(this.random),
        null,
        null
    );
    PrivilegesEvaluatorResponse response = subject.evaluate(context);
    if (!response.isAllowed()) {
        throw new RuntimeException("Assertion failed: " + response);
    }
    // ...

For comparison, we also applied the micro benchmarks to the following code on the old code base:

https://github.com/nibix/security/blob/300d138578ef853071d649d647335d8430320f14/src/performanceTest/java/org/opensearch/security/privileges/PrivilegesEvaluatorPeformanceTest2.java#L502-L510

Due to refactorings, the code looks different. However, what happens under the hood is effectively the same.

Additionally, some further code changes were necessary to make PrivilegesEvaluator independent of dependencies like ClusterService in order to make it truly unit testable/benchmarkable. I first tried to use Mockito to mock ClusterService instances, but had to learn that the performance characteristics of Mockito are so bad that it is unsuitable for micro benchmarking.
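One way such a decoupling can look (a sketch of the general idea, not necessarily the exact signature used in this PR): instead of the full ClusterService, the evaluator only needs a way to obtain the current cluster state, which a test can provide as a plain lambda, with no mocking framework involved:

// Sketch: the evaluator depends only on a ClusterState supplier.
public class PrivilegesEvaluator {
    private final Supplier<ClusterState> clusterStateSupplier;

    public PrivilegesEvaluator(Supplier<ClusterState> clusterStateSupplier) {
        this.clusterStateSupplier = clusterStateSupplier;
    }
}

// Production: new PrivilegesEvaluator(clusterService::state);
// Benchmark:  new PrivilegesEvaluator(() -> mockedStateWith100kIndices);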

As we only look at the evaluate() method, DLS/FLS evaluation is disabled for this scope.

Tested dimensions

Action requests

We tested privilege evaluation with three different actions:

  • indices:data/write/bulk[s] with BulkShardRequest
    • with 10 bulk items
    • with 1000 bulk items
  • indices:data/write/bulk with BulkRequest
  • indices:data/read/search with SearchRequest
    • with an index pattern that matches 2% of all indices (randomized)
    • with an index pattern that matches 20% of all indices (randomized)

Number of indices on cluster

We tested with these indices (a sketch of the naming scheme follows the list):

  • 10 indices: index_a0, index_a1, index_b0, index_b1, index_c0, index_c1, ... , index_e0, index_e1
  • 30 indices: index_a0, ..., index_a5, ... , index_e0, ... index_e5
  • 100 indices: index_a0, ..., index_a19, ... , index_e0, ... index_e19
  • 300 indices
  • 1000 indices
  • 3000 indices
  • 10000 indices
  • 30000 indices
  • 100000 indices
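The naming scheme is the same across all sizes; roughly (an illustrative sketch, not code from the PR):

// Five letter groups a..e with numIndices / 5 indices each, e.g.
// index_a0 ... index_e1 for 10 indices, index_a0 ... index_e19 for 100.
static List<String> indexNames(int numIndices) {
    List<String> result = new ArrayList<>();
    int perLetter = numIndices / 5;
    for (char letter = 'a'; letter <= 'e'; letter++) {
        for (int i = 0; i < perLetter; i++) {
            result.add("index_" + letter + i);
        }
    }
    return result;
}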

Different user configurations

  • A user with full privileges (using * for index_permissions and cluster_permissions)
  • A user with a single role giving CRUD permissions on index_a* and index_b*
  • A user with 20 roles giving CRUD permissions individually on index_a0, index_a1, ...
  • A user with 40 roles in total: 20 roles giving READ permissions individually on index_a0, index_a1, ... and 20 more roles giving WRITE permissions on the same indices
  • A user with a single role which uses a regex index pattern with a user attribute. This is interesting because it makes certain pre-computations impossible and requires re-evaluating the index patterns for each request (see the sketch below).
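To illustrate the last point (the attribute syntax here is illustrative): an index pattern that consists only of constants and wildcards can be resolved against the cluster's index metadata once per configuration update, while a pattern containing user attributes can only be rendered once the concrete user is known:

// Static pattern: can be pre-resolved and cached at config-update time.
String staticPattern = "index_a*";

// Dynamic pattern: depends on the current user's attributes, so it has to
// be rendered and matched against the indices again for every request.
String dynamicPattern = "index_${attr.internal.department}*";

boolean precomputable = !dynamicPattern.contains("${"); // false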

Results

The raw result data can be found here: https://docs.google.com/spreadsheets/d/1Hd6pZFICTeplXIun3UpEplANAwQDE0jXbqLnnJz61AI/edit?usp=sharing

In the charts below, dashed lines indicate the performance of the old privilege evaluation code on a particular combination of test dimensions. Solid lines with the same color indicate the performance of the new code with the same test dimensions. The x-axis represents the number of indices on the cluster, the y-axis represents the throughput in operations per second.

bulk[s], BulkShardRequest

The performance of BulkShardRequests is the most interesting factor on clusters doing heavy ingestion. A single bulk request will be broken down by the individual indices and shards, resulting in quite a few BulkShardRequests for which privilege evaluation needs to be done in parallel; thus, the performance characteristics here have a high impact.

The privilege evaluation for the top level BulkRequest is less interesting because it is just an index-independent cluster privilege, which is easy to evaluate. Still, we will also review this below.

Requests with 10 items

chart

Requests with 1000 items

chart(1)

Observation

The performance of the old code degrades with an increasing number of indices. Starting with 30000 indices, we see method call latencies of more than 10 ms. This is where users of ingestion-heavy clusters often start to experience performance issues and the method calls start to show up in the hot thread dumps.

In contrast, the throughput of the new code stays constant, independent of the number of indices. It can be seen that the number of roles still has quite an effect on the throughput. But here we are talking about time differences below 0.1 ms, which should not be significant in a real-world cluster.
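The constant scaling comes from moving work from request time to configuration-update time. As a rough sketch of the idea (the data structure and method names are illustrative, not the exact internals of ActionPrivileges):

// Precomputed whenever roles, action groups or indices change:
// action -> index -> roles that grant this action on this index.
Map<String, Map<String, Set<String>>> actionToIndexToRoles = precompute(roles, actionGroups, indexMetadata);

// At request time, only the indices named in the request are touched, so
// the total number of indices on the cluster no longer matters.
boolean isAllowed(String action, String index, Set<String> userRoles) {
    Set<String> grantingRoles = actionToIndexToRoles
        .getOrDefault(action, Map.of())
        .getOrDefault(index, Set.of());
    return !Collections.disjoint(grantingRoles, userRoles);
}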

bulk, BulkRequest

The top level bulk action is a cluster action, so it does not require considering the indices on a cluster.

chart(3)

Observation

As expected, performance is independent of number of indices, both on the new implementation and on the old implementation. However, the new implementation improves throughput by a factor between 2 and 3.

search, SearchRequest

Search operations become interesting when there are monitoring/alerting solutions issuing search requests on broad index patterns in short time intervals.

Search with search patterns that match 2% of the indices

chart(4)

Search with search patterns that match 20% of the indices

chart(5)

Observation

Both the old and the new code degrade with a growing number of indices. Profiling shows that this is mostly not due to privilege evaluation, but due to the index pattern expression resolution.

However, the new code keeps method call latencies below 20 ms even on clusters with 100000 indices, while the old code takes up to 5 seconds for a single method call on clusters with 100000 indices.
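The reason the pattern resolution dominates is that a wildcard expression fundamentally has to be matched against every concrete index name on the cluster; sketched below (simplified logic, though Regex.simpleMatch is the real OpenSearch helper):

// The cost of resolving one wildcard expression grows linearly with the
// number of indices, independently of how privileges are evaluated.
static List<String> resolve(String expression, Collection<String> allIndices) {
    List<String> matching = new ArrayList<>();
    for (String index : allIndices) { // O(number of indices)
        if (Regex.simpleMatch(expression, index)) {
            matching.add(index);
        }
    }
    return matching;
}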

See the following chart for a zoomed in section of the 2% of indices case for 10000-100000 indices:

chart

@nibix nibix force-pushed the optimized-privilege-evaluation branch 2 times, most recently from 30fcc0f to 41dd986 on July 16, 2024 09:30
@nibix nibix force-pushed the optimized-privilege-evaluation branch from 7adb281 to 504a157 on July 17, 2024 18:14
@cwperks (Member) commented Jul 17, 2024

@nibix thank you for the update. The results for bulk indexing in a cluster with a large number of indices are great to see, and I like how the test cases isolate privilege evaluation for performance testing. Is there any work remaining before labeling this PR ready for review?

@nibix (Collaborator, Author) commented Jul 17, 2024

@cwperks

> Is there any work remaining before labeling this PR ready for review?

Yes. Actually, I was also about to ping you regarding this.

Note: This leaves the DLS/FLS implementation unchanged for the moment. Thus, we will still have a part of the problematic performance characteristics. However, I would still like to get this merged and then add DLS/FLS support in a further PR, to keep things (relatively) small.

@nibix nibix force-pushed the optimized-privilege-evaluation branch 2 times, most recently from b30b530 to 78db68b on August 2, 2024 10:17
@nibix nibix force-pushed the optimized-privilege-evaluation branch 2 times, most recently from fca7058 to fcc6183 on September 2, 2024 07:45
@nibix (Collaborator, Author) commented Sep 2, 2024

Benchmark results

After having shared the micro benchmarks before, I can now share results of benchmarks performed on an actual cluster. This brings the following benefits:

  • It puts the performance gains into the perspective of the performance characteristics of the whole software. This way, it becomes apparent whether the performance gains actually have significance or whether they just "vanish" behind other dominating performance characteristics.
  • We can also test the performance gains for DLS. With micro benchmarks, this is not possible, as DLS is not implemented as an isolated piece of code, but is performed in a distributed fashion across the cluster during a search operation.

Disclaimer

The usual benchmarking disclaimer: Of course, these figures can only represent very specific scenarios. Other scenarios can look quite different, as there are so many variables that can differ. Yet, especially the "slow parts" of the benchmark results can give an impression of where real performance issues lie.

Test environment

We ran the tests on an OpenSearch cluster hosted using Kubernetes. The host machine for the K8S nodes was a physical server with an AMD EPYC 7401P CPU. We ran an OpenSearch cluster with 3 nodes, each node had 64 GB of RAM, 32 GB of Java Heap, and a cpu: 8 limit configured in K8S. The version of OpenSearch was a recent snapshot of main.

Tested dimensions

Operations

We benchmarked the following operations:

  • Ingestion with the REST _bulk API into a single index with 8 parallel clients
    • with 10 bulk items per request
    • with 1000 bulk items per request
  • Search requests with the REST _search API with 16 parallel clients
    • on exactly one index
    • on an index expression matching 2% of the indices on the cluster
    • on an index expression matching 20% of the indices on the cluster

Number of indices on cluster

We tested with these indices:

  • 10 indices: index_a0, index_a1, index_b0, index_b1, index_c0, index_c1, ... , index_e0, index_e1
  • 30 indices: index_a0, ..., index_a5, ... , index_e0, ... index_e5
  • 100 indices: index_a0, ..., index_a19, ... , index_e0, ... index_e19
  • 300 indices
  • 1000 indices
  • 3000 indices
  • 10000 indices

User configurations

We tested with differently configured users to find out about the effects of complexity of roles, DLS rules, etc. The users were:

  • A user identified by the super admin certificate. Using this certificate, most of the privilege evaluation code is bypassed. This gives us a rough "upper limit" for the possible performance.
  • A user with full privileges (using * for index_permissions and cluster_permissions)
  • A user with a single role giving CRUD permissions on index_a* and index_b*
  • A user with 20 roles giving CRUD permissions individually on index_a0, index_a1, ...
  • A user with 40 roles in total: 20 roles giving READ permissions individually on index_a0, index_a1, ... and 20 more roles giving WRITE permissions on the same indices
  • A user with a single role giving CRUD permissions on index_a* and index_b* and additionally a simple DLS query

Test results

A commented summary of the results follows in the upcoming sections.

The raw test results can be also reviewed at https://docs.google.com/spreadsheets/d/16VRr9B2bPTyyS_T-IZobUG3C0uJxnXRxlbmsdH_MeCg/edit?usp=sharing

Indexing throughput

Bulk size 10

chart(7)

In this and the following charts, the dashed lines represent OpenSearch with the standard security plugin. The solid lines represent OpenSearch with the security plugin using the optimized privilege evaluation code.

The blue line represents requests authenticated with the super admin certificate, which bypasses most of the privilege evaluation. Thus, the blue line forms a kind of "hull curve"; it can be seen as a rough theoretical maximum from a security plugin POV.

The green lines represent users with full privileges. Yellow lines represent users with limited privileges - the darker the yellow, the more complex the role configuration.

The benchmark results for the standard security plugin show a clear performance decline starting at about 300 indices on the cluster. This is caused by the privilege evaluation code resolving the index patterns in the roles configuration against all cluster indices for each request. At 1000 indices, we only get roughly 60% of the throughput that was observed for 10 indices.

Additionally, it can be seen that growing role complexity has a clear effect on throughput: more complex roles show significantly lower throughput.

On the other hand, the optimized security plugin shows a mostly constant throughput, which is independent of the number of indices. There is a slight decline starting from 3000 indices; the reasons for this are unknown so far. Yet, these values are still well above those of the standard plugin.

Bulk size 1000

chart(8)

Larger bulk sizes shift the point where the gap between standard and optimized plugin performance opens up a bit to the right. Here, we see a strong performance decline in the standard security plugin starting from 1000 indices.

The optimized security plugin exhibits better performance in all cases, though. Additionally, the performance decline that was visible in the bulk size 10 case is no longer really visible here.

The charts show a low throughput for the full privileges user in the 10 indices case, both for the standard plugin and for the optimized plugin. However, I would guess that this is an artifact of the benchmarking process used and not a real performance characteristic.

Search throughput

Search on single index

chart(9)

The search throughput graphs introduce one further user: the purple line represents users who have a document-level access restriction implemented with a DLS query in one role.

As with bulk indexing, the standard plugin shows declining performance with an increasing number of indices on the cluster. Here, too, the complexity of the roles configuration has a very significant effect on throughput. Especially the user with DLS exhibits heavy performance degradation: with 300 indices, the DLS user achieves less than half the throughput of the full privileges user on the standard plugin.

On the other hand, the optimized plugin shows mostly constant performance characteristics, independent of the number of indices. The DLS user also shows no significant performance degradation.

Search on 2% of indices

chart(10)

When an index pattern comes into play, both the standard plugin and the optimized plugin show a performance degradation with a growing number of indices. This is due to the necessary resolution of the index pattern against all indices on the cluster.

The blue line - the super admin user - shows that there is quite a gap (about 20%, growing with a higher number of indices) between the theoretically possible throughput and the optimized plugin. This is likely due to the code in https://github.com/nibix/security/blob/main/src/main/java/org/opensearch/security/resolver/IndexResolverReplacer.java, which still leaves some room for improvement.

Still, the optimized plugin delivers a clearly higher throughput in all cases.

Especially the DLS performance has improved strongly. For 1000 indices, we see a throughput of 1035 operations per second on the optimized plugin, while the standard plugin manages just 53 operations per second with a service time of 300 ms per request. With 10000 indices, DLS becomes virtually unusable on the standard plugin, with just 0.6 operations per second and a service time of 16 seconds. The optimized plugin still delivers 99 operations per second.

Search on 20% of indices

chart(11)

With a search operation on 20% of the indices, the index resolution performance becomes so dominant that significant performance gains are no longer visible - except for DLS, which still shows very strong improvements. Using DLS with the standard plugin delivers a throughput of just 7 operations per second already at 1000 indices (service time: 2.2 seconds). The optimized plugin still delivers a throughput of 113 operations per second (service time: 176 ms).

The blue line - the admin cert user - shows that there is still room for improvement, though.

@DarshitChanpura (Member) commented:

@nibix The results from the benchmarks are very promising. Should the index pattern resolver be optimized as a follow-up?

@nibix nibix force-pushed the optimized-privilege-evaluation branch from b479d36 to 06af405 on October 2, 2024 22:10