
Store multiple policies in the index #109

Merged 2 commits into main from multiple-policies on Oct 14, 2024
Conversation

@adam-cattermole (Member) commented Oct 11, 2024

Prior to making more substantial changes to the configuration, I've quickly made the change to store multiple policies for each hostname:

  • We now iterate over each "policy" at the hostname of the index, finding the first one where the "rules" apply (a sketch follows this list).
  • Updated the log ordering in tests: we now debug-log while checking the conditions, before knowing we've selected a policy, so the "policy selected" statement comes after.
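
As a rough illustration of that selection pass, here is a minimal sketch with hypothetical types (Request, Rule, Policy, and select_policy are illustrative names, not the shim's actual API):

    // Minimal sketch of the lookup described above. All names here
    // (Request, Rule, Policy, select_policy) are illustrative; they
    // are not the shim's actual types.
    struct Request {
        url_path: String,
        method: String,
    }

    // A rule applies when all of its conditions hold for the request.
    struct Rule {
        conditions: Vec<Box<dyn Fn(&Request) -> bool>>,
    }

    impl Rule {
        fn applies(&self, req: &Request) -> bool {
            self.conditions.iter().all(|c| c(req))
        }
    }

    struct Policy {
        name: String,
        rules: Vec<Rule>,
    }

    // Scan the policies stored under the matched hostname, in config
    // order, and return the first one with a rule that applies.
    fn select_policy<'a>(policies: &'a [Policy], req: &Request) -> Option<&'a Policy> {
        policies
            .iter()
            .find(|p| p.rules.iter().any(|r| r.applies(req)))
    }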

Signed-off-by: Adam Cattermole <[email protected]>
@eguzki (Contributor) commented Oct 11, 2024

For the record, I want to write here that I would not follow the path opened by this PR. Let me try my (last) bullet, and if any of you still do not buy it, I will accept it and approve this.

TL;DR: let's not reinvent the wheel here.

For routing, Envoy performs a task similar to what the wasm shim needs to do: it has a set of routes indexed by domains, plus matchers to select one route out of many for a given domain. This is what Envoy calls a VirtualHost, and each virtual host object has a list of domains.

virtual_hosts:
  - name: gateway-system/kuadrant-ingressgateway/http/api_toystore_com
    domains:
      - api.toystore.com

It turns out that the domains field, as explained in their docs:

The longest wildcards match first. Only a single virtual host in the entire route configuration can match on *. A domain must be unique across all virtual hosts or the config will fail to load.

And, more importantly, the domain search order is as follows (also from the docs):

  • Exact domain names: www.foo.com.
  • Suffix domain wildcards: *.foo.com or *-bar.foo.com.
  • Prefix domain wildcards: foo.* or foo-*.
  • Special wildcard * matching any domain.

In conclusion: the order of virtual_hosts does not matter.

What are we doing in this PR? We are making the order of policies (the virtual hosts of our Wasm configuration) matter. Moreover, we are making the configuration far more complicated to understand by allowing multiple "policies" under the same domain. It is totally counter-intuitive IMO. And Wasm policy rules are meant to solve that problem precisely. Why not use them?

I am advocating for following Envoy's approach. First, I would implement, in Wasm configuration terms:

A hostname must be unique across all policies or the config will fail to load.

Currently, the Wasm-shim only implements "The longest wildcards match first"; I was very careful to have that functionality from the very beginning. But we should also replace the trie implementation with something smarter that covers the full search order (see the sketch after this list):

  • Exact domain names: www.foo.com.
  • Suffix domain wildcards: *.foo.com or *-bar.foo.com.
  • Prefix domain wildcards: foo.* or foo-*.
  • Special wildcard * matching any domain.
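
To make that concrete, here is a minimal sketch of that search order over a flat list of domain patterns; find_virtual_host and the flat scan are illustrative assumptions, not the shim's trie:

    // Sketch of Envoy's documented domain search order: exact match first,
    // then longest suffix wildcard, then longest prefix wildcard, then the
    // special `*`. A flat scan for clarity; the shim would keep a trie.
    fn find_virtual_host<'a>(domains: &'a [String], host: &str) -> Option<&'a String> {
        // 1. Exact domain names: www.foo.com.
        if let Some(d) = domains.iter().find(|d| d.as_str() == host) {
            return Some(d);
        }
        // 2. Suffix domain wildcards: *.foo.com or *-bar.foo.com (longest wins).
        if let Some(d) = domains
            .iter()
            .filter(|d| d.len() > 1 && d.starts_with('*') && host.ends_with(&d[1..]))
            .max_by_key(|d| d.len())
        {
            return Some(d);
        }
        // 3. Prefix domain wildcards: foo.* or foo-* (longest wins).
        if let Some(d) = domains
            .iter()
            .filter(|d| d.len() > 1 && d.ends_with('*') && host.starts_with(&d[..d.len() - 1]))
            .max_by_key(|d| d.len())
        {
            return Some(d);
        }
        // 4. Special wildcard * matching any domain.
        domains.iter().find(|d| d.as_str() == "*")
    }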

So, instead of allowing the wasm configuration enabled by this PR, which is something like this:

      policies:
      - hostnames:
        - '*'
        name: gateway-system/gw
        rules:
        - actions:
          - data:
            - static:
                key: limit.limit1__e3f1f251
                value: "1"
            extension: limitador
            scope: gateway-system/A
          conditions:
          - allOf:
            - operator: eq
              selector: request.url_path
              value: /toys
            - operator: eq
              selector: request.method
              value: GET
      - hostnames:
        - '*'
        name: gateway-system/gw
        rules:
        - actions:
          - data:
            - static:
                key: limit.limit1__e3f1f251
                value: "1"
            extension: limitador
            scope: gateway-system/B
          conditions:
          - allOf:
            - operator: startswith
              selector: request.url_path
              value: /
            - operator: eq
              selector: request.method
              value: GET

I suggest having something like:

      policies:
      - hostnames:
        - '*'
        name: policyA
        rules:
        - name: ruleA
          actions:
          - data:
            - static:
                key: limit.limit1__e3f1f251
                value: "1"
            extension: limitador
            scope: gateway-system/A
          conditions:
          - allOf:
            - operator: eq
              selector: request.url_path
              value: /toys
            - operator: eq
              selector: request.method
              value: GET
        - name: ruleB
          actions:
          - data:
            - static:
                key: limit.limit1__e3f1f251
                value: "1"
            extension: limitador
            scope: gateway-system/B
          conditions:
          - allOf:
            - operator: startswith
              selector: request.url_path
              value: /
            - operator: eq
              selector: request.method
              value: GET

If the issue is tracking which policy and rule was activated, we can easily handle that by adding a name field to the rules, as in the example above. Let's not create one "so-called policy" per listener-HTTPRouteRule pair. Kuadrant's control plane can write HTTPRouteRules for a given listener in the corresponding policy/rule pair, and still follow the same order of preference defined by the Gateway API.

cc @alexsnaps

@guicassolato

My 2 cents:

I agree that the order of the configs should not matter. The rest, I honestly (and deeply respectfully) believe, is conflating an API concern with an implementation detail.

Indexing "policies" by hostname does not mean the config has to require unique hostnames. I can see how that could be important for one who writes an Envoy config, where possibly the mental model of the topology needs to fit the uniqueness constraint of the domain names. Arguably, that could be because Envoy, in this case, has decided to leak an implementation decision (indexation) to its API, thus shaping the way its users have to write configs.

The wasm shim is not Envoy, though. In the whole scope of Kuadrant, users are not expected to write wasm configs themselves. Rather, they are invited to think in terms of Gateway API resources. How those resources are organised will dictate the mental model users establish to reason about the state of things and how requests will be handled.

Even without writing the wasm configs themselves, and perhaps even more so because of that, having an API that reflects that mental model is IMO a good thing. In fact, I even wonder (again, in a very respectful manner) why gateways in general wouldn't give this idea a shot.

@alexsnaps (Member) commented Oct 12, 2024

I agree that the order of the configs should not matter. The rest I honestly (and deeply respectfully) believe is conflating API concern with implementation detail.

I think we had agreed that the order would matter and be ordered by the Kuadrant Operator. But these are NOT virtual hosts, though; the things being ordered are the HTTPRouteRule HTTPRouteMatches, which happen to also have a hostname associated. At this stage, I'll be honest, I'm getting slightly frustrated. We are running in circles and not getting to any working software.

The order that matters in this PR isn't the hostname's, but the "Rule"'s; that's what is ordered. This PR fixes the issue where, when two "so-called policies" have the same hostname, the index would keep only the last one seen instead of appending each of them (in the config's order, as we had decided). Then, when looking up policies based on the hostname, it searches for the first policy that matches given the other attributes of the request. Again, as we had discussed.
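
For illustration, a minimal sketch of that append-instead-of-replace indexing, assuming hypothetical names (PolicyIndex, add; not the shim's actual code):

    use std::collections::HashMap;

    // Illustrative stand-in for the shim's policy type.
    struct Policy {
        name: String,
    }

    struct PolicyIndex {
        by_hostname: HashMap<String, Vec<Policy>>,
    }

    impl PolicyIndex {
        // Append (rather than replace) so that two policies sharing a
        // hostname both survive, in config order; lookups then scan the
        // Vec and pick the first policy whose rules match the request.
        fn add(&mut self, hostname: &str, policy: Policy) {
            self.by_hostname
                .entry(hostname.to_string())
                .or_default()
                .push(policy);
        }
    }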

So someone tells me where this diverges from what was agreed upon. If now there are better ways to achieve the same, which I'm sure there is, as I consider this config utterly convoluted for reasons that I fail to see... sure! But in the meantime this is working software!

@eguzki (Contributor) left a comment

That being said, LGTM

@guicassolato commented Oct 12, 2024

I agree that the order of the configs should not matter.
I think we had agreed that the order would matter and be ordered by the Kuadrant Operator.

Sorry, @alexsnaps. I wasn't clear in my comment before. I do not want to change anything here.

My comment was more in the general sense that, as a principle, maybe in a perfect world, users should not have to care about the order of their configs; and neither will they, while something else does that for them, whether it's the wasm module or the Kuadrant Operator. And yes, we agreed it would be the latter, for reasons already discussed (e.g. identical sources of truth whose precedence can only be known at the level of the operator that handles them). In fact, I've already started working on the part that sorts policies first by hostname.

But as you rightfully pointed out, it's the ordering by full matching rules (not only hostnames) that matters. I only focused on hostnames before because of the current structure of the config. For a better discussion about other ways to structure the config, we have #104 and #108.

conflating API concern with implementation detail

That was in reference to the criticism raised against allowing multiple policies per hostname, not to the PR itself, which I totally support.

@adam-cattermole (Member, Author) commented Oct 14, 2024

I think we're in agreement that this is moving in the direction we had decided (albeit as a patch on the old, convoluted config rather than a new one), so I'm going to merge, and we can continue to discuss and iterate on the potential solutions in #104 and #108.

@adam-cattermole adam-cattermole merged commit 6b196bf into main Oct 14, 2024
8 checks passed
@adam-cattermole adam-cattermole deleted the multiple-policies branch October 14, 2024 07:09