
support xdp auth with tail call #869

Closed · wants to merge 6 commits

Conversation

@weli-l weli-l commented Sep 19, 2024

What type of PR is this?

What this PR does / why we need it:

Which issue(s) this PR fixes:
Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?:


@kmesh-bot (Collaborator)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign kevin-wangzefeng for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@weli-l weli-l force-pushed the dev/xdp_auth_2 branch 2 times, most recently from 74e8ab6 to ddcae11 Compare September 19, 2024 07:44
@@ -22,6 +23,25 @@ struct {
__uint(max_entries, MAP_SIZE_OF_AUTH_POLICY);
} map_of_authz SEC(".maps");

struct match_result {
Collaborator:

Is this the same content as PR #863? Can't we put them together?

Contributor Author:

Is this the same content as PR #863? Can't we put them together?

PR #863 is relatively large and contains the tail call logic of this PR. We can merge this PR first and then close #863.

Collaborator:

Then you may need to add the /hold label to #863.


codecov bot commented Sep 19, 2024

Codecov Report

Attention: Patch coverage is 60.00000% with 4 lines in your changes missing coverage. Please review.

Project coverage is 54.43%. Comparing base (0a7735d) to head (aa98335).
Report is 281 commits behind head on main.

Files with missing lines | Patch % | Lines
pkg/bpf/workload/xdp.go | 60.00% | 2 Missing and 2 partials ⚠️

Files with missing lines | Coverage | Δ
pkg/bpf/workload/xdp.go | 55.38% <60.00%> | (ø)

... and 8 files with indirect coverage changes


Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@hzxuzhonghu (Member)

I want to know why we need a tail call here.

@weli-l (Contributor Author)

weli-l commented Sep 20, 2024

I want to know why we need a tail call here.

Because the subsequent authentication rules will support srcip, dstip, namespace, etc. Without tail calls, the eBPF program grows too large (up to 1,000,000 bytes) and fails the verifier's checks.

@hzxuzhonghu
Copy link
Member

got it

struct match_result {
__u32 action;
__u32 match_res;
__u16 dport;
Contributor:

Is dport needed? It can be obtained from the "match" element.

Contributor Author:

Is dport needed? It can be obtained from the "match" element.

We need to know whether the packet is IPv4 or IPv6. If we do not pass dport, we would need to pass xdp_info instead to determine whether it is v4 or v6.

@@ -22,6 +23,25 @@ struct {
__uint(max_entries, MAP_SIZE_OF_AUTH_POLICY);
} map_of_authz SEC(".maps");

struct match_result {
Contributor:

Suggested change
struct match_result {
struct match_ctx {

__uint(key_size, sizeof(__u32));
__uint(value_size, sizeof(struct match_result));
__uint(max_entries, 1);
} map_of_t_data SEC(".maps");
Contributor:

map_of_tailcall_info

Contributor Author:

map_of_tailcall_info

That name could also be truncated (to map_of_tailcall), so I changed it to tailcall_info_map.

struct match_result *res;

res = bpf_map_lookup_elem(&map_of_t_data, &key);
if (!res) {
Contributor:

When a packet arrives for the first time, won't the lookup fail to find a record?

{
struct match_result *res;
__u32 key = 0;
Contributor:

If the key is always 0, there is a concurrency problem.

__uint(value_size, sizeof(__u32));
__uint(max_entries, MAP_SIZE_OF_TAIL_CALL_PROG);
__uint(map_flags, 0);
} map_of_tail_call_prog_for_xdp SEC(".maps");
Contributor:

The map name is too long and will be truncated when using bpftool.

Contributor Author:

The map name is too long and will be truncated when using bpftool.

Changed to xdp_tailcall_map.

if (xdp_rbac_manage(&info, &tuple_info) == AUTH_DENY) {
return xdp_deny_packet(&info, &tuple_info);
}
// tail call will be executed in it, so there will be no return value, ignore it.
Contributor:

Line 196 also needs to be handled.

@supercharge-xsy (Contributor)

It is recommended that this modification be merged after version 0.5 is released.

{
__u32 matchResult;
__u32 key = bpf_get_smp_processor_id();
struct match_result res;
Contributor:

match_ctx

@weli-l weli-l force-pushed the dev/xdp_auth_2 branch 3 times, most recently from 5656126 to 56f179a Compare October 10, 2024 08:09
Signed-off-by: weli-l <[email protected]>
@@ -22,6 +23,22 @@ struct {
__uint(max_entries, MAP_SIZE_OF_AUTH_POLICY);
} map_of_authz SEC(".maps");

struct match_ctx {
__u32 action;
Member:

please add some comments

@hzxuzhonghu (Member) left a comment:

My understanding is that auth runs every time a packet is received, without considering whether it was already done for the same connection, right?


if (match->n_destination_ports == 0 && match->n_not_destination_ports == 0) {
return MATCHED;
bpf_tail_call(ctx, &xdp_tailcall_map, TAIL_CALL_AUTH_IN_USER_SPACE);
return (res->action == AUTH_DENY) ? XDP_PASS : XDP_DROP;
Member:

Should this be the opposite?

Member:

If bpf_tail_call succeeds, execution never reaches this return.

Contributor:

The map record needs to be deleted if the tail call fails; the same applies to the other tail call sites.

int ret;

if (get_tuple_key(ctx, &tuple_key, &info) != PARSER_SUCC) {
BPF_LOG(ERR, AUTH, "Failed to get tuple key\n");
Member:

nit: \n is not needed

}

SEC("xdp_auth")
int matchDstPorts(struct xdp_md *ctx)
Member:

Follow the C naming convention:

Suggested change
int matchDstPorts(struct xdp_md *ctx)
int match_dst_port(struct xdp_md *ctx)

Member:

How do you make sure this prog is not triggered every time a packet is received?

// todo: add other match types
matchResult = matchDstPorts(match, info, tuple_info);
return matchResult;
ret = bpf_map_update_elem(&tailcall_info_map, &tuple_key, &res, BPF_ANY);
Member:

If multiple policies are set and multiple packets are received concurrently, isn't this map overwritten with a random policy?

Signed-off-by: weli-l <[email protected]>
__uint(type, BPF_MAP_TYPE_HASH);
__uint(key_size, sizeof(struct bpf_sock_tuple));
__uint(value_size, sizeof(struct match_ctx));
__uint(max_entries, 256);
Contributor:

Are 256 entries sufficient under a large number of concurrent requests?


// If auth denied, it still returns XDP_PASS here, so next time when a client package is
// sent to server, it will be shutdown since server's RST has been set
return XDP_PASS;
Contributor:

Execution does not reach here, so the code does not match the comment.

@@ -159,18 +111,18 @@ static inline int match_workload_policy(struct xdp_info *info, struct bpf_sock_t
if (!policy) {
continue;
}
if (do_auth(policy, info, tuple_info) == AUTH_DENY) {
BPF_LOG(ERR, AUTH, "policy %u manage result deny\n", policyId);
if (do_auth(ctx, policy) == AUTH_DENY) {
Contributor:

Since the tail_call mechanism is used, there is no return value here.

@@ -194,7 +317,7 @@ do_auth(Istio__Security__Authorization *policy, struct xdp_info *info, struct bp
if (!rule) {
continue;
}
if (rule_match_check(rule, info, tuple_info) == MATCHED) {
if (rule_match_check(ctx, rule, policy->action) == MATCHED) {
Contributor:

MATCHED should not be returned here. If the tail call succeeds, its result is used; if it cannot be handled (the configuration is abnormal), bypass the socket directly.

BPF_LOG(ERR, AUTH, "Failed to get tuple key\n");
return XDP_ABORTED;
}
res.match = match;
Contributor:

Suggested change
res.match = match;
res.match = match_info;

struct xdp_info info = {0};
int ret;

if (get_tuple_key(ctx, &tuple_key, &info) != PARSER_SUCC) {
Contributor:

Suggested change
if (get_tuple_key(ctx, &tuple_key, &info) != PARSER_SUCC) {
if (construct_tuple_key(ctx, &tuple_key, &info) != PARSER_SUCC) {

BPF_LOG(ERR, AUTH, "Failed to update map, error: %d\n", ret);
return XDP_DROP;
}
bpf_tail_call(ctx, &xdp_tailcall_map, TAIL_CALL_PORT_MATCH);
Contributor:

If the tail call fails, delete the tailcall_info_map record.


if (match->n_destination_ports == 0 && match->n_not_destination_ports == 0) {
return MATCHED;
bpf_tail_call(ctx, &xdp_tailcall_map, TAIL_CALL_AUTH_IN_USER_SPACE);
return (res->action == AUTH_DENY) ? XDP_PASS : XDP_DROP;
Contributor:

The map record needs to be deleted if the tail call fails; the same applies to the other tail call sites.

}

if (match->n_not_destination_ports != 0) {
notPorts = kmesh_get_ptr_val(match->not_destination_ports);
if (!notPorts) {
return UNMATCHED;
return (res->action == AUTH_DENY) ? XDP_PASS : XDP_DROP;
Contributor:

If the configuration is abnormal, it is more reasonable to directly bypass the service.


if (parser_xdp_info(ctx, &info) == PARSER_FAILED)
return XDP_PASS;
if (info.iph->version != 4 && info.iph->version != 6)
Member:

I think the same judgment logic is already included in parser_xdp_info?

}

ports = kmesh_get_ptr_val(match->destination_ports);
if (!ports) {
return UNMATCHED;
return (res->action == AUTH_DENY) ? XDP_PASS : XDP_DROP;
Member:

Suggested change
return (res->action == AUTH_DENY) ? XDP_PASS : XDP_DROP;
return (res->action == AUTH_DENY) ? XDP_DROP : XDP_PASS;

?

if ((void *)(info->ethh + 1) > end)
return PARSER_FAILED;

// ip4|ip6 header
Member:

Suggested change
// ip4|ip6 header
// ipv4|ipv6 header

@weli-l weli-l closed this Nov 19, 2024
6 participants