From 76ab3cd2533ee048a55699708581d272c6006171 Mon Sep 17 00:00:00 2001 From: Billy McFall <22157057+Billy99@users.noreply.github.com> Date: Fri, 21 Jun 2024 11:47:01 -0400 Subject: [PATCH 1/5] docs: convert notes Currently, when something is noted, something like `> ***Note:*** ...` is used, which bold the word `Note:` and shifts the text over with | displayed. We already have a markdown_extension that will display a box around the note, so convert all the notes to use `!!! Note` to improve the display of notes. Signed-off-by: Billy McFall <22157057+Billy99@users.noreply.github.com> --- docs/developer-guide/develop-operator.md | 17 ++++++++------- docs/developer-guide/documentation.md | 17 +++++++++------ docs/developer-guide/operator-quick-start.md | 13 ++++++----- docs/developer-guide/xdp-overview.md | 17 ++++++++------- docs/getting-started/building-bpfman.md | 23 ++++++++++---------- docs/getting-started/launching-bpfman.md | 7 +++--- docs/getting-started/running-release.md | 9 ++++---- docs/getting-started/running-rpm.md | 17 +++++++++------ 8 files changed, 66 insertions(+), 54 deletions(-) diff --git a/docs/developer-guide/develop-operator.md b/docs/developer-guide/develop-operator.md index e75c69999..8be668c50 100644 --- a/docs/developer-guide/develop-operator.md +++ b/docs/developer-guide/develop-operator.md @@ -167,14 +167,15 @@ When editing follow best practices describe in [Proto Best Practices](https://protobuf.dev/programming-guides/dos-donts/). -**Note:** `cargo xtask build-proto` also pulls in -[proto/csi.proto](https://github.com/bpfman/bpfman/blob/main/proto/csi.proto) (which is in the -same directory as -[proto/bpfman.proto](https://github.com/bpfman/bpfman/blob/main/proto/bpfman.proto)). -[proto/csi.proto](https://github.com/bpfman/bpfman/blob/main/proto/csi.proto) is taken from -[container-storage-interface/spec/csi.proto](https://github.com/container-storage-interface/spec/blob/master/csi.proto). -See [container-storage-interface/spec/spec.md](https://github.com/container-storage-interface/spec/blob/master/spec.md) -for more details. +!!! Note + `cargo xtask build-proto` also pulls in + [proto/csi.proto](https://github.com/bpfman/bpfman/blob/main/proto/csi.proto) (which is in the + same directory as + [proto/bpfman.proto](https://github.com/bpfman/bpfman/blob/main/proto/bpfman.proto)). + [proto/csi.proto](https://github.com/bpfman/bpfman/blob/main/proto/csi.proto) is taken from + [container-storage-interface/spec/csi.proto](https://github.com/container-storage-interface/spec/blob/master/csi.proto). + See [container-storage-interface/spec/spec.md](https://github.com/container-storage-interface/spec/blob/master/spec.md) + for more details. ### Generated Files diff --git a/docs/developer-guide/documentation.md b/docs/developer-guide/documentation.md index 297f995fb..46fbc5721 100644 --- a/docs/developer-guide/documentation.md +++ b/docs/developer-guide/documentation.md @@ -40,9 +40,10 @@ This indicates to `mkdocs` to pull the additional file from the project root dir For example: [docs/governance/MEETINGS.md](https://github.com/bpfman/bpfman/blob/main/docs/governance/MEETINGS.md) -> **NOTE:** This works for the website generation, but if a Markdown file is viewed through - Github (not the website), the link is broken. - So these files should only be linked from `docs/index.md` and `mkdocs.yml`. +!!! Note + This works for the website generation, but if a Markdown file is viewed through + Github (not the website), the link is broken. 
+    So these files should only be linked from `docs/index.md` and `mkdocs.yml`.
 
 ### docs/developer-guide/api-spec.md
 
@@ -63,8 +64,9 @@ cd bpfman/
 mkdocs build
 ```
 
->**NOTE:** If `mkdocs build` gives you an error, make sure you have the mkdocs
-packages listed below installed.
+!!! Note
+    If `mkdocs build` gives you an error, make sure you have the mkdocs
+    packages listed below installed.
 
 To preview from a build on a local machine, start the mkdocs dev-server with the command below,
 then open up `http://127.0.0.1:8000/` in your browser, and you'll see the default home page
@@ -97,8 +99,9 @@ mkdocs -V
 mkdocs, version 1.4.3 from /home/$USER/.local/lib/python3.11/site-packages/mkdocs (Python 3.11)
 ```
 
->**NOTE:** If you have an older version of mkdocs installed, you may need to use
-the `--upgrade` option (e.g., `pip install --upgrade mkdocs`) to get it to work.
+!!! Note
+    If you have an older version of mkdocs installed, you may need to use
+    the `--upgrade` option (e.g., `pip install --upgrade mkdocs`) to get it to work.
 
 ## Document Images
 
diff --git a/docs/developer-guide/operator-quick-start.md b/docs/developer-guide/operator-quick-start.md
index 53d473082..8bc3d1c34 100644
--- a/docs/developer-guide/operator-quick-start.md
+++ b/docs/developer-guide/operator-quick-start.md
@@ -23,12 +23,13 @@ cd bpfman/bpfman-operator
 make run-on-kind
 ```
 
->> **NOTE:** By default, bpfman-operator deploys bpfman with CSI enabled.
-CSI requires Kubernetes v1.26 due to a PR
-([kubernetes/kubernetes#112597](https://github.com/kubernetes/kubernetes/pull/112597))
-that addresses a gRPC Protocol Error that was seen in the CSI client code and it doesn't appear to have
-been backported.
-It is recommended to install kind v0.20.0 or later.
+!!! Note
+    By default, bpfman-operator deploys bpfman with CSI enabled.
+    CSI requires Kubernetes v1.26 due to a PR
+    ([kubernetes/kubernetes#112597](https://github.com/kubernetes/kubernetes/pull/112597))
+    that addresses a gRPC Protocol Error that was seen in the CSI client code and it doesn't
+    appear to have been backported.
+    It is recommended to install kind v0.20.0 or later.
 
 ### Deploy To Openshift Cluster
 
diff --git a/docs/developer-guide/xdp-overview.md b/docs/developer-guide/xdp-overview.md
index d7b2a9fef..30a226209 100644
--- a/docs/developer-guide/xdp-overview.md
+++ b/docs/developer-guide/xdp-overview.md
@@ -17,14 +17,15 @@ XDP programs on a given interface.
 This tutorial will show you how to use `bpfman` to load multiple XDP programs on an
 interface.
 
-**Note:** The TC hook point is also associated with an interface.
-Within bpfman, TC is implemented in a similar fashion to XDP in that it uses a dispatcher with
-stub functions.
-TCX is a fairly new kernel feature that improves how the kernel handles multiple TC programs
-on a given interface.
-bpfman is on the process of integrating TCX support, which will replace the dispatcher logic
-for TC.
-Until then, assume TC behaves in a similar fashion to XDP.
+!!! Note
+    The TC hook point is also associated with an interface.
+    Within bpfman, TC is implemented in a similar fashion to XDP in that it uses a dispatcher with
+    stub functions.
+    TCX is a fairly new kernel feature that improves how the kernel handles multiple TC programs
+    on a given interface.
+    bpfman is in the process of integrating TCX support, which will replace the dispatcher logic
+    for TC.
+    Until then, assume TC behaves in a similar fashion to XDP.
See [Launching bpfman](../getting-started/launching-bpfman.md) for more detailed instructions on building and loading bpfman. diff --git a/docs/getting-started/building-bpfman.md b/docs/getting-started/building-bpfman.md index caf58d907..23b882b92 100644 --- a/docs/getting-started/building-bpfman.md +++ b/docs/getting-started/building-bpfman.md @@ -162,11 +162,11 @@ Once installed, use `man` to view the pages. man bpfman list ``` -> **NOTE:** -> `bpfman` commands with subcommands (specifically `bpfman load`) have `-` in the -> manpage subcommand generation. -> So use `bpfman load-file`, `bpfman load-image`, `bpfman load-image-xdp`, etc. to -> display the subcommand manpage files. +!!! Note + `bpfman` commands with subcommands (specifically `bpfman load`) have `-` in the + manpage subcommand generation. + So use `bpfman load-file`, `bpfman load-image`, `bpfman load-image-xdp`, etc. to + display the subcommand manpage files. ## Development Environment Setup @@ -277,12 +277,13 @@ throughout the `bpfman` documentation is to run a Kubernetes Kind cluster. See [kind](https://kind.sigs.k8s.io/) for documentation and installation instructions. `kind` also requires `docker` to be installed. ->> **NOTE:** By default, bpfman-operator deploys bpfman with CSI enabled. -CSI requires Kubernetes v1.26 due to a PR -([kubernetes/kubernetes#112597](https://github.com/kubernetes/kubernetes/pull/112597)) -that addresses a gRPC Protocol Error that was seen in the CSI client code and it doesn't appear to have -been backported. -It is recommended to install kind v0.20.0 or later. +!!! Note + By default, bpfman-operator deploys bpfman with CSI enabled. + CSI requires Kubernetes v1.26 due to a PR + ([kubernetes/kubernetes#112597](https://github.com/kubernetes/kubernetes/pull/112597)) + that addresses a gRPC Protocol Error that was seen in the CSI client code and it doesn't appear + to have been backported. + It is recommended to install kind v0.20.0 or later. If the following error is seen, it means there is an older version of Kubernetes running and it needs to be upgraded. diff --git a/docs/getting-started/launching-bpfman.md b/docs/getting-started/launching-bpfman.md index 127338e65..cca098bbe 100644 --- a/docs/getting-started/launching-bpfman.md +++ b/docs/getting-started/launching-bpfman.md @@ -124,9 +124,10 @@ The socket service is the long lived process, which doesn't have any special per The service that runs `bpfman-rpc` is only started when there is a request on the socket, and then `bpfman-rpc` stops itself after an inactivity timeout. -> For security reasons, it is recommended to run `bpfman-rpc` as a systemd service when running -on a local host. -For local development, some may find it useful to run `bpfman-rpc` as a long lived process. +!!! Note + For security reasons, it is recommended to run `bpfman-rpc` as a systemd service when running + on a local host. + For local development, some may find it useful to run `bpfman-rpc` as a long lived process. When run as a systemd service, the set of linux capabilities are limited to only the required set. If permission errors are encountered, see [Linux Capabilities](../developer-guide/linux-capabilities.md) diff --git a/docs/getting-started/running-release.md b/docs/getting-started/running-release.md index ca6deb4f0..71e67f231 100644 --- a/docs/getting-started/running-release.md +++ b/docs/getting-started/running-release.md @@ -4,10 +4,11 @@ This section describes how to deploy `bpfman` from a given release. 
See [Releases](https://github.com/bpfman/bpfman/releases) for the set of bpfman releases. -> **Note:** Instructions for interacting with bpfman change from release to release, so reference -> release specific documentation. For example: -> -> [https://bpfman.io/v0.4.2/getting-started/running-release/](https://bpfman.io/v0.4.2/getting-started/running-release/) +!!! Note + Instructions for interacting with bpfman change from release to release, so reference + release specific documentation. For example: + + [https://bpfman.io/v0.4.2/getting-started/running-release/](https://bpfman.io/v0.4.2/getting-started/running-release/) Jump to the [Setup and Building bpfman](./building-bpfman.md) section for help building from the latest code or building from a release branch. diff --git a/docs/getting-started/running-rpm.md b/docs/getting-started/running-rpm.md index 45e77f2cd..7465b9129 100644 --- a/docs/getting-started/running-rpm.md +++ b/docs/getting-started/running-rpm.md @@ -35,11 +35,13 @@ To install nightly builds: sudo dnf copr enable @ebpf-sig/bpfman-next ``` -> **Note:** If both the bpfman and bpfman-next copr repos are enabled DNF will -> automatically pull from bpfman-next. To disable one or the other simply run -> ```console -> sudo dnf copr disable @ebpf-sig/bpfman-next -> ``` +!!! Note + If both the bpfman and bpfman-next copr repos are enabled DNF will + automatically pull from bpfman-next. To disable one or the other simply run + + ```console + sudo dnf copr disable @ebpf-sig/bpfman-next + ``` ### Install RPM From Packit Service @@ -123,8 +125,9 @@ sudo dnf install packit sudo dnf install cargo-rpm-macros ``` -> NOTE: `cargo-rpm-macros` needs to be version 25 or higher. -> It appears this is only available on Fedora 37, 38, 39 and Rawhide at the moment. +!!! Note + `cargo-rpm-macros` needs to be version 25 or higher. + It appears this is only available on Fedora 37, 38, 39 and Rawhide at the moment. ### Build Locally From 209af23a42448fd303ac4113da0ba3cf5ba74110 Mon Sep 17 00:00:00 2001 From: Billy McFall <22157057+Billy99@users.noreply.github.com> Date: Thu, 15 Aug 2024 16:13:13 -0400 Subject: [PATCH 2/5] cli: fix spacing on bpfman image build Signed-off-by: Billy McFall <22157057+Billy99@users.noreply.github.com> --- bpfman/src/bin/cli/args.rs | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/bpfman/src/bin/cli/args.rs b/bpfman/src/bin/cli/args.rs index 193ae2a0a..9fba15e7e 100644 --- a/bpfman/src/bin/cli/args.rs +++ b/bpfman/src/bin/cli/args.rs @@ -320,26 +320,34 @@ pub(crate) enum ImageSubCommand { /// /// To use, the --container-file and --tag must be included, as well as a pointer to /// at least one bytecode file that can be passed in several ways. Use either: + /// /// * --bytecode: for a single bytecode built for the host architecture. + /// /// * --cilium-ebpf-project: for a cilium/ebpf project directory which contains /// multiple object files for different architectures. + /// /// * --bc-386-el .. --bc-s390x-eb: to add one or more architecture specific bytecode files. /// /// Examples: /// bpfman image build -f Containerfile.bytecode -t quay.io//go-xdp-counter:test \ /// -b ./examples/go-xdp-counter/bpf_x86_bpfel.o + #[clap(verbatim_doc_comment)] Build(BuildBytecodeArgs), /// Generate the OCI image labels for a given bytecode file. /// /// To use, the --container-file and --tag must be included, as well as a pointer to /// at least one bytecode file that can be passed in several ways. 
Use either:
+    ///
     /// * --bytecode: for a single bytecode built for the host architecture.
+    ///
     /// * --cilium-ebpf-project: for a cilium/ebpf project directory which contains
     ///   multiple object files for different architectures.
+    ///
     /// * --bc-386-el .. --bc-s390x-eb: to add one or more architecture specific bytecode files.
     ///
     /// Examples:
     ///    bpfman image generate-build-args --bc-amd64-el ./examples/go-xdp-counter/bpf_x86_bpfel.o
+    #[clap(verbatim_doc_comment)]
     GenerateBuildArgs(GenerateArgs),
 }

From 748dccf28692f0ede667cfdaa556e7d3d4500752 Mon Sep 17 00:00:00 2001
From: Billy McFall <22157057+Billy99@users.noreply.github.com>
Date: Fri, 21 Jun 2024 11:29:26 -0400
Subject: [PATCH 3/5] docs: Cleanup docs based on recommendations

There have been several recommendations to clean-up and rearrange the
docs.

Signed-off-by: Billy McFall <22157057+Billy99@users.noreply.github.com>
---
 docs/developer-guide/image-build.md       | 342 +++++++++++++++++++++-
 docs/developer-guide/xdp-overview.md      |  16 +-
 docs/getting-started/building-bpfman.md   | 250 ++++++++--------
 docs/getting-started/example-bpf-k8s.md   |  14 +-
 docs/getting-started/example-bpf-local.md |   2 +
 docs/getting-started/launching-bpfman.md  |  86 +++---
 docs/getting-started/overview.md          |   7 +-
 docs/index.md                             | 136 ++++-----
 docs/quick-start.md                       |   4 +-
 examples/Makefile                         | 116 ++++++--
 mkdocs.yml                                |   2 +-
 11 files changed, 695 insertions(+), 280 deletions(-)

diff --git a/docs/developer-guide/image-build.md b/docs/developer-guide/image-build.md
index 848ed890d..a7ec87d7e 100644
--- a/docs/developer-guide/image-build.md
+++ b/docs/developer-guide/image-build.md
@@ -1,21 +1,345 @@
 # bpfman Container Images
 
-Container images for the `bpfman` binaries are automatically built and
-pushed to `quay.io/bpfman` whenever code is merged into the `main` branch of the
-`github.com/bpfman/bpfman` repository under the `:latest` tag.
+Container images for `bpfman` are automatically built and pushed to `quay.io/` under the
+`:latest` tag whenever code is merged into the `main` branch of the `github.com/bpfman/bpfman`
+and `github.com/bpfman/bpfman-operator` repositories.
 
-## Building the images locally
+* [quay.io/bpfman](https://quay.io/organization/bpfman): This repository contains images needed
+  to run bpfman.
+  It contains the `xdp-dispatcher` and `tc-dispatcher` eBPF container images, which are used by
+  bpfman to allow multiple XDP or TC programs to be loaded on a given interface.
+  It also includes the container images which are used to deploy bpfman in a Kubernetes deployment:
+    * **bpfman**: Packages all the bpfman binaries, including `bpfman` CLI, `bpfman-ns` and `bpfman-rpc`.
+    * **bpfman-agent**: Agent that listens to KubeAPI Server and makes calls to bpfman to load or unload
+      eBPF programs based on user intent.
+    * **bpfman-operator**: Operator for deploying bpfman.
+    * **tc-dispatcher**: eBPF container image containing the TC Dispatcher, which is used by bpfman
+      to manage and allow multiple TC based programs to be loaded on a given TC hook point.
+    * **xdp-dispatcher**: eBPF container image containing the XDP Dispatcher, which is used by bpfman
+      to manage and allow multiple XDP based programs to be loaded on a given XDP hook point.
+    * **csi-node-driver-registrar**: CSI Driver used by bpfman.
+    * **bpfman-operator-bundle**: Image containing all the CRDs (Custom-Resource-Definitions) used
+      by bpfman-agent to define Kubernetes objects used to manage eBPF programs.
+* [quay.io/bpfman-bytecode](https://quay.io/organization/bpfman-bytecode): This repository contains + eBPF container images for all of the generated bytecode from + [examples/](https://github.com/bpfman/bpfman/tree/main/examples/) and + [integration-test/](https://github.com/bpfman/bpfman/tree/main/tests/integration-test/bpf). +* [quay.io/bpfman-userspace](https://quay.io/organization/bpfman-userspace): This repository contains + userspace container images for all of the example programs in + [examples/](https://github.com/bpfman/bpfman/tree/main/examples/). -### bpfman +## Multiple Architecture Support + +All `bpfman` related container images that are automatically built and pushed to `quay.io/` contain a manifest file +and images built for the following architectures: + +* x86_64 +* arm64 +* ppc64le +* s390x + +## Locally Build bpfman-operator and bpfman-agent Container Images + +When testing or developing in bpfman-operator, it may be necessary to run with updated changes +to the bpfman-operator or bpfman-agent container images. +The local Makefile will build and load both images based on the current changes: ```sh -docker build -f /Containerfile.bpfman . -t bpfman:local +cd $HOME/src/bpfman-operator/ + +make build-images +make run-on-kind ``` -## Running locally in container +## Locally Build bpfman Container Image + +When testing or developing in bpfman-operator, it may be necessary to run with updated changes +to bpfman. +By default, bpfman-agent uses `quay.io/bpfman/bpfman:latest`. +To build the bpfman binaries in a container image, run: -### bpfman +```sh +cd $HOME/src/bpfman/ + +docker build -f ./Containerfile.bpfman.local . -t quay.io/$QUAY_USER/bpfman:test +``` + +Use any registry, image name and tag, above is just an example. +Next, build and deploy the bpfman-operator and bpfman-agent with the locally built bpfman container +image. ```sh -sudo docker run --init --privileged --net=host -v /etc/bpfman/certs/:/etc/bpfman/certs/ -v /sys/fs/bpf:/sys/fs/bpf quay.io/bpfman/bpfman:latest +cd $HOME/src/bpfman-operator/ + +BPFMAN_IMG=quay.io/$QUAY_USER/bpfman:test make build-images +BPFMAN_IMG=quay.io/$QUAY_USER/bpfman:test make run-on-kind ``` + +To use, the Kind cluster must have access to the image. +So either the image needs to be pushed to a registry and made public (make +public via the repo GUI after the push): + +```sh +docker push quay.io/$QUAY_USER/bpfman:test +``` + +OR load into kind cluster: + +```sh +kind load docker-image quay.io/$QUAY_USER/bpfman:test --name bpfman-deployment +``` + +Now the image should be running in the Kind cluster: + +```sh +kubectl get pods -A + NAMESPACE NAME READY STATUS RESTARTS AGE + bpfman bpfman-daemon-87fqg 3/3 Running 0 16m + bpfman bpfman-operator-7f67bc7c57-bc6lk 2/2 Running 0 16m + : + +kubectl describe pod -n bpfman bpfman-daemon-87fqg + Name: bpfman-daemon-87fqg + Namespace: bpfman + : + Containers: + bpfman: + Container ID: containerd://1777d1810f3648f43df775e9d9af79406eaffc5694aa712da04c3f4e578093b3 + Image: quay.io/$QUAY_USER/bpfman:test + Image ID: quay.io/$QUAY_USER/bpfman@sha256:f2c94b7acff6b463fc55232a1896816283521dd1ba5560b0d0779af99f811cd0 +: +``` + +## Locally Build TC or XDP Dispatcher Container Image + +The TC and XDP Dispatcher images are automatically built and pushed to `quay.io/` under the `:latest` +tag whenever code is merged into the `main` branch of the `github.com/bpfman/bpfman`. +If a dispatcher container image needs to be built locally, use the following steps. 
+ +Build the object files: + +```sh +cargo xtask build-ebpf --libbpf-dir ~/src/libbpf/ + +$ ls .output/tc_dispatcher.bpf/ +bpf_arm64_bpfel.o bpf_powerpc_bpfel.o bpf_s390_bpfeb.o bpf_x86_bpfel.o + +$ ls .output/xdp_dispatcher_v2.bpf/ +bpf_arm64_bpfel.o bpf_powerpc_bpfel.o bpf_s390_bpfeb.o bpf_x86_bpfel.o +``` + +Then build the bytecode image files: + +```sh +bpfman image build -f Containerfile.bytecode -t quay.io/$QUAY_USER/tc-dispatcher:test -b .output/tc_dispatcher.bpf/bpf_x86_bpfel.o +bpfman image build -f Containerfile.bytecode -t quay.io/$QUAY_USER/xdp-dispatcher:test -b .output/xdp_dispatcher_v2.bpf/bpf_x86_bpfel.o +``` + +If a multi-arch image is needed, use: + +```sh +bpfman image build -f Containerfile.bytecode.multi.arch -t quay.io/$QUAY_USER/tc-dispatcher:test -c .output/tc_dispatcher.bpf/ +bpfman image build -f Containerfile.bytecode.multi.arch -t quay.io/$QUAY_USER/xdp-dispatcher:test -c .output/xdp_dispatcher_v2.bpf/ +``` + +!!! NOTE + To build images for multiple architectures on a local system, docker may need additional configuration + settings to allow for caching of non-native images. See + [https://docs.docker.com/build/building/multi-platform/](https://docs.docker.com/build/building/multi-platform/) + for more details. + +## Locally Build Example Container Images + +The example images are automatically built and pushed to `quay.io/` under the `:latest` tag whenever code is +merged into the `main` branch of the `github.com/bpfman/bpfman`. +For each example, there is a bytecode and a userspace image. +For official bpfman images, bytecode images are pushed to +[quay.io/bpfman-bytecode](https://quay.io/organization/bpfman-bytecode) and userspace images are pushed to +[quay.io/bpfman-userspace](https://quay.io/organization/bpfman-userspace). +For example: + +* [quay.io/bpfman-bytecode/go-kprobe-counter](https://quay.io/repository/bpfman-bytecode/go-kprobe-counter) +* [quay.io/bpfman-bytecode/go-tc-counter](https://quay.io/repository/bpfman-bytecode/go-tc-counter) +* [quay.io/bpfman-bytecode/go-tracepoint-counter](https://quay.io/repository/bpfman-bytecode/go-tracepoint-counter) +* ... + +* [quay.io/bpfman-userspace/go-kprobe-counter](https://quay.io/repository/bpfman-userspace/go-kprobe-counter) +* [quay.io/bpfman-userspace/go-tc-counter](https://quay.io/repository/bpfman-userspace/go-tc-counter) +* [quay.io/bpfman-userspace/go-tracepoint-counter](https://quay.io/repository/bpfman-userspace/go-tracepoint-counter) +* ... + +The Makefile in the examples directory has commands to build both sets of images. +Image names and tags can be controlled using environment variables. +If private images are being generated, both bytecode and userspace images will probably be pushed to the same +account, so bytecode and userspace images will need to be distinguished by either fully qualified image +names (using IMAGE_TC_BC, IMAGE_TC_US, IMAGE_XDP_BC, IMAGE_XDP_US, etc) or unique tags for each (TAG_BC, +TAG_US). +See `make help` in the examples directory and the samples below. 
+ +### Example Bytecode Container Images + +If an example bytecode container image needs to be built locally, use the following to +build the bytecode container image, (optionally passing the `USER_BC` and `TAG_BC` for the image): + +```sh +# Build images for all eBPF program types +$ make build-bc-images USER_BC=$QUAY_USER TAG_BC=test-bc +: + => pushing quay.io/$QUAY_USER/go-kprobe-counter:test-bc with docker +: + => pushing quay.io/$QUAY_USER/go-tc-counter:test-bc with docker +: + => pushing quay.io/$QUAY_USER/go-tracepoint-counter:test-bc with docker +: + +-- OR -- + +# Build image for a single eBPF program type, XDP in this example +$ make build-bc-xdp USER_BC=$QUAY_USER TAG_BC=test-bc +: + => pushing quay.io/$QUAY_USER/go-xdp-counter:test-bc with docker +``` + +If a multi-arch image is needed, use (appending `PLATFORM`): + +```sh +$ make build-bc-xdp USER_BC=$QUAY_USER TAG_BC=test-bc PLATFORM=linux/amd64,linux/arm64,linux/ppc64le,linux/s390x +: + => pushing quay.io/$QUAY_USER/go-xdp-counter:test-bc with docker +``` + +!!! NOTE + To build images for multiple architectures on a local system, docker may need additional configuration + settings to allow for caching of non-native images. See + [https://docs.docker.com/build/building/multi-platform/](https://docs.docker.com/build/building/multi-platform/) + for more details. + +### Example Userspace Container Images + +If an example userspace container image needs to be built locally, use the following to +build the userspace container images, (optionally passing the `USER_US` and `TAG_US` for the image): + +```sh +cd ~/src/bpfman/examples/ + +# Build all images +$ make build-us-images USER_US=$QUAY_USER TAG_US=test-us +: + => pushing quay.io/$QUAY_USER/go-kprobe-counter:test-us with docker +: + => pushing quay.io/$QUAY_USER/go-tc-counter:test-us with docker +: + => pushing quay.io/$QUAY_USER/go-tracepoint-counter:test-us with docker +: + +-- OR -- + +# Build a single image +$ make build-us-xdp USER_US=$QUAY_USER TAG_US=test-us +: + => pushing quay.io/$QUAY_USER/go-xdp-counter:test-us with docker +``` + +If a multi-arch image is needed, use (appending `PLATFORM`): + +```sh +$ make build-us-xdp USER_US=$QUAY_USER TAG_US=test-us PLATFORM=linux/amd64,linux/arm64,linux/ppc64le,linux/s390x +: + => pushing quay.io/$QUAY_USER/go-xdp-counter:test-us with docker +``` + +!!! NOTE + To build images for multiple architectures on a local system, docker may need additional configuration + settings to allow for caching of non-native images. See + [https://docs.docker.com/build/building/multi-platform/](https://docs.docker.com/build/building/multi-platform/) + for more details. + +## Adding Additional Container Images + +When adding a new container image to one of the bpfman repositories, whether it be via the examples or +integration tests, several steps need to be performed. + +* One of the maintainers of the bpfman quay.io repositories must: + * Add the image to the quay.io repository. + * Make the new image public. + * On the image, provide `Write` access to the `bpfman+github_actions` robot account. +* Add the new image to the + [bpfman/.github/workflows/image-build.yaml](https://github.com/bpfman/bpfman/blob/main/.github/workflows/image-build.yaml) + so the image is built and pushed on each PR merge. +* For examples, update the `examples/Makefile` to build the new images. + +## Signing Container Images + +It is encouraged to sign the eBPF container images, which can easily be done using +[cosign](https://docs.sigstore.dev/signing/quickstart/). 
+Below is a summary of the steps needed to sign an image.
+
+First, install `cosign`:
+
+```console
+go install github.com/sigstore/cosign/v2/cmd/cosign@latest
+```
+
+Then sign the image.
+The `cosign` command will generate a URL.
+Follow the URL to a `sigstore` login and log in with either GitHub, Google, or Microsoft.
+That will generate a verification code that will complete the `cosign` command.
+
+```console
+cosign sign -y quay.io/$QUAY_USER/test-image@sha256:55fe3cfe46409939876be27f7ed4d2948842918145f6cda167d0c31fdea2046f
+Generating ephemeral keys...
+Retrieving signed certificate...
+:
+https://oauth2.sigstore.dev/auth/auth?access_type=online&client_id=sigstore&code_challenge=EwHYBahRxlbli-oEXxS9DoEzEWcyuS_f1lLBhntCVFI&code_challenge_method=S256&nonce=2kR9mJbP0eUxFBAQI9Nhs6LyS4l&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&scope=openid+email&state=2kR9mIqOn6IgmAw46BxVrnEEi0M
+Enter verification code: wq3g58qhw6y25wwibcz2kgzfx
+
+Successfully verified SCT...
+tlog entry created with index: 120018072
+Pushing signature to: quay.io/$QUAY_USER/test-image
+```
+
+## Containerfiles
+
+There are multiple Containerfiles in the bpfman repositories.
+Below is a summary of the files and their purpose.
+
+### Userspace Containerfiles
+
+* **bpfman/Containerfile.bpfman.local:** This file is used to create a userspace container image
+  with bpfman binaries (`bpfman` CLI, `bpfman-rpc` and `bpfman-ns`).
+  It can be used to run local bpfman code in a Kubernetes cluster with the `bpfman-operator` and `bpfman-agent`.
+* **bpfman/Containerfile.bpfman.multi.arch:** This file is used to create a userspace container image
+  with bpfman binaries (`bpfman` CLI, `bpfman-rpc` and `bpfman-ns`), but for multiple architectures.
+  It is used by the `bpfman/.github/workflows/image-build.yaml` file to build bpfman multi-arch images
+  on every github Pull Request merge.
+  The resulting images are stored in `quay.io`.
+* **bpfman/Containerfile.bpfman.openshift:** This file is used to create a userspace container image
+  with bpfman binaries (`bpfman` CLI, `bpfman-rpc` and `bpfman-ns`).
+  It is used by internal OpenShift build processes.
+* **bpfman/examples/go-\*-counter/container-deployment/Containerfile.go-\*-counter:** Where '*' is one of the
+  bpfman supported program types (tc, tcx, tracepoint, etc.).
+  These files are used to create the userspace container images associated with the examples.
+* **bpfman-operator/Containerfile.bpfman-agent:** This file is used to create a userspace
+  container image with bpfman-agent.
+* **bpfman-operator/Containerfile.bpfman-agent.openshift:** This file is used to create a userspace
+  container image with bpfman-agent.
+  It is used by internal OpenShift build processes.
+* **bpfman-operator/Containerfile.bpfman-operator:** This file is used to create a userspace
+  container image with bpfman-operator.
+* **bpfman-operator/Containerfile.bpfman-operator.openshift:** This file is used to create a userspace
+  container image with bpfman-operator.
+  It is used by internal OpenShift build processes.
+* **bpfman-operator/Containerfile.bundle:** This file is used to create a container image with
+  all the Kubernetes object definitions (ConfigMaps, Custom Resource Definitions (CRDs), Roles,
+  Role Bindings, Service, Service Accounts, etc) bpfman needs to be deployed in a Kubernetes cluster.
+
+### Bytecode Containerfiles
+
+* **bpfman/Containerfile.bytecode:** This file is used to create a container image with eBPF bytecode
+  packaged inside.
+ The Containerfile applies labels to the container image describing the bytecode for consumers of the image. + See [eBPF Bytecode Image Specifications](./shipping-bytecode.md) for more details. +* **bpfman/Containerfile.bytecode.multi.arch:** This file is used to create a container image with eBPF bytecode + packaged inside, but packages eBPF bytecode for multiple architectures. + The Containerfile applies labels to the container image describing the bytecode for consumers of the image. + See [eBPF Bytecode Image Specifications](./shipping-bytecode.md) for more details. diff --git a/docs/developer-guide/xdp-overview.md b/docs/developer-guide/xdp-overview.md index 30a226209..158bdf0e4 100644 --- a/docs/developer-guide/xdp-overview.md +++ b/docs/developer-guide/xdp-overview.md @@ -39,8 +39,8 @@ We will use the priority of 100. Find a deeper dive into CLI syntax in [CLI Guide](../getting-started/cli-guide.md). ```console -sudo bpfman load image --image-url quay.io/bpfman-bytecode/xdp_pass:latest xdp \ - --iface eno3 --priority 100 +sudo bpfman load image --image-url quay.io/bpfman-bytecode/xdp_pass:latest --name pass \ + xdp --iface eno3 --priority 100 Bpfman State --------------- Name: pass @@ -129,8 +129,8 @@ We will now load 2 more programs with different priorities to demonstrate how bp will ensure they are ordered correctly: ```console -sudo bpfman load image --image-url quay.io/bpfman-bytecode/xdp_pass:latest xdp \ - --iface eno3 --priority 50 +sudo bpfman load image --image-url quay.io/bpfman-bytecode/xdp_pass:latest --name pass \ + xdp --iface eno3 --priority 50 Bpfman State --------------- Name: pass @@ -155,8 +155,8 @@ sudo bpfman load image --image-url quay.io/bpfman-bytecode/xdp_pass:latest xdp \ ``` ```console -sudo bpfman load image --image-url quay.io/bpfman-bytecode/xdp_pass:latest xdp \ - --iface eno3 --priority 200 +sudo bpfman load image --image-url quay.io/bpfman-bytecode/xdp_pass:latest --name pass \ + xdp --iface eno3 --priority 200 Bpfman State --------------- Name: pass @@ -261,8 +261,8 @@ then the program can be loaded with those additional return values using the `pr parameter (see `bpfman load image xdp --help` for list of valid values): ```console -sudo bpfman load image --image-url quay.io/bpfman-bytecode/xdp_pass:latest xdp \ - --iface eno3 --priority 150 --proceed-on "pass" --proceed-on "dispatcher_return" +sudo bpfman load image --image-url quay.io/bpfman-bytecode/xdp_pass:latest --name pass \ + xdp --iface eno3 --priority 150 --proceed-on "pass" --proceed-on "dispatcher_return" Bpfman State --------------- Name: pass diff --git a/docs/getting-started/building-bpfman.md b/docs/getting-started/building-bpfman.md index 23b882b92..13f16228e 100644 --- a/docs/getting-started/building-bpfman.md +++ b/docs/getting-started/building-bpfman.md @@ -1,19 +1,20 @@ # Setup and Building bpfman This section describes how to build bpfman. -If this is the first time building bpfman, jump to the -[Development Environment Setup](#development-environment-setup) section for help installing -the tooling. +If this is the first time building bpfman, the +[Development Environment Setup](#development-environment-setup) section describes all packages needed +to build bpfman. -There is also an option to run images from a given release, or from an RPM, as opposed to +There is also an option to run prebuilt images from a given release or from an RPM, as opposed to building locally. 
-Jump to the [Run bpfman From Release Image](./running-release.md) section for installing -from a fixed release or jump to the [Run bpfman From RPM](./running-rpm.md) section for installing -from an RPM. +Jump to: + +* [Run bpfman From Release Image](./running-release.md) for installing from a prebuilt fixed release. +* [Run bpfman From RPM](./running-rpm.md) for installing from a prebuilt RPM. ## Kernel Versions -eBPF is still a relatively new technology and being actively developed. +eBPF is still a relatively new technology that is being actively developed. To take advantage of this constantly evolving technology, it is best to use the newest kernel version possible. If bpfman needs to be run on an older kernel, this section describes some of the kernel @@ -35,6 +36,9 @@ Major kernel features leveraged by bpfman: * **Relaxed CAP_BPF Requirement:** Prior to Kernel 5.19, all eBPF system calls required CAP_BPF. This required userspace programs that wanted to access eBPF maps to have the CAP_BPF Linux capability. With the kernel 5.19 change, CAP_BPF is only required for load and unload requests. +* **TCX:** TCX support was added in Kernel 6.6, and added to bpfman in v0.5.2. + TCX has performance improvements over TC and adds support in the kernel for multiple TCX programs to run + on a given TC hook point. bpfman tested on older kernel versions: @@ -59,115 +63,6 @@ bpfman tested on older kernel versions: * bpfman fails to run as a systemd service because of some capabilities issues in the bpfman.service file. -## Clone the bpfman Repo - -You can build and run bpfman from anywhere. However, if you plan to make changes to the bpfman -operator, specifically run `make generate`, it will need to be under your `GOPATH` because -Kubernetes Code-generator does not work outside of `GOPATH` -[Issue 86753](https://github.com/kubernetes/kubernetes/issues/86753). -Assuming your `GOPATH` is set to the typical `$HOME/go`, your repo should live in -`$HOME/go/src/github.com/bpfman/bpfman` - -``` -mkdir -p $HOME/go/src/github.com/bpfman -cd $HOME/go/src/github.com/bpfman -git clone git@github.com:bpfman/bpfman.git -``` - -## Building bpfman - -To just test with the latest bpfman, containerized image are stored in `quay.io/bpfman` -(see [bpfman Container Images](../developer-guide/image-build.md)). -To build with local changes, use the following commands. - -If you are building bpfman for the first time OR the eBPF code has changed: - -```console -cargo xtask build-ebpf --libbpf-dir /path/to/libbpf -``` - -If protobuf files have changed (see -[RPC Protobuf Generation](../developer-guide/develop-operator.md#rpc-protobuf-generation)): - -```console -cargo xtask build-proto -``` - -To build bpfman: - -```console -cargo build -``` - -## Building CLI TAB completion files - -Optionally, to build the CLI TAB completion files, run the following command: - -```console -cargo xtask build-completion -``` - -Files are generated for different shells: - -```console -ls .output/completions/ -_bpfman bpfman.bash bpfman.elv bpfman.fish _bpfman.ps1 -``` - -### bash - -For `bash`, this generates a file that can be used by the linux `bash-completion` -utility (see [Install bash-completion](#install-bash-completion) for installation -instructions). - -If the files are generated, they are installed automatically when using the install -script (i.e. `sudo ./scripts/setup.sh install` - See -[Run as a systemd Service](example-bpf-local.md#run-as-a-systemd-service)). 
-To install the files manually, copy the file associated with a given shell to -`/usr/share/bash-completion/completions/`. -For example: - -```console -sudo cp .output/completions/bpfman.bash /usr/share/bash-completion/completions/. - -bpfman g -``` - -### Other shells - -Files are generated other shells (Elvish, Fish, PowerShell and zsh). -For these shells, generated file must be manually installed. - -## Building CLI Manpages - -Optionally, to build the CLI Manpage files, run the following command: - -```console -cargo xtask build-man-page -``` - -If the files are generated, they are installed automatically when using the install -script (i.e. `sudo ./scripts/setup.sh install` - See -[Run as a systemd Service](example-bpf-local.md#run-as-a-systemd-service)). -To install the files manually, copy the generated files to `/usr/local/share/man/man1/`. -For example: - -```console -sudo cp .output/manpage/bpfman*.1 /usr/local/share/man/man1/. -``` - -Once installed, use `man` to view the pages. - -```console -man bpfman list -``` - -!!! Note - `bpfman` commands with subcommands (specifically `bpfman load`) have `-` in the - manpage subcommand generation. - So use `bpfman load-file`, `bpfman load-image`, `bpfman load-image-xdp`, etc. to - display the subcommand manpage files. - ## Development Environment Setup To build bpfman, the following packages must be installed. @@ -200,7 +95,13 @@ sudo dnf install llvm-devel clang-devel elfutils-libelf-devel sudo apt install clang lldb lld libelf-dev gcc-multilib ``` -### Install libssl Development Library +### Install SSL Library + +`dnf` based OS: + +```console +sudo dnf install openssl-devel +``` `apt` based OS: @@ -218,7 +119,11 @@ sudo apt install libbpf-dev ### Install Protobuf Compiler -For further detailed instructions, see [protoc](https://grpc.io/docs/protoc-installation/). +If any of the [Protobuf files](https://github.com/bpfman/bpfman/tree/main/proto) need to be updated, +then the protobuf-compiler will need to be installed. +See [RPC Protobuf Generation](../developer-guide/develop-operator.md#rpc-protobuf-generation) for bpfman +use of protobufs and see [protoc](https://grpc.io/docs/protoc-installation/) for more detailed installation +instructions. `dnf` based OS: @@ -367,3 +272,110 @@ And to verify locally: ```console taplo fmt --check ``` + +## Clone the bpfman and bpfman-operator Repositories + +You can build and run bpfman from anywhere. +For simplicity throughout this documentation, all examples will assume +`$HOME/src/bpfman/` and `$HOME/src/bpfman-operator/`. +bpfman-operator only needs to be cloned if deploying in Kubernetes. 
+ +``` +mkdir -p $HOME/src/ +cd $HOME/src/ +git clone https://github.com/bpfman/bpfman.git +git clone https://github.com/bpfman/bpfman-operator.git +``` + +## Building bpfman + +If you are building bpfman for the first time OR the eBPF code has changed: + +```console +cd ~/src/bpfman/ +cargo xtask build-ebpf --libbpf-dir /path/to/libbpf +``` + +If protobuf files have changed (see +[RPC Protobuf Generation](../developer-guide/develop-operator.md#rpc-protobuf-generation)): + +```console +cargo xtask build-proto +``` + +To build bpfman: + +```console +cargo build +``` + +## Building CLI TAB completion files + +Optionally, to build the CLI TAB completion files, run the following command: + +```console +cd ~/src/bpfman/ +cargo xtask build-completion +``` + +Files are generated for different shells: + +```console +ls .output/completions/ +_bpfman bpfman.bash bpfman.elv bpfman.fish _bpfman.ps1 +``` + +### bash + +For `bash`, this generates a file that can be used by the linux `bash-completion` +utility (see [Install bash-completion](#install-bash-completion) for installation +instructions). + +If the files are generated, they are installed automatically when using the install +script (i.e. `sudo ./scripts/setup.sh install` - See +[Run as a systemd Service](example-bpf-local.md#run-as-a-systemd-service)). +To install the files manually, copy the file associated with a given shell to +`/usr/share/bash-completion/completions/`. +For example: + +```console +sudo cp .output/completions/bpfman.bash /usr/share/bash-completion/completions/. + +bpfman g +``` + +### Other shells + +Files are generated other shells (Elvish, Fish, PowerShell and zsh). +For these shells, generated file must be manually installed. + +## Building CLI Manpages + +Optionally, to build the CLI Manpage files, run the following command: + +```console +cd ~/src/bpfman/ +cargo xtask build-man-page +``` + +If the files are generated, they are installed automatically when using the install +script (i.e. `sudo ./scripts/setup.sh install` - See +[Run as a systemd Service](example-bpf-local.md#run-as-a-systemd-service)). +To install the files manually, copy the generated files to `/usr/local/share/man/man1/`. +For example: + +```console +sudo cp .output/manpage/bpfman*.1 /usr/local/share/man/man1/. +``` + +Once installed, use `man` to view the pages. + +```console +man bpfman list +``` + +!!! NOTE + `bpfman` commands with subcommands (specifically `bpfman load`) have `-` in the + manpage subcommand generation. + So use `man bpfman load-file`, `man bpfman load-image`, `man bpfman load-image-xdp`, + etc. to display the subcommand manpage files. 
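
The completion files generated for the other shells have to be copied into whatever directory
that shell reads completions from.
A minimal sketch for fish, assuming its typical per-user completion directory (the destination
path is an assumption and varies by distribution and shell configuration):

```console
# Assumed per-user fish completion directory; adjust to match your setup.
cd ~/src/bpfman/
mkdir -p ~/.config/fish/completions
cp .output/completions/bpfman.fish ~/.config/fish/completions/
```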
diff --git a/docs/getting-started/example-bpf-k8s.md b/docs/getting-started/example-bpf-k8s.md index 2c645ed05..b41a3b3c7 100644 --- a/docs/getting-started/example-bpf-k8s.md +++ b/docs/getting-started/example-bpf-k8s.md @@ -312,21 +312,21 @@ make deploy for target in deploy-tc deploy-tracepoint deploy-xdp deploy-xdp-ms deploy-kprobe deploy-target deploy-uprobe ; do \ make $target || true; \ done - make[1]: Entering directory '/home/bmcfall/go/src/github.com/bpfman/bpfman/examples' + make[1]: Entering directory '/home/<$USER>/go/src/github.com/bpfman/bpfman/examples' sed 's@URL_BC@quay.io/bpfman-bytecode/go-tc-counter:latest@' config/default/go-tc-counter/patch.yaml.env > config/default/go-tc-counter/patch.yaml - cd config/default/go-tc-counter && /home/bmcfall/go/src/github.com/bpfman/bpfman/examples/bin/kustomize edit set image quay.io/bpfman-userspace/go-tc-counter=quay.io/bpfman-userspace/go-tc-counter:latest + cd config/default/go-tc-counter && /home/<$USER>/go/src/github.com/bpfman/bpfman/examples/bin/kustomize edit set image quay.io/bpfman-userspace/go-tc-counter=quay.io/bpfman-userspace/go-tc-counter:latest namespace/go-tc-counter created serviceaccount/bpfman-app-go-tc-counter created daemonset.apps/go-tc-counter-ds created tcprogram.bpfman.io/go-tc-counter-example created : sed 's@URL_BC@quay.io/bpfman-bytecode/go-uprobe-counter:latest@' config/default/go-uprobe-counter/patch.yaml.env > config/default/go-uprobe-counter/patch.yaml - cd config/default/go-uprobe-counter && /home/bmcfall/go/src/github.com/bpfman/bpfman/examples/bin/kustomize edit set image quay.io/bpfman-userspace/go-uprobe-counter=quay.io/bpfman-userspace/go-uprobe-counter:latest + cd config/default/go-uprobe-counter && /home/<$USER>/go/src/github.com/bpfman/bpfman/examples/bin/kustomize edit set image quay.io/bpfman-userspace/go-uprobe-counter=quay.io/bpfman-userspace/go-uprobe-counter:latest namespace/go-uprobe-counter created serviceaccount/bpfman-app-go-uprobe-counter created daemonset.apps/go-uprobe-counter-ds created uprobeprogram.bpfman.io/go-uprobe-counter-example created - make[1]: Leaving directory '/home/bmcfall/go/src/github.com/bpfman/bpfman/examples' + make[1]: Leaving directory '/home/<$USER>/go/src/github.com/bpfman/bpfman/examples' # Test Away ... 
@@ -354,9 +354,9 @@ make undeploy for target in undeploy-tc undeploy-tracepoint undeploy-xdp undeploy-xdp-ms undeploy-kprobe undeploy-uprobe undeploy-target ; do \ make $target || true; \ done - make[1]: Entering directory '/home/bmcfall/go/src/github.com/bpfman/bpfman/examples' + make[1]: Entering directory '/home/<$USER>/go/src/github.com/bpfman/bpfman/examples' sed 's@URL_BC@quay.io/bpfman-bytecode/go-tc-counter:latest@' config/default/go-tc-counter/patch.yaml.env > config/default/go-tc-counter/patch.yaml - cd config/default/go-tc-counter && /home/bmcfall/go/src/github.com/bpfman/bpfman/examples/bin/kustomize edit set image quay.io/bpfman-userspace/go-tc-counter=quay.io/bpfman-userspace/go-tc-counter:latest + cd config/default/go-tc-counter && /home/<$USER>/go/src/github.com/bpfman/bpfman/examples/bin/kustomize edit set image quay.io/bpfman-userspace/go-tc-counter=quay.io/bpfman-userspace/go-tc-counter:latest namespace "go-tc-counter" deleted serviceaccount "bpfman-app-go-tc-counter" deleted daemonset.apps "go-tc-counter-ds" deleted @@ -366,7 +366,7 @@ make undeploy namespace "go-target" deleted serviceaccount "bpfman-app-go-target" deleted daemonset.apps "go-target-ds" deleted - make[1]: Leaving directory '/home/bmcfall/go/src/github.com/bpfman/bpfman/examples' + make[1]: Leaving directory '/home/<$USER>/go/src/github.com/bpfman/bpfman/examples' ``` Individual examples can be loaded and unloaded as well, for example `make deploy-xdp` and diff --git a/docs/getting-started/example-bpf-local.md b/docs/getting-started/example-bpf-local.md index f24078cdc..6aa902a49 100644 --- a/docs/getting-started/example-bpf-local.md +++ b/docs/getting-started/example-bpf-local.md @@ -50,6 +50,8 @@ The output should show the count and total bytes of packets as they pass through interface as shown below: ```console +cd ~/src/bpfman/examples/go-xdp-counter/ + go run -exec sudo . --iface eno3 2023/07/17 17:43:58 Using Input: Interface=eno3 Priority=50 Source=/home/<$USER>/src/bpfman/examples/go-xdp-counter/bpf_bpfel.o 2023/07/17 17:43:58 Program registered with id 6211 diff --git a/docs/getting-started/launching-bpfman.md b/docs/getting-started/launching-bpfman.md index cca098bbe..6a9d54614 100644 --- a/docs/getting-started/launching-bpfman.md +++ b/docs/getting-started/launching-bpfman.md @@ -17,76 +17,68 @@ cd bpfman/ cargo build ``` -## Start bpfman-rpc +## Install and Start bpfman -When running bpfman, the RPC Server `bpfman-rpc` can be run as a long running process or a -systemd service. -Examples run the same, independent of how bpfman is deployed. - -### Run as a Long Lived Process - -While learning and experimenting with `bpfman`, it may be useful to run `bpfman` in the foreground -(which requires a second terminal to run the `bpfman` CLI commands). -When run in this fashion, logs are dumped directly to the terminal. -For more details on how logging is handled in bpfman, see [Logging](../developer-guide/logging.md). +Run the following command to copy the `bpfman` CLI and `bpfman-rpc` binaries to `/usr/sbin/` and +copy `bpfman.socket` and `bpfman.service` files to `/usr/lib/systemd/system/`. 
+This option will also enable and start the systemd services: ```console -sudo RUST_LOG=info ./target/debug/bpfman-rpc --timeout=0 -[INFO bpfman::utils] Has CAP_BPF: true -[INFO bpfman::utils] Has CAP_SYS_ADMIN: true -[INFO bpfman_rpc::serve] Using no inactivity timer -[INFO bpfman_rpc::serve] Using default Unix socket -[INFO bpfman_rpc::serve] Listening on /run/bpfman-sock/bpfman.sock +cd bpfman/ +sudo ./scripts/setup.sh install ``` -When a build is run for bpfman, built binaries can be found in `./target/debug/`. -So when launching `bpfman-rpc` and calling `bpfman` CLI commands, the binary must be in the $PATH -or referenced directly: +`bpfman` CLI is now in $PATH and can be used to load, view and unload eBPF programs. ```console -sudo ./target/debug/bpfman list -``` +sudo bpfman load image --image-url quay.io/bpfman-bytecode/xdp_pass:latest --name pass xdp --iface eno3 --priority 100 -For readability, the remaining sample commands will assume the `bpfman` CLI binary is in the $PATH, -so `./target/debug/` will be dropped. +sudo bpfman list + Program ID Name Type Load Time + 53885 pass xdp 2024-08-26T17:41:36-0400 -### Run as a systemd Service +sudo bpfman unload 53885 +``` -Run the following command to copy the `bpfman` CLI and `bpfman-rpc` binaries to `/usr/sbin/` and -copy `bpfman.socket` and `bpfman.service` files to `/usr/lib/systemd/system/`. -This option will also enable and start the systemd services: +`bpfman` CLI is a Rust program that calls the `bpfman` library directly. +To view logs while running `bpfman` CLI commands, prepend `RUST_LOG=info` to each command +(see [Logging](../developer-guide/logging.md) for more details): ```console -sudo ./scripts/setup.sh install +sudo RUST_LOG=info bpfman list +[INFO bpfman::utils] Has CAP_BPF: true +[INFO bpfman::utils] Has CAP_SYS_ADMIN: true + Program ID Name Type Load Time ``` -`bpfman` CLI is now in $PATH, so `./targer/debug/` is not needed: +The examples (see [Deploying Example eBPF Programs On Local Host](./example-bpf-local.md)) +are Go based programs, so they are building and sending RPC messaged to the rust based binary +`bpfman-rpc`, which in turn calls the `bpfman` library. ```console -sudo bpfman list +cd bpfman/examples/go-xdp-counter/ +go run -exec sudo . -iface eno3 ``` -To view logs, use `journalctl`: +To view bpfman logs for RPC based applications, including all the provided examples, use `journalctl`: ```console sudo journalctl -f -u bpfman.service -u bpfman.socket -Mar 27 09:13:54 server-calvin systemd[1]: Listening on bpfman.socket - bpfman API Socket. - -Mar 27 09:15:43 server-calvin systemd[1]: Started bpfman.service - Run bpfman as a service. 
Mar 27 09:15:43 server-calvin bpfman-rpc[2548091]: Has CAP_BPF: true
-Mar 27 09:15:43 server-calvin bpfman-rpc[2548091]: Has CAP_SYS_ADMIN: true
-Mar 27 09:15:43 server-calvin bpfman-rpc[2548091]: Using a Unix socket from systemd
-Mar 27 09:15:43 server-calvin bpfman-rpc[2548091]: Using inactivity timer of 15 seconds
-Mar 27 09:15:43 server-calvin bpfman-rpc[2548091]: Listening on /run/bpfman-sock/bpfman.sock
-Mar 27 09:15:43 server-calvin bpfman-rpc[2548091]: Starting Cosign Verifier, downloading data from Sigstore TUF repository
-Mar 27 09:15:45 server-calvin bpfman-rpc[2548091]: Loading program bytecode from file: /home//src/bpfman/examples/go-kprobe-counter/bpf_bpfel.o
-Mar 27 09:15:45 server-calvin bpfman-rpc[2548091]: Added probe program with name: kprobe_counter and id: 7568
-Mar 27 09:15:48 server-calvin bpfman-rpc[2548091]: Removing program with id: 7568
-Mar 27 09:15:58 server-calvin bpfman-rpc[2548091]: Shutdown Unix Handler /run/bpfman-sock/bpfman.sock
-Mar 27 09:15:58 server-calvin systemd[1]: bpfman.service: Deactivated successfully.
+:
+
+Aug 26 18:03:54 server-calvin bpfman-rpc[2401725]: Using a Unix socket from systemd
+Aug 26 18:03:54 server-calvin bpfman-rpc[2401725]: Using inactivity timer of 15 seconds
+Aug 26 18:03:54 server-calvin bpfman-rpc[2401725]: Listening on /run/bpfman-sock/bpfman.sock
+Aug 26 18:03:54 server-calvin bpfman-rpc[2401725]: Has CAP_BPF: true
+Aug 26 18:03:54 server-calvin bpfman-rpc[2401725]: Has CAP_SYS_ADMIN: true
+Aug 26 18:03:54 server-calvin bpfman-rpc[2401725]: Starting Cosign Verifier, downloading data from Sigstore TUF repository
+Aug 26 18:03:55 server-calvin bpfman-rpc[2401725]: Loading program bytecode from file: /home/$USER/src/bpfman/bpfman/examples/go-xdp-counter/bpf_x86_bpfel.o
+Aug 26 18:03:57 server-calvin bpfman-rpc[2401725]: The bytecode image: quay.io/bpfman/xdp-dispatcher:latest is signed
+Aug 26 18:03:57 server-calvin bpfman-rpc[2401725]: Added xdp program with name: xdp_stats and id: 53919
+Aug 26 18:04:09 server-calvin bpfman-rpc[2401725]: Shutdown Unix Handler /run/bpfman-sock/bpfman.sock
 ```
 
-#### Additional Notes
+### Additional Notes
 
 To update the configuration settings associated with running `bpfman` as a service, edit the
 service configuration files:
diff --git a/docs/getting-started/overview.md b/docs/getting-started/overview.md
index b87588faa..29fbb1cb5 100644
--- a/docs/getting-started/overview.md
+++ b/docs/getting-started/overview.md
@@ -18,12 +18,15 @@ functions, but that is not supported at the moment.
 
 ![bpfman library](../img/bpfman_library.png)
 
-The `bpfman-rpc` server can run in one of two modes.
-It can be run as a long running process or as a systemd service that uses
+## Local Host Deployment
+
+When deploying `bpfman` on a local server, the `bpfman-rpc` binary runs as a systemd service that uses
 [socket activation](https://man7.org/linux/man-pages/man1/systemd-socket-activate.1.html)
 to start `bpfman-rpc` only when there is a RPC message to process.
 More details are provided in [Deploying Example eBPF Programs On Local Host](./example-bpf-local.md).
 
+## Kubernetes Deployment
+
 When deploying `bpfman` in a Kubernetes deployment, `bpfman-agent`, `bpfman-rpc`, and the
 `bpfman` library are packaged in a container.
 When the container starts, `bpfman-rpc` is started as a long running process.
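
One way to see the socket activation described above in action is to watch the unit states while
an RPC client runs.
A rough sketch, assuming the `bpfman.socket` and `bpfman.service` units installed by the setup
script; the exact states and timing depend on the inactivity timeout and the system:

```console
systemctl is-active bpfman.socket     # expected: active (the socket is always listening)
systemctl is-active bpfman.service    # typically inactive until an RPC request arrives

# Running one of the Go examples sends RPC requests, which starts bpfman-rpc on demand:
cd ~/src/bpfman/examples/go-xdp-counter/
go run -exec sudo . -iface eno3

# In a second terminal, bpfman.service now reports active until the inactivity timer expires:
systemctl is-active bpfman.service
```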
diff --git a/docs/index.md b/docs/index.md index 19b50b236..97c432e60 100644 --- a/docs/index.md +++ b/docs/index.md @@ -40,38 +40,38 @@ in Kubernetes include: ## Challenges for eBPF in Kubernetes -- Requires privileged pods. - - eBPF-enabled apps require at least CAP_BPF permissions and potentially - more depending on the type of program that is being attached. - - Since the Linux capabilities are very broad it is challenging to constrain - a pod to the minimum set of privileges required. This can allow them to do - damage (either unintentionally or intentionally). -- Handling multiple eBPF programs on the same eBPF hooks. - - Not all eBPF hooks are designed to support multiple programs. - - Some software using eBPF assumes exclusive use of an eBPF hook and can - unintentionally eject existing programs when being attached. This can - result in silent failures and non-deterministic failures. -- Debugging problems with deployments is hard. - - The cluster administrator may not be aware that eBPF programs are being - used in a cluster. - - It is possible for some eBPF programs to interfere with others in - unpredictable ways. - - SSH access or a privileged pod is necessary to determine the state of eBPF - programs on each node in the cluster. -- Lifecycle management of eBPF programs. - - While there are libraries for the basic loading and unloading of eBPF - programs, a lot of code is often needed around them for lifecycle - management. -- Deployment on Kubernetes is not simple. - - It is an involved process that requires first writing a daemon that loads - your eBPF bytecode and deploying it using a DaemonSet. - - This requires careful design and intricate knowledge of the eBPF program - lifecycle to ensure your program stays loaded and that you can easily - tolerate pod restarts and upgrades. - - In eBPF enabled K8s deployments today, the eBPF Program is often embedded - into the userspace binary that loads and interacts with it. This means - there's no easy way to have fine-grained versioning control of the - bpfProgram in relation to it's accompanying userspace counterpart. +- **Requires privileged pods:** + - eBPF-enabled apps require at least CAP_BPF permissions and potentially + more depending on the type of program that is being attached. + - Since the Linux capabilities are very broad it is challenging to constrain + a pod to the minimum set of privileges required. This can allow them to do + damage (either unintentionally or intentionally). +- **Handling multiple eBPF programs on the same eBPF hooks:** + - Not all eBPF hooks are designed to support multiple programs. + - Some software using eBPF assumes exclusive use of an eBPF hook and can + unintentionally eject existing programs when being attached. This can + result in silent failures and non-deterministic failures. +- **Debugging problems with deployments is hard:** + - The cluster administrator may not be aware that eBPF programs are being + used in a cluster. + - It is possible for some eBPF programs to interfere with others in + unpredictable ways. + - SSH access or a privileged pod is necessary to determine the state of eBPF + programs on each node in the cluster. +- **Lifecycle management of eBPF programs:** + - While there are libraries for the basic loading and unloading of eBPF + programs, a lot of code is often needed around them for lifecycle + management. 
+- **Deployment on Kubernetes is not simple:** + - It is an involved process that requires first writing a daemon that loads + your eBPF bytecode and deploying it using a DaemonSet. + - This requires careful design and intricate knowledge of the eBPF program + lifecycle to ensure your program stays loaded and that you can easily + tolerate pod restarts and upgrades. + - In eBPF enabled K8s deployments today, the eBPF Program is often embedded + into the userspace binary that loads and interacts with it. This means + there's no easy way to have fine-grained versioning control of the + bpfProgram in relation to it's accompanying userspace counterpart. ## What is bpfman? @@ -95,42 +95,42 @@ bpfman is developed in Rust and built on top of Aya, a Rust eBPF library. The benefits of this solution include the following: -- Security - - Improved security because only the bpfman daemon, which can be tightly - controlled, has the privileges needed to load eBPF programs, while access - to the API can be controlled via standard RBAC methods. Within bpfman, only - a single thread keeps these capabilities while the other threads (serving - RPCs) do not. - - Gives the administrators control over who can load programs. - - Allows administrators to define rules for the ordering of networking eBPF - programs. (ROADMAP) -- Visibility/Debuggability - - Improved visibility into what eBPF programs are running on a system, which - enhances the debuggability for developers, administrators, and customer - support. - - The greatest benefit is achieved when all apps use bpfman, but even if they - don't, bpfman can provide visibility into all the eBPF programs loaded on - the nodes in a cluster. -- Multi-program Support - - Support for the coexistence of multiple eBPF programs from multiple users. - - Uses the [libxdp multiprog - protocol](https://github.com/xdp-project/xdp-tools/blob/master/lib/libxdp/protocol.org) - to allow multiple XDP programs on single interface - - This same protocol is also supported for TC programs to provide a common - multi-program user experience across both TC and XDP. -- Productivity - - Simplifies the deployment and lifecycle management of eBPF programs in a - Kubernetes cluster. - - developers can stop worrying about program lifecycle (loading, attaching, - pin management, etc.) and use existing eBPF libraries to interact with - their program maps using well defined pin points which are managed by - bpfman. - - Developers can still use Cilium/libbpf/Aya/etc libraries for eBPF - development, and load/unload with bpfman. - - Provides eBPF Bytecode Image Specifications that allows fine-grained - separate versioning control for userspace and kernelspace programs. This - also allows for signing these container images to verify bytecode - ownership. +- **Security:** + - Improved security because only the bpfman daemon, which can be tightly + controlled, has the privileges needed to load eBPF programs, while access + to the API can be controlled via standard RBAC methods. Within bpfman, only + a single thread keeps these capabilities while the other threads (serving + RPCs) do not. + - Gives the administrators control over who can load programs. + - Allows administrators to define rules for the ordering of networking eBPF + programs. (ROADMAP) +- **Visibility/Debuggability:** + - Improved visibility into what eBPF programs are running on a system, which + enhances the debuggability for developers, administrators, and customer + support. 
+ - The greatest benefit is achieved when all apps use bpfman, but even if they + don't, bpfman can provide visibility into all the eBPF programs loaded on + the nodes in a cluster. +- **Multi-program Support:** + - Support for the coexistence of multiple eBPF programs from multiple users. + - Uses the [libxdp multiprog + protocol](https://github.com/xdp-project/xdp-tools/blob/master/lib/libxdp/protocol.org) + to allow multiple XDP programs on single interface + - This same protocol is also supported for TC programs to provide a common + multi-program user experience across both TC and XDP. +- **Productivity:** + - Simplifies the deployment and lifecycle management of eBPF programs in a + Kubernetes cluster. + - Developers can stop worrying about program lifecycle (loading, attaching, + pin management, etc.) and use existing eBPF libraries to interact with + their program maps using well defined pin points which are managed by + bpfman. + - Developers can still use Cilium/libbpf/Aya/etc libraries for eBPF + development, and load/unload with bpfman. + - Provides eBPF Bytecode Image Specifications that allows fine-grained + separate versioning control for userspace and kernelspace programs. This + also allows for signing these container images to verify bytecode + ownership. For more details, please see the following: diff --git a/docs/quick-start.md b/docs/quick-start.md index 09d565b37..064cfbdd9 100644 --- a/docs/quick-start.md +++ b/docs/quick-start.md @@ -107,7 +107,7 @@ sudo dnf erase -y bpfman-0.4.2-1.fc39.x86_64 sudo systemctl daemon-reload ``` -## Deploy Released container images on Kubernetes +## Deploy Released Container Images on Kubernetes The quickest solution for running `bpfman` in a Kubernetes deployment is to run a [local Kubernetes KIND Cluster](https://kind.sigs.k8s.io/docs/user/quick-start/): @@ -119,7 +119,7 @@ kind create cluster --name=test-bpfman Next, deploy the bpfman CRDs: ```console -export BPFMAN_REL=0.4.2 +export BPFMAN_REL=0.5.1 kubectl apply -f https://github.com/bpfman/bpfman/releases/download/v${BPFMAN_REL}/bpfman-crds-install.yaml ``` diff --git a/examples/Makefile b/examples/Makefile index 40c933d36..c47bec637 100644 --- a/examples/Makefile +++ b/examples/Makefile @@ -25,6 +25,7 @@ ## quay.io/bpfman-bytecode/go-xdp-counter:v0.1.0 ## quay.io/bpfman-userspace/go-xdp-counter:v0.1.0 ## quay.io/bpfman-userspace/go-app-counter:v0.1.0 +## : ## # VERSION defines the project version for the bundle. @@ -123,17 +124,90 @@ generate: ## Run `go generate` to build the bytecode for each of the examples. build-release-yamls: kustomize ## Generate yamls examples for a specific release version. 
VERSION=$(VERSION) ./build-release-yamls.sh + +.PHONY: build-bc-images +build-bc-images: generate ## Build all example bytecode images + bpfman image build -f ../Containerfile.bytecode -t ${IMAGE_TC_BC} -b ./go-tc-counter/bpf_x86_bpfel.o + bpfman image build -f ../Containerfile.bytecode -t ${IMAGE_TP_BC} -b ./go-tracepoint-counter/bpf_x86_bpfel.o + bpfman image build -f ../Containerfile.bytecode -t ${IMAGE_XDP_BC} -b ./go-xdp-counter/bpf_x86_bpfel.o + bpfman image build -f ../Containerfile.bytecode -t ${IMAGE_KP_BC} -b ./go-kprobe-counter/bpf_x86_bpfel.o + bpfman image build -f ../Containerfile.bytecode -t ${IMAGE_UP_BC} -b ./go-uprobe-counter/bpf_x86_bpfel.o + bpfman image build -f ../Containerfile.bytecode -t ${IMAGE_URP_BC} -b ./go-uretprobe-counter/bpf_x86_bpfel.o + bpfman image build -f ../Containerfile.bytecode -t ${IMAGE_APP_BC} -b ./go-app-counter/bpf_x86_bpfel.o + +.PHONY: build-bc-tc +build-bc-tc: generate ## Build TC example bytecode image + bpfman image build -f ../Containerfile.bytecode -t ${IMAGE_TC_BC} -b ./go-tc-counter/bpf_x86_bpfel.o + +.PHONY: build-bc-tp +build-bc-tp: generate ## Build Tracepoint example bytecode image + bpfman image build -f ../Containerfile.bytecode -t ${IMAGE_TP_BC} -b ./go-tracepoint-counter/bpf_x86_bpfel.o + +.PHONY: build-bc-xdp +build-bc-xdp: generate ## Build XDP example bytecode image + bpfman image build -f ../Containerfile.bytecode -t ${IMAGE_XDP_BC} -b ./go-xdp-counter/bpf_x86_bpfel.o + +.PHONY: build-bc-kprobe +build-bc-kprobe: generate ## Build kprobe example bytecode image + bpfman image build -f ../Containerfile.bytecode -t ${IMAGE_KP_BC} -b ./go-kprobe-counter/bpf_x86_bpfel.o + +.PHONY: build-bc-uprobe +build-bc-uprobe: generate ## Build uprobe example bytecode image + bpfman image build -f ../Containerfile.bytecode -t ${IMAGE_UP_BC} -b ./go-uprobe-counter/bpf_x86_bpfel.o + +.PHONY: build-bc-uretprobe +build-bc-uretprobe: generate ## Build uretprobe example bytecode image + bpfman image build -f ../Containerfile.bytecode -t ${IMAGE_URP_BC} -b ./go-uretprobe-counter/bpf_x86_bpfel.o + +.PHONY: build-bc-app +build-bc-app: generate ## Build application example bytecode image + bpfman image build -f ../Containerfile.bytecode -t ${IMAGE_APP_BC} -b ./go-app-counter/bpf_x86_bpfel.o + + .PHONY: build-us-images build-us-images: build ## Build all example userspace images + docker buildx build -t ${IMAGE_TC_US} --platform ${PLATFORM} --load -f ./go-tc-counter/container-deployment/Containerfile.go-tc-counter ../ + docker buildx build -t ${IMAGE_TP_US} --platform ${PLATFORM} --load -f ./go-tracepoint-counter/container-deployment/Containerfile.go-tracepoint-counter ../ + docker buildx build -t ${IMAGE_XDP_US} --platform ${PLATFORM} --load -f ./go-xdp-counter/container-deployment/Containerfile.go-xdp-counter ../ + docker buildx build -t ${IMAGE_KP_US} --platform ${PLATFORM} --load -f ./go-kprobe-counter/container-deployment/Containerfile.go-kprobe-counter ../ + docker buildx build -t ${IMAGE_UP_US} --platform ${PLATFORM} --load -f ./go-uprobe-counter/container-deployment/Containerfile.go-uprobe-counter ../ + docker buildx build -t ${IMAGE_URP_US} --platform ${PLATFORM} --load -f ./go-uretprobe-counter/container-deployment/Containerfile.go-uretprobe-counter ../ + docker buildx build -t ${IMAGE_GT_US} --platform ${PLATFORM} --load -f ./go-target/container-deployment/Containerfile.go-target ../ + docker buildx build -t ${IMAGE_APP_US} --platform ${PLATFORM} --load -f ./go-app-counter/container-deployment/Containerfile.go-app-counter ../ + +.PHONY: 
build-us-tc
+build-us-tc: build ## Build TC example userspace image
 	docker buildx build -t ${IMAGE_TC_US} --platform ${PLATFORM} --load -f ./go-tc-counter/container-deployment/Containerfile.go-tc-counter ../
+
+.PHONY: build-us-tp
+build-us-tp: build ## Build Tracepoint example userspace image
 	docker buildx build -t ${IMAGE_TP_US} --platform ${PLATFORM} --load -f ./go-tracepoint-counter/container-deployment/Containerfile.go-tracepoint-counter ../
+
+.PHONY: build-us-xdp
+build-us-xdp: build ## Build XDP example userspace image
 	docker buildx build -t ${IMAGE_XDP_US} --platform ${PLATFORM} --load -f ./go-xdp-counter/container-deployment/Containerfile.go-xdp-counter ../
+
+.PHONY: build-us-kprobe
+build-us-kprobe: build ## Build kprobe example userspace image
 	docker buildx build -t ${IMAGE_KP_US} --platform ${PLATFORM} --load -f ./go-kprobe-counter/container-deployment/Containerfile.go-kprobe-counter ../
+
+.PHONY: build-us-uprobe
+build-us-uprobe: build ## Build uprobe example userspace image
 	docker buildx build -t ${IMAGE_UP_US} --platform ${PLATFORM} --load -f ./go-uprobe-counter/container-deployment/Containerfile.go-uprobe-counter ../
+
+.PHONY: build-us-uretprobe
+build-us-uretprobe: build ## Build uretprobe example userspace image
 	docker buildx build -t ${IMAGE_URP_US} --platform ${PLATFORM} --load -f ./go-uretprobe-counter/container-deployment/Containerfile.go-uretprobe-counter ../
+
+.PHONY: build-us-target
+build-us-target: build ## Build target example userspace image
 	docker buildx build -t ${IMAGE_GT_US} --platform ${PLATFORM} --load -f ./go-target/container-deployment/Containerfile.go-target ../
+
+.PHONY: build-us-app
+build-us-app: build ## Build application example userspace image
 	docker buildx build -t ${IMAGE_APP_US} --platform ${PLATFORM} --load -f ./go-app-counter/container-deployment/Containerfile.go-app-counter ../
+
 .PHONY: push-us-images
 push-us-images: ## Push all example userspace images
 	docker push ${IMAGE_TC_US}
@@ -149,8 +223,12 @@ push-us-images: ## Push all example userspace images
 load-us-images-kind: build-us-images ## Build and load all example userspace images into kind
 	kind load docker-image ${IMAGE_TC_US} ${IMAGE_TP_US} ${IMAGE_XDP_US} ${IMAGE_KP_US} ${IMAGE_UP_US} ${IMAGE_GT_US} ${IMAGE_APP_US} --name ${KIND_CLUSTER_NAME}
 
-##@ Deployment Variables (not commands)
-TAG: ## Used to set all images to a fixed tag. Example: make deploy TAG=v0.2.0
+##@ Build and Deployment Variables (not commands)
+TAG: ## Deploy commands only. Used to set all images to a fixed tag. TAG takes precedence over TAG_BC or TAG_US. Example: make deploy TAG=v0.2.0
+TAG_BC: ## Used to set all bytecode images to a given tag. Example: make deploy TAG_BC=test-bc
+TAG_US: ## Used to set all userspace images to a given tag. Example: make build-us-images TAG_US=test; make deploy TAG_US=test-us
+USER_BC: ## Used to set all bytecode images to a given repository account. Example: make deploy USER_BC=$QUAY_USER
+USER_US: ## Used to set all userspace images to a given repository account. Example: make build-us-images USER_US=$QUAY_USER; make deploy USER_US=$QUAY_USER
 IMAGE_TC_BC: ## TC Bytecode image. Example: make deploy-tc IMAGE_TC_BC=quay.io/user1/go-tc-counter-bytecode:test
 IMAGE_TC_US: ## TC Userspace image. Example: make deploy-tc IMAGE_TC_US=quay.io/user1/go-tc-counter-userspace:test
 IMAGE_TP_BC: ## Tracepoint Bytecode image.
Example: make deploy-tracepoint IMAGE_TP_BC=quay.io/user1/go-tracepoint-counter-bytecode:test @@ -170,21 +248,25 @@ KIND_CLUSTER_NAME: ## Name of the deployed cluster to load example images to, de ignore-not-found: ## For any undeploy command, set to true to ignore resource not found errors during deletion. Example: make undeploy ignore-not-found=true ##@ Deployment -IMAGE_TC_BC ?= quay.io/bpfman-bytecode/go-tc-counter:latest -IMAGE_TC_US ?= quay.io/bpfman-userspace/go-tc-counter:latest -IMAGE_TP_BC ?= quay.io/bpfman-bytecode/go-tracepoint-counter:latest -IMAGE_TP_US ?= quay.io/bpfman-userspace/go-tracepoint-counter:latest -IMAGE_XDP_BC ?= quay.io/bpfman-bytecode/go-xdp-counter:latest -IMAGE_XDP_US ?= quay.io/bpfman-userspace/go-xdp-counter:latest -IMAGE_KP_BC ?= quay.io/bpfman-bytecode/go-kprobe-counter:latest -IMAGE_KP_US ?= quay.io/bpfman-userspace/go-kprobe-counter:latest -IMAGE_UP_BC ?= quay.io/bpfman-bytecode/go-uprobe-counter:latest -IMAGE_UP_US ?= quay.io/bpfman-userspace/go-uprobe-counter:latest -IMAGE_URP_BC ?= quay.io/bpfman-bytecode/go-uretprobe-counter:latest -IMAGE_URP_US ?= quay.io/bpfman-userspace/go-uretprobe-counter:latest -IMAGE_APP_BC ?= quay.io/bpfman-bytecode/go-app-counter:latest -IMAGE_APP_US ?= quay.io/bpfman-userspace/go-app-counter:latest -IMAGE_GT_US ?= quay.io/bpfman-userspace/go-target:latest +TAG_BC ?= latest +TAG_US ?= latest +USER_BC ?= bpfman-bytecode +USER_US ?= bpfman-userspace +IMAGE_TC_BC ?= quay.io/$(USER_BC)/go-tc-counter:$(TAG_BC) +IMAGE_TC_US ?= quay.io/$(USER_US)/go-tc-counter:$(TAG_US) +IMAGE_TP_BC ?= quay.io/$(USER_BC)/go-tracepoint-counter:$(TAG_BC) +IMAGE_TP_US ?= quay.io/$(USER_US)/go-tracepoint-counter:$(TAG_US) +IMAGE_XDP_BC ?= quay.io/$(USER_BC)/go-xdp-counter:$(TAG_BC) +IMAGE_XDP_US ?= quay.io/$(USER_US)/go-xdp-counter:$(TAG_US) +IMAGE_KP_BC ?= quay.io/$(USER_BC)/go-kprobe-counter:$(TAG_BC) +IMAGE_KP_US ?= quay.io/$(USER_US)/go-kprobe-counter:$(TAG_US) +IMAGE_UP_BC ?= quay.io/$(USER_BC)/go-uprobe-counter:$(TAG_BC) +IMAGE_UP_US ?= quay.io/$(USER_US)/go-uprobe-counter:$(TAG_US) +IMAGE_URP_BC ?= quay.io/$(USER_BC)/go-uretprobe-counter:$(TAG_BC) +IMAGE_URP_US ?= quay.io/$(USER_US)/go-uretprobe-counter:$(TAG_US) +IMAGE_APP_BC ?= quay.io/$(USER_BC)/go-app-counter:$(TAG_BC) +IMAGE_APP_US ?= quay.io/$(USER_US)/go-app-counter:$(TAG_US) +IMAGE_GT_US ?= quay.io/$(USER_US)/go-target:$(TAG_US) KIND_CLUSTER_NAME ?= bpfman-deployment diff --git a/mkdocs.yml b/mkdocs.yml index cda231560..610153f8d 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -53,10 +53,10 @@ nav: - Quick Start: quick-start.md - Documentation: - bpfman Overview: getting-started/overview.md + - Setup and Building: getting-started/building-bpfman.md - Launching bpfman: getting-started/launching-bpfman.md - Deploying Example eBPF Programs On Local Host: getting-started/example-bpf-local.md - Deploying Example eBPF Programs On Kubernetes: getting-started/example-bpf-k8s.md - - Setup and Building: getting-started/building-bpfman.md - Run bpfman From Release Image: getting-started/running-release.md - Run bpfman From RPM: getting-started/running-rpm.md - CLI Guide: getting-started/cli-guide.md From e0eec4613ad030857c221a4b006a03e566e30a41 Mon Sep 17 00:00:00 2001 From: Billy McFall <22157057+Billy99@users.noreply.github.com> Date: Tue, 17 Sep 2024 12:38:05 -0400 Subject: [PATCH 4/5] docs: Scrub CLI Guide Make sure all the helptext is accurate and all the examples run. Also document the new image commands. 
Signed-off-by: Billy McFall <22157057+Billy99@users.noreply.github.com> --- bpfman/src/bin/cli/args.rs | 20 +- docs/developer-guide/shipping-bytecode.md | 2 +- docs/getting-started/cli-guide.md | 336 +++++++++++++++++++--- 3 files changed, 310 insertions(+), 48 deletions(-) diff --git a/bpfman/src/bin/cli/args.rs b/bpfman/src/bin/cli/args.rs index 9fba15e7e..95d241e74 100644 --- a/bpfman/src/bin/cli/args.rs +++ b/bpfman/src/bin/cli/args.rs @@ -465,11 +465,6 @@ impl GoArch { #[derive(Args, Debug)] #[command(disable_version_flag = true)] pub(crate) struct BuildBytecodeArgs { - /// Optional: bytecode file to use for building the image assuming host architecture. - /// Example: -b ./bpf_x86_bpfel.o - #[clap(flatten)] - pub(crate) bytecode_file: BytecodeFile, - /// Required: Name and optionally a tag in the name:tag format. /// Example: --tag quay.io/bpfman-bytecode/xdp_pass:latest #[clap(short, long, verbatim_doc_comment)] @@ -484,6 +479,9 @@ pub(crate) struct BuildBytecodeArgs { /// Example: --runtime podman #[clap(short, long, verbatim_doc_comment)] pub(crate) runtime: Option, + + #[clap(flatten)] + pub(crate) bytecode_file: BytecodeFile, } #[derive(Args, Debug)] @@ -506,6 +504,12 @@ pub(crate) struct BytecodeFile { #[clap(short, long, verbatim_doc_comment)] pub(crate) bytecode: Option, + /// Optional: If specified pull multi-arch bytecode files from a cilium/ebpf formatted project + /// where the bytecode files all contain a standard bpf__<(el/eb)>.o tag. + /// Example: --cilium-ebpf-project ./examples/go-xdp-counter + #[clap(short, long, verbatim_doc_comment)] + pub(crate) cilium_ebpf_project: Option, + /// Optional: bytecode file to use for building the image assuming amd64 architecture. /// Example: --bc-386-el ./examples/go-xdp-counter/bpf_386_bpfel.o #[clap(long, verbatim_doc_comment, group = "multi-arch")] @@ -570,12 +574,6 @@ pub(crate) struct BytecodeFile { /// Example: --bc-s390x-eb ./examples/go-xdp-counter/bpf_s390x_bpfeb.o #[clap(long, verbatim_doc_comment, group = "multi-arch")] pub(crate) bc_s390x_eb: Option, - - /// Optional: If specified pull multi-arch bytecode files from a cilium/ebpf formatted project - /// where the bytecode files all contain a standard bpf__<(el/eb)>.o tag. - /// Example: --cilium-ebpf-project ./examples/go-xdp-counter - #[clap(short, long, verbatim_doc_comment)] - pub(crate) cilium_ebpf_project: Option, } #[derive(Args, Debug)] diff --git a/docs/developer-guide/shipping-bytecode.md b/docs/developer-guide/shipping-bytecode.md index 3e2f83369..16d1f2065 100644 --- a/docs/developer-guide/shipping-bytecode.md +++ b/docs/developer-guide/shipping-bytecode.md @@ -60,7 +60,7 @@ Example Containerfiles for single-arch and multi-arch can be found at `Container #### Host Platform Architecture Image Build ```console -bpfman image build -b ./examples/go-xdp-counter/bpf_bpfel.o -f Containerfile.bytecode --tag quay.io//go-xdp-counter +bpfman image build -b ./examples/go-xdp-counter/bpf_x86_bpfel.o -f Containerfile.bytecode --tag quay.io//go-xdp-counter ``` Where `./examples/go-xdp-counter/bpf_x86_bpfel.o` is the path to the bytecode object file. diff --git a/docs/getting-started/cli-guide.md b/docs/getting-started/cli-guide.md index 806eee3df..6f717ed53 100644 --- a/docs/getting-started/cli-guide.md +++ b/docs/getting-started/cli-guide.md @@ -36,7 +36,6 @@ from the `bpfman` repository. Below are the commands supported by `bpfman`. ```console -sudo bpfman --help An eBPF manager focusing on simplifying the deployment and administration of eBPF programs. 
Usage: bpfman @@ -83,7 +82,7 @@ Commands: Options: -p, --path Required: Location of local bytecode file - Example: --path /run/bpfman/examples/go-xdp-counter/bpf_bpfel.o + Example: --path /run/bpfman/examples/go-xdp-counter/bpf_x86_bpfel.o -n, --name Required: The name of the function that is the entry point for the BPF program @@ -120,7 +119,7 @@ and sudo bpfman load image --help Load an eBPF program packaged in a OCI container image from a given registry -Usage: bpfman load image [OPTIONS] --image-url +Usage: bpfman load image [OPTIONS] --image-url --name Commands: xdp Install an eBPF program on the XDP hook point for a given interface @@ -151,10 +150,7 @@ Options: [default: IfNotPresent] -n, --name - Optional: The name of the function that is the entry point for the BPF program. - If not provided, the program name defined as part of the bytecode image will be used. - - [default: ] + Required: The name of the function that is the entry point for the eBPF program. -g, --global ... Optional: Global variables to be set when program is loaded. @@ -217,20 +213,20 @@ Options: Example loading from local file (`--path` is the fully qualified path): ```console -sudo bpfman load file --path $HOME/src/bpfman/tests/integration-test/bpf/.output/xdp_pass.bpf.o --name "pass" xdp --iface vethb2795c7 --priority 100 +sudo bpfman load file --path $HOME/src/bpfman/tests/integration-test/bpf/.output/xdp_pass.bpf.o --name "pass" xdp --iface eno3 --priority 100 ``` -Example from image in remote repository (Note: `--name` is built into the image and is not required): +Example from image in remote repository: ```console -sudo bpfman load image --image-url quay.io/bpfman-bytecode/xdp_pass:latest xdp --iface vethb2795c7 --priority 100 +sudo bpfman load image --image-url quay.io/bpfman-bytecode/xdp_pass:latest --name "pass" xdp --iface eno3 --priority 100 ``` The `tc` command is similar to `xdp`, but it also requires the `direction` option and the `proceed-on` values are different. 
```console -sudo bpfman load file tc -h +sudo bpfman load file tc --help Install an eBPF program on the TC hook point for a given interface Usage: bpfman load file --path --name tc [OPTIONS] --direction --iface --priority @@ -268,11 +264,11 @@ sudo bpfman load file -p $HOME/src/bpfman/tests/integration-test/bpf/.output/tc_ ``` For the `tc_pass.bpf.o` program loaded with the command above, the name -would be set as shown in the following snippet: +would be set as shown in the following snippet, taken from the function name, not `SEC()`: ```c SEC("classifier/pass") -int accept(struct __sk_buff *skb) +int pass(struct __sk_buff *skb) { { : } @@ -285,49 +281,49 @@ Below are some additional examples of `bpfman load` commands: #### Fentry ```console -sudo bpfman load image --image-url quay.io/bpfman-bytecode/fentry:latest fentry -f do_unlinkat +sudo bpfman load image --image-url quay.io/bpfman-bytecode/fentry:latest --name "test_fentry" fentry -f do_unlinkat ``` #### Fexit ```console -sudo bpfman load image --image-url quay.io/bpfman-bytecode/fexit:latest fexit -f do_unlinkat +sudo bpfman load image --image-url quay.io/bpfman-bytecode/fexit:latest --name "test_fexit" fexit -f do_unlinkat ``` #### Kprobe ```console -sudo bpfman load image --image-url quay.io/bpfman-bytecode/kprobe:latest kprobe -f try_to_wake_up +sudo bpfman load image --image-url quay.io/bpfman-bytecode/kprobe:latest --name "my_kprobe" kprobe -f try_to_wake_up ``` #### Kretprobe ```console -sudo bpfman load image --image-url quay.io/bpfman-bytecode/kretprobe:latest kprobe -f try_to_wake_up -r +sudo bpfman load image --image-url quay.io/bpfman-bytecode/kretprobe:latest --name "my_kretprobe" kprobe -f try_to_wake_up -r ``` #### TC ```console -sudo bpfman load file --path $HOME/src/bpfman/examples/go-tc-counter/bpf_bpfel.o --name "stats"" tc --direction ingress --iface vethb2795c7 --priority 110 +sudo bpfman load file --path $HOME/src/bpfman/examples/go-tc-counter/bpf_x86_bpfel.o --name "stats" tc --direction ingress --iface eno3 --priority 110 ``` #### Uprobe ```console -sudo bpfman load image --image-url quay.io/bpfman-bytecode/uprobe:latest uprobe -f "malloc" -t "libc" +sudo bpfman load image --image-url quay.io/bpfman-bytecode/uprobe:latest --name "my_uprobe" uprobe -f "malloc" -t "libc" ``` #### Uretprobe ```console -sudo bpfman load image --image-url quay.io/bpfman-bytecode/uretprobe:latest uprobe -f "malloc" -t "libc" -r +sudo bpfman load image --image-url quay.io/bpfman-bytecode/uretprobe:latest --name "my_uretprobe" uprobe -f "malloc" -t "libc" -r ``` #### XDP ```console -sudo bpfman load file --path $HOME/src/bpfman/examples/go-xdp-counter/bpf_bpfel.o --name "xdp_stats" xdp --iface vethb2795c7 --priority 35 +sudo bpfman load file --path $HOME/src/bpfman/examples/go-xdp-counter/bpf_x86_bpfel.o --name "xdp_stats" xdp --iface eno3 --priority 35 ``` ### Setting Global Variables in eBPF Programs @@ -335,7 +331,7 @@ sudo bpfman load file --path $HOME/src/bpfman/examples/go-xdp-counter/bpf_bpfel. 
Global variables can be set for any eBPF program type when loading as follows: ```console -sudo bpfman load file -p $HOME/src/bpfman/tests/integration-test/bpf/.output/tc_pass.bpf.o -g GLOBAL_u8=01020304 GLOBAL_u32=0A0B0C0D -n "pass" tc -d ingress -i mynet1 -p 40 +sudo bpfman load file -p $HOME/src/bpfman/tests/integration-test/bpf/.output/tc_pass.bpf.o -g GLOBAL_u8=01 GLOBAL_u32=0A0B0C0D -n "pass" tc -d ingress -i mynet1 -p 40 ``` Note, that when setting global variables, the eBPF program being loaded must @@ -363,15 +359,18 @@ sudo bpfman load file -p $HOME/src/bpfman/tests/integration-test/bpf/.output/xdp ### Sharing Maps Between eBPF Programs -> **WARNING** Currently for the map sharing feature to work the LIBBPF_PIN_BY_NAME -flag **MUST** be set in the shared bpf map definitions. Please see [this aya issue](https://github.com/aya-rs/aya/issues/837) for future work that will change this requirement. +!!! WARNING + Currently for the map sharing feature to work the LIBBPF_PIN_BY_NAME flag **MUST** be set in + the shared bpf map definitions. + Please see [this aya issue](https://github.com/aya-rs/aya/issues/837) for future work that will + change this requirement. To share maps between eBPF programs, first load the eBPF program that owns the maps. One eBPF program must own the maps. ```console -sudo bpfman load file --path $HOME/src/bpfman/examples/go-xdp-counter/bpf_bpfel.o -n "xdp_stats" xdp --iface vethb2795c7 --priority 100 +sudo bpfman load file --path $HOME/src/bpfman/examples/go-xdp-counter/bpf_x86_bpfel.o -n "xdp_stats" xdp --iface eno3 --priority 100 6371 ``` @@ -380,7 +379,7 @@ the program id of the eBPF program that owns the maps using the `--map-owner-id` parameter: ```console -sudo bpfman load file --path $HOME/src/bpfman/examples/go-xdp-counter/bpf_bpfel.o -n "xdp_stats" --map-owner-id 6371 xdp --iface vethff657c7 --priority 100 +sudo bpfman load file --path $HOME/src/bpfman/examples/go-xdp-counter/bpf_x86_bpfel.o -n "xdp_stats" --map-owner-id 6371 xdp --iface eno3 --priority 100 6373 ``` @@ -398,15 +397,15 @@ sudo bpfman get 6371 Bpfman State --------------- Name: xdp_stats - Path: /home/<$USER>/src/bpfman/examples/go-xdp-counter/bpf_bpfel.o + Path: /home/<$USER>/src/bpfman/examples/go-xdp-counter/bpf_x86_bpfel.o Global: None Metadata: None Map Pin Path: /run/bpfman/fs/maps/6371 Map Owner ID: None Map Used By: 6371 6373 - Priority: 50 - Iface: vethff657c7 + Priority: 100 + Iface: eno3 Position: 1 Proceed On: pass, dispatcher_return : @@ -417,15 +416,15 @@ sudo bpfman get 6373 Bpfman State --------------- Name: xdp_stats - Path: /home/<$USER>/src/bpfman/examples/go-xdp-counter/bpf_bpfel.o + Path: /home/<$USER>/src/bpfman/examples/go-xdp-counter/bpf_x86_bpfel.o Global: None Metadata: None Map Pin Path: /run/bpfman/fs/maps/6371 Map Owner ID: 6371 Map Used By: 6371 6373 - Priority: 50 - Iface: vethff657c7 + Priority: 100 + Iface: eno3 Position: 0 Proceed On: pass, dispatcher_return : @@ -519,7 +518,7 @@ sudo bpfman get 6204 Map Owner ID: None Map Used By: 6204 Priority: 100 - Iface: vethff657c7 + Iface: eno3 Position: 0 Direction: eg Proceed On: pipe, dispatcher_return @@ -580,7 +579,11 @@ sudo bpfman list 6202 sys_enter_openat tracepoint 2023-07-17T17:19:09-0400 ``` -## bpfman image pull +## bpfman image + +The `bpfman image` commands contain a set of container image related commands. + +### bpfman image pull The `bpfman image pull` command pulls a given bytecode image for future use by a load command. 
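As a rough illustration only (the option name is assumed to mirror the `--image-url` flag shown for `bpfman load image` above, so confirm with `bpfman image pull --help`), pre-pulling a bytecode image might look like:

```console
sudo bpfman image pull --image-url quay.io/bpfman-bytecode/xdp_pass:latest
```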
@@ -623,7 +626,7 @@ Successfully downloaded bytecode Then when loaded, the local image will be used: ```console -sudo bpfman load image --image-url quay.io/bpfman-bytecode/xdp_pass:latest --pull-policy IfNotPresent xdp --iface vethff657c7 --priority 100 +sudo bpfman load image --image-url quay.io/bpfman-bytecode/xdp_pass:latest --pull-policy IfNotPresent xdp --iface eno3 --priority 100 Bpfman State --------------- Name: pass @@ -635,7 +638,7 @@ sudo bpfman load image --image-url quay.io/bpfman-bytecode/xdp_pass:latest --pul Map Owner ID: None Maps Used By: None Priority: 100 - Iface: vethff657c7 + Iface: eno3 Position: 2 Proceed On: pass, dispatcher_return @@ -655,3 +658,264 @@ sudo bpfman load image --image-url quay.io/bpfman-bytecode/xdp_pass:latest --pul Kernel Allocated Memory (bytes): 4096 Verified Instruction Count: 9 ``` + +### bpfman image build + +The `bpfman image build` command is a utility command that builds and pushes an eBPF program +in a OCI container image leveraging either `docker` or `podman`. +The eBPF program bytecode must already be generated. +This command calls `docker` or `podman` with the proper parameters for building +multi-architecture based images with the proper labels for a OCI container image. + +Since this command is leveraging `docker` and `podman`, a container file (`--container-file` or `-f`) +is required, along with an image tag (`--tag` of `-t`). +In addition, the bytecode to package must be included. +The bytecode can take several forms, but at least one must be provided: + +* `--bytecode` or `-b`: Use this option for a single bytecode object file built for the host architecture. + The value of this parameter is a single bytecode object file. +* `--cilium-ebpf-project` or `-c`: Use this option for a cilium/ebpf based project. + The value of this parameter is a directory that contains multiple object files for different architectures, + where the object files follow the Cilium naming convention with the architecture in the name (i.e. bpf_x86_bpfel.o, + bpf_arm64_bpfel.o, bpf_powerpc_bpfel.o, bpf_s390_bpfeb.o). +* `--bc-386-el` .. `--bc-s390x-eb`: Use this option to add one or more architecture specific bytecode files. + +```console +bpfman image build --help +Build an eBPF bytecode image from local bytecode objects and push to a registry. + +To use, the --container-file and --tag must be included, as well as a pointer to +at least one bytecode file that can be passed in several ways. Use either: + +* --bytecode: for a single bytecode built for the host architecture. + +* --cilium-ebpf-project: for a cilium/ebpf project directory which contains + multiple object files for different architectures. + +* --bc-386-el .. --bc-s390x-eb: to add one or more architecture specific bytecode files. + +Examples: + bpfman image build -f Containerfile.bytecode -t quay.io//go-xdp-counter:test \ + -b ./examples/go-xdp-counter/bpf_x86_bpfel.o + +Usage: bpfman image build [OPTIONS] --tag --container-file <--bytecode |--cilium-ebpf-project |--bc-386-el |--bc-amd64-el |--bc-arm-el |--bc-arm64-el |--bc-loong64-el |--bc-mips-eb |--bc-mipsle-el |--bc-mips64-eb |--bc-mips64le-el |--bc-ppc64-eb |--bc-ppc64le-el |--bc-riscv64-el |--bc-s390x-eb > + +Options: + -t, --tag + Required: Name and optionally a tag in the name:tag format. + Example: --tag quay.io/bpfman-bytecode/xdp_pass:latest + + -f, --container-file + Required: Dockerfile to use for building the image. 
+ Example: --container_file Containerfile.bytecode + + -r, --runtime + Optional: Container runtime to use, works with docker or podman, defaults to docker + Example: --runtime podman + + -b, --bytecode + Optional: bytecode file to use for building the image assuming host architecture. + Example: -b ./examples/go-xdp-counter/bpf_x86_bpfel.o + + -c, --cilium-ebpf-project + Optional: If specified pull multi-arch bytecode files from a cilium/ebpf formatted project + where the bytecode files all contain a standard bpf__<(el/eb)>.o tag. + Example: --cilium-ebpf-project ./examples/go-xdp-counter + + --bc-386-el + Optional: bytecode file to use for building the image assuming amd64 architecture. + Example: --bc-386-el ./examples/go-xdp-counter/bpf_386_bpfel.o + + --bc-amd64-el + Optional: bytecode file to use for building the image assuming amd64 architecture. + Example: --bc-amd64-el ./examples/go-xdp-counter/bpf_x86_bpfel.o + + --bc-arm-el + Optional: bytecode file to use for building the image assuming arm architecture. + Example: --bc-arm-el ./examples/go-xdp-counter/bpf_arm_bpfel.o + + --bc-arm64-el + Optional: bytecode file to use for building the image assuming arm64 architecture. + Example: --bc-arm64-el ./examples/go-xdp-counter/bpf_arm64_bpfel.o + + --bc-loong64-el + Optional: bytecode file to use for building the image assuming loong64 architecture. + Example: --bc-loong64-el ./examples/go-xdp-counter/bpf_loong64_bpfel.o + + --bc-mips-eb + Optional: bytecode file to use for building the image assuming mips architecture. + Example: --bc-mips-eb ./examples/go-xdp-counter/bpf_mips_bpfeb.o + + --bc-mipsle-el + Optional: bytecode file to use for building the image assuming mipsle architecture. + Example: --bc-mipsle-el ./examples/go-xdp-counter/bpf_mipsle_bpfel.o + + --bc-mips64-eb + Optional: bytecode file to use for building the image assuming mips64 architecture. + Example: --bc-mips64-eb ./examples/go-xdp-counter/bpf_mips64_bpfeb.o + + --bc-mips64le-el + Optional: bytecode file to use for building the image assuming mips64le architecture. + Example: --bc-mips64le-el ./examples/go-xdp-counter/bpf_mips64le_bpfel.o + + --bc-ppc64-eb + Optional: bytecode file to use for building the image assuming ppc64 architecture. + Example: --bc-ppc64-eb ./examples/go-xdp-counter/bpf_ppc64_bpfeb.o + + --bc-ppc64le-el + Optional: bytecode file to use for building the image assuming ppc64le architecture. + Example: --bc-ppc64le-el ./examples/go-xdp-counter/bpf_ppc64le_bpfel.o + + --bc-riscv64-el + Optional: bytecode file to use for building the image assuming riscv64 architecture. + Example: --bc-riscv64-el ./examples/go-xdp-counter/bpf_riscv64_bpfel.o + + --bc-s390x-eb + Optional: bytecode file to use for building the image assuming s390x architecture. + Example: --bc-s390x-eb ./examples/go-xdp-counter/bpf_s390x_bpfeb.o + + -h, --help + Print help (see a summary with '-h') +``` + +Below are some different examples of building images. +Note that `sudo` is not required. +This command also pushed the image to a registry, so user must already be logged into the registry. 
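Before running the examples below, make sure the push can succeed by logging in to the target registry first. A minimal sketch, assuming quay.io as the registry and docker as the runtime (`podman login` works the same way):

```console
docker login quay.io
```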
+ +Example of single bytecode image: + +```console +bpfman image build -f Containerfile.bytecode -t quay.io/$QUAY_USER/go-xdp-counter:test -b ./examples/go-xdp-counter/bpf_x86_bpfel.o +``` + +Example of directory with Cilium generated bytecode objects: + +```console +bpfman image build -f Containerfile.bytecode.multi.arch -t quay.io/$QUAY_USER/go-xdp-counter:test -c ./examples/go-xdp-counter/ +``` + +### bpfman image generate-build-args + +The `bpfman image generate-build-args` command is a utility command that generates the labels used +to package eBPF program bytecode in a OCI container image. +The eBPF program bytecode must already be generated. + +This command requires the bytecode to package that would be packaged in a OCI container image. +The bytecode can take several forms, but at least one must be provided: + +* `--bytecode` or `-b`: Use this option for a single bytecode object file built for the host architecture. + The value of this parameter is a single bytecode object file. +* `--cilium-ebpf-project` or `-c`: Use this option for a cilium/ebpf based project. + The value of this parameter is a directory that contains multiple object files for different architectures, + where the object files follow the Cilium naming convention with the architecture in the name (i.e. bpf_x86_bpfel.o, + bpf_arm64_bpfel.o, bpf_powerpc_bpfel.o, bpf_s390_bpfeb.o). +* `--bc-386-el` .. `--bc-s390x-eb`: Use this option to add one or more architecture specific bytecode files. + +```console +bpfman image generate-build-args --help +Generate the OCI image labels for a given bytecode file. + +To use, the --container-file and --tag must be included, as well as a pointer to +at least one bytecode file that can be passed in several ways. Use either: + +* --bytecode: for a single bytecode built for the host architecture. + +* --cilium-ebpf-project: for a cilium/ebpf project directory which contains + multiple object files for different architectures. + +* --bc-386-el .. --bc-s390x-eb: to add one or more architecture specific bytecode files. + +Examples: + bpfman image generate-build-args --bc-amd64-el ./examples/go-xdp-counter/bpf_x86_bpfel.o + +Usage: bpfman image generate-build-args <--bytecode |--cilium-ebpf-project |--bc-386-el |--bc-amd64-el |--bc-arm-el |--bc-arm64-el |--bc-loong64-el |--bc-mips-eb |--bc-mipsle-el |--bc-mips64-eb |--bc-mips64le-el |--bc-ppc64-eb |--bc-ppc64le-el |--bc-riscv64-el |--bc-s390x-eb > + +Options: + -b, --bytecode + Optional: bytecode file to use for building the image assuming host architecture. + Example: -b ./examples/go-xdp-counter/bpf_x86_bpfel.o + + -c, --cilium-ebpf-project + Optional: If specified pull multi-arch bytecode files from a cilium/ebpf formatted project + where the bytecode files all contain a standard bpf__<(el/eb)>.o tag. + Example: --cilium-ebpf-project ./examples/go-xdp-counter + + --bc-386-el + Optional: bytecode file to use for building the image assuming amd64 architecture. + Example: --bc-386-el ./examples/go-xdp-counter/bpf_386_bpfel.o + + --bc-amd64-el + Optional: bytecode file to use for building the image assuming amd64 architecture. + Example: --bc-amd64-el ./examples/go-xdp-counter/bpf_x86_bpfel.o + + --bc-arm-el + Optional: bytecode file to use for building the image assuming arm architecture. + Example: --bc-arm-el ./examples/go-xdp-counter/bpf_arm_bpfel.o + + --bc-arm64-el + Optional: bytecode file to use for building the image assuming arm64 architecture. 
+ Example: --bc-arm64-el ./examples/go-xdp-counter/bpf_arm64_bpfel.o + + --bc-loong64-el + Optional: bytecode file to use for building the image assuming loong64 architecture. + Example: --bc-loong64-el ./examples/go-xdp-counter/bpf_loong64_bpfel.o + + --bc-mips-eb + Optional: bytecode file to use for building the image assuming mips architecture. + Example: --bc-mips-eb ./examples/go-xdp-counter/bpf_mips_bpfeb.o + + --bc-mipsle-el + Optional: bytecode file to use for building the image assuming mipsle architecture. + Example: --bc-mipsle-el ./examples/go-xdp-counter/bpf_mipsle_bpfel.o + + --bc-mips64-eb + Optional: bytecode file to use for building the image assuming mips64 architecture. + Example: --bc-mips64-eb ./examples/go-xdp-counter/bpf_mips64_bpfeb.o + + --bc-mips64le-el + Optional: bytecode file to use for building the image assuming mips64le architecture. + Example: --bc-mips64le-el ./examples/go-xdp-counter/bpf_mips64le_bpfel.o + + --bc-ppc64-eb + Optional: bytecode file to use for building the image assuming ppc64 architecture. + Example: --bc-ppc64-eb ./examples/go-xdp-counter/bpf_ppc64_bpfeb.o + + --bc-ppc64le-el + Optional: bytecode file to use for building the image assuming ppc64le architecture. + Example: --bc-ppc64le-el ./examples/go-xdp-counter/bpf_ppc64le_bpfel.o + + --bc-riscv64-el + Optional: bytecode file to use for building the image assuming riscv64 architecture. + Example: --bc-riscv64-el ./examples/go-xdp-counter/bpf_riscv64_bpfel.o + + --bc-s390x-eb + Optional: bytecode file to use for building the image assuming s390x architecture. + Example: --bc-s390x-eb ./examples/go-xdp-counter/bpf_s390x_bpfeb.o + + -h, --help + Print help (see a summary with '-h') +``` + +Below are some different examples of generating build arguments. +Note that `sudo` is not required. + +Example of single bytecode image: + +```console +$ bpfman image generate-build-args -b ./examples/go-xdp-counter/bpf_x86_bpfel.o +BYTECODE_FILE=./examples/go-xdp-counter/bpf_x86_bpfel.o +PROGRAMS={"xdp_stats":"xdp"} +MAPS={"xdp_stats_map":"per_cpu_array"} +``` + +Example of directory with Cilium generated bytecode objects: + +```console +$ bpfman image generate-build-args -c ./examples/go-xdp-counter/ +BC_AMD64_EL=./examples/go-xdp-counter/bpf_x86_bpfel.o +BC_ARM_EL=./examples/go-xdp-counter/bpf_arm64_bpfel.o +BC_PPC64LE_EL=./examples/go-xdp-counter/bpf_powerpc_bpfel.o +BC_S390X_EB=./examples/go-xdp-counter/bpf_s390_bpfeb.o +PROGRAMS={"xdp_stats":"xdp"} +MAPS={"xdp_stats_map":"per_cpu_array"} +``` From 6678e4d3d2ff19212fce4ac3025ff16f40da1f55 Mon Sep 17 00:00:00 2001 From: Billy McFall <22157057+Billy99@users.noreply.github.com> Date: Wed, 2 Oct 2024 11:43:18 -0400 Subject: [PATCH 5/5] Address comments To be squashed after review. 
Signed-off-by: Billy McFall <22157057+Billy99@users.noreply.github.com> --- docs/developer-guide/develop-operator.md | 30 ++++++---- docs/developer-guide/image-build.md | 34 ++++++------ docs/developer-guide/testing.md | 67 +++++++++++------------ docs/developer-guide/xdp-overview.md | 2 +- docs/getting-started/building-bpfman.md | 16 +++--- docs/getting-started/cli-guide.md | 34 ++++++++---- docs/getting-started/example-bpf-local.md | 4 +- docs/getting-started/launching-bpfman.md | 2 +- docs/getting-started/running-rpm.md | 6 +- 9 files changed, 110 insertions(+), 85 deletions(-) diff --git a/docs/developer-guide/develop-operator.md b/docs/developer-guide/develop-operator.md index 8be668c50..e85bcd621 100644 --- a/docs/developer-guide/develop-operator.md +++ b/docs/developer-guide/develop-operator.md @@ -18,7 +18,7 @@ For building and deploying the bpfman-operator simply see the attached `make hel output. ```bash -make help +$ make help Usage: make @@ -44,8 +44,10 @@ Development generate-typed-clients Generate typed client code generate-typed-listers Generate typed listers code generate-typed-informers Generate typed informers code + vendors Refresh vendors directory. fmt Run go fmt against code. verify Verify all the autogenerated code + lint Run linter (golangci-lint). test Run Unit tests. test-integration Run Integration tests. bundle Generate bundle manifests and metadata, then validate generated files. @@ -53,11 +55,14 @@ Development Build build Build bpfman-operator and bpfman-agent binaries. - build-images Build bpfman, bpfman-agent, and bpfman-operator images. - push-images Push bpfman, bpfman-agent, bpfman-operator images. - load-images-kind Load bpfman, bpfman-agent, and bpfman-operator images into the running local kind devel cluster. + build-images Build bpfman-agent and bpfman-operator images. + build-operator-image Build bpfman-operator image. + build-agent-image Build bpfman-agent image. + push-images Push bpfman-agent and bpfman-operator images. + load-images-kind Load bpfman-agent, and bpfman-operator images into the running local kind devel cluster. bundle-build Build the bundle image. bundle-push Push the bundle image. + catalog-update Generate catalog yaml file. catalog-build Build a catalog image. catalog-push Push a catalog image. @@ -67,6 +72,7 @@ CRD Deployment Vanilla K8s Deployment setup-kind Setup Kind cluster + destroy-kind Destroy Kind cluster deploy Deploy bpfman-operator to the K8s cluster specified in ~/.kube/config with the csi driver initialized. undeploy Undeploy bpfman-operator from the K8s cluster specified in ~/.kube/config. Call with ignore-not-found=true to ignore resource not found errors during deletion. kind-reload-images Reload locally build images into a kind cluster and restart the ds and deployment so they're picked up. @@ -75,6 +81,8 @@ Vanilla K8s Deployment Openshift Deployment deploy-openshift Deploy bpfman-operator to the Openshift cluster specified in ~/.kube/config. undeploy-openshift Undeploy bpfman-operator from the Openshift cluster specified in ~/.kube/config. Call with ignore-not-found=true to ignore resource not found errors during deletion. + catalog-deploy Deploy a catalog image. + catalog-undeploy Undeploy a catalog image. 
``` ### Project Layout @@ -187,7 +195,7 @@ If any of the files are modified, then regenerate these files using: ```bash -cd bpfman/bpfman-operator/ +cd bpfman-operator/ make generate ``` @@ -203,14 +211,14 @@ During development, it may be quicker to find and fix build errors by just build To build the code: ```bash -cd bpfman/bpfman-operator/ +cd bpfman-operator/ make build ``` To build the container images, run the following command: ```bash -cd bpfman/bpfman-operator/ +cd bpfman-operator/ make build-images ``` @@ -234,7 +242,7 @@ launch bpfman in a Kubernetes cluster. To run locally in a Kind cluster with an up to date build simply run: ```bash -cd bpfman/bpfman-operator/ +cd bpfman-operator/ make run-on-kind ``` @@ -243,14 +251,14 @@ The `make run-on-kind` will run the `make build-images` if the images do not exi Then rebuild and load a fresh build run: ```bash -cd bpfman/bpfman-operator/ +cd bpfman-operator/ make build-images make kind-reload-images ``` -Which will rebuild the bpfman-operator, bpfman-agent, and bpfman images and load them into the kind cluster. +Which will rebuild the bpfman-operator and bpfman-agent images, and load them into the kind cluster. -By default, the `make run-on-kind` uses the `quay.io/bpfman/bpfman*` images described above. +By default, the `make run-on-kind` uses the local images described above. The container images used for `bpfman`, `bpfman-agent`, and `bpfman-operator` can also be manually configured: ```bash diff --git a/docs/developer-guide/image-build.md b/docs/developer-guide/image-build.md index a7ec87d7e..c6e22831d 100644 --- a/docs/developer-guide/image-build.md +++ b/docs/developer-guide/image-build.md @@ -1,8 +1,9 @@ # bpfman Container Images Container images for `bpfman` are automatically built and pushed to `quay.io/` under the -`:latest` tag whenever code is merged into the `main` branch of the `github.com/bpfman/bpfman` -and `github.com/bpfman/bpfman-operator` repositories. +`:latest` tag whenever code is merged into the `main` branch of the +[bpfman](https://github.com/bpfman/bpfman) and [bpfman-operator](https://github.com/bpfman/bpfman-operator) +repositories. * [quay.io/bpfman](https://quay.io/organization/bpfman): This repository contains images needed to run bpfman. @@ -45,7 +46,7 @@ to the bpfman-operator or bpfman-agent container images. The local Makefile will build and load both images based on the current changes: ```sh -cd $HOME/src/bpfman-operator/ +cd bpfman-operator/ make build-images make run-on-kind @@ -59,7 +60,7 @@ By default, bpfman-agent uses `quay.io/bpfman/bpfman:latest`. To build the bpfman binaries in a container image, run: ```sh -cd $HOME/src/bpfman/ +cd bpfman/ docker build -f ./Containerfile.bpfman.local . -t quay.io/$QUAY_USER/bpfman:test ``` @@ -69,7 +70,7 @@ Next, build and deploy the bpfman-operator and bpfman-agent with the locally bui image. ```sh -cd $HOME/src/bpfman-operator/ +cd bpfman-operator/ BPFMAN_IMG=quay.io/$QUAY_USER/bpfman:test make build-images BPFMAN_IMG=quay.io/$QUAY_USER/bpfman:test make run-on-kind @@ -77,13 +78,14 @@ BPFMAN_IMG=quay.io/$QUAY_USER/bpfman:test make run-on-kind To use, the Kind cluster must have access to the image. 
So either the image needs to be pushed to a registry and made public (make -public via the repo GUI after the push): +public via the repo GUI after the push) before executing the `make run-on-kind` +command shown above: ```sh docker push quay.io/$QUAY_USER/bpfman:test ``` -OR load into kind cluster: +OR it can be loaded into the kind cluster after the cluster is running: ```sh kind load docker-image quay.io/$QUAY_USER/bpfman:test --name bpfman-deployment @@ -142,8 +144,8 @@ bpfman image build -f Containerfile.bytecode.multi.arch -t quay.io/$QUAY_USER/tc bpfman image build -f Containerfile.bytecode.multi.arch -t quay.io/$QUAY_USER/xdp-dispatcher:test -c .output/xdp_dispatcher_v2.bpf/ ``` -!!! NOTE - To build images for multiple architectures on a local system, docker may need additional configuration +!!! Note + To build images for multiple architectures on a local system, docker (or podman) may need additional configuration settings to allow for caching of non-native images. See [https://docs.docker.com/build/building/multi-platform/](https://docs.docker.com/build/building/multi-platform/) for more details. @@ -208,8 +210,8 @@ $ make build-bc-xdp USER_BC=$QUAY_USER TAG_BC=test-bc PLATFORM=linux/amd64,linux => pushing quay.io/$QUAY_USER/go-xdp-counter:test-bc with docker ``` -!!! NOTE - To build images for multiple architectures on a local system, docker may need additional configuration +!!! Note + To build images for multiple architectures on a local system, docker (or podman) may need additional configuration settings to allow for caching of non-native images. See [https://docs.docker.com/build/building/multi-platform/](https://docs.docker.com/build/building/multi-platform/) for more details. @@ -220,7 +222,7 @@ If an example userspace container image needs to be built locally, use the follo build the userspace container images, (optionally passing the `USER_US` and `TAG_US` for the image): ```sh -cd ~/src/bpfman/examples/ +cd bpfman/examples/ # Build all images $ make build-us-images USER_US=$QUAY_USER TAG_US=test-us @@ -248,8 +250,8 @@ $ make build-us-xdp USER_US=$QUAY_USER TAG_US=test-us PLATFORM=linux/amd64,linux => pushing quay.io/$QUAY_USER/go-xdp-counter:test-us with docker ``` -!!! NOTE - To build images for multiple architectures on a local system, docker may need additional configuration +!!! Note + To build images for multiple architectures on a local system, docker (or podman) may need additional configuration settings to allow for caching of non-native images. See [https://docs.docker.com/build/building/multi-platform/](https://docs.docker.com/build/building/multi-platform/) for more details. @@ -270,7 +272,7 @@ integration tests, several steps need to be performed. ## Signing Container Images -It is encouraged to sign the eBPF container images, which can easily be done using +Signing eBPF container images is encouraged and can be easily done using [cosign](https://docs.sigstore.dev/signing/quickstart/). Below is a summary of the steps needed to sign an image. @@ -282,7 +284,7 @@ go install github.com/sigstore/cosign/v2/cmd/cosign@latest Then sign the image. The `cosign` command will generate a URL. -Follow the URL to a `sigstore` login and login with either GitHub, Google to Microsoft. +Follow the `sigstore` URL and login with either GitHub, Google to Microsoft. That will generate a verification code that will complete the `cosign` command. 
```console diff --git a/docs/developer-guide/testing.md b/docs/developer-guide/testing.md index e0ac9be91..e47f0b312 100644 --- a/docs/developer-guide/testing.md +++ b/docs/developer-guide/testing.md @@ -10,7 +10,8 @@ Unit testing is executed as part of the `build` job by running the following command in the top-level bpfman directory. ``` - cargo test +cd bpfman/ +cargo test ``` ## Go Example Tests @@ -25,6 +26,7 @@ The full set of basic integration tests are executed by running the following command in the top-level bpfman directory. ```bash +cd bpfman/ cargo xtask integration-test ``` @@ -72,17 +74,7 @@ the eBPF test programs can be found in the `tests/integration-test/bpf` directory. These programs are compiled by executing `cargo xtask build-ebpf --libbpf-dir ` -We also load some tests from local files to test the `load-from-file` option. - -The `bpf` directory also contains a script called `build_push_images.sh` that -can be used to build and push new images to quay if the code is changed. -Images get pushed automatically when code gets merged, however, it's still -useful to be able to push them manually sometimes. For example, when a new test -case requires that both the eBPF and integration code be changed together. It -is also a useful template for new eBPF test code that needs to be pushed. -However, as a word of caution, be aware that existing integration tests will -start using the new programs immediately, so this should only be done if the -modified program is backward compatible. +We also load some tests from local files to test the `bpfman load file` option. ## Kubernetes Operator Tests @@ -91,42 +83,49 @@ modified program is backward compatible. To run all of the unit tests defined in the bpfman-operator controller code run `make test` in the bpfman-operator directory. +```bash +cd bpfman-operator/ +make test +``` + ### Kubernetes Operator Integration Tests To run the Kubernetes Operator integration tests locally: 1. Build the example test code userspace images locally. -```bash - # in bpfman/examples + ```bash + cd bpfman/examples/ make build-us-images -``` + ``` 2. (optional) build the bytecode images - In order to rebuild all of the bytecode images for a PR, ask a maintainer to do so, - they will be built and generate by github actions with the tag - `quay.io/bpfman-bytecode/:` + In order to rebuild all of the bytecode images for a PR, ask a maintainer to do so, + they will be built and generate by github actions with the tag + `quay.io/bpfman-bytecode/:` -3. Build the bpfman images locally with the `int-test` tag. +3. Build the bpfman images locally with a unique tag, for example: `int-test` -```bash - # in bpfman/bpfman-operator - BPFMAN_AGENT_IMG=quay.io/bpfman/bpfman-agent:int-test BPFMAN_IMG=quay.io/bpfman/bpfman:int-test BPFMAN_OPERATOR_IMG=quay.io/bpfman/bpfman-operator:int-test make build-images -``` + ```bash + cd bpfman-operator/ + BPFMAN_AGENT_IMG=quay.io/bpfman/bpfman-agent:int-test BPFMAN_OPERATOR_IMG=quay.io/bpfman/bpfman-operator:int-test make build-images + ``` -4. Run the integration test suite. +4. 
Run the integration test suite with the images from the previous step:
+
+    ```bash
+    cd bpfman-operator/
+    BPFMAN_AGENT_IMG=quay.io/bpfman/bpfman-agent:int-test BPFMAN_OPERATOR_IMG=quay.io/bpfman/bpfman-operator:int-test make test-integration
+    ```
 
-Additionally the integration test can be configured with the following environment variables:
+    If an updated `bpfman` image is required, build it separately and pass it to `make test-integration` using `BPFMAN_IMG`.
+    See [Locally Build bpfman Container Image](./image-build.md#locally-build-bpfman-container-image).
 
-* **KEEP_TEST_CLUSTER**: If set to `true` the test cluster will not be torn down
-  after the integration test suite completes.
-* **USE_EXISTING_KIND_CLUSTER**: If this is set to the name of the existing kind
-  cluster the integration test suite will use that cluster instead of creating a
-  new one.
+    Additionally, the integration tests can be configured with the following environment variables:
+    * **KEEP_TEST_CLUSTER**: If set to `true` the test cluster will not be torn down
+      after the integration test suite completes.
+    * **USE_EXISTING_KIND_CLUSTER**: If this is set to the name of the existing kind
+      cluster the integration test suite will use that cluster instead of creating a
+      new one.
diff --git a/docs/developer-guide/xdp-overview.md b/docs/developer-guide/xdp-overview.md
index 158bdf0e4..e7fab0c5f 100644
--- a/docs/developer-guide/xdp-overview.md
+++ b/docs/developer-guide/xdp-overview.md
@@ -17,7 +17,7 @@ XDP programs on a given interface.
 This tutorial will show you how to use `bpfman` to load multiple XDP programs
 on an interface.
 
-!!! Note:
+!!! Note
     The TC hook point is also associated with an interface.
     Within bpfman, TC is implemented in a similar fashion to XDP in that it uses a dispatcher with
     stub functions.
diff --git a/docs/getting-started/building-bpfman.md b/docs/getting-started/building-bpfman.md
index 13f16228e..f1e81c928 100644
--- a/docs/getting-started/building-bpfman.md
+++ b/docs/getting-started/building-bpfman.md
@@ -36,7 +36,7 @@ Major kernel features leveraged by bpfman:
 * **Relaxed CAP_BPF Requirement:** Prior to Kernel 5.19, all eBPF system calls required CAP_BPF.
   This required userspace programs that wanted to access eBPF maps to have the CAP_BPF Linux capability.
   With the kernel 5.19 change, CAP_BPF is only required for load and unload requests.
-* **TCX:** TCX support was added in Kernel 6.6, and added to bpfman in v0.5.2.
+* **TCX:** TCX support was added in Kernel 6.6 and is expected to be added to bpfman in v0.5.2.
   TCX has performance improvements over TC and adds support in the kernel for multiple TCX programs
   to run on a given TC hook point.
 
@@ -188,7 +188,7 @@ See [kind](https://kind.sigs.k8s.io/) for documentation and installation instruc
     ([kubernetes/kubernetes#112597](https://github.com/kubernetes/kubernetes/pull/112597))
     that addresses a gRPC Protocol Error that was seen in the CSI client code and it doesn't appear to have
     been backported.
-    It is recommended to install kind v0.20.0 or later.
+    kind v0.20.0 or later is recommended.
 
 If the following error is seen, it means there is an older version of Kubernetes
 running and it needs to be upgraded.
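Before digging into that error, it can help to confirm which kind release is installed and which Kubernetes version the cluster nodes are actually running. A quick check with standard tooling (this assumes kubectl's current context points at the kind cluster):

```console
kind version
kubectl version
kubectl get nodes
```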
@@ -276,8 +276,8 @@ taplo fmt --check
 ## Clone the bpfman and bpfman-operator Repositories

 You can build and run bpfman from anywhere.
-For simplicity throughout this documentation, all examples will assume
-`$HOME/src/bpfman/` and `$HOME/src/bpfman-operator/`.
+For simplicity throughout this documentation, all examples will reference
+`bpfman/` and `bpfman-operator/` to indicate which repository is being used.
 bpfman-operator only needs to be cloned if deploying in Kubernetes.

 ```
@@ -292,7 +292,7 @@ git clone https://github.com/bpfman/bpfman-operator.git
 If you are building bpfman for the first time OR the eBPF code has changed:

 ```console
-cd ~/src/bpfman/
+cd bpfman/
 cargo xtask build-ebpf --libbpf-dir /path/to/libbpf
 ```

@@ -314,7 +314,7 @@ cargo build
 Optionally, to build the CLI TAB completion files, run the following command:

 ```console
-cd ~/src/bpfman/
+cd bpfman/
 cargo xtask build-completion
 ```

@@ -354,7 +354,7 @@ For these shells, generated file must be manually installed.
 Optionally, to build the CLI Manpage files, run the following command:

 ```console
-cd ~/src/bpfman/
+cd bpfman/
 cargo xtask build-man-page
 ```

@@ -374,7 +374,7 @@ Once installed, use `man` to view the pages.
 man bpfman list
 ```

-!!! NOTE
+!!! Note
     `bpfman` commands with subcommands (specifically `bpfman load`) have `-` in
     the manpage subcommand generation.
     So use `man bpfman load-file`, `man bpfman load-image`, `man bpfman load-image-xdp`,
diff --git a/docs/getting-started/cli-guide.md b/docs/getting-started/cli-guide.md
index 6f717ed53..eed8a66ac 100644
--- a/docs/getting-started/cli-guide.md
+++ b/docs/getting-started/cli-guide.md
@@ -213,7 +213,8 @@ Options:
 Example loading from local file (`--path` is the fully qualified path):

 ```console
-sudo bpfman load file --path $HOME/src/bpfman/tests/integration-test/bpf/.output/xdp_pass.bpf.o --name "pass" xdp --iface eno3 --priority 100
+cd bpfman/
+sudo bpfman load file --path tests/integration-test/bpf/.output/xdp_pass.bpf.o --name "pass" xdp --iface eno3 --priority 100
 ```

 Example from image in remote repository:
@@ -260,7 +261,8 @@ Options:
 The following is an example of the `tc` command using short option names:

 ```console
-sudo bpfman load file -p $HOME/src/bpfman/tests/integration-test/bpf/.output/tc_pass.bpf.o -n "pass" tc -d ingress -i mynet1 -p 40
+cd bpfman/
+sudo bpfman load file -p tests/integration-test/bpf/.output/tc_pass.bpf.o -n "pass" tc -d ingress -i mynet1 -p 40
 ```

 For the `tc_pass.bpf.o` program loaded with the command above, the name
@@ -305,7 +307,8 @@ sudo bpfman load image --image-url quay.io/bpfman-bytecode/kretprobe:latest --na
 #### TC

 ```console
-sudo bpfman load file --path $HOME/src/bpfman/examples/go-tc-counter/bpf_x86_bpfel.o --name "stats" tc --direction ingress --iface eno3 --priority 110
+cd bpfman/
+sudo bpfman load file --path examples/go-tc-counter/bpf_x86_bpfel.o --name "stats" tc --direction ingress --iface eno3 --priority 110
 ```

 #### Uprobe
@@ -323,7 +326,8 @@ sudo bpfman load image --image-url quay.io/bpfman-bytecode/uretprobe:latest --na
 #### XDP

 ```console
-sudo bpfman load file --path $HOME/src/bpfman/examples/go-xdp-counter/bpf_x86_bpfel.o --name "xdp_stats" xdp --iface eno3 --priority 35
+cd bpfman/
+sudo bpfman load file --path examples/go-xdp-counter/bpf_x86_bpfel.o --name "xdp_stats" xdp --iface eno3 --priority 35
 ```

 ### Setting Global Variables in eBPF Programs
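As a small aside to the load examples above, once any of the `bpfman load file` commands has been run, the loaded programs can be confirmed with the CLI's list command, which is also used later in these docs; this is only a sketch and the output is omitted here:

```console
sudo bpfman list
```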
@@ -331,10 +335,11 @@ Global variables can be set for any eBPF
 program type when loading as follows:

 ```console
-sudo bpfman load file -p $HOME/src/bpfman/tests/integration-test/bpf/.output/tc_pass.bpf.o -g GLOBAL_u8=01 GLOBAL_u32=0A0B0C0D -n "pass" tc -d ingress -i mynet1 -p 40
+cd bpfman/
+sudo bpfman load file -p tests/integration-test/bpf/.output/tc_pass.bpf.o -g GLOBAL_u8=01 GLOBAL_u32=0A0B0C0D -n "pass" tc -d ingress -i mynet1 -p 40
 ```

-Note, that when setting global variables, the eBPF program being loaded must
+Note that when setting global variables, the eBPF program being loaded must
 have global variables named with the strings given, and the size of the value
 provided must match the size of the given variable. For example, the above
 command can be used to update the following global variables in an eBPF program.
@@ -354,12 +359,13 @@ program's return value.
 For example, the default `proceed-on` configuration for an `xdp` program can be modified as follows:

 ```console
-sudo bpfman load file -p $HOME/src/bpfman/tests/integration-test/bpf/.output/xdp_pass.bpf.o -n "pass" xdp -i mynet1 -p 30 --proceed-on drop pass dispatcher_return
+cd bpfman/
+sudo bpfman load file -p tests/integration-test/bpf/.output/xdp_pass.bpf.o -n "pass" xdp -i mynet1 -p 30 --proceed-on drop pass dispatcher_return
 ```

 ### Sharing Maps Between eBPF Programs

-!!! WARNING
+!!! Warning
     Currently for the map sharing feature to work the LIBBPF_PIN_BY_NAME flag **MUST**
     be set in the shared bpf map definitions. Please see
     [this aya issue](https://github.com/aya-rs/aya/issues/837) for future work that will
@@ -370,7 +376,8 @@ maps.
 One eBPF program must own the maps.

 ```console
-sudo bpfman load file --path $HOME/src/bpfman/examples/go-xdp-counter/bpf_x86_bpfel.o -n "xdp_stats" xdp --iface eno3 --priority 100
+cd bpfman/
+sudo bpfman load file --path examples/go-xdp-counter/bpf_x86_bpfel.o -n "xdp_stats" xdp --iface eno3 --priority 100
 6371
 ```

@@ -379,7 +386,8 @@ the program id of the eBPF program that owns the maps using the
 `--map-owner-id` parameter:

 ```console
-sudo bpfman load file --path $HOME/src/bpfman/examples/go-xdp-counter/bpf_x86_bpfel.o -n "xdp_stats" --map-owner-id 6371 xdp --iface eno3 --priority 100
+cd bpfman/
+sudo bpfman load file --path examples/go-xdp-counter/bpf_x86_bpfel.o -n "xdp_stats" --map-owner-id 6371 xdp --iface eno3 --priority 100
 6373
 ```

@@ -794,6 +802,12 @@ Example of directory with Cilium generated bytecode objects:
 bpfman image build -f Containerfile.bytecode.multi.arch -t quay.io/$QUAY_USER/go-xdp-counter:test -c ./examples/go-xdp-counter/
 ```

+!!! Note
+    To build images for multiple architectures on a local system, docker (or podman) may need additional configuration
+    settings to allow for caching of non-native images. See
+    [https://docs.docker.com/build/building/multi-platform/](https://docs.docker.com/build/building/multi-platform/)
+    for more details.
+
 ### bpfman image generate-build-args

 The `bpfman image generate-build-args` command is a utility command that generates the labels used
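The note added above only links to the upstream multi-platform documentation. As one possible approach for a docker-based setup (podman differs), cross-architecture builds can be enabled by registering QEMU emulators and creating a buildx builder; this is only a sketch based on the linked Docker documentation, and the builder name `bpfman-builder` is a placeholder:

```console
docker run --privileged --rm tonistiigi/binfmt --install all
docker buildx create --name bpfman-builder --use
```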
diff --git a/docs/getting-started/example-bpf-local.md b/docs/getting-started/example-bpf-local.md
index 6aa902a49..4d6c51b69 100644
--- a/docs/getting-started/example-bpf-local.md
+++ b/docs/getting-started/example-bpf-local.md
@@ -50,10 +50,10 @@ The output should show the count and total bytes of packets as they pass through
 interface as shown below:

 ```console
-cd ~/src/bpfman/examples/go-xdp-counter/
+cd bpfman/examples/go-xdp-counter/
 go run -exec sudo . --iface eno3
-2023/07/17 17:43:58 Using Input: Interface=eno3 Priority=50 Source=/home/<$USER>/src/bpfman/examples/go-xdp-counter/bpf_bpfel.o
+2023/07/17 17:43:58 Using Input: Interface=eno3 Priority=50 Source=/home/$USER/src/bpfman/examples/go-xdp-counter/bpf_bpfel.o
 2023/07/17 17:43:58 Program registered with id 6211
 2023/07/17 17:44:01 4 packets received
 2023/07/17 17:44:01 580 bytes received
diff --git a/docs/getting-started/launching-bpfman.md b/docs/getting-started/launching-bpfman.md
index 6a9d54614..ff5a0234f 100644
--- a/docs/getting-started/launching-bpfman.md
+++ b/docs/getting-started/launching-bpfman.md
@@ -52,7 +52,7 @@ sudo RUST_LOG=info bpfman list
 ```

 The examples (see [Deploying Example eBPF Programs On Local Host](./example-bpf-local.md))
-are Go based programs, so they are building and sending RPC messaged to the rust based binary
+are Go based programs, so they build and send RPC messages to the Rust based binary
 `bpfman-rpc`, which in turn calls the `bpfman` library.

 ```console
diff --git a/docs/getting-started/running-rpm.md b/docs/getting-started/running-rpm.md
index 7465b9129..d68cf5eab 100644
--- a/docs/getting-started/running-rpm.md
+++ b/docs/getting-started/running-rpm.md
@@ -36,8 +36,10 @@ sudo dnf copr enable @ebpf-sig/bpfman-next
 ```

 !!! Note
-    If both the bpfman and bpfman-next copr repos are enabled DNF will
-    automatically pull from bpfman-next. To disable one or the other simply run
+    If both the `bpfman` and `bpfman-next` copr repos are enabled, `dnf` will
+    automatically pull from `bpfman-next`.
+    Either repo can be disabled.
+    For example, to disable `bpfman-next`, run:

     ```console
     sudo dnf copr disable @ebpf-sig/bpfman-next