This guide walks you through installing and configuring a self-hosted GitHub Actions runner on a Kubernetes cluster.
- Repository Access: Create a GitHub Organization and fork one of your repos to it.
- Kubernetes Cluster: Use an existing Kubernetes cluster or set up a new one (Minikube, GKE, EKS or AKS).
- Kubectl: Install the Kubernetes command-line tool kubectl and configure it to connect to your cluster.
- Helm: Install Helm, the Kubernetes package manager.
Note: If you are using Minikube, install Docker and select it as your preferred driver.
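As a quick sanity check before starting, the sketch below (an illustrative addition, not part of the official setup) confirms the prerequisite tools are on your PATH:

```shell
# Sanity check: confirm each prerequisite binary is installed.
# Tool names are the standard binaries; adjust the list if yours differ.
for tool in kubectl helm docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```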
The following steps are taken to install a Self-Hosted GitHub Actions Runner on a Kubernetes Cluster:
Cert-Manager must be installed before the runner because it secures communication within the Kubernetes cluster, especially for the Actions Runner Controller (ARC) components: it handles webhook validation, component authentication, and the automated certificate lifecycle.
- Run the following command to install Cert-Manager on your cluster.
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.2/cert-manager.yaml
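Before moving on, you can confirm the Cert-Manager pods came up. A minimal sketch, assuming the default cert-manager namespace; the check_running helper is illustrative, not part of Cert-Manager:

```shell
# Reads `kubectl get pods` output on stdin and exits non-zero
# if any pod row reports a status other than Running.
check_running() {
  awk 'NR > 1 && $3 != "Running" { bad = 1 } END { exit bad }'
}

kubectl get pods --namespace cert-manager | check_running \
  && echo "cert-manager is up" \
  || echo "cert-manager pods are still starting"
```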
Create a personal access token (classic) with the following required scopes:
- Repository runners: repo
- Organization runners: admin:org
Copy the generated personal access token; it will be used when installing the Actions Runner Controller.
The command below uses Helm to install the Actions Runner Controller in a Kubernetes cluster. It sets up the necessary components in a specified namespace, configures authentication with GitHub, and waits for the installation to complete. This controller lets you run GitHub Actions runners in your Kubernetes cluster.
- Insert your personal access token before executing the command.
helm repo add actions-runner-controller https://actions-runner-controller.github.io/actions-runner-controller
helm upgrade --install --namespace actions-runner-system --create-namespace \
--set=authSecret.create=true \
--set=authSecret.github_token="YOUR_GITHUB_TOKEN" \
--wait actions-runner-controller actions-runner-controller/actions-runner-controller
- The installation output includes a port-forward command:
kubectl --namespace actions-runner-system port-forward $POD_NAME 8080:$CONTAINER_PORT
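The $POD_NAME above refers to the controller pod. One way to capture it (a sketch; the label selector assumes the chart's standard app.kubernetes.io/name label):

```shell
# Look up the controller pod's name so it can be passed to port-forward.
POD_NAME=$(kubectl get pods --namespace actions-runner-system \
  -l app.kubernetes.io/name=actions-runner-controller \
  -o jsonpath='{.items[0].metadata.name}')
echo "$POD_NAME"
```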
- Open localhost:8080 in your browser. It will display "Client sent an HTTP request to an HTTPS server.", which confirms the controller is up and reachable.
- Create a file runner.yaml and insert your organization name, your repository, and your runner label in the repository context.
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: telex-runner
spec:
  replicas: 1
  template:
    spec:
      repository: your-org/your-repo
      labels:
        - Telex
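If you created your token with the admin:org scope for organization runners, ARC's RunnerDeployment also accepts an organization field in place of repository. A sketch, with the name and your-org as placeholders:

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: org-runner           # hypothetical name
spec:
  replicas: 1
  template:
    spec:
      organization: your-org # placeholder: your GitHub organization
      labels:
        - Telex
```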
- Apply the configuration.
kubectl apply -f runner.yaml
- Verify that the runner pod is running:
kubectl get pods -w
- Go to your repository on GitHub and modify your workflow to run on self-hosted or the label (e.g. Telex) you indicated in the RunnerDeployment manifest.
name: Build Binary and Run Tests
on: workflow_dispatch
jobs:
  build:
    runs-on: Telex
    steps:
      # Placeholder steps; replace with your build and test commands.
      - uses: actions/checkout@v4
      - run: echo "Running on the self-hosted runner"
- Since the workflow above is triggered manually, trigger it from the Actions tab and wait for the job to run.
- Run the following command to check the logs of the runner pods:
kubectl logs -f <POD_NAME>
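You can also confirm the runner registered with GitHub by querying the REST API's "list self-hosted runners for a repository" endpoint (a sketch; YOUR_GITHUB_TOKEN, your-org, and your-repo are placeholders):

```shell
# Lists the self-hosted runners registered for the repository;
# a healthy runner appears with "status": "online".
curl -s \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer YOUR_GITHUB_TOKEN" \
  https://api.github.com/repos/your-org/your-repo/actions/runners
```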