This repository will help you configure a CI/CD pipeline for your application that builds a Docker image and deploys it to a Kubernetes cluster.
This README explains how to configure CI/CD for a simple FastAPI app that connects to PostgreSQL on Kubernetes.
Before starting, you must have a few things already installed and configured:
- GKE cluster with two namespaces, `development` and `staging`, where the FastAPI application and the PostgreSQL StatefulSet will be deployed
- NGINX ingress controller deployed on GKE
- GitLab repository with two branches, `development` and `staging`, from which your pipeline will be triggered (for other branches you must adjust the pipeline)
- Google Artifact Registry already configured for our pipelines
- Public domain with access to a DNS hosting dashboard (for example Cloudflare)
Follow these steps to configure the CI/CD pipeline on your cluster.
- Create the `argocd` namespace
$ kubectl create ns argocd
- Deploy ArgoCD with the official installation manifests
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
- Disable native ArgoCD TLS by applying the following changes (see argocd_docs, argocd_issue)
$ kubectl patch deployment -n argocd argocd-server --patch-file deployment/argocd_configurations/argocd_server_ssl.yaml
- Create an ArgoCD ingress rule to expose the dashboard publicly
$ kubectl apply -f deployment/argocd_configurations/argocd_ingress.yaml
Note: If you use a different ingress class name, be sure to change it, and update the rule with your own domain name.
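The actual rule lives in `deployment/argocd_configurations/argocd_ingress.yaml`; the sketch below only illustrates what such a rule typically looks like for the NGINX ingress controller (the resource name, annotations, and service port are assumptions — use the file from the repository):

```yaml
# Hypothetical sketch of an ArgoCD ingress rule -- the real file is
# deployment/argocd_configurations/argocd_ingress.yaml in this repository.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    # Needed because the server's native TLS was disabled in the previous step
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: nginx          # change if your ingress class name differs
  rules:
    - host: argocd.domain.com      # replace with your own domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 80
```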
- Create a DNS record for `argocd.domain.com` pointing to the load balancer public IP provided by GCloud
- Retrieve the initial admin password and log in to ArgoCD
$ argocd admin initial-password -n argocd
$ argocd login argocd.domain.com --insecure
- Log in with your credentials to the ArgoCD dashboard and add the GitLab repository. Note: If the connection succeeded, the repository shows a green status on the dashboard, as in the picture below.
- Create a GCP Service Account Key (for a start you can delegate full access to the project, for testing purposes)
- Create a Kubernetes secret with the `gcp-sa-key`
$ kubectl create secret -n argocd generic sa-gcr-key --from-file=my-secret.json
- Create a Dockerfile for the ArgoCD repo server plugin container
FROM quay.io/argoproj/argocd:v2.4.11
ARG SOPS_VERSION=v3.7.3
# Switch to root for the ability to perform install
USER root
# Install tools needed for your repo-server to retrieve & decrypt secrets, render manifests
# (e.g. curl, awscli, gpg, sops)
RUN apt-get update && \
apt-get install -y \
curl \
awscli \
jq \
gpg && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* && \
curl -o /usr/local/bin/sops -L https://github.com/mozilla/sops/releases/download/${SOPS_VERSION}/sops-${SOPS_VERSION}.linux && \
chmod +x /usr/local/bin/sops
RUN curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | tee /usr/share/keyrings/helm.gpg > /dev/null &&\
apt-get install apt-transport-https --yes &&\
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | tee /etc/apt/sources.list.d/helm-stable-debian.list &&\
apt-get update &&\
apt-get install -y helm
# Switch back to non-root user
USER argocd
- Build and push to your container registry
$ docker build -t <image:tag> . && docker push <image:tag>
- Create the ArgoCD plugin ConfigMap (be careful to provide the right `gcp-sa-key-name` in the file `argocd_cm_plugin_gcp.yaml`)
$ kubectl describe secret sa-gcr-key
# Output
Name: sa-gcr-key
Namespace: argocd
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
<gcp-sa-key-name.json>: 2392 bytes
$ kubectl apply -f deployment/argocd_configurations/argocd_cm_plugin_gcp.yaml
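The real plugin definition is in `deployment/argocd_configurations/argocd_cm_plugin_gcp.yaml`; as a rough orientation, a `configManagementPlugins` entry for an ArgoCD v2.4-era repo server often looks like the hypothetical sketch below (the plugin name, decrypt/render commands, and key mount path are all assumptions, not the repository's actual configuration):

```yaml
# Hypothetical sketch only -- use the file shipped in this repository.
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  configManagementPlugins: |
    - name: sops-helm                  # hypothetical plugin name
      generate:
        command: ["sh", "-c"]
        # Decrypt secrets with sops (via the mounted GCP key), then render the chart
        args:
          - |
            export GOOGLE_APPLICATION_CREDENTIALS=/.config/gcp/<gcp-sa-key-name>.json
            sops --decrypt secrets/$ENVIRONMENT/secrets.yaml > secrets_dec.yaml
            helm template . -f secrets_dec.yaml
```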
- Update `argocd-repo-server` to use the plugin created in the step above (don't forget to update the `argocd_repo_server_gcp.yaml` file with the custom Docker image built earlier)
$ kubectl patch deployment -n argocd argocd-repo-server --patch-file deployment/argocd_configurations/argocd_repo_server_gcp.yaml
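Such a patch typically swaps in the custom image and mounts the service-account key secret into the repo server; a hypothetical sketch of what `argocd_repo_server_gcp.yaml` might contain (the mount path and container layout are assumptions — use the repository's actual file):

```yaml
# Hypothetical sketch of deployment/argocd_configurations/argocd_repo_server_gcp.yaml:
# use the custom plugin image and mount the GCP service-account key.
spec:
  template:
    spec:
      containers:
        - name: argocd-repo-server
          image: <image:tag>            # the custom image built in the Dockerfile step
          volumeMounts:
            - name: sa-gcr-key
              mountPath: /.config/gcp   # assumed mount path
              readOnly: true
      volumes:
        - name: sa-gcr-key
          secret:
            secretName: sa-gcr-key
```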
- Create in GitLab two CI/CD variables named `envVars`, one scoped to each environment (development/staging), with Type - file
DOCKER_IMAGE_REGISTRY=<your_google_artifact_registry_address>
ENVIRONMENT=development
DOMAIN=<fastapi.domain.com>
ARGOCD_SERVER_URL=<argocd.domain.com>:443
ARGOCD_USERNAME=admin
ARGOCD_PASSWORD=<secure_password>
Note: The ENVIRONMENT variable tells the pipeline from which path to decrypt secrets.
- Create in GitLab a CI/CD variable named `GCR_SERVICE_KEY`, one scoped to each environment (development/staging), with Type - file
{
"type": "service_account",
"project_id": "project-name",
"private_key_id": "private-key-id-number",
"private_key": "-----BEGIN PRIVATE KEY-----PRIVATE_KEY_CONTENT-----END PRIVATE KEY-----\n",
"client_email": "[email protected]",
"client_id": "client-id-number",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/gcr-617%40project-name.iam.gserviceaccount.com",
"universe_domain": "googleapis.com"
}
- Create a GCP KMS key ring for SOPS secrets, which will be used for encryption and decryption
# First enable the KMS API in the GCP console
$ gcloud kms keyrings create sops --location global
$ gcloud kms keys create sops-key --location global --keyring sops --purpose encryption
$ gcloud kms keys list --location global --keyring sops
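As an optional convenience, SOPS can read the KMS key from a `.sops.yaml` file at the repository root, so you don't have to repeat the `--gcp-kms` flag on every invocation; a minimal sketch, assuming your encrypted files live under a `secrets/` path (the `path_regex` is an assumption):

```yaml
# Optional .sops.yaml at the repo root; lets you run `sops --encrypt file.yaml`
# without the --gcp-kms flag. The path_regex is an assumption.
creation_rules:
  - path_regex: secrets/.*\.yaml$
    gcp_kms: projects/<project-name>/locations/global/keyRings/sops/cryptoKeys/sops-key
```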
- Deploy PostgreSQL in both namespaces, `development` and `staging`
$ helm install postgresql oci://registry-1.docker.io/bitnamicharts/postgresql -n <development/staging>
Additional: below is how to extract the password of the PostgreSQL user and how to connect to the PostgreSQL instance, so that you can update the credentials in secrets.yaml.
PostgreSQL can be accessed via port 5432 on the following DNS names from within your cluster:
postgresql.<development/staging>.svc.cluster.local - Read/Write connection
To get the password for "postgres" run:
export POSTGRES_PASSWORD=$(kubectl get secret --namespace <development/staging> postgresql -o jsonpath="{.data.postgres-password}" | base64 -d)
To connect to your database run the following command:
kubectl run postgresql-client --rm --tty -i --restart='Never' --namespace <development/staging> --image docker.io/bitnami/postgresql:16.1.0-debian-11-r16 --env="PGPASSWORD=$POSTGRES_PASSWORD" \
  --command -- psql --host postgresql -U postgres -d postgres -p 5432
> NOTE: If you access the container using bash, make sure that you execute "/opt/bitnami/scripts/postgresql/entrypoint.sh /bin/bash" in order to avoid the error "psql: local user with ID 1001 does not exist"
To connect to your database from outside the cluster execute the following commands:
kubectl port-forward --namespace <development/staging> svc/postgresql 5432:5432 & PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -d postgres -p 5432
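The password extraction above works because Kubernetes stores secret data base64-encoded, so `base64 -d` recovers the plain value. A self-contained illustration (the password here is made up):

```shell
# Kubernetes secret values are base64-encoded; base64 -d recovers them.
encoded=$(printf 's3cretpass' | base64)   # what kubectl's jsonpath would return
echo "encoded: $encoded"
printf '%s' "$encoded" | base64 -d        # prints: s3cretpass
```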
- Update the credentials and host in the files `deployment/helmcharts/fastapi/secrets/<development/staging>/secrets.yaml`
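The exact keys in `secrets.yaml` depend on the FastAPI Helm chart in this repository; a purely hypothetical example of what the decrypted file might contain (all key names and values are placeholders):

```yaml
# Hypothetical decrypted secrets.yaml -- key names depend on the chart's values.
postgresql:
  host: postgresql.development.svc.cluster.local
  port: 5432
  username: postgres
  password: <POSTGRES_PASSWORD extracted in the previous step>
  database: postgres
```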
- Log in on localhost with gcloud auth
$ gcloud auth application-default login
- Encrypt `deployment/helmcharts/fastapi/secrets/<development/staging>/secrets.yaml` (first rename `secrets.yaml` to `secrets_dec.yaml`; after encrypting, delete the decrypted file)
$ sops --encrypt --gcp-kms projects/<project-name>/locations/global/keyRings/sops/cryptoKeys/sops-key secrets/<development/staging>/secrets_dec.yaml > secrets/<development/staging>/secrets.yaml
- Push all changes to the repository
$ git add :/ && git commit -m "<commit message>" && git push
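Tying the pieces together, the GitLab pipeline builds and pushes the image, then triggers an ArgoCD sync using the variables defined above. The sketch below is a hypothetical minimal `.gitlab-ci.yml` — the stage names, image name `fastapi`, and ArgoCD application name `fastapi-$ENVIRONMENT` are assumptions; use the pipeline definition shipped in this repository:

```yaml
# Hypothetical minimal .gitlab-ci.yml; adapt to the repository's real pipeline.
stages: [build, deploy]

build-image:
  stage: build
  rules:
    - if: '$CI_COMMIT_BRANCH == "development" || $CI_COMMIT_BRANCH == "staging"'
  script:
    - source $envVars                      # file-type CI/CD variable from the steps above
    - cat $GCR_SERVICE_KEY | docker login -u _json_key --password-stdin https://$DOCKER_IMAGE_REGISTRY
    - docker build -t $DOCKER_IMAGE_REGISTRY/fastapi:$CI_COMMIT_SHORT_SHA .
    - docker push $DOCKER_IMAGE_REGISTRY/fastapi:$CI_COMMIT_SHORT_SHA

deploy-app:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == "development" || $CI_COMMIT_BRANCH == "staging"'
  script:
    - source $envVars
    - argocd login $ARGOCD_SERVER_URL --username $ARGOCD_USERNAME --password $ARGOCD_PASSWORD --insecure
    - argocd app sync fastapi-$ENVIRONMENT   # hypothetical application name
```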