Updating doc
ricsanfre committed Oct 13, 2023
1 parent 845b498 commit 2b24fee
Showing 6 changed files with 89 additions and 90 deletions.
3 changes: 3 additions & 0 deletions docs/_docs/argocd.md
@@ -63,6 +63,9 @@ ArgoCD can be installed through helm chart
end
end
return hs
# Enabling Helm chart rendering with Kustomize
kustomize.buildOptions: --enable-helm

server:
# Ingress Resource.
ingress:
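
For reference, a minimal sketch (not part of the diff above) of where this option sits in the argo-cd helm chart values; the surrounding keys are assumptions based on the chart's `configs.cm` block:

```yml
# Sketch: enable Helm chart rendering for Kustomize applications through the
# argocd-cm ConfigMap managed by the argo-cd helm chart (keys assumed, not from the commit)
configs:
  cm:
    kustomize.buildOptions: --enable-helm
```
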
56 changes: 29 additions & 27 deletions docs/_docs/backup.md
@@ -2,7 +2,7 @@
title: Backup & Restore
permalink: /docs/backup/
description: How to deploy a backup solution based on Velero and Restic in our Raspberry Pi Kubernetes Cluster.
last_modified_at: "02-08-2023"
last_modified_at: "13-10-2023"
---

## Backup Architecture and Design
@@ -56,11 +56,11 @@ The backup architecture is the following:
kubectl exec pod -- app_unfreeze_command
```

Velero also supports the CSI snapshot API to take Persistent Volume snapshots through the CSI provider, Longhorn, when backing up the PODs. See Velero [CSI snapshot support documentation](https://velero.io/docs/v1.9/csi/).
Velero also supports the CSI snapshot API to take Persistent Volume snapshots through the CSI provider, Longhorn, when backing up the PODs. See Velero [CSI snapshot support documentation](https://velero.io/docs/v1.12/csi/).

Integrating Container Storage Interface (CSI) snapshot support into Velero and Longhorn enables Velero to back up and restore CSI-backed volumes using the [Kubernetes CSI Snapshot feature](https://kubernetes.io/docs/concepts/storage/volume-snapshots/).

For orchestrating application-consistent backups, Velero supports the definition of [backup hooks](https://velero.io/docs/v1.9/backup-hooks/), commands to be executed before and after the backup, which can be configured at POD level through annotations.
For orchestrating application-consistent backups, Velero supports the definition of [backup hooks](https://velero.io/docs/v1.12/backup-hooks/), commands to be executed before and after the backup, which can be configured at POD level through annotations.

So Velero, with its built-in functionality, CSI snapshot support and backup hooks, is able to orchestrate application-consistent backups. Velero delegates the actual backup/restore of PVs to the CSI provider, Longhorn.
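
As an illustration, a minimal sketch of freeze/unfreeze backup hooks declared as POD annotations (the application name, image and freeze/unfreeze commands are hypothetical):

```yml
# Sketch: Velero pre/post backup hooks configured as POD annotations
# (container name, image and commands are illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    pre.hook.backup.velero.io/container: my-app
    pre.hook.backup.velero.io/command: '["/bin/sh", "-c", "app_freeze_command"]'
    post.hook.backup.velero.io/container: my-app
    post.hook.backup.velero.io/command: '["/bin/sh", "-c", "app_unfreeze_command"]'
spec:
  containers:
    - name: my-app
      image: nginx:latest
```
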

Expand Down Expand Up @@ -219,11 +219,11 @@ Backup policies scheduling

The K3S distribution currently does not come with a pre-integrated Snapshot Controller, which is needed to enable the CSI Snapshot feature. An external snapshot controller needs to be deployed. K3S can be configured to use [kubernetes-csi/external-snapshotter](https://github.com/kubernetes-csi/external-snapshotter).

To enable this feature, follow the instructions in [Longhorn documentation - Enable CSI Snapshot Support](https://longhorn.io/docs/1.4.0/snapshots-and-backups/csi-snapshot-support/enable-csi-snapshot-support/).
To enable this feature, follow the instructions in [Longhorn documentation - Enable CSI Snapshot Support](https://longhorn.io/docs/1.5.1/snapshots-and-backups/csi-snapshot-support/enable-csi-snapshot-support/).

{{site.data.alerts.note}}

Longhorn 1.4.0 CSI snapshot support is compatible with [kubernetes-csi/external-snapshotter](https://github.com/kubernetes-csi/external-snapshotter) release v5.0.1. Do not install the latest available version of External Snapshotter.
Longhorn 1.5.1 CSI snapshot support is compatible with [kubernetes-csi/external-snapshotter](https://github.com/kubernetes-csi/external-snapshotter) release v6.2.1. Do not install the latest available version of External Snapshotter.

{{site.data.alerts.end}}

@@ -236,8 +236,8 @@ Longhorn 1.4.0 CSI Snapshots support is compatible with [kubernetes-csi/external
kind: Kustomization
namespace: kube-system
resources:
- https://github.com/kubernetes-csi/external-snapshotter/client/config/crd/?ref=v5.0.1
- https://github.com/kubernetes-csi/external-snapshotter/deploy/kubernetes/snapshot-controller/?ref=v5.0.1
- https://github.com/kubernetes-csi/external-snapshotter/client/config/crd/?ref=v6.2.1
- https://github.com/kubernetes-csi/external-snapshotter/deploy/kubernetes/snapshot-controller/?ref=v6.2.1
```
- Step Deploy Snapshot-Controller
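
A minimal sketch of this step, assuming the kustomization above is saved as `kustomization.yaml` in the current directory (the actual command in the doc may differ):

```shell
# Sketch: deploy the external-snapshotter CRDs and snapshot-controller with kustomize
kubectl apply -k .
```
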
@@ -409,13 +409,13 @@ VolumeSnapshotClass objects from CSI Snapshot API need to be configured

Velero defines a set of Kubernetes CRDs (Custom Resource Definitions) and Controllers that process those CRDs to perform backups and restores.

Velero also provides a CLI to execute backup/restore commands using the Kubernetes API. More details in the official [documentation](https://velero.io/docs/v1.9/how-velero-works/)
Velero also provides a CLI to execute backup/restore commands using the Kubernetes API. More details in the official [documentation](https://velero.io/docs/v1.12/how-velero-works/)
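
For illustration, a couple of typical CLI invocations (a sketch; the backup name and namespace are hypothetical):

```shell
# Sketch: on-demand backup of a namespace and a restore from it
velero backup create myapp-backup --include-namespaces myapp
velero backup describe myapp-backup
velero restore create --from-backup myapp-backup
```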

The complete backup workflow is the following:

![velero-backup-process](/assets/img/velero-backup-process.png)

Minio will be used as the storage provider. See [Velero's installation documentation using Minio as backend](https://velero.io/docs/v1.9/contributions/minio/).
Minio will be used as the storage provider. See [Velero's installation documentation using Minio as backend](https://velero.io/docs/v1.12/contributions/minio/).


### Configuring Minio bucket and user for Velero
@@ -516,13 +516,13 @@ Installation using `Helm` (Release 3):
# AWS backend and CSI plugins configuration
initContainers:
- name: velero-plugin-for-aws
image: velero/velero-plugin-for-aws:v1.7.1
image: velero/velero-plugin-for-aws:v1.8.0
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /target
name: plugins
- name: velero-plugin-for-csi
image: velero/velero-plugin-for-csi:v0.5.1
image: velero/velero-plugin-for-csi:v0.6.0
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /target
@@ -552,16 +552,18 @@ Installation using `Helm` (Release 3):
snapshotsEnabled: false
# Run velero only on amd64 nodes
# velero-plugin-for-csi not officially available for ARM architecture
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- amd64
# velero-plugin-for-csi was not available for ARM architecture (version < 0.6.0)
# Starting from plugin version 0.6.0 (Velero 1.12) ARM64 images are available, so
# this rule is no longer required
# affinity:
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: kubernetes.io/arch
# operator: In
# values:
# - amd64
```

- Step 5: Install Velero in the `velero` namespace with the overridden values
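
A sketch of this step, assuming the chart is installed from the `vmware-tanzu` helm repository and the overridden values are stored in `velero-values.yml` (the actual command in the doc may differ):

```shell
# Sketch: install the vmware-tanzu/velero chart with the custom values file
helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm repo update
helm install velero vmware-tanzu/velero --namespace velero --create-namespace -f velero-values.yml
```
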
@@ -615,20 +617,20 @@ Installation using `Helm` (Release 3):
# AWS backend and CSI plugins configuration
initContainers:
- name: velero-plugin-for-aws
image: velero/velero-plugin-for-aws:v1.7.1
image: velero/velero-plugin-for-aws:v1.8.0
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /target
name: plugins
- name: velero-plugin-for-csi
image: velero/velero-plugin-for-csi:v0.5.1
image: velero/velero-plugin-for-csi:v0.6.0
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /target
name: plugins
```

- Affinity configuration
- Affinity configuration (only needed for Velero releases prior to 1.12)

```yml
# Run velero only on amd64 nodes
@@ -645,7 +647,7 @@ Installation using `Helm` (Release 3):
```
{{site.data.alerts.note}}

The official docker image `velero/velero-plugin-for-csi` has recently started supporting the ARM64 architecture, but not in the officially tagged images for each release.
The official docker image `velero/velero-plugin-for-csi` supports the ARM64 architecture starting from version 0.6.0 (Velero 1.12).

{{site.data.alerts.end}}

Expand Down Expand Up @@ -885,7 +887,7 @@ Set up daily full backup can be on with velero CLI
```shell
velero schedule create full --schedule "0 4 * * *"
```
Or by creating a 'Schedule' [kubernetes resource](https://velero.io/docs/v1.9/api-types/schedule/):
Or by creating a 'Schedule' [kubernetes resource](https://velero.io/docs/v1.12/api-types/schedule/):

```yml
apiVersion: velero.io/v1
@@ -913,7 +915,7 @@ spec:
## References

- [K3S Backup/Restore official documentation](https://rancher.com/docs/k3s/latest/en/backup-restore/)
- [Longhorn Backup/Restore official documentation](https://longhorn.io/docs/1.3.1/snapshots-and-backups/)
- [Longhorn Backup/Restore official documentation](https://longhorn.io/docs/1.5.1/snapshots-and-backups/)
- [Bare metal Minio documentation](https://docs.min.io/minio/baremetal/)
- [Create a Multi-User MinIO Server for S3-Compatible Object Hosting](https://www.civo.com/learn/create-a-multi-user-minio-server-for-s3-compatible-object-hosting)
- [Backup Longhorn Volumes to a Minio S3 bucket](https://www.civo.com/learn/backup-longhorn-volumes-to-a-minio-s3-bucket)
44 changes: 22 additions & 22 deletions docs/_docs/index.md
@@ -3,7 +3,7 @@ title: What is this project about?
permalink: /docs/home/
redirect_from: /docs/index.html
description: The scope of this project is to create a kubernetes cluster at home using Raspberry Pis and low cost mini PCs, and to automate its deployment and configuration applying IaC (infrastructure as code) and GitOps methodologies with tools like Ansible and ArgoCD. How to automatically deploy a K3s based kubernetes cluster, Longhorn as distributed block storage for PODs' persistent volumes, Prometheus as monitoring solution, EFK+Loki stack as centralized log management solution, Velero and Restic as backup solution and Linkerd as service mesh architecture.
last_modified_at: "09-06-2023"
last_modified_at: "13-10-2023"
---


@@ -302,38 +302,38 @@ The software used and the latest version tested of each component
| OS | Ubuntu | 22.04.2 | |
| Control | Ansible | 2.14.5 | |
| Control | cloud-init | 23.1.2 | version pre-integrated into Ubuntu 22.04.2 |
| Kubernetes | K3S | v1.27.3 | K3S version|
| Kubernetes | K3S | v1.28.2 | K3S version|
| Kubernetes | Helm | v3.12 ||
| Metrics | Kubernetes Metrics Server | v0.6.3 | version pre-integrated into K3S |
| Kubernetes | etcd | v3.5.7-k3s1 | version pre-integrated into K3S |
| Computing | containerd | v1.7.1-k3s1 | version pre-integrated into K3S |
| Networking | Flannel | v0.22.0 | version pre-integrated into K3S |
| Kubernetes | etcd | v3.5.9-k3s1 | version pre-integrated into K3S |
| Computing | containerd | v1.7.6-k3s1 | version pre-integrated into K3S |
| Networking | Flannel | v0.22.2 | version pre-integrated into K3S |
| Networking | CoreDNS | v1.10.1 | version pre-integrated into K3S |
| Networking | Metal LB | v0.13.10 | Helm chart version: 0.13.10 |
| Service Mesh | Linkerd | v2.13.5 | Helm chart version: linkerd-control-plane-1.12.5 |
| Service Mesh | Linkerd | v2.14.1 | Helm chart version: linkerd-control-plane-1.16.2 |
| Service Proxy | Traefik | v2.10.1 | Helm chart version: 23.1.0 |
| Service Proxy | Ingress NGINX | v1.8.1| Helm chart version: 4.7.1 |
| Storage | Longhorn | v1.4.2 | Helm chart version: 1.4.2 |
| Service Proxy | Ingress NGINX | v1.9.1| Helm chart version: 4.8.1 |
| Storage | Longhorn | v1.5.1 | Helm chart version: 1.5.1 |
| Storage | Minio | RELEASE.2023-06-19T19-52-50Z | Helm chart version: 5.0.11 |
| TLS Certificates | Certmanager | v1.12.2| Helm chart version: v1.12.2 |
| Logging | ECK Operator | 2.7.0 | Helm chart version: 2.7.0 |
| TLS Certificates | Certmanager | v1.13.1| Helm chart version: v1.13.1 |
| Logging | ECK Operator | 2.9.0 | Helm chart version: 2.9.0 |
| Logging | Elastic Search | 8.6.0 | Deployed with ECK Operator |
| Logging | Kibana | 8.6.0 | Deployed with ECK Operator |
| Logging | Fluentbit | 2.1.3 | Helm chart version: 0.29.0 |
| Logging | Fluentbit | 2.1.10 | Helm chart version: 0.39.0 |
| Logging | Fluentd | 1.15.2 | Helm chart version: 0.3.9 [Custom docker image](https://github.com/ricsanfre/fluentd-aggregator) from official v1.15.2|
| Logging | Loki | 2.8.2 | Helm chart grafana/loki version: 5.5.1 |
| Monitoring | Kube Prometheus Stack | v0.66.0 | Helm chart version: 47.3.0 |
| Monitoring | Prometheus Operator | v0.66.0 | Installed by Kube Prometheus Stack. Helm chart version: 47.3.0 |
| Monitoring | Prometheus | v2.45.0 | Installed by Kube Prometheus Stack. Helm chart version: 47.3.0 |
| Monitoring | AlertManager | 0.25.0 | Installed by Kube Prometheus Stack. Helm chart version: 47.3.0 |
| Monitoring | Grafana | 9.5.5 | Helm chart version grafana-6.56.5. Installed as dependency of Kube Prometheus Stack chart. Helm chart version: 45.29.0 |
| Monitoring | Prometheus Node Exporter | 1.5.0 | Helm chart version: prometheus-node-exporter-4.16.0 Installed as dependency of Kube Prometheus Stack chart. Helm chart version: 43.3.1 |
| Logging | Loki | 2.9.1 | Helm chart grafana/loki version: 5.27.0 |
| Monitoring | Kube Prometheus Stack | v0.68.0 | Helm chart version: 51.5.1 |
| Monitoring | Prometheus Operator | v0.68.0 | Installed by Kube Prometheus Stack. Helm chart version: 51.5.1 |
| Monitoring | Prometheus | v2.47.1 | Installed by Kube Prometheus Stack. Helm chart version: 51.5.1 |
| Monitoring | AlertManager | 0.26.0 | Installed by Kube Prometheus Stack. Helm chart version: 51.5.1 |
| Monitoring | Grafana | 10.1.4 | Helm chart version grafana-6.60.4. Installed as dependency of Kube Prometheus Stack chart v51.5.1 |
| Monitoring | Prometheus Node Exporter | 1.6.1 | Helm chart version: prometheus-node-exporter-4.23.2. Installed as dependency of Kube Prometheus Stack chart v51.5.1 |
| Monitoring | Prometheus Elasticsearch Exporter | 1.5.0 | Helm chart version: prometheus-elasticsearch-exporter-4.15.1 |
| Tracing | Grafana Tempo | 2.1.1 | Helm chart: tempo-distributed (1.4.7) |
| Tracing | Grafana Tempo | 2.2.3 | Helm chart: tempo-distributed (1.6.10) |
| Backup | Minio External (self-hosted) | RELEASE.2023-05-04T18-10-16Z | |
| Backup | Restic | 0.13.1 | |
| Backup | Velero | 1.11.1 | Helm chart version: 4.1.4 |
| Backup | Velero | 1.12.0 | Helm chart version: 5.0.1 |
| Secrets | Hashicorp Vault | 1.12.2 | |
| Secrets| External Secret Operator | 0.9.0 | Helm chart version: 0.9.0 |
| GitOps | Argo CD | v2.7.6 | Helm chart version: 5.37.0 |
| Secrets| External Secret Operator | 0.9.5 | Helm chart version: 0.9.5 |
| GitOps | Argo CD | v2.8.4 | Helm chart version: 5.46.7 |
{: .table .table-white .border-dark }
53 changes: 16 additions & 37 deletions docs/_docs/logging-forwarder-aggregator.md
@@ -3,7 +3,7 @@ title: Log collection and distribution (Fluentbit/Fluentd)
permalink: /docs/logging-forwarder-aggregator/
description: How to deploy log collection, aggregation and distribution in our Raspberry Pi Kubernetes cluster. Deploy a forwarder/aggregator architecture using Fluentbit and Fluentd. Logs are routed to Elasticsearch and Loki, so log analysis can be done using Kibana and Grafana.

last_modified_at: "06-04-2023"
last_modified_at: "13-10-2023"

---

@@ -379,7 +379,7 @@ The above Kubernetes resources, except TLS certificate and shared secret, are cr
The config map contains dynamic index templates that will be used by the fluentd-elasticsearch-plugin configuration.
- Step 4. Add fluentbit helm repo
- Step 4. Add fluent helm repo
```shell
helm repo add fluent https://fluent.github.io/helm-charts
```
@@ -421,9 +421,6 @@ The above Kubernetes resources, except TLS certificate and shared secret, are cr

## Additional environment variables to set for fluentd pods
env:
# Path to fluentd conf file
- name: "FLUENTD_CONF"
value: "../../../etc/fluent/fluent.conf"
# Elastic operator creates elastic service name with format cluster_name-es-http
- name: FLUENT_ELASTICSEARCH_HOST
value: efk-es-http
Expand Down Expand Up @@ -457,16 +454,8 @@ The above Kubernetes resources, except TLS certificate and shared secret, are cr
- name: LOKI_PASSWORD
value: ""

# Volumes and VolumeMounts (only configuration files and certificates)
# Volumes and VolumeMounts (only ES template files and certificates)
volumes:
- name: etcfluentd-main
configMap:
name: fluentd-main
defaultMode: 0777
- name: etcfluentd-config
configMap:
name: fluentd-config
defaultMode: 0777
- name: fluentd-tls
secret:
secretName: fluentd-tls
@@ -477,10 +466,6 @@ The above Kubernetes resources, except TLS certificate and shared secret, are cr
defaultMode: 0777

volumeMounts:
- name: etcfluentd-main
mountPath: /etc/fluent
- name: etcfluentd-config
mountPath: /etc/fluent/config.d/
- name: etcfluentd-template
mountPath: /etc/fluent/template
- mountPath: /etc/fluent/certs
@@ -807,9 +792,6 @@ HPA autoscaling is also configured (`autoscaling.enabling: true`).
```yml
## Additional environment variables to set for fluentd pods
env:
# Path to fluentd conf file
- name: "FLUENTD_CONF"
value: "../../../etc/fluent/fluent.conf"
# Elastic operator creates elastic service name with format cluster_name-es-http
- name: FLUENT_ELASTICSEARCH_HOST
value: efk-es-http
@@ -870,19 +852,20 @@ fluentd docker image and configuration files use the following environment varia



#### Fluentd POD volumes and volume mounts
#### Fluentd POD additional volumes and volume mounts

By default, the helm chart defines the volume mounts needed for storing fluentd config files.

Additionally, volumes for ES templates and TLS certificates need to be configured, and container log directory volumes should not be mounted (fluentd does not read container log files).


```yml
# Volumes and VolumeMounts (only configuration files and certificates)
# Do not mount logs directories
mountVarLogDirectory: false
mountDockerContainersDirectory: false
# Volumes and VolumeMounts (only ES template files and TLS certificates)
volumes:
- name: etcfluentd-main
configMap:
name: fluentd-main
defaultMode: 0777
- name: etcfluentd-config
configMap:
name: fluentd-config
defaultMode: 0777
- name: etcfluentd-template
configMap:
name: fluentd-template
@@ -892,10 +875,6 @@ volumes:
secretName: fluentd-tls
volumeMounts:
- name: etcfluentd-main
mountPath: /etc/fluent
- name: etcfluentd-config
mountPath: /etc/fluent/config.d/
- name: etcfluentd-template
mountPath: /etc/fluent/template
- mountPath: /etc/fluent/certs
@@ -905,9 +884,9 @@

ConfigMaps created by the helm chart are mounted in the fluentd container:

- ConfigMap `fluentd-main`, containing the fluentd main config file (`fluent.conf`), is mounted as the `/etc/fluent` volume.
- ConfigMap `fluentd-main`, created by default by the helm chart and containing the fluentd main config file (`fluent.conf`), is mounted as the `/etc/fluent` volume.

- ConfigMap `fluentd-config`, containing fluentd config files included by the main config file, is mounted as `/etc/fluent/config.d`.
- ConfigMap `fluentd-config`, created by default by the helm chart and containing fluentd config files included by the main config file, is mounted as `/etc/fluent/config.d`.

- ConfigMap `fluentd-template`, containing ES index templates used by the fluentd-elasticsearch-plugin, is mounted as `/etc/fluent/template`. This ConfigMap is generated in step 3 of the installation procedure.
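
To verify what ends up mounted, a quick sketch of inspecting the chart-generated ConfigMaps (the `logging` namespace is an assumption):

```shell
# Sketch: inspect the main fluentd config and the included config files
kubectl get configmap fluentd-main -n logging -o yaml
kubectl get configmap fluentd-config -n logging -o yaml
```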

