From 9f78411a88a8bc56b29de8763550e61bea594053 Mon Sep 17 00:00:00 2001
From: Siddharth Rawat
Date: Mon, 22 Jan 2024 12:47:41 -0500
Subject: [PATCH] feat: add EFK logging stack with custom chart values

Includes a separate folder for example chart values at `root/vars`
---
 README.md                            |  38 +-
 modules/efk/main.tf                  |  36 ++
 modules/efk/variables.tf             |   4 +
 modules/namespace/main.tf            |   9 +
 root/example.tfvars                  |  13 +-
 root/main.tf                         |  18 +-
 root/variables.tf                    |  18 +
 root/vars/example.elasticsearch.yaml | 357 +++++++++++++
 root/vars/example.fluentbit.yaml     | 497 ++++++++++++++++++
 root/{ => vars}/example.infra.yaml   |   0
 root/vars/example.kibana.yaml        | 176 +++++++
 .../example.prometheus_grafana.yaml  |   0
 root/{ => vars}/example.webapp.yaml  |   0
 13 files changed, 1157 insertions(+), 9 deletions(-)
 create mode 100644 modules/efk/main.tf
 create mode 100644 modules/efk/variables.tf
 create mode 100644 root/vars/example.elasticsearch.yaml
 create mode 100644 root/vars/example.fluentbit.yaml
 rename root/{ => vars}/example.infra.yaml (100%)
 create mode 100644 root/vars/example.kibana.yaml
 rename root/{ => vars}/example.prometheus_grafana.yaml (100%)
 rename root/{ => vars}/example.webapp.yaml (100%)

diff --git a/README.md b/README.md
index cc8edf2..548d8b8 100644
--- a/README.md
+++ b/README.md
@@ -14,6 +14,7 @@ provider "helm" {
 
 Based on the `KUBECONFIG` value, the helm chart will be installed on that particular cluster.
 
+> \[!IMPORTANT]\
 > Due to an on-going issue with Terraform Helm Provider [[reference](https://github.com/hashicorp/terraform-provider-helm/issues/932)] which prevents the Terraform resource to pull a chart from a private GitHub repository (even after providing a GitHub PAT), we are forced to install the Helm chart locally.
 
 ## Kubernetes Provider
@@ -85,7 +86,8 @@ Here are key aspects and advantages of Istio:
 
 - (Optional) Install `istioctl`:
 
-> NOTE: We will use this tool to analyze namespaces and to verify if the pods have been injected with Istio sidecar pods
+> \[!NOTE]\
+> We will use this tool to analyze namespaces and to verify if the pods have been injected with Istio sidecar pods
 
 ```bash
 brew install istioctl
@@ -95,7 +97,8 @@ istioctl version
 istioctl analyze
 ```
 
-> **NOTE**: Add the `sidecar.istio.io/inject: "false"` annotation to the metadata section of the pod template. This will prevent the Istio sidecar from being injected into that specific pod.
+> \[!NOTE]\
+> Add the `sidecar.istio.io/inject: "false"` annotation to the metadata section of the pod template. This will prevent the Istio sidecar from being injected into that specific pod.
 
 ## Monitoring Stack
 
@@ -133,10 +136,41 @@ Instead of installing the helm charts for these applications, we will use the cu
 > You can read more information on how to add firewall rules for the GKE control plane nodes in the [GKE docs](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules)
 > Alternatively, you can disable the hooks by setting `prometheusOperator.admissionWebhooks.enabled=false`.
 
+## Logging Stack
+
+We will use the [EFK stack](https://medium.com/@tech_18484/simplifying-kubernetes-logging-with-efk-stack-158da47ce982) to set up logging for our containerized applications (which are installed via custom helm charts) on kubernetes. The `EFK stack` consists of Elasticsearch, Fluent Bit, and Kibana, which together streamline the process of collecting, processing, and visualizing logs.
+
+- **[Elasticsearch](https://www.elastic.co/elasticsearch)**: NoSQL database based on the `Lucene search engine`.
+  It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
+- **[Fluent Bit](https://fluentbit.io/)**: Super fast, lightweight, and highly scalable logging and metrics processor and forwarder.
+- **[Kibana](https://www.elastic.co/kibana)**: Data visualization dashboard software for Elasticsearch.
+
+> \[!NOTE]\
+> Before installing the Helm chart on an EKS cluster, we must ensure the presence of a storage class and the AWS EBS CSI driver for Elasticsearch. Elasticsearch functions as a database and is often deployed as a StatefulSet. This deployment configuration necessitates the use of Persistent Volume Claims (PVCs), and to fulfill those claims, we require storage resources. To provision EBS (Elastic Block Store) volumes properly within the EKS cluster, we rely on a storage class with the AWS EBS provisioner. Therefore, the prerequisites for successful EBS provisioning in the EKS cluster are the storage class and the EBS CSI driver. Refer to [this blog](https://medium.com/@tech_18484/simplifying-kubernetes-logging-with-efk-stack-158da47ce982) for more details.
+
+### Working with EFK Stack
+
+1. Add the Helm repositories for the Elastic and Fluent charts
+
+   ```bash
+   helm repo add elastic https://helm.elastic.co
+   helm repo add fluent https://fluent.github.io/helm-charts
+   helm repo update
+   ```
+
+2. Refer to each Helm chart's default values to configure the charts accordingly
+
+   ```bash
+   # example: fluent-bit chart values
+   helm show values fluent/fluent-bit > fluentbit-values.yaml
+   ```
+
 ## Configuring the chart values
 
-For specific `values.yaml`, refer their specific charts and create their respective `values.yaml` files based on the dummy `values.yaml` file. You can also use the `example.*.yaml` files in the `root/` directory to view specific values for the chart values.
+For chart-specific `values.yaml` files, refer to each chart's documentation and create the respective `values.yaml` file based on the chart's default values. You can also use the `example.*.yaml` files in the `root/vars/` directory to view sample values for each chart.
 
+> \[!NOTE]\
+> Make sure to configure the values correctly for the kubernetes cluster you deploy to. If you are using minikube to test the deployment, edit the values accordingly, since minikube is a single-node kubernetes cluster. A minimal single-node override is sketched below.
+
 ## Infrastructure Setup
 
 Once we have all our chart `values.yaml` configured, we can apply our Terraform configuration to install the helm charts to our kubernetes cluster.
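+
+Before applying, double-check the values against the cluster you are targeting. As the note above mentions, minikube is a single-node cluster, so the Elasticsearch defaults in `example.elasticsearch.yaml` (3 replicas with hard anti-affinity) can never be scheduled there. The following is a minimal sketch of a single-node override, saved as `vars/elasticsearch_values.yaml` to match `example.tfvars`; the keys mirror the example values file, while the exact sizes are illustrative assumptions rather than tested numbers:
+
+```yaml
+# vars/elasticsearch_values.yaml -- single-node sketch for minikube
+replicas: 1
+minimumMasterNodes: 1
+
+# "hard" anti-affinity can never be satisfied with only one node
+antiAffinity: "soft"
+
+# shrink the footprint so the pod fits on a small node
+resources:
+  requests:
+    cpu: "500m"
+    memory: "1Gi"
+  limits:
+    cpu: "1000m"
+    memory: "2Gi"
+
+# keep the claim small; minikube's default StorageClass can provision it
+volumeClaimTemplate:
+  accessModes: ["ReadWriteOnce"]
+  resources:
+    requests:
+      storage: 5Gi
+```
+
+Kibana and Fluent Bit typically need no topology changes for a single node: the example Kibana values already run one replica, and Fluent Bit runs as a DaemonSet.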
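+
+With the values files in place, the apply flow from the `root/` directory looks roughly like the following sketch; `my.tfvars` is a hypothetical copy of `example.tfvars` pointing at your own values files:
+
+```bash
+cd root
+
+# download providers and wire up the local modules
+terraform init
+
+# review the planned helm releases before applying
+terraform plan -var-file=my.tfvars
+
+# the modules apply in order (istio, efk, kube-prometheus, then the app
+# charts), gated by the short time_sleep delays defined in main.tf
+terraform apply -var-file=my.tfvars
+```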
diff --git a/modules/efk/main.tf b/modules/efk/main.tf
new file mode 100644
index 0000000..df9fcbe
--- /dev/null
+++ b/modules/efk/main.tf
@@ -0,0 +1,36 @@
+resource "helm_release" "elasticsearch" {
+  name             = "elasticsearch"
+  namespace        = "efk"
+  create_namespace = true
+  repository       = "https://helm.elastic.co"
+  chart            = "elasticsearch"
+  timeout          = var.timeout
+  cleanup_on_fail  = true
+  force_update     = false
+  wait             = false
+  values           = [file(var.elasticsearch_values_file)]
+}
+resource "helm_release" "kibana" {
+  name             = "kibana"
+  namespace        = "efk"
+  create_namespace = true
+  repository       = "https://helm.elastic.co"
+  chart            = "kibana"
+  timeout          = var.timeout
+  cleanup_on_fail  = true
+  force_update     = false
+  wait             = false
+  values           = [file(var.kibana_values_file)]
+}
+resource "helm_release" "fluentbit" {
+  name             = "fluent-bit"
+  namespace        = "efk"
+  create_namespace = true
+  repository       = "https://fluent.github.io/helm-charts"
+  chart            = "fluent-bit"
+  timeout          = var.timeout
+  cleanup_on_fail  = true
+  force_update     = false
+  wait             = false
+  values           = [file(var.fluentbit_values_file)]
+}
diff --git a/modules/efk/variables.tf b/modules/efk/variables.tf
new file mode 100644
index 0000000..e877dbc
--- /dev/null
+++ b/modules/efk/variables.tf
@@ -0,0 +1,4 @@
+variable "timeout" {}
+variable "elasticsearch_values_file" {}
+variable "kibana_values_file" {}
+variable "fluentbit_values_file" {}
diff --git a/modules/namespace/main.tf b/modules/namespace/main.tf
index 3430f23..40d6917 100644
--- a/modules/namespace/main.tf
+++ b/modules/namespace/main.tf
@@ -40,3 +40,12 @@ resource "kubernetes_namespace" "prometheus" {
     name = "prometheus"
   }
 }
+
+resource "kubernetes_namespace" "efk" {
+  metadata {
+    # labels = {
+    #   istio-injection = "enabled"
+    # }
+    name = "efk"
+  }
+}
diff --git a/root/example.tfvars b/root/example.tfvars
index ece6557..df105db 100644
--- a/root/example.tfvars
+++ b/root/example.tfvars
@@ -1,7 +1,10 @@
 timeout                     = 600
-infra_values_file           = "./infra_values.yaml"
-webapp_values_file          = "./webapp_values.yaml"
-kube_prometheus_values_file = "./kube_prometheus_values.yaml"
+infra_values_file           = "./vars/infra_values.yaml"
+webapp_values_file          = "./vars/webapp_values.yaml"
+kube_prometheus_values_file = "./vars/kube_prometheus_values.yaml"
+elasticsearch_values_file   = "./vars/elasticsearch_values.yaml"
+kibana_values_file          = "./vars/kibana_values.yaml"
+fluentbit_values_file       = "./vars/fluentbit_values.yaml"
 chart_path                  = "../modules/charts"
-webapp_chart                = "webapp-helm-chart-1.1.3.tar.gz"
-infra_chart                 = "infra-helm-chart-1.4.0.tar.gz"
+webapp_chart                = "webapp-helm-chart-1.8.3.tar.gz"
+infra_chart                 = "infra-helm-chart-1.10.0.tar.gz"
diff --git a/root/main.tf b/root/main.tf
index ef41b87..313e598 100644
--- a/root/main.tf
+++ b/root/main.tf
@@ -30,14 +30,27 @@ module "istio_gateway" {
   timeout = var.timeout
 }
 
-
 resource "time_sleep" "install_istio_gateway" {
   depends_on      = [module.istio_gateway]
   create_duration = "20s"
 }
 
+module "logging_stack" {
+  depends_on                = [time_sleep.install_istio_gateway]
+  source                    = "../modules/efk"
+  timeout                   = var.timeout
+  elasticsearch_values_file = var.elasticsearch_values_file
+  kibana_values_file        = var.kibana_values_file
+  fluentbit_values_file     = var.fluentbit_values_file
+}
+
+resource "time_sleep" "install_logging_stack" {
+  depends_on      = [module.logging_stack]
+  create_duration = "20s"
+}
+
 module "monitoring_stack" {
-  depends_on                  = [time_sleep.install_istio_gateway]
+  depends_on                  = [time_sleep.install_logging_stack]
   source                      = "../modules/kube_prometheus"
   timeout                     =
var.timeout kube_prometheus_values_file = var.kube_prometheus_values_file @@ -47,6 +60,7 @@ resource "time_sleep" "install_monitoring_stack" { depends_on = [module.monitoring_stack] create_duration = "20s" } + module "infra_dependencies" { depends_on = [time_sleep.install_monitoring_stack] source = "../modules/infra_helm" diff --git a/root/variables.tf b/root/variables.tf index 541983c..f82a382 100644 --- a/root/variables.tf +++ b/root/variables.tf @@ -22,6 +22,24 @@ variable "kube_prometheus_values_file" { default = "./kube_prometheus_values.yaml" } +variable "elasticsearch_values_file" { + type = string + description = "The path to the elasticsearch_values.yaml file for the helm chart" + default = "./elasticsearch_values.yaml" +} + +variable "kibana_values_file" { + type = string + description = "The path to the kibana_values.yaml file for the helm chart" + default = "./kibana_values.yaml" +} + +variable "fluentbit_values_file" { + type = string + description = "The path to the fluentbit_values.yaml file for the helm chart" + default = "./fluentbit_values.yaml" +} + variable "chart_path" { type = string description = "The path to the charts/ directory to install local charts" diff --git a/root/vars/example.elasticsearch.yaml b/root/vars/example.elasticsearch.yaml new file mode 100644 index 0000000..2e59a02 --- /dev/null +++ b/root/vars/example.elasticsearch.yaml @@ -0,0 +1,357 @@ +--- +clusterName: "elasticsearch" +nodeGroup: "master" + +# The service that non master groups will try to connect to when joining the cluster +# This should be set to clusterName + "-" + nodeGroup for your master group +masterService: "" + +# Elasticsearch roles that will be applied to this nodeGroup +# These will be set as environment variables. E.g. node.roles=master +# https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html#node-roles +roles: + - master + - data + - data_content + - data_hot + - data_warm + - data_cold + - ingest + - ml + - remote_cluster_client + - transform + +replicas: 3 +minimumMasterNodes: 2 + +esMajorVersion: "" + +# Allows you to add any config files in /usr/share/elasticsearch/config/ +# such as elasticsearch.yml and log4j2.properties +esConfig: {} +# elasticsearch.yml: | +# key: +# nestedkey: value +# log4j2.properties: | +# key = value + +createCert: true + +esJvmOptions: {} +# processors.options: | +# -XX:ActiveProcessorCount=3 + +# Extra environment variables to append to this nodeGroup +# This will be appended to the current 'env:' key. You can use any of the kubernetes env +# syntax here +extraEnvs: [] +# - name: MY_ENVIRONMENT_VAR +# value: the_value_goes_here + +# Allows you to load environment variables from kubernetes secret or config map +envFrom: [] +# - secretRef: +# name: env-secret +# - configMapRef: +# name: config-map + +# Disable it to use your own elastic-credential Secret. 
+secret: + enabled: true + password: "" # generated randomly if not defined + +# A list of secrets and their paths to mount inside the pod +# This is useful for mounting certificates for security and for mounting +# the X-Pack license +secretMounts: [] +# - name: elastic-certificates +# secretName: elastic-certificates +# path: /usr/share/elasticsearch/config/certs +# defaultMode: 0755 + +hostAliases: [] +#- ip: "127.0.0.1" +# hostnames: +# - "foo.local" +# - "bar.local" + +image: "docker.elastic.co/elasticsearch/elasticsearch" +imageTag: "8.5.1" +imagePullPolicy: "IfNotPresent" + +podAnnotations: {} +# iam.amazonaws.com/role: es-cluster + +# additionals labels +labels: {} + +esJavaOpts: "" # example: "-Xmx1g -Xms1g" + +resources: + requests: + cpu: "1000m" + memory: "2Gi" + limits: + cpu: "1000m" + memory: "2Gi" + +initResources: {} +# limits: +# cpu: "25m" +# # memory: "128Mi" +# requests: +# cpu: "25m" +# memory: "128Mi" + +networkHost: "0.0.0.0" + +volumeClaimTemplate: + accessModes: ["ReadWriteOnce"] + resources: + requests: + storage: 30Gi + +rbac: + create: false + serviceAccountAnnotations: {} + serviceAccountName: "" + automountToken: true + +podSecurityPolicy: + create: false + name: "" + spec: + privileged: true + fsGroup: + rule: RunAsAny + runAsUser: + rule: RunAsAny + seLinux: + rule: RunAsAny + supplementalGroups: + rule: RunAsAny + volumes: + - secret + - configMap + - persistentVolumeClaim + - emptyDir + +persistence: + enabled: true + labels: + # Add default labels for the volumeClaimTemplate of the StatefulSet + enabled: false + annotations: {} + +extraVolumes: [] +# - name: extras +# emptyDir: {} + +extraVolumeMounts: [] +# - name: extras +# mountPath: /usr/share/extras +# readOnly: true + +extraContainers: [] +# - name: do-something +# image: busybox +# command: ['do', 'something'] + +extraInitContainers: [] +# - name: do-something +# image: busybox +# command: ['do', 'something'] + +# This is the PriorityClass settings as defined in +# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass +priorityClassName: "" + +# By default this will make sure two pods don't end up on the same node +# Changing this to a region would allow you to spread pods across regions +antiAffinityTopologyKey: "kubernetes.io/hostname" + +# Hard means that by default pods will only be scheduled if there are enough nodes for them +# and that they will never end up on the same node. Setting this to soft will do this "best effort" +antiAffinity: "hard" + +# This is the node affinity settings as defined in +# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature +nodeAffinity: {} + +# The default is to deploy all pods serially. By setting this to parallel all pods are started at +# the same time when bootstrapping the cluster +podManagementPolicy: "Parallel" + +# The environment variables injected by service links are not used, but can lead to slow Elasticsearch boot times when +# there are many services in the current namespace. +# If you experience slow pod startups you probably want to set this to `false`. 
+enableServiceLinks: true + +protocol: https +httpPort: 9200 +transportPort: 9300 + +service: + enabled: true + labels: {} + labelsHeadless: {} + type: ClusterIP + # Consider that all endpoints are considered "ready" even if the Pods themselves are not + # https://kubernetes.io/docs/reference/kubernetes-api/service-resources/service-v1/#ServiceSpec + publishNotReadyAddresses: false + nodePort: "" + annotations: {} + httpPortName: http + transportPortName: transport + loadBalancerIP: "" + loadBalancerSourceRanges: [] + externalTrafficPolicy: "" + +updateStrategy: RollingUpdate + +# This is the max unavailable setting for the pod disruption budget +# The default value of 1 will make sure that kubernetes won't allow more than 1 +# of your pods to be unavailable during maintenance +maxUnavailable: 1 + +podSecurityContext: + fsGroup: 1000 + runAsUser: 1000 + +securityContext: + capabilities: + drop: + - ALL + # readOnlyRootFilesystem: true + runAsNonRoot: true + runAsUser: 1000 + +# How long to wait for elasticsearch to stop gracefully +terminationGracePeriod: 120 + +sysctlVmMaxMapCount: 262144 + +readinessProbe: + failureThreshold: 3 + initialDelaySeconds: 10 + periodSeconds: 10 + successThreshold: 3 + timeoutSeconds: 5 + +# https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html#request-params wait_for_status +clusterHealthCheckParams: "wait_for_status=green&timeout=1s" + +## Use an alternate scheduler. +## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ +## +schedulerName: "" + +imagePullSecrets: [] +nodeSelector: {} +tolerations: [] + +# Enabling this will publicly expose your Elasticsearch instance. +# Only enable this if you have security enabled on your cluster +ingress: + enabled: false + annotations: {} + # kubernetes.io/ingress.class: nginx + # kubernetes.io/tls-acme: "true" + className: "nginx" + pathtype: ImplementationSpecific + hosts: + - host: chart-example.local + paths: + - path: / + tls: [] + # - secretName: chart-example-tls + # hosts: + # - chart-example.local + +nameOverride: "" +fullnameOverride: "" +healthNameOverride: "" + +lifecycle: {} +# preStop: +# exec: +# command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"] +# postStart: +# exec: +# command: +# - bash +# - -c +# - | +# #!/bin/bash +# # Add a template to adjust number of shards/replicas +# TEMPLATE_NAME=my_template +# INDEX_PATTERN="logstash-*" +# SHARD_COUNT=8 +# REPLICA_COUNT=1 +# ES_URL=http://localhost:9200 +# while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done +# curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":['\""$INDEX_PATTERN"\"'],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}' + +sysctlInitContainer: + enabled: true + +keystore: [] + +networkPolicy: + ## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now. + ## In order for a Pod to access Elasticsearch, it needs to have the following label: + ## {{ template "uname" . 
}}-client: "true" + ## Example for default configuration to access HTTP port: + ## elasticsearch-master-http-client: "true" + ## Example for default configuration to access transport port: + ## elasticsearch-master-transport-client: "true" + + http: + enabled: false + ## if explicitNamespacesSelector is not set or set to {}, only client Pods being in the networkPolicy's namespace + ## and matching all criteria can reach the DB. + ## But sometimes, we want the Pods to be accessible to clients from other namespaces, in this case, we can use this + ## parameter to select these namespaces + ## + # explicitNamespacesSelector: + # # Accept from namespaces with all those different rules (only from whitelisted Pods) + # matchLabels: + # role: frontend + # matchExpressions: + # - {key: role, operator: In, values: [frontend]} + + ## Additional NetworkPolicy Ingress "from" rules to set. Note that all rules are OR-ed. + ## + # additionalRules: + # - podSelector: + # matchLabels: + # role: frontend + # - podSelector: + # matchExpressions: + # - key: role + # operator: In + # values: + # - frontend + + transport: + ## Note that all Elasticsearch Pods can talk to themselves using transport port even if enabled. + enabled: false + # explicitNamespacesSelector: + # matchLabels: + # role: frontend + # matchExpressions: + # - {key: role, operator: In, values: [frontend]} + # additionalRules: + # - podSelector: + # matchLabels: + # role: frontend + # - podSelector: + # matchExpressions: + # - key: role + # operator: In + # values: + # - frontend + +tests: + enabled: true + diff --git a/root/vars/example.fluentbit.yaml b/root/vars/example.fluentbit.yaml new file mode 100644 index 0000000..3a43e87 --- /dev/null +++ b/root/vars/example.fluentbit.yaml @@ -0,0 +1,497 @@ +# Default values for fluent-bit. 
+ +# kind -- DaemonSet or Deployment +kind: DaemonSet + +# replicaCount -- Only applicable if kind=Deployment +replicaCount: 1 + +image: + repository: cr.fluentbit.io/fluent/fluent-bit + # Overrides the image tag whose default is {{ .Chart.AppVersion }} + # Set to "-" to not use the default value + tag: + digest: + pullPolicy: Always + +testFramework: + enabled: true + namespace: + image: + repository: busybox + pullPolicy: Always + tag: latest + digest: + +imagePullSecrets: [] +nameOverride: "" +fullnameOverride: "" + +serviceAccount: + create: true + annotations: {} + name: + +rbac: + create: true + nodeAccess: false + eventsAccess: false + +# Configure podsecuritypolicy +# Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/ +# from Kubernetes 1.25, PSP is deprecated +# See: https://kubernetes.io/blog/2022/08/23/kubernetes-v1-25-release/#pod-security-changes +# We automatically disable PSP if Kubernetes version is 1.25 or higher +podSecurityPolicy: + create: false + annotations: {} + +# OpenShift-specific configuration +openShift: + enabled: false + securityContextConstraints: + # Create SCC for Fluent-bit and allow use it + create: true + name: "" + annotations: {} + # Use existing SCC in cluster, rather then create new one + existingName: "" + +podSecurityContext: {} +# fsGroup: 2000 + +hostNetwork: false +dnsPolicy: ClusterFirst + +dnsConfig: {} +# nameservers: +# - 1.2.3.4 +# searches: +# - ns1.svc.cluster-domain.example +# - my.dns.search.suffix +# options: +# - name: ndots +# value: "2" +# - name: edns0 + +hostAliases: [] +# - ip: "1.2.3.4" +# hostnames: +# - "foo.local" +# - "bar.local" + +securityContext: {} +# capabilities: +# drop: +# - ALL +# readOnlyRootFilesystem: true +# runAsNonRoot: true +# runAsUser: 1000 + +service: + type: ClusterIP + port: 2020 + loadBalancerClass: + loadBalancerSourceRanges: [] + labels: {} + # nodePort: 30020 + # clusterIP: 172.16.10.1 + annotations: {} +# prometheus.io/path: "/api/v1/metrics/prometheus" +# prometheus.io/port: "2020" +# prometheus.io/scrape: "true" + +serviceMonitor: + enabled: false + # namespace: monitoring + # interval: 10s + # scrapeTimeout: 10s + # selector: + # prometheus: my-prometheus + # ## metric relabel configs to apply to samples before ingestion. + # ## + # metricRelabelings: + # - sourceLabels: [__meta_kubernetes_service_label_cluster] + # targetLabel: cluster + # regex: (.*) + # replacement: ${1} + # action: replace + # ## relabel configs to apply to samples after ingestion. + # ## + # relabelings: + # - sourceLabels: [__meta_kubernetes_pod_node_name] + # separator: ; + # regex: ^(.*)$ + # targetLabel: nodename + # replacement: $1 + # action: replace + # scheme: "" + # tlsConfig: {} + + ## Beare in mind if youn want to collec metrics from a different port + ## you will need to configure the new ports on the extraPorts property. + additionalEndpoints: [] + # - port: metrics + # path: /metrics + # interval: 10s + # scrapeTimeout: 10s + # scheme: "" + # tlsConfig: {} + # # metric relabel configs to apply to samples before ingestion. + # # + # metricRelabelings: + # - sourceLabels: [__meta_kubernetes_service_label_cluster] + # targetLabel: cluster + # regex: (.*) + # replacement: ${1} + # action: replace + # # relabel configs to apply to samples after ingestion. 
+ # # + # relabelings: + # - sourceLabels: [__meta_kubernetes_pod_node_name] + # separator: ; + # regex: ^(.*)$ + # targetLabel: nodename + # replacement: $1 + # action: replace + +prometheusRule: + enabled: false +# namespace: "" +# additionalLabels: {} +# rules: +# - alert: NoOutputBytesProcessed +# expr: rate(fluentbit_output_proc_bytes_total[5m]) == 0 +# annotations: +# message: | +# Fluent Bit instance {{ $labels.instance }}'s output plugin {{ $labels.name }} has not processed any +# bytes for at least 15 minutes. +# summary: No Output Bytes Processed +# for: 15m +# labels: +# severity: critical + +dashboards: + enabled: false + labelKey: grafana_dashboard + labelValue: 1 + annotations: {} + namespace: "" + +lifecycle: {} +# preStop: +# exec: +# command: ["/bin/sh", "-c", "sleep 20"] + +livenessProbe: + httpGet: + path: / + port: http + +readinessProbe: + httpGet: + path: /api/v1/health + port: http + +resources: {} +# limits: +# cpu: 100m +# memory: 128Mi +# requests: +# cpu: 100m +# memory: 128Mi + +## only available if kind is Deployment +ingress: + enabled: false + ingressClassName: "" + annotations: {} + # kubernetes.io/ingress.class: nginx + # kubernetes.io/tls-acme: "true" + hosts: [] + # - host: fluent-bit.example.tld + extraHosts: [] + # - host: fluent-bit-extra.example.tld + ## specify extraPort number + # port: 5170 + tls: [] + # - secretName: fluent-bit-example-tld + # hosts: + # - fluent-bit.example.tld + +## only available if kind is Deployment +autoscaling: + vpa: + enabled: false + + annotations: {} + + # List of resources that the vertical pod autoscaler can control. Defaults to cpu and memory + controlledResources: [] + + # Define the max allowed resources for the pod + maxAllowed: {} + # cpu: 200m + # memory: 100Mi + # Define the min allowed resources for the pod + minAllowed: {} + # cpu: 200m + # memory: 100Mi + + updatePolicy: + # Specifies whether recommended updates are applied when a Pod is started and whether recommended updates + # are applied during the life of a Pod. Possible values are "Off", "Initial", "Recreate", and "Auto". + updateMode: Auto + + enabled: false + minReplicas: 1 + maxReplicas: 3 + targetCPUUtilizationPercentage: 75 + # targetMemoryUtilizationPercentage: 75 + ## see https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics + customRules: [] + # - type: Pods + # pods: + # metric: + # name: packets-per-second + # target: + # type: AverageValue + # averageValue: 1k + ## see https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior + behavior: {} +# scaleDown: +# policies: +# - type: Pods +# value: 4 +# periodSeconds: 60 +# - type: Percent +# value: 10 +# periodSeconds: 60 + +## only available if kind is Deployment +podDisruptionBudget: + enabled: false + annotations: {} + maxUnavailable: "30%" + +nodeSelector: {} + +tolerations: [] + +affinity: {} + +labels: {} + +annotations: {} + +podAnnotations: {} + +podLabels: {} + +## How long (in seconds) a pods needs to be stable before progressing the deployment +## +minReadySeconds: + +## How long (in seconds) a pod may take to exit (useful with lifecycle hooks to ensure lb deregistration is done) +## +terminationGracePeriodSeconds: + +priorityClassName: "" + +env: [] +# - name: FOO +# value: "bar" + +# The envWithTpl array below has the same usage as "env", but is using the tpl function to support templatable string. 
+# This can be useful when you want to pass dynamic values to the Chart using the helm argument "--set =" +# https://helm.sh/docs/howto/charts_tips_and_tricks/#using-the-tpl-function +envWithTpl: [] +# - name: FOO_2 +# value: "{{ .Values.foo2 }}" +# +# foo2: bar2 + +envFrom: [] + +extraContainers: [] +# - name: do-something +# image: busybox +# command: ['do', 'something'] + +flush: 1 + +metricsPort: 2020 + +extraPorts: [] +# - port: 5170 +# containerPort: 5170 +# protocol: TCP +# name: tcp +# nodePort: 30517 + +extraVolumes: [] + +extraVolumeMounts: [] + +updateStrategy: {} +# type: RollingUpdate +# rollingUpdate: +# maxUnavailable: 1 + +# Make use of a pre-defined configmap instead of the one templated here +existingConfigMap: "" + +networkPolicy: + enabled: false +# ingress: +# from: [] + +luaScripts: {} + +## https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/classic-mode/configuration-file +config: + service: | + [SERVICE] + Daemon Off + Flush {{ .Values.flush }} + Log_Level {{ .Values.logLevel }} + Parsers_File /fluent-bit/etc/parsers.conf + Parsers_File /fluent-bit/etc/conf/custom_parsers.conf + HTTP_Server On + HTTP_Listen 0.0.0.0 + HTTP_Port {{ .Values.metricsPort }} + Health_Check On + + ## https://docs.fluentbit.io/manual/pipeline/inputs + inputs: | + [INPUT] + Name tail + Path /var/log/containers/*.log + multiline.parser docker, cri + Tag kube.* + Mem_Buf_Limit 5MB + Skip_Long_Lines On + + [INPUT] + Name systemd + Tag host.* + Systemd_Filter _SYSTEMD_UNIT=kubelet.service + Read_From_Tail On + + ## https://docs.fluentbit.io/manual/pipeline/filters + filters: | + [FILTER] + Name kubernetes + Match kube.* + Merge_Log On + Keep_Log Off + K8S-Logging.Parser On + K8S-Logging.Exclude On + + ## https://docs.fluentbit.io/manual/pipeline/outputs + outputs: | + [OUTPUT] + Name es + Match kube.* + Host elasticsearch-master + Logstash_Format On + Retry_Limit False + + [OUTPUT] + Name es + Match host.* + Host elasticsearch-master + Logstash_Format On + Logstash_Prefix node + Retry_Limit False + + ## https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/classic-mode/upstream-servers + ## This configuration is deprecated, please use `extraFiles` instead. + upstream: {} + + ## https://docs.fluentbit.io/manual/pipeline/parsers + customParsers: | + [PARSER] + Name docker_no_time + Format json + Time_Keep Off + Time_Key time + Time_Format %Y-%m-%dT%H:%M:%S.%L + + # This allows adding more files with arbitary filenames to /fluent-bit/etc/conf by providing key/value pairs. + # The key becomes the filename, the value becomes the file content. 
+ extraFiles: {} +# upstream.conf: | +# [UPSTREAM] +# upstream1 +# +# [NODE] +# name node-1 +# host 127.0.0.1 +# port 43000 +# example.conf: | +# [OUTPUT] +# Name example +# Match foo.* +# Host bar + +# The config volume is mounted by default, either to the existingConfigMap value, or the default of "fluent-bit.fullname" +volumeMounts: + - name: config + mountPath: /fluent-bit/etc/conf + +daemonSetVolumes: + - name: varlog + hostPath: + path: /var/log + - name: varlibdockercontainers + hostPath: + path: /var/lib/docker/containers + - name: etcmachineid + hostPath: + path: /etc/machine-id + type: File + +daemonSetVolumeMounts: + - name: varlog + mountPath: /var/log + - name: varlibdockercontainers + mountPath: /var/lib/docker/containers + readOnly: true + - name: etcmachineid + mountPath: /etc/machine-id + readOnly: true + +command: + - /fluent-bit/bin/fluent-bit + +args: + - --workdir=/fluent-bit/etc + - --config=/fluent-bit/etc/conf/fluent-bit.conf + +# This supports either a structured array or a templatable string +initContainers: [] + +# Array mode +# initContainers: +# - name: do-something +# image: bitnami/kubectl:1.22 +# command: ['kubectl', 'version'] + +# String mode +# initContainers: |- +# - name: do-something +# image: bitnami/kubectl:{{ .Capabilities.KubeVersion.Major }}.{{ .Capabilities.KubeVersion.Minor }} +# command: ['kubectl', 'version'] + +logLevel: info + +hotReload: + enabled: false + image: + repository: ghcr.io/jimmidyson/configmap-reload + tag: v0.11.1 + digest: + pullPolicy: IfNotPresent + resources: {} + diff --git a/root/example.infra.yaml b/root/vars/example.infra.yaml similarity index 100% rename from root/example.infra.yaml rename to root/vars/example.infra.yaml diff --git a/root/vars/example.kibana.yaml b/root/vars/example.kibana.yaml new file mode 100644 index 0000000..e2e3b2c --- /dev/null +++ b/root/vars/example.kibana.yaml @@ -0,0 +1,176 @@ +--- +elasticsearchHosts: "https://elasticsearch-master:9200" +elasticsearchCertificateSecret: elasticsearch-master-certs +elasticsearchCertificateAuthoritiesFile: ca.crt +elasticsearchCredentialSecret: elasticsearch-master-credentials + +replicas: 1 + +# Extra environment variables to append to this nodeGroup +# This will be appended to the current 'env:' key. 
You can use any of the kubernetes env +# syntax here +extraEnvs: + - name: "NODE_OPTIONS" + value: "--max-old-space-size=1800" +# - name: MY_ENVIRONMENT_VAR +# value: the_value_goes_here + +# Allows you to load environment variables from kubernetes secret or config map +envFrom: [] +# - secretRef: +# name: env-secret +# - configMapRef: +# name: config-map + +# A list of secrets and their paths to mount inside the pod +# This is useful for mounting certificates for security and for mounting +# the X-Pack license +secretMounts: [] +# - name: kibana-keystore +# secretName: kibana-keystore +# path: /usr/share/kibana/data/kibana.keystore +# subPath: kibana.keystore # optional + +hostAliases: [] +#- ip: "127.0.0.1" +# hostnames: +# - "foo.local" +# - "bar.local" + +image: "docker.elastic.co/kibana/kibana" +imageTag: "8.5.1" +imagePullPolicy: "IfNotPresent" + +# additionals labels +labels: {} + +annotations: {} + +podAnnotations: {} +# iam.amazonaws.com/role: es-cluster + +resources: + requests: + cpu: "1000m" + memory: "2Gi" + limits: + cpu: "1000m" + memory: "2Gi" + +protocol: http + +serverHost: "0.0.0.0" + +healthCheckPath: "/app/kibana" + +# Allows you to add any config files in /usr/share/kibana/config/ +# such as kibana.yml +kibanaConfig: {} +# kibana.yml: | +# key: +# nestedkey: value + +# If Pod Security Policy in use it may be required to specify security context as well as service account + +podSecurityContext: + fsGroup: 1000 + +securityContext: + capabilities: + drop: + - ALL + # readOnlyRootFilesystem: true + runAsNonRoot: true + runAsUser: 1000 + +serviceAccount: "" + +# Whether or not to automount the service account token in the pod. Normally, Kibana does not need this +automountToken: true + +# This is the PriorityClass settings as defined in +# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass +priorityClassName: "" + +httpPort: 5601 + +extraVolumes: + [] + # - name: extras + # emptyDir: {} + +extraVolumeMounts: + [] + # - name: extras + # mountPath: /usr/share/extras + # readOnly: true + # + +extraContainers: [] +# - name: dummy-init +# image: busybox +# command: ['echo', 'hey'] + +extraInitContainers: [] +# - name: dummy-init +# image: busybox +# command: ['echo', 'hey'] + +updateStrategy: + type: "Recreate" + +service: + type: ClusterIP + loadBalancerIP: "" + port: 5601 + nodePort: "" + labels: {} + annotations: {} + # cloud.google.com/load-balancer-type: "Internal" + # service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0 + # service.beta.kubernetes.io/azure-load-balancer-internal: "true" + # service.beta.kubernetes.io/openstack-internal-load-balancer: "true" + # service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true" + loadBalancerSourceRanges: [] + # 0.0.0.0/0 + httpPortName: http + +ingress: + enabled: false + className: "nginx" + pathtype: ImplementationSpecific + annotations: {} + # kubernetes.io/ingress.class: nginx + # kubernetes.io/tls-acme: "true" + hosts: + - host: kibana-example.local + paths: + - path: / + #tls: [] + # - secretName: chart-example-tls + # hosts: + # - chart-example.local + +readinessProbe: + failureThreshold: 3 + initialDelaySeconds: 10 + periodSeconds: 10 + successThreshold: 3 + timeoutSeconds: 5 + +imagePullSecrets: [] +nodeSelector: {} +tolerations: [] +affinity: {} + +nameOverride: "" +fullnameOverride: "" + +lifecycle: {} +# preStop: +# exec: +# command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"] +# postStart: +# exec: +# command: ["/bin/sh", 
"-c", "echo Hello from the postStart handler > /usr/share/message"] + diff --git a/root/example.prometheus_grafana.yaml b/root/vars/example.prometheus_grafana.yaml similarity index 100% rename from root/example.prometheus_grafana.yaml rename to root/vars/example.prometheus_grafana.yaml diff --git a/root/example.webapp.yaml b/root/vars/example.webapp.yaml similarity index 100% rename from root/example.webapp.yaml rename to root/vars/example.webapp.yaml