feat: add EFK logging stack with custom chart values #16

Merged (2 commits) on Jan 23, 2024
65 changes: 63 additions & 2 deletions README.md
@@ -14,6 +14,7 @@ provider "helm" {

Based on the `KUBECONFIG` value, the helm chart will be installed on that particular cluster.

> [!IMPORTANT]
> Due to an ongoing issue with the Terraform Helm provider [[reference](https://github.com/hashicorp/terraform-provider-helm/issues/932)] that prevents the Terraform resource from pulling a chart from a private GitHub repository (even when a GitHub PAT is provided), we are forced to install the Helm chart locally.
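A local install with the `helm_release` resource can be sketched as follows (hypothetical resource name; the variables mirror the `chart_path` and `webapp_chart` entries used in `root/example.tfvars`):

```hcl
# Sketch: install a chart from a local archive instead of pulling it
# from a private GitHub repository (terraform-provider-helm issue #932).
resource "helm_release" "webapp" {
  name      = "webapp"
  namespace = "webapp"
  chart     = "${var.chart_path}/${var.webapp_chart}" # local .tar.gz archive
  timeout   = var.timeout
  values    = [file(var.webapp_values_file)]
}
```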

## Kubernetes Provider
@@ -85,7 +86,8 @@ Here are key aspects and advantages of Istio:

- (Optional) Install `istioctl`:

> [!NOTE]
> We will use this tool to analyze namespaces and to verify that pods have been injected with Istio sidecar containers.

```bash
brew install istioctl
istioctl version
istioctl analyze
```

> [!NOTE]
> Add the `sidecar.istio.io/inject: "false"` annotation to the metadata section of the pod template. This will prevent the Istio sidecar from being injected into that specific pod.
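In a Deployment manifest, the annotation sits on the pod template rather than on the Deployment itself; a minimal sketch (workload name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: no-sidecar-app
spec:
  selector:
    matchLabels:
      app: no-sidecar-app
  template:
    metadata:
      labels:
        app: no-sidecar-app
      annotations:
        # opt this pod out of Istio sidecar injection
        sidecar.istio.io/inject: "false"
    spec:
      containers:
        - name: app
          image: nginx
```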

## Monitoring Stack

@@ -133,10 +136,68 @@ Instead of installing the helm charts for these applications, we will use the cu
> You can read more about adding firewall rules for the GKE control plane nodes in the [GKE docs](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules).
> Alternatively, you can disable the hooks by setting `prometheusOperator.admissionWebhooks.enabled=false`.

## Logging Stack

We will use the [EFK stack](https://medium.com/@tech_18484/simplifying-kubernetes-logging-with-efk-stack-158da47ce982) to set up logging for our containerized applications (installed via custom Helm charts) on Kubernetes. The EFK stack consists of Elasticsearch, Fluent Bit, and Kibana, which together streamline collecting, processing, and visualizing logs.

- **[Elasticsearch](https://www.elastic.co/elasticsearch)**: A NoSQL database built on the Lucene search engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
- **[Fluent Bit](https://fluentbit.io/)**: A super fast, lightweight, and highly scalable logging and metrics processor and forwarder.
- **[Kibana](https://www.elastic.co/kibana)**: A data visualization dashboard for Elasticsearch.

> [!NOTE]
> Before installing the Helm chart on an EKS cluster, we must ensure that a storage class and the AWS EBS CSI driver are present for Elasticsearch. Elasticsearch functions as a database and is deployed as a stateful set, which requires Persistent Volume Claims (PVCs); those claims in turn need storage resources to back them. To provision EBS (Elastic Block Store) volumes within the EKS cluster, we rely on a storage class with the AWS EBS provisioner. The prerequisites for successful EBS provisioning in the EKS cluster are therefore the storage class and the EBS CSI driver. Refer to [this blog](https://medium.com/@tech_18484/simplifying-kubernetes-logging-with-efk-stack-158da47ce982) for more details.
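On EKS, these prerequisites can be checked and installed roughly as follows (a sketch; the cluster name placeholder and the IAM role wiring for the CSI driver are assumptions, see the linked blog for the full setup):

```bash
# verify that a storage class with the EBS provisioner exists
kubectl get storageclass

# install the EBS CSI driver as a managed EKS add-on
# (the driver also needs an IAM role with the AmazonEBSCSIDriverPolicy attached)
eksctl create addon --name aws-ebs-csi-driver --cluster <cluster-name>
```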

### Working with EFK Stack

- Add the Helm repositories for the Elastic and Fluent Bit charts:

```bash
helm repo add elastic https://helm.elastic.co
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
```

- Refer to the charts' default values to configure them accordingly:

```bash
# example: fluent-bit chart values
helm show values fluent/fluent-bit > fluentbit-values.yaml
```
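The generated `fluentbit-values.yaml` can then be edited to forward logs to the Elasticsearch service; a sketch of the `[OUTPUT]` section (host, credentials, and TLS settings are assumptions that must match your Elasticsearch install):

```yaml
config:
  outputs: |
    [OUTPUT]
        Name es
        Match kube.*
        Host elasticsearch-master.efk.svc.cluster.local
        Port 9200
        HTTP_User elastic
        HTTP_Passwd <elasticsearch-password>
        tls On
        tls.verify Off
        Suppress_Type_Name On
```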

- To verify that Elasticsearch is up and running:
  - Run the commands below to get the username and password from the `elasticsearch-master-credentials` secret:

```bash
# get elasticsearch username
kubectl get secrets --namespace=efk elasticsearch-master-credentials -ojsonpath='{.data.username}' | base64 -d
# get elasticsearch password (a trailing '%' in the output is not part of the password)
kubectl get secrets --namespace=efk elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d
```
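The secret values are base64-encoded, which is why both commands pipe through `base64 -d`; a stray `%` at the end of the output is just the shell marking a missing trailing newline, not part of the value. A quick illustration with a made-up secret value:

```shell
# kubectl stores secret data base64-encoded; decode to recover the original value
encoded='ZWxhc3RpYw=='
echo -n "$encoded" | base64 -d   # prints: elastic
```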

- Open `https://<LoadBalancer-IP>:9200` in a browser and enter the username and password when prompted; you should see a JSON response. Here `LoadBalancer-IP` is the `External IP` assigned to the elasticsearch service.
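The same check works from the command line (a sketch; substitute the external IP and the decoded credentials):

```bash
# -k skips certificate verification, since the chart uses a self-signed certificate
curl -k -u "elastic:<password>" "https://<LoadBalancer-IP>:9200"
```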

- To verify that Kibana is up and running:
  - Run the commands below to get the password and the Kibana service account token from the secrets in the `efk` namespace:

```bash
# get elasticsearch password (a trailing '%' in the output is not part of the password)
kubectl get secrets --namespace=efk elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d
# get Kibana service account token
kubectl get secrets --namespace=efk kibana-kibana-es-token -ojsonpath='{.data.token}' | base64 -d
```

- Open `https://<LoadBalancer-IP>:5601` in a browser and enter the password and service account token when prompted. Here `LoadBalancer-IP` is the `External IP` assigned to the kibana service.

> [!NOTE]
> Depending on the Kibana version installed, the dashboard might instead prompt for the elasticsearch username and password, in which case you do not need the service account token.

## Configuring the chart values

For each chart, refer to its documentation and create its `values.yaml` file based on the chart's dummy `values.yaml`. You can also use the `example.*.yaml` files in the `root/` directory to view sample values for each chart.

> [!NOTE]
> Make sure to configure the correct values for the Kubernetes cluster you deploy to. If you are using minikube to test the deployment, edit the values accordingly, since minikube is a single-node Kubernetes cluster.
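For example, the Elasticsearch chart defaults to three replicas with anti-affinity, which cannot all schedule on a single node; for minikube they can be relaxed along these lines (a sketch based on the elastic/elasticsearch chart's documented values):

```yaml
# elasticsearch values overrides for a single-node cluster
replicas: 1
minimumMasterNodes: 1
antiAffinity: "soft"   # allow pods to share a node
```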

## Infrastructure Setup

Once all the chart `values.yaml` files are configured, we can apply our Terraform configuration to install the Helm charts on our Kubernetes cluster.
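From the `root/` directory, the workflow can be sketched as (assuming a `terraform.tfvars` created from `example.tfvars`):

```bash
terraform init    # download the helm/kubernetes providers and modules
terraform plan    # review the releases that will be created
terraform apply   # install the charts onto the cluster
```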
36 changes: 36 additions & 0 deletions modules/efk/main.tf
@@ -0,0 +1,36 @@
resource "helm_release" "elasticsearch" {
name = "elasticsearch"
namespace = "efk"
create_namespace = true
repository = "https://helm.elastic.co"
chart = "elasticsearch"
timeout = var.timeout
cleanup_on_fail = true
force_update = false
wait = false
values = [file(var.elasticsearch_values_file)]
}

resource "helm_release" "kibana" {
name = "kibana"
namespace = "efk"
create_namespace = true
repository = "https://helm.elastic.co"
chart = "kibana"
timeout = var.timeout
cleanup_on_fail = true
force_update = false
wait = false
values = [file(var.kibana_values_file)]
}

resource "helm_release" "fluentbit" {
name = "fluent-bit"
namespace = "efk"
create_namespace = true
repository = "https://fluent.github.io/helm-charts"
chart = "fluent-bit"
timeout = var.timeout
cleanup_on_fail = true
force_update = false
wait = false
values = [file(var.fluentbit_values_file)]
}
4 changes: 4 additions & 0 deletions modules/efk/variables.tf
@@ -0,0 +1,4 @@
variable "timeout" { type = number }
variable "elasticsearch_values_file" { type = string }
variable "kibana_values_file" { type = string }
variable "fluentbit_values_file" { type = string }
9 changes: 9 additions & 0 deletions modules/namespace/main.tf
@@ -40,3 +40,12 @@ resource "kubernetes_namespace" "prometheus" {
name = "prometheus"
}
}

resource "kubernetes_namespace" "efk" {
metadata {
# labels = {
# istio-injection = "enabled"
# }
name = "efk"
}
}
13 changes: 8 additions & 5 deletions root/example.tfvars
@@ -1,7 +1,10 @@
timeout = 600
infra_values_file = "./vars/infra_values.yaml"
webapp_values_file = "./vars/webapp_values.yaml"
kube_prometheus_values_file = "./vars/kube_prometheus_values.yaml"
elasticsearch_values_file = "./vars/elasticsearch_values.yaml"
kibana_values_file = "./vars/kibana_values.yaml"
fluentbit_values_file = "./vars/fluentbit_values.yaml"
chart_path = "../modules/charts"
webapp_chart = "webapp-helm-chart-1.8.3.tar.gz"
infra_chart = "infra-helm-chart-1.10.0.tar.gz"
18 changes: 16 additions & 2 deletions root/main.tf
@@ -30,14 +30,27 @@ module "istio_gateway" {
timeout = var.timeout
}


resource "time_sleep" "install_istio_gateway" {
depends_on = [module.istio_gateway]
create_duration = "20s"
}

module "logging_stack" {
depends_on = [time_sleep.install_istio_gateway]
source = "../modules/efk"
timeout = var.timeout
elasticsearch_values_file = var.elasticsearch_values_file
kibana_values_file = var.kibana_values_file
fluentbit_values_file = var.fluentbit_values_file
}

resource "time_sleep" "install_logging_stack" {
depends_on = [module.logging_stack]
create_duration = "20s"
}

module "monitoring_stack" {
depends_on = [time_sleep.install_logging_stack]
source = "../modules/kube_prometheus"
timeout = var.timeout
kube_prometheus_values_file = var.kube_prometheus_values_file
@@ -47,6 +60,7 @@ resource "time_sleep" "install_monitoring_stack" {
depends_on = [module.monitoring_stack]
create_duration = "20s"
}

module "infra_dependencies" {
depends_on = [time_sleep.install_monitoring_stack]
source = "../modules/infra_helm"
18 changes: 18 additions & 0 deletions root/variables.tf
@@ -22,6 +22,24 @@ variable "kube_prometheus_values_file" {
default = "./kube_prometheus_values.yaml"
}

variable "elasticsearch_values_file" {
type = string
description = "The path to the elasticsearch_values.yaml file for the helm chart"
default = "./elasticsearch_values.yaml"
}

variable "kibana_values_file" {
type = string
description = "The path to the kibana_values.yaml file for the helm chart"
default = "./kibana_values.yaml"
}

variable "fluentbit_values_file" {
type = string
description = "The path to the fluentbit_values.yaml file for the helm chart"
default = "./fluentbit_values.yaml"
}

variable "chart_path" {
type = string
description = "The path to the charts/ directory to install local charts"