This repo contains an initial set of cluster components to be installed and configured by eksctl through GitOps.
- AWS load balancer controller -- to easily expose services to the world.
- Cluster autoscaler -- to automatically add/remove nodes to/from your cluster based on its usage.
  - The autoscaler is configured as recommended in the AWS autoscaler docs.
  - See also the autoscaler docs for the AWS provider.
- Prometheus (its Alertmanager, its operator, its `node-exporter`, `kube-state-metrics`, and `metrics-server`) -- for powerful metrics & alerts.
- Grafana -- for a rich way to visualize metrics via dashboards you can create, explore, and share.
- Kubernetes dashboard -- Kubernetes' standard dashboard.
- Fluentd & Amazon's CloudWatch agent -- for cluster & containers' log collection, aggregation & analytics in CloudWatch.
- podinfo -- a toy demo application.
This profile requires a running EKS cluster with IAM policies for:
- ALB ingress
- auto-scaler
- CloudWatch
These policies can be added to a nodegroup by including the following `iam` options in your nodegroup config:
```yaml
nodeGroups:
  - iam:
      withAddonPolicies:
        albIngress: true
        autoScaler: true
        cloudWatch: true
```
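For context, here is a minimal sketch of a complete `ClusterConfig` carrying these options; the cluster name, region, instance type, and nodegroup sizing are hypothetical placeholders:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: app-dev-cluster    # hypothetical cluster name
  region: eu-west-1        # hypothetical region
nodeGroups:
  - name: ng-1             # hypothetical nodegroup name
    instanceType: m5.large # hypothetical instance type
    desiredCapacity: 2     # hypothetical size
    iam:
      withAddonPolicies:
        albIngress: true
        autoScaler: true
        cloudWatch: true
```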
N.B.: policies are configured at the nodegroup level. Therefore, depending on your use case, you may want to:
- add these policies to all nodegroups, or
- add node selectors to the ALB ingress, auto-scaler and CloudWatch pods, so that they are deployed on the nodes configured with these policies (see the sketch below).
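As a sketch of the second option: you could label the nodegroup that carries the policies in your eksctl config, then pin the relevant pods to it with a matching `nodeSelector`. The `role: addons` label and the nodegroup name are hypothetical examples:

```yaml
# eksctl nodegroup config: label the nodes that carry the IAM policies.
nodeGroups:
  - name: ng-addons    # hypothetical nodegroup name
    labels:
      role: addons     # hypothetical label
    iam:
      withAddonPolicies:
        albIngress: true
        autoScaler: true
        cloudWatch: true
```

```yaml
# Pod template fragment for the ALB ingress, auto-scaler or CloudWatch
# Deployments: schedule these pods only on the labelled nodes.
spec:
  template:
    spec:
      nodeSelector:
        role: addons   # must match the nodegroup label above
```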
For security reasons, this quickstart profile does not expose any workload publicly. However, should you want to access one of the workloads, various solutions are possible.
You could port-forward into a pod, so that you (and only you) could access it locally.
For example, for `demo/podinfo`:
- run: `kubectl --namespace demo port-forward service/podinfo 9898:9898`
- go to http://localhost:9898
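While the port-forward is running, you can also sanity-check the service from another terminal; the `/healthz` endpoint below is an assumption based on podinfo's defaults:

```console
$ curl http://localhost:9898/healthz
```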
You could expose a service publicly, at your own risk, via ALB ingress.
N.B.: the ALB ingress controller requires services:
- to be of `NodePort` type,
- to have the following annotations:

```yaml
annotations:
  kubernetes.io/ingress.class: alb
  alb.ingress.kubernetes.io/scheme: internet-facing
```
For any `NodePort` service:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ${name}
  namespace: ${namespace}
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: ${service-app-selector}
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: ${service-name}
              servicePort: 80
```
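Note that the `Ingress` above assumes a matching `NodePort` service already exists. A minimal sketch of such a service follows; the `targetPort` of 9898 is a hypothetical container port (podinfo's default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ${service-name}
  namespace: ${namespace}
spec:
  type: NodePort
  selector:
    app: ${service-app-selector}
  ports:
    - port: 80
      targetPort: 9898   # hypothetical container port (podinfo's default)
```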
A few minutes after deploying the above `Ingress` object, you should be able to see the public URL for the service:
```console
$ kubectl get ingress --namespace demo podinfo
NAME      HOSTS   ADDRESS                                                                      PORTS   AGE
podinfo   *       xxxxxxxx-${namespace}-${name}-xxxx-xxxxxxxxxx.${region}.elb.amazonaws.com   80      1s
```
For `HelmRelease` objects, you would have to configure `spec.values.service` and `spec.values.ingress`, e.g. for `demo/podinfo`:
```yaml
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: demo
spec:
  releaseName: podinfo
  chart:
    git: https://github.com/stefanprodan/podinfo
    ref: 3.0.0
    path: charts/podinfo
  values:
    service:
      enabled: true
      type: NodePort
    ingress:
      enabled: true
      annotations:
        kubernetes.io/ingress.class: alb
        alb.ingress.kubernetes.io/scheme: internet-facing
      path: /*
```
N.B.: the above `HelmRelease`:
- changes the type of `podinfo`'s service from its default value, `ClusterIP`, to `NodePort`,
- adds the annotations required for the ALB ingress controller to expose the service, and
- exposes all of `podinfo`'s URLs, so that all assets can be served over HTTP.
A few minutes after deploying the above `HelmRelease` object, you should be able to see the following `Ingress` object, and the public URL for `podinfo`:
```console
$ kubectl get ingress --namespace demo podinfo
NAME      HOSTS   ADDRESS                                                              PORTS   AGE
podinfo   *       xxxxxxxx-demo-podinfo-xxxx-xxxxxxxxxx.${region}.elb.amazonaws.com   80      1s
```
For a production-grade deployment, it's recommended to secure your endpoints with SSL. See Ingress annotations for SSL; a minimal sketch follows.
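As a sketch, terminating TLS at the ALB typically involves annotations along these lines; the ACM certificate ARN is a placeholder to replace with your own:

```yaml
annotations:
  kubernetes.io/ingress.class: alb
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
  alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:${region}:${account-id}:certificate/${certificate-id}
```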
Any sensitive service that needs to be exposed must have some form of authentication. To add authentication to Grafana, for example, see Grafana configuration. To add authentication to other components, please consult their documentation.
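For instance, assuming Grafana is deployed via a `HelmRelease` of the upstream Grafana chart (an assumption; adapt this to however Grafana is actually deployed in your cluster), basic authentication could be enforced with values along these lines:

```yaml
values:
  adminUser: admin
  adminPassword: ${strong-password}   # better: reference a Kubernetes Secret instead
  grafana.ini:
    auth.anonymous:
      enabled: false                  # require users to log in
```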
Create an issue, or log in to Weave Community Slack (#eksctl) (signup).
Weaveworks follows the CNCF Code of Conduct. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting a Weaveworks project maintainer, or Alexis Richardson ([email protected]).