An Event Sourced version of the backend for the FrontEnd solution.
I built this to prove that event sourcing works for a well-known project, Noviaal. Event sourcing fits really well: users of NoviBlog can update notes and add comments to any existing note, and being able to understand the sequence of events that leads/led to a certain state is gold when something does not work as intended.
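For example, a minimal sketch (with hypothetical names, not the actual Konomas model): the current state of a note is simply a left fold over its events, which is what makes that sequence inspectable.

```kotlin
import java.time.Instant

// Every change to a note is captured as an immutable event.
sealed interface NoteEvent { val noteId: String }
data class NoteCreated(override val noteId: String, val title: String, val body: String) : NoteEvent
data class NoteUpdated(override val noteId: String, val body: String) : NoteEvent
data class CommentAdded(override val noteId: String, val author: String, val text: String, val at: Instant) : NoteEvent

data class Note(val id: String, val title: String, val body: String, val comments: List<String> = emptyList())

// Replaying the journal reconstructs the state deterministically; inspecting the
// same journal tells you exactly how that state came to be.
fun replay(events: List<NoteEvent>): Note? =
  events.fold(null as Note?) { state, event ->
    when (event) {
      is NoteCreated -> Note(event.noteId, event.title, event.body)
      is NoteUpdated -> state?.copy(body = event.body)
      is CommentAdded -> state?.copy(comments = state.comments + "${event.author}: ${event.text}")
    }
  }
```

The stack: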
- Kotlin (tired of Java; Scala has no future as far as I can see)
- Spring WebFlux (almost all annotations eliminated)
- Netty (async, the best fit for WebFlux)
- Spring Actuator (free k8s health endpoints)
- Akka Persistence (the only viable event sourced option; sketched after this list)
- JUnit 5
- Gradle
- Jackson (the easiest and most boring option)
- Cassandra (event store) or
- R2DBC in case of Replicated Event Sourcing
- TSID for unique, historically sortable ids (also shown in the sketch below)
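A minimal sketch of how such an entity could look with Akka Persistence Typed (the Java DSL used from Kotlin) and a TSID-based id. It assumes the tsid-creator library and reuses the event/state types from the sketch above; the command and entity names are hypothetical, not the actual Konomas code.

```kotlin
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.javadsl.CommandHandler
import akka.persistence.typed.javadsl.EventHandler
import akka.persistence.typed.javadsl.EventSourcedBehavior
import com.github.f4b6a3.tsid.TsidCreator

sealed interface NoteCommand
data class CreateNote(val title: String, val body: String) : NoteCommand
data class UpdateNote(val body: String) : NoteCommand

// State is null until the first event is persisted.
class NoteEntity(private val noteId: String) :
  EventSourcedBehavior<NoteCommand, NoteEvent, Note?>(PersistenceId.of("Note", noteId)) {

  override fun emptyState(): Note? = null

  // Commands are turned into persisted events (validation omitted for brevity) ...
  override fun commandHandler(): CommandHandler<NoteCommand, NoteEvent, Note?> =
    newCommandHandlerBuilder()
      .forAnyState()
      .onCommand(CreateNote::class.java) { cmd ->
        Effect().persist(NoteCreated(noteId, cmd.title, cmd.body))
      }
      .onCommand(UpdateNote::class.java) { cmd ->
        Effect().persist(NoteUpdated(noteId, cmd.body))
      }
      .build()

  // ... and events are the only thing that changes state, both live and on replay.
  override fun eventHandler(): EventHandler<Note?, NoteEvent> =
    newEventHandlerBuilder()
      .forAnyState()
      .onEvent(NoteCreated::class.java) { event -> Note(event.noteId, event.title, event.body) }
      .onEvent(NoteUpdated::class.java) { state, event -> state?.copy(body = event.body) }
      .build()
}

// TSIDs are time-sorted, so ids sort in creation order (unlike random UUIDs).
fun newNoteId(): String = TsidCreator.getTsid().toString()
```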
GitHub Actions FTW! See .github/workflows/gradle.yml.
In order to use kubectl on your machine:

```
microk8s.kubectl config view --raw > $HOME/.kube/config
```
To deploy to the Kubernetes cluster at MiruVor, see deploy/deployment.yaml. The first deploy to my k8s cluster also needs deploy/service.yaml. To allow traffic from outside the namespace in k8s, deploy/ingress.yaml should also be applied.
To force a redeployment:

```
kubectl rollout restart -n default deployment konomas
```

But changing deploy/deployment.yaml also works as a redeployment trigger.
Connect to the new pod:

```
kubectl get pods
kubectl exec -i -t ||pod-name|| -- /bin/bash
```
echo -n "jvorhauer:{{microk8s docker registry token}}" | base64
kubectl create -f ~/Code/k8s/registryconfig.yaml
The image is stored in the registry of GitHub, ghcr.io. This registry requires authentication with a special JSON file, that is stored in a
k8s secret dockerregistry
. The JSON file and the YAML file to deploy it are in my ~/Code/k8s
folder.
Username and password are stored in environment variables
The noviblog api is proxied by an nginx running on enna. The configuration is in /etc/nginx/sites-available/noviblog-https.conf, which is soft-linked (ln -s) into /etc/nginx/sites-enabled. noviblog-http.conf redirects plain http traffic to the https site.
```
kubectl port-forward -n kube-system service/kubernetes-dashboard 8443:443
```

This way the dashboard is safely, and only locally, available at https://localhost:8443.
In order to provide more resilience against crashing k8s nodes and other anomalies, the use of Akka Cluster presents itself. However, Akka Cluster is an extra layer of complication that I would like to postpone until such resilience is really useful. With the introduction of Replicated Event Sourcing the learning curve seems less objectionable.
Get rid of:
- Valiktor: not maintained, last change was 4 years ago -> switch to Hibernate Validator, but without using annotations (see the sketch after this list)
- UUID: too much resource consumption, not sortable and no relevant information in the key -> TSID
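A minimal sketch of what annotation-free validation could look like with Hibernate Validator's programmatic constraint API, assuming Hibernate Validator 7+ with jakarta.validation; the Note type and its constraints are just examples.

```kotlin
import jakarta.validation.Validation
import jakarta.validation.Validator
import org.hibernate.validator.HibernateValidator
import org.hibernate.validator.cfg.defs.NotBlankDef
import org.hibernate.validator.cfg.defs.SizeDef

data class Note(val title: String, val body: String)

// All constraints are declared in code, keeping the domain model annotation-free.
val validator: Validator = run {
  val config = Validation.byProvider(HibernateValidator::class.java).configure()
  val mapping = config.createConstraintMapping()
  mapping.type(Note::class.java)
    .field("title").constraint(NotBlankDef()).constraint(SizeDef().min(1).max(100))
    .field("body").constraint(NotBlankDef())
  config.addMapping(mapping).buildValidatorFactory().validator
}

// Usage: an invalid note yields a non-empty set of violations.
fun main() {
  println(validator.validate(Note("", "hello")).map { it.propertyPath.toString() })
}
```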