Releases: m3db/m3db-operator
v0.5.0
0.5.0 includes a bug fix for passing cluster annotations to pods, as well as a backwards-compatible addition of a new base environment variable, `M3CLUSTER_ENVIRONMENT`, which contains the `${NAMESPACE}/${CLUSTER_NAME}`-formatted value used for the cluster's environment in etcd.
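For example, a custom configmap can pick up this value rather than hard-coding the environment. A minimal sketch, assuming the standard m3dbnode config layout and M3DB's `${VAR:default}`-style environment-variable expansion:

```yaml
db:
  config:
    service:
      # M3CLUSTER_ENVIRONMENT is set on each pod by the operator;
      # it expands to <namespace>/<cluster name>.
      env: ${M3CLUSTER_ENVIRONMENT:""}
      zone: embedded
      service: m3db
```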
v0.4.0
v0.3.0
0.3.0 focuses on behind-the-scenes reliability improvements. Changes such as using purpose-built M3DB health endpoints, using `PATCH` to do partial updates to resources not owned by the operator, and giving M3DB pods the `SYS_RESOURCE` capability by default should make operated clusters work in more environments with no changes.
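Note that if you set your own security context, the operator skips its `SYS_RESOURCE` default (per #147), so the capability can be added back explicitly. A hedged sketch, assuming the cluster spec's `securityContext` field (added in #107) takes a standard Kubernetes security context:

```yaml
spec:
  securityContext:
    capabilities:
      add:
        # Re-add the capability the operator would otherwise grant by default.
        - SYS_RESOURCE
```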
Users that have had etcd-related issues when deleting and recreating M3DB clusters will also be happy: by default, the operator now deletes the metadata associated with an M3DB cluster from etcd when the cluster is deleted. Users can set `keepEtcdDataOnDelete` to `true` on their cluster specs to disable this behavior.
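For example, a cluster spec that opts out of the new etcd cleanup might look like the following minimal sketch (the metadata is hypothetical; the API group assumes the operator's `operator.m3db.io/v1alpha1`):

```yaml
apiVersion: operator.m3db.io/v1alpha1
kind: M3DBCluster
metadata:
  name: my-cluster  # hypothetical name
spec:
  # Preserve this cluster's topology metadata in etcd after deletion.
  keepEtcdDataOnDelete: true
```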
- [ENHANCEMENT] Use Kubernetes 1.14 libraries (#167)
- [ENHANCEMENT] Add SYS_RESOURCE if security context not set (#147)
- [BUGFIX] Use patch instead of update for resources not owned by operator (#162)
- [ENHANCEMENT] Add HTTP JSONPB request method to client and update callers (#163)
- [ENHANCEMENT] Support image pull secrets (#160)
- [FEATURE] Add carbon ingester port config to cluster spec (#158)
- [FEATURE] Support custom annotations (#155)
- [ENHANCEMENT] Always create missing stateful sets (#148)
- [ENHANCEMENT] Use dbnode health/bootstrap endpoints (#135)
- [FEATURE] Clear data in etcd on cluster delete (#154) (#181)
- [ENHANCEMENT] Continuously reconcile operator CRD (#149)
- [ENHANCEMENT] Use CRD status subresource (#152)
- [DOCS] Update 0.2.0 breaking changes (#146)
- [ENHANCEMENT] Add better error messages for time parsing from yaml for namespaces (#144)
- [BUGFIX] Fix 0.2.0 migration script (#143)
- [DOCS] Include prometheus monitoring instructions (#140)
v0.2.0
The theme of this release is usability improvements and more granular control over node placement. Features such as specifying etcd endpoints directly on the cluster spec eliminate the need to provide a manual configuration for custom etcd endpoints, and per-cluster etcd environments allow users to colocate multiple M3DB clusters on a single etcd cluster.
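As a sketch of the first of these, etcd endpoints can be listed directly on the spec (assuming the `etcdEndpoints` field name from #99; the addresses are illustrative):

```yaml
spec:
  etcdEndpoints:
    - http://etcd-0.etcd.default.svc:2379
    - http://etcd-1.etcd.default.svc:2379
    - http://etcd-2.etcd.default.svc:2379
```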
Users can now specify more complex affinity terms, as well as taints that their clusters tolerate, to allow dedicating specific nodes to M3DB (see the sketch after this paragraph, and the affinity docs for more).
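A hedged sketch of both features, assuming the per-isolation-group `nodeAffinityTerms` shape from #106/#131 and standard Kubernetes tolerations (all keys and values here are illustrative):

```yaml
spec:
  isolationGroups:
    - name: group1
      numInstances: 3
      # Pin this group's pods to a specific zone.
      nodeAffinityTerms:
        - key: failure-domain.beta.kubernetes.io/zone
          values:
            - us-east1-b
  # Tolerate a taint used to dedicate nodes to M3DB.
  tolerations:
    - key: dedicated
      operator: Equal
      value: m3db
      effect: NoSchedule
```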
- [FEATURE] Allow specifying of etcd endpoints on M3DBCluster spec (#99)
- [FEATURE] Allow specifying security contexts for M3DB pods (#107)
- [FEATURE] Allow specifying tolerations of M3DB pods (#111)
- [FEATURE] Allow specifying pod priority classes (#119)
- [FEATURE] Use a dedicated etcd-environment per-cluster to support sharing etcd clusters (#99)
- [FEATURE] Support more granular node affinity per-isolation group (#106) (#131)
- [ENHANCEMENT] Change default M3DB bootstrapper config to recover more easily when an entire cluster is taken down (#112)
- [ENHANCEMENT] Build + release with Go 1.12 (#114)
- [ENHANCEMENT] Continuously reconcile configmaps (#118)
- [BUGFIX] Allow unknown protobuf fields to be unmarshalled (#117)
- [BUGFIX] Fix pod removal when removing more than 1 pod at a time (#125)
Breaking Changes
0.2.0 changes how M3DB stores its cluster topology in etcd to allow for multiple M3DB clusters to share an etcd cluster.
A migration script is provided to copy etcd data from the old format to the new format. If migrating an operated cluster, run that script (see the script for instructions), then perform a rolling restart of your M3DB pods by deleting them one at a time.
If using a custom configmap, this same change will require a modification to your configmap. See the
warning in the docs about how to ensure your configmap is compatible.
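For illustration only: the per-cluster environment takes the `${NAMESPACE}/${CLUSTER_NAME}` form noted above, so a compatible configmap's service section would carry a value shaped like the following (field names assume the standard m3dbnode config layout; the values are hypothetical):

```yaml
db:
  config:
    service:
      # New format: <namespace>/<cluster name>
      env: production/persistent-cluster
```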
v0.1.4
v0.1.3
v0.1.2
v0.1.1
Initial alpha release!
Initial release of the M3DB Operator!