[helm] 0.1.2 release (#93)
schallert authored Feb 27, 2019
1 parent e4fd863 commit 7a0b16e
Showing 6 changed files with 22 additions and 8 deletions.
14 changes: 14 additions & 0 deletions CHANGELOG.md
@@ -0,0 +1,14 @@
# Changelog

## 0.1.2

* Update default cluster ConfigMap to include parameters required by latest m3db.
* Add event `patch` permission to default RBAC role

## 0.1.1

* TODO

## 0.1.0

* TODO
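
The 0.1.2 entry above adds the `patch` verb on events to the operator's default RBAC role. As a hypothetical illustration only (the surrounding rule layout is assumed, not taken from this commit), such a rule looks like:

```
# Illustrative Kubernetes RBAC rule; the change described in the changelog is
# the addition of "patch" to the verbs allowed on events.
rules:
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "patch"]
```
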
4 changes: 2 additions & 2 deletions README.md
@@ -50,7 +50,7 @@ helm install m3db/m3db-operator --namespace m3db-operator
With `kubectl` (will install in the `default` namespace):

```
kubectl apply -f https://raw.githubusercontent.com/m3db/m3db-operator/v0.1.1/bundle.yaml
kubectl apply -f https://raw.githubusercontent.com/m3db/m3db-operator/v0.1.2/bundle.yaml
```

## Managing Clusters
@@ -60,7 +60,7 @@ kubectl apply -f https://raw.githubusercontent.com/m3db/m3db-operator/v0.1.1/bun
Create a simple etcd cluster to store M3DB's topology:

```
kubectl apply -f https://raw.githubusercontent.com/m3db/m3db-operator/v0.1.1/example/etcd/etcd-basic.yaml
kubectl apply -f https://raw.githubusercontent.com/m3db/m3db-operator/v0.1.2/example/etcd/etcd-basic.yaml
```

Apply manifest with your zones specified for isolation groups:
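
The cluster manifest itself is collapsed in this diff. A rough, hypothetical sketch of such a manifest, assuming the operator's v1alpha1 `M3DBCluster` resource (the image tag, shard count, and zone names below are placeholders):

```
apiVersion: operator.m3db.io/v1alpha1
kind: M3DBCluster
metadata:
  name: simple-cluster
spec:
  image: quay.io/m3db/m3dbnode:latest
  replicationFactor: 3
  numberOfShards: 256
  # One isolation group per zone; substitute your cluster's zone names.
  isolationGroups:
  - name: us-east1-b
    numInstances: 1
  - name: us-east1-c
    numInstances: 1
  - name: us-east1-d
    numInstances: 1
```
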
2 changes: 1 addition & 1 deletion bundle.yaml
@@ -79,7 +79,7 @@ spec:
spec:
containers:
- name: m3db-operator
image: quay.io/m3db/m3db-operator:v0.1.1
image: quay.io/m3db/m3db-operator:v0.1.2
command:
- m3db-operator
imagePullPolicy: Always
6 changes: 3 additions & 3 deletions docs/getting_started/create_cluster.md
@@ -16,7 +16,7 @@ available, this will create a cluster that will not use persistent storage and w
the pods die:

```
kubectl apply -f https://raw.githubusercontent.com/m3db/m3db-operator/v0.1.1/example/etcd/etcd-basic.yaml
kubectl apply -f https://raw.githubusercontent.com/m3db/m3db-operator/v0.1.2/example/etcd/etcd-basic.yaml
# Verify etcd health once pods available
kubectl exec etcd-0 -- env ETCDCTL_API=3 etcdctl endpoint health
@@ -26,7 +26,7 @@ kubectl exec etcd-0 -- env ETCDCTL_API=3 etcdctl endpoint health
If you have remote storage available and would like to jump straight to using it, apply the following manifest for etcd
instead:
```
kubectl apply -f https://raw.githubusercontent.com/m3db/m3db-operator/v0.1.1/example/etcd/etcd-pd.yaml
kubectl apply -f https://raw.githubusercontent.com/m3db/m3db-operator/v0.1.2/example/etcd/etcd-pd.yaml
```

### M3DB
@@ -92,7 +92,7 @@ for performance reasons, and since M3DB already replicates your data.

Create an etcd cluster with persistent volumes:
```
kubectl apply -f https://raw.githubusercontent.com/m3db/m3db-operator/v0.1.1/example/etcd/etcd-pd.yaml
kubectl apply -f https://raw.githubusercontent.com/m3db/m3db-operator/v0.1.2/example/etcd/etcd-pd.yaml
```

We recommend modifying the `storageClassName` in the manifest to one that matches your cloud provider's fastest remote
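
The rest of that recommendation is cut off by the collapsed portion of the diff. For illustration, a hypothetical excerpt of where a `storageClassName` change lands in the etcd manifest's volume claim template (the claim name, class name, and size are placeholders):

```
volumeClaimTemplates:
- metadata:
    name: etcd-data            # placeholder claim name
  spec:
    accessModes: ["ReadWriteOnce"]
    storageClassName: fast     # swap in your provider's SSD-backed class
    resources:
      requests:
        storage: 50Gi
```
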
2 changes: 1 addition & 1 deletion helm/m3db-operator/Chart.yaml
@@ -1,6 +1,6 @@
apiVersion: v1
name: m3db-operator
version: 0.1.1
version: 0.1.2
# TODO(PS) - helm has issues with GKE's SemVer
# Error: Chart requires kubernetesVersion: >=1.10.6 which is incompatible with Kubernetes v1.10.7-gke.2
#
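
The constraint that TODO refers to would look roughly like the line below; it is hypothetical and not part of this chart. GKE reports versions such as v1.10.7-gke.2, which SemVer treats as a pre-release, so they fail a plain `>=1.10.6` range check.

```
# Hypothetical Chart.yaml line, presumably omitted because of the GKE issue above.
kubeVersion: ">=1.10.6"
```
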
2 changes: 1 addition & 1 deletion helm/m3db-operator/values.yaml
@@ -2,5 +2,5 @@ operator:
name: m3db-operator
image:
repository: quay.io/m3db/m3db-operator
tag: v0.1.1
tag: v0.1.2
environment: production
