
Would it be possible to use nfs-provisioner with openebs-jiva to get HA replication? #152

Open
keskival opened this issue Dec 23, 2022 · 4 comments

keskival commented Dec 23, 2022

Describe the problem/challenge you have
I need the high-availability replication that Jiva would otherwise provide, but Jiva does not support ReadWriteMany volumes.

I tried to make Jiva use openebs-rwx as its replica storage class, but that does not work because of the limitation above.

Describe the solution you'd like
Perhaps it would work the other way around: using a replicated Jiva storage class to back the NFS provisioner? It would be nice to have some documentation about this use case.

Edit: I have now tried exactly that, backing the NFS provisioner with a replicated OpenEBS Jiva storage class, and it works perfectly!

More information about my setup is here:
https://www.costacoders.es/news/2022-12-24_kubernetes/

Vote on this issue!

This is an invitation to the OpenEBS community to vote on issues.
Use the "reaction smiley face" up to the right of this comment to vote.

  • 👍 for "The project would be better with this feature added"
  • 👎 for "This feature will not enhance the project in a meaningful way"
@avishnu avishnu pinned this issue Dec 26, 2022
@avishnu avishnu added this to the 98o99987wo7oi milestone Dec 26, 2022

keskival commented Dec 26, 2022

I got this working with something like this:

apiVersion: openebs.io/v1alpha1
kind: JivaVolumePolicy
metadata:
  name: openebs-jiva-default-policy
  namespace: openebs
spec:
  replicaSC: openebs-hostpath
  target:
    replicationFactor: 3
---
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-jiva-csi-default
parameters:
  cas-type: jiva
  policy: openebs-jiva-default-policy
provisioner: jiva.csi.openebs.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    cas.openebs.io/config: |
      - name: NFSServerType
        value: "kernel"
      - name: BackendStorageClass
        value: "openebs-jiva-csi-default"
    openebs.io/cas-type: nfsrwx
  name: openebs-rwx
provisioner: openebs.io/nfsrwx
reclaimPolicy: Delete
volumeBindingMode: Immediate
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: BasePath
        value: "/var/openebs/basepath"
    openebs.io/cas-type: local
    storageclass.kubernetes.io/is-default-class: "true"
  name: openebs-hostpath
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
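
With these classes in place, a workload can request a ReadWriteMany volume from the openebs-rwx class, and the provisioner creates the Jiva-backed volume behind it. A minimal sketch of such a claim (the claim name and size here are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data          # hypothetical name, pick your own
spec:
  storageClassName: openebs-rwx
  accessModes:
    - ReadWriteMany          # the whole point of the NFS layer
  resources:
    requests:
      storage: 10Gi
```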

keskival commented Jan 19, 2023

With the above configuration, I think there is an issue with OpenEBS Jiva replication on the cluster.

It seems all the volume data for that PVC ends up in the NFS server pod's ephemeral store, under /var/snap/microk8s/common/var/lib/kubelet/pods/PODID/volumes/kubernetes.io~csi/PVCID/mount on the node.

The data is not passed on to Jiva at all: this mount directory on the NFS pod is not actually mounted.

That directory should presumably be a mount to somewhere, but it is not; the plain files are simply stored there on a single node.

The Jiva replicas, three per volume claim, are set up correctly, but the file data does not seem to reach them.

This appears to be an issue on the OpenEBS Jiva side, because it happens even without the Dynamic NFS Provisioner:
openebs-archive/jiva#367
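
One way to confirm the missing-mount hypothesis is to check whether the directory is actually a mount point. A small sketch, assuming a Linux node; the helper only reads /proc/self/mounts, and the commented usage line keeps PODID/PVCID as placeholders, as above:

```shell
#!/bin/sh
# is_mountpoint DIR: succeeds if DIR appears as a mount point in /proc/self/mounts.
is_mountpoint() {
    awk -v d="$1" '$2 == d { found = 1 } END { exit !found }' /proc/self/mounts
}

# Usage on the node (substitute the observed kubelet path):
# is_mountpoint /var/snap/microk8s/common/var/lib/kubelet/pods/PODID/volumes/kubernetes.io~csi/PVCID/mount \
#     && echo "backed by a real mount" || echo "plain directory on the node disk"

# Sanity check: the root filesystem is always a mount point.
is_mountpoint / && echo "OK"
```

The same check can be run inside the NFS server pod against /nfsshare (e.g. via kubectl exec), which should be a mount of the Jiva CSI volume if the backend wiring works.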

keskival commented Jan 19, 2023

By the way, there is an old blog article on this topic as well:
https://utkarshmani1997.medium.com/how-to-run-nfs-on-top-of-openebs-jiva-ca4158e82127
That guide creates the backing Jiva PVC manually and hands it to the Dynamic NFS Provisioner by hand, whereas I would like to use the Dynamic NFS Provisioner's backend storage class functionality, which does not seem to work completely in this case.
It does create the backing Jiva replicas, but apparently does not mount them into the NFS pod's /nfsshare. The backing volume is bound as a PVC, but in the node file system it does not take effect for some reason:

$ kubectl describe pod -n openebs nfs-pvc-5c7090e6-6d09-450d-98c5-74e57fa94299-7b84974f9c-cgswm
Name:             nfs-pvc-5c7090e6-6d09-450d-98c5-74e57fa94299-7b84974f9c-cgswm
Namespace:        openebs
Priority:         0
Service Account:  default
Node:             betanzos/192.168.68.99
Start Time:       Sat, 21 Jan 2023 15:23:06 +0100
Labels:           openebs.io/nfs-server=nfs-pvc-5c7090e6-6d09-450d-98c5-74e57fa94299
                  pod-template-hash=7b84974f9c
Annotations:      cni.projectcalico.org/containerID: 87c687d0baed8c427d484903424bdb7b87eb1af90b2ecec5bf0f26b86600500a
                  cni.projectcalico.org/podIP: 10.1.140.179/32
                  cni.projectcalico.org/podIPs: 10.1.140.179/32
Status:           Running
IP:               10.1.140.179
IPs:
  IP:           10.1.140.179
Controlled By:  ReplicaSet/nfs-pvc-5c7090e6-6d09-450d-98c5-74e57fa94299-7b84974f9c
Containers:
  nfs-server:
    Container ID:   containerd://1a5ee76f90c15a4fccdc91fa4a7ac5a6e8b70601215ea2c44cfd32cc1d2379bc
    Image:          openebs/nfs-server-alpine:0.9.0
    Image ID:       docker.io/openebs/nfs-server-alpine@sha256:3ec1304582a07fd2befa441e3cc09d808be1b3bf1906f10015e2d60611533548
    Ports:          2049/TCP, 111/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Sat, 21 Jan 2023 15:34:50 +0100
    Ready:          True
    Restart Count:  0
    Environment:
      SHARED_DIRECTORY:       /nfsshare
      CUSTOM_EXPORTS_CONFIG:  
      NFS_LEASE_TIME:         90
      NFS_GRACE_TIME:         90
      FILEPERMISSIONS_UID:    1000
      FILEPERMISSIONS_GID:    2000
      FILEPERMISSIONS_MODE:   0777
    Mounts:
      /nfsshare from exports-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5jb4n (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  exports-dir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nfs-pvc-5c7090e6-6d09-450d-98c5-74e57fa94299
    ReadOnly:   false
  kube-api-access-5jb4n:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

$ kubectl describe pvc -n openebs nfs-pvc-5c7090e6-6d09-450d-98c5-74e57fa94299
Name:          nfs-pvc-5c7090e6-6d09-450d-98c5-74e57fa94299
Namespace:     openebs
StorageClass:  openebs-jiva-csi-default
Status:        Bound
Volume:        pvc-4f872aeb-385e-44e0-b0f1-9643a04ea4af
Labels:        nfs.openebs.io/nfs-pvc-name=mastodon-system
               nfs.openebs.io/nfs-pvc-namespace=mastodon
               nfs.openebs.io/nfs-pvc-uid=5c7090e6-6d09-450d-98c5-74e57fa94299
               openebs.io/cas-type=nfs-kernel
               persistent-volume=pvc-5c7090e6-6d09-450d-98c5-74e57fa94299
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: jiva.csi.openebs.io
               volume.kubernetes.io/selected-node: arcones
               volume.kubernetes.io/storage-provisioner: jiva.csi.openebs.io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      100Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       nfs-pvc-5c7090e6-6d09-450d-98c5-74e57fa94299-7b84974f9c-cgswm
Events:        <none>

$ kubectl get pv -o yaml pvc-4f872aeb-385e-44e0-b0f1-9643a04ea4af
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: jiva.csi.openebs.io
  creationTimestamp: "2023-01-16T18:03:56Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-4f872aeb-385e-44e0-b0f1-9643a04ea4af
  resourceVersion: "14133237"
  uid: ddcac725-8330-4243-a1eb-64e9b572aa15
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 100Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: nfs-pvc-5c7090e6-6d09-450d-98c5-74e57fa94299
    namespace: openebs
    resourceVersion: "14133225"
    uid: 4f872aeb-385e-44e0-b0f1-9643a04ea4af
  csi:
    driver: jiva.csi.openebs.io
    fsType: ext4
    volumeAttributes:
      storage.kubernetes.io/csiProvisionerIdentity: 1673883758976-8081-jiva.csi.openebs.io
    volumeHandle: pvc-4f872aeb-385e-44e0-b0f1-9643a04ea4af
  persistentVolumeReclaimPolicy: Delete
  storageClassName: openebs-jiva-csi-default
  volumeMode: Filesystem
status:
  phase: Bound
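
For contrast, the manual approach from the blog article amounts to creating the Jiva-backed claim yourself instead of relying on the BackendStorageClass annotation. A rough sketch of such a claim (the name is hypothetical; the exact wiring into the NFS server is described in the article):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jiva-backing-pvc     # hypothetical name for the manually created backing claim
  namespace: openebs
spec:
  storageClassName: openebs-jiva-csi-default
  accessModes:
    - ReadWriteOnce          # the NFS server layers RWX on top of this RWO volume
  resources:
    requests:
      storage: 100Gi
```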

dsharma-dc (Contributor) commented

Went through the details provided in this issue. We'll give it a try and assess whether this classifies as a bug or a limitation of the dynamic provisioning case.

@avishnu avishnu unpinned this issue Sep 13, 2023