nfs rwx folder has 000 as permission #124

Open
mbu147 opened this issue Nov 1, 2021 · 4 comments
mbu147 commented Nov 1, 2021

Describe the bug:
I created many RWX NFS shares with nfs-provisioner, using openebs-jiva as the backend StorageClass.
Sometimes every NFS share mount ends up with permissions 000.
The mount folder then looks like this in the nfs-pvc pod:

root@nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6-fb8f5fd66-jfz6b:/ # ls -la
d---------   17 xfs      xfs           4096 Oct 29 14:15 nfsshare

Because of that, the nginx container where the nfs-pvc is mounted also sees 000 on the folder and cannot read the files within it.

The files inside the mount folder have the correct permissions.

Expected behaviour:
Default mount permission 755 or something similar

Steps to reproduce the bug:
Just create a new NFS RWX PVC and wait. After some time, the running nginx container can no longer read the folder.
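
For reference, a minimal PVC plus consumer pod along these lines is enough to hit it (the resource names and the plain nginx image are illustrative; the StorageClass is the openebs-rwx one shown further down):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: web-data                  # illustrative name
spec:
  storageClassName: openebs-rwx
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5G
---
apiVersion: v1
kind: Pod
metadata:
  name: web                       # illustrative name
spec:
  containers:
    - name: nginx
      image: nginx                # illustrative image
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: web-data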


Anything else we need to know?:
jiva and nfs were installed via the following Helm charts:
https://github.com/openebs/jiva-operator/tree/develop/deploy/helm/charts
https://github.com/openebs/dynamic-nfs-provisioner/tree/develop/deploy/helm/charts
helm config:

nfs-provisioner:
    rbac:
        pspEnabled: false
    podSecurityContext:
        fsGroup: 120

storage class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-rwx
  annotations:
    openebs.io/cas-type: nfsrwx
    cas.openebs.io/config: |
      - name: NFSServerType
        value: "kernel"
      - name: BackendStorageClass
        value: "openebs-jiva-csi-default"
      #  LeaseTime defines the renewal period(in seconds) for client state
      - name: LeaseTime
        value: 30
      #  GraceTime defines the recovery period(in seconds) to reclaim locks
      - name: GraceTime
        value: 30
      #  FSGID defines the group permissions of NFS Volume. If it is set
      #  then non-root applications should add FSGID value under pod
      #  Supplemental groups
      - name: FSGID
        value: "120"
      - name: NFSServerResourceRequests
        value: |-
          cpu: 50m
          memory: 50Mi
      - name: NFSServerResourceLimits
        value: |-
          cpu: 100m
          memory: 100Mi
provisioner: openebs.io/nfsrwx
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true

Environment details:

  • OpenEBS version (use kubectl get po -n openebs --show-labels):
    openebs/jiva-operator:3.0.0
    openebs/provisioner-nfs:0.7.1
  • Kubernetes version (use kubectl version):
    v1.21.5+k3s
  • Cloud provider or hardware configuration:
    contabo vps hardware
  • OS (e.g: cat /etc/os-release):
    AlmaLinux 8.4 (Electric Cheetah)
  • kernel (e.g: uname -a):
    4.18.0-305.19.1.el8_4.x86_64

Do I have a misconfigured setup, or is this a bug?

Thanks for the help!

mittachaitu commented Nov 2, 2021

  image: openebs/provisioner-nfs:0.7.1  and image: openebs/jiva-operator:3.0.0

Hi @mbu147, I followed the same steps as mentioned in the description and observed that the permissions of the nfsshare directory are 755; here is the output:

root@nfs-pvc-e155220f-63b7-4882-9104-98575910d9c9-69c97df57d-wrxq2:/ # ls -la
total 88
drwxr-xr-x    1 root     root          4096 Nov  2 06:42 .
drwxr-xr-x    1 root     root          4096 Nov  2 06:42 ..
drwxr-xr-x    3 root     root          4096 Nov  2 06:42 nfsshare
...
...
...

Steps followed to provision NFS volume:

  • Used the umbrella Helm chart to install openebs-localPV, openebs-jiva & openebs-nfs-provisioner into the cluster by running the following command:
helm install openebs openebs/openebs -n openebs --create-namespace  --set legacy.enabled=false --set jiva.enabled=true  --set ndm.enabled=false  --set ndmOperator.enabled=false  --set localProvisioner.enabled=true  --set nfs-provisioner.enabled=true  --set nfs-provisioner.nfsStorageClass.backendStorageClass=openebs-jiva-csi-default 
  • Created a PVC using the openebs-kernel-nfs SC, which was created by the above command:
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: nfs-pvc
    spec:
      storageClassName: openebs-kernel-nfs
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5G
  • This in turn provisioned a volume and created the pod nfs-pvc-e155220f-63b7-4882-9104-98575910d9c9-69c97df57d-wrxq2 with the following permissions:
    drwxr-xr-x    3 root     root          4096 Nov  2 06:42 nfsshare
  • (Optional step) Created a test pod that just sleeps and does nothing else, to check the permissions of the NFS volume:
    root@fio:/# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    overlay         916G   92G  778G  11% /
    tmpfs            64M     0   64M   0% /dev
    tmpfs           7.8G     0  7.8G   0% /sys/fs/cgroup
    10.0.0.20:/     4.9G   20M  4.9G   1% /datadir
    and the permissions match the NFS volume's permissions:
    drwxr-xr-x   3 root root 4096 Nov  2 06:42 datadir

StorageClass outputs:

kubectl get sc
NAME                       PROVISIONER           RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device             openebs.io/local      Delete          WaitForFirstConsumer   false                  140m
openebs-hostpath           openebs.io/local      Delete          WaitForFirstConsumer   false                  140m
openebs-jiva-csi-default   jiva.csi.openebs.io   Delete          Immediate              true                   140m
openebs-kernel-nfs         openebs.io/nfsrwx     Delete          Immediate              false                  140m

Did I miss anything? I'm not sure how you are getting d--------- 17 xfs xfs 4096 Oct 29 14:15 nfsshare, i.e. these 000 permissions.

One more observation:

root@nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6-fb8f5fd66-jfz6b:/ # ls -la
d--------- 17 xfs xfs 4096 Oct 29 14:15 nfsshare

  • ls -la shows a different owner & group? Were any manual changes made to the ownership? Usually it should be root root by default...

Can you help with the following outputs (maybe they will help to understand further):

  • kubectl get sc openebs-jiva-csi-default -o yaml
  • kubectl get deploy nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6 -n openebs -o yaml
  • kubectl get deploy -o yaml -n

mbu147 commented Nov 2, 2021

Hi @mittachaitu,
thanks for your reply and testing process!

It looks like I'm doing the same, except I'm not using the "global" Helm chart. I've switched to the same chart as you and will have a look.

I noticed that it apparently only occurs when a node is under high IO load, so that it needs to "reconnect" the mount points.

ls -la shows a different owner & group? Were any manual changes made to the ownership? Usually it should be root root by default...

In a fresh new PVC, the folder is owned by root root with r-xr-xr-x. My nginx and php-fpm containers run with UID and GID 33; in the nfs-pvc container, UID 33 is the xfs user. I think I did a chown nginx:nginx -R <nfs mount> after copying my data over from the old PVC.
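
To compare the two sides, something along these lines shows the numeric ownership and mode (the application pod name and mount path are placeholders; the NFS server pod name and the /nfsshare path are taken from the outputs above):

# effective UID/GID inside the application container (placeholder pod name)
kubectl exec my-nginx-pod -- id

# numeric owner:group and mode of the mount as the application sees it (placeholder path)
kubectl exec my-nginx-pod -- stat -c '%u:%g %a' /usr/share/nginx/html

# same check from inside the NFS server pod in the openebs namespace
kubectl exec -n openebs nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6-fb8f5fd66-jfz6b -- stat -c '%u:%g %a' /nfsshare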

Thanks!

@mittachaitu

I noticed that it apparently only occurs when a node is under high IO load, so that it needs to "reconnect" the mount points.

Hmm... the system might be going into an RO state; if the jiva volume is turning read-only, then the d--------- permissions make sense (AFAIK).
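
A quick way to check that (reusing the deployment name from your earlier output; the exact command form is just a sketch) would be:

# does the backing mount inside the NFS server pod carry the "ro" flag?
kubectl exec -n openebs deploy/nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6 -- grep nfsshare /proc/mounts

# any recent filesystem or volume events in the openebs namespace?
kubectl get events -n openebs --sort-by=.metadata.creationTimestamp | tail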

My nginx and php-fpm containers run with UID and GID 33; in the nfs-pvc container, UID 33 is the xfs user. I think I did a chown nginx:nginx -R after copying my data from the old PVC.

Yeah, currently the nfs-provisioner only allows setting FSGID, but there is an open issue to support configuring the UID (which is being worked on), so that the user no longer needs to run chown commands explicitly once the volume is provisioned (anyway, this is a different problem).
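
In the meantime, with the FSGID route the consumer pod has to declare that group itself, roughly like this (pod name, image, and claim name are placeholders; 120 matches the FSGID from your StorageClass):

apiVersion: v1
kind: Pod
metadata:
  name: app                        # placeholder name
spec:
  securityContext:
    supplementalGroups: [120]      # must match the FSGID configured in the StorageClass
  containers:
    - name: nginx
      image: nginx                 # placeholder image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nfs-pvc         # placeholder claim name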


mbu147 commented Nov 2, 2021

Okay, I understand... has no one had this issue besides me?

You asked for more information; I forgot to add this to my last post:
kubectl get sc openebs-jiva-csi-default -o yaml

allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"labels":{"argocd.argoproj.io/instance":"openebs"},"name":"openebs-jiva-csi-default"},"parameters":{"cas-type":"jiva","policy":"openebs-jiva-default-policy"},"provisioner":"jiva.csi.openebs.io","reclaimPolicy":"Delete","volumeBindingMode":"Immediate"}
  creationTimestamp: "2021-10-29T11:16:06Z"
  labels:
    argocd.argoproj.io/instance: openebs
  name: openebs-jiva-csi-default
  resourceVersion: "21060613"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/openebs-jiva-csi-default
  uid: f9275268-9a2b-4f20-a59b-ebdaede5b8e3
parameters:
  cas-type: jiva
  policy: openebs-jiva-default-policy
provisioner: jiva.csi.openebs.io
reclaimPolicy: Delete
volumeBindingMode: Immediate

kubectl get deploy nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6 -n openebs -o yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2021-10-29T12:23:27Z"
  generation: 1
  labels:
    openebs.io/nfs-server: nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6
  name: nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6
  namespace: openebs
  resourceVersion: "22196180"
  selfLink: /apis/apps/v1/namespaces/openebs/deployments/nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6
  uid: 4b9de5e8-a1b5-4ea1-b905-67ec358dc015
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      openebs.io/nfs-server: nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        openebs.io/nfs-server: nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6
    spec:
      containers:
      - env:
        - name: SHARED_DIRECTORY
          value: /nfsshare
        - name: CUSTOM_EXPORTS_CONFIG
        - name: NFS_LEASE_TIME
          value: "90"
        - name: NFS_GRACE_TIME
          value: "90"
        image: openebs/nfs-server-alpine:0.7.1
        imagePullPolicy: IfNotPresent
        name: nfs-server
        ports:
        - containerPort: 2049
          name: nfs
          protocol: TCP
        - containerPort: 111
          name: rpcbind
          protocol: TCP
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /nfsshare
          name: exports-dir
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: exports-dir
        persistentVolumeClaim:
          claimName: nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2021-10-29T12:23:27Z"
    lastUpdateTime: "2021-10-29T12:24:58Z"
    message: ReplicaSet "nfs-pvc-3da88edc-2b97-4165-93ba-49bc54056cc6-fb8f5fd66" has
      successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2021-10-31T10:33:15Z"
    lastUpdateTime: "2021-10-31T10:33:15Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
