Secrets field addition to NodeGetVolumeStats spec #371
Comments
/assign @gnufied
@humblec IIRC, @j-griffith was looking into a similar issue.
@humblec could you please provide more detailed and specific information regarding this use case? It's not entirely clear what the issue is here, or that it can't be solved without secrets (or why).
To summarize, we'd like to understand:
I was under the assumption that this is the same problem that has been discussed here: #370 (comment)
The problem here is different; the issue there was how to validate the given fields. Coming back to this one: we (as in the ceph-csi folks) would like to revisit this request after hashing out our alternatives as above. @humblec, if you agree, we can take this off the list and come back with the requirement, or state at a later date that we do not require it. Thoughts?
I'd recommend this issue be closed in that case. Can reopen if it's determined there's actually a problem to solve. |
@gnufied as per the spec, when the RPC call comes in, we have to validate the `volume_id` and `volume_path`.
@saad-ali actually we need it. IOW, maybe soon this is going to be a requirement from other storage vendors too, when they implement this RPC call. The justification I see for this requirement is below. So, I still think this is a requirement.
@j-griffith not really; this is different, or specific to this RPC call :).
There is an additional use-case for this, where having the secrets to connect to the storage backend is useful. With Ceph-CSI we can create RBD images for volumes; however, RBD images are by default thin-provisioned, so not all space has been allocated on the Ceph cluster. It would be useful to be able to return the size of the image (not everything is allocated), and to check in the Ceph cluster (secrets required) how large the actual allocation is. These details can give an administrator of the Ceph cluster an idea of how much over-provisioning is done.
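The over-provisioning idea above can be sketched as follows. This is a minimal, hypothetical example (the `VolumeUsage` struct here only mirrors the shape of the CSI message, the helper name is made up, and the numbers are illustrative), not Ceph-CSI code:

```go
package main

import "fmt"

// VolumeUsage mirrors the shape of the CSI VolumeUsage message
// (total/available/used, in bytes). Field names are illustrative.
type VolumeUsage struct {
	TotalBytes     int64 // provisioned (virtual) size of the image
	UsedBytes      int64 // blocks actually allocated on the backend
	AvailableBytes int64 // Total - Used
}

// statsForThinVolume builds a usage report for a thin-provisioned image.
// provisioned is the virtual image size; allocated is what the backend
// (e.g. the Ceph cluster, queried with admin secrets) reports as in use.
func statsForThinVolume(provisioned, allocated int64) VolumeUsage {
	if allocated > provisioned {
		allocated = provisioned // clamp: backend may report extra padding
	}
	return VolumeUsage{
		TotalBytes:     provisioned,
		UsedBytes:      allocated,
		AvailableBytes: provisioned - allocated,
	}
}

func main() {
	// A 10 GiB image with only 2 GiB allocated so far on the cluster.
	u := statsForThinVolume(10<<30, 2<<30)
	fmt.Println(u.TotalBytes, u.UsedBytes, u.AvailableBytes)
}
```

The point of the sketch: the `TotalBytes` figure is available locally, but the `UsedBytes` figure has to come from the backend, which is the part that needs credentials.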
I am sort of becoming convinced that we may have to add secrets here. The other case, I think, is that this call is also being used to report volume health, and hence it may be useful in many cases to have the secret present so that volume health can be queried accurately. Having said that, we also have to figure out how to carry this information to the node side in kubelet. We started with just a few secret fields in PV and now it looks like it is kind of exploding. :/
True, the secrets are getting to be too many now; we already have 5/6 secrets in place, including the controller, node, and provisioner ones. I would like to explore whether there is a thought/idea which could consolidate this heavy number of secrets in the spec. It would be difficult to have such a mechanism considering we have to maintain backward compatibility; however, maybe we can try to handle it internally, or at least think about a new way to store additionally introduced secrets.
It looks like more CSI RPC calls require/want the secrets field to be part of the request (ex: #462, #515, etc.)
Yes, we should rethink secrets. The existing interface is too flexible and underspecified. The fact that secrets can vary per-namespace, per-volume, and per-RPC means that driver implementers have expansive flexibility, but it creates a burden: we have to store more secrets per volume as we add more RPCs, and it makes it unclear what to do for any RPC that involves more than one volume (like ListVolumes).

Given that we're stuck with the existing model for existing RPCs, probably the best we can do is to find out what exactly driver authors are doing with secrets and see if we can make the interface less flexible. If per-volume or per-RPC secrets are not really required, maybe we can change the approach for new RPCs at least, update existing RPCs to do the same, and eventually deprecate (but not remove) the current way of handling secrets.

We also may want to do something to separate operational secrets from data-encryption secrets. Right now the two use cases are both served by the same secrets field, but this creates awkward problems when you need both admin-supplied secrets for the SP to authenticate to the storage device and user-supplied secrets to manage data encryption.
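As a purely illustrative sketch (not proposed spec text; the struct, field, and function names are all hypothetical), separating the two kinds of secrets could look like giving each request two distinct maps and making the purpose explicit at the call site:

```go
package main

import "fmt"

// Hypothetical request shape: two distinct secret maps instead of one
// overloaded "secrets" field. All names here are illustrative only.
type StageRequest struct {
	VolumeID           string
	OperationalSecrets map[string]string // admin-supplied: authenticate to the backend
	EncryptionSecrets  map[string]string // user-supplied: unlock/encrypt the data
}

// secretFor returns the secret map appropriate for a given purpose,
// making the operational vs. data-path split explicit for the driver.
func secretFor(req StageRequest, purpose string) map[string]string {
	switch purpose {
	case "backend-auth":
		return req.OperationalSecrets
	case "data-encryption":
		return req.EncryptionSecrets
	}
	return nil
}

func main() {
	req := StageRequest{
		VolumeID:           "vol-1",
		OperationalSecrets: map[string]string{"userID": "admin", "userKey": "xyz"},
		EncryptionSecrets:  map[string]string{"passphrase": "s3cret"},
	}
	fmt.Println(secretFor(req, "backend-auth")["userID"])        // admin credential
	fmt.Println(secretFor(req, "data-encryption")["passphrase"]) // user credential
}
```

The design point is that the two personas (admin vs. user) never share a field, so a driver can document and validate each map independently.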
Indeed!
Most of the time, or if we go with the CSI Service model, RPCs having an option for secrets would help.
I like the idea of separating operational secrets from data-path secrets. 👍 However, I haven't come across a situation requiring a user-supplied secret; that said, in our case even data encryption has been managed by the admin persona and the CSI driver internally, without giving control to the user. But there could be a requirement for user-supplied secrets too.
At present, there is no `secret` field available on the NodeGetVolumeStats RPC; however, this is at times required for specific storage vendors to map the `volumeID` to the `targetPath`. As per the spec, the `targetPath` contains the PVC name, and the `volumeID` is generated by the plugin. The mapping between the `targetPath` and the `volumeID` may require secrets/credentials. For example, in Ceph we need a secret to connect to the Ceph cluster to get the `volumeID` from the volume name.
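The lookup described above can be sketched as a toy example, with an in-memory stand-in for the storage backend; the types, the path layout, and the lookup flow are all hypothetical, and the real Ceph interaction is far more involved:

```go
package main

import (
	"errors"
	"fmt"
	"path/filepath"
)

// cluster is an in-memory stand-in for a storage backend (e.g. a Ceph
// cluster) that maps volume names to backend volume IDs, but only
// answers authenticated queries.
type cluster struct {
	adminKey string
	byName   map[string]string
}

var errUnauthorized = errors.New("unauthorized: valid secret required")

// lookupVolumeID resolves a volume name to its backend volumeID.
// Without the right secret the backend refuses to answer, which is the
// crux of the issue: NodeGetVolumeStats carries no secrets to use here.
func (c *cluster) lookupVolumeID(name, secret string) (string, error) {
	if secret != c.adminKey {
		return "", errUnauthorized
	}
	id, ok := c.byName[name]
	if !ok {
		return "", errors.New("volume not found: " + name)
	}
	return id, nil
}

func main() {
	c := &cluster{
		adminKey: "admin-key",
		byName:   map[string]string{"pvc-1234": "0001-rbd-img-42"},
	}
	// Pretend the last path element carries the volume name (per the
	// issue text, the targetPath contains the PVC name).
	targetPath := "/var/lib/kubelet/pods/abc/volumes/kubernetes.io~csi/pvc-1234"
	name := filepath.Base(targetPath)

	if _, err := c.lookupVolumeID(name, ""); err != nil {
		fmt.Println("without secret:", err) // lookup fails
	}
	id, _ := c.lookupVolumeID(name, "admin-key")
	fmt.Println("with secret:", id)
}
```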