
Secrets field addition to NodeGetVolumeStats Spec #371

Open
humblec opened this issue Jul 19, 2019 · 14 comments · May be fixed by #479

Comments

@humblec
Contributor

humblec commented Jul 19, 2019

At present, there is no secrets field available on the NodeGetVolumeStats RPC; however, some storage vendors need one in order to map the volumeID to the targetPath.

As per the spec:

Condition | gRPC Code | Description | Recovery Behavior
-- | -- | -- | --
Volume does not exist | 5 NOT_FOUND | Indicates that a volume corresponding to the specified volume_id does not exist on specified volume_path. | Caller MUST verify that the volume_id is correct and that the volume is accessible on specified volume_path and has not been deleted before retrying with exponential back off.

The targetPath contains the PVC name, while the volumeID is generated by the plugin. Mapping the targetPath to the volumeID may require a secret/credentials. For example, in Ceph, we need a secret to connect to the Ceph cluster to resolve the volumeID from the volume name.
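
For illustration only, a rough Go sketch of what the request could look like with a secrets field and how a plugin might use it for the validation above; the type, fields, and helpers below are hypothetical, not the generated CSI API:

```go
package sketch

import (
	"errors"
	"fmt"
)

// Hypothetical request shape if a secrets field were added to
// NodeGetVolumeStats; the spec does not have this today, and these are not
// the generated CSI types.
type nodeGetVolumeStatsRequest struct {
	VolumeID   string            // generated by the plugin at CreateVolume time
	VolumePath string            // composed by the CO; carries PVC/pod info, not the volume ID
	Secrets    map[string]string // proposed addition: credentials for the storage backend
}

// backend stands in for a storage cluster client (e.g. a Ceph connection)
// that can only be opened with credentials.
type backend interface {
	VolumeIDForPath(path string) (string, error)
	Close() error
}

// connect is a placeholder for dialing the backend with the supplied secrets.
func connect(secrets map[string]string) (backend, error) {
	if len(secrets) == 0 {
		return nil, errors.New("no credentials supplied")
	}
	return nil, errors.New("backend dialing is out of scope for this sketch")
}

// validate performs the check the spec asks for: confirm that VolumeID is
// really what is mounted at VolumePath, otherwise report the NOT_FOUND case.
func validate(req nodeGetVolumeStatsRequest) error {
	b, err := connect(req.Secrets)
	if err != nil {
		return err
	}
	defer b.Close()
	id, err := b.VolumeIDForPath(req.VolumePath)
	if err != nil {
		return err
	}
	if id != req.VolumeID {
		return fmt.Errorf("volume %s is not mounted at %s", req.VolumeID, req.VolumePath)
	}
	return nil
}
```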

@humblec
Contributor Author

humblec commented Jul 19, 2019

/assign @gnufied

@gnufied
Contributor

gnufied commented Jul 23, 2019

@humblec if we have a valid targetPath, do we still need to verify that the targetPath belongs to the given volume ID? Does this apply to something like NodeUnpublishVolume too, which also does not have secrets in the request?

IIRC - @j-griffith was looking into a similar issue.

@j-griffith

@humblec could you please provide more detailed and specific information regarding this use case? It's not entirely clear what the issue is here or that it can't be solved without secrets (or why).

@saad-ali
Member

saad-ali commented Aug 7, 2019

To summarize, we'd like to understand:

  • Does Ceph actually require a backend call (which would need a secret) for NodeGetVolumeStats?

@j-griffith

@humblec could you please provide more detailed and specific information regarding this use case? It's not entirely clear what the issue is here or that it can't be solved without secrets (or why).

I was under the assumption that this is the same problem that has been discussed here: #370 (comment)

@ShyamsundarR

@humblec could you please provide more detailed and specific information regarding this use case? It's not entirely clear what the issue is here or that it can't be solved without secrets (or why).

I was under the assumption that this is the same problem that has been discussed here: #370 (comment)

The problem here is different: the issue is how to validate that the given volume_id is the one mapped to the volume_path, and hence whether to return stats or a NOT_FOUND error.

Given a volume_id, we encode enough information in it to detect which cluster it belongs to, so we do not need parameters when volume_ids are passed (IOW, the "where" information from parameters is not needed when we have a volume_id). That is the difference between this and the ListVolumes/Snapshot requests. (As an aside, we still need secrets if we need to reach the storage backend for information.)

Coming back to the secrets request here: we thought we needed it to validate parts of the volume_id and to ensure that the volume_path contains a mount of that exact volume_id. I think we now have an alternative, where we store enough information about the backend volume being mounted when we execute NodeStageVolume to match the volume_id, and use the local mount tools to match the volume_path to the same. IOW, we can create lookaside metadata to ensure we have the right mapping. This also saves a network round trip to the storage server when validating.
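
A minimal sketch of what such lookaside metadata could look like (the file name, fields, and layout are made up for illustration, and the step of resolving the volume_path back to the staging path via the local mount table is omitted):

```go
package sketch

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// stageMetadata is a hypothetical lookaside record written at
// NodeStageVolume time; it is not what ceph-csi actually ships.
type stageMetadata struct {
	VolumeID  string `json:"volumeID"`
	ImageName string `json:"imageName"`
}

const metaFile = ".stage-metadata.json"

// writeStageMetadata would be called from NodeStageVolume, while the
// secrets needed to talk to the cluster are still available.
func writeStageMetadata(stagingPath, volumeID, imageName string) error {
	data, err := json.Marshal(stageMetadata{VolumeID: volumeID, ImageName: imageName})
	if err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(stagingPath, metaFile), data, 0o600)
}

// checkStageMetadata would be called from NodeGetVolumeStats: the
// volume_id/volume_path mapping is validated locally, with no round trip
// to the storage cluster and therefore no secrets.
func checkStageMetadata(stagingPath, volumeID string) error {
	data, err := os.ReadFile(filepath.Join(stagingPath, metaFile))
	if err != nil {
		return err
	}
	var meta stageMetadata
	if err := json.Unmarshal(data, &meta); err != nil {
		return err
	}
	if meta.VolumeID != volumeID {
		return fmt.Errorf("volume %s is not staged at %s", volumeID, stagingPath)
	}
	return nil
}
```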

We (as in the ceph-csi folks) would like to revisit this request after hashing out the alternatives above. @humblec, if you agree, we can take this off the list and either come back with the requirement or state at a later date that we do not need it. Thoughts?

@j-griffith

I'd recommend this issue be closed in that case. Can reopen if it's determined there's actually a problem to solve.

@humblec
Contributor Author

humblec commented Aug 8, 2019

@humblec if we have a valid targetPath, do we still need to verify that the targetPath belongs to the given volume ID? Does this apply to something like NodeUnpublishVolume too, which also does not have secrets in the request?

@gnufied as per the spec, when the RPC call comes in we have to validate the targetPath against the volumeID. To me this looks like a corner-case scenario, but I opened this issue because it is in the spec.

To summarize, we'd like to understand:
Does Ceph actually require a backend call (which would need a secret) for NodeGetVolumeStats?

@saad-ali we actually need it. IOW, this is soon going to be a requirement from other storage vendors too, once they implement this RPC call. The justification I see for this requirement is below.

The targetPath is composed by the CO and passed to the SP/plugin. The targetPath contains PVC/Pod information; in short, it has no direct reference or mapping to the volumeID (which the plugin created). As per the spec, we need to do the validation. This mapping or validation can be done in several ways, or we can work around it. The first and obvious solution is to figure it out from the storage backend using the information available in the targetPath, but connecting to the storage cluster to do that validation requires the credentials/secret. I think this is going to be a general requirement for other storage vendors too; I mentioned Ceph as an example. We were also discussing other alternatives (in Ceph), such as storing the metadata in the targetPath somehow and using it for validation, but that looks like a workaround/hack, forced only because we do not have a secrets option in the RPC call today.

So, I still think this is a requirement.

I was under the assumption that this is the same problem that has been discussed here: #370 (comment)

@j-griffith not really; this one is different and specific to this RPC call :).

@nixpanic

nixpanic commented Jan 8, 2021

There is an additional use-case for this, where having the secrets to connect to the storage backend is useful.

With Ceph-CSI we can create RBD images for volumeMode: Block. The block-device gets attached to the node, which makes it possible to check the size of the block-device. So far, so good.

However, RBD images are thin-provisioned by default, so not all of the space has actually been allocated on the Ceph cluster. It would be useful to return the size of the image (not all of it allocated) and also to check in the Ceph cluster (secrets required) how large the allocation really is. These details give a Ceph cluster administrator an idea of how much over-provisioning is in effect.
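
A rough Go sketch of how the response could report both numbers, assuming a hypothetical allocatedBytes() backend query that needs the cluster secrets (only the csi response types are from the spec):

```go
package sketch

import (
	"github.com/container-storage-interface/spec/lib/go/csi"
)

// allocatedBytes stands in for a backend query (e.g. asking the cluster how
// much of a thin-provisioned image is allocated) that needs credentials; it
// is hypothetical here and intentionally left unimplemented.
func allocatedBytes(volumeID string, secrets map[string]string) (int64, error) {
	return 0, nil
}

// blockVolumeStats reports the full image size from the locally attached
// block device plus the backend-side allocation, so the administrator can
// see how much over-provisioning is in effect.
func blockVolumeStats(volumeID string, deviceSizeBytes int64, secrets map[string]string) (*csi.NodeGetVolumeStatsResponse, error) {
	used, err := allocatedBytes(volumeID, secrets)
	if err != nil {
		return nil, err
	}
	return &csi.NodeGetVolumeStatsResponse{
		Usage: []*csi.VolumeUsage{{
			Total:     deviceSizeBytes,        // image size as seen on the node
			Used:      used,                   // bytes actually allocated on the cluster
			Available: deviceSizeBytes - used, // headroom before the image is fully allocated
			Unit:      csi.VolumeUsage_BYTES,
		}},
	}, nil
}
```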

@gnufied
Contributor

gnufied commented Jan 8, 2021

I am becoming somewhat convinced that we may have to add secrets here. The other case, I think, is that this call is also being used to report volume health, so in many cases it may be useful to have a secret present so that volume health can be queried accurately.
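
For reference, the volume health piece of the NodeGetVolumeStats response looks roughly like the sketch below; the backendHealth() query, which is where the secret would be needed, is hypothetical:

```go
package sketch

import (
	"github.com/container-storage-interface/spec/lib/go/csi"
)

// backendHealth stands in for a health query against the storage cluster,
// which is the part that would need credentials; it is hypothetical here.
func backendHealth(volumeID string, secrets map[string]string) (abnormal bool, msg string, err error) {
	return false, "volume is healthy", nil
}

// volumeCondition builds the health portion of the NodeGetVolumeStats
// response; the VolumeCondition message is part of the spec, only the
// backend query is made up.
func volumeCondition(volumeID string, secrets map[string]string) (*csi.VolumeCondition, error) {
	abnormal, msg, err := backendHealth(volumeID, secrets)
	if err != nil {
		return nil, err
	}
	return &csi.VolumeCondition{Abnormal: abnormal, Message: msg}, nil
}
```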

Having said that, we also have to figure out how to carry this information to the node side in kubelet. We started with just a few secret fields in the PV, and now it looks like it is kind of exploding. :/

lev-lb pushed a commit to LightBitsLabs/los-csi that referenced this issue Jan 20, 2022
for FS mount expose bytes and inodes.
for block expose only bytes - see
container-storage-interface/spec#371 thread
for reason why we can't access the lightos API

issue: LBM1-17861
@humblec
Contributor Author

humblec commented Mar 17, 2022

I am becoming somewhat convinced that we may have to add secrets here. The other case, I think, is that this call is also being used to report volume health, so in many cases it may be useful to have a secret present so that volume health can be queried accurately.

Having said that, we also have to figure out how to carry this information to the node side in kubelet. We started with just a few secret fields in the PV, and now it looks like it is kind of exploding. :/

True, there are already too many secrets; we have five or six in place, including the controller, node and provisioner secrets. I would like to explore whether there is an idea that could consolidate this large number of secrets in the spec. It would be difficult to introduce such a mechanism considering we have to maintain backward compatibility, but maybe we can handle it internally, or at least think about a new way to store additionally introduced secrets.
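
To make the current count concrete, the CSI section of a PersistentVolume already carries roughly the per-operation secret references sketched below (Kubernetes corev1 types; the names are illustrative), and the provisioner secret sits in StorageClass parameters on top of that:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// secretRefsInPV sketches the per-operation secret references that already
// live in the CSI portion of a PersistentVolume spec today; the provisioner
// (CreateVolume/DeleteVolume) secret is configured via StorageClass
// parameters rather than on the PV, so it is not shown here.
func secretRefsInPV() corev1.CSIPersistentVolumeSource {
	ref := func(name, ns string) *corev1.SecretReference {
		return &corev1.SecretReference{Name: name, Namespace: ns}
	}
	return corev1.CSIPersistentVolumeSource{
		Driver:                     "example.csi.vendor.com", // illustrative driver name
		VolumeHandle:               "vol-123",                // illustrative volume handle
		ControllerPublishSecretRef: ref("ctrl-publish-secret", "csi"),
		NodeStageSecretRef:         ref("node-stage-secret", "csi"),
		NodePublishSecretRef:       ref("node-publish-secret", "csi"),
		ControllerExpandSecretRef:  ref("ctrl-expand-secret", "csi"),
	}
}
```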

@humblec
Contributor Author

humblec commented Jun 16, 2022

It looks like more CSI RPC calls want a secrets field added to them (ex: #462, #515, etc.). As mentioned in my previous comment, it may be a good time to rethink how we accommodate such requests without adding more fields to the existing PV spec and so on. Inviting thoughts on this, so that we can settle on a better way of addressing such requirements in future.

Cc @gnufied @jsafrane @bswartz @xing-yang @msau42

@bswartz
Contributor

bswartz commented Jun 16, 2022

Yes we should rethink secrets. The existing interface is too flexible and underspecified. The fact that secrets can vary per-namespace, per-volume, and per-RPC means that driver implementers have expansive flexibility, but it creates a burden: we have to store more secrets per volume as we add more RPCs, and it is unclear what to do for any RPC that involves more than one volume (like ListVolumes).

Given that we're stuck with the existing model for existing RPCs, probably the best we can do is to find out what exactly driver authors are doing with secrets and see if we can make the interface less flexible. If per-volume or per-RPC secrets are not really required, maybe we can change the approach for new RPCs at least, update existing RPCs to do the same, and eventually deprecate (but not remove) the current way of handling secrets.

We also may want to do something to separate operational secrets from data encryption secrets. Right now the two use cases are both served by the same secrets field, but this creates awkward problems when you need both admin-supplied secrets for the SP to authenticate to the storage device and user-supplied secrets to manage data encryption.

@humblec
Contributor Author

humblec commented Jun 20, 2022

Yes we should rethink secrets.

Indeed!

Given that we're stuck with the existing model for existing RPCs, probably the best we can do is to find out what exactly driver authors are doing with secrets and see if we can make the interface less flexible. If per-volume or per-RPC secrets are not really required, maybe we can change the approach for new RPCs at least, update existing RPCs to do the same, and eventually deprecate (but not remove) the current way of handling secrets.

Most of the time, or if we go with the CSI service model, having an option for a secret on RPCs looks like a "need", though different storage systems may consume it differently. However, fetching or holding a secret per RPC is also a burden if all of these secrets have to find space in the PV spec. Considering that the CSI spec is laid out so that operations target three main services, I feel the secret fields could also be grouped from the CO's operational point of view (and thus be fewer in the SC, PV spec, etc.): say one secret for the controller service and one for the node service. All operations on a service (e.g. controller) would then use the same secret, and likewise for node. Having the flexibility of specifying separate secrets per operation (provisioner, expansion {controller and node}, snapshotter, volume stats {controller and node}, etc.) looks like overhead to me. Additionally, I am not sure there is really a use case for supporting separate secrets for each and every operation or RPC.

We also may want to do something to separate operational secrets from data encryption secrets. Right now the two use cases are both served by the same secrets field, but this creates awkward problems when you need both admin-supplied secrets for the SP to authenticate to the storage device and user-supplied secrets to manage data encryption.

I like the idea of separating operational secrets from data path secrets. 👍

However, I haven't come across a situation requiring a user-supplied secret; that said, in our case even data encryption has been managed by the admin persona and the CSI driver internally, without giving control to the user. But there could be a requirement for user-supplied secrets too.
