Auth: Add storage volume and bucket location to URL in access check #13517
Conversation
I've added a few comments on things I noticed while working on this that might be worth revisiting.
Force-pushed from e72d043 to c253ded
Please can you rebase?
Force-pushed from c253ded to 9d5193f
@tomponline rebased and ready for review.
Before reviewing further I just wanted to check my understanding of this statement, as it seems over-complicated to me for an access check on a volume (which, in my understanding, ultimately boils down to loading the volume record to get its ID before doing an access check on that resource). Or is this just the existing logic that is already used to load a specific volume?
This is the logic that is used to figure out where to forward the request. Coming back to this, I think it can probably be simplified by writing a custom query and bypassing all this complication. The main issue is that there can be multiple volumes with the same name, just located on different cluster members.
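To illustrate the ambiguity described above, here is a minimal, self-contained Go sketch. The types and field names (`volumeKey`, `findVolumes`) are hypothetical, not LXD's actual schema; the point is only that a pool/name pair is not a unique key for volumes in local (per-member) pools.

```go
package main

import "fmt"

// volumeKey illustrates why a volume name alone is ambiguous in a cluster:
// volumes in local (per-member) pools are unique only per cluster member.
// These names are illustrative, not LXD's real database schema.
type volumeKey struct {
	Member string // empty for volumes in remote pools
	Pool   string
	Name   string
}

// findVolumes returns every record matching a pool/name pair, mimicking
// the situation where the same name exists on several members.
func findVolumes(records []volumeKey, pool, name string) []volumeKey {
	var matches []volumeKey
	for _, r := range records {
		if r.Pool == pool && r.Name == name {
			matches = append(matches, r)
		}
	}
	return matches
}

func main() {
	records := []volumeKey{
		{Member: "member1", Pool: "local", Name: "vol1"},
		{Member: "member2", Pool: "local", Name: "vol1"}, // same name, different member
		{Member: "", Pool: "ceph", Name: "vol2"},         // remote pool: no location
	}

	// Without a target member, the lookup for "vol1" is ambiguous.
	fmt.Println(len(findVolumes(records, "local", "vol1"))) // 2
}
```

A custom query would resolve this by keying on the member as well as the pool and name.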
@tomponline summarising here my remarks about this PR from the core daily stand-up: as far as fine-grained auth is concerned, we can call … For storage buckets and volumes, this represents a challenge because buckets/volumes can have the same name if they are in local pools located on different cluster members. Additionally, a … There are a few cases to consider:
To catch all these cases, we need to*:
*For all of these cases, we additionally need to check the effective project of the bucket/volume and substitute the project parameter if it differs from that of the request.

In figuring out all of these steps, we've already done a lot of the work that would normally be done in the handler for forwarding requests to the appropriate members. This PR adds the effective project, the storage pool, and the forwarding node info (name and address) to the request context so that the handlers don't need to perform these queries again. Additionally, if the request has already been forwarded, the query for forwarding node info is skipped (we assume the other node has already handled this correctly).

I think this PR is the way forward because it reduces the overall number of queries (by skipping the remote volume check when the request is forwarded) without changing the queries themselves. (We can optimise these later if it becomes necessary.)
Force-pushed from 9d5193f to 878605c
@tomponline I have updated this to address your comments. Ready for review when you have some time. Thanks.
Force-pushed from 878605c to b000906
Please can you rebase this |
Force-pushed from b000906 to bc55e27
Yep done. |
Signed-off-by: Mark Laing <[email protected]>
Force-pushed from bc55e27 to 319d6b7
Contents are copied and slightly modified from `cluster.ConnectIfVolumeIsRemote`. Signed-off-by: Mark Laing <[email protected]>
Force-pushed from 319d6b7 to 992c1b7
Force-pushed from 992c1b7 to 51292f0
This function will now only forward the request to another member if a db.NodeInfo is found in the request context under `ctxStorageVolumeRemoteNodeInfo`. Signed-off-by: Mark Laing <[email protected]>
…handler. Signed-off-by: Mark Laing <[email protected]>
This is no longer used. Signed-off-by: Mark Laing <[email protected]>
Force-pushed from 51292f0 to f0ac51f
Thanks! This pattern of extracting fields common to several API endpoints, loading the DB resource, and storing the results in the request context seems like a good way to avoid additional DB queries when performing access checks, as well as removing duplicated code from the API endpoint handlers themselves.
This PR fixes fine-grained authorization for storage volumes and storage buckets in clustered LXD.
When listing volumes, we can use the `Location` field of the storage volume or bucket to perform an access check. When performing an action on a single storage volume, we need to perform the following checks:

The `target` query parameter may have been used to target a particular cluster member, but it should not be used in the URL for the volume (volumes in remote pools do not have a location).

To prevent extra queries from being performed when a request is forwarded to another node, each node assumes that forwarded requests have already reached their intended destination. This is done by checking for `request.CtxForwardedProtocol` in the request context (I originally tried this by setting the `target` parameter, but that broke storage volume migrations).

Closes #13365