Problem

Imagine we have a PUT operation followed by a DELETE operation on the same key. What happens if the DELETE operation is processed before the PUT operation? In that case, the storage would still contain a file when it shouldn't.

To solve this, we need to compare the Zenoh timestamp of the DELETE with the Zenoh timestamp of the PUT: if the deletion timestamp is greater, the PUT operation should be dropped. So before performing a PUT operation, we should fetch from the S3 server the Zenoh timestamp of the corresponding DELETE operation. We could possibly optimise this by keeping a local database with the entry logs of the storage, especially those related to DELETE operations, but that would come with other complexities, for instance handling multiple clients interacting with the same S3 instance...
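The ordering rule above can be sketched as a small predicate (a minimal illustration; the function name and the assumption that timestamps are totally ordered and comparable are mine, not part of any Zenoh or S3 API):

```python
# Sketch of the check described above: a PUT is applied only if its Zenoh
# timestamp is newer than the latest known DELETE timestamp for the same key.

def should_apply_put(put_ts, last_delete_ts):
    """Return True if the PUT is fresher than the last DELETE (or no DELETE exists)."""
    if last_delete_ts is None:
        return True  # never deleted: the PUT always applies
    return put_ts > last_delete_ts
```

With this, a PUT carrying timestamp 5 arriving after a DELETE stamped 7 is dropped, while a PUT stamped 8 goes through.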
Possible solutions
For the moment, as a quick fix, the Zenoh timestamps of DELETE operations should be kept on the S3 storage itself. Note that the Amazon timestamp and the Zenoh timestamp are different, and once a file is deleted its user-defined metadata can no longer be retrieved: a request for it yields only a 404 error.
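One hypothetical way to keep these delete timestamps on S3 is to write each one under a reserved key prefix. The sketch below mocks the bucket as a dict; the prefix name and every function name are assumptions for illustration (with a real SDK these would be `put_object`/`get_object`-style calls):

```python
# Illustrative sketch: record the Zenoh timestamp of each DELETE under a
# reserved prefix in the same bucket, so a later PUT can consult it.

TOMBSTONE_PREFIX = ".zenoh-deleted/"  # hypothetical reserved prefix

def record_delete(bucket, key, zenoh_ts):
    bucket.pop(key, None)                      # remove the object itself
    bucket[TOMBSTONE_PREFIX + key] = zenoh_ts  # remember when it was deleted

def last_delete_ts(bucket, key):
    return bucket.get(TOMBSTONE_PREFIX + key)

def put_if_fresh(bucket, key, value, zenoh_ts):
    ts = last_delete_ts(bucket, key)
    if ts is not None and ts > zenoh_ts:
        return False  # a newer DELETE already won: drop the PUT
    bucket[key] = value
    return True
```

The extra round trip in `put_if_fresh` is the cost of this approach: every PUT must first ask S3 whether a newer DELETE exists.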
An alternative would be to keep an entry log file in the storage, from which we could retrieve the delete timestamp. The downsides of this alternative would be:

- the size of the log file increases continuously;
- performance takes a hit;
- the log file must be downloaded entirely;
- after download, it must be processed to find the deletion timestamp, which is a linear operation unless optimised somehow.
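The linear scan mentioned in the last point could look like the following (the log format, one `(op, key, timestamp)` record per entry, is an assumption made for the sketch):

```python
# Illustrative linear scan over a downloaded entry log to find the latest
# DELETE timestamp recorded for a given key. Returns None if the key was
# never deleted.

def latest_delete_ts(log_entries, key):
    latest = None
    for op, k, ts in log_entries:
        if op == "DELETE" and k == key and (latest is None or ts > latest):
            latest = ts
    return latest
```

This is O(n) in the log size on every PUT, which is exactly the cost the bullet list above warns about.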
Another alternative would be the following: instead of removing the file, replace it with an empty one carrying the required metadata, namely the deletion timestamp and perhaps a flag stating that the file ought to be deleted. This way we can perform a GET request to retrieve the metadata and the timestamp. We can then remove all the "deleted" files upon dropping the storage.
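This tombstone alternative can be sketched as follows. The object layout (a `(body, metadata)` pair), the `deleted` flag, and the metadata key names are assumptions for illustration, not any real S3 convention:

```python
# Sketch of the tombstone alternative: instead of deleting an object, replace
# it with an empty body plus metadata recording that it was deleted and when.

def delete_as_tombstone(bucket, key, zenoh_ts):
    bucket[key] = (b"", {"deleted": "true", "delete-ts": zenoh_ts})

def get(bucket, key):
    entry = bucket.get(key)
    if entry is None or entry[1].get("deleted") == "true":
        return None  # tombstoned keys behave as if absent
    return entry[0]

def drop_storage(bucket):
    # upon dropping the storage, physically remove every tombstoned object
    for key in [k for k, (_, md) in bucket.items() if md.get("deleted") == "true"]:
        del bucket[key]
```

The upside over the previous alternatives is that a single GET on the key itself retrieves the deletion timestamp, with no log scan and no extra round trip to a side prefix.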
Note this issue was originally created by me here