Implement support for copying directories recursively #160
Comments
The CopyObject operation seems to be limited to 5 GB, according to the docs: https://docs.aws.amazon.com/AmazonS3/latest/dev/CopyingObjectsExamples.html
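For objects over that limit, the S3 documentation says copies must be done in parts via UploadPartCopy, each part covering a byte range. As a minimal sketch of just the range-splitting step (the class and method names below are illustrative, and the surrounding SDK calls are omitted):

```java
import java.util.ArrayList;
import java.util.List;

public class CopyPartRanges
{
    // S3's per-call copy limit: parts of a multipart copy may be at most 5 GiB.
    static final long MAX_PART_SIZE = 5L * 1024 * 1024 * 1024;

    // Returns the inclusive byte ranges ("bytes=first-last") needed to copy
    // an object of the given size in parts no larger than MAX_PART_SIZE.
    public static List<String> partRanges(long objectSize)
    {
        List<String> ranges = new ArrayList<>();
        for (long offset = 0; offset < objectSize; offset += MAX_PART_SIZE)
        {
            long last = Math.min(offset + MAX_PART_SIZE, objectSize) - 1;
            ranges.add("bytes=" + offset + "-" + last);
        }
        return ranges;
    }

    public static void main(String[] args)
    {
        // A 12 GiB object needs three parts: two full 5 GiB parts and a 2 GiB tail.
        System.out.println(partRanges(12L * 1024 * 1024 * 1024).size());
    }
}
```

Each returned range would be passed as the copy-source range of one UploadPartCopy request.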
You are pointing to their "old" documentation - the new one is here. It is correct that you cannot recursively copy using the S3 API; however, it is possible to use batch operations for this:
However, I don't think this would be as easy or straightforward to implement as #163 (Delete objects recursively). You cannot get the entire list of objects in one call, because there is a limit of 1000 objects per request (as I mentioned in #163). It looks like you could maybe use the … Possible issues I foresee are: in S3 you can have a virtually unlimited number of recursive "objects" (files or directories) in a tree-like structure:
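The 1000-object limit means listing has to loop over pages using a continuation token. A sketch of that loop, with a hypothetical in-memory FakeBucket standing in for the real client (the AWS SDK drives the same loop via the response's NextContinuationToken):

```java
import java.util.ArrayList;
import java.util.List;

public class PagedListing
{
    // S3's per-request key limit for ListObjectsV2.
    static final int PAGE_SIZE = 1000;

    // Stand-in for an S3 bucket; a real implementation would issue
    // ListObjectsV2 requests with a prefix and continuation token.
    static class FakeBucket
    {
        final List<String> keys;

        FakeBucket(List<String> keys) { this.keys = keys; }

        // Returns one page of keys starting at `token` (an index here;
        // S3 uses an opaque string token instead).
        List<String> page(int token)
        {
            return keys.subList(token, Math.min(token + PAGE_SIZE, keys.size()));
        }
    }

    // Accumulates every key by requesting pages until a short page signals the end.
    static List<String> listAll(FakeBucket bucket)
    {
        List<String> all = new ArrayList<>();
        int token = 0;
        while (true)
        {
            List<String> page = bucket.page(token);
            all.addAll(page);
            token += page.size();
            if (page.size() < PAGE_SIZE)
            {
                break; // no more pages
            }
        }
        return all;
    }

    public static void main(String[] args)
    {
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < 2500; i++) keys.add("dir/file-" + i);
        System.out.println(listAll(new FakeBucket(keys)).size()); // 2500, fetched in 3 pages
    }
}
```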
The first two points are actually valid for #163 as well, but I guess we can create a follow-up after this task. // cc @carlspring
@steve-todorov, thank you for your suggestion! However, when I check #163, it does not only delete the files, but the folders too; that is why I made my own … Do you think I should keep my own …? Thanks :D
Task Description
We need to implement support for copying directories in the
org.carlspring.cloud.storage.s3fs.S3FileSystemProvider
class, as this currently only works for regular files and does not check whether the paths are directories in order to recurse into them.

Tasks
The following tasks will need to be carried out:
…
org.carlspring.cloud.storage.s3fs.S3FileSystemProvider
and propose the most efficient way to do this.
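Since S3FileSystemProvider is a java.nio.file provider, one candidate shape for the recursion is the standard Files.walkFileTree visitor. The sketch below runs against the default (local) file system purely for illustration; the class and method names are hypothetical, not taken from the project's code, and an s3fs-backed version would issue one create/copy per S3 key instead:

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.StandardCopyOption;
import java.nio.file.attribute.BasicFileAttributes;

public class RecursiveCopy
{
    // Copies the tree rooted at `source` under `target`, directory by directory.
    public static void copyRecursively(final Path source, final Path target)
            throws IOException
    {
        Files.walkFileTree(source, new SimpleFileVisitor<Path>()
        {
            @Override
            public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs)
                    throws IOException
            {
                // Re-create each directory under the target before copying its contents.
                Files.createDirectories(target.resolve(source.relativize(dir).toString()));
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
                    throws IOException
            {
                Files.copy(file,
                           target.resolve(source.relativize(file).toString()),
                           StandardCopyOption.REPLACE_EXISTING);
                return FileVisitResult.CONTINUE;
            }
        });
    }

    public static void main(String[] args) throws IOException
    {
        Path src = Files.createTempDirectory("src");
        Files.createDirectories(src.resolve("a/b"));
        Files.write(src.resolve("a/b/file.txt"), "hello".getBytes());

        Path dst = Files.createTempDirectory("dst");
        copyRecursively(src, dst);

        System.out.println(Files.readAllLines(dst.resolve("a/b/file.txt")).get(0)); // hello
    }
}
```

Note the relativize/resolve pair goes through toString() so the same walk could bridge two different FileSystemProvider instances (local to S3, or S3 to S3), since Path objects from different providers cannot be resolved against each other directly.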