Two chunk instances downloading the same file #36
With the changes from #38 we started to have a more predictable scenario here. This would still be a scenario we wouldn't be prepared for:

```console
$ ./chunk HTTP://a.b/c &
$ ./chunk HTTP://a.b/c &
```

But this would be OK:

```console
$ ./chunk HTTP://a.b/c &
$ cd another-directory && ./chunk HTTP://a.b/c &
```

What would happen here is that the file would be saved in a different directory by each process, so the two downloads would not conflict. In my opinion, two processes downloading to the same local path should not be allowed (a classic concurrency problem, affecting both the downloaded file and the progress file). I am just not sure how to control and prevent that.
If #38 is OK, what if we create a single lock file?
It is possible, but I believe we don't need to make it too restrictive. Below, I try to brainstorm a less restrictive option: consider each progress management file a lock file. That has the same complexity as creating a single lock file, but allows us to have as many chunk instances as needed, as long as they do not download the same file.

That said, there is one faulty case we should handle either way: what happens when a chunk instance crashes? There is a straightforward way to solve this problem:
With that, every time a download is started, chunk should do as follows:
Of course, we could introduce the …
Do we care about the same file being downloaded simultaneously by two chunk instances? Asking because I believe this can be a pre-release feature.
Example:
The result of this sequence of operations is unknown.
We could deal with it after #8, by augmenting the infrastructure in place.
cc/ @cuducos