docker + dat deployment #858
Seems like it is related to a bug in discovery-swarm/channel (mafintosh/discovery-swarm#32) where the internal port doesn't match the external port and the wrong one gets reported.
Switching Docker networking to "host" mode works. This uses the host OS's ports instead of Docker's networking interface, getting around the external/internal port bug.
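For reference, a minimal sketch of that host-networking workaround; the image name, volume path, and command are illustrative placeholders, not an official image:

```sh
# --network host makes the container share the host's network stack, so dat's
# ports are opened directly on the host and no Docker port mapping is involved.
# "some/dat-image" and /srv/my-dat are placeholders.
docker run -d --network host \
  -v /srv/my-dat:/data \
  some/dat-image dat share /data
```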
I think you are running it wrong. If you just do …
I am running hypercored inside Docker using …
interestingly, I've been having issues with dat sharing that are related to this: … pretty much doesn't work. I'm going to compare to running hypercored in a bit - cos this is ew! whereas using …
I'm kinda wondering if there's not enough error reporting going on - I was able to run 2 …
@SvenDowideit see #945 about the port question.
@mitar merci! That would pretty much make issues impossible to debug when using it in Docker.
Also see #947.
I have experienced the same thing while trying to dockerize dat on my local machine. I tried exposing port 3282 and also using --net host, but the local host machine does not connect to dat in Docker. I did not manage to connect to it from the dat CLI nor from Beaker Browser. However, non-dockerized dat works fine on the local machine and you can access it directly, e.g. from Beaker. You can also connect from any remote machine directly to the local dat Docker container (!). It is enough to just expose 3282; there is no need for --net host. I agree with @SvenDowideit that it is almost impossible to understand where dat fails without proper debugging messages. I was only able to work out that just the local machine was failing by trial and error.
Did you try running dat with the DEBUG env var set to *? That'll give a lot of logging.
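A sketch of that suggestion (the image name and paths are placeholders):

```sh
# DEBUG=* enables every debug namespace in the Node `debug` convention the dat
# CLI uses, which produces verbose logging.
DEBUG=* dat share /data

# When dat runs inside a container, pass the variable through to it:
docker run -e DEBUG=* -p 3282:3282 -v /srv/my-dat:/data some/dat-image dat share /data
```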
I'm facing the same issue, made worse by the fact I can't use …
I looked into this a while ago and found something about the Docker NAT implementation having a bug with UDP hole punching. I can't find the GitHub thread now, but thought I'd comment in case that ends up being relevant.
Thanks @maxogden, I saw your status from two years ago reminding us two years in the future :-) https://twitter.com/maxogden/status/913874394421719040 The issue that @joehand references at the top may be the one you are thinking of: #841. For the time being I'm mounting a volume and running …
@pfrazee yes I tried, but it still lacks some data needed for understanding; sometimes it does not even show the URL that dat is sharing. By the way, @rjsteinert reminded me about another Docker problem with dat share if you run it on macOS: Docker has problems forwarding file-change notifications (inotify) on that system. So if you run dat share against a shared volume inside the Docker machine and then make changes in that volume from the host machine, dat will not pick up the changes nor share them on the network. dat share does work if the changes are made from inside the same Docker machine rather than from the host. Taking this into account, it may be wise for the dat CLI not to rely only on inotify, but also to occasionally run a low-priority scan of all files in the folder for changes (?), at least under Docker.
I didn't have problems running dat inside Docker with the regular Docker NAT; the problem was just that dat picks random ports. If you can configure which port dat really uses, things work great.
@mitar but it does? It picks the first available port if the default port is already in use: https://github.com/datproject/dat/blob/master/bin/cli.js#L39
Yes, and so it is tricky to know that the default port is not the one being used, which makes it tricky to debug. I would prefer a mode which is "use the given port or die". I have had issues in Docker only because my dat daemon was listening on ports I had not mapped, and it was hard to debug.
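One way to debug this is to compare what the daemon actually bound against what was mapped; a sketch, assuming `ss` is available in the container and `dat-container` is a placeholder name:

```sh
# List listening TCP/UDP sockets inside the container...
docker exec dat-container ss -tulpn
# ...and compare them with the ports Docker actually published to the host.
docker port dat-container
```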
@mitar, the latest release of the dat cli fixed this =). Thanks for the suggestions on that.
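With a pinned port the Docker mapping can be made to match exactly. A sketch, assuming the new parameter is spelled `--port` (check `dat share --help` in your release for the exact name):

```sh
# Force dat onto a fixed port so the published Docker port lines up with it.
dat share /data --port 3282
```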
Then it should work. (If nothing else changed. I have not used dat for a few months now.)
@nettiopsu, did you expose both the UDP and TCP ports? Use …
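Presumably something along these lines, publishing the same port for both protocols (a guess at the intended command, not the original; image and path are placeholders):

```sh
# A plain "-p 3282:3282" publishes TCP only, so the UDP mapping must be added
# explicitly alongside it.
docker run -p 3282:3282/tcp -p 3282:3282/udp \
  -v /srv/my-dat:/data some/dat-image dat share /data
```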
@mitar just tried it, and it did not help; same situation - cannot connect from the host machine, but can connect from a remote one. I have also tried the new port parameter to use another port, but then even the remote machine stopped working.
Oh, yes, from the host it does not work. This is a general issue with NAT. I missed that in your original comment.
Would you mind explaining more about what the general issue with NAT is?
I mean, it depends on how your Docker is configured, but if the whole Docker network is behind one IP (NAT), you might not be able to access a particular container's IP from outside (the host). You can map a port to that one IP and then access the port on it.
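A few commands that help when poking at this from the host (the container name is a placeholder):

```sh
# The container's IP on the default bridge network sits behind Docker's NAT,
# so from the host it is usually easier to go through the published port.
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' dat-container
docker port dat-container 3282

# Quick reachability check of the published TCP port from the host side.
nc -vz localhost 3282
```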
Current status on docker + dat networking troubleshooting. This is a general issue to keep visibility and update progress, not directly related to the CLI.
We're still having trouble accessing exposed ports directly, e.g.: …
We had an issue with IPTables before (#503), and Docker uses IPTables, so it may be related to that rather than to Docker directly. This is what some of our IPTables rules (generated via Docker) look like: …
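For illustration only (not the original output from this deployment), Docker's published ports show up as DNAT rules in the `DOCKER` chain of the nat table:

```sh
# Inspect the NAT rules Docker generated for published container ports.
sudo iptables -t nat -L DOCKER -n
# Typical entries for a container publishing 3282 look roughly like:
#   DNAT  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:3282 to:172.17.0.2:3282
#   DNAT  udp  --  0.0.0.0/0  0.0.0.0/0  udp dpt:3282 to:172.17.0.2:3282
```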
See also: …