WIP: Swarm services #186
base: master
Conversation
Signed-off-by: Evan Hazlett <[email protected]>
As this runs globally and uses docker.sock, how do you handle worker nodes, which can't give information about swarm services? Does it need to be restricted to managers only?
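For reference, one way this could be handled is to constrain the Interlock service to manager nodes with the placement constraints that shipped in 1.12, rather than running it fully global. A minimal sketch, not necessarily what this PR does; the image name is an assumption and the config is omitted:

```
# Sketch: keep Interlock on manager nodes only, so it can query swarm service
# state over the local socket. Image name is illustrative, not from this PR.
docker service create \
  --name interlock \
  --constraint 'node.role == manager' \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  ehazlett/interlock:latest
```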
Yes, or a proxy. I have a proxy that works and comes with TLS that could be used.
I have been thinking of building this into Interlock and running as a …
Interesting idea - how is access to the proxy secured?
TLS.
Whoops, let me clarify. I understand it uses TLS to talk to the remote API. Does it also use TLS between the Docker client and itself? And does it require any authentication?
@curtismitchell How would config.toml work with both docker.sock and TLS if it's a global environment variable? Surely putting certs/keys with access to the master nodes onto every worker is a security issue and also a fair amount of effort? @ehazlett have you got an example of this proxy working? Sounds like an interesting idea; not sure how it would work security-wise.
@tpbowden it would only need to use TLS on the swarm manager.
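For context, the alternative being described is pointing Interlock at the manager's TLS-secured remote API instead of a local socket. A sketch of what that configuration might look like; the field names (DockerURL, TLSCACert, TLSCert, TLSKey) are assumed from Interlock's config.toml, and the host and cert paths are placeholders:

```
# Sketch: write an Interlock config that talks to the swarm manager over TLS.
# Field names are assumed from Interlock's config.toml; values are placeholders.
cat > config.toml <<'EOF'
DockerURL = "tcp://swarm-manager.example.com:2376"
TLSCACert = "/certs/ca.pem"
TLSCert = "/certs/cert.pem"
TLSKey = "/certs/key.pem"
EOF
```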
@ehazlett the docs do not seem to show the …

@tpbowden sorry, it looks like I was mistaken in my previous comment. Per the docs, Interlock should only need access to the Docker socket on the swarm manager. I'm testing this out now.
@curtismitchell no, there is no need to share the config. Interlock will configure each instance.
@ehazlett Thanks for the speedy reply. It works! It just took a little longer than I expected based on the …
@ehazlett It's inconsistent. I followed the steps in your documentation with one exception: I used the … With …, it took some time (minutes maybe? I left the house and came back) before I was able to get the hello world message on the screen with a request to … Within minutes, it stopped working again. I hadn't made any changes. I scaled the …

BTW, is this the right place for this feedback? Since it doesn't appear that this PR has been merged yet, I didn't know where else to offer these observations.
Yes, there is currently a limitation in that you can only run Interlock on a manager node.
Yes, this is the right place for feedback. This will be the PR that adds swarm service support. If you just want to chat for help or debug you can ping me on IRC, or maybe gitter.im.
Oh! Again, I misunderstood something that was mentioned earlier. Thanks for the explanation.
No problem :)
I think IRC or gitter.im would be great if there were more participants. So far it has only been the three of us, and I think my questions are answered for now.
I was actually unable to get a hello world at all, but I did get the default "Welcome to nginx" page at some point. My thoughts were that the nginx-services steps guide included the flag ,writable=yes and my swarm manager complained about this flag in the mount options, so I looked through Docker's source and found that there isn't really a writable attribute but instead a readonly attribute available in the mount options. I tried readonly=false, but it still didn't work. I'm assuming that since the socket is probably not writable by default, my nginx instances never get notified that a new service joined, and all the nginx configs look like defaults. Is there something I need to do to be able to use that writable=true flag, or is the issue something different altogether?

Also, when a new service joins, does the container running the proxy extension (haproxy or nginx) actually get restarted? That's what it looked like when I was testing the non-services option. Is there a reason why an nginx or haproxy reload (i.e. nginx -s reload) is not used? I thought it was odd that a restart was required; it seems like it would drop all active connections and throw a not available / not found error for anyone waiting for a response from the service on open connections.

So far I was completely unable to get the services version working, with one exception. I did the following: …
Since it works for somebody but is not working for me, my guess is that I'm not doing something right, and any pointers are very much appreciated. Using CentOS 7, Docker 1.12-rc4, 2 NICs (one for the local server network and one for the public internet).
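On the reload question above: nginx does support a graceful reload that keeps existing connections open, which appears to be what is expected here instead of a container restart. A minimal sketch; the container name is a placeholder:

```
# Sketch: gracefully reload nginx inside the running proxy container instead
# of restarting the container ("proxy-nginx" is a placeholder name).
docker exec proxy-nginx nginx -s reload
```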
It's kinda working for me. If I set up nginx + interlock + demo app and have only 1 task running, only the nginx instance on the same node as the task will reload nginx and update nginx.conf; the other one will return 503 if I hit it by reloading the page.
```
docker service create \
  --mode global \
  --name interlock \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock,writable=true \
```
According to moby/moby#24053, writable=true is the default behavior and the flag has been removed.
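In other words, the example can simply drop the flag, since bind mounts are read-write unless readonly is specified. A sketch of the corrected invocation; the image name is an assumption because the original snippet is truncated at the mount flag:

```
# Sketch: the same bind mount without the removed writable flag.
# Bind mounts are read-write by default. Image name is illustrative.
docker service create \
  --mode global \
  --name interlock \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  ehazlett/interlock:latest
```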
Thanks. I'll re-vendor and update.
In docs/examples/nginx-services/README.md:

```
[[Extensions]]
Name = "nginx"
ConfigPath = "/etc/nginx/nginx.conf"
PidPath = "/var/run/nginx.pid"
TemplatePath = ""
MaxConn = 1024
Port = 80
```

Now create the Interlock service:

```
docker service create \
  --mode global \
  --name interlock \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock,writable=true \
```
So I got this working to a certain extent in 1.12.1 with a cluster of nodes.

Hosts setup: …

Service setup: …

Then add demo.local to the hosts file.
However, we only ever see one container on the demo app. This could be to do with how the Docker guys have changed host discovery in 1.12.1. I notice that the upstream server listed in …

Perhaps load balancing is not happening the way it used to in 1.12. Also, a new feature has been added to the DNS for service discovery, so you can do:
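Presumably this refers to the tasks.<service> DNS name that the 1.12 embedded DNS exposes for service discovery; a sketch, with "demo" as a placeholder service name:

```
# Sketch: resolve all task IPs for a service from inside a container attached
# to the same overlay network ("demo" is a placeholder service name).
nslookup tasks.demo
```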
I guess this would ideally be translated to the contents of the nginx config file. Incidentally, should we be putting the config in …?
Strange, now after a machine restart and building the cluster from scratch this works as expected... oh well. It's a bit annoying that Interlock and nginx have to reside on the same node and that node has to be a manager node, as this means the publicly facing nodes end up being managers, and you are limited to having as many nginx containers as managers. Am I right in thinking this is true?
Hmm, OK, this does seem to break on killing or scaling the service down. We then get intermittent failures, as it looks like Docker is keeping hold of the stale IPs for the service after scaling: …

The DNS records still have 10 items: …

I presume this is why the connection to the ingress endpoint is intermittent, as Docker is dialling into those stale IPs. Related to moby/moby#25130. Also, it looks like we can't use other overlay networks for Interlock to work.
Yes, this branch is still WIP. Obviously you don't just want it on managers.
Seems like moby/moby#25962 fixes the scaling problem.
Thanks for this work, really interesting. But is there a way to avoid exposing ports for the other services, since I don't necessarily want them to be accessible? (I know I can block them with iptables, but it would be even easier not to expose them.) My suggestion, taking your example: this would maybe need some declaration on the docker service command with a label, for instance interlock:port=8080. The command declaring the demo service would be something like:
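A hedged sketch of the kind of declaration being proposed, with the suggested port label (written interlock.port here to follow Docker's label-key conventions); the hostname/domain labels and the image are placeholders, not part of the proposal:

```
# Sketch of the proposal: no published port; the backend port is declared via
# a label instead. Label names and image are illustrative placeholders.
docker service create \
  --name demo \
  --label interlock.hostname=demo \
  --label interlock.domain=local \
  --label interlock.port=8080 \
  ehazlett/docker-demo
```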
What do you think?
I'm ready to test this as well for Swarm 1.12+. Can't wait to make it happen :)
Thx!
This adds support for Docker 1.12 services. There is an example doc showing how it works. All container labels that were used to configure Interlock should be supported using Service labels.
This also switches from the dockerclient Docker Go lib to the official docker/engine-api client. This adds a few fixes and should improve stability. This also pulls in the InfluxDB backend for the beacon extension.
Closes #178
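For reference, a minimal sketch of what configuring a service through labels could look like, assuming the interlock.hostname / interlock.domain label names carry over from the container-based setup; the service name, image, and port are illustrative:

```
# Sketch: declare routing hints as service labels, mirroring the old container
# labels. Label names, image, and port are assumptions for illustration.
docker service create \
  --name demo \
  --label interlock.hostname=demo \
  --label interlock.domain=local \
  --publish 8080:8080 \
  ehazlett/docker-demo
```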