
Multi-interlock config #123

Open · etoews wants to merge 3 commits into master
Conversation

@etoews (Contributor) commented Apr 21, 2016

Closes #89

@@ -461,10 +477,8 @@ func (l *LoadBalancer) isExposedContainer(id string) bool {
return false
}

log().Debugf("checking container labels: id=%s", id)
// ignore proxy containers
if _, ok := c.Config.Labels[ext.InterlockExtNameLabel]; ok {
@etoews (Contributor, Author) commented on the diff:

When the proxy is ignored:

  1. Proxy has containers in its config
  2. Kill proxy
  3. Run proxy
  4. Containers are not written to new proxy config

When the proxy isn't ignored:

  1. Proxy has containers in its config
  2. Kill proxy
  3. Run proxy
  4. Containers are written to new proxy config
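To make the difference concrete, here is a minimal sketch of the check this hunk performs. isExposedContainer and ext.InterlockExtNameLabel come from the diff above; the literal label key and the sample labels are assumptions for illustration:

    package main

    import "fmt"

    // isProxyContainer mirrors the hunk above: a container carrying the
    // interlock extension name label is the proxy itself, so it is skipped
    // and never written back into the regenerated proxy config.
    // "interlock.ext.name" is an assumed stand-in for ext.InterlockExtNameLabel.
    func isProxyContainer(labels map[string]string) bool {
        _, ok := labels["interlock.ext.name"]
        return ok
    }

    func main() {
        proxy := map[string]string{"interlock.ext.name": "nginx"}
        app := map[string]string{"interlock.hostname": "test", "interlock.domain": "local"}
        fmt.Println(isProxyContainer(proxy)) // true: ignored when the proxy restarts
        fmt.Println(isProxyContainer(app))   // false: written to the new proxy config
    }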

@ehazlett added this to the 1.2 milestone on Apr 27, 2016
@etoews (Contributor, Author) commented May 3, 2016

@ehazlett Is there anything else I can do to help get this merged?

@etoews (Contributor, Author) commented May 6, 2016

@ehazlett Does anything else need to be done here?

@ehazlett (Owner) commented:

The review looks good. I tried to get this to work with the example in docs/examples/nginx-swarm-machine by adding the labels, but all I see is Interlock ignoring the service containers.

You can see the part of the log where it ignores the container (this is the app service in the compose file):

interlock_1  | DEBU[0018] ignoring service container: id=57bc4313ee2ec57212ee1638aa53a91559096883ed8a796dfa547846f0add730 labels=map[com.docker.compose.service:app com.docker.swarm.id:aa952409965cc1f6815523d5f40bc81bb7eb19cd662adccf1e3d36b73d8df5ad com.docker.compose.container-number:1 com.docker.compose.project:nginxswarmmachine com.docker.compose.version:1.7.1 interlock.domain:local interlock.ext.service_name:foo interlock.hostname:test com.docker.compose.config-hash:9d1a61128f74da3782173dfc505c52c2b3995c37e59858b5bf3261062061dc60 com.docker.compose.oneoff:False]  ext=lb
interlock_1  | DEBU[0018] event received: status=start id=57bc4313ee2ec57212ee1638aa53a91559096883ed8a796dfa547846f0add730 type=container action=start

@ehazlett (Owner) commented:

Also, I don't like how Interlock uses the label for its own service name. This should be in the config.toml: you should set a ServiceName in the config instead of having the Interlock container look for its label.

@etoews (Contributor, Author) commented May 13, 2016

Actually Interlock does need to have the service name in the config.toml as ServiceName under [[extensions]]. That's why your service containers are getting ignored.

For example:

[[extensions]]
Name = "nginx"
ServiceName = "myservice"
...
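And the service containers carry the matching label. A minimal compose-style sketch, following the label keys visible in the log above (the image name is illustrative):

    app:
      image: myapp
      labels:
        interlock.ext.service_name: "myservice"
        interlock.hostname: "test"
        interlock.domain: "local"

In your log the containers carry interlock.ext.service_name:foo, so the extension would presumably need ServiceName = "foo" to pick them up.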

I documented the ServiceName option in docs/configuration.md, but I should document it in docs/interlock_data.md as well. I'll fix that.

I chose to also have Interlock require the label because it makes it very simple to determine all of the containers that make up a functioning service. That's simpler for Interlock itself, and also for anyone listing containers with docker ps or via the API using label filters. It's handy for devs or ops to get this kind of insight into a service.
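For instance, pulling up every container that backs a service by its label (the service name value is illustrative):

    docker ps --filter "label=interlock.ext.service_name=myservice"

That one filter returns every container backing the service without needing to know anything else about it.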

@ehazlett modified the milestones: 1.3, 1.2 on Jun 3, 2016
@sjoshi10 commented:

@ehazlett Is this possible in newer versions of Interlock, or does this need to be merged?
