Do a soft reload of haproxy #212
base: master
Conversation
Send the haproxy container a HUP instead of restarting it.
rebuild.sh (outdated)
@@ -0,0 +1,6 @@
#!/bin/bash
This is not needed.
Sorry, didn't mean to commit that file.
if err := p.client.ContainerKill(context.Background(), cnt.ID, "HUP"); err != nil {
    log().Errorf("error reloading container: id=%s err=%s", cnt.ID[:12], err)
    continue
}
Are you sure this catches all cases? A default with a log probably wouldn't hurt.
I will add it and update the PR.
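A minimal sketch of the suggested default arm, assuming the surrounding code switches on the container state inside a loop over proxy containers (the standalone function shape and the logrus import are assumptions for illustration, not interlock's actual code):

package main // hypothetical sketch, not the PR's actual code

import (
    "context"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/client"
    log "github.com/sirupsen/logrus"
)

// reloadProxyContainers sends SIGHUP to each running proxy container
// and, per the review suggestion, logs any container in an unexpected
// state instead of silently skipping it.
func reloadProxyContainers(c *client.Client, containers []types.Container) {
    for _, cnt := range containers {
        switch cnt.State {
        case "running":
            log.Debugf("reloading proxy container: id=%s", cnt.ID[:12])
            if err := c.ContainerKill(context.Background(), cnt.ID, "HUP"); err != nil {
                log.Errorf("error reloading container: id=%s err=%s", cnt.ID[:12], err)
                continue
            }
        default:
            // the suggested default arm: make unexpected states visible
            log.Warnf("proxy container in unexpected state: id=%s state=%s", cnt.ID[:12], cnt.State)
        }
    }
}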
// update the proxy container status
cInfo, err := p.client.ContainerInspect(context.Background(), cnt.ID)
if err != nil {
    log().Errorf("unable to inspect proxy container: %s", err)
This will fall through without a continue.
The continue is there.
Ah I didn't see it from the diff. 👍
}
case "running":
    log().Debugf("reloading proxy container: id=%s", cnt.ID)
    if err := p.client.ContainerKill(context.Background(), cnt.ID, "HUP"); err != nil {
Are you sure HAProxy handles this properly? I believe we've tried HUP before and found that it didn't always reload the config.
That being said, if we can test it I would love to get rid of the TCP hacks. :)
Pretty sure it is stable in v1.7.x. I have been using it (outside of interlock) for a while.
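Whether HUP actually triggers a reload depends on how the haproxy image traps the signal, so a quick sanity check is to HUP a running container and confirm the process survives. A hedged sketch using the Docker Go client (the container name is an example; this only verifies the container keeps running, not that the config was re-read):

package main // hypothetical smoke test for the HUP-based reload

import (
    "context"
    "fmt"

    "github.com/docker/docker/client"
)

func main() {
    c, err := client.NewEnvClient()
    if err != nil {
        panic(err)
    }
    ctx := context.Background()
    const name = "interlock-haproxy" // example container name

    // send SIGHUP, then inspect to confirm the container survived it
    if err := c.ContainerKill(ctx, name, "HUP"); err != nil {
        panic(err)
    }
    info, err := c.ContainerInspect(ctx, name)
    if err != nil {
        panic(err)
    }
    fmt.Printf("state after HUP: running=%v pid=%d\n", info.State.Running, info.State.Pid)
}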
log().Infof("restarted proxy container: id=%s name=%s", cnt.ID[:12], cnt.Names[0])
}

if err := p.resumeSYN(); err != nil {
This is needed to safely reload (see above comment).
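For context, the "TCP hacks" referenced above usually mean dropping inbound SYN packets for the duration of the restart, so that clients silently retransmit instead of getting connection resets. The actual dropSYN/resumeSYN implementations are not shown in this diff; a hypothetical sketch of the common iptables variant of the technique:

package main // hypothetical sketch of the SYN drop/resume pair

import "os/exec"

// dropSYN inserts an iptables rule that drops inbound TCP SYNs on the
// proxy port; clients retransmit the SYN rather than seeing a reset,
// which papers over the restart window.
func dropSYN(port string) error {
    return exec.Command("iptables", "-I", "INPUT", "-p", "tcp",
        "--dport", port, "--syn", "-j", "DROP").Run()
}

// resumeSYN removes the rule so new connections are accepted again.
func resumeSYN(port string) error {
    return exec.Command("iptables", "-D", "INPUT", "-p", "tcp",
        "--dport", port, "--syn", "-j", "DROP").Run()
}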
test/integration: Remove test.hosts fixture
Send the haproxy container a HUP instead of restarting it.