zincati.service: periodically restart zincati daemon #1121

Closed · wants to merge 1 commit
dist/systemd/system/zincati.service (1 addition, 0 deletions)

@@ -22,6 +22,7 @@ Type=notify
 ExecStart=/usr/libexec/zincati agent ${ZINCATI_VERBOSITY}
 Restart=on-failure
 RestartSec=10s
+RuntimeMaxSec=3w
 
 [Install]
 WantedBy=multi-user.target
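
For context, RuntimeMaxSec= is a stock systemd [Service] directive: once the unit has been active longer than the configured span, systemd terminates it and records the stop as a failure, which the existing Restart=on-failure line then turns into a restart. Below is an annotated view of the patched section; the comments are explanatory only and not part of the proposed change.

[Service]
Type=notify
ExecStart=/usr/libexec/zincati agent ${ZINCATI_VERBOSITY}
# Any failure (including the RuntimeMaxSec expiry below) restarts the agent
# after a 10 second delay.
Restart=on-failure
RestartSec=10s
# New in this PR: after three weeks of continuous runtime the unit is stopped
# and marked failed, and the Restart= line above brings it back up. This is
# also why the restart can look like an error, as discussed in the review below.
RuntimeMaxSec=3w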
Member commented:

This seems to implicitly rely on the default FCOS release schedule, meaning we'll have restarted before we hit this timeout by default. Now, Zincati is pretty FCOS-specific, but still, this seems... slightly unclean at best.

I think the most obvious bigger fix here is to not run as a persistent daemon at all, but to run as a timer.
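
For illustration only, a timer-based shape would presumably be a oneshot service driven by a .timer unit instead of a long-running agent, roughly like the sketch below. The unit names, the interval, and the check-update subcommand are hypothetical; Zincati has no such subcommand today.

# zincati-update.timer (hypothetical unit)
[Unit]
Description=Periodic Zincati update check

[Timer]
# Run shortly after boot, then hourly, with some jitter to spread load.
OnBootSec=5min
OnUnitActiveSec=1h
RandomizedDelaySec=10min

[Install]
WantedBy=timers.target

# zincati-update.service (hypothetical unit)
[Unit]
Description=One-shot Zincati update check

[Service]
Type=oneshot
# "check-update" stands in for whatever a timer-driven invocation would be;
# it is not an existing Zincati command.
ExecStart=/usr/libexec/zincati check-update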

Member commented:

(That said, I'm OK with this... but it would probably be better done via a drop-in in fedora-coreos-config instead of hardcoded in source here.)
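
Concretely, such a drop-in carried in fedora-coreos-config would only need to override the one directive; a minimal sketch, assuming the usual zincati.service.d/ drop-in mechanism (the file name and overlay path are illustrative):

# e.g. usr/lib/systemd/system/zincati.service.d/10-runtime-max.conf
# (illustrative name; any *.conf drop-in in zincati.service.d/ works)
[Service]
RuntimeMaxSec=3w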

Member commented:

> This seems to implicitly rely on the default FCOS release schedule, meaning we'll have restarted before we hit this timeout by default. Now, Zincati is pretty FCOS-specific, but still, this seems... slightly unclean at best.

The chosen period is FCOS-informed but isn't tightly coupled at all. Whether updates for a given environment come in faster or slower than that, it shouldn't hurt to restart the service. I suggested making it longer so that, at least in the context of FCOS (by far the most prevalent, if not only, use of Zincati), it's more of a last resort than a primary line of defense. But yes, I'm also fine with having it be a drop-in override in the FCOS overlays.

> I think the most obvious bigger fix here is to not run as a persistent daemon at all, but to run as a timer.

Yes, @travier suggested this as well, but it requires a larger rework.

Member (author) commented:

I had originally suggested 2w, but @jlebon suggested 3w. TBH it could be 1 day, though I'm not sure how well that would work with the periodic update strategy that people can set.

I'd be OK with setting this in the FCOS configs if we think it would be better placed there. Theoretically, if Zincati had other consumers, this would be useful to them too, though they might prefer a different restart cadence.
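
For reference, the periodic strategy mentioned above is set in Zincati's TOML configuration and restricts when update finalization (and the resulting reboot) may happen; a forced daemon restart would need to coexist with windows like the one sketched here (the path and values are illustrative):

# e.g. /etc/zincati/config.d/55-updates-strategy.toml
[updates]
strategy = "periodic"

# Only finalize updates during a one-hour Saturday-night window.
[[updates.periodic.window]]
days = [ "Sat" ]
start_time = "23:30"
length_minutes = 60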

Member commented:

> it shouldn't hurt to restart the service.

Well, it will be racy with D-Bus today.

But my larger (yet still minor) concern is that this restart will appear as an error when it's not.

Member commented:

> > it shouldn't hurt to restart the service.
>
> Well, it will be racy with D-Bus today.

That's fair. I was actually unaware that Zincati had a D-Bus interface until debugging #1119 (comment) today. It still seems pretty hidden currently, at least, but indeed, if we were to develop it more down the line as part of #498, having the daemon be seemingly randomly force-restarted would be odd, and we'd probably want to rework or drop this directive.

