Add option to add jitter to interval #173
base: master
```diff
@@ -123,8 +123,10 @@ func New(client kubernetes.Interface, labels, annotations, namespaces, namespace
 // Run continuously picks and terminates a victim pod at a given interval
 // described by channel next. It returns when the given context is canceled.
-func (c *Chaoskube) Run(ctx context.Context, next <-chan time.Time) {
+func (c *Chaoskube) Run(ctx context.Context, maxJitter time.Duration, next <-chan time.Time) {
 	for {
+		jitter := util.RandomJitter(maxJitter)
+		time.Sleep(jitter)
 		if err := c.TerminateVictims(); err != nil {
 			c.Logger.WithField("err", err).Error("failed to terminate victim")
 			metrics.ErrorsTotal.Inc()
```

Review thread on the added `time.Sleep(jitter)` line:

If we go with two sources of delay (the `next` channel and the jitter duration, see the comment above), we shouldn't use `time.Sleep` here because it blocks handling Ctrl+C etc. Instead we can do another select:

```go
select {
case <-time.After(jitter):
case <-ctx.Done():
	return
}
```

after the …

I just noticed that this jitter will only increase the interval, never reduce it, right? So, an interval of …

That's correct. I'm going to look at jitterbug to see if that would fix most of the issues found here.

@linki So I think going with jitterbug's normal distribution is what would fit well here, using a mean of 0 and a standard deviation provided by the user. Thoughts?

@desponda SGTM. Thanks for looking into it!
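For illustration, here is a minimal sketch of the normally distributed jitter the thread converges on: mean 0, with a user-supplied standard deviation. It uses the standard library's `rand.NormFloat64` rather than jitterbug, and the helper name `randomNormalJitter` plus the clamp to zero are assumptions for this example, not code from the PR. Unlike a uniform jitter in `[0, maxJitter)`, a mean-0 normal jitter can shorten the interval as well as lengthen it.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// randomNormalJitter returns a jitter drawn from a normal distribution with
// mean 0 and the given standard deviation. Because the mean is 0, the jitter
// can be negative and therefore shorten the interval as well as lengthen it.
func randomNormalJitter(stdDev time.Duration) time.Duration {
	return time.Duration(rand.NormFloat64() * float64(stdDev))
}

func main() {
	base := 10 * time.Minute
	stdDev := 1 * time.Minute

	for i := 0; i < 5; i++ {
		interval := base + randomNormalJitter(stdDev)
		if interval < 0 {
			interval = 0 // guard against extreme negative draws
		}
		fmt.Println(interval)
	}
}
```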
Another review comment on the hunk:

It would be nice if we only had one "tick source": a Ticker with a channel and some jitter inside. Something like https://github.com/lthibault/jitterbug, maybe.
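To illustrate that single tick source idea, here is a rough sketch of a jittered ticker built only on the standard library. It is not jitterbug's actual API; the `JitterTicker` type, its fields, and `NewJitterTicker` are invented for this example. The point is that `Run` could then select on just `ticker.C` and `ctx.Done()`, keeping the jitter logic out of the termination loop.

```go
package main

import (
	"context"
	"fmt"
	"math/rand"
	"time"
)

// JitterTicker delivers ticks on C at roughly interval ± jitter, so a
// consumer like Chaoskube.Run only has to select on a single channel.
type JitterTicker struct {
	C      chan time.Time
	cancel context.CancelFunc
}

// NewJitterTicker starts a ticker whose period is interval plus a normally
// distributed jitter (mean 0, standard deviation stdDev), clamped at zero.
func NewJitterTicker(interval, stdDev time.Duration) *JitterTicker {
	ctx, cancel := context.WithCancel(context.Background())
	t := &JitterTicker{C: make(chan time.Time), cancel: cancel}

	go func() {
		for {
			d := interval + time.Duration(rand.NormFloat64()*float64(stdDev))
			if d < 0 {
				d = 0
			}
			select {
			case <-time.After(d):
			case <-ctx.Done():
				return
			}
			select {
			case t.C <- time.Now():
			case <-ctx.Done():
				return
			}
		}
	}()
	return t
}

// Stop terminates the ticker's goroutine.
func (t *JitterTicker) Stop() { t.cancel() }

func main() {
	ticker := NewJitterTicker(2*time.Second, 500*time.Millisecond)
	defer ticker.Stop()

	for i := 0; i < 3; i++ {
		fmt.Println("tick at", <-ticker.C)
	}
}
```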