Events → BatchMode → Flush doesn't block and wait #476
Comments
Thank you for raising the issue. It looks to me like what we need is a new blocking variant of `Flush`. Perhaps a combination of the existing pieces would work.
I wonder, too, if we might be able to leverage the context object to shut down the rest of the system while the POST completes. I don't expect to be able to look into this in the near term, though.
This issue has been automatically marked as stale because it has not had any recent activity. It will be closed if no further activity occurs.

Bump.

Hi @skyzyx, were you able to work around this, or is it still an active issue?
Description
The documentation for the `Events.Flush` method describes flushing the queue in the foreground. I expect "foreground" to be synonymous with "block and wait", and the client doesn't appear to be doing that. This is the culmination of research that was originally discussed in #464.
Go Version
Current behavior
Reading thousands of records from one system, and pushing them into New Relic via `pkg/events` in batch mode. Code is very similar to what was posted in #464, but essentially boils down to something like this. (I've stripped out some event handling and logging code, so there may be some phantom variables here.)
What happens is that `client.Flush()` isn't preventing the program from ending, so `main()` exits before the queue flushes (or, more accurately, even starts).

Expected behavior
My expectation is that `client.Flush()` should block and wait, preventing `main()` from exiting until the flushed events have responded successfully (or at least have been sent in the first place).

Steps To Reproduce
Additional Context
I was seeing cases where sometimes events made it, and other times they did not. The difference appeared to be related to which log level I was setting on the client. Once I narrowed down the variables in the puzzle, I discovered that it wasn't the log level so much as the time it took for those log levels to write data to stdout/stderr.
`debug` and `trace` wrote a lot more data to stdout/stderr, which slowed down the program enough to allow the goroutines to complete. `info` wrote less data, enabling the program to run faster, and the goroutines didn't have enough time to finish (or even start).

A workaround has been to sleep at the end of the `main()` function. But what this means is that it always ends up sleeping either too long or not long enough. Too long means that I'm missing out on other live events while I'm waiting for the program to finish sleeping before listening again. Too short means that some events I've read aren't making their way to the Events API.
Ideally, the client should intrinsically understand what it needs to block for, then stop blocking once the queue flushing has completed (for whatever definition of "completed" is appropriate — request has been sent, or response has been received).
References or Related Issues