Releases: hazardfn/riemannx
v4.2.0
v4.1.4
v4.1.3
v4.1.2
v4.1.1 (BROKEN)
Performance Enhancements
- Don't send zero-length packets. This mostly affected UDP, but the overhead would have been present on the TCP layer also; however, it appears the underlying implementation ignored those zero-length packets for TCP.
NOTE: This only affected TCP/UDP when used in combination with the batcher module.
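For illustration, here is a minimal Elixir sketch of the guard described above, not the library's actual code. The helper name `maybe_send/2` and the `:gen_udp` usage line are assumptions about how the batcher hands encoded payloads to the socket layer:

```elixir
defmodule ZeroLengthGuard do
  # Hypothetical helper illustrating the fix: never hand a zero-length
  # payload to the socket layer. `send_fun` stands in for the UDP/TCP send.
  def maybe_send(payload, send_fun) when is_binary(payload) do
    case payload do
      <<>> -> :ok                 # nothing batched: skip the send entirely
      data -> send_fun.(data)     # non-empty: send as before
    end
  end
end

# Usage sketch with gen_udp (socket/host/port assumed to exist already):
# ZeroLengthGuard.maybe_send(encoded, &:gen_udp.send(socket, host, port, &1))
```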
v4.1.0
Performance Enhancement
Submitted by @ulfurinn:
There is an issue where high message pressure on connection procs can cause severe performance degradation. This happens because checking out a poolboy worker is a gen call, which necessarily requires a selective receive for a message that ends up at the end of a very long mailbox, so even seemingly fast async sends can become very expensive.
This is a problem with all connection types; batch is easiest to fix because it isolates the problem to a single process, while using the tcp/udp connections directly will place the bottleneck in all the consumer processes. This PR only addresses the batch connection.
The easiest approach is to simply move the flush that contains the poolboy checkout into a newly spawned process, which starts with an empty mailbox and thus avoids the expensive selective receive. This, however, may cause packets to become reordered, which may matter to client applications, since many spawned flush procs will be executed in whichever order the scheduler decides is optimal. To avoid this, a simple locking mechanism is added where only one flush is allowed to run at a time, preserving the current effective behaviour.
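Below is a minimal Elixir sketch of the idea, not the PR's actual code: batches are flushed from freshly spawned processes (empty mailbox, so the poolboy checkout is cheap), and a simple lock keeps only one flush in flight so ordering is preserved. The module, pool name and call shapes (`BatchFlusher`, `:riemannx_pool`, `{:send_batch, batch}`) are hypothetical.

```elixir
defmodule BatchFlusher do
  use GenServer

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  def init(_opts), do: {:ok, %{flushing: false, pending: []}}

  # Consumers cast batches here; this process stays cheap because the
  # expensive poolboy checkout never happens in its own mailbox.
  def flush(batch), do: GenServer.cast(__MODULE__, {:flush, batch})

  # No flush in progress: take the lock and start one.
  def handle_cast({:flush, batch}, %{flushing: false} = state),
    do: {:noreply, start_flush(batch, %{state | flushing: true})}

  # A flush is already running: queue the batch so ordering is preserved.
  def handle_cast({:flush, batch}, state),
    do: {:noreply, %{state | pending: state.pending ++ [batch]}}

  # The spawned flush proc finished: release the lock or start the next batch.
  def handle_info(:flush_done, %{pending: []} = state),
    do: {:noreply, %{state | flushing: false}}

  def handle_info(:flush_done, %{pending: [next | rest]} = state),
    do: {:noreply, start_flush(next, %{state | pending: rest})}

  defp start_flush(batch, state) do
    parent = self()

    # A fresh process starts with an empty mailbox, so poolboy's selective
    # receive during checkout stays cheap regardless of consumer pressure.
    spawn(fn ->
      worker = :poolboy.checkout(:riemannx_pool)

      try do
        GenServer.call(worker, {:send_batch, batch})
      after
        :poolboy.checkin(:riemannx_pool, worker)
        send(parent, :flush_done)
      end
    end)

    state
  end
end
```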
v4.0.9
Bug
- Default batch settings could sometimes result in a message queue build-up during a Riemann outage
Behaviour Changes
- Riemannx will now wait 30s for a free worker from the pool; if it can't find one, it will drop the data. If your data is crucial you should ALWAYS send directly using TCP.
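As an illustration of the new behaviour, here is a hedged Elixir sketch assuming workers come from a poolboy pool; the pool name `:riemannx_pool` and the `{:send, events}` call shape are hypothetical:

```elixir
defmodule PoolSend do
  @checkout_timeout 30_000

  # Waits up to 30s for a free worker; if the pool stays exhausted, the
  # checkout exits with a timeout and the data is dropped.
  def send_via_pool(events) do
    worker = :poolboy.checkout(:riemannx_pool, true, @checkout_timeout)

    try do
      GenServer.call(worker, {:send, events})
    after
      :poolboy.checkin(:riemannx_pool, worker)
    end
  catch
    # No free worker within 30s: drop rather than let queues build up.
    :exit, _reason -> {:error, :dropped}
  end
end
```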
Future Considerations
- The next major version will allow you to siphon data to disk when it cannot be sent, and sending will be re-attempted at intervals.