Batched PUT requests #205
base: dev/1.0.0
Conversation
Hi @robagar, thank you for your contribution. In order to accept it we need you to sign the Eclipse Contributor Agreement.

ECA signed

@Charles-Schleich could you review the PR?

We don't use InfluxDB v2, and have no intention to. It looks like a dead end, with Flux being abandoned for v3.

Hi @robagar, thank you for the contribution!
Skimming the code it looks fairly simple to add the batching to v2 - the client …

v1/src/lib.rs (outdated)

```rust
let client_clone = client.clone();
let name_clone = config.name.clone();
TOKIO_RUNTIME.spawn(async move {
```
The plugins can be linked statically or dynamically to an instance of zenohd.
In the case that we link statically, we want to reuse the Tokio runtime of zenohd and not use the TOKIO_RUNTIME stored in the lazy static:
This can be achieved by a check on the tokio handle, and running it on the correct executor.
```rust
let batch_future = async move { /* ... */ };
match tokio::runtime::Handle::try_current() {
    Ok(handle) => handle.spawn(batch_future),
    Err(_) => TOKIO_RUNTIME.spawn(batch_future),
};
```
```rust
pub const PROP_STORAGE_PUT_BATCH_SIZE: &str = "put_batch_size";
pub const PROP_STORAGE_PUT_BATCH_TIMEOUT_MS: &str = "put_batch_timeout_ms";
```
These config options must be documented in the README (there is a pending PR for an example Config file)
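To illustrate what such documentation might look like, here is a hypothetical sketch of how the two new options could appear in a zenohd storage-manager config. The exact field placement is an assumption on my part; the pending example-config PR is authoritative.

```json5
{
  plugins: {
    storage_manager: {
      volumes: {
        influxdb: {
          url: "http://localhost:8086",
          // Assumed placement of the options added by this PR:
          put_batch_size: 100,       // max data points per PUT request
          put_batch_timeout_ms: 50,  // flush a partial batch after 50 ms
        },
      },
      storages: {
        demo: {
          key_expr: "demo/**",
          volume: "influxdb",
        },
      },
    },
  },
}
```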
```rust
// TODO - add pending status
Ok(StorageInsertionResult::Inserted)
```
I'm not 100% sure about returning a `StorageInsertionResult::Inserted`, as the insertion could fail inside the batch after it has been reported to zenoh as inserted. This will likely have implications for replication and storage alignment.

The suggestion to add a `Pending` status could work, however then we will also have to add a `Completed` status to signal to zenoh that all of those Puts have succeeded as a group. This may require an additional ID per Put when batching; upon successful write, the IDs of the batch would have to be signaled back to zenoh.

@JEnoch, what do we think about adding a `Pending` and `Completed` system? It adds a little bit of complication, and I'd like to make sure we don't break alignment due to batching.
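For concreteness, the Pending/Completed idea could be sketched as below. This is purely illustrative: `InsertionStatus`, `BatchId`, and `on_batch_written` are hypothetical names, not part of the actual zenoh backend API being discussed.

```rust
// Hypothetical batch identifier; in practice this would need to be
// communicated back to zenoh so alignment can track durability.
type BatchId = u64;

// Sketch of an extended insertion result: a put queued into a batch is
// Pending until the whole batched PUT to InfluxDB succeeds, at which
// point the batch is reported Completed as a group.
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum InsertionStatus {
    Inserted,          // written immediately (non-batched path)
    Pending(BatchId),  // queued in a batch, outcome not yet known
    Completed(BatchId),// the whole batch was written successfully
}

// Called once the batched PUT returns successfully: the batch's ID is
// signaled back so every put in it can be marked durable.
fn on_batch_written(id: BatchId) -> InsertionStatus {
    InsertionStatus::Completed(id)
}
```

The open question is how the storage would deliver the later `Completed` signal to zenoh, since the current trait returns a single result per put.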
v1/src/lib.rs (outdated)

```rust
    Some(tx)
} else {
    None
};
```
Suggestion / nit: declare `let mut put_batch_tx = None;` above line 322 and just overwrite it with a `Some()` value on line 410.
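The suggested pattern, in a self-contained sketch (the function and its parameter are invented for illustration; the line numbers above refer to v1/src/lib.rs in the PR):

```rust
// Declare the Option up front, then overwrite it in the branch where
// batching is enabled, instead of building the Option with if/else arms.
fn make_tx(batching_enabled: bool) -> Option<std::sync::mpsc::Sender<u32>> {
    let mut put_batch_tx = None;
    if batching_enabled {
        // The receiver would be handed to the batching task in real code;
        // it is dropped here because this is only a shape illustration.
        let (tx, _rx) = std::sync::mpsc::channel();
        put_batch_tx = Some(tx);
    }
    put_batch_tx
}
```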
Hiya, any chance of this making it into the 1.0.0 release?

After some discussion, the proper support of batching would require some work on the backend API. So I'm afraid some more work is required in Zenoh to properly support this use case.

OK, but please be aware that as it stands writing to the InfluxDB backend is just too slow - it falls behind here even with ~100Hz updates, which is hardly excessive.
This adds the ability to send multiple data points in a single PUT to InfluxDB, giving a huge performance gain for high frequency data.

Volume configuration:

- `put_batch_size` - maximum number of data points per PUT request
- `put_batch_timeout_ms` - milliseconds before sending a batch if not full

If `put_batch_size` is not set it reverts to the original behaviour of sending each data point in its own PUT request.
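The size-or-timeout flushing behaviour described above can be sketched with a plain std channel (the PR itself uses a Tokio task; this synchronous version with invented names is just to show the control flow):

```rust
use std::sync::mpsc;
use std::time::Duration;

// Collect incoming points into batches of at most `batch_size`, flushing
// a partial batch if `timeout` elapses with no new point, and flushing
// whatever remains when the sender side is dropped.
fn run_batcher(
    rx: mpsc::Receiver<String>,
    batch_size: usize,
    timeout: Duration,
    mut flush: impl FnMut(Vec<String>),
) {
    let mut batch = Vec::new();
    loop {
        match rx.recv_timeout(timeout) {
            Ok(point) => {
                batch.push(point);
                if batch.len() >= batch_size {
                    // Batch is full: send it as one PUT.
                    flush(std::mem::take(&mut batch));
                }
            }
            Err(mpsc::RecvTimeoutError::Timeout) => {
                // put_batch_timeout_ms elapsed: send the partial batch.
                if !batch.is_empty() {
                    flush(std::mem::take(&mut batch));
                }
            }
            Err(mpsc::RecvTimeoutError::Disconnected) => {
                if !batch.is_empty() {
                    flush(batch);
                }
                break;
            }
        }
    }
}
```

Each `flush` call corresponds to one PUT request to InfluxDB, so at 100 Hz with `put_batch_size: 100` the backend would issue roughly one request per second instead of one hundred.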