Zenoh-Flow provides a zenoh-based dataflow programming framework for computations that span from the cloud to the device.
Zenoh-Flow allows users to declare a dataflow graph via a YAML file, and to use tags to express location affinity and requirements for the operators that make up the graph. When deploying the dataflow graph, Zenoh-Flow automatically handles distribution by linking remote operators through zenoh.
A dataflow is composed of a set of nodes: sources, which produce data; operators, which compute over the data; and sinks, which consume the resulting data. These nodes are dynamically loaded at runtime.
Remote sources, operators, and sinks leverage zenoh to communicate in a transparent manner. In other terms, the dataflow graph retains location transparency and can be deployed in different ways depending on specific needs.
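As a mental model only (this is not the Zenoh-Flow API), the source → operator → sink pipeline can be sketched with plain Rust channels; the node names and the doubling operator below are purely illustrative, and in Zenoh-Flow the links between remote nodes are zenoh resources rather than in-process channels:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Source -> Operator link.
    let (src_tx, op_rx) = mpsc::channel();
    // Operator -> Sink link.
    let (op_tx, sink_rx) = mpsc::channel();

    // Source: produces data.
    let source = thread::spawn(move || {
        for i in 1..=5 {
            src_tx.send(i).unwrap();
        }
        // Dropping src_tx closes the channel, ending the operator loop.
    });

    // Operator: computes over the data (here, doubles each value).
    let operator = thread::spawn(move || {
        for v in op_rx {
            op_tx.send(v * 2).unwrap();
        }
        // Dropping op_tx closes the channel, ending the sink loop.
    });

    // Sink: consumes the resulting data.
    let results: Vec<i32> = sink_rx.iter().collect();
    source.join().unwrap();
    operator.join().unwrap();
    println!("{:?}", results); // [2, 4, 6, 8, 10]
}
```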
Zenoh-Flow provides several working examples that illustrate how to define operators, sources and sinks, as well as how to declaratively define the dataflow graph by means of a YAML file.
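A YAML flow descriptor has roughly the following shape. The node ids, library paths, port names and link format below are illustrative assumptions, not the authoritative schema; consult the graphs shipped with the examples for the exact format expected by your Zenoh-Flow version.

```yaml
# Illustrative sketch of a flow descriptor; field names are assumptions.
flow: my_pipeline
sources:
  - id: MySource
    uri: file://./target/release/libmy_source.so
operators:
  - id: MyOperator
    uri: file://./target/release/libmy_operator.so
sinks:
  - id: MySink
    uri: file://./target/release/libmy_sink.so
links:
  - from: MySource.Data
    to: MyOperator.Data
  - from: MyOperator.Data
    to: MySink.Data
```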
Install Cargo and Rust. Zenoh-Flow can be successfully compiled with Rust stable (>= 1.5.1), so no special configuration is required, except for certain examples.
To build Zenoh-Flow, just type the following command after having followed the previous instructions:
$ cargo build --release
To build the Zenoh-Flow documentation, type the following command:
$ cargo doc
The HTML documentation can then be found under `./target/doc/zenoh_flow/index.html`.
Assuming that the previous steps completed successfully, you'll find the Zenoh-Flow runtime under `target/release/runtime`. This executable expects the following arguments:
- the path of the dataflow graph to execute: `--graph-file zenoh-flow-examples/graphs/fizz_buzz_pipeline.yaml`,
- a name for the runtime: `--runtime foo`.
The graph file describes the different nodes composing the dataflow. The name of the runtime, while mandatory, is used to "deploy" the graph on different "runtime instances" (see the related examples).
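Putting the two arguments together, a full invocation looks like this (assuming the release build from the previous section and the example graph shipped in `zenoh-flow-examples`):

```
$ ./target/release/runtime \
    --graph-file zenoh-flow-examples/graphs/fizz_buzz_pipeline.yaml \
    --runtime foo
```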
Assuming that the build steps completed successfully, you'll be able to use the `cargo zenoh-flow` subcommand to create a boilerplate for your nodes.
First, let's ensure that the `cargo-zenoh-flow` binary is in the Cargo path:
$ ln -s $(pwd)/target/release/cargo-zenoh-flow ~/.cargo/bin/
Then you can create your own node with:
$ cd ~
$ cargo zenoh-flow new myoperator
By default, `cargo zenoh-flow` generates the template for an operator. In order to create a source or a sink, add either `--kind source` or `--kind sink`.
The `Cargo.toml` will contain metadata (e.g. the inputs/outputs) used during the build process to generate the descriptor.
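As an illustration only, the metadata section looks roughly like the sketch below. The section and key names are assumptions, not the authoritative format: keep whatever `cargo zenoh-flow new` actually generated in your `Cargo.toml` and edit those values.

```
# Hypothetical sketch: section and key names are assumptions,
# keep what `cargo zenoh-flow new` generated for your node.
[package.metadata.zenohflow]
id = "myoperator"
kind = "operator"
inputs = [ { id = "Data", type = "bytes" } ]
outputs = [ { id = "Data", type = "bytes" } ]
```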
More information about `cargo zenoh-flow` can be obtained with `cargo zenoh-flow --help`.
You can now modify the `src/lib.rs` file with your business logic and update the `Cargo.toml` according to the inputs/outputs that you need.
Once you are done, you can build it:
$ cargo zenoh-flow build
It will print the path of the descriptor for the new node, which can be used inside a flow descriptor.
Examples can be found in our example repository.