#################################################################################################
# WINDFLOW ACCEPTED SIGNATURES #
#################################################################################################
This file lists all the signatures accepted when creating WindFlow operators. If you provide
a wrong signature while building an operator (through its builder), compilation fails with a
specific error message (produced by static asserts).
For basic and window-based operators, the functional logic, the key extractor and the
closing logic can be provided as functions, lambdas, or functor objects exposing an
operator() with the right signature.
For GPU operators, the functional logic must be provided as a __host__ __device__ lambda or
as a functor object exposing a __device__ operator() method with the right signature. The
key extractor must be provided as a __host__ __device__ lambda or as a functor object
exposing a __host__ __device__ operator() method with the right signature.
SOURCE
------
bool(Source_Shipper<tuple_t> &);
bool(Source_Shipper<tuple_t> &, RuntimeContext &);
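As an illustration, the first Source signature can be matched by a functor object. The sketch below uses a hypothetical tuple_t and a minimal stand-in for Source_Shipper (only a push() call is mimicked; the real class is provided by the library), and it assumes the returned bool tells the runtime whether the logic should be invoked again — an assumption, since this file does not state the semantics:

```cpp
#include <cassert>
#include <vector>

// Hypothetical tuple type for illustration (user-defined in real code)
struct tuple_t {
    int key;
    int value;
};

// Stand-in for WindFlow's Source_Shipper<tuple_t>: here it just collects
// the emitted tuples so the logic can be exercised without the library
template<typename T>
struct Source_Shipper {
    std::vector<T> out;
    void push(T &&t) { out.push_back(t); }
};

// Functor matching bool(Source_Shipper<tuple_t> &); the bool is assumed
// to signal whether the source still has tuples to emit
struct CounterSource {
    int next = 0;
    bool operator()(Source_Shipper<tuple_t> &shipper) {
        shipper.push(tuple_t{0, next});
        return ++next < 3; // stop after three tuples
    }
};
```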
KAFKA_SOURCE
------------
bool(RdKafka::Message &, Source_Shipper<result_t> &);
bool(RdKafka::Message &, Source_Shipper<result_t> &, RuntimeContext &);
FILTER
------
bool(tuple_t &);
bool(tuple_t &, RuntimeContext &);
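For instance, the first Filter signature can be satisfied by a functor object. The sketch below uses a hypothetical tuple_t (the real tuple type is user-defined) and can be exercised without the library:

```cpp
#include <cassert>

// Hypothetical tuple type for illustration (user-defined in real code)
struct tuple_t {
    int key;
    int value;
};

// Functor exposing operator() with the Filter signature bool(tuple_t &):
// tuples for which it returns false are dropped from the stream
struct PositiveFilter {
    bool operator()(tuple_t &t) {
        return t.value > 0; // keep only tuples with a positive value
    }
};
```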
FILTER_GPU
----------
[__host__] __device__ bool(tuple_t &);
[__host__] __device__ bool(tuple_t &, state_t &);
MAP
---
void(tuple_t &);
void(tuple_t &, RuntimeContext &);
result_t(const tuple_t &);
result_t(const tuple_t &, RuntimeContext &);
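The Map signatures come in an in-place flavor (void return, tuple modified directly) and a transforming flavor (a new result is returned). Both are sketched below with hypothetical tuple_t and result_t types:

```cpp
#include <cassert>
#include <string>

// Hypothetical tuple and result types (user-defined in real code)
struct tuple_t  { int key; int value; };
struct result_t { int key; std::string text; };

// In-place variant, matching void(tuple_t &)
struct Doubler {
    void operator()(tuple_t &t) { t.value *= 2; }
};

// Transforming variant, matching result_t(const tuple_t &)
struct ToText {
    result_t operator()(const tuple_t &t) {
        return result_t{t.key, std::to_string(t.value)};
    }
};
```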
MAP_GPU
-------
[__host__] __device__ void(tuple_t &);
[__host__] __device__ void(tuple_t &, state_t &);
FLATMAP
-------
void(const tuple_t &, Shipper<result_t> &);
void(const tuple_t &, Shipper<result_t> &, RuntimeContext &);
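A FlatMap logic can emit zero or more results per input tuple through the shipper. The sketch below replaces WindFlow's Shipper with a stand-in that merely collects what is pushed, so the logic can be tested in isolation; tuple_t and result_t are hypothetical:

```cpp
#include <cassert>
#include <vector>

// Hypothetical tuple and result types (user-defined in real code)
struct tuple_t  { int key; int value; };
struct result_t { int key; int value; };

// Stand-in for WindFlow's Shipper<result_t>: the real class is provided
// by the library; here it only collects what the logic emits
template<typename T>
struct Shipper {
    std::vector<T> out;
    void push(T &&r) { out.push_back(r); }
};

// FlatMap logic matching void(const tuple_t &, Shipper<result_t> &):
// zero or more results can be emitted per input tuple
struct Duplicator {
    void operator()(const tuple_t &t, Shipper<result_t> &shipper) {
        shipper.push(result_t{t.key, t.value});
        shipper.push(result_t{t.key, t.value}); // emit each input twice
    }
};
```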
REDUCE
------
void(const tuple_t &, result_t &);
void(const tuple_t &, result_t &, RuntimeContext &);
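A Reduce logic folds each incoming tuple into a running result. A minimal sketch, with hypothetical tuple_t and result_t types:

```cpp
#include <cassert>

// Hypothetical tuple and result types (user-defined in real code)
struct tuple_t  { int key; int value; };
struct result_t { int key; int sum; };

// Reduce logic matching void(const tuple_t &, result_t &): each tuple
// is folded into the running per-key result
struct SumReduce {
    void operator()(const tuple_t &t, result_t &r) {
        r.key = t.key;
        r.sum += t.value;
    }
};
```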
REDUCE_GPU
----------
[__host__] __device__ tuple_t(const tuple_t &, const tuple_t &);
KEYED_WINDOWS, PARALLEL_WINDOWS
-------------------------------
void(const Iterable<tuple_t> &, result_t &);
void(const Iterable<tuple_t> &, result_t &, RuntimeContext &);
void(const tuple_t &, result_t &);
void(const tuple_t &, result_t &, RuntimeContext &);
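The first two signatures are non-incremental (the logic sees the whole window through an Iterable), the last two are incremental (the logic is called once per tuple). Both flavors are sketched below; Iterable is replaced by a vector-backed stand-in, and tuple_t and result_t are hypothetical:

```cpp
#include <cassert>
#include <vector>

// Hypothetical tuple and result types (user-defined in real code)
struct tuple_t  { int key; int value; };
struct result_t { int key; int sum; };

// Stand-in for WindFlow's Iterable<tuple_t>: a read-only view over the
// tuples of one window (here simply backed by a std::vector)
template<typename T>
struct Iterable {
    std::vector<T> items;
    auto begin() const { return items.begin(); }
    auto end() const { return items.end(); }
};

// Non-incremental logic: the whole window content is visible at once
struct WinSum {
    void operator()(const Iterable<tuple_t> &win, result_t &r) {
        r.sum = 0;
        for (const tuple_t &t : win) r.sum += t.value;
    }
};

// Incremental logic: invoked once per tuple as the window fills
struct IncSum {
    void operator()(const tuple_t &t, result_t &r) { r.sum += t.value; }
};
```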
PANED_WINDOWS
-------------
The corresponding builder needs two parameters (for the PLQ and WLQ logics) with the following accepted signatures:
* PLQ
void(const Iterable<tuple_t> &, tuple_t &);
void(const Iterable<tuple_t> &, tuple_t &, RuntimeContext &);
void(const tuple_t &, tuple_t &);
void(const tuple_t &, tuple_t &, RuntimeContext &);
* WLQ
void(const Iterable<tuple_t> &, result_t &);
void(const Iterable<tuple_t> &, result_t &, RuntimeContext &);
void(const tuple_t &, result_t &);
void(const tuple_t &, result_t &, RuntimeContext &);
MAPREDUCE_WINDOWS
-----------------
The corresponding builder needs two parameters (for the MAP and REDUCE logics) with the following accepted signatures:
* MAP
void(const Iterable<tuple_t> &, tuple_t &);
void(const Iterable<tuple_t> &, tuple_t &, RuntimeContext &);
void(const tuple_t &, tuple_t &);
void(const tuple_t &, tuple_t &, RuntimeContext &);
* REDUCE
void(const Iterable<tuple_t> &, result_t &);
void(const Iterable<tuple_t> &, result_t &, RuntimeContext &);
void(const tuple_t &, result_t &);
void(const tuple_t &, result_t &, RuntimeContext &);
FFAT_AGGREGATOR
---------------
The corresponding builder needs two parameters (for the lift and combine logics) with the following accepted signatures:
* Lift
void(const tuple_t &, result_t &);
void(const tuple_t &, result_t &, RuntimeContext &);
* Combine
void(const result_t &, const result_t &, result_t &);
void(const result_t &, const result_t &, result_t &, RuntimeContext &);
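The lift turns one tuple into a partial aggregate and the combine merges two partial aggregates; for flat-FAT-style aggregation the combine is expected to be associative. A minimal sketch with hypothetical tuple_t and result_t types:

```cpp
#include <cassert>

// Hypothetical tuple and result types (user-defined in real code)
struct tuple_t  { int key; int value; };
struct result_t { int key; int sum; };

// Lift logic: turns a single tuple into a partial aggregate
struct Lift {
    void operator()(const tuple_t &t, result_t &r) {
        r.key = t.key;
        r.sum = t.value;
    }
};

// Combine logic: merges two partial aggregates (assumed associative)
struct Combine {
    void operator()(const result_t &a, const result_t &b, result_t &out) {
        out.key = a.key;
        out.sum = a.sum + b.sum;
    }
};
```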
FFAT_AGGREGATOR_GPU
-------------------
The corresponding builder needs two parameters (for the lift and combine logics) with the following accepted signatures:
* Lift
void(const tuple_t &, result_t &);
* Combine
__host__ __device__ void(const result_t &, const result_t &, result_t &);
SINK
----
void(std::optional<tuple_t> &);
void(std::optional<tuple_t> &, RuntimeContext &);
void(std::optional<std::reference_wrapper<tuple_t>>);
void(std::optional<std::reference_wrapper<tuple_t>>, RuntimeContext &);
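A Sink logic matching the first signature is sketched below with a hypothetical tuple_t; it assumes (as is common for this kind of interface, though not stated in this file) that an empty optional marks the end of the stream:

```cpp
#include <cassert>
#include <optional>
#include <vector>

// Hypothetical tuple type for illustration (user-defined in real code)
struct tuple_t { int key; int value; };

// Sink logic matching void(std::optional<tuple_t> &); an empty optional
// is assumed here to signal that the stream has terminated
struct CollectSink {
    std::vector<int> seen;
    bool eos = false;
    void operator()(std::optional<tuple_t> &t) {
        if (t) seen.push_back(t->value);
        else eos = true; // end of stream
    }
};
```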
CLOSING LOGIC
-------------
void(RuntimeContext &);
KEY EXTRACTOR
-------------
key_t(const tuple_t &); // for all the CPU operators
__host__ __device__ key_t(const tuple_t &); // only for GPU operators
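A key extractor simply projects the key out of a tuple. A CPU-side sketch with a hypothetical tuple_t and key_t = int (for GPU operators the same operator() would additionally carry the __host__ __device__ qualifiers):

```cpp
#include <cassert>

// Hypothetical tuple type for illustration (user-defined in real code)
struct tuple_t { int key; int value; };

// Key extractor matching key_t(const tuple_t &), here with key_t = int
struct KeyExtractor {
    int operator()(const tuple_t &t) const { return t.key; }
};
```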
SPLITTING LOGIC
---------------
integral_t(const tuple_t &);
std::vector<integral_t>(const tuple_t &);
integral_t(tuple_t &);
std::vector<integral_t>(tuple_t &);
Where integral_t is any C++ integral type (e.g., short, int, long, and so forth).
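A splitting logic returning a vector can route one tuple to several destination branches at once. The sketch below is a hypothetical example with integral_t = int, sending even values to both branches of a two-way split:

```cpp
#include <cassert>
#include <vector>

// Hypothetical tuple type for illustration (user-defined in real code)
struct tuple_t { int key; int value; };

// Splitting logic matching std::vector<integral_t>(const tuple_t &) with
// integral_t = int: each returned value selects one destination branch
// of the split, so a tuple can be routed to several branches at once
struct ParitySplit {
    std::vector<int> operator()(const tuple_t &t) {
        if (t.value % 2 == 0) return {0, 1}; // even values: both branches
        return {0};                          // odd values: branch 0 only
    }
};
```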