# iqtlabs_image_inference.block.yml
---
id: iqtlabs_image_inference
label: image_inference
category: '[iqtlabs]'
flags: [python, cpp]
documentation: |-
  This block accepts dB values from retune_fft, reformats them
  as images, and runs inference on them via Torchserve. The inference
  results are used to annotate the images with bounding boxes
  and RSSI values (the bounding boxes are used to select the original
  dB power values within the boxes).

  Torchserve inference is done in a background thread, to avoid
  blocking the flowgraph. Torchserve batching is currently not done;
  this trades efficiency for lower inference latency (generally, the
  inference response time is much less than the scanner dwell time).

  input:
    vector of floats, representing FFT dB power values,
    tagged with center frequency.

  output:
    JSON inference results.

  parameters:
    tag: received frequency tag name.
    vlen: length of the FFT dB vector.
    x: size in pixels of the image to produce (x axis).
    y: size in pixels of the image to produce (y axis).
    image_dir: directory to accumulate image results and logs under.
    convert_alpha: alpha value when converting from dB power to an image.
    norm_alpha: alpha value passed to cv::normalize().
    norm_beta: beta value passed to cv::normalize().
    norm_type: type of normalization for cv::normalize().
    colormap: type of colormap for cv::applyColorMap().
    interpolation: type of interpolation for cv::resize().
    flip: if -1, 0, or 1, type of transform to apply via cv::flip().
    min_peak_points: only run inference on buckets with at least this dB power.
    model_names: if not empty, comma-separated list of model names.
    model_server: if not empty, address of the Torchserve instance to send
      inference requests to.
    confidence: only output inference results with confidence above this value.
    max_rows: if > 0, use at most N dB input vectors per image.
    rotate_secs: if > 0, use a new epoch-timestamped directory every N seconds.
    n_image: if > 0, only log 1/n_image images.
    n_inference: if > 0, only run inference on 1/n_inference images.
    samp_rate: sample rate.
    text_color: 3-tuple of BGR values for the annotation text color.
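# The dB-to-image conversion described above can be pictured with plain OpenCV.
# This is a conceptual sketch only (not the block's C++ implementation); the
# array shape, vlen=2048, and x=y=640 are illustrative values, not defaults of
# this block.
#
#   import cv2
#   import numpy as np
#
#   # one image worth of FFT rows: several vectors of vlen dB values each
#   db_rows = np.random.uniform(-80, -20, (512, 2048)).astype(np.float32)
#   # norm_alpha/norm_beta/norm_type: scale dB power into [0, 1]
#   norm = cv2.normalize(db_rows, None, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX)
#   # convert_alpha: map [0, 1] floats to 8-bit pixel values
#   img8 = cv2.convertScaleAbs(norm, alpha=255)
#   # colormap (20 == cv2.COLORMAP_TURBO) and interpolation (2 == cv2.INTER_CUBIC)
#   color = cv2.applyColorMap(img8, cv2.COLORMAP_TURBO)
#   image = cv2.resize(color, (640, 640), interpolation=cv2.INTER_CUBIC)
#   # flip: -1, 0, or 1 applies cv2.flip(); 99 leaves the image untouched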
templates:
  imports: from gnuradio import iqtlabs
  make: >
    iqtlabs.image_inference(${tag}, ${vlen}, ${x}, ${y}, ${image_dir},
    ${convert_alpha}, ${norm_alpha}, ${norm_beta}, ${norm_type}, ${colormap},
    ${interpolation}, ${flip}, ${min_peak_points}, ${model_server},
    ${model_names}, ${confidence}, ${max_rows}, ${rotate_secs}, ${n_image},
    ${n_inference}, ${samp_rate}, ${text_color})

cpp_templates:
  includes: ['#include <gnuradio/iqtlabs/image_inference.h>']
  declarations: 'gr::iqtlabs::image_inference::sptr ${id};'
  make: >
    this->${id} = gr::iqtlabs::image_inference::make(${tag}, ${vlen},
    ${x}, ${y}, ${image_dir}, ${convert_alpha}, ${norm_alpha}, ${norm_beta},
    ${norm_type}, ${colormap}, ${interpolation}, ${flip}, ${min_peak_points},
    ${model_server}, ${model_names}, ${confidence}, ${max_rows},
    ${rotate_secs}, ${n_image}, ${n_inference}, ${samp_rate}, ${text_color});
  link: ['libgnuradio-iqtlabs.so']
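# For orientation, GRC substitutes the parameter values into the Python make
# template above; the generated code then looks roughly like the call below.
# All concrete values here are illustrative, not defaults shipped with this
# block.
#
#   from gnuradio import iqtlabs
#
#   image_inference_0 = iqtlabs.image_inference(
#       'rx_freq', 2048, 640, 640, '/tmp/images',
#       255, 0, 1, 32, 20, 2, 99, -50,
#       'localhost:8080', 'mini2_snr', 0.5,
#       512, 0, 0, 0, int(20.48e6), '255,255,255')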
parameters:
- id: tag
  dtype: string
  default: 'rx_freq'
- id: vlen
  dtype: int
- id: x
  dtype: int
- id: y
  dtype: int
- id: image_dir
  dtype: str
- id: convert_alpha
  dtype: float
  default: 255
- id: norm_alpha
  dtype: float
  default: 0
- id: norm_beta
  dtype: float
  default: 1
- id: norm_type
  dtype: int
  default: 32  # cv::NORM_MINMAX
- id: colormap
  dtype: int
  default: 20  # cv::COLORMAP_TURBO
- id: interpolation
  dtype: int
  default: 2  # cv::INTER_CUBIC
- id: flip
  dtype: int
  default: 99  # -1, 0, or 1 as in cv::flip(); 99 means no flip
- id: min_peak_points
  dtype: float
- id: model_names
  dtype: str
- id: model_server
  dtype: str
- id: confidence
  dtype: float
- id: max_rows
  dtype: int
- id: rotate_secs
  dtype: int
- id: n_image
  dtype: int
- id: n_inference
  dtype: int
- id: samp_rate
  dtype: int
- id: text_color
  dtype: str
asserts:
- ${ tag != "" }
- ${ vlen > 0 }
- ${ not model_server or (model_server and model_names) }
inputs:
- label: FFT power
  domain: stream
  dtype: float
  vlen: ${ vlen }

outputs:
- id: inference
  domain: message

file_format: 1
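# A minimal flowgraph sketch, assuming an already-configured retune_fft block
# and the illustrative image_inference instance from the comment above. The
# stream input carries the tagged FFT dB vectors; JSON inference results come
# out of the 'inference' message port.
#
#   from gnuradio import blocks, gr
#
#   tb = gr.top_block()
#   retune_fft_0 = ...  # iqtlabs.retune_fft, configured elsewhere
#   msg_debug_0 = blocks.message_debug()
#
#   tb.connect(retune_fft_0, image_inference_0)
#   tb.msg_connect((image_inference_0, 'inference'), (msg_debug_0, 'print'))
#   tb.run()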