diff --git a/.dockerignore b/.dockerignore new file mode 100644 index 0000000..5f661f4 --- /dev/null +++ b/.dockerignore @@ -0,0 +1,6 @@ +# Avoid issues with container-specific assignment of user permissions +samples/webrtc/grafana/grafana-storage/**/* +samples/webrtc/grafana/grafana-storage +samples/webrtc/webserver/www/js-client/**/* +samples/webrtc/webserver/www/js-client + diff --git a/README.md b/README.md index a931e8b..a166b1c 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,4 @@ -# Intel(R) Deep Learning Streamer Pipeline Server +# Intel® Deep Learning Streamer Pipeline Server | [Getting Started](#getting-started) | [Request Customizations](#request-customizations) @@ -6,15 +6,12 @@ | [Further Reading](#further-reading) | [Known Issues](#known-issues) | -Intel(R) Deep Learning Streamer Pipeline Server (Intel(R) DL Streamer Pipeline Server) is a python package and microservice for +Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Server is a python package and microservice for deploying optimized media analytics pipelines. It supports pipelines defined in [GStreamer](https://gstreamer.freedesktop.org/documentation/?gi-language=c)* or [FFmpeg](https://ffmpeg.org/)* and provides APIs to discover, start, -stop, customize and monitor pipeline execution. Video Analytics -Serving is based on [OpenVINO Toolkit DL -Streamer](https://github.com/opencv/gst-video-analytics) and [FFmpeg -Video Analytics](https://github.com/VCDP/FFmpeg-patch). +stop, customize and monitor pipeline execution. Intel® DL Streamer Pipeline Server is based on [Intel® Deep Learning Streamer Pipeline Framework](https://github.com/dlstreamer/dlstreamer) and [FFmpeg Video Analytics](https://github.com/VCDP/FFmpeg-patch). ## Features Include @@ -23,12 +20,12 @@ Video Analytics](https://github.com/VCDP/FFmpeg-patch). | **Customizable Media Analytics Containers** | Scripts and dockerfiles to build and run container images with the required dependencies for hardware optimized media analytics pipelines. | | **No-Code Pipeline Definitions and Templates** | JSON based definition files, a flexible way for developers to define and parameterize pipelines while abstracting the low level details from their users. | | **Deep Learning Model Integration** | A simple way to package and reference [OpenVINO](https://software.intel.com/en-us/openvino-toolkit) based models in pipeline definitions. The precision of a model can be auto-selected at runtime based on the chosen inference device. | -| **Intel(R) DL Streamer Pipeline Server Python API** | A python module to discover, start, stop, customize and monitor pipelines based on their no-code definitions. | -| **Intel(R) DL Streamer Pipeline Server Microservice**                                                                      | A RESTful microservice providing endpoints and APIs matching the functionality of the python module. | +| **Intel® DL Streamer Pipeline Server Python API** | A python module to discover, start, stop, customize and monitor pipelines based on their no-code definitions. | +| **Intel® DL Streamer Pipeline Server Microservice**                                                                      | A RESTful microservice providing endpoints and APIs matching the functionality of the python module. | -> **IMPORTANT:** Intel(R) DL Streamer Pipeline Server is provided as a _sample_. It +> **IMPORTANT:** Intel® DL Streamer Pipeline Server is provided as a _sample_. It > is not intended to be deployed into production environments without -> modification. 
Developers deploying Intel(R) DL Streamer Pipeline Server should +> modification. Developers deploying Intel® DL Streamer Pipeline Server should > review it against their production requirements. The sample microservice includes five categories of media analytics pipelines. Click on the links below to find out more about each of them. @@ -47,12 +44,12 @@ The sample microservice includes five categories of media analytics pipelines. C | | | |---------------------------------------------|------------------| -| **Docker** | Intel(R) DL Streamer Pipeline Server requires Docker for its build, development, and runtime environments. Please install the latest for your platform. [Docker](https://docs.docker.com/install). | -| **bash** | Intel(R) DL Streamer Pipeline Server's build and run scripts require bash and have been tested on systems using versions greater than or equal to: `GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu)`. Most users shouldn't need to update their version but if you run into issues please install the latest for your platform. Instructions for macOS®* users [here](docs/installing_bash_macos.md). | +| **Docker** | Intel® DL Streamer Pipeline Server requires Docker for its build, development, and runtime environments. Please install the latest for your platform. [Docker](https://docs.docker.com/install). | +| **bash** | Intel® DL Streamer Pipeline Server's build and run scripts require bash and have been tested on systems using versions greater than or equal to: `GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu)`. Most users shouldn't need to update their version but if you run into issues please install the latest for your platform. Instructions for macOS®* users [here](docs/installing_bash_macos.md). | ## Supported Hardware -Refer to [OpenVINO System Requirements](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/system-requirements.html) for supported development and target runtime platforms and the [OpenVINO Container Release Notes](https://hub.docker.com/r/openvino/ubuntu20_data_runtime) for details on providing access to accelerator devices. +Refer to [Intel® DL Streamer Hardware Requirements](https://dlstreamer.github.io/get_started/hardware_requirements.html) for supported development and target runtime platforms and the [Intel® DL Streamer Install Guide](https://dlstreamer.github.io/get_started/install/install_guide_ubuntu.html) for details on providing access to accelerator devices. ## Building the Microservice @@ -111,16 +108,16 @@ Expected output: ## Running a Pipeline -The Pipeline Server includes a sample client [vaclient](./vaclient/README.md) that can connect to the service and make requests. We will use vaclient to explain how to use the key microservice features. -> **Note:** Any RESTful tool or library can be used to send requests to the Pipeline Server service. We are using vaclient as it simplifies interaction with the service. +The Pipeline Server includes a sample client [pipeline_client](./client/README.md) that can connect to the service and make requests. We will use pipeline_client to explain how to use the key microservice features. +> **Note:** Any RESTful tool or library can be used to send requests to the Pipeline Server service. We are using pipeline_client as it simplifies interaction with the service. > **Note:** The microservice has to be up and running before the sample client is invoked. -Before running a pipeline, we need to know what pipelines are available. 
We do this using vaclient's `list-pipeline` command. +Before running a pipeline, we need to know what pipelines are available. We do this using pipeline_client's `list-pipeline` command. In new shell run the following command: ```bash -./vaclient/vaclient.sh list-pipelines +./client/pipeline_client.sh list-pipelines ``` ```text @@ -139,12 +136,12 @@ In new shell run the following command: Pipelines are displayed as a name/version tuple. The name reflects the action and version supplies more details of that action. Let's go with `object_detection/person_vehicle_bike`. Now we need to choose a media source. We recommend the [IoT Devkit sample videos](https://github.com/intel-iot-devkit/sample-videos) to get started. As the pipeline version indicates support for detecting people, person-bicycle-car-detection.mp4 would be a good choice. > **Note:** Make sure to include `raw=true` parameter in the Github URL as shown in our examples. Failure to do so will result in a pipeline execution error. -vaclient offers a `run` command that takes two additional arguments the `pipeline` and the `uri` for the media source. The `run` command displays inference results until either the media is exhausted or `CTRL+C` is pressed. +pipeline_client offers a `run` command that takes two additional arguments the `pipeline` and the `uri` for the media source. The `run` command displays inference results until either the media is exhausted or `CTRL+C` is pressed. Inference result bounding boxes are displayed in the format `label (confidence) [top left width height] {meta-data}` provided applicable data is present. At the end of the pipeline run, the average fps is shown. ```bash -./vaclient/vaclient.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true +./client/pipeline_client.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true ``` ```text @@ -174,12 +171,12 @@ The file path is specified in the `destination` section of the REST request and ### Queued, Running and Completed -The vaclient `run` command starts the pipeline. The underlying REST request returns a `pipeline instance` which is used to query the state of the pipeline. -All being well it will go into `QUEUED` then `RUNNING` state. We can interrogate the pipeline status by using the vaclient `start` command that kicks off the pipeline like `run` and then exits displaying the `pipeline instance` (a [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier)) which is used by the `status` command to view pipeline state. +The pipeline_client `run` command starts the pipeline. The underlying REST request returns a `pipeline instance` which is used to query the state of the pipeline. +All being well it will go into `QUEUED` then `RUNNING` state. We can interrogate the pipeline status by using the pipeline_client `start` command that kicks off the pipeline like `run` and then exits displaying the `pipeline instance` (a [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier)) which is used by the `status` command to view pipeline state. > **NOTE:** The pipeline instance value depends on the number of pipelines started while the server is running so may differ from the value shown in the following examples. 
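The `start` and `status` commands used below are thin wrappers over the service's REST API. For reference, the same start-and-poll flow can be sketched directly in Python — a minimal sketch, assuming the `requests` package, the default `http://localhost:8080` address, and that starting a pipeline is a `POST` to `/pipelines/<name>/<version>` (the status URL matches the one shown by `--show-request` later in this document):

```python
import time
import requests  # assumes the 'requests' package is installed

SERVER = "http://localhost:8080"  # default Pipeline Server REST address
PIPELINE = "object_detection/person_vehicle_bike"
URI = ("https://github.com/intel-iot-devkit/sample-videos/blob/master/"
       "person-bicycle-car-detection.mp4?raw=true")

# Start the pipeline; the response body is the pipeline instance (a UUID).
response = requests.post(f"{SERVER}/pipelines/{PIPELINE}",
                         json={"source": {"uri": URI, "type": "uri"}},
                         timeout=30)
instance = response.text.strip().strip('"')
print("instance =", instance)

# Poll the status endpoint until the pipeline reaches a terminal state.
while True:
    status = requests.get(f"{SERVER}/pipelines/{PIPELINE}/status/{instance}",
                          timeout=30).json()
    print(status["state"], round(status.get("avg_fps", 0)), "fps")
    if status["state"] in ("COMPLETED", "ABORTED", "ERROR"):
        break
    time.sleep(5)
```

The pipeline_client commands shown next perform this sequence for you: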
```bash -./vaclient/vaclient.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true +./client/pipeline_client.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true ``` ```text @@ -190,7 +187,7 @@ Starting pipeline object_detection/person_vehicle_bike, instance = d83502e3ef314 You will need both the pipeline tuple and `instance` id for the status command. This command will display pipeline state: ```bash -./vaclient/vaclient.sh status object_detection/person_vehicle_bike d83502e3ef314e8fbec8dc926eadd0c2 +./client/pipeline_client.sh status object_detection/person_vehicle_bike d83502e3ef314e8fbec8dc926eadd0c2 ``` ```text @@ -201,7 +198,7 @@ RUNNING (49fps) Then wait for a minute or so and try again. Pipeline will be completed. ```bash -./vaclient/vaclient.sh status object_detection/person_vehicle_bike d83502e3ef314e8fbec8dc926eadd0c2 +./client/pipeline_client.sh status object_detection/person_vehicle_bike d83502e3ef314e8fbec8dc926eadd0c2 ``` ```text @@ -215,7 +212,7 @@ If a pipeline is stopped, rather than allowed to complete, it goes into the ABOR Start the pipeline again, this time we'll stop it. ```bash -./vaclient/vaclient.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true +./client/pipeline_client.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true ``` ```text @@ -224,7 +221,7 @@ Starting pipeline object_detection/person_vehicle_bike, instance = 8ad2c85af4bd4 ``` ```bash -./vaclient/vaclient.sh status object_detection/person_vehicle_bike 8ad2c85a-f4bd473e8a693aff562be316 +./client/pipeline_client.sh status object_detection/person_vehicle_bike 8ad2c85a-f4bd473e8a693aff562be316 ``` ```text @@ -233,7 +230,7 @@ RUNNING (50fps) ``` ```bash -./vaclient/vaclient.sh stop object_detection/person_vehicle_bike 8ad2c85af4bd473e8a693aff562be316 +./client/pipeline_client.sh stop object_detection/person_vehicle_bike 8ad2c85af4bd473e8a693aff562be316 ``` ```text @@ -244,7 +241,7 @@ avg_fps: 24.33 ``` ```bash -./vaclient/vaclient.sh status object_detection/person_vehicle_bike 8ad2c85af4bd473e8a693aff562be316 +./client/pipeline_client.sh status object_detection/person_vehicle_bike 8ad2c85af4bd473e8a693aff562be316 ``` ```text @@ -257,7 +254,7 @@ ABORTED (47fps) The error state covers a number of outcomes such as the request could not be satisfied, a pipeline dependency was missing or an initialization problem. We can create an error condition by supplying a valid but unreachable uri. ```bash -./vaclient/vaclient.sh start object_detection/person_vehicle_bike http://bad-uri +./client/pipeline_client.sh start object_detection/person_vehicle_bike http://bad-uri ``` ```text @@ -269,7 +266,7 @@ Note that the Pipeline Server does not report an error at this stage as it goes Checking on state a few seconds later will show the error. ```bash -./vaclient/vaclient.sh status object_detection/person_vehicle_bike 2bb2d219310a4ee881faf258fbcc4355 +./client/pipeline_client.sh status object_detection/person_vehicle_bike 2bb2d219310a4ee881faf258fbcc4355 ``` ```text @@ -277,43 +274,14 @@ Checking on state a few seconds later will show the error. 
ERROR (0fps) ``` -## Real Time Streaming Protocol (RTSP) - -RTSP allows you to connect to a server and display a video stream. The Pipeline Server includes an RTSP server that creates a stream that shows the incoming video with superimposed bounding boxes and meta-data. You will need a client that connects to the server and displays the video. We recommend [vlc](https://www.videolan.org/). For this example we'll assume the Pipeline Server and vlc are running on the same host. - -First start the Pipeline Server with RTSP enabled. By default, the RTSP stream will use port 8554. -``` -docker/run.sh --enable-rtsp -v /tmp:/tmp -``` - -Then start a pipeline specifying the RTSP server endpoint path `pipeline-server`. In this case the RTSP endpoint would be `rtsp://localhost:8554/pipeline-server` - -```bash -./vaclient/vaclient.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --rtsp-path pipeline-server -``` - -If you see the error - -```text -Starting pipeline object_detection/person_vehicle_bike, instance = -Error in pipeline, please check pipeline-server log messages -``` - -You probably forgot to enable RTSP in the server. - -Now start `vlc` and from the `Media` menu select `Open Network Stream`. For URL enter `rtsp://localhost:8554/pipeline-server` and hit `Play`. -> **Note:** The pipeline must be running before you hit play otherwise VLC will not be able to connect to the RTSP server. - -> **Note:** For shorter video files you should have VLC ready to go before starting pipeline otherwise by the time you hit play the pipeline will have completed and the RTSP server will have shut down. - # Request Customizations ## Change Pipeline and Source Media -With vaclient it is easy to customize service requests. Here will use a vehicle classification pipeline `object_classification/vehicle_attributes` with the Iot Devkit video `car-detection.mp4`. Note how vaclient now displays classification metadata including type and color of vehicle. +With pipeline_client it is easy to customize service requests. Here will use a vehicle classification pipeline `object_classification/vehicle_attributes` with the Iot Devkit video `car-detection.mp4`. Note how pipeline_client now displays classification metadata including type and color of vehicle. ```bash -./vaclient/vaclient.sh run object_classification/vehicle_attributes https://github.com/intel-iot-devkit/sample-videos/blob/master/car-detection.mp4?raw=true +./client/pipeline_client.sh run object_classification/vehicle_attributes https://github.com/intel-iot-devkit/sample-videos/blob/master/car-detection.mp4?raw=true ``` ```text @@ -342,7 +310,7 @@ Timestamp 18640000000 - vehicle (0.98) [0.40, 0.00, 0.55, 0.15] {'color': 'red', 'type': 'car'} ``` -If you look at video you can see that there are some errors in classification - there are no trucks or busses in the video. However you can see that associated confidence is much lower than the correct classification of the white and red cars. +If you look at the video, you can see that there are some errors in classification - there are no trucks or buses in the video. However you can see that associated confidence is much lower than the correct classification of the white and red cars. ## Change Inference Accelerator Device @@ -350,7 +318,7 @@ Inference accelerator devices can be easily selected using the device parameter. 
but this time use the integrated GPU for detection inference by setting the `detection-device` parameter. ```bash -./vaclient/vaclient.sh run object_classification/vehicle_attributes https://github.com/intel-iot-devkit/sample-videos/blob/master/car-detection.mp4?raw=true --parameter detection-device GPU --parameter detection-model-instance-id person_vehicle_bike_detection_gpu +./client/pipeline_client.sh run object_classification/vehicle_attributes https://github.com/intel-iot-devkit/sample-videos/blob/master/car-detection.mp4?raw=true --parameter detection-device GPU --parameter detection-model-instance-id person_vehicle_bike_detection_gpu ``` ```text @@ -361,16 +329,40 @@ Starting pipeline object_classification/vehicle_attributes, instance = > **Note:** The `detection-model-instance-id` parameter caches the GPU model with a unique id. For more information read about [model instance ids](docs/defining_pipelines.md#model-persistance-in-openvino-gstreamer-elements). -vaclient's fps measurement is useful when assessing pipeline performance with different accelerators. +pipeline_client's fps measurement is useful when assessing pipeline performance with different accelerators. + +## Visualize Inference +Pipeline server allows you to optionally visualize inference results using either [Real Time Streaming Protocol (RTSP)](https://en.wikipedia.org/wiki/Real_Time_Streaming_Protocol) or [Web Real Time Communication (WebRTC)](https://webrtc.org/) by configuring the frame destination section of the request. + +RTSP is simpler to set up but you must have an RTSP player (e.g. [VLC](https://www.videolan.org/vlc/)) to render output. WebRTC setup is more complex (e.g., requires additional server-side microservices) but has the upside of using a web browser for client visualization. + +Before requesting visualization, the corresponding feature must be enabled in the server, see [Visualizing Inference Output](docs/running_pipeline_server.md#visualizing-inference-output). + +### RTSP + +RTSP allows you to connect to a server and display a video stream. The Pipeline Server includes an RTSP server that creates a stream that shows the incoming video with superimposed bounding boxes and meta-data. You will need a client that connects to the server and displays the video. We recommend [vlc](https://www.videolan.org/). + +First start the Pipeline Server with RTSP enabled. By default, the RTSP stream will use port 8554. +``` +docker/run.sh --enable-rtsp -v /tmp:/tmp +``` + +Then start pipeline and visualize as per [RTSP section in Customizing Pipeline Requests](docs/customizing_pipeline_requests.md#rtsp). + +> **Note:** The pipeline must be running before you hit play otherwise VLC will not be able to connect to the RTSP server. For shorter video files you should have VLC ready to go before starting pipeline otherwise by the time you hit play the pipeline will have completed and the RTSP server will have shut down. + +### WebRTC + +WebRTC is more complex. Follow setup instructions in the [sample](samples/webrtc). More details on fine tuning request can be found in the [WebRTC section in Customizing Pipeline Requests](docs/customizing_pipeline_requests.md#webrtc). ## View REST Request -As the previous example has shown, the vaclient application works by converting command line arguments into Pipeline Server REST requests. +As the previous example has shown, the pipeline_client application works by converting command line arguments into Pipeline Server REST requests. 
The `--show-request` option displays the REST verb, uri and body in the request. Let's repeat the previous GPU inference example, add RTSP output, and show the underlying request. ```bash -./vaclient/vaclient.sh run object_classification/vehicle_attributes https://github.com/intel-iot-devkit/sample-videos/blob/master/car-detection.mp4?raw=true --parameter detection-device GPU --rtsp-path pipeline-server --show-request +./client/pipeline_client.sh run object_classification/vehicle_attributes https://github.com/intel-iot-devkit/sample-videos/blob/master/car-detection.mp4?raw=true --parameter detection-device GPU --rtsp-path pipeline-server --show-request ``` ```text @@ -418,7 +410,7 @@ They are easier to understand when the json is pretty-printed 1. Media source: type is `uri` and the uri is the car-detection.mp4 video 2. Destinations: - * metadata: this is the inference results, they are sent to file `/tmp/results.jsonl` in `json-lines` format. vaclient parses this file to display the inference results and metadata. + * metadata: this is the inference results, they are sent to file `/tmp/results.jsonl` in `json-lines` format. pipeline_client parses this file to display the inference results and metadata. * frames: these are the watermarked frames. Here they are sent to the RTSP server and are available over the given endpoint `pipeline-server`. 3. Parameters set pipeline properties. See the [Defining Pipelines](docs/defining_pipelines.md) document for more details on parameters. @@ -467,13 +459,13 @@ The Pipeline Server makes pipeline customization and model selection a simple ta | **Documentation** | **Reference Guides** | **Tutorials** | | ------------ | ------------------ | ----------- | -| **-** [Defining Media Analytics Pipelines](docs/defining_pipelines.md)
**-** [Building Intel(R) DL Streamer Pipeline Server](docs/building_video_analytics_serving.md)
**-** [Running Intel(R) DL Streamer Pipeline Server](docs/running_video_analytics_serving.md)
**-** [Customizing Pipeline Requests](docs/customizing_pipeline_requests.md)
**-** [Creating Extensions](docs/creating_extensions.md)| **-** [Intel(R) DL Streamer Pipeline Server Architecture Diagram](docs/images/video_analytics_service_architecture.png)
**-** [Microservice Endpoints](docs/restful_microservice_interfaces.md)
**-** [Build Script Reference](docs/build_script_reference.md)
**-** [Run Script Reference](docs/run_script_reference.md)
**-** [VA Client Reference](vaclient/README.md)| **-** [Changing Object Detection Models](docs/changing_object_detection_models.md)
**-** [Kubernetes Deployment with Load Balancing](samples/kubernetes/README.md) +| **-** [Defining Media Analytics Pipelines](docs/defining_pipelines.md)
**-** [Building Intel® DL Streamer Pipeline Server](docs/building_pipeline_server.md)
**-** [Running Intel® DL Streamer Pipeline Server](docs/running_pipeline_server.md)
**-** [Customizing Pipeline Requests](docs/customizing_pipeline_requests.md)
**-** [Creating Extensions](docs/creating_extensions.md)| **-** [Intel® DL Streamer Pipeline Server Architecture Diagram](docs/images/pipeline_server_architecture.png)
**-** [Microservice Endpoints](docs/restful_microservice_interfaces.md)
**-** [Build Script Reference](docs/build_script_reference.md)
**-** [Run Script Reference](docs/run_script_reference.md)
**-** [Pipeline Client Reference](client/README.md)| **-** [Changing Object Detection Models](docs/changing_object_detection_models.md)
**-** [Kubernetes Deployment with Load Balancing](samples/kubernetes/README.md) ## Related Links | **Media Frameworks** | **Media Analytics** | **Samples and Reference Designs** | ------------ | ------------------ | -----------------| -| **-** [GStreamer](https://gstreamer.freedesktop.org/documentation/?gi-language=c)*
**-** [GStreamer* Overview](docs/gstreamer_overview.md)
**-** [FFmpeg](https://ffmpeg.org/)* | **-** [OpenVINO Toolkit](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html)
**-** [Intel(R) Deep Learning Streamer](https://github.com/dlstreamer/dlstreamer)
**-** [FFmpeg* Video Analytics](https://github.com/VCDP/FFmpeg-patch) | **-** [Open Visual Cloud Smart City Sample](https://github.com/OpenVisualCloud/Smart-City-Sample)
**-** [Open Visual Cloud Ad Insertion Sample](https://github.com/OpenVisualCloud/Ad-Insertion-Sample)
**-** [Edge Insights for Retail](https://software.intel.com/content/www/us/en/develop/articles/real-time-sensor-fusion-for-loss-detection.html) +| **-** [GStreamer](https://gstreamer.freedesktop.org/documentation/?gi-language=c)*
**-** [GStreamer* Overview](docs/gstreamer_overview.md)
**-** [FFmpeg](https://ffmpeg.org/)* | **-** [Intel® Deep Learning Streamer Pipeline Framework](https://github.com/dlstreamer/dlstreamer)
**-** [OpenVINO Toolkit](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html)
**-** [FFmpeg* Video Analytics](https://github.com/VCDP/FFmpeg-patch) | **-** [Open Visual Cloud Smart City Sample](https://github.com/OpenVisualCloud/Smart-City-Sample)
**-** [Open Visual Cloud Ad Insertion Sample](https://github.com/OpenVisualCloud/Ad-Insertion-Sample)
**-** [Edge Insights for Retail](https://software.intel.com/content/www/us/en/develop/articles/real-time-sensor-fusion-for-loss-detection.html) # Known Issues diff --git a/vaclient/README.md b/client/README.md similarity index 60% rename from vaclient/README.md rename to client/README.md index fa40bf3..c2b2586 100644 --- a/vaclient/README.md +++ b/client/README.md @@ -1,8 +1,8 @@ -# VA Client Command Reference -vaclient is a python app intended to be a reference for using Intel(R) Deep Learning Streamer (Intel(R) DL Streamer) Pipeline Server REST API. vaclient is included in Pipeline Server's REST container and can be easily launched using the accompanying run script, `vaclient.sh`. +# Pipeline Client Command Reference +pipeline_client is a python app intended to be a reference for using Intel(R) Deep Learning Streamer (Intel(R) DL Streamer) Pipeline Server REST API. pipeline_client is included in Pipeline Server's REST container and can be easily launched using the accompanying run script, `pipeline_client.sh`. >**Note:** -This document assumes you are familiar with Intel(R) DL Streamer Pipeline Server and have built the image locally and Pipeline Server REST instance is running for VA Client to connect to. See the main [README](../README.md) for details on building and running the service. +This document assumes you are familiar with Intel(R) DL Streamer Pipeline Server and have built the image locally and Pipeline Server REST instance is running for Pipeline Client to connect to. See the main [README](../README.md) for details on building and running the service. ## Basic Usage ### Listing Supported Models and Pipelines @@ -11,7 +11,7 @@ To see which models and pipelines are loaded by the service run the following co Listing models: ``` - ./vaclient/vaclient.sh list-models + ./client/pipeline_client.sh list-models ``` ``` @@ -26,7 +26,7 @@ Listing models: Listing pipelines: ``` -./vaclient/vaclient.sh list-pipelines +./client/pipeline_client.sh list-pipelines ``` ``` @@ -40,12 +40,12 @@ Listing pipelines: ``` ### Running Pipelines -vaclient can be used to send pipeline start requests using the `run` command. With the `run` command you will need to enter two additional arguments the `pipeline` (in the form of pipeline_name/pipeline_version) you wish to use and the `uri` pointing to the media of your choice. +pipeline_client can be used to send pipeline start requests using the `run` command. With the `run` command you will need to enter two additional arguments the `pipeline` (in the form of pipeline_name/pipeline_version) you wish to use and the `uri` pointing to the media of your choice. ``` -./vaclient/vaclient.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true +./client/pipeline_client.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true ``` -If the pipeline request is successful, an instance id is created and vaclient will print the instance. More on `instance_id` below. -Once pre-roll is completed and pipeline begins running, the output file is processed by vaclient and inference information is printed to the screen in the following format: `label (confidence) [top left width height] {meta-data}` At the end of the pipeline run, the average fps is printed as well. If you wish to stop the pipeline mid-run, `Ctrl+C` will signal the client to send a `stop` command to the service. 
Once the pipeline is stopped, vaclient will output the average fps. More on `stop` below +If the pipeline request is successful, an instance id is created and pipeline_client will print the instance. More on `instance_id` below. +Once pre-roll is completed and pipeline begins running, the output file is processed by pipeline_client and inference information is printed to the screen in the following format: `label (confidence) [top left width height] {meta-data}` At the end of the pipeline run, the average fps is printed as well. If you wish to stop the pipeline mid-run, `Ctrl+C` will signal the client to send a `stop` command to the service. Once the pipeline is stopped, pipeline_client will output the average fps. More on `stop` below ``` Pipeline instance = @@ -69,7 +69,7 @@ Timestamp 49250000000 - vehicle (0.64) [0.00, 0.14, 0.05, 0.34] {} avg_fps: 39.66 ``` -However, if there are errors during pipeline execution i.e GPU is specified as detection device but is not present, vaclient will terminate with an error message +However, if there are errors during pipeline execution i.e GPU is specified as detection device but is not present, pipeline_client will terminate with an error message ``` Pipeline instance = Error in pipeline, please check pipeline-server log messages @@ -78,30 +78,30 @@ Error in pipeline, please check pipeline-server log messages ### Starting Pipelines The `run` command is helpful for quickly showing inference results but `run` blocks until completion. If you want to do your own processing and only want to kickoff a pipeline, this can be done with the `start` command. `start` arguments are the same as `run`, you'll need to provide the `pipeline` and `uri`. Run the following command: ``` -./vaclient/vaclient.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true +./client/pipeline_client.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true ``` -Similar to `run`, if the pipeline request is successful, an instance id is created and vaclient will print the instance. More on `instance_id` below. +Similar to `run`, if the pipeline request is successful, an instance id is created and pipeline_client will print the instance. More on `instance_id` below. ``` Pipeline instance = ``` -Errors during pipeline execution are not flagged as vaclient exits after receiving instance id for a successful request. However, both `start` and `run` will flag invalid requests, for example: +Errors during pipeline execution are not flagged as pipeline_client exits after receiving instance id for a successful request. 
However, both `start` and `run` will flag invalid requests, for example: ``` -./vaclient/vaclient.sh start object_detection/person_vehicle_bke https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true +./client/pipeline_client.sh start object_detection/person_vehicle_bke https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true ``` The pipeline name has a typo `object_detection/person_vehicle_bke` making it invalid, this results in the error message: ``` -Status 400 - "Invalid Pipeline or Version" +"Invalid Pipeline or Version" ``` #### Instance ID -On a successful start of a pipeline, VA Serving assigns a pipeline `instance_id` which is a [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier) which can be used to reference the pipeline in subsequent requests. In this example, the `instance_id` is `0fe8f408ea2441bca8161e1190eefc51` +On a successful start of a pipeline, Pipeline Server assigns a pipeline `instance_id` which is a [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier) which can be used to reference the pipeline in subsequent requests. In this example, the `instance_id` is `0fe8f408ea2441bca8161e1190eefc51` ``` Starting pipeline object_detection/person_vehicle_bike, instance = 0fe8f408ea2441bca8161e1190eefc51 ``` ### Stopping Pipelines Stopping a pipeline can be accomplished using the `stop` command along with the `pipeline` and `instance id`: ``` -./vaclient/vaclient.sh stop object_detection/person_vehicle_bike 0fe8f408ea2441bca8161e1190eefc51 +./client/pipeline_client.sh stop object_detection/person_vehicle_bike 0fe8f408ea2441bca8161e1190eefc51 ``` ``` Stopping Pipeline... @@ -111,9 +111,9 @@ avg_fps: 42.07 ### Getting Pipeline Status Querying the current state of the pipeline is done using the `status` command along with the `pipeline` and `instance id`: ``` -./vaclient/vaclient.sh status object_detection/person_vehicle_bike 0fe8f408ea2441bca8161e1190eefc51 +./client/pipeline_client.sh status object_detection/person_vehicle_bike 0fe8f408ea2441bca8161e1190eefc51 ``` -vaclient will print the status of `QUEUED`, `RUNNING`, `ABORTED`, `COMPLETED` or `ERROR` and also fps. +pipeline_client will print the status of `QUEUED`, `RUNNING`, `ABORTED`, `COMPLETED` or `ERROR` and also fps. ``` RUNNING (30fps) @@ -122,7 +122,7 @@ RUNNING (30fps) ### Waiting for a pipeline to finish If you wish to wait for a pipeline to finish running you can use the `wait` command along with the `pipeline` and `instance id`: ``` -./vaclient/vaclient.sh wait object_detection/person_vehicle_bike 0fe8f408ea2441bca8161e1190eefc51 +./client/pipeline_client.sh wait object_detection/person_vehicle_bike 0fe8f408ea2441bca8161e1190eefc51 ``` The client will print the initial status of the pipeline. Then wait for completion and print the average fps. @@ -131,19 +131,21 @@ Querying the current state of the pipeline is done using the `list-instances` co This example starts two pipelines and then gets their status and request details. 
``` -./vaclient/vaclient.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true +./client/pipeline_client.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true ``` +Output: ``` Starting pipeline object_detection/person_vehicle_bike, instance = 94cf72b718184615bfc181c6589b240c ``` ``` -./vaclient/vaclient.sh start object_classification/vehicle_attributes https://github.com/intel-iot-devkit/sample-videos/blob/master/car-detection.mp4?raw=true +./client/pipeline_client.sh start object_classification/vehicle_attributes https://github.com/intel-iot-devkit/sample-videos/blob/master/car-detection.mp4?raw=true ``` +Output: ``` Starting pipeline object_classification/vehicle_attributes, instance = 978e09c561f14fa1b793e8b644f30031 ``` ``` -./vaclient/vaclient.sh list-instances +./client/pipeline_client.sh list-instances ``` ``` 01: object_detection/person_vehicle_bike @@ -186,26 +188,26 @@ parameters: { See [customizing pipeline requests](../docs/customizing_pipeline_requests.md) to further understand how pipeline request options can be customized. ### --quiet -This optional argument is meant to handle logging verbosity common across all commands to vaclient. +This optional argument is meant to handle logging verbosity common across all commands to pipeline_client. > **Note**: If specified, --quiet needs to be placed ahead of the specific command i.e start, run etc. #### Start -vaclient output will just be the pipeline instance. +pipeline_client output will just be the pipeline instance. ``` -./vaclient/vaclient.sh --quiet start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true +./client/pipeline_client.sh --quiet start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true ``` ``` -2 +280ff2c4a54611ec8b900242ac110002 ``` #### Run -vaclient output will be the pipeline instance followed by inference results. +pipeline_client output will be the pipeline instance followed by inference results. ``` -./vaclient/vaclient.sh --quiet run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true +./client/pipeline_client.sh --quiet run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true ``` ``` -1 +280ff2c4a54611ec8b900242ac110002 Timestamp 1500000000 - person (0.54) [0.67, 0.88, 0.74, 1.00] Timestamp 1666666666 @@ -213,7 +215,7 @@ Timestamp 1666666666 ``` ### Run/Start Arguments -This section summarizes all the arguments for vaclient `run` and `start` commands. +This section summarizes all the arguments for pipeline_client `run` and `start` commands. #### pipeline (required) Positional argument (first) that specifies the pipeline to be launched in the form of `pipeline name/pipeline version`. 
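Together with the `uri` argument described in the next section, this argument maps directly onto the underlying REST request: `pipeline` selects the URL path while `uri` fills the `source` section of the body. A small illustrative sketch (hypothetical values; the POST path is an assumption inferred from the `--show-request` examples below):

```python
# Hypothetical illustration of how the two positional arguments are used.
pipeline = "object_detection/person_vehicle_bike"    # first positional argument
uri = ("https://github.com/intel-iot-devkit/sample-videos/blob/master/"
       "person-bicycle-car-detection.mp4?raw=true")   # second positional argument

url = f"http://localhost:8080/pipelines/{pipeline}"   # assumed POST target
body = {"source": {"uri": uri, "type": "uri"}}         # uri becomes source.uri
```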
@@ -223,7 +225,7 @@ Positional argument (second) that specifies the location of the content to play/ > Note: uri argument can be skipped only if passed in via --request-file #### --destination -By default, vaclient uses a generic template for destination: +By default, pipeline_client uses a generic template for destination: ```json { "destination": { @@ -236,15 +238,15 @@ By default, vaclient uses a generic template for destination: ``` Destination configuration can be updated with `--destination`. This argument affects only metadata part of the destination. In the following example, passing in `--destination path /tmp/newfile.jsonl` will update the filepath for saving inference result. -> **Note**: You may need to volume mount this new location when running VA Serving. +> **Note**: You may need to volume mount this new location when running Pipeline Server. ``` -./vaclient/vaclient.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --destination path /tmp/newfile.jsonl +./client/pipeline_client.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --destination path /tmp/newfile.jsonl ``` -If other destination types are specified (e.g. `mqtt` or `kafka` ), the pipeline will try to publish to specified broker and vaclient will subscribe to it and display published metadata. Here is an mqtt example using a broker on localhost. +If other destination types are specified (e.g. `mqtt` or `kafka` ), the pipeline will try to publish to specified broker and pipeline_client will subscribe to it and display published metadata. Here is an mqtt example using a broker on localhost. ``` docker run -rm --network=host -d eclipse-mosquitto:1.6 -./vaclient/vaclient.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --destination type mqtt --destination host localhost:1883 --destination topic pipeline-server +./client/pipeline_client.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --destination type mqtt --destination host localhost:1883 --destination topic pipeline-server ``` ``` Starting pipeline object_detection/person_vehicle_bike, instance = @@ -263,17 +265,17 @@ Timestamp 4000000000 For more details on destination types, see [customizing pipeline requests](../docs/customizing_pipeline_requests.md#metadata). #### --rtsp-path If you are utilizing RTSP restreaming, `--rtsp-path` can be used to update the `server_url` path. This updates the frame part of destination under the hood. -For example, adding `--rtsp-path new_path` will able you to view the stream at `rtsp://:/new_path`. More details on RTSP restreaming in [running_video_analytics_serving](../docs/running_video_analytics_serving.md) documentation. +For example, adding `--rtsp-path new_path` will able you to view the stream at `rtsp://:/new_path`. More details on RTSP restreaming in [running_pipeline_server](../docs/running_pipeline_server.md) documentation. #### --parameter -By default, vaclient relies on pipeline parameter defaults. This can be updated with `--parameter` option. See [Defining Pipelines](../docs/defining_pipelines.md) to know how parameters are defined. 
The following example adds `--parameter detection-device GPU` +By default, pipeline_client relies on pipeline parameter defaults. This can be updated with `--parameter` option. See [Defining Pipelines](../docs/defining_pipelines.md) to know how parameters are defined. The following example adds `--parameter detection-device GPU` ``` -./vaclient/vaclient.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --parameter detection-device GPU +./client/pipeline_client.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --parameter detection-device GPU ``` #### --parameter-file Specifies a JSON file that contains parameters in key, value pairs. Parameters from this file take precedence over those set by `--parameter`. -> **Note**: As vaclient volume mounts /tmp, the parameter file may be placed there. +> **Note**: As pipeline_client volume mounts /tmp, the parameter file may be placed there. A sample parameter file can look like ```json @@ -285,26 +287,26 @@ A sample parameter file can look like ``` The above file, say /tmp/sample_parameters.json may be used as follows: ``` -./vaclient/vaclient.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --parameter-file /tmp/sample_parameters.json +./client/pipeline_client.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --parameter-file /tmp/sample_parameters.json ``` #### --tag Specifies a key, value pair to update request with. This information is added to each frame's metadata. This example adds tags for direction and location of video capture ``` -./vaclient/vaclient.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --tag direction east --tag camera_location parking_lot +./client/pipeline_client.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --tag direction east --tag camera_location parking_lot ``` #### --server-address This can be used with any command to specify a remote HTTP server address. Here we start a pipeline on remote server `http://remote-server.my-domain.com:8080`. 
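Under the hood, `--server-address` only changes the base URL that the client joins its REST paths onto (pipeline_client.py builds its URLs with `urljoin`). A minimal sketch of that behaviour, using a hypothetical instance id, before the actual command below:

```python
from urllib.parse import urljoin

# --server-address changes only the base URL used for the REST calls;
# the path layout stays the same as for the default localhost server.
server_address = "http://remote-server.my-domain.com:8080"
instance_id = "94cf72b718184615bfc181c6589b240c"  # hypothetical instance id
status_url = urljoin(server_address,
                     "pipelines/object_detection/person_vehicle_bike/"
                     f"status/{instance_id}")
print(status_url)
# -> http://remote-server.my-domain.com:8080/pipelines/object_detection/person_vehicle_bike/status/94cf72b718184615bfc181c6589b240c
```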
``` -./vaclient/vaclient.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --tag direction east --server=address http://remote-server.my-domain.com:8080 +./client/pipeline_client.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --tag direction east --server=address http://remote-server.my-domain.com:8080 ``` #### --status-only Use with `run` command to disable output of metadata and periodically display pipeline state and fps ``` -./vaclient/vaclient.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --tag direction east --status-only +./client/pipeline_client.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --tag direction east --status-only ``` ``` Starting pipeline 0 @@ -324,7 +326,7 @@ Pipeline status @ 21s Takes an integer value that specifies the number of streams to start (default value is 1) using specified request. If number of streams is greater than one, "status only" display mode is used. ``` -python3 vaclient run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --status-only --number-of-streams 4 --server-address http://hbruce-desk2.jf.intel.com:8080 +./client/pipeline_client.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --status-only --number-of-streams 4 --server-address http://hbruce-desk2.jf.intel.com:8080 ``` ``` Starting pipeline 0 @@ -360,7 +362,7 @@ Pipeline status @ 21s #### --request-file Specifies a JSON file that contains the complete request i.e source, destination, tags and parameters. See [Customizing Pipeline Requests](../docs/customizing_pipeline_requests.md) for examples of requests in JSON format. -> **Note**: As vaclient volume mounts /tmp, the request file may be placed there. +> **Note**: As pipeline_client volume mounts /tmp, the request file may be placed there. A sample request file can look like ```json @@ -383,14 +385,14 @@ A sample request file can look like ``` The above file, named for instance as /tmp/sample_request.json may be used as follows: ``` -./vaclient/vaclient.sh start object_detection/person_vehicle_bike --request-file /tmp/sample_request.json +./client/pipeline_client.sh start object_detection/person_vehicle_bike --request-file /tmp/sample_request.json ``` #### --show-request -All vaclient commands can be used with the `--show-request` option which will print out the HTTP request and exit i.e it will not be sent to VA Serving. +All pipeline_client commands can be used with the `--show-request` option which will print out the HTTP request and exit i.e it will not be sent to the Pipeline Server. 
This example shows the result of `--show-request` when the pipeline is started with options passed in ``` -./vaclient/vaclient.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --destination path /tmp/newfile.jsonl --parameter detection-device GPU --tag direction east --tag camera_location parking_lot --show-request +./client/pipeline_client.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --destination path /tmp/newfile.jsonl --parameter detection-device GPU --tag direction east --tag camera_location parking_lot --show-request ``` ``` @@ -400,7 +402,7 @@ Body:{'source': {'uri': 'https://github.com/intel-iot-devkit/sample-videos/blob/ See [View REST request](../README.md#view-rest-request) to see how the output from `--show-request` can be mapped to a curl command. ### Status/Wait/Stop Arguments -This section summarizes all the arguments for vaclient `status`, `wait` and `stop` commands. +This section summarizes all the arguments for pipeline_client `status`, `wait` and `stop` commands. #### pipeline (required) Positional argument (first) that specifies the pipeline to wait on/query status of/stop as indicated in the form of `pipeline name/pipeline version` @@ -413,23 +415,23 @@ As mentioned before, `--show-request` option which will print out the HTTP reque ##### Status ``` -./vaclient/vaclient.sh status object_detection/person_vehicle_bike 94cf72b718184615bfc181c6589b240c --show-request +./client/pipeline_client.sh status object_detection/person_vehicle_bike 94cf72b718184615bfc181c6589b240c --show-request ``` ``` -GET http://localhost:8080/pipelines/object_detection/person_vehicle_bike/94cf72b718184615bfc181c6589b240c/status +GET http://localhost:8080/pipelines/object_detection/person_vehicle_bike/status/94cf72b718184615bfc181c6589b240c ``` ##### Wait ``` -./vaclient/vaclient.sh wait object_detection/person_vehicle_bike 94cf72b718184615bfc181c6589b240c --show-request +./client/pipeline_client.sh wait object_detection/person_vehicle_bike 94cf72b718184615bfc181c6589b240c --show-request ``` ``` -GET http://localhost:8080/pipelines/object_detection/person_vehicle_bike/94cf72b718184615bfc181c6589b240c/status +GET http://localhost:8080/pipelines/object_detection/person_vehicle_bike/status/94cf72b718184615bfc181c6589b240c ``` ##### Stop ``` -./vaclient/vaclient.sh stop object_detection/person_vehicle_bike 94cf72b718184615bfc181c6589b240c --show-request +./client/pipeline_client.sh stop object_detection/person_vehicle_bike 94cf72b718184615bfc181c6589b240c --show-request ``` ``` diff --git a/vaclient/__main__.py b/client/__main__.py similarity index 100% rename from vaclient/__main__.py rename to client/__main__.py diff --git a/vaclient/arguments.py b/client/arguments.py similarity index 88% rename from vaclient/arguments.py rename to client/arguments.py index 0185613..5f0bd72 100644 --- a/vaclient/arguments.py +++ b/client/arguments.py @@ -30,7 +30,7 @@ import sys import json import argparse -import vaclient +import pipeline_client def get_typed_value(value): @@ -53,6 +53,7 @@ def add_request_arguments(parser): parser.add_argument('--destination', action='append', nargs=2, metavar=('key', 'value'), type=str, \ help='Update destination information with key and value pair') parser.add_argument('--rtsp-path', type=str, help='RTSP endpoint path') + parser.add_argument('--webrtc-peer-id', type=str, 
help='WebRTC server side peer id') parser.add_argument('--parameter', action='append', nargs=2, metavar=('key', 'value'), type=get_typed_value, \ dest='parameters', help='Update request parameter with key and value pair') parser.add_argument('--parameter-file', type=str, dest='parameter_file', help='Update request parameter \ @@ -64,7 +65,7 @@ def add_request_arguments(parser): parser.add_argument('--number-of-streams', type=int, default=1, dest="streams", help='Set number of streams') parser.add_argument("--status-only", action='store_true', help='Only show status') -def parse_args(program_name="Intel(R) DL Streamer Pipeline Server Client"): +def parse_args(program_name="Pipeline Client"): """Process command line options""" #pylint: disable=too-many-statements parser = argparse.ArgumentParser( @@ -75,40 +76,40 @@ def parse_args(program_name="Intel(R) DL Streamer Pipeline Server Client"): parser_run = subparsers.add_parser('run', help='Start specified pipeline with specified source. \ Meta-data will be displayed as pipeline runs. Once pipeline ends the average fps is displayed') - parser_run.set_defaults(command=vaclient.run) + parser_run.set_defaults(command=pipeline_client.run) add_request_arguments(parser_run) add_common_arguments(parser_run) parser_start = subparsers.add_parser('start', help='start specified pipeline') - parser_start.set_defaults(command=vaclient.start) + parser_start.set_defaults(command=pipeline_client.start) add_request_arguments(parser_start) add_common_arguments(parser_start) parser_status = subparsers.add_parser('status', help='Print status of specified pipeline') - parser_status.set_defaults(command=vaclient.status) + parser_status.set_defaults(command=pipeline_client.status) add_instance_arguments(parser_status) add_common_arguments(parser_status) parser_wait = subparsers.add_parser('wait', help='Connect to a running pipeline and wait until completion') - parser_wait.set_defaults(command=vaclient.wait) + parser_wait.set_defaults(command=pipeline_client.wait) add_instance_arguments(parser_wait) add_common_arguments(parser_wait) parser_stop = subparsers.add_parser('stop', help='Stop a specified pipeline') - parser_stop.set_defaults(command=vaclient.stop) + parser_stop.set_defaults(command=pipeline_client.stop) add_instance_arguments(parser_stop) add_common_arguments(parser_stop) parser_list_pipelines = subparsers.add_parser('list-pipelines', help='List loaded pipelines') - parser_list_pipelines.set_defaults(command=vaclient.list_pipelines) + parser_list_pipelines.set_defaults(command=pipeline_client.list_pipelines) add_common_arguments(parser_list_pipelines) parser_list_models = subparsers.add_parser('list-models', help='List loaded models') - parser_list_models.set_defaults(command=vaclient.list_models) + parser_list_models.set_defaults(command=pipeline_client.list_models) add_common_arguments(parser_list_models) parser_list_instances = subparsers.add_parser('list-instances', help='List active pipeline instances') - parser_list_instances.set_defaults(command=vaclient.list_instances) + parser_list_instances.set_defaults(command=pipeline_client.list_instances) add_common_arguments(parser_list_instances) parser.add_argument("--quiet", action="store_false", diff --git a/vaclient/parameter_files/object-line-crossing.json b/client/parameter_files/object-line-crossing.json similarity index 100% rename from vaclient/parameter_files/object-line-crossing.json rename to client/parameter_files/object-line-crossing.json diff --git 
a/vaclient/parameter_files/object-zone-count.json b/client/parameter_files/object-zone-count.json similarity index 100% rename from vaclient/parameter_files/object-zone-count.json rename to client/parameter_files/object-zone-count.json diff --git a/vaclient/vaclient.py b/client/pipeline_client.py similarity index 91% rename from vaclient/vaclient.py rename to client/pipeline_client.py index 57fdfa9..248a729 100755 --- a/vaclient/vaclient.py +++ b/client/pipeline_client.py @@ -13,7 +13,7 @@ from html.parser import HTMLParser import requests import results_watcher -from vaserving.pipeline import Pipeline +from server.pipeline import Pipeline RESPONSE_SUCCESS = 200 TIMEOUT = 30 @@ -40,6 +40,13 @@ "path": "" } } +WEBRTC_TEMPLATE = { + "frame": { + "type": "webrtc", + "peer-id": "" + } +} + SERVER_CONNECTION_FAILURE_MESSAGE = "Unable to connect to server, check if the pipeline-server microservice is running" def html_handle_data(self, data): @@ -131,7 +138,7 @@ def start(args): def stop(args): if stop_pipeline(args.server_address, args.instance, args.show_request): - print_fps(get_pipeline_status(args.server_address, args.instance)) + print_fps([get_pipeline_status(args.server_address, args.instance)]) def wait(args): try: @@ -140,11 +147,11 @@ def wait(args): print(pipeline_status["state"]) else: print("Unable to fetch status") - print_fps(wait_for_pipeline_completion(args.server_address, args.instance)) + print_fps([wait_for_pipeline_completion(args.server_address, args.instance)]) except KeyboardInterrupt: print() stop_pipeline(args.pipeline, args.instance) - print_fps(wait_for_pipeline_completion(args.server_address, args.instance)) + print_fps([wait_for_pipeline_completion(args.server_address, args.instance)]) def status(args): pipeline_status = get_pipeline_status(args.server_address, args.instance, args.show_request) @@ -202,6 +209,10 @@ def update_request_options(request, rtsp_template = RTSP_TEMPLATE rtsp_template['frame']['path'] = args.rtsp_path request['destination'].update(rtsp_template) + if hasattr(args, 'webrtc_peer_id') and args.webrtc_peer_id: + webrtc_template = WEBRTC_TEMPLATE + webrtc_template['frame']['peer-id'] = args.webrtc_peer_id + request['destination'].update(webrtc_template) if hasattr(args, 'request_file') and args.request_file: with open(args.request_file, 'r') as request_file: request.update(json.load(request_file)) @@ -256,13 +267,15 @@ def wait_for_pipeline_running(server_address, timeout_count = 0 while status and not Pipeline.State[status["state"]] == Pipeline.State.RUNNING: status = get_pipeline_status(server_address, instance_id) - if not status or status["state"] == "ERROR": - raise ValueError("Error in pipeline, please check pipeline-server log messages") + if not status or Pipeline.State[status["state"]].stopped(): + break time.sleep(SLEEP_FOR_STATUS) timeout_count += 1 if timeout_count * SLEEP_FOR_STATUS >= timeout_sec: print("Timed out waiting for RUNNING status") break + if not status or status["state"] == "ERROR": + raise ValueError("Error in pipeline, please check pipeline-server log messages") return Pipeline.State[status["state"]] == Pipeline.State.RUNNING def wait_for_pipeline_completion(server_address, instance_id): @@ -277,6 +290,7 @@ def wait_for_pipeline_completion(server_address, instance_id): def wait_for_all_pipeline_completions(server_address, instance_ids, status_only=False): status = {"state" : "RUNNING"} + status_list = [] stopped = False num_streams = len(instance_ids) if num_streams == 0: @@ -295,15 +309,17 @@ def 
wait_for_all_pipeline_completions(server_address, instance_ids, status_only= instance_id, status["state"], round(status["avg_fps"]))) if not Pipeline.State[status["state"]].stopped(): all_streams_stopped = False + status_list.append(status) first_pipeline = False stopped = all_streams_stopped else: time.sleep(SLEEP_FOR_STATUS) status = get_pipeline_status(server_address, instance_ids[0]) stopped = Pipeline.State[status["state"]].stopped() + status_list.append(status) if status and status["state"] == "ERROR": raise ValueError("Error in pipeline, please check pipeline-server log messages") - return status + return status_list def get_pipeline_status(server_address, instance_id, show_request=False): status_url = urljoin(server_address, @@ -361,9 +377,15 @@ def delete(url, show_request=False): raise ConnectionError(SERVER_CONNECTION_FAILURE_MESSAGE) from error return None -def print_fps(status): - if status and 'avg_fps' in status: - print('avg_fps: {:.2f}'.format(status['avg_fps'])) +def print_fps(status_list): + sum_of_all_fps = 0 + num_of_pipelines = 0 + for status in status_list: + if status and 'avg_fps' in status and status['avg_fps'] > 0: + sum_of_all_fps += status['avg_fps'] + num_of_pipelines += 1 + if num_of_pipelines > 0: + print('avg_fps: {:.2f}'.format(sum_of_all_fps/num_of_pipelines)) def print_list(item_list): for item in item_list: diff --git a/vaclient/vaclient.sh b/client/pipeline_client.sh similarity index 66% rename from vaclient/vaclient.sh rename to client/pipeline_client.sh index 8b88e87..0d222ca 100755 --- a/vaclient/vaclient.sh +++ b/client/pipeline_client.sh @@ -7,10 +7,10 @@ VOLUME_MOUNT="-v /tmp:/tmp " IMAGE="dlstreamer-pipeline-server-gstreamer" -VASERVING_ROOT=/home/pipeline-server +PIPELINE_SERVER_ROOT=/home/pipeline-server ENTRYPOINT="python3" -ENTRYPOINT_ARGS="$VASERVING_ROOT/vaclient $@" -LOCAL_VACLIENT_DIR=$(dirname $(readlink -f "$0")) -ROOT_DIR=$(dirname $LOCAL_VACLIENT_DIR) +ENTRYPOINT_ARGS="$PIPELINE_SERVER_ROOT/client $@" +LOCAL_CLIENT_DIR=$(dirname $(readlink -f "$0")) +ROOT_DIR=$(dirname $LOCAL_CLIENT_DIR) "$ROOT_DIR/docker/run.sh" $INTERACTIVE --name \"\" --network host --image $IMAGE $VOLUME_MOUNT --entrypoint $ENTRYPOINT --entrypoint-args "$ENTRYPOINT_ARGS" diff --git a/vaclient/results_watcher.py b/client/results_watcher.py similarity index 100% rename from vaclient/results_watcher.py rename to client/results_watcher.py diff --git a/docker/Dockerfile b/docker/Dockerfile index 360e89b..3ea18cf 100755 --- a/docker/Dockerfile +++ b/docker/Dockerfile @@ -24,7 +24,7 @@ USER root # Dependencies for OpenVINO ARG BASE=dlstreamer-pipeline-server-gstreamer-base -ENV VA_SERVING_BASE=${BASE} +ENV PIPELINE_SERVER_BASE=${BASE} SHELL ["/bin/bash", "-c"] # Creating user pipeline-server and adding it to groups "video" and "users" to use GPU and VPU @@ -32,21 +32,8 @@ ARG USER=pipeline-server RUN useradd -ms /bin/bash -G video,audio,users ${USER} -d /home/pipeline-server && \ chown ${USER} -R /home/pipeline-server /root -RUN if [ -f /opt/intel/openvino/install_dependencies/install_NEO_OCL_driver.sh ]; then \ - /opt/intel/openvino/install_dependencies/install_NEO_OCL_driver.sh -y ; exit 0; \ - fi - -RUN if [[ ${VA_SERVING_BASE} == *"openvino/ubuntu20_data_runtime:2021.2" ]]; then \ - DEBIAN_FRONTEND=noninteractive apt-get update && \ - apt-get install -y -q --no-install-recommends \ - intel-media-va-driver-non-free \ - gstreamer1.0-tools && \ - apt-get clean && \ - rm -rf /var/lib/apt/lists/* ;\ - fi - # Install boost library required for HDDL plugin -RUN if [[ 
${VA_SERVING_BASE} == *"openvino/ubuntu20_data_runtime"* ]]; then \ +RUN if [[ ${PIPELINE_SERVER_BASE} == *"openvino/ubuntu20_data_runtime"* || ${PIPELINE_SERVER_BASE} == *"intel/dlstreamer"* ]]; then \ DEBIAN_FRONTEND=noninteractive apt-get update && \ apt-get install -y -q --no-install-recommends \ libboost-program-options1.71.0 && \ @@ -55,29 +42,46 @@ RUN if [[ ${VA_SERVING_BASE} == *"openvino/ubuntu20_data_runtime"* ]]; then \ fi RUN DEBIAN_FRONTEND=noninteractive apt-get update && \ - apt-get upgrade -y -q && \ - apt-get dist-upgrade -y -q && \ + apt-get install -y -q --no-install-recommends \ + gstreamer1.0-nice \ + python3-pip && \ apt-get clean && \ rm -rf /var/lib/apt/lists/* ; -RUN if [[ ${VA_SERVING_BASE} == *"openvisualcloud/xeone3-ubuntu1804-analytics-ffmpeg"* ]]; then \ +RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install -y libjemalloc-dev + +# Install GStreamer packages not present dlstreamer base image +RUN if [[ ${PIPELINE_SERVER_BASE} == *"dlstreamer"* ]]; then \ DEBIAN_FRONTEND=noninteractive apt-get update && \ apt-get install -y -q --no-install-recommends \ - python3 \ - python3-setuptools \ - python3-pip && \ + gstreamer1.0-plugins-good \ + gstreamer1.0-alsa \ + gstreamer1.0-libav \ + gstreamer1.0-plugins-bad \ + gstreamer1.0-plugins-ugly \ + gstreamer1.0-tools \ + gstreamer1.0-vaapi \ + gstreamer1.0-x \ + libgstreamer-plugins-bad1.0-0 \ + libgstreamer-plugins-base1.0-dev \ + libgstreamer1.0-dev && \ apt-get clean && \ rm -rf /var/lib/apt/lists/* ;\ fi +RUN DEBIAN_FRONTEND=noninteractive apt-get update && \ + apt-get upgrade -y -q && \ + apt-get dist-upgrade -y -q && \ + apt-get clean && \ + rm -rf /var/lib/apt/lists/* ; + COPY ./requirements.txt / RUN pip3 install --upgrade pip --no-cache-dir -r /requirements.txt RUN rm -f /requirements.txt -# Intel(R) DL Streamer Pipeline Server Python Modules -COPY ./vaserving /home/pipeline-server/vaserving -COPY ./vaclient /home/pipeline-server/vaclient -COPY --chown=pipeline-server ./tools /home/pipeline-server/tools +# Pipeline Server Python Modules +COPY ./server /home/pipeline-server/server +COPY ./client /home/pipeline-server/client # Copy GVA Python extensions COPY ./extensions /home/pipeline-server/extensions @@ -97,9 +101,9 @@ FROM dlstreamer-pipeline-server as do_not_copy_models # Creates a stage that copies models from the build context FROM dlstreamer-pipeline-server as copy_models -ONBUILD ARG MODELS_PATH -ONBUILD ENV MODELS_PATH=${MODELS_PATH} -ONBUILD COPY ${MODELS_PATH} /home/pipeline-server/models +ONBUILD ARG PS_MODELS_PATH +ONBUILD ENV PS_MODELS_PATH=${PS_MODELS_PATH} +ONBUILD COPY $PS_MODELS_PATH /home/pipeline-server/models # Stage that is used is controlled via MODELS_COMMAND build argument FROM ${MODELS_COMMAND} as dlstreamer-pipeline-server-with-models @@ -115,9 +119,9 @@ FROM dlstreamer-pipeline-server-with-models as do_not_copy_pipelines # Creates a stage that copies pipelines from the build context FROM dlstreamer-pipeline-server-with-models as copy_pipelines -ONBUILD ARG PIPELINES_PATH -ONBUILD ENV PIPELINES_PATH=${PIPELINES_PATH} -ONBUILD COPY ${PIPELINES_PATH} /home/pipeline-server/pipelines +ONBUILD ARG PS_PIPELINES_PATH +ONBUILD ENV PS_PIPELINES_PATH=${PS_PIPELINES_PATH} +ONBUILD COPY ${PS_PIPELINES_PATH} /home/pipeline-server/pipelines # Stage that is used is controlled via PIPELINES_COMMAND build argument FROM ${PIPELINES_COMMAND} as dlstreamer-pipeline-server-with-models-and-pipelines @@ -128,8 +132,8 @@ FROM ${PIPELINES_COMMAND} as 
dlstreamer-pipeline-server-with-models-and-pipeline # Final stage is controlled by the FINAL_STAGE build argument. FROM dlstreamer-pipeline-server-with-models-and-pipelines as dlstreamer-pipeline-server-library -ONBUILD RUN rm -rf /home/pipeline-server/vaserving/__main__.py -ONBUILD RUN rm -rf /home/pipeline-server/vaserving/rest_api +ONBUILD RUN rm -rf /home/pipeline-server/server/__main__.py +ONBUILD RUN rm -rf /home/pipeline-server/server/rest_api FROM dlstreamer-pipeline-server-with-models-and-pipelines as dlstreamer-pipeline-server-service @@ -137,13 +141,27 @@ FROM dlstreamer-pipeline-server-with-models-and-pipelines as dlstreamer-pipeline ONBUILD COPY ./requirements.service.txt / ONBUILD RUN pip3 install --no-cache-dir -r /requirements.service.txt ONBUILD RUN rm -f /requirements.service.txt -ONBUILD ENTRYPOINT ["python3", "-m", "vaserving"] + +# WebRTC specific dependencies installed via pip +ONBUILD COPY ./requirements.webrtc.txt / +ONBUILD RUN if [[ ${FRAMEWORK} == "gstreamer" ]]; then \ + pip3 install --no-cache-dir -r /requirements.webrtc.txt; \ + fi +ONBUILD RUN rm -f /requirements.webrtc.txt + +ONBUILD ENTRYPOINT ["python3", "-m", "server"] FROM ${FINAL_STAGE} as deploy ARG USER=pipeline-server -ENV PYTHONPATH=$PYTHONPATH:/home/pipeline-server +ENV HOME=/home/pipeline-server +ENV PYTHONPATH=/home/pipeline-server:$PYTHONPATH +ENV GST_PLUGIN_PATH=$GST_PLUGIN_PATH:/usr/lib/x86_64-linux-gnu/gstreamer-1.0/ +ENV LD_PRELOAD=libjemalloc.so + +ENV cl_cache_dir=/home/.cl_cache +RUN mkdir -p -m g+s $cl_cache_dir && chown ${USER}:users $cl_cache_dir # Prepare XDG_RUNTIME_DIR ENV XDG_RUNTIME_DIR=/home/.xdg_runtime_dir diff --git a/docker/build.sh b/docker/build.sh index aaea21e..09d3ecd 100755 --- a/docker/build.sh +++ b/docker/build.sh @@ -10,7 +10,7 @@ DOCKERFILE_DIR=$(dirname "$(readlink -f "$0")") SOURCE_DIR=$(dirname "$DOCKERFILE_DIR") BASE_IMAGE_FFMPEG="openvisualcloud/xeone3-ubuntu1804-analytics-ffmpeg:20.10" -BASE_IMAGE_GSTREAMER="openvino/ubuntu20_data_runtime:2021.4.2" +BASE_IMAGE_GSTREAMER="intel/dlstreamer:2022.1.0-ubuntu20" BASE_IMAGE=${BASE_IMAGE:-""} BASE_BUILD_CONTEXT= @@ -18,7 +18,7 @@ BASE_BUILD_DOCKERFILE= BASE_BUILD_TAG= USER_BASE_BUILD_ARGS= MODELS=$SOURCE_DIR/models_list/models.list.yml -MODELS_PATH=models +PS_MODELS_PATH=models PIPELINES= FRAMEWORK="gstreamer" TAG= @@ -35,8 +35,10 @@ BUILD_OPTIONS="--network=host " BASE_BUILD_OPTIONS="--network=host " SUPPORTED_IMAGES=($BASE_IMAGE_GSTREAMER $BASE_IMAGE_FFMPEG) -OPEN_MODEL_ZOO_TOOLS_IMAGE=${OPEN_MODEL_ZOO_TOOLS_IMAGE:-"openvino/ubuntu20_data_dev"} -OPEN_MODEL_ZOO_VERSION=${OPEN_MODEL_ZOO_VERSION:-"2021.4.2"} +DEFAULT_OMZ_IMAGE_GSTREAMER="intel/dlstreamer" +DEFAULT_OMZ_VERSION_GSTREAMER="2022.1.0-ubuntu20-devel" +DEFAULT_OMZ_IMAGE_FFMPEG="openvino/ubuntu18_data_dev" +DEFAULT_OMZ_VERSION_FFMPEG="2021.2" FORCE_MODEL_DOWNLOAD= DEFAULT_GSTREAMER_BASE_BUILD_TAG="dlstreamer-pipeline-server-gstreamer-base" @@ -246,6 +248,13 @@ get_options() { BASE_IMAGE=${CACHE_PREFIX}$BASE_IMAGE_GSTREAMER fi fi + if [ $FRAMEWORK = 'ffmpeg' ]; then + OPEN_MODEL_ZOO_TOOLS_IMAGE=${OPEN_MODEL_ZOO_TOOLS_IMAGE:-$DEFAULT_OMZ_IMAGE_FFMPEG} + OPEN_MODEL_ZOO_VERSION=${OPEN_MODEL_ZOO_VERSION:-$DEFAULT_OMZ_VERSION_FFMPEG} + else + OPEN_MODEL_ZOO_TOOLS_IMAGE=${OPEN_MODEL_ZOO_TOOLS_IMAGE:-$DEFAULT_OMZ_IMAGE_GSTREAMER} + OPEN_MODEL_ZOO_VERSION=${OPEN_MODEL_ZOO_VERSION:-$DEFAULT_OMZ_VERSION_GSTREAMER} + fi if [ -f "$MODELS" ]; then if [[ ! 
" ${SUPPORTED_IMAGES[@]} " =~ " ${BASE_IMAGE} " ]]; then @@ -328,7 +337,7 @@ show_base_options() { show_image_options() { echo "" - echo "Building Intel(R) DL Streamer Pipeline Server Image: '${TAG}'" + echo "Building Pipeline Server Image: '${TAG}'" echo "" echo " Base: '${BASE_IMAGE}'" echo " Build Context: '${SOURCE_DIR}'" @@ -350,8 +359,8 @@ show_help() { echo " [--base base image]" echo " [--framework ffmpeg || gstreamer]" echo " [--models path to models directory or model list file or NONE]" - echo " [--open-model-zoo-image specify the OpenVINO(TM) image to be used for downloading models from Open Model Zoo]" - echo " [--open-model-zoo-version specify the version of OpenVINO(TM) image to be used for downloading models from Open Model Zoo]" + echo " [--open-model-zoo-image specify the base image to be used for downloading models from Open Model Zoo]" + echo " [--open-model-zoo-version specify the version of base image to be used for downloading models from Open Model Zoo]" echo " [--force-model-download force the download of models from Open Model Zoo]" echo " [--pipelines path to pipelines directory relative to $SOURCE_DIR or NONE]" echo " [--base-build-context docker context for building base image]" @@ -396,14 +405,14 @@ fi BUILD_ARGS+=" --build-arg BASE=$BASE_IMAGE " BUILD_ARGS+=" --build-arg FRAMEWORK=$FRAMEWORK " if [ -n "$MODELS" ]; then - BUILD_ARGS+="--build-arg MODELS_PATH=$MODELS_PATH " + BUILD_ARGS+="--build-arg PS_MODELS_PATH=$PS_MODELS_PATH " BUILD_ARGS+="--build-arg MODELS_COMMAND=copy_models " else BUILD_ARGS+="--build-arg MODELS_COMMAND=do_not_copy_models " fi if [ -n "$PIPELINES" ]; then - BUILD_ARGS+="--build-arg PIPELINES_PATH=$PIPELINES " + BUILD_ARGS+="--build-arg PS_PIPELINES_PATH=$PIPELINES " BUILD_ARGS+="--build-arg PIPELINES_COMMAND=copy_pipelines " else BUILD_ARGS+="--build-arg PIPELINES_COMMAND=do_not_copy_pipelines " @@ -415,7 +424,6 @@ else BUILD_ARGS+="--build-arg FINAL_STAGE=dlstreamer-pipeline-server-library " fi -cp -f $DOCKERFILE_DIR/Dockerfile $DOCKERFILE_DIR/Dockerfile.env ENVIRONMENT_FILE_LIST= if [[ "$BASE_IMAGE" == *"openvino/"* ]]; then @@ -429,11 +437,14 @@ for ENVIRONMENT_FILE in ${ENVIRONMENT_FILES[@]}; do fi done +DOCKER_FILE=$DOCKERFILE_DIR/Dockerfile if [ ! -z "$ENVIRONMENT_FILE_LIST" ]; then + DOCKER_FILE=$DOCKERFILE_DIR/Dockerfile.env + cp -f $DOCKERFILE_DIR/Dockerfile $DOCKER_FILE cat $ENVIRONMENT_FILE_LIST | grep -E '=' | sed -e 's/,\s\+/,/g' | tr '\n' ' ' | tr '\r' ' ' > $DOCKERFILE_DIR/final.env echo " HOME=/home/pipeline-server " >> $DOCKERFILE_DIR/final.env - echo "ENV " | cat - $DOCKERFILE_DIR/final.env | tr -d '\n' >> $DOCKERFILE_DIR/Dockerfile.env - printf "\nENV PYTHONPATH=\$PYTHONPATH:/home/pipeline-server\nENV GST_PLUGIN_PATH=\$GST_PLUGIN_PATH:/usr/lib/x86_64-linux-gnu/gstreamer-1.0/" >> $DOCKERFILE_DIR/Dockerfile.env + echo "ENV " | cat - $DOCKERFILE_DIR/final.env | tr -d '\n' >> $DOCKER_FILE + printf "\nENV PYTHONPATH=/home/pipeline-server:\$PYTHONPATH\nENV LD_PRELOAD=libjemalloc.so\nENV GST_PLUGIN_PATH=\$GST_PLUGIN_PATH:/usr/lib/x86_64-linux-gnu/gstreamer-1.0/" >> $DOCKER_FILE fi show_image_options @@ -441,4 +452,4 @@ show_image_options echo "-----------------------------" echo "Building Image..." 
echo "-----------------------------" -launch "$RUN_PREFIX docker build -f "$DOCKERFILE_DIR/Dockerfile.env" $BUILD_OPTIONS $BUILD_ARGS -t $TAG --target $TARGET $SOURCE_DIR" +launch "$RUN_PREFIX docker build -f "$DOCKER_FILE" $BUILD_OPTIONS $BUILD_ARGS -t $TAG --target $TARGET $SOURCE_DIR" diff --git a/docker/run.sh b/docker/run.sh index 485db50..b10f084 100755 --- a/docker/run.sh +++ b/docker/run.sh @@ -24,7 +24,8 @@ USER= INTERACTIVE=-it DEVICE_CGROUP_RULE= USER_GROUPS= -ENABLE_RTSP= +ENABLE_RTSP=${ENABLE_RTSP:-"false"} +ENABLE_WEBRTC=${ENABLE_WEBRTC:-"false"} RTSP_PORT=8554 SCRIPT_DIR=$(dirname "$(readlink -f "$0")") @@ -33,7 +34,7 @@ ENVIRONMENT=$(env | cut -f1 -d= | grep -E '_(proxy)$' | sed 's/^/-e / ' | tr '\n show_options() { echo "" - echo "Running Intel(R) DL Streamer Pipeline Server Image: '${IMAGE}'" + echo "Running Pipeline Server Image: '${IMAGE}'" echo " Models: '${MODELS}'" echo " Pipelines: '${PIPELINES}'" echo " Framework: '${FRAMEWORK}'" @@ -69,6 +70,7 @@ show_help() { echo " [--device device to pass to docker run]" echo " [--enable-rtsp To enable rtsp re-streaming]" echo " [--rtsp-port Specify the port to use for rtsp re-streaming]" + echo " [--enable-webrtc To enable WebRTC frame destination]" echo " [--dev run in developer mode]" exit 0 } @@ -262,6 +264,9 @@ while [[ "$#" -gt 0 ]]; do --enable-rtsp) ENABLE_RTSP=true ;; + --enable-webrtc) + ENABLE_WEBRTC=true + ;; --non-interactive) unset INTERACTIVE ;; @@ -326,11 +331,15 @@ fi enable_hardware_access -if [ ! -z "$ENABLE_RTSP" ]; then - ENVIRONMENT+="-e ENABLE_RTSP=true -e RTSP_PORT=$RTSP_PORT " +if [ "$ENABLE_RTSP" != "false" ]; then + ENVIRONMENT+="-e ENABLE_RTSP=$ENABLE_RTSP -e RTSP_PORT=$RTSP_PORT " PORTS+="-p $RTSP_PORT:$RTSP_PORT " fi +if [ "$ENABLE_WEBRTC" != "false" ]; then + ENVIRONMENT+="-e ENABLE_WEBRTC=$ENABLE_WEBRTC " +fi + if [ ! -z "$MODELS" ]; then VOLUME_MOUNT+="-v $MODELS:/home/pipeline-server/models " fi diff --git a/docs/build_script_reference.md b/docs/build_script_reference.md index 467d92c..1910098 100644 --- a/docs/build_script_reference.md +++ b/docs/build_script_reference.md @@ -11,10 +11,10 @@ usage: build.sh [--base base image] [--framework ffmpeg || gstreamer] [--models path to models directory or model list file or NONE] - [--open-model-zoo-image specify the OpenVINO image to be used for downloading models from Open Model Zoo] - [--open-model-zoo-version specify the version of OpenVINO image to be used for downloading models from Open Model Zoo] + [--open-model-zoo-image specify the base image to be used for downloading models from Open Model Zoo] + [--open-model-zoo-version specify the version of base image to be used for downloading models from Open Model Zoo] [--force-model-download force the download of models from Open Model Zoo] - [--pipelines path to pipelines directory relative to /home/thanaji/git/vaServing or NONE] + [--pipelines path to pipelines directory relative to or NONE] [--base-build-context docker context for building base image] [--base-build-dockerfile docker file path used to build base image from build context] [--build-option additional docker build option that run in the context of docker build. ex. --no-cache] @@ -39,18 +39,18 @@ Intel(R) Deep Learning Streamer (Intel(R) DL Streamer) Pipeline Server can use e This option can be used to specify path to models directory or a model list file. When its a directory, models used by pipelines are expected to be in this directory. 
When its a file, the models listed in the file are downloaded and converted to IR format if needed by the [model download tool](../tools/model_downloader/README.md) during build time. If nothing is specified, default models listed in the file `models_list/models.list.yml` are downloaded, converted to IR format if needed and included in the image. If set to `NONE` no models are included and the user must ensure models are made available at runtime by volume mounting. ## Open Model Zoo Image (--open-model-zoo-image) -This option can be used to specify the OpenVINO base image to be used for downloading models from Open Model Zoo. +This option can be used to specify the base image to be used for downloading models from Open Model Zoo. -For GStreamer, the Pipeline Server build script will automatically choose the Open Model Zoo image as per the table in [section](building_video_analytics_serving.md#supported-base-images). +For GStreamer, the Pipeline Server build script will automatically choose the Open Model Zoo image as per the table in [section](building_pipeline_server.md#supported-base-images). -For FFmpeg, you **must** specify the Open Model Zoo image to build with (i.e., when using `--framework ffmpeg` provide the image corresponding to the table in [section](building_video_analytics_serving.md#supported-base-images)). +For FFmpeg, you **must** specify the Open Model Zoo image to build with (i.e., when using `--framework ffmpeg` provide the image corresponding to the table in [section](building_pipeline_server.md#supported-base-images)). ## Open Model Zoo Version (--open-model-zoo-version) -This option can be used to specify the version of OpenVINO base image to be used for downloading models from Open Model Zoo. +This option can be used to specify the version of the base image to be used for downloading models from Open Model Zoo. -For GStreamer, the Pipeline Server build script will automatically choose the Open Model Zoo version as per the table in [section](building_video_analytics_serving.md#supported-base-images). +For GStreamer, the Pipeline Server build script will automatically choose the Open Model Zoo version as per the table in [section](building_pipeline_server.md#supported-base-images). -For FFmpeg, you **must** specify the Open Model Zoo version to build with (i.e., when using `--framework ffmpeg` provide the version corresponding to the table in [section](building_video_analytics_serving.md#supported-base-images)). +For FFmpeg, you **must** specify the Open Model Zoo version to build with (i.e., when using `--framework ffmpeg` provide the version corresponding to the table in [section](building_pipeline_server.md#supported-base-images)). ## Force Model Download (--force-model-download) This option instructs the [model download tool](../tools/model_downloader/README.md) to force download of models from Open Model Zoo using the `models.list.yml` even if they already exist in the models directory. This may be useful to guarantee that the models for your build have been generated using the appropriate version of Open Model Zoo before they are embedded into your freshly built image. @@ -58,7 +58,7 @@ This option instructs the [model download tool](../tools/model_downloader/README If you have previously built for a different framework, you **must** download these again (or archive your models directory to different name), because the version of Open Model Zoo used by `--framework gstreamer` produces different output than when building with `--framework ffmpeg`. 
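Taken together, a GStreamer build that pins the Open Model Zoo image and version and forces a fresh model download might look like the sketch below. The image and version shown are simply the GStreamer defaults noted above; substitute values appropriate to your environment.

```bash
./docker/build.sh --framework gstreamer \
    --open-model-zoo-image intel/dlstreamer \
    --open-model-zoo-version 2022.1.0-ubuntu20-devel \
    --force-model-download
```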
## Pipeline Directory (--pipelines) -Path to the Pipeline Server pipelines. Path must be within docker build context which defaults to the root of the dlstreamer-pipeline-server project. If not specified, [sample pipelines](../pipelines/gstreamer) are included in the image. If set to `NONE` no pipelines are included and the user must ensure pipelines are made available at runtime by [volume mounting](running_video_analytics_serving.md#selecting-pipelines-and-models-at-runtime). +Path to the Pipeline Server pipelines. Path must be within docker build context which defaults to the root of the dlstreamer-pipeline-server project. If not specified, [sample pipelines](../pipelines/gstreamer) are included in the image. If set to `NONE` no pipelines are included and the user must ensure pipelines are made available at runtime by [volume mounting](running_pipeline_server.md#selecting-pipelines-and-models-at-runtime). ## Base Build Context (--base-build-context) This option is used in conjunction with `--base-build-dockerfile` to specify the docker build file and its context. It must be a git repo URL, path to tarball or path to locally cloned folder. diff --git a/docs/building_video_analytics_serving.md b/docs/building_pipeline_server.md similarity index 86% rename from docs/building_video_analytics_serving.md rename to docs/building_pipeline_server.md index a38549a..f52f2b8 100644 --- a/docs/building_video_analytics_serving.md +++ b/docs/building_pipeline_server.md @@ -23,11 +23,10 @@ can be customized to meet an application's requirements. | **Application / Microservice**                                                           |Application or microservice using Intel(R) DL Streamer Pipeline Server python modules to execute media analytics pipelines. By default a Tornado based RESTful microservice is included. | # Default Build Commands and Image Names - | Command | Media Analytics Base Image | Image Name | Description | | --- | --- | --- | ---- | -| `./docker/build.sh`| **ubuntu20_data_runtime:2021.4.2** docker [image](https://hub.docker.com/r/openvino/ubuntu20_data_runtime) |`dlstreamer-pipeline-server-gstreamer` | Intel(R) DL Streamer based microservice with default pipeline definitions and deep learning models. | -| `./docker/build.sh --framework ffmpeg --open-model-zoo...`| **xeone3-ubuntu1804-analytics-ffmpeg:20.10** docker [image](https://hub.docker.com/r/openvisualcloud/xeon-ubuntu1804-analytics-ffmpeg) |`dlstreamer-pipeline-server-ffmpeg`| FFmpeg Video Analytics based microservice with default pipeline definitions and deep learning models. | +| `./docker/build.sh`| **intel/dlstreamer:2022.1.0-ubuntu20** docker [image](https://hub.docker.com/r/intel/dlstreamer) |`dlstreamer-pipeline-server-gstreamer` | Intel(R) DL Streamer based microservice with default pipeline definitions and deep learning models. | +| `./docker/build.sh --framework ffmpeg --open-model-zoo...`| **openvisualcloud/xeone3-ubuntu1804-analytics-ffmpeg:20.10** docker [image](https://hub.docker.com/r/openvisualcloud/xeon-ubuntu1804-analytics-ffmpeg) |`dlstreamer-pipeline-server-ffmpeg`| FFmpeg Video Analytics based microservice with default pipeline definitions and deep learning models. | ### Building with OpenVINO, Ubuntu 20.04 and Intel(R) DL Streamer Support **Example:** ``` @@ -70,7 +69,8 @@ All validation is done in docker environment. 
Host built (aka "bare metal") conf | **Base Image** | **Framework** | **OpenVINO Version** | **Link** | **Default** | |---------------------|---------------|---------------|------------------------|-------------| -| OpenVINO 2021.4.2 ubuntu20_data_runtime | GStreamer | 2021.4.2 | [Docker Hub](https://hub.docker.com/r/openvino/ubuntu20_data_runtime) | Y | +| OpenVINO 2021.4.2 ubuntu20_data_runtime | GStreamer | 2021.4.2 | [Docker Hub](https://hub.docker.com/r/openvino/ubuntu20_data_runtime) | N | +| Intel DL Streamer 2022.1.0-ubuntu20 | GStreamer | 2022.1.0 | [Docker Hub](https://hub.docker.com/r/intel/dlstreamer) | Y | | Open Visual Cloud 20.10 xeone3-ubuntu1804-analytics-ffmpeg | FFmpeg | 2021.1 | [Docker Hub](https://hub.docker.com/r/openvisualcloud/xeone3-ubuntu1804-analytics-ffmpeg) | Y | --- diff --git a/docs/changing_object_detection_models.md b/docs/changing_object_detection_models.md index 689a85c..f100520 100644 --- a/docs/changing_object_detection_models.md +++ b/docs/changing_object_detection_models.md @@ -48,9 +48,9 @@ Build and run the sample microservice with the following commands: ``` ### List Models -Use [vaclient](/vaclient/README.md) to list the models. Check that `object_detection/person_vehicle_bike` is present and that that `yolo-v2-tiny-tf` is not. Also count the number of models. In this example there are 7. +Use [pipeline_client](/client/README.md) to list the models. Check that `object_detection/person_vehicle_bike` is present and that that `yolo-v2-tiny-tf` is not. Also count the number of models. In this example there are 7. ``` -./vaclient/vaclient.sh list-models +./client/pipeline_client.sh list-models ``` ``` - audio_detection/environment @@ -63,9 +63,9 @@ Use [vaclient](/vaclient/README.md) to list the models. Check that `object_detec ``` ### Detect Objects on Sample Video -In a second terminal window use [vaclient](/vaclient/README.md) to run the pipeline. Expected output is abbreviated. +In a second terminal window use [pipeline_client](/client/README.md) to run the pipeline. Expected output is abbreviated. ```bash -./vaclient/vaclient.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/raw/master/bottle-detection.mp4?raw=true +./client/pipeline_client.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/raw/master/bottle-detection.mp4?raw=true ``` ``` @@ -95,8 +95,7 @@ In the original terminal window, stop the service using `CTRL-C`. On start-up the Pipeline Server discovers models that have been downloaded and makes them available for reference within pipelines. -Models can be downloaded either as part of the normal Video Analytics -Serving build process or using a stand-alone tool. +Models can be downloaded either as part of the normal Pipeline Server build process or using a stand-alone tool. ### Add `yolo-v2-tiny-tf` to the Model Downloader List file @@ -130,11 +129,11 @@ the end of the file. Expected output (abbreviated): ``` -[ SUCCESS ] Generated IR version 10 model. -[ SUCCESS ] XML file: /tmp/tmps4pxnu7y/public/yolo-v2-tiny-tf/FP32/yolo-v2-tiny-tf.xml -[ SUCCESS ] BIN file: /tmp/tmps4pxnu7y/public/yolo-v2-tiny-tf/FP32/yolo-v2-tiny-tf.bin -[ SUCCESS ] Total execution time: 9.70 seconds. -[ SUCCESS ] Memory consumed: 584 MB. +[ SUCCESS ] Generated IR version 11 model. 
+[ SUCCESS ] XML file: /tmp/tmp4u2kd1v9/public/yolo-v2-tiny-tf/FP32/yolo-v2-tiny-tf.xml +[ SUCCESS ] BIN file: /tmp/tmp4u2kd1v9/public/yolo-v2-tiny-tf/FP32/yolo-v2-tiny-tf.bin +[ SUCCESS ] Total execution time: 3.79 seconds. +[ SUCCESS ] Memory consumed: 532 MB. Copied model_proc to: /output/models/object_detection/yolo-v2-tiny-tf/yolo-v2-tiny-tf.json ``` @@ -148,6 +147,7 @@ tree models models └── object_detection └── yolo-v2-tiny-tf + ├── coco-80cl.txt ├── FP16 │   ├── yolo-v2-tiny-tf.bin │   ├── yolo-v2-tiny-tf.mapping @@ -214,7 +214,7 @@ Once started you can verify that the new model and pipeline have been loaded. The `list-models` command now shows 8 models, including `object_detection/yolo-v2-tiny-tf` ```bash -./vaclient/vaclient.sh list-models +./client/pipeline_client.sh list-models ``` ``` - emotion_recognition/1 @@ -228,7 +228,7 @@ The `list-models` command now shows 8 models, including `object_detection/yolo-v ``` The `list-pipelines` command shows `object_detection/yolo-v2-tiny-tf` ```bash -./vaclient/vaclient.sh list-pipelines +./client/pipeline_client.sh list-pipelines ``` ``` - object_detection/app_src_dst @@ -246,10 +246,10 @@ The `list-pipelines` command shows `object_detection/yolo-v2-tiny-tf` ## Step 5. Detect Objects on Sample Video with New Pipeline -Now use vaclient to run the `object_detection/yolo-v2-tiny-tf` pipeline with the new model. +Now use pipeline_client to run the `object_detection/yolo-v2-tiny-tf` pipeline with the new model. You can see the `yolo-v2-tiny-tf` model in action as objects are now correctly detected as bottles. ```bash -./vaclient/vaclient.sh run object_detection/yolo-v2-tiny-tf https://github.com/intel-iot-devkit/sample-videos/raw/master/bottle-detection.mp4?raw=true +./client/pipeline_client.sh run object_detection/yolo-v2-tiny-tf https://github.com/intel-iot-devkit/sample-videos/raw/master/bottle-detection.mp4?raw=true ``` ``` Pipeline running: object_detection/yolo-v2-tiny-tf, instance = @@ -279,7 +279,7 @@ rm -r models ``` Once started you can verify that the new model has been loaded. ```bash -./vaclient/vaclient.sh list-models +./client/pipeline_client.sh list-models ``` ``` - emotion_recognition/1 @@ -297,7 +297,7 @@ Once started you can verify that the new model has been loaded. For more information on the build, run, pipeline definition and model download please see: * [Getting Started](/README.md#getting-started) -* [Building Intel(R) DL Streamer Pipeline Server](/docs/building_video_analytics_serving.md) -* [Running Intel(R) DL Streamer Pipeline Server](/docs/running_video_analytics_serving.md) +* [Building Intel(R) DL Streamer Pipeline Server](/docs/building_pipeline_server.md) +* [Running Intel(R) DL Streamer Pipeline Server](/docs/running_pipeline_server.md) * [Defining Pipelines](/docs/defining_pipelines.md) * [Model Downloader Tool](/tools/model_downloader/README.md) diff --git a/docs/creating_extensions.md b/docs/creating_extensions.md index dc90ff0..e14b5da 100644 --- a/docs/creating_extensions.md +++ b/docs/creating_extensions.md @@ -9,7 +9,7 @@ Extensions are a simple way to add functionality to a Intel(R) Deep Learning Str An extension is a GVA Python script that is called during pipeline execution. The script is given a frame and any Pipeline Server parameters defined by the pipeline request. The extension must be added after the last element that generates data it requires. Below are some examples that cover how extensions can be used to process inference results. 
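To make the placement rule concrete, the sketch below shows roughly where an extension sits in a GStreamer template: DL Streamer's `gvapython` element is placed after the inference element whose results the extension consumes. The module path and class name are hypothetical, and the model reference mirrors the `person_vehicle_bike` example used elsewhere in this document.

```
uridecodebin name=source ! gvadetect model={models[person_vehicle_bike_detection][1][network]} name=detection ! gvapython name=extension module=/home/pipeline-server/extensions/my_extension.py class=ObjectCounter ! appsink name=appsink
```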
-> Note: Make sure to either build the Pipeline Server container with `docker/build.sh` after editing extension code and/or pipeline per [Build script reference](build_script_reference.md) or use --dev mode during run as outlined in [Running Intel(R) DL Streamer Pipeline Server](running_video_analytics_serving.md#developer-mode) +> Note: Make sure to either build the Pipeline Server container with `docker/build.sh` after editing extension code and/or pipeline per [Build script reference](build_script_reference.md) or use --dev mode during run as outlined in [Running Intel(R) DL Streamer Pipeline Server](running_pipeline_server.md#developer-mode) ## Example: Processing Inference Results @@ -51,9 +51,9 @@ The extension must be added after the last element that generates data it requir ``` ### Output -The pipeline can be run with VA Client as follows: +The pipeline can be run with Pipeline Client as follows: ```bash -vaclient/vaclient.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true +./client/pipeline_client.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true ``` As process_frame runs once per frame, the Pipeline Server output would resemble @@ -147,9 +147,9 @@ class ObjectCounter: ### Output -- Running VA Client as shown (parameter-file is optional for extension parameters if defaults are set in pipeline JSON) +- Running Pipeline Client as shown (parameter-file is optional for extension parameters if defaults are set in pipeline JSON) ```bash - vaclient/vaclient.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true + ./client/pipeline_client.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true ``` Output reflects the default count_threshold i.e 0 ```bash @@ -159,7 +159,7 @@ class ObjectCounter: Object count 1 exceeded threshold 0 ``` -- Running VA Client with the following parameter file +- Running Pipeline Client with the following parameter file ```json { "parameters": { @@ -170,7 +170,7 @@ class ObjectCounter: } ``` ```bash - vaclient/vaclient.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --parameter-file /tmp/sample_parameters.json + ./client/pipeline_client.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --parameter-file /tmp/sample_parameters.json ``` The Pipeline Server output shows count_threshold is set to 1 per parameter file ```bash @@ -337,9 +337,9 @@ Another optional field (unused here) is `related_objects` as shown in [line cros ### Output -VA Client can be launched as follows, as no parameter-file is given, the default count_threshold is picked up i.e 1. +Pipeline Client can be launched as follows, as no parameter-file is given, the default count_threshold is picked up i.e 1. 
```bash - vaclient/vaclient.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true +./client/pipeline_client.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true ``` Output snippet is shown, events are fired for object count > 1: ```bash diff --git a/docs/customizing_pipeline_requests.md b/docs/customizing_pipeline_requests.md index 536eb03..de39188 100644 --- a/docs/customizing_pipeline_requests.md +++ b/docs/customizing_pipeline_requests.md @@ -1,11 +1,11 @@ -# Customizing Video Analytics Pipeline Requests -| [Request Format](#request-format) | [Source](#source) | [Destination](#destination) | [Parameters](#parameters) | [Tags](#tags) | +# Customizing Pipeline Requests +| [Request Format](#request-format) | [Source](#source) | [Destination](#metadata-destination) | [Parameters](#parameters) | [Tags](#tags) | -Pipeline requests are initiated to exercise the Intel(R) Deep Learning Streamer (Intel(R) DL Streamer) Pipeline Server REST API. Each pipeline in the Pipeline Server has a specific endpoint. A pipeline can be started by issuing a `POST` request and a running pipeline can be stopped using a `DELETE` request. The `source` and `destination` elements of Pipeline Server [pipeline templates](defining_pipelines.md#pipeline-templates) are configured and constructed based on the `source` and `destination` from the incoming requests. +Pipeline requests are initiated to exercise the Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Server REST API. Each pipeline in the Pipeline Server has a specific endpoint. A pipeline can be started by issuing a `POST` request and a running pipeline can be stopped using a `DELETE` request. The `source` and `destination` elements of Pipeline Server [pipeline templates](defining_pipelines.md#pipeline-templates) are configured and constructed based on the `source` and `destination` from the incoming requests. ## Request Format -> Note: This document shows curl requests. Requests can also be sent via vaclient using the --request-file option see [VA Client Command Options](../vaclient/README.md#command-options) +> Note: This document shows curl requests. Requests can also be sent via pipeline_client using the --request-file option see [Pipeline Client Command Options](../client/README.md#command-options) Pipeline requests sent to Pipeline Server REST API are JSON documents that have the following attributes: @@ -182,16 +182,13 @@ For example, if you'd like to set `ntp-sync` property of the `rtspsrc` element t } ``` -## Destination -Pipelines can be configured to output `frames`, `metadata` or both. The destination object within the request contains sections to configure each. +## Metadata Destination +Pipelines can optionally be configured to output metadata to a specific destination. -- Metadata (inference results) -- Frame - -### Metadata For metadata, the destination type can be set to file, mqtt, or kafka as needed. -#### File + +### File The following are available properties: - type : "file" - path (required): Path to the file. @@ -219,7 +216,7 @@ curl localhost:8080/pipelines/object_detection/person_vehicle_bike -X POST -H \ }' ``` -#### MQTT +### MQTT The following are available properties: - type : "mqtt" - host (required) expects a format of host:port @@ -275,8 +272,8 @@ Steps to run MQTT: 1632949274: Client gva-meta-publish disconnected. 
``` -#### Kafka -The following are available properties: +### Kafka +The following properties are available for the Apache Kafka destination: - type : "kafka" - host (required) expects a format of host:port - topic (required) Kafka topic on which broker messages are sent @@ -326,14 +323,25 @@ Steps to run Kafka: ``` 4. Launch pipeline with parameters to emit on the Kafka topic we are listening for: - ``` - ./vaclient/vaclient.sh start object_detection/person_vehicle_bike \ - https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true \ - --destination type kafka \ - --destination host localhost \ - --destination port 9092 \ - --destination topic pipeline-server.person_vehicle_bike - ``` +``` + curl localhost:8080/pipelines/object_detection/person_vehicle_bike -X POST -H \ + 'Content-Type: application/json' -d \ + '{ + "source": { + "uri": "https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true", + "type": "uri" + }, + "destination": { + "metadata": { + "type": "kafka", + "format": "json-lines", + "host": "localhost", + "port": 9092, + "topic": "pipeline-server.person_vehicle_bike" + } + } + }' + ``` 5. Connect to Kafka broker to view inference results: ```bash @@ -353,19 +361,97 @@ Steps to run Kafka: docker-compose -f docker-compose-kafka.yml down ``` -### Frame -Frame is another aspect of destination and it can be set to RTSP. +## Frame Destination +`Frame` is another type of destination that sends frames with superimposed bounding boxes over either RTSP or WebRTC protocols. -#### RTSP -RTSP is a type of frame destination supported. The following are available properties: -- type : "rtsp" +### RTSP +RTSP functionality must be enabled in Pipeline Server for this feature to be used, see [RTSP re-streaming](running_pipeline_server.md#real-time-streaming-protocol-rtsp-re-streaming). + +Steps for visualizing output over RTSP assuming Pipeline Server and your VLC client are running on the same host. + +1. Start a pipeline with frame destination type set as `rtsp` and an endpoint path `person-detection`. +```bash +curl localhost:8080/pipelines/object_detection/person_vehicle_bike -X POST -H \ +'Content-Type: application/json' -d \ +'{ + "source": { + "uri": "https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true", + "type": "uri" + }, + "destination": { + "metadata": { + "type": "file", + "format": "json-lines", + "path": "/tmp/results.jsonl" + }, + "frame": { + "type": "rtsp", + "path": "person-detection" + } + } +}' +``` +2. Use an RTSP client such as VLC to visualize the stream at address `rtsp://localhost:8554/person-detection`. + +> **Note:** Use Pipeline Server Client's [--rtsp-path](../client/README.md#--rtsp-path) argument to specific RTSP output endpoint. + +The following parameters can be optionally used to customize the stream: - path (required): custom string to uniquely identify the stream - cache-length (default 30): number of frames to buffer in rtsp pipeline. - encoding-quality (default 85): jpeg encoding quality (0 - 100). Lower values increase compression but sacrifice quality. - sync-with-source (default True): rate limit processing pipeline to encoded frame rate (e.g. 30 fps) - sync-with-destination (default True): block processing pipeline if rtsp pipeline is blocked. 
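For reference, a frame destination that overrides these defaults could be expressed as follows; the values are purely illustrative:

```json
"frame": {
    "type": "rtsp",
    "path": "person-detection",
    "cache-length": 60,
    "encoding-quality": 70,
    "sync-with-source": true,
    "sync-with-destination": true
}
```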
-For more information, see [RTSP re-streaming](running_video_analytics_serving.md#real-time-streaming-protocol-rtsp-re-streaming) +> **Note:** If the RTSP stream playback is choppy this may be due to +> network bandwidth. Decreasing the encoding-quality or increasing the +> cache-length can help. + +### WebRTC +WebRTC must be enabled for these request parameters to be accepted. For more information, see [WebRTC streaming](running_pipeline_server.md#web-real-time-communication-webrtc). + +#### Request WebRTC Frame Output +1. Start a pipeline to request frame destination type set as WebRTC and unique `peer-id` value set. For demonstration, peer-id is set as `person_detection_001` in example request below. + +```bash +curl localhost:8080/pipelines/object_detection/person_vehicle_bike -X POST -H \ +'Content-Type: application/json' -d \ +'{ + "source": { + "uri": "https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true", + "type": "uri" + }, + "destination": { + "metadata": { + "type": "file", + "format": "json-lines", + "path": "/tmp/results.jsonl" + }, + "frame": { + "type": "webrtc", + "peer-id": "person_detection_001" + } + } +}' +``` +2. Check that pipeline is running using [status request](restful_microservice_interfaces.md#get-pipelinesnameversioninstance_id) before trying to connect the WebRTC peer. +3. View the pipeline stream in your browser with url `http://localhost:8082/?destination_peer_id=person_detection_001&instance_id=e98dae1caf7511ecaaf90242ac170004` where the value of `destination_peer_id` query parameter matches the `--webrtc-peer-id` provided in the request from step 1 and the value of `instance_id` is produced in the response from step 1. + +> **Note:** Use Pipeline Server Client's [--webrtc-peer-id](../client/README.md#--webrtc-peer-id) argument to specify peer id. + +#### WebRTC Destination Parameters +Use the following parameters to customize the request: +- type : "webrtc" +- peer-id (required): custom string to uniquely identify the stream. May contain alphanumeric or underscore `_` characters only. +- cache-length (default 30): number of frames to buffer in WebRTC pipeline. +- cq-level (default 10): vp8 constrained encoding quality level (0 - 63). Lower values increase compression but sacrifice quality. Explanation and related details found on this [encoder parameter guide](https://www.webmproject.org/docs/encoder-parameters/). +- sync-with-source (default True): rate limit processing pipeline to encoded frame rate (e.g. 30 fps) +- sync-with-destination (default True): block processing pipeline if WebRTC pipeline is blocked. + +> **Note:** If WebRTC stream playback is choppy this may be due to +> network bandwidth. Decreasing the encoding-quality or increasing the +> cache-length can help. + +> **Note:** Providing an invalid value to Pipeline Client for `--webrtc-peer-id` will output a 400 "Invalid Destination" error. ## Parameters Pipeline parameters as specified in the pipeline definition file, can be set in the REST request. diff --git a/docs/defining_pipelines.md b/docs/defining_pipelines.md index da20fba..f509dc1 100644 --- a/docs/defining_pipelines.md +++ b/docs/defining_pipelines.md @@ -60,8 +60,7 @@ the template property is specific to the underlying framework (`GStreamer` or `FFmpeg`). Templates use the `source`, `destination` and `parameters` sections of an incoming pipeline `request` to customize the source, destination and behavior of a -pipeline implemented in an underlying framework. 
The Video Analytics -Serving `pipeline_manager` and framework specific modules +pipeline implemented in an underlying framework. The Pipeline Server `pipeline_manager` and framework specific modules `gstreamer_pipeline` and `ffmpeg_pipeline` provide default handling for typical `source` and `destination` options. @@ -74,8 +73,7 @@ for typical `source` and `destination` options. GStreamer templates use the [GStreamer Pipeline Description](https://gstreamer.freedesktop.org/documentation/tools/gst-launch.html?gi-language=c#pipeline-description) -syntax to concatenate elements into a pipeline. The Video Analytics -Serving `pipeline_manager` and `gstreamer_pipeline` modules +syntax to concatenate elements into a pipeline. The Pipeline Server `pipeline_manager` and `gstreamer_pipeline` modules parse the template, configure the `source`, `destination`, and `appsink` elements and then construct the pipeline based on incoming requests. @@ -250,8 +248,7 @@ gvadetect model={models[person_vehicle_bike_detection][1][network]} model-proc={ ``` The `model` and `model-proc` properties reference file paths to the -deep learning model as discovered and populated by the Video Analytics -Serving `model_manager` module. The `model_manager` module provides a +deep learning model as discovered and populated by the Pipeline Server `model_manager` module. The `model_manager` module provides a python dictionary associating model names and versions to their absolute paths enabling pipeline templates to reference them by name. You can use the `model-proc` property to point to custom model-proc by specifying absolute path. More details are provided in the [Deep Learning Models](#deep-learning-models) section. @@ -337,7 +334,7 @@ Pipeline parameters enable developers to customize pipelines based on incoming requests. Parameters are an optional section within a pipeline definition and are used to specify which pipeline properties are configurable and what values are valid. Developers can also -specify default values for each parameter. +specify default values for each parameter or set to read from environment variable. ### Defining Parameters as JSON Schema @@ -402,7 +399,7 @@ on how to associate a parameter with one or more GStreamer element properties. The JSON schema for a GStreamer pipeline parameter can include an -`element` section in one of three forms. +`element` section in one of the below forms. 1. **Simple String**.

The string indicates the `name` of an element in @@ -523,6 +520,65 @@ The JSON schema for a GStreamer pipeline parameter can include an } ``` +#### Parameters and default value + +Parameters default value in pipeline definitions can be set in section in one of two forms(setting value or by environment variable) below. + +1. **Set default value directly** + + A default value can be set for the element property using `default` key. + + **Example:** + + The following snippet defines the parameter `detection-device` + which sets the `device` property of `detection` with default value `GPU` + + ```json + "parameters": { + "type": "object", + "properties": { + "detection-device": { + "element": { + "name":"detection", + "property":"device" + }, + "type": "string", + "default": "GPU" + } + } + } + ``` + +1. **Read default value from environment variable** + + A default value can be set using environment variable for the element property using `default` key. + + **Example:** + + The following snippet defines the parameter `detection-device` + which sets the `device` property of the `detection` with default value from environment variable `DETECTION_DEVICE`. If the environment variable is not set, pipeline server won't set a default and the element's built-in default will be used by the underlying framework. + + ```json + "parameters": { + "type": "object", + "properties": { + "detection-device": { + "element": { + "name":"detection", + "property":"device" + }, + "type": "string", + "default": "{env[DETECTION_DEVICE]}" + } + } + } + ``` + + Set `DETECTION_DEVICE` environment variable at Pipeline Server start. + ```bash + ./docker/run.sh -e DETECTION_DEVICE=GPU + ``` + #### Parameters and FFmpeg Filters Parameters in FFmpeg pipeline definitions can include information on @@ -719,8 +775,7 @@ Parameter Resolution: ``` ### Reserved Parameters -The following parameters have built-in handling within the Video -Analytics Serving modules and should only be included in pipeline +The following parameters have built-in handling within the Pipeline Server modules and should only be included in pipeline definitions wishing to trigger that handling. #### bus-messages @@ -789,6 +844,12 @@ The Pipeline Server automatically looks for this file in the path `models/model-alias/model-version/*.json`. Note that the model manager will fail to load if there are multiple ".json" model-proc files in this directory. +Some models might have a separate `.txt` file for `labels`, in addition to or instead of `model-proc`. +If such a file exists, the Pipeline Server automatically looks for this file in the path +`models/model-alias/model-version/*.txt`. + +For more details on model proc and labels see [Model Proc File](https://dlstreamer.github.io/dev_guide/model_proc_file.html) + ### Intel(R) DL Streamer For more information on Intel(R) DL Streamer `model-proc` files and samples for common models please see the Intel(R) DL Streamer @@ -808,31 +869,33 @@ determines their name, version and precision. On startup, the Pipeline Server `model_manager` searches the configured model directory and creates a dictionary storing the location of each model and their associated collateral -(i.e. `.bin`, `.xml`, `.json`) +(i.e. `.bin`, `.xml`, `.json`, `.txt`) The hierarchical directory structure is made up of four levels: `///` +> Note: Not all models have a file for labels. In such cases, the labels could be listed in the `model-proc`file. 
-Here's a sample directory listing for the `emotion_recognition` model: +Here's a sample directory listing for the `yolo-v3-tf` model: ``` models/ -├── emotion_recognition // name -│ └── 1 // version -│ ├── emotions-recognition-retail-0003.json // proc file -│ ├── FP16 // precision -│ │ ├── emotions-recognition-retail-0003-fp16.bin // bin file -│ │ └── emotions-recognition-retail-0003-fp16.xml // network file -│ ├── FP32 -│ │ ├── emotions-recognition-retail-0003.bin -│ │ └── emotions-recognition-retail-0003.xml -│ └── INT8 -│ ├── emotions-recognition-retail-0003-int8.bin -│ └── emotions-recognition-retail-0003-int8.xml +└── object_detection // name + ├── 1 // version + │ ├── yolo-v3-tf.json // proc file + │ ├── coco-80cl.txt // labels file + │ ├── FP16 // precision + │ │ ├── yolo-v3-tf.bin // bin file + │ │ ├── yolo-v3-tf.mapping + │ │ └── yolo-v3-tf.xml // network file + │ ├── FP32 + │ │ ├── yolo-v3-tf.bin + │ │ ├── yolo-v3-tf.mapping + │ │ └── yolo-v3-tf.xml ``` + ## Referencing Models in Pipeline Definitions Pipeline definitions reference models in their templates in a similar @@ -846,16 +909,18 @@ dictionary and standard Python dictionary indexing with the following hierarchy: `models[model-name][version][precision][file-type]`. The default precision for a given model and inference device -(`CPU`:`FP32`,`GPU`:`FP16`,`HDDL`:`FP16`) can also be referenced +(`CPU`:`FP32`,`HDDL`:`FP16`,`GPU`:`FP16`,`VPU`:`FP16`,`MYRIAD`:`FP16`, +`MULTI`:`FP16`,`HETERO`:`FP16`,`AUTO`:`FP16`) can also be referenced without specifying the precision: `models[model-name][version][file-type]`. **Examples:** -* `models[emotion_recognition][1][proc]` expands to `emotions-recognition-retail-0003.json` -* If running on CPU `models[emotion_recognition][1][network]` expands to `emotions-recognition-retail-0003.xml` -* Running on GPU `models[emotion_recognition][1][network]` expands to `emotions-recognition-retail-0003-fp16.xml` -* `models[emotion_recognition][1][INT8][network]` expands to `emotions-recognition-retail-0003-int8.xml` +* `models[object_detection][1][proc]` expands to `models/object_detection/1/yolo-v3-tf.json` +* `models[object_detection][1][labels]` expands to `models/object_detection/1/coco-80cl.txt` +* If running on CPU `models[object_detection][1][network]` expands to `models/object_detection/1/FP32/yolo-v3-tf.xml` +* Running on GPU `models[object_detection][1][network]` expands to `models/object_detection/1/FP16/yolo-v3-tf.xml` +* `models[object_detection][1][FP16][network]` expands to `models/object_detection/1/FP16/yolo-v3-tf.xml` --- \* Other names and brands may be claimed as the property of others. 
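Tying the reference examples above back to a template, a detection element could consume the expanded paths roughly as shown below. Treat this as a sketch: the `labels` property in particular is an assumption, so confirm the property name supported by the element in your DL Streamer version.

```
gvadetect name=detection model={models[object_detection][1][network]} model-proc={models[object_detection][1][proc]} labels={models[object_detection][1][labels]}
```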
diff --git a/docs/images/grafana_dashboard_active.jpg b/docs/images/grafana_dashboard_active.jpg new file mode 100644 index 0000000..2194894 Binary files /dev/null and b/docs/images/grafana_dashboard_active.jpg differ diff --git a/docs/images/grafana_dashboard_initial.jpg b/docs/images/grafana_dashboard_initial.jpg new file mode 100644 index 0000000..9a619b1 Binary files /dev/null and b/docs/images/grafana_dashboard_initial.jpg differ diff --git a/docs/images/video_analytics_service_architecture.png b/docs/images/pipeline_server_architecture.png similarity index 100% rename from docs/images/video_analytics_service_architecture.png rename to docs/images/pipeline_server_architecture.png diff --git a/docs/images/webrtc-launch-pipeline.png b/docs/images/webrtc-launch-pipeline.png new file mode 100644 index 0000000..66a7425 Binary files /dev/null and b/docs/images/webrtc-launch-pipeline.png differ diff --git a/docs/images/webrtc-pipeline-params3.png b/docs/images/webrtc-pipeline-params3.png new file mode 100644 index 0000000..ee279d7 Binary files /dev/null and b/docs/images/webrtc-pipeline-params3.png differ diff --git a/docs/images/webrtc-visualize-autolaunch1.png b/docs/images/webrtc-visualize-autolaunch1.png new file mode 100644 index 0000000..63a0d24 Binary files /dev/null and b/docs/images/webrtc-visualize-autolaunch1.png differ diff --git a/docs/images/webrtc-visualize-remote-chrome1.png b/docs/images/webrtc-visualize-remote-chrome1.png new file mode 100644 index 0000000..0ecffc3 Binary files /dev/null and b/docs/images/webrtc-visualize-remote-chrome1.png differ diff --git a/docs/images/webrtc-visualize3.png b/docs/images/webrtc-visualize3.png new file mode 100644 index 0000000..83c2f8f Binary files /dev/null and b/docs/images/webrtc-visualize3.png differ diff --git a/docs/images/webrtc_composition.png b/docs/images/webrtc_composition.png new file mode 100644 index 0000000..9bb4659 Binary files /dev/null and b/docs/images/webrtc_composition.png differ diff --git a/docs/restful_microservice_interfaces.md b/docs/restful_microservice_interfaces.md index fc650bf..50e6a15 100644 --- a/docs/restful_microservice_interfaces.md +++ b/docs/restful_microservice_interfaces.md @@ -201,7 +201,7 @@ Return pipeline description. Start new pipeline instance. Four sections are supported by default: source, destination, parameters, and tags. -These sections have special handling based on the [default schema](/vaserving/schema.py) and/or the schema +These sections have special handling based on the [default schema](/server/schema.py) and/or the schema defined in the pipeline.json file for the requested pipeline. diff --git a/docs/run_script_reference.md b/docs/run_script_reference.md index 7744252..ab7da0b 100644 --- a/docs/run_script_reference.md +++ b/docs/run_script_reference.md @@ -64,6 +64,9 @@ This argument enables rtsp restreaming by setting `ENABLE_RTSP` environment vari ### RTSP Port (--rtsp-port) This argument specifies the port to use for rtsp re-streaming. +### Enable WebRTC re-streaming (--enable-webrtc) +This argument enables webrtc restreaming by setting `ENABLE_WEBRTC` environment. Additional dependencies must be running as described [here](./samples/webrtc/README.md). 
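As a quick illustration, the flag documented above can be combined with the RTSP options on a single launch (this assumes the GStreamer image is already built and the WebRTC prerequisite containers described in the sample are running):

```bash
./docker/run.sh --enable-rtsp --rtsp-port 8554 --enable-webrtc
```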
+ ### Developer Mode (--dev) This argument runs the image in `developer` mode which configures the environment as follows: diff --git a/docs/running_video_analytics_serving.md b/docs/running_pipeline_server.md similarity index 72% rename from docs/running_video_analytics_serving.md rename to docs/running_pipeline_server.md index 6113e32..8438dee 100644 --- a/docs/running_video_analytics_serving.md +++ b/docs/running_pipeline_server.md @@ -34,7 +34,7 @@ the status of media analytics pipelines. | Command | Media Analytics Base Image | Image Name | Description | | --- | --- | --- | ---- | -| `./docker/run.sh`|**Intel(R) DL Streamer** docker [file](https://github.com/openvinotoolkit/docker_ci/tree/master/dockerfiles/ubuntu20) |`dlstreamer-pipeline-server-gstreamer` | Intel(R) DL Streamer based microservice with default pipeline definitions and deep learning models. Exposes port 8080. Mounts the host system's graphics devices. | +| `./docker/run.sh`|**Intel(R) DL Streamer** docker [file](https://github.com/dlstreamer/dlstreamer/blob/master/docker/binary/ubuntu20/dlstreamer.Dockerfile) |`dlstreamer-pipeline-server-gstreamer` | Intel(R) DL Streamer based microservice with default pipeline definitions and deep learning models. Exposes port 8080. Mounts the host system's graphics devices. | | `./docker/run.sh --framework ffmpeg`| **FFmpeg Video Analytics** docker [file](https://github.com/VCDP/FFmpeg-patch/blob/ffmpeg4.2_va/docker/Dockerfile.source) |`dlstreamer-pipeline-server-ffmpeg`| FFmpeg Video Analytics based microservice with default pipeline definitions and deep learning models. Mounts the graphics devices. | @@ -44,11 +44,10 @@ The following examples demonstrate how to start and issue requests to a Pipeline Server Microservice either using the `Intel(R) DL Streamer` based image or the `FFmpeg` based image. -> **Note:** The following examples assume that the Video Analytics -> Serving image has already been built. For more information and +> **Note:** The following examples assume that the Pipeline Server image has already been built. For more information and > instructions on building please see the [Getting Started > Guide](../README.md) or [Building Intel(R) DL Streamer Pipeline Server Docker -> Images](../docs/building_video_analytics_serving.md) +> Images](../docs/building_pipeline_server.md) > **Note:** Both the `Intel(R) DL Streamer` based microservice and the `FFmpeg` > based microservice use the same default port: `8080` and only one @@ -152,61 +151,43 @@ docker stop dlstreamer-pipeline-server-gstreamer docker stop dlstreamer-pipeline-server-ffmpeg ``` -# Real Time Streaming Protocol (RTSP) Re-streaming -> **Note:** RTSP Re-streaming supported only in Intel(R) DL Streamer based Microservice. +# Visualizing Inference Output + +> **Note:** This feature is supported only in the Intel(R) DL Streamer based Microservice. + +There are two modes of visualization, [RTSP](https://en.wikipedia.org/wiki/Real_Time_Streaming_Protocol) and [WebRTC](https://en.wikipedia.org/wiki/WebRTC). + +## Underlying Protocols + +RTSP and WebRTC are standards based definitions for rendering media output and facilitating **control** of stream activities (e.g., play, pause, rewind) by negotiating with remote clients about how data is to be authorized, packaged and streamed. However they are not responsible for transporting media data. + +The actual **transfer** of the media data is governed by [Realtime Transport Protocol (RTP)](https://en.wikipedia.org/wiki/Real-time_Transport_Protocol). 
RTP essentially wraps UDP, adding sequencing and timing information that gives the stream a degree of reliability. + +The [Session Description Protocol (SDP)](https://en.wikipedia.org/wiki/Session_Description_Protocol) is used by RTSP as a standardized way to understand session-level **parameters** of the media stream (e.g., URI, session name, date/time session is available, etc.). + +The Real Time Control Protocol (RTCP) collects RTP **statistics** that are needed to measure the throughput of streaming sessions. + +## Real Time Streaming Protocol (RTSP) The Pipeline Server contains an [RTSP](https://en.wikipedia.org/wiki/Real_Time_Streaming_Protocol) server that can be optionally started at launch time. This allows an RTSP client to connect and visualize input video with superimposed bounding boxes. -### Enable RTSP in service ```bash docker/run.sh --enable-rtsp ``` + > **Note:** RTSP server starts at service start-up for all pipelines. It uses port 8554 and has been tested with [VLC](https://www.videolan.org/vlc/index.html). -### Connect and visualize -> **Note:** Leverage REST client when available. +## Web Real Time Communication (WebRTC) -* Start a pipeline with curl request with frame destination set as rtsp and custom path set. For demonstration, path set as `person-detection` in example request below. -```bash -curl localhost:8080/pipelines/object_detection/person_vehicle_bike -X POST -H \ -'Content-Type: application/json' -d \ -'{ - "source": { - "uri": "https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true", - "type": "uri" - }, - "destination": { - "metadata": { - "type": "file", - "path": "/tmp/results.txt", - "format": "json-lines" - }, - "frame": { - "type": "rtsp", - "path": "person-detection" - } - } -}' -``` -* Check that pipeline is running using [status request](restful_microservice_interfaces.md#get-pipelinesnameversioninstance_id) before trying to connect to the RTSP server. -* Re-stream pipeline using VLC network stream with url `rtsp://localhost:8554/person-detection`. +The Pipeline Server supports [WebRTC](https://en.wikipedia.org/wiki/WebRTC), which allows streams to be viewed from any system on the current network, right in the browser. An HTML5 video player and JavaScript APIs in the browser negotiate the connection with the Pipeline Server. With these prerequisites provided as dependent microservices, the bar for clients to render streams showing what the running pipeline is detecting is very low: a user connects and visualizes the input video with superimposed bounding boxes simply by navigating to the web server that hosts the HTML5 player page. This has been tested with Chrome and Firefox, though [other browsers](https://html5test.com) are also supported. -### RTSP destination params. - -> **Note:** If the RTSP stream playback is choppy this may be due to -> network bandwidth. Decreasing the encoding-quality or increasing the -> cache-length can help. ```bash -"frame": { - "type": "rtsp", - "path" : (required. When path already exists, throws error), - "cache-length": (default 30) number of frames to buffer in rtsp pipeline. - "encoding-quality": (default 85): jpeg encoding quality (0 - 100). Lower values increase compression but sacrifice quality. - "sync-with-source": (default True) process media at the encoded frame rate (e.g. 30 fps) - "sync-with-destination": (default True) block processing pipeline if rtsp pipeline is blocked.
-} +docker/run.sh --enable-webrtc ``` +> **Note:** WebRTC support starts at service start-up for all pipelines. It _requires_ a WebRTC signaling server container running on port 8443 and a web server container running on port 8082. + +For details and launch instructions for these prerequisites, refer to our [WebRTC sample](/samples/webrtc/README.md). # Selecting Pipelines and Models at Runtime @@ -275,6 +256,12 @@ Configure your host by downloading the [HDDL driver package](https://storage.ope > The HDDL plug-in in the container communicates with the daemon on the host, so the daemon must be started before running the container. +## Mixed Device +Depending on the enabled hardware, the `MULTI`, `HETERO` and `AUTO` plugins are also supported. +* `MULTI`: The Multi-Device plugin automatically assigns inference requests to available computational devices to execute the requests in parallel. Refer to OpenVINO [documentation](https://docs.openvino.ai/latest/openvino_docs_IE_DG_supported_plugins_MULTI.html). Example Inference Device: `MULTI:CPU,GPU` +* `HETERO`: The heterogeneous plugin enables computing the inference of one network on several devices. Refer to OpenVINO [documentation](https://docs.openvino.ai/latest/openvino_docs_IE_DG_supported_plugins_HETERO.html). Example Inference Device: `HETERO:CPU,GPU` +* `AUTO`: Use `AUTO` as the device name to delegate selection of an actual accelerator to OpenVINO. Refer to OpenVINO [documentation](https://docs.openvino.ai/latest/openvino_docs_IE_DG_supported_plugins_AUTO.html). Example Inference Device: `AUTO` + # Developer Mode The run script includes a `--dev` flag which starts the @@ -303,8 +290,11 @@ Developer mode: docker/run.sh --dev ``` ``` -pipeline-server@my-host:~$ python3 -m pipeline-server +pipeline-server@my-host:~$ python3 -m server ``` +By default, the running user's UID determines the user name inside the container. A UID of 1001 is assigned the name `pipeline-server`; for other UIDs you may see `I have no name!@my-host`. +To run as another user, add `--user` to the run command, e.g. to run as pipeline-server by name, add `--user pipeline-server` + --- \* Other names and brands may be claimed as the property of others.
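As a rough illustration of how the Mixed Device plugins combine with the REST API described above (not part of this patch), the sketch below posts a request that selects `MULTI:CPU,GPU` as the inference device and re-streams annotated frames over RTSP. It assumes the Python `requests` package, a Pipeline Server listening on `localhost:8080`, and the default `object_detection/person_vehicle_bike` pipeline; adjust names and addresses for your deployment.

```python
import requests

# Illustrative sketch only: start a pipeline with a MULTI inference device and an
# RTSP frame destination. Endpoint, pipeline name, and parameter names follow the
# documentation above; verify them against your pipeline definition.
request_body = {
    "source": {
        "uri": "https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true",
        "type": "uri",
    },
    "destination": {
        "metadata": {"type": "file", "path": "/tmp/results.txt", "format": "json-lines"},
        "frame": {"type": "rtsp", "path": "person-detection"},
    },
    "parameters": {
        # MULTI spreads inference requests across the listed devices in parallel.
        "detection-device": "MULTI:CPU,GPU"
    },
}

response = requests.post(
    "http://localhost:8080/pipelines/object_detection/person_vehicle_bike",
    json=request_body,
    timeout=10,
)
response.raise_for_status()
print("Pipeline instance:", response.text.strip())
```

The same request can be issued with curl or pipeline_client; only the `parameters` section changes when targeting `HETERO:CPU,GPU` or `AUTO`.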
diff --git a/extensions/gva_event_meta/gva_event_convert.py b/extensions/gva_event_meta/gva_event_convert.py index c01d654..57f5d22 100644 --- a/extensions/gva_event_meta/gva_event_convert.py +++ b/extensions/gva_event_meta/gva_event_convert.py @@ -6,7 +6,7 @@ import json import gva_event_meta -from vaserving.common.utils import logging +from server.common.utils import logging logger = logging.get_logger('gva_event_convert', is_static=True) diff --git a/extensions/gva_event_meta/gva_event_meta.py b/extensions/gva_event_meta/gva_event_meta.py index 2a777d5..34b077f 100644 --- a/extensions/gva_event_meta/gva_event_meta.py +++ b/extensions/gva_event_meta/gva_event_meta.py @@ -5,7 +5,7 @@ ''' import json -from vaserving.common.utils import logging +from server.common.utils import logging logger = logging.get_logger('gva_event_meta', is_static=True) ''' diff --git a/extensions/spatial_analytics/object_line_crossing.md b/extensions/spatial_analytics/object_line_crossing.md index 06c2e98..6be840c 100644 --- a/extensions/spatial_analytics/object_line_crossing.md +++ b/extensions/spatial_analytics/object_line_crossing.md @@ -65,15 +65,15 @@ The algorithm to calculate line crossing is based on the following article: https://www.geeksforgeeks.org/check-if-two-given-line-segments-intersect/ ## Example Run -Intel(R) Deep Learning Streamer (Intel(R) DL Streamer) Pipeline Server comes with an [example configuration](../../vaclient/parameter_files/object-line-crossing.json) for object-line-crossing +Intel(R) Deep Learning Streamer (Intel(R) DL Streamer) Pipeline Server comes with an [example configuration](../../client/parameter_files/object-line-crossing.json) for object-line-crossing 1. [Build](../../README.md#building-the-microservice) & [Run](../../README.md#running-the-microservice) the Pipeline Server -2. Run object-line-crossing pipeline with vaclient using example parameter file: +2. Run object-line-crossing pipeline with pipeline_client using example parameter file: ``` - vaclient/vaclient.sh run object_tracking/object_line_crossing https://github.com/intel-iot-devkit/sample-videos/blob/master/people-detection.mp4?raw=true --parameter-file vaclient/parameter_files/object-line-crossing.json + ./client/pipeline_client.sh run object_tracking/object_line_crossing https://github.com/intel-iot-devkit/sample-videos/blob/master/people-detection.mp4?raw=true --parameter-file client/parameter_files/object-line-crossing.json ``` - You will see events among the detections in vaclient output: + You will see events among the detections in pipeline_client output: ``` Timestamp 43916666666 - person (1.00) [0.38, 0.47, 0.60, 0.91] {'id': 7} @@ -88,7 +88,7 @@ Intel(R) Deep Learning Streamer (Intel(R) DL Streamer) Pipeline Server comes wit ``` ## Watermark Example -1. Open the [example configuration](../../vaclient/parameter_files/object-line-crossing.json) and add `enable_watermark` as follows: +1. Open the [example configuration](../../client/parameter_files/object-line-crossing.json) and add `enable_watermark` as follows: ``` "object-line-crossing-config": { "lines": [ @@ -101,9 +101,9 @@ Intel(R) Deep Learning Streamer (Intel(R) DL Streamer) Pipeline Server comes wit ``` ./docker/run.sh -v /tmp:/tmp --enable-rtsp -3. Run object-line-crossing pipeline with vaclient using example parameter file with additional parameter `rtsp-path`. Note that `rtsp-path` is set to `pipeline-server`, this path is what will be used to view the rtsp stream: +3. 
Run object-line-crossing pipeline with pipeline_client using example parameter file with additional parameter `rtsp-path`. Note that `rtsp-path` is set to `pipeline-server`, this path is what will be used to view the rtsp stream: ``` - vaclient/vaclient.sh run object_tracking/object_line_crossing https://github.com/intel-iot-devkit/sample-videos/blob/master/people-detection.mp4?raw=true --parameter-file vaclient/parameter_files/object-line-crossing.json --rtsp-path pipeline-server + ./client/pipeline_client.sh run object_tracking/object_line_crossing https://github.com/intel-iot-devkit/sample-videos/blob/master/people-detection.mp4?raw=true --parameter-file client/parameter_files/object-line-crossing.json --rtsp-path pipeline-server ``` 4. Open up a media player with network stream viewing (VLC for example) and connect to `rtsp:://:8554/pipeline-server`. The stream is real time so you make need to rerun the pipeline request to see the stream. You will see people-detection.mp4 with an overlay of points. Each line x, has a start point (x_Start) and end point (x_End). At the midpoint between start and end, a count displays how many objects have crossed the line. diff --git a/extensions/spatial_analytics/object_line_crossing.py b/extensions/spatial_analytics/object_line_crossing.py index 090e038..31385b7 100644 --- a/extensions/spatial_analytics/object_line_crossing.py +++ b/extensions/spatial_analytics/object_line_crossing.py @@ -8,7 +8,7 @@ from collections import namedtuple from enum import Enum from extensions.gva_event_meta import gva_event_meta -from vaserving.common.utils import logging +from server.common.utils import logging Point = namedtuple('Point', ['x', 'y']) BoundingBox = namedtuple('BoundingBox', ['left', 'top', 'width', 'height']) diff --git a/extensions/spatial_analytics/object_zone_count.md b/extensions/spatial_analytics/object_zone_count.md index 216ce1f..10da00b 100644 --- a/extensions/spatial_analytics/object_zone_count.md +++ b/extensions/spatial_analytics/object_zone_count.md @@ -56,15 +56,15 @@ If a tracked object crosses any of the lines, an event of type `object-zone-coun } ``` ## Example Run -Intel(R) Deep Learning Streamer (Intel(R) DL Streamer) Pipeline Server comes with an [example configuration](../../vaclient/parameter_files/object-zone-count.json) for object-zone-count +Intel(R) Deep Learning Streamer (Intel(R) DL Streamer) Pipeline Server comes with an [example configuration](../../client/parameter_files/object-zone-count.json) for object-zone-count 1. [Build](../../README.md#building-the-microservice) & [Run](../../README.md#running-the-microservice) the Pipeline Server -2. Run object-zone-count pipeline with vaclient using example parameter file: +2. 
Run object-zone-count pipeline with pipeline_client using example parameter file: ``` - vaclient/vaclient.sh run object_detection/object_zone_count https://github.com/intel-iot-devkit/sample-videos/blob/master/people-detection.mp4?raw=true --parameter-file vaclient/parameter_files/object-zone-count.json + ./client/pipeline_client.sh run object_detection/object_zone_count https://github.com/intel-iot-devkit/sample-videos/blob/master/people-detection.mp4?raw=true --parameter-file client/parameter_files/object-zone-count.json ``` - You will see events among the detections in vaclient output: + You will see events among the detections in pipeline_client output: ``` Timestamp 45000000000 - person (0.76) [0.28, 0.15, 0.42, 0.72] @@ -76,7 +76,7 @@ Intel(R) Deep Learning Streamer (Intel(R) DL Streamer) Pipeline Server comes wit ``` ## Watermark Example -1. Open the [example configuration](../../vaclient/parameter_files/object-zone-count.json) and add `enable_watermark` as follows: +1. Open the [example configuration](../../client/parameter_files/object-zone-count.json) and add `enable_watermark` as follows: ``` "object-zone-count-config": { "zones": [ @@ -89,9 +89,9 @@ Intel(R) Deep Learning Streamer (Intel(R) DL Streamer) Pipeline Server comes wit ``` ./docker/run.sh -v /tmp:/tmp --enable-rtsp -3. Run object-zone-count pipeline with vaclient using example parameter file with additional parameter `rtsp-path`. Note that `rtsp-path` is set to `pipeline-server`, this path is what will be used to view the rtsp stream: +3. Run object-zone-count pipeline with pipeline_client using example parameter file with additional parameter `rtsp-path`. Note that `rtsp-path` is set to `pipeline-server`, this path is what will be used to view the rtsp stream: ``` - vaclient/vaclient.sh run object_detection/object_zone_count https://github.com/intel-iot-devkit/sample-videos/blob/master/people-detection.mp4?raw=true --parameter-file vaclient/parameter_files/object-zone-count.json --rtsp-path pipeline-server + ./client/pipeline_client.sh run object_detection/object_zone_count https://github.com/intel-iot-devkit/sample-videos/blob/master/people-detection.mp4?raw=true --parameter-file client/parameter_files/object-zone-count.json --rtsp-path pipeline-server ``` 4. Open up a media player with network stream viewing (VLC for example) and connect to `rtsp:://:8554/pipeline-server`. The stream is real time so you might want to setup your media player ahead of time. You will see people-detection.mp4 with an overlay of points. Each zone has a start point which has a label of the zone name. Other points of the zone are not labeled. If an object `intersects` or is `within` a zone the label is updated to reflect that. 
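For readers curious how a detection can be reported as `intersects` versus `within` a zone, the following standalone sketch (not the actual implementation in `object_zone_count.py`) shows one simple way to classify a normalized bounding box against a polygonal zone using a ray-casting point-in-polygon test. The bounding-box convention and zone coordinates here are assumptions for illustration only.

```python
from typing import List, Tuple

Point = Tuple[float, float]


def point_in_polygon(point: Point, polygon: List[Point]) -> bool:
    """Ray-casting test: returns True if the point lies inside the polygon."""
    x, y = point
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Only edges that straddle the horizontal ray through y can toggle the state.
        if (yi > y) != (yj > y):
            x_cross = (xj - xi) * (y - yi) / (yj - yi) + xi
            if x < x_cross:
                inside = not inside
        j = i
    return inside


def classify_bbox(bbox: Tuple[float, float, float, float], zone: List[Point]) -> str:
    """Classify a (left, top, width, height) box as 'within', 'intersects', or 'outside' a zone."""
    left, top, width, height = bbox
    corners = [(left, top), (left + width, top), (left, top + height), (left + width, top + height)]
    inside = [point_in_polygon(corner, zone) for corner in corners]
    if all(inside):
        return "within"
    if any(inside):
        return "intersects"
    return "outside"


# Example: a normalized square zone and a detection box overlapping its edge.
zone = [(0.1, 0.1), (0.5, 0.1), (0.5, 0.5), (0.1, 0.5)]
print(classify_bbox((0.4, 0.4, 0.3, 0.3), zone))  # prints "intersects"
```

The extension itself attaches these classifications to `object-zone-count` events in the frame metadata, as described above.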
diff --git a/extensions/spatial_analytics/object_zone_count.py b/extensions/spatial_analytics/object_zone_count.py index addc2d2..ea778f2 100644 --- a/extensions/spatial_analytics/object_zone_count.py +++ b/extensions/spatial_analytics/object_zone_count.py @@ -6,7 +6,7 @@ import traceback from extensions.gva_event_meta import gva_event_meta -from vaserving.common.utils import logging +from server.common.utils import logging def print_message(message): print("", flush=True) diff --git a/models_list/action-recognition-0001.json b/models_list/action-recognition-0001.json index 8fdac5d..89249bf 100644 --- a/models_list/action-recognition-0001.json +++ b/models_list/action-recognition-0001.json @@ -14,7 +14,6 @@ "output_postproc": [ { "attribute_name": "action", - "layer_name": "data", "converter": "tensor_to_label", "method": "softmax", "labels": [ diff --git a/models_list/models.list.yml b/models_list/models.list.yml index 4229ae1..cd0c9a5 100644 --- a/models_list/models.list.yml +++ b/models_list/models.list.yml @@ -27,3 +27,13 @@ alias: action_recognition version: encoder precision: [FP16,FP32] +- model: person-detection-retail-0013 + alias: object_detection + version: person + precision: [FP16,FP32] + model-proc: person-detection-retail-0013.json +- model: vehicle-detection-0202 + alias: object_detection + version: vehicle + precision: [FP16,FP32] + model-proc: vehicle-detection-0202.json diff --git a/models_list/person-detection-retail-0013.json b/models_list/person-detection-retail-0013.json new file mode 100644 index 0000000..c2ce13e --- /dev/null +++ b/models_list/person-detection-retail-0013.json @@ -0,0 +1,13 @@ +{ + "json_schema_version": "2.0.0", + "input_preproc": [], + "output_postproc": [ + { + "labels": [ + "", + "person" + ], + "converter": "tensor_to_bbox_ssd" + } + ] +} diff --git a/models_list/vehicle-detection-0202.json b/models_list/vehicle-detection-0202.json new file mode 100644 index 0000000..0090b89 --- /dev/null +++ b/models_list/vehicle-detection-0202.json @@ -0,0 +1,11 @@ +{ + "json_schema_version": "2.0.0", + "input_preproc": [], + "output_postproc": [ + { + "labels": [ + "vehicle" + ] + } + ] +} diff --git a/pipelines/gstreamer/action_recognition/general/README.md b/pipelines/gstreamer/action_recognition/general/README.md index e783870..c796e37 100644 --- a/pipelines/gstreamer/action_recognition/general/README.md +++ b/pipelines/gstreamer/action_recognition/general/README.md @@ -79,7 +79,7 @@ Below is a sample of the inference results i.e metadata (json format): } ``` -The corresponding vaclient output resembles: +The corresponding pipeline_client output resembles: ```code Timestamp diff --git a/pipelines/gstreamer/audio_detection/environment/pipeline.json b/pipelines/gstreamer/audio_detection/environment/pipeline.json index 06689b1..951f8c7 100755 --- a/pipelines/gstreamer/audio_detection/environment/pipeline.json +++ b/pipelines/gstreamer/audio_detection/environment/pipeline.json @@ -14,7 +14,8 @@ "properties": { "device": { "element": "detection", - "type": "string" + "type": "string", + "default": "{env[DETECTION_DEVICE]}" }, "bus-messages": { "description": "Prints GstBus messages as logger info", diff --git a/pipelines/gstreamer/object_classification/vehicle_attributes/pipeline.json b/pipelines/gstreamer/object_classification/vehicle_attributes/pipeline.json index 49b4b3e..9a0381a 100755 --- a/pipelines/gstreamer/object_classification/vehicle_attributes/pipeline.json +++ b/pipelines/gstreamer/object_classification/vehicle_attributes/pipeline.json @@ -26,14 
+26,16 @@ "name": "detection", "property": "device" }, - "type": "string" + "type": "string", + "default": "{env[DETECTION_DEVICE]}" }, "classification-device": { "element": { "name": "classification", "property": "device" }, - "type": "string" + "type": "string", + "default": "{env[CLASSIFICATION_DEVICE]}" }, "inference-interval": { "element": diff --git a/pipelines/gstreamer/object_detection/object_zone_count/pipeline.json b/pipelines/gstreamer/object_detection/object_zone_count/pipeline.json index 629165a..f39e823 100644 --- a/pipelines/gstreamer/object_detection/object_zone_count/pipeline.json +++ b/pipelines/gstreamer/object_detection/object_zone_count/pipeline.json @@ -23,7 +23,8 @@ "name": "detection", "property": "device" }, - "type": "string" + "type": "string", + "default": "{env[DETECTION_DEVICE]}" }, "detection-model-instance-id": { "element": { diff --git a/pipelines/gstreamer/object_detection/person/pipeline.json b/pipelines/gstreamer/object_detection/person/pipeline.json new file mode 100644 index 0000000..828effd --- /dev/null +++ b/pipelines/gstreamer/object_detection/person/pipeline.json @@ -0,0 +1,29 @@ +{ + "type": "GStreamer", + "template": [ + "{auto_source} ! decodebin", + " ! gvadetect model={models[object_detection][person][network]} name=detection", + " ! gvametaconvert name=metaconvert ! gvametapublish name=destination", + " ! appsink name=appsink" + ], + "description": "Person Detection based on person-detection-retail-0013", + "parameters": { + "type": "object", + "properties": { + "detection-properties": { + "element": { + "name": "detection", + "format": "element-properties" + } + }, + "detection-device": { + "element": { + "name": "detection", + "property": "device" + }, + "type": "string", + "default": "{env[DETECTION_DEVICE]}" + } + } + } +} diff --git a/pipelines/gstreamer/object_detection/person_vehicle_bike/pipeline.json b/pipelines/gstreamer/object_detection/person_vehicle_bike/pipeline.json index 9d05ace..df6c541 100755 --- a/pipelines/gstreamer/object_detection/person_vehicle_bike/pipeline.json +++ b/pipelines/gstreamer/object_detection/person_vehicle_bike/pipeline.json @@ -20,7 +20,8 @@ "name": "detection", "property": "device" }, - "type": "string" + "type": "string", + "default": "{env[DETECTION_DEVICE]}" }, "detection-model-instance-id": { "element": { diff --git a/pipelines/gstreamer/object_detection/vehicle/pipeline.json b/pipelines/gstreamer/object_detection/vehicle/pipeline.json new file mode 100644 index 0000000..83f2a63 --- /dev/null +++ b/pipelines/gstreamer/object_detection/vehicle/pipeline.json @@ -0,0 +1,29 @@ +{ + "type": "GStreamer", + "template": [ + "{auto_source} ! decodebin", + " ! gvadetect model={models[object_detection][vehicle][network]} name=detection", + " ! gvametaconvert name=metaconvert ! gvametapublish name=destination", + " ! 
appsink name=appsink" + ], + "description": "Vehicle Detection based on vehicle-detection-0202", + "parameters": { + "type": "object", + "properties": { + "detection-properties": { + "element": { + "name": "detection", + "format": "element-properties" + } + }, + "detection-device": { + "element": { + "name": "detection", + "property": "device" + }, + "type": "string", + "default": "{env[DETECTION_DEVICE]}" + } + } + } +} diff --git a/pipelines/gstreamer/object_tracking/object_line_crossing/pipeline.json b/pipelines/gstreamer/object_tracking/object_line_crossing/pipeline.json index ffb0a43..30c0075 100644 --- a/pipelines/gstreamer/object_tracking/object_line_crossing/pipeline.json +++ b/pipelines/gstreamer/object_tracking/object_line_crossing/pipeline.json @@ -58,14 +58,16 @@ "name": "detection", "property": "device" }, - "type": "string" + "type": "string", + "default": "{env[DETECTION_DEVICE]}" }, "classification-device": { "element": { "name": "classification", "property": "device" }, - "type": "string" + "type": "string", + "default": "{env[CLASSIFICATION_DEVICE]}" }, "tracking-device": { "element": diff --git a/pipelines/gstreamer/object_tracking/person_vehicle_bike/pipeline.json b/pipelines/gstreamer/object_tracking/person_vehicle_bike/pipeline.json index 332c81a..226c23a 100755 --- a/pipelines/gstreamer/object_tracking/person_vehicle_bike/pipeline.json +++ b/pipelines/gstreamer/object_tracking/person_vehicle_bike/pipeline.json @@ -33,14 +33,16 @@ "name": "detection", "property": "device" }, - "type": "string" + "type": "string", + "default": "{env[DETECTION_DEVICE]}" }, "classification-device": { "element": { "name": "classification", "property": "device" }, - "type": "string" + "type": "string", + "default": "{env[CLASSIFICATION_DEVICE]}" }, "tracking-device": { "element": diff --git a/requirements.service.txt b/requirements.service.txt index a976865..5f8b4b5 100644 --- a/requirements.service.txt +++ b/requirements.service.txt @@ -1,4 +1,5 @@ connexion == 2.11.1 +Flask-Cors == 3.0.10 swagger-ui-bundle == 0.0.5 python_dateutil == 2.8.0 setuptools >= 41.2.0 diff --git a/requirements.webrtc.txt b/requirements.webrtc.txt new file mode 100644 index 0000000..f034536 --- /dev/null +++ b/requirements.webrtc.txt @@ -0,0 +1,2 @@ +websockets == 10.1 +nest_asyncio == 1.5.4 diff --git a/samples/app_source_destination/README.md b/samples/app_source_destination/README.md index 4640b39..a1c8b74 100644 --- a/samples/app_source_destination/README.md +++ b/samples/app_source_destination/README.md @@ -77,7 +77,7 @@ docker/build.sh docker/run.sh --dev ``` ``` -openvino@host:~$ python3 samples/app_source_destination/app_source_destination.py +pipeline-server@host:~$ python3 samples/app_source_destination/app_source_destination.py ``` ``` {"levelname": "INFO", "asctime": "2021-04-09 05:24:43,626", "message": "Creating Instance of Pipeline object_detection/app_src_dst", "module": "pipeline_manager"} diff --git a/samples/app_source_destination/app_source_destination.py b/samples/app_source_destination/app_source_destination.py index 0d8fede..3eaeecd 100644 --- a/samples/app_source_destination/app_source_destination.py +++ b/samples/app_source_destination/app_source_destination.py @@ -16,8 +16,8 @@ # pylint: disable=wrong-import-position from gi.repository import Gst from gstgva.util import gst_buffer_data -from vaserving.gstreamer_app_source import GvaFrameData -from vaserving.vaserving import VAServing +from server.gstreamer_app_source import GvaFrameData +from server.pipeline_server import PipelineServer 
# pylint: enable=wrong-import-position source_dir = os.path.abspath(os.path.join(os.path.dirname(__file__))) @@ -65,13 +65,13 @@ def parse_args(args=None, program_name="App Source and Destination Sample"): decode_output = Queue() detect_input = Queue() detect_output = Queue() - VAServing.start({'log_level': 'INFO', "ignore_init_errors":True}) + PipelineServer.start({'log_level': 'INFO', "ignore_init_errors":True}) parameters = None if args.parameters: parameters = json.loads(args.parameters) # Start object detection pipeline # It will wait until it receives frames via the detect_input queue - detect_pipeline = VAServing.pipeline(args.pipeline, args.pipeline_version) + detect_pipeline = PipelineServer.pipeline(args.pipeline, args.pipeline_version) detect_pipeline.start(source={"type": "application", "class": "GStreamerAppSource", "input": detect_input, @@ -84,7 +84,7 @@ def parse_args(args=None, program_name="App Source and Destination Sample"): # Start decode only pipeline. # Its only purpose is to generate decoded frames to be fed into the object detection pipeline - decode_pipeline = VAServing.pipeline("video_decode", "app_dst") + decode_pipeline = PipelineServer.pipeline("video_decode", "app_dst") decode_pipeline.start(source={"type":"uri", "uri": args.input_uri}, destination={"type":"application", @@ -145,4 +145,4 @@ def parse_args(args=None, program_name="App Source and Destination Sample"): print("Received {} results".format(result_count)) - VAServing.stop() + PipelineServer.stop() diff --git a/samples/edgex_bridge/README.md b/samples/edgex_bridge/README.md index ed004dd..243cf66 100644 --- a/samples/edgex_bridge/README.md +++ b/samples/edgex_bridge/README.md @@ -1,4 +1,4 @@ -# Intel(R) DL Streamer Pipeline Server EdgeX Bridge +# Intel(R) Deep Learning Streamer Pipeline Server EdgeX Bridge This sample demonstrates how to emit events into [EdgeX Foundry](http://edgexfoundry.org/) from an object detection pipeline based on Intel(R) Deep Learning Streamer (Intel(R) DL Streamer) Pipeline Server and [Intel(R) DL Streamer](https://github.com/dlstreamer/dlstreamer). The sample uses the [person-vehicle-bike-detection-crossroad-0078](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/person-vehicle-bike-detection-crossroad-0078) model for detection but can be customized to use any detection or recognition model. @@ -155,7 +155,7 @@ To do this let's install an RTSP client such as [VLC Media Player*](https://linu You may extend this script or directly use any other client to connect with the RTSP endpoint being served at `rtsp://127.0.0.1:8554/edgex_event_emitter`. -Refer to our [RTSP Re-streaming](/docs/running_video_analytics_serving.md#real-time-streaming-protocol-rtsp-re-streaming) documentation for additional details. +Refer to our [RTSP Re-streaming](/docs/running_pipeline_server.md#real-time-streaming-protocol-rtsp-re-streaming) documentation for additional details. > NOTE: This has been tested when running the RTSP client on the local host. Additional configuration may be needed to view when accessing remotely, bearing in mind that docker-compose is running containers within edgex_network. 
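Both samples above now use the renamed embedded Python API (`PipelineServer` from the `server` package) in place of `VAServing`. The minimal sketch below, which is not part of this patch, mirrors the calls visible in `app_source_destination.py` and `edgex_bridge.py`; it assumes it is run inside the Pipeline Server development container where the `server` package and default pipelines are available.

```python
# Minimal sketch of the renamed embedded API (previously VAServing, now PipelineServer),
# mirroring the calls used by the samples in this change.
from server.pipeline_server import PipelineServer

PipelineServer.start({"log_level": "INFO", "ignore_init_errors": True})

# Look up a pipeline definition by name and version, then start it.
pipeline = PipelineServer.pipeline("object_detection", "person_vehicle_bike")
pipeline.start(
    source={
        "type": "uri",
        "uri": "https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true",
    },
    destination={
        "metadata": {"type": "file", "path": "/tmp/results.txt", "format": "json-lines"}
    },
)

PipelineServer.wait()   # block until the pipeline completes
PipelineServer.stop()
```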
diff --git a/samples/edgex_bridge/docker/build.sh b/samples/edgex_bridge/docker/build.sh index bd1acfa..92df5fd 100755 --- a/samples/edgex_bridge/docker/build.sh +++ b/samples/edgex_bridge/docker/build.sh @@ -68,17 +68,18 @@ function launch { echo $@ return $exit_code } -# Build Intel(R) DL Streamer Pipeline Server +# Build Pipeline Server launch "$SAMPLE_DIR/../../docker/build.sh --framework gstreamer --create-service true $BASE_IMAGE $OMZ_VERSION + --force-model-download --models $SAMPLE_DIR/$MODELS --pipelines samples/edgex_bridge/$PIPELINES --tag $TAG_BASE $PASS_THROUGH_PARAMS" # Build EdgeX Bridge Extension and override entrypoint defined by FINAL_STAGE -# in Intel(R) DL Streamer Pipeline Server parent image Dockerfile +# in the Pipeline Server parent image Dockerfile launch "docker build -f $WORK_DIR/Dockerfile $SAMPLE_BUILD_ARGS --build-arg BASE=$TAG_BASE $PASS_THROUGH_PARAMS -t $TAG $SAMPLE_DIR " diff --git a/samples/edgex_bridge/docker/run.sh b/samples/edgex_bridge/docker/run.sh index 9a71733..802f438 100755 --- a/samples/edgex_bridge/docker/run.sh +++ b/samples/edgex_bridge/docker/run.sh @@ -22,8 +22,8 @@ PASS_THROUGH_PARAMS= function show_help { echo "usage: ./run.sh" echo " [ --dry-run : See the raw command(s) that will be executed by this script. ] " - echo " [ --generate : Passed to the entrypoint script located at ./edgex_bridge.py, instructing it to prepare EdgeX configuration to receive and process events triggered by Intel(R) DL Streamer Pipeline Server. ] " - echo " [ Remaining parameters pass through to Intel(R) DL Streamer Pipeline Server /docker/run.sh script ] " + echo " [ --generate : Passed to the entrypoint script located at ./edgex_bridge.py, instructing it to prepare EdgeX configuration to receive and process events triggered by the Pipeline Server. ] " + echo " [ Remaining parameters pass through to the Pipeline Server /docker/run.sh script ] " } #Get options passed into script, passing through parameters that target parent script. 
while [[ "$#" -gt 0 ]]; do diff --git a/samples/edgex_bridge/edgex_bridge.py b/samples/edgex_bridge/edgex_bridge.py index 253b5e3..4ef1b84 100755 --- a/samples/edgex_bridge/edgex_bridge.py +++ b/samples/edgex_bridge/edgex_bridge.py @@ -9,7 +9,7 @@ import os from shutil import copyfile import traceback -from vaserving.vaserving import VAServing +from server.pipeline_server import PipelineServer DEFAULT_SOURCE_URI = "https://github.com/intel/dlstreamer-pipeline-server/raw/master/samples/bottle_detection.mp4" @@ -54,7 +54,7 @@ def parse_args(args=None): parser.add_argument("--rtsp-path", action="store", dest="requested_rtsp_path", - help="Indicates VA Serving should render processed frames output using this RTSP path.", + help="Indicates Pipeline Server should render processed frames output using this RTSP path.", default=None) parser.add_argument("--analytics-image", action="store", @@ -108,8 +108,8 @@ def print_args(args, program_name=__file__): TEMPLATE = "name: \"{edgexdevice}\"\n" \ "manufacturer: \"PipelineServer\"\n"\ "model: \"MQTT-2\"\n"\ - "description: \"Device profile for inference events published by Intel(R) DL Streamer Pipeline Server"\ - "Serving over MQTT.\"\n"\ + "description: \"Device profile for inference events published by Pipeline Server"\ + " over MQTT.\"\n"\ "labels:\n"\ "- \"MQTT\"\n"\ "- \"PipelineServer\"\n"\ @@ -181,7 +181,7 @@ def print_args(args, program_name=__file__): " DRIVER_RESPONSETOPIC: Edgex-command-response\n"\ " volumes:\n"\ " - ./res/device-mqtt-go/:/res/\n\n"\ - " vaserving:\n"\ + " pipeline_server:\n"\ " container_name: {containername}\n"\ " depends_on:\n"\ " device-mqtt:\n"\ @@ -213,20 +213,16 @@ def print_args(args, program_name=__file__): " http_proxy: $http_proxy\n"\ " socks_proxy: $socks_proxy\n"\ " no_proxy: $no_proxy\n"\ - " volumes:\n"\ - " - /tmp/.xdg_runtime_dir:/home/.xdg_runtime_dir\n"\ " devices:\n"\ " - /dev/dri:/dev/dri\n"\ " hostname: {containername}\n"\ " image: {analyticsimage}\n"\ " command: \"--source={source} --topic={topic} $edgex_request_rtsp_path\"\n" \ - "# user: \"$UID:$GID\"\n"\ " networks:\n"\ " edgex-network: {{}}\n"\ " ports:\n"\ " - 127.0.0.1:8080:8080/tcp\n"\ " - 127.0.0.1:$edgex_rtsp_port:$edgex_rtsp_port/tcp\n"\ - " read_only: true\n"\ "version: '3.7'\n\n" with open(compose_dest, 'w') as override_file: override_file.write(COMPOSE.format(**parameters["compose"])) @@ -243,28 +239,29 @@ def print_args(args, program_name=__file__): pipeline_version, pipeline_file)) - VAServing.start({'log_level': 'INFO'}) - pipeline = VAServing.pipeline(pipeline_name, pipeline_version) + PipelineServer.start({'log_level': 'INFO'}) + pipeline = PipelineServer.pipeline(pipeline_name, pipeline_version) source = {"uri":args.source, "type":"uri"} - frame_destination={} - if args.requested_rtsp_path: - frame_destination = { - "type": "rtsp", - "path": args.requested_rtsp_path - } destination = { "metadata": { "type":"mqtt", "host":args.destination, "topic":'edgex_bridge/'+args.topic - }, - "frame": frame_destination + } } + if args.requested_rtsp_path: + frame_destination = { + "frame": { + "type": "rtsp", + "path": args.requested_rtsp_path + } + } + destination.update(frame_destination) pipeline.start(source=source, destination=destination, parameters=parameters) start_time = None start_size = 0 - VAServing.wait() + PipelineServer.wait() except FileNotFoundError: print("Did you forget to run ./samples/edgex_bridge/fetch_edgex.sh ?") print("Error processing script: {}".format(traceback.print_exc())) @@ -272,4 +269,4 @@ def print_args(args, 
program_name=__file__): pass except Exception: print("Error processing script: {}".format(traceback.print_exc())) - VAServing.stop() + PipelineServer.stop() diff --git a/samples/edgex_bridge/extensions/edgex_transform.py b/samples/edgex_bridge/extensions/edgex_transform.py index 2ff591b..dacbc68 100755 --- a/samples/edgex_bridge/extensions/edgex_transform.py +++ b/samples/edgex_bridge/extensions/edgex_transform.py @@ -14,7 +14,7 @@ # pylint: disable=wrong-import-position from gi.repository import Gst from gstgva import VideoFrame -from vaserving.common.utils import logging +from server.common.utils import logging # pylint: enable=wrong-import-position diff --git a/samples/edgex_bridge/fetch_edgex.sh b/samples/edgex_bridge/fetch_edgex.sh index 8e6204e..94d48a2 100755 --- a/samples/edgex_bridge/fetch_edgex.sh +++ b/samples/edgex_bridge/fetch_edgex.sh @@ -2,7 +2,7 @@ # This script builds a docker-compose file for EdgeX # and fetches a configuration template for device-mqtt-go. SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )" -VAS_SOURCE=$SCRIPT_DIR/../.. +PS_SOURCE=$SCRIPT_DIR/../.. EDGEX_PROJECT=$SCRIPT_DIR/edgex mkdir -p $EDGEX_PROJECT cd $EDGEX_PROJECT @@ -48,4 +48,4 @@ if ! cp "${EDGEX_DIR_REPO_DEVELOPER_SCRIPTS}/docker-compose-hanoi-no-secty-mqtt. exit $? fi echo "Successfully fetched repo and produced compose file." -cd $VAS_SOURCE +cd $PS_SOURCE diff --git a/samples/edgex_bridge/start_edgex.sh b/samples/edgex_bridge/start_edgex.sh index cd604d6..b5e4c41 100755 --- a/samples/edgex_bridge/start_edgex.sh +++ b/samples/edgex_bridge/start_edgex.sh @@ -60,7 +60,7 @@ while [[ "$#" -gt 0 ]]; do done # Convenience function to launch optional RTSP client for viewing -# Intel(R) DL Streamer Pipeline Server's processed pipeline frames. +# Pipeline Server's processed pipeline frames. function rtsp_connect { ON_SUCCESS_MESSAGE=0 ON_USER_TERMINATED=130 @@ -117,7 +117,7 @@ export edgex_rtsp_host=rtsp://127.0.0.1 export edgex_rtsp_port=8554 export edgex_default_display=:0.0 export edgex_default_rtsp_path="edgex_event_emitter" -export edgex_sample_title="Intel(R) DL Streamer Pipeline Server - EdgeX Sample" +export edgex_sample_title="Pipeline Server - EdgeX Sample" export edgex_request_rtsp_path="" if [[ ! 
-z "$RTSP_PATH" ]]; then @@ -149,7 +149,7 @@ if [ "$edgex_env_enable_rtsp" == "true" ]; then export edgex_request_rtsp_path="--rtsp-path $edgex_rtsp_path" fi -# Launch EdgeX Stack and another Intel(R) DL Streamer Pipeline Server pipeline instance (if input media stream has completed) +# Launch EdgeX Stack and another Pipeline Server pipeline instance (if input media stream has completed) if test -f "$COMPOSE_FILE"; then cd $COMPOSE_PATH docker-compose up -d diff --git a/samples/kubernetes/README.md b/samples/kubernetes/README.md index 5c6aac2..1161ff6 100644 --- a/samples/kubernetes/README.md +++ b/samples/kubernetes/README.md @@ -1,16 +1,18 @@ # Kubernetes Deployment with Load Balancing -| [Installing Microk8s](#installing-microk8s) | [Adding Initial Nodes](#adding-initial-nodes-to-the-cluster) | [Building and Deploying Services](#building-and-deploying-services-to-the-cluster) | [Adding Nodes to Existing Deployment](#adding-nodes-to-an-existing-deployment) | [Sending Requests](#sending-pipeline-server-requests-to-the-cluster) | [Uninstalling](#uninstalling) | [Examples](#examples) | [Useful Commands](#useful-commands) | [Limitations](#limitations) | +| [Definition](#definitions) | [Prerequisites](#prerequisites) | [Building and Deploying Services](#building-and-deploying-services-to-the-cluster) | [Sending Requests](#sending-pipeline-server-requests-to-the-cluster) | [Adding Nodes to Existing Deployment](#adding-nodes-to-an-existing-deployment) | [Examples](#examples) | [Limitations](#limitations) | [Undeploy Services](#undeploy-services) | [Useful Commands](#useful-commands) | This sample demonstrates how to set up a Kubernetes cluster using MicroK8s, how to deploy Intel(R) Deep Learning Streamer (Intel(R) DL Streamer) Pipeline Server to the cluster, and how to use HAProxy to load balance requests. ![diagram](./kubernetes-diagram.svg) -### Definitions +## Definitions | Term | Definition | |---|---| | Pipeline Server | [Intel(R) DL Streamer Pipeline Server](https://github.com/dlstreamer/pipeline-server) microservice thats runs pipelines. | +| Pipeline Server `CPU` worker | Pipeline Server microservice with Inference runs on `CPU`, check config [here](pipeline-server-worker/deployments/cpu) | +| Pipeline Server `GPU` worker | Pipeline Server microservice with Inference runs on `GPU`, check config [here](pipeline-server-worker/deployments/gpu) | | HAProxy | [HAProxy](https://www.haproxy.com/) open source load balancer and application delivery controller. | | MicroK8s | [microk8s](https://microk8s.io/) minimal production Kubernetes distribution. | | MQTT | [MQTT](https://hub.docker.com/_/eclipse-mosquitto) open source message bus. | @@ -20,440 +22,69 @@ This sample demonstrates how to set up a Kubernetes cluster using MicroK8s, how | Pod | The smallest deployable unit of computing in a Kubernetes cluster, typically a single container. | | leader-ip | Host IP address of Leader. | -### Prerequisites +## Prerequisites -The following steps, installation and deployment scripts have been tested on Ubuntu 20.04. Other operating systems may have additional requirements and are beyond the scope of this document. - -## Installing MicroK8s - -For each node that will be in the cluster run the following commands to install MicroK8s along with its dependencies. These steps must be performed on each node individually. 
Please review the contents of [microk8s/install.sh](microk8s/install.sh) and [microk8s/install_addons.sh](./microk8s/install_addons.sh) as these scripts will install additional components on your system as well as make changes to your groups and environment variables. - -### Step 1: Install MicroK8s Base - -#### Command -```bash -cd ./samples/kubernetes -sudo -E ./microk8s/install.sh -``` - -#### Expected Output -```text - -Assigning to microk8s group -``` -> -> NOTE: If you are running behind a proxy please ensure that your `NO_PROXY` and `no_proxy` environment variables are set correctly to allow cluster nodes to communicate directly. You can run these commands to set this up automatically: -> ```bash -> UPDATE_NO_PROXY=true sudo -E ./microk8s/install.sh -> su - $USER -> ``` -> - -### Step 2: Activate Group Membership - -Your user is now a member of a newly added 'microk8s' group. However, the current terminal session will not be aware of this until you issue this command: - -#### Command - -```bash -newgrp microk8s -groups | grep microk8s -``` - -#### Expected Output -```text - microk8s -``` - -### Step 3: Install MicroK8s Add-Ons - -Next we need to install add-on components into the cluster. These enable docker registry and dns. - -#### Command - -```bash -./microk8s/install_addons.sh -``` - -Note that this script may take **several minutes** to complete. - -#### Expected Output - -```text -Started. -Metrics-Server is enabled -DNS is enabled -Ingress is enabled -Metrics-Server is enabled -DNS is enabled -The registry is enabled -``` - -### Step 4: Wait for Kubernetes System Pods to Reach Running State - -At this point we need to wait for the Kubernetes system pods to reach the running state. This may take a few minutes. - -Check that the installation was successful by confirming `STATUS` is `Running` for all pods. Pods will cycle through `ContainerCreating`, `Pending`, and `Waiting` states but all should eventually reach the `Running` state. After a few minutes if all pods do not reach the `Running` state refer to [application cluster troubleshooting tips](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-pod-replication-controller/) for more help. - -> Troubleshooting Tip: If you see `Pending` or `ContainerCreating` after waiting more than a few minutes, you may need to modify your environment variables with respect to proxy settings and restart MicroK8s. Do this by running `microk8s stop`, modifying the environment variables in your shell, and then running `microk8s start`. Then check the status of pods by running this command again. - -#### Command - -```bash -microk8s kubectl get pods --all-namespaces -``` - -#### Expected Output - -```text -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system calico-node-mhvlc 1/1 Running 0 4m28s -kube-system metrics-server-8bbfb4bdb-pl6g7 1/1 Running 0 3m1s -kube-system calico-kube-controllers-f7868dd95-mkjjk 1/1 Running 0 4m30s -kube-system dashboard-metrics-scraper-78d7698477-pgpkj 1/1 Running 0 86s -kube-system coredns-7f9c69c78c-8vjr4 1/1 Running 0 86s -ingress nginx-ingress-microk8s-controller-rjcpr 1/1 Running 0 86s -kube-system kubernetes-dashboard-85fd7f45cb-h82gk 1/1 Running 0 86s -kube-system hostpath-provisioner-5c65fbdb4f-42pdn 1/1 Running 0 86s -container-registry registry-9b57d9df8-vtmsj 1/1 Running 0 86s -``` - - -### Step 5: Setup Proxy Server DNS -> Note: This step is required if you are running behind proxy, skip otherwise. - -Use the following steps to set up the MicroK8s DNS service correctly. - -#### 1. 
Identify host network’s configured DNS servers - -##### Command - -```bash -systemd-resolve --status | grep "Current DNS" --after-context=3 -``` - -##### Expected Output - -```text -Current DNS Server: 10.22.1.1 - DNS Servers: - - -``` - -#### 2. Disable MicroK8s DNS - -##### Command - -```bash -microk8s disable dns -``` - -##### Expected Output - -```text -Disabling DNS -Reconfiguring kubelet -Removing DNS manifest -serviceaccount "coredns" deleted -configmap "coredns" deleted -deployment.apps "coredns" deleted -service "kube-dns" deleted -clusterrole.rbac.authorization.k8s.io "coredns" deleted -clusterrolebinding.rbac.authorization.k8s.io "coredns" deleted -DNS is disabled -``` - -#### 3. Enable DNS with Host DNS Server - -##### Command - -```bash -microk8s enable dns:,, -``` - -##### Expected Output - -```text -Enabling DNS -Applying manifest -serviceaccount/coredns created -configmap/coredns created -deployment.apps/coredns created -service/kube-dns created -clusterrole.rbac.authorization.k8s.io/coredns created -clusterrolebinding.rbac.authorization.k8s.io/coredns created -Restarting kubelet -DNS is enabled -``` - -#### 4. Confirm Update - -##### Command - -```bash -sh -c "until microk8s.kubectl rollout status deployments/coredns -n kube-system -w; do sleep 5; done" -``` - -##### Expected Output - -```text -deployment "coredns" successfully rolled out -``` - -## Adding Initial Nodes to the Cluster - -> Note: This step is only required if you have 2 or more nodes, skip otherwise. - -### Step 1: Select Leader and Add Nodes - -Choose one of your nodes as the `leader` node. - -For each additional node that will be in the cluster, issue the following command on the `leader` node. You will need to do this once for each node you want to add. - -#### Command - -```bash -microk8s add-node -``` - -#### Expected Output - -You should see output as follows, including the IP address of the primary/controller host and unique token for the node you are adding to use during connection to the cluster. - -```bash -From the node you wish to join to this cluster, run the following: -microk8s join :25000/02c66e66e811fe2c697b1cd5d31bfba2/023e49528889 - -If the node you are adding is not reachable through the default interface you can use one of the following: - microk8s join :25000/02c66e66e811fe2c697b1cd5d31bfba2/023e49528889 - microk8s join 172.17.0.1:25000/02c66e66e811fe2c697b1cd5d31bfba2/023e49528889 -``` - -### Step 2: Join Nodes to Cluster - -Run `join` command shown in the above response on each `worker node` to be added. - -#### Command - -```bash -microk8s join :25000/02c66e66e811fe2c697b1cd5d31bfba2/023e49528889 -``` - -#### Expected Output - -```text -Contacting cluster at -Waiting for this node to finish joining the cluster. .. -``` - -If you encounter an error `Connection failed. Invalid token (500)` your token may have expired or you already used it for another node. To resolve, run the `add-node` command on the leader node to get a new token. - -### Step 3: Confirm Cluster Nodes - -To confirm what nodes are running in your cluster, run: - -#### Command - -```bash -microk8s kubectl get no -``` - -#### Expected Output - -```text -NAME STATUS ROLES AGE VERSION -vcplab003 Ready 3d v1.21.5-3+83e2bb7ee39726 -vcplab002 Ready 84s v1.21.5-3+83e2bb7ee39726 -``` +| | | +|---------------------------------------------|------------------| +| **Docker** | This sample requires Docker for its build, development, and runtime environments. Please install the latest for your platform. 
[Docker](https://docs.docker.com/install). | +| **bash** | This sample's build and deploy scripts require bash and have been tested on systems using versions greater than or equal to: `GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu)`. Most users shouldn't need to update their version but if you run into issues please install the latest for your platform. | +| **MicroK8s** | This sample requires MicroK8s with DNS setup if system runs under proxy, please follow instructions [here](docs/microk8s.md) to install and Setup Proxy Server DNS | ## Building and Deploying Services to the Cluster -Follow the steps below to build and deploy the Pipeline Server, HAProxy and MQTT services to the cluster. - -### Step 1: Deploy MQTT - -This will enable listening to metadata using MQTT broker. - -#### Command - -```bash -./mqtt/deploy.sh -``` - -#### Expected Output - -```text -MQTT instance is up and running -``` - -### Step 2: Build and Deploy Pipeline Server(s) - -#### Update Configuration with Number of Replicas - -Update the number of replicas in the Pipeline Server deployment configuration [`pipeline-server-worker/pipeline-server.yaml#L8`](./pipeline-server-worker/pipeline-server.yaml#L8) to match the number of nodes in the cluster. - -#### Build and Deploy - -This command adds host system proxy settings to `pipeline-server-worker/pipeline-server.yaml` and deploys it. - - > The following command uses the pre built docker image from [intel/dlstreamer-pipeline-server:0.7.1](https://hub.docker.com/r/intel/dlstreamer-pipeline-server). To use a local image instead run `BASE_IMAGE=dlstreamer-pipeline-server-gstreamer:latest ./pipeline-server-worker/build.sh` - -##### Command - -```bash -./pipeline-server-worker/build.sh -./pipeline-server-worker/deploy.sh -``` - -##### Expected Output - -```text -All Pipeline Server instances are up and running -``` - -#### Check Status +Use the following command to deploy the Pipeline Server(CPU,GPU), HAProxy and MQTT services to the cluster. -##### Command +The command + * Deletes existing Pipeline Server workers. + * Creates namespace `pipeline-server`, sets it as default and deploys Services in that namespace. + * Uses the pre built docker image from [intel/dlstreamer-pipeline-server:0.7.2](https://hub.docker.com/r/intel/dlstreamer-pipeline-server). To use a local image instead run `BASE_IMAGE=dlstreamer-pipeline-server-gstreamer:latest ./build_deploy_all.sh` + * Build and deploys MQTT, Pipeline Server worker and HAProxy. Each node will have one Pipeline Server Worker, if `GPU` available on machine, `GPU` worker will be started, else `CPU` worker. + * Uses [Kubernetes Node Feature Discovery](https://kubernetes-sigs.github.io/node-feature-discovery/stable/get-started/index.html) and [Intel(R) GPU device plugin for Kubernetes](https://github.com/intel/intel-device-plugins-for-kubernetes/blob/v0.23.0/cmd/gpu_plugin/README.md) to discover, find device resources and enable GPU. + * Runs script [haproxy/check_for_changes.sh](haproxy/check_for_changes.sh) in background that looks for changes in Pipeline server pods and rebuilds and deploys haproxy. -```bash -microk8s kubectl get pods -``` - -##### Expected Output -```text -NAME READY STATUS -mqtt-deployment-7d85664dc7-f976h 1/1 Running -pipeline-server-deployment-7479f5d494-2wkkk 1/1 Running -``` - -### Step 3: Build and Deploy HAProxy - -This will enable load balancing of Pipeline Server REST Requests through the cluster on port 31000. 
- -> Note: Pipeline Server pod(s) must be up and running before building and deploying HAProxy. - -#### Build and Deploy - -##### Command - -```bash -./haproxy/build.sh -./haproxy/deploy.sh -``` - -##### Expected Output - -```text -HAProxy Service started -``` - -#### Check Status - -Check status of all pods - -##### Command - -```bash -microk8s kubectl get pods -``` - -##### Expected Output - -```text -NAME READY STATUS -mqtt-deployment-7d85664dc7-f976h 1/1 Running -pipeline-server-deployment-7479f5d494-2wkkk 1/1 Running -haproxy-deployment-7d79cf66f5-4d92n 1/1 Running -``` - -## Adding Nodes to an Existing Deployment - -### Step 1: Prepare New Nodes - -To add nodes to an existing deployment first follow the steps outlined in [Installing MicroK8s](#installing-microk8s) and [Joining Nodes to the Cluster](#joining-nodes-to-the-cluster) for the nodes to be added to the deployment. - -### Step 2: Update Pipeline Server Configuration with Number of Replicas - -Update the number of replicas in the Pipeline Server deployment configuration [`pipeline-server-worker/pipeline-server.yaml#L8`](./pipeline-server-worker/pipeline-server.yaml#L8) to match the number of nodes in the cluster. - -### Step 3: Redeploy Pipeline Server - -Using the node selected as the `leader`, redploy the pipeline server instances. - -#### Command +### Command ```bash -./pipeline-server-worker/deploy.sh +./build_deploy_all.sh ``` -#### Expected Output - -```text -All Pipeline Server instances are up and running -``` - -### Step 4: Check Status - -#### Command - -```bash -microk8s kubectl get pods | grep 'pipeline-server' -``` - -#### Expected Output - -```text -pipeline-server-deployment-7479f5d494-2wkkk 1/1 Running -pipeline-server-deployment-7479f5d494-2knop 1/1 Running -``` - -### Step 5: Rebuild and Redeploy HAProxy - -This will add the new Pipeline Server pod(s) to the HAProxy config. - -> Note: Pipeline Server pod(s) must be up and running before building and deploying HAProxy. - -#### Command - -```bash -./haproxy/build.sh -./haproxy/deploy.sh -``` - -#### Expected Output +### Expected Output ```text -HAProxy Service started +NAME READY STATUS +mqtt-deployment-64f7b4f5c-89l77 1/1 Running +intel-gpu-plugin-krsch 1/1 Running +pipeline-server-gpu-worker-vknh2 1/1 Running +haproxy-deployment-6c9c989957-9nrkd 1/1 Running +Running process to check for pipeline-server changes in the background + ``` -### Step 6: Check Status - -#### Command +### Check Status of All Pods ```bash microk8s kubectl get pods ``` -#### Expected Output +### Expected Output ```text -NAME READY STATUS -pipeline-server-deployment-7479f5d494-2wkkk 1/1 Running -pipeline-server-deployment-7479f5d494-2knop 1/1 Running -haproxy-deployment-7d79cf66f5-4d92n 1/1 Running -mqtt-deployment-7d85664dc7-f976h 1/1 Running +NAME READY STATUS +mqtt-deployment-64f7b4f5c-89l77 1/1 Running +intel-gpu-plugin-krsch 1/1 Running +pipeline-server-gpu-worker-vknh2 1/1 Running +haproxy-deployment-6c9c989957-9nrkd 1/1 Running ``` ## Sending Pipeline Server Requests to the Cluster -Once pods have been deployed, clients can send pipeline server requests to the cluster via the leader node. The HAProxy service is responsible for load balancing pipeline server requests accross the cluster using a `round-robin` algorithm. +Once pods have been deployed, clients can send pipeline server requests to the cluster via the leader node. The HAProxy service is responsible for load balancing pipeline server requests across the cluster using a `round-robin` algorithm. 
When pipeline servers are deployed, they can also be configured to stop taking new requests based on a `MAX_RUNNING_PIPELINES` setting and/or a `TARGET_FPS` setting. Pipeline servers that reach the configured `MAX_RUNNING_PIPELINES` or have a pipeline instance running with an FPS below the `TARGET_FPS` become unavailable for new requests. -Once all the pipeline servers in the cluster become unavailable, clients receive a `503 Service Unavailable` error from the load balancer. Both `MAX_RUNNING_PIPELINES` and `TARGET_FPS` are set in `pipeline-server-worker/pipeline-server.yaml`. +Once all the pipeline servers in the cluster become unavailable, clients receive a `503 Service Unavailable` error from the load balancer. Both `MAX_RUNNING_PIPELINES` and `TARGET_FPS` are set in `pipeline-server-worker/deployments/base/pipeline-server-worker.yaml`. ### Step 1: Start Pipelines on the Cluster @@ -504,238 +135,77 @@ docker run -it --entrypoint mosquitto_sub eclipse-mosquitto:1.6 --topic infere {"objects":[{"detection":{"bounding_box":{"x_max":0.1800042986869812,"x_min":0.0009236931800842285,"y_max":0.5527437925338745,"y_min":0.04479485750198364},"confidence":0.8942767381668091,"label":"person","label_id":1},"h":366,"roi_type":"person","w":229,"x":1,"y":32},{"detection":{"bounding_box":{"x_max":0.8907946944236755,"x_min":0.3679085373878479,"y_max":0.9973113238811493,"y_min":0.12812647223472595},"confidence":0.9935075044631958,"label":"vehicle","label_id":2},"h":626,"roi_type":"vehicle","w":669,"x":471,"y":92},{"detection":{"bounding_box":{"x_max":0.6346513032913208,"x_min":0.4170849323272705,"y_max":0.17429469525814056,"y_min":0.006016984581947327},"confidence":0.9765880107879639,"label":"vehicle","label_id":2},"h":121,"roi_type":"vehicle","w":278,"x":534,"y":4},{"detection":{"bounding_box":{"x_max":0.9923359751701355,"x_min":0.8340855240821838,"y_max":0.6327562630176544,"y_min":0.03546741604804993},"confidence":0.5069465041160583,"label":"vehicle","label_id":2},"h":430,"roi_type":"vehicle","w":203,"x":1068,"y":26}],"resolution":{"height":720,"width":1280},"source":"https://lvamedia.blob.core.windows.net/public/homes_00425.mkv","timestamp":34300000000} ``` -## Uninstalling - -### Step 1: Undeploy Pipeline Server, HAProxy and MQTT services - -#### Remove Pipeline Server deployment - -```bash -microk8s kubectl delete -f pipeline-server-worker/pipeline-server.yaml -``` - -#### Remove HAProxy deployment - -```bash -microk8s kubectl delete -f haproxy/haproxy.yaml -``` - -#### Remove MQTT deployment - -```bash -microk8s kubectl delete -f mqtt/mqtt.yaml -``` - -### Step 2: Remove Node - -#### Confirm running nodes - -To confirm what nodes are running in your cluster, run: - -##### Command - -```bash -microk8s kubectl get no -``` - -##### Expected Output - -```text -NAME STATUS ROLES AGE VERSION - Ready 96d v1.21.9-3+5bfa682137fad9 -``` - -#### Drain Node - -Drain the node, run below command in worker node you want to remove - -##### Command - -```bash -microk8s kubectl drain -``` - -##### Expected Output - -```text - -node/ drained -``` - -#### Leave Cluster - -Run below command in worker node you want to remove to leave the cluster - -```bash -microk8s leave -``` -#### Remove Node +## Adding Nodes to an Existing Deployment -Run below command on **leader node** +### Step 1: Prepare New Nodes -```bash -microk8s remove-node -``` +To add nodes to an existing deployment first follow the steps outlined in [Installing MicroK8s](docs/microk8s.md#installing-microk8s) and [Joining Node to the 
Cluster](docs/microk8s.md#adding-node-to-the-cluster) for the nodes to be added to the deployment. -### Step 3: Uninstall MicroK8s +### Step 2: Check Status +Pipeline Server should automatically increase the number of deployed CPU/GPU worker pods based on Hardware available on new nodes. HAProxy will be built and deployed automatically for any changes in Pipeline Server pods. #### Command ```bash -./microk8s/uninstall.sh +microk8s kubectl get pods ``` #### Expected Output ```text -========================== -Remove/Purge microk8s -========================== -microk8s removed +NAME READY STATUS +mqtt-deployment-64f7b4f5c-89l77 1/1 Running +intel-gpu-plugin-krsch 1/1 Running +pipeline-server-gpu-worker-vknh2 1/1 Running +pipeline-server-cpu-worker-5gtxg 1/1 Running +haproxy-deployment-6c9c989957-9nrba 1/1 Running ``` ## Examples +Find examples [here](docs/examples.md) that demonstrate how we assessed scaling by calculating _stream density_ across a variety of multi-node cluster scenarios. -These examples will show the following with a target of 30fps per stream: - -- Running a single stream on a single node and exceeding target fps indicating a stream density of at least 1. -- Running two streams on a single node and seeing both of them processing below target fps showing a stream density of 2 cannot be met. -- Adding a second node to cluster and seeing two streams exceeding target fps, thus doubling stream density to 2. - -The examples require [vaclient](../../vaclient/README.md) so the container `dlstreamer-pipeline-server-gstreamer` must be built as per [these instructions](../../README.md#building-the-microservice). - -### Single node with MQTT - -Start stream as follows - -```text -vaclient/vaclient.sh run object_detection/person_vehicle_bike https://lvamedia.blob.core.windows.net/public/homes_00425.mkv --server-address http://:31000 --destination type mqtt --destination host :31020 --destination topic person-vehicle-bike -``` - -Output should be like this (with different instance id and timestamps) - -```text -Starting pipeline object_detection/person_vehicle_bike, instance = e6846cce838311ecaf588a37d8d13e4f -Pipeline running - instance_id = e6846cce838311ecaf588a37d8d13e4f -Timestamp 1533000000 -- vehicle (1.00) [0.39, 0.13, 0.89, 1.00] -- vehicle (0.99) [0.41, 0.01, 0.63, 0.17] -Timestamp 1567000000 -- vehicle (1.00) [0.39, 0.13, 0.88, 1.00] -- vehicle (0.98) [0.41, 0.01, 0.63, 0.17] -Timestamp 1600000000 -- vehicle (1.00) [0.39, 0.13, 0.88, 0.99] -- vehicle (0.98) [0.41, 0.01, 0.63, 0.17] -``` - -Now stop stream using CTRL+C - -```text -^C -Stopping Pipeline... -Pipeline stopped -- vehicle (0.99) [0.39, 0.13, 0.89, 1.00] -- vehicle (0.99) [0.42, 0.00, 0.63, 0.17] -avg_fps: 52.32 -Done -``` - -### Single Node with Two Streams - -For two streams, we won't use MQTT but will measure fps to see if both streams can be processed at 30fps (i.e. can we attain a stream density of 2). Note the use of [model-instance-id](../../docs/defining_pipelines.md#model-persistance-in-openvino-gstreamer-elements) so pipelines can share resources. 
- -```text -vaclient/vaclient.sh run object_detection/person_vehicle_bike https://lvamedia.blob.core.windows.net/public/homes_00425.mkv --server-address http://:31000 --parameter detection-model-instance-id person-vehicle-bike-cpu --number-of-streams 2 -``` - -```text -Starting pipeline 1 -Starting pipeline object_detection/person_vehicle_bike, instance = 646559b0860811ec839b1c697aaaa6b4 -Pipeline 1 running - instance_id = 646559b0860811ec839b1c697aaaa6b4 -Starting pipeline 2 -Starting pipeline object_detection/person_vehicle_bike, instance = 65030b7e860811ec839b1c697aaaa6b4 -Pipeline 2 running - instance_id = 65030b7e860811ec839b1c697aaaa6b4 -2 pipelines running. -Pipeline status @ 7s -- instance=646559b0860811ec839b1c697aaaa6b4, state=RUNNING, 30fps -- instance=65030b7e860811ec839b1c697aaaa6b4, state=RUNNING, 26fps -Pipeline status @ 12s -- instance=646559b0860811ec839b1c697aaaa6b4, state=RUNNING, 29fps -- instance=65030b7e860811ec839b1c697aaaa6b4, state=RUNNING, 26fps -Pipeline status @ 17s -- instance=646559b0860811ec839b1c697aaaa6b4, state=RUNNING, 28fps -- instance=65030b7e860811ec839b1c697aaaa6b4, state=RUNNING, 27fps -Pipeline status @ 22s -- instance=646559b0860811ec839b1c697aaaa6b4, state=RUNNING, 28fps -- instance=65030b7e860811ec839b1c697aaaa6b4, state=RUNNING, 27fps -Pipeline status @ 27s -- instance=646559b0860811ec839b1c697aaaa6b4, state=RUNNING, 28fps -- instance=65030b7e860811ec839b1c697aaaa6b4, state=RUNNING, 27fps -``` - -Results show that we can't quite get to a stream density of 2. - -Use CTRL+C to stop streams. - -```text -^C -Stopping Pipeline... -Pipeline stopped -Stopping Pipeline... -Pipeline stopped -Pipeline status @ 26s -- instance=8db81ca8860d11ecb68672a0c3d9157b, state=ABORTED, 28fps -- instance=8ea33c42860d11ecb68672a0c3d9157b, state=ABORTED, 27fps -avg_fps: 26.78 -Done -``` - -> **Note:** The `avg_fps` metric is determined by the last instance in the list, it is the not the average across all instances. - -### Two Streams on Two Nodes - -We'll add a second node to see if we can get a stream density of 2. +## Limitations -First add a second node as per [Adding Nodes to Existing Deployment](#adding-nodes-to-an-existing-deployment). +- Every time a new Intel® DL Streamer Pipeline Server is added or deleted, HAProxy will be restarted automatically to update configuration. +- We cannot yet query full set of pipeline statuses across all Pipeline Server pods. This means `GET :31000/pipelines/status` may not return complete list. +- When a pipeline runs on a GPU worker, it may take up to 60 seconds to start the pipeline, setting `model-instance-id` with same value limits this issue to the first request as this setting shares resources. -Now we run two streams and monitor fps using the same request as before. This time the work should be shared across the two nodes so we anticipate a higher fps for both streams. 
+## Undeploy Services +Undeploy Pipeline Server, HAProxy and MQTT services +### Command ```bash -vaclient/vaclient.sh run object_detection/person_vehicle_bike https://lvamedia.blob.core.windows.net/public/homes_00425.mkv --server-address http://:31000 --parameter detection-model-instance-id cpu --number-of-streams 2  +./undeploy_all.sh ``` +### Expected Output + ```text -Starting pipeline 1 -Starting pipeline object_detection/person_vehicle_bike, instance = 1ddd102e861111ecb68672a0c3d9157b -Pipeline 1 running - instance_id = 1ddd102e861111ecb68672a0c3d9157b -Starting pipeline 2 -Starting pipeline object_detection/person_vehicle_bike, instance = 0fd59b54861111ecbc0856b37602a80f -Pipeline 2 running - instance_id = 0fd59b54861111ecbc0856b37602a80f -2 pipelines running. -Pipeline status @ 7s -- instance=1ddd102e861111ecb68672a0c3d9157b, state=RUNNING, 54fps -- instance=0fd59b54861111ecbc0856b37602a80f, state=RUNNING, 53fps -Pipeline status @ 12s -- instance=1ddd102e861111ecb68672a0c3d9157b, state=RUNNING, 53fps -- instance=0fd59b54861111ecbc0856b37602a80f, state=RUNNING, 53fps -Pipeline status @ 17s -- instance=1ddd102e861111ecb68672a0c3d9157b, state=RUNNING, 53fps -- instance=0fd59b54861111ecbc0856b37602a80f, state=RUNNING, 53fps -^C -Stopping Pipeline... -Pipeline stopped -Stopping Pipeline... -Pipeline stopped -Pipeline status @ 18s -- instance=1ddd102e861111ecb68672a0c3d9157b, state=ABORTED, 53fps -- instance=0fd59b54861111ecbc0856b37602a80f, state=ABORTED, 53fps -avg_fps: 53.27 -Done + +daemonset.apps "nfd-worker" deleted +service "nfd-master" deleted +deployment.apps "nfd-master" deleted +pod "nfd-worker-z2flv" deleted +pod "nfd-master-75b7c4d897-76f9f" deleted +configmap "kube-root-ca.crt" deleted +daemonset.apps "intel-gpu-plugin" deleted +daemonset.apps "pipeline-server-cpu-worker" deleted +daemonset.apps "pipeline-server-gpu-worker" deleted +service "mqtt-svc" deleted +service "pipeline-server-cpu-service" deleted +service "pipeline-server-gpu-service" deleted +service "haproxy-svc" deleted +deployment.apps "mqtt-deployment" deleted +deployment.apps "haproxy-deployment" deleted +pod "mqtt-deployment-64f7b4f5c-hwbsq" deleted +pod "haproxy-deployment-5d84d7b67-2m9ft" deleted +pod "intel-gpu-plugin-dv8dt" deleted +pod "pipeline-server-gpu-worker-klr2l" deleted + +Stopped and removed all services ``` -See that both streams are over 30fps so a stream density of 2 has been achieved. 
- ## Useful Commands ```bash @@ -748,7 +218,7 @@ microk8s kubectl get nodes -o wide # Check running nodes information in yaml format microk8s kubectl get nodes -o yaml -# Decribe all nodes and details +# Describe all nodes and details microk8s kubectl describe nodes # Describe specific node @@ -758,7 +228,7 @@ microk8s kubectl describe nodes microk8s kubectl get po -A -o wide | awk '{print $6,"\t",$4,"\t",$8,"\t",$2}' # Deletes pod, after deleting pod, kubernetes may automatically start new on based on replicas -microk8s kubectl delete pod name +microk8s kubectl delete pod # Add Service to kubernetes cluster using yaml file microk8s kubectl apply -f @@ -766,12 +236,6 @@ microk8s kubectl apply -f # Delete an existing service from cluster microk8s kubectl delete -f -# Delete Pipeline Server from cluster -microk8s kubectl delete -f pipeline-server-worker/pipeline-server.yaml - -# Delete HAProxy from cluster -microk8s kubectl delete -f haproxy/haproxy.yaml - # Get pods from all namespaces microk8s kubectl get pods --all-namespaces @@ -787,8 +251,9 @@ microk8s kubectl exec -it -- /bin/bash # Restart a deployment microk8s kubectl rollout restart deployment -deployment -# Restart All Pipeline Server deployments -microk8s kubectl rollout restart deploy pipeline-server-deployment +# Restart Pipeline Server workers +microk8s kubectl rollout restart daemonset pipeline-server-cpu-worker +microk8s kubectl rollout restart daemonset pipeline-server-gpu-worker # Restart HAProxy Service microk8s kubectl rollout restart deploy haproxy-deployment @@ -807,13 +272,3 @@ sudo snap remove --purge microk8s microk8s leave ``` -## Limitations - -- Every time a new Intel® DL Streamer Pipeline Server pod is added or an existing pod restarted, HAProxy needs to be reconfigured and deployed by running below commands - - ```bash - ./haproxy/build.sh - ./haproxy/deploy.sh - ``` - -- We cannot yet query full set of pipeline statuses across all Pipeline Server pods. This means `GET :31000/pipelines/status` may not return complete list. diff --git a/samples/kubernetes/build_deploy_all.sh b/samples/kubernetes/build_deploy_all.sh new file mode 100755 index 0000000..ec1f99d --- /dev/null +++ b/samples/kubernetes/build_deploy_all.sh @@ -0,0 +1,35 @@ +#!/bin/bash -e + +WORK_DIR=$(dirname $(readlink -f "$0")) +NAMESPACE=${NAMESPACE:-pipeline-server} +PIPELINE_SERVER_WORKER_DIR="$WORK_DIR/pipeline-server-worker" +RUN_HAPROXY_BACKGROUND=${RUN_HAPROXY_BACKGROUND:-true} + +function launch { $@ + local exit_code=$? 
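+    # If the previous command failed, report the error and stop the script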
+ if [ $exit_code -ne 0 ]; then + echo "ERROR: error with $1" >&2 + exit $exit_code + fi + return $exit_code +} + +$WORK_DIR/undeploy_all.sh + +launch microk8s kubectl create namespace $NAMESPACE + +launch microk8s kubectl config set-context --current --namespace=$NAMESPACE + +launch "$WORK_DIR/mqtt/deploy.sh" + +launch "$PIPELINE_SERVER_WORKER_DIR/build.sh" +launch "$PIPELINE_SERVER_WORKER_DIR/deploy.sh" + +launch "$WORK_DIR/haproxy/build.sh" +launch "$WORK_DIR/haproxy/deploy.sh" + +if [ "$RUN_HAPROXY_BACKGROUND" == "true" ]; then + echo "Running process to check for pipeline-server changes in the background" + PIPELINE_SERVER_PODS=$(microk8s kubectl get pods | grep "pipeline-server.*worker" | awk '{ print $1 " " $4}') + nohup $WORK_DIR/haproxy/check_for_changes.sh "$PIPELINE_SERVER_PODS" >/dev/null 2>&1 & +fi diff --git a/samples/kubernetes/docs/examples.md b/samples/kubernetes/docs/examples.md new file mode 100644 index 0000000..718c774 --- /dev/null +++ b/samples/kubernetes/docs/examples.md @@ -0,0 +1,268 @@ +# Pipeline Server Stream Density Examples + +## Definitions + +| Term | Definition | +|---|---| +| Intel® NUC | Intel® NUC11PAHi7 11th Gen Intel® Core™ i7-1165G7 Processor | +| Intel® Xeon® Processor | Intel® Xeon® Platinum Processor 9221 CPU | + +## Examples + +These examples will show the following with a target of 30fps per stream: + +- Running a single stream on a single node (Intel® NUC) exceeds target fps. This indicates a stream density of at least 1. +- Running four streams on a single node showing each exceeds target fps. This indicates a stream density of 4. +- Adding a second node (Intel® NUC) to cluster and running eight streams showing each exceeds target fps. This indicates a stream density of 8 and that we are scaling effectively with each node added to the cluster. +- Adding a third node (Intel® Xeon® Processor) to cluster and running 30 streams showing each exceeds target fps. This indicates we are utilizing all available resources across the cluster's nodes to run our workloads. + +The examples require [pipeline_client](../../../client/README.md) so the container `dlstreamer-pipeline-server-gstreamer` must be built as per [these instructions](../../../README.md#building-the-microservice). + +## Single Node with MQTT + +Start stream as follows + +```bash +./client/pipeline_client.sh run object_detection/person_vehicle_bike \ + https://lvamedia.blob.core.windows.net/public/homes_00425.mkv \ + --server-address http://:31000 \ + --destination type mqtt --destination host :31020 --destination topic person-vehicle-bike +``` + +Output should be like this (with different instance id and timestamps) + +```text +Starting pipeline object_detection/person_vehicle_bike, instance = e6846cce838311ecaf588a37d8d13e4f +Pipeline running - instance_id = e6846cce838311ecaf588a37d8d13e4f +Timestamp 1533000000 +- vehicle (1.00) [0.39, 0.13, 0.89, 1.00] +- vehicle (0.99) [0.41, 0.01, 0.63, 0.17] +Timestamp 1567000000 +- vehicle (1.00) [0.39, 0.13, 0.88, 1.00] +- vehicle (0.98) [0.41, 0.01, 0.63, 0.17] +Timestamp 1600000000 +- vehicle (1.00) [0.39, 0.13, 0.88, 0.99] +- vehicle (0.98) [0.41, 0.01, 0.63, 0.17] +``` + +Now stop stream using CTRL+C, pipeline will be in `ABORTED` state after. + +```text +^C +Stopping Pipeline... 
+Pipeline stopped +- vehicle (0.99) [0.39, 0.13, 0.89, 1.00] +- vehicle (0.99) [0.42, 0.00, 0.63, 0.17] +avg_fps: 123.33 +Done +``` + +### Single Node with Four Streams + +For four streams, we won't use MQTT but will measure fps to see if all streams can be processed at 30fps (i.e. can we attain a stream density of 4). Note the use of [model-instance-id](../../../docs/defining_pipelines.md#model-persistance-in-openvino-gstreamer-elements) so pipelines can share resources. + +```bash +./client/pipeline_client.sh run object_detection/person_vehicle_bike \ + https://lvamedia.blob.core.windows.net/public/homes_00425.mkv \ + --server-address http://:31000 \ + --parameter detection-model-instance-id person-vehicle-bike \ + --number-of-streams 4 +``` + +```text +Starting pipeline 1 +Starting pipeline object_detection/person_vehicle_bike, instance = 73dd289eb06211ec9dc75ef1db0c4cdf +Pipeline 1 running - instance_id = 73dd289eb06211ec9dc75ef1db0c4cdf +Starting pipeline 2 +Starting pipeline object_detection/person_vehicle_bike, instance = 794fcd9ab06211ec9dc75ef1db0c4cdf +Pipeline 2 running - instance_id = 794fcd9ab06211ec9dc75ef1db0c4cdf +Starting pipeline 3 +Starting pipeline object_detection/person_vehicle_bike, instance = 7faaf5cab06211ec9dc75ef1db0c4cdf +Pipeline 3 running - instance_id = 7faaf5cab06211ec9dc75ef1db0c4cdf +Starting pipeline 4 +Starting pipeline object_detection/person_vehicle_bike, instance = 8651bb16b06211ec9dc75ef1db0c4cdf +Pipeline 4 running - instance_id = 8651bb16b06211ec9dc75ef1db0c4cdf +4 pipelines running. +Pipeline status @ 39s +- instance=73dd289eb06211ec9dc75ef1db0c4cdf, state=RUNNING, 53fps +- instance=794fcd9ab06211ec9dc75ef1db0c4cdf, state=RUNNING, 37fps +- instance=7faaf5cab06211ec9dc75ef1db0c4cdf, state=RUNNING, 36fps +- instance=8651bb16b06211ec9dc75ef1db0c4cdf, state=RUNNING, 30fps +``` + +Results show that stream density of four achieved. + +Use CTRL+C to stop streams, pipeline will be in `ABORTED` state after. + +```text + +Pipeline status @ 76s +- instance=73dd289eb06211ec9dc75ef1db0c4cdf, state=ABORTED, 41fps +- instance=794fcd9ab06211ec9dc75ef1db0c4cdf, state=ABORTED, 31fps +- instance=7faaf5cab06211ec9dc75ef1db0c4cdf, state=ABORTED, 30fps +- instance=8651bb16b06211ec9dc75ef1db0c4cdf, state=ABORTED, 30fps +avg_fps: 33 +Done +``` + +### Eight Streams on Two Nodes + +We'll add a second node to see if we can get a stream density of eight. + +First add a second node(Intel® NUC) as per [Adding Node to Existing Deployment](../README.md#adding-nodes-to-an-existing-deployment). + +Now we run eight streams and monitor fps using the same request as before. This time the work should be shared across the two nodes. + +```bash +./client/pipeline_client.sh run object_detection/person_vehicle_bike \ + https://lvamedia.blob.core.windows.net/public/homes_00425.mkv \ + --server-address http://:31000 \ + --parameter detection-model-instance-id person-vehicle-bike \ + --number-of-streams 8 +``` + +```text + +Starting pipeline 1 +Starting pipeline object_detection/person_vehicle_bike, instance = 9268a11cac9311ec9d92aa618d1feb05 +Pipeline 1 running - instance_id = 9268a11cac9311ec9d92aa618d1feb05 + +Starting pipeline 8 +Starting pipeline object_detection/person_vehicle_bike, instance = 96ad090cac9311ec984ec2ba86c884b6 +Pipeline 8 running - instance_id = 96ad090cac9311ec984ec2ba86c884b6 +8 pipelines running. 
+Pipeline status @ 18s +- instance=9268a11cac9311ec9d92aa618d1feb05, state=RUNNING, 35fps +- instance=9302aca8ac9311ec984ec2ba86c884b6, state=RUNNING, 37fps +- instance=93a1b230ac9311ec9d92aa618d1feb05, state=RUNNING, 33fps +- instance=943af9b8ac9311ec984ec2ba86c884b6, state=RUNNING, 34fps +- instance=94d9f50eac9311ec9d92aa618d1feb05, state=RUNNING, 32fps +- instance=9573da48ac9311ec984ec2ba86c884b6, state=RUNNING, 32fps +- instance=96138fa2ac9311ec9d92aa618d1feb05, state=RUNNING, 31fps +- instance=96ad090cac9311ec984ec2ba86c884b6, state=RUNNING, 32fps +``` + +See all the streams are over 30fps so a stream density of 8 has been achieved. + +Use CTRL+C to stop streams, pipeline will be in `ABORTED` state after. + +```text + +Pipeline status @ 82s +- instance=9268a11cac9311ec9d92aa618d1feb05, state=ABORTED, 35fps +- instance=9302aca8ac9311ec984ec2ba86c884b6, state=ABORTED, 35fps +- instance=93a1b230ac9311ec9d92aa618d1feb05, state=ABORTED, 32fps +- instance=943af9b8ac9311ec984ec2ba86c884b6, state=ABORTED, 33fps +- instance=94d9f50eac9311ec9d92aa618d1feb05, state=ABORTED, 32fps +- instance=9573da48ac9311ec984ec2ba86c884b6, state=ABORTED, 32fps +- instance=96138fa2ac9311ec9d92aa618d1feb05, state=ABORTED, 31fps +- instance=96ad090cac9311ec984ec2ba86c884b6, state=ABORTED, 32fps +avg_fps: 32.75 +Done +``` + +### 30 Streams on Three Nodes + +We'll add a third node(Intel® Xeon® Processor). As single Intel® Xeon® processor gives stream density of 22 exceeding target fps. By adding to cluster, we should be able to get around 30 streams with target fps. + +First add a third node as per [Adding Node to Existing Deployment](../README.md#adding-nodes-to-an-existing-deployment). + +Now we run 30 streams and monitor fps using the same request as before. This time the work should be shared across the three nodes. + +```bash +./client/pipeline_client.sh run object_detection/person_vehicle_bike \ + https://lvamedia.blob.core.windows.net/public/homes_00425.mkv \ + --server-address http://:31000 \ + --parameter detection-model-instance-id person-vehicle-bike \ + --number-of-streams 30 +``` + +```text + +Starting pipeline 1 +Starting pipeline object_detection/person_vehicle_bike, instance = 83eedaf6b05411eca1ccd2862a904e11 +Pipeline 1 running - instance_id = 83eedaf6b05411eca1ccd2862a904e11 +Starting pipeline 2 +Starting pipeline object_detection/person_vehicle_bike, instance = 84da6610b05411ec95675e5f5863e7f7 +Pipeline 2 running - instance_id = 84da6610b05411ec95675e5f5863e7f7 + +Starting pipeline 29 +Starting pipeline object_detection/person_vehicle_bike, instance = bb43dddab05411eca1ccd2862a904e11 +Pipeline 29 running - instance_id = bb43dddab05411eca1ccd2862a904e11 +Starting pipeline 30 +Starting pipeline object_detection/person_vehicle_bike, instance = bbe191bab05411eca1ccd2862a904e11 +Pipeline 30 running - instance_id = bbe191bab05411eca1ccd2862a904e11 +30 pipelines running. 
+Pipeline status @ 14s +- instance=83eedaf6b05411eca1ccd2862a904e11, state=COMPLETED, 475fps +- instance=84da6610b05411ec95675e5f5863e7f7, state=RUNNING, 51fps +- instance=87445302b05411ec9defc23594411cd1, state=RUNNING, 55fps +- instance=89fecb4ab05411eca1ccd2862a904e11, state=COMPLETED, 477fps +- instance=8a9c9870b05411ec95675e5f5863e7f7, state=RUNNING, 38fps +- instance=90a9cb84b05411ec9defc23594411cd1, state=RUNNING, 37fps +- instance=9758a734b05411eca1ccd2862a904e11, state=COMPLETED, 555fps +- instance=97f66096b05411ec95675e5f5863e7f7, state=RUNNING, 31fps +- instance=9db662ecb05411ec9defc23594411cd1, state=RUNNING, 31fps +- instance=a41838d6b05411eca1ccd2862a904e11, state=COMPLETED, 549fps +- instance=a4b5b4e4b05411ec95675e5f5863e7f7, state=RUNNING, 29fps +- instance=aac2ed5cb05411ec9defc23594411cd1, state=RUNNING, 28fps +- instance=b1729d64b05411eca1ccd2862a904e11, state=RUNNING, 90fps +- instance=b210451eb05411eca1ccd2862a904e11, state=RUNNING, 70fps +- instance=b2ad1452b05411eca1ccd2862a904e11, state=RUNNING, 60fps +- instance=b34a2c88b05411eca1ccd2862a904e11, state=RUNNING, 54fps +- instance=b3e754d6b05411eca1ccd2862a904e11, state=RUNNING, 49fps +- instance=b4842090b05411eca1ccd2862a904e11, state=RUNNING, 45fps +- instance=b5212c96b05411eca1ccd2862a904e11, state=RUNNING, 42fps +- instance=b5be3676b05411eca1ccd2862a904e11, state=RUNNING, 40fps +- instance=b65b602cb05411eca1ccd2862a904e11, state=RUNNING, 38fps +- instance=b6f87196b05411eca1ccd2862a904e11, state=RUNNING, 37fps +- instance=b7956d66b05411eca1ccd2862a904e11, state=RUNNING, 35fps +- instance=b832b314b05411eca1ccd2862a904e11, state=RUNNING, 34fps +- instance=b8cf8608b05411eca1ccd2862a904e11, state=RUNNING, 33fps +- instance=b96c52e4b05411eca1ccd2862a904e11, state=RUNNING, 32fps +- instance=ba09b476b05411eca1ccd2862a904e11, state=RUNNING, 32fps +- instance=baa6cfd6b05411eca1ccd2862a904e11, state=RUNNING, 31fps +- instance=bb43dddab05411eca1ccd2862a904e11, state=RUNNING, 30fps +- instance=bbe191bab05411eca1ccd2862a904e11, state=RUNNING, 30fps +``` + +See all 30 streams are close to 30fps so a stream density of 30 has been achieved. + +Use CTRL+C to stop streams, pipeline will be in `ABORTED` state after. 
+ +```text + +Pipeline status @ 14s +- instance=83eedaf6b05411eca1ccd2862a904e11, state=COMPLETED, 475fps +- instance=84da6610b05411ec95675e5f5863e7f7, state=COMPLETED, 42fps +- instance=87445302b05411ec9defc23594411cd1, state=ABORTED, 40fps +- instance=89fecb4ab05411eca1ccd2862a904e11, state=COMPLETED, 477fps +- instance=8a9c9870b05411ec95675e5f5863e7f7, state=ABORTED, 35fps +- instance=90a9cb84b05411ec9defc23594411cd1, state=ABORTED, 34fps +- instance=9758a734b05411eca1ccd2862a904e11, state=COMPLETED, 555fps +- instance=97f66096b05411ec95675e5f5863e7f7, state=ABORTED, 31fps +- instance=9db662ecb05411ec9defc23594411cd1, state=ABORTED, 30fps +- instance=a41838d6b05411eca1ccd2862a904e11, state=COMPLETED, 549fps +- instance=a4b5b4e4b05411ec95675e5f5863e7f7, state=ABORTED, 31fps +- instance=aac2ed5cb05411ec9defc23594411cd1, state=ABORTED, 29fps +- instance=b1729d64b05411eca1ccd2862a904e11, state=ABORTED, 43fps +- instance=b210451eb05411eca1ccd2862a904e11, state=ABORTED, 39fps +- instance=b2ad1452b05411eca1ccd2862a904e11, state=ABORTED, 36fps +- instance=b34a2c88b05411eca1ccd2862a904e11, state=ABORTED, 35fps +- instance=b3e754d6b05411eca1ccd2862a904e11, state=ABORTED, 34fps +- instance=b4842090b05411eca1ccd2862a904e11, state=ABORTED, 33fps +- instance=b5212c96b05411eca1ccd2862a904e11, state=ABORTED, 32fps +- instance=b5be3676b05411eca1ccd2862a904e11, state=ABORTED, 32fps +- instance=b65b602cb05411eca1ccd2862a904e11, state=ABORTED, 31fps +- instance=b6f87196b05411eca1ccd2862a904e11, state=ABORTED, 31fps +- instance=b7956d66b05411eca1ccd2862a904e11, state=ABORTED, 31fps +- instance=b832b314b05411eca1ccd2862a904e11, state=ABORTED, 31fps +- instance=b8cf8608b05411eca1ccd2862a904e11, state=ABORTED, 31fps +- instance=b96c52e4b05411eca1ccd2862a904e11, state=ABORTED, 30fps +- instance=ba09b476b05411eca1ccd2862a904e11, state=ABORTED, 30fps +- instance=baa6cfd6b05411eca1ccd2862a904e11, state=ABORTED, 30fps +- instance=bb43dddab05411eca1ccd2862a904e11, state=ABORTED, 30fps +- instance=bbe191bab05411eca1ccd2862a904e11, state=ABORTED, 30fps +avg_fps: 97.26 +Done +``` \ No newline at end of file diff --git a/samples/kubernetes/docs/microk8s.md b/samples/kubernetes/docs/microk8s.md new file mode 100644 index 0000000..67b079f --- /dev/null +++ b/samples/kubernetes/docs/microk8s.md @@ -0,0 +1,334 @@ +# MicroK8s + +| [Installing Microk8s](#installing-microk8s) | [Adding Node](#adding-node-to-the-cluster) | [Removing Node](#removing-node-from-cluster) | [Uninstalling Microk8s](#uninstalling-microk8s) + +The following steps, installation and deployment scripts have been tested on Ubuntu 20.04. Other operating systems may have additional requirements and are beyond the scope of this document. + +## Installing MicroK8s + +For each node that will be in the cluster run the following commands to install MicroK8s along with its dependencies. These steps must be performed on each node individually. Please review the contents of [microk8s/install.sh](microk8s/install.sh) and [microk8s/install_addons.sh](./microk8s/install_addons.sh) as these scripts will install additional components on your system as well as make changes to your groups and environment variables. 
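+
+For example, you can page through both scripts from the `samples/kubernetes` directory before running them:
+
+```bash
+# Review what the installers will change (packages, group membership,
+# proxy-related environment variables) before executing them
+less ./microk8s/install.sh
+less ./microk8s/install_addons.sh
+```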
+ +### Step 1: Install MicroK8s Base + +#### Command +```bash +cd ./samples/kubernetes +sudo -E ./microk8s/install.sh +``` + +#### Expected Output +```text + +Assigning to microk8s group +``` + +> NOTE: If you are running behind a proxy please ensure that your `NO_PROXY` and `no_proxy` environment variables are set correctly to allow cluster nodes to communicate directly. You can run these commands to set this up automatically: +> ```bash +> UPDATE_NO_PROXY=true sudo -E ./microk8s/install.sh +> su - $USER +> ``` + +### Step 2: Activate Group Membership + +Your user is now a member of a newly added 'microk8s' group. However, the current terminal session will not be aware of this until you issue this command: + +#### Command + +```bash +newgrp microk8s +groups | grep microk8s +``` + +#### Expected Output +```text + microk8s +``` + +### Step 3: Install MicroK8s Add-Ons + +Next we need to install add-on components into the cluster. These enable Docker Registry and DNS. + +#### Command + +```bash +./microk8s/install_addons.sh +``` + +Note that this script may take **several minutes** to complete. + +#### Expected Output + +```text +Started. +Metrics-Server is enabled +DNS is enabled +Ingress is enabled +Metrics-Server is enabled +DNS is enabled +The registry is enabled +``` + +The `install_addons.sh` script automatically monitors for the Kubernetes system pods to reach the `Running` state. This may take a few minutes. During this phase it reports: +```text +One or more dependent services are in non-Running state... +One or more dependent services are in non-Running state... + +Confirming nodes are ready... +Dependent services are now ready.. +``` + +### Step 4: Wait for Kubernetes System Pods to Reach Running State + +At this point we need to wait for the Kubernetes system pods to reach the running state. This may take a few minutes. + +Check that the installation was successful by confirming `STATUS` is `Running` for all pods. Pods will cycle through `ContainerCreating`, `Pending`, and `Waiting` states but all should eventually reach the `Running` state. After a few minutes if all pods do not reach the `Running` state refer to [application cluster troubleshooting tips](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-pod-replication-controller/) for more help. + +> Troubleshooting Tip: If you see `Pending` or `ContainerCreating` after waiting more than a few minutes, you may need to modify your environment variables with respect to proxy settings and restart MicroK8s. Do this by running `microk8s stop`, modifying the environment variables in your shell, and then running `microk8s start`. Then check the status of pods by running this command again. 
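+>
+> A minimal sketch of that sequence, assuming placeholder proxy URLs; the `no_proxy` entry below includes the default MicroK8s pod and service ranges, so adjust all values to match your environment:
+> ```bash
+> # Stop MicroK8s, adjust proxy variables in the current shell, then start it again
+> microk8s stop
+> export http_proxy=http://proxy.example.com:911
+> export https_proxy=http://proxy.example.com:912
+> export no_proxy=localhost,127.0.0.1,10.1.0.0/16,10.152.183.0/24
+> microk8s start
+> ```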
+ +#### Command + +```bash +microk8s kubectl get pods --all-namespaces +``` + +#### Expected Output + +```text +NAMESPACE NAME READY STATUS RESTARTS AGE +kube-system calico-node-mhvlc 1/1 Running 0 4m28s +kube-system metrics-server-8bbfb4bdb-pl6g7 1/1 Running 0 3m1s +kube-system calico-kube-controllers-f7868dd95-mkjjk 1/1 Running 0 4m30s +kube-system dashboard-metrics-scraper-78d7698477-pgpkj 1/1 Running 0 86s +kube-system coredns-7f9c69c78c-8vjr4 1/1 Running 0 86s +ingress nginx-ingress-microk8s-controller-rjcpr 1/1 Running 0 86s +kube-system kubernetes-dashboard-85fd7f45cb-h82gk 1/1 Running 0 86s +kube-system hostpath-provisioner-5c65fbdb4f-42pdn 1/1 Running 0 86s +container-registry registry-9b57d9df8-vtmsj 1/1 Running 0 86s +``` + +### Step 5: Setup Proxy Server DNS +> Note: This step is required if you are running behind proxy, skip otherwise. + +Use the following steps to set up the MicroK8s DNS service correctly. + +#### 1. Identify host network’s configured DNS servers + +##### Command + +```bash +systemd-resolve --status | grep "Current DNS" --after-context=3 +``` + +##### Expected Output + +```text +Current DNS Server: 10.22.1.1 + DNS Servers: + + +``` + +#### 2. Disable MicroK8s DNS + +##### Command + +```bash +microk8s disable dns +``` + +##### Expected Output + +```text +Disabling DNS +Reconfiguring kubelet +Removing DNS manifest +serviceaccount "coredns" deleted +configmap "coredns" deleted +deployment.apps "coredns" deleted +service "kube-dns" deleted +clusterrole.rbac.authorization.k8s.io "coredns" deleted +clusterrolebinding.rbac.authorization.k8s.io "coredns" deleted +DNS is disabled +``` + +#### 3. Enable DNS with Host DNS Server + +##### Command + +```bash +microk8s enable dns:,, +``` + +##### Expected Output + +```text +Enabling DNS +Applying manifest +serviceaccount/coredns created +configmap/coredns created +deployment.apps/coredns created +service/kube-dns created +clusterrole.rbac.authorization.k8s.io/coredns created +clusterrolebinding.rbac.authorization.k8s.io/coredns created +Restarting kubelet +DNS is enabled +``` + +#### 4. Confirm Update + +##### Command + +```bash +sh -c "until microk8s.kubectl rollout status deployments/coredns -n kube-system -w; do sleep 5; done" +``` + +##### Expected Output + +```text +deployment "coredns" successfully rolled out +``` + +## Adding Node to the Cluster + +> Note: This step is only required if you have 2 or more nodes, skip otherwise. + +### Step 1: Select Leader and Add Node + +Choose one of your nodes as the `leader` node. + +For each additional node that will be in the cluster, issue the following command on the `leader` node. You will need to do this once for each node you want to add. + +#### Command + +```bash +microk8s add-node +``` + +#### Expected Output + +You should see output as follows, including the IP address of the primary/controller host and unique token for the node you are adding to use during connection to the cluster. + +```bash +From the node you wish to join to this cluster, run the following: +microk8s join :25000/02c66e66e811fe2c697b1cd5d31bfba2/023e49528889 + +If the node you are adding is not reachable through the default interface you can use one of the following: + microk8s join :25000/02c66e66e811fe2c697b1cd5d31bfba2/023e49528889 + microk8s join 172.17.0.1:25000/02c66e66e811fe2c697b1cd5d31bfba2/023e49528889 +``` + +### Step 2: Join Node to Cluster + +Run `join` command shown in the above response on each `worker node` to be added. 
+ +#### Command + +```bash +microk8s join :25000/02c66e66e811fe2c697b1cd5d31bfba2/023e49528889 +``` + +#### Expected Output + +```text +Contacting cluster at +Waiting for this node to finish joining the cluster. .. +``` + +If you encounter an error `Connection failed. Invalid token (500)` your token may have expired or you already used it for another node. To resolve, run the `add-node` command on the leader node to get a new token. + +### Step 3: Confirm Cluster Nodes + +To confirm what nodes are running in your cluster, run: + +#### Command + +```bash +microk8s kubectl get no +``` + +#### Expected Output + +```text +NAME STATUS ROLES AGE VERSION +vcplab003 Ready 3d v1.21.5-3+83e2bb7ee39726 +vcplab002 Ready 84s v1.21.5-3+83e2bb7ee39726 +``` + +## Removing Node from Cluster + +### Confirm Running Nodes + +To confirm what nodes are running in your cluster, run: + +#### Command + +```bash +microk8s kubectl get no +``` + +#### Expected Output + +```text +NAME STATUS ROLES AGE VERSION + Ready 96d v1.21.9-3+5bfa682137fad9 +``` + +### Drain Node + +Drain the node, run below command in worker node you want to remove + +#### Command + +```bash +microk8s kubectl drain --ignore-daemonsets +``` + +#### Expected Output + +```text + +node/ drained +``` + +### Leave Cluster + +Run below command in worker node you want to remove to leave the cluster + +```bash +microk8s leave +``` + +### Remove Node + +Run below command on **leader node** + +```bash +microk8s remove-node +``` + +## Uninstalling Microk8s + +### Step 1: Undeploy Pipeline Server, HAProxy and MQTT services + +Follow steps to [Undeploy Services](../README.md#undeploy-services) + +### Step 2: Remove Node + +Follow steps to [Removing nodes from cluster](#removing-node-from-cluster) + +### Step 3: Uninstall MicroK8s + +#### Command + +```bash +./microk8s/uninstall.sh +``` + +#### Expected Output + +```text +========================== +Remove/Purge microk8s +========================== +microk8s removed +``` diff --git a/samples/kubernetes/haproxy/build.sh b/samples/kubernetes/haproxy/build.sh index 815dcbe..01b56dc 100755 --- a/samples/kubernetes/haproxy/build.sh +++ b/samples/kubernetes/haproxy/build.sh @@ -21,13 +21,13 @@ GET_SERVER_STRING=" server server-name server-address:8080 check" running=0 for (( i=0; i<20; ++i)); do - running=$(microk8s kubectl get pods --all-namespaces | grep "pipeline-server" | awk '{ print $4 }' | grep 'Running' | wc -l) + running=$(microk8s kubectl get pods | grep "pipeline-server.*worker" | awk '{ print $3 }' | grep 'Running' | wc -l) if [ $running -gt 0 ]; then echo " $running pipeline-server services running " break else echo "No pipeline-server services are running" - echo "$(microk8s kubectl get pods --all-namespaces | grep "pipeline-server")" + echo "$(microk8s kubectl get pods | grep "pipeline-server.*worker")" sleep 10 fi done diff --git a/samples/kubernetes/haproxy/check_for_changes.sh b/samples/kubernetes/haproxy/check_for_changes.sh new file mode 100755 index 0000000..98ef417 --- /dev/null +++ b/samples/kubernetes/haproxy/check_for_changes.sh @@ -0,0 +1,31 @@ +#!/bin/bash -e + +WORK_DIR=$(dirname $(readlink -f "$0")) + +function launch { $@ + local exit_code=$? 
+ if [ $exit_code -ne 0 ]; then + echo "ERROR: error with $1" >&2 + exit $exit_code + fi + return $exit_code +} + +PIPELINE_SERVER_PODS=${PIPELINE_SERVER_PODS:-$1} +echo $PIPELINE_SERVER_PODS +echo "Starting loop to look for any changes in Pipeline Servers" + +while true +do + sleep 10 + echo "looking for changes in Pipeline Servers" + pods=$(microk8s kubectl get pods | grep "pipeline-server.*worker" | grep 'Running' | awk '{ print $1 " " $4}') + if [ "$PIPELINE_SERVER_PODS" != "$pods" ]; then + echo "Pipeline server pod added or restarted" + echo "$pods" + launch $WORK_DIR/build.sh + launch $WORK_DIR/deploy.sh + PIPELINE_SERVER_PODS=$(microk8s kubectl get pods | grep "pipeline-server.*worker" | grep 'Running' | awk '{ print $1 " " $4}') + fi +done +echo "exited" diff --git a/samples/kubernetes/haproxy/deploy.sh b/samples/kubernetes/haproxy/deploy.sh index 8f2e826..caab667 100755 --- a/samples/kubernetes/haproxy/deploy.sh +++ b/samples/kubernetes/haproxy/deploy.sh @@ -22,7 +22,7 @@ microk8s kubectl rollout restart deploy haproxy-deployment sleep 10 for (( i=0; i<10; ++i)); do - terminating=$(microk8s kubectl get pods --all-namespaces | grep "haproxy" | awk '{ print $4 }' | grep 'Terminating' | wc -l) + terminating=$(microk8s kubectl get pods | grep "haproxy" | awk '{ print $3 }' | grep 'Terminating' | wc -l) echo "Waiting for haproxy service to start......." if [ $terminating == 0 ]; then echo "HAProxy Service started" @@ -30,4 +30,6 @@ for (( i=0; i<10; ++i)); do else sleep 10 fi -done \ No newline at end of file +done + +echo "$(microk8s kubectl get pods)" \ No newline at end of file diff --git a/samples/kubernetes/microk8s/install_addons.sh b/samples/kubernetes/microk8s/install_addons.sh index 6e4b020..2e4150f 100755 --- a/samples/kubernetes/microk8s/install_addons.sh +++ b/samples/kubernetes/microk8s/install_addons.sh @@ -40,7 +40,7 @@ if [ "$SKIP_CHECKS" == "false" ]; then sleep 3 # Sanity check to confirm all dependent services are running. for (( i=0; i<20; ++i)); do - non_running_status=$(microk8s kubectl get pods --all-namespaces | awk '{ print $4 }' | grep -vE 'Running|STATUS') + non_running_status=$(microk8s kubectl get pods --all-namespaces | awk '{ print $3 }' | grep -vE 'Running|STATUS') if [ -z "$non_running_status" ]; then echo "All Services are in Running state." break diff --git a/samples/kubernetes/mqtt/deploy.sh b/samples/kubernetes/mqtt/deploy.sh index 7e28b20..81bcac5 100755 --- a/samples/kubernetes/mqtt/deploy.sh +++ b/samples/kubernetes/mqtt/deploy.sh @@ -16,7 +16,7 @@ for (( i=0; i<25; ++i)); do not_running=$(microk8s kubectl get pods | grep "mqtt" | grep -vE 'Running' | wc -l) echo "Waiting for mqtt to start......." 
if [ $not_running == 0 ]; then - echo "mqtt instance is up and running" + echo "MQTT instance is up and running" break else sleep 10 @@ -26,3 +26,5 @@ if [ $not_running != 0 ]; then echo "Failed to deploy mqtt" exit 1 fi + +echo "$(microk8s kubectl get pods)" diff --git a/samples/kubernetes/pipeline-server-worker/build.sh b/samples/kubernetes/pipeline-server-worker/build.sh index ba5a1c4..232d013 100755 --- a/samples/kubernetes/pipeline-server-worker/build.sh +++ b/samples/kubernetes/pipeline-server-worker/build.sh @@ -2,18 +2,17 @@ SCRIPT_DIR=$(dirname $(readlink -f "$0")) WORK_DIR="$SCRIPT_DIR/docker" -BASE_IMAGE=${BASE_IMAGE:-intel/dlstreamer-pipeline-server:0.7.1} - -PIPELINE_SERVER_WORKER_IMAGE=${PIPELINE_SERVER_WORKER_IMAGE:-"localhost:32000/dlstreamer-pipeline-server-worker:latest"} +BASE_IMAGE=${BASE_IMAGE} +PIPELINE_SERVER_WORKER_IMAGE="localhost:32000/dlstreamer-pipeline-server-worker:latest" +BUILD_ARGS=$(env | cut -f1 -d= | grep -E '_(proxy|REPO|VER)$' | sed 's/^/--build-arg / ' | tr '\n' ' ') +if [ ! -z "$BASE_IMAGE" ]; then + echo "Building Pipeline Server Worker image using Base image $BASE_IMAGE" + BUILD_ARGS+=" --build-arg BASE=$BASE_IMAGE " +fi echo "Building Pipeline Server image with feedback files" - -docker build --network=host \ - --build-arg no_proxy --build-arg https_proxy --build-arg socks_proxy --build-arg http_proxy \ - --build-arg BASE=$BASE_IMAGE \ - -t $PIPELINE_SERVER_WORKER_IMAGE $WORK_DIR - +docker build --network=host $BUILD_ARGS -t $PIPELINE_SERVER_WORKER_IMAGE $WORK_DIR echo "pushing $PIPELINE_SERVER_WORKER_IMAGE into k8s registry..." docker push $PIPELINE_SERVER_WORKER_IMAGE diff --git a/samples/kubernetes/pipeline-server-worker/deploy.sh b/samples/kubernetes/pipeline-server-worker/deploy.sh index 2239b91..bf30384 100755 --- a/samples/kubernetes/pipeline-server-worker/deploy.sh +++ b/samples/kubernetes/pipeline-server-worker/deploy.sh @@ -2,35 +2,8 @@ WORK_DIR=$(dirname $(readlink -f "$0")) ADD_PROXY_ENV=${ADD_PROXY_ENV:-true} -PIPELINE_SERVER_YAML=${WORK_DIR}/pipeline-server.yaml - -if [ "$ADD_PROXY_ENV" == "true" ]; then - echo "Adding proxy settings to $PIPELINE_SERVER_YAML" - PROXY_CONFIG= - NEWLINE=$'\n' - - if [[ ! -z "$http_proxy" && -z $(grep -R http_proxy $PIPELINE_SERVER_YAML) ]]; then - PROXY_CONFIG=${PROXY_CONFIG}" - name: http_proxy"${NEWLINE} - PROXY_CONFIG=${PROXY_CONFIG}" value: "${http_proxy}${NEWLINE} - fi - - if [[ ! -z "$https_proxy" && -z $(grep -R https_proxy $PIPELINE_SERVER_YAML) ]]; then - PROXY_CONFIG=${PROXY_CONFIG}" - name: https_proxy"${NEWLINE} - PROXY_CONFIG=${PROXY_CONFIG}" value: "${https_proxy}${NEWLINE} - fi - - if [[ ! -z "$no_proxy" && -z $(grep -R no_proxy $PIPELINE_SERVER_YAML) ]]; then - PROXY_CONFIG=${PROXY_CONFIG}" - name: no_proxy"${NEWLINE} - PROXY_CONFIG=${PROXY_CONFIG}" value: "${no_proxy}${NEWLINE} - fi - - if [ ! -z "$PROXY_CONFIG" ]; then - PROXY_CONFIG=" env:${NEWLINE}${PROXY_CONFIG}" - FILE_OUTPUT="$(awk -v r="$PROXY_CONFIG" '{gsub(/ env:/,r)}1' $PIPELINE_SERVER_YAML )" - echo "$FILE_OUTPUT" > $PIPELINE_SERVER_YAML - fi - -fi +CPU_DEVICE_DEPLOYMENT_DIR="${WORK_DIR}/deployments/cpu" +GPU_DEVICE_DEPLOYMENT_DIR="${WORK_DIR}/deployments/gpu" function launch { $@ local exit_code=$? 
@@ -41,14 +14,33 @@ function launch { $@ return $exit_code } -echo "Using $PIPELINE_SERVER_YAML to deploy pipeline server" -launch microk8s kubectl apply -f $PIPELINE_SERVER_YAML +echo "Deploying Intel GPU device plugin " +launch microk8s kubectl apply -k https://github.com/kubernetes-sigs/node-feature-discovery/deployment/overlays/default?ref=v0.10.1 +launch microk8s kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/gpu_plugin/overlays/nfd_labeled_nodes?ref=v0.23.0 +sleep 10 +running=0 +for (( i=0; i<25; ++i)); do + running=$(microk8s kubectl get pods | grep "intel-gpu-plugin" | grep -E 'Running' | wc -l) + echo "Waiting for Intel GPU plugin to start......." + if [ $running == 0 ]; then + sleep 10 + else + echo "Intel GPU plugin enabled" + break + fi +done + +echo "Deploying Pipeline Server CPU worker" +launch microk8s kubectl apply -k $CPU_DEVICE_DEPLOYMENT_DIR + +echo "Deploying Pipeline Server GPU worker" +launch microk8s kubectl apply -k $GPU_DEVICE_DEPLOYMENT_DIR sleep 10 not_running=0 for (( i=0; i<25; ++i)); do - not_running=$(microk8s kubectl get pods | grep "pipeline-server" | grep -vE 'Running' | wc -l) + not_running=$(microk8s kubectl get pods | grep "pipeline-server.*worker" | grep -vE 'Running' | wc -l) echo "Waiting for Pipeline Server instances to start......." if [ $not_running == 0 ]; then echo "All Pipeline Server instances are up and running" @@ -61,3 +53,5 @@ if [ $not_running != 0 ]; then echo "Failed to deploy Pipeline Server, not all Services are in running state" exit 1 fi + +echo "$(microk8s kubectl get pods)" diff --git a/samples/kubernetes/pipeline-server-worker/deployments/base/kustomization.yaml b/samples/kubernetes/pipeline-server-worker/deployments/base/kustomization.yaml new file mode 100644 index 0000000..c9ed7ce --- /dev/null +++ b/samples/kubernetes/pipeline-server-worker/deployments/base/kustomization.yaml @@ -0,0 +1,2 @@ +resources: + - pipeline-server-worker.yaml diff --git a/samples/kubernetes/pipeline-server-worker/pipeline-server.yaml b/samples/kubernetes/pipeline-server-worker/deployments/base/pipeline-server-worker.yaml similarity index 90% rename from samples/kubernetes/pipeline-server-worker/pipeline-server.yaml rename to samples/kubernetes/pipeline-server-worker/deployments/base/pipeline-server-worker.yaml index b28bf29..a8991bd 100644 --- a/samples/kubernetes/pipeline-server-worker/pipeline-server.yaml +++ b/samples/kubernetes/pipeline-server-worker/deployments/base/pipeline-server-worker.yaml @@ -1,11 +1,10 @@ apiVersion: apps/v1 -kind: Deployment +kind: DaemonSet metadata: - name: pipeline-server-deployment + name: pipeline-server-worker labels: app: pipeline-server spec: - replicas: 1 selector: matchLabels: app: pipeline-server @@ -30,7 +29,7 @@ spec: apiVersion: v1 kind: Service metadata: - name: pipeline-server-svc + name: pipeline-server-service labels: app: pipeline-server spec: diff --git a/samples/kubernetes/pipeline-server-worker/deployments/cpu/cpu-overlay.yaml b/samples/kubernetes/pipeline-server-worker/deployments/cpu/cpu-overlay.yaml new file mode 100644 index 0000000..bdc3a1e --- /dev/null +++ b/samples/kubernetes/pipeline-server-worker/deployments/cpu/cpu-overlay.yaml @@ -0,0 +1,16 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: pipeline-server-cpu-worker +spec: + template: + spec: + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: feature.node.kubernetes.io/pci-0300_8086.present + operator: 
DoesNotExist + containers: + - name: pipeline-server \ No newline at end of file diff --git a/samples/kubernetes/pipeline-server-worker/deployments/cpu/kustomization.yaml b/samples/kubernetes/pipeline-server-worker/deployments/cpu/kustomization.yaml new file mode 100644 index 0000000..9b775d6 --- /dev/null +++ b/samples/kubernetes/pipeline-server-worker/deployments/cpu/kustomization.yaml @@ -0,0 +1,16 @@ +bases: + - ../base +patches: + - target: + kind: DaemonSet + patch: |- + - op: replace + path: /metadata/name + value: pipeline-server-cpu-worker + - target: + kind: Service + patch: |- + - op: replace + path: /metadata/name + value: pipeline-server-cpu-service + - path: cpu-overlay.yaml \ No newline at end of file diff --git a/samples/kubernetes/pipeline-server-worker/deployments/gpu/gpu-overlay.yaml b/samples/kubernetes/pipeline-server-worker/deployments/gpu/gpu-overlay.yaml new file mode 100644 index 0000000..f8d291e --- /dev/null +++ b/samples/kubernetes/pipeline-server-worker/deployments/gpu/gpu-overlay.yaml @@ -0,0 +1,19 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: pipeline-server-gpu-worker +spec: + template: + spec: + nodeSelector: + feature.node.kubernetes.io/pci-0300_8086.present: "true" + containers: + - name: pipeline-server + resources: + limits: + gpu.intel.com/i915: 1 + env: + - name: DETECTION_DEVICE + value: GPU + - name: CLASSIFICATION_DEVICE + value: GPU diff --git a/samples/kubernetes/pipeline-server-worker/deployments/gpu/kustomization.yaml b/samples/kubernetes/pipeline-server-worker/deployments/gpu/kustomization.yaml new file mode 100644 index 0000000..eb508fb --- /dev/null +++ b/samples/kubernetes/pipeline-server-worker/deployments/gpu/kustomization.yaml @@ -0,0 +1,16 @@ +bases: + - ../base +patches: + - target: + kind: DaemonSet + patch: |- + - op: replace + path: /metadata/name + value: pipeline-server-gpu-worker + - target: + kind: Service + patch: |- + - op: replace + path: /metadata/name + value: pipeline-server-gpu-service + - path: gpu-overlay.yaml diff --git a/samples/kubernetes/pipeline-server-worker/docker/Dockerfile b/samples/kubernetes/pipeline-server-worker/docker/Dockerfile index 816a06e..1ad6ef6 100644 --- a/samples/kubernetes/pipeline-server-worker/docker/Dockerfile +++ b/samples/kubernetes/pipeline-server-worker/docker/Dockerfile @@ -1,4 +1,4 @@ -ARG BASE=intel/dlstreamer-pipeline-server:0.7.1 +ARG BASE=intel/dlstreamer-pipeline-server:0.7.2 FROM $BASE diff --git a/samples/kubernetes/pipeline-server-worker/docker/entrypoint.sh b/samples/kubernetes/pipeline-server-worker/docker/entrypoint.sh index de1b11b..93b1655 100755 --- a/samples/kubernetes/pipeline-server-worker/docker/entrypoint.sh +++ b/samples/kubernetes/pipeline-server-worker/docker/entrypoint.sh @@ -1,2 +1,2 @@ /etc/init.d/xinetd restart -python3 -m vaserving \ No newline at end of file +python3 -m server \ No newline at end of file diff --git a/samples/kubernetes/undeploy_all.sh b/samples/kubernetes/undeploy_all.sh new file mode 100755 index 0000000..e552738 --- /dev/null +++ b/samples/kubernetes/undeploy_all.sh @@ -0,0 +1,28 @@ +#!/bin/bash + +WORK_DIR=$(dirname $(readlink -f "$0")) +NAMESPACE=${NAMESPACE:-pipeline-server} + +function launch { $@ + local exit_code=$? 
+ if [ $exit_code -ne 0 ]; then + echo "ERROR: error with $1" >&2 + exit $exit_code + fi + return $exit_code +} + +pkill -f check_for_changes.sh + +launch microk8s kubectl delete serviceaccount,clusterrole,clusterrolebinding,configmap,daemonsets,services,deployments,pods --all --namespace=node-feature-discovery + +launch microk8s kubectl delete configmap,daemonsets,services,deployments,pods --all + +namespace_exists=$(microk8s kubectl get namespaces | grep $NAMESPACE) + +if [ ! -z "$namespace_exists" ]; then + launch microk8s kubectl delete namespaces $NAMESPACE + launch microk8s kubectl config set-context --current --namespace=default +fi + +echo "Stopped and removed all services" diff --git a/samples/record_frames/README.md b/samples/record_frames/README.md index ab1191d..9691673 100644 --- a/samples/record_frames/README.md +++ b/samples/record_frames/README.md @@ -182,7 +182,7 @@ samples/record_frames/run_server.sh --frame-store samples/record_frames/frame_st Check the pipeline is loaded: ``` -vaclient/vaclient.sh list-pipelines +./client/pipeline_client.sh list-pipelines ``` ``` diff --git a/samples/record_frames/run_client.sh b/samples/record_frames/run_client.sh index cba58ed..a7d2e6b 100755 --- a/samples/record_frames/run_client.sh +++ b/samples/record_frames/run_client.sh @@ -40,7 +40,7 @@ if [ -z $FRAME_STORE ]; then fi FILE_LOCATION=$FRAME_STORE/$SPECIFIER.jpg -$ROOT_DIR/vaclient/vaclient.sh start $PIPELINE $MEDIA \ +$ROOT_DIR/client/pipeline_client.sh start $PIPELINE $MEDIA \ --destination type mqtt --destination host $BROKER_ADDR:$BROKER_PORT --destination topic $TOPIC \ --parameter file-location $FILE_LOCATION echo Frame store file location = $FILE_LOCATION diff --git a/samples/record_playback/preproc_callbacks/insert_metadata.py b/samples/record_playback/preproc_callbacks/insert_metadata.py index 5f6be10..066dc27 100644 --- a/samples/record_playback/preproc_callbacks/insert_metadata.py +++ b/samples/record_playback/preproc_callbacks/insert_metadata.py @@ -38,7 +38,9 @@ def load_file(self, file_name): def process_frame(self, frame: VideoFrame, _: float = DETECT_THRESHOLD) -> bool: while self.json_objects: metadata_pts = self.json_objects[0]["timestamp"] + self.offset_timestamp - timestamp_difference = abs(frame.video_meta().buffer.pts - metadata_pts) + # pylint: disable=protected-access + buffer = frame._VideoFrame__buffer + timestamp_difference = abs(buffer.pts - metadata_pts) # A margin of error of 1000 nanoseconds # If the difference is greater than the margin of error: # If frame has a higher pts then the timestamp at the head of the list, @@ -47,7 +49,7 @@ def process_frame(self, frame: VideoFrame, _: float = DETECT_THRESHOLD) -> bool: # its still possible for the timestamp to come up, so break # Otherwise, assume this timestamp at the head of the list is accurate to that frame if timestamp_difference > 1000: - if (frame.video_meta().buffer.pts - metadata_pts) > 0: + if (buffer.pts - metadata_pts) > 0: self.json_objects.pop(0) continue break diff --git a/samples/record_playback/record_playback.py b/samples/record_playback/record_playback.py index f3cc31b..cf1b1f7 100644 --- a/samples/record_playback/record_playback.py +++ b/samples/record_playback/record_playback.py @@ -15,7 +15,7 @@ import gi gi.require_version('Gst', '1.0') # pylint: disable=wrong-import-position -from vaserving.vaserving import VAServing +from server.pipeline_server import PipelineServer # pylint: enable=wrong-import-position # default video folder if not provided @@ -86,7 +86,7 @@ def 
gst_record(options): print("No write permissions for video output directory") return -1 - # Populate the request to provide to VAServing library + # Populate the request to provide to PipelineServer library request = { "source": { "type": "uri", @@ -103,16 +103,16 @@ def gst_record(options): } } - # Start the recording, once complete, stop VAServing + # Start the recording, once complete, stop PipelineServer record_playback_file_dir = os.path.dirname(os.path.realpath(__file__)) - VAServing.start({'log_level': 'INFO', 'pipeline_dir': os.path.join(record_playback_file_dir, "pipelines")}) - pipeline = VAServing.pipeline("object_detection", "segment_record") + PipelineServer.start({'log_level': 'INFO', 'pipeline_dir': os.path.join(record_playback_file_dir, "pipelines")}) + pipeline = PipelineServer.pipeline("object_detection", "segment_record") pipeline.start(request) status = pipeline.status() while (not status.state.stopped()): time.sleep(0.1) status = pipeline.status() - VAServing.stop() + PipelineServer.stop() # Used by playback # If given a file instead of a folder to playback, check to see if file is in the @@ -138,7 +138,7 @@ def gst_playback(options): start_pts = get_timestamp_from_filename(options.input_video_path) location = options.input_video_path - # Populate the request to provide to VAServing library + # Populate the request to provide to PipelineServer library module = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'preproc_callbacks/insert_metadata.py') metadata_args = {"metadata_file_path": options.metadata_file_path, "offset_timestamp": start_pts} request = { @@ -155,16 +155,16 @@ def gst_playback(options): } } - # Start the recording, once complete, stop VAServing + # Start the recording, once complete, stop PipelineServer record_playback_file_dir = os.path.dirname(os.path.realpath(__file__)) - VAServing.start({'log_level': 'INFO', 'pipeline_dir': os.path.join(record_playback_file_dir, "pipelines")}) - pipeline = VAServing.pipeline("recording_playback", "playback") + PipelineServer.start({'log_level': 'INFO', 'pipeline_dir': os.path.join(record_playback_file_dir, "pipelines")}) + pipeline = PipelineServer.pipeline("recording_playback", "playback") pipeline.start(request) status = pipeline.status() while (not status.state.stopped()): time.sleep(0.1) status = pipeline.status() - VAServing.stop() + PipelineServer.stop() def launch_pipeline(options): """Playback the video with metadata inserted back into the video""" diff --git a/samples/webrtc/.gitignore b/samples/webrtc/.gitignore new file mode 100644 index 0000000..61c075e --- /dev/null +++ b/samples/webrtc/.gitignore @@ -0,0 +1,3 @@ +grafana/content/http-stream-dashboard.json +grafana/grafana-storage/** +webserver/www/js-client/** diff --git a/samples/webrtc/README.md b/samples/webrtc/README.md new file mode 100644 index 0000000..1199bbc --- /dev/null +++ b/samples/webrtc/README.md @@ -0,0 +1,223 @@ +# WebRTC Frame Destination + +Intel(R) Deep Learning Streamer (Intel(R) DL Streamer) Pipeline Server supports sending media frames to WebRTC peers running on your network. This demonstrates playback within HTML5 video components supported by compatible browsers running anywhere on your network. The use of additional resources and configuration also allows usage beyond your private network - across cloud and public networks. + +## Getting Started + +The WebRTC sample is comprised of three sample containers that we will run using docker-compose alongside Pipeline Server. 
The general separation of responsibilities for these microservices are shown below. + +![webrtc microservice composition](/docs/images/webrtc_composition.png) + +## WebRTC Protocol + +[WebRTC](https://opensource.com/article/19/1/gstreamer) brings seamless streaming support from/to web browsers. WebRTC makes use of a set of [communication protocols](https://www.w3.org/TR/webrtc/) _similar_ to [RTSP](/README.md#real-time-streaming-protocol-rtsp). + +### WebRTC Signaling Microservice +This sample provides a basic [WebRTC signaling server](https://www.tutorialspoint.com/webrtc/webrtc_signaling.htm) using the starting point provided in [gst-examples](https://gitlab.freedesktop.org/gstreamer/gst-examples/-/blob/master/webrtc/signaling/README.md). +> NOTE: You will need to secure endpoints such as by generating certificates and using secure websocket connections. + +### WebRTC Web Server Microservice +This sample provides a basic [WebRTC web server](https://gitlab.freedesktop.org/gstreamer/gst-examples/-/tree/master/webrtc/sendrecv/js) for out-of-the-box runtime compatibility. + +### WebRTC Grafana Microservice +This sample provides a container built on top of Grafana. This is used to consolidate multiple requests to the WebRTC Web Server on a single dashboard. In the steps below we will show how to build the Sample Pipeline Server Dashboard from our template and include configuration to automatically load this into the image at runtime using their built-in [Infinity](https://grafana.com/grafana/plugins/yesoreyeram-infinity-datasource/) data source. + +## Configuring Client Browser +WebRTC functionality has been validated in Firefox and Chrome browsers on Ubuntu 20.04 hosts with the default Gnome desktop GUI installed. Other browsers are supported through similar updates to configuration. Compatibility checks are available: +* Confirm HTML5 video output compatibility using http://html5test.com/. +* Confirm Browser settings compatibility using https://myownconference.com/blog/en/webrtc/ + +### Mozilla Firefox +In particular be sure to assign settings by navigating to `about:config`: +1. Confirm `media.peerconnection.enabled` is set to `true`. This is assigned by default. +2. Confirm `media.peerconnection.ice.obfuscate_host_addresses` is set to `false`. This allows successful operation with `http://localhost`. +3. The first time navigating to the site, be sure to click the shield at the left of the address bar in your browser and `Enhanced Tracking Protection` is turned `OFF`. + +### Chrome +In particular be sure to assign settings by navigating to `chrome://flags/`: +1. Confirm that `Enable WebRTC actions in Media Session` is set to `Enabled`. +2. Confirm that `Anonymize local IPs exposed by WebRTC` is set to `Disabled`. + +## Build All Dependencies + +Build the images and launch them as containers in docker-compose. + +To do this, open a terminal to the folder where you have cloned Pipeline Server and enter these commands: + +``` +cd ./samples/webrtc +./build.sh +``` + +``` +Successfully tagged webrtc_signaling_server:latest + +Successfully tagged webrtc_webserver:latest + +Successfully tagged webrtc_grafana:latest + +Successfully tagged dlstreamer-pipeline-server-gstreamer:latest +``` + +> Note: On subsequent runs you may pass the optional `--remove-volumes` parameter to build.sh. This removes local volumes used to store Grafana runtime information on dashboard panels. 
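+
+For example, a later rebuild that also clears the stored Grafana state might look like:
+
+```bash
+# Rebuild the sample images and remove the local Grafana storage volume
+./build.sh --remove-volumes
+```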
+ +## Run All Microservices + +Next we will launch three microservices needed to provide our solution: + +```bash +./run.sh +``` + +```text +Creating network "webrtc_app_network" with driver "bridge" +Creating volume "webrtc_grafana-storage" with local driver +Creating webrtc_webserver ... done +Creating webrtc_signaling_server ... done +Creating webrtc_grafana ... done +Creating pipeline_server ... done +``` + +## Launch and Visualize Pipelines + +This section provides examples for starting and viewing WebRTC frame output from the localhost using your host's browser. + +### Start Pipeline + +Starting a [Pipeline Client](/client/README.md) tool to instruct the `pipeline_server` container running in docker-compose to launch our WebRTC enabled pipeline with web-based input media of a parking lot scene. + +``` +../../client/pipeline_client.sh start \ + "object_detection/person_vehicle_bike" \ + "https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true" \ + --webrtc-peer-id "pipeline_webrtc1_29442" +``` + +This produces output: + +``` + +Starting pipeline object_detection/person_vehicle_bike, instance = f84ff6b29bfc11ecb4780242ac150004 +``` + +We then construct a URL that includes this same `peer-id` value as supplied in the request. This query parameter allows the `webrtc_webserver` web app to connect to Pipeline Server and render media frames in your browser. + +``` +http://${HOSTNAME}:8082?destination_peer_id=pipeline_webrtc1_29442 +``` + +> NOTE: The sample web app accepts an _optional_ `instance_id` in case you will be launching a Pipeline Server instance by other means and wish to have the web app independently monitor Pipeline Server instance status otherwise maintain an association with the primary pipeline instance. + ``` + http://${HOSTNAME}:8082?destination_peer_id=pipeline_webrtc1_29442&instance_id=f84ff6b29bfc11ecb4780242ac150004 + ``` + +### Example 1: Visualize in web app + +The `webrtc_webserver` microservice provides an example that uses WebRTC JavaScript client to connect to Pipeline Server through the `webrtc_signaling` microservice. To get familiar with this web app example, perform these steps: + +1. Launch your browser and navigate to the page hosted by the `webrtc_webserver` microservice at http://localhost:8082. + +![webrtc web_server](/docs/images/webrtc-launch-pipeline.png) + +> NOTE: This web page inherits the hostname specified in the address field with the default port `:8080` as the target address for the `pipeline_server` microservice. You may replace `localhost` with the value from `echo $HOSTNAME` (in a terminal) and navigate to http://$HOSTNAME:8082 in your browser (where $HOSTNAME is the name of your current host). + +2. Click `Get Pipelines` to have the web page invoke Pipeline Server's Get Pipelines REST API and populate the dropdown. + +3. Choose a value from the dropdown for `Choose Pipeline` and `Choose Media Source` fields. + +4. Provide a unique value for the `Destination Peer ID` and click the `Launch Pipeline` button. + +![webrtc launch pipeline](/docs/images/webrtc-pipeline-params3.png) + +5. At this point a client is able to begin visualizing the rendered frame output for the launched pipeline via WebRTC. This can be performed now by clicking the Visualize button. Alternately, you may open this page from a browser running on a remote client (at http://$HOSTNAME:8082). 
+
+![webrtc visualize local](/docs/images/webrtc-visualize3.png)
+
+> NOTE: If you do not see errors but also do not see any stream rendering, confirm `Enhanced Tracking Protection` is turned `OFF` for the site. In Firefox you can do this by navigating to `about:preferences`, searching for `Enhanced Tracking Protection`, and clicking the `Manage Exceptions...` button. Confirm the host being used for the `webrtc_webserver` (on port 8082 by default) is on the list of exceptions.
+
+> NOTE: You may optionally expand the "Chosen Pipeline Parameters" and "WebRTC Visualization Parameters" sections to view the relevant parameters that are sent to Pipeline Server's Start Pipeline REST API.
+
+### Initiating More Pipeline Instances
+
+You can disconnect the pipeline and start additional pipelines to visualize directly in the same tab. You may also create multiple tabs to produce 3-4 simultaneous streams.
+
+> IMPORTANT: Be certain to assign a unique value to the `Destination Peer ID` _for each request_ or the browser will be unable to connect to the WebRTC stream.
+
+> NOTE: Keep in mind that we need to either wait for the playing stream to exit and disconnect upon completion, or manually click the `Disconnect` button beneath its video player. Either of these terminates the GStreamer WebRTC render pipeline. Clicking the `Stop Pipeline` button will ABORT the reference GStreamer pipeline and cause the WebRTC pipeline to cease rendering.
+
+### Example 2: Quick Launch + Visualization in Web App
+
+The `./scripts/start_pipeline_psclient.sh` script is provided as a quick way to launch a supported pipeline and render results without manual steps. It prepares a unique `peer-id` and then automatically launches the pipeline and visualization stream (on localhost) when the web page loads.
+
+These scripts launch your host's default browser, which works when the browser is compatible with the settings explained [above](#configuring-client-browser).
+
+> NOTE: You may optionally pass a `--remote` flag to instruct the script to print the URL so you can paste it into a browser running on a remote system (such as Firefox running on your Windows laptop). However, you must replace `localhost` in the URL with the IP address or fully qualified domain name (FQDN) of your host.
+
+   ```bash
+   ./scripts/start_pipeline_psclient.sh --remote
+   ```
+
+   ```text
+   Paste into browser address field:
+   http://pipeline-server.intra.acme.com:8082?destination_peer_id=pipeline_webrtc1_29442&instance_id=36060dc6938c11ec9bfe0242ac140004
+   ```
+
+> HINT: You may modify the `scripts/launch_browser.sh` bash script used by start_pipeline_psclient.sh so that it provides a suitable FQDN for your host.
+
+> NOTE: When the `instance_id` and `destination_peer_id` query parameters are provided in the URL, this web app sample recognizes that the pipeline has already launched and WebRTC visualization should begin immediately. The Pipeline Server instance identifier is used for obtaining status and performance information, while the Destination Peer ID is used to request rendered frames from `webrtcbin`.
+
+### Example 3: Grafana Dashboard with multiple instances
+
+This is perhaps the easiest way to interact with this sample. Metrics related to Pipeline Server and WebRTC may be consumed by integrating with Grafana or similar visualization components.
This sample shows how this may be done using a default Grafana Docker image, populating it with a dashboard that uses Grafana's built-in Infinity datasource to collect FPS metrics and show `active` vs. `completed` pipelines as you run these sample streams.
+
+> NOTE: The example Grafana dashboard is optimized for desktops running at resolutions of 1920x1080 or better, or mobile devices at 1116x1609 or 1609x1116.
+
+Navigate in your browser to `http://localhost:3222` or to the IP address/FQDN of the host running this sample on port `3222`.
+
+The first time you open Grafana you will be prompted for credentials; enter `admin` as the user and `admin` as the password. Then skip or change the credentials on the next page.
+
+Next click the `Search` menu and choose the dashboard titled `Pipeline Server Sample Dashboard` to view this page.
+
+![loaded grafana dashboard](/docs/images/grafana_dashboard_initial.jpg)
+
+As shown, this dashboard is populated with four AJAX iframe panels, each presenting the same interface we used in Example 1. Notice that each time a panel loads it automatically chooses a pipeline and media source and appends random digits to the `peer-id` field.
+
+Click the `Launch Pipeline` and `Visualize` buttons in each panel to see the result.
+
+![webrtc grafana dashboard](/docs/images/grafana_dashboard_active.jpg)
+
+Notice that as each primary pipeline runs, a secondary WebRTC render pipeline also runs. Upon completion we show the results of only the primary pipeline in the panel along the right side of the dashboard. This is populated using content from Pipeline Server's Get Pipeline Status endpoint (http://localhost:8080/pipelines/status).
+
+## Troubleshooting
+
+1. If you encounter issues, be sure to check that your host is running the latest versions of your browser, Docker, and Docker Compose.
+
+1. The `./samples/webrtc/run.sh` command instructs Docker Compose to intelligently start only containers that have changed in composition or configuration since your last build. If you run `./samples/webrtc/teardown.sh` beforehand, all microservices will start from scratch with clear logs.
+   ```
+   ./teardown.sh
+   ./build.sh --remove-volumes
+   ./run.sh
+   ```
+   Navigate in your browser to `http://localhost:3222`.
+
+1. To troubleshoot issues with the `pipeline_server` container's use of GStreamer and the webrtcbin element, consider increasing logging by adding the following to the `environment:` section of `./samples/webrtc/docker-compose.yml`:
+   ```
+   - GST_DEBUG=3,*webrtc*:7
+   ```
+
+1. The `webrtc_grafana` container runs as the 'grafana' user and 'root' group as documented [here](https://grafana.com/docs/grafana/latest/installation/docker/#migrate-to-v51-or-later). Updates to users/content are stored in the volume mount.
+
+   To wipe out the volume and clear all data, run:
+   ```
+   ./samples/webrtc/build.sh --remove-volumes
+   ```
+
+1. To interact with Pipeline Server from a remote browser such as on your Windows laptop or Android mobile device, update the URL in the browser to point to the system hosting the Pipeline Server microservices. This may require changing `localhost` to the IP address or fully qualified domain name (FQDN) of the host running Pipeline Server and its dependent microservices.
+
+   Confirm that the remote system and the host running the Pipeline Server microservices reside on the same network and are configured to allow access. For example, if your remote system is behind a proxy server, check the Network Settings of your system and/or browser.
+
+   > NOTE: Be aware that some corporate networks may restrict access to ports or have other configuration that limits routes, so you may need to check your firewall rules or other network configuration. On restricted or public networks you may extend the implementation by adding secure endpoints and configuring services to utilize STUN/TURN servers -- this will permit connection traversal using the best available throughput for alternate/available ports and public IP negotiation.
+
+1. When the Grafana dashboard fails to load AJAX iframe panels and gives the error "Failed to retrieve requested URL." along with an IP address, consider adding that IP address to the No Proxy list in your browser settings.
diff --git a/samples/webrtc/build.sh b/samples/webrtc/build.sh
new file mode 100755
index 0000000..c84a1f7
--- /dev/null
+++ b/samples/webrtc/build.sh
@@ -0,0 +1,70 @@
+#!/bin/bash
+#
+# Copyright (C) 2022 Intel Corporation.
+#
+# SPDX-License-Identifier: BSD-3-Clause
+#
+DOCKERFILE_DIR=$(dirname "$(readlink -f "$0")")
+SOURCE_DIR=$(dirname "$DOCKERFILE_DIR")
+
+function show_help {
+  echo "usage: ./build.sh"
+  echo "  [ --remove-volumes ] Removes local volumes used to store information during previous session(s)."
+  echo "  [ --help -h -? ] Usage information."
+}
+
+# Get options passed into script, passing through parameters supported by the parent build script.
+while [[ "$#" -gt 0 ]]; do
+  case $1 in
+    -h | -\? | --help)
+      show_help
+      exit
+      ;;
+    *) # Default case: No more known options, so break out of the loop.
+      PASS_THROUGH_PARAMS=$@
+      break ;;
+  esac
+  shift
+done
+
+function launch { echo $@
+  $@
+  local exit_code=$?
+  if [ $exit_code -ne 0 ]; then
+    echo "ERROR: error with $1" >&2
+    if [[ "$1" == *"grafana"* ]]; then
+      echo "To re-build the \"grafana-storage\" volume used for sample Grafana dashboard, pass:"
+      echo "--remove-volumes (requires sudo)"
+    fi
+    exit $exit_code
+  fi
+  return $exit_code
+}
+
+# Build Signaling Server container for simple WebRTC sample
+echo $'\n\n=================================================='
+echo "Building webrtc_signaling_server image..."
+echo "$SOURCE_DIR/webrtc/signaling/build.sh"
+echo "=================================================="
+launch $SOURCE_DIR/webrtc/signaling/build.sh
+
+# Build Web Server container to host WebRTC javascript sample
+echo $'\n\n=================================================='
+echo "Building webrtc_webserver image..."
+echo "$SOURCE_DIR/webrtc/webserver/build.sh"
+echo "=================================================="
+launch $SOURCE_DIR/webrtc/webserver/build.sh
+
+# Build Grafana container for MVP Dashboard using AJAX panels
+echo $'\n\n=================================================='
+echo "Building webrtc_grafana image..."
+echo "$SOURCE_DIR/webrtc/grafana/build.sh"
+echo "=================================================="
+launch $SOURCE_DIR/webrtc/grafana/build.sh $1
+
+# Build Pipeline Server container
+echo $'\n\n=================================================='
+echo "Building Intel(R) DL Streamer Pipeline Server image..."
+echo "$SOURCE_DIR/../docker/build.sh"
+echo "=================================================="
+launch $SOURCE_DIR/../docker/build.sh
diff --git a/samples/webrtc/docker-compose.yml b/samples/webrtc/docker-compose.yml
new file mode 100644
index 0000000..7c4dcca
--- /dev/null
+++ b/samples/webrtc/docker-compose.yml
@@ -0,0 +1,106 @@
+#
+# Copyright (C) 2022 Intel Corporation.
+# +# SPDX-License-Identifier: BSD-3-Clause +# + +version: "3" + +services: + + pipeline_server: + image: dlstreamer-pipeline-server-gstreamer:latest + hostname: pipeline_server + container_name: pipeline_server + environment: + - ENABLE_WEBRTC=true + - WEBRTC_SIGNALING_SERVER=ws://webrtc_signaling_server:8443 + - no_proxy=$no_proxy + - http_proxy=$http_proxy + - https_proxy=$https_proxy + - LOG_LEVEL=DEBUG + depends_on: + - webrtc_signaling_server + - webrtc_webserver + ports: + - '8080:8080' + networks: + - app_network + volumes: + - /tmp:/tmp + user: $USER_ID:$GROUP_ID + + webrtc_signaling_server: + build: ./signaling + image: webrtc_signaling_server:latest + hostname: webrtc_signaling_server + container_name: webrtc_signaling_server + environment: + - no_proxy=$no_proxy + - http_proxy=$http_proxy + - https_proxy=$https_proxy + ports: + - '8443:8443' + networks: + - app_network + entrypoint: /opt/start.sh + + webrtc_webserver: + build: ./webserver + image: webrtc_webserver:latest + hostname: webrtc_webserver + container_name: webrtc_webserver + environment: + - no_proxy=$no_proxy + - http_proxy=$http_proxy + - https_proxy=$https_proxy + - NGINX_PORT=80 + - NGINX_HOST=localhost + ports: + - '8082:80' + networks: + - app_network + user: root + + webrtc_grafana: + build: ./grafana + image: webrtc_grafana:latest + hostname: webrtc_grafana + container_name: webrtc_grafana + depends_on: + - pipeline_server + ports: + - '3222:3000' + networks: + - app_network + cap_drop: + - NET_ADMIN + - SYS_ADMIN + - SYS_MODULE + user: '472:0' + volumes: + - grafana-storage:/var/lib/grafana + security_opt: + - apparmor:unconfined + ulimits: + nproc: 65535 + nofile: + soft: 20000 + hard: 40000 + healthcheck: + test: ["CMD", "curl", "-f", "http://localhost"] + interval: 2m30s + timeout: 10s + retries: 3 + +volumes: + grafana-storage: + driver: local + driver_opts: + type: local + o: bind + device: $PWD/grafana/grafana-storage/ + +networks: + app_network: + driver: "bridge" diff --git a/samples/webrtc/grafana/.dockerignore b/samples/webrtc/grafana/.dockerignore new file mode 100644 index 0000000..ca8d7f8 --- /dev/null +++ b/samples/webrtc/grafana/.dockerignore @@ -0,0 +1,3 @@ +# Avoid issues with container-specific assignment of user permissions +grafana-storage/**/* +grafana-storage diff --git a/samples/webrtc/grafana/Dockerfile b/samples/webrtc/grafana/Dockerfile new file mode 100644 index 0000000..b95c360 --- /dev/null +++ b/samples/webrtc/grafana/Dockerfile @@ -0,0 +1,15 @@ +# +# Copyright (C) 2022 Intel Corporation. +# +# SPDX-License-Identifier: BSD-3-Clause +# +FROM grafana/grafana:7.3.4 +USER root +RUN grafana-cli --pluginsDir "/var/lib/grafana/plugins" plugins install ryantxu-ajax-panel +RUN grafana-cli --pluginsDir "/var/lib/grafana/plugins" plugins install yesoreyeram-infinity-datasource + +COPY ./conf/datasources/datasources.yml /etc/grafana/provisioning/datasources/datasources.yml +COPY ./conf/dashboards/dashboards.yml /etc/grafana/provisioning/dashboards/dashboards.yml +COPY ./content/http-stream-dashboard.json /var/lib/grafana/dashboards/http-stream-dashboard.json + +USER grafana diff --git a/samples/webrtc/grafana/build.sh b/samples/webrtc/grafana/build.sh new file mode 100755 index 0000000..b931061 --- /dev/null +++ b/samples/webrtc/grafana/build.sh @@ -0,0 +1,53 @@ +#!/bin/bash +# +# Copyright (C) 2022 Intel Corporation. 
+# +# SPDX-License-Identifier: BSD-3-Clause +# +DOCKERFILE_DIR=$(dirname "$(readlink -f "$0")") +SOURCE_DIR=$(dirname "$DOCKERFILE_DIR") + +# Usage: +# ./build.sh [--remove-volumes] +# Pass --remove-volumes to forcibly remove local volume storage used by Grafana +# full rebuild (requires sudo) + +# Prepare Pipeline Server MVP Dashboard +echo "---------------------------------------" +echo "Preparing Pipeline Server MVP Dashboard." +echo "---------------------------------------" +ADD_IP_ADDR=$(hostname -I | cut -d' ' -f1) +HTTP_HOST="http:\/\/${ADD_IP_ADDR}:8082" +PS_HOST="http:\/\/${ADD_IP_ADDR}:8080" +mkdir -p $SOURCE_DIR/grafana/content +sed "s/\%HTTP_HOST\%/$HTTP_HOST/g" $SOURCE_DIR/grafana/http-stream-dashboard.json.template > $SOURCE_DIR/grafana/content/http-stream-dashboard.json +sed -i "s/\%PS_HOST\%/$PS_HOST/g" $SOURCE_DIR/grafana/content/http-stream-dashboard.json + +# Prepare shared storage mountpoint on local host +if [ "$1" == "--remove-volumes" ]; then + echo "---------------------------------------" + echo "Preparing Grafana volume for storage." + echo "---------------------------------------" + pushd $SOURCE_DIR + docker-compose stop webrtc_grafana + docker-compose rm webrtc_grafana + vol_exists="$(docker volume ls | grep webrtc_grafana-storage)" + if [[ "$vol_exists" != "" ]]; then + echo "Grafana storage volume will be removed/re-created." + docker volume rm webrtc_grafana-storage + else + echo "Grafana storage volume will be re-created." + fi + echo "Removing 'grafana-storage' volume mount" + sudo rm -rf $SOURCE_DIR/grafana/grafana-storage/ + popd +fi +mkdir -p $SOURCE_DIR/grafana/grafana-storage/ + +# Prepare Grafana container +echo "---------------------------------------" +echo "Building Grafana container w/ plugins." +echo "---------------------------------------" +TAG=webrtc_grafana:latest +BASE_BUILD_ARGS=$(env | cut -f1 -d= | grep -E '_(proxy|REPO|VER)$' | sed 's/^/--build-arg / ' | tr '\n' ' ') +docker build $SOURCE_DIR/grafana $BASE_BUILD_ARGS -t $TAG diff --git a/samples/webrtc/grafana/conf/dashboards/dashboards.yml b/samples/webrtc/grafana/conf/dashboards/dashboards.yml new file mode 100644 index 0000000..ca51c35 --- /dev/null +++ b/samples/webrtc/grafana/conf/dashboards/dashboards.yml @@ -0,0 +1,24 @@ +apiVersion: 1 + +providers: + # an unique provider name. Required + - name: 'Pipeline Server Dashboard Provider' + # Org id. Default to 1 + orgId: 1 + # name of the dashboard folder. + folder: 'PS_Dashboards' + # folder UID. will be automatically generated if not specified + folderUid: '' + # provider type. Default to 'file' + type: file + # disable dashboard deletion + disableDeletion: false + # how often Grafana will scan for changed dashboards + updateIntervalSeconds: 15 + # allow updating provisioned dashboards from the UI + allowUiUpdates: false + options: + # path to dashboard files on disk. 
Required when using the 'file' type + path: /var/lib/grafana/dashboards + # use folder names from filesystem to create folders in Grafana + foldersFromFilesStructure: true diff --git a/samples/webrtc/grafana/conf/datasources/datasources.yml b/samples/webrtc/grafana/conf/datasources/datasources.yml new file mode 100644 index 0000000..65557d4 --- /dev/null +++ b/samples/webrtc/grafana/conf/datasources/datasources.yml @@ -0,0 +1,8 @@ +apiVersion: 1 + +datasources: + - name: Infinity + type: yesoreyeram-infinity-datasource + access: proxy + isDefault: true + jsonData: \ No newline at end of file diff --git a/samples/webrtc/grafana/http-stream-dashboard.json.template b/samples/webrtc/grafana/http-stream-dashboard.json.template new file mode 100644 index 0000000..120e44a --- /dev/null +++ b/samples/webrtc/grafana/http-stream-dashboard.json.template @@ -0,0 +1,665 @@ +{ + "annotations": { + "list": [ + { + "builtIn": 1, + "datasource": "-- Grafana --", + "enable": true, + "hide": true, + "iconColor": "rgba(0, 211, 255, 1)", + "name": "Annotations & Alerts", + "type": "dashboard" + } + ] + }, + "editable": true, + "gnetId": null, + "graphTooltip": 0, + "id": 1, + "links": [], + "panels": [ + { + "collapsed": false, + "datasource": null, + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 0 + }, + "id": 17, + "panels": [], + "repeat": null, + "title": "Pipeline Server Stream Status", + "type": "row" + }, + { + "datasource": "Infinity", + "description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "custom": {}, + "decimals": 1, + "mappings": [], + "max": 50, + "min": 1, + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "red", + "value": null + }, + { + "color": "#EF843C", + "value": 8 + }, + { + "color": "#6ED0E0", + "value": 12 + }, + { + "color": "green", + "value": 30 + } + ] + }, + "unit": "short" + }, + "overrides": [] + }, + "gridPos": { + "h": 3, + "w": 12, + "x": 0, + "y": 1 + }, + "id": 14, + "interval": "5s", + "links": [ + { + "title": "", + "url": "%PS_HOST%/pipelines/status" + } + ], + "maxDataPoints": 200, + "options": { + "displayMode": "lcd", + "orientation": "horizontal", + "reduceOptions": { + "calcs": [ + "mean" + ], + "fields": "/^FPS \\(avg\\)$/", + "limit": 6, + "values": true + }, + "showUnfilled": true + }, + "pluginVersion": "7.3.4", + "targets": [ + { + "columns": [ + { + "selector": "avg_fps", + "text": "FPS (avg)", + "type": "number" + }, + { + "selector": "id", + "text": "Pipeline", + "type": "string" + }, + { + "selector": "state", + "text": "Status", + "type": "string" + }, + { + "selector": "elapsed_time", + "text": "Elapsed", + "type": "number" + } + ], + "data": "", + "filters": [ + { + "field": "Status", + "operator": "in", + "value": [ + "QUEUED,RUNNING" + ] + }, + { + "field": "Elapsed", + "operator": ">", + "value": [ + "0" + ] + }, + { + "field": "FPS (avg)", + "operator": ">", + "value": [ + "0" + ] + } + ], + "format": "table", + "global_query_id": "", + "json_options": { + "columnar": false, + "root_is_not_array": false + }, + "refId": "A", + "root_selector": "", + "source": "url", + "type": "json", + "url": "%PS_HOST%/pipelines/status", + "url_options": { + "data": "", + "method": "GET" + } + } + ], + "timeFrom": "now-5s", + "timeShift": null, + "title": "Active Streams", + "type": "bargauge" + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": null, + "fieldConfig": { + "defaults": { + "custom": {} + }, + "overrides": [] + }, + "fill": 1, + 
"fillGradient": 0, + "gridPos": { + "h": 7, + "w": 12, + "x": 12, + "y": 1 + }, + "hiddenSeries": false, + "id": 19, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { + "alertThreshold": true + }, + "percentage": false, + "pluginVersion": "7.3.4", + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "columns": [ + { + "selector": "state", + "text": "state", + "type": "string" + }, + { + "selector": "avg_fps", + "text": "avg_fps", + "type": "number" + }, + { + "selector": "start_time", + "text": "start_time", + "type": "timestamp_epoch_s" + } + ], + "filters": [], + "format": "table", + "global_query_id": "", + "refId": "A", + "root_selector": "", + "source": "url", + "type": "json", + "url": "%PS_HOST%/pipelines/status", + "url_options": { + "data": "", + "method": "GET" + } + } + ], + "thresholds": [], + "timeFrom": null, + "timeRegions": [ + { + "colorMode": "background6", + "fill": true, + "fillColor": "rgba(234, 112, 112, 0.12)", + "line": false, + "lineColor": "rgba(237, 46, 24, 0.60)", + "op": "time" + } + ], + "timeShift": null, + "title": "Running Avg", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "buckets": null, + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ], + "yaxis": { + "align": false, + "alignLevel": null + } + }, + { + "datasource": "Infinity", + "description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "custom": {}, + "decimals": 1, + "mappings": [], + "max": 50, + "min": 1, + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "red", + "value": null + }, + { + "color": "#007dc3", + "value": 8 + }, + { + "color": "#007dc3", + "value": 12 + }, + { + "color": "green", + "value": 30 + } + ] + }, + "unit": "short" + }, + "overrides": [] + }, + "gridPos": { + "h": 4, + "w": 12, + "x": 0, + "y": 4 + }, + "id": 15, + "interval": "5s", + "links": [ + { + "title": "", + "url": "%PS_HOST%/pipelines/status" + } + ], + "maxDataPoints": 200, + "options": { + "displayMode": "lcd", + "orientation": "horizontal", + "reduceOptions": { + "calcs": [ + "mean" + ], + "fields": "/^FPS \\(avg\\)$/", + "values": true + }, + "showUnfilled": true + }, + "pluginVersion": "7.3.4", + "targets": [ + { + "columns": [ + { + "selector": "avg_fps", + "text": "FPS (avg)", + "type": "number" + }, + { + "selector": "id", + "text": "Pipeline", + "type": "string" + }, + { + "selector": "state", + "text": "Status", + "type": "string" + }, + { + "selector": "elapsed_time", + "text": "Elapsed", + "type": "number" + } + ], + "data": "", + "filters": [ + { + "field": "Status", + "operator": "notin", + "value": [ + "QUEUED,RUNNING" + ] + }, + { + "field": "Elapsed", + "operator": ">", + "value": [ + "0" + ] + }, + { + "field": "FPS (avg)", + "operator": ">", + "value": [ + "0" + ] + } + ], + "format": "table", + "global_query_id": "", + "json_options": { + "columnar": false, + "root_is_not_array": false + }, + "refId": "A", + "root_selector": "", + "source": "url", + 
"type": "json", + "url": "%PS_HOST%/pipelines/status", + "url_options": { + "data": "", + "method": "GET" + } + } + ], + "timeFrom": "now-5s", + "timeShift": null, + "title": "Completed Pipeline Streams", + "type": "bargauge" + }, + { + "collapsed": false, + "datasource": null, + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 8 + }, + "id": 6, + "panels": [], + "title": "WebRTC Peer Clients", + "type": "row" + }, + { + "datasource": null, + "fieldConfig": { + "defaults": { + "custom": {} + }, + "overrides": [] + }, + "gridPos": { + "h": 10, + "w": 12, + "x": 0, + "y": 9 + }, + "header_js": "{}", + "id": 4, + "method": "iframe", + "mode": "html", + "params_js": "{ }", + "pluginVersion": "7.3.4", + "request": "http", + "responseType": "text", + "showErrors": true, + "showTime": false, + "showTimeFormat": "LTS", + "showTimePrefix": null, + "showTimeValue": "request", + "skipSameURL": true, + "targets": [ + { + "columns": [], + "filters": [], + "format": "table", + "global_query_id": "", + "queryType": "randomWalk", + "refId": "A", + "root_selector": "", + "source": "url", + "type": "json", + "url": "https://jsonplaceholder.typicode.com/users", + "url_options": { + "data": "", + "method": "GET" + } + } + ], + "templateResponse": true, + "timeFrom": "now-1d", + "timeShift": null, + "title": "WebRTC Stream 1", + "type": "ryantxu-ajax-panel", + "url": "%HTTP_HOST%?destination_peer_id=webrtc_stream_1_$RANDOM&pipeline_idx=1&media_idx=1", + "withCredentials": false + }, + { + "datasource": null, + "fieldConfig": { + "defaults": { + "custom": {} + }, + "overrides": [] + }, + "gridPos": { + "h": 10, + "w": 12, + "x": 12, + "y": 9 + }, + "header_js": "{}", + "id": 8, + "method": "iframe", + "mode": "html", + "params_js": "{ }", + "pluginVersion": "7.3.4", + "request": "http", + "responseType": "text", + "showErrors": true, + "showTime": false, + "showTimeFormat": "LTS", + "showTimePrefix": null, + "showTimeValue": "request", + "skipSameURL": true, + "targets": [ + { + "queryType": "randomWalk", + "refId": "A" + } + ], + "templateResponse": true, + "timeFrom": "now-1d", + "timeShift": null, + "title": "WebRTC Stream 2", + "type": "ryantxu-ajax-panel", + "url": "%HTTP_HOST%?destination_peer_id=webrtc_stream_2_$RANDOM&pipeline_idx=2&media_idx=2", + "withCredentials": false + }, + { + "datasource": null, + "fieldConfig": { + "defaults": { + "custom": {} + }, + "overrides": [] + }, + "gridPos": { + "h": 10, + "w": 12, + "x": 0, + "y": 19 + }, + "header_js": "{}", + "id": 2, + "method": "iframe", + "mode": "html", + "params_js": "{ }", + "pluginVersion": "7.3.4", + "request": "http", + "responseType": "text", + "showErrors": true, + "showTime": false, + "showTimeFormat": "LTS", + "showTimePrefix": null, + "showTimeValue": "request", + "skipSameURL": true, + "targets": [ + { + "queryType": "randomWalk", + "refId": "A" + } + ], + "templateResponse": true, + "timeFrom": "now-1d", + "timeShift": null, + "title": "WebRTC Stream 3", + "type": "ryantxu-ajax-panel", + "url": "%HTTP_HOST%?destination_peer_id=webrtc_stream_3_$RANDOM&pipeline_idx=6&media_idx=3", + "withCredentials": false + }, + { + "datasource": null, + "fieldConfig": { + "defaults": { + "custom": {} + }, + "overrides": [] + }, + "gridPos": { + "h": 10, + "w": 12, + "x": 12, + "y": 19 + }, + "header_js": "{}", + "id": 10, + "method": "iframe", + "mode": "html", + "params_js": "{ }", + "pluginVersion": "7.3.4", + "request": "http", + "responseType": "text", + "showErrors": true, + "showTime": false, + "showTimeFormat": "LTS", + "showTimePrefix": 
null, + "showTimeValue": "request", + "skipSameURL": true, + "targets": [ + { + "queryType": "randomWalk", + "refId": "A" + } + ], + "templateResponse": true, + "timeFrom": "now-1d", + "timeShift": null, + "title": "WebRTC Stream 4", + "type": "ryantxu-ajax-panel", + "url": "%HTTP_HOST%?destination_peer_id=webrtc_stream_4_$RANDOM&pipeline_idx=9&media_idx=4", + "withCredentials": false + } + ], + "refresh": "5s", + "schemaVersion": 26, + "style": "dark", + "tags": [], + "templating": { + "list": [] + }, + "time": { + "from": "now-3h", + "to": "now" + }, + "timepicker": { + "hidden": false, + "refresh_intervals": [ + "5s", + "15s", + "1m", + "10m", + "30m", + "1h", + "6h", + "12h", + "1d", + "7d", + "14d", + "30d" + ] + }, + "timezone": "", + "title": "Pipeline Server Sample Dashboard", + "uid": "kzGVerJnz2", + "version": 1 +} \ No newline at end of file diff --git a/samples/webrtc/run.sh b/samples/webrtc/run.sh new file mode 100755 index 0000000..f5c2bd8 --- /dev/null +++ b/samples/webrtc/run.sh @@ -0,0 +1,14 @@ +#!/bin/bash -e +# +# Copyright (C) 2022 Intel Corporation. +# +# SPDX-License-Identifier: BSD-3-Clause +# +SCRIPT_DIR=$(dirname "$(readlink -f "$0")") +SOURCE_DIR=$(dirname "$SCRIPT_DIR") + +pushd $SCRIPT_DIR +export USER_ID=$UID +export GROUP_ID=$GID +docker-compose up -d +popd diff --git a/samples/webrtc/scripts/launch_browser.sh b/samples/webrtc/scripts/launch_browser.sh new file mode 100755 index 0000000..7fd559e --- /dev/null +++ b/samples/webrtc/scripts/launch_browser.sh @@ -0,0 +1,34 @@ +#!/bin/bash +# +# Copyright (C) 2022 Intel Corporation. +# +# SPDX-License-Identifier: BSD-3-Clause +# + +# Usage: +# ./launch_browser.sh [--remote] +# +# is a UUID value returned by Pipeline Server after pipeline is instantiated. +# +# --remote Print out the URL to paste into remote browser. +# +# Default behavior attempts to automatically launch browser with the WebRTC viewing URL. + +DOCKERFILE_DIR=$(dirname "$(readlink -f "$0")") +SOURCE_DIR=$(dirname "$DOCKERFILE_DIR") + +# This currently launches default browser on a localhost system running Ubuntu. +WEB_SERVER=http://localhost:8082 +# IMPORTANT: Update the value of WEB_SERVER variable as appropriate for your deployment model. +# For example, to refer to a remote host/cluster. +# WEB_SERVER=http://${HOSTNAME}.internal.acme.com:8082 +# WEB_SERVER=https://acme.com +peer_id=$1 +instance_id=$2 +url_full="${WEB_SERVER}?destination_peer_id=$peer_id&instance_id=${instance_id}" +if [ "$3" == "--remote" ]; then + echo "In your remote browser, paste the following and press ENTER" + echo $url_full +else + xdg-open $url_full +fi diff --git a/samples/webrtc/scripts/start_pipeline_psclient.sh b/samples/webrtc/scripts/start_pipeline_psclient.sh new file mode 100755 index 0000000..48b9008 --- /dev/null +++ b/samples/webrtc/scripts/start_pipeline_psclient.sh @@ -0,0 +1,43 @@ +#!/bin/bash +# +# Copyright (C) 2022 Intel Corporation. +# +# SPDX-License-Identifier: BSD-3-Clause +# + +# Usage: +# ./start_pipeline_psclient.sh [--remote] +# +# --remote Print out the URL to paste into remote browser. +# +# Default behavior attempts to automatically launch browser with the WebRTC viewing URL. 
+ +SCRIPT_DIR=$(dirname "$(readlink -f "$0")") +SOURCE_DIR=$(dirname "$SCRIPT_DIR") +PIPELINE_SERVER=http://localhost:8080 +PIPELINE="object_detection/person_vehicle_bike" +VIDEO_INPUT2="https://github.com/intel-iot-devkit/sample-videos/blob/master/face-demographics-walking-and-pause.mp4?raw=true" + +# Prepare unique peer-id for WebRTC connection to frame destination +RANDOM=$(date +%s%N | cut -b10-19) +PEER_ID="pipeline_webrtc1_${RANDOM}" + +$SOURCE_DIR/../../client/pipeline_client.sh --quiet start \ + --server-address $PIPELINE_SERVER \ + $PIPELINE \ + $VIDEO_INPUT2 \ + --webrtc-peer-id $PEER_ID --show-request + +# Start Pipeline Server pipeline using Pipeline Server client to initiate request. +instance_id="$($SOURCE_DIR/../../client/pipeline_client.sh --quiet start \ + --server-address $PIPELINE_SERVER \ + $PIPELINE \ + $VIDEO_INPUT2 \ + --webrtc-peer-id $PEER_ID | tail -n 1)" +echo $instance_id +echo "launched with PeerID=${PEER_ID}" + +# Launch browser with query parameters to auto-connect to frame destination +pushd $SCRIPT_DIR +./launch_browser.sh $PEER_ID $instance_id $1 +popd diff --git a/samples/webrtc/signaling/Dockerfile b/samples/webrtc/signaling/Dockerfile new file mode 100644 index 0000000..22f89ff --- /dev/null +++ b/samples/webrtc/signaling/Dockerfile @@ -0,0 +1,14 @@ +# +# Copyright (C) 2022 Intel Corporation. +# +# SPDX-License-Identifier: BSD-3-Clause +# +FROM python:3 + +# Extends: https://gitlab.freedesktop.org/gstreamer/gst-examples/-/blob/master/webrtc/signaling/Dockerfile +WORKDIR /opt/ +COPY . /opt/ + +RUN pip3 install --no-cache-dir --user -r requirements.signaling.txt + +CMD python -u ./simple_server.py --disable-ssl diff --git a/samples/webrtc/signaling/build.sh b/samples/webrtc/signaling/build.sh new file mode 100755 index 0000000..1c451ba --- /dev/null +++ b/samples/webrtc/signaling/build.sh @@ -0,0 +1,16 @@ +#!/bin/bash +# +# Copyright (C) 2022 Intel Corporation. +# +# SPDX-License-Identifier: BSD-3-Clause +# +DOCKERFILE_DIR=$(dirname "$(readlink -f "$0")") +SOURCE_DIR=$(dirname "$DOCKERFILE_DIR") + +# Build WebRTC signaling server docker image +echo "---------------------------------------" +echo "Building webrtc_signaling_server container." +echo "---------------------------------------" +TAG=webrtc_signaling_server:latest +BASE_BUILD_ARGS=$(env | cut -f1 -d= | grep -E '_(proxy|REPO|VER)$' | sed 's/^/--build-arg / ' | tr '\n' ' ') +docker build $SOURCE_DIR/signaling/. $BASE_BUILD_ARGS -t $TAG diff --git a/samples/webrtc/signaling/requirements.signaling.txt b/samples/webrtc/signaling/requirements.signaling.txt new file mode 100644 index 0000000..3f335a4 --- /dev/null +++ b/samples/webrtc/signaling/requirements.signaling.txt @@ -0,0 +1 @@ +websockets == 10.1 diff --git a/samples/webrtc/signaling/simple_server.py b/samples/webrtc/signaling/simple_server.py new file mode 100755 index 0000000..42bf07d --- /dev/null +++ b/samples/webrtc/signaling/simple_server.py @@ -0,0 +1,233 @@ +#!/usr/bin/env python3 +# +# Example 1-1 call signaling server +# +# Copyright (C) 2017 Centricular Ltd. +# +# Author: Nirbheek Chauhan +# +''' +* Copyright (C) 2022 Intel Corporation. 
+* +* SPDX-License-Identifier: BSD-3-Clause +''' + +import os +import sys +import ssl +import logging +import asyncio +import argparse +from websockets.server import serve +from websockets.exceptions import ConnectionClosed + +parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) +parser.add_argument('--addr', default='0.0.0.0', help='Address to listen on') +parser.add_argument('--port', default=8443, type=int, help='Port to listen on') +parser.add_argument('--keepalive-timeout', dest='keepalive_timeout', default=-1, type=int, + help='Timeout for keepalive (in seconds), -1 blocks until complete') +parser.add_argument('--cert-path', default=os.path.dirname(__file__)) +parser.add_argument('--disable-ssl', default=False, help='Disable ssl', action='store_true') + +options = parser.parse_args(sys.argv[1:]) + +ADDR_PORT = (options.addr, options.port) +if options.keepalive_timeout == -1: + KEEPALIVE_TIMEOUT = None +else: + KEEPALIVE_TIMEOUT = options.keepalive_timeout + +############### Global data ############### + +# Format: {uid: (Peer WebSocketServerProtocol, +# remote_address, +# <'session'|room_id|None>)} +peers = dict() +# Format: {caller_uid: callee_uid, +# callee_uid: caller_uid} +# Bidirectional mapping between the two peers +sessions = dict() +# Format: {room_id: {peer1_id, peer2_id, peer3_id, ...}} +# Room dict with a set of peers in each room +rooms = dict() + +############### Helper functions ############### + +async def recv_msg_ping(ws, raddr): + ''' + Wait for a message forever, and send a regular ping to prevent bad routers + from closing the connection. + ''' + msg = None + while msg is None: + try: + msg = await asyncio.wait_for(ws.recv(), KEEPALIVE_TIMEOUT) + except TimeoutError: + print('Sending keepalive ping to {!r} in recv'.format(raddr)) + await ws.ping() + return msg + +async def disconnect(ws, peer_id): + ''' + Remove @peer_id from the list of sessions and close our connection to it. + This informs the peer that the session and all calls have ended, and it + must reconnect. + ''' + global sessions # pylint: disable=global-statement + if peer_id in sessions: + del sessions[peer_id] + # Close connection + if ws and ws.open: + # Don't care about errors + asyncio.ensure_future(ws.close(reason='hangup')) + +async def cleanup_session(uid): + if uid in sessions: + other_id = sessions[uid] + del sessions[uid] + print("Cleaned up {} session".format(uid)) + if other_id in sessions: + del sessions[other_id] + print("Also cleaned up {} session".format(other_id)) + # If there was a session with this peer, also + # close the connection to reset its state. 
+ if other_id in peers: + print("Closing connection to {}".format(other_id)) + wso, _, _ = peers[other_id] + del peers[other_id] + await wso.close() + +async def cleanup_room(uid, room_id): + room_peers = rooms[room_id] + if uid not in room_peers: + return + room_peers.remove(uid) + for pid in room_peers: + wsp, _, _ = peers[pid] + msg = 'ROOM_PEER_LEFT {}'.format(uid) + print('room {}: {} -> {}: {}'.format(room_id, uid, pid, msg)) + await wsp.send(msg) + +async def remove_peer(uid): + await cleanup_session(uid) + if uid in peers: + ws, raddr, status = peers[uid] + if status and status != 'session': + await cleanup_room(uid, status) + del peers[uid] + await ws.close() + print("Disconnected from peer {!r} at {!r}".format(uid, raddr)) + +############### Handler functions ############### + +async def connection_handler(ws, uid): + global peers, sessions, rooms # pylint: disable=global-statement + raddr = ws.remote_address + peer_status = None + peers[uid] = [ws, raddr, peer_status] + print("Registered peer {!r} at {!r}".format(uid, raddr)) + while True: + # Receive command, wait forever if necessary + msg = await recv_msg_ping(ws, raddr) + # Update current status + peer_status = peers[uid][2] + # We are in a session, messages must be relayed + if peer_status is not None: + # We're in a session, route message to connected peer + if peer_status == 'session': + other_id = sessions[uid] + wso, _, status = peers[other_id] + assert(status == 'session') + print("{} -> {}: {}".format(uid, other_id, msg)) + await wso.send(msg) + else: + raise AssertionError('Unknown peer status {!r}'.format(peer_status)) + # Requested a session with a specific peer + elif msg.startswith('SESSION'): + print("{!r} command {!r}".format(uid, msg)) + _, callee_id = msg.split(maxsplit=1) + if callee_id not in peers: + await ws.send('ERROR peer {!r} not found'.format(callee_id)) + continue + if peer_status is not None: + await ws.send('ERROR peer {!r} busy'.format(callee_id)) + continue + await ws.send('SESSION_OK') + wsc = peers[callee_id][0] + print('Session from {!r} ({!r}) to {!r} ({!r})' + ''.format(uid, raddr, callee_id, wsc.remote_address)) + # Register session + peers[uid][2] = peer_status = 'session' + sessions[uid] = callee_id + peers[callee_id][2] = 'session' + sessions[callee_id] = uid + else: + print('Ignoring unknown message {!r} from {!r}'.format(msg, uid)) + +async def hello_peer(ws): + ''' + Exchange hello, register peer + ''' + raddr = ws.remote_address + hello = await ws.recv() + hello, uid = hello.split(maxsplit=1) + if hello != 'HELLO': + await ws.close(code=1002, reason='invalid protocol') + raise Exception("Invalid hello from {!r}".format(raddr)) + if not uid or uid in peers or uid.split() != [uid]: # no whitespace + await ws.close(code=1002, reason='invalid peer uid') + raise Exception("Invalid uid {!r} from {!r}".format(uid, raddr)) + # Send back a HELLO + await ws.send('HELLO') + return uid + +async def handler(ws, path): # pylint: disable=unused-argument + ''' + All incoming messages are handled here. @path is unused. 
+ ''' + raddr = ws.remote_address + print("Connected to {!r}".format(raddr)) + peer_id = await hello_peer(ws) + try: + await connection_handler(ws, peer_id) + except ConnectionClosed: + print("Connection to peer_id {!r} at address {!r} closed, exiting handler".format(peer_id, raddr)) + finally: + await remove_peer(peer_id) + +sslctx = None +if not options.disable_ssl: + # Create an SSL context to be used by the websocket server + certpath = options.cert_path + print('Using TLS with keys in {!r}'.format(certpath)) + if 'letsencrypt' in certpath: + chain_pem = os.path.join(certpath, 'fullchain.pem') + key_pem = os.path.join(certpath, 'privkey.pem') + else: + chain_pem = os.path.join(certpath, 'cert.pem') + key_pem = os.path.join(certpath, 'key.pem') + + sslctx = ssl.create_default_context() + try: + sslctx.load_cert_chain(chain_pem, keyfile=key_pem) + except FileNotFoundError: + print("Certificates not found, did you run generate_cert.sh?") + sys.exit(1) + sslctx.check_hostname = False + sslctx.verify_mode = ssl.CERT_NONE + +print("Listening on https://{}:{}".format(*ADDR_PORT)) +# Websocket server +wsd = serve(handler, *ADDR_PORT, ssl=sslctx, + # Maximum number of messages that websockets will pop + # off the asyncio and OS buffers per connection. See: + # https://websockets.readthedocs.io/en/stable/api.html#websockets.protocol.WebSocketCommonProtocol + max_queue=16) + +logger = logging.getLogger('websockets.server') + +logger.setLevel(logging.ERROR) +logger.addHandler(logging.StreamHandler()) + +asyncio.get_event_loop().run_until_complete(wsd) +asyncio.get_event_loop().run_forever() diff --git a/samples/webrtc/signaling/start.sh b/samples/webrtc/signaling/start.sh new file mode 100755 index 0000000..3666431 --- /dev/null +++ b/samples/webrtc/signaling/start.sh @@ -0,0 +1,10 @@ +#!/bin/bash +# +# Copyright (C) 2022 Intel Corporation. +# +# SPDX-License-Identifier: BSD-3-Clause +# +echo "Running Signaling Server entrypoint start.sh" + +# Launch signaling server +python -u ./simple_server.py --disable-ssl \ No newline at end of file diff --git a/samples/webrtc/teardown.sh b/samples/webrtc/teardown.sh new file mode 100755 index 0000000..86b31a8 --- /dev/null +++ b/samples/webrtc/teardown.sh @@ -0,0 +1,12 @@ +#!/bin/bash -e +# +# Copyright (C) 2022 Intel Corporation. +# +# SPDX-License-Identifier: BSD-3-Clause +# +SCRIPT_DIR=$(dirname "$(readlink -f "$0")") +SOURCE_DIR=$(dirname "$SCRIPT_DIR") + +pushd $SCRIPT_DIR +docker-compose down --volumes --remove-orphans +popd diff --git a/samples/webrtc/webserver/Dockerfile b/samples/webrtc/webserver/Dockerfile new file mode 100644 index 0000000..bad5b53 --- /dev/null +++ b/samples/webrtc/webserver/Dockerfile @@ -0,0 +1,21 @@ +#!/bin/bash +# +# Copyright (C) 2022 Intel Corporation. +# +# SPDX-License-Identifier: BSD-3-Clause +# +ARG BASE=nginx:latest + +FROM $BASE + +# Extends Centricular example found here: +# https://gitlab.freedesktop.org/gstreamer/gst-examples/-/tree/master/webrtc/sendrecv/js + +COPY ./www /usr/share/nginx/html + +RUN sed -i 's/var default_peer_id;/var default_peer_id = 1;/g' \ + /usr/share/nginx/html/webrtc.js +RUN sed -i 's/wss/ws/g' \ + /usr/share/nginx/html/webrtc.js + +USER nginx diff --git a/samples/webrtc/webserver/build.sh b/samples/webrtc/webserver/build.sh new file mode 100755 index 0000000..ccb03d8 --- /dev/null +++ b/samples/webrtc/webserver/build.sh @@ -0,0 +1,15 @@ +# +# Copyright (C) 2022 Intel Corporation. 
+# +# SPDX-License-Identifier: BSD-3-Clause +# +DOCKERFILE_DIR=$(dirname "$(readlink -f "$0")") +SOURCE_DIR=$(dirname "$DOCKERFILE_DIR") + +# Build WebRTC web server container +echo "---------------------------------------" +echo "Building webrtc_webserver container." +echo "---------------------------------------" +TAG=webrtc_webserver:latest +BASE_BUILD_ARGS=$(env | cut -f1 -d= | grep -E '_(proxy|REPO|VER)$' | sed 's/^/--build-arg / ' | tr '\n' ' ') +docker build $SOURCE_DIR/webserver/. $BASE_BUILD_ARGS -t $TAG diff --git a/samples/webrtc/webserver/www/index.html b/samples/webrtc/webserver/www/index.html new file mode 100644 index 0000000..360f09b --- /dev/null +++ b/samples/webrtc/webserver/www/index.html @@ -0,0 +1,135 @@ + + + + + + + + + + + + + +
+
+ +
+
+
+ +
+ +
+ + unknown +
+
+
+
+ +
+

+ + +

+

+ + + +

+

+ + +

+

+ + +

+

+ + + + + + +

+
+
+
+ +
+

+ + +

+

+ + +

+

+ + +

+
+ +
+
+ +
+

+ + +

+

+ + +

+

+ + +

+
+

+ + + +

+ +
+ + Waiting for Launch of Pipeline... +
+
+
+
+
getUserMedia constraints being used:
+
+
+
+ + \ No newline at end of file diff --git a/samples/webrtc/webserver/www/pipeline.js b/samples/webrtc/webserver/www/pipeline.js new file mode 100644 index 0000000..e8db0d8 --- /dev/null +++ b/samples/webrtc/webserver/www/pipeline.js @@ -0,0 +1,488 @@ +/* +* Copyright (C) 2022 Intel Corporation. +* +* SPDX-License-Identifier: BSD-3-Clause +*/ + +var g_pipeline_server_host=window.location.hostname +var g_pipeline_server_port=8080 +var PIPELINE_SERVER = "http://" + g_pipeline_server_host + ':' + g_pipeline_server_port; +var VIDEO_INPUT = ["https://github.com/intel-iot-devkit/sample-videos/raw/master/person-bicycle-car-detection.mp4", + "https://github.com/intel-iot-devkit/sample-videos/raw/master/face-demographics-walking-and-pause.mp4", + "https://github.com/intel-iot-devkit/sample-videos/raw/master/car-detection.mp4", + "https://github.com/intel-iot-devkit/sample-videos/raw/master/people-detection.mp4", + "https://github.com/intel-iot-devkit/sample-videos/raw/master/classroom.mp4", + "https://github.com/intel-iot-devkit/sample-videos/raw/master/head-pose-face-detection-male.mp4", + "https://github.com/intel-iot-devkit/sample-videos/raw/master/bottle-detection.mp4", + "https://lvamedia.blob.core.windows.net/public/homes_00425.mkv"] +var g_trim_media_prefix = true; +var g_assigned_by_query_param = false; +var g_pipeline_server_instance_id = null; +var g_default_server_peer_id = null; +var g_init_pipeline_idx = null; +var g_init_media_idx = null; +var g_sync_playback = true; +var g_poll_status = true; +var g_initialized_expando = false; +var g_grafana_dashboard_manual_launch = false; + +function initExpando(classname) { + if (!g_initialized_expando) { + var coll = document.getElementsByClassName(classname); + var i; + for (i = 0; i < coll.length; i++) { + if (coll[i].getAttribute('listener') !== 'true') { + coll[i].addEventListener("click", function(elm) { + const elmClicked = elm.target; + elmClicked.setAttribute('listener', 'true'); + console.log("Attached click listener to expando section.") + this.classList.toggle("active"); + var content = this.nextElementSibling; + if (content.style.maxHeight){ + content.style.maxHeight = null; + } else { + content.style.maxHeight = content.scrollHeight + "px"; + } + }); + } + } + g_initialized_expando = true; + } +} + +function restReq(verb, route, body, callback) { + var request = new XMLHttpRequest(); + request.open(verb, route); + request.setRequestHeader("Content-Type", "application/json;charset=UTF-8"); + request.setRequestHeader("Vary", "Origin"); + request.onreadystatechange = function() { + if (request.readyState === 4) { + callback(request.response, request.status); + } + } + console.log("Invoking " + verb + " on route " + route); + request.send(body); +} + +function receivePipelines(responseValue, http_status) { + pipelines = []; + responsePipelines = JSON.parse(responseValue); + responsePipelines.forEach(function(obj) { + pipeline = obj.name + "/" + obj.version; + console.log(pipeline); + pipelines.push(pipeline); + }); + setPipelinesAvailable(pipelines); +} + +function onGetPipelinesClicked() { + var pipeline_server = getPipelineServer(); + var requestPath = "/pipelines" + requestPipelines(pipeline_server, requestPath, receivePipelines); +} + +function requestPipelines(pipeline_server, requestPath, callback) { + restReq("GET", pipeline_server + requestPath, '', callback); +} + +function setPipelinesAvailable(options) { + var selPipelines = document.getElementById('pipelines'); + selPipelines.options.length = 0; + var def = 
document.createElement("option"); + def.textContent = "[Choose a pipeline]"; + def.value = ""; + selPipelines.appendChild(def); + for(var i = 0; i < options.length; i++) { + var opt = options[i]; + var el = document.createElement("option"); + el.textContent = opt; + el.value = opt; + selPipelines.appendChild(el); + if (g_init_pipeline_idx == i) { + def.value = opt; + setPipeline(opt); + } + } + selPipelines.options.selectedIndex = g_init_pipeline_idx; + selPipelines.onchange(); +} + +function receiveStopResult(responseValue, http_status) { + console.log(responseValue); +} + +function onStopClicked() { + var pipeline_server = getPipelineServer(); + var idInstanceID = document.getElementById("instance-id") + var instance_id = idInstanceID.value; + if (pipeline_server != null && instance_id != null && instance_id != "") { + if (instance_id == "unknown") { + alert("Pipeline Server instance_id was created offline (it is unknown)"); + return; + } + var requestPath = pipeline_server + "/pipelines/" + instance_id; + doStop(requestPath, receiveStopResult); + } else { + alert("No active Pipeline Server pipeline instance!"); + g_poll_status = false; + } +} + +function doStop(requestPath, callback) { + setVizStatus("Stopping primary pipeline ("+requestPath+")..."); + restReq("DELETE", requestPath, '', callback); +} + +function getJsonValue(objJSON, key) { + var value = null; + if (key in objJSON) { + value = objJSON[key]; + } + return value; +} + +function receiveStatusResult(responseValue, status_code) { + if (g_pipeline_server_instance_id != "unknown" && g_poll_status) { + var objStatus = JSON.parse(responseValue); + var span = document.getElementById("fps-status"); + span.style.visibility = "visible"; + if (status_code == 200 && objStatus != "Invalid instance") { + var avg_fps = parseFloat(getJsonValue(objStatus, "avg_fps")).toFixed(2); + var elapsed_sec = parseFloat(getJsonValue(objStatus, "elapsed_time")).toFixed(0); + var state = getJsonValue(objStatus, "state"); + var status_text = "Pipeline State: " + state + " " + avg_fps + " fps (avg) Elapsed: " + elapsed_sec + "s"; + document.getElementById("fps-status").innerHTML = status_text; + span.style.color = "#007dc3"; + if (state == "COMPLETED") { + console.log("Pipeline state is COMPLETED, so will no longer query for pipeline status each 3s for instance " + getJsonValue(objStatus, "id") + "."); + g_poll_status=false; + connect_attempts = 15; // expire visualization retries for this primary pipeline + console.log("Toggle to expand pipeline_launcher section for new stream") + document.getElementById("pipeline_launcher").click(); + // Disable Disconnect/Visualize and Stop Pipeline Buttons until we have a running stream + setVizStatus("Launch a new pipeline..."); + } else if (state == "ABORTED") { + console.log("Pipeline state is ABORTED, so will no longer query for pipeline status each 3s for instance " + getJsonValue(objStatus, "id") + "."); + g_poll_status=false; + connect_attempts = 15; // expire visualization retries for this primary pipeline + setVizStatus("Launch a new pipeline..."); + } + } else { + span.innerHTML = "Pipeline State: " + objStatus + " http_status: " + status_code; + span.style.color = "red"; + if (objStatus == "Invalid instance") { + g_poll_status = false; + console.log("Pipeline state had status_code " + status_code + " so will no longer query for pipeline status each 3s for instance " + getJsonValue(objStatus, "id") + "."); + } + } + } +} + +function doGetStatus(requestPath, callback) { + restReq("GET", requestPath, '', 
callback); +} + +function updateFPS() { + g_poll_status = true; + var statusEvery3Sec = window.setInterval(function() { + console.log("Updating FPS from Pipeline Server status endpoint for instance " + g_pipeline_server_instance_id + "."); + var pipeline_server = getPipelineServer(); + if (pipeline_server != null && g_pipeline_server_instance_id != null + && g_pipeline_server_instance_id != "unknown" && g_pipeline_server_instance_id != "") { + var requestPath = pipeline_server + "/pipelines/status/" + g_pipeline_server_instance_id; + doGetStatus(requestPath, receiveStatusResult); + } else { + if (g_pipeline_server_instance_id == "unknown") { + console.log("WARNING: Pipeline Server pipeline instance is completely unknown!"); + } else { + console.log("WARNING: No active Pipeline Server pipeline instance! Stopping fps polling."); + g_poll_status = false; + } + } + if (!g_poll_status) { + if (peer_connection == null) { + clearInterval(statusEvery3Sec); // fps, etc should no longer update. + console.log("Cleared status update interval for pipeline instance " + g_pipeline_server_instance_id + "."); + } else { + console.log("Waiting for WebRTC peer connection to close for pipeline instance " + g_pipeline_server_instance_id + "."); + } + } + }, 3000); +} + +function setPipelineServer(value) { + document.getElementById('pipeline-server').value = value; +} +function getPipelineServer() { + return document.getElementById('pipeline-server').value; +} + +function setMediaSource(value) { + elm = document.getElementById('mediasource'); + elm.value = value; + if (elm.onchange) + elm.onchange(); +} +function getMediaSource() { + return document.getElementById('mediasource').value; +} + +function setPipeline(value) { + elm = document.getElementById('pipeline'); + elm.value = value; + if (elm.onchange) + elm.onchange(); +} +function getPipeline() { + return document.getElementById('pipeline').value; +} + +function getRandomInt(max) { + return Math.floor(Math.random() * max); +} + +function setDestinationPeerID(value) { + elm = document.getElementById('destination-peer-id'); + elm.value = value.replace('$RANDOM', getRandomInt(100000).toString()); + if (elm.onchange) + elm.onchange(); +} +function getDestinationPeerID() { + return document.getElementById('destination-peer-id').value; +} + +function setMediaSources(options) { + var selSources = document.getElementById("sel_mediasources"); + selSources.options.length = 0; + if (selSources != null) { + var def = document.createElement("option"); + def.textContent = "[Choose a media source]"; + def.value = ""; + selSources.appendChild(def); + for(var i = 0; i < options.length; i++) { + var opt = options[i]; + var el = document.createElement("option"); + if (g_trim_media_prefix) { + media_name = opt.replace("https://github.com/intel-iot-devkit/sample-videos/raw/master/", ""); + media_name = media_name.replace("https://lvamedia.blob.core.windows.net/public/", ""); + } else { + media_name = opt; + } + el.textContent = media_name; + el.value = opt; + selSources.appendChild(el); + if (g_init_media_idx == i) { + def.value = opt; + setMediaSource(opt); + } + } + selSources.options.selectedIndex = g_init_media_idx; + selSources.onchange(); + } else { + alert("sel_mediasources select element is not yet on page") + } +} + +function initPipelineValues() { + setPipelineServer(PIPELINE_SERVER); + var sources = []; + for (var i = 0; i < VIDEO_INPUT.length; i++) { + sources.push(VIDEO_INPUT[i]); + } + const params = new Proxy(new URLSearchParams(window.location.search), { + get: 
(searchParams, prop) => searchParams.get(prop), + }); + if (params.autolaunch) { + g_grafana_dashboard_auto_launch = (params.autolaunch == "true"); + } + if (params.pipeline_idx) { + g_init_pipeline_idx = params.pipeline_idx; + onGetPipelinesClicked(); + if (params.media_idx) { + g_init_media_idx = params.media_idx; + } + } + setMediaSources(sources); +} + +function receivePipelineInstance(responseValue) { + if (responseValue == null || responseValue == "") { + console.log("ERROR launching Pipeline Server pipeline instance!"); + } else { + setStreamInstanceValue(responseValue); + } +} + +function updateLaunchButtonState() { + document.getElementById("pipeline-launch-button").disabled = true; + p = getPipeline(); + s = getMediaSource(); + f = getDestinationPeerID(); + if (p && s && f) { + document.getElementById("pipeline-launch-button").disabled = false; + } else { + console.log("Launch button disabled because user must supply:") + console.log("pipeline: " + p); + console.log("source: " + s); + console.log("destination peer-id: " + f); + } +} + +function onLaunchClicked() { + var pipeline_server = getPipelineServer(); + var pipeline = getPipeline(); + if (!pipeline) { alert("You must specify a pipeline!"); } + var source = getMediaSource(); + if (!source) { alert("You must specify a media source!"); } + var frame_destination_peer_id = getDestinationPeerID(); + var sync_playback = getSyncPlayback(); + var requestPath = pipeline_server + "/pipelines/" + pipeline; + var frame_destination = { "type": "webrtc", + "peer-id": frame_destination_peer_id, + "sync-with-source": sync_playback, + "sync-with-destination": sync_playback + }; + var parameters = {"detection-device": "CPU"} + var requestBody = JSON.stringify( + { + "source": {"uri": source, "type": "uri"}, + "destination": { + "metadata": {"type": "file", "path": "/tmp/results.jsonl", "format": "json-lines"}, + "frame": frame_destination + }, + "parameters": parameters + }); + setFrameDestinationLabel(); + doLaunch(requestPath, requestBody, receivePipelineInstance); +} + +function doLaunch(requestPath, requestBody, callback) { + connect_attempts = 0; + restReq("POST", requestPath, requestBody, callback); +} + +function getSyncPlayback() { + return document.getElementById("sync-checkbox").checked; +} + +function setFrameDestinationLabel() { + var peer_id = getDestinationPeerID(); + var sync_playback = getSyncPlayback() + var elm = document.getElementById("destination") + elm.value = JSON.stringify({ "type": "webrtc", "peer-id": peer_id, + "sync-with-source": sync_playback, "sync-with-destination": sync_playback }); + if (elm.onchange) + elm.onchange(); +} + +function setStreamInstanceValue(value) { + if (value == null) { + console.log("ERROR: Cannot set stream instance to empty value"); + console.log("WARNING: Assigning primary stream instance_id to value 'unknown' to allow WebRTC rendering."); + //return; + } + // remove any quotes surrounding instance_id response + instance_id=value.replace(/['"\n]+/g, '') + g_pipeline_server_instance_id = instance_id; + var idInstanceID = document.getElementById("instance-id") + idInstanceID.value = g_pipeline_server_instance_id; + var frame_destination_peer_id = getDestinationPeerID(); + console.log("Pipeline Server instance: " + instance_id + " was launched with destination_peerid: " + frame_destination_peer_id + "."); + g_default_server_peer_id = frame_destination_peer_id; + var idServerPeer = document.getElementById("peer-connect") + idServerPeer.value = frame_destination_peer_id; + 
console.log("Toggle to collapse pipeline_launcher section on SetStreamInstanceValue") + document.getElementById("pipeline_launcher").click(); + console.log("Pipeline Server - starting Pipeline Status (FPS) timer for instance " + instance_id + " with destination_peerid: " + frame_destination_peer_id + "."); + statsTimerPipelineStatus = setTimeout(updateFPS, 100); + + var auto_visualize_params = "?instance_id=" + instance_id + "&destination_peer_id=" + frame_destination_peer_id + ""; + var viz_message = "" + if (g_assigned_by_query_param) { + if (window.location.hostname === "localhost") { + viz_message = "Click the 'Visualize' button to view the pipeline stream. Stream may be alternately viewed by navigating to this site remotely using query parameters: "; + } else { + viz_message = "Remove the following query parameters to launch/view new streams: "; + } + } else { + if (window.location.hostname === "localhost") { + viz_message = "Stream may be viewed by clicking 'Visualize' button or navigating in remote browser to this site, adding query parameters: " + } else { + viz_message = "Click the 'Visualize' button to view the pipeline stream." + auto_visualize_params = ""; + } + } + setVizStatus(viz_message + auto_visualize_params); +} + +function updateLaunchFormVisibility(value) { + document.getElementById("launch-form").style.display = value; +} + +function setVizStatus(text) { + console.log(text); + document.getElementById("viz_status").textContent = text; + if (text == "Launch a new pipeline...") { + // disable visualize actions; will re-enable once pipeline is launched + console.log("Visualize/Stop buttons disabled until pipeline is launched.") + document.getElementById("peer-connect-button").disabled = true; + document.getElementById("pipeline-stop-button").disabled = true; + } else { + document.getElementById("peer-connect-button").disabled = false; + if (g_pipeline_server_instance_id != "unknown") { + document.getElementById("pipeline-stop-button").disabled = false; + } + } +} + +function setStreamInstance() { + const params = new Proxy(new URLSearchParams(window.location.search), { + get: (searchParams, prop) => searchParams.get(prop), + }); + if (g_pipeline_server_instance_id == null || g_pipeline_server_instance_id == "" || g_pipeline_server_instance_id == "unknown" ) { + g_pipeline_server_instance_id = params.instance_id; + } + var idServerPeer = document.getElementById("peer-connect"); + if (idServerPeer.value == null || idServerPeer.value == "") { + g_default_server_peer_id = params.destination_peer_id; + idServerPeer.value = g_default_server_peer_id; + if (g_default_server_peer_id) { + setDestinationPeerID(g_default_server_peer_id); + } + } + if (g_pipeline_server_instance_id) { + console.log("Will attempt to automatically connect to instance_id: " + g_pipeline_server_instance_id); + console.log("Collapsing New Pipeline Launcher section") + document.getElementById("pipeline_launcher").click(); + } else { + console.log("Toggle to show pipeline_launcher section on setStreamInstance") + document.getElementById("pipeline_launcher").click(); + setConnectButtonState("Visualize"); + console.log("Assigning Pipeline Server instance_id to 'unknown' in case peer-id is legitimate to visualize directly.") + document.getElementById("pipeline-stop-button").disabled = true; + g_pipeline_server_instance_id = "unknown"; + document.getElementById("instance-id").value = g_pipeline_server_instance_id; + } + g_grafana_dashboard_manual_launch = (g_init_pipeline_idx && g_init_media_idx); + var 
automate_grafana = false; + if (g_grafana_dashboard_manual_launch && automate_grafana) { + console.log("AUTO-LAUNCH grafana panels"); + onLaunchClicked(); + onConnectClicked(); + } + var autoConnect = false; + if (g_grafana_dashboard_manual_launch) { + autoConnect = (idServerPeer.value && g_pipeline_server_instance_id && g_pipeline_server_instance_id != "unknown"); + } else { + if (g_default_server_peer_id) { + g_assigned_by_query_param = true; + setStreamInstanceValue(g_pipeline_server_instance_id); + updateLaunchButtonState(); + setConnectButtonState("Disconnect"); // since we're autoplaying + console.log("Will attempt to automatically connect using frame destination peer_id: " + g_default_server_peer_id); + setStatus("WebRTC auto-connecting to instance: '" + g_pipeline_server_instance_id + "' peer_id: '" + g_default_server_peer_id + "'"); + } + autoConnect = (idServerPeer.value && g_pipeline_server_instance_id); + } + return autoConnect; +} diff --git a/samples/webrtc/webserver/www/styles.css b/samples/webrtc/webserver/www/styles.css new file mode 100644 index 0000000..fa12a8c --- /dev/null +++ b/samples/webrtc/webserver/www/styles.css @@ -0,0 +1,128 @@ +/* +* Copyright (C) 2022 Intel Corporation. +* +* SPDX-License-Identifier: BSD-3-Clause +*/ +label +{ + width: 175px; + text-align: right; + display: inline-block; +} +.error { color: red; } +video { + width: 450px; + height: auto; + object-fit: contain; +} +/* input[name=field] */ +input.input-disabled { + /* uncomment to treat as disabled, otherwise allow user to copy values */ + /* pointer-events:none; */ + color: #AAA; + background: #F5F5F5; + outline: none !important; +} + +.labelStatus +{ + display: inline; + text-align: left; + color: gray; +} +.class-status.span +{ + width: 70%; + text-align: left; + color: gray; +} +div { + font-family: "intel-one","intel-clear",Helvetica,Arial,sans-serif; + font-size: 16px; + font-weight: normal; + color: #007dc3; +} +.class-status +{ + font-family: "intel-one","intel-clear",Helvetica,Arial,sans-serif; + font-size: 16px; + font-weight: normal; + display: block; + color: #007dc3; +} +.class-convenience-form +{ + font-family: "intel-one","intel-clear",Helvetica,Arial,sans-serif; + font-size: 16px; + font-weight: normal; + display: block; +} +.class-launch-form +{ + font-family: "intel-one","intel-clear",Helvetica,Arial,sans-serif; + font-size: 16px; + font-weight: normal; + display: block; +} +.class-input-form +{ + font-family: "intel-one","intel-clear",Helvetica,Arial,sans-serif; + font-size: 16px; + font-weight: normal; + display: block; +} +.class-details-debug +{ + font-family: "intel-one","intel-clear",Helvetica,Arial,sans-serif; + font-size: 16px; + font-weight: normal; + display: none; +} +.class-stats-box +{ + font-family: "intel-one","intel-clear",Helvetica,Arial,sans-serif; + font-size: 10px; + font-weight: normal; +} +.expando { + background-color: #007dc3; + color: white; + cursor: pointer; + padding: 8px; + width: 100%; + border: none; + text-align: left; + outline: none; + font-size: 14px; + font-family: "intel-one","intel-clear",Helvetica,Arial,sans-serif; + font-weight: normal; + display: block; +} + +.active, .expando:hover { + background-color: #00a3f6; +} + +.expando:after { + content: '\002B'; + color: white; + font-weight: bold; + float: right; + margin-left: 5px; +} + +.active:after { + content: "\2212"; +} + +.content { + padding: 0 18px; + max-height: 0; + overflow: hidden; + transition: max-height 0.2s ease-out; + background-color: #f1f1f1; + font-family: 
"intel-one","intel-clear",Helvetica,Arial,sans-serif; + font-size: 14px; + font-weight: normal; + display: block; +} diff --git a/samples/webrtc/webserver/www/webrtc.js b/samples/webrtc/webserver/www/webrtc.js new file mode 100644 index 0000000..ec23321 --- /dev/null +++ b/samples/webrtc/webserver/www/webrtc.js @@ -0,0 +1,482 @@ +/* vim: set sts=4 sw=4 et : + * + * Demo Javascript app for negotiating and streaming a sendrecv webrtc stream + * with a GStreamer app. Runs only in passive mode, i.e., responds to offers + * with answers, exchanges ICE candidates, and streams. + * + * Author: Nirbheek Chauhan + */ + +/* +* Copyright (C) 2022 Intel Corporation. +* +* SPDX-License-Identifier: BSD-3-Clause +*/ + +// Set this to override the automatic detection in websocketServerConnect() +var ws_server; +var ws_port; +// Set this to use a specific peer id instead of a random one; e.g., 256 +var default_peer_id = null; + +// Override with your own STUN servers if you want +var rtc_configuration = {}; +//var rtc_configuration = {iceServers: [{urls: "stun:stun.services.mozilla.com"}, +// {urls: "stun:stun.l.google.com:19302"}]}; + +// Launchable WebRTC pipelines +var auto_retry = true; +var DELAY_AUTOSTART_MSEC=2000 + +// The default constraints that will be attempted. Can be overriden by the user. +var default_constraints = {video: true, audio: false}; + +var connect_attempts = 0; +var loop = false; +var peer_connection; +var send_channel; +var ws_conn = null; +// Promise for local stream after constraints are approved by the user +var local_stream_promise; +var statsTimer; +var statsTimerPipelineStatus; + +function setConnectButtonState(value) { + document.getElementById("peer-connect-button").value = value; + if (value == "Visualize") { + document.getElementById("divStreamPlayback").style.display = "none"; + if (g_pipeline_server_instance_id == "unknown") { + viz_message = "WARNING: No pipeline instance provided so its state is unknown. Click 'Visualize' to attempt direct render." + setVizStatus(viz_message); + } else if (g_poll_status) { // handle pipeline stopped condition + viz_message = "Click 'Visualize' to resume rendering (re-connect to the stream), or Stop Pipeline to terminate the stream." + setVizStatus(viz_message); + } + } else { + document.getElementById("divStreamPlayback").style.display = ""; + if (g_pipeline_server_instance_id == "unknown") { + viz_message = "WARNING: No pipeline instance provided so its state is unknown. Click 'Disconnect' to attempt direct un-render." + setVizStatus(viz_message); + } else if (g_poll_status) { // handle pipeline stopped condition + viz_message = "Click the 'Disconnect' button to suspend rendering, or Stop Pipeline to terminate the stream." + setVizStatus(viz_message); + } + } +} + +function isSocketOpen(ws_conn) { + if (ws_conn == null) { + return false; + } + return (ws_conn.readyState === ws_conn.OPEN); +} + +function onConnectClicked() { + if (isSocketOpen(ws_conn) || document.getElementById("peer-connect-button").value === "Disconnect") { + resetState(); + setConnectButtonState("Visualize"); + return; + } + var id = document.getElementById("peer-connect").value; + if (id == "") { + alert("Target stream peer id must be provided. 
Populate peer-connect field with destination_peer_id and click Visualize."); + return; + } + window.setTimeout(websocketServerConnect, 600); + ws_conn.send("SESSION " + id); + setConnectButtonState("Disconnect"); +} + +function getOurId() { + return Math.floor(Math.random() * (9000 - 10) + 10).toString(); +} + +function resetState() { + // This will call onServerClose() + if (ws_conn) { + ws_conn.close(); + } else { + console.log("INFO: resetState called with no active connection.") + } +} + +function handleIncomingError(error) { + setError("ERROR: " + error); + resetState(); + if (auto_retry) { + window.setTimeout(websocketServerConnect, DELAY_AUTOSTART_MSEC); + } +} + +function getVideoElement() { + return document.getElementById("stream"); +} + +function setStatus(text) { + console.log(text); + var span = document.getElementById("status") + if (!span.classList.contains('error')) + span.textContent = text; +} + +function setError(text) { + + console.error(text); + var span = document.getElementById("status") + var newContent = text; + if (!span.classList.contains('error')) + span.classList.add('error'); + else { + newContent = span.textContent + text; + } + span.style.visibility = "visible"; + span.textContent = newContent; +} + +function resetVideo() { + // Release the webcam and mic + if (local_stream_promise) + local_stream_promise.then(stream => { + if (stream) { + stream.getTracks().forEach(function (track) { track.stop(); }); + } + }); + + // Reset the video element and stop showing the last received frame + var videoElement = getVideoElement(); + videoElement.pause(); + videoElement.src = ""; + videoElement.load(); +} + +// SDP offer received from peer, set remote description and create an answer +function onIncomingSDP(sdp) { + peer_connection.setRemoteDescription(sdp).then(() => { + setStatus("Remote SDP set"); + if (sdp.type != "offer") + return; + setStatus("Got SDP offer"); + if (local_stream_promise) { + local_stream_promise.then((stream) => { + setStatus("Got local stream, creating answer"); + peer_connection.createAnswer() + .then(onLocalDescription).catch(setError); + }).catch(setError); + } else { + peer_connection.createAnswer().then(onLocalDescription).catch(setError); + } + }).catch(setError); +} + +// Local description was set, send it to peer +function onLocalDescription(desc) { + console.log("Got local description: " + JSON.stringify(desc)); + peer_connection.setLocalDescription(desc).then(function() { + setStatus("Sending SDP answer"); + sdp = {'sdp': peer_connection.localDescription} + ws_conn.send(JSON.stringify(sdp)); + }); +} + +// ICE candidate received from peer, add it to the peer connection +function onIncomingICE(ice) { + var candidate = new RTCIceCandidate(ice); + peer_connection.addIceCandidate(candidate).catch(setError); +} + +function onServerMessage(event) { + console.log("Received " + event.data); + switch (event.data) { + case "HELLO": + ws_conn.send('SESSION ' + g_default_server_peer_id); + setStatus("Registered with server, waiting for call"); + return; + case "SESSION_OK": + setStatus("Initiating stream session"); + + ws_conn.send('START_WEBRTC_STREAM'); + /* Intentionally not implementing offers from client + if (wantRemoteOfferer()) { + ws_conn.send("OFFER_REQUEST"); + setStatus("Sent OFFER_REQUEST, waiting for offer"); + return; + } + if (!peer_connection) + createCall(null).then (generateOffer); + */ + return; + /* Not implementing + case "OFFER_REQUEST": + // The peer wants us to set up and then send an offer + if (!peer_connection) + 
createCall(null).then (generateOffer); + return; + */ + default: + setStatus("Event received while registering with server: " + event.data); + if (event.data.startsWith("ERROR")) { + handleIncomingError(event.data); + return; + } + // Handle incoming JSON SDP and ICE messages + try { + msg = JSON.parse(event.data); + } catch (e) { + if (e instanceof SyntaxError) { + handleIncomingError("Error parsing incoming JSON: " + event.data); + } else { + handleIncomingError("Unknown error parsing response: " + event.data); + } + return; + } + + // Incoming JSON signals the beginning of a call + if (!peer_connection) + createCall(msg); + + if (msg.sdp != null) { + onIncomingSDP(msg.sdp); + } else if (msg.ice != null) { + onIncomingICE(msg.ice); + } else { + handleIncomingError("Unknown incoming JSON: " + msg); + } + } +} + +function onServerClose(event) { + setStatus('Stream ended. Disconnected from server'); + resetVideo(); + + if (peer_connection) { + clearTimeout(statsTimer); + peer_connection.close(); + peer_connection = null; + } + + // Reset after a second if we want to loop (connect and re-stream) + if (loop) { + window.setTimeout(websocketServerConnect, 1000); + } else { + setConnectButtonState("Visualize"); + } +} + +function onServerError(event) { + setError("Unable to connect to server. Confirm it is running and accessible on network.") + // Retry after 2 seconds + window.setTimeout(websocketServerConnect, DELAY_AUTOSTART_MSEC); +} + +function getLocalStream() { + var constraints; + var textarea = document.getElementById('constraints'); + try { + constraints = JSON.parse(textarea.value); + } catch (e) { + console.error(e); + setError('ERROR parsing constraints: ' + e.message + ', using default constraints'); + constraints = default_constraints; + } + console.log(JSON.stringify(constraints)); + + // Add local stream + if (navigator.mediaDevices.getUserMedia) { + return navigator.mediaDevices.getUserMedia(constraints); + } else { + errorUserMediaHandler(); + } +} + +function websocketServerConnect() { + connect_attempts++; + if (connect_attempts > 15) { + setError("Too many connection attempts, aborting. 
Refresh page to try again"); + return; + } + setConnectButtonState("Disconnect"); + + // initialize expandable form sections + initExpando("expando"); + + // initialize Pipeline Server API parameters + initPipelineValues(); + + // if empty, populate pipeline server instance id to connect using value from query param + var attempt_connect = setStreamInstance(); + if (attempt_connect) { + console.log("Parameters provided - attempting to connect"); + } else { + return; + } + if (!g_default_server_peer_id) { + console.log("Will not attempt connection until Pipeline Server WebRTC frame destination peerid is requested."); + return; + } + + // Clear errors in the status span for fresh run + console.log("Clearing error state from WebRTC status field.") + var span = document.getElementById("status"); + span.classList.remove('error'); + console.log("Clearing stats for fresh WebRTC run"); + window.document.getElementById("stats").innerHTML = "Preparing WebRTC Stats..."; + // Populate constraints + var textarea = document.getElementById('constraints'); + if (textarea.value == '') + textarea.value = JSON.stringify(default_constraints); + // Fetch the peer id to use + peer_id = default_peer_id || getOurId(); + ws_port = ws_port || '8443'; + if (window.location.protocol.startsWith ("file")) { + ws_server = ws_server || "127.0.0.1"; + } else if (window.location.protocol.startsWith ("http")) { + ws_server = ws_server || window.location.hostname; + } else { + throw new Error ("Don't know how to connect to the signaling server with uri" + window.location); + } + + // Support AutoPlay in Chrome/Safari browsers + var autoPlayVideo = document.getElementById("stream"); + autoPlayVideo.oncanplaythrough = function() { + autoPlayVideo.muted = true; + autoPlayVideo.play(); + autoPlayVideo.pause(); + autoPlayVideo.play(); + } + autoPlayVideo.scrollIntoView(false); + + var ws_url = 'ws://' + ws_server + ':' + ws_port; + setStatus("Connecting to server " + ws_url); + ws_conn = new WebSocket(ws_url); + /* When connected, immediately register with the server */ + ws_conn.addEventListener('open', (event) => { + document.getElementById("peer-id").value = peer_id; + ws_conn.send('HELLO ' + peer_id); + setStatus("Registering with server as client " + peer_id.toString()); + //setConnectButtonState("Visualize"); + }); + ws_conn.addEventListener('error', onServerError); + ws_conn.addEventListener('message', onServerMessage); + ws_conn.addEventListener('close', onServerClose); +} + +function onRemoteTrack(event) { + if (getVideoElement().srcObject !== event.streams[0]) { + console.log('Incoming stream'); + getVideoElement().srcObject = event.streams[0]; + } +} + +function errorUserMediaHandler() { + setError("Browser doesn't support getUserMedia!"); +} + +const handleDataChannelOpen = (event) =>{ + console.log("dataChannel.OnOpen", event); +}; + +const handleDataChannelMessageReceived = (event) =>{ + console.log("dataChannel.OnMessage:", event, event.data.type); + + setStatus("Received data channel message"); + if (typeof event.data === 'string' || event.data instanceof String) { + console.log('Incoming string message: ' + event.data); + textarea = document.getElementById("text") + textarea.value = textarea.value + '\n' + event.data + } else { + console.log('Incoming data message'); + } + send_channel.send("Hi! 
(from browser)"); +}; + +const handleDataChannelError = (error) =>{ + console.log("dataChannel.OnError:", error); +}; + +const handleDataChannelClose = (event) =>{ + console.log("dataChannel.OnClose", event); +}; + +function onDataChannel(event) { + setStatus("Data channel created"); + let receiveChannel = event.channel; + receiveChannel.onopen = handleDataChannelOpen; + receiveChannel.onmessage = handleDataChannelMessageReceived; + receiveChannel.onerror = handleDataChannelError; + receiveChannel.onclose = handleDataChannelClose; +} + + +function printStats() { + var statsEvery6Sec = window.setInterval(function() { + if (peer_connection == null) { + setStatus("Stream completed. Check console output for detailed information."); + clearInterval(statsEvery6Sec); + setConnectButtonState("Visualize"); + } else { + peer_connection.getStats(null).then(stats => { + let statsOutput = ""; + if (g_pipeline_server_instance_id != "unknown") { + statsOutput += `
<b>Pipeline Summary:</b>\n<br>${PIPELINE_SERVER}/pipelines/${g_pipeline_server_instance_id}` + statsOutput += `<br><b>Pipeline Status:</b>\n${PIPELINE_SERVER}/pipelines/status/${g_pipeline_server_instance_id}` + } else { + statsOutput += `<b>Pipeline Server Instance ID UNKNOWN</b>` + } + stats.forEach(report => { + statsOutput += `<h2>Report: ${report.type}</h2>\nID: ${report.id}<br>\n` + + `Timestamp: ${report.timestamp}<br>\n`; + // Now the statistics for this report; we intentionally drop the ones we + // sorted to the top above + Object.keys(report).forEach(statName => { + if (statName !== "id" && statName !== "timestamp" && statName !== "type") { + statsOutput += `${statName}: ${report[statName]}<br>
\n`; + } + }); + }); + window.document.getElementById("stats").innerHTML = statsOutput; + }); + } + }, 6000); +} + +function createCall(msg) { + // Reset connection attempts because we connected successfully + connect_attempts = 0; + + console.log('Creating RTCPeerConnection'); + setStatus("Creating RTCPeerConnection"); + + peer_connection = new RTCPeerConnection(rtc_configuration); + send_channel = peer_connection.createDataChannel('label', null); + send_channel.onopen = handleDataChannelOpen; + send_channel.onmessage = handleDataChannelMessageReceived; + send_channel.onerror = handleDataChannelError; + send_channel.onclose = handleDataChannelClose; + peer_connection.ondatachannel = onDataChannel; + peer_connection.ontrack = onRemoteTrack; + statsTimer = setTimeout(printStats, 4000); + + /* Send our video/audio to the other peer */ + /* local_stream_promise = getLocalStream().then((stream) => { + console.log('Adding local stream'); + peer_connection.addStream(stream); + return stream; + }).catch(setError); */ + + if (!msg.sdp) { + console.log("WARNING: First message wasn't an SDP message!?"); + } + + peer_connection.onicecandidate = (event) => { + // We have a candidate, send it to the remote party with the same uuid + if (event.candidate == null) { + console.log("ICE Candidate was null, done"); + return; + } + ws_conn.send(JSON.stringify({'ice': event.candidate})); + }; + + setConnectButtonState("Disconnect"); + + setStatus("Created peer connection for call, waiting for SDP"); +} diff --git a/vaserving/__init__.py b/server/__init__.py similarity index 100% rename from vaserving/__init__.py rename to server/__init__.py diff --git a/vaserving/__main__.py b/server/__main__.py similarity index 66% rename from vaserving/__main__.py rename to server/__main__.py index 846fb16..d9c531d 100644 --- a/vaserving/__main__.py +++ b/server/__main__.py @@ -7,8 +7,9 @@ import sys import connexion -from vaserving.common.utils import logging -from vaserving.vaserving import VAServing +from flask_cors import CORS +from server.common.utils import logging +from server.pipeline_server import PipelineServer logger = logging.get_logger('main', is_static=True) @@ -17,22 +18,25 @@ def main(options): app = connexion.App(__name__, port=options.port, specification_dir='rest_api/', server='tornado') app.add_api('dlstreamer-pipeline-server.yaml', arguments={'title': 'Intel(R) DL Streamer Pipeline Server API'}) + # Ref: https://github.com/spec-first/connexion/blob/main/docs/cookbook.rst#cors-support + # Enables CORS on all domains/routes/methods per https://flask-cors.readthedocs.io/en/latest/#usage + CORS(app.app) logger.info("Starting Tornado Server on port: %s", options.port) app.run(port=options.port, server='tornado') except (KeyboardInterrupt, SystemExit): logger.info("Keyboard Interrupt or System Exit") except Exception as error: logger.error("Error Starting Tornado Server: %s", error) - VAServing.stop() + PipelineServer.stop() logger.info("Exiting") if __name__ == '__main__': try: - VAServing.start() + PipelineServer.start() except Exception as error: - logger.error("Error Starting VA Serving: %s", error) + logger.error("Error Starting Pipeline Server: %s", error) sys.exit(1) try: - main(VAServing.options) + main(PipelineServer.options) except Exception as error: logger.error("Unexpected Error: %s", error) diff --git a/vaserving/app_destination.py b/server/app_destination.py similarity index 100% rename from vaserving/app_destination.py rename to server/app_destination.py diff --git a/vaserving/app_source.py 
b/server/app_source.py similarity index 100% rename from vaserving/app_source.py rename to server/app_source.py diff --git a/vaserving/arguments.py b/server/arguments.py similarity index 84% rename from vaserving/arguments.py rename to server/arguments.py index 826b6c9..5cfde27 100644 --- a/vaserving/arguments.py +++ b/server/arguments.py @@ -46,7 +46,14 @@ def parse_options(args=None): default=bool(util.strtobool(os.getenv('ENABLE_RTSP', 'false')))) parser.add_argument("--rtsp-port", action="store", type=int, dest="rtsp_port", default=int(os.getenv('RTSP_PORT', '8554'))) - + parser.add_argument("--enable-webrtc", + dest="enable_webrtc", + action="store", + type=lambda x: bool(util.strtobool(x)), + default=bool(util.strtobool(os.getenv('ENABLE_WEBRTC', 'false')))) + parser.add_argument("--webrtc-signaling-server", action="store", + dest="webrtc_signaling_server", + default=os.getenv('WEBRTC_SIGNALING_SERVER', 'ws://webrtc_signaling_server:8443')) if (isinstance(args, dict)): args = ["--{}={}".format(key, value) @@ -56,7 +63,7 @@ def parse_options(args=None): result = parser.parse_args(args) parse_network_preference(result) except Exception: - print("Unrecognized argument passed to VAServing") + print("Unrecognized argument passed to PipelineServer") parser.print_help() raise return result diff --git a/vaserving/common/__init__.py b/server/common/__init__.py similarity index 100% rename from vaserving/common/__init__.py rename to server/common/__init__.py diff --git a/vaserving/common/utils/__init__.py b/server/common/utils/__init__.py similarity index 100% rename from vaserving/common/utils/__init__.py rename to server/common/utils/__init__.py diff --git a/vaserving/common/utils/logging.py b/server/common/utils/logging.py similarity index 96% rename from vaserving/common/utils/logging.py rename to server/common/utils/logging.py index b34a096..5342fd1 100644 --- a/vaserving/common/utils/logging.py +++ b/server/common/utils/logging.py @@ -19,6 +19,10 @@ def set_default_log_level(level): LOG_LEVEL = level +def is_debug_level(logger): + return logger.isEnabledFor(logging.DEBUG) + + def _set_log_level(logger, level): try: logger.setLevel(level) diff --git a/vaserving/ffmpeg_pipeline.py b/server/ffmpeg_pipeline.py similarity index 99% rename from vaserving/ffmpeg_pipeline.py rename to server/ffmpeg_pipeline.py index c10b4cc..f138574 100644 --- a/vaserving/ffmpeg_pipeline.py +++ b/server/ffmpeg_pipeline.py @@ -20,8 +20,8 @@ import os from datetime import datetime, timedelta -from vaserving.pipeline import Pipeline -from vaserving.common.utils import logging +from server.pipeline import Pipeline +from server.common.utils import logging if shutil.which('ffmpeg') is None: diff --git a/vaserving/gstreamer_app_destination.py b/server/gstreamer_app_destination.py similarity index 95% rename from vaserving/gstreamer_app_destination.py rename to server/gstreamer_app_destination.py index 9886c39..4692eb7 100644 --- a/vaserving/gstreamer_app_destination.py +++ b/server/gstreamer_app_destination.py @@ -7,8 +7,8 @@ from collections import namedtuple from enum import Enum, auto from gstgva.video_frame import VideoFrame -from vaserving.app_destination import AppDestination -from vaserving.gstreamer_pipeline import GStreamerPipeline +from server.app_destination import AppDestination +from server.gstreamer_pipeline import GStreamerPipeline GvaSample = namedtuple('GvaSample', ['sample', 'video_frame']) GvaSample.__new__.__defaults__ = (None, None) diff --git a/vaserving/gstreamer_app_source.py 
b/server/gstreamer_app_source.py similarity index 96% rename from vaserving/gstreamer_app_source.py rename to server/gstreamer_app_source.py index 6d53754..eb9e978 100644 --- a/vaserving/gstreamer_app_source.py +++ b/server/gstreamer_app_source.py @@ -16,9 +16,9 @@ # pylint: disable=wrong-import-position from gi.repository import Gst from gstgva.util import GVAJSONMeta -from vaserving.app_source import AppSource -from vaserving.gstreamer_app_destination import GvaSample -from vaserving.gstreamer_pipeline import GStreamerPipeline +from server.app_source import AppSource +from server.gstreamer_app_destination import GvaSample +from server.gstreamer_pipeline import GStreamerPipeline # pylint: enable=wrong-import-position fields = ['data', diff --git a/vaserving/gstreamer_pipeline.py b/server/gstreamer_pipeline.py similarity index 78% rename from vaserving/gstreamer_pipeline.py rename to server/gstreamer_pipeline.py index 0612c51..9e42375 100755 --- a/vaserving/gstreamer_pipeline.py +++ b/server/gstreamer_pipeline.py @@ -15,14 +15,15 @@ gi.require_version('Gst', '1.0') gi.require_version('GstApp', '1.0') # pylint: disable=wrong-import-position -from gi.repository import GLib, Gst, GstApp # pylint: disable=unused-import -from gstgva.util import GVAJSONMeta -from vaserving.app_destination import AppDestination -from vaserving.app_source import AppSource -from vaserving.common.utils import logging -from vaserving.pipeline import Pipeline -from vaserving.rtsp.gstreamer_rtsp_destination import GStreamerRtspDestination # pylint: disable=unused-import -from vaserving.rtsp.gstreamer_rtsp_server import GStreamerRtspServer +from gi.repository import GLib, Gst, GstApp +from server.app_destination import AppDestination +from server.app_source import AppSource +from server.common.utils import logging +from server.pipeline import Pipeline +from server.rtsp.gstreamer_rtsp_destination import GStreamerRtspDestination +from server.rtsp.gstreamer_rtsp_server import GStreamerRtspServer +from server.webrtc.gstreamer_webrtc_destination import GStreamerWebRTCDestination +from server.webrtc.gstreamer_webrtc_manager import GStreamerWebRTCManager # pylint: enable=wrong-import-position class GStreamerPipeline(Pipeline): @@ -30,8 +31,21 @@ class GStreamerPipeline(Pipeline): GVA_INFERENCE_ELEMENT_TYPES = ["GstGvaDetect", "GstGvaClassify", "GstGvaInference", + "GstGvaActionRecognitionBin", "GvaAudioDetect", - "GstGvaActionRecognitionBin"] + "GvaDetectBin", + "GvaClassifyBin", + "GvaInferenceBin", + "GvaActionRecognitionBin"] + GVA_ELEMENT_ENUM_TYPES = ["GstGVAMetaPublishFileFormat", + "InferenceRegionType", + "GstGVAMetaconvertFormatType", + "GstGVAMetaPublishMethod", + "GstGVAActionRecognitionBinBackend", + "GvaMetaPublishFileFormat", + "GvaInferenceBinRegion", + "GvaVideoToTensorBackend"] + SOURCE_ALIAS = "auto_source" GST_ELEMENTS_WITH_SOURCE_SETUP = ("GstURISourceBin") @@ -39,6 +53,7 @@ class GStreamerPipeline(Pipeline): _mainloop = None _mainloop_thread = None _rtsp_server = None + _webrtc_manager = None CachedElement = namedtuple("CachedElement", ["element", "pipelines"]) @staticmethod @@ -56,7 +71,6 @@ def __init__(self, identifier, config, model_manager, request, finished_callback self.identifier = identifier self.pipeline = None self.template = config['template'] - self.models = model_manager.models self.model_manager = model_manager self.request = request self._auto_source = None @@ -93,30 +107,33 @@ def __init__(self, identifier, config, model_manager, request, finished_callback 
target=GStreamerPipeline.gobject_mainloop) GStreamerPipeline._mainloop_thread.daemon = True GStreamerPipeline._mainloop_thread.start() - - if (options.enable_rtsp and not GStreamerPipeline._rtsp_server): - GStreamerPipeline._rtsp_server = GStreamerRtspServer(options.rtsp_port) - GStreamerPipeline._rtsp_server.start() - + if options: + if (options.enable_rtsp and not GStreamerPipeline._rtsp_server): + GStreamerPipeline._rtsp_server = GStreamerRtspServer(options.rtsp_port) + GStreamerPipeline._rtsp_server.start() + if (options.enable_webrtc and not GStreamerPipeline._webrtc_manager): + GStreamerPipeline._webrtc_manager = GStreamerWebRTCManager(options.webrtc_signaling_server) self.rtsp_server = GStreamerPipeline._rtsp_server - - + self.webrtc_manager = GStreamerPipeline._webrtc_manager @staticmethod def mainloop_quit(): if (GStreamerPipeline._rtsp_server): GStreamerPipeline._rtsp_server.stop() # Explicit delete frees GstreamerRtspServer resources. - # Avoids hang or segmentation fault on vaserving.stop() + # Avoids hang or segmentation fault on pipeline_server.stop() del GStreamerPipeline._rtsp_server GStreamerPipeline._rtsp_server = None + if (GStreamerPipeline._webrtc_manager): + GStreamerPipeline._webrtc_manager.stop() + GStreamerPipeline._webrtc_manager = None if (GStreamerPipeline._mainloop): GStreamerPipeline._mainloop.quit() GStreamerPipeline._mainloop = None if (GStreamerPipeline._mainloop_thread): GStreamerPipeline._mainloop_thread = None - def _verify_and_set_rtsp_destination(self): + def _verify_and_set_frame_destinations(self): destination = self.request.get("destination", {}) frame_destination = destination.get("frame", {}) frame_destination_type = frame_destination.get("type", None) @@ -127,13 +144,22 @@ def _verify_and_set_rtsp_destination(self): if not self.rtsp_path.startswith('/'): self.rtsp_path = "/" + self.rtsp_path self.rtsp_server.check_if_path_exists(self.rtsp_path) - frame_destination["class"] = "GStreamerRtspDestination" + frame_destination["class"] = GStreamerRtspDestination.__name__ rtsp_destination = AppDestination.create_app_destination(self.request, self, "frame") if not rtsp_destination: raise Exception("Unsupported Frame Destination: {}".format( frame_destination["class"])) self._app_destinations.append(rtsp_destination) - + if frame_destination_type == "webrtc": + self._logger.info("Request assigned webrtc frame destination {dest}".format( + dest=json.dumps(frame_destination))) + if (not self.appsink_element): + raise Exception("Pipeline does not support Frame Destination") + frame_destination["class"] = GStreamerWebRTCDestination.__name__ + webrtc_destination = AppDestination.create_app_destination(self.request, self, "frame") + if not webrtc_destination: + raise Exception("Unsupported Frame Destination: {}".format(frame_destination["class"])) + self._app_destinations.append(webrtc_destination) def _delete_pipeline(self, new_state): self._cal_avg_fps() @@ -154,10 +180,20 @@ def _delete_pipeline(self, new_state): if self._app_source: self._app_source.finish() + del self._app_source + self._app_source = None for destination in self._app_destinations: destination.finish() + if self.appsrc_element: + del self.appsrc_element + self.appsrc_element = None + + if self.appsink_element: + del self.appsink_element + self.appsink_element = None + self._app_destinations.clear() if (new_state == Pipeline.State.ERROR): @@ -332,18 +368,18 @@ def _get_elements_by_type(pipeline, type_strings): return [element for element in pipeline.iterate_elements() if 
element.__gtype__.name in type_strings] - def _set_model_proc(self): + def _set_model_property(self, property_name): gva_elements = [element for element in self.pipeline.iterate_elements() if ( element.__gtype__.name in self.GVA_INFERENCE_ELEMENT_TYPES)] for element in gva_elements: - if element.get_property("model-proc") is None: - proc = None - if element.get_property("model") in self.model_manager.model_procs: - proc = self.model_manager.model_procs[element.get_property("model")] - if proc is not None: - self._logger.debug("Setting model proc to {} for element {}".format( - proc, element.get_name())) - element.set_property("model-proc", proc) + if element.find_property(property_name) and not element.get_property(property_name): + if element.get_property("model") in self.model_manager.model_properties[property_name]: + property_value = self.model_manager.model_properties[property_name][element.get_property("model")] + if property_value is None: + continue + self._logger.debug("Setting {} to {} for element {}".format( + property_name, property_value, element.get_name())) + element.set_property(property_name, property_value) @staticmethod def validate_config(config): @@ -352,11 +388,13 @@ def validate_config(config): if GStreamerPipeline.SOURCE_ALIAS in field_names: template = template.replace("{"+ GStreamerPipeline.SOURCE_ALIAS +"}", "fakesrc") pipeline = Gst.parse_launch(template) - appsink_elements = GStreamerPipeline._get_elements_by_type(pipeline, ["GstAppSink"]) + logger = logging.get_logger('GSTPipeline', is_static=True) + logger.info("Validating pipeline elements of type {} and {}".format(GstApp.AppSrc.__gtype__.name, + GstApp.AppSink.__gtype__.name)) + appsink_elements = GStreamerPipeline._get_elements_by_type(pipeline, [GstApp.AppSink.__gtype__.name]) metaconvert = pipeline.get_by_name("metaconvert") metapublish = pipeline.get_by_name("destination") - appsrc_elements = GStreamerPipeline._get_elements_by_type(pipeline, ["GstAppSrc"]) - logger = logging.get_logger('GSTPipeline', is_static=True) + appsrc_elements = GStreamerPipeline._get_elements_by_type(pipeline, [GstApp.AppSrc.__gtype__.name]) if (len(appsrc_elements) > 1): logger.warning("Multiple appsrc elements found") if len(appsink_elements) != 1: @@ -464,15 +502,34 @@ def _set_model_instance_id(self): gva_elements = [element for element in self.pipeline.iterate_elements() if (element.__gtype__.name in self.GVA_INFERENCE_ELEMENT_TYPES) and model_instance_id in [x.name for x in element.list_properties()] - and (element.get_property(model_instance_id) is None)] + and not element.get_property(model_instance_id)] for element in gva_elements: name = element.get_property("name") instance_id = name + "_" + str(self.identifier) element.set_property(model_instance_id, instance_id) - def start(self): + def _set_source_and_sink(self): + src = self._get_any_source() + if self._auto_source and src.__gtype__.name in self.GST_ELEMENTS_WITH_SOURCE_SETUP: + src.connect("source_setup", self.source_setup_callback, src) + sink = self.pipeline.get_by_name("appsink") + if (not sink): + sink = self.pipeline.get_by_name("sink") + if src and sink: + src_pad = src.get_static_pad("src") + if (src_pad): + src_pad.add_probe(Gst.PadProbeType.BUFFER, + GStreamerPipeline.source_probe_callback, self) + else: + src.connect( + "pad-added", GStreamerPipeline.source_pad_added_callback, self) + sink_pad = sink.get_static_pad("sink") + sink_pad.add_probe(Gst.PadProbeType.BUFFER, + GStreamerPipeline.appsink_probe_callback, self) - self.request["models"] = 
self.models + def start(self): + if self.model_manager: + self.request["models"] = self.model_manager.models field_names = [fname for _, fname, _, _ in string.Formatter().parse(self.template)] if self.SOURCE_ALIAS in field_names: self._set_auto_source() @@ -492,28 +549,11 @@ def start(self): self._set_properties() self._set_bus_messages_flag() self._set_default_models() - self._set_model_proc() + self._set_model_property("model-proc") + self._set_model_property("labels") self._cache_inference_elements() self._set_model_instance_id() - - src = self._get_any_source() - if self._auto_source and src.__gtype__.name in self.GST_ELEMENTS_WITH_SOURCE_SETUP: - src.connect("source_setup", self.source_setup_callback, src) - - sink = self.pipeline.get_by_name("appsink") - if (not sink): - sink = self.pipeline.get_by_name("sink") - if src and sink: - src_pad = src.get_static_pad("src") - if (src_pad): - src_pad.add_probe(Gst.PadProbeType.BUFFER, - GStreamerPipeline.source_probe_callback, self) - else: - src.connect( - "pad-added", GStreamerPipeline.source_pad_added_callback, self) - sink_pad = sink.get_static_pad("sink") - sink_pad.add_probe(Gst.PadProbeType.BUFFER, - GStreamerPipeline.appsink_probe_callback, self) + self._set_source_and_sink() bus = self.pipeline.get_bus() bus.add_signal_watch() @@ -528,6 +568,11 @@ def start(self): self._set_application_source() self._set_application_destination() + self._log_launch_string() + + if "prepare-pads" in self.config: + self.config["prepare-pads"](self.pipeline) + self.pipeline.set_state(Gst.State.PLAYING) self.start_time = time.time() except Exception as error: @@ -536,14 +581,56 @@ def start(self): # Context is already within _create_delete_lock self._delete_pipeline(Pipeline.State.ERROR) + def _log_launch_string(self): + if not self._gst_launch_string or not logging.is_debug_level(self._logger): + return + try: + elements = [value.strip() + for value in self._gst_launch_string.split("!")] + for idx, element_str in enumerate(elements): + element_name = element_str.split(" ")[0] + properties_str = self._get_element_properties_string( + element_name) + if properties_str: + elements[idx] = "{} {}".format( + element_name, properties_str) + + self._logger.debug( + "Gst launch string is only for debugging purposes, may not be accurate") + self._logger.debug( + "gst-launch-1.0 {}".format(" ! 
".join(elements))) + except Exception as error: + self._logger.debug("Unable to log Gst launch string {id}: {err}".format( + id=self.identifier, err=error)) + + def _get_element_properties_string(self, element_name, add_defaults=False): + properties_str = "" + for element in self.pipeline.iterate_elements(): + if element_name in element.__gtype__.name.lower(): + for paramspec in element.list_properties(): + # Skipping adding of caps and params that aren't writable + if paramspec.name in ['caps', 'parent', 'name'] or paramspec.flags == 225: + continue + if add_defaults or paramspec.default_value != element.get_property(paramspec.name): + property_value = element.get_property( + paramspec.name) + if element.find_property(paramspec.name).value_type.name \ + in self.GVA_ELEMENT_ENUM_TYPES: + property_value = property_value.value_nick + properties_str = "{} {}={}".format( + properties_str, paramspec.name, property_value) + break + + return properties_str + def _set_application_destination(self): self.appsink_element = None - app_sink_elements = GStreamerPipeline._get_elements_by_type(self.pipeline, ["GstAppSink"]) + app_sink_elements = GStreamerPipeline._get_elements_by_type(self.pipeline, [GstApp.AppSink.__gtype__.name]) if (app_sink_elements): self.appsink_element = app_sink_elements[0] - self._verify_and_set_rtsp_destination() + self._verify_and_set_frame_destinations() destination = self.request.get("destination", None) if destination and "metadata" in destination and destination["metadata"]["type"] == "application": @@ -591,7 +678,7 @@ def _set_application_source(self): appsrc_element = self.pipeline.get_by_name("source") - if (appsrc_element) and (appsrc_element.__gtype__.name == "GstAppSrc"): + if (appsrc_element) and (appsrc_element.__gtype__.name == GstApp.AppSrc.__gtype__.name): self.appsrc_element = appsrc_element self._app_source = AppSource.create_app_source(self.request, self) @@ -653,21 +740,7 @@ def on_sample_app_destination(self, sink): return Gst.FlowReturn.OK def on_sample(self, sink): - self._logger.debug("Received Sample from Pipeline {id}".format( - id=self.identifier)) - sample = sink.emit("pull-sample") - try: - - buf = sample.get_buffer() - - for meta in GVAJSONMeta.iterate(buf): - json_object = json.loads(meta.get_message()) - self._logger.debug(json.dumps(json_object)) - - except Exception as error: - self._logger.error("Error on Pipeline {id}: {err}".format( - id=self.identifier, err=error)) - return Gst.FlowReturn.ERROR + _ = sink.emit("pull-sample") self.frame_count += 1 return Gst.FlowReturn.OK diff --git a/vaserving/model_manager.py b/server/model_manager.py similarity index 84% rename from vaserving/model_manager.py rename to server/model_manager.py index 28898b3..f6e35e7 100644 --- a/vaserving/model_manager.py +++ b/server/model_manager.py @@ -9,7 +9,7 @@ import os import fnmatch import string -from vaserving.common.utils import logging +from server.common.utils import logging class ModelsDict(MutableMapping): @@ -48,24 +48,29 @@ def __init__(self, model_dir, network_preference=None, ignore_init_errors=False) self.model_dir = model_dir self.network_preference = network_preference self.models = defaultdict(dict) - self.model_procs = {} + self.model_properties = defaultdict(dict) + if not self.network_preference: self.network_preference = {'CPU': ["FP32"], 'HDDL': ["FP16"], 'GPU': ["FP16"], 'VPU': ["FP16"], 'MYRIAD': ["FP16"], - 'KMB': ["U8"]} + 'KMB': ["U8"], + 'MULTI': ["FP16"], + 'HETERO': ["FP16"], + 'AUTO': ["FP16"], + 'DEFAULT': ["FP16"]} success = 
self.load_models(self.model_dir, self.network_preference) if (not ignore_init_errors) and (not success): raise Exception("Error Initializing Models") - def _get_model_proc(self, path): - candidates = fnmatch.filter(os.listdir(path), "*.json") + def _get_model_property(self, path, model_property, extension): + candidates = fnmatch.filter(os.listdir(path), "*.{}".format(extension)) if (len(candidates) > 1): - raise Exception("Multiple model proc files found in %s" % (path,)) + raise Exception("Multiple {} files found in {}".format(model_property, path)) if (len(candidates) == 1): return os.path.abspath(os.path.join(path, candidates[0])) return None @@ -106,6 +111,14 @@ def get_network(self, model, network): def get_default_network_for_device(self, device, model): if "VA_DEVICE_DEFAULT" in model: + if device not in self.network_preference: + mixed_device = [device_type for device_type in [ + "HETERO", "AUTO", "MULTI"] if device.startswith(device_type)] + if mixed_device and mixed_device[0] in self.network_preference: + device = mixed_device[0] + elif 'DEFAULT' in self.network_preference: + device = 'DEFAULT' + for preference in self.network_preference[device]: ret = self.get_network(model, preference) if ret: @@ -157,7 +170,10 @@ def load_models(self, model_dir, network_preference): version_path = os.path.join(model_path, version) if (os.path.isdir(version_path)): version = self.convert_version(version) - proc = self._get_model_proc(version_path) + proc = self._get_model_property( + version_path, "model-proc", "json") + labels = self._get_model_property( + version_path, "labels", "txt") if proc is None: self.logger.info("Model {model}/{ver} is missing Model-Proc".format( model=model_name, ver=version)) @@ -166,15 +182,18 @@ def load_models(self, model_dir, network_preference): if (networks): for key in networks: networks[key].update({"proc": proc, + "labels": labels, "version": version, "type": "IntelDLDT", "description": model_name}) - self.model_procs[networks.get(key).get("network")] = proc + self.model_properties["model-proc"][networks[key]["network"]] = proc + self.model_properties["labels"][networks[key]["network"]] = labels models[model_name][version] = ModelsDict(model_name, version, {"networks": networks, "proc": proc, + "labels" : labels, "version": version, "type": "IntelDLDT", "description": model_name @@ -182,6 +201,7 @@ def load_models(self, model_dir, network_preference): network_paths = { key: value["network"] for key, value in networks.items()} network_paths["model-proc"] = proc + network_paths["labels"] = labels self.logger.info("Loading Model: {} version: {} " "type: {} from {}".format( model_name, version, "IntelDLDT", network_paths)) @@ -214,11 +234,14 @@ def get_model_parameters(self, name, version): if "networks" in self.models[name][version]: proc = None + labels = None for _, value in self.models[name][version]['networks'].items(): proc = value['proc'] + labels = value['labels'] break params_obj["networks"] = { 'model-proc': proc, + 'labels' : labels, 'networks': {key: value['network'] for key, value in self.models[name][version]['networks'].items()}} diff --git a/vaserving/pipeline.py b/server/pipeline.py similarity index 85% rename from vaserving/pipeline.py rename to server/pipeline.py index e4771cf..86a0822 100644 --- a/vaserving/pipeline.py +++ b/server/pipeline.py @@ -32,12 +32,18 @@ def params(self): def validate_config(config): pass + @staticmethod + def get_config_section(config, config_section): + for key in config_section: + config = config.get(key, {}) + + 
return config + @staticmethod def get_section_and_config(request, config, request_section, config_section): for key in request_section: request = request.get(key, {}) - for key in config_section: - config = config.get(key, {}) + config = Pipeline.get_config_section(config, config_section) return request, config diff --git a/vaserving/pipeline_manager.py b/server/pipeline_manager.py similarity index 86% rename from vaserving/pipeline_manager.py rename to server/pipeline_manager.py index 79faee3..84e7ccf 100644 --- a/vaserving/pipeline_manager.py +++ b/server/pipeline_manager.py @@ -6,15 +6,16 @@ import os import json +import string import traceback from threading import Lock from collections import deque from collections import defaultdict import uuid import jsonschema -from vaserving.common.utils import logging -from vaserving.pipeline import Pipeline -from vaserving import schema +from server.common.utils import logging +from server.pipeline import Pipeline +from server import schema class PipelineManager: @@ -39,14 +40,14 @@ def __init__(self, model_manager, pipeline_dir, max_running_pipelines, def _import_pipeline_types(self): pipeline_types = {} try: - from vaserving.gstreamer_pipeline import GStreamerPipeline # pylint: disable=import-error + from server.gstreamer_pipeline import GStreamerPipeline # pylint: disable=import-error pipeline_types['GStreamer'] = GStreamerPipeline except Exception as error: pipeline_types['GStreamer'] = None self.logger.info( "GStreamer Pipelines Not Enabled: %s\n", error) try: - from vaserving.ffmpeg_pipeline import FFmpegPipeline # pylint: disable=import-error + from server.ffmpeg_pipeline import FFmpegPipeline # pylint: disable=import-error pipeline_types['FFmpeg'] = FFmpegPipeline except Exception as error: pipeline_types['FFmpeg'] = None @@ -60,7 +61,7 @@ def _import_pipeline_types(self): def _load_pipelines(self): # TODO: refactor - # pylint: disable=too-many-branches,too-many-nested-blocks + # pylint: disable=too-many-branches,too-many-nested-blocks,too-many-statements self.log_banner("Loading Pipelines") error_occurred = False self.pipeline_types = self._import_pipeline_types() @@ -112,6 +113,7 @@ def _load_pipelines(self): version, config['type'], path)) + self._update_defaults_from_env(pipelines[pipeline][version]) else: del pipelines[pipeline][version] self.logger.error("Pipeline %s with type %s not supported", @@ -135,6 +137,34 @@ def _load_pipelines(self): self.log_banner("Completed Loading Pipelines") return not error_occurred + def _update_defaults_from_env(self, config): + config = Pipeline.get_config_section( + config, ["parameters", "properties"]) + for key in config: + if "default" in config[key]: + default_value = config[key]["default"] + if not isinstance(default_value, str): + continue + try: + env_value = string.Formatter().vformat( + default_value, [], {'env': os.environ}) + if config[key]["type"] != "string": + env_value = self._get_typed_value(env_value) + config[key]["default"] = env_value + self.logger.debug( + "Setting {}={} using {}".format(key, env_value, default_value)) + except Exception as env_key: + self.logger.debug( + "ENV variable {} is not set, " + "element will use its default value for property {}".format(env_key, key)) + del config[key]["default"] + + def _get_typed_value(self, value): + try: + return json.loads(value) + except ValueError: + return value + def warn_if_mounted(self): if os.path.islink(self.pipeline_dir): self.logger.warning( @@ -193,7 +223,7 @@ def set_section_defaults(self, request, config, 
request_section, config_section) section, config = Pipeline.get_section_and_config( request, config, request_section, config_section) for key in config: - if (key not in section) and ("default" in config[key]): + if key not in section and "default" in config[key]: section[key] = config[key]["default"] if (len(section) != 0): @@ -249,8 +279,13 @@ def create_instance(self, name, version, request_original, options): if not self.is_input_valid(request, pipeline_config, "parameters"): return None, "Invalid Parameters" - if not self.is_input_valid(request, pipeline_config, "destination"): - return None, "Invalid Destination" + if "destination" in request: + destination_section = request.get("destination") + destination_config = pipeline_config.get("destination", {}) + for destination in destination_section: + if not self.is_input_valid(destination_section, destination_config, destination) or \ + not isinstance(destination_section[destination], dict): + return None, "Invalid Destination" if not self.is_input_valid(request, pipeline_config, "source"): return None, "Invalid Source" if not self.is_input_valid(request, pipeline_config, "tags"): diff --git a/vaserving/vaserving.py b/server/pipeline_server.py similarity index 86% rename from vaserving/vaserving.py rename to server/pipeline_server.py index 684bfdf..c28da73 100644 --- a/vaserving/vaserving.py +++ b/server/pipeline_server.py @@ -7,20 +7,20 @@ import time from collections import defaultdict from collections import namedtuple -from vaserving.arguments import parse_options -from vaserving.pipeline_manager import PipelineManager -from vaserving.model_manager import ModelManager -from vaserving.common.utils import logging +from server.arguments import parse_options +from server.pipeline_manager import PipelineManager +from server.model_manager import ModelManager +from server.common.utils import logging -# Allow non-PascalCase class name for __VAServing +# Allow non-PascalCase class name for __PipelineServer #pylint: disable=invalid-name -class __VAServing: +class __PipelineServer: class ModelProxy: - def __init__(self, vaserving, model, logger): + def __init__(self, pipeline_server, model, logger): self._model = model - self._vaserving = vaserving + self._pipeline_server = pipeline_server self._logger = logger def name(self): @@ -34,9 +34,9 @@ def networks(self): class PipelineProxy: - def __init__(self, vaserving, pipeline, logger, instance=None): + def __init__(self, pipeline_server, pipeline, logger, instance=None): self._pipeline = pipeline - self._vaserving = vaserving + self._pipeline_server = pipeline_server self._instance = instance self._logger = logger self._status_named_tuple = None @@ -48,7 +48,7 @@ def version(self): return self._pipeline["version"] def stop(self): - return self._vaserving.pipeline_manager.stop_instance(self._instance) + return self._pipeline_server.pipeline_manager.stop_instance(self._instance) def wait(self, timeout=None): status = self.status() @@ -66,7 +66,7 @@ def wait(self, timeout=None): def status(self): if (self._instance): - result = self._vaserving.pipeline_manager.get_instance_status(self._instance) + result = self._pipeline_server.pipeline_manager.get_instance_status(self._instance) if 'avg_pipeline_latency' not in result: result['avg_pipeline_latency'] = None @@ -98,7 +98,7 @@ def start(self, request=None, source=None, destination=None, parameters=None, ta self._set_or_update(request, "destination", destination) self._set_or_update(request, "parameters", parameters) self._set_or_update(request, 
"tags", tags) - self._instance, err = self._vaserving.pipeline_instance( + self._instance, err = self._pipeline_server.pipeline_instance( self.name(), self.version(), request) if (not self._instance): @@ -106,7 +106,7 @@ def start(self, request=None, source=None, destination=None, parameters=None, ta return self._instance def __init__(self): - self._logger = logging.get_logger("VAServing", is_static=True) + self._logger = logging.get_logger("PipelineServer", is_static=True) self.options = None self.model_manager = None self.pipeline_manager = None @@ -165,7 +165,7 @@ def stop(self): if (self.options) and (self.options.framework == "gstreamer") and (not self._stopped): try: - from vaserving.gstreamer_pipeline import GStreamerPipeline + from server.gstreamer_pipeline import GStreamerPipeline GStreamerPipeline.mainloop_quit() except Exception as exception: self._logger.warning("Failed in quitting GStreamer main loop: %s", @@ -204,7 +204,7 @@ def pipeline_instance(self, name, version, request): if (not self._stopped): return self.pipeline_manager.create_instance(name, version, request, self.options) - return None, "VA Serving Stopped" + return None, "Pipeline Server Stopped" -VAServing = __VAServing() +PipelineServer = __PipelineServer() diff --git a/vaserving/rest_api/dlstreamer-pipeline-server.yaml b/server/rest_api/dlstreamer-pipeline-server.yaml similarity index 93% rename from vaserving/rest_api/dlstreamer-pipeline-server.yaml rename to server/rest_api/dlstreamer-pipeline-server.yaml index 4638457..6b8dc4c 100644 --- a/vaserving/rest_api/dlstreamer-pipeline-server.yaml +++ b/server/rest_api/dlstreamer-pipeline-server.yaml @@ -1,7 +1,7 @@ openapi: 3.0.0 info: - description: Intel(R) DL Streamer Pipeline Server API - title: Intel(R) DL Streamer Pipeline Server API + description: Intel(R) Deep Learning Streamer Pipeline Server (Intel(R) DL Streamer Pipeline Server) API + title: Pipeline Server API version: 0.0.3 servers: - url: / @@ -19,7 +19,7 @@ paths: $ref: '#/components/schemas/Model' type: array description: Success - x-openapi-router-controller: vaserving.rest_api.endpoints + x-openapi-router-controller: server.rest_api.endpoints /pipelines: get: description: Return supported pipelines @@ -33,7 +33,7 @@ paths: $ref: '#/components/schemas/Pipeline' type: array description: Success - x-openapi-router-controller: vaserving.rest_api.endpoints + x-openapi-router-controller: server.rest_api.endpoints /pipelines/{name}/{version}: get: description: Return pipeline description. @@ -60,7 +60,7 @@ paths: schema: $ref: '#/components/schemas/Pipeline' description: Success - x-openapi-router-controller: vaserving.rest_api.endpoints + x-openapi-router-controller: server.rest_api.endpoints post: description: Start new pipeline instance. operationId: pipelines_name_version_post @@ -88,7 +88,7 @@ paths: responses: 200: description: Success - x-openapi-router-controller: vaserving.rest_api.endpoints + x-openapi-router-controller: server.rest_api.endpoints /pipelines/status: get: description: Returns all pipeline instance status. @@ -102,7 +102,7 @@ paths: $ref: '#/components/schemas/PipelineInstanceStatus' type: array description: Success - x-openapi-router-controller: vaserving.rest_api.endpoints + x-openapi-router-controller: server.rest_api.endpoints /pipelines/{name}/{version}/{instance_id}: delete: description: Stop pipeline instance. 
@@ -133,7 +133,7 @@ paths: responses: 200: description: Success - x-openapi-router-controller: vaserving.rest_api.endpoints + x-openapi-router-controller: server.rest_api.endpoints get: description: Return pipeline instance summary. operationId: pipelines_name_version_instance_id_get @@ -167,7 +167,7 @@ paths: schema: $ref: '#/components/schemas/PipelineInstanceSummary' description: Success - x-openapi-router-controller: vaserving.rest_api.endpoints + x-openapi-router-controller: server.rest_api.endpoints /pipelines/{instance_id}: delete: description: Stop pipeline instance. @@ -184,7 +184,7 @@ paths: responses: 200: description: Success - x-openapi-router-controller: vaserving.rest_api.endpoints + x-openapi-router-controller: server.rest_api.endpoints get: description: Return pipeline instance summary. operationId: pipelines_instance_id_get @@ -203,7 +203,7 @@ paths: schema: $ref: '#/components/schemas/PipelineInstanceSummary' description: Success - x-openapi-router-controller: vaserving.rest_api.endpoints + x-openapi-router-controller: server.rest_api.endpoints /pipelines/{name}/{version}/{instance_id}/status: get: description: Return pipeline instance status. @@ -237,7 +237,7 @@ paths: schema: $ref: '#/components/schemas/PipelineInstanceStatus' description: Success - x-openapi-router-controller: vaserving.rest_api.endpoints + x-openapi-router-controller: server.rest_api.endpoints /pipelines/status/{instance_id}: get: description: Return pipeline instance status. @@ -258,7 +258,7 @@ paths: schema: $ref: '#/components/schemas/PipelineInstanceStatus' description: Success - x-openapi-router-controller: vaserving.rest_api.endpoints + x-openapi-router-controller: server.rest_api.endpoints components: schemas: AnyValue: {} @@ -421,6 +421,9 @@ components: enum: - kafka type: string + required: + - host + - topic type: object MQTTDestination: properties: @@ -433,6 +436,9 @@ components: enum: - mqtt type: string + required: + - host + - topic type: object RTSPDestination: properties: diff --git a/vaserving/rest_api/endpoints.py b/server/rest_api/endpoints.py similarity index 89% rename from vaserving/rest_api/endpoints.py rename to server/rest_api/endpoints.py index f8c9969..173ae37 100755 --- a/vaserving/rest_api/endpoints.py +++ b/server/rest_api/endpoints.py @@ -5,8 +5,8 @@ ''' from http import HTTPStatus import connexion -from vaserving.common.utils import logging -from vaserving.vaserving import VAServing +from server.common.utils import logging +from server.pipeline_server import PipelineServer logger = logging.get_logger('Default Controller', is_static=True) @@ -25,7 +25,7 @@ def models_get(): # noqa: E501 """ try: logger.debug("GET on /models") - return VAServing.model_manager.get_loaded_models() + return PipelineServer.model_manager.get_loaded_models() except Exception as error: logger.error('pipelines_name_version_get %s', error) return ('Unexpected error', HTTPStatus.INTERNAL_SERVER_ERROR) @@ -41,7 +41,7 @@ def pipelines_get(): # noqa: E501 """ try: logger.debug("GET on /pipelines") - return VAServing.pipeline_manager.get_loaded_pipelines() + return PipelineServer.pipeline_manager.get_loaded_pipelines() except Exception as error: logger.error('pipelines_name_version_get %s', error) return ('Unexpected error', HTTPStatus.INTERNAL_SERVER_ERROR) @@ -62,7 +62,7 @@ def pipelines_name_version_get(name, version): # noqa: E501 try: logger.debug( "GET on /pipelines/{name}/{version}".format(name=name, version=version)) - result = VAServing.pipeline_manager.get_pipeline_parameters( + result = 
PipelineServer.pipeline_manager.get_pipeline_parameters( name, version) if result: return result @@ -89,7 +89,7 @@ def pipelines_name_version_instance_id_delete(name, version, instance_id): # no try: logger.debug("DELETE on /pipelines/{name}/{version}/{id}".format( name=name, version=str(version), id=instance_id)) - result = VAServing.pipeline_manager.stop_instance( + result = PipelineServer.pipeline_manager.stop_instance( instance_id, name, version) if result: result['state'] = result['state'].name @@ -112,7 +112,7 @@ def pipelines_instance_id_delete(instance_id): # noqa: E501 """ try: logger.debug("DELETE on /pipelines/{id}".format(id=instance_id)) - result = VAServing.pipeline_manager.stop_instance(instance_id) + result = PipelineServer.pipeline_manager.stop_instance(instance_id) if result: result['state'] = result['state'].name return result @@ -139,7 +139,7 @@ def pipelines_name_version_instance_id_get(name, version, instance_id): # noqa: try: logger.debug("GET on /pipelines/{name}/{version}/{id}".format( name=name, version=version, id=instance_id)) - result = VAServing.pipeline_manager.get_instance_parameters( + result = PipelineServer.pipeline_manager.get_instance_parameters( name, version, instance_id) if result: return result @@ -161,7 +161,7 @@ def pipelines_instance_id_get(instance_id): # noqa: E501 """ try: logger.debug("GET on /pipelines/{id}".format(id=instance_id)) - result = VAServing.pipeline_manager.get_instance_summary(instance_id) + result = PipelineServer.pipeline_manager.get_instance_summary(instance_id) if result: return result return (bad_request_response, HTTPStatus.BAD_REQUEST) @@ -188,7 +188,7 @@ def pipelines_name_version_instance_id_status_get(name, version, instance_id): logger.debug("GET on /pipelines/{name}/{version}/{id}/status".format(name=name, version=version, id=instance_id)) - result = VAServing.pipeline_manager.get_instance_status( + result = PipelineServer.pipeline_manager.get_instance_status( instance_id, name, version) if result: result['state'] = result['state'].name @@ -207,7 +207,7 @@ def pipelines_status_get_all(): # noqa: E501 """ try: logger.debug("GET on /pipelines/status") - results = VAServing.pipeline_manager.get_all_instance_status() + results = PipelineServer.pipeline_manager.get_all_instance_status() for result in results: result['state'] = result['state'].name return results @@ -217,7 +217,7 @@ def pipelines_status_get_all(): # noqa: E501 def pipelines_instance_id_status_get(instance_id): # noqa: E501 - """pipelines_name_version_instance_id_status_get + """pipelines_instance_id_status_get Return instance status summary # noqa: E501 @@ -228,7 +228,7 @@ def pipelines_instance_id_status_get(instance_id): # noqa: E501 """ try: logger.debug("GET on /pipelines/status/{id}".format(id=instance_id)) - result = VAServing.pipeline_manager.get_instance_status(instance_id) + result = PipelineServer.pipeline_manager.get_instance_status(instance_id) if result: result['state'] = result['state'].name return result @@ -258,7 +258,7 @@ def pipelines_name_version_post(name, version): # noqa: E501 "POST on /pipelines/{name}/{version}".format(name=name, version=str(version))) if connexion.request.is_json: try: - pipeline_id, err = VAServing.pipeline_instance( + pipeline_id, err = PipelineServer.pipeline_instance( name, version, connexion.request.get_json()) if pipeline_id is not None: return pipeline_id diff --git a/vaserving/rtsp/gstreamer_rtsp_destination.py b/server/rtsp/gstreamer_rtsp_destination.py similarity index 97% rename from 
vaserving/rtsp/gstreamer_rtsp_destination.py rename to server/rtsp/gstreamer_rtsp_destination.py index 57e4909..0f8782a 100644 --- a/vaserving/rtsp/gstreamer_rtsp_destination.py +++ b/server/rtsp/gstreamer_rtsp_destination.py @@ -8,8 +8,8 @@ gi.require_version('Gst', '1.0') # pylint: disable=wrong-import-position from gi.repository import Gst -from vaserving.common.utils import logging -from vaserving.app_destination import AppDestination +from server.common.utils import logging +from server.app_destination import AppDestination # pylint: enable=wrong-import-position class GStreamerRtspDestination(AppDestination): diff --git a/vaserving/rtsp/gstreamer_rtsp_factory.py b/server/rtsp/gstreamer_rtsp_factory.py similarity index 98% rename from vaserving/rtsp/gstreamer_rtsp_factory.py rename to server/rtsp/gstreamer_rtsp_factory.py index 0548c65..3c08e5d 100644 --- a/vaserving/rtsp/gstreamer_rtsp_factory.py +++ b/server/rtsp/gstreamer_rtsp_factory.py @@ -9,7 +9,7 @@ gi.require_version('Gst', '1.0') # pylint: disable=wrong-import-position from gi.repository import Gst, GstRtspServer -from vaserving.common.utils import logging +from server.common.utils import logging # pylint: enable=wrong-import-position class GStreamerRtspFactory(GstRtspServer.RTSPMediaFactory): diff --git a/vaserving/rtsp/gstreamer_rtsp_server.py b/server/rtsp/gstreamer_rtsp_server.py similarity index 96% rename from vaserving/rtsp/gstreamer_rtsp_server.py rename to server/rtsp/gstreamer_rtsp_server.py index 16546c0..9bba649 100644 --- a/vaserving/rtsp/gstreamer_rtsp_server.py +++ b/server/rtsp/gstreamer_rtsp_server.py @@ -12,8 +12,8 @@ gi.require_version('Gst', '1.0') # pylint: disable=wrong-import-position from gi.repository import Gst, GstRtspServer, GLib -from vaserving.common.utils import logging -from vaserving.rtsp.gstreamer_rtsp_factory import GStreamerRtspFactory +from server.common.utils import logging +from server.rtsp.gstreamer_rtsp_factory import GStreamerRtspFactory # pylint: enable=wrong-import-position Stream = namedtuple('stream', ['source', 'caps']) diff --git a/vaserving/schema.py b/server/schema.py similarity index 91% rename from vaserving/schema.py rename to server/schema.py index 17fb810..d7e1605 100644 --- a/vaserving/schema.py +++ b/server/schema.py @@ -299,9 +299,42 @@ "path" ] }, + "webrtc": { + "type":"object", + "properties": { + "type": { + "type":"string", + "enum":["webrtc"] + }, + "peer-id": { + "type":"string", + "minLength": 1, + "pattern" : "^[a-zA-Z0-9][a-zA-Z0-9_]*[a-zA-Z0-9]$" + }, + "cache-length": { + "type":"integer" + }, + "sync-with-source": { + "type":"boolean" + }, + "sync-with-destination":{ + "type":"boolean" + }, + "encode-cq-level":{ + "type":"integer" + } + }, + "required": [ + "type", + "peer-id" + ] + }, "oneOf": [ { "$ref": "#/rtsp" + }, + { + "$ref": "#/webrtc" } ] }, diff --git a/server/webrtc/gstreamer_webrtc_destination.py b/server/webrtc/gstreamer_webrtc_destination.py new file mode 100644 index 0000000..9954ae6 --- /dev/null +++ b/server/webrtc/gstreamer_webrtc_destination.py @@ -0,0 +1,115 @@ +''' +* Copyright (C) 2022 Intel Corporation. 
+* +* SPDX-License-Identifier: BSD-3-Clause +''' + +import gi +gi.require_version('Gst', '1.0') +# pylint: disable=wrong-import-position +from gi.repository import Gst +from server.common.utils import logging +from server.app_destination import AppDestination +# pylint: enable=wrong-import-position + +class GStreamerWebRTCDestination(AppDestination): + + def __init__(self, request, pipeline): + AppDestination.__init__(self, request, pipeline) + self._pipeline = pipeline + self._webrtc_manager = pipeline.webrtc_manager + self._app_src = None + self._logger = logging.get_logger('GStreamerWebRTCDestination', is_static=True) + self._need_data = False + self._pts = 0 + self._last_timestamp = 0 + self._frame_size = 0 + self._frame_count = 0 + self._clock = Gst.SystemClock() + caps = Gst.Caps.from_string("video/x-raw") + if self._pipeline.appsink_element.props.caps: + caps = caps.intersect(self._pipeline.appsink_element.props.caps) + self._pipeline.appsink_element.props.caps = caps + self._get_request_parameters(request) + + def _get_request_parameters(self, request): + destination_config = request.get("destination", {}) + frame_config = destination_config.get("frame", {}) + self._webrtc_peerid = frame_config["peer-id"] + self._cache_length = frame_config.get("cache-length", 30) + self._sync_with_source = frame_config.get("sync-with-source", True) + self._sync_with_destination = frame_config.get("sync-with-destination", True) + self._encode_cq_level = frame_config.get("encode-cq-level", 10) + + def _init_stream(self, sample): + self._frame_size = sample.get_buffer().get_size() + caps = sample.get_caps() + self._need_data = False + self._last_timestamp = self._clock.get_time() + if self._sync_with_source: + self._pipeline.appsink_element.set_property("sync", True) + self._logger.info("Adding WebRTC frame destination stream for peer_id {}.".format(self._webrtc_peerid)) + self._logger.debug("WebRTC Stream frame caps == {}".format(caps)) + self._webrtc_manager.add_stream(self._webrtc_peerid, caps, self) + + def _on_need_data(self, _unused_src, _): + self._need_data = True + + def _on_enough_data(self, _): + self._need_data = False + + def set_app_src(self, app_src, webrtc_pipeline): + self._app_src = app_src + self._pts = 0 + self._app_src.set_property("is-live", True) + self._app_src.set_property("do-timestamp", True) + self._app_src.set_property("blocksize", self._frame_size) + if self._sync_with_destination: + self._app_src.set_property("block", True) + self._app_src.set_property("min-percent", 100) + if self._cache_length: + self._app_src.set_property("max-bytes", + int(self._frame_size*self._cache_length)) + encoder = webrtc_pipeline.get_by_name("vp8encoder") + if self._encode_cq_level and encoder: + encoder.set_property("cq-level", self._encode_cq_level) + self._app_src.connect('need-data', self._on_need_data) + self._app_src.connect('enough-data', self._on_enough_data) + + def _push_buffer(self, buffer): + timestamp = self._clock.get_time() + delta = timestamp - self._last_timestamp + buffer.pts = buffer.dts = self._pts + buffer.duration = delta + self._pts += delta + self._last_timestamp = timestamp + retval = self._app_src.emit('push-buffer', buffer) + if retval != Gst.FlowReturn.OK: + self._logger.debug( + "Push buffer failed for stream {} with {}".format(self._webrtc_peerid, retval)) + self._end_stream() + + def process_frame(self, frame): + self._logger.info("process_frame negotiating caps") + self._init_stream(frame) + self.process_frame = self._process_frame + self.process_frame(frame) 
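For reference, a minimal sketch of the "frame" destination section this new class consumes; key names and defaults are taken from _get_request_parameters and set_app_src above, the peer-id value is illustrative, and only "type" and "peer-id" are required by the schema addition in server/schema.py.

```python
# Illustrative WebRTC frame destination request section.
frame_destination = {
    "frame": {
        "type": "webrtc",
        "peer-id": "webrtc_peer_01",    # must match ^[a-zA-Z0-9][a-zA-Z0-9_]*[a-zA-Z0-9]$
        "cache-length": 30,             # appsrc max-bytes = frame size * cache-length
        "sync-with-source": True,       # sets appsink sync=true
        "sync-with-destination": True,  # appsrc block=true, min-percent=100
        "encode-cq-level": 10,          # cq-level on the vp8encoder element
    }
}
```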
+ + def _process_frame(self, frame): + if self._need_data: + self._push_buffer(frame.get_buffer()) + else: + self._last_timestamp = self._clock.get_time() + + def _end_stream(self): + self._need_data = False + if self._app_src: + self._app_src.end_of_stream() + self._logger.debug("WebRTC Stream - EOS received") + del self._app_src + self._app_src = None + + def finish(self): + self._end_stream() + if self._webrtc_manager: + self._webrtc_manager.remove_stream(self._webrtc_peerid) diff --git a/server/webrtc/gstreamer_webrtc_manager.py b/server/webrtc/gstreamer_webrtc_manager.py new file mode 100644 index 0000000..27535aa --- /dev/null +++ b/server/webrtc/gstreamer_webrtc_manager.py @@ -0,0 +1,65 @@ +''' +* Copyright (C) 2022 Intel Corporation. +* +* SPDX-License-Identifier: BSD-3-Clause +''' + +from server.webrtc.gstreamer_webrtc_stream import GStreamerWebRTCStream +from server.common.utils import logging + +class GStreamerWebRTCManager: + _source = "appsrc name=webrtc_source format=GST_FORMAT_TIME " + _WebRTCVideoPipeline = " ! videoconvert ! queue ! gvawatermark " \ + " ! vp8enc name=vp8encoder deadline=1 ! rtpvp8pay " \ + " ! queue ! application/x-rtp,media=video,encoding-name=VP8,payload=97 " \ + " ! webrtcbin name=webrtc_destination bundle-policy=max-bundle" + + def __init__(self, signaling_server): + self._logger = logging.get_logger('GStreamerWebRTCManager', is_static=True) + self._signaling_server = signaling_server + self._streams = {} + + def _peerid_in_use(self, peer_id): + if not peer_id: + raise Exception("Empty peer_id was passed to WebRTCManager!") + if peer_id in self._streams: + return True + return False + + def add_stream(self, peer_id, frame_caps, destination_instance): + stream_caps = self._select_caps(frame_caps.to_string()) + if not self._peerid_in_use(peer_id): + launch_string = self._get_launch_string(stream_caps) + self._streams[peer_id] = GStreamerWebRTCStream(peer_id, stream_caps, launch_string, destination_instance, + self._signaling_server) + self._logger.info("Starting WebRTC Stream for peer_id:{}".format(peer_id)) + self._streams[peer_id].start() + + def _select_caps(self, caps): + split_caps = caps.split(',') + new_caps = [] + selected_caps = ['video/x-raw', 'width', 'height', 'framerate', + 'layout', 'format'] + for cap in split_caps: + for selected in selected_caps: + if selected in cap: + new_caps.append(cap) + return new_caps + + def _get_launch_string(self, stream_caps): + s_src = "{} caps=\"{}\"".format(self._source, ','.join(stream_caps)) + pipeline_launch = " {} {} ".format(s_src, self._WebRTCVideoPipeline) + self._logger.info(pipeline_launch) + return pipeline_launch + + def remove_stream(self, peer_id): + if peer_id in self._streams: + self._logger.info("Stopping WebRTC Stream for peer_id {id}".format(id=peer_id)) + self._streams[peer_id].stop() + del self._streams[peer_id] + self._logger.debug("Remaining set of WebRTC Streams {}".format(self._streams)) + + def stop(self): + for peer_id in list(self._streams): + self.remove_stream(peer_id) + self._streams = None diff --git a/server/webrtc/gstreamer_webrtc_stream.py b/server/webrtc/gstreamer_webrtc_stream.py new file mode 100644 index 0000000..e51f93b --- /dev/null +++ b/server/webrtc/gstreamer_webrtc_stream.py @@ -0,0 +1,279 @@ +''' +* Copyright (C) 2022 Intel Corporation. 
+* +* SPDX-License-Identifier: BSD-3-Clause +''' + +import asyncio +import json +import time +from threading import Thread +from socket import gaierror +from websockets.client import connect as WSConnect +from websockets.exceptions import ConnectionClosedOK, ConnectionClosedError +import gi + +# pylint: disable=wrong-import-position +gi.require_version('Gst', '1.0') +from gi.repository import Gst +gi.require_version('GstWebRTC', '1.0') +from gi.repository import GstWebRTC +gi.require_version('GstSdp', '1.0') +from gi.repository import GstSdp +# pylint: enable=wrong-import-position +import server.gstreamer_pipeline as GstPipeline +from server.common.utils import logging + +class GStreamerWebRTCStream: + def __init__(self, peer_id, frame_caps, launch_string, destination_instance, + signaling_server): + self._logger = logging.get_logger('GStreamerWebRTCStream', is_static=True) + self._peer_id = peer_id + self._frame_caps = frame_caps + self._launch_string = launch_string + self._destination_instance = destination_instance + self._server = signaling_server + self._logger.debug("GStreamerWebRTCStream __init__ with Signaling Server at {}".format(self._server)) + self._conn = None + self._webrtc = None + self._stopped = False + self._thread = None + self._pipe = None + self._state = None + self._webrtc_pipeline = None + self._webrtc_pipeline_type = "GStreamer WebRTC Stream" + self._retry_limit = 5 + self._retry_delay = 5 + self._retries_attempted = 0 + + async def connect(self): + if self._conn: + self._logger.warning("Encountered open connection when attempting to re-connect!") + return + self._logger.info("WebRTC Stream connect will negotiate with {} to accept connections from peerid {}".format( + self._server, self._peer_id)) + self._conn = await WSConnect(self._server) + await self._conn.send("HELLO {peer_id}".format(peer_id=self._peer_id)) + + async def _setup_call(self): + self._logger.debug("WebRTC Stream setup_call entered") + await self._conn.send('SESSION {peer_id}'.format(peer_id=self._peer_id)) + + def _send_sdp_offer(self, offer): + text = offer.sdp.as_text() + self._logger.info("WebRTC Stream Sending offer:\n{}".format(text)) + msg = json.dumps({'sdp': {'type': 'offer', 'sdp': text}}) + event_loop = asyncio.new_event_loop() + event_loop.run_until_complete(self._conn.send(msg)) + event_loop.close() + + def _on_offer_created(self, promise, _, __): + self._logger.debug("WebRTC Stream on_offer_created entered") + promise.wait() + reply = promise.get_reply() + offer = reply.get_value('offer') + promise = Gst.Promise.new() + self._webrtc.emit('set-local-description', offer, promise) + promise.interrupt() + self._send_sdp_offer(offer) + + def _on_negotiation_needed(self, element): + self._logger.debug("WebRTC Stream on_negotiation_needed for element {}".format(element)) + promise = Gst.Promise.new_with_change_func(self._on_offer_created, element, None) + element.emit('create-offer', None, promise) + + def _send_ice_candidate_message(self, _, mlineindex, candidate): + icemsg = json.dumps({'ice': {'candidate': candidate, 'sdpMLineIndex': mlineindex}}) + event_loop = asyncio.new_event_loop() + event_loop.run_until_complete(self._conn.send(icemsg)) + event_loop.close() + + def _on_incoming_decodebin_stream(self, _, pad): + self._logger.debug("checking pad caps {}".format(pad)) + if not pad.has_current_caps(): + self._logger.warning("WebRTC Stream Pad has no caps, ignoring: {}".format(pad)) + return + caps = pad.get_current_caps() + name = caps.to_string() + self._logger.debug("WebRTC Stream 
Pad caps: {}".format(name)) + if name.startswith('video'): + queue = Gst.ElementFactory.make('queue') + conv = Gst.ElementFactory.make('videoconvert') + sink = Gst.ElementFactory.make('autovideosink') + self._pipe.add(queue) + self._pipe.add(conv) + self._pipe.add(sink) + self._pipe.sync_children_states() + pad.link(queue.get_static_pad('sink')) + queue.link(conv) + conv.link(sink) + + def _on_incoming_stream(self, _, pad): + self._logger.debug("WebRTC Stream preparing incoming stream {}".format(pad)) + if pad.direction != Gst.PadDirection.SRC: + return + decodebin = Gst.ElementFactory.make('decodebin') + decodebin.connect('pad-added', self._on_incoming_decodebin_stream) + self._pipe.add(decodebin) + decodebin.sync_state_with_parent() + self._webrtc.link(decodebin) + + def prepare_destination_pads(self, pipeline): + self._pipe = pipeline + self._pipe.caps = self._frame_caps + appsrc = self._pipe.get_by_name("webrtc_source") + self._destination_instance.set_app_src(appsrc, self._pipe) + self._webrtc = self._pipe.get_by_name('webrtc_destination') + self._webrtc.connect('on-negotiation-needed', self._on_negotiation_needed) + self._webrtc.connect('on-ice-candidate', self._send_ice_candidate_message) + self._webrtc.connect('pad-added', self._on_incoming_stream) + + def _finished_callback(self): + self._logger.info("GStreamerPipeline finished for peer_id:{}".format(self._peer_id)) + + def _start_pipeline(self): + self._logger.info("Starting WebRTC pipeline for peer_id:{}".format(self._peer_id)) + config = {"type": self._webrtc_pipeline_type, "template": self._launch_string, + "prepare-pads": self.prepare_destination_pads} + request = {"source": { "type": "webrtc_destination" }, "peer_id": self._peer_id} + self._reset() + self._webrtc_pipeline = GstPipeline.GStreamerPipeline( + self._peer_id, config, None, request, self._finished_callback, None) + self._webrtc_pipeline.start() + self._logger.info("WebRTC pipeline started for peer_id:{}".format(self._peer_id)) + + async def _handle_sdp(self, message): + if self._webrtc: + try: + msg = json.loads(message) + except ValueError: + self._logger.error("Error processing empty or bad SDP message!") + return + self._logger.info("Handle SDP message {}".format(msg)) + if 'sdp' in msg: + sdp = msg['sdp'] + assert(sdp['type'] == 'answer') + sdp = sdp['sdp'] + self._logger.warning("WebRTC Received answer: {}".format(sdp)) + _, sdpmsg = GstSdp.SDPMessage.new() + GstSdp.sdp_message_parse_buffer(bytes(sdp.encode()), sdpmsg) + answer = GstWebRTC.WebRTCSessionDescription.new(GstWebRTC.WebRTCSDPType.ANSWER, sdpmsg) + promise = Gst.Promise.new() + self._webrtc.emit('set-remote-description', answer, promise) + promise.interrupt() + elif 'ice' in msg: + ice = msg['ice'] + candidate = ice['candidate'] + sdpmlineindex = ice['sdpMLineIndex'] + self._webrtc.emit('add-ice-candidate', sdpmlineindex, candidate) + else: + self._logger.debug("Peer not yet connected or webrtcbin element missing from frame destination.") + + def _log_banner(self, heading): + banner = "="*len(heading) + self._logger.info(banner) + self._logger.info(heading) + self._logger.info(banner) + + async def message_loop(self): + self._logger.debug("Entered WebRTC Stream message_loop") + assert self._conn + while not self._stopped: + message = None + try: + message = await asyncio.wait_for(self._conn.recv(), 10) + except asyncio.TimeoutError: + continue + except ConnectionClosedError: + self._logger.error("ConnectionClosedError in WebRTC message_loop!") + break + except ConnectionClosedOK: + 
self._logger.info("ConnectionClosedOK in WebRTC message_loop") + break + if message: + self._log_banner("WebRTC Message") + if message == 'HELLO': + await self._setup_call() + self._logger.info("Registered to Pipeline Server...") + elif message == 'START_WEBRTC_STREAM': + self._start_pipeline() + elif message.startswith('ERROR'): + self._logger.warning("WebRTC Stream Error: {}".format(message)) + return 1 + else: + await self._handle_sdp(message) + self._logger.debug("WebRTC Stream exiting message_loop.") + return 0 + + def _check_plugins(self): + self._log_banner("WebRTC Plugin Check") + needed = ["opus", "vpx", "nice", "webrtc", "dtls", "srtp", "rtp", + "rtpmanager"] + missing = list(filter(lambda p: Gst.Registry.get().find_plugin(p) is None, needed)) + if missing: + self._logger.info("Missing gstreamer plugins: {}".format(missing)) + return False + self._logger.debug("Successfully found required gstreamer plugins") + return True + + def _listen_for_peer_connections(self): + self._logger.debug("Listening for peer connections") + event_loop = asyncio.new_event_loop() + res = None + while not self._stopped: + try: + event_loop.run_until_complete(self.connect()) + res = event_loop.run_until_complete(self.message_loop()) + except gaierror: + self._logger.error("Cannot reach WebRTC Signaling Server {}! Is it running?".format(self._server)) + except ConnectionClosedError: + self._logger.error("ConnectionClosedError in WebRTC listen_for_peer_connections!") + except ConnectionClosedOK: + self._logger.info("ConnectionClosedOK in WebRTC listen_for_peer_connections") + except (KeyboardInterrupt, SystemExit): + pass + finally: + if self._conn: + self._logger.debug("closing connection to re-init.") + event_loop.run_until_complete(self._conn.close()) + self._conn = None + self._retries_attempted += 1 + if self._retries_attempted <= self._retry_limit: + self._logger.warning("Attempt {}/{} to restart listener begins in {} seconds.".format( + self._retries_attempted, self._retry_limit, self._retry_delay)) + time.sleep(self._retry_delay) + else: + break + if res: + self._logger.info("WebRTC Result: {}".format(res)) + event_loop.close() + + def _thread_launcher(self): + try: + self._listen_for_peer_connections() + except (KeyboardInterrupt, SystemExit): + pass + self._logger.info("Exiting WebRTC thread launcher") + + def start(self): + self._logger.info("Starting WebRTC Stream using Signaling Server at: {}".format(self._server)) + if not self._check_plugins(): + self._logger.error("WebRTC Stream error - dependent plugins are missing!") + self._thread = Thread(target=self._thread_launcher) + self._thread.start() + + def _reset(self): + if self._webrtc_pipeline: + self._webrtc_pipeline.stop() + self._webrtc_pipeline = None + self._pipe = None + self._webrtc = None + + def stop(self): + self._reset() + self._stopped = True + self._logger.info("Stopping GStreamer WebRTC Stream for peer_id {}".format(self._peer_id)) + if self._thread: + self._thread.join() + self._thread = None + self._logger.debug("GStreamer WebRTC Stream completed pipeline for peer_id {}".format(self._peer_id)) diff --git a/third-party-programs.txt b/third-party-programs.txt index 904389e..c6d9236 100644 --- a/third-party-programs.txt +++ b/third-party-programs.txt @@ -1206,7 +1206,7 @@ necessary. Here is a sample; alter the names: That's all there is to it! ------------------------------------------------------------- -12. JSON-C +13. 
JSON-C Copyright (c) 2009-2012 Eric Haszlakiewicz Permission is hereby granted, free of charge, to any person obtaining a @@ -1254,6 +1254,11 @@ SOFTWARE. * Docker Base Images + Intel(R) Deep Learning Streamer Pipeline Framework Ubuntu 20.04 Runtime Image + https://hub.docker.com/r/intel/dlstreamer + https://github.com/dlstreamer/dlstreamer/tree/master/docker/source/ubuntu20 + Copyright (c) 2018-2022 Intel Corporation + OpenVINO(TM) Runtime Base Image https://hub.docker.com/r/openvino/ubuntu18_runtime https://hub.docker.com/r/openvino/ubuntu20_runtime diff --git a/tools/model_downloader/README.md b/tools/model_downloader/README.md index 3803229..a956f9a 100644 --- a/tools/model_downloader/README.md +++ b/tools/model_downloader/README.md @@ -4,8 +4,8 @@ The model downloader downloads and prepares models from the OpenVINO Toolkit [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo) for use with -Intel(R) DL Streamer Pipeline Server. It can be run as a standalone tool or as -part of the Intel(R) DL Streamer Pipeline Server build process. For more +Intel(R) Deep Learning Streamer (Intel(R) DL Streamer) Pipeline Server. It can be run as a standalone tool or as +part of the Pipeline Server build process. For more information on model file formats and the directory structure used by Intel(R) DL Streamer Pipeline Server see [defining_pipelines](/docs/defining_pipelines.md#deep-learning-models). @@ -37,27 +37,31 @@ more optional properties (alias, version, precision, local model-proc). If an optional property is not specified the downloader will use [default values](#default-values). +> Note: Models can have a separate file that contains labels. + Example: ```yaml - model: yolo-v3-tf alias: object_detection version: 2 - precision: [FP32,INT8] + precision: [FP32] model-proc: object_detection.json + labels: coco.txt ``` +The `model-proc` and `labels` entries above can be set if a local override is desired. +In that case, the corresponding files are expected to be in the same directory as the models.list.yml specified. + ## Default Values * alias = model_name * version = 1 * precision = all available precisions -* model-proc = model_name.json - -If a local model-proc is not specified, the tool will download the -corresponding model-proc file from the Intel(R) DL Streamer repository (if one -exists). +* model-proc = .json +* labels = .txt +If a local model-proc and/or labels file(s) are not specified, the tool will use the model-proc and/or labels file that is part of the Intel(R) DL Streamer developer image. # Downloading Models diff --git a/tools/model_downloader/__main__.py b/tools/model_downloader/__main__.py index 4a5e0cb..fd940e3 100644 --- a/tools/model_downloader/__main__.py +++ b/tools/model_downloader/__main__.py @@ -27,21 +27,17 @@ def main(): args = parse_args() _print_args(args) - if ( - os.path.isfile(downloader.MODEL_DOWNLOADER_PATH) - and os.path.isfile(downloader.MODEL_CONVERTER_PATH) - and os.path.isfile(downloader.MODEL_OPTIMIZER_PATH) - ): - downloader.download_and_convert_models( - args.model_list, args.output_dir, args.force, args.dl_streamer_version - ) - else: + base_type = downloader.get_base_image_type() + if not base_type: print( "Intel(R) Distribution of OpenVINO(TM) Toolkit tools not " "found. Please check if all dependent tools are installed and try again." 
) sys.exit(1) + downloader.download_and_convert_models( + base_type, args.model_list, args.output_dir, args.force, args.dl_streamer_version + ) if __name__ == "__main__": main() diff --git a/tools/model_downloader/arguments.py b/tools/model_downloader/arguments.py index 9c3643a..4eb0b8a 100644 --- a/tools/model_downloader/arguments.py +++ b/tools/model_downloader/arguments.py @@ -13,9 +13,10 @@ def parse_args(args=None): help='path where to save models') parser.add_argument('--model-list', default="models_list/models.list.yml", help='input file with model names and properties') - parser.add_argument('--model-proc-version', default="v1.3", + parser.add_argument('--model-proc-version', required=False, default="v1.3", dest="dl_streamer_version", - help='Intel(R) DL Streamer Framework version for model proc files') + help='(Applies only to OpenVINO(TM) images) \ + Intel(R) DL Streamer Framework version for model proc files') parser.add_argument("--force", required=False, dest="force", action="store_true", default=False, help='force download and conversion of existing models') diff --git a/tools/model_downloader/downloader.py b/tools/model_downloader/downloader.py index 970ceb8..ddecd1a 100644 --- a/tools/model_downloader/downloader.py +++ b/tools/model_downloader/downloader.py @@ -11,65 +11,77 @@ import tempfile import shlex from glob import glob +from collections import namedtuple import requests import yaml from jsonschema import Draft7Validator, FormatChecker from mdt_schema import model_list_schema +from model_index_schema import model_index_schema + +OMZ_PATHS = { + "dlstreamer_devel": { + "MODEL_DOWNLOADER_PATH" : "/usr/local/bin/omz_downloader", + "MODEL_CONVERTER_PATH" : "/usr/local/bin/omz_converter", + "MODEL_OPTIMIZER_PATH" : "/usr/local/bin/mo", + "SAMPLES_ROOT" : "/opt/intel/dlstreamer/samples", + "MODEL_INDEX_FILE": "/opt/intel/dlstreamer/samples/model_index.yaml" + }, + "openvino_data_dev": { + "MODEL_DOWNLOADER_PATH": "/opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/downloader.py", + "MODEL_CONVERTER_PATH": "/opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/converter.py", + "MODEL_OPTIMIZER_PATH": "/opt/intel/openvino/deployment_tools/model_optimizer/mo.py", + "DLSTREAMER_VERSION_FILE" : "/opt/intel/openvino/data_processing/dl_streamer/version.txt" + } +} + +MODEL_PROC_SEARCH_PATH ="/opt/intel/openvino/data_processing/dl_streamer/samples/model_proc/**/{0}.json" +# Pointer to Intel(R) Deep Learning Streamer (Intel(R) DL Streamer) repository +DL_STREAMER_REPO_ROOT = "https://raw.githubusercontent.com/openvinotoolkit/dlstreamer_gst" + +MODEL_LIST_EXPECTED_SCHEMA = "\n- model(Required): mobilenet-ssd\n \ + alias(Optional): object_detection\n \ + version(Optional): 1\n \ + precision(Optional): [FP16,FP32]" + +MODEL_INDEX_EXPECTED_SCHEMA = '{: {model-proc (optional): ,\ + labels (optional) : }}' + +def get_base_image_type(): + valid_types = list(OMZ_PATHS.keys()) + for base_type, path_dict in OMZ_PATHS.items(): + for _, file_path in path_dict.items(): + if not os.path.exists(file_path): + valid_types.remove(base_type) + break + if not valid_types: + return None + return valid_types[0] -MODEL_OPTIMIZER_ROOT = ( - "/opt/intel/dldt" - if os.path.isdir("/opt/intel/dldt") - else "/opt/intel/openvino/deployment_tools" -) -OPEN_MODEL_ZOO_ROOT = os.path.join(MODEL_OPTIMIZER_ROOT, "open_model_zoo") -MODEL_DOWNLOADER_PATH = os.path.join( - OPEN_MODEL_ZOO_ROOT, "tools/downloader/downloader.py" -) -MODEL_CONVERTER_PATH = os.path.join( - 
OPEN_MODEL_ZOO_ROOT, "tools/downloader/converter.py" -) -MODEL_OPTIMIZER_PATH = ( - os.path.join(MODEL_OPTIMIZER_ROOT, "model-optimizer/mo.py") - if MODEL_OPTIMIZER_ROOT == "/opt/intel/dldt" - else os.path.join(MODEL_OPTIMIZER_ROOT, "model_optimizer/mo.py") -) - -MODEL_PROC_ROOT = "/opt/intel/dl_streamer/samples/model_proc" -DL_STREAMER_REPO_ROOT = ( - "https://raw.githubusercontent.com/openvinotoolkit/dlstreamer_gst" -) -DLSTREAMER_VERSION_FILE = "/opt/intel/openvino/data_processing/dl_streamer/version.txt" -MODEL_PROC_SEARCH_PATH = "/opt/intel/openvino/data_processing/dl_streamer/samples/model_proc/**/{0}.json" - -def _validate_schema(model_list): +def _validate_schema(data, schema): try: - validator = Draft7Validator(model_list_schema, format_checker=FormatChecker()) - validator.validate(model_list) + validator = Draft7Validator(schema, format_checker=FormatChecker()) + validator.validate(data) except Exception as error: print("Yaml input schema validation error.") print(error) sys.exit(1) -def _load_model_list(model_list_path): - model_list = None +def _load_yaml_data(file_path, schema, expected_schema): + data = None try: - with open(model_list_path) as model_list_file: - model_list = yaml.safe_load(model_list_file) + with open(file_path) as file: + data = yaml.safe_load(file) except Exception: print("Exception while loading yaml file. File could be malformed.") print("Please make sure model list file is in correct yml file format.") - print("Expected Schema: ") - print("- model(Required): mobilenet-ssd") - print(" alias(Optional): object_detection") - print(" version(Optional): 1") - print(" precision(Optional): [FP16,FP32]") + print("Expected Schema: {}".format(expected_schema)) - if model_list is None: + if data is None: sys.exit(1) else: - _validate_schema(model_list) - return model_list + _validate_schema(data, schema) + return data def _find_downloaded_model(model_name, download_dir): @@ -78,13 +90,15 @@ def _find_downloaded_model(model_name, download_dir): return os.path.abspath(os.path.join(root, model_name)) return None -def _copy_datadev_model_proc(target_dir, model_name, dl_streamer_version): +def _copy_datadev_model_proc(target_dir, model_name, dl_streamer_version, version_file): result = None - with open(DLSTREAMER_VERSION_FILE) as local_version: + + with open(version_file) as local_version: version = "v" + local_version.readline() if version.startswith(dl_streamer_version): model_proc = None - search_path = MODEL_PROC_SEARCH_PATH.format(model_name) + search_path = MODEL_PROC_SEARCH_PATH.format( + model_name) for model_proc in glob(search_path, recursive=True): break if model_proc: @@ -93,74 +107,65 @@ def _copy_datadev_model_proc(target_dir, model_name, dl_streamer_version): print("Copied model_proc to: {}".format(result)) except PermissionError: print("Permission denied copying model_proc") - except: - print("Unexpected error:", sys.exc_info()) + except Exception as error: + print("Unexpected error: {}".format(error)) return result def _download_model_proc(target_dir, model_name, dl_streamer_version): - model_proc = None - if os.path.isdir(MODEL_PROC_ROOT): - for root, _, files in os.walk(MODEL_PROC_ROOT): - for filepath in files: - if os.path.splitext(filepath)[0] == model_name: - model_proc = os.path.join(root, filepath) - if model_proc: - shutil.move(model_proc, os.path.join(target_dir, "{}.json".format(model_name))) - else: - url = "{0}/{1}/samples/model_proc/{2}.json".format( - DL_STREAMER_REPO_ROOT, dl_streamer_version, model_name - ) - response = 
requests.get(url) - with tempfile.TemporaryDirectory() as temp_dir: - if response.status_code == 200: - with open( - "{0}/{1}.json".format(temp_dir, model_name), "wb" - ) as out_file: - out_file.write(response.content) - print( - "Downloaded {0} model-proc file from gst-video-analytics repo".format( - model_name - ) - ) - model_proc = os.path.abspath( - "{0}/{1}.json".format(temp_dir, model_name) + url = "{0}/{1}/samples/model_proc/{2}.json".format(DL_STREAMER_REPO_ROOT, dl_streamer_version, model_name + ) + response = requests.get(url) + with tempfile.TemporaryDirectory() as temp_dir: + if response.status_code == 200: + with open( + "{0}/{1}.json".format(temp_dir, model_name), "wb" + ) as out_file: + out_file.write(response.content) + print( + "Downloaded {0} model-proc file from Intel(R) DL Streamer repository".format( + model_name ) - shutil.move(model_proc, os.path.join(target_dir, "{}.json".format(model_name))) - else: - print("WARNING: model-proc not found in gst-video-analytics repo!") + ) + model_proc = os.path.abspath( + "{0}/{1}.json".format(temp_dir, model_name) + ) + shutil.move(model_proc, os.path.join(target_dir, "{}.json".format(model_name))) + else: + print("WARNING: model-proc not found in Intel(R) DL Streamer repository!") -def _create_convert_command(model_name, output_dir, precisions): +def _create_convert_command(base_type, model_name, output_dir, precisions): if precisions: cmd = "python3 {0} -d {3} --name {1} --precisions {2} -o {3} --mo {4}" return shlex.split( cmd.format( - MODEL_CONVERTER_PATH, + OMZ_PATHS[base_type]["MODEL_CONVERTER_PATH"], model_name, ",".join(map(str, precisions)), output_dir, - MODEL_OPTIMIZER_PATH, + OMZ_PATHS[base_type]["MODEL_OPTIMIZER_PATH"], ) ) cmd = "python3 {0} -d {2} --name {1} -o {2} --mo {3}" return shlex.split( - cmd.format(MODEL_CONVERTER_PATH, model_name, output_dir, MODEL_OPTIMIZER_PATH) + cmd.format(OMZ_PATHS[base_type]["MODEL_CONVERTER_PATH"], model_name, output_dir, + OMZ_PATHS[base_type]["MODEL_OPTIMIZER_PATH"]) ) -def _create_download_command(model_name, output_dir, precisions): +def _create_download_command(base_type, model_name, output_dir, precisions): if precisions: cmd = "python3 {0} --name {1} --precisions {2} -o {3}" return shlex.split( cmd.format( - MODEL_DOWNLOADER_PATH, + OMZ_PATHS[base_type]["MODEL_DOWNLOADER_PATH"], model_name, ",".join(map(str, precisions)), output_dir, ) ) cmd = "python3 {0} --name {1} -o {2}" - return shlex.split(cmd.format(MODEL_DOWNLOADER_PATH, model_name, output_dir)) + return shlex.split(cmd.format(OMZ_PATHS[base_type]["MODEL_DOWNLOADER_PATH"], model_name, output_dir)) def _run_command(command, model_name, step): @@ -172,13 +177,13 @@ def _run_command(command, model_name, step): sys.exit(1) -def _download_model(model_name, output_dir, precisions): - command = _create_download_command(model_name, output_dir, precisions) +def _download_model(base_type, model_name, output_dir, precisions): + command = _create_download_command(base_type, model_name, output_dir, precisions) _run_command(command, model_name, "downloading") -def _convert_model(model_name, output_dir, precisions): - command = _create_convert_command(model_name, output_dir, precisions) +def _convert_model(base_type, model_name, output_dir, precisions): + command = _create_convert_command(base_type, model_name, output_dir, precisions) _run_command(command, model_name, "converting") @@ -191,35 +196,45 @@ def _get_model_properties(model, model_list_path, target_root): result.setdefault("version", 1) result.setdefault("precision", 
None) result.setdefault("model-proc", None) + result.setdefault("labels", None) if result["model-proc"]: result["model-proc"] = os.path.abspath( os.path.join(os.path.dirname(model_list_path), result["model-proc"]) ) + if result["labels"]: + result["labels"] = os.path.abspath( + os.path.join(os.path.dirname(model_list_path), + result["labels"]) + ) result["target-dir"] = os.path.join(target_root, result["alias"], str(result["version"])) return result +def _find_model_property_value(model_name, model_key, model_proc_dict, samples_root): + model_value = None + model_data = model_proc_dict.get(model_name, {}) + if model_key in model_data: + model_value = os.path.join(samples_root, model_data[model_key]) + print("{} : {} for model {} per model_index.yaml".format( + model_key, model_value, model_name)) + if model_value is None: + print("No {} for model {} in the image per model_index.yaml".format( + model_key, model_name)) -def _download_and_convert_model( - target_root, model, force, model_list_path, dl_streamer_version -): - model_properties = _get_model_properties(model, model_list_path, target_root) + return model_value + +def _download_and_convert_model(base_type, model_properties): model_name = model_properties["model"] precision = model_properties["precision"] - model_proc = model_properties["model-proc"] target_dir = model_properties["target-dir"] - if (not force) and (os.path.isdir(target_dir)): - print("Model Directory {0} Exists - Skipping".format(target_dir)) - return - if os.path.isdir(target_dir): shutil.rmtree(target_dir) with tempfile.TemporaryDirectory() as output_dir: - _download_model(model_name, output_dir, precision) - _convert_model(model_name, output_dir, precision) + _download_model(base_type, model_name, output_dir, precision) + _convert_model(base_type, model_name, output_dir, precision) downloaded_model_path = _find_downloaded_model(model_name, output_dir) for path in os.listdir(downloaded_model_path): source = os.path.join(downloaded_model_path, path) @@ -227,25 +242,55 @@ def _download_and_convert_model( if os.path.isdir(source): shutil.move(source, target) - if model_proc: - if os.path.isfile(model_proc): - shutil.copy(model_proc, target_dir) + +def _handle_model_files(base_type, model_properties, model_proc_dict, dl_streamer_version): + model_name = model_properties["model"] + model_proc = model_properties["model-proc"] + labels = model_properties["labels"] + target_dir = model_properties["target-dir"] + + if base_type == "openvino_data_dev": + version_file = OMZ_PATHS["openvino_data_dev"]["DLSTREAMER_VERSION_FILE"] + if not model_proc and _copy_datadev_model_proc( + target_dir, model_name, dl_streamer_version, version_file) is None: + _download_model_proc(target_dir, model_name, dl_streamer_version) + elif base_type == "dlstreamer_devel": + samples_root = OMZ_PATHS["dlstreamer_devel"]["SAMPLES_ROOT"] + if not model_proc: + model_proc = _find_model_property_value(model_name, "model-proc", model_proc_dict, samples_root) + if not labels: + labels = _find_model_property_value(model_name, "labels", model_proc_dict, samples_root) + + collateral = namedtuple('Collateral', ['name', 'value']) + model_proc_tuple = collateral("model-proc", model_proc) + labels_tuple = collateral("labels", labels) + for item in [model_proc_tuple, labels_tuple]: + if item.value: + if os.path.isfile(item.value): + shutil.copy(item.value, target_dir) else: - print("Error, model-proc {} specified but not found", model_proc) + print("Error, {} {} specified but not found".format(item.name, 
item.value)) sys.exit(1) - else: - if _copy_datadev_model_proc(target_dir, model_name, dl_streamer_version) is None: - _download_model_proc(target_dir, model_name, dl_streamer_version) - def download_and_convert_models( - model_list_path, output_dir, force, dl_streamer_version + base_type, model_list_path, output_dir, force, dl_streamer_version ): - model_list = _load_model_list(model_list_path) + model_list = _load_yaml_data(model_list_path, model_list_schema, MODEL_LIST_EXPECTED_SCHEMA) + model_proc_dict = None + if base_type == "dlstreamer_devel": + model_index_file = OMZ_PATHS["dlstreamer_devel"]["MODEL_INDEX_FILE"] + model_proc_dict = _load_yaml_data( + model_index_file, model_index_schema, MODEL_INDEX_EXPECTED_SCHEMA + ) target_root = os.path.join(output_dir, "models") os.makedirs(target_root, exist_ok=True) for model in model_list: - _download_and_convert_model( - target_root, model, force, model_list_path, dl_streamer_version - ) + model_properties = _get_model_properties( + model, model_list_path, target_root) + target_dir = model_properties["target-dir"] + if (not force) and (os.path.isdir(target_dir)): + print("Model Directory {0} Exists - Skipping".format(target_dir)) + continue + _download_and_convert_model(base_type, model_properties) + _handle_model_files(base_type, model_properties, model_proc_dict, dl_streamer_version) diff --git a/tools/model_downloader/mdt_schema.py b/tools/model_downloader/mdt_schema.py index 65b454e..0bd3772 100644 --- a/tools/model_downloader/mdt_schema.py +++ b/tools/model_downloader/mdt_schema.py @@ -20,7 +20,8 @@ "FP16-INT8", "FP32-INT8", "FP32-INT1", "FP16-INT1", "INT1"]} }, - "model-proc" : {"type": "string"} + "model-proc" : {"type": "string"}, + "labels": {"type": "string"}, }, "required" : ["model"], "additionalProperties" : False diff --git a/tools/model_downloader/model_downloader.sh b/tools/model_downloader/model_downloader.sh index d16ceee..570f916 100755 --- a/tools/model_downloader/model_downloader.sh +++ b/tools/model_downloader/model_downloader.sh @@ -11,8 +11,8 @@ SOURCE_DIR=$(dirname "$TOOLS_DIR") OUTPUT_DIR=$(realpath $( pwd )) FORCE= RUN_PREFIX= -OPEN_MODEL_ZOO_TOOLS_IMAGE=${OPEN_MODEL_ZOO_TOOLS_IMAGE:-"${CACHE_PREFIX}openvino/ubuntu20_data_dev"} -OPEN_MODEL_ZOO_VERSION=${OPEN_MODEL_ZOO_VERSION:-"2021.4.2"} +OPEN_MODEL_ZOO_TOOLS_IMAGE=${OPEN_MODEL_ZOO_TOOLS_IMAGE:-"${CACHE_PREFIX}intel/dlstreamer"} +OPEN_MODEL_ZOO_VERSION=${OPEN_MODEL_ZOO_VERSION:-"2022.1.0-ubuntu20-devel"} NAME="dlstreamer-pipeline-server-model-downloader" DL_STREAMER_VERSION= @@ -100,26 +100,20 @@ done YML_DIR=$(dirname "${MODEL_LIST}") YML_FILE_NAME=$(basename "${MODEL_LIST}") VOLUME_MOUNT+="-v $TOOLS_DIR:/home/pipeline-server/tools -v $YML_DIR:/models_yml -v $OUTPUT_DIR:/output" +DL_STREAMER_VERSION="" -case $OPEN_MODEL_ZOO_VERSION in - 2020.4) - DL_STREAMER_VERSION="v1.1.0" - ;; - 2021.1) - DL_STREAMER_VERSION="v1.2.1" - ;; - 2021.2) - DL_STREAMER_VERSION="v1.3" - ;; - 2021.3*) - DL_STREAMER_VERSION="v1.4.1" - ;; - 2021.4*) - DL_STREAMER_VERSION="v1.5" - ;; - *) - error 'ERROR: Unknown Open Model Zoo version: ' $OPEN_MODEL_ZOO_VERSION -esac +if [[ "$OPEN_MODEL_ZOO_TOOLS_IMAGE" == *"openvino/"* ]]; then + case $OPEN_MODEL_ZOO_VERSION in + 2021.2) + DL_STREAMER_VERSION="v1.3" + ;; + 2021.4*) + DL_STREAMER_VERSION="v1.5" + ;; + *) + error 'ERROR: Unsupported Open Model Zoo version: ' $OPEN_MODEL_ZOO_VERSION + esac +fi if [ ! -z "$TEAMCITY_VERSION" ]; then NON_INTERACTIVE=--non-interactive @@ -130,4 +124,8 @@ if [ ! 
-d "$OUTPUT_DIR/models" ]; then echo "Created output models folder as UID: $UID" fi -$SOURCE_DIR/docker/run.sh --user "$UID" -e HOME=/tmp $NON_INTERACTIVE --name $NAME --image $OPEN_MODEL_ZOO_TOOLS_IMAGE:$OPEN_MODEL_ZOO_VERSION $VOLUME_MOUNT $DRY_RUN --entrypoint /bin/bash --entrypoint-args "\"-i\" \"-c\" \"pip3 install -r /home/pipeline-server/tools/model_downloader/requirements.txt ; python3 -u /home/pipeline-server/tools/model_downloader --model-proc-version $DL_STREAMER_VERSION --model-list /models_yml/$YML_FILE_NAME --output /output $FORCE\"" +if [ ! -z "$DL_STREAMER_VERSION" ]; then + MODEL_PROC_VERSION="--model-proc-version $DL_STREAMER_VERSION" +fi + +$SOURCE_DIR/docker/run.sh --user "$UID" -e PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python -e HOME=/tmp $NON_INTERACTIVE --name $NAME --image $OPEN_MODEL_ZOO_TOOLS_IMAGE:$OPEN_MODEL_ZOO_VERSION $VOLUME_MOUNT $DRY_RUN --entrypoint /bin/bash --entrypoint-args "\"-i\" \"-c\" \"pip3 install -r /home/pipeline-server/tools/model_downloader/requirements.txt ; python3 -u /home/pipeline-server/tools/model_downloader $MODEL_PROC_VERSION --model-list /models_yml/$YML_FILE_NAME --output /output $FORCE\"" diff --git a/tools/model_downloader/model_index_schema.py b/tools/model_downloader/model_index_schema.py new file mode 100644 index 0000000..adbb9e2 --- /dev/null +++ b/tools/model_downloader/model_index_schema.py @@ -0,0 +1,20 @@ +''' +* Copyright (C) 2022 Intel Corporation. +* +* SPDX-License-Identifier: BSD-3-Clause +''' + +model_index_schema = { + "type": "object", + "additionalProperties": { + "type": "object", + "properties": { + "labels": { + "type": ["string", "null"] + }, + "model-proc": { + "type": ["string", "null"] + } + } + } + }