
Releases: UU-tracktech/tracktech

Release v1.0.0

29 Jun 10:14
02a628a


This release note contains a brief overview of each component and its features. Underneath, the currently known bugs can be found.

Features

Processor

The Camera Processor handles the core processing, running detection, tracking, and re-identification algorithms on an image or video feed. Algorithms can be swapped by implementing a new subclass of the relevant superclass. Currently implemented are YOLOv5 and YOLOR for detection, SORT for tracking, and TorchReid and FastReid for re-identification.

Multiple input methods

The processor processes OpenCV frames and can handle any source that can be turned into a sequence of frames. The supported sources are implemented via a capture interface; the available captures are HLS, video stream, webcam, and image folder. HLS is used to receive a video feed over the internet, and its capture performs extra work to attach proper timestamps to the feed.
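The capture idea can be sketched as follows. This is a minimal, hypothetical version of such an interface (the class and method names are illustrative, not the repository's actual API); a real capture would return decoded OpenCV frames, while this sketch returns raw file bytes to stay dependency-free.

```python
import os
from abc import ABC, abstractmethod


class ICapture(ABC):
    """Minimal capture interface: anything that yields a sequence of frames."""

    @abstractmethod
    def opened(self) -> bool:
        """Whether more frames are available."""

    @abstractmethod
    def get_next_frame(self):
        """Return the next frame from the source."""


class ImageFolderCapture(ICapture):
    """Capture that reads frames from a folder of image files, sorted by name."""

    def __init__(self, folder):
        self.paths = sorted(
            os.path.join(folder, name)
            for name in os.listdir(folder)
            if name.lower().endswith((".jpg", ".png"))
        )
        self.index = 0

    def opened(self):
        return self.index < len(self.paths)

    def get_next_frame(self):
        # A real implementation would return cv2.imread(path); raw bytes
        # keep this sketch free of third-party dependencies.
        with open(self.paths[self.index], "rb") as file:
            data = file.read()
        self.index += 1
        return data
```

An HLS or webcam capture would implement the same two methods, which is what lets the rest of the pipeline stay source-agnostic.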

Plug and play for main pipeline components

The main pipeline contains a detection, tracking, and re-identification phase. All these phases are implemented and adhere to the interface belonging to the phase. Implementing another algorithm that conforms to this interface would allow for the algorithm to be loaded in via the configuration. This way, many different algorithms can be defined and swapped when needed.
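The plug-and-play mechanism can be illustrated with a small sketch. The interface, registry, and configuration format below are hypothetical stand-ins for the project's actual superclasses and config files; the point is that a one-line configuration change selects a different implementation.

```python
from abc import ABC, abstractmethod


class IDetector(ABC):
    """Interface every detection-phase implementation must follow."""

    @abstractmethod
    def detect(self, frame):
        """Return a list of (x, y, w, h, class_name) bounding boxes."""


class StubDetector(IDetector):
    """Placeholder standing in for e.g. a YOLOv5 or YOLOR wrapper."""

    def detect(self, frame):
        return [(0, 0, 10, 10, "person")]


# Registry mapping the name found in the configuration to a class, so
# swapping algorithms only requires changing the configuration value.
DETECTORS = {"stub": StubDetector}


def load_detector(config):
    """Instantiate the detector named in the configuration."""
    return DETECTORS[config["detector"]]()
```

The tracking and re-identification phases would follow the same pattern with their own interfaces and registries.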

Scheduler

Create a node structure representing a graph, and the scheduler will handle the scheduling of all nodes in each graph iteration. This avoids rewriting components such as the pipeline whenever its shape changes significantly. These graphs are called plans, and multiple self-contained plans can be created and swapped in place.
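The core of such a scheduler can be sketched in a few lines using the standard library. The plan format below (node name mapped to a callable and its dependencies) is a simplified assumption, not the repository's actual plan structure.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+


def run_plan(plan, iterations=1):
    """Run every node of a plan once per iteration, respecting dependencies.

    `plan` maps a node name to a (callable, dependency_names) pair,
    a hypothetical, simplified plan format.
    """
    # Resolve an execution order in which every node runs after the
    # nodes it depends on.
    graph = {name: deps for name, (_, deps) in plan.items()}
    order = list(TopologicalSorter(graph).static_order())

    results = []
    for _ in range(iterations):
        for name in order:
            func, _ = plan[name]
            results.append(func())
    return results
```

A detection-tracking-re-identification pipeline is then just one possible plan; a differently shaped graph needs no scheduler changes.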

Multiple output methods: deploy, opencv, and tornado

The processor has three output methods: deploy, opencv, and tornado. Deploy sends information about the processed frame to the orchestrator, which forwards it to other processors or the interface. OpenCV displays the processed frames in an OpenCV window. Tornado displays the same OpenCV output, but on a dedicated webpage. Using the tornado mode for anything other than development is discouraged, since it takes a heavy toll on performance.

Training of algorithms

Both the detection and the re-identification algorithms can be trained on custom datasets. Instructions on how to train these individual components can be found here. The tracking algorithm is not neural-network-based and can therefore not be trained.

Accuracy measurement and metrics

Several metrics were implemented to determine the accuracy of the detection, the tracking, and the re-identification. The detection uses the Mean Average Precision metric, the tracking uses the MOT metrics, and the re-identification uses the Mean Average Precision and Rank-1 metrics. An extensive explanation of the accuracy metrics used can be found here.
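As an illustration of the simplest of these metrics, Rank-1 for re-identification measures how often the closest gallery feature belongs to the same identity as the query. The function below is a minimal sketch assuming plain Euclidean distance and an illustrative (identity, feature_vector) data format, not the project's actual evaluation code.

```python
import math


def rank1(queries, gallery):
    """Fraction of queries whose nearest gallery feature has the same identity.

    Both arguments are lists of (identity, feature_vector) pairs,
    an illustrative format for this sketch.
    """

    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    hits = 0
    for query_id, query_vec in queries:
        # Identity of the closest gallery entry by Euclidean distance.
        best_id = min(gallery, key=lambda entry: distance(query_vec, entry[1]))[0]
        hits += best_id == query_id
    return hits / len(queries)
```

Mean Average Precision extends this idea by scoring the full ranking of the gallery rather than only the top match.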

Interface

A tornado-based webpage interface is used to view the video feeds as well as the detected bounding boxes. It features automated syncing for different camera feeds and their bounding boxes. It has options to select classification types to detect and swap camera focus. The user can click on a bounding box to start tracking an object. The interface features a timeline that keeps track of when and for how long a subject has appeared on each camera for a clear overview.

Automated bounding box syncing

When the interface receives bounding boxes from the orchestrator and a video stream from the forwarder, it tries to match each box to the frame it belongs to. This is done internally using frame IDs, which saves the user from having to set the box/video delay manually to synchronize them.
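The matching idea can be sketched as a small buffer keyed by frame ID. This is a simplified illustration of the concept, not the interface's actual (TypeScript) code.

```python
class BoxSynchronizer:
    """Buffers bounding boxes per frame ID until the matching frame arrives,
    so boxes are always drawn on the frame they belong to."""

    def __init__(self):
        self.pending = {}  # frame_id -> list of boxes

    def on_boxes(self, frame_id, boxes):
        """Called when boxes arrive from the orchestrator."""
        self.pending[frame_id] = boxes

    def on_frame(self, frame_id):
        """Called when a video frame is about to be drawn.

        Returns the boxes for this frame, or None if they have not
        arrived yet (the caller may then keep the previous boxes).
        """
        return self.pending.pop(frame_id, None)
```

Because boxes and frames travel over different connections, either side may arrive first; buffering by frame ID absorbs that jitter.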

Timelines

Timelines is a page where the history of all tracked objects can be found. This is useful for seeing where an object was during the time it was tracked. While an object is still being tracked, its cutout is visible next to the object ID.

Forwarder

Adaptive bitrate

The forwarder can convert a single incoming stream (like RTMP or RTSP) to multiple bitrate output streams. This way, the stream bitrate can be adapted according to available bandwidth.
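One common way to produce such a bitrate ladder is a single ffmpeg invocation with one HLS variant per bitrate. The helper below only builds the command as a list of arguments (it does not run ffmpeg); the flags shown are standard ffmpeg HLS options, but the forwarder's real invocation likely differs.

```python
def hls_ladder_command(source, bitrates):
    """Build an ffmpeg command converting one input stream into one HLS
    variant stream per requested bitrate (sketch, assumed flags)."""
    cmd = ["ffmpeg", "-i", source]

    # Map the input video once per output variant.
    for _ in bitrates:
        cmd += ["-map", "0:v"]

    # Give each variant its own target bitrate.
    for index, bitrate in enumerate(bitrates):
        cmd += [f"-b:v:{index}", bitrate]

    cmd += [
        "-f", "hls",
        # Declare the variants so ffmpeg writes one playlist per rendition.
        "-var_stream_map", " ".join(f"v:{i}" for i in range(len(bitrates))),
        "stream_%v.m3u8",
    ]
    return cmd  # pass to subprocess.run(cmd) to actually start the conversion
```

A player can then switch between the resulting playlists as the available bandwidth changes.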

Other

Security

OAuth2 is used to make sure only authorized people can access the services they should be able to access. Authentication is optional and can be disabled when developing or testing.
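The optional-authentication behaviour can be sketched as a decorator that only enforces a Bearer token when an environment flag is set. The names, the `AUTH_ENABLED` flag, and the trivial token check below are all illustrative assumptions; a real deployment would verify the OAuth2 access token (signature, expiry, audience) against the identity provider.

```python
import os
from functools import wraps


def validate_token(token):
    """Placeholder check; real code would validate the OAuth2 token
    against the identity provider. Illustrative only."""
    return token == "valid-token"


def require_auth(handler):
    """Enforce a Bearer token only when AUTH_ENABLED=true, mirroring
    the optional authentication described above (hypothetical names)."""

    @wraps(handler)
    def wrapper(headers, *args, **kwargs):
        if os.environ.get("AUTH_ENABLED") == "true":
            auth = headers.get("Authorization", "")
            prefix = "Bearer "
            if not (auth.startswith(prefix) and validate_token(auth[len(prefix):])):
                return 401  # unauthorized
        return handler(headers, *args, **kwargs)

    return wrapper
```

With the flag unset, requests pass through unchecked, which matches the development and testing use case.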

Docker Images

Each component contains a Dockerfile used to build its image. These images are publicly available on Dockerhub, which allows for easy downloading and deployment.

Known bugs

Syncing

The synchronization of the bounding boxes and the video stream on the interface sometimes fails, causing the bounding boxes to be offset from their expected locations. This can sometimes be fixed by pausing the video for a few seconds, but not always.

Authentication between processor and forwarder

The OpenCV library used to pull the video from the forwarder does not allow adding headers to its requests, which means authentication needs to be disabled for local requests. Luckily, most orchestration tools (like Docker Swarm) allow selectively opening ports to the outside. We allow unauthenticated forwarder access over port 80 on HTTP (as authentication should not be done over an unencrypted connection), which can be used by the processors.

Processor does not properly handle memory paging on some computers

This issue only occurred on one computer, which had too little memory to handle the processor; the team could not reproduce the bug on other memory-constrained computers. On that computer, the paging file keeps growing until there is no more disk space left, eventually resulting in a processor crash. The processor's memory profile does not grow over time, so a system with enough memory to run for 10 minutes should be able to run for 24 hours or longer. The only memory consumption that increases over time is the feature maps of tracked objects, but these vectors take up little space, and it is generally expected that there are not that many tracked objects.