add "ignore_zone" param for monitors #37

Open · hqhoang opened this issue Jan 7, 2022 · 10 comments

hqhoang commented Jan 7, 2022

Scenario: a car parks on the driveway most of the time. With tree shadows, clouds, snow, rain, etc. triggering the camera, it's easy to get the car detected and alerted on over and over.

While the car_past_det_max_diff_area param works for cars that park short-term, it doesn't work well for long-term parked cars. Sometimes the detected box is far to the left or far to the right of the actual car, especially when it snows. If the detection box jumps from far left to far right, the diff exceeds the threshold and triggers an alert.
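
For context, the failure mode can be illustrated with a rough area-diff sketch in the spirit of car_past_det_max_diff_area (illustrative only, not the actual zmeventnotification code; shapely is assumed):

from shapely.geometry import box

# Illustrative only: when the detected box jumps from far left to far right of
# the parked car, the non-overlapping (symmetric difference) area is large,
# so an area-diff threshold is exceeded even though nothing really changed.
def past_diff_ok(old_box, new_box, max_diff_frac=0.15):
    b1, b2 = box(*old_box), box(*new_box)  # boxes as (x1, y1, x2, y2)
    return b1.symmetric_difference(b2).area / b1.area <= max_diff_frac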

What I'm doing for the above scenario is to tap into the websocket notification. In my script that handles events from the websocket, I have a polygon defined as a "parking zone" in which the car can be anywhere. The detected box is then compared against this parking zone:

if detected_label == 'car' and detected_box.intersection(parking_zone).area / detected_box.area > 0.85:
    return  # suppress the alert

The above logic works well: the detected box can jump around the actual car without triggering an alert. Sometimes YOLOv4 even detects a fake car in front of the real car, but it's small and still fits inside the parking zone, so the check avoids that false positive too (no real car is that small). While it ignores cars detected in that zone, it doesn't ignore other objects, so a person walking into the zone still gets detected and alerted.
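
A self-contained sketch of this check, assuming shapely and illustrative zone coordinates:

from shapely.geometry import Polygon, box

# Illustrative zone; use your camera's actual parking-zone coordinates.
parking_zone = Polygon([(100, 400), (900, 380), (920, 700), (80, 720)])

def should_suppress(label, x1, y1, x2, y2, threshold=0.85):
    """True when a car detection sits mostly inside the parking zone."""
    detected_box = box(x1, y1, x2, y2)
    overlap = detected_box.intersection(parking_zone).area / detected_box.area
    return label == 'car' and overlap > threshold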

Maybe add a new generic param to the config to ignore objects detected in a zone, matching the defined labels with a certain overlap percentage, e.g.:

ignore_zone=(car,truck),(polygon points), 0.85
ignore_zone=(cat,dog,squirrel),(polygon points), 0.9

baudneo commented Jan 7, 2022

I like this, but I would suggest it conform to how the current zones are configured in the per-monitor overrides. BTW, the new config file syntax is in YAML format (objectconfig.yml, zmeventnotification.yml, zm_secrets.yml, mlapiconfig.yml).
INI

[monitor-1]
parking_area_polygon_zone = polygon,points go,here
man_door_polygon_zone = polygon,points go,here
ignore_zones = [  { 'parking_area': { 'overlap': 0.85, 'labels': 'car,truck' } } ]

New 'Neo' YAML

monitors:
  1:
    # can end in _polygon_zone or _polygonzone
    parking_area_polygonzone: polygon,points go,here
    man_door_polygon_zone: polygon,points go,here
    ignore_zones:
      - parking_area:
          overlap: 0.85
          labels: car,truck
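
For illustration, here is one way such an ignore_zones structure could be evaluated at filter time (a sketch only; the function and the zones mapping are hypothetical, not the actual Neo code):

from shapely.geometry import box

def matches_ignore_zone(label, bbox, ignore_zones, zones):
    """ignore_zones: parsed YAML, e.g. [{'parking_area': {'overlap': 0.85, 'labels': 'car,truck'}}]
    zones: mapping of zone name -> shapely Polygon."""
    det = box(*bbox)  # bbox as (x1, y1, x2, y2)
    for rule in ignore_zones:
        for zone_name, opts in rule.items():
            labels = [l.strip() for l in opts['labels'].split(',')]
            if label not in labels:
                continue
            if det.intersection(zones[zone_name]).area / det.area >= opts['overlap']:
                return True  # detection falls inside an ignore zone; suppress it
    return False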

I will implement this tonight and test it as much as I can.


hqhoang commented Jan 7, 2022

Here's one example where YOLOv4 detected a false car in front of the real car. The ignore_zone can be defined so that a real car would not match (it'd be way bigger than the zone), yet the false car can match the zone and be ignored.

(attached screenshot: 13_301880_0)

Another idea is to have a separate object_min_confidence for different object types. I'm more concerned about people on my driveway than cars, and YOLOv4 detects people more accurately than cars. Thus, it'd be handy to have a 0.6 min confidence for car but a 0.4 min confidence for person, for example.
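
As a sketch, per-label thresholds could be as simple as a mapping with a fallback (hypothetical names, not an existing config option):

# Hypothetical per-label thresholds with a fallback default.
MIN_CONF = {'car': 0.6, 'person': 0.4}
DEFAULT_MIN_CONF = 0.5

def passes_confidence(label, confidence):
    return confidence >= MIN_CONF.get(label, DEFAULT_MIN_CONF)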


baudneo commented Jan 7, 2022

Another idea is to have a separate object_min_confidence for different object types. I'm more concerned about people on my driveway than cars, and YOLOv4 detects people more accurately than cars. Thus, it'd be handy to have a 0.6 min confidence for car but a 0.4 min confidence for person, for example.

I have already implemented per-label filtering in the new code, and I have also implemented a 'contained within' filter ->

01/06/22 23:15:11.956005 zm_mlapi[7127] DBG2 detect_sequence:1147 [frame: 120 [strategy:'first'] (3 of 6) - model: 'object' [strategy:'first'] (1 of 1) - sequence: 'coral::MobileNETv2-SSD TensorFlow 2.0 300x300' [strategy:'most'] (1 of 2)]
  01/06/22 23:15:11.958232 zm_mlapi[7127] DBG2 coral_edgetpu:201 [coral: model dimensions requested -> 300*300]
  01/06/22 23:15:11.981378 zm_mlapi[7127] DBG2 object:39 [coral:portalock: Waiting for 'pyzm_uid1000_TPU_lock' portalock...]
  01/06/22 23:15:11.984003 zm_mlapi[7127] DBG2 object:42 [coral:portalock: got 'pyzm_uid1000_TPU_lock']
  01/06/22 23:15:11.98621 zm_mlapi[7127] DBG1 coral_edgetpu:219 [coral: 'coral::MobileNETv2-SSD TensorFlow 2.0 300x300' input image (w*h): 1920*1080 resized by model_width/height to 300*300]
  01/06/22 23:15:12.184837 zm_mlapi[7127] DBG1 coral_edgetpu:236 [perf:coral: 'coral::MobileNETv2-SSD TensorFlow 2.0 300x300' detection took: 196.20 ms]
  01/06/22 23:15:12.187343 zm_mlapi[7127] DBG2 object:62 [coral:portalock: released 'pyzm_uid1000_TPU_lock']
  01/06/22 23:15:12.189596 zm_mlapi[7127] DBG2 coral_edgetpu:257 [coral: The image was resized before processing by the 'model width/height', scaling bounding boxes in image back up by factors of -> x=6.4 y=3.6]
  01/06/22 23:15:12.191833 zm_mlapi[7127] DBG1 coral_edgetpu:266 [coral: returning ['person'] -- [[128, 292, 1184, 594]] -- [0.6875]]
  01/06/22 23:15:12.193975 zm_mlapi[7127] DBG2 detect_sequence:1171 [detect: model: 'object' seq: 'coral::MobileNETv2-SSD TensorFlow 2.0 300x300' found 1 detection -> person]
  01/06/22 23:15:12.196234 zm_mlapi[7127] DBG1 detect_sequence:341 [DEBUG!>>> SEQUENCE OPTIONS min_conf = '0.6' -- min_conf_found = 'object_min_conf:sequence->coral::MobileNETv2-SSD TensorFlow 2.0 300x300']
  01/06/22 23:15:12.198451 zm_mlapi[7127] DBG1 detect_sequence:376 [>>> detected 'person (1/1)' confidence: 0.69]
  01/06/22 23:15:12.200796 zm_mlapi[7127] DBG1 detect_sequence:448 ['person (1/1)' minimum confidence found: (object_min_conf:sequence->coral::MobileNETv2-SSD TensorFlow 2.0 300x300) -> '0.6']
  01/06/22 23:15:12.203065 zm_mlapi[7127] DBG2 detect_sequence:496 [checking if 'person (1/1)' @ [128, 292, 1184, 594] is inside polygon/zone 'back_yard' located at [(0, 496), (1910, 0), (1910, 634), (0, 640)]]
  01/06/22 23:15:12.20535 zm_mlapi[7127] DBG1 detect_sequence:501 ['person (1/1)' INTERSECTS polygon/zone 'back_yard']
  01/06/22 23:15:12.208026 zm_mlapi[7127] DBG2 detect_sequence:506 ['person (1/1)' has 262769.07 pixels (82.40%) inside 'back_yard']
  01/06/22 23:15:12.210354 zm_mlapi[7127] DBG3 detect_sequence:557 [detection label match pattern: zone 'back_yard' has overrides->'(person)']
  01/06/22 23:15:12.212599 zm_mlapi[7127] DBG2 detect_sequence:575 [match pattern: (person)]
  01/06/22 23:15:12.215053 zm_mlapi[7127] DBG2 detect_sequence:877 [detection: 'person (1/1)' has PASSED filtering]
  01/06/22 23:15:12.217212 zm_mlapi[7127] DBG2 detect_sequence:1209 [detect:strategy: '1' filtered label: ['person'] [0.6875] ['coral'] [[128, 292, 1184, 594]]]
  01/06/22 23:15:12.219306 zm_mlapi[7127] DBG2 detect_sequence:1147 [frame: 120 [strategy:'first'] (3 of 6) - model: 'object' [strategy:'first'] (1 of 1) - sequence: 'DarkNet::v4 Pre-Trained' [strategy:'most'] (2 of 2)]
  01/06/22 23:15:12.224887 zm_mlapi[7127] DBG2 object:39 [yolo:portalock: Waiting for 'pyzm_uid1000_GPU_lock' portalock...]
  01/06/22 23:15:12.227533 zm_mlapi[7127] DBG2 object:42 [yolo:portalock: got 'pyzm_uid1000_GPU_lock']
  01/06/22 23:15:12.229788 zm_mlapi[7127] DBG1 yolo:200 [yolo: 'DarkNet::v4 Pre-Trained' (GPU) - input image 1920*1080 - resized by  model_width/height to: 416*416]
  01/06/22 23:15:12.286648 zm_mlapi[7127] DBG2 object:62 [yolo:portalock: released 'pyzm_uid1000_GPU_lock']
  01/06/22 23:15:12.701064 zm_mlapi[7127] DBG2 yolo:313 [perf:yolo:GPU: 'DarkNet::v4 Pre-Trained' detection took: 468.82 ms]
  01/06/22 23:15:12.70338 zm_mlapi[7127] DBG1 yolo:324 [yolo: no detections to return!]
  01/06/22 23:15:12.707065 zm_mlapi[7127] DBG2 detect_sequence:1171 [detect: model: 'object' seq: 'DarkNet::v4 Pre-Trained' found 0 detections -> ]
  01/06/22 23:15:12.709161 zm_mlapi[7127] DBG2 detect_sequence:1209 [detect:strategy: '0' filtered label: [] [] [] []]
  01/06/22 23:15:12.711276 zm_mlapi[7127] DBG2 detect_sequence:1469 [perf:frame: 120 took 755.32 ms]
  01/06/22 23:15:12.715429 zm_mlapi[7127] DBG2 detect_sequence:1487 [detect: breaking out of frame loop as 'frame_strategy' is 'first']
  01/06/22 23:15:12.717766 zm_mlapi[7127] DBG2 object:57 [coral:portalock: already released 'pyzm_uid1000_TPU_lock']
  01/06/22 23:15:12.720054 zm_mlapi[7127] DBG2 object:57 [yolo:portalock: already released 'pyzm_uid1000_GPU_lock']
  01/06/22 23:15:12.72217 zm_mlapi[7127] DBG1 detect_sequence:1556 [perf:detect:FINAL: 'Monitor': Back Alley - MODECT (2)->'Event': 63328 -> complete detection sequence took: 9629.52 ms]
  01/06/22 23:15:12.751218 zm_mlapi[7127] INF mlapi:808 [mlapi:detect: returning matched image and detection data -> {'labels': ['person'], 'model_names': ['coral'], 'confidences': [0.6875], 'frame_id': '120', 'type': ['object'], 'boxes': [[128, 292, 1184, 594]], 'image_dimensions': {'original': (1080, 1920), 'resized': None}, 'polygons': [{'name': 'back_yard', 'value': [(0, 496), (1910, 0), (1910, 634), (0, 640)], 'pattern': '(person)'}], 'error_boxes': [], 'image': None}]

Note the 'contained within' calculation in particular:

01/06/22 23:15:12.208026 zm_mlapi[7127] DBG2 detect_sequence:506 ['person (1/1)' has 262769.07 pixels (82.40%) inside 'back_yard']


baudneo commented Jan 7, 2022

I'm wondering if the 'contained within' filter would already be enough for your specific situation. Technically, you could specify your polygon zone 'parking_area' and then require at least 85% of the car to be within that zone for it to be a hit.

monitors:
  1:
    parking_area_polygonzone: points,points, points,points
    car_contained_area: 85%

This means that for a car to be a hit, it must be detected and at least 85% of its bounding box area must be within a polygon zone. I need to fine-tune the configs to allow per-zone filtering as well, but I think the contained-within filter would cover your use case here.


hqhoang commented Jan 7, 2022

This means that for a car to be a hit, it must be detected and at least 85% of its bounding box area must be within a polygon zone. I need to fine-tune the configs to allow per-zone filtering as well, but I think the contained-within filter would cover your use case here.

That would work! If I understand correctly, instead of defining an ignore zone, I'd draw a polygon over the rest of the driveway as the valid zone for a car to be detected in.

YAML is more structured, definitely a better way forward for configuration (I work with Drupal 8/9 daily). Do you have the new code available? I can set up another box for testing/development.

BTW, let's not hard-code "car"; my driveway has "truck" and "bus" too often :-D

We would also need a GUI tool to define the polygons. Currently I'm using the zone tool in ZM (saving the polygons as inactive zones), but it's tedious to copy the points manually into the config file. Maybe add an option in the zone tool to export the points as a comma-separated list (or whatever the new format is); in-browser JavaScript should do.

There's another problem I don't have a solution for yet: the road curves down across the frame, so a long bus or truck near the right edge of the frame has a bounding box covering half of my driveway. I guess most detection pipelines (YOLO, Coral, ...) only draw axis-aligned rectangular boxes around detected objects, so there's really no way around it. Maybe in the future we can think of a way to define rotated bounding boxes for specific areas.


baudneo commented Jan 7, 2022

No hardcoding needed, lol; it will be highly configurable. I may change how the polygon_zones are defined into their own data structure:

monitors:
  2:
    #frame_set:  'snapshot,70,110,160,snapshot,alarm'
    object_detection_pattern: (person)
    person_max_detection_size: 60%
    #Back Alley / Car  - CLONE with Modect
#   PREVIOUSLY USED ZONE CONFIG
#    parking_area_zone_detection_pattern: (person)
#    parking_area_polygonzone: 0,496 1910,0 1910,634 0,640
#    person_min_confidence: 0.60
    frame_set: snapshot,70,110,160,snapshot,alarm,210,280

    zones:
      parking_area:
        # Polygon points
        coords: 805,200 1897,125 1910,562 7,594
        # detection pattern REGEX
        pattern: (person)

        contains:
          # 85% of the car's bounding box must be contained within this zone's area for it to be considered a hit
          car: 85%
          # at least 1 pixel of the person's bounding box must be contained in this zone's area for it to be considered a hit
          person: 1px
        max_size:
          # max size of the detected object
          person: 60%
        min_conf:
          # min confidence of the detected object
          person: 0.60
        past_area_diff:
          # match_past_detections
          # difference in area between the detected object and the saved bounding box
          person: 0.10
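
For illustration, the '85%' vs '1px' containment values could be interpreted like this (a sketch under assumed semantics; contained_enough is hypothetical, not the actual Neo parser):

from shapely.geometry import box

def contained_enough(bbox, zone, spec):
    """Interpret a contains spec: '85%' = fraction of box area, '1px' = absolute pixels."""
    det_box = box(*bbox)  # bbox as (x1, y1, x2, y2); zone is a shapely Polygon
    inside = det_box.intersection(zone).area
    if spec.endswith('%'):
        return inside / det_box.area >= float(spec.rstrip('%')) / 100.0
    if spec.endswith('px'):
        return inside >= float(spec.rstrip('px'))
    raise ValueError(f'unrecognized contains spec: {spec}')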
        

There's another problem I don't have a solution for yet: the road curves down across the frame, so a long bus or truck near the right edge of the frame has a bounding box covering half of my driveway. I guess most detection pipelines (YOLO, Coral, ...) only draw axis-aligned rectangular boxes around detected objects, so there's really no way around it. Maybe in the future we can think of a way to define rotated bounding boxes for specific areas.

I would recommend training YOLO models on your specific vehicles and the people you want to detect, using that model as the first sequence and the pre-trained YOLO as the second sequence. There are probably other ideas that could be used as well; YOLO only does axis-aligned rectangles and doesn't follow contours for the bounding boxes. There may be things we can do in the future; we will see.

The Neo repos are in my pinned repositories; there might be some kinks, as I just merged and haven't tested yet. I am cleaning things up first and then testing before letting the team know it's ready for review. I also have a working Docker image for MLAPI that utilizes GPU/TPU and has the ALPR/face-detection libs all installed.

 01/07/22 01:43:40.397042 zm_mlapi[9467] DBG1 mlapi:639 [mlc.polygons = {1: [{'name': 'front_yard', 'value': [(0, 877), (2170, 553), (3822, 1131), (3822, 2141), (0, 2159)], 'pattern': '(person|dog|cat)'}], 2: [{'name': 'parking_area', 'value': [(805, 200), (1897, 125), (1910, 562), (7, 594)], 'pattern': '(person)', 'contains': {'car': '85%', 'person': '1px'}, 'max_size': {'person': '60%'}, 'min_conf': {'person': 0.6}}]}]


baudneo commented Jan 8, 2022

I am keeping the current (legacy) way of defining polygon_zones and detection patterns, and also adding the new way of defining zones as shown above.

For options pertaining to filters, the priority will be DEFINED_ZONE, then SEQUENCE, then :general:.
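
In other words, option lookup would walk the scopes in that order, something like the following sketch (names illustrative):

# Illustrative lookup: zone-level options win over sequence-level options,
# which win over the :general: section.
def resolve_option(key, zone_opts, sequence_opts, general_opts, default=None):
    for scope in (zone_opts, sequence_opts, general_opts):
        if key in scope:
            return scope[key]
    return default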


baudneo commented Jan 8, 2022

@hqhoang - Would you be willing to test the new code base? I am testing it myself, but having another user test would be great as well.

There is a helper script to convert the secrets .ini files and zmeventnotification.ini to YAML, but mlapiconfig and objectconfig will need to be switched to YAML manually. The helper script is in the 'hook' folder; its syntax is ini_to_yaml.py -i <ini file, required> -o <output filename, optional>

If you do not specify an output name, it will take the current name of the file and simply change .ini to .yml.
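
For reference, the conversion itself amounts to something like this minimal sketch using configparser and PyYAML (a sketch only, not the actual ini_to_yaml.py):

import argparse
import configparser

import yaml  # PyYAML

parser = argparse.ArgumentParser(description='minimal INI -> YAML sketch')
parser.add_argument('-i', '--input', required=True)
parser.add_argument('-o', '--output')
args = parser.parse_args()

# Read the INI sections into a plain dict-of-dicts, then dump as YAML.
ini = configparser.ConfigParser()
ini.read(args.input)
data = {section: dict(ini[section]) for section in ini.sections()}

# Default output name: same file with .ini swapped for .yml
out = args.output or args.input.rsplit('.ini', 1)[0] + '.yml'
with open(out, 'w') as f:
    yaml.safe_dump(data, f, default_flow_style=False)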

For object detection and mlapi, there is the option to have all the keys at the 'base' level (legacy/default) and an option to enable 'sections'. This makes a conversion script impractical; technically, the 'sections' are only for readability and for editing the config file in something like PyCharm, where you can collapse the sections or navigate them using the 'Structure' tab. Once the object detection config or mlapi config is parsed, sections are removed and all keys end up at the 'base' level.

This is still a WIP and things WILL change, because I am currently rewriting the pyzm libs for something new that uses the API and also allows accessing ZM via the DB.

Edit: Also, if you have any other ideas, let me know and I can try to implement them if they make sense. match_past_detections will eventually be reworked to get it up to snuff; I just don't have an extra camera to set up a monitor and properly test a situation like yours.


hqhoang commented Jan 22, 2022

Sorry, I got busy, but I was finally able to try swapping to your code base. It's quite a mess on my end :-)

Documenting a few things that I encountered so that we can try to address later:

  • pycoral.adapters: needs to be compiled from source; the latest pycoral in Ubuntu 20.04 is a little outdated. Perhaps this needs more documentation. I don't have a Coral TPU (I hesitated since I have a GTX 1050, and now they're out of stock everywhere), so can I skip this dependency by disabling the TPU in the config?
  • consolidate or list out the config files. There's too much confusion: secrets.yml, zm_secrets.yml, objectconfig.yml, ...
  • missing keys/sections in my .yml file lead to endless uncaught errors, and I'm not sure which keys are required/optional (see the sketch after this list):
AttributeError: 'NoneType' object has no attribute 'get'
TypeError: object of type 'NoneType' has no len()
TypeError: 'NoneType' object is not iterable
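
Illustrating the third point: these errors typically come from chaining onto a missing key, and can be guarded against defensively (a sketch only, not the project's code; key names are illustrative):

import yaml

# yaml.safe_load returns None for an empty file, and dict.get returns None
# for a missing key; chaining .get()/len()/iteration onto that None raises
# exactly the errors listed above. Guarding with `or {}` avoids the crash.
with open('objectconfig.yml') as f:
    config = yaml.safe_load(f) or {}

monitors = config.get('monitors') or {}
for mid, opts in monitors.items():
    zones = (opts or {}).get('zones') or {}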

Also, the neo-* packages are a little behind as well, so I'm symlinking directly to the git-cloned directories. I got mlapi to run, and I'm debugging and testing zm_detect.py at the moment. I will keep you updated.


baudneo commented Jan 22, 2022

  1. I have made a change to the TPU importing that should solve the pycoral issues (see the pull_req branch); I need a user without a TPU to test it. There should be a warning message that pycoral failed to import, but that this is OK as long as no TPU is being used.
  2. The updated docs in the pull_req branch of my repos include an explanation of the config and secrets files. Consolidating them would make them huge and hard to navigate; if users want that, I may consider it, or if someone comes up with a better consolidated system. zmeventnotification.yml and secrets.yml configure the Perl part of the event server, while objectconfig.yml and zm_secrets.yml configure the object detection pipeline and how ZMES handles the newly implemented notifications and other customizations (animations, etc.). There are also mlapiconfig.yml and its secrets YAML file for MLAPI. Tedious? Yes, I feel your pain.
  3. I will start work on defining optional/required configuration options. The hardest part will be users moving their old objectconfig.ini to YAML. Please be aware that this is technically a development version even though it is not 0.0.x; things may change drastically and will be kind of rough.

At the moment, the pull_req branch of my repos is the one being considered for the merge. The team is reviewing it and will cherry-pick what to merge. The Neo packages are a mess right now due to my focus on the PR, so it is recommended for now to install and pull from the pull_req branches for testing until the merge is complete (meaning I will need to make an alternate install.sh that pulls from a specific branch).

The final code for the merge may differ and some things may not work as expected, so expect some headaches and issues until the merge is complete and solid docs have been written up about all the changes and new features.

Thank you for testing and reporting the issues and annoyances you find along the way; it is extremely helpful feedback!
