
Added two configuration parameters. Deleted "Delay" configuration parameter

Added a new config parameter "Path" inside the "AI" section of appsettings.json, allowing you to change the path SynoAI uses to send the snapshot to DeepStack and wait for results. This is useful if you want to swap the included object detection model for a custom one:

Path: The default value is "v1/vision/detection", which targets the standard DeepStack model. For example, if you want to use a third-party / custom module, such as DeepStack Dark Scene Objects Detection, set this value to "v1/vision/custom/dark".
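
A minimal sketch of the resulting "AI" section in appsettings.json (the Url below is a placeholder; point it at your own DeepStack instance):

    "AI": {
        "Type": "DeepStack",
        "Url": "http://10.0.0.10:83",
        "Path": "v1/vision/custom/dark"
    }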

Added a new config parameter "MaxSnapshots" inside the general section of appsettings.json. The default value is 1; the maximum value is 254.

Upon receiving a motion-detection trigger from Synology Surveillance Station, this parameter controls how many snapshots SynoAI will keep retrieving and analyzing until it finds a valid object. It then stops and sends an alert notification.

This greatly enhances detection capability in certain scenarios (like mine), because:

The first limitation comes from Synology: the fastest you can trigger a motion event is once every 5 seconds.

But my DS920+ can process a 640x480 snapshot in about 650 ms, even with DeepStack configured at "Medium" quality. So within those 5 seconds (5000 ms / 650 ms ≈ 7.7), I could actually inspect 7 or even 8 frames for the object I want to be alerted about ("Person").

In my scenario, a person can walk across the camera's field of view in less than 5 seconds.

Usually, motion detection in SSS triggers as a person enters the frame from one side, when only their head is visible. If SynoAI takes only that snapshot and sends it to DeepStack, no person will be detected.

But if I keep retrieving frames within those 5 seconds, the person's whole body will finally appear by the 2nd or 3rd frame, and DeepStack is able to detect them.

So I greatly increased the chances of detecting people by letting SynoAI take several snapshots once motion is detected, instead of letting SSS dictate the timing of each snapshot.

In fact, I increased the SSS trigger interval from 5 seconds to 20 seconds. Now, when SynoAI gets a motion alert from SSS, it takes control and retrieves up to 20 snapshots within that 20-second window, as sketched below.
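
A hypothetical appsettings.json fragment for that setup (the neighbouring keys are existing defaults shown only for context; the 20-second trigger interval itself is configured in Surveillance Station, not in SynoAI):

    "AllowInsecureUrl": false,
    "MaxSnapshots": 20,
    "DrawMode": "Matches",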

Two benefits:

1) It can detect people who were missed under the earlier "one snapshot every 5 seconds" scenario.

2) If someone is standing in front of the camera, I get ONE notification every 20 seconds, because SSS now triggers the motion event every 20 seconds and SynoAI sends a notification as soon as it first detects a person, then STOPS for that run.

Lastly, I deleted the configuration parameter "Delay", since the delay is already governed by the trigger interval configured in Synology Surveillance Station (minimum 5 seconds), so there is no need for SynoAI to also manage a delay.
euquiq authored and djdd87 committed Dec 14, 2021
1 parent c96eb97 commit da64a46
Showing 4 changed files with 85 additions and 113 deletions.
5 changes: 1 addition & 4 deletions SynoAI/AIs/DeepStack/DeepStackAI.cs
@@ -7,15 +7,12 @@
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

namespace SynoAI.AIs.DeepStack
{
public class DeepStackAI : AI
{
private const string URL_VISION_DETECTION = "v1/vision/detection";

public async override Task<IEnumerable<AIPrediction>> Process(ILogger logger, Camera camera, byte[] image)
{
using (HttpClient client = new HttpClient())
@@ -36,7 +33,7 @@ public async override Task<IEnumerable<AIPrediction>> Process(ILogger logger, Camera camera, byte[] image)

logger.LogDebug($"{camera.Name}: DeepStackAI: Sending image.");

HttpResponseMessage response = await client.PostAsync(URL_VISION_DETECTION, multipartContent);
HttpResponseMessage response = await client.PostAsync(Config.AIPath, multipartContent);
if (response.IsSuccessStatusCode)
{
DeepStackResponse deepStackResponse = await GetResponse(response);
25 changes: 13 additions & 12 deletions SynoAI/Config.cs
@@ -4,12 +4,8 @@
using SynoAI.AIs;
using SynoAI.Models;
using SynoAI.Notifiers;
using SynoAI.Notifiers.Pushbullet;
using System;
using System.Collections.Generic;
using System.Dynamic;
using System.Linq;
using System.Threading.Tasks;

namespace SynoAI
{
@@ -51,12 +47,6 @@ public static class Config
/// 2 = Low bandwidth
/// </summary>
public static CameraQuality Quality { get; private set; }

/// <summary>
/// The amount of time that needs to have passed between the last call to check the camera and the current call.
/// </summary>
public static int Delay { get; private set; }

/// <summary>
/// The hex code of the colour to use for the boxing around image matches.
/// </summary>
@@ -96,6 +86,11 @@ public static class Config
/// </summary>
public static bool LabelBelowBox { get; private set; }
/// <summary>
/// Upon movement, the maximum number of snapshots sequentially retrieved from SSS until an object of interest is found (e.g. 4 snapshots)
/// </summary>
public static int MaxSnapshots { get; private set; }
/// <summary>
/// Whether this original snapshot generated from the API should be saved to the file system.
/// </summary>
public static bool SaveOriginalSnapshot { get; private set; }
@@ -105,6 +100,7 @@
/// </summary>
public static AIType AI { get; private set; }
public static string AIUrl { get; private set; }
public static string AIPath { get; private set; }
public static int MinSizeX { get; private set; }
public static int MinSizeY { get; private set; }

@@ -144,8 +140,7 @@ public static void Generate(ILogger logger, IConfiguration configuration)
ApiVersionCamera = configuration.GetValue<int>("ApiVersionCamera", 9); // Surveillance Station 8.0

Quality = configuration.GetValue<CameraQuality>("Quality", CameraQuality.Balanced);

Delay = configuration.GetValue<int>("Delay", 5000);

DrawMode = configuration.GetValue<DrawMode>("DrawMode", DrawMode.Matches);

StrokeWidth = configuration.GetValue<int>("StrokeWidth", 2);
@@ -164,12 +159,18 @@

LabelBelowBox = configuration.GetValue<bool>("LabelBelowBox", false);
AlternativeLabelling = configuration.GetValue<bool>("AlternativeLabelling", false);
MaxSnapshots = configuration.GetValue<int>("MaxSnapshots", 1);
if (MaxSnapshots > 254)
{
MaxSnapshots = 254;
logger.LogWarning("Config parameter MaxSnapshots is too big: the maximum accepted value is 254.");
}

SaveOriginalSnapshot = configuration.GetValue<bool>("SaveOriginalSnapshot", false);

IConfigurationSection aiSection = configuration.GetSection("AI");
AI = aiSection.GetValue<AIType>("Type", AIType.DeepStack);
AIUrl = aiSection.GetValue<string>("Url");
AIPath = aiSection.GetValue<string>("Path", "v1/vision/detection");

Cameras = GenerateCameras(logger, configuration);
Notifiers = GenerateNotifiers(logger, configuration);
167 changes: 71 additions & 96 deletions SynoAI/Controllers/CameraController.cs
@@ -58,78 +58,93 @@ public async void Get(string id)
return;
}

// Enforce a delay between checks
if (!HasSufficientDelay(id))
{
return;
}
// Get the min X and Y values for detected objects; initialize the snapshot counter.
int minX = camera.GetMinSizeX();
int minY = camera.GetMinSizeY();
int snapshotCount = 1;

// Create the stopwatches for reporting timings
Stopwatch overallStopwatch = Stopwatch.StartNew();

// Take the snapshot from Surveillance Station
byte[] snapshot = await GetSnapshot(id);
snapshot = PreProcessSnapshot(camera, snapshot);

// Save the original unprocessed image if required
if (Config.SaveOriginalSnapshot)
{
_logger.LogInformation($"{id}: Saving original image before processing");
SnapshotManager.SaveOriginalImage(_logger, camera, snapshot);
}

// Get the min X and Y values
int minX = camera.GetMinSizeX();
int minY = camera.GetMinSizeY();

// Use the AI to get the valid predictions and then get all the valid predictions, which are all the AI predictions where the result from the AI is
// in the list of types and where the size of the object is bigger than the defined value.
IEnumerable<AIPrediction> predictions = await GetAIPredications(camera, snapshot);
if (predictions != null)
// Loop, requesting snapshots, until a valid prediction is found or MaxSnapshots is reached
while (snapshotCount > 0 && snapshotCount <= Config.MaxSnapshots)
{
IEnumerable<AIPrediction> validPredictions = predictions.Where(x =>
camera.Types.Contains(x.Label, StringComparer.OrdinalIgnoreCase) && // Is a type we care about
x.SizeX >= minX && x.SizeY >= minY) // Is bigger than the minimum size
.ToList();

if (validPredictions.Count() > 0)
_logger.LogInformation($"Snapshot {snapshotCount} of {Config.MaxSnapshots} asked at EVENT TIME {overallStopwatch.ElapsedMilliseconds}ms.");
// Take the snapshot from Surveillance Station
byte[] snapshot = await GetSnapshot(id);
_logger.LogInformation($"Snapshot {snapshotCount} of {Config.MaxSnapshots} received at EVENT TIME {overallStopwatch.ElapsedMilliseconds}ms.");
snapshot = PreProcessSnapshot(camera, snapshot);

// Use the AI to get the valid predictions and then get all the valid predictions, which are all the AI predictions where the result from the AI is
// in the list of types and where the size of the object is bigger than the defined value.
IEnumerable<AIPrediction> predictions = await GetAIPredications(camera, snapshot);
_logger.LogInformation($"Snapshot {snapshotCount} of {Config.MaxSnapshots} processed {predictions.Count()} objects at EVENT TIME {overallStopwatch.ElapsedMilliseconds}ms.");
if (predictions != null)
{
// Because we don't want to process the image unless it's required, we pass the snapshot manager to the notifiers. It will then perform
// the necessary actions when its GetImage method is called.
SnapshotManager snapshotManager = new SnapshotManager(snapshot, predictions, validPredictions, _snapshotManagerLogger);

// Generate text for notifications

IList<String> labels = new List<String>();
IEnumerable<AIPrediction> validPredictions = predictions.Where(x =>
camera.Types.Contains(x.Label, StringComparer.OrdinalIgnoreCase) && // Is a type we care about
x.SizeX >= minX && x.SizeY >= minY) // Is bigger than the minimum size
.ToList();

if (Config.AlternativeLabelling && Config.DrawMode == DrawMode.Matches)
if (validPredictions.Count() > 0)
{
if (validPredictions.Count() == 1)
// Save the original unprocessed image if required
if (Config.SaveOriginalSnapshot)
{
decimal confidence = Math.Round(validPredictions.First().Confidence, 0, MidpointRounding.AwayFromZero);
labels.Add($"{validPredictions.First().Label.FirstCharToUpper()} {confidence}%");
_logger.LogInformation($"{id}: Saving original image");
SnapshotManager.SaveOriginalImage(_logger, camera, snapshot);
}
else

// Because we don't want to process the image unless it's required, we pass the snapshot manager to the notifiers. It will then perform
// the necessary actions when its GetImage method is called.
SnapshotManager snapshotManager = new SnapshotManager(snapshot, predictions, validPredictions, _snapshotManagerLogger);

// Generate text for notifications
IList<String> labels = new List<String>();

if (Config.AlternativeLabelling && Config.DrawMode == DrawMode.Matches)
{
//Since there is more than one object detected, include correlating number
int counter = 1;
foreach (AIPrediction prediction in validPredictions)
if (validPredictions.Count() == 1)
{
decimal confidence = Math.Round(prediction.Confidence, 0, MidpointRounding.AwayFromZero);
String label = $"{counter}. {prediction.Label.FirstCharToUpper()} {confidence}%";
labels.Add(label);
counter++;
decimal confidence = Math.Round(validPredictions.First().Confidence, 0, MidpointRounding.AwayFromZero);
labels.Add($"{validPredictions.First().Label.FirstCharToUpper()} {confidence}%");
}
else
{
//Since there is more than one object detected, include correlating number
int counter = 1;
foreach (AIPrediction prediction in validPredictions)
{
decimal confidence = Math.Round(prediction.Confidence, 0, MidpointRounding.AwayFromZero);
labels.Add($"{counter}. {prediction.Label.FirstCharToUpper()} {confidence}%");
counter++;
}
}
}
else
{
labels = validPredictions.Select(x => x.Label.FirstCharToUpper()).ToList();
}

//Send Notifications
await SendNotifications(camera, snapshotManager, labels);
_logger.LogInformation($"{id}: Valid object found in snapshot {snapshotCount} of {Config.MaxSnapshots} at EVENT TIME {overallStopwatch.ElapsedMilliseconds}ms.");

// Stop the snapshot loop:
snapshotCount = -1;
}
else if (predictions.Count() > 0)
{
// We got predictions back from the AI, but nothing that should trigger an alert
_logger.LogInformation($"{id}: No valid objects at EVENT TIME {overallStopwatch.ElapsedMilliseconds}ms.");
}
else
{
labels = validPredictions.Select(x => x.Label.FirstCharToUpper()).ToList();
// We didn't get any predictions whatsoever from the AI
_logger.LogInformation($"{id}: Nothing detected by the AI at EVENT TIME {overallStopwatch.ElapsedMilliseconds}ms.");
}

//Send Notifications
await SendNotifications(camera, snapshotManager, labels);
}
snapshotCount++;
else if (predictions.Count() > 0)
{
// We got predictions back from the AI, but nothing that should trigger an alert
@@ -144,6 +159,7 @@ public async void Get(string id)

_logger.LogInformation($"{id}: Finished ({overallStopwatch.ElapsedMilliseconds}ms).");
}
_logger.LogInformation($"{id}: FINISHED EVENT at EVENT TIME {overallStopwatch.ElapsedMilliseconds}ms.");
}

/// <summary>
Expand Down Expand Up @@ -228,8 +244,6 @@ private async Task SendNotifications(Camera camera, ISnapshotManager snapshotMan
/// <returns>A byte array for the image, or null on failure.</returns>
private async Task<byte[]> GetSnapshot(string cameraName)
{
_logger.LogInformation($"{cameraName}: Motion detected, fetching snapshot.");

Stopwatch stopwatch = Stopwatch.StartNew();

byte[] imageBytes = await _synologyService.TakeSnapshotAsync(cameraName);
@@ -241,7 +255,7 @@ private async Task<byte[]> GetSnapshot(string cameraName)
else
{
stopwatch.Stop();
_logger.LogInformation($"{cameraName}: Snapshot received ({stopwatch.ElapsedMilliseconds}ms).");
_logger.LogInformation($"{cameraName}: Snapshot received in {stopwatch.ElapsedMilliseconds}ms.");
}

return imageBytes;
@@ -255,59 +269,20 @@ private async Task<byte[]> GetSnapshot(string cameraName)
/// <returns>A list of predictions, or null on failure.</returns>
private async Task<IEnumerable<AIPrediction>> GetAIPredications(Camera camera, byte[] imageBytes)
{
_logger.LogInformation($"{camera}: Processing.");

IEnumerable<AIPrediction> predictions = await _aiService.ProcessAsync(camera, imageBytes);
if (predictions == null)
{
_logger.LogError($"{camera}: Failed to get get predictions.");
return null;
}
else
else if (_logger.IsEnabled(LogLevel.Information))
{
foreach (AIPrediction prediction in predictions)
{
_logger.LogInformation($"{camera}: {prediction.Label} ({prediction.Confidence}%) [Size: {prediction.SizeX}x{prediction.SizeY}] [Start: {prediction.MinX},{prediction.MinY} | End: {prediction.MaxX},{prediction.MaxY}]");
}
}

return predictions;
}

/// <summary>
/// Ensures that the camera doesn't get called too often.
/// </summary>
/// <param name="id">The ID of the camera to check.</param>
/// <returns>True if enough time has passed.</returns>
private bool HasSufficientDelay(string id)
{
if (_lastCameraChecks.TryGetValue(id, out DateTime lastCheck))
{
TimeSpan timeSpan = DateTime.UtcNow - lastCheck;
_logger.LogInformation($"{id}: Camera last checked {timeSpan.Milliseconds}ms ago");

if (timeSpan.TotalMilliseconds < Config.Delay)
{
_logger.LogInformation($"{id}: Ignoring request due to last check being under {Config.Delay}ms.");
return false;
}

if (!_lastCameraChecks.TryUpdate(id, DateTime.UtcNow, lastCheck))
{
_logger.LogInformation($"{id}: Ignoring request due multiple concurrent calls.");
return false;
}
}
else
{
if (!_lastCameraChecks.TryAdd(id, DateTime.UtcNow))
{
_logger.LogInformation($"{id}: Ignoring request due multiple concurrent calls.");
return false;
}
}

return true;
}
}
}
1 change: 0 additions & 1 deletion SynoAI/appsettings.json
@@ -14,7 +14,6 @@
"Password": "",
"AllowInsecureUrl": false,

"Delay": 5000,
"DrawMode": "Matches",

"AI": {
