Pipeline Protocol

Overview

{
    "TaskA": {
        "next": [
            "TaskB",
            "TaskC"
        ]
        // properties ...
    },
    "TaskB": {
        // properties ...
    },
    // other tasks ...
}

When we execute a task (i.e., pass the task name to the MaaTaskerPostPipeline interface), it recognizes the tasks in the "next" list one by one, using each task's own recognition settings. Once a match is found, it stops recognizing the rest of the "next" list and executes the matched task, much like traversing a list and breaking on the first hit.

Example

For example, let's say we have a game where different fruits, such as apples, oranges, and bananas, can appear on the screen, and we need to click them. Here's a simple JSON representation:

{
    "StartFruit": {
        "next": [
            "Apple",
            "Orange",
            "Banana"
        ]
    },
    "Apple": {
        "recognition": XXX,
        "action": "Click",
        // ...
    },
    "Orange": {
        "recognition": XXX,
        "action": "Click",
        "next": [
            "Cat",
            "Dog"
        ]
    },
    "Banana": {
        // ...
    },
    // ...
}

Let's assume there are no apples on the screen, but there are oranges and bananas. In the above JSON, if we execute "StartFruit" (i.e., pass "StartFruit" to the MaaTaskerPostPipeline interface), it will first recognize "Apple." Since there are no apples on the screen, it will continue to recognize "Orange." If it recognizes an orange, it will start executing the "Orange" task, and it won't attempt to recognize "Banana." After executing "Orange" according to its action, it will continue to recognize "Orange's" "next" tasks.

Within "Orange's" "next," if it recognizes "Cat," it won't continue to recognize "Dog." It will execute the "Cat" action and continue to recognize "Cat's" "next" after the action is completed. If neither "Cat" nor "Dog" is recognized, it will continue to attempt recognition for these two tasks until a timeout occurs.

This loop continues until the "next" of a task is empty, which signifies that the task is complete.

Property Fields

Note: Required fields may still be left empty in the pipeline JSON file and set through the interface before the task is actually executed.

  • recognition: string
    Recognition algorithm type. Optional, default is DirectHit.
    Possible values: DirectHit | TemplateMatch | FeatureMatch | ColorMatch | OCR | NeuralNetworkClassify | NeuralNetworkDetect | Custom.
    See Algorithm Types for details.

  • action: string
    Action to execute. Optional, default is DoNothing.
    Possible values: DoNothing | Click | Swipe | Key | InputText | StartApp | StopApp | StopTask | Custom.
    See Action Types for details.

  • next: string | list<string, >
    List of tasks to execute next. Optional, default is empty.
    It recognizes each task in sequence and executes the first one it recognizes.

  • interrupt: string | list<string, >
    The list of candidate tasks to try when none of the tasks in next is recognized; the matched one runs as an interrupt-and-return detour. Optional, default is empty.
    If none of the tasks in next is recognized, each task in the interrupt list is recognized in order, and the first one recognized is executed. After that task and all of its subsequent tasks have finished, execution jumps back to this task and tries the recognition again.
    For example: A: { next: [B, C], interrupt: [D, E] }
    If neither B nor C is recognized but D is, D and D.next are executed in full. Once D's pipeline has finished, execution returns to task A and keeps trying to recognize B, C, D, and E.
    This field is mostly used for exception handling, for example when D recognizes a "network disconnected" dialog: after clicking confirm and waiting for the connection to recover, the previous task flow continues (see the sketch after this list).

  • is_sub: bool
    (Deprecated in version 2.x but kept for compatibility; interrupt is recommended instead.)
    Whether this is a subtask. Optional, default is false.
    If it is a subtask, after this task (and its subsequent tasks such as "next") completes, execution returns to re-recognize the "next" list in which this task appears.
    For example: A.next = [B, Sub_C, D], where Sub_C.is_sub = true. If Sub_C is matched, then after Sub_C and its subsequent tasks have fully executed, it returns to re-recognize [B, Sub_C, D] and executes the matching item and its subsequent tasks.

  • rate_limit: uint
    Recognition rate limit, in milliseconds. Optional, default is 1000.
    Each round of recognizing "next" + "interrupt" takes at least rate_limit milliseconds; if it finishes sooner, the task sleeps for the remaining time.

  • timeout: uint
    Timeout for recognizing "next" + "interrupt" tasks, in milliseconds. Optional, default is 20,000 milliseconds (20 seconds).
    The detailed logic is while(!timeout) { foreach(next + interrupt); sleep_until(rate_limit); }

  • on_error: string | list<string, >
    When recognition times out or the action fails to execute, the tasks in this list are executed next. Optional, default is empty.

  • timeout_next: string | list<string, >
    (Deprecated in version 2.x but kept for compatibility; on_error is recommended instead.)
    List of tasks to execute after a timeout. Optional, default is empty.

  • inverse: bool
    Invert the recognition result: treat recognized as not recognized, and vice versa. Optional, default is false.
    Please note that a task "recognized" through this setting has its own click action disabled (because nothing was actually recognized). If needed, set the target explicitly.

  • enabled: bool
    Whether to enable this task. Optional, default is true.
    If set to false, this task will be skipped when it appears in the "next" lists of other tasks, meaning it won't be recognized or executed.

  • pre_delay: uint
    Delay in milliseconds between recognizing a task and executing the action. Optional, default is 200 milliseconds.
    It is recommended to add intermediate tasks whenever possible and use less delay to maintain both speed and stability.

  • post_delay: uint
    Delay in milliseconds between executing the action and recognizing the "next" tasks. Optional, default is 200 milliseconds.
    It is recommended to add intermediate tasks whenever possible and use less delay to maintain both speed and stability.

  • pre_wait_freezes: uint | object
    Time in milliseconds to wait for the screen to stop changing between recognizing a task and executing the action. Optional, default is 0 (no waiting).
    It will exit the action only when the screen has not had significant changes for "pre_wait_freezes" milliseconds in a row.
    If it's an object, more parameters can be set, see Waiting for the Screen to Stabilize for details. The specific order is pre_wait_freezes - pre_delay - action - post_wait_freezes - post_delay.

  • post_wait_freezes: uint | object
    Time in milliseconds to wait for the screen to stop changing between executing the action and recognizing the "next" tasks. Optional, default is 0 (no waiting).
    Other logic is the same as pre_wait_freezes.

  • focus: bool
    Whether to focus on the task, resulting in additional callback messages. Optional, default is false (no messages).
    See Task Notifications for details.
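
As a rough sketch of how these fields work together (task names, text, and values below are made up purely for illustration): "OpenShop" tries to recognize "BuyItem" from its next list; if that fails but the network-error dialog is recognized via interrupt, the dialog is handled and recognition of "OpenShop"'s next list resumes; if everything still times out, on_error takes over.

{
    "OpenShop": {
        "recognition": "OCR",
        "expected": "Shop",
        "action": "Click",
        "next": [
            "BuyItem"
        ],
        "interrupt": [
            "CloseNetworkErrorDialog"
        ],
        "on_error": [
            "ReturnToMainMenu"
        ],
        "timeout": 10000
    },
    "CloseNetworkErrorDialog": {
        "recognition": "OCR",
        "expected": "Network disconnected",
        "action": "Click"
    },
    // "BuyItem", "ReturnToMainMenu", etc. would be defined elsewhere ...
}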

Default Properties

Please refer to default_pipeline.json. The Default object can set default values for all fields, and an object named after an algorithm/action can set the default parameter values of that algorithm/action.
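
A minimal sketch of what such a file might contain, assuming the structure described above (the keys and values here are illustrative, not an exhaustive reference):

{
    "Default": {
        "timeout": 20000,
        "pre_delay": 200,
        "post_delay": 200
    },
    "TemplateMatch": {
        "threshold": 0.8
    },
    "Click": {
        "target_offset": [0, 0, 0, 0]
    }
}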

Algorithm Types

DirectHit

Direct hit, meaning no recognition is performed, and the action is executed directly.

TemplateMatch

Template matching, also known as "find image."

This algorithm property requires additional fields:

  • roi: array<int, 4> | string
    Recognition area coordinates. Optional, default [0, 0, 0, 0], i.e. full screen.

    • array<int, 4>: Recognition area coordinates, [x, y, w, h], if you want full screen, you can set it to [0, 0, 0, 0].
    • string: Fill in a task name to recognize within the target area recognized by that previously executed task.
  • roi_offset: array<int, 4>
    Additional offset applied on top of roi; the four values are added element-wise. Optional, default [0, 0, 0, 0].

  • template: string | list<string, >
    Path to the template image, relative to the "image" folder. Required. The images used need to be cropped from a lossless original screenshot and scaled to 720p. Refer to here for details.

  • threshold: double | list<double, >
    Template matching threshold. Optional, default is 0.7.
    If it's an array, its length should match the length of the template array.

  • order_by: string
    How the results are sorted. Optional, default is Horizontal.
    Possible values: Horizontal | Vertical | Score | Random
    You can use it with the index field.

  • index: int
    Index to hit. Optional, default is 0.
    If there are N results in total, index may range over [-N, N - 1]; negative values follow Python-style indexing (e.g. -1 refers to the last result). An out-of-range index is treated as no result for this recognition.

  • method: int
    Template matching algorithm, equivalent to cv::TemplateMatchModes. Optional, default is 5.
    Only supports 1, 3, and 5, with higher values providing greater accuracy but also taking more time.
    For more details, refer to the OpenCV official documentation.

  • green_mask: bool
    Whether to apply a green mask. Optional, default is false.
    If set to true, you can paint the unwanted parts in the image green with RGB: (0, 255, 0), and those green parts won't be matched.
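
For instance, a task that looks for a hypothetical confirm-button image in the lower-right quarter of a 1280x720 screen and clicks it could be sketched as follows (the task and image names are illustrative):

{
    "ClickConfirm": {
        "recognition": "TemplateMatch",
        "template": "confirm_button.png",
        "roi": [640, 360, 640, 360],
        "threshold": 0.8,
        "order_by": "Score",
        "action": "Click"
    }
}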

FeatureMatch

Feature matching, a more powerful "find image" with better generalization, resistant to perspective and size changes.

This algorithm property requires additional fields:

  • roi: array<int, 4> | string
    Same as TemplateMatch.roi.

  • roi_offset: array<int, 4>
    Same as TemplateMatch.roi_offset.

  • template: string | list<string, >
    Path to the template image, relative to the "image" folder. Required.

  • count: int
    The number of matching feature points required (threshold). Optional, default is 4.

  • order_by: string
    How the results are sorted. Optional, default is Horizontal.
    Possible values: Horizontal | Vertical | Score | Area | Random
    You can use it with the index field.

  • index: int
    Index to hit. Optional, default is 0.
    If there are N results in total, index may range over [-N, N - 1]; negative values follow Python-style indexing (e.g. -1 refers to the last result). An out-of-range index is treated as no result for this recognition.

  • green_mask: bool
    Whether to apply a green mask. Optional, default is false.
    If set to true, you can paint the unwanted parts in the image green with RGB: (0, 255, 0), and those green parts won't be matched.

  • detector: string
    Feature detector. Optional, default is SIFT.
    Currently, it supports the following algorithms:

    • SIFT
      High computational complexity, scale invariance, and rotation invariance. Best performance.
    • KAZE
      Suitable for 2D and 3D images, scale invariance, and rotation invariance.
    • AKAZE
      Faster computation speed, scale invariance, and rotation invariance.
    • BRISK
      Very fast computation speed, scale invariance, and rotation invariance.
    • ORB
      Very fast computation speed, rotation invariance, but lacks scale invariance.

    You can look up detailed characteristics of each algorithm on your own.

  • ratio: double
    The distance ratio for KNN matching, [0 - 1.0], where larger values make the matching more lenient (easier to connect). Optional, default is 0.6.
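
A hypothetical sketch of a FeatureMatch task, using illustrative names and values:

{
    "FindLogo": {
        "recognition": "FeatureMatch",
        "template": "logo.png",
        "count": 8,
        "detector": "SIFT",
        "ratio": 0.6,
        "action": "Click"
    }
}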

ColorMatch

Color matching, also known as "find color."

This algorithm property requires additional fields:

  • roi: array<int, 4> | string
    Same as TemplateMatch.roi.

  • roi_offset: array<int, 4>
    Same as TemplateMatch.roi_offset.

  • method: int
    Color matching method, equivalent to cv::ColorConversionCodes. Optional, default is 4 (RGB).
    Common values are 4 (RGB, 3 channels), 40 (HSV, 3 channels), and 6 (GRAY, 1 channel).
    For more details, refer to the OpenCV official documentation.

  • lower: list<int, > | list<list<int, >>
    Lower bound for colors. Required. The innermost list length should match the number of channels in the method.

  • upper: list<int, > | list<list<int, >>
    Upper bound for colors. Required. The innermost list length should match the number of channels in the method.

  • count: int
    The threshold for the number of matching points required. Optional, default is 1.

  • order_by: string
    How the results are sorted. Optional, default is Horizontal.
    Possible values: Horizontal | Vertical | Score | Area | Random
    You can use it with the index field.

  • index: int
    Index to hit. Optional, default is 0.
    If there are N results in total, index may range over [-N, N - 1]; negative values follow Python-style indexing (e.g. -1 refers to the last result). An out-of-range index is treated as no result for this recognition.

  • connected: bool
    Whether to count only connected points. Optional, default is false.
    If set to true, after applying color filtering, it will only count the maximum connected block of pixels. If set to false, it won't consider whether these pixels are connected.
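
For example, a sketch of a task that looks for a patch of red pixels (roughly 200-255 red, 0-80 green/blue in RGB) in the top-right corner and clicks it; all names and values are illustrative:

{
    "FindRedDot": {
        "recognition": "ColorMatch",
        "roi": [1000, 0, 280, 100],
        "method": 4,
        "lower": [200, 0, 0],
        "upper": [255, 80, 80],
        "count": 50,
        "connected": true,
        "action": "Click"
    }
}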

OCR

Text recognition.

This algorithm property requires additional fields:

  • roi: array<int, 4> | string
    Same as TemplateMatch.roi.

  • roi_offset: array<int, 4>
    Same as TemplateMatch.roi_offset.

  • expected: string | list<string, >
    The expected results; regular expressions are supported. Required.

  • replace: array<string, 2> | list<array<string, 2>>
    Text recognition results may be inaccurate, so the given replacements are applied to the recognized text before matching. Optional.

  • order_by: string
    How the results are sorted. Optional, default is Horizontal.
    Possible values: Horizontal | Vertical | Area | Length | Random
    You can use it with the index field.

  • index: int
    Index to hit. Optional, default is 0.
    If there are N results in total, index may range over [-N, N - 1]; negative values follow Python-style indexing (e.g. -1 refers to the last result). An out-of-range index is treated as no result for this recognition.

  • only_rec: bool
    Whether to recognize only (without detection, requires precise roi). Optional, default is false.

  • model: string
    Model folder path. Use a relative path to the "model/ocr" folder. Optional, default is empty.
    If empty, it will use the models in the root of the "model/ocr" folder. The folder should include three files: rec.onnx, det.onnx, and keys.txt.
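
A sketch of an OCR task that reads a stage label such as "1-7" in the top-left corner, correcting a common misread of "1" as "l" (the task name, roi, and patterns are illustrative):

{
    "ReadStageName": {
        "recognition": "OCR",
        "roi": [0, 0, 400, 100],
        "expected": "^1-[0-9]$",
        "replace": [
            ["l-", "1-"]
        ],
        "action": "Click"
    }
}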

NeuralNetworkClassify

Deep learning classification, to determine if the image in a fixed position matches the expected "category."

This algorithm property requires additional fields:

  • roi: array<int, 4> | string
    Same as TemplateMatch.roi.

  • roi_offset: array<int, 4>
    Same as TemplateMatch.roi_offset.

  • labels: list<string, >
    Labels, meaning the names of each category. Optional.
    It only affects debugging images and logs. If not filled, it will be filled with "Unknown."

  • model: string
    Model file path. Use a relative path to the "model/classify" folder. Required.
    Currently, only ONNX models are supported.

  • expected: int | list<int, >
    The expected category index.

  • order_by: string
    How the results are sorted. Optional, default is Horizontal.
    Possible values: Horizontal | Vertical | Random
    You can use it with the index field.

  • index: int
    Index to hit. Optional, default is 0.
    If there are N results in total, index may range over [-N, N - 1]; negative values follow Python-style indexing (e.g. -1 refers to the last result). An out-of-range index is treated as no result for this recognition.

For example, if you want to recognize whether a cat, a dog, or a mouse appears at a fixed position in the image, and you've trained a model for this three-way classification, and you want to click when a cat or a mouse is recognized but not when a dog is, the relevant fields would be:

{
    "labels": ["Cat", "Dog", "Mouse"],
    "expected": [0, 2]
}

Please note that these values should match the actual model output.

NeuralNetworkDetect

Deep learning object detection, an advanced version of "find image."

The main difference from classification is the flexibility to find objects at arbitrary positions. However, this often requires more complex models, more training data, longer training times, and significantly higher resource usage during inference.

This algorithm property requires additional fields:

  • roi: array<int, 4> | string
    Same as TemplateMatch.roi.

  • roi_offset: array<int, 4>
    Same as TemplateMatch.roi_offset.

  • labels: list<string, >
    Labels, meaning the names of each category. Optional.
    It only affects debugging images and logs. If not filled, it will be filled with "Unknown."

  • model: string
    Model file path. Use a relative path to the "model/detect" folder. Required.
    Currently, only YoloV8 ONNX models are supported.

  • expected: int | list<int, >
    The expected category index.

  • threshold: double | list<double, >
    Model confidence threshold. Optional, default is 0.3.
    If it's an array, its length should match the length of the expected array.

  • order_by: string
    How the results are sorted. Optional, default is Horizontal.
    Possible values: Horizontal | Vertical | Area | Random
    You can use it with the index field.

  • index: int
    Index to hit. Optional, default is 0.
    If there are N results in total, index may range over [-N, N - 1]; negative values follow Python-style indexing (e.g. -1 refers to the last result). An out-of-range index is treated as no result for this recognition.

For example, if you want to detect cats, dogs, and mice in an image and only click when a cat or a mouse is detected but not when a dog is detected, the relevant fields would be:

{
    "labels": ["Cat", "Dog", "Mouse"],
    "expected": [0, 2]
}

Please note that these values should match the actual model output.

Custom

Executes the custom recognition registered through the MaaResourceRegisterCustomRecognition interface.

This algorithm property requires additional fields:

  • custom_recognition: string
    Recognition name, same as the one passed in through the registration interface. It will also be passed through MaaCustomRecognitionCallback.custom_recognition_name. Required.

  • custom_recognition_param: any
    Recognition parameter, any type; it will be passed through MaaCustomRecognitionCallback.custom_recognition_param. Optional, default is an empty JSON object, i.e. {}.

  • roi: array<int, 4> | string
    Same as TemplateMatch.roi, will be passed through MaaCustomRecognitionCallback.roi. Optional, default [0, 0, 0, 0].

  • roi_offset: array<int, 4>
    Same as TemplateMatch.roi_offset.
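
A sketch of a task using a custom recognition; "MyRecognition" stands for whatever name was passed to MaaResourceRegisterCustomRecognition, and the parameter object is entirely up to your own handler:

{
    "MyCustomCheck": {
        "recognition": "Custom",
        "custom_recognition": "MyRecognition",
        "custom_recognition_param": {
            "mode": "strict"
        },
        "roi": [0, 0, 0, 0],
        "action": "Click"
    }
}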

Action Types

DoNothing

Does nothing.

Click

Clicks.

Additional properties for this action:

  • target: true | string | array<int, 4>
    The position to click. Optional, default is true.

    • true: Clicks the target just recognized in this task (i.e., clicks itself).
    • string: Enter the task name to click a target recognized by a previously executed task.
    • array<int, 4>: Clicks a random point within a fixed coordinate area [x, y, w, h]. To click the entire screen, set it to [0, 0, 0, 0].
  • target_offset: array<int, 4>
    Additional movement from the target before clicking, where the four values are added together. Optional, default is [0, 0, 0, 0].
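
Two illustrative sketches: the first clicks 50 pixels below whatever its own recognition matched; the second performs a DirectHit click on a random point within a fixed area (names and coordinates are made up):

{
    "ClickBelowMatch": {
        "recognition": "TemplateMatch",
        "template": "button.png",
        "action": "Click",
        "target": true,
        "target_offset": [0, 50, 0, 0]
    },
    "ClickFixedArea": {
        "action": "Click",
        "target": [600, 330, 80, 60]
    }
}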

Swipe

Swipes.

Additional properties for this action:

  • begin: true | string | array<int, 4>
    The starting point of the swipe. Optional, default is true. The values are the same as Click.target.

  • begin_offset: array<int, 4>
    Additional movement from the begin before swiping, where the four values are added together. Optional, default is [0, 0, 0, 0].

  • end: true | string | array<int, 4>
    The end point of the swipe. Required. The values are the same as Click.target.

  • end_offset: array<int, 4>
    Additional movement from the end before swiping, where the four values are added together. Optional, default is [0, 0, 0, 0].

  • duration: uint
    Duration of the swipe in milliseconds. Optional, default is 200.
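
An illustrative sketch of a leftward swipe across the middle of a 1280x720 screen over half a second (the task name and coordinates are made up):

{
    "SwipeLeft": {
        "action": "Swipe",
        "begin": [1000, 360, 20, 20],
        "end": [200, 360, 20, 20],
        "duration": 500
    }
}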

Key

Presses a key.

InputText

Inputs text.

Additional properties for this action:

  • input_text: string
    The text to input; some controllers only support ASCII.

StartApp

Starts an app.

Additional properties for this action:

  • package: string
    Launch entry. Required.
    You need to enter the package name or activity, for example, com.hypergryph.arknights or com.hypergryph.arknights/com.u8.sdk.U8UnityContext.

StopApp

Closes an app.

Additional properties for this action:

  • package: string
    The app to close. Required.
    You need to enter the package name, for example, com.hypergryph.arknights.

StopTask

Stops the current task chain (the individual task chain passed to MaaTaskerPostPipeline).

Custom

Executes the custom action registered through the MaaResourceRegisterCustomAction interface.

Additional properties for this action:

  • custom_action: string
    Action name, same as the identifier name passed in the registration interface. It will also be passed through MaaCustomActionCallback.custom_action_name. Required.

  • custom_action_param: any
    Action parameter, any type; it will be passed through MaaCustomActionCallback.custom_action_param. Optional, default is an empty JSON object, i.e. {}.

  • target: true | string | array<int, 4>
    Same as Click.target, will be passed through MaaCustomActionCallback.box. Optional, default true.

  • target_offset: array<int, 4>
    Same as Click.target_offset.
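
A sketch of a task invoking a custom action; "MyAction" stands for whatever name was passed to MaaResourceRegisterCustomAction, and the parameter object is entirely up to your own handler:

{
    "RunMyAction": {
        "action": "Custom",
        "custom_action": "MyAction",
        "custom_action_param": {
            "times": 3
        },
        "target": true
    }
}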

Waiting for the Screen to Stabilize

Waits for the screen to stabilize. It exits the action only when there is no significant change in the screen for a certain continuous time.

The field value can be a uint or an object. For example:

{
    "TaskA": {
        "pre_wait_freezes": 500
    },
    "TaskB": {
        "pre_wait_freezes": {
            // more properties ...
        }
    }
}

If the value is an object, you can set additional fields:

  • time: uint
    It exits the action only when there has been no significant change in the screen for "time" milliseconds in a row. Optional, default is 1.

  • target: true | string | array<int, 4>
    The target to wait for. Optional, default is true. The values are the same as Click.target.

  • target_offset: array<int, 4>
    Additional movement from the target to be used as the waiting target, where the four values are added together. Optional, default is [0, 0, 0, 0].

  • threshold: double
    The template matching threshold to determine "no significant change." Optional, default is 0.95.

  • method: int
    The template matching algorithm to determine "no significant change," i.e., cv::TemplateMatchModes. Optional, default is 5. The same as TemplateMatch.method.

  • rate_limit: uint
    Recognition rate limit, in milliseconds. Optional, default is 1000.
    Each recognition takes at least rate_limit milliseconds; if it finishes sooner, it sleeps for the remaining time.

  • timeout: uint
    Timeout for recognizing, in milliseconds. Optional, default is 20,000 milliseconds (20 seconds).
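
For example, an illustrative sketch of a task that, after its click, waits until the screen has stayed still for 500 ms (giving up after 10 seconds) before moving on to its "next" list:

{
    "TaskC": {
        "action": "Click",
        "target": [600, 330, 80, 60],
        "post_wait_freezes": {
            "time": 500,
            "threshold": 0.95,
            "timeout": 10000
        }
    }
}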

Task Notifications

See Callback Protocol (not written yet).