diff --git a/blog/2023-11-21-workers-group/index.mdx b/blog/2023-11-21-workers-group/index.mdx
index 4b0c7d189..d2daf8b35 100644
--- a/blog/2023-11-21-workers-group/index.mdx
+++ b/blog/2023-11-21-workers-group/index.mdx
@@ -104,7 +104,7 @@ When writing a script that should be executed on a specific worker group, you ha
diff --git a/docs/assets/flows/flow_advanced_settings.png b/docs/assets/flows/flow_advanced_settings.png
new file mode 100644
index 000000000..f6bd5d3e4
Binary files /dev/null and b/docs/assets/flows/flow_advanced_settings.png differ
diff --git a/docs/assets/flows/flow_example.png b/docs/assets/flows/flow_example.png
index 80afdd3c8..46575ce15 100644
Binary files a/docs/assets/flows/flow_example.png and b/docs/assets/flows/flow_example.png differ
diff --git a/docs/assets/flows/flow_example.png.webp b/docs/assets/flows/flow_example.png.webp
deleted file mode 100644
index 095e114fa..000000000
Binary files a/docs/assets/flows/flow_example.png.webp and /dev/null differ
diff --git a/docs/assets/flows/flow_triggers.png b/docs/assets/flows/flow_triggers.png
new file mode 100644
index 000000000..2bcb95493
Binary files /dev/null and b/docs/assets/flows/flow_triggers.png differ
diff --git a/docs/core_concepts/11_persistent_storage/within_windmill.mdx b/docs/core_concepts/11_persistent_storage/within_windmill.mdx
index 44e031672..90dd6cc7a 100644
--- a/docs/core_concepts/11_persistent_storage/within_windmill.mdx
+++ b/docs/core_concepts/11_persistent_storage/within_windmill.mdx
@@ -12,7 +12,7 @@ Instead, Windmill is very convenient to use alongside data storage providers to
There are however internal methods to persist data between executions of jobs.
-## States and Resources
+## States and resources
Within Windmill, you can use [States](../3_resources_and_types/index.mdx#states) and [Resources](../3_resources_and_types/index.mdx) as a way to store a transient state - that can be represented as small JSON.
@@ -98,7 +98,7 @@ _Python_
/>
-### Custom Flow States
+### Custom flow states
Custom flow states are a way to store data across steps in a [flow](../../flows/1_flow_editor.mdx). You can set and retrieve a value given a key from any step of flow and it will be available from within the flow globally. That state will be stored in the flow state itself and thus has the same lifetime as the flow [job](../20_jobs/index.mdx) itself.
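+
+As an illustrative sketch (assuming the `setFlowUserState` / `getFlowUserState` helpers of the TypeScript `windmill-client`), any step can write a value under a key and any later step of the same flow can read it back:
+
+```ts
+import * as wmill from "windmill-client";
+
+export async function main() {
+  // Store a value under a key in the flow state...
+  await wmill.setFlowUserState("processed_ids", [1, 2, 3]);
+
+  // ...and read it back (any other step of the same flow can do this too)
+  return await wmill.getFlowUserState("processed_ids");
+}
+```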
@@ -176,30 +176,25 @@ In conclusion `setState` and `setResource` are convenient ways to persist json b
/>
-## Shared Directory
+## Shared directory
-For heavier ETL processes or sharing data between steps in a flow, Windmill provides a [Shared Directory](../../flows/3_editor_components.mdx#shared-directory) feature.
-
-The Shared Directory allows steps within a flow to share data by storing it in a designated folder.
+For heavier ETL processes or sharing data between steps in a flow, Windmill provides a Shared Directory feature. This allows steps within a flow to share data by storing it in a designated folder at `./shared`.
:::caution
-
-Although Shared Folders are recommended for persisting states within a flow, it's important to note that all steps are executed on the same worker and the data stored in the Shared Directory is strictly ephemeral to the flow execution.
-
+Although the Shared Directory is recommended for persisting states within a flow, it's important to note that:
+- All steps are executed on the same worker
+- The data stored in the Shared Directory is strictly ephemeral to the flow execution
+- The contents are not preserved across [suspends](../../flows/11_flow_approval.mdx) and [sleeps](../../flows/15_sleep.md)
:::
-To enable the Shared Directory, follow these steps:
-
-1. Open the `Settings` menu in the Windmill interface.
-2. Go to the `Shared Directory` section.
-3. Toggle on the option for `Shared Directory on './shared'`.
+To enable the Shared Directory:
+1. Open the `Settings` menu in the Windmill interface
+2. Go to the `Shared Directory` section
+3. Toggle on the option for `Shared Directory on './shared'`
![Flow Shared Directory](../../assets/flows/flow_settings_shared_directory.png.webp)
-Once the Shared Directory is enabled, you can use it in your flow by referencing the `./shared` folder. This folder is shared among the steps in the flow, allowing you to store and access data between them.
-
-:::tip
-
-Keep in mind that the contents of the `./shared` folder are not preserved across [suspends](../../flows/11_flow_approval.mdx) and [sleeps](../../flows/15_sleep.md). The directory is temporary and active only during the execution of the flow.
-
-:::
\ No newline at end of file
+Once enabled, steps can read and write files in the `./shared` folder to pass data between them, as sketched below. This is particularly useful for:
+- Handling larger datasets that would be impractical to pass as step inputs/outputs
+- Temporary storage of intermediate processing results
+- Sharing files between steps in an ETL pipeline
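+
+For instance (a sketch with hypothetical file names), one step can write an intermediate file that a later step reads back:
+
+```ts
+import { writeFile } from "node:fs/promises";
+
+export async function main() {
+  // Write an intermediate result to the shared folder
+  const rows = ["id,name", "1,alice", "2,bob"].join("\n");
+  await writeFile("./shared/users.csv", rows);
+  return "./shared/users.csv";
+}
+```
+
+A later step of the same flow can then read `./shared/users.csv` directly, since all steps run on the same worker and share that folder for the duration of the execution.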
\ No newline at end of file
diff --git a/docs/core_concepts/1_scheduling/index.mdx b/docs/core_concepts/1_scheduling/index.mdx
index 80788358e..eb95d46b5 100644
--- a/docs/core_concepts/1_scheduling/index.mdx
+++ b/docs/core_concepts/1_scheduling/index.mdx
@@ -41,6 +41,23 @@ Cron jobs are one of many ways to [trigger workflows](../../getting_started/9_tr
:::
+## Cron syntax
+
+Windmill uses [zslayton's cron expression parser](https://github.com/zslayton/cron). Its syntax differs slightly from standard Unix cron.
+
+Although the syntaxes are similar, there are some notable differences:
+
+| Feature | Unix cron | zslayton's `cron` library |
+|---------------------------|-------------------------|--------------------------------------------|
+| **Day of Week Index** | Sunday = 0 through Saturday = 6 | Shifted by one (Sunday = 1 through Saturday = 7) |
+| **Seconds Field** | Not included | Included as the first field |
+| **Year Field** | Not included | Optional, can specify specific years |
+| **Month Representation** | Numeric and short names | Numeric, short names, and name ranges |
+| **List and Range in Fields** | Supports lists and ranges | Supports lists, ranges, and combinations |
+| **Step Values** | Supported (e.g., `*/2`) | Supported, including complex patterns like `2018/2` |
+
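+For illustration, here are a few expressions in this 6-field, seconds-first format (treat them as sketches and validate them in the schedule editor):
+
+```
+0 */15 * * * *       # every 15 minutes, at second 0
+0 0 9 * * Mon-Fri    # weekdays at 09:00:00
+0 30 6 1 * *         # the 1st of every month at 06:30:00
+```
+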
+In any case, the simplified builder and [Windmill AI](../22_ai_generation/index.mdx) can help you create the cron expression.
+
## Set a schedule
Scripts and flows can have unique [primary schedules](#primary-schedule) and multiple [other schedules](#other-schedules).
@@ -150,22 +167,19 @@ More at:
/>
-## Cron syntax
-
-Windmill uses [zslayton's cron expression parser](https://github.com/zslayton/cron). This library differs slightly from the Unix library.
+## Scheduled polls
-Although the syntaxes are similar, there are some notable differences:
+A particular use case for schedules is [Trigger scripts](../../flows/10_flow_trigger.mdx).
-| Feature | Unix cron | zslayton's `cron` library |
-|---------------------------|-------------------------|--------------------------------------------|
-| **Day of Week Index** | Sunday = 0 through Saturday = 6 | Shifted by one (Sunday = 1 through Saturday = 7, 0 = Sunday) |
-| **Seconds Field** | Not included | Included as the first field |
-| **Year Field** | Not included | Optional, can specify specific years |
-| **Month Representation** | Numeric and short names | Numeric, short names, and name ranges |
-| **List and Range in Fields** | Supports lists and ranges | Supports lists, ranges, and combinations |
-| **Step Values** | Supported (e.g., `*/2`) | Supported, including complex patterns like `2018/2` |
+You can schedule a trigger script to pull data from an external source and have it return all of the new items since the last run as a scheduled poll, without resorting to external webhooks.
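+
+As an illustrative sketch (using the `windmill-client` state helpers and a hypothetical external API), such a trigger script keeps a cursor in its state and returns only the items that appeared since the previous run:
+
+```ts
+import * as wmill from "windmill-client";
+
+// Hypothetical external source: returns items created after a given timestamp
+async function fetchItemsSince(since: string): Promise<{ id: string; created_at: string }[]> {
+  const res = await fetch(`https://api.example.com/items?since=${encodeURIComponent(since)}`);
+  return res.json();
+}
+
+export async function main() {
+  // Cursor persisted between scheduled runs
+  const lastSeen: string = (await wmill.getState()) ?? "1970-01-01T00:00:00Z";
+  const newItems = await fetchItemsSince(lastSeen);
+  if (newItems.length > 0) {
+    await wmill.setState(newItems[newItems.length - 1].created_at);
+  }
+  // Downstream steps typically iterate over these in a for loop; with no new items the flow is skipped
+  return newItems;
+}
+```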
-Anyway, the simplified builder and [Windmill AI](../22_ai_generation/index.mdx) will help you to create the cron expression.
+
+
+
## Control permissions and errors
diff --git a/docs/core_concepts/20_jobs/index.mdx b/docs/core_concepts/20_jobs/index.mdx
index be942e6e5..4d55f5234 100644
--- a/docs/core_concepts/20_jobs/index.mdx
+++ b/docs/core_concepts/20_jobs/index.mdx
@@ -126,6 +126,10 @@ You can set a custom retention period for the jobs runs details. The retention p
/>
+## High priority jobs
+
+High priority jobs are jobs given a `priority` value between 1 and 100. In the job queue, jobs with a higher priority value take precedence over jobs with a lower value.
+
## Large job logs management
To optimize log storage and performance, Windmill leverages S3 for log management. This approach minimizes database load by treating the database as a temporary buffer for up to 5000 characters of logs per job.
diff --git a/docs/core_concepts/22_ai_generation/index.mdx b/docs/core_concepts/22_ai_generation/index.mdx
index 41500d750..016fabb31 100644
--- a/docs/core_concepts/22_ai_generation/index.mdx
+++ b/docs/core_concepts/22_ai_generation/index.mdx
@@ -105,7 +105,7 @@ Generate a flow consisting of a sequence of scripts.
### Trigger Flows
-[Trigger flows](../../flows/10_flow_trigger.mdx) are designed to pull data from an external source and return all of the new items since the last run, without resorting to external webhooks. A trigger script is intended to be used with [schedules](../1_scheduling/index.mdx) and [states](../3_resources_and_types/index.mdx#states) (rich objects in JSON, persistent from one run to another) in order to compare the execution to the previous one and process each new item in a [for loop](../../flows/12_flow_loops.md).
+[Trigger flows](../../flows/10_flow_trigger.mdx) are designed to pull data from an external source and return all of the new items since the last run, without resorting to external webhooks. A trigger script is intended to be used as a scheduled poll with [schedules](../1_scheduling/index.mdx) and [states](../3_resources_and_types/index.mdx#states) (rich objects in JSON, persistent from one run to another) in order to compare the execution to the previous one and process each new item in a [for loop](../../flows/12_flow_loops.md).
If there are no new items, the flow will be skipped.
diff --git a/docs/core_concepts/24_caching/index.md b/docs/core_concepts/24_caching/index.md
index e35df8a86..ef4661a52 100644
--- a/docs/core_concepts/24_caching/index.md
+++ b/docs/core_concepts/24_caching/index.md
@@ -28,7 +28,7 @@ In the above example, the result of step the script will be cached for 180 secon
## Cache Flows
-Caching a flow means caching the results of that script for a certain duration. If the flow is triggered with the same flow inputs during the given duration, it will return the cached result.
+Caching a flow means caching the results of that flow for a certain duration. If the flow is triggered with the same flow inputs during the given duration, it will return the cached result.
-## Aggregated View
+## Aggregated view
The Runs menu in each workspace provides a time series view where you can monitor different time slots.
The green (respectively, red) dots being the tasks that succeeded (respectively, failed).
@@ -44,7 +44,7 @@ The graph can represent jobs on their Duration (default) or by Concurrency, mean
> Graphical view by concurrent jobs.
-## Details per Run
+## Details per run
Clicking on each run in the menu opens a run page where you can view the run's state, inputs, and success/failure reasons.
@@ -80,7 +80,7 @@ Example of filters in use:
> Here were filtered successful runs from August 2023 which returned `{"baseNumber": 11}`.
-## Jobs Labels
+## Jobs labels
Labels allow to add static or dynamic tags to [jobs](../20_jobs/index.mdx) with property "wm_labels" followed by an array of strings.
@@ -94,4 +94,8 @@ export async function main() {
Jobs can be filtered by labels in the Runs menu to monitor specific groups of jobs.
-![Run labels](./runs_labels.png 'Run labels')
\ No newline at end of file
+![Run labels](./runs_labels.png 'Run labels')
+
+## Invisible runs
+
+When this option is enabled, manual executions of this script or flow are invisible to users other than the user running it, including the owner(s). This setting can be overridden when the script or flow is run manually from the advanced menu.
\ No newline at end of file
diff --git a/docs/core_concepts/9_worker_groups/index.mdx b/docs/core_concepts/9_worker_groups/index.mdx
index 38ebbfc9a..af4fbbf8d 100644
--- a/docs/core_concepts/9_worker_groups/index.mdx
+++ b/docs/core_concepts/9_worker_groups/index.mdx
@@ -110,7 +110,9 @@ You can assign groups to flows and flow steps to be executed on specific queues.
-There are 2 worker groups by default: _default_ and _native_.
+There are 2 worker groups by default: [default](#default-worker-group) and [native](#native-worker-group).
+
+#### Default worker group
The tags of _default_ worker group are:
@@ -127,6 +129,11 @@ The tags of _default_ worker group are:
Button `Reset to all tags minus native ones` will reset the tags of _default_ worker group to a given worker group.
+#### Native worker group
+
+Native workers are workers within the _native_ worker group.
+This group is pre-configured to listen to native job tags. Those jobs are executed in a special mode with subworkers for increased throughput.
+
The tags of _native_ worker group are:
- `nativets`: The default worker group for rest scripts.
@@ -138,6 +145,8 @@ The tags of _native_ worker group are:
- `bigquery`: The default worker group for bigquery scripts.
- `mssql`: The default worker group for mssql scripts.
+
+
If you assign custom worker groups to all your workers, make sure that they cover all tags above, otherwise those jobs will never be executed.
Button `Reset to native tags` will reset the tags of _native_ worker group to a given worker group.
@@ -158,6 +167,8 @@ gpu(workspace+workspace2)
Only 'workspace' and 'workspace2' will be able to use the `gpu` tags.
+Jobs within the same job queue can be given a [priority](../20_jobs/index.mdx#high-priority-jobs) between 1 and 100. Jobs with a higher priority value take precedence over jobs with a lower value.
+
### How to assign worker tags to a worker group
Use the edit/create config next to the worker group name in Windmill UI:
diff --git a/docs/core_concepts/index.mdx b/docs/core_concepts/index.mdx
index d43c7baf8..6e93911ea 100644
--- a/docs/core_concepts/index.mdx
+++ b/docs/core_concepts/index.mdx
@@ -77,6 +77,11 @@ On top of its editors to build endpoints, flows and apps, Windmill comes with a
description="There are 5 ways to do error handling in Windmill."
href="/docs/core_concepts/error_handling"
/>
+
+
-
+## Scheduled polls
+
:::tip
Think of this as someone who checks the mailbox every day. If there is a new
@@ -50,7 +52,9 @@ activating the schedule as seen in the image below.
Example of a trigger script watching new Slack posts with a given word in a given channel and the flow sending each of them by email in a for loop:
-![Example of a schedule script with a for loop](../getting_started/9_trigger_flows/schedule-flow.png 'Example of a schedule script with a for loop')
+![Example of a schedule script with a for loop](../getting_started/9_trigger_flows/schedule-script.png 'Example of a schedule script with a for loop')
+
+![Schedule](../getting_started/9_trigger_flows/schedule-flow.png 'Schedule')
> This flow can be found on [WindmillHub](https://hub.windmill.dev/flows/51/watch-slack-posts-containing-a-given-word-and-send-all-new-ones-per-email).
diff --git a/docs/flows/1_flow_editor.mdx b/docs/flows/1_flow_editor.mdx
index 132017c32..55021e577 100644
--- a/docs/flows/1_flow_editor.mdx
+++ b/docs/flows/1_flow_editor.mdx
@@ -69,18 +69,18 @@ The Flow editor has the following features which are the subject of specific pag
description="Iterate quickly and get control on your flow testing."
href="/docs/flows/test_flows"
/>
-
+
-![Example of a flow](../assets/flows/flow_example.png.webp)
+![Example of a flow](../assets/flows/flow_example.png)
> _Example of a [flow](https://hub.windmill.dev/flows/38/automatically-populate-crm-contact-details-from-simple-email) in Windmill._
diff --git a/docs/flows/2_early_stop.md b/docs/flows/2_early_stop.md
index f4f2ae695..53757a8bc 100644
--- a/docs/flows/2_early_stop.md
+++ b/docs/flows/2_early_stop.md
@@ -2,7 +2,7 @@
If defined, at the end or before a step, the predicate expression will be evaluated to decide if the flow should stop early.
-## Early Stop for Step
+## Early stop for step
For each step of the flow, an early stop can be defined. The result of the step will be compared to a previously defined predicate expression, and if the condition is met, the flow will stop after that step.
@@ -26,7 +26,7 @@ For each step of the flow, an early stop can be defined. The result of the step
If stop early is run within a forloop, it will just break the for-loop and have it stop at that iteration instead of stopping the whole flow.
-## Early Stop for Flow
+## Early stop for flow
To stop early the flow based on the flow inputs, you can set an "Early Stop" from the flow settings. If the inputs meet the predefined condition, the flow will not run.
diff --git a/docs/flows/3_editor_components.mdx b/docs/flows/3_editor_components.mdx
index 8918ffacd..bbbc9125f 100644
--- a/docs/flows/3_editor_components.mdx
+++ b/docs/flows/3_editor_components.mdx
@@ -13,7 +13,7 @@ The Flow Builder has the following major components we will detail below:
-![Example of a flow](../assets/flows/flow_example.png.webp)
+![Example of a flow](../assets/flows/flow_example.png)
> _Example of a [flow](https://hub.windmill.dev/flows/38/automatically-populate-crm-contact-details-from-simple-email) in Windmill._
@@ -84,15 +84,6 @@ The diff button allows you to view the diff between the current and the latest [
## Settings
-The flow settings are divided into four tabs:
-
-- [Metadata](#metadata)
-- [Schedule](#schedule)
-- [Shared Directory](#shared-directory)
-- [Worker Group](#worker-group-tag)
-
-### Metadata
-
Each flow has metadata associated with it, enabling it to be defined and configured in depth.
#### Summary
@@ -127,63 +118,59 @@ This is where you can give instructions to users on how to run your Flow. It sup
![Flow Metadata Markdown](../assets/flows/flow_settings_metadata_markdown.png "Flow Metadata Markdown")
-### Schedule
-
-Flows can be [triggered](../getting_started/9_trigger_flows/index.mdx) by any [schedules](../core_concepts/1_scheduling/index.mdx), their [webhooks](../core_concepts/4_webhooks/index.mdx) or their [UI](../core_concepts/6_auto_generated_uis/index.mdx) but they only have only one primary schedules with which they share the same path. The primary schedule can be set here.
-
-A CRON expression is used to define the schedule. Schedules can also be disabled.
+### Advanced
-![Flow Schedule](../assets/flows/flow_settings_schedule.png.webp)
+![Flow Advanced](../assets/flows/flow_advanced_settings.png "Flow Advanced")
-:::tip
-
-Have more details on all the ways to trigger flows [here](../getting_started/9_trigger_flows/index.mdx).
-
-:::
+The advanced section allows you to configure the following:
+
-
-
-### Shared Directory
-
-Flows on Windmill are by default based on a result basis. A step will take as inputs the results of previous steps. And this works fine for lightweight automation.
-
-For heavier ETLs, you might want to use the `Shared Directory` to share data between steps. Steps will share a folder at `./shared` in which they can store heavier data and pass them to the next step.
-
-Beware that the `./shared` folder is not preserved across [suspends](./11_flow_approval.mdx) and [sleeps](./15_sleep.md). The directory is temporary and active for the time of the execution.
-
-To enable the shared directory, on the `Settings` menu, go to `Shared Directory` and toggle on `Shared Directory on './shared'`.
-
-![Flow Shared Directory](../assets/flows/flow_settings_shared_directory.png.webp)
-
-To use the shared directory, just load outputs using `./shared/${path}` and call it for following steps.
-
-
+
+
-### Worker group tag
+## Triggers
-Flows can be assigned custom [worker groups](../core_concepts/9_worker_groups/index.mdx) for efficient execution on different machines with varying specifications.
+Flows can be triggered manually or in reaction to external events.
-![Worker group tag](../core_concepts/9_worker_groups/select_script_builder.png.webp)
+![Flow triggers](../assets/flows/flow_triggers.png "Flow triggers")
+
+See [Triggering flows](../getting_started/9_trigger_flows/index.mdx) for more details.
@@ -451,65 +438,4 @@ The result and logs are displayed on the left-hand side.
href="/docs/core_concepts/instant_preview"
color="teal"
/>
-
-
-#### Advanced
-
-The advanced section allows to configure the following:
-
-
-
-
-
-
-
-
-
-
-
-
+
\ No newline at end of file
diff --git a/docs/flows/4_cache.mdx b/docs/flows/4_cache.mdx
index 6d44f3390..4f06a68b2 100644
--- a/docs/flows/4_cache.mdx
+++ b/docs/flows/4_cache.mdx
@@ -23,7 +23,7 @@ This feature can significantly improve the performance of your scripts & flows,
## Cache flows
-Caching a flow means caching the results of that script for a certain duration. If the flow is triggered with the same flow inputs during the given duration, it will return the cached result.
+Caching a flow means caching the results of that flow for a certain duration. If the flow is triggered with the same flow inputs during the given duration, it will return the cached result.
-### Scheduling + Trigger scripts
+### Websocket triggers
+
+Windmill can connect to websocket servers and trigger runnables (scripts, flows) when a message is received.
+
+
+
+
+
+### Scheduled polls (Scheduling + Trigger scripts)
A particular use case for schedules are [Trigger scripts](../../flows/10_flow_trigger.mdx).
-Trigger scripts are used in [Flows](../../flows/1_flow_editor.mdx) and are designed to pull data from an external source and return all of the new items since the last run, without resorting to external webhooks. A trigger script is intended to be used with [schedules](../../core_concepts/1_scheduling/index.mdx) and [states](../../core_concepts/3_resources_and_types/index.mdx#states) (rich objects in JSON, persistent from one run to another) in order to compare the execution to the previous one and process each new item in a [for loop](../../flows/12_flow_loops.md). If there are no new items, the flow will be skipped.
+Trigger scripts are used in [Flows](../../flows/1_flow_editor.mdx) and are designed to pull data from an external source and return all of the new items since the last run, without resorting to external webhooks. A trigger script is intended to be used as a scheduled poll with [schedules](../../core_concepts/1_scheduling/index.mdx) and [states](../../core_concepts/3_resources_and_types/index.mdx#states) (rich objects in JSON, persistent from one run to another) in order to compare the execution to the previous one and process each new item in a [for loop](../../flows/12_flow_loops.md). If there are no new items, the flow will be skipped.
You could set your script in a flow after a Trigger script to have it run only when new data is available.
diff --git a/docs/getting_started/9_trigger_flows/index.mdx b/docs/getting_started/9_trigger_flows/index.mdx
index b801e1c07..6f0b5bf7b 100644
--- a/docs/getting_started/9_trigger_flows/index.mdx
+++ b/docs/getting_started/9_trigger_flows/index.mdx
@@ -8,16 +8,17 @@ On-demand triggers:
- [Auto-generated UIs](/docs/getting_started/trigger_flows#auto-generated-uis)
- [Customized UIs with the App editor](#customized-uis-with-the-app-editor)
+- [Trigger a Flow from another Flow](#trigger-a-flow-from-another-flow)
- [Schedule](#schedule)
- [Trigger Flows from CLI (Command-line interface)](#cli-command-line-interface)
-- [Trigger a Flow from another Flow](#trigger-a-flow-from-another-flow)
Triggers from external events:
- [API](#trigger-from-api) and [Webhooks](#webhooks), including from [Slack](#webhooks-trigger-flows-from-slack)
- [Emails](#emails)
- [Custom HTTP routes](#custom-http-routes)
-- [Scheduling + Trigger scripts](#scheduling--trigger-scripts)
+- [Websocket triggers](#websocket-triggers)
+- [Scheduled polls (Scheduling + Trigger scripts)](#scheduled-polls-scheduling--trigger-scripts)
## On-demand triggers
@@ -82,6 +83,21 @@ You can also [automatically generate](../../core_concepts/6_auto_generated_uis/i
src="/videos/cowsay_app.mp4"
/>
+### Trigger a Flow from another Flow
+
+Windmill supports inner flows. This allows you to call a flow from another workflow.
+
+![Inner Flows](./inner_flow.png.webp)
+
+
+
+
+
### Schedule
Windmill allows you to schedule scripts using a user-friendly interface and control panels, **similar to [cron](https://crontab.guru/)** but with more features.
@@ -128,21 +144,6 @@ The `wmill` CLI allows you to interact with Windmill instances right from your t
/>
-### Trigger a Flow from another Flow
-
-Windmill supports inner flows. This allows you to call a flow from another workflow.
-
-![Inner Flows](./inner_flow.png.webp)
-
-
-
-
-
## Triggers from external events
### Trigger from API
@@ -221,11 +222,23 @@ Windmill allows you to define custom HTTP routes for your scripts and flows.
/>
-### Scheduling + Trigger scripts
+### Websocket triggers
+
+Windmill can connect to websocket servers and trigger runnables (scripts, flows) when a message is received.
+
+
+
+
+
+### Scheduled polls (Scheduling + Trigger scripts)
A particular use case for schedules are [Trigger scripts](../../flows/10_flow_trigger.mdx).
-Trigger scripts are designed to pull data from an external source and return all of the new items since the last run, without resorting to external webhooks. A trigger script is intended to be used with [schedules](../../core_concepts/1_scheduling/index.mdx) and [states](../../core_concepts/3_resources_and_types/index.mdx#states) (rich objects in JSON, persistent from one run to another) in order to compare the execution to the previous one and process each new item in a [for loop](../../flows/12_flow_loops.md). If there are no new items, the flow will be skipped.
+Trigger scripts are designed to pull data from an external source and return all of the new items since the last run, without resorting to external webhooks. A trigger script is intended to be used as a scheduled poll with [schedules](../../core_concepts/1_scheduling/index.mdx) and [states](../../core_concepts/3_resources_and_types/index.mdx#states) (rich objects in JSON, persistent from one run to another) in order to compare the execution to the previous one and process each new item in a [for loop](../../flows/12_flow_loops.md). If there are no new items, the flow will be skipped.
By default, adding a trigger will set the schedule to 15 minutes.
diff --git a/docs/getting_started/9_trigger_flows/schedule-flow.png b/docs/getting_started/9_trigger_flows/schedule-flow.png
index 0824c9019..e4e238aa5 100644
Binary files a/docs/getting_started/9_trigger_flows/schedule-flow.png and b/docs/getting_started/9_trigger_flows/schedule-flow.png differ
diff --git a/docs/getting_started/9_trigger_flows/schedule-script.png b/docs/getting_started/9_trigger_flows/schedule-script.png
new file mode 100644
index 000000000..b180d2d62
Binary files /dev/null and b/docs/getting_started/9_trigger_flows/schedule-script.png differ
diff --git a/docs/misc/9_guides/snowflake/index.md b/docs/misc/9_guides/snowflake/index.md
index 90065d6a9..e3ae70cbc 100644
--- a/docs/misc/9_guides/snowflake/index.md
+++ b/docs/misc/9_guides/snowflake/index.md
@@ -4,7 +4,7 @@ title: Build an App accessing Snowflake with end-user Roles
import DocCard from '@site/src/components/DocCard';
-# Build an App Accessing Snowflake with End-User Roles
+# Build an app accessing Snowflake with end-user roles
This guide walks you through building an application that accesses Snowflake data based on the end-user’s role, using OAuth in Windmill. By leveraging dynamic role-based credentials from Snowflake’s OAuth integration, we avoid static credentials and enable secure data access customized for each user. This can be particularly useful for organizations with strict data access policies and multiple user roles where [row access policies](https://docs.snowflake.com/en/user-guide/security-row-intro) are set up.
diff --git a/docs/script_editor/custom_environment_variables.mdx b/docs/script_editor/custom_environment_variables.mdx
index c0c973b77..340003967 100644
--- a/docs/script_editor/custom_environment_variables.mdx
+++ b/docs/script_editor/custom_environment_variables.mdx
@@ -1,5 +1,15 @@
+import DocCard from '@site/src/components/DocCard';
+
# Custom environment variables
In a self-hosted environment, Windmill allows you to set custom environment [variables](../core_concepts/2_variables_and_secrets/index.mdx) for your scripts. This feature is useful when a script needs an environment variable prior to the main function executing itself. For instance, some libraries in Go do some setup in the 'init' function that depends on environment variables.
To add a custom environment variable to a script in Windmill, you should follow the format `<name>=<value>`, where `<name>` is the name of the environment variable and `<value>` is its corresponding value.
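+
+For example (hypothetical variable names and values), a Go script whose libraries read configuration in `init` could be given:
+
+```
+GOPRIVATE=github.com/my-org
+HTTPS_PROXY=http://proxy.internal:8080
+```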
+
+
+
+
\ No newline at end of file
diff --git a/docs/script_editor/settings.mdx b/docs/script_editor/settings.mdx
index d773d7c51..4eb50e24a 100644
--- a/docs/script_editor/settings.mdx
+++ b/docs/script_editor/settings.mdx
@@ -117,7 +117,7 @@ Add a custom timeout for this script, for a given duration.
If enabled, the execution will be stopped after the timeout.
-### Perpetual Scripts
+### Perpetual script
Perpetual scripts restart upon ending unless canceled.
@@ -159,9 +159,13 @@ The deletion is irreversible.
:::
+### High priority script
+
+Jobs within the same job queue can be given a [priority](../core_concepts/20_jobs/index.mdx#high-priority-jobs) between 1 and 100. Jobs with a higher priority value take precedence over jobs with a lower value.
+
### Runs visibility
-When this option is enabled, manual [executions](../core_concepts/5_monitor_past_and_future_runs/index.mdx) of this script are invisible to users other than the user running it, including the [owner(s)](../core_concepts/16_roles_and_permissions/index.mdx). This setting can be overridden when this script is run manually from the advanced menu (available when the script is [deployed](../core_concepts/0_draft_and_deploy/index.mdx)).
+When this [option](../core_concepts/5_monitor_past_and_future_runs/index.mdx#invisible-runs) is enabled, manual [executions](../core_concepts/5_monitor_past_and_future_runs/index.mdx) of this script are invisible to users other than the user running it, including the [owner(s)](../core_concepts/16_roles_and_permissions/index.mdx). This setting can be overridden when this script is run manually from the advanced menu (available when the script is [deployed](../core_concepts/0_draft_and_deploy/index.mdx)).
## Generated UI
@@ -228,6 +232,18 @@ Windmill supports custom HTTP routes to trigger a script or flow.
/>
+### Websocket
+
+Windmill can connect to websocket servers and trigger runnables (scripts, flows) when a message is received.
+
+
+
+
+
### Email
Scripts and flows can be triggered by email messages sent to a specific email address, leveraging SMTP.
diff --git a/sidebars.js b/sidebars.js
index 5beb935f8..d1dbab67a 100644
--- a/sidebars.js
+++ b/sidebars.js
@@ -237,6 +237,7 @@ const sidebars = {
'core_concepts/roles_and_permissions/index',
'core_concepts/authentification/index',
'core_concepts/error_handling/index',
+ 'core_concepts/jobs/index',
'core_concepts/monitor_past_and_future_runs/index',
'core_concepts/scheduling/index',
'core_concepts/webhooks/index',
@@ -250,7 +251,6 @@ const sidebars = {
'core_concepts/websocket_triggers/index',
'core_concepts/caching/index',
'core_concepts/files_binary_data/index',
- 'core_concepts/jobs/index',
'core_concepts/service_logs/index',
'core_concepts/search_bar/index',
'core_concepts/collaboration/index',
@@ -560,9 +560,9 @@ const sidebars = {
items: [
'flows/architecture',
'openflow/index',
+ 'flows/editor_components',
'flows/test_flows',
'flows/ai_flows',
- 'flows/editor_components',
'flows/error_handling',
'flows/flow_branches',
'flows/flow_loops',
diff --git a/src/components/Pricing.js b/src/components/Pricing.js
index 181150043..dcfef65fa 100644
--- a/src/components/Pricing.js
+++ b/src/components/Pricing.js
@@ -363,6 +363,17 @@ const sections = [
},
link: '/docs/advanced/email_triggers'
},
+ {
+ name: 'Websocket triggers',
+ tiers: {
+ 'tier-free-selfhost': true,
+ 'tier-enterprise-selfhost': true,
+ 'tier-enterprise-cloud': false,
+ 'tier-free': false,
+ 'tier-team': false
+ },
+ link: '/docs/core_concepts/40_websocket_triggers'
+ },
{
name: 'BigQuery, Snowflake and MS SQL runtimes as languages',
tiers: {