Replies: 1 comment
-
The feature I need is similar to this issue. From the current situation, it seems that a Global Agent is not supported.
-
Hi Argo community,
I am currently researching cloud-native data warehouse orchestration tools, and I plan to use Argo to orchestrate my big data DAG tasks. The common task types in my workflows include Spark, Flink, Shell, HTTP, and SQL (e.g., MySQL, Spark SQL, and SQL Server).
The Challenge:
For SQL tasks, each task in my workflow typically executes only a single SQL query. With standard Argo Workflow steps, every SQL task would spin up a new pod just to run one statement, which is highly resource-intensive.
To optimize resource usage, I would like to implement a generic, long-running JDBC Agent that can handle all SQL query requests from various workflow tasks. The idea is to maintain only one instance of this JDBC Agent for the entire cluster, avoiding the creation of new pods for each SQL task.
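As a rough illustration of that idea, here is a minimal sketch of such a long-running agent as an HTTP service. Everything specific here is an assumption for illustration: the POST endpoint, port 8080, and `sqlite3` standing in for a real JDBC driver and connection pool.

```python
# Minimal sketch of a long-running SQL agent (all names hypothetical):
# one HTTP service holds a shared database connection and executes the
# SQL statement sent in each POST body. sqlite3 stands in for JDBC here.
import json
import os
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

# One shared connection for the whole agent, instead of a pod per query.
conn = sqlite3.connect(":memory:", check_same_thread=False)

def run_query(connection, sql):
    """Execute one SQL statement and return rows as JSON-friendly lists."""
    return [list(row) for row in connection.execute(sql).fetchall()]

class QueryHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        sql = self.rfile.read(length).decode("utf-8")
        try:
            body = json.dumps({"rows": run_query(conn, sql)}).encode()
            status = 200
        except sqlite3.Error as exc:
            body = json.dumps({"error": str(exc)}).encode()
            status = 400  # a non-2xx status lets the caller fail the step
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__" and os.environ.get("RUN_AGENT"):
    # Serve forever only when explicitly enabled, e.g. RUN_AGENT=1.
    HTTPServer(("", 8080), QueryHandler).serve_forever()
```

In a real deployment this would run as a single Deployment behind a Service, with a proper connection pool per target database instead of one in-memory connection.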
What I Need Help With:
I understand that the Argo community provides options such as HTTP Templates and Executor Plugins for building agents. HTTP Templates interact with the WorkflowTaskSet CRD, but I am still unclear about how Plugins work.
Given my scenario, I would appreciate guidance on the following:
Should I use HTTP Templates or WorkflowTaskSet to build a reusable JDBC Agent?
If I go with HTTP Templates, would I be able to customize it in a similar manner to Plugins?
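For context, an HTTP Template step that delegates one query to such a shared agent might look roughly like this (the Service name, port, and plain-SQL request body are assumptions about how the agent is exposed):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: sql-via-agent-
spec:
  entrypoint: run-query
  templates:
    - name: run-query
      http:
        # Hypothetical in-cluster Service fronting the single JDBC agent.
        url: http://jdbc-agent.default.svc.cluster.local:8080/query
        method: POST
        body: "SELECT count(*) FROM orders"
        timeoutSeconds: 60
        successCondition: "response.statusCode == 200"
```

Since HTTP templates are executed by Argo's agent pod (via the WorkflowTaskSet CRD) rather than by a per-task pod, each SQL step along these lines should avoid spinning up a new pod.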
Looking forward to any insights or recommendations on how to approach this.
Thanks in advance for your help!