A follow-up question: there is another Argo API, the HTTP API of the server, which is very similar to the Kube API. I understand the additional value; for instance, it can handle workflows so large that their status would not fit in a standard Kubernetes resource. But if I accept that size limitation, would it still be "at my own risk" to use the Kube API directly rather than Argo's HTTP one? The Kube watch API has great clients, with caching, retry and resync. The Operator SDK has the notion of an "Informer" that handles all the tricky parts of watching a resource in Kube safely, without missing any updates, and this notion has even been reproduced in the fabric8 Java client. If the Kubernetes API shouldn't be used to manipulate workflows, does that mean I would have to re-implement the "Informer" mechanism on top of the Argo HTTP API?
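For context, the "tricky part" an Informer handles is essentially a list-then-watch loop that tracks `resourceVersion` and relists when the server reports the version is too old (HTTP 410 Gone). Here is a minimal sketch of that pattern in Python; the `fake_list`/`fake_watch` functions are stand-ins for real Kubernetes API calls, so all names and data here are illustrative, not an actual client.

```python
# Sketch of the Informer list-then-watch pattern. fake_list/fake_watch are
# stand-ins for real Kubernetes API calls; names and data are illustrative.

class GoneError(Exception):
    """Raised when a watch resourceVersion is too old (HTTP 410 Gone)."""

def fake_list():
    # A real client would GET the Workflow collection here.
    return {"resourceVersion": "100",
            "items": [{"name": "wf-a", "resourceVersion": "90"}]}

def fake_watch(since):
    # A real client would open a watch stream starting at `since`.
    if int(since) < 100:
        raise GoneError(since)  # too old: the client must relist
    yield {"type": "MODIFIED", "object": {"name": "wf-a", "resourceVersion": "101"}}
    yield {"type": "ADDED",    "object": {"name": "wf-b", "resourceVersion": "102"}}

def run_informer(cache):
    """One list+watch cycle: fill the cache, then apply watch events in order."""
    snapshot = fake_list()
    for obj in snapshot["items"]:
        cache[obj["name"]] = obj
    rv = snapshot["resourceVersion"]
    while True:
        try:
            for event in fake_watch(rv):
                obj = event["object"]
                if event["type"] == "DELETED":
                    cache.pop(obj["name"], None)
                else:
                    cache[obj["name"]] = obj
                rv = obj["resourceVersion"]  # resume point: no update is missed
            return rv  # fake stream ended; a real informer would reconnect here
        except GoneError:
            return run_informer(cache)  # 410 Gone: relist from scratch

cache = {}
run_informer(cache)
# cache now holds wf-a (rv 101) and wf-b (rv 102)
```

This is exactly what client-go and fabric8 implement for you against the Kube API, which is why losing access to that machinery is a real cost of switching to a bespoke HTTP API.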
I am developing an application that creates dynamically generated workflows. I am new to Argo Workflows, so I make mistakes, and even with heavy testing of my application, I have to assume it may submit invalid workflows. I would expect Argo to refuse such a workflow and explain why, so I can debug after the failure.
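For reference, a minimal valid Workflow of the kind such an application would generate looks like this (adapted from the hello-world example in the Argo docs):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-   # generateName avoids name collisions for generated workflows
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: docker/whalesay
        command: [cowsay]
        args: ["hello world"]
```

An invalid manifest is anything that deviates from this schema, e.g. a missing `entrypoint` or a template referenced but never defined.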
From the little I have experimented so far, I have hit an
`invalid memory address or nil pointer dereference`
from Argo; see #5051. The answer I received on that bug/feature is that I should use the Argo CLI to submit workflows.
So, is the CLI the official API to Argo? Does that mean I have to embed the argo CLI in my app and launch a process to submit a workflow?
If that is the case, it would be nice if the "at your own risk" caveat could be mentioned here: https://argoproj.github.io/argo-workflows/kubectl/
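If embedding the CLI really is the intended route, the wrapper would look something like the sketch below. The `argo submit` command and its `-o json` / `-n` flags exist in the CLI; the wrapper functions themselves are just an illustration, not an official API.

```python
import json
import shutil
import subprocess

def build_submit_cmd(manifest_path, namespace=None):
    """Build the `argo submit` invocation; -o json makes the result parseable."""
    cmd = ["argo", "submit", manifest_path, "-o", "json"]
    if namespace:
        cmd += ["-n", namespace]
    return cmd

def submit_workflow(manifest_path, namespace=None):
    """Run the CLI in a subprocess and return the created Workflow as a dict.

    Raises RuntimeError carrying the CLI's stderr, so an invalid manifest
    surfaces as a readable error instead of an opaque crash.
    """
    if shutil.which("argo") is None:
        raise RuntimeError("argo CLI not found on PATH")
    result = subprocess.run(build_submit_cmd(manifest_path, namespace),
                            capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"argo submit failed: {result.stderr.strip()}")
    return json.loads(result.stdout)

# Usage (requires a cluster and the argo CLI on PATH):
#   wf = submit_workflow("my-workflow.yaml", namespace="argo")
```

Note that this only covers submission; watching the workflow's status afterwards would still need the Kube watch API or polling, which is exactly the Informer question above.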