Do we want dependent jobs? I.e., Job A needs to run, but before it does, Job B has to run first? These are usually a good fit for ETL-style pipelines where you build out a graph of dependent jobs, very similar to how Airflow and Azkaban work (rough sketch below).
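A minimal sketch of what declaring and ordering dependent jobs could look like, assuming a simple DAG with topological ordering (Kahn's algorithm). The names here (`Job`, `DependsOn`, `topoSort`) are just assumptions for illustration, not a settled design; a real scheduler would also detect cycles and run independent branches in parallel.

```go
package main

import "fmt"

// Job is a hypothetical unit of work with explicit dependencies.
type Job struct {
	Name      string
	DependsOn []string // names of jobs that must finish first
}

// topoSort returns job names in an order that satisfies their dependencies.
func topoSort(jobs []Job) []string {
	inDegree := map[string]int{}
	dependents := map[string][]string{}
	for _, j := range jobs {
		inDegree[j.Name] += 0 // make sure every job has an entry
		for _, dep := range j.DependsOn {
			dependents[dep] = append(dependents[dep], j.Name)
			inDegree[j.Name]++
		}
	}
	var queue, order []string
	for name, d := range inDegree {
		if d == 0 {
			queue = append(queue, name)
		}
	}
	for len(queue) > 0 {
		n := queue[0]
		queue = queue[1:]
		order = append(order, n)
		for _, next := range dependents[n] {
			inDegree[next]--
			if inDegree[next] == 0 {
				queue = append(queue, next)
			}
		}
	}
	return order
}

func main() {
	// Job A needs Job B to run first, as in the comment above.
	jobs := []Job{
		{Name: "A", DependsOn: []string{"B"}},
		{Name: "B"},
	}
	fmt.Println(topoSort(jobs)) // [B A]
}
```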
I was thinking that when you submit a one-off job to the cluster, it immediately gets scheduled and executed -- no need to specify a specific time or recurrence.
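Something like the shape below is what I had in mind for the submission payload, assuming a JSON API; the field names are hypothetical. The point is that there's no schedule/cron field at all, so "run immediately" is the default behavior.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// OneOffJobRequest is a hypothetical request body for submitting a
// one-off job. No schedule or recurrence fields: absence means
// "schedule and execute as soon as the cluster accepts it".
type OneOffJobRequest struct {
	Name    string   `json:"name"`
	Image   string   `json:"image"`   // container image to run
	Command []string `json:"command"` // command executed inside the container
}

func main() {
	req := OneOffJobRequest{
		Name:    "backfill-2024-01",
		Image:   "example/etl:latest",
		Command: []string{"python", "backfill.py"},
	}
	b, _ := json.MarshalIndent(req, "", "  ")
	fmt.Println(string(b)) // what a client might POST to the cluster API
}
```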
Yeah, I think that's a pretty good use case. Perhaps we can configure retry policies for those jobs as well, based on the exit status code? Also, is there ever a need to replicate/scale up a job?
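For the exit-code-based retry idea, a small sketch of how the policy check could work, assuming the scheduler sees each attempt's exit code. `RetryPolicy`, `MaxAttempts`, and `shouldRetry` are made-up names, not an existing API.

```go
package main

import "fmt"

// RetryPolicy is a hypothetical per-job policy keyed on exit status.
type RetryPolicy struct {
	MaxAttempts    int
	RetryExitCodes map[int]bool // exit codes treated as transient/retryable
}

// shouldRetry decides whether to reschedule a job after a failed attempt.
func shouldRetry(p RetryPolicy, exitCode, attempt int) bool {
	if exitCode == 0 || attempt >= p.MaxAttempts {
		return false
	}
	return p.RetryExitCodes[exitCode]
}

func main() {
	policy := RetryPolicy{
		MaxAttempts:    3,
		RetryExitCodes: map[int]bool{1: true, 137: true}, // e.g. generic failure, SIGKILL/OOM
	}
	fmt.Println(shouldRetry(policy, 137, 1)) // true  -> retry this attempt
	fmt.Println(shouldRetry(policy, 2, 1))   // false -> exit code not retryable
}
```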
Autoscaling based on CPU/memory might be pretty cool for the replicated container deployment.
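Rough sketch of the scaling rule I'd imagine for that, assuming the classic "desired = current * observed/target" approach (similar in spirit to Kubernetes' HPA). The function name and parameters are assumptions for illustration.

```go
package main

import "fmt"

// desiredReplicas computes a replica count for a replicated deployment
// from observed vs. target CPU utilization, clamped to [min, max].
func desiredReplicas(current int, observedCPU, targetCPU float64, min, max int) int {
	if current == 0 || targetCPU == 0 {
		return min
	}
	desired := int(float64(current)*observedCPU/targetCPU + 0.5) // round to nearest
	if desired < min {
		desired = min
	}
	if desired > max {
		desired = max
	}
	return desired
}

func main() {
	// 4 replicas averaging 90% CPU against a 60% target -> scale up to 6.
	fmt.Println(desiredReplicas(4, 0.90, 0.60, 1, 10)) // 6
}
```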
workloads
We'll need to think about routing traffic eventually.
deployments