Allow setting the deletion strategy for kubernetes_manifest #2638
Comments
I'll submit a PR for this if the community agrees with the need for the feature and on the API.
@benbertrands Let's discuss before submitting the PR specifically for manifest. There might be an opportunity here to expand this feature beyond manifest as we move towards wider use of server-side apply across the provider. Just want to make sure we get the UX future-proofed for that.
Thanks for raising this. It's a very good idea and we'll definitely look into adding this.
Let me know if I can help |
Description
The Kubernetes API allows setting DeleteOptions. Extra docs: https://kubernetes.io/docs/concepts/architecture/garbage-collection
The default deletion strategy is background cascading deletion.
This strategy removes the resource and then returns, while removal of any "child" resources happens out-of-band.
For some resources, foreground cascading deletion is more appropriate.
This strategy removes the "child" resources before completing the removal of the resource whose deletion was requested, and only then returns.
An example of this is Job, which was switched to foreground cascading deletion here.
The kubernetes_manifest resource would benefit from a property that allows configuring the deletion strategy. I have a case where we need foreground cascading deletion.
Concrete case
We deployed Karpenter using Terraform.
Next, we created a set of Karpenter nodepools using kubernetes_manifest. These create a number of "child" resources (nodeclaims, which map to the underlying hosts) that refer to their respective nodepools as owner.
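For reference, such a nodepool is created roughly along the lines of the sketch below. The resource name and the abbreviated NodePool spec are illustrative only, and the Karpenter API version may differ from what we actually use:

```hcl
resource "kubernetes_manifest" "karpenter_nodepool" {
  # The manifest attribute holds the HCL representation of the Kubernetes object.
  manifest = {
    apiVersion = "karpenter.sh/v1"
    kind       = "NodePool"
    metadata = {
      name = "default"
    }
    spec = {
      template = {
        spec = {
          # Reference to the node class backing this pool (illustrative values).
          nodeClassRef = {
            group = "karpenter.k8s.aws"
            kind  = "EC2NodeClass"
            name  = "default"
          }
          requirements = [
            {
              key      = "kubernetes.io/arch"
              operator = "In"
              values   = ["amd64"]
            },
          ]
        }
      }
    }
  }
}
```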
When using Terraform to tear things back down:
Current situation
The nodepools are removed using background cascading deletion
-> Karpenter is "undeployed".
The nodeclaims (which have a finalizer) are marked for deletion out-of-band but, as the Karpenter pods are already gone, they are stuck indefinitely waiting for Karpenter to remove their finalizer.
To-be situation
The nodepools are removed using foreground cascading deletion
-> The nodeclaims (which have a finalizer) are marked for deletion
-> Karpenter cleans up the underlying resources and removes the finalizer
-> the nodeclaim resources are removed
-> Karpenter is "undeployed"
Potential Terraform Configuration
❗ Feedback on this is most welcome
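As one possible starting point for discussion, a minimal sketch of what this could look like is below. The delete_options block and propagation_policy attribute are hypothetical names, not existing provider syntax:

```hcl
# Hypothetical syntax: a "delete_options" block with "propagation_policy"
# does not exist in the provider today; the names are a suggestion only.
resource "kubernetes_manifest" "karpenter_nodepool" {
  manifest = {
    apiVersion = "karpenter.sh/v1"
    kind       = "NodePool"
    metadata = {
      name = "default"
    }
    # spec omitted for brevity
  }

  delete_options {
    # Would map to the Kubernetes DeleteOptions propagationPolicy field:
    # "Background" (the current default), "Foreground", or "Orphan".
    propagation_policy = "Foreground"
  }
}
```

The nested-block shape is only meant to mirror the existing wait and field_manager blocks on kubernetes_manifest; a plain top-level attribute would work just as well.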
References
#635