Terraform Version, Provider Version and Kubernetes Version
Terraform version:
Kubernetes provider version: v2.34.0
Kubernetes version: AKS 1.31.1
Affected Resource(s)
kubernetes_job
Terraform Configuration Files
What do you mean by "configuration"? Certainly you don't mean the entire Terraform module.
Debug Output
Working on this; will attach soon.
Panic Output
N/A
Steps to Reproduce
Use provider v2.34.0 or greater with a kubernetes_job resource which does not have ttl_seconds_after_finished set. A minimal example of such a configuration is sketched below.
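For reference, here is a minimal sketch of a configuration of that shape, adapted from the provider's documented kubernetes_job example; the resource name, image, and command are illustrative placeholders, not taken from the original report:

```hcl
# Minimal sketch of an affected resource: a kubernetes_job with no
# ttl_seconds_after_finished set. Names and image are placeholders.
resource "kubernetes_job" "example" {
  metadata {
    name = "example-job"
  }

  spec {
    template {
      metadata {}
      spec {
        container {
          name    = "pi"
          image   = "perl:5.34.0"
          command = ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
        }
        restart_policy = "Never"
      }
    }
    backoff_limit = 4
    # Note: ttl_seconds_after_finished is intentionally absent.
  }

  # Wait for the Job to complete during apply.
  wait_for_completion = true
}
```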
Expected Behavior
The apply succeeds when the Job's pod completes successfully.
Actual Behavior
The apply fails even though the Job's pod is succeeding.
Important Factoids
This code has been running fine for a long time, and the issue may be caused by this PR that was just merged: #2596.
Also, while I don't fully understand the PR creator's scenario, this assumption seems bad to me:
> This can cause Terraform to plan the recreation of the Job in subsequent runs, which is sort of undesirable behavior.
In fact, this is highly desirable and required behavior for us: we explicitly want the Job to be re-created on every single deployment.
So is this change a regression? Is there some setting that will cause the Job to be recreated on every deployment? (One possible workaround is sketched after this section.)
Regardless, having the apply fail even though the pod is succeeding seems like a bug.
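As an aside, one workaround pattern for forcing re-creation on every deployment (my own sketch, not a documented provider setting) is to embed a per-deployment value in the Job name, since changing metadata.name forces Terraform to replace the resource. The deploy_id variable below is hypothetical:

```hcl
# Hypothetical sketch: force re-creation on every deployment by embedding a
# caller-supplied identifier in the Job name. A changed metadata.name makes
# Terraform plan a replacement. "deploy_id" is a made-up variable name.
variable "deploy_id" {
  type        = string
  description = "Unique value per deployment, e.g. a CI build number."
}

resource "kubernetes_job" "migrate" {
  metadata {
    name = "migrate-${var.deploy_id}"
  }

  spec {
    template {
      metadata {}
      spec {
        container {
          name  = "migrate"
          image = "example.com/migrate:latest"
        }
        restart_policy = "Never"
      }
    }
  }

  wait_for_completion = true
}
```

A replacement can also be forced ad hoc from the CLI with terraform apply -replace='kubernetes_job.migrate'.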
We were able to roll back to version 2.33.0 and that resolved our issue, so it's definitely a recent issue, probably caused by this PR specifically.