Fix: TAS assignment error #3937
base: main
Conversation
Signed-off-by: kerthcet <[email protected]>
Skipping CI for Draft Pull Request.
/test
[APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: kerthcet The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@kerthcet: …
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/test all
✅ Deploy Preview for kubernetes-sigs-kueue canceled.
Signed-off-by: kerthcet <[email protected]>
/test all
Please add a unit test in scheduler_test.go (TestScheduleForTAS) for the problematic scenario.
```go
for k, v := range singlePodRequest {
	usage[k] = v * int64(domain.state)
}
s.addUsage(domain.id, usage)
```
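To make the scaling step above concrete, here is a minimal standalone sketch: it multiplies a single pod's requests by the number of pods placed in a topology domain, just as the loop in the diff does. The types and the `domainUsage` helper are illustrative stand-ins, not Kueue's actual internal API.

```go
package main

import "fmt"

// requests holds resource quantities keyed by resource name
// (e.g. "cpu" in millicores, "memory" in bytes). Hypothetical
// simplified types standing in for Kueue's internal ones.
type resourceName string
type requests map[resourceName]int64

// domainUsage scales a single pod's requests by the number of pods
// assigned to the topology domain, mirroring the loop in the diff.
func domainUsage(singlePodRequest requests, podCount int64) requests {
	usage := make(requests, len(singlePodRequest))
	for k, v := range singlePodRequest {
		usage[k] = v * podCount
	}
	return usage
}

func main() {
	u := domainUsage(requests{"cpu": 500, "memory": 1 << 20}, 3)
	fmt.Println(u["cpu"], u["memory"]) // 1500 3145728
}
```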
I believe this will work fine for now, as TAS is not combined with cohorts, so only one workload is considered in each scheduling cycle on the snapshot. However, it will not work well once we start supporting cohorts: usage coming from the assignment phase for a workload that is de-prioritized in a cohort would consume the capacity, while the scheduler should operate on "clean" capacity.
One way to handle this in the future would be to remember the usage coming from "inflight" PodSets and clean it up at the end of the assignment phase, before the actual scheduling phase.
Since I don't see this as problematic under the current assumptions (no cohorts), I believe a TODO referencing #3761 is sufficient.
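The "inflight usage" idea suggested above could be sketched roughly as follows: tentative usage added during the assignment phase is tagged per domain so it can be rolled back before the scheduling phase, restoring clean capacity. All names (`snapshot`, `addInflightUsage`, `clearInflight`) are hypothetical, not Kueue's actual API.

```go
package main

import "fmt"

// snapshot is a hypothetical stand-in for the scheduler's capacity
// snapshot, with committed usage and separately-tracked inflight usage.
type snapshot struct {
	usage    map[string]int64 // committed usage per topology domain
	inflight map[string]int64 // tentative usage added during assignment
}

func newSnapshot() *snapshot {
	return &snapshot{usage: map[string]int64{}, inflight: map[string]int64{}}
}

// addInflightUsage records tentative usage from an assignment in progress.
func (s *snapshot) addInflightUsage(domainID string, qty int64) {
	s.usage[domainID] += qty
	s.inflight[domainID] += qty
}

// clearInflight rolls back all tentative usage, restoring "clean"
// capacity before the actual scheduling phase runs.
func (s *snapshot) clearInflight() {
	for id, qty := range s.inflight {
		s.usage[id] -= qty
	}
	s.inflight = map[string]int64{}
}

func main() {
	s := newSnapshot()
	s.usage["rack-a"] = 4
	s.addInflightUsage("rack-a", 2)
	fmt.Println(s.usage["rack-a"]) // 6 while the assignment is in flight
	s.clearInflight()
	fmt.Println(s.usage["rack-a"]) // back to 4
}
```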
Will add tests later.
Do you mean when having time, or in a follow up PR? My preference would be to add the test in this PR.
I mean when I have time later; having code without tests is terrible.
Sure. I would like to include the fix in 0.10.1, tentatively planned for next week. Do you think you could add the test by then? I can also help write it.
What type of PR is this?
/kind bug
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #3887
Special notes for your reviewer:
Does this PR introduce a user-facing change?