Fix: TAS assignment error #3937

Merged · merged 4 commits on Jan 14, 2025
13 changes: 9 additions & 4 deletions pkg/cache/tas_flavor_snapshot.go
@@ -256,7 +256,7 @@ func (s *TASFlavorSnapshot) FindTopologyAssignment(
sortedLowerDomains := s.sortedDomains(lowerFitDomains)
currFitDomain = s.updateCountsToMinimum(sortedLowerDomains, count)
}
return s.buildAssignment(currFitDomain), ""
return s.buildAssignment(currFitDomain, requests), ""
}

func (s *TASFlavorSnapshot) HasLevel(r *kueue.PodSetTopologyRequest) bool {
@@ -334,7 +334,7 @@ func (s *TASFlavorSnapshot) updateCountsToMinimum(domains []*domain, count int32
}

// buildTopologyAssignmentForLevels build TopologyAssignment for levels starting from levelIdx
func (s *TASFlavorSnapshot) buildTopologyAssignmentForLevels(domains []*domain, levelIdx int) *kueue.TopologyAssignment {
func (s *TASFlavorSnapshot) buildTopologyAssignmentForLevels(domains []*domain, levelIdx int, singlePodRequest resources.Requests) *kueue.TopologyAssignment {
assignment := &kueue.TopologyAssignment{
Domains: make([]kueue.TopologyDomainAssignment, len(domains)),
}
@@ -344,11 +344,16 @@ func (s *TASFlavorSnapshot) buildTopologyAssignmentForLevels(domains []*domain,
Values: domain.levelValues[levelIdx:],
Count: domain.state,
}
usage := make(resources.Requests)
for resName, resValue := range singlePodRequest {
usage[resName] = resValue * int64(domain.state)
}
s.addUsage(domain.id, usage)
Contributor:

I believe this will work fine for now, as TAS is not combined with cohorts, so only one workload is considered per scheduling cycle on the snapshot. However, this will not work well once we start supporting cohorts: usage coming from the assignment phase for a workload that is de-prioritized in a cohort would consume capacity, while the scheduler should operate on "clean" capacity.

One way to handle it in the future would be to remember the usage coming from "in-flight" PodSets and clean it up at the end of the assignment phase, before the actual scheduling phase.

Since I don't see this as problematic under the current assumptions (no cohorts), I believe a TODO referencing #3761 is sufficient.
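
(Editorial aside: a minimal, self-contained sketch of the "in-flight usage" idea described in the comment above. The helper names addInflightUsage and clearInflightUsage are hypothetical, and the types are simplified stand-ins for the real snapshot structures in pkg/cache; this is a sketch of the concept, not the project's implementation.)

package main

import "fmt"

type requests map[string]int64

type snapshot struct {
	usage    map[string]requests // committed usage per domain ID
	inflight map[string]requests // usage recorded only during the assignment phase
}

// addInflightUsage charges usage to a domain like addUsage would, but also
// remembers it so it can be reverted before the scheduling phase runs on
// "clean" capacity.
func (s *snapshot) addInflightUsage(domainID string, u requests) {
	if s.usage[domainID] == nil {
		s.usage[domainID] = requests{}
	}
	if s.inflight[domainID] == nil {
		s.inflight[domainID] = requests{}
	}
	for name, v := range u {
		s.usage[domainID][name] += v
		s.inflight[domainID][name] += v
	}
}

// clearInflightUsage reverts everything recorded during the assignment phase.
func (s *snapshot) clearInflightUsage() {
	for domainID, u := range s.inflight {
		for name, v := range u {
			s.usage[domainID][name] -= v
		}
	}
	s.inflight = map[string]requests{}
}

func main() {
	s := &snapshot{usage: map[string]requests{}, inflight: map[string]requests{}}
	s.addInflightUsage("x1", requests{"cpu": 7000})
	s.clearInflightUsage()
	fmt.Println(s.usage["x1"]["cpu"]) // 0: capacity is "clean" again for scheduling
}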

Contributor Author:

Will add tests later.

Contributor:

Do you mean when you have time, or in a follow-up PR? My preference would be to add the test in this PR.

Contributor Author:

I mean when I have time later; having code without tests is terrible.

Contributor:

Sure. I would like to include the fix in 0.10.1, tentatively planned for next week. Do you think you could add the test by then? I can also help write it.

Contributor Author:

will do this weekend

Contributor:

Can we have a test that covers a case where the requested topology is not the lowest level? E.g., the lowest level is hostname and the job requests (and can only fit in) a block. I'm concerned whether it will work when the requested level is not the lowest one, since the s.addUsage() method adds usage only to leaves of the topology tree.

Contributor Author:

Makes sense, but I think it still works, because we traverse down the domains level by level until we reach the lowest ones, which means we always pass the lowest-level domains to addUsage(). Added the tests. PTAL.
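
(Editorial aside: a simplified illustration of the reply above, showing why usage ends up on leaf domains even when the requested level is higher, e.g. rack. The domain type and the descendToLeaves helper are hypothetical sketches; the real descent happens inside FindTopologyAssignment before addUsage is called.)

package main

import "fmt"

type domain struct {
	id       string
	capacity int32 // how many pods still fit in this domain
	state    int32 // pods assigned to this domain
	children []*domain
}

// descendToLeaves mimics the level-by-level descent: a count assigned to a
// higher-level domain (e.g. a rack) is split across its children until only
// lowest-level (hostname) domains remain.
func descendToLeaves(d *domain) []*domain {
	if len(d.children) == 0 {
		return []*domain{d}
	}
	var leaves []*domain
	remaining := d.state
	for _, c := range d.children {
		if remaining == 0 {
			break
		}
		c.state = min(remaining, c.capacity) // built-in min, Go 1.21+
		remaining -= c.state
		leaves = append(leaves, descendToLeaves(c)...)
	}
	return leaves
}

func main() {
	// A rack-level fit for 16 pods over two 8-pod nodes.
	rack := &domain{id: "r1", state: 16, children: []*domain{
		{id: "x1", capacity: 8},
		{id: "y1", capacity: 8},
	}}
	for _, leaf := range descendToLeaves(rack) {
		// Usage is recorded per leaf (hostname) domain, as in s.addUsage.
		fmt.Printf("addUsage(%s, count=%d)\n", leaf.id, leaf.state)
	}
}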

Contributor (@PBundyra, Jan 14, 2025):

Can you adjust the test so the topology is made of three levels, and then also add another PodSet that makes the workload not fit within the requested topology? I'm afraid this usage may not be counted when it's not at the lowest level, which is what I wanted to test.

Contributor:

On second consideration, I think you're actually right and the usage will be tracked correctly. Thanks for adding the test.

}
return assignment
}

func (s *TASFlavorSnapshot) buildAssignment(domains []*domain) *kueue.TopologyAssignment {
func (s *TASFlavorSnapshot) buildAssignment(domains []*domain, singlePodRequest resources.Requests) *kueue.TopologyAssignment {
// lex sort domains by their levelValues instead of IDs, as leaves' IDs can only contain the hostname
slices.SortFunc(domains, func(a, b *domain) int {
return utilslices.OrderStringSlices(a.levelValues, b.levelValues)
@@ -358,7 +363,7 @@ func (s *TASFlavorSnapshot) buildAssignment(domains []*domain) *kueue.TopologyAs
if s.isLowestLevelNode() {
levelIdx = len(s.levelKeys) - 1
}
return s.buildTopologyAssignmentForLevels(domains, levelIdx)
return s.buildTopologyAssignmentForLevels(domains, levelIdx, singlePodRequest)
}

func (s *TASFlavorSnapshot) lowerLevelDomains(domains []*domain) []*domain {
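(Editorial aside: a minimal sketch of the accounting the hunk above adds. The single-pod request is multiplied by the number of pods placed in each domain and charged against that domain via addUsage, so the next PodSet of the same workload sees the reduced capacity. The types below are simplified stand-ins for resources.Requests and the internal domain struct, and the pod counts mirror the "worker" PodSet in the scheduler test further down.)

package main

import "fmt"

type requests map[string]int64

func main() {
	singlePodRequest := requests{"cpu": 1000} // 1 CPU per pod, in millis
	// Pods placed per hostname-level domain.
	domainPods := map[string]int64{"x1": 7, "y1": 8}

	usagePerDomain := map[string]requests{}
	for id, pods := range domainPods {
		usage := requests{}
		for name, v := range singlePodRequest {
			usage[name] = v * pods
		}
		// The real code calls s.addUsage(domain.id, usage) here.
		usagePerDomain[id] = usage
	}
	fmt.Println(usagePerDomain["x1"]["cpu"], usagePerDomain["y1"]["cpu"]) // 7000 8000
}
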
1 change: 1 addition & 0 deletions pkg/resources/requests.go
@@ -104,6 +104,7 @@ func (req Requests) CountIn(capacity Requests) int32 {
if !found {
return 0
}
// find the minimum count that fits within every resource's capacity.
count := int32(capacity / rValue)
if result == nil || count < *result {
result = ptr.To(count)
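(Editorial aside: a worked example of the behavior described by the comment added in the hunk above, using a simplified stand-in for resources.Requests: CountIn returns the minimum of capacity/request across all requested resources, and 0 if any requested resource is missing from the capacity.)

package main

import "fmt"

// countIn is a simplified sketch of Requests.CountIn: how many pods with the
// given per-pod request fit within the capacity.
func countIn(request, capacity map[string]int64) int32 {
	var result *int32
	for name, rValue := range request {
		cValue, found := capacity[name]
		if !found {
			return 0
		}
		count := int32(cValue / rValue)
		if result == nil || count < *result {
			result = &count
		}
	}
	if result == nil {
		return 0
	}
	return *result
}

func main() {
	request := map[string]int64{"cpu": 1000, "memory": 2 << 30}   // 1 CPU, 2Gi per pod
	capacity := map[string]int64{"cpu": 16000, "memory": 8 << 30} // 16 CPU, 8Gi total
	fmt.Println(countIn(request, capacity)) // 4: memory, not CPU, is the limiting resource
}
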
188 changes: 188 additions & 0 deletions pkg/scheduler/scheduler_test.go
@@ -4486,6 +4486,194 @@ func TestScheduleForTAS(t *testing.T) {
},
},
},
"scheduling workload with multiple PodSets requesting TAS flavor and will succeed": {
nodes: []corev1.Node{
*testingnode.MakeNode("x1").
Label("tas-node", "true").
Label(tasRackLabel, "r1").
Label(corev1.LabelHostname, "x1").
StatusAllocatable(corev1.ResourceList{
corev1.ResourceCPU: resource.MustParse("8"),
}).
Ready().
Obj(),
*testingnode.MakeNode("y1").
Label("tas-node", "true").
Label(tasRackLabel, "r1").
Label(corev1.LabelHostname, "y1").
StatusAllocatable(corev1.ResourceList{
corev1.ResourceCPU: resource.MustParse("8"),
}).
Ready().
Obj(),
},
topologies: []kueuealpha.Topology{defaultTwoLevelTopology},
resourceFlavors: []kueue.ResourceFlavor{defaultTASTwoLevelFlavor},
clusterQueues: []kueue.ClusterQueue{
*utiltesting.MakeClusterQueue("tas-main").
ResourceGroup(
*utiltesting.MakeFlavorQuotas("tas-default").
Resource(corev1.ResourceCPU, "16").Obj()).
Obj(),
},
workloads: []kueue.Workload{
*utiltesting.MakeWorkload("foo", "default").
Queue("tas-main").
PodSets(
*utiltesting.MakePodSet("launcher", 1).
RequiredTopologyRequest(corev1.LabelHostname).
Request(corev1.ResourceCPU, "1").
Obj(),
*utiltesting.MakePodSet("worker", 15).
PreferredTopologyRequest(corev1.LabelHostname).
Request(corev1.ResourceCPU, "1").
Obj()).
Obj(),
},
wantNewAssignments: map[string]kueue.Admission{
"default/foo": *utiltesting.MakeAdmission("tas-main", "launcher", "worker").
AssignmentWithIndex(0, corev1.ResourceCPU, "tas-default", "1000m").
AssignmentPodCountWithIndex(0, 1).
TopologyAssignmentWithIndex(0, &kueue.TopologyAssignment{
Levels: []string{corev1.LabelHostname},
Domains: []kueue.TopologyDomainAssignment{
{
Count: 1,
Values: []string{
"x1",
},
},
},
}).
AssignmentWithIndex(1, corev1.ResourceCPU, "tas-default", "15000m").
AssignmentPodCountWithIndex(1, 15).
TopologyAssignmentWithIndex(1, &kueue.TopologyAssignment{
Levels: []string{corev1.LabelHostname},
Domains: []kueue.TopologyDomainAssignment{
{
Count: 7,
Values: []string{
"x1",
},
},
{
Count: 8,
Values: []string{
"y1",
},
},
},
}).
Obj(),
},
eventCmpOpts: []cmp.Option{eventIgnoreMessage},
wantEvents: []utiltesting.EventRecord{
{
Key: types.NamespacedName{Namespace: "default", Name: "foo"},
Reason: "QuotaReserved",
EventType: corev1.EventTypeNormal,
},
{
Key: types.NamespacedName{Namespace: "default", Name: "foo"},
Reason: "Admitted",
EventType: corev1.EventTypeNormal,
},
},
},
"scheduling workload with multiple PodSets requesting higher level topology": {
nodes: []corev1.Node{
*testingnode.MakeNode("x1").
Label("tas-node", "true").
Label(tasRackLabel, "r1").
Label(corev1.LabelHostname, "x1").
StatusAllocatable(corev1.ResourceList{
corev1.ResourceCPU: resource.MustParse("8"),
}).
Ready().
Obj(),
*testingnode.MakeNode("y1").
Label("tas-node", "true").
Label(tasRackLabel, "r1").
Label(corev1.LabelHostname, "y1").
StatusAllocatable(corev1.ResourceList{
corev1.ResourceCPU: resource.MustParse("8"),
}).
Ready().
Obj(),
},
topologies: []kueuealpha.Topology{defaultTwoLevelTopology},
resourceFlavors: []kueue.ResourceFlavor{defaultTASTwoLevelFlavor},
clusterQueues: []kueue.ClusterQueue{
*utiltesting.MakeClusterQueue("tas-main").
ResourceGroup(
*utiltesting.MakeFlavorQuotas("tas-default").
Resource(corev1.ResourceCPU, "16").Obj()).
Obj(),
},
workloads: []kueue.Workload{
*utiltesting.MakeWorkload("foo", "default").
Queue("tas-main").
PodSets(
*utiltesting.MakePodSet("launcher", 1).
RequiredTopologyRequest(tasRackLabel).
Request(corev1.ResourceCPU, "1").
Obj(),
*utiltesting.MakePodSet("worker", 15).
RequiredTopologyRequest(tasRackLabel).
Request(corev1.ResourceCPU, "1").
Obj()).
Obj(),
},
wantNewAssignments: map[string]kueue.Admission{
"default/foo": *utiltesting.MakeAdmission("tas-main", "launcher", "worker").
AssignmentWithIndex(0, corev1.ResourceCPU, "tas-default", "1000m").
AssignmentPodCountWithIndex(0, 1).
TopologyAssignmentWithIndex(0, &kueue.TopologyAssignment{
Levels: []string{"kubernetes.io/hostname"},
Domains: []kueue.TopologyDomainAssignment{
{
Count: 1,
Values: []string{
"x1",
},
},
},
}).
AssignmentWithIndex(1, corev1.ResourceCPU, "tas-default", "15000m").
AssignmentPodCountWithIndex(1, 15).
TopologyAssignmentWithIndex(1, &kueue.TopologyAssignment{
Levels: []string{"kubernetes.io/hostname"},
Domains: []kueue.TopologyDomainAssignment{
{
Count: 7,
Values: []string{
"x1",
},
},
{
Count: 8,
Values: []string{
"y1",
},
},
},
}).
Obj(),
},
eventCmpOpts: []cmp.Option{eventIgnoreMessage},
wantEvents: []utiltesting.EventRecord{
{
Key: types.NamespacedName{Namespace: "default", Name: "foo"},
Reason: "QuotaReserved",
EventType: corev1.EventTypeNormal,
},
{
Key: types.NamespacedName{Namespace: "default", Name: "foo"},
Reason: "Admitted",
EventType: corev1.EventTypeNormal,
},
},
},
"scheduling workload when the node for another admitted workload is deleted": {
// Here we have the "bar-admitted" workload, which is admitted and
// is using the "x1" node, which is deleted. Still, we have the y1
23 changes: 19 additions & 4 deletions pkg/util/testing/wrappers.go
@@ -548,18 +548,33 @@ func (w *AdmissionWrapper) Obj() *kueue.Admission {
}

func (w *AdmissionWrapper) Assignment(r corev1.ResourceName, f kueue.ResourceFlavorReference, value string) *AdmissionWrapper {
w.PodSetAssignments[0].Flavors[r] = f
w.PodSetAssignments[0].ResourceUsage[r] = resource.MustParse(value)
w.AssignmentWithIndex(0, r, f, value)
return w
}

func (w *AdmissionWrapper) AssignmentPodCount(value int32) *AdmissionWrapper {
w.PodSetAssignments[0].Count = ptr.To(value)
w.AssignmentPodCountWithIndex(0, value)
return w
}

func (w *AdmissionWrapper) TopologyAssignment(ts *kueue.TopologyAssignment) *AdmissionWrapper {
w.PodSetAssignments[0].TopologyAssignment = ts
w.TopologyAssignmentWithIndex(0, ts)
return w
}

func (w *AdmissionWrapper) AssignmentWithIndex(index int32, r corev1.ResourceName, f kueue.ResourceFlavorReference, value string) *AdmissionWrapper {
w.PodSetAssignments[index].Flavors[r] = f
w.PodSetAssignments[index].ResourceUsage[r] = resource.MustParse(value)
return w
}

func (w *AdmissionWrapper) AssignmentPodCountWithIndex(index, value int32) *AdmissionWrapper {
w.PodSetAssignments[index].Count = ptr.To(value)
return w
}

func (w *AdmissionWrapper) TopologyAssignmentWithIndex(index int32, ts *kueue.TopologyAssignment) *AdmissionWrapper {
w.PodSetAssignments[index].TopologyAssignment = ts
return w
}
