with containerd v2 'kubeadm init' reports 'detected that the sandbox image "" of the container runtime is inconsistent with that used by kubeadm' #3146
Comments
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label.
Hi @robertdahlem, thank you for raising this issue; many users are hitting it.
As a quick resolution, we can fix the containerd configuration and then restart the containerd service.
cc @neolit123 We may fix the warning and do a cherry-pick.
In the code file for this, we could add logic that attempts to configure the runtime with the correct sandbox image.
For containerd 1.7,
For containerd 2.0,
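As a sketch of that quick fix for containerd 2.x (the config path and pause image version are taken from the report below; verify the key names against your installed containerd's schema):

```toml
# /etc/containerd/config.toml (containerd 2.x schema)
# Pin the sandbox image kubeadm expects; in containerd 1.x the equivalent
# key was sandbox_image under [plugins."io.containerd.grpc.v1.cri"].
[plugins.'io.containerd.cri.v1.images'.pinned_images]
  sandbox = 'registry.k8s.io/pause:3.10'
```

followed by restarting the service, e.g. `systemctl restart containerd` on a systemd-managed host.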
related PR.
related-to: containerd/containerd#11117
We may wait for containerd/containerd#11114 then. That seems to be the valid fix.
cc @SataQiu who added the check.
it's just a warning though.
if the read fails kubeadm should report it instead of ""
no, we should not configure runtimes from kubeadm.
/transfer kubeadm
that does seem like the problem.
This is just a warning to alert the user to check the sandboxImage configuration of the container runtime. The actual kubeadm execution process is not affected.
containerd is just dumping the config, like it has always done. the "bug" is that CRI is expecting things in the map. The feature already broke with CRI-O and Docker before, since they did not dump the containerd 1.x config either... kubernetes/kubernetes#115610 (comment)

containerd 2.x continues to dump the runtime config, it is just that sandbox has moved to the image config now.

containerd 1.x:

```toml
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
```

containerd 2.x:

```toml
[plugins.'io.containerd.cri.v1.images'.pinned_images]
  sandbox = 'registry.k8s.io/pause:3.10'
```

I don't think CRI exposes any method to peek at the other config, so the workaround just copies the value over... For a manual check, the user could do
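The two schemas above can be illustrated with a small lookup sketch. This is not kubeadm's actual code, just a model of the fallback the workaround performs, with the dumped config represented as nested maps:

```go
package main

import "fmt"

// sandboxImage looks up the configured sandbox image in a dumped containerd
// config, trying the containerd 2.x location first and falling back to the
// 1.x one. Illustrative only: the real config arrives as TOML/JSON, modeled
// here as nested map[string]any values.
func sandboxImage(cfg map[string]any) string {
	// containerd 2.x: plugins.'io.containerd.cri.v1.images'.pinned_images.sandbox
	if p, ok := cfg["io.containerd.cri.v1.images"].(map[string]any); ok {
		if pinned, ok := p["pinned_images"].(map[string]any); ok {
			if s, ok := pinned["sandbox"].(string); ok {
				return s
			}
		}
	}
	// containerd 1.x: plugins."io.containerd.grpc.v1.cri".sandbox_image
	if p, ok := cfg["io.containerd.grpc.v1.cri"].(map[string]any); ok {
		if s, ok := p["sandbox_image"].(string); ok {
			return s
		}
	}
	return "" // nothing found: this is the "" seen in the kubeadm warning
}

func main() {
	v2cfg := map[string]any{
		"io.containerd.cri.v1.images": map[string]any{
			"pinned_images": map[string]any{"sandbox": "registry.k8s.io/pause:3.10"},
		},
	}
	v1cfg := map[string]any{
		"io.containerd.grpc.v1.cri": map[string]any{"sandbox_image": "registry.k8s.io/pause:3.8"},
	}
	fmt.Println(sandboxImage(v2cfg))
	fmt.Println(sandboxImage(v1cfg))
	fmt.Printf("%q\n", sandboxImage(map[string]any{}))
}
```

A reader of only one schema gets the empty string for the other, which matches the `""` in the reported warning.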
that's a topic for a k/website ticket owned by sig-node.
merged, that's all we need to do in kubeadm. cri/containerd bugs should be tracked in their respective repos.
It seems like the error can "escape" to the reporting, if there is need to pull an image:

For some reason it is re-using the previous "err" variable, and I don't fully see how the actual error is transported:

```go
case v1.PullIfNotPresent:
	if ipc.runtime.ImageExists(image) {
		klog.V(1).Infof("image exists: %s", image)
		continue
	}
	if err != nil {
		errorList = append(errorList, errors.Wrapf(err, "failed to check if image %s exists", image))
	}
	fallthrough // Proceed with pulling the image if it does not exist
```

i.e. ImageExists should probably return an error?

```go
if err != nil {
	klog.Warningf("Failed to get image status, image: %q, error: %v", image, err)
	return false
}
```
this should be removed
it uses the prior error from the earlier step. sending a fix in a bit. edit: here it is:
cherry picks for 1.31 and 1.32
The bug was introduced here: kubernetes/kubernetes@7d1bfd9, when the error checking was removed:

```diff
@@ -857,8 +857,7 @@
 	for _, image := range ipc.imageList {
 		switch policy {
 		case v1.PullIfNotPresent:
-			ret, err := ipc.runtime.ImageExists(image)
-			if ret && err == nil {
+			if ipc.runtime.ImageExists(image) {
 				klog.V(1).Infof("image exists: %s", image)
 				continue
 			}
```

(the error was never being returned from the impl, anyway)

```diff
@@ -188,9 +225,11 @@
 }

 // ImageExists checks to see if the image exists on the system
-func (runtime *CRIRuntime) ImageExists(image string) (bool, error) {
-	err := runtime.crictl("inspecti", image).Run()
-	return err == nil, nil
+func (runtime *CRIRuntime) ImageExists(image string) bool {
+	ctx, cancel := defaultContext()
+	defer cancel()
+	_, err := runtime.impl.ImageStatus(ctx, runtime.imageService, &runtimeapi.ImageSpec{Image: image}, false)
+	return err == nil
+}
```

The pull should fail with a proper error later on, if the runtime is down.
What happened?
Installing Kubernetes v1.32 with containerd v2.0.1, runc v1.2.3 and cni_plugins v1.6.1. /etc/containerd/config.toml is just the output of /usr/local/bin/containerd config default.

kubeadm init succeeds, but reports:

W0105 15:55:18.584951 7068 checks.go:846] detected that the sandbox image "" of the container runtime is inconsistent with that used by kubeadm. It is recommended to use "registry.k8s.io/pause:3.10" as the CRI sandbox image.

It seems to read something that containerd v2 no longer provides.
What did you expect to happen?
kubeadm init succeeds with no warning regarding the sandbox image.

How can we reproduce it (as minimally and precisely as possible)?
Install containerd, runc, cni_plugins and kubeadm.
/root/kubernetes.init.conf:
# kubeadm init --config=/root/kubernetes.init.conf
Anything else we need to know?
No response
Kubernetes version
Cloud provider
OS version