
podname is not Quadlet based: part 2 #24976

Open
dylannevans opened this issue Jan 8, 2025 · 2 comments
Labels: kind/bug, quadlet

Comments

@dylannevans

Issue Description

To reopen bug #23381: setting PodName doesn't help.

When creating a .container unit with the Pod field set to PodName, where PodName is the name of a pod defined by a podname.pod file containing PodName=podname, podman-system-generator throws the following error:

converting "mycontainer.container": pod podname is not Quadlet based

Steps to reproduce the issue

$ls /etc/containers/systemd/user/1012

> auto-dev.pod  db-dev.container  app-dev.container

$cat auto-dev.pod

> [Unit]
> Requires=podman.socket
> After=podman.socket
> Description=auto pod
> 
> [Install]
> WantedBy=default.target
> 
> [Pod]
> PodName=auto-dev
> ServiceName=auto-dev
> PublishPort=[...
>     ]

$cat db-dev.container

> [Unit]
> Description=automation db - dev
> 
> [Container]
> Image=docker.io/postgres/postgres:latest
> Pod=auto-dev
> StartWithPod=true
> AutoUpdate=registry
> HostName=db
> ContainerName=db-dev
> Volume=[...
>     ]
> 
> Environment=[...
>     ]

$cat app-dev.container

> [Unit]
> Description=app dev server
> 
> [Container]
> Image=docker.n8n.io/n8nio/n8n:latest
> AutoUpdate=registry
> HostName=n8n-dev
> ContainerName=n8n-dev
> Volume=n8n-dev:/home/node/.n8n
> Pod=auto-dev
> StartWithPod=true
> Environment=[...
>     ]
> PublishPort=[...
>     ]

Describe the results you received

$/usr/lib/systemd/system-generators/podman-system-generator --user --dryrun

> quadlet-generator[5849]: Loading source unit file /etc/containers/systemd/user/1012/auto-dev.pod
> quadlet-generator[5849]: Loading source unit file /etc/containers/systemd/user/1012/db-dev.container
> quadlet-generator[5849]: Loading source unit file /etc/containers/systemd/user/1012/n8n-dev.container
> ---auto-dev.service---
> [Unit]
> Wants=podman-user-wait-network-online.service
> After=podman-user-wait-network-online.service
> Requires=podman.socket
> After=podman.socket
> Description=n8n automation pod
> SourcePath=/etc/containers/systemd/user/1012/auto-dev.pod
> RequiresMountsFor=%t/containers
> 
> [Install]
> WantedBy=default.target
> 
> [X-Pod]
> PublishPort=5432:5432
> PublishPort=2443:1443
> PodName=auto-dev
> ServiceName=auto-dev
> 
> [Service]
> SyslogIdentifier=%N
> ExecStart=/usr/bin/podman pod start --pod-id-file=%t/%N.pod-id
> ExecStop=/usr/bin/podman pod stop --pod-id-file=%t/%N.pod-id --ignore --time=10
> ExecStopPost=/usr/bin/podman pod rm --pod-id-file=%t/%N.pod-id --ignore --force
> ExecStartPre=/usr/bin/podman pod create --infra-conmon-pidfile=%t/%N.pid --pod-id-file=%t/%N.pod-id --exit-policy=stop --replace --publish 5432:5432 --publish 2443:1443 --infra-name auto-dev-infra --name auto-dev
> Environment=PODMAN_SYSTEMD_UNIT=%n
> Type=forking
> Restart=on-failure
> PIDFile=%t/%N.pid
> 
> quadlet-generator[5849]: converting "db-dev.container": pod auto-dev is not Quadlet based
> converting "n8n-dev.container": pod auto-dev is not Quadlet based

Alternatively, if I find/replace all instances of "auto-dev" with "auto-dev.pod" in these three files, as suggested by the workaround in #23381, the dry run throws no errors. However, it's a false positive: after systemctl daemon-reload, the auto-dev-pod service is created, but no services are created for the containers.
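
If I read the quadlet man page correctly, only the Pod= keys in the .container units should get the .pod suffix, while PodName= inside the .pod file keeps the bare pod name. A minimal sketch of that convention (not my actual files):

> # auto-dev.pod
> [Pod]
> # bare pod name: what podman names the created pod
> PodName=auto-dev
> ServiceName=auto-dev
> 
> # db-dev.container
> [Container]
> Image=docker.io/postgres/postgres:latest
> # reference the Quadlet file, including the .pod extension
> Pod=auto-dev.pod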

Describe the results you expected

Services for each defined container running in the defined pod.

podman info output

host:
  arch: amd64
  buildahVersion: 1.38.0
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - rdma
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.12-3.fc41.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.12, commit: '
  cpuUtilization:
    idlePercent: 99.34
    systemPercent: 0.18
    userPercent: 0.48
  cpus: 6
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    variant: silverblue
    version: "41"
  eventLogger: journald
  freeLocks: 2044
  hostname: console
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.12.5-200.fc41.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 28942364672
  memTotal: 33444179968
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.13.1-1.fc41.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.13.1
    package: netavark-1.13.1-1.fc41.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.13.1
  ociRuntime:
    name: crun
    package: crun-1.19.1-1.fc41.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.19.1
      commit: 3e32a70c93f5aa5fea69b50256cca7fd4aa23c80
      rundir: /run/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20241211.g09478d5-1.fc41.x86_64
    version: |
      pasta 0^20241211.g09478d5-1.fc41.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.3.1-1.fc41.x86_64
    version: |-
      slirp4netns version 1.3.1
      commit: e5e368c4f5db6ae75c2fce786e31eef9da6bf236
      libslirp: 4.8.0
      SLIRP_CONFIG_VERSION_MAX: 5
      libseccomp: 2.5.5
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 1h 9m 47.00s (Approximately 0.04 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
store:
  configFile: /usr/share/containers/storage.conf
  containerStore:
    number: 3
    paused: 0
    running: 2
    stopped: 1
  graphDriverName: overlay
  graphOptions:
    overlay.imagestore: /usr/lib/containers/storage
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 1022505254912
  graphRootUsed: 16356438016
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Supports shifting: "true"
    Supports volatile: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 5
  runRoot: /run/containers/storage
  transientStore: false
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 5.3.1
  Built: 1732147200
  BuiltTime: Wed Nov 20 19:00:00 2024
  GitCommit: ""
  GoVersion: go1.23.3
  Os: linux
  OsArch: linux/amd64
  Version: 5.3.1

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

Fedora 41 atomic

Additional information

No response

@dylannevans dylannevans added the kind/bug Categorizes issue or PR as related to a bug. label Jan 8, 2025
@Luap99 Luap99 added the quadlet label Jan 8, 2025
@Luap99
Member

Luap99 commented Jan 8, 2025

It is definitely expected that you must set Pod=auto-dev.pod in the container unit; the .pod extension is important, as stated in the documentation.

If the container services still don't work with that set, then the dry-run command should definitely log the errors.
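
For example, one way to re-check after setting Pod=auto-dev.pod (a sketch, assuming the same paths as above and that the generator prints the ---<name>.service--- headers and any "converting ..." errors as shown in the report):

$ /usr/lib/systemd/system-generators/podman-system-generator --user --dryrun 2>&1 | grep -E 'service---|converting'

If the container service headers appear and no "converting" errors are printed, the container units were converted.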

@ygalblum
Contributor

ygalblum commented Jan 8, 2025

First, as stated in the comment #23381 (comment), it is not a workaround. It is the required configuration (as also stated in the man page).

As for your issue, I created the files you pasted here on my machine (running the same podman version) and the service files were generated:

$ systemctl --user daemon-reload
$ systemctl --user status auto-dev
○ auto-dev.service - auto pod
     Loaded: loaded (/home/yblum.linux/.config/containers/systemd/auto-dev.pod; generated)
    Drop-In: /usr/lib/systemd/user/service.d
             └─10-timeout-abort.conf
     Active: inactive (dead)
$ systemctl --user status db-dev
○ db-dev.service - automation db - dev
     Loaded: loaded (/home/yblum.linux/.config/containers/systemd/db-dev.container; generated)
    Drop-In: /usr/lib/systemd/user/service.d
             └─10-timeout-abort.conf
     Active: inactive (dead)
$ systemctl --user status app-dev
○ app-dev.service - app dev server
     Loaded: loaded (/home/yblum.linux/.config/containers/systemd/app-dev.container; generated)
    Drop-In: /usr/lib/systemd/user/service.d
             └─10-timeout-abort.conf
     Active: inactive (dead)
  1. You pasted only part of the Quadlet files and didn't add the dry-run results from after you corrected the reference to the pod. Are you sure it returned successfully?
  2. You wrote that the service auto-dev-pod was created, while the name of the service of the pod is actually auto-dev. Are you sure you are looking for the correct service names for the containers?
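
For completeness, a quick way to check which services the generator actually produced after a daemon-reload (a sketch; unit names assumed from the files above, and generated units should show a "generated" state):

$ systemctl --user daemon-reload
$ systemctl --user list-unit-files 'auto-dev.service' 'db-dev.service' 'app-dev.service'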
