
Podman: Incomplete Layer


If you are running Podman on btrfs or ZFS, you might have encountered this error already:

WARN[0001] Found incomplete layer "eff3d43aa72873c05dc21e5511016e8bf7f258b59a0881544c39234c837633fb", deleting it

Sometimes accompanied by errors like

Error: looking up container "container1": exit status 1: "/usr/sbin/zfs fs destroy -r zfspool/var/lib/containers/eff3d43aa72873c05dc21e5511016e8bf7f258b59a0881544c39234c837633fb" => cannot open 'zfspool/var/lib/containers/eff3d43aa72873c05dc21e5511016e8bf7f258b59a0881544c39234c837633fb': dataset does not exist

or

ERRO[0000] Image registry.tld/vendor/image exists in local storage but may be corrupted (remove the image to resolve the issue): exit status 1: "/usr/sbin/zfs fs destroy -r zfspool/var/lib/containers/eff3d43aa72873c05dc21e5511016e8bf7f258b59a0881544c39234c837633fb" => cannot open 'zfspool/var/lib/containers/eff3d43aa72873c05dc21e5511016e8bf7f258b59a0881544c39234c837633fb': dataset does not exist

In my case the error was caused by aborting a running Podman command with Ctrl+C, but it can also occur after a machine crash or a power outage. Looking for a solution, I came across a lot of people recommending to reset the complete Podman environment with podman system reset, which results in a total loss of all pods, containers, images and volumes. If the affected machine is not a production system, that is certainly a time-saving method. In my case, though, I did not intend to rebuild my whole container environment, so I looked further.
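For context, that nuclear option boils down to the two commands below. Running podman system df first shows how much image, container and volume data would actually be thrown away; this is just an illustration of the route I decided against, not part of the fix.

  # Show how much storage Podman currently uses for images, containers and volumes.
  podman system df

  # Wipe all pods, containers, images and volumes and start from scratch.
  podman system reset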

Removing the storage layers folder also fixes the issue, but you still lose all your volumes and have to at least redeploy your workloads afterwards to recreate them. Looking further…

Solution

At last I stumbled upon an issue describing this behavior, in which someone mentioned manually removing the reference to the orphaned storage layer from layers.json. In my case (Debian 12 with ZFS) the file is located at /var/lib/containers/storage/zfs-layers/layers.json, and the orphaned layer eff3d43aa72873c05dc21e5511016e8bf7f258b59a0881544c39234c837633fb is already marked as incomplete:

  { 
    "id": "eff3d43aa72873c05dc21e5511016e8bf7f258b59a0881544c39234c837633fb",
    "parent": "cc4af65f3fefdd0f52be0f9429617959cf922ea2b8b5cc2eb208f98c03765040",
    "created": "2024-05-10T09:52:11.992932519Z",
    "flags": {
      "incomplete": true
    } 
  }

Be aware that the JSON file is not pretty-printed, so it can be hard to search through by hand; vim and jq are at your service. After removing the layer reference altogether and saving the file, I could use Podman commands again without them aborting with the error messages above. All running containers as well as all locally stored images and volumes stayed intact.
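If you would rather script the cleanup than edit the file by hand, something along the following lines should work. This is only a sketch: it assumes the ZFS driver layout described above, that layers.json is a top-level JSON array as in the excerpt, and that you run it as root; back the file up and stop anything that uses the storage first.

  # Stop everything that uses the container storage before touching its metadata.
  podman stop --all

  cd /var/lib/containers/storage/zfs-layers

  # Keep the original metadata around in case something goes wrong.
  cp layers.json layers.json.bak

  # List the IDs of all layers flagged as incomplete.
  jq '.[] | select(.flags.incomplete == true) | .id' layers.json.bak

  # Write a cleaned copy without the incomplete entries over the original file.
  jq 'map(select(.flags.incomplete != true))' layers.json.bak > layers.json

  # Podman should no longer abort with the incomplete-layer error.
  podman images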

tl;dr

Search for the orphaned layer in Podman's layers.json and remove the reference to that layer completely.