Prior Search
What happened?
I have had many underutilized nodes that Karpenter would not clean up, because a node-image-cache container never finalized when its image didn't exist.
It is also unclear why pods weren't being scheduled onto these underutilized nodes while Karpenter was simultaneously spinning up new nodes.
Once I fixed the ImagePullBackOff, Karpenter did start to consolidate and clean up the nodes.
Steps to Reproduce
- Deploy a Deployment with an invalid image tag, with image pinning and the pre-pull cache enabled
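As a minimal sketch of the reproduction, a Deployment along these lines should trigger the behavior. The name `bad-image-repro` and the tag `does-not-exist` are hypothetical placeholders; the image-pinning and pre-pull-cache settings live in the node-image-cache configuration and are not shown here:

```yaml
# Hypothetical reproduction manifest: the tag "does-not-exist" is made up,
# so pulls fail with ImagePullBackOff and the corresponding node-image-cache
# pre-pull pod never finalizes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bad-image-repro
spec:
  replicas: 3
  selector:
    matchLabels:
      app: bad-image-repro
  template:
    metadata:
      labels:
        app: bad-image-repro
    spec:
      containers:
        - name: app
          image: nginx:does-not-exist  # invalid tag -> ImagePullBackOff
```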
Relevant log output
karpenter not all pods would schedule, node-image-cache/node-image-cache-prepull-amd64-9t5grm =>
incompatible with nodepool "burstable-arm-d4d87789", daemonset overhead={"cpu":"367m","ephemeral-storage":"200Mi","memory":"886793678","pods":"6"}, incompatible requirements, key kubernetes.io/hostname, kubernetes.io/hostname In [ip-10-0-166-100.us-west-2.compute.internal] not in kubernetes.io/hostname In [hostname-placeholder-54037];
incompatible with nodepool "burstable-01f2b9d1", daemonset overhead={"cpu":"351m","ephemeral-storage":"100Mi","memory":"870016462","pods":"5"}, incompatible requirements, key kubernetes.io/hostname, kubernetes.io/hostname In [ip-10-0-166-100.us-west-2.compute.internal] not in kubernetes.io/hostname In [hostname-placeholder-54038];
incompatible with nodepool "spot-arm-b3b0b92e", daemonset overhead={"cpu":"367m","ephemeral-storage":"200Mi","memory":"886793678","pods":"6"}, incompatible requirements, key kubernetes.io/hostname, kubernetes.io/hostname In [ip-10-0-166-100.us-west-2.compute.internal] not in kubernetes.io/hostname In [hostname-placeholder-54039];
incompatible with nodepool "spot-ed7771fc", daemonset overhead={"cpu":"351m","ephemeral-storage":"100Mi","memory":"870016462","pods":"5"}, incompatible requirements, key kubernetes.io/hostname, kubernetes.io/hostname In [ip-10-0-166-100.us-west-2.compute.internal] not in kubernetes.io/hostname In [hostname-placeholder-54040];
incompatible with nodepool "on-demand-73ef9476", daemonset overhead={"cpu":"351m","ephemeral-storage":"100Mi","memory":"870016462","pods":"5"}, incompatible requirements, key kubernetes.io/hostname, kubernetes.io/hostname In [ip-10-0-166-100.us-west-2.compute.internal] not in kubernetes.io/hostname In [hostname-placeholder-54041];
incompatible with nodepool "on-demand-arm-3e2ec3f1", daemonset overhead={"cpu":"337m","ephemeral-storage":"100Mi","memory":"855336398","pods":"5"}, incompatible requirements, key kubernetes.io/hostname, kubernetes.io/hostname In [ip-10-0-166-100.us-west-2.compute.internal] not in kubernetes.io/hostname In [hostname-placeholder-54042]