
Cluster Troubleshooting

Couldn't attach to pod, falling back to streaming logs: unable to upgrade connection: pod does not exist

When running a command that starts a pod and attaches an interactive session to it, for example:

kubectl run testpod --image=busybox -it --rm --restart=Never -- /bin/sh

You might encounter the following error:

If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/testpod, falling back to streaming logs: unable to upgrade connection: pod does not exist
pod "testpod" deleted
Error from server (NotFound): the server could not find the requested resource ( pods/log testpod)

This error indicates that kubectl was unable to attach to testpod; because the command was run with --rm, the pod is then deleted almost immediately after creation.

The root cause is typically an IP address conflict across the nodes in the cluster: attach, exec, and logs requests are proxied by the API server to the kubelet at the node's INTERNAL-IP, so when several nodes report the same address, the connection can land on a node that knows nothing about the pod. Here’s how to resolve this issue.

Solution

This error primarily occurs in Vagrant-based Kubernetes setups where KUBELET_EXTRA_ARGS is set to the same IP on all the nodes.
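
In a typical VirtualBox-backed Vagrant cluster, the first interface (eth0) is a NAT adapter that gets the same address, usually 10.0.2.15, on every VM. If the kubelet's --node-ip points at that interface, every node registers an identical INTERNAL-IP. The problematic configuration then looks the same on every node, along these lines (the exact address may differ in your setup):

# /etc/default/kubelet -- identical on every node, which causes the conflict
KUBELET_EXTRA_ARGS=--node-ip=10.0.2.15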

Follow the steps given below to resolve this issue.

Step 1: Check Node IP Addresses

First, you need to verify the IP addresses assigned to your nodes:

kubectl get nodes -o wide

In this example, all nodes have the same INTERNAL-IP, which indicates a conflict:

NAME           STATUS   ROLES           AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION     CONTAINER-RUNTIME
controlplane   Ready    control-plane   3d1h   v1.29.7   10.0.2.15     <none>        Ubuntu 23.10   6.5.0-15-generic   cri-o://1.31.0
node01         Ready    <none>          3d1h   v1.29.7   10.0.2.15     <none>        Ubuntu 23.10   6.5.0-15-generic   cri-o://1.31.0
node02         Ready    <none>          3d1h   v1.29.7   10.0.2.15     <none>        Ubuntu 23.10   6.5.0-15-generic   cri-o://1.31.0
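
You can also confirm the duplication with a quick one-liner; this sketch assumes the default kubectl wide output layout, where INTERNAL-IP is the sixth column:

# Print any INTERNAL-IP value that appears on more than one node
kubectl get nodes -o wide --no-headers | awk '{print $6}' | sort | uniq -d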

Step 2: Identify the Correct IP Address of the Nodes

To correct this, you need to identify the correct IP address for each node. Run the following command on each node:

ip addr

Locate the IP address associated with the eth1 interface or the interface that corresponds to the correct network.
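
If eth1 is the host-only adapter (the usual layout in a VirtualBox Vagrant setup; substitute your interface name if it differs), you can extract just its IPv4 address:

# Show the IPv4 address assigned to eth1, without the prefix length
ip -4 addr show eth1 | awk '/inet/ {print $2}' | cut -d/ -f1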

Step 3: Update the Kubelet Configuration

Next, update the Kubelet configuration to use the correct IP address. Open the Kubelet configuration file in an editor:

nano /etc/default/kubelet

Replace the existing IP address in the KUBELET_EXTRA_ARGS with the IP address identified in the previous step:

KUBELET_EXTRA_ARGS=--node-ip=<replace-ip-address>
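
For example, using the addresses from this cluster (yours will differ), node01's file would contain:

# /etc/default/kubelet on node01
KUBELET_EXTRA_ARGS=--node-ip=192.168.201.11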

Step 4: Restart the Kubelet

After updating the configuration, restart the Kubelet service to apply the changes:

systemctl restart kubelet
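
Before moving on, it is worth confirming that the kubelet came back up cleanly:

systemctl status kubelet --no-pager     # should report "active (running)"
journalctl -u kubelet -n 20 --no-pager  # last log lines, check for errors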

Step 5: Apply Changes to All Nodes

Repeat the above steps on all nodes in the cluster, including the control plane and worker nodes (node01, node02, etc.).
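
If you are managing the cluster with Vagrant, you can script the update from the host. The sketch below assumes machine names controlplane, node01, and node02, the IP mapping used in this example, and that /etc/default/kubelet contains only the KUBELET_EXTRA_ARGS line (the script overwrites the file); adjust all three to your environment:

# For each node, rewrite /etc/default/kubelet with its correct IP and restart the kubelet
for entry in controlplane:192.168.201.10 node01:192.168.201.11 node02:192.168.201.12; do
  node=${entry%%:*}
  ip=${entry##*:}
  vagrant ssh "$node" -c "echo 'KUBELET_EXTRA_ARGS=--node-ip=$ip' | sudo tee /etc/default/kubelet && sudo systemctl restart kubelet"
done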

Step 6: Verify the IP Address Changes

Check that the IP addresses have been updated correctly and that each node now has a unique INTERNAL-IP:

kubectl get nodes -o wide

NAME           STATUS   ROLES           AGE    VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION     CONTAINER-RUNTIME
controlplane   Ready    control-plane   3d1h   v1.29.7   192.168.201.10   <none>        Ubuntu 23.10   6.5.0-15-generic   cri-o://1.31.0
node01         Ready    <none>          3d1h   v1.29.7   192.168.201.11   <none>        Ubuntu 23.10   6.5.0-15-generic   cri-o://1.31.0
node02         Ready    <none>          3d1h   v1.29.7   192.168.201.12   <none>        Ubuntu 23.10   6.5.0-15-generic   cri-o://1.31.0
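
For a more compact check, you can print just each node's name and InternalIP with jsonpath:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'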

Step 7: Re-run the Command

After updating the IP addresses on all nodes, re-run the original kubectl command:

kubectl run testpod --image=busybox -it --rm --restart=Never -- /bin/sh

The issue should now be resolved, and you should be able to successfully attach to the pod without any errors.
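
A successful run drops you into a shell inside the pod, similar to the session below; type exit to leave, and the pod is cleaned up automatically because of the --rm flag:

If you don't see a command prompt, try pressing enter.
/ #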

Conclusion

This guide resolves pod attach errors caused by conflicting node IP addresses in a Kubernetes cluster. Once each node registers a unique INTERNAL-IP, the control plane can reach the kubelet on the correct node and attach to pods across the cluster.