## Learning k8s

Interesting. So sometimes the k8s network goes down. Apparently it's a known pitfall that has been reported to the vendor but not yet fixed. If the networking service is restarted on either of the nodes (e.g. you connect to a VPN, plug in a USB wifi dongle, etc.) -- you will lose the flannel.1 interface. As a result you will NOT be able to use kube-dns (because it's unreachable), nor will you be able to access ClusterIPs on other nodes. Deleting flannel and letting it restart on the control plane brings everything back to an operational state.
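One way to script that recovery is sketched below. The namespace and label are assumptions -- newer flannel manifests deploy into a kube-flannel namespace, older ones into kube-system with the app=flannel label, so check with `kubectl get pods -A | grep flannel` first:

```shell
# Save a tiny helper script; namespace/label below are assumptions,
# adjust them to whatever `kubectl get pods -A | grep flannel` shows.
cat > /tmp/flannel_bounce <<'EOF'
#!/bin/sh
# Delete the flannel pods; the DaemonSet recreates them, and the fresh
# pods rebuild the flannel.1 VXLAN interface on each node.
kubectl -n kube-system delete pod -l app=flannel
# Confirm the interface is back:
ip -br link show flannel.1
EOF
chmod +x /tmp/flannel_bounce
```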

And yet another note.. If you're building a k8s cluster at home and you plan to control it from your lappy -- DO NOT set up the control plane on your lappy :) If you are away from home you'll have a hard time connecting back to your cluster.
A Raspberry Pi is perfectly adequate for a control plane. And when you are away with your lappy, ssh'ing home and setting up a few iptables DNAT rules will do the trick:

netikras@netikras-xps:~/skriptai/bin$ cat fw_kubeadm

```sh
#!/bin/sh
# MASTER_IP, FW_LOCAL_IP and MASTER_USER come from my environment;
# the defaults below are placeholders -- substitute your own values.
MASTER_IP="${MASTER_IP:-192.168.1.10}"   # control plane's LAN IP at home
FW_LOCAL_IP="${FW_LOCAL_IP:-127.0.0.1}"  # local endpoint the tunnel listens on
MASTER_USER="${MASTER_USER:-netikras}"

FW_RULE="OUTPUT -d ${MASTER_IP} -p tcp -j DNAT --to-destination ${FW_LOCAL_IP}"

# Redirect traffic aimed at the master's IP to the local tunnel endpoint
sudo iptables -t nat -A ${FW_RULE}

# Hop through the home gateway, then on to the control plane;
# the tunnel lives as long as this SSH session does
ssh home -p 4522 -l netikras -tt \
  ssh ${MASTER_IP} -l ${MASTER_USER} -tt \
  'echo "Tunnel is open. Disconnect from this SSH session to close the tunnel and remove NAT rules" ; bash'

# Clean up the NAT rule once the session ends
sudo iptables -t nat -D ${FW_RULE}
```

And ofc copy the control plane's ~/.kube to your lappy :)
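Once the tunnel from the script above is up, pointing kubectl at the copied config is all that's left. The path is the usual default -- adjust if you copied it somewhere else:

```shell
# kubectl reads this path by default anyway; exporting it makes it explicit.
export KUBECONFIG="${HOME}/.kube/config"
# Then, with the tunnel running:
#   kubectl get nodes -o wide
#   kubectl cluster-info
```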

  • 1
    Had to google what k8s is.
    It's Kubernetes, the container orchestrator 👍
  • 2
    Or do what I do:
    1. Use Calico, or another CNI.
    2. Use ESXi, with two static IPs for the node hosts, a master node, and a dedicated management VM on the same network. Each host has one NIC on the internet-facing network and a dedicated ESXi VLAN for node comms.
  • 0
    I got a little paranoid and used a beefy ODROID, but yes, a dedicated control plane. A must :)