Official documentation entry point

Note: both the master and the node run Debian 10.

Required on both Master and Node

Install Docker

For installing Docker, see here.

Configure Docker to use systemd for cgroup management

sudo tee /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker

If systemd is not configured, kubeadm init will emit warnings like the following:

    # [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    # [WARNING SystemVerification]: missing optional cgroups: hugetlb
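To confirm the driver change took effect after restarting Docker, the reported cgroup driver can be checked (a quick sanity check, not part of the original steps):

sudo docker info 2>/dev/null | grep -i 'cgroup driver'
# expected: Cgroup Driver: systemd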

Disable swap

# turn swap off immediately
sudo swapoff -a
# back up fstab, then add noauto to the swap entry so it stays off after reboot
sudo cp /etc/fstab{,.origin}
sudo sed -i 's/swap[ ][ ]*sw/&,noauto/' /etc/fstab
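To verify that no swap is active anymore (optional check, not in the original steps):

swapon --show   # prints nothing when swap is fully off
free -h         # the Swap line should show 0B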

Let iptables see bridged traffic

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

# load the module now; the modules-load.d entry only takes effect at boot
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sudo sysctl --system
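Optionally, confirm the module is loaded and both sysctl keys are set to 1:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables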

Install kubeadm, kubelet, and kubectl

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

echo 'deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
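A quick version check helps confirm that the three components were installed and are at matching versions:

kubeadm version -o short
kubelet --version
kubectl version --client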

To install a specific version:

apt-cache madison kubeadm  # list available versions
sudo apt-get install -y kubelet=1.21.2-00 kubeadm=1.21.2-00 kubectl=1.21.2-00

Master deployment

Since k8s.gcr.io cannot be reached from mainland China, kubeadm init will fail. First list the images that need to be pulled (add --kubernetes-version v1.21.2 to target a specific version):

kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.21.2
k8s.gcr.io/kube-controller-manager:v1.21.2
k8s.gcr.io/kube-scheduler:v1.21.2
k8s.gcr.io/kube-proxy:v1.21.2
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0

Pull the images from Aliyun

sudo kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers

Re-tag the images. Note that k8s.gcr.io/coredns/coredns:v1.8.0 lives under a slightly different path on Aliyun, registry.aliyuncs.com/google_containers/coredns:v1.8.0; the script below already handles this case.

# a: official image names; b: the same images on the Aliyun mirror
a=($(kubeadm config images list))
#b=(${a[*]//k8s.gcr.io*\//registry.aliyuncs.com/google_containers/})
b=($(kubeadm config images list --image-repository registry.aliyuncs.com/google_containers))

# re-tag each mirrored image with its official name, then drop the mirror tag
for ((i=0; i<${#a[*]}; i++)); do
  sudo docker tag "${b[i]}" "${a[i]}"
  sudo docker rmi "${b[i]}"
done
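As a sanity check (not part of the original script), the local images should now carry the k8s.gcr.io names that kubeadm expects:

sudo docker images | grep k8s.gcr.io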

Now kubeadm init can run without issues.

sudo kubeadm init --pod-network-cidr 10.244.0.0/16

A successful kubeadm init ends with output like the following; the join command at the end is needed when adding nodes.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.34.169.111:6443 --token mhn6aa.xlz608qvtueq4rk8 \
        --discovery-token-ca-cert-hash sha256:3b73c9ec63b6e1d4402be7b267eca83e056d337e8e6675cf6390a47ef88f4fb0 

After initialization completes:

# kubectl for a regular (non-root) user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# kubectl for root (optional, not recommended)
sudo cp /root/.bashrc{,.bak}
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' | sudo tee -a /root/.bashrc
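kubectl should now be able to reach the cluster; the master will report NotReady until a network add-on is installed:

kubectl get nodes
kubectl get pods -n kube-system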

Install a network add-on; flannel is used here.

kubernetes.io link

GitHub link

# may need a proxy to reach from mainland China
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# reachable from mainland China, but possibly not the latest version
kubectl apply -f https://raw.yanyong.cc/downloads/k8s/kube-flannel.yml
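Once the flannel Pods are Running, the node status should switch to Ready. A quick check:

kubectl get pods -A -o wide | grep flannel
kubectl get nodes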

Open firewall ports

# control-plane ports: 6443 kube-apiserver, 2379-2380 etcd, 10250 kubelet, 10257 kube-controller-manager, 10259 kube-scheduler
sudo iptables -A INPUT -p tcp -m state --state NEW -m multiport --dports 6443,2379:2380,10250,10257,10259 -j ACCEPT
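Rules added this way do not survive a reboot. If they should persist, one option (assuming the iptables-persistent package is acceptable) is:

sudo apt-get install -y iptables-persistent
sudo netfilter-persistent save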

Node deployment

Pull the k8s.gcr.io images, following the same procedure as in the master deployment.

(omitted...)

If the join information from the init output was not recorded, run the following commands on the master:

# run on the master

# list existing tokens
kubeadm token list

# compute the discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'

# if the token has already expired (tokens are valid for 24 hours)
kubeadm token create --print-join-command

kubeadm join

sudo kubeadm join 172.34.169.111:6443 --token mhn6aa.xlz608qvtueq4rk8 \
        --discovery-token-ca-cert-hash sha256:3b73c9ec63b6e1d4402be7b267eca83e056d337e8e6675cf6390a47ef88f4fb0
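After the join finishes, the new node should show up on the master, turning Ready once the network add-on Pods are running on it:

kubectl get nodes -o wide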

Optional: restart CoreDNS

Since cluster nodes are usually initialized one after another, the CoreDNS Pods are likely all running on the first control-plane node. For better availability, rebalance the CoreDNS Pods with the following command after at least one new node has joined.

kubectl -n kube-system rollout restart deployment coredns
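To check how the CoreDNS Pods are now distributed across nodes (the Deployment carries the standard k8s-app=kube-dns label):

kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide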

Open firewall ports

# node ports: 10250 kubelet, 30000-32767 NodePort services
sudo iptables -A INPUT -p tcp -m state --state NEW -m multiport --dports 10250,30000:32767 -j ACCEPT

To remove a node

# on the master, drain the node first
kubectl drain node02 --delete-emptydir-data --force --ignore-daemonsets
# run on the node being removed
sudo kubeadm reset
# back on the master, delete the node object
kubectl delete node node02
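Note that kubeadm reset does not remove CNI configuration or iptables rules on the node. If a full cleanup is wanted, something along these lines can be run on the removed node (paths assume the default kubeadm/flannel layout):

sudo rm -rf /etc/cni/net.d
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X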