
Deploying a Kubernetes HA (High-Availability) Cluster

渣骑 2025-6-1 19:09:45
1. Kubernetes HA cluster overview


  • Without a cloud provider's load balancer, high availability (HA) for the Kubernetes master nodes is usually implemented with one of two mainstream schemes: Keepalived + Nginx or Keepalived + HAProxy. Both have the same goal: distribute traffic across multiple master nodes and fail over automatically by floating a VIP (virtual IP).
  • I originally planned to use keepalived + nginx for HA, but since servers are limited and the workload is small, I only use keepalived for VIP failover and do not use nginx for proxying and load balancing. The nginx configuration is still provided below, and a HAProxy equivalent is sketched right after this list.
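  • For reference only: a minimal HAProxy front end for the Keepalived + HAProxy option mentioned above might look like the sketch below. It is not used in this deployment, and the bind port 16443 is an assumption (if HAProxy runs on the masters themselves it cannot listen on 6443, which kube-apiserver already occupies).
# /etc/haproxy/haproxy.cfg (illustrative sketch)
defaults
    mode    tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s
frontend k8s_apiserver
    bind *:16443                      # assumed port, see note above
    default_backend k8s_masters
backend k8s_masters
    balance roundrobin
    option tcp-check
    server master1 172.16.1.20:6443 check
    server master2 172.16.1.21:6443 check
    server master3 172.16.1.24:6443 check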
2. Deployment environment


  • Deploy at least 3 master nodes so that etcd has an odd number of members. With only two masters, stopping one leaves etcd without quorum, so etcd is unavailable and the api-server becomes unavailable as well. [Important!]
IP address      OS          Kernel                        Spec              Role                                  Data base dir
172.16.1.23     -           -                             -                 VIP (keepalived, on the 3 masters)    -
172.16.1.20     CentOS 7.8  5.4.278-1.el7.elrepo.x86_64   8 cores/16G/100G  master1                               /data/
172.16.1.21     CentOS 7.8  5.4.278-1.el7.elrepo.x86_64   8 cores/16G/100G  master2                               /data/
172.16.1.24     CentOS 7.8  5.4.278-1.el7.elrepo.x86_64   8 cores/16G/100G  master3                               /data/
172.16.1.22     CentOS 7.8  5.4.278-1.el7.elrepo.x86_64   8 cores/16G/100G  node1                                 /data/
172.16.1.xx     CentOS 7.8  5.4.278-1.el7.elrepo.x86_64   8 cores/16G/100G  node2                                 /data/
172.16.1.xx     CentOS 7.8  5.4.278-1.el7.elrepo.x86_64   8 cores/16G/100G  node3                                 /data/
3. System initialization, kernel upgrade, and k8s component installation


  • For an HA cluster, use only the initialization part of the document below and follow this document for everything else; run the same initialization on every k8s node.
https://www.cnblogs.com/Leonardo-li/p/18648449
4. Rebuild kubeadm to extend certificate validity

4.1 Prepare the Go environment


  • Go must be newer than 1.17 or the build will fail; I am using 1.23 here.
# Download the Go tarball
wget https://golang.google.cn/dl/go1.23.4.linux-amd64.tar.gz
tar zxf go1.23.4.linux-amd64.tar.gz
mv go /usr/local/
# Set the Go environment variables
vim /etc/profile
# Append the following 2 lines to the end of the file
export PATH=$PATH:/usr/local/go/bin
export GOPATH=$HOME/go
# Reload the environment
source /etc/profile
# Verify that Go works
go version
4.2 Install git
yum -y install git
4.3 Modify and build kubeadm

4.3.1 Download the kubernetes source for the matching version, v1.23.17 in my case
git clone --depth 1 --branch v1.23.17 https://github.com/kubernetes/kubernetes.git
4.3.2 Change the certificate validity
cd kubernetes/

  • Extend the CA certificates to 100 years: comment out the original line (comment marker //) and change *10 in the code to *100, as in the sketch below.
vim ./staging/src/k8s.io/client-go/util/cert/cert.go
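  • For orientation, the relevant lines in NewSelfSignedCACert look roughly like this in the v1.23.x source; the change is just *10 to *100:
// staging/src/k8s.io/client-go/util/cert/cert.go (excerpt, approximate)
tmpl := x509.Certificate{
    ...
    NotBefore: now.UTC(),
    // NotAfter:  now.Add(duration365d * 10).UTC(),      // original: 10-year CA validity
    NotAfter:  now.Add(duration365d * 100).UTC(),         // modified: 100-year CA validity
    ...
}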


  • Extend the other certificates to 100 years: comment out the original line (comment marker //) and change 24 * 365 in the code to 24 * 365 * 100, as in the sketch below.
vim ./cmd/kubeadm/app/constants/constants.go
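  • The corresponding constant in constants.go looks roughly like this in the v1.23.x source:
// cmd/kubeadm/app/constants/constants.go (excerpt, approximate)
// CertificateValidity defines the validity for all the signed certificates generated by kubeadm
// CertificateValidity = time.Hour * 24 * 365            // original: 1-year leaf certificates
CertificateValidity = time.Hour * 24 * 365 * 100          // modified: 100-year leaf certificates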

4.3.3 Run the build
make all WHAT=cmd/kubeadm GOFLAGS=-v
4.3.4 Check that the build succeeded
ls _output/bin/kubeadm
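  • Optionally, ask the freshly built binary for its version to confirm it matches the checked-out tag (kubeadm's standard -o short output; it may carry a suffix such as -dirty because the source tree was modified):
_output/bin/kubeadm version -o short
# expected: v1.23.17 (possibly with a suffix)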
4.3.5 Copy the binary to all k8s machines, masters and nodes
scp  _output/bin/kubeadm root@172.16.1.20:/data/
scp  _output/bin/kubeadm root@172.16.1.21:/data/
scp  _output/bin/kubeadm root@172.16.1.24:/data/
scp  _output/bin/kubeadm root@172.16.1.22:/data/
4.3.6 On all k8s nodes, back up the original kubeadm and move the newly built kubeadm into the same directory
mv /usr/bin/kubeadm /usr/bin/kubeadm-old
mv /data/kubeadm /usr/bin/
5. Deploy keepalived

5.1 Install keepalived
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum -y install keepalived
5.2 keepalived configuration


  • master1 keepalived.conf
[root@master1 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER                    # primary node
    interface ens33                 # network interface, adjust to your environment
    virtual_router_id 51            # VRRP router ID, must match on all nodes
    priority 100                    # priority, the primary must be higher than the backups
    advert_int 1                    # VRRP advertisement interval in seconds
    authentication {
        auth_type PASS              # authentication type
        auth_pass 1111              # authentication password, must match on all nodes
    }
    virtual_ipaddress {
        172.16.1.23/22              # virtual IP address, adjust as needed
    }
}

  • master2 keepalived.conf
[root@master2 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP                    # backup node
    interface ens33                 # make sure this is the correct interface name
    virtual_router_id 51            # VRRP router ID, must match on all nodes
    priority 80                     # priority, backups must be lower than the primary
    advert_int 1                    # VRRP advertisement interval in seconds
    authentication {
        auth_type PASS              # authentication type
        auth_pass 1111              # authentication password, must match on all nodes
    }
    virtual_ipaddress {
        172.16.1.23/22              # virtual IP address, same as on the primary
    }
}

  • master3 keepalived.conf
[root@master3 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP                    # backup node
    interface ens33                 # make sure this is the correct interface name
    virtual_router_id 51            # VRRP router ID, must match on all nodes
    priority 60                     # priority, backups must be lower than the primary
    advert_int 1                    # VRRP advertisement interval in seconds
    authentication {
        auth_type PASS              # authentication type
        auth_pass 1111              # authentication password, must match on all nodes
    }
    virtual_ipaddress {
        172.16.1.23/22              # virtual IP address, same as on the primary
    }
}
5.3 Start keepalived (on all 3 master nodes)
systemctl restart keepalived
systemctl enable keepalived
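  • Note that with keepalived alone the VIP only moves when an entire master host goes down; if only kube-apiserver dies on the VIP holder, the VIP stays where it is. A common mitigation, shown here only as an illustrative sketch (the script path and probe are assumptions, not part of this deployment), is a vrrp_script health check that lowers the node's priority when the local apiserver stops answering:
# illustrative addition to /etc/keepalived/keepalived.conf
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"   # assumed helper script, make it executable
    interval 3
    weight -40                                    # drop enough priority to fall below the backups
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    # ... existing settings as above ...
    track_script {
        check_apiserver
    }
}

# /etc/keepalived/check_apiserver.sh (assumed helper)
#!/bin/bash
# exit non-zero unless the local kube-apiserver answers on 6443
curl --silent --max-time 2 --insecure https://127.0.0.1:6443/healthz -o /dev/null || exit 1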
5.4 Supplementary nginx.conf proxy and load-balancing configuration; I am not using nginx for now, only the keepalived VIP

  • Note: the upstream servers should be the three master IPs (172.16.1.20/21/24 per the environment table), and if nginx runs on the master nodes themselves it cannot listen on 6443, because kube-apiserver already uses that port (pick e.g. 16443 instead).
# nginx.conf stream proxy
stream {
    upstream k8s_apiserver {
        # backend Kubernetes master nodes
        server 172.16.1.20:6443;  # master1
        server 172.16.1.21:6443;  # master2
        server 172.16.1.24:6443;  # master3
    }
    server {
        listen 6443;              # TCP listen port; use a different port if nginx runs on a master node
        proxy_pass k8s_apiserver;
        proxy_timeout 10s;
    }
}
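  • If nginx is used, the stream {} block requires the stream module, and the configuration should be validated before reloading (standard nginx commands):
nginx -V 2>&1 | grep -o with-stream    # confirm the stream module is compiled in
nginx -t                               # syntax-check the configuration
systemctl reload nginx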
6. Initialize k8s [master1]

6.1 Write kubeadm-config.yaml


  • 172.16.4.177:8090/k8s12317/registry.aliyuncs.com/google_containers is my private Harbor registry; the images were downloaded offline earlier and pushed there. See: https://www.cnblogs.com/Leonardo-li/p/18648449
cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.17
imageRepository: 172.16.4.177:8090/k8s12317/registry.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - "172.16.1.23"   # VIP
  - "172.16.1.20"   # master1 actual IP
  - "172.16.1.21"   # master2 actual IP
  - "172.16.1.24"   # master3 actual IP
  - "127.0.0.1"     # local loopback
controlPlaneEndpoint: "172.16.1.23:6443"  # VIP
networking:
  serviceSubnet: 10.96.0.0/12  # note: the field is serviceSubnet, not serviceCIDR
  podSubnet: 10.244.0.0/16     # must match the Calico configuration
---  # the lines below switch kube-proxy to ipvs mode
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
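  • Before running kubeadm init it is worth confirming that the private registry serves every required image; kubeadm accepts the same config file for listing and pre-pulling:
kubeadm config images list --config kubeadm-config.yaml
kubeadm config images pull --config kubeadm-config.yaml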
6.2 Initialize master1
kubeadm init --config=kubeadm-config.yaml --upload-certs

  • The initialization output is as follows:
master1 initialization output
[root@master1 kubeadm]# kubeadm init --config=kubeadm-config.yaml --upload-certs
[init] Using Kubernetes version: v1.23.17
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1] and IPs [10.96.0.1 172.16.1.20 172.16.1.23 172.16.1.21 172.16.1.24 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master1] and IPs [172.16.1.20 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master1] and IPs [172.16.1.20 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.008274 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
6d3f1abc998f3ffd104a989aa4b5ff3ae622ccd1a9b0098d9b68ed4221820ac5
[mark-control-plane] Marking the node master1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: agitw8.fwghrey1nysrprf8
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
  kubeadm join 172.16.1.23:6443 --token agitw8.fwghrey1nysrprf8 \
        --discovery-token-ca-cert-hash sha256:01099779f60c0ba7a8070edacaeaaa1b3b55c36b3a9136402200ce75dafe6bb8 \
        --control-plane --certificate-key 6d3f1abc998f3ffd104a989aa4b5ff3ae622ccd1a9b0098d9b68ed4221820ac5
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.1.23:6443 --token agitw8.fwghrey1nysrprf8 \
        --discovery-token-ca-cert-hash sha256:01099779f60c0ba7a8070edacaeaaa1b3b55c36b3a9136402200ce75dafe6bb8

  • Key pieces of the initialization output: 1. the commands that copy the kubeconfig to the home directory (run them), 2. the command to join additional master nodes (save it), 3. the command to join node (worker) nodes (save it).


  • Copy the admin kubeconfig to the home directory (copy the commands straight from the init output and run them)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

  • Check the nodes; the network plugin is not installed yet, so the status is NotReady.

7. Initialize k8s [master2]

7.1 Initialize master2 and join the k8s cluster


  • Copy the control-plane join command obtained in [step 6.2] and run it:
kubeadm join 172.16.1.23:6443 --token agitw8.fwghrey1nysrprf8 \
        --discovery-token-ca-cert-hash sha256:01099779f60c0ba7a8070edacaeaaa1b3b55c36b3a9136402200ce75dafe6bb8 \
        --control-plane --certificate-key 6d3f1abc998f3ffd104a989aa4b5ff3ae622ccd1a9b0098d9b68ed4221820ac5
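  • If more than two hours have passed since master1 was initialized, the uploaded certificates have already been deleted (and after 24 hours the bootstrap token also expires). They can be regenerated on master1 and combined into a fresh control-plane join command:
kubeadm init phase upload-certs --upload-certs   # prints a new --certificate-key
kubeadm token create --print-join-command        # prints a new worker join command; add --control-plane --certificate-key <key> for a master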

  • The master2 join output is as follows:
master2 join output
[root@master2 data]# kubeadm join 172.16.1.23:6443 --token agitw8.fwghrey1nysrprf8 \
> --discovery-token-ca-cert-hash sha256:01099779f60c0ba7a8070edacaeaaa1b3b55c36b3a9136402200ce75dafe6bb8 \
> --control-plane --certificate-key 6d3f1abc998f3ffd104a989aa4b5ff3ae622ccd1a9b0098d9b68ed4221820ac5
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master2] and IPs [10.96.0.1 172.16.1.21 172.16.1.23 172.16.1.20 172.16.1.24 127.0.0.1]
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master2] and IPs [172.16.1.21 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master2] and IPs [172.16.1.21 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node master2 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.

  • Copy the admin kubeconfig to the home directory (copy the commands from the join output and run them)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

  • Check the nodes; there are now two master nodes. The network plugin is still missing, so the status is NotReady.

8. Initialize k8s [master3]

8.1 Initialize master3 and join the k8s cluster


  • Copy the control-plane join command obtained in [step 6.2] and run it:
kubeadm join 172.16.1.23:6443 --token agitw8.fwghrey1nysrprf8 \
        --discovery-token-ca-cert-hash sha256:01099779f60c0ba7a8070edacaeaaa1b3b55c36b3a9136402200ce75dafe6bb8 \
        --control-plane --certificate-key 6d3f1abc998f3ffd104a989aa4b5ff3ae622ccd1a9b0098d9b68ed4221820ac5

  • The master3 join output is as follows:
master3 join output
[root@master3 ~]# kubeadm join 172.16.1.23:6443 --token agitw8.fwghrey1nysrprf8 \
> --discovery-token-ca-cert-hash sha256:01099779f60c0ba7a8070edacaeaaa1b3b55c36b3a9136402200ce75dafe6bb8 \
> --control-plane --certificate-key 6d3f1abc998f3ffd104a989aa4b5ff3ae622ccd1a9b0098d9b68ed4221820ac5
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master3] and IPs [10.96.0.1 172.16.1.24 172.16.1.23 172.16.1.20 172.16.1.21 127.0.0.1]
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master3] and IPs [172.16.1.24 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master3] and IPs [172.16.1.24 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node master3 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master3 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.

  • Copy the admin kubeconfig to the home directory (copy the commands from the join output and run them)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

  • Check the nodes; there are now three master nodes. The network plugin is still missing, so the status is NotReady.

9. Check node status


  • The calico network plugin is not installed yet, so all nodes show NotReady.
9.1 Node status as seen from master3
[root@master3 ~]# kubectl get node -o wide
NAME      STATUS     ROLES                  AGE     VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
master1   NotReady   control-plane,master   23m     v1.23.17   172.16.1.20   <none>        CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
master2   NotReady   control-plane,master   18m     v1.23.17   172.16.1.21   <none>        CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
master3   NotReady   control-plane,master   2m52s   v1.23.17   172.16.1.24   <none>        CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
9.2 Node status as seen from master2
[root@master2 data]# kubectl get node -o wide
NAME      STATUS     ROLES                  AGE   VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
master1   NotReady   control-plane,master   50m   v1.23.17   172.16.1.20   <none>        CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
master2   NotReady   control-plane,master   44m   v1.23.17   172.16.1.21   <none>        CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
9.3 Node status as seen from master1
[root@master1 ~]# kubectl get node -o wide
NAME      STATUS     ROLES                  AGE   VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
master1   NotReady   control-plane,master   20m   v1.23.17   172.16.1.20   <none>        CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
master2   NotReady   control-plane,master   15m   v1.23.17   172.16.1.21   <none>        CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
10. Join the node (worker) nodes to the cluster (run on every node)


  • The initialization from [step 3] must already be fully completed on these nodes.
10.1 Copy the worker join command obtained in [step 6.2] and run it
kubeadm join 172.16.1.23:6443 --token agitw8.fwghrey1nysrprf8 \
        --discovery-token-ca-cert-hash sha256:01099779f60c0ba7a8070edacaeaaa1b3b55c36b3a9136402200ce75dafe6bb8

  • node1 join output
node1 join output
[root@node1 ~]# kubeadm join 172.16.1.23:6443 --token agitw8.fwghrey1nysrprf8 \
> --discovery-token-ca-cert-hash sha256:01099779f60c0ba7a8070edacaeaaa1b3b55c36b3a9136402200ce75dafe6bb8
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
10.2 Check node registration from any master (calico is not installed yet, so the status is not Ready)
[root@master1 kubeadm]# kubectl get node -o wide
NAME      STATUS     ROLES                  AGE    VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
master1   NotReady   control-plane,master   26m    v1.23.17   172.16.1.20   <none>        CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
master2   NotReady   control-plane,master   21m    v1.23.17   172.16.1.21   <none>        CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
master3   NotReady   control-plane,master   6m3s   v1.23.17   172.16.1.24   <none>        CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
node1     NotReady   <none>                 28s    v1.23.17   172.16.1.22   <none>        CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
11. Install the calico network plugin


  • Follow [step 6] of the article below
https://www.cnblogs.com/Leonardo-li/p/18648449

  • Node status after calico is deployed
[root@master1 calico]# kubectl get node -o wide
NAME      STATUS   ROLES                  AGE    VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
master1   Ready    control-plane,master   31m    v1.23.17   172.16.1.20   <none>        CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
master2   Ready    control-plane,master   26m    v1.23.17   172.16.1.21   <none>        CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
master3   Ready    control-plane,master   10m    v1.23.17   172.16.1.24   <none>        CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
node1     Ready    <none>                 5m3s   v1.23.17   172.16.1.22   <none>        CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
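  • Since kube-proxy was configured with mode: ipvs, it may be worth confirming the mode actually took effect; for example (the pod name is a placeholder, and ipvsadm must be installed for the second check):
kubectl -n kube-system logs <kube-proxy-pod> | grep -i proxier   # should mention the ipvs proxier
ipvsadm -Ln                                                      # should list virtual servers for the service addresses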
12. Verify the k8s cluster HA setup

12.1 Check the VIP and master node status


  • Checking the VIP: at this point the VIP (172.16.1.23) is on master1 (172.16.1.20).


  • Check the control-plane component status (see the commands sketched below).
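  • These checks can be reproduced with standard commands (ens33 matches the interface used in the keepalived configuration above):
ip addr show ens33 | grep 172.16.1.23        # the VIP appears on whichever master currently holds it
kubectl get pods -n kube-system -o wide      # one apiserver/controller-manager/scheduler/etcd pod per master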

12.2 Failover verification


  • With the VIP on master1, shut down the master1 server and check that the VIP floats correctly; it moved to master3.


  • Check whether the control plane is still usable from master2 and master3; it works normally.
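  • A minimal way to confirm this from master2 or master3 (on kubeadm clusters the /healthz endpoint is readable without credentials, so curl -k is enough):
curl -k https://172.16.1.23:6443/healthz     # should return "ok" via the VIP, now answered by a surviving master
kubectl get nodes                            # master1 eventually shows NotReady; the cluster keeps serving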

13. Confirm the long-lived k8s certificates

13.1 Both the CA certificates and all the other certificates are now valid for 100 years
kubeadm certs check-expiration
[root@master1 ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Mar 07, 2125 02:50 UTC   99y             ca                      no
apiserver                  Mar 07, 2125 02:50 UTC   99y             ca                      no
apiserver-etcd-client      Mar 07, 2125 02:50 UTC   99y             etcd-ca                 no
apiserver-kubelet-client   Mar 07, 2125 02:50 UTC   99y             ca                      no
controller-manager.conf    Mar 07, 2125 02:50 UTC   99y             ca                      no
etcd-healthcheck-client    Mar 07, 2125 02:50 UTC   99y             etcd-ca                 no
etcd-peer                  Mar 07, 2125 02:50 UTC   99y             etcd-ca                 no
etcd-server                Mar 07, 2125 02:50 UTC   99y             etcd-ca                 no
front-proxy-client         Mar 07, 2125 02:50 UTC   99y             front-proxy-ca          no
scheduler.conf             Mar 07, 2125 02:50 UTC   99y             ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Mar 07, 2125 02:50 UTC   99y             no
etcd-ca                 Mar 07, 2125 02:50 UTC   99y             no
front-proxy-ca          Mar 07, 2125 02:50 UTC   99y             no
13.2 The kubelet client certificate is also valid for 100 years
openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -dates
[root@master1 ~]# openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -dates
notBefore=Mar 31 02:50:04 2025 GMT
notAfter=Mar  7 02:50:08 2125 GMT
14. References
# HA cluster deployment
https://mp.weixin.qq.com/s/l4qS_GnmEZ2BmQpO6VI3sQ
# Creating long-lived certificates
https://mp.weixin.qq.com/s/TRukdEGu0Nm_7wjqledrRg
# Certificate overview
https://mp.weixin.qq.com/s/E1gc6pJGLzbgHCvbOd1nPQ
 
