Kubernetes High-Availability Architecture and Component Deployment Notes
1. Service deployment plan
IP              Minimum spec   Role                    Services
192.168.6.10    -              virtual IP (VIP)        -
192.168.6.11    2C / 1.7G      master-01               keepalived, haproxy
192.168.6.12    2C / 1.7G      master-02               keepalived, haproxy
192.168.6.13    2C / 1.7G      master-03               keepalived, haproxy
192.168.6.14    -              server-01 (work node)   -
192.168.6.15    -              server-02 (work node)   -
2. Deploy HAProxy
Deployment location: master-01, master-02, master-03
Install:
[root@master-01 haproxy]# yum install -y haproxy
Key configuration notes:
Since HAProxy runs on the master nodes themselves, it listens on port 9443 to avoid conflicting with the apiserver's 6443. (On Alibaba Cloud you can simply use an SLB for load balancing and high availability instead.)
Configuration file:
[root@master-01 haproxy]# cat haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the "listen" and "backend" sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend kubernetes-apiserver
    mode tcp
    bind *:9443              ## listen on port 9443
    # bind *:443 ssl         # To be completed ....
    acl url_static path_beg  -i /static /images /javascript /stylesheets
    acl url_static path_end  -i .jpg .gif .png .css .js
    default_backend kubernetes-apiserver

backend kubernetes-apiserver
    mode tcp                 # TCP mode
    balance roundrobin       # round-robin load balancing
    # k8s apiserver backends, port 6443
    server master-192.168.6.11 192.168.6.11:6443 check
    server master-192.168.6.12 192.168.6.12:6443 check
    server master-192.168.6.13 192.168.6.13:6443 check

Start HAProxy and verify it is running:
[root@master-03 haproxy]# systemctl enable haproxy && systemctl start haproxy
[root@master-03 haproxy]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN   861/sshd
tcp        0      0 127.0.0.1:25       0.0.0.0:*          LISTEN   952/master
tcp        0      0 0.0.0.0:9443       0.0.0.0:*          LISTEN   3745/haproxy
tcp6       0      0 :::22              :::*               LISTEN   861/sshd
tcp6       0      0 ::1:25             :::*               LISTEN   952/master
4. Deploy keepalived
Deployment location: master-01, master-02, master-03
Install:
[root@master-01 haproxy]# yum install -y keepalived
Configuration file:
global_defs {
    script_user root
    enable_script_security
    notification_email {
        xxxx@qq.com
    }
}
vrrp_script chk_haproxy {
    script "/bin/bash -c 'if [[ $(netstat -nlp | grep 9443) ]]; then exit 0; else exit 1; fi'"   # check that haproxy is listening
    interval 2    # run the check every 2 seconds
    weight 11     # priority adjustment applied on check result
}
vrrp_instance VI_1 {
    interface ens32
    state MASTER              # set to BACKUP on the backup nodes
    virtual_router_id 51      # must be identical on all nodes of the same virtual router group
    priority 100              # initial priority; use a lower value on BACKUP nodes
    nopreempt                 # non-preemptive: a recovered node does not take the VIP back
    unicast_peer {
    }
    virtual_ipaddress {
        192.168.6.10          # VIP
    }
    authentication {
        auth_type PASS
        auth_pass password123
    }
    track_script {
        chk_haproxy
    }
}
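The inline vrrp_script above needs three levels of quote nesting, which is easy to break. A sketch of the same health check as a standalone file (the path /etc/keepalived/chk_haproxy.sh is an assumption) that uses bash's built-in /dev/tcp, so it does not depend on netstat being installed:

```shell
#!/bin/bash
# chk_haproxy.sh -- succeed (exit 0) only if something is listening on
# 127.0.0.1:$1 (HAProxy's 9443 in this setup). keepalived treats a
# non-zero exit as "haproxy down" and adjusts priority via `weight`.
chk_haproxy() {
    local port="${1:-9443}"
    # The subshell opens a TCP connection through bash's /dev/tcp and
    # closes it when it exits; its status becomes the return value.
    (exec 3<> "/dev/tcp/127.0.0.1/${port}") 2>/dev/null
}

# Demo against a high port that is almost certainly closed:
chk_haproxy 59443 && echo "59443: listening" || echo "59443: closed"
```

Save it as /etc/keepalived/chk_haproxy.sh, `chmod +x` it, and reference it with `script "/etc/keepalived/chk_haproxy.sh"`. Note that with enable_script_security set, keepalived refuses scripts that are not root-owned or are world-writable.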
Notes:
(1) The master and backup nodes need different settings:
    state MASTER     # set to BACKUP on the backup nodes
    priority 100     # initial priority; use a lower value on BACKUP nodes
(2) Watch out for firewall rules causing keepalived split-brain.
Option 1: flush iptables
    iptables -F && iptables -X && iptables -t nat -F && iptables -t nat -X
Option 2: open only the needed traffic
    iptables -A INPUT -s 192.168.6.0/24 -d 224.0.0.18 -j ACCEPT   # allow traffic to the VRRP multicast address
    iptables -A INPUT -s 192.168.6.0/24 -p vrrp -j ACCEPT         # allow VRRP (Virtual Router Redundancy Protocol) traffic
Start keepalived and verify it is running:
[root@master-01 keepalived]# systemctl enable keepalived && systemctl start keepalived
[root@master-01 keepalived]# ps aux | grep keepalived
root  28985  0.0  0.1 120792 1416 ?     Ss  10:58  0:00 /usr/sbin/keepalived -D
root  28986  0.0  0.3 127532 3312 ?     S   10:58  0:00 /usr/sbin/keepalived -D
root  28987  0.1  0.3 131836 3144 ?     S   10:58  0:01 /usr/sbin/keepalived -D
root  30987  0.0  0.0 112824  992 pts/1 S+  11:11  0:00 grep --color=auto keepalived
[root@master-01 keepalived]# ip a | grep "192.168.6.10"
    inet 192.168.6.10/32 scope global ens32
5. Pre-install tuning for the k8s components
(1) Add hosts entries
# cat >> /etc/hosts << EOF
192.168.6.11 master-01
192.168.6.12 master-02
192.168.6.13 master-03
192.168.6.14 server-01
192.168.6.15 server-02
EOF
(2) Disable the firewall, SELinux, and NetworkManager
Where: all nodes
iptables -F && iptables -X && iptables -t nat -F && iptables -t nat -X && \
systemctl stop NetworkManager && systemctl disable NetworkManager && \
systemctl stop firewalld && systemctl disable firewalld && \
setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
(3) Disable swap
Where: all nodes
Temporarily:
# swapoff -a
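The permanent step relies on a sed over /etc/fstab. If you want to see what it does before touching the real file, here is a dry run on a scratch copy (the fstab contents are made up for the demo):

```shell
#!/bin/bash
# Dry run of `sed -i.bak "/swap/s/^/#/"` on a scratch fstab: every line
# mentioning swap gets a leading '#', everything else is untouched.
tmp=$(mktemp)
printf '%s\n' \
    '/dev/mapper/centos-root /    xfs  defaults 0 0' \
    '/dev/mapper/centos-swap swap swap defaults 0 0' > "$tmp"

sed -i.bak '/swap/s/^/#/' "$tmp"

commented=$(grep -c '^#.*swap' "$tmp")                   # swap line now commented out
untouched=$(grep -c '^/dev/mapper/centos-root' "$tmp")   # root line unchanged
echo "commented swap lines: $commented, root line untouched: $untouched"
rm -f "$tmp" "$tmp.bak"
```

The `-i.bak` suffix means sed keeps the pre-edit file as /etc/fstab.bak, so the change is easy to roll back.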
Permanently:
# sed -i.bak "/swap/s/^/#/" /etc/fstab
(4) Set the kernel parameter bridge-nf-call-iptables=1
Where: all nodes
Check for the br_netfilter module:
# lsmod | grep br_netfilter
Load it for the current boot:
# modprobe br_netfilter
Load it persistently:
# cat > /etc/rc.sysinit << 'EOF'
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
    [ -x $file ] && $file
done
EOF
# cat > /etc/sysconfig/modules/br_netfilter.modules << EOF
modprobe br_netfilter
EOF
# chmod 755 /etc/sysconfig/modules/br_netfilter.modules
(5) Set kernel parameters
Where: all nodes
Configuration:
# cat > /etc/sysctl.d/k8s.conf << EOF
vm.swappiness = 0
kernel.sysrq = 1
net.ipv4.neigh.default.gc_stale_time = 120
# see details in https://help.aliyun.com/knowledge_detail/39428.html
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
# see details in https://help.aliyun.com/knowledge_detail/41334.html
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_slow_start_after_idle = 0
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_recycle = 0
net.ipv4.neigh.default.gc_thresh1 = 1024
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 1048576
fs.file-max = 52706963
fs.nr_open = 52706963
net.ipv6.conf.all.disable_ipv6 = 1
net.netfilter.nf_conntrack_max = 2310720
EOF
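A file this long makes it easy to set the same key twice, and sysctl silently keeps only the last value. A small sketch that flags duplicated keys before you apply the file (the helper name dup_keys is made up):

```shell
#!/bin/bash
# Print every key that appears more than once in a sysctl-style file.
dup_keys() {
    grep -v '^[[:space:]]*#' "$1" \
      | grep '=' \
      | awk -F= '{gsub(/[ \t]/, "", $1); print $1}' \
      | sort | uniq -d
}

# Demo on a scratch file with one key repeated:
f=$(mktemp)
printf '%s\n' 'vm.swappiness = 0' \
              'net.ipv4.ip_forward = 1' \
              'vm.swappiness = 0' > "$f"
dup_keys "$f"    # prints: vm.swappiness
rm -f "$f"
```

Run it as `dup_keys /etc/sysctl.d/k8s.conf`; an empty output means every key is set exactly once.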
Apply the configuration:
# sysctl -p /etc/sysctl.d/k8s.conf
(6) Enable IPVS for kube-proxy
Where: all nodes
Configuration:
Install the IPVS userspace packages:
yum -y install ipset ipvsadm
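Before relying on IPVS it helps to confirm the kernel modules are actually loaded. A filter sketch over `lsmod`-style output (it reads stdin so it can be exercised on canned input; the module list is what CentOS 7 kernels need — on 4.19+ kernels nf_conntrack_ipv4 was renamed to nf_conntrack):

```shell
#!/bin/bash
# Read `lsmod` output on stdin and print the required ipvs modules
# that are NOT loaded (prints nothing when everything is in place).
missing_ipvs() {
    local loaded m
    loaded=$(awk 'NR > 1 {print $1}')   # first column = module name
    for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
        grep -qx "$m" <<< "$loaded" || echo "$m"
    done
}

# Real use:  lsmod | missing_ipvs
# Demo on canned output where only ip_vs and ip_vs_rr are loaded:
printf 'Module Size Used by\nip_vs 145458 10\nip_vs_rr 12600 1\n' | missing_ipvs
```

An empty output from `lsmod | missing_ipvs` means kube-proxy can switch to ipvs mode without falling back to iptables.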
Enable IPVS support in the kernel (the module list below is the standard set for this kernel; the heredoc body was lost in the original and is reconstructed here):
cat > /etc/sysconfig/modules/ipvs.modules << 'EOF'
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
6. Install Docker
(1) Configure the yum repositories
# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
#repo_gpgcheck=1
#gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# wget https://download.docker.com/linux/centos/docker-ce.repo -P /etc/yum.repos.d/
(2) Install Docker
Check available versions:
# yum list docker-ce --showduplicates | sort -r
Install dependencies:
# yum install -y yum-utils device-mapper-persistent-data lvm2
Install Docker:
# yum install -y docker-ce-20.10.12
(3) Shell completion
# yum -y install bash-completion
# source /etc/profile.d/bash_completion.sh
(4) Registry mirror
Since k8s init pulls its images from registries abroad, use the Alibaba Cloud mirror service:
# mkdir -p /etc/docker
# tee /etc/docker/daemon.json <<-"EOF"
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"]
}
EOF
(5) Change the Cgroup Driver
Edit daemon.json and add "exec-opts": ["native.cgroupdriver=systemd"]:
# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
(6) Start the service
# systemctl daemon-reload && systemctl enable docker && systemctl restart docker
7. Install Kubernetes
(1) Check available versions
[root@master-01 ~]# yum list kubelet --showduplicates | sort -r | grep 1.20
Repository base is listed more than once in the configuration
kubelet.x86_64  1.20.9-0   kubernetes
kubelet.x86_64  1.20.8-0   kubernetes
kubelet.x86_64  1.20.7-0   kubernetes
kubelet.x86_64  1.20.6-0   kubernetes
kubelet.x86_64  1.20.5-0   kubernetes
kubelet.x86_64  1.20.4-0   kubernetes
kubelet.x86_64  1.20.2-0   kubernetes
kubelet.x86_64  1.20.15-0  kubernetes
kubelet.x86_64  1.20.14-0  kubernetes
kubelet.x86_64  1.20.13-0  kubernetes
kubelet.x86_64  1.20.12-0  kubernetes
kubelet.x86_64  1.20.11-0  kubernetes
kubelet.x86_64  1.20.1-0   kubernetes
kubelet.x86_64  1.20.10-0  kubernetes
kubelet.x86_64  1.20.0-0   kubernetes
(2) Install the chosen version
Where: all nodes
# yum install -y kubelet-1.20.10 kubeadm-1.20.10 kubectl-1.20.10
Notes:
kubelet  runs on every node in the cluster; it starts Pods, containers, and related objects
kubeadm  initializes the cluster and bootstraps its components
kubectl  the CLI for talking to the cluster: deploy and manage applications, inspect resources, and create, delete, or update components
(3) Enable kubelet on boot
Where: all nodes
# systemctl enable kubelet && systemctl start kubelet
Set up command completion:
# echo "source <(kubectl completion bash)" >> ~/.bash_profile && source ~/.bash_profile
(4) Pre-pull the images needed for init
Where: all nodes
# cat image.sh
#!/bin/bash
url=registry.cn-hangzhou.aliyuncs.com/google_containers
version=v1.20.10
images=(`kubeadm config images list --kubernetes-version=$version | awk -F "/" '{print $2}'`)
for imagename in ${images[@]} ; do
    docker pull $url/$imagename
    docker tag $url/$imagename k8s.gcr.io/$imagename
    docker rmi -f $url/$imagename
done
# sh -x image.sh
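The quoting of the awk program in image.sh is load-bearing: inside double quotes the shell expands $2 before awk ever sees it, so the field split silently stops working. A quick demonstration, no cluster needed:

```shell
#!/bin/bash
# With double quotes the SHELL substitutes $2 (normally empty here), so
# awk receives the program '{print }' and echoes the whole line. With
# single quotes, $2 reaches awk intact and selects the second field.
line='k8s.gcr.io/kube-apiserver:v1.20.10'

bad=$(echo "$line"  | awk -F "/" "{print $2}")
good=$(echo "$line" | awk -F "/" '{print $2}')

echo "double quotes -> $bad"
echo "single quotes -> $good"
```

The single-quoted form yields `kube-apiserver:v1.20.10`, which is exactly the `image:tag` suffix the pull/tag/rmi loop needs.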
Check the downloads:
# docker images
REPOSITORY                           TAG        IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.20.10   945c9bce487a   6 months ago    99.7MB
k8s.gcr.io/kube-apiserver            v1.20.10   644cadd07add   6 months ago    122MB
k8s.gcr.io/kube-controller-manager   v1.20.10   2f450864515d   6 months ago    116MB
k8s.gcr.io/kube-scheduler            v1.20.10   4c9be8dc650b   6 months ago    47.3MB
k8s.gcr.io/etcd                      3.4.13-0   0369cf4303ff   17 months ago   253MB
k8s.gcr.io/coredns                   1.7.0      bfe3a36ebd25   20 months ago   45.2MB
k8s.gcr.io/pause                     3.2        80d28bedfe5d   2 years ago     683kB
(5.1) Initialize the cluster from a config file (recommended: the kube-proxy mode can be permanently set to ipvs)
Generate the default init config:
[root@master-01 k8s]# kubeadm config print init-defaults > kubeadm-init.yaml
Edit the config:
[root@master-01 k8s]# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.6.11   # set to one master node's IP
  bindPort: 6443                   # 6443 (the default)
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master-01
  taints:
  - effect: NoSchedule             # adjust as needed; in production, masters usually run no business workloads
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
controlPlaneEndpoint: "192.168.6.10:9443"   # added: the SLB or VIP address and port
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io   # can be swapped for a domestic mirror, e.g. registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.10   # set to the version being installed, v1.20.10 here
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.10.0.0/16   # adjust to your needs
  podSubnet: "10.144.0.0/16"    # added: the pod IP range, adjust to your needs
scheduler: {}
---
# everything below this line is new: set the kube-proxy mode to ipvs
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
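Only a handful of fields in that file differ from the kubeadm defaults, so it is worth eyeballing exactly those before running init. A grep/awk sketch (the field list is just the ones this guide edits; the demo runs on a trimmed sample, not the real file):

```shell
#!/bin/bash
# Print the customised fields of a kubeadm init config, with comments
# stripped and whitespace squeezed, for a quick pre-flight check.
show_fields() {
    grep -E 'advertiseAddress|controlPlaneEndpoint|kubernetesVersion|serviceSubnet|podSubnet|mode:' "$1" \
      | sed 's/#.*$//' \
      | awk '{$1=$1; print}'
}

# Demo on a trimmed sample of the config:
f=$(mktemp)
cat > "$f" << 'EOF'
localAPIEndpoint:
  advertiseAddress: 192.168.6.11   # one master's IP
controlPlaneEndpoint: "192.168.6.10:9443"
kubernetesVersion: v1.20.10
networking:
  serviceSubnet: 10.10.0.0/16
  podSubnet: "10.144.0.0/16"
mode: "ipvs"
EOF
show_fields "$f"
rm -f "$f"
```

Run `show_fields kubeadm-init.yaml` and confirm the VIP endpoint, version, subnets, and ipvs mode all read back as intended.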
Initialize:
[root@master-01 k8s]# kubeadm init --config ./kubeadm-init.yaml --upload-certs
[init] Using Kubernetes version: v1.20.10
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.12. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using "kubeadm config images pull"
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master-01] and IPs [10.10.0.1 192.168.6.11 192.168.6.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-01] and IPs [192.168.6.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-01] and IPs [192.168.6.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.048763 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key: 1fbd8452dd60d3803b395f08fdcd8b9c88e2b72e8451963cdfa1975229006a43
[mark-control-plane] Marking the node master-01 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node master-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.6.10:9443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:fcc54a23d35c8eb3baf59290a3178f0656b573c3d0553fd3f9085ae1c9648bab --control-plane --certificate-key 1fbd8452dd60d3803b395f08fdcd8b9c88e2b72e8451963cdfa1975229006a43

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.6.10:9443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:fcc54a23d35c8eb3baf59290a3178f0656b573c3d0553fd3f9085ae1c9648bab

(5.2) Initialize the cluster from the command line (not recommended: the kube-proxy mode cannot be permanently set to ipvs this way, only changed after the fact)
Where: any one master node
kubeadm init --kubernetes-version=1.20.10 --service-cidr=10.10.0.0/16 --pod-network-cidr=10.144.0.0/16 --control-plane-endpoint 192.168.6.10:9443 --upload-certs
Note: for a single-master init, drop these flags:
--control-plane-endpoint 192.168.6.10 --upload-certs
Output:
[root@master-01 ~]# kubeadm init --kubernetes-version=1.20.10 \
> --service-cidr=10.10.0.0/16 --pod-network-cidr=10.144.0.0/16 \
> --control-plane-endpoint 192.168.6.10:9443 --upload-certs
[init] Using Kubernetes version: v1.20.10
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.12. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using "kubeadm config images pull"
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master-01] and IPs [10.10.0.1 192.168.6.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-01] and IPs [192.168.6.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-01] and IPs [192.168.6.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.013101 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key: 8a6b2052e2d717628c1d9a8ff9404da9748194848acee56162d604fa40fb2f3a
[mark-control-plane] Marking the node master-01 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node master-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: lry74b.lek8xpwhfofkslxm
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.6.10:9443 --token lry74b.lek8xpwhfofkslxm --discovery-token-ca-cert-hash sha256:6a79bad34c640800f3defc739049a6382994c65077b971d93f15870aeeee8129 --control-plane --certificate-key 8a6b2052e2d717628c1d9a8ff9404da9748194848acee56162d604fa40fb2f3a

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.6.10:9443 --token lry74b.lek8xpwhfofkslxm --discovery-token-ca-cert-hash sha256:6a79bad34c640800f3defc739049a6382994c65077b971d93f15870aeeee8129

(6) Add nodes
Where: the master node where init was run
Set up password-less SSH to the other nodes:
[root@master-01 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:AXI6qrQrDrOB9ApII4MqogKl5678m0CVpCpQXOa4f4Q root@master-02
The key's randomart image is:
+---[RSA 2048]----+
|  ..o+ o         |
|  .++.+ .        |
|...o+ .          |
|+ oo o .         |
|*Oo E . S        |
|&o=. .           |
|&= .. .          |
|X+= ..           |
|BBo+.            |
+----[SHA256]-----+
[root@master-01 ~]# for i in 12 13 14 15
> do
> ssh-copy-id root@192.168.6.${i}
> done
Sync the certificates from the init node to the other master nodes.
Sync script:
[root@master-01 ~]# cat scp.sh
CONTROL_PLANE_IPS=$1
USER="root"
dir="/etc/kubernetes/pki/"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:$dir
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:$dir
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:$dir
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:$dir
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:$dir
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:$dir
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:${dir}etcd/
    # Quote this line if you are using external etcd
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:${dir}etcd/
done
Run it for the other masters:
[root@master-01 ~]# for i in 12 13
> do
> sh scp.sh 192.168.6.${i}
> done
Add the master nodes
Where: the other master nodes
# kubeadm join 192.168.6.10:9443 --token lry74b.lek8xpwhfofkslxm --discovery-token-ca-cert-hash sha256:6a79bad34c640800f3defc739049a6382994c65077b971d93f15870aeeee8129 --control-plane --certificate-key 8a6b2052e2d717628c1d9a8ff9404da9748194848acee56162d604fa40fb2f3a
Add the worker nodes
Where: the server nodes
# kubeadm join 192.168.6.10:9443 --token lry74b.lek8xpwhfofkslxm --discovery-token-ca-cert-hash sha256:6a79bad34c640800f3defc739049a6382994c65077b971d93f15870aeeee8129
Check the result:
[root@master-01 ~]# kubectl get nodes
NAME        STATUS     ROLES                  AGE     VERSION
master-01   NotReady   control-plane,master   124m    v1.20.10
master-02   NotReady   control-plane,master   10m     v1.20.10
master-03   NotReady   control-plane,master   10m     v1.20.10
server-01   NotReady   <none>                 3m40s   v1.20.10
server-02   NotReady   <none>                 79s     v1.20.10
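The join commands above embed a --discovery-token-ca-cert-hash. If the init output is lost, the hash can be recomputed from the cluster CA with the openssl pipeline from the kubeadm documentation (a sketch; the path mentioned below is kubeadm's default pki directory):

```shell
#!/bin/bash
# Recompute the value for --discovery-token-ca-cert-hash: the sha256
# of the CA certificate's DER-encoded public key, prefixed "sha256:".
ca_hash() {
    openssl x509 -pubkey -in "$1" \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | awk '{print "sha256:" $NF}'
}

# Real use:  ca_hash /etc/kubernetes/pki/ca.crt
```

If the bootstrap token itself has expired (default TTL is 24h), create a fresh one with `kubeadm token create --print-join-command`, which prints the matching hash as well.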
The nodes show NotReady because no network plugin has been added yet.
8. Add the Calico network plugin
Background: the CoreDNS containers fail to start, and their events show:
  Normal   Scheduled               3m17s  default-scheduler  Successfully assigned kube-system/coredns-74ff55c5b-7jzlr to master-01
  Warning  FailedCreatePodSandBox  3m14s  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "a1a7b574f6073e766d7744ed29da60f9b3fe8dbb8cda2d83d89e720f6a78760e" network for pod "coredns-74ff55c5b-7jzlr": networkPlugin cni failed to set up pod "coredns-74ff55c5b-7jzlr_kube-system" network: error getting ClusterInformation: Get "https://[10.10.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"), failed to clean up sandbox container "a1a7b574f6073e766d7744ed29da60f9b3fe8dbb8cda2d83d89e720f6a78760e" network for pod "coredns-74ff55c5b-7jzlr": networkPlugin cni failed to teardown pod "coredns-74ff55c5b-7jzlr_kube-system" network: error getting ClusterInformation: Get "https://[10.10.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
(1) Download calico.yaml and the images
Release download page: https://github.com/projectcalico/calico/releases
Grab the release-<version>.tgz archive:
[root@master-01 calico]# wget https://github.com/projectcalico/calico/releases/download/v3.20.3/release-v3.20.3.tgz
[root@master-01 calico]# tar xzvf release-v3.20.3.tgz
[root@master-01 calico]# ls release-v3.20.3
bin  images  k8s-manifests  README
The yaml file lives in the unpacked archive:
[root@master-01 calico]# cat release-v3.20.3/k8s-manifests/calico.yaml | grep image
          image: docker.io/calico/cni:v3.20.3
          image: docker.io/calico/cni:v3.20.3
          image: docker.io/calico/pod2daemon-flexvol:v3.20.3
          image: docker.io/calico/node:v3.20.3
          image: docker.io/calico/kube-controllers:v3.20.3
Match those images to the image tarballs shipped in the archive:
[root@master-01 calico]# ls release-v3.20.3/images/ | egrep "cni|pod2daemon-flexvol|node|kube-controllers"
calico-cni.tar
calico-kube-controllers.tar
calico-node.tar
calico-pod2daemon-flexvol.tar
Sync them to all the other nodes:
[root@master-01 calico]# for i in `ls release-v3.20.3/images/ | egrep "cni|pod2daemon-flexvol|node|kube-controllers"`
> do
>   for j in 12 13 14 15
>   do
>     scp release-v3.20.3/images/$i root@192.168.6.${j}:/root/
>   done
> done
Load the images on every node.
On master-01:
# for i in `ls release-v3.20.3/images/ | egrep "cni|pod2daemon-flexvol|node|kube-controllers"`
> do
>   docker load -i release-v3.20.3/images/$i
> done
On the other nodes:
# for i in `ls /root/ | egrep "cni|pod2daemon-flexvol|node|kube-controllers"`
> do
>   docker load -i /root/$i
> done
(2) Create the Calico resources
[root@master-01 calico]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
(3) Check again; everything is now Ready
[root@master-01 calico]# kubectl get nodes
NAME        STATUS   ROLES                  AGE   VERSION
master-01   Ready    control-plane,master   22h   v1.20.10
master-02   Ready    control-plane,master   20h   v1.20.10
master-03   Ready    control-plane,master   20h   v1.20.10
server-01   Ready    <none>                 20h   v1.20.10
server-02   Ready    <none>                 20h   v1.20.10
[root@master-01 calico]# kubectl get pod -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-9bcc567d6-4kq2g   1/1     Running   0          3m21s
kube-system   calico-node-25h7v                         1/1     Running   0          3m20s
kube-system   calico-node-drtxw                         1/1     Running   0          3m20s
kube-system   calico-node-nknc5                         1/1     Running   0          3m20s
kube-system   calico-node-svk87                         1/1     Running   0          3m21s
kube-system   calico-node-xvc29                         1/1     Running   0          3m20s
kube-system   coredns-74ff55c5b-7jzlr                   1/1     Running   0          5m45s
kube-system   coredns-74ff55c5b-jdwwh                   1/1     Running   0          5m45s
kube-system   etcd-master-01                            1/1     Running   0          5m52s
kube-system   etcd-master-02                            1/1     Running   0          4m51s
kube-system   etcd-master-03                            1/1     Running   0          4m29s
kube-system   kube-apiserver-master-01                  1/1     Running   0          5m52s
kube-system   kube-apiserver-master-02                  1/1     Running   0          4m52s
kube-system   kube-apiserver-master-03                  1/1     Running   0          3m24s
kube-system   kube-controller-manager-master-01         1/1     Running   2          5m52s
kube-system   kube-controller-manager-master-02         1/1     Running   0          4m52s
kube-system   kube-controller-manager-master-03         1/1     Running   0          3m23s
kube-system   kube-proxy-ct6bh                          1/1     Running   0          3m49s
kube-system   kube-proxy-hk5rd                          1/1     Running   0          4m1s
kube-system   kube-proxy-s5swf                          1/1     Running   0          4m52s
kube-system   kube-proxy-sjzl8                          1/1     Running   0          3m53s
kube-system   kube-proxy-xglsf                          1/1     Running   0          5m45s
kube-system   kube-scheduler-master-01                  1/1     Running   2          5m52s
kube-system   kube-scheduler-master-02                  1/1     Running   0          4m51s
kube-system   kube-scheduler-master-03                  1/1     Running   0          3m13s
Appendix 1: adjusting kube-proxy
If the cluster was initialized from the command line, change the kube-proxy mode from the default "" to ipvs:
# kubectl edit configmap kube-proxy -n kube-system
Change mode: "" to mode: ipvs
Then restart kube-proxy:
# kubectl get pod -A | grep kube-proxy | awk '{print $2}' | xargs kubectl delete pod -n kube-system