Deploying Kubernetes (k8s) on Ubuntu 18.04

1. Update the Ubuntu apt sources

mv /etc/apt/sources.list /etc/apt/sources.list.bak
cat /etc/apt/sources.list.bak | grep -v "#" | grep -v "^$" > /etc/apt/sources.list
sed -i s/archive.ubuntu.com/mirrors.ustc.edu.cn/g /etc/apt/sources.list
sed -i s/security.ubuntu.com/mirrors.ustc.edu.cn/g /etc/apt/sources.list
apt -y update && apt -y upgrade

# 2. locale and timezone
sed -i s/en_US/C/g /etc/default/locale
timedatectl set-timezone Asia/Shanghai

# 3. bash-completion
sed -i '97,99s/#//g' /root/.bashrc

# 4. ssh
echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
passwd root << "EOF"
password
password
EOF
systemctl reload ssh

# 5. hosts
vim /etc/hosts
10.0.0.20 k8s-master00
10.0.0.21 k8s-master01
10.0.0.22 k8s-master02
10.0.0.23 k8s-node01
10.0.0.24 k8s-node02
10.0.0.25 k8s-bl-master

# 6. ssh-keygen
ssh-keygen -t rsa
for i in `cat /root/*.txt`;do echo $i;ssh-copy-id -i .ssh/id_rsa.pub $i;done

# 7. swap
swapoff -a
sed -i "/swap/s/^\(.*\)$/#\1/g" /etc/fstab

# 8. network
net=`cat /etc/netplan/00-installer-config.yaml | awk 'NR==4{print $1}'`
sed -i "s/${net}/eth0:/g" /etc/netplan/00-installer-config.yaml
sed -i '11s/""/"net.ifnames=0 biosdevname=0"/g' /etc/default/grub
update-grub
reboot

2. Install IPVS

apt -y install ipvsadm ipset sysstat conntrack libseccomp2 libseccomp-dev
cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
systemctl restart systemd-modules-load.service
lsmod | grep -e ip_vs -e nf_conntrack_ipv4

3. Download and install containerd

wget https://github.com/containerd/containerd/releases/download/v1.6.1/cri-containerd-cni-1.6.1-linux-amd64.tar.gz
tar --no-overwrite-dir -C / -xzf cri-containerd-cni-1.6.1-linux-amd64.tar.gz
systemctl daemon-reload
systemctl enable --now containerd

Modify config.toml:

containerd config default > /etc/containerd/config.toml
---
sed -i "s#k8s.gcr.io#registry.aliyuncs.com/google_containers#g" /etc/containerd/config.toml
sed -i "s#SystemdCgroup = false#SystemdCgroup = true#g" /etc/containerd/config.toml
sed -i '153a\        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]' /etc/containerd/config.toml  # 8 leading spaces
sed -i '154a\          endpoint = ["https://registry.aliyuncs.com"]' /etc/containerd/config.toml  # endpoint: 10 leading spaces

Modify crictl.yaml:

mv /etc/crictl.yaml /etc/crictl.yaml.bak
cat > /etc/crictl.yaml << "EOF"
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 0
debug: false
pull-image-on-create: false
disable-pull-on-run: false
EOF

4. Install nginx as a layer-4 proxy

apt -y install nginx
cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
vim /etc/nginx/nginx.conf
---
...
...
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 10.0.0.20:6443;
        server 10.0.0.21:6443;
        server 10.0.0.22:6443;
    }

    server {
        listen 6444;
        proxy_pass k8s-apiserver;
    }
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    ...
    ...
}
---
systemctl enable --now nginx.service
systemctl status nginx.service

5. Install keepalived for high availability

apt -y install keepalived

# keepalived config
cat > /etc/keepalived/keepalived.conf << "EOF"
global_defs {
    notification_email {
      acassen@firewall.loc
      failover@firewall.loc
      sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id NGINX_MASTER
}

vrrp_script check_nginx {
  script "/etc/keepalived/check_nginx.sh"
  interval 5
  weight -1
  fall 2
  rise 1
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0 # change to the actual NIC name
    virtual_router_id 51 # VRRP router ID; must be unique per instance
    priority 100 # priority; set 90 on the backup server
    advert_int 1 # VRRP advertisement interval, 1 second by default
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    # virtual IP
    virtual_ipaddress {
        10.0.0.25/24
    }
    track_script {
        check_nginx
    }
}
EOF

# health check script
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash

count=$(ps -ef | grep nginx | grep sbin | egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
   systemctl stop keepalived
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
---
systemctl enable --now keepalived.service
systemctl status keepalived.service

6. On the masters: deploy cfssl, etcd, the CA certificate, and the etcd certificate
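Before generating any certificates, it may be worth confirming that the layer-4 proxy and the VIP from the previous two sections are actually in place. A minimal check, assuming the interface is eth0 and the addresses match the configuration above:

# nginx should be listening on the stream port used for the apiserver
ss -lntp | grep 6444
# on the keepalived MASTER node, the VIP should be bound to eth0
ip addr show eth0 | grep 10.0.0.25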
6.1 Download cfssl

wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64 -O /usr/local/bin/cfssl
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64 -O /usr/local/bin/cfssljson
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64 -O /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl*
chown -Rf root:root /usr/local/bin/cfssl*
6.2 etcd directory layout

# on all masters
# 1. etcd-ssl
mkdir -p /etc/etcd/ssl/
# 2. etcd-WorkingDirectory
mkdir -p /var/lib/etcd/default.etcd
# 3. kubernetes-ssl
mkdir -p /etc/kubernetes/ssl
# 4. kubernetes-log
mkdir -p /var/log/kubernetes
6.3 Generate the CA certificate

mkdir -p ~/work
cd ~/work/
---
cat > ca-csr.json << "EOF"
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF
---
cat > ca-config.json << "EOF"
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF
---
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
cp ca*.pem /etc/etcd/ssl/
---
# send to the other masters
for i in `cat ~/MasterNodes.txt`;do echo $i;scp /etc/etcd/ssl/ca*.pem $i:/etc/etcd/ssl;done
6.4 Generate the etcd certificate

cat > etcd-csr.json << "EOF"
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "10.0.0.20",
    "10.0.0.21",
    "10.0.0.22",
    "10.0.0.25"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF
---
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
cp etcd*.pem /etc/etcd/ssl/
---
# send to the other masters
for i in `cat ~/MasterNodes.txt`;do echo $i;scp /etc/etcd/ssl/etcd*.pem $i:/etc/etcd/ssl;done
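A quick way to confirm that every address in the hosts list actually made it into the issued certificate (openssl is installed by default on Ubuntu, so nothing extra is assumed):

openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A1 "Subject Alternative Name"
# the output should list 127.0.0.1 plus 10.0.0.20, 10.0.0.21, 10.0.0.22 and 10.0.0.25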
6.5 Download and configure etcd

# download etcd
wget https://github.com/etcd-io/etcd/releases/download/v3.5.0/etcd-v3.5.0-linux-amd64.tar.gz
# unpack etcd-*.tar.gz
tar -xf etcd-v3.5.0-linux-amd64.tar.gz --strip-components=1 -C ~/work/ etcd-v3.5.0-linux-amd64/etcd{,ctl}
chown -Rf root:root etcd*
cp -arp etcd* /usr/local/bin/
# send to the other masters
for i in `cat ~/MasterNodes.txt`;do echo $i;scp /usr/local/bin/etcd{,ctl} $i:/usr/local/bin/;done

cat > /etc/etcd/etcd.conf << "EOF"
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.0.20:2380" # change ip
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.20:2379,http://127.0.0.1:2379" # change ip
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.20:2380" # change ip
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.20:2379" # change ip
ETCD_INITIAL_CLUSTER="etcd1=https://10.0.0.20:2380,etcd2=https://10.0.0.21:2380,etcd3=https://10.0.0.22:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
6.6 Add the etcd systemd unit

cat > /usr/lib/systemd/system/etcd.service << "EOF"
[Unit]
Description=Etcd Service
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
---
# send to the other masters
for i in `cat ~/MasterNodes.txt`;do echo $i;scp /usr/lib/systemd/system/etcd.service $i:/usr/lib/systemd/system/;done
Start etcd

# 1. start etcd
systemctl daemon-reload
systemctl enable --now etcd.service
systemctl status etcd.service
# 2. check etcd
ETCDCTL_API=3 etcdctl --endpoints=https://10.0.0.20:2379,https://10.0.0.21:2379,https://10.0.0.22:2379 --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem endpoint health
+----------------------------+--------+-------------+-------+
|          ENDPOINT          | HEALTH |    TOOK     | ERROR |
+----------------------------+--------+-------------+-------+
| https://10.0.0.20:2379     |   true | 16.188005ms |       |
| https://10.0.0.21:2379     |   true | 16.693314ms |       |
| https://10.0.0.22:2379     |   true | 16.089367ms |       |
+----------------------------+--------+-------------+-------+

7. Install the Kubernetes master components

# 1. download
wget https://dl.k8s.io/v1.23.5/kubernetes-server-linux-amd64.tar.gz
# 2. unpack
tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C ~/work kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
cp kube{ctl,-apiserver,-controller-manager,-scheduler} /usr/local/bin/
# 3. kube{ctl,-apiserver,-controller-manager,-scheduler} to the masters
for i in `cat ~/MasterNodes.txt`;do echo $i;scp ~/work/kube{ctl,-apiserver,-controller-manager,-scheduler} $i:/usr/local/bin/;done
# 4. kube{let,-proxy} to the workers
for i in `cat ~/WorkNodes.txt`;do echo $i;scp ~/work/kube{let,-proxy} $i:/usr/local/bin/;done
# 5. send pem
cp /etc/etcd/ssl/ca*.pem /etc/kubernetes/ssl/
for i in `cat ~/WorkNodes.txt`;do echo $i;scp /etc/etcd/ssl/ca*.pem $i:/etc/kubernetes/ssl/;done

# Add the kube-apiserver bootstrap token to /etc/kubernetes/token.csv
# (the heredoc body for token.csv did not survive in the source text; a hedged reconstruction is given at the end of this section)

# kube-apiserver certificate
cat > kube-apiserver-csr.json << "EOF"
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.0.0.20",
    "10.0.0.21",
    "10.0.0.22",
    "10.0.0.23",
    "10.0.0.24",
    "10.0.0.25",
    "10.96.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF
---
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
cp kube-apiserver*.pem /etc/kubernetes/ssl/
for i in `cat ~/MasterNodes.txt`;do echo $i;scp ~/work/kube-apiserver*.pem $i:/etc/kubernetes/ssl/;done
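The contents of /etc/kubernetes/token.csv are missing from the original text. A minimal sketch of the commonly used bootstrap-token format is shown below; the exact token value, uid and group name are assumptions, but the first field must be a token and the user name must match the kubelet-bootstrap user used in section 10:

# assumption: standard bootstrap-token format "token,user,uid,group"
cat > /etc/kubernetes/token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
# every apiserver reads this file via --token-auth-file, so copy it to the other masters as well
for i in `cat ~/MasterNodes.txt`;do echo $i;scp /etc/kubernetes/token.csv $i:/etc/kubernetes/;done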
7.3 Create the kube-apiserver configuration file

# change --bind-address= and --advertise-address= on each master
---
cat > /etc/kubernetes/kube-apiserver.conf << "EOF"
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=10.0.0.20 \
  --secure-port=6443 \
  --advertise-address=10.0.0.20 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.96.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://10.0.0.20:2379,https://10.0.0.21:2379,https://10.0.0.22:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"
EOF
7.4 Add the kube-apiserver systemd unit

cat > /usr/lib/systemd/system/kube-apiserver.service << "EOF"
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
---
for i in `cat ~/MasterNodes.txt`;do echo $i;scp /usr/lib/systemd/system/kube-apiserver.service $i:/usr/lib/systemd/system/;done
Start kube-apiserver

systemctl daemon-reload
systemctl enable --now kube-apiserver.service
systemctl status kube-apiserver.service
---
# check
curl --insecure https://10.0.0.20:6443
---
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
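Since all of the kubeconfig files generated below point at the nginx/keepalived front end rather than a single apiserver, it can be worth repeating the same check through the VIP (assuming 10.0.0.25:6444 as configured earlier). Getting the same Unauthorized response shows the proxy path works:

curl --insecure https://10.0.0.25:6444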
7.5 Install and configure kubectl

# admin certificate
cat > admin-csr.json << "EOF"
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}
EOF
---
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
cp admin*.pem /etc/kubernetes/ssl/
---
for i in `cat ~/MasterNodes.txt`;do echo $i;scp /etc/kubernetes/ssl/admin*.pem $i:/etc/kubernetes/ssl/;done

# create admin.config
# 1. Set the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.0.0.25:6444 --kubeconfig=admin.config

# 2. Set the client credentials
kubectl config set-credentials kubernetes-admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=admin.config

# 3. Set the context
kubectl config set-context kubernetes --cluster=kubernetes --user=kubernetes-admin --kubeconfig=admin.config

# 4. Switch to the context
kubectl config use-context kubernetes --kubeconfig=admin.config

# kubernetes -> kubelet API access
kubectl create clusterrolebinding kube-apiserver:kubelet-apiserver --clusterrole=system:kubelet-api-admin --user kubernetes
kubectl create clusterrolebinding kubernetes --clusterrole=cluster-admin --user=kubernetes

# distribute admin.config (this master and the others)
cp ~/work/admin.config /etc/kubernetes
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.config $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

for i in `cat ~/MasterNodes.txt`;do echo $i;scp /etc/kubernetes/admin.config $i:/etc/kubernetes/;done

echo "export KUBECONFIG=/etc/kubernetes/admin.config" >> /etc/profile
source /etc/profile

# kubectl bash completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> /etc/profile
source /etc/profile

# verify kubectl
kubectl cluster-info
---
Kubernetes control plane is running at https://10.0.0.20:6443
---
kubectl get componentstatuses
---
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused
controller-manager   Unhealthy   Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused
etcd-0               Healthy     {"health":"true","reason":""}
etcd-1               Healthy     {"health":"true","reason":""}
etcd-2               Healthy     {"health":"true","reason":""}
---
kubectl get all --all-namespaces
---
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   56m
8. kube-controller-manager

# kube-controller-manager certificate
cat > kube-controller-manager-csr.json << "EOF"
{
  "CN": "system:kube-controller-manager",
  "hosts": [
    "127.0.0.1",
    "10.0.0.20",
    "10.0.0.21",
    "10.0.0.22",
    "10.0.0.25"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes"
    }
  ]
}
EOF
---
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
cp kube-controller-manager*.pem /etc/kubernetes/ssl/
for i in `cat ~/MasterNodes.txt`;do echo $i;scp /etc/kubernetes/ssl/kube-controller-manager*.pem $i:/etc/kubernetes/ssl/;done

# kube-controller-manager.kubeconfig
# 1. Set the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.0.0.25:6444 --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig

# 2. Set the client credentials
kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig

# 3. Set the context
kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig

# 4. Switch to the context
kubectl config use-context system:kube-controller-manager --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig

# kube-controller-manager.conf
cat > /etc/kubernetes/kube-controller-manager.conf << "EOF"
KUBE_CONTROLLER_MANAGER_OPTS="--v=2 \
  --secure-port=10257 \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.96.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.244.0.0/16 \
  --experimental-cluster-signing-duration=87600h \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --use-service-account-credentials=true"
EOF
---
for i in `cat ~/MasterNodes.txt`;do echo $i;scp /etc/kubernetes/kube-controller-manager* $i:/etc/kubernetes/;done

# kube-controller-manager.service (systemd unit)
cat > /usr/lib/systemd/system/kube-controller-manager.service << "EOF"
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
---
for i in `cat ~/MasterNodes.txt`;do echo $i;scp /usr/lib/systemd/system/kube-controller-manager.service $i:/usr/lib/systemd/system/;done

# start kube-controller-manager.service
systemctl daemon-reload
systemctl enable --now kube-controller-manager.service
systemctl status kube-controller-manager.service

9. The scheduler: kube-scheduler

# kube-scheduler certificate
cat > kube-scheduler-csr.json << "EOF"
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "10.0.0.20",
    "10.0.0.21",
    "10.0.0.22",
    "10.0.0.25"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:kube-scheduler",
      "OU": "system"
    }
  ]
}
EOF
---
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
cp kube-scheduler*.pem /etc/kubernetes/ssl/
for i in `cat ~/MasterNodes.txt`;do echo $i;scp /etc/kubernetes/ssl/kube-scheduler*.pem $i:/etc/kubernetes/ssl/;done

# kube-scheduler.kubeconfig
# 1. Set the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.0.0.25:6444 --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig
# 2. Set the client credentials
kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig
# 3. Set the context
kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig
# 4. Switch to the context
kubectl config use-context system:kube-scheduler --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig

# kube-scheduler.conf
cat > /etc/kubernetes/kube-scheduler.conf << "EOF"
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"
EOF
---
for i in `cat ~/MasterNodes.txt`;do echo $i;scp /etc/kubernetes/kube-scheduler* $i:/etc/kubernetes/;done

# kube-scheduler.service
cat > /usr/lib/systemd/system/kube-scheduler.service << "EOF"
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
---
for i in `cat ~/MasterNodes.txt`;do echo $i;scp /usr/lib/systemd/system/kube-scheduler.service $i:/usr/lib/systemd/system/;done

# start kube-scheduler.service
systemctl daemon-reload
systemctl enable --now kube-scheduler.service
systemctl status kube-scheduler.service

10. Install the Kubernetes node components
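Before setting up the worker nodes, it is worth confirming that the two components that reported Unhealthy back in section 7.5 now come up clean:

kubectl get componentstatuses
# scheduler, controller-manager and the three etcd members should all report Healthy now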
1. kubelet

# BOOTSTRAP_TOKEN
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
1.2 kubelet-bootstrap.kubeconfig

# 1. Set the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.0.0.25:6444 --kubeconfig=/root/work/kubelet-bootstrap.kubeconfig

# 2. Set the client credentials
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=/root/work/kubelet-bootstrap.kubeconfig

# 3. Set the context
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=/root/work/kubelet-bootstrap.kubeconfig

# 4. Switch to the context
kubectl config use-context default --kubeconfig=/root/work/kubelet-bootstrap.kubeconfig

# 5. Create the clusterrolebindings
kubectl delete clusterrolebinding kubelet-bootstrap
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=kubelet-bootstrap
1.3 kubelet.json

cat > ~/work/kubelet.json << "EOF"
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "10.0.0.23",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.96.0.2"]
}
EOF
1.4 kubelet.service

cat > ~/work/kubelet.service << "EOF"
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet.json \
  --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.2 \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
---
for i in `cat ~/WorkNodes.txt`;do echo $i;scp ~/work/kubelet.json ~/work/kubelet-bootstrap.kubeconfig $i:/etc/kubernetes;done
for i in `cat ~/WorkNodes.txt`;do echo $i;scp ~/work/kubelet.service $i:/usr/lib/systemd/system;done
for i in `cat ~/WorkNodes.txt`;do echo $i;scp /etc/kubernetes/ssl/ca.pem $i:/etc/kubernetes/ssl/;done
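kubelet.json above hard-codes the address 10.0.0.23, which is only correct on k8s-node01. One way to fix the address up per node after copying, assuming ~/WorkNodes.txt contains the worker IP addresses, is a small loop like this (a sketch, not part of the original guide):

for i in `cat ~/WorkNodes.txt`;do
  echo $i
  # rewrite the placeholder address with the node's own IP
  ssh $i "sed -i 's/10.0.0.23/$i/g' /etc/kubernetes/kubelet.json"
done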
1.5 Start kubelet.service

mkdir -p /var/lib/kubelet
systemctl daemon-reload
systemctl enable --now kubelet.service
systemctl status kubelet.service
1.6 Approve Nodes

kubectl get csr | grep node | awk '{print $1,$6}'
node-csr-BV7RZ1Mc1RFkWhH9jzJH8h8on_dRMB3an_7FgBUwWhk   Pending
node-csr-wZOI__ACKylv7DlEPRK8iMg3_sYyBErjbGjxkMkRyPo   Pending

kubectl certificate approve node-csr-BV7RZ1Mc1RFkWhH9jzJH8h8on_dRMB3an_7FgBUwWhk node-csr-wZOI__ACKylv7DlEPRK8iMg3_sYyBErjbGjxkMkRyPo

kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8s-node01   Ready    <none>   118m   v1.23.5
k8s-node02   Ready    <none>   118m   v1.23.5
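The ROLES column shows <none> for nodes joined this way. If you want it populated, a label can be added by hand; the role name "worker" here is only a common convention, not something the original specifies:

kubectl label node k8s-node01 node-role.kubernetes.io/worker=
kubectl label node k8s-node02 node-role.kubernetes.io/worker=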
2. kube-proxy

# kube-proxy certificate
cat > kube-proxy-csr.json << "EOF"
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF
---
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
cp kube-proxy*.pem /etc/kubernetes/ssl/
for i in `cat ~/WorkNodes.txt`;do echo $i;scp ~/work/kube-proxy*.pem $i:/etc/kubernetes/ssl/;done
2.3 kube-proxy.kubeconfig

# 1. Set the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.0.0.25:6444 --kubeconfig=/root/work/kube-proxy.kubeconfig

# 2. Set the client credentials
kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=/root/work/kube-proxy.kubeconfig

# 3. Set the context
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=/root/work/kube-proxy.kubeconfig

# 4. Switch to the context
kubectl config use-context default --kubeconfig=/root/work/kube-proxy.kubeconfig

# kube-proxy.yaml
# bindAddress, healthzBindAddress and metricsBindAddress below are the node's own IP; clusterCIDR is the node's network segment
---
cat > ~/work/kube-proxy.yaml << "EOF"
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 10.0.0.23
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.244.0.0/24
healthzBindAddress: 10.0.0.23:10256
kind: KubeProxyConfiguration
metricsBindAddress: 10.0.0.23:10249
mode: "ipvs"
EOF
---
for i in `cat ~/WorkNodes.txt`;do echo $i;scp ~/work/kube-proxy.yaml ~/work/kube-proxy.kubeconfig $i:/etc/kubernetes/;done
2.4 kube-proxy.service

cat > ~/work/kube-proxy.service << "EOF"
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
---
for i in `cat ~/WorkNodes.txt`;do echo $i;scp ~/work/kube-proxy.service $i:/usr/lib/systemd/system;done
2.5 Start kube-proxy.service

mkdir -p /var/lib/kube-proxy
systemctl daemon-reload
systemctl enable --now kube-proxy.service
systemctl status kube-proxy.service
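Because mode is set to ipvs, the easiest sanity check after kube-proxy starts is to look at the IPVS tables (ipvsadm was installed back in section 2):

ipvsadm -Ln
# the kubernetes service ClusterIP (10.96.0.1:443) should appear as a virtual server
# with the three apiserver addresses listed as real servers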
3. Pod network: Calico

wget https://docs.projectcalico.org/manifests/calico.yaml
kubectl apply -f calico.yaml
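Calico usually takes a minute or two to roll out; a quick way to watch it and to confirm the nodes stay Ready once the CNI is up:

kubectl get pods -n kube-system -o wide
# the calico-node and calico-kube-controllers pods should reach Running
kubectl get nodes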
3.1 CoreDNS: prepare resolv.conf

mv /etc/resolv.conf /etc/resolv.conf.bak
ln -s /run/systemd/resolve/resolv.conf /etc/
systemctl restart systemd-resolved.service && systemctl enable systemd-resolved.service
coredns.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  - apiGroups:
    - discovery.k8s.io
    resources:
    - endpointslices
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
           - weight: 100
             podAffinityTerm:
               labelSelector:
                 matchExpressions:
                   - key: k8s-app
                     operator: In
                     values: ["kube-dns"]
               topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: coredns/coredns:1.8.4
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
Install CoreDNS

kubectl apply -f coredns.yaml

Installing Kubernetes from binaries is a tedious process; thanks for reading all the way through.
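One last check worth running once the CoreDNS pods are up, assuming the nodes can pull busybox:1.28 (newer busybox builds ship a broken nslookup):

kubectl run -it --rm dns-test --image=busybox:1.28 -- nslookup kubernetes.default
# a successful lookup returns the kubernetes service ClusterIP (10.96.0.1) via the cluster DNS at 10.96.0.2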
