references:
https://www.hostafrica.co.za/blog/new-technologies/install-kubernetes-delpoy-cluster-centos7
https://zhuanlan.zhihu.com/p/163107995
https://istio.io/latest/zh/docs/setup/getting-started/#download
https://github.com/istio/istio

The Detailed Steps

Step 1: Install Docker on all CentOS 7 VMs

1. Prepare the CentOS 7 install. Update the package database:
2. sudo yum check-update
3. Install the required packages: sudo yum install -y yum-utils device-mapper-persistent-data lvm2
4. Add the Docker repository: sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
5. sudo yum install docker-ce
6. systemctl start docker
7. systemctl enable docker
8. systemctl status docker

Step 2: Set up the Kubernetes Repository

Since the Kubernetes packages aren't present in the official CentOS 7 repositories, we will need to add a new repository file. Use the following command to create the file and open it for editing (the Aliyun YUM mirror is used here):

sudo vi /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Note: repo_gpgcheck=0 is needed because the repo's GPG verification fails otherwise; setting it to 0 skips that check.

[root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
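Instead of editing the repo file interactively with vi, it can be written non-interactively with a heredoc. A minimal sketch, with the same contents as above; REPO_FILE is a parameter I added so the snippet can be tried against a scratch path, on a real node set it to /etc/yum.repos.d/kubernetes.repo and run as root:

```shell
# Write the Kubernetes repo file without an editor (sketch).
# REPO_FILE defaults to a local path for testing; use /etc/yum.repos.d/kubernetes.repo on a node.
REPO_FILE="${REPO_FILE:-./kubernetes.repo}"
cat > "$REPO_FILE" <<'EOF'
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Sanity-check the file before running yum.
grep -q '^\[kubernetes\]' "$REPO_FILE" && echo "repo file written"
```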
Step 3: Prepare CentOS 7 on master-node, node1, and node2

Disable SELinux:

sudo setenforce 0
sudo sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

Disable the firewall:

[root@master-node yum.repos.d]# systemctl stop firewalld.service
[root@master-node yum.repos.d]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

Update hostname:

sudo hostnamectl set-hostname master-node
# on node1: hostnamectl set-hostname node1
# on node2: hostnamectl set-hostname node2
sudo exec bash

Update iptables config: we need to update the net.bridge.bridge-nf-call-iptables parameter in our sysctl file to ensure proper processing of packets across all machines. Use the following commands:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

[root@master-node ~]# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Disable swap: for kubelet to work, we also need to disable swap on all of our VMs:

sudo sed -i '/swap/d' /etc/fstab
sudo swapoff -a

[root@master-node ~]# sed -i '/swap/d' /etc/fstab
[root@master-node ~]# swapoff -a

Edit hosts:

[root@master-node ~]# cat /etc/hosts
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
10.211.55.8  master-node
10.211.55.9  node1
10.211.55.10 node2

Docker daemon.json:

[root@master-node ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://registry.docker-cn.com",
    "https://docker.mirrors.ustc.edu.cn",
    "http://hub-mirror.c.163.com",
    "https://cr.console.aliyun.com"
  ],
  "live-restore": true,
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Then start kubelet again and check:

[root@master-node ~]# cat /etc/default/kubelet
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
[root@master-node ~]# systemctl start kubelet
[root@master-node ~]# systemctl status kubelet
kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2022-11-02 17:19:28 CST; 2min 23s ago
     Docs: https://kubernetes.io/docs/
 Main PID: 1636 (kubelet)
    Tasks: 15
   Memory: 115.5M
   CGroup: /system.slice/kubelet.service
           └─1636 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootst...

kubeadm reset: node1 and node2 may need this operation before they can join.

Step 4: Install kubelet, kubeadm, and kubectl on CentOS 7

The first core module that we need to install on every node is kubelet. Use the following command to do so:

(x) sudo yum install -y kubelet

Note that I pinned a specific version here instead:

yum install -y --nogpgcheck kubelet-1.23.5 kubeadm-1.23.5 kubectl-1.23.5

k8s 1.24 no longer supports the Docker container runtime.
reference: https://cloud.tencent.com/developer/article/2093107
In newer Kubernetes releases (1.24 and above) Docker is officially no longer supported as the container runtime; to keep using Docker you have to configure it specially and install cri-dockerd as the Kubernetes container runtime. Install kubeadm, kubelet, and kubectl on all nodes; releases change frequently, so pin the version number when deploying. [In actual testing I found the version really does matter, which is why I later re-installed with this pinned version.]

(x) [ERROR CRI]: container runtime is not running: output: E1102 16:22:55.771144 13525 remote_runtime.go:948]
(x) Status from runtime service failed: err: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService

Verify the commands:

kubelet --version
kubectl version
kubeadm version

Step 5: Deploying a Kubernetes Cluster on CentOS 7

This concludes our installation and configuration of Kubernetes on CentOS 7. We will now share the steps for deploying a k8s cluster.

1. kubeadm init

kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers

Notes on the command:

(x) If --pod-network-cidr=10.244.0.0/16 is not specified, kube-flannel later fails with:
Error registering network: failed to acquire lease: node "master-node" pod cidr not assigned

kubeadm init --help | grep pod
--pod-network-cidr string   Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.

What happens during initialization:
[preflight]       kubeadm runs pre-initialization checks.
[kubelet-start]   Generates the kubelet configuration file /var/lib/kubelet/config.yaml.
[certificates]    Generates the various tokens and certificates.
[kubeconfig]      Generates the KubeConfig files; kubelet needs these to talk to the Master.
[control-plane]   Installs the Master components, pulling their Docker images from the specified registry.
[bootstrap-token] Generates the token; write it down, it is used later by kubeadm join to add nodes to the cluster.
[addons]          Installs the add-on components kube-proxy and kube-dns.
After the Kubernetes Master initializes successfully, it prints how to configure kubectl access for a regular user, how to install the Pod network, and how to register other nodes into the cluster.

Checking systemctl enable kubelet.service:
1. systemctl enable kubelet.service
2. systemctl start kubelet.service
3. systemctl status kubelet

kubeadm reset: if you run into errors, this command resets everything so you can start over.

kubeadm init: this step pulls the required images; since the default registry http://k8s.gcr.io is unreachable from mainland China, specify the Aliyun mirror instead:

kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers

Alternative workaround: list the required images:

kubeadm config images list
I1102 16:43:55.557868 18580 version.go:255] remote version is much newer: v1.25.3; falling back to: stable-1.23
k8s.gcr.io/kube-apiserver:v1.23.13
k8s.gcr.io/kube-controller-manager:v1.23.13
k8s.gcr.io/kube-scheduler:v1.23.13
k8s.gcr.io/kube-proxy:v1.23.13
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6

Pull them from a domestic mirror:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.13
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.13
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.13
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.13
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.6

The last one is a bit problematic; pull it from Docker Hub instead:

docker pull coredns/coredns:1.8.6
1.8.6: Pulling from coredns/coredns
d92bdee79785: Pull complete
6e1b7c06e42d: Pull complete
Digest: sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e
Status: Downloaded newer image for coredns/coredns:1.8.6
docker.io/coredns/coredns:1.8.6

Retag the images:

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.13 k8s.gcr.io/kube-apiserver:v1.23.13
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.13 k8s.gcr.io/kube-controller-manager:v1.23.13
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.13 k8s.gcr.io/kube-scheduler:v1.23.13
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.13 k8s.gcr.io/kube-proxy:v1.23.13
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 k8s.gcr.io/pause:3.6
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0 k8s.gcr.io/etcd:3.5.1-0
docker tag coredns/coredns:1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6

2. kubeadm init output, for reference:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.211.55.8:6443 --token qe1qbs.5hwtoxptauhnplkt --discovery-token-ca-cert-hash sha256:d1f1c1c6ef1df24c8b671c2e625c180bf3cded8550724485fda0f0d1046e3d7e

3. Configuration and notes after initialization:

[root@master-node ~]# mkdir -p $HOME/.kube
[root@master-node ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite '/root/.kube/config'? y
[root@master-node ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master-node ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
[root@master-node ~]# kubectl get node
NAME          STATUS     ROLES                  AGE   VERSION
master-node   NotReady   control-plane,master   63s   v1.23.5
node1         NotReady   <none>                 16s   v1.23.5
node2         NotReady   <none>                 11s   v1.23.5

Notes on the commands:

Joining a Kubernetes node: if you did not record the join command printed by kubeadm init, you can regenerate it with:

[root@master-node ~]# kubeadm token create --print-join-command

Check the cluster state, confirming each component is healthy:

[root@master-node ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
controller-manager   Healthy   ok
[root@master-node ~]# kubectl get componentstatus
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}

node1 and node2 need to join:

[root@node1 ~]# kubeadm join 10.211.55.8:6443 --token qe1qbs.5hwtoxptauhnplkt --discovery-token-ca-cert-hash sha256:d1f1c1c6ef1df24c8b671c2e625c180bf3cded8550724485fda0f0d1046e3d7e
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node2 ~]# kubeadm join 10.211.55.8:6443 --token qe1qbs.5hwtoxptauhnplkt --discovery-token-ca-cert-hash sha256:d1f1c1c6ef1df24c8b671c2e625c180bf3cded8550724485fda0f0d1046e3d7e
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
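Besides kubeadm token create --print-join-command, the --discovery-token-ca-cert-hash value can also be recomputed directly from the cluster CA certificate on the control plane, using the openssl pipeline from the kubeadm documentation. A small sketch wrapping it in a helper function; the ca.crt path is kubeadm's standard location:

```shell
# ca_cert_hash: print the kubeadm --discovery-token-ca-cert-hash for a given CA certificate.
# Hashes the DER-encoded public key of the cert with SHA-256, as kubeadm does.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* /sha256:/'
}
# On the control plane:
#   ca_cert_hash /etc/kubernetes/pki/ca.crt
```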
Step6:到这了基本的操作已经完成检查节点上各个系统Pod的状态:kubectlgetpodnkubesystemowideNAMEREADYSTATUSRESTARTSAGEIPNODENOMINATEDNODEREADINESSGATEScoredns64897985d5ksmj01Pending037mnonenonenonenonecoredns64897985dlj6zv01Pending037mnonenonenonenoneetcdmasternode11Running137m10。211。55。8masternodenonenonekubeapiservermasternode11Running137m10。211。55。8masternodenonenonekubecontrollermanagermasternode11Running137m10。211。55。8masternodenonenonekubeproxy4wdf611Running037m10。211。55。8masternodenonenonekubeproxymbg4n01ContainerCreating036m10。211。55。10node2nonenonekubeproxyrvz7601ContainerCreating036m10。211。55。9node1nonenonekubeschedulermasternode11Running137m10。211。55。8masternodenonenone说明可以看到node1node2资源创建有问题可以手动去dockerpullaliyun镜像或者参考上面step5其他解决参考为node1andnode2部署完成后,我们可以通过kubectlget重新检查Pod的状态:〔rootmasternode〕kubectlgetpodallnamespacesowideNAMESPACENAMEREADYSTATUSRESTARTSAGEIPNODENOMINATEDNODEREADINESSGATESkubeflannelkubeflannelds78wfp01CrashLoopBackOff2(23sago)43s10。211。55。10node2nonenonekubeflannelkubeflanneldsg9hpf01CrashLoopBackOff2(22sago)43s10。211。55。8masternodenonenonekubeflannelkubeflanneldsrh7wh01CrashLoopBackOff2(21sago)43s10。211。55。9node1nonenonekubesystemcoredns64897985d4zgpp01ContainerCreating04m25snonenode1nonenonekubesystemcoredns64897985dfk4q401ContainerCreating04m25snonenode1nonenonekubesystemetcdmasternode11Running24m31s10。211。55。8masternodenonenonekubesystemkubeapiservermasternode11Running24m31s10。211。55。8masternodenonenonekubesystemkubecontrollermanagermasternode11Running24m31s10。211。55。8masternodenonenonekubesystemkubeproxy52pv811Running04m26s10。211。55。8masternodenonenonekubesystemkubeproxy8s6xm11Running03m32s10。211。55。10node2nonenonekubesystemkubeproxyzrfhc11Running03m37s10。211。55。9node1nonenonekubesystemkubeschedulermasternode11Running24m31s10。211。55。8masternodenonenone 
Step7:corednsPending:要让KubernetesCluster能够工作,必须安装Pod网络,否则Pod之间无法通信。Kubernetes支持多种网络方案,这里我们使用flannel执行如下命令部署flannel:安coredns装Pod网络插件(CNI)master节点,node节点加入后自动下载可以看到,CoreDNS依赖于网络的Pod都处于Pending状态,即调度失败。这当然是符合预期的:因为这个Master节点的网络尚未就绪。集群初始化如果遇到问题,可以使用kubeadmreset命令进行清理然后重新执行初始化。kubectlapplyfhttps:raw。githubusercontent。comcoreosflannelmasterDocumentationkubeflannel。ymlnamespacekubeflannelcreatedclusterrole。rbac。authorization。k8s。ioflannelcreatedclusterrolebinding。rbac。authorization。k8s。ioflannelcreatedserviceaccountflannelcreatedconfigmapkubeflannelcfgcreateddaemonset。appskubeflanneldscreated〔rootmasternode〕catkubeflannel。ymlgrepimageimage:flannelcniflannelcniplugin:v1。1。0forppc64leandmips64le(dockerhublimitationsmayapply)image:docker。ioranchermirroredflannelcniflannelcniplugin:v1。1。0image:flannelcniflannel:v0。20。0forppc64leandmips64le(dockerhublimitationsmayapply)image:docker。ioranchermirroredflannelcniflannel:v0。20。0image:flannelcniflannel:v0。20。0forppc64leandmips64le(dockerhublimitationsmayapply)image:docker。ioranchermirroredflannelcniflannel:v0。20。〔rootmasternode〕dockerpulldocker。ioranchermirroredflannelcniflannelcniplugin:v1。1。0〔rootmasternode〕dockerpulldocker。ioranchermirroredflannelcniflannel:v0。20。0Checkingpodlogs:kubectllogskubeflanneldsrh7whnkubeflannelCheckingkubecontrollermanager。yaml。:etckubernetesmanifestskubecontrollermanager。yamlkubeflannel。ymliplink:1:lo:LOOPBACK,UP,LOWERUPmtu65536qdiscnoqueuestateUNKNOWNmodeDEFAULTgroupdefaultqlen1000linkloopback00:00:00:00:00:00brd00:00:00:00:00:002:eth0:BROADCAST,MULTICAST,UP,LOWERUPmtu1500qdiscpfifofaststateUPmodeDEFAULTgroupdefaultqlen1000linkether00:1c:42:b6:63:7bbrdff:ff:ff:ff:ff:ff3:virbr0:NOCARRIER,BROADCAST,MULTICAST,UPmtu1500qdiscnoqueuestateDOWNmodeDEFAULTgroupdefaultqlen1000linkether52:54:00:72:ee:dbbrdff:ff:ff:ff:ff:ff4:virbr0nic:BROADCAST,MULTICASTmtu1500qdiscpfifofastmastervirbr0stateDOWNmodeDEFAULTgroupdefaultqlen1000linkether52:54:00:72:ee:dbbrdff:ff:ff:ff:ff:ff5:docker0:NOCARRIER,BROADCAST,MULTICAST,UPmtu
1500qdiscnoqueuestateDOWNmodeDEFAULTgroupdefaultlinkether02:42:dc:08:ca:04brdff:ff:ff:ff:ff:ff6:flannel。1:BROADCAST,MULTICAST,UP,LOWERUPmtu1450qdiscnoqueuestateUNKNOWNmodeDEFAULTgroupdefaultlinkether12:b3:b0:58:60:5abrdff:ff:ff:ff:ff:ff7:cni0:BROADCAST,MULTICAST,UP,LOWERUPmtu1450qdiscnoqueuestateUPmodeDEFAULTgroupdefaultqlen1000linkethere2:f8:95:89:0b:49brdff:ff:ff:ff:ff:ff8:veth42094fc8if2:BROADCAST,MULTICAST,UP,LOWERUPmtu1450qdiscnoqueuemastercni0stateUPmodeDEFAULTgroupdefaultlinkether36:4a:75:a2:cd:d5brdff:ff:ff:ff:ff:fflinknetnsid09:veth5edfc32aif2:BROADCAST,MULTICAST,UP,LOWERUPmtu1450qdiscnoqueuemastercni0stateUPmodeDEFAULTgroupdefaultlinkether8e:76:6a:de:5f:27brdff:ff:ff:ff:ff:fflinknetnsid1Checkingpodstatusagain:〔rootmasternode〕kubectlgetpodallnamespacesowideNAMESPACENAMEREADYSTATUSRESTARTSAGEIPNODENOMINATEDNODEREADINESSGATESkubeflannelkubeflanneldsgx4xl11Running029s10。211。55。10node2nonenonekubeflannelkubeflanneldss6n6l11Running029s10。211。55。8masternodenonenonekubeflannelkubeflanneldstgh5q11Running029s10。211。55。9node1nonenonekubesystemcoredns64897985dd5zv411Running02m29s10。244。0。2masternodenonenonekubesystemcoredns64897985dd94bh11Running02m29s10。244。0。3masternodenonenonekubesystemetcdmasternode11Running42m42s10。211。55。8masternodenonenonekubesystemkubeapiservermasternode11Running42m44s10。211。55。8masternodenonenonekubesystemkubecontrollermanagermasternode11Running02m42s10。211。55。8masternodenonenonekubesystemkubeproxy59dqs11Running02m18s10。211。55。10node2nonenonekubesystemkubeproxycjdc611Running02m21s10。211。55。9node1nonenonekubesystemkubeproxydtxft11Running02m29s10。211。55。8masternodenonenonekubesystemkubeschedulermasternode11Running42m42s10。211。55。8masternodenonenonecheckingkubeflannel:〔rootmasternode〕kubectlgetpodnkubeflannelNAMEREADYSTATUSRESTARTSAGEkubeflanneldsgx4xl11Running013hkubeflanneldss6n6l11Running013hkubeflanneldstgh5q11Running013h 
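The "pod cidr not assigned" error noted in Step 5 is the two halves of one setting meeting: the kube-flannel.yml manifest carries a ConfigMap whose Network value defaults to 10.244.0.0/16 and must match the --pod-network-cidr passed to kubeadm init. A sketch of the relevant excerpt (abridged from the upstream manifest; labels and the cni-conf.json key omitted):

```yaml
# Excerpt (sketch) of the kube-flannel-cfg ConfigMap from kube-flannel.yml.
# "Network" must match kubeadm init --pod-network-cidr, here 10.244.0.0/16.
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
data:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
```

If you initialized the cluster with a different CIDR, edit this value in kube-flannel.yml before applying it rather than re-initializing.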
Step 8: istioctl install
reference: https://istio.io/latest/zh/docs/setup/getting-started/#download

[root@master-node ~]# ls
anaconda-ks.cfg  istioctl-1.15.3-linux-amd64.tar.gz  original-ks.cfg
[root@master-node ~]# tar -zxvf istioctl-1.15.3-linux-amd64.tar.gz
istioctl
[root@master-node ~]# ls
anaconda-ks.cfg  istioctl  istioctl-1.15.3-linux-amd64.tar.gz  original-ks.cfg
[root@master-node ~]# kube get svc
bash: kube: command not found...
[root@master-node ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   14m
[root@master-node ~]# ./istioctl version
no running Istio pods in "istio-system"
1.15.3

For this install we use the demo configuration profile. It is chosen because it contains a set of features made for testing; there are other profiles for production or performance testing.

[root@master-node ~]# ./istioctl install --set profile=demo
This will install the Istio 1.15.3 demo profile with ["Istio core" "Istiod" "Ingress gateways" "Egress gateways"] components into the cluster. Proceed? (y/N) y
Istio core installed
Istiod installed
Ingress gateways installed
Egress gateways installed
Installation complete
Making this installation the default for injection and validation.
Thank you for installing Istio 1.15. Please take a few minutes to tell us about your install/upgrade experience! https://forms.gle/SWHFBmwJspusK1hv6

[root@master-node ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE      NAME                                    READY   STATUS    RESTARTS   AGE     IP             NODE          NOMINATED NODE   READINESS GATES
istio-system   istio-egressgateway-6df4fcb499-p2p9k    1/1     Running   0          114s    10.244.2.4     node2         <none>           <none>
istio-system   istio-ingressgateway-57bcb89bf9-hfgnk   1/1     Running   0          114s    10.244.2.3     node2         <none>           <none>
istio-system   istiod-75b7f5bbf6-pjp8h                 1/1     Running   0          2m24s   10.244.2.2     node2         <none>           <none>
kube-flannel   kube-flannel-ds-gx4xl                   1/1     Running   0          4m48s   10.211.55.10   node2         <none>           <none>
kube-flannel   kube-flannel-ds-s6n6l                   1/1     Running   0          4m48s   10.211.55.8    master-node   <none>           <none>
kube-flannel   kube-flannel-ds-tgh5q                   1/1     Running   0          4m48s   10.211.55.9    node1         <none>           <none>
kube-system    coredns-64897985d-d5zv4                 1/1     Running   0          6m48s   10.244.0.2     master-node   <none>           <none>
kube-system    coredns-64897985d-d94bh                 1/1     Running   0          6m48s   10.244.0.3     master-node   <none>           <none>
kube-system    etcd-master-node                        1/1     Running   4          7m1s    10.211.55.8    master-node   <none>           <none>
kube-system    kube-apiserver-master-node              1/1     Running   4          7m3s    10.211.55.8    master-node   <none>           <none>
kube-system    kube-controller-manager-master-node     1/1     Running   0          7m1s    10.211.55.8    master-node   <none>           <none>
kube-system    kube-proxy-59dqs                        1/1     Running   0          6m37s   10.211.55.10   node2         <none>           <none>
kube-system    kube-proxy-cjdc6                        1/1     Running   0          6m40s   10.211.55.9    node1         <none>           <none>
kube-system    kube-proxy-dtxft                        1/1     Running   0          6m48s   10.211.55.8    master-node   <none>           <none>
kube-system    kube-scheduler-master-node              1/1     Running   4          7m1s    10.211.55.8    master-node   <none>           <none>

https://istio.io/latest/zh/docs/setup/getting-started/#download
https://istio.io/latest/zh/docs/setup/getting-started/

Add a namespace label to instruct Istio to automatically inject Envoy sidecar proxies when you deploy applications:

kubectl label namespace default istio-injection=enabled
namespace/default labeled

wget https://raw.githubusercontent.com/istio/istio/release-1.15/samples/bookinfo/platform/kube/bookinfo.yaml

Deploy the Bookinfo sample application:

[root@master-node ~]# kubectl apply -f bookinfo.yaml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created

The application starts up quickly. As each Pod becomes ready, the Istio sidecar proxy is deployed alongside it:

[root@master-node ~]# kubectl get services
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.98.163.198    <none>        9080/TCP   32s
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP    13m
productpage   ClusterIP   10.102.204.13    <none>        9080/TCP   31s
ratings       ClusterIP   10.110.203.228   <none>        9080/TCP   31s
reviews       ClusterIP   10.109.166.169   <none>        9080/TCP   31s

[root@master-node ~]# kubectl get pod
NAME                             READY   STATUS            RESTARTS   AGE
details-v1-698b5d8c98-mcglc      0/2     PodInitializing   0          41s
productpage-v1-bf4b489d8-6b6rs   0/2     Init:0/1          0          41s
ratings-v1-5967f59c58-vh8xv      2/2     Running           0          41s
reviews-v1-9c6bb6658-xbbfl       0/2     PodInitializing   0          41s
reviews-v2-8454bb78d8-2rcgt      0/2     PodInitializing   0          41s
reviews-v3-6dc9897554-gsv6w      0/2     PodInitializing   0          41s

Waiting until the pods are running:

[root@master-node ~]# kubectl get pod
NAME                             READY   STATUS    RESTARTS   AGE
details-v1-698b5d8c98-mcglc      2/2     Running   0          3m9s
productpage-v1-bf4b489d8-6b6rs   2/2     Running   0          3m9s
ratings-v1-5967f59c58-vh8xv      2/2     Running   0          3m9s
reviews-v1-9c6bb6658-xbbfl       2/2     Running   0          3m9s
reviews-v2-8454bb78d8-2rcgt      2/2     Running   0          3m9s
reviews-v3-6dc9897554-gsv6w      2/2     Running   0          3m9s

Once everything above checks out, run the command below to verify that the application is running inside the cluster and serving web pages, by checking the returned page title:

kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -s productpage:9080/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>

Open the application to outside traffic

At this point the Bookinfo application is deployed but not yet accessible from the outside. To open access, you need to create an Istio Ingress Gateway, which maps a path to a route at the edge of the mesh.

Associate the application with the Istio gateway:

https://raw.githubusercontent.com/istio/istio/release-1.15/samples/bookinfo/networking/bookinfo-gateway.yaml

[root@master-node ~]# kubectl apply -f bookinfo-gateway.yaml
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created

Ensure there are no issues with the configuration:

[root@master-node ~]# ./istioctl analyze
No validation issues found when analyzing namespace: default.

Determine the ingress IP and port

Following the instructions, set two variables for accessing the gateway: INGRESS_HOST and INGRESS_PORT. Use the tabs to switch to the instructions for your chosen platform. Run the following command to determine whether your Kubernetes cluster environment supports external load balancers:

[root@master-node addons]# kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                      AGE
istio-ingressgateway   LoadBalancer   10.105.71.125   <pending>     15021:31042/TCP,80:32464/TCP,443:30555/TCP,31400:31864/TCP,15443:32381/TCP   12h

If EXTERNAL-IP has a value, your environment has an external load balancer that you can use for the ingress gateway. If EXTERNAL-IP is <none> (or perpetually <pending>), your environment does not provide an external load balancer for the ingress gateway. In that case you can still access the gateway via the Service's node port. Follow the instructions below: if your environment has no external load balancer, choose a node port instead.

reference: https://kubernetes.io/zh-cn/docs/concepts/services-networking/service/#type-nodeport

e.g.:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: MyApp
  ports:
      # By default and for convenience, targetPort is set to the same value as the port field.
    - port: 80
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane allocates a port from a range (default: 30000-32767)
      nodePort: 30007

Set the ingress ports:

kubectl get svc -n istio-system
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                      AGE
istio-egressgateway    ClusterIP      10.102.44.107    <none>        80/TCP,443/TCP                                                               12h
istio-ingressgateway   LoadBalancer   10.105.71.125    <pending>     15021:31042/TCP,80:32464/TCP,443:30555/TCP,31400:31864/TCP,15443:32381/TCP   12h
istiod                 ClusterIP      10.98.188.17     <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                                        12h
kiali                  ClusterIP      10.102.130.158   <none>        20001/TCP,9090/TCP

Verify the ports:

[root@master-node ~]# kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
32464
[root@master-node ~]# kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}'
30555

Setting the exports:

export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
[root@master-node ~]# export | grep PORT
declare -x INGRESS_PORT="32464"
declare -x SECURE_INGRESS_PORT="30555"

Other environments:

kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}'
10.211.55.10
export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}')
[root@master-node ~]# export | egrep "PORT|HOST"
declare -x HOSTNAME="master-node"
declare -x INGRESS_HOST="10.211.55.10"
declare -x INGRESS_PORT="32464"
declare -x SECURE_INGRESS_PORT="30555"

Set the environment variable GATEWAY_URL:

[root@master-node ~]# export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT

Ensure the IP address and port were successfully assigned to the environment variable:

[root@master-node ~]# echo "$GATEWAY_URL"
10.211.55.10:32464

[root@master-node ~]# kubectl get pod
NAME                             READY   STATUS    RESTARTS   AGE
details-v1-698b5d8c98-mcglc      2/2     Running   0          12h
productpage-v1-bf4b489d8-6b6rs   2/2     Running   0          12h
ratings-v1-5967f59c58-vh8xv      2/2     Running   0          12h
reviews-v1-9c6bb6658-xbbfl       2/2     Running   0          12h
reviews-v2-8454bb78d8-2rcgt      2/2     Running   0          12h
reviews-v3-6dc9897554-gsv6w      2/2     Running   0          12h

Verify external access

View the Bookinfo product page in a browser to confirm Bookinfo is reachable from outside. Run the following command to get the external address of the Bookinfo application:

[root@master-node ~]# echo "http://$GATEWAY_URL/productpage"
http://10.211.55.10:32464/productpage

Copy the output address into a browser and confirm the Bookinfo product page opens:

curl -I http://10.211.55.10:32464/productpage
HTTP/1.1 200 OK
content-type: text/html; charset=utf-8
content-length: 4293
server: istio-envoy
date: Thu, 03 Nov 2022 01:07:15 GMT
x-envoy-upstream-service-time: 56

Verify pod logs:

[root@master-node ~]# kubectl logs -f productpage-v1-bf4b489d8-6b6rs
reply: HTTP/1.1 200 OK
header: x-powered-by: Servlet/3.1
header: content-type: application/json
header: date: Thu, 03 Nov 2022 01:07:15 GMT
header: content-language: en-US
header: content-length: 357
header: x-envoy-upstream-service-time: 26
header: server: envoy
DEBUG:urllib3.connectionpool:http://reviews:9080 "GET /reviews/0 HTTP/1.1" 200 357
INFO:werkzeug:::ffff:127.0.0.6 - - [03/Nov/2022 01:07:15] "HEAD /productpage HTTP/1.1" 200 -

View the dashboards

Istio integrates with several telemetry applications. Telemetry helps you understand the structure of the service mesh, display the topology of the network, and analyze the health of the mesh. Use the instructions below to deploy the Kiali dashboard, along with Prometheus, Grafana, and Jaeger.

Install Kiali and the other addons, then wait for them to be deployed:

[root@master-node addons]# pwd
/root/istio-1.15.3/samples/addons
[root@master-node addons]# tree .
.
├── extras
│   ├── prometheus-operator.yaml
│   ├── prometheus_vm_tls.yaml
│   ├── prometheus_vm.yaml
│   └── zipkin.yaml
├── grafana.yaml
├── jaeger.yaml
├── kiali.yaml
├── prometheus.yaml
└── README.md

1 directory, 9 files
[root@master-node addons]# kubectl apply -f /root/istio-1.15.3/samples/addons
serviceaccount/grafana created
configmap/grafana created
service/grafana created
deployment.apps/grafana created
configmap/istio-grafana-dashboards created
configmap/istio-services-grafana-dashboards created
deployment.apps/jaeger created
service/tracing created
service/zipkin created
service/jaeger-collector created
serviceaccount/kiali unchanged
configmap/kiali unchanged
clusterrole.rbac.authorization.k8s.io/kiali-viewer unchanged
clusterrole.rbac.authorization.k8s.io/kiali unchanged
clusterrolebinding.rbac.authorization.k8s.io/kiali unchanged
role.rbac.authorization.k8s.io/kiali-controlplane unchanged
rolebinding.rbac.authorization.k8s.io/kiali-controlplane unchanged
service/kiali unchanged
deployment.apps/kiali unchanged
serviceaccount/prometheus created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
deployment.apps/prometheus created

[root@master-node addons]# kubectl get deploy -n istio-system
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
grafana                0/1     1            0           33s
istio-egressgateway    1/1     1            1           12h
istio-ingressgateway   1/1     1            1           12h
istiod                 1/1     1            1           12h
jaeger                 0/1     1            0           33s
kiali                  1/1     1            1           12h
prometheus             0/1     1            0           32s
[root@master-node addons]# kubectl rollout status deployment/kiali -n istio-system
deployment "kiali" successfully rolled out
[root@master-node addons]# kubectl get deploy -n istio-system
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
grafana                0/1     1            0           49s
istio-egressgateway    1/1     1            1           12h
istio-ingressgateway   1/1     1            1           12h
istiod                 1/1     1            1           12h
jaeger                 0/1     1            0           49s
kiali                  1/1     1            1           12h
prometheus             0/1     1            0           48s

Verify the deployment:

[root@master-node addons]# kubectl get deploy -n istio-system
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
grafana                1/1     1            1           5m23s
istio-egressgateway    1/1     1            1           13h
istio-ingressgateway   1/1     1            1           13h
istiod                 1/1     1            1           13h
jaeger                 1/1     1            1           5m23s
kiali                  1/1     1            1           12h
prometheus             1/1     1            1           5m22s

Access the Kiali dashboard:

[root@master-node ~]# ./istioctl dashboard kiali --address 10.211.55.8
http://10.211.55.8:20001/kiali
[root@master-node ~]# ./istioctl dashboard kiali
http://localhost:20001/kiali

In the left navigation menu, select Graph, and in the Namespace drop-down, select default. The Kiali dashboard shows an overview of your mesh and the relationships between the services of the Bookinfo sample application. It also provides filters to visualize the traffic flow.

Add performance tests for the URL:

wrk -c100 -d300 http://10.211.55.10:32464/productpage
Running 5m test @ http://10.211.55.10:32464/productpage
  2 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.19s   162.18ms   2.00s    78.86%
    Req/Sec    42.03     18.37    120.00     70.74%
  25065 requests in 5.00m, 122.40MB read
  Socket errors: connect 0, read 0, write 0, timeout 70
Requests/sec:     83.53
Transfer/sec:    417.69KB

Install kiali.yaml from the tar.gz:

[root@master-node addons]# pwd
/root/istio-1.15.3/samples/addons
[root@master-node addons]# kubectl apply -f kiali.yaml
serviceaccount/kiali created
configmap/kiali created
clusterrole.rbac.authorization.k8s.io/kiali-viewer created
clusterrole.rbac.authorization.k8s.io/kiali created
clusterrolebinding.rbac.authorization.k8s.io/kiali created
role.rbac.authorization.k8s.io/kiali-controlplane created
rolebinding.rbac.authorization.k8s.io/kiali-controlplane created
service/kiali created
deployment.apps/kiali created

Waiting until the deployment finished:

istio-system   kiali-689fbdb586-d6gxm   0/1   Running   0   79s

Open the dashboard:

[root@master-node addons]# /root/istioctl dashboard kiali
http://localhost:20001/kiali

When the sidecar is injected, two extra containers are attached to the generated pod: istio-init and istio-proxy. As the name suggests, istio-init belongs to the pod's initContainers in k8s; it mainly sets up iptables rules so that both inbound and outbound traffic is handed over to the sidecar. istio-proxy is an Envoy-based network proxy container, the actual sidecar; the application's traffic is redirected into and out of it.

https://cloud.tencent.com/developer/article/1746921?from=15425

When using automatic sidecar injection, we only need to label the namespace where the application is deployed with istio-injection=enabled; any new Pod created in that namespace will then have the sidecar injected by Istio. After the application is deployed, we can inspect the containers inside a pod with kubectl describe:

kubectl describe pod productpage-v1-bf4b489d8-6b6rs
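What kubectl describe shows for such a pod can be summarized as the following shape of the injected pod spec. This is a simplified sketch, not the full generated manifest: fields are abridged, and the image references are illustrative of Istio 1.15 / Bookinfo rather than copied from a live cluster:

```yaml
# Simplified sketch of a sidecar-injected Bookinfo pod (abridged; images illustrative).
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: productpage
spec:
  initContainers:
  - name: istio-init            # sets up the iptables redirect rules, then exits
    image: docker.io/istio/proxyv2:1.15.3
  containers:
  - name: productpage           # the application container itself
    image: docker.io/istio/examples-bookinfo-productpage-v1:1.17.0
  - name: istio-proxy           # the Envoy sidecar; pod traffic flows through it
    image: docker.io/istio/proxyv2:1.15.3
```

This matches the 2/2 READY counts seen above: each Bookinfo pod runs its application container plus istio-proxy, with istio-init having already completed.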