Today let's talk about getting the status of a Ceph cluster, and about the Ceph configuration file.

Common commands for getting cluster status

1. ceph -s: print the Ceph cluster's system status

Note: the output of ceph -s falls into three classes. The first is cluster information, such as the cluster id and health status. The second is service information, for example how many mon, mgr, mds, osd and rgw daemons the cluster runs, and what state each of these services is in; together these describe the cluster's operational condition and let us see it at a glance. The third class is data storage information, such as the number of pools and the number of PGs; the usage line shows the cluster's used, remaining, and total capacity. Note one thing here: the total disk size the cluster reports is not the amount of object data you can store, because every object is kept in multiple replicas, so the real capacity for object data has to be computed from the replica count. By default the pools we create are replicated pools with a replica count of 3 (one primary and two secondaries), i.e. every object is stored three times, so only about one third of the raw space can actually hold object data.

Getting the instant status of the cluster

2. Getting PG status

[cephadm@ceph-admin ceph-cluster]$ ceph pg stat
304 pgs: 304 active+clean; 3.8 KiB data, 10 GiB used, 890 GiB / 900 GiB avail
[cephadm@ceph-admin ceph-cluster]$

3. Getting pool status

[cephadm@ceph-admin ceph-cluster]$ ceph osd pool stats
pool testpool id 1
  nothing is going on

pool rbdpool id 2
  nothing is going on

pool .rgw.root id 3
  nothing is going on

pool default.rgw.control id 4
  nothing is going on

pool default.rgw.meta id 5
  nothing is going on

pool default.rgw.log id 6
  nothing is going on

pool cephfs-metadat-pool id 7
  nothing is going on

pool cephfs-data-pool id 8
  nothing is going on

[cephadm@ceph-admin ceph-cluster]$

Note: if no pool name is given, the command reports the status of every pool.

4. Getting pool sizes and space usage

[cephadm@ceph-admin ceph-cluster]$ ceph df
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    900 GiB     890 GiB       10 GiB          1.13
POOLS:
    NAME                    ID     USED        %USED     MAX AVAIL     OBJECTS
    testpool                1          0 B         0       281 GiB           0
    rbdpool                 2        389 B         0       281 GiB           5
    .rgw.root               3      1.1 KiB         0       281 GiB           4
    default.rgw.control     4          0 B         0       281 GiB           8
    default.rgw.meta        5          0 B         0       281 GiB           0
    default.rgw.log         6          0 B         0       281 GiB         175
    cephfs-metadat-pool     7      2.2 KiB         0       281 GiB          22
    cephfs-data-pool        8          0 B         0       281 GiB           0
[cephadm@ceph-admin ceph-cluster]$

Note: the ceph df output has two main sections. The first, GLOBAL, covers global space usage: SIZE is the total space, AVAIL the remaining space, RAW USED the raw space already consumed, and %RAW USED the percentage of total raw space consumed. The second section covers per-pool space usage: MAX AVAIL is the maximum capacity the pool can still use, and OBJECTS is the number of objects in that pool.
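As a quick sanity check of the "one third" rule described above, the raw figure from ceph df can simply be divided by the replica count. A minimal sketch; the 900 GiB raw size and the default replica count of 3 are the figures from the output above:

```shell
# Estimate usable capacity of a replicated cluster: raw capacity / replica size.
# 900 GiB raw and size=3 match the ceph df output above.
raw_gib=900
replicas=3
awk -v raw="$raw_gib" -v size="$replicas" \
    'BEGIN { printf "usable ~ %d GiB\n", raw / size }'
```

This prints an estimate of roughly 300 GiB. The 281 GiB MAX AVAIL reported per pool is in the same ballpark; Ceph additionally accounts for the full ratios and space already used, which is why the reported value is a bit lower.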
Getting detailed storage usage

[cephadm@ceph-admin ceph-cluster]$ ceph df detail
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED     OBJECTS
    900 GiB     890 GiB       10 GiB          1.13         214
POOLS:
    NAME                    ID     QUOTA OBJECTS     QUOTA BYTES     USED        %USED     MAX AVAIL     OBJECTS     DIRTY     READ        WRITE       RAW USED
    testpool                1      N/A               N/A                 0 B         0       281 GiB           0         0         2 B         2 B          0 B
    rbdpool                 2      N/A               N/A               389 B         0       281 GiB           5         5        75 B        19 B      1.1 KiB
    .rgw.root               3      N/A               N/A             1.1 KiB         0       281 GiB           4         4        66 B         4 B      3.4 KiB
    default.rgw.control     4      N/A               N/A                 0 B         0       281 GiB           8         8         0 B         0 B          0 B
    default.rgw.meta        5      N/A               N/A                 0 B         0       281 GiB           0         0         0 B         0 B          0 B
    default.rgw.log         6      N/A               N/A                 0 B         0       281 GiB         175       175     7.2 KiB     4.8 KiB          0 B
    cephfs-metadat-pool     7      N/A               N/A             2.2 KiB         0       281 GiB          22        22         0 B        45 B      6.7 KiB
    cephfs-data-pool        8      N/A               N/A                 0 B         0       281 GiB           0         0         0 B         0 B          0 B
[cephadm@ceph-admin ceph-cluster]$

5. Checking OSD and MON status

[cephadm@ceph-admin ceph-cluster]$ ceph osd stat
10 osds: 10 up, 10 in; epoch: e99
[cephadm@ceph-admin ceph-cluster]$ ceph osd dump
epoch 99
fsid 7fd4a619-9767-4b46-9cee-78b9dfe88f34
created 2022-09-24 00:36:13.639715
modified 2022-09-25 12:33:15.111283
flags sortbitwise,recovery_deletes,purged_snapdirs
crush_version 25
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client jewel
require_osd_release mimic
pool 1 'testpool' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 last_change 42 flags hashpspool stripe_width 0
pool 2 'rbdpool' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 81 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
	removed_snaps [1~3]
pool 3 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 84 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 4 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 87 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 5 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 89 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 6 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 91 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 7 'cephfs-metadat-pool' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 99 flags hashpspool stripe_width 0 application cephfs
pool 8 'cephfs-data-pool' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 99 flags hashpspool stripe_width 0 application cephfs
max_osd 10
osd.0 up in weight 1 up_from 67 up_thru 96 down_at 66 last_clean_interval [64,65) 192.168.0.71:6802/1361 172.16.30.71:6802/1361 172.16.30.71:6803/1361 192.168.0.71:6803/1361 exists,up bf3649af-e3f4-41a2-a5ce-8f1a316d344e
osd.1 up in weight 1 up_from 68 up_thru 96 down_at 66 last_clean_interval [64,65) 192.168.0.71:6800/1346 172.16.30.71:6800/1346 172.16.30.71:6801/1346 192.168.0.71:6801/1346 exists,up 7293a12a-7b4e-4c86-82dc-0acc15c3349e
osd.2 up in weight 1 up_from 67 up_thru 96 down_at 66 last_clean_interval [60,65) 192.168.0.72:6800/1389 172.16.30.72:6800/1389 172.16.30.72:6801/1389 192.168.0.72:6801/1389 exists,up 96c437c5-8e82-4486-910f-9e98d195e4f9
osd.3 up in weight 1 up_from 67 up_thru 96 down_at 66 last_clean_interval [60,65) 192.168.0.72:6802/1406 172.16.30.72:6802/1406 172.16.30.72:6803/1406 192.168.0.72:6803/1406 exists,up 4659d2a9-09c7-49d5-bce0-4d2e65f5198c
osd.4 up in weight 1 up_from 71 up_thru 96 down_at 68 last_clean_interval [59,66) 192.168.0.73:6802/1332 172.16.30.73:6802/1332 172.16.30.73:6803/1332 192.168.0.73:6803/1332 exists,up de019aa8-3d2a-4079-a99e-ec2da2d4edb9
osd.5 up in weight 1 up_from 71 up_thru 96 down_at 68 last_clean_interval [58,66) 192.168.0.73:6800/1333 172.16.30.73:6800/1333 172.16.30.73:6801/1333 192.168.0.73:6801/1333 exists,up 119c8748-af3b-4ac4-ac74-6171c90c82cc
osd.6 up in weight 1 up_from 69 up_thru 96 down_at 68 last_clean_interval [59,66) 192.168.0.74:6800/1306 172.16.30.74:6800/1306 172.16.30.74:6801/1306 192.168.0.74:6801/1306 exists,up 08d8dd8b-cdfe-4338-83c0-b1e2b5c2a799
osd.7 up in weight 1 up_from 69 up_thru 96 down_at 68 last_clean_interval [60,65) 192.168.0.74:6802/1301 172.16.30.74:6802/1301 172.16.30.74:6803/1301 192.168.0.74:6803/1301 exists,up 9de6cbd0-bb1b-49e9-835c-3e714a867393
osd.8 up in weight 1 up_from 73 up_thru 96 down_at 66 last_clean_interval [59,65) 192.168.0.75:6800/1565 172.16.30.75:6800/1565 172.16.30.75:6801/1565 192.168.0.75:6801/1565 exists,up 63aaa0b8-4e52-4d74-82a8-fbbe7b48c837
osd.9 up in weight 1 up_from 73 up_thru 96 down_at 66 last_clean_interval [59,65) 192.168.0.75:6802/1558 172.16.30.75:6802/1558 172.16.30.75:6803/1558 192.168.0.75:6803/1558 exists,up 6bf3204a-b64c-4808-a782-434a93ac578c
[cephadm@ceph-admin ceph-cluster]$

Besides the commands above, we can also inspect OSDs by their position in the CRUSH map:

[cephadm@ceph-admin ceph-cluster]$ ceph osd tree
ID  CLASS WEIGHT  TYPE NAME           STATUS REWEIGHT PRI-AFF
 -1       0.87891 root default
 -9       0.17578     host ceph-mgr01
  6   hdd 0.07809         osd.6           up  1.00000 1.00000
  7   hdd 0.09769         osd.7           up  1.00000 1.00000
 -3       0.17578     host ceph-mon01
  0   hdd 0.07809         osd.0           up  1.00000 1.00000
  1   hdd 0.09769         osd.1           up  1.00000 1.00000
 -5       0.17578     host ceph-mon02
  2   hdd 0.07809         osd.2           up  1.00000 1.00000
  3   hdd 0.09769         osd.3           up  1.00000 1.00000
 -7       0.17578     host ceph-mon03
  4   hdd 0.07809         osd.4           up  1.00000 1.00000
  5   hdd 0.09769         osd.5           up  1.00000 1.00000
-11       0.17578     host node01
  8   hdd 0.07809         osd.8           up  1.00000 1.00000
  9   hdd 0.09769         osd.9           up  1.00000 1.00000
[cephadm@ceph-admin ceph-cluster]$

Note: from this output we can see which OSD ids live on each host, and each OSD's weight.

Checking mon status

[cephadm@ceph-admin ceph-cluster]$ ceph mon stat
e3: 3 mons at {ceph-mon01=192.168.0.71:6789/0,ceph-mon02=192.168.0.72:6789/0,ceph-mon03=192.168.0.73:6789/0}, election epoch 18, leader 0 ceph-mon01, quorum 0,1,2 ceph-mon01,ceph-mon02,ceph-mon03
[cephadm@ceph-admin ceph-cluster]$ ceph mon dump
dumped monmap epoch 3
epoch 3
fsid 7fd4a619-9767-4b46-9cee-78b9dfe88f34
last_changed 2022-09-24 01:56:24.196075
created 2022-09-24 00:36:13.210155
0: 192.168.0.71:6789/0 mon.ceph-mon01
1: 192.168.0.72:6789/0 mon.ceph-mon02
2: 192.168.0.73:6789/0 mon.ceph-mon03
[cephadm@ceph-admin ceph-cluster]$

Note: both commands show how many mon nodes the cluster has, each node's IP address and listening port, its mon id, and so on. Beyond that, ceph mon stat also shows which node is the leader and how many elections have been held.
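For scripting, the monitor lines of `ceph mon dump` are easy to split apart with standard text tools. A small sketch; the three sample lines are copied from the output above rather than read from a live cluster:

```shell
# Extract rank, name and address from `ceph mon dump`-style monitor lines.
# The sample lines mirror the mon dump output above.
printf '%s\n' \
  '0: 192.168.0.71:6789/0 mon.ceph-mon01' \
  '1: 192.168.0.72:6789/0 mon.ceph-mon02' \
  '2: 192.168.0.73:6789/0 mon.ceph-mon03' |
awk '{ rank = $1; sub(/:$/, "", rank)       # "0:" -> "0"
       name = $3; sub(/^mon\./, "", name)   # "mon.ceph-mon01" -> "ceph-mon01"
       print "rank " rank ": " name " at " $2 }'
```

On a real cluster you would pipe `ceph mon dump` itself into the awk stage instead of the printf sample.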
Checking quorum status

[cephadm@ceph-admin ceph-cluster]$ ceph quorum_status
{"election_epoch":18,"quorum":[0,1,2],"quorum_names":["ceph-mon01","ceph-mon02","ceph-mon03"],"quorum_leader_name":"ceph-mon01","monmap":{"epoch":3,"fsid":"7fd4a619-9767-4b46-9cee-78b9dfe88f34","modified":"2022-09-24 01:56:24.196075","created":"2022-09-24 00:36:13.210155","features":{"persistent":["kraken","luminous","mimic","osdmap-prune"],"optional":[]},"mons":[{"rank":0,"name":"ceph-mon01","addr":"192.168.0.71:6789/0","public_addr":"192.168.0.71:6789/0"},{"rank":1,"name":"ceph-mon02","addr":"192.168.0.72:6789/0","public_addr":"192.168.0.72:6789/0"},{"rank":2,"name":"ceph-mon03","addr":"192.168.0.73:6789/0","public_addr":"192.168.0.73:6789/0"}]}}
[cephadm@ceph-admin ceph-cluster]$
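Because the quorum_status output is JSON, scripts can pick individual fields out of it. A minimal sketch with sed; the sample string is an abridged copy of the output above:

```shell
# Pull the leader name out of (abridged) `ceph quorum_status` JSON.
json='{"election_epoch":18,"quorum":[0,1,2],"quorum_leader_name":"ceph-mon01"}'
leader=$(printf '%s' "$json" |
  sed -n 's/.*"quorum_leader_name":"\([^"]*\)".*/\1/p')
echo "leader: $leader"
```

In practice, asking for pretty-printed JSON (`ceph quorum_status -f json-pretty`) and using a JSON-aware tool is more robust than regex matching, but the idea is the same.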
Using the admin socket to query cluster status

Ceph's admin socket interface is commonly used to query a daemon directly. The sockets are kept under /var/run/ceph by default. This interface cannot be used remotely; it only works on the node where the daemon runs.

Command format: ceph --admin-daemon /var/run/ceph/socket-name command. For example, to get help information: ceph --admin-daemon /var/run/ceph/socket-name help

[root@ceph-mon01 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok help
{
    "calc_objectstore_db_histogram": "Generate key value histogram of kvdb(rocksdb) which used by bluestore",
    "compact": "Commpact object store's omap. WARNING: Compaction probably slows your requests",
    "config diff": "dump diff of current config and default config",
    "config diff get": "dump diff get <var>: dump diff of current and default config setting <var>",
    "config get": "config get <var>: get the config value",
    "config help": "get config setting schema and descriptions",
    "config set": "config set <var> <val> [<val> ...]: set a config variable",
    "config show": "dump current config settings",
    "config unset": "config unset <var>: unset a config variable",
    "dump_blacklist": "dump blacklisted clients and times",
    "dump_blocked_ops": "show the blocked ops currently in flight",
    "dump_historic_ops": "show recent ops",
    "dump_historic_ops_by_duration": "show slowest recent ops, sorted by duration",
    "dump_historic_slow_ops": "show slowest recent ops",
    "dump_mempools": "get mempool stats",
    "dump_objectstore_kv_stats": "print statistics of kvdb which used by bluestore",
    "dump_op_pq_state": "dump op priority queue state",
    "dump_ops_in_flight": "show the ops currently in flight",
    "dump_osd_network": "Dump osd heartbeat network ping times",
    "dump_pgstate_history": "show recent state history",
    "dump_reservations": "show recovery reservations",
    "dump_scrubs": "print scheduled scrubs",
    "dump_watchers": "show clients which have active watches, and on which objects",
    "flush_journal": "flush the journal to permanent store",
    "flush_store_cache": "Flush bluestore internal cache",
    "get_command_descriptions": "list available commands",
    "get_heap_property": "get malloc extension heap property",
    "get_latest_osdmap": "force osd to update the latest map from the mon",
    "get_mapped_pools": "dump pools whose PG(s) are mapped to this OSD.",
    "getomap": "output entire object map",
    "git_version": "get git sha1",
    "heap": "show heap usage info (available only if compiled with tcmalloc)",
    "help": "list available commands",
    "injectdataerr": "inject data error to an object",
    "injectfull": "Inject a full disk (optional count times)",
    "injectmdataerr": "inject metadata error to an object",
    "list_devices": "list OSD devices.",
    "log dump": "dump recent log entries to log file",
    "log flush": "flush log entries to log file",
    "log reopen": "reopen log file",
    "objecter_requests": "show in-progress osd requests",
    "ops": "show the ops currently in flight",
    "perf dump": "dump perfcounters value",
    "perf histogram dump": "dump perf histogram values",
    "perf histogram schema": "dump perf histogram schema",
    "perf reset": "perf reset <name>: perf reset all or one perfcounter name",
    "perf schema": "dump perfcounters schema",
    "rmomapkey": "remove omap key",
    "set_heap_property": "update malloc extension heap property",
    "set_recovery_delay": "Delay osd recovery by specified seconds",
    "setomapheader": "set omap header",
    "setomapval": "set omap key",
    "smart": "probe OSD devices for SMART data.",
    "status": "high-level status of OSD",
    "trigger_deep_scrub": "Trigger a scheduled deep scrub",
    "trigger_scrub": "Trigger a scheduled scrub",
    "truncobj": "truncate object to length",
    "version": "get ceph version"
}
[root@ceph-mon01 ~]#

For example, getting mon01's version information:

[root@ceph-mon01 ~]# ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon01.asok version
{"version":"13.2.10","release":"mimic","release_type":"stable"}
[root@ceph-mon01 ~]#

Getting an OSD's status information:

[root@ceph-mon01 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok status
{
    "cluster_fsid": "7fd4a619-9767-4b46-9cee-78b9dfe88f34",
    "osd_fsid": "bf3649af-e3f4-41a2-a5ce-8f1a316d344e",
    "whoami": 0,
    "state": "active",
    "oldest_map": 1,
    "newest_map": 114,
    "num_pgs": 83
}
[root@ceph-mon01 ~]#

Runtime configuration of a daemon

We can use the ceph daemon command to configure a Ceph daemon dynamically, i.e. change its settings without stopping the service.

For example, getting osd.0's public address:

[root@ceph-mon01 ~]# ceph daemon osd.0 config get public_addr
{
    "public_addr": "192.168.0.71:0/0"
}
[root@ceph-mon01 ~]#

Getting help information. Command format: ceph daemon {daemon-type}.{id} help

[root@ceph-mon01 ~]# ceph daemon osd.1 help
{
    "calc_objectstore_db_histogram": "Generate key value histogram of kvdb(rocksdb) which used by bluestore",
    "compact": "Commpact object store's omap. WARNING: Compaction probably slows your requests",
    "config diff": "dump diff of current config and default config",
    ...
    "version": "get ceph version"
}
[root@ceph-mon01 ~]#

(The command list is identical to the admin-socket help output shown earlier, so it is abridged here.)
Note: when using ceph daemon to query a daemon, the command must be run as root on the host where that daemon lives.

There are two ways to set a daemon's parameters at runtime: sending the configuration to the daemon through a mon, or sending it to the daemon through the admin socket.

Through a mon, the command format is: ceph tell {daemon-type}.{daemon id or *} injectargs --{name} {value} [--{name} {value}]

[cephadm@ceph-admin ceph-cluster]$ ceph tell osd.1 injectargs '--debug-osd 0/5'
[cephadm@ceph-admin ceph-cluster]$

Note: this form can be run on any host in the cluster.

Through the admin socket, the command format is: ceph daemon {daemon-type}.{id} config set {name} {value}

[root@ceph-mon01 ~]# ceph daemon osd.0 config set debug_osd 0/5
{
    "success": ""
}
[root@ceph-mon01 ~]#

Note: this form can only be run on the host where the daemon lives.

Steps to stop or restart a Ceph cluster

Stopping the cluster

1. Tell the cluster not to mark OSDs out, with ceph osd set noout:

[cephadm@ceph-admin ceph-cluster]$ ceph osd set noout
noout is set
[cephadm@ceph-admin ceph-cluster]$

2. Stop the daemons and nodes in this order: storage clients, then gateways (such as rgw), then metadata servers (MDS), then Ceph OSDs, then Ceph Managers, then Ceph Monitors; finally shut down the hosts.

Starting the cluster

1. Start the nodes in the reverse of the stop order: Ceph Monitors, then Ceph Managers, then Ceph OSDs, then metadata servers (MDS), then gateways (such as rgw), then storage clients.

2. Remove the noout flag, with ceph osd unset noout:
[cephadm@ceph-admin ceph-cluster]$ ceph osd unset noout
noout is unset
[cephadm@ceph-admin ceph-cluster]$

Note: once the cluster is back up, the noout flag must be removed, so that when an OSD genuinely fails it can be taken offline in time, rather than having read/write operations still scheduled onto it and causing failures.

Ceph is an object storage cluster, and in production a careless move can have unpredictable consequences, so the stop and start ordering matters a great deal. The procedure above minimizes the chance of losing data, but it does not guarantee that none is lost.

The Ceph configuration file ceph.conf

[cephadm@ceph-admin ceph-cluster]$ cat /etc/ceph/ceph.conf
[global]
fsid = 7fd4a619-9767-4b46-9cee-78b9dfe88f34
mon_initial_members = ceph-mon01
mon_host = 192.168.0.71
public_network = 192.168.0.0/24
cluster_network = 172.16.30.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
[cephadm@ceph-admin ceph-cluster]$

Note: ceph.conf strictly follows ini-style syntax and formatting, in which the hash sign '#' and the semicolon ';' introduce comments. ceph.conf is mainly made up of four sections: [global], [osd], [mon] and [client]. The [global] section holds global configuration, i.e. settings common to all components; the [osd] section's scope is every OSD in the cluster; [mon]'s scope is every mon in the cluster; and [client]'s scope is all clients, such as rbd and rgw.

Per-daemon sections for mon and osd

The [osd] and [mon] sections above apply to all OSDs and all mons. What if we want to configure just one particular osd or mon? In ceph.conf we use a [type.ID] section to address a single daemon. For example, to configure only osd.0, we put the settings in an [osd.0] section, and that configuration takes effect for osd.0 alone. The same logic applies to mons, except that a mon's ID is not numeric; we can use ceph mon dump to look up the mon IDs.

Getting OSD numbers

Note: OSD IDs are all numeric, starting from 0.

Precedence of ceph.conf sections

If a setting in a shared section is repeated in a more specific section, the specific section overrides the shared one, i.e. the specific section's value takes effect; for example, a value set in [osd.0] beats the same setting in [osd], which in turn beats [global]. The precedence order is: [global] is weaker than [osd], [mon] and [client]; [osd] is weaker than [osd.ID], and [mon] weaker than, say, [mon.a]. In short, the narrower a section's scope, the higher its precedence.

Precedence of configuration files

At startup, Ceph looks for its configuration file in the following order:

1. $CEPH_CONF: the file named by this environment variable;
2. -c path/path: the file given with the -c command-line option;
3. /etc/ceph/ceph.conf: the default configuration file path;
4. ~/.ceph/config: the .ceph/config file in the current user's home directory;
5. ./ceph.conf: a ceph.conf file in the current working directory.

So the effective order is $CEPH_CONF > -c path/path > /etc/ceph/ceph.conf > ~/.ceph/config > ./ceph.conf.

Common metaparameters in the Ceph configuration file

The Ceph configuration file supports metaparameters that expand to information about the running daemon: $cluster expands to the current Ceph cluster's name; $type to the daemon's type name, e.g. osd or mon; $id to the daemon's identifier (for osd.0, the identifier is 0); $host to the hostname of the machine the daemon runs on; and $name to the combination of type name and identifier, i.e. $name = $type.$id.

Original article: https://www.cnblogs.com/qiuhom-1874/p/16727820.html
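To make the expansion concrete, here is a small sketch that performs the substitution by hand for a hypothetical daemon osd.0 in a cluster named "ceph"; the template mirrors Ceph's familiar log-file default, and all the values are illustrative rather than read from a real cluster:

```shell
# Expand ceph.conf metaparameters for a hypothetical daemon osd.0
# in a cluster named "ceph". All values here are illustrative.
cluster=ceph
type=osd
id=0
name="$type.$id"                              # $name = $type.$id -> osd.0
template='/var/log/ceph/$cluster-$name.log'   # single quotes: keep literal $
printf '%s\n' "$template" |
  sed -e "s/\$cluster/$cluster/" -e "s/\$name/$name/"
```

Ceph performs this expansion itself when it reads ceph.conf; the sed pipeline only demonstrates the substitution rules, yielding /var/log/ceph/ceph-osd.0.log for this daemon.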