Hi all,
I'm building a MySQL HA setup with corosync + pacemaker (crmsh) + DRBD.
After configuring the resources in crm, the other node cannot be promoted to master, which is strange.
The current master is cleans:
Step 1:
[root@cleans ~]# crm status
Last updated: Tue Feb 10 18:18:46 2015
Last change: Tue Feb 10 17:05:05 2015
Stack: classic openais (with plugin)
Current DC: cleanm.localdomain - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
2 Resources configured
Online: [ cleanm.localdomain cleans.localdomain ]
Master/Slave Set: ms_mysqldrbd [mysqldrbd]
Masters: [ cleans.localdomain ]
Slaves: [ cleanm.localdomain ]
Step 2: fail over
[root@cleans ~]# crm node standby
[root@cleans ~]# crm status
Last updated: Tue Feb 10 18:21:52 2015
Last change: Tue Feb 10 18:21:45 2015
Stack: classic openais (with plugin)
Current DC: cleanm.localdomain - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
2 Resources configured
Node cleans.localdomain: standby
Online: [ cleanm.localdomain ]
Master/Slave Set: ms_mysqldrbd [mysqldrbd]
Slaves: [ cleanm.localdomain ]
Stopped: [ cleans.localdomain ]
cleanm does not become master.
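To see why the policy engine refuses to promote cleanm, the allocation and promotion scores can be dumped from the live CIB (a diagnostic sketch; `crm_simulate` ships with this Pacemaker version):

```shell
# Show allocation/promotion scores computed from the live cluster state (-L).
# A -INFINITY score for ms_mysqldrbd on cleanm.localdomain would point at a
# location constraint blocking promotion there.
crm_simulate -sL
```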
Step 3: bring cleans back online
[root@cleans ~]# crm node online
[root@cleans ~]# crm status
Last updated: Tue Feb 10 18:24:37 2015
Last change: Tue Feb 10 18:24:31 2015
Stack: classic openais (with plugin)
Current DC: cleanm.localdomain - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
2 Resources configured
Online: [ cleanm.localdomain cleans.localdomain ]
Master/Slave Set: ms_mysqldrbd [mysqldrbd]
Masters: [ cleans.localdomain ]
Slaves: [ cleanm.localdomain ]
[root@cleanm ~]# crm_verify -L -V    (reports no errors)
The logs show the following.

On cleanm:
Feb 10 14:24:13 [2061] cleanm.localdomain stonith-ng: error: unpack_location_tags: Constraint 'cli-ban-mysqldrbd-on-cleans.localdomain': Invalid reference to 'mysqldrbd'
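A `cli-ban-...` constraint is what an earlier `crm resource move`/`migrate` leaves behind; the "Invalid reference" suggests it points at a resource id the cluster can no longer resolve. A cleanup sketch, assuming the ban really is stale (both commands exist in crmsh of this vintage):

```shell
# Remove the leftover ban created by an earlier "crm resource move/migrate".
crm resource unmigrate mysqldrbd

# Or delete the offending constraint directly by its id.
crm configure delete cli-ban-mysqldrbd-on-cleans.localdomain
```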
On cleans:
Feb 10 15:27:04 [2370] cleans.localdomain crmd: notice: process_lrm_event: Operation drbdfs_start_0: unknown error (node=cleans.localdomain, call=43, rc=1, cib-update=80, confirmed=true)
Feb 10 15:27:04 [2370] cleans.localdomain crmd: info: process_graph_event: Detected action (32.36) drbdfs_start_0.43=unknown error: failed
Feb 10 15:27:05 [2369] cleans.localdomain pengine: debug: determine_op_status: drbdfs_start_0 on cleans.localdomain returned 'unknown error' (1) instead of the expected value: 'ok' (0)
Feb 10 15:27:05 [2369] cleans.localdomain pengine: warning: unpack_rsc_op_failure: Processing failed op start for drbdfs on cleans.localdomain: unknown error (1)
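The failed operation is `drbdfs_start_0`, i.e. a resource named drbdfs (apparently a filesystem on top of the DRBD device, though it no longer appears in the configuration shown below) failed to start once and the failure is still recorded. A sketch of how one might clear that state and check the DRBD side (DRBD 8.x tooling assumed):

```shell
# Clear the recorded start failure so the scheduler will retry the resource.
crm resource cleanup drbdfs

# Check the replication state on both nodes; both sides should be UpToDate.
cat /proc/drbd
drbd-overview
```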
[root@cleanm ~]# crm configure show
node cleanm.localdomain \
attributes standby=off maintenance=off
node cleans.localdomain \
attributes standby=off maintenance=off
primitive mysqldrbd ocf:linbit:drbd \
params drbd_resource=mysql \
op start timeout=240 interval=0 \
op stop timeout=100 interval=0 \
op monitor role=Master interval=50s timeout=30s \
op monitor role=Slave interval=60s timeout=30s
ms ms_mysqldrbd mysqldrbd \
meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
location drbd-fence-by-handler-mysql-ms_mysqldrbd ms_mysqldrbd \
rule $role=Master -inf: #uname ne cleans.localdomain
property cib-bootstrap-options: \
dc-version=1.1.11-97629de \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes=2 \
no-quorum-policy=ignore \
pe-warn-series-max=1000 \
pe-input-series-max=1000 \
pe-error-series-max=1000 \
cluster-recheck-interval=5min \
stonith-enabled=false
rsc_defaults rsc-options: \
resource-stickiness=100
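Note the location constraint `drbd-fence-by-handler-mysql-ms_mysqldrbd` with `rule $role=Master -inf: #uname ne cleans.localdomain`: it assigns -INFINITY to the Master role on every node except cleans, which by itself would explain why cleanm is never promoted. This kind of constraint is added by DRBD's `crm-fence-peer.sh` fence-peer handler after a replication interruption, and is normally removed again by `crm-unfence-peer.sh` once the peer has resynced. A sketch of a manual cleanup, assuming both nodes are in fact back in sync:

```shell
# Verify both sides report UpToDate/UpToDate before unfencing.
cat /proc/drbd

# Drop the fence constraint left behind by DRBD's crm-fence-peer.sh handler.
crm configure delete drbd-fence-by-handler-mysql-ms_mysqldrbd
```

If `drbd.conf` uses `fencing resource-only;` with the `fence-peer` and `after-resync-target` handlers configured, DRBD should remove this constraint automatically after resync; the constraint lingering here suggests the unfence handler never ran or failed.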