Author Topic: drbd( crmsh) switch fail  (Read 5095 times)


jjchiou

drbd( crmsh) switch fail
« on: 2015-02-10 18:30 »
Dear all,

I am building MySQL HA with corosync + pacemaker (crmsh) + drbd.

After setting up the resources in crm, the other node cannot be switched to master, which is quite strange.
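
Before the switchover test, the state of the DRBD resource itself can be sanity-checked on both nodes. A rough sketch (the resource name mysql comes from the drbd_resource=mysql parameter in the crm configuration further down; output not shown):

cat /proc/drbd            # overall DRBD status
drbdadm cstate mysql      # connection state, should be Connected
drbdadm role mysql        # Primary/Secondary role of this node
drbdadm dstate mysql      # disk state, should be UpToDate/UpToDate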

On the master, cleans:
Step 1:
[root@cleans ~]# crm status
Last updated: Tue Feb 10 18:18:46 2015
Last change: Tue Feb 10 17:05:05 2015
Stack: classic openais (with plugin)
Current DC: cleanm.localdomain - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
2 Resources configured


Online: [ cleanm.localdomain cleans.localdomain ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ cleans.localdomain ]
     Slaves: [ cleanm.localdomain ]

Step 2: switch over
[root@cleans ~]# crm node standby
[root@cleans ~]# crm status
Last updated: Tue Feb 10 18:21:52 2015
Last change: Tue Feb 10 18:21:45 2015
Stack: classic openais (with plugin)
Current DC: cleanm.localdomain - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
2 Resources configured


Node cleans.localdomain: standby
Online: [ cleanm.localdomain ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Slaves: [ cleanm.localdomain ]
     Stopped: [ cleans.localdomain ]

cleanm cannot become master.
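
To see why cleanm is not promoted, the fail counts and the allocation/promotion scores can be dumped. A rough sketch using standard pacemaker tools (output not shown):

crm_mon -1Arf       # one-shot status with node attributes, inactive resources and fail counts
crm_simulate -sL    # allocation and promotion scores computed from the live CIB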

Step 3: bring cleans back online
[root@cleans ~]# crm node online
[root@cleans ~]# crm status
Last updated: Tue Feb 10 18:24:37 2015
Last change: Tue Feb 10 18:24:31 2015
Stack: classic openais (with plugin)
Current DC: cleanm.localdomain - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
2 Resources configured


Online: [ cleanm.localdomain cleans.localdomain ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ cleans.localdomain ]
     Slaves: [ cleanm.localdomain ]


[root@cleanm ~]# crm_verify -L -V    reports no errors.

In the logs I see the following, on server cleanm:

Feb 10 14:24:13 [2061] cleanm.localdomain stonith-ng:    error: unpack_location_tags:    Constraint 'cli-ban-mysqldrbd-on-cleans.localdomain': Invalid reference to 'mysqldrbd'
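
The id cli-ban-mysqldrbd-on-cleans.localdomain looks like a leftover ban constraint of the kind created by crm resource ban/migrate. If it is still present in the CIB, it can be listed and removed by id, roughly like this (the id is taken from the log line above):

crm configure show | grep location
crm configure delete cli-ban-mysqldrbd-on-cleans.localdomain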

On server cleans:
Feb 10 15:27:04 [2370] cleans.localdomain       crmd:   notice: process_lrm_event:    Operation drbdfs_start_0: unknown error (node=cleans.localdomain, call=43, rc=1, cib-update=80, confirmed=true)
Feb 10 15:27:04 [2370] cleans.localdomain       crmd:     info: process_graph_event:    Detected action (32.36) drbdfs_start_0.43=unknown error: failed
Feb 10 15:27:04 [2370] cleans.localdomain       crmd:     info: process_graph_event:    Detected action (32.36) drbdfs_start_0.43=unknown error: failed
Feb 10 15:27:05 [2369] cleans.localdomain    pengine:    debug: determine_op_status:    drbdfs_start_0 on cleans.localdomain returned 'unknown error' (1) instead of the expected value: 'ok' (0)
Feb 10 15:27:05 [2369] cleans.localdomain    pengine:  warning: unpack_rsc_op_failure:    Processing failed op start for drbdfs on cleans.localdomain: unknown error (1)
Feb 10 15:27:05 [2369] cleans.localdomain    pengine:    debug: determine_op_status:    drbdfs_start_0 on cleans.localdomain returned 'unknown error' (1) instead of the expected value: 'ok' (0)
Feb 10 15:27:05 [2369] cleans.localdomain    pengine:  warning: unpack_rsc_op_failure:    Processing failed op start for drbdfs on cleans.localdomain: unknown error (1)

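A failed start is fatal by default, so the failed drbdfs start recorded above will keep that resource off the node until it is cleaned up. Assuming drbdfs still exists in the CIB (it appears in the log but not in the configure output below), a cleanup would look roughly like this:

crm resource cleanup drbdfs         # clear the failed start record for drbdfs
crm resource cleanup ms_mysqldrbd   # clear any fail counts on the DRBD master/slave set
crm_mon -1f                         # verify the fail counts are gone
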
[root@cleanm ~]# crm configure show
node cleanm.localdomain \
   attributes standby=off maintenance=off
node cleans.localdomain \
   attributes standby=off maintenance=off
primitive mysqldrbd ocf:linbit:drbd \
   params drbd_resource=mysql \
   op start timeout=240 interval=0 \
   op stop timeout=100 interval=0 \
   op monitor role=Master interval=50s timeout=30s \
   op monitor role=Slave interval=60s timeout=30s
ms ms_mysqldrbd mysqldrbd \
   meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
location drbd-fence-by-handler-mysql-ms_mysqldrbd ms_mysqldrbd \
   rule $role=Master -inf: #uname ne cleans.localdomain
property cib-bootstrap-options: \
   dc-version=1.1.11-97629de \
   cluster-infrastructure="classic openais (with plugin)" \
   expected-quorum-votes=2 \
   no-quorum-policy=ignore \
   pe-warn-series-max=1000 \
   pe-input-series-max=1000 \
   pe-error-series-max=1000 \
   cluster-recheck-interval=5min \
   stonith-enabled=false
rsc_defaults rsc-options: \
   resource-stickiness=100
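
(For completeness: the logs above mention a drbdfs filesystem resource that does not appear in this configure output. A typical Filesystem primitive tied to the DRBD master would look roughly like the sketch below; the device, mount point and fstype are assumptions, not values taken from this cluster.)

primitive drbdfs ocf:heartbeat:Filesystem \
   params device=/dev/drbd0 directory=/var/lib/mysql fstype=ext4 \
   op start timeout=60 interval=0 \
   op stop timeout=60 interval=0 \
   op monitor interval=20s timeout=40s
colocation drbdfs_with_master inf: drbdfs ms_mysqldrbd:Master
order drbdfs_after_promote inf: ms_mysqldrbd:promote drbdfs:start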


jjchiou

Re: drbd( crmsh) switch fail
« Reply #1 on: 2015-02-11 11:18 »
Dear all,

I found the problem. crmsh automatically adds the line below to the configuration when the switchover happens. I think it is a protective setting, so that after one switchover it cannot switch again. Does anyone know why this line gets added automatically? (Does it come from the rsc_defaults resource-stickiness=100 setting?)

location drbd-fence-by-handler-mysql-ms_mysqldrbd ms_mysqldrbd \
   rule $role=Master -inf: #uname ne cleans.localdomain
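
If the immediate goal is just to let the other node be promoted again, removing that constraint by id and cleaning up the master/slave set should be enough (a sketch, as I understand crmsh; use with care):

crm configure delete drbd-fence-by-handler-mysql-ms_mysqldrbd
crm resource cleanup ms_mysqldrbd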

My crm version is (ls /usr/src/crmsh/):
crmsh-2.1-1.6.x86_64.rpm  pssh-2.3.1-4.1.x86_64.rpm  python-pssh-2.3.1-4.1.x86_64.rpm
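
The installed versions can also be checked directly rather than from the source RPM directory, e.g.:

rpm -q crmsh pacemaker corosync    # installed package versions
crm --version                      # crmsh's own version string, if this build supports the flag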