Building a High-Availability iSCSI Storage Setup
Original article by weithenn:
http://www.weithenn.org/cgi-bin/wiki.pl?%E5%BB%BA%E7%BD%AE%E9%AB%98%E5%8F%AF%E7%94%A8%E6%80%A7_iSCSI_%E5%84%B2%E5%AD%98%E8%A8%AD%E5%82%99_%28%E4%B8%8A%29
1. Network layout as shown in the diagram:
Use Linux as the iSCSI target server.
2. iSCSI Storage Server 1 uses DRBD (Distributed Replicated Block Device) to mirror and synchronize its block device to iSCSI Storage Server 2.
3. Storage Server 1, Storage Server 2, and the virtualization host server all use NIC bonding.
4. The two storage servers use Heartbeat for failover and monitoring.
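As a sketch of step 2, a minimal DRBD resource definition could look like the following. Everything here is an illustrative assumption, not a value from the original article: the resource name r0, the hostnames storage1/storage2, the backing disk /dev/sdb1, and the replication-link addresses.

```
# /etc/drbd.conf -- hypothetical minimal resource for the mirrored iSCSI volume
resource r0 {
  protocol C;                     # synchronous replication (true mirror)
  on storage1 {                   # must match `uname -n` on Storage Server 1
    device    /dev/drbd0;         # replicated device the iSCSI target exports
    disk      /dev/sdb1;          # local backing disk (assumed)
    address   192.168.1.1:7788;   # replication link address (assumed)
    meta-disk internal;
  }
  on storage2 {                   # must match `uname -n` on Storage Server 2
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.1.2:7788;
    meta-disk internal;
  }
}
```

The iSCSI target would then export /dev/drbd0 on whichever node DRBD, under Heartbeat's control, has promoted to primary.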
Notes
1. The CentOS kernel module configuration file (/etc/modprobe.conf) tells the operating system to load the bonding module for the virtual network interface and specifies the NIC fault-tolerance mode.
Change to mode=5 for NIC teaming (balance-tlb).
mode=1 (active-backup)
Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode.
mode=2 (balance-xor)
XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.
mode=3 (broadcast)
Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.
mode=4 (802.3ad)
IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification.
Prerequisites:
Ethtool support in the base drivers for retrieving the speed and duplex of each slave.
Requires a switch that supports 802.3ad (LACP).
mode=5 (balance-tlb)
Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
Any ordinary switch will do.
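The note above can be sketched as a concrete /etc/modprobe.conf entry plus a matching slave-interface file. The interface names (bond0, eth0) and the choice of mode=1 with a 100 ms link monitor interval are illustrative assumptions:

```
# /etc/modprobe.conf -- load the bonding module as virtual NIC bond0
alias bond0 bonding
# mode=1 is active-backup; per the note above, mode=5 gives balance-tlb teaming
# miimon=100 checks link state every 100 ms
options bond0 mode=1 miimon=100

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- enslave eth0 to bond0
# (a matching ifcfg-eth1 would be configured the same way)
#   DEVICE=eth0
#   MASTER=bond0
#   SLAVE=yes
#   ONBOOT=yes
#   BOOTPROTO=none
```

After restarting networking, /proc/net/bonding/bond0 shows the active mode and the state of each slave, which is a quick way to confirm the bond came up as intended.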

3. I would recommend StarWind + DFS Replication + the Broadcom SLB driver:
http://blogs.technet.com/b/filecab/archive/2009/06/29/deploying-dfs-replication-on-a-windows-failover-cluster-part-iii.aspx