
Monday, December 13, 2010

Should I do Teaming or MPIO on iSCSI NIC?

 

Some people get confused when setting up high availability for iSCSI NICs.

Well, I don't blame them. Let's get back to the question:

Should I do teaming or MPIO on the iSCSI NIC?

If you chose NIC teaming, the answer is wrong!

No, NIC teaming is not supported on the iSCSI interface. Please see more info below:

Public: Network card that is used for connectivity with external clients. NIC Teaming is fully supported on this interface. See Network adapter teaming and server clustering for additional information.

Private: Network card that is used for internal cluster communication. NIC Teaming is NOT supported on this interface.

SAN: Network card that is used for communication to the storage device. NIC Teaming is NOT supported on this interface. Instead use Microsoft Multipath I/O (MPIO) or multiple connections per session (MCS, per the iSCSI specification) to achieve fault tolerance.

Clarification taken from http://www.microsoft.com/windowsserver2003/technologies/storage/iscsi/iscsicluster.mspx

Now that that's clear, here is the simple configuration you should follow to set up high availability on the iSCSI NICs.

1. Assign an IP address to iSCSI NIC 1 and iSCSI NIC 2.
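On Server Core there is no GUI for this step; here is a command-line sketch, assuming the two connections are named "iSCSI NIC 1" and "iSCSI NIC 2" and that the example addresses and masks match your iSCSI subnets (adjust both to your environment):

```
netsh interface ip set address name="iSCSI NIC 1" static 10.0.1.11 255.255.255.0
netsh interface ip set address name="iSCSI NIC 2" static 10.0.2.11 255.255.255.0
```

No default gateway is set here, since iSCSI subnets are typically non-routed.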

2. Execute the iscsicpl command and click “Quick Connect” to connect to the iSCSI storage.

You will be able to see the LUNs that have been presented to your server.

3. Select the LUN and click Connect. Remember to tick Enable multi-path and set it to use iSCSI NIC 1.

Repeat the same process for every LUN.

4. On Server Core, install the MPIO feature:

ocsetup MultipathIo /norestart

This installs the MPIO feature without restarting the server.
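On Windows Server 2008 R2 and later, DISM can install the same feature (ocsetup was deprecated in later releases); a sketch:

```
dism /online /enable-feature /featurename:MultipathIo /norestart
```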

5. Claim all iSCSI-attached storage for MPIO:

mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

The server will restart. After the restart, you can execute MPIOCPL.

You will see a new device hardware ID (MSFT2005iSCSIBusType_0x9) in the MPIO Devices tab.
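For reference, here is my reading of the claim command's switches, written as cmd comments:

```
:: -r  automatically reboot the server once the claim completes
:: -i  install (claim) MPIO support for the matching devices
:: -d  claim only devices with the hardware ID that follows
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"
```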

6. Now add a second path to each LUN, keeping the default Round Robin load-balancing policy.

Go to the iSCSI Initiator, select the LUN and click Properties.

7. Click Add Session > tick Enable multi-path > click Advanced and select iSCSI NIC 2.

8. Once complete, click Devices and then the MPIO button. You will see two paths on the LUN, set to Round Robin.

9. To verify, execute

mpclaim -s -d

to view all LUNs that have been claimed by MPIO.

10. To view the MPIO policy for a specific disk number:

mpclaim -s -d <disk #>

Now you have successfully learned how to set up high availability on iSCSI NICs.

6 comments:

  1. Your statement of it not being supported is very incorrect, depending on the OS.
    2008 R2 fully supports teaming over any/all network connections, provided the network driver itself performs the function.

    http://support.microsoft.com/?id=254101

    There are many articles about this very thing debating if teaming should or should not be used.

    Speaking from experience, it works great. I currently have 1 SAN in failover (2 controllers, 4 Gb ports) and 7 host servers (all with 10 Gb ports).
    I team 2 ports on each host into a single connection to each of the 2 SAN controllers, with a second team for failover to the same. MS Cluster service validates and passes all aspects, and the few times we've requested support from MS (mostly relating to invalid reporting functions in the Failover Cluster service GUI) they have all stated that the environment is supported and correctly configured.

  2. Thank you for your valuable feedback. Let me share some information.

    Your link is applicable to MS Cluster. For Hyper-V, MS does not fully support teaming; if any problem occurs, the customer needs to remove the teaming while troubleshooting. For more detail, please refer to this link http://support.microsoft.com/kb/968703

    Now, back to the original discussion on the iSCSI NIC: MS does not support the use of teaming on the iSCSI interface. For more detail, please refer to the iSCSI user guide.
    http://download.microsoft.com/download/a/e/9/ae91dea1-66d9-417c-ade4-92d824b871af/uguide.doc

    Hope this information helps you, and please share it with others.

    Thanks again for your comment,
    Cheer-Lai

  3. Hi,

    I have a bit of an issue. I am trying to get a three-node Hyper-V failover cluster running. The SAN (Dell EqualLogic) will only allow connections from a single subnet, but I have two NICs in each server for iSCSI use. If I put both on the same subnet so they can communicate with the SAN, the cluster validation tests fail, and if I proceed I only see half of the network interfaces in Failover Cluster Manager, since both NICs in each server are on the same subnet.

    If I team the NICs, that is not supported by Microsoft.

    Therefore it appears that I will not be able to build a Hyper-V failover cluster, since teaming is not supported and multiple NICs on the same subnet is not supported either.

    It has been recommended that I put each iSCSI NIC on a separate VLAN, but the SAN does not appear capable of that.

    Is there some other option that I do not yet know about?

    Please help - your comments are most appreciated.
    Regards
    Ivan Linton

  4. Hi Ivan,

    You should deploy the iSCSI LAN on a separate network (subnet). The SAN should sit on the iSCSI LAN only, separated from the production network. No teaming for the iSCSI NICs: instead, configure multiple IPs (on the iSCSI network) and use MPIO.

    On Hyper-V you can configure teaming; it is just not supported by Microsoft. So far I have configured teaming for Hyper-V (not for iSCSI use) and it works without any problem. If teaming causes an issue, just break it to test further.

  5. Hi Ivan.
    Same problem here: the SAN is on a separate VLAN, but the two NICs on each node are in the same subnet as the SAN.
    The HP LeftHand SAN does not support more than one IP address.
    My validation report issues a warning about two NICs on the same subnet :-(
    Did you find a solution?
    Regards,
    Peter

    1. With LeftHand you can (in fact MUST) configure X connections per volume, where X is the number of NICs: give each NIC one IP address, specify the source address in each connection, set the desired MPIO policy, and let MPIO do its work.
      The cluster validation reports the warning because (I think) usually the application cannot specify the source IP address, so routing would be a mess... but with LeftHand it works and it's the way to go! (I have several clusters in production with up to 4 NICs per node and all is OK!)
