[Other] Advanced: Link aggregation, MPIO, iSCSI MC/S

I am trying to find the proper way of accomplishing the following.
I would like to provide 2 Gb/s access for clients accessing a fileserver guest VM on an ESXi server, which itself accesses the datastore over iSCSI. Therefore the ESXi server needs a 2 Gb/s connection to the NAS. I would also like to provide 2 Gb/s directly on the NAS.
It looks like there are three technologies that can help: link aggregation (802.3ad, LAG, trunking), Multipath I/O (MPIO), and iSCSI Multiple Connections per Session (MC/S).
However, each has its own purpose and drawbacks. Aggregation provides 2 Gb/s total, but a single connection (I think the link is chosen by source/destination MAC address) can only get 1 Gb/s, which is useless for something like iSCSI, since it is a single stream. MPIO seems a good option for iSCSI since it balances traffic across two connections, but it seems to require two IPs on the source and two IPs on the destination. I am unsure about MC/S.
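To illustrate the single-stream limitation mentioned above, here is a minimal Python sketch of how a switch typically picks the egress member of an aggregated link. The hash function here is hypothetical (not any vendor's actual algorithm), but the key behavior is the same: one MAC pair always maps to one member link.

```python
import hashlib

def lag_member(src_mac: str, dst_mac: str, num_links: int = 2) -> int:
    """Deterministically map a source/destination MAC pair to one member link."""
    digest = hashlib.sha256(f"{src_mac}->{dst_mac}".encode()).digest()
    return digest[0] % num_links

# A single iSCSI session between the ESXi host and the NAS is one MAC pair,
# so every frame hashes to the same member link (MACs made up for illustration):
links = {lag_member("00:0c:29:aa:bb:cc", "00:11:32:dd:ee:ff") for _ in range(1000)}
print(len(links))  # the flow only ever uses one of the two 1 Gb/s links
```

Two different flows (e.g. two clients with different MACs) may land on different links, which is why aggregation still helps aggregate throughput.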
Here is what I would like to achieve, however I am not sure which technology to employ on each pair of 1 Gb/s NICs.
I also think this design is flawed, because doing link aggregation between the NAS and the switch would prevent me from using MPIO on the ESXi host: MPIO also requires two IPs on the NAS, and I think link aggregation will give me a single IP.
Maybe using MC/S instead of MPIO would work?
Here is a diagram:
Can you get 10G NICs for your servers? If so, you could get a switch with 10G interfaces (although it'd be expensive).
reply to xcimo
This is very easily accomplished with the appropriate switching gear. I'd recommend some Cisco gear - most any gigabit-capable gear (preferably IOS) will suffice.
I would also HIGHLY recommend segmenting your network into different VLANs for general network traffic and iSCSI traffic. Multicast and broadcast traffic NEED to stay away from iSCSI traffic... they can seriously affect performance and break a bunch of things.
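As a sketch, the VLAN separation might look like this on an IOS switch (the VLAN IDs and interface names below are made up for illustration; adjust for your hardware):

```
! Sketch only - VLAN IDs and interface names are assumptions.
vlan 10
 name LAN
vlan 20
 name ISCSI
!
! Put the ESXi iSCSI-facing port in the iSCSI VLAN:
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 20
```
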
Essentially, you'll need to create a bonded virtual link (called a PortChannel in Cisco terms), then add the interfaces to the port channel. Each 1 Gb interface is bonded into the port channel, creating a 1 Gb + 1 Gb = 2 Gb link... (well, almost 2 Gb - TCP has a little overhead, which makes it a little less than a true 2 Gb connection).
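A minimal IOS sketch of the port channel described above (interface numbers, the VLAN ID, and the use of LACP "active" mode are assumptions for illustration):

```
! Sketch only - adjust interface numbers and VLAN for your gear.
interface Port-channel1
 switchport mode access
 switchport access vlan 20
!
! "mode active" negotiates LACP (802.3ad) on both member links:
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode active
```
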
I wrote an article on my blog about bonding port channels with Cisco gear; it has some very good example configurations that would put you in a very good position to start with.
The blog post talks about troubleshooting LACP/PAgP links, but I did post the reasoning behind what was wrong, how to correct it, and the configurations needed to make it work. Granted, this was in a large corporate environment, but you can easily scale it back to fit what you want to do.
tubbynet (Premium, MVM)
said by sleepyshark: "It should be noted that you'll get two gig of *aggregate* throughput -- but no single stream will be able to exceed one gig."
reply to sleepyshark