 PToN join:2001-10-04 Houston, TX | Storage performance? Hello,
For the longest time I've been debating what offers better performance:
1. One huge LUN (e.g. 4TB) presented to all Xen hosts, with one big SR and then N number of VDIs, or
2. N number of semi-large LUNs (e.g. 500GB - 1TB) presented to all Xen hosts, with N SRs and N VDIs?
I don't know if bandwidth is limited per LUN or if it's total bandwidth per FC port...
If I have 10 LUNs all going via the same ports, would that be the same as having 1 LUN?
Any ideas?
Thanks. |
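On the shared-port question: LUNs behind the same FC port share that port's bandwidth ceiling, so 10 busy LUNs on one port split roughly what 1 LUN would get. A back-of-envelope sketch, assuming a single 8Gb FC port with roughly 800 MB/s usable (illustrative figures, not from this thread):

```shell
# Back-of-envelope: LUNs sharing one FC port share its bandwidth ceiling.
# PORT_MBPS is an assumed figure for one 8Gb FC port, not a measured value.
PORT_MBPS=800
for luns in 1 10; do
  echo "$luns LUN(s) on one port: $((PORT_MBPS / luns)) MB/s each if all are busy"
done
```

In practice, multipathing across both ports, queue depths, and the array's backend layout usually matter more than the LUN count itself.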
|
 DarkLogixTexan and ProudPremium join:2008-10-23 Baytown, TX kudos:3 | Well, I would hope you're using some form of multipathing with the FC so all available FC ports are used and load-balanced. -- »Death Star Petition |
|
 | reply to PToN You might check with your SAN manufacturer for best practices. On my unit, a LUN typically spans 7 disks, so smaller LUNs perform better for us. My max LUN size is currently 1 TB. |
|
 PToN join:2001-10-04 Houston, TX | reply to PToN I have 2 disk groups: one with 16 400GB 10K FC drives, and one with 8 1TB FATA drives.
2 controllers and 2 paths per server.
I will check HP's site for any best practices they have.
Thanks. |
|
 PToN join:2001-10-04 Houston, TX | reply to PToN Well, after reading some more posts and whitepapers, it seems like there won't be much if any performance gain when using an EVA 4400, as it only has 1 loop internally.
The controllers are balanced automatically and multi-path provides you access through different controllers.
The only recommendation is to have as many disks per disk group as possible and to group disks of similar size. Other than that, you can create whatever you want, although it is mentioned that you should limit the LUN size to minimize the impact should a LUN go crazy and get corrupted.
Thanks for looking. |
|
 tekmunkiTekmunkiPremium join:2001-12-06 Lake City, FL | reply to PToN I limit our NetApp SAN to 10 VMs per LUN; the size of the LUN doesn't really factor in, although I believe best practices say to limit it to 500GB. |
|
 DarkLogixTexan and ProudPremium join:2008-10-23 Baytown, TX kudos:3 | reply to PToN said by PToN:I have 2 Disk groups. - 1 with 16 400GB FC 10K drives and - 1 with 8 1TB FATA drives
2 controllers and 2 paths per server.
I will check HP for any best practices they have in their site.
Thanks. Did you find anything on the P2000 SAS MSA?
I plan to look into it more later, but currently, from a C3000 to a P2000, one controller is listed as VSM active and the other as passive, and the "force active" option is greyed out. -- »www.change.org/petitions/create-···imcity-4 |
|
 dennismurphyPut me on hold? I'll put YOU on holdPremium join:2002-11-19 Parsippany, NJ Reviews:
·Verizon FiOS
·Optimum Online
| reply to PToN said by PToN:Well, after reading some more posts and whitepapers, it seems like there won't be much if any performance gain when using an EVA 4400, as it only has 1 loop internally.
The controllers are balanced automatically and multi-path provides you access through different controllers.
The only recommendation is to have as many disks per disk group as possible and to group disks of similar size. Other than that, you can create whatever you want, although it is mentioned that you should limit the LUN size to minimize the impact should a LUN go crazy and get corrupted.
Thanks for looking. Correct. On an EVA, the best performance will come from the largest possible disk group. Ideally, you'd have a multiple of 8 disks in your group, since RSS's are created in groups of 8.
Once you have your group defined, you can create as many vRAID disks on top as you'd like. Fundamentally, there would be no advantage to a single large vRAID disk vs. multiple smaller vRAIDs, and in fact, I can think of a few reasons you'd want the smaller ones.
Since the underlying RSS's are the same, I'd use multiple smaller vRAID disks. Either vRAID1 or vRAID5 will work, but vRAID1 will give you a little better performance and availability, at the cost of capacity.
vRAID5 on EVA is a 3+1 technology, but that doesn't mean the LUN is only on 4 disks. Because the data is 'sprinkled' throughout the disk pool, you get the performance advantage of all the spindles, but data is written to the RSS's in 3+1 'chunks'.
Cool stuff. I've always liked the EVA - it was way ahead of its time when it launched. The 3Par, conceptually, shares quite a few things with EVA, but with some REALLY powerful ASICs. I'm in love with the new 7200/7400 arrays.
Yeah, I can talk storage all day.  |
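As a rough illustration of the vRAID1 vs. vRAID5 capacity trade-off described above, here is a back-of-envelope sketch using the 16 x 400GB FC disk group mentioned earlier in the thread. Sparing and metadata overhead are ignored, so real usable numbers will be lower:

```shell
# Usable-capacity sketch for a 16 x 400GB disk group.
# vRAID1 mirrors (50% usable); EVA vRAID5 writes in 3+1 chunks (75% usable).
# Overhead for sparing/metadata is deliberately ignored here.
RAW_GB=$((16 * 400))
echo "raw:    ${RAW_GB} GB"
echo "vRAID1: $((RAW_GB / 2)) GB usable"
echo "vRAID5: $((RAW_GB * 3 / 4)) GB usable"
```

The 25% parity overhead of 3+1 vRAID5 vs. the 50% mirror overhead of vRAID1 is the capacity side of the performance/availability trade-off described above.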
|
 dennismurphyPut me on hold? I'll put YOU on holdPremium join:2002-11-19 Parsippany, NJ Reviews:
·Verizon FiOS
·Optimum Online
| reply to DarkLogix said by DarkLogix:said by PToN:I have 2 Disk groups. - 1 with 16 400GB FC 10K drives and - 1 with 8 1TB FATA drives
2 controllers and 2 paths per server.
I will check HP for any best practices they have in their site.
Thanks. Did you find anything on the P2000 SAS MSA? I plan to look into it more later, but currently, from a C3000 to a P2000, one controller is listed as VSM active and the other as passive, and the "force active" option is greyed out. Are you using the VAAI drivers?
Also, I realize it's ESX 4 based, but there may be some good info here:
»h20195.www2.hp.com/V2/GetPDF%2Ea···NW%2Epdf |
|
 DarkLogixTexan and ProudPremium join:2008-10-23 Baytown, TX kudos:3 | It's the SAS switch on the C3000 that says active/passive, so in my case I don't think it's up to the OS level yet. |
|
 dennismurphyPut me on hold? I'll put YOU on holdPremium join:2002-11-19 Parsippany, NJ Reviews:
·Verizon FiOS
·Optimum Online
| said by DarkLogix:It's the SAS switch on the C3000 that says active/passive, so in my case I don't think it's up to the OS level yet. Is this a single or dual-domain config?
When it says "active/passive" in the SAS switch, that's referring to the management interface; both switches will process I/O.
Here's a link to the deployment guide - page 79 has the "optimal" cabling for a BladeSystem + P2000 G3.
»bizsupport2.austin.hp.com/bc/doc···5615.pdf
If you're using Windows, make sure you have the MPIO DSM loaded: »bizsupport2.austin.hp.com/bc/doc···1677.pdf
If you're using Linux, make sure you setup MPIO according to the reference guide: »h20272.www2.hp.com/utility/docum···0_22.pdf
In all cases, though, the SAS switches should be active-active for I/O, even though the GUI shows active/passive for the management interface.
Let me know if you need any more help .... |
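For the Linux MPIO case mentioned above, a minimal /etc/multipath.conf device stanza for an ALUA-capable array might look like the sketch below. The vendor/product strings and the tunable values are assumptions for illustration; confirm the exact stanza against HP's reference guide linked above. The sketch writes to /tmp so it can be run harmlessly:

```shell
# Illustrative device-mapper-multipath stanza for an ALUA array like the
# P2000 G3. Vendor/product strings and tunables are ASSUMED values --
# check the vendor's reference guide before using them for real.
cat > /tmp/multipath.conf <<'EOF'
devices {
    device {
        vendor "HP"
        product "P2000 G3 SAS"
        path_grouping_policy group_by_prio
        prio alua
        path_selector "round-robin 0"
        path_checker tur
        failback immediate
        no_path_retry 18
    }
}
EOF
grep -q 'prio alua' /tmp/multipath.conf && echo "wrote ALUA multipath stanza"
```

The key piece for an active/optimized vs. active/non-optimized array is `path_grouping_policy group_by_prio` with `prio alua`, so I/O prefers the optimized controller's paths.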
|
 DarkLogixTexan and ProudPremium join:2008-10-23 Baytown, TX kudos:3 | The SAS switch setup is:
- the C3000 has 2 SAS switches and the P2000 has 2 controllers; each of the 2 SAS switches has 1 cable to each controller (so 4 cables total)
- in the web interface for the SAS switches, the 2nd SAS switch is listed as VSM passive in the topology of the links to the P2000
- 2 of the blades are running ESXi 5.1 with a shared datastore on the P2000 (the 3rd blade isn't using the P2000 and will only use the SAS switches to connect to a tape library) |
|
 dennismurphyPut me on hold? I'll put YOU on holdPremium join:2002-11-19 Parsippany, NJ Reviews:
·Verizon FiOS
·Optimum Online
| said by DarkLogix:The SAS switch setup is: the C3000 has 2 SAS switches and the P2000 has 2 controllers; each of the 2 SAS switches has 1 cable to each controller (so 4 cables total). In the web interface for the SAS switches, the 2nd SAS switch is listed as VSM passive in the topology of the links to the P2000. 2 of the blades are running ESXi 5.1 with a shared datastore on the P2000 (the 3rd blade isn't using the P2000 and will only use the SAS switches to connect to a tape library). Perfect. VSM is the Virtual Storage Manager. That's the web interface and has no impact on I/O. Perfectly normal.
What does "multipath -v3" output on the ESXi host? |
|
 DarkLogixTexan and ProudPremium join:2008-10-23 Baytown, TX kudos:3 | said by dennismurphy:Perfect. VSM is the Virtual Storage Manager. That's the web interface and has no impact on I/O. Perfectly normal.
What does "multipath -v3" output on the ESXi host? Is there more to that command? I just got "command not found" over SSH to one host |
|
 dennismurphyPut me on hold? I'll put YOU on holdPremium join:2002-11-19 Parsippany, NJ | Duh. I gave you the Linux command ...
Try 'esxcli storage nmp device list' ... |
|
 DarkLogixTexan and ProudPremium join:2008-10-23 Baytown, TX kudos:3 | naa.600c0ff000195301191d445101000000
   Device Display Name: HP Serial Attached SCSI Disk (naa.600c0ff000195301191d445101000000)
   Storage Array Type: VMW_SATP_ALUA
   Storage Array Type Device Config: {implicit_support=on;explicit_support=off;explicit_allow=on;alua_followover=on;{TPG_id=0,TPG_state=AO}{TPG_id=1,TPG_state=ANO}}
   Path Selection Policy: VMW_PSP_RR
   Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0;lastPathIndex=0:NumIOsPending=0,numBytesPending=0}
   Path Selection Policy Device Custom Config:
   Working Paths: vmhba1:C0:T1:L1
   Is Local SAS Device: false
   Is Boot USB Device: false |
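To pull the ALUA target-port-group states out of a listing like the one above, a quick grep sketch (the here-doc below mirrors the key lines of that output; AO = active/optimized, ANO = active/non-optimized):

```shell
# Extract ALUA target-port-group states from 'esxcli storage nmp device list'
# style output. The sample text mirrors the listing shown in the thread.
cat > /tmp/nmp.txt <<'EOF'
Storage Array Type: VMW_SATP_ALUA
Storage Array Type Device Config: {implicit_support=on;explicit_support=off;explicit_allow=on;alua_followover=on;{TPG_id=0,TPG_state=AO}{TPG_id=1,TPG_state=ANO}}
Path Selection Policy: VMW_PSP_RR
EOF
grep -o 'TPG_id=[0-9]*,TPG_state=[A-Z]*' /tmp/nmp.txt
# prints: TPG_id=0,TPG_state=AO
#         TPG_id=1,TPG_state=ANO
```

One TPG in AO and one in ANO is the expected shape for a healthy dual-controller ALUA array: both port groups are usable, but I/O should prefer the optimized one.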
|