
devicemanage (Premium Member, join:2002-03-16, Southampton, PA):

ESXi 5.5 performance

Looking to create a virtual environment. Here's my hardware:

I have two Dell R710 servers, each with 144 GB RAM and an H700 512 MB RAID controller, and one Dell FS12-TY with 64 GB RAM and an H700 512 MB RAID controller. I have 12 600 GB 15k RPM HDDs and 12 500 GB 7200 RPM HDDs.

The network will serve 10 users, running 5 servers plus the vCenter Server and vSphere Replication (file server, primary DC, secondary DC, Exchange 2013, AV, backup, etc.), all on Windows Server 2012 R2.

The datastore would likely reside on the FS12 with the 12 drives. Host 1 would replicate to host 2.

Thank you.

I was planning on going RAID 10.

Drex ("Beer...The other white meat."; Premium Member, join:2000-02-24, Not There):

With such a small deployment, you might want to consider the vCenter appliance. I deployed it in my lab environment and it's working well, although I'm not sure if vSphere Replication will work with it. Can't see why it wouldn't, though, as it's a separate appliance.

Do you have specs for the VMs? Are you building from scratch or doing a P2V? Sounds like from scratch, since you state Windows 2012 R2.
One concern: with only two hosts replicating between each other, your primary and secondary DCs would be on one host. If it goes down, your AD is down until you're able to bring the VMs on host 2 online.

I've never virtualized an Exchange server, but I've heard there may be some gotchas with doing that. You didn't mention any sort of database VM, which is good.

devicemanage:

I can handle the allocation of resources per VM. I'm just curious how to handle the HDDs I have. The R710s can fit 6 drives each, and the FS12 can do 12.

Just trying to see whether the FS12 should get the slower drives or not.

The hosts have plenty of RAM and CPU power. I would imagine the FS12, acting as the datastore, would just be sitting there?

DarkLogix ("Texan and Proud"; Premium Member, join:2008-10-23, Baytown, TX):

Assuming the FS12 is a storage array that can do shared storage (i.e., Fibre Channel/SAS/iSCSI), I'd put the fast drives there and boot the hosts from a USB flash drive.

Then all your VMs would be stored on the FS12.

devicemanage:

Yes, I was planning on running all servers RAID 10, all SAS drives, with iSCSI back to the FS12.

DarkLogix:

So yeah, I wouldn't even bother using the slow drives; save them for another FS12. Local storage on spinning disks is kind of pointless here, and you'd get more flexibility and performance using all 12 fast drives in a single RAID 10 that all the VMs live on.

devicemanage:

Well, my dilemma is that I have these 7200 RPM drives in the R710s right now. Would it be OK to leave them in there as a RAID 10?

DarkLogix:

You could, but they'd just be eating power.

And with the good drives in shared storage, even without vCenter, if a host failed you could just add the VMs to the other host's inventory and get back up again.

Though vCenter would be a good idea if it's a business.

devicemanage:

How much HDD space does a host server really need, anyway?

DarkLogix:

For just running ESXi, a 4 GB flash drive is enough (maybe even 2 GB). If an actual hard drive (or array) is used, you'll get the host OS plus a local datastore.

But as I said, I wouldn't bother putting any VMs on a local datastore when you have a storage array that's larger and faster.

One downside to booting off a USB flash drive is that boot time can be a bit slow, but once the host is booted, the OS runs from RAM.

JoelC707 (Premium Member, join:2002-07-09, Lanett, AL), to devicemanage:
You're doing a very similar setup to what I have, except I'm using slightly different hosts and Hyper-V instead of ESXi. Mine is a Dell C6100 4-node, 48 GB per node, plus a C2100 (the label on it actually says FS12-TY) with 12 GB (it honestly needs more RAM). I've got about 4-5 users and 15 servers.

Your FS12 should have two internal cabled 2.5" drive bays; grab a pair of laptop drives and run them in RAID 1 for the shared-storage OS (it'll be on an ICH10 controller, I think, but since it's just for the host OS it'll be fine). That leaves your other drives dedicated entirely to the VMs.

For the R710s, can they boot from iSCSI? I know the C6100 nodes can. If so, you could in theory give them a small partition on the FS12 for their OS as well. ESXi doesn't need much; I've booted it from an 8 GB USB flash drive that wasn't even half full. Windows with Hyper-V will of course need more, but shouldn't need more than about 20 GB or so, I think (maybe a little more to leave room for update files).

How much total storage space are you going to need? Are you going to have a backup server? Twelve 600 GB disks in RAID 10 will net you 3.6 TB. Mine are currently filling up about 2.3 TB, but it depends on what you're actually doing with the servers. My suggestion: iSCSI-boot the R710s (or get a few more drives and let them boot from RAID 1), then get another FS12 and load it up with the SATA drives for a backup server. Since it's just a backup server, go RAID 5 or 6; you don't need RAID 10's performance, but you do need the extra capacity (5 to 5.5 TB is what it would net you).
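The capacity figures above can be sanity-checked with a quick sketch (usable-capacity math only, ignoring filesystem and formatting overhead; drive counts and sizes are the ones from this thread):

```python
def usable_tb(drives, size_gb, level):
    """Rough usable capacity in TB for common RAID levels."""
    if level == "raid10":
        usable = drives // 2 * size_gb      # half the drives are mirror copies
    elif level == "raid5":
        usable = (drives - 1) * size_gb     # one drive's worth of parity
    elif level == "raid6":
        usable = (drives - 2) * size_gb     # two drives' worth of parity
    else:
        raise ValueError(level)
    return usable / 1000                    # decimal GB -> TB

print(usable_tb(12, 600, "raid10"))  # 3.6 TB -- the 12x 600 GB SAS array
print(usable_tb(12, 500, "raid5"))   # 5.5 TB -- the SATA drives as backup
print(usable_tb(12, 500, "raid6"))   # 5.0 TB
```

This matches the 3.6 TB and 5 to 5.5 TB numbers quoted above.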

DarkLogix:

said by JoelC707:

Your FS12 should have two internal cabled 2.5" drive bays, get you a pair of laptop drives and run them in RAID 1 for the OS

If it's iSCSI, is the FS12 booting off those two drives? Otherwise you'd need an iSCSI HBA to initiate iSCSI and boot the hosts off it.

BTW, one downside to booting ESXi off a USB flash drive is that it will warn that logs aren't being stored persistently, though there are some tricks to change that.

JoelC707:

No, the FS12 would use those two drives to boot its own OS. If it's a big enough pair of drives, you could use them for the R710s to boot from as well. It really depends on whether the R710 can boot from iSCSI without an additional NIC/HBA (my C6100 nodes can, but that doesn't mean much for the R710, lol).

You're right about ESXi complaining about logs when booting from USB, but there was a setting you could change to direct the logs to an iSCSI or NFS datastore.
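For reference, the setting in question is the ESXi syslog directory (the advanced option `Syslog.global.logDir`). On 5.x it can be pointed at a datastore from the shell; the datastore path below is a placeholder, not one from this thread:

```shell
# Redirect ESXi logs to a directory on a shared datastore
# (path is an example -- use your own datastore and a per-host subdirectory).
esxcli system syslog config set --logdir=/vmfs/volumes/iscsi-datastore/logs/host1

# Reload syslog so the new location takes effect.
esxcli system syslog reload
```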

DarkLogix:

said by JoelC707:

but there was some setting to change that could direct the logs to an iSCSI or NFS datastore.

There are even tricks to make it store them on the flash drive itself.

BTW, if the FS12 uses the two drives to boot, I'd look at SSDs instead of normal laptop drives. I figure the FS12 won't need more than that for itself, and if the R710s can boot off iSCSI it would make boot-up nice and fast. (It'd be just two drives, with three devices booting off them.)

JoelC707:

That would be worth it, I think. For just the FS12's boot, laptop drives would be sufficient, but if it's booting the FS12 and the R710s, then yeah, SSDs would be better.

DarkLogix:

Well, fast boots can reduce downtime when you're either bringing it back up or applying an update. Even if the FS12 only gets rebooted once every two years, that one time you might be feeling impatient waiting for it to come up.

devicemanage:

From what I understand, I could pull all the drives from the Dell R710s and just boot from a USB flash drive. I have a couple of 64 GB ones lying around.

But it would be nice to use the free space in those R710s; I could turn them into a SAN or something.

This is a small office and doesn't have a lot of traffic. What are you guys doing for switches? We have a decent Cisco unmanaged switch; is an iSCSI connection over 1 Gb enough?

DarkLogix:

You can do iSCSI over 1 Gb, but you want all iSCSI traffic on either a dedicated switch or a dedicated VLAN, because you want to limit the stray traffic that hits the iSCSI interfaces.

Also, iSCSI can do MPIO, so you'll want as many interfaces as you can spare dedicated to iSCSI.

BTW, ESXi requires its iSCSI interfaces to be dedicated, and even dedicated vSwitches for iSCSI (there is a loophole, but it's still effectively the same).

Though if you get iSCSI HBAs, the HBA would handle the iSCSI connection below the OS level, so the LUN would be presented as native storage instead of going through the ESXi iSCSI initiator for the boot LUN. If you're not using HBAs, you can ignore that part.
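As a rough sketch of the software-iSCSI side on ESXi 5.5 (adapter name, VMkernel port, and target address below are placeholders, not values from this thread):

```shell
# Enable the software iSCSI initiator.
esxcli iscsi software set --enabled=true

# Bind a dedicated VMkernel port to the software iSCSI adapter
# (vmhba33 and vmk1 are example names -- check yours first).
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1

# Add the FS12's iSCSI target via dynamic (SendTargets) discovery.
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.10.10:3260

# Rescan so the new LUNs show up.
esxcli storage core adapter rescan --adapter=vmhba33
```

Repeat the `networkportal add` for each dedicated iSCSI VMkernel port so MPIO has multiple paths to balance.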

devicemanage:

Well, let me explain my situation. Lots of great info here; thank you all very much.

This site was set up by someone else using the two Dell R710s running 7200 RPM drives in RAID 5 at 3 Gb/s. They are maxed out and seeing performance issues. I had to move some of the servers around; basically there are two servers running on one host and three on the other, cross-replicating the VMs. Also doing backups to a SAN...

I want to add the FS12 to the mix. They already purchased 12 15k 600 GB drives, and I have two H700 RAID cards lying around.

I was planning on upgrading the R710s, but it seems they might be fine the way they are and could be used as extra storage via some SAN appliance. The FS12 running ESXi storage with two RAID 1s should be plenty of storage and give me decent speed via iSCSI. I believe I can get the FS12 a quad-port 1 Gb NIC and group the ports. What do you think?

cramer (Premium Member, join:2007-04-10, Raleigh, NC), to DarkLogix:
Unless you're using 10G iSCSI, a local drive (especially off the H700) will be faster, even the slower drives. HOWEVER, local storage is local to that host only: it complicates migration, and host failure equals storage failure.

cramer, to DarkLogix:
Boot time? Do you know how long it takes the R710's BIOS just to get to the point of loading anything? Optimizing ESXi boot time given the MINUTES the BIOS takes to get anywhere is wasted effort. (Machine boot times in a cluster shouldn't matter at all.)

cramer, to DarkLogix:
said by DarkLogix:

You can do iSCSI over 1 Gb, but you want all iSCSI traffic on either a dedicated switch or a dedicated VLAN, because you want to limit the stray traffic that hits the iSCSI interfaces.

Dedicated switch with deep buffers. Cheap (unmanaged) switches won't have the necessary buffer space (or support jumbo frames) and WILL drop frames, which will absolutely kill iSCSI. iSCSI is completely intolerant of frame drops.
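If the switch does support jumbo frames, they have to be enabled end to end; on the ESXi side that means both the vSwitch and the VMkernel interface (vSwitch1 and vmk1 below are example names):

```shell
# Raise the MTU to 9000 on the iSCSI vSwitch and its VMkernel port.
# Every device in the iSCSI path (switch, array) must match this MTU.
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
```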

BTW, ESXi requires its iSCSI interfaces to be dedicated, and even dedicated vSwitches for iSCSI (there is a loophole, but it's still effectively the same).

Not "require", but it will complain if you run iSCSI and anything else on the same NIC. (I have my iSCSI cluster links set for vMotion as well.)

The R710 can boot from iSCSI if you've paid Dell for the necessary license module.

DarkLogix, to devicemanage:
said by devicemanage:

the FS12 to the mix. They already purchased 12 15k 600 GB drives, and I have two H700 RAID cards lying around.

I was planning on upgrading the R710s, but it seems they might be fine the way they are and could be used as extra storage via some SAN appliance. The FS12 running ESXi storage with two RAID 1s

I'd do a RAID 1 for the FS12's boot and a RAID 10 for VM storage. Ensure it has multi-initiator enabled, then point both hosts at it.

I wouldn't "team" the NICs for iSCSI; I'd dedicate them, each with its own IP, and let MPIO load-balance across them.

cramer:

And don't forget to set ESXi to "round robin", or it will use one link exclusively:
esxcli storage nmp device set --device=[device id] --psp=VMW_PSP_RR
(or through the GUI)

(I do as much via scripts as I can.)
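That per-device command can be scripted over every LUN the host sees; a rough sketch (review the device list first, since this touches every NAA device, not just the FS12's LUNs):

```shell
# Sketch: set the round-robin path policy on every NAA device on this host.
# esxcli prints each device ID at the start of its block, so grep them out.
for dev in $(esxcli storage nmp device list | grep -oE '^naa\.[0-9a-f]+'); do
  esxcli storage nmp device set --device="$dev" --psp=VMW_PSP_RR
done
```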

DarkLogix:

ya