
Paulg
Displaced Yooper
Premium
join:2004-03-15
Neenah, WI
kudos:1

Dell server auto-creating partitions?

Anyone out there use Dell hardware with ESX?

I have a client who is using an R620 with a bunch of 3TB disks as a vmware host to hold backup data. ESXi is installed on the internal SD card.

When we added the local disk and configured the virtual disk in the UEFI array manager, it seems to have been automatically formatted as MBR with some recovery partitions. Since it's MBR, I can't create VMFS datastores larger than 2TB.

Any ideas?



exocet_cm
Free at last, free at last
Premium
join:2003-03-23
New Orleans, LA
kudos:3

said by Paulg:

Anyone out there use Dell hardware with ESX?

I have a client who is using an R620 with a bunch of 3TB disks as a vmware host to hold backup data. ESXi is installed on the internal SD card.

When we added the local disk and configured the virtual disk in the UEFI array manager, it seems to have been automatically formatted as MBR with some recovery partitions. Since it's MBR, I can't create VMFS datastores larger than 2TB.

Any ideas?

Yes, R710 and HP DL380 too.

These disks, how many are there? Are they standalone or part of a RAID configuration? Were they configured in vSphere client, vCenter server, the guest VM, or a RAID config tool?
--
"All newspaper editorial writers ever do is come down from the hills after the battle is over and shoot the wounded." - Bruce Anderson
"I have often regretted my speech, never my silence." - Xenocrates


Badger3k
We Don't Need No Stinkin Badgers
Premium
join:2001-09-27
Franklin, OH
reply to Paulg

What version of ESX? Prior to 5.0, I believe 2TB was the maximum you could create for a single VMFS extent; if you wanted anything bigger you had to span the datastore across multiple extents (or just create multiple datastores).
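
If it helps narrow things down, the VMFS version and extent layout of an existing datastore can be checked from the ESXi shell with vmkfstools; the datastore name below is just an example:

~ # vmkfstools -Ph /vmfs/volumes/datastore1

That prints the VMFS version, block size, capacity, and how many partitions (extents) back the datastore.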
--
Team Discovery: Project Hope



Paulg
Displaced Yooper
Premium
join:2004-03-15
Neenah, WI
kudos:1

Brand new, out of the box, 5.1 install. There are eight 3TB drives set up as a RAID 10 across two 4-disk spans. The virtual disk was configured with the onboard RAID manager.

vSphere reports the proper size and shows the two extra partitions on the disk (I'll have to grab a screenshot of them tomorrow), but the problem is that the disk is formatted as MBR, which has a maximum partition size of 2TB.

edit - I should also note that ESXi is installed on the internal SD card.


aguen
Premium
join:2003-07-16
Grants Pass, OR
kudos:2

1 edit
reply to Paulg

Paulg said "ESXi is installed on the internal SD card".
I found mention in the ESXi 5.x installation guide that when installed on USB/SD devices, the installer will NOT use these devices for its scratch space requirements. It will instead create it on the next internal disk that it finds. I don't think UEFI did this deliberately, as supposedly it should allow you to select a partition type of MBR or GPT. So either someone didn't make the correct selection in UEFI, or ESXi forced MBR when it created its scratch (and related) partitions.

{SNIP}
ESXi 5.1 has these storage requirements:

Installing ESXi 5.1 requires a boot device that is a minimum of 1GB in size. When booting from a local disk or SAN/iSCSI LUN, a 5.2GB disk is required to allow for the creation of the VMFS volume and a 4GB scratch partition on the boot device. If a smaller disk or LUN is used, the installer attempts to allocate a scratch region on a separate local disk. If a local disk cannot be found, the scratch partition (/scratch) is located on the ESXi host ramdisk, linked to /tmp/scratch. You can reconfigure /scratch to use a separate disk or LUN. For best performance and memory optimization, VMware recommends that you do not leave /scratch on the ESXi host ramdisk.

To reconfigure /scratch, see Set the Scratch Partition from the vSphere Client in the vSphere Installation and Setup documentation.

Due to the I/O sensitivity of USB and SD devices, the installer does not create a scratch partition on these devices. As such, there is no tangible benefit to using large USB/SD devices as ESXi uses only the first 1GB. When installing on USB or SD devices, the installer attempts to allocate a scratch region on an available local disk or datastore. If no local disk or datastore is found, /scratch is placed on the ramdisk. You should reconfigure /scratch to use a persistent datastore following the installation.
{End SNIP}

The last paragraph (about USB/SD devices) is what I believe to be the pertinent section.
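
If you end up wanting /scratch on a persistent datastore later, the documented approach is to point the ScratchConfig.ConfiguredScratchLocation advanced setting at a directory on a datastore and reboot; something like this from the ESXi shell should do it (the datastore name and .locker directory are just placeholders):

~ # mkdir /vmfs/volumes/datastore1/.locker
~ # vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/datastore1/.locker
~ # reboot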



Paulg
Displaced Yooper
Premium
join:2004-03-15
Neenah, WI
kudos:1

The scratch partition is using the RAM disk at the moment. ESXi was installed before the drives were online. Also, ESXi 5.0 and newer uses GPT to partition disks, not MBR.

I am convinced that the problem is related to the Dell array manager writing a recovery partition onto the logical drive upon creation.
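
One way to confirm which label actually ended up on the disk is partedUtil from the ESXi shell (the device name here is a placeholder; ls /dev/disks will show the real naa ID):

~ # partedUtil getptbl /dev/disks/naa.xxxxxxxxxxxxxxxx

The first line of the output is the partition table type (gpt or msdos), followed by the disk geometry and one line per partition.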



exocet_cm
Free at last, free at last
Premium
join:2003-03-23
New Orleans, LA
kudos:3

said by Paulg:

The scratch partition is using the RAM disk at the moment. ESXi was installed before the drives were online. Also, ESXi 5.0 and newer uses GPT to partition disks, not MBR.

I am convinced that the problem is related to the Dell array manager writing a recovery partition onto the logical drive upon creation.

That is where I was going with my questioning above. I think it has something to do with Dell configuring the disks and not ESXi.


Paulg
Displaced Yooper
Premium
join:2004-03-15
Neenah, WI
kudos:1

VMware has no way of creating the RAID 10 array; I have to use the Dell utilities for that. Here's a screenshot (that I thought posted earlier) of what we're seeing.


Paulg
Displaced Yooper
Premium
join:2004-03-15
Neenah, WI
kudos:1
reply to Paulg

Figured it out finally. We had to manually delete the partitions with partedUtil. I'd prefer that Dell keep its hands out of my damn drives to begin with, but what can you do.

~ # partedUtil get /dev/disks/naa.6848f690e88ff70018996358238b5209
1458933 255 63 23437770752
1 63 80324 222 0
2 81920 4276223 12 128
~ # partedUtil delete /dev/disks/naa.6848f690e88ff70018996358238b5209 1
~ # partedUtil delete /dev/disks/naa.6848f690e88ff70018996358238b5209 2
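
For anyone reading the output above: the first line from partedUtil get is the disk geometry (cylinders, heads, sectors per track, total sectors), and each following line is partition number, start sector, end sector, type, and attributes, so partitions 1 and 2 are the two Dell-created ones. If the datastore wizard still balks after the deletes, relabelling the disk as GPT should also clear the stale MBR label (same device name as above; this wipes the partition table, so only do it on a disk you intend to blow away anyway):

~ # partedUtil mklabel /dev/disks/naa.6848f690e88ff70018996358238b5209 gpt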
 


exocet_cm
Free at last, free at last
Premium
join:2003-03-23
New Orleans, LA
kudos:3

said by Paulg:

Figured it out finally. We had to manually delete the partitions with partedUtil. I'd prefer that Dell keep its hands out of my damn drives to begin with, but what can you do.

~ # partedUtil get /dev/disks/naa.6848f690e88ff70018996358238b5209
1458933 255 63 23437770752
1 63 80324 222 0
2 81920 4276223 12 128
~ # partedUtil delete /dev/disks/naa.6848f690e88ff70018996358238b5209 1
~ # partedUtil delete /dev/disks/naa.6848f690e88ff70018996358238b5209 2
 

It was in fact Dell's recovery partition?



Paulg
Displaced Yooper
Premium
join:2004-03-15
Neenah, WI
kudos:1

Yup.
