vue666 (banned) to Guspaz
Let's make Canchat better!!!
Member
join:2007-12-07

Re: [Rant] Holy Hard Drive rip off

Can you post some photos of your case & setup?

TIA

Guspaz
MVM
join:2001-11-05
Montreal, QC
1 edit

I might post photos after I get the third hotswap bay in.

The case I have was discontinued a while ago:

»www.newegg.ca/Product/Pr ··· 11112191

It was specifically purchased because it was aluminum (being a lian-li), and because it had 9x 5.25" drive bays with no obstructions between them; it fits the 3x height hotswap bays without modification:

»www.newegg.ca/Product/Pr ··· 17994028

The case isn't the greatest in the world, but it's decent enough, and the 5.25" bays are perfect. Saving weight on the case ended up not really mattering much once you load the thing up with a ton of heavy hard disks and hotswap bays.

The hotswap bays have been working pretty decently for me, although they're not cheap. They're the highest density you can get: 5x3.5" drives in 3x5.25" of space. Three of them let me fit 15 drives in the case, hotswappable to boot.

Powering it is a 750w single-rail Corsair TX750. It's a v1 of this:

»www.corsair.com/power-su ··· ply.html

If it had been available at the time, I would have bought the modular edition to make life easier.

The board is a SuperMicro MBD-X7SLM-L-O, which I purchased because it was cheap, had at least two slots that were physically PCIe 8x or larger (for the RAID controllers), and had two gigabit LAN ports. In hindsight, trying to save money here was a mistake, and I should have bought a higher-end board that supports more than 4GB of RAM. Originally I only bought 2GB for it because that was the listed maximum, but the board turned out to support 4GB after all (showing up as 3.5GB even under 64-bit, but better than 2):

»www.newegg.ca/Product/Pr ··· 13182168

The CPU in the thing is a Celeron E1500:

»www.newegg.ca/Product/Pr ··· 19116075

Again, in hindsight, shouldn't have tried to save money here. It's not a very fast CPU, and ZFS can be pretty CPU-intensive. Things are a bit better after I switched from OpenSolaris to Linux, though.

The RAID cards are not identical matches, since they were bought years apart. The first RAID card I put in the thing was the SuperMicro AOC-USAS-8li:

»www.supermicro.com/produ ··· -L8i.cfm

It's not actually a PCIe card; it's designed for a proprietary SuperMicro slot, but the interface is still PCIe. It works in a normal PCIe slot, except the card's components are on the wrong side, which means the rear case bracket doesn't line up. In other words, it works fine, but it's only being held in place by the tension of the PCIe slot. Somewhat risky, but it cost half as much as the LSI card at the time, so that was something like a $150 savings. Again, probably shouldn't have tried to save money there... Anyhow, it's an 8-drive SATA or SAS controller, has a PCIe 8x interface so that's not a bottleneck, supports SATA2, and can be loaded with a special (but official) firmware that makes it act like a dumb controller, exposing the drives directly to the OS. And, most importantly, it worked with Solaris, which was a requirement for me back then.

The second card, I decided to "do it right" and buy a real LSI card, so a few months ago I bought an LSI 9211-8i. Somewhat similar card, but it's a proper PCI-e card (so it has a rear bracket holding it in), supports SATA3, and has a different chipset. But it also has the special firmware available. The card is in a physical-8x-electrical-4x slot, but PCIe 4x is enough for my purposes here:

»www.newegg.ca/Product/Pr ··· 16118112

The downside of all these cards is that they require special connector cables. You'll notice both cards have only two connectors on the board, but claim support for 8 drives. That's because you need SFF-8087 breakout cables for SATA drives (each connector fans out to four drives), and at $25 a pop they're not cheap:

»www.newegg.ca/Product/Pr ··· 16116097

They're good cables, though. It's just tricky: they have to be forward breakout cables, but forward and reverse cables look identical. And they lock in, which is nice. For my first card I bought SuperMicro-brand cables, and they're super stiff and annoying to work with; the 3ware cables I got later are much nicer. I'm almost tempted to buy another two of them to replace the SuperMicro ones, but that'd be another $50 plus shipping, and the SuperMicro ones still work OK...

What else... Well, due to my original cost-saving efforts (which, as I've mentioned, I've come to regret to some extent), the original server with 5x2TB drives (10TB) cost me about $1500 all-in. The second expansion (which brought the server from 5 drive bays up to 15 and populated half of the new bays, giving me 20TB total) cost me about $1300. It would have been much cheaper if not for the Thailand flood (which raised my costs by maybe $200-250) and the fact that I bought the third hotswap bay before it was needed (another $180ish). The third expansion, which will populate the remaining drive bays, requires only the drives, since I have everything else needed. It's impossible to say how much that will cost given HDD pricing, but if drive prices come back down to normal and I buy more 3TB drives to complete the set I have, it would cost me $735 to bring the server from the current 20TB to 35TB.
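For what it's worth, the arithmetic behind those figures (just the numbers quoted above, sketched out):

```shell
# Per-drive price implied by the $735 third expansion (5 more 3TB drives)
echo $(( 735 / 5 ))          # 147 dollars per 3TB drive
# Raw capacity after the third expansion: current 20TB plus 5 x 3TB
echo $(( 20 + 5 * 3 ))       # 35 TB
```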

But the current 20TB capacity will last me quite a while. The first 10TB lasted me 2 years, the second 10TB should last me at least a year, year and a half...

In terms of the software environment on the server, I started out with OpenSolaris, and I hated it. But it was the only show in town at that time for a recent and stable ZFS build.

Very recently, I replaced OpenSolaris with Ubuntu 11.10, because there is now kernel-level ZFS support in Linux via the "zfsonlinux" project, which is very actively developed and reasonably stable at this point. And because it's a fully integrated kernel module (the licensing issues don't apply when the non-GPL ZFS code is distributed in source form and the package uses DKMS to compile and install it on the user's system), performance is actually better than it was under Solaris. Installing it was dead simple: add the PPA repo, then it's an "apt-get install ubuntu-zfs" kind of thing:

»zfsonlinux.org/
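For reference, the whole install really was just a couple of commands. This is a sketch from memory; the PPA name (zfs-native/stable) is my assumption, so check the zfsonlinux site for the current instructions:

```shell
# Assumed PPA name for the zfsonlinux packages on Ubuntu 11.10
sudo add-apt-repository ppa:zfs-native/stable
sudo apt-get update
# ubuntu-zfs pulls in the DKMS packages that build the kernel module
sudo apt-get install ubuntu-zfs
```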

The primary goal when building the original server was expandability. I was only putting in 5 drives to start (in a raidz array, which is like RAID5, giving me 4 drives of usable capacity and 1 for parity), but I wanted to be able to expand in the future. I specifically picked a case and motherboard that would let me add the second RAID card and two extra hotswap bays to triple the number of drives, and I also made sure that I could expand capacity even after filling the server: with ZFS, you can replace drives with larger-capacity ones, and once all five drives in an array have been replaced, the array grows to the higher capacity. So if in 5 years I want to replace 5x2TB drives with, say, 5x6TB drives or whatever is available, I can do that. It will be a very slow process, because you have to replace a drive and rebuild the array five times over, but it can be done.
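That drive-by-drive expansion looks roughly like this with the standard zpool commands (the pool name "tank" and the device name are placeholders, and on older ZFS builds you may need an export/import instead of the autoexpand property):

```shell
# Let the pool grow once every device in the vdev has been upsized
zpool set autoexpand=on tank

# For each bay: pull the old 2TB drive, insert the larger one, then
# tell ZFS to rebuild onto the replacement in the same slot
zpool replace tank c0t1d0
zpool status tank     # wait for the resilver to finish before the next swap

# After all five drives are replaced, the extra capacity shows up
zpool list tank
```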

I guess that's about everything there is to know about my server, let me know if you have questions.

Thane_Bitter
Inquire within
Premium Member
join:2005-01-20

So you are using a RAID card as a dumb disk controller, and then using the OS to do the RAID (RAID-Z), since RAID-Z offers better functionality than the more traditional RAID types. My apologies if I have not translated that correctly. If so, why not just use a less expensive SAS/SATA controller (something non-RAID)?

Regarding the SM 'flipped' card, it seems that one can add some plastic spacers as standoffs to relocate the bracket so it lines up with the expansion slot. I have not done it myself; I just recall a post about doing this somewhere on the net while trying to figure out what the hell a SuperMicro UIO slot was. I think they used small nylon washers and slightly longer screws.

Guspaz
MVM
join:2001-11-05
Montreal, QC

said by Thane_Bitter:

So you are using a RAID card as a dumb disk controller, and then using the OS to do the RAID (RAID-Z), since RAID-Z offers better functionality than the more traditional RAID types. My apologies if I have not translated that correctly. If so, why not just use a less expensive SAS/SATA controller (something non-RAID)?

That's exactly the idea. raidz has the advantage of integrating with ZFS's per-block checksums, so it can tell when a block is corrupt and recover it from parity.
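A toy illustration of the detection half of that (not ZFS itself, just ordinary shell tools): with a checksum stored per block, you can tell which block went bad, which is what lets a parity rebuild target the right one instead of just knowing the stripe is inconsistent:

```shell
# Store a block and its checksum, as if at write time
printf 'CCCC' > block.data
sha256sum block.data | cut -d' ' -f1 > block.sum

# Simulate silent corruption of the block on disk
printf 'CxCC' > block.data

# At read time, the checksum mismatch pinpoints the bad block
if [ "$(sha256sum block.data | cut -d' ' -f1)" != "$(cat block.sum)" ]; then
    echo "block corrupt: reconstruct it from the other drives plus parity"
fi
```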

In terms of why not use a less expensive controller, well, find one :P

There are no 8-port non-RAID controllers, so that narrows it down to RAID controllers. Of those, the following requirements had to be met, at the time:

1) Must support HBA (non-raid mode)
2) Must work with OpenSolaris
3) Must have a PCIe 8x interface (4x was acceptable)
4) Must support 8 drives per card (required to support 15 drives on 2 cards)

Finding that is hard. NewEgg sells precisely one cheaper card that meets at least requirements #3 and #4, the HighPoint RocketRAID 2680 SGL. However, information on points #1 and #2 is scarce. From their website, it sounds like "Multiple Logical Drive" *might* satisfy requirement #1, but that's not certain, and their site does not list OpenSolaris as supported. It also doesn't cost much less than the SuperMicro card originally cost me ($150, I think). So for the first card it definitely wasn't an option, and even for the second card it wasn't really clear whether it would work; I wasn't even sure what OS I would be using when I bought it. Instead I tried to match the first card as closely as possible by purchasing the LSI card, which has a much newer chipset than the SM card but is from the same family.

said by Thane_Bitter:

Regarding the SM 'flipped' card, it seems that one can add some plastic spacers as standoffs to relocate the bracket so it lines up with the expansion slot. I have not done it myself; I just recall a post about doing this somewhere on the net while trying to figure out what the hell a SuperMicro UIO slot was. I think they used small nylon washers and slightly longer screws.

You can, but I don't have any such spacers, I don't know where to get any, and I'm not sure I even still have the original bracket anymore.

elwoodblues
Elwood Blues
Premium Member
join:2006-08-30
Somewhere in

What did it all cost you when all was said and done, Guspaz?

Thane_Bitter to Guspaz
Inquire within
Premium Member
join:2005-01-20

Ah, I see.
Yes, I find the RAID card/supported drive thing a nightmare to decipher.
I thought I would build a Windows server box; however, the OS is costly, and a stable RAID setup (like RAID 6) requires a good controller and those pricey non-firmware-borked drives. This ZFS route, via something like FreeNAS, looks promising; however, it means I can't consolidate some very old hardware.
Thanks for the details on your system, as well as the explanations of your part selections.

HiVolt to vue666
Premium Member
join:2000-12-28
Toronto, ON
1 recommendation

said by vue666:

Can you post some photos of your case & setup?

TIA

This is my server setup.

C2Q Q6600 2.4GHz (stock)
4GB DDR2-800 RAM
8x1TB Seagate RAID5
Promise EX8350 hardware RAID controller
2x2TB Seagate RAID1
2x150GB VelociRaptor RAID1 (Boot/OS)
Generic trayless hotswap bays

I was planning to upgrade the 8 drives to 2TB ones this summer, but I kept putting it off and then got burned by the price increase...

Gone
Premium Member
join:2011-01-24
Fort Erie, ON

I used to run something similar in the basement, but I retired it all when I realized that the thing was more often than not sitting idle, sucking power for nothing. Instead, I bought an external hard drive to back up my desktop, moved two of the hard drives into a cheap NAS enclosure to act as an FTP server for files from the shop, moved two more into my desktop, gave a bunch of other drives away to family, and used the CPU and RAM to put together a system for my sister's birthday a few months back. No regrets at all. It was fun having something like that when I did IT work, particularly when I had it running a bunch of different OSes under ESXi, but now that I'm out of that line of work I don't miss it at all.

urbanriot
Premium Member
join:2004-10-18
Canada

I admittedly haven't upgraded for a while...

My first RAID5 set is 6 x 1TB WD Caviar Blacks (WD1002FBYS0), which still allowed adjusting TLER, plus 4 x 3TB WD Caviar Greens that I picked up a few months before the shortage at around $139 apiece.

I have around 12TB of usable space on my home server, which is terribly convenient, as I rarely have to delete anything.

I had 20TB, but I sold my 2TB drives for a ridiculous amount of money when the shortage hit.