I might post photos after I get the third hotswap bay in.
The case I have was discontinued a while ago:
»
www.newegg.ca/Product/Pr ··· 11112191

It was specifically purchased because it was aluminum (being a Lian-Li), and because it had 9x 5.25" drive bays with no obstructions between them; it fits the 3x-height hotswap bays without modification:
»
www.newegg.ca/Product/Pr ··· 17994028

The case isn't the greatest in the world, but it's decent enough, and the 5.25" bays are perfect. Saving weight on the case ended up not really mattering much once you load the thing up with tons of heavy hard disks and hotswap bays.
The hotswap bays have been working pretty decently for me, although they're not cheap. They're the highest density you can get, 5x3.5" in 3x5.25" space. Three of them allow me to get 15 drives in the case, hotswappable to boot.
Powering it is a 750W single-rail Corsair TX750. It's a v1 of this:
»
www.corsair.com/power-su ··· ply.html

If it had been available at the time, I would have bought the modular edition to make life easier.
The board is a SuperMicro MBD-X7SLM-L-O, which was purchased because it was cheap, had at least two slots that were physically PCI-e 8x or higher (for the RAID controllers), and had two gigabit LAN ports. In hindsight, trying to save money here was a mistake, and I should have bought a higher-end board that supports more than 4GB of RAM. Originally I only bought 2GB for it because that was the listed max, but the board does support 4GB in the end (showing up as 3.5GB even in 64-bit, but better than 2):
»
www.newegg.ca/Product/Pr ··· 13182168

The CPU in the thing is a Celeron E1500:
»
www.newegg.ca/Product/Pr ··· 19116075

Again, in hindsight, I shouldn't have tried to save money here. It's not a very fast CPU, and ZFS can be pretty CPU-intensive. Things got a bit better after I switched from OpenSolaris to Linux, though.
The RAID cards are not identical matches, since they were bought years apart. The first RAID card I put in the thing was the SuperMicro AOC-USAS-8li:
»
www.supermicro.com/produ ··· -L8i.cfm

It's not actually a PCIe card; it's designed for a proprietary SuperMicro slot, but the interface is still PCIe. It works in a normal PCIe slot, except the card's components are on the wrong side, which means the rear case bracket doesn't line up. In other words, it works fine, but it's only held in place by the tension of the PCIe slot. Somewhat risky, but it cost half as much as the LSI card at the time, so that was something like a $150 savings. Again, probably shouldn't have tried to save money there... Anyhow, it's an 8-drive SATA/SAS controller, has a PCIe 8x interface so that's not a bottleneck, supports SATA2, and can be loaded with a special (but official) firmware that makes it act like a dumb controller, exposing the drives directly to the OS. And, most importantly, it worked with Solaris, which was a requirement for me back then.
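For what it's worth, flashing that "dumb controller" (IT, initiator-target) firmware onto the newer SAS2008-based cards is done with LSI's sas2flash utility from a DOS or EFI boot disk. This is a rough sketch from memory, not a recipe: the firmware filenames are examples that vary by card and firmware release, and the SAS address is a placeholder, so check LSI's documentation for your exact card before erasing anything (the older SAS1068E-based cards use a different tool and firmware entirely):

```shell
sas2flash -listall                          # confirm the controller is visible
sas2flash -o -e 6                           # erase the existing (IR) firmware -- point of no return
sas2flash -o -f 2118it.bin -b mptsas2.rom   # flash IT firmware plus boot ROM (example filenames)
sas2flash -o -sasadd 500605bxxxxxxxxx       # restore the SAS address printed on the card's sticker
```

If you don't need to boot from the card, you can skip the boot ROM (`-b`) and save a few seconds at POST.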
The second card, I decided to "do it right" and buy a real LSI card, so a few months ago I bought an LSI 9211-8i. Somewhat similar card, but it's a proper PCI-e card (so it has a rear bracket holding it in), supports SATA3, and has a different chipset. But it also has the special firmware available. The card is in a physical-8x-electrical-4x slot, but PCIe 4x is enough for my purposes here:
»
www.newegg.ca/Product/Pr ··· 16118112

The downside of all these cards is that they require a special connector cable. You'll notice both cards have only two connectors on the board, but claim support for 8 drives. That's because you need SFF-8087 breakout cables for SATA drives (each one fans out to four drives), and at $25 a pop they're not cheap:
»
www.newegg.ca/Product/Pr ··· 16116097

They're good cables, though. It's just tricky: they have to be forward cables, but forward and reverse cables look identical. They also lock in, which is nice. For my first card I bought SuperMicro-brand cables, and they're super stiff and annoying to work with; the 3ware cables I got later are much nicer. I'm almost tempted to buy another two of them to replace the SuperMicro ones, but that'd be another $50+shipping, and the SuperMicro ones still work OK...
What else... Well, due to my original cost-saving efforts (which, as I've mentioned, I've come to regret to some extent), the original server with 5x2TB drives (10TB) cost me about $1500 all-in. The second expansion (which brought the server up from 5 drive bays to 15, and populated half of the new bays, giving me 20TB total) cost me about $1300. It would have been much cheaper if not for the Thailand flood (which raised my costs by maybe $200-250), and the fact that I bought the third hotswap bay even though it wasn't yet needed (another $180ish). The third expansion, which will populate the remaining drive bays, requires only the drives, since I have everything else needed. Impossible to say how much that will cost given HDD pricing, but if drive prices come back down to normal and I buy more 3TB drives to complete the set I have, it would cost me $735 to bring the server capacity from the current 20TB to what would then be 35TB.
But the current 20TB capacity will last me quite a while. The first 10TB lasted me 2 years, the second 10TB should last me at least a year, year and a half...
In terms of the software environment on the server, I started out with OpenSolaris, and I hated it. But it was the only game in town at that time for a recent, stable ZFS build.
Very recently, I replaced OpenSolaris with Ubuntu 11.10, because there is now kernel-level support for ZFS available in Linux via the "zfsonlinux" project, which is very actively developed and reasonably stable at this point. And because it's a fully integrated kernel module (licensing issues don't apply if you distribute the non-GPL ZFS code in source form and have the package use DKMS to compile/install it on user systems), performance is actually better than Solaris. Installing it was dead simple, just add the PPA repo then "apt-get install ubuntu-zfs" kind of thing:
»
zfsonlinux.org/

The primary goal when building the original server was expandability. I was only putting in 5 drives to start (in a raidz array, which is like RAID5, giving me 4 drives of usable capacity and 1 for parity), but I wanted to be able to expand in the future. I specifically picked a case and motherboard that would allow me to add the second RAID card and the extra two hotswap bays to triple the number of drives, and I also made sure that I could expand capacity even after filling the server; with ZFS, you can replace drives with larger-capacity ones, and once all five drives are replaced, the array picks up the higher capacity. So if in 5 years I want to replace 5x2TB drives with, say, 5x6TB drives or whatever is available, I can do that. It will be a very slow process, because you have to replace a drive and rebuild the array five times over, but it can be done.
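For reference, the whole software side boils down to a handful of commands. A rough sketch, assuming the zfs-native PPA (the name may have changed since), a pool called "tank", and device names sdb through sdf (yours will differ, and using /dev/disk/by-id/ names is safer in practice):

```shell
# Install ZFS on Ubuntu via the zfsonlinux PPA (DKMS compiles the module locally)
sudo add-apt-repository ppa:zfs-native/stable
sudo apt-get update
sudo apt-get install ubuntu-zfs

# Create a 5-drive raidz pool: 4 drives of usable capacity, 1 for parity
sudo zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Later, to grow the pool: replace each drive in turn, waiting for the
# resilver to finish before swapping the next one
sudo zpool set autoexpand=on tank
sudo zpool replace tank /dev/sdb /dev/sdX   # repeat for each of the five drives
sudo zpool status tank                      # watch resilver progress
```

With autoexpand=on, the extra space shows up on its own once the last drive in the vdev has been replaced and resilvered.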
I guess that's about everything there is to know about my server, let me know if you have questions.