Octavean
MVM
join:2001-03-31
New York, NY

3 edits

Octavean

MVM

SSD RAID 0 Array

I recently purchased a Kingston SSDNow V series SNV425-S2 64GB SSD from buy.com. I had a $40 off coupon code, which made it roughly $115 shipped; otherwise I would likely have bought it from Newegg. Its performance was very impressive IMO, even knowing full well that this 200 MB/s read / 110 MB/s write drive isn't the fastest horse out of the gate. It wasn't long before I wanted more space on the OS drive, though, and thought I should have bought the 128GB version. So I did the next best thing, arguably, and bought another 64GB SSD (roughly $133 from Newegg) for a RAID 0 array.

System Configuration:

Intel Core i7 920
ASUS P6T Deluxe (v1)
MSI GTX260
6GB DDR3 1600 Tri-Channel G-Skill
Kingston SSDNow V SNV425-S2, 2x 64GB in RAID 0 (128GB array on ICH10R with the latest Intel Rapid Storage Technology 9.6.0.1014 driver)
WD Black 1TB HDD
Seagate 750GB
Antec 300
Corsair CMPSU-750TX 750W PSU
Windows 7 Home Premium 64bit

Now, as I understand it, there will be no TRIM support in such a RAID configuration, and the drives will need to depend on GC unless there is some way to manually TRIM them. I've come across this, but if anyone has other ideas to maintain the drives I'd like to hear them:

»www.ocztechnologyforum.c ··· eed-this.

Lastly, setting up the RAID array was quick and easy, but I started to wonder whether it would have been beneficial to use a smaller stripe size. In this case I used 128k, which was the default. Random reads (512k) seem to have taken roughly a 2x hit (about half the single-drive score), and I wondered whether that was due to the stripe size.

Any thoughts would be great.

I don't mind configuring a new array if need be, since I haven't activated Windows 7 yet.

Thanks in advance.

Maranello
MVM
join:2000-12-08
Butler, PA

1 edit

Maranello

MVM

Very nice. As we discussed, Intel is very close to a RAID driver that would support TRIM. Being an AMD user, I'd just be happy with a driver that supported TRIM at all.

From what I've read, go with 64k or 128k for the stripe size. Everyone seems to like something different, but most say 128k is just fine.

EDIT: Looks like Intel did it and the latest driver does have TRIM support in RAID...

»www.ocztechnologyforum.c ··· TEL-RAID

Octavean
MVM
join:2001-03-31
New York, NY

Octavean

MVM

I was reading that thread too and I think they eventually concluded that TRIM wasn't being applied to the SSD drives that were in the RAID array. In my case the OS has TRIM enabled but that doesn't necessarily mean that the drives in the array are benefiting from it.
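
(Side note: one quick way to see that OS-side setting is Windows' own fsutil. Below is just a sketch, assuming Python 3 happens to be on the box to wrap the call; it only shows whether Windows is issuing TRIM at all, and says nothing about whether the RAID driver passes TRIM through to the drives in the array.)

```python
# Sketch: query Windows' DisableDeleteNotify flag via fsutil.
# 0 = Windows issues TRIM, 1 = it does not. This says nothing about
# whether the Intel RAID driver forwards TRIM to the arrayed SSDs.
# May need to be run from an elevated (administrator) prompt.
import subprocess

result = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. "DisableDeleteNotify = 0"
```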

I was messing with the Intel Rapid Storage Technology control panel and enabled "write-back cache", which seemed to boost the lower random read (512k) scores I was seeing.

So far I haven't been able to find much about stripe size with respect to SSD RAID arrays, but if anything I can experiment a bit: just back up the system to the old WHS and try another stripe size. I think you're right, though, and most people seem to recommend 128k or 64k.

Thanks for the input man!

I did find this though (they used 256k which wasn't an option for me):

»www.tomshardware.com/rev ··· 5-8.html


Matt3
All noise, no signal.
Premium Member
join:2003-07-20
Jamestown, NC

Matt3

Premium Member

Stripe size is all about how the files are written to disk. If you use a 128K stripe size, any file under 128K won't be striped across both disks. If you use a 64K stripe size, that same 128K file will be split across both disks (64K to each).

In other words, if you have a lot of files smaller than your stripe size, you won't see a benefit from RAID 0, which is typically why RAID 0 doesn't improve desktop performance. RAID 0 is great if you move lots of very large files around, though.
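
(Toy illustration of that mapping, nothing more; the function and numbers below are made up for demonstration and ignore real controller behavior.)

```python
# Toy model of RAID 0 striping: which disk each stripe-sized chunk of a
# file lands on. Illustrative only; real controllers add caching, etc.

def stripe_layout(file_kb, stripe_kb, num_disks=2):
    """Disk index for each stripe-sized chunk of the file, in write order."""
    chunks = -(-file_kb // stripe_kb)  # ceiling division
    return [i % num_disks for i in range(chunks)]

print(stripe_layout(128, 128))  # [0]          -> fits in one stripe, one disk only
print(stripe_layout(128, 64))   # [0, 1]       -> split across both disks
print(stripe_layout(512, 128))  # [0, 1, 0, 1] -> large file alternates disks
```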

Octavean
MVM
join:2001-03-31
New York, NY

Octavean

MVM

[Screenshot: Intel Rapid Storage Technology documentation recommending a 16KB stripe size for SSDs in RAID 0]
said by Matt3:

Stripe size is all about how the files are written to disk. If you use a 128K stripe size, any file under 128K won't be striped across both disks. If you use a 64K stripe size, that same 128K file will be split across both disks (64K to each).

In other words, if you have a lot of files smaller than your stripe size, you won't see a benefit from RAID 0, which is typically why RAID 0 doesn't improve desktop performance. RAID 0 is great if you move lots of very large files around, though.
Right, that was my understanding. However, I was concerned mainly with anything specific to SSD RAID that may differ slightly from conventional HDDs in an array. For example, does write amplification come into play when thinking about stripe size? I have no idea.

Anyway, I was thumbing through the Intel Rapid Storage Technology 9.6.0.1014 contents and came across the above.

Matt3
All noise, no signal.
Premium Member
join:2003-07-20
Jamestown, NC

Matt3

Premium Member

I wonder why they recommend 16KB as a stripe size when Microsoft states that 4KB reads/writes are the most important for desktop performance. Either way, the smaller stripe size makes sense, since SSDs are so much faster than mechanical drives at reading/writing small files.

As for write amplification, I'm sure that comes into play, but a RAID 0 array may actually lessen the effects of write amplification and extend the life of the drives. Since not every file can be striped, having two SSDs means each shares the load of writes.

Keep in mind though, no RAID controller/driver combo supports TRIM yet.

Here's a good article at Anand about two Intel X25V drives in RAID-0: »www.anandtech.com/show/3 ··· -for-250

Octavean
MVM
join:2001-03-31
New York, NY

1 edit

Octavean

MVM

One of the reasons I went with the Kingston SNV425-S2 64GB SSD was that they are reasonably priced drives with reasonably good performance for the money. As I stated before, the first one only cost me roughly $115. I didn't necessarily plan to buy another initially, but I think I got an OK price for the second one too.

The other reason I wanted to go with the Kingston SSD was in part due to this:
said by Allyn_Malventano:

TRIM

I don't have any pretty charts or graphs to explain this next part, but I will share an observation I made during my fragmentation testing. When running my fragmentation tool, I observe IOPS drop as the drive becomes more and more overloaded with the task of tracking the random writes taking place. Here the JMicron controller behaved like all other drives, but where it differed is what happened after the test was stopped. While most other drives will stick at the lower IOPS value until either sequentially written, TRIMmed, or Secure Erased, the JMicron controller would take the soonest available idle time to quickly and aggressively perform internal garbage collection. I could stop my tool, give the drive a minute or so to catch its breath. Upon restarting the tool, this drive would start right back up at its pre-fragmented IOPS value.

Because of this super-fast IOPS-restoring action, along with the negligible drop in sequential transfer speeds from a 'clean' to a 'dirty' drive, it was impossible to evaluate whether this drive properly implemented ATA TRIM. Don't take this as a bad thing, as any drive that can bring itself back to full speed without TRIM is fine by me, even if that 'full speed performance' is not the greatest.

This type of self-healing (i.e. without needing TRIM) is great for those wanting to run a few SSDs behind a RAID, since no RAID implementation is currently capable of passing TRIM from the OS to the arrayed SSDs. Better yet, considering this drive is tailored to the budget crowd, who may very well still be running XP or Vista, it's good to have a few choices that don't require TRIM to maintain decent levels of performance.
»www.pcper.com/article.ph ··· t&pid=10

That was the 128GB version of the same drive as I understand it (same JMicron controller and so on).

DarkLogix
Texan and Proud
Premium Member
join:2008-10-23
Baytown, TX

DarkLogix

Premium Member

Odd, I just looked at your WEI post from after the RAID 0 setup: 7.3 on the HD score.

I guess you prove that WEI isn't all it's cracked up to be.

You're getting faster speeds according to HD Tune, but I'm getting a WEI HD score of 7.9.

Matt3
All noise, no signal.
Premium Member
join:2003-07-20
Jamestown, NC

Matt3

Premium Member

said by DarkLogix:

Odd, I just looked at your WEI post from after the RAID 0 setup: 7.3 on the HD score.

I guess you prove that WEI isn't all it's cracked up to be.

You're getting faster speeds according to HD Tune, but I'm getting a WEI HD score of 7.9.
WEI is heavily dependent on 4k reads/writes. If both of you ran CrystalDiskMark, I bet your 4k read/write speeds would be superior. This is from my SSD (it scores a 7.7), whereas my Velociraptor scores less than 1 MB/s on the 4k test.

Octavean
MVM
join:2001-03-31
New York, NY

2 edits

Octavean to DarkLogix

MVM

to DarkLogix
Well, there is more to an SSD than sequential reads, and it's my guess that WEI takes more into account than sequential reads.

***edit***

I thought the 4k CrystalDiskMark scores were random for some reason; not sure why. Anyway, yeah, I have already submitted my single SSD and RAID 0 CrystalDiskMark numbers above. For some reason I was taking a hit (a loss going from 1x SSD to 2x RAID 0) with the 512K reads, by a factor of about half, which is in part why I started the thread. I was hoping someone had some insight on this, and that it could be addressed with a different stripe size. Not the 4k, mind you, but rather the 512k performance hit. But yeah, the 4k reads and writes are superior on the Crucial RealSSD C300, no doubt.

DarkLogix
Texan and Proud
Premium Member
join:2008-10-23
Baytown, TX

DarkLogix

Premium Member

ah I see

Octavean
MVM
join:2001-03-31
New York, NY

Octavean

MVM

Yeah, I wouldn't want to suggest that the C300 is somehow lacking; it shines in more than one area for sure and is quite an SSD in its own right. It's a great all-around product by all accounts I've seen. Two Kingston SSDNow V SNV425-S2 64GB SSDs in RAID 0 will certainly fall short in a number of areas, if not most, comparatively speaking.

Anyway, I was searching around for some explanation as to why Intel specified 16k as the recommended/default stripe size for SSDs in RAID 0, and I stumbled across this:

»www.xtremesystems.org/fo ··· =4344222

This was an interesting read IMO, although the hardware used was Intel end-to-end I think (three Intel X25-V drives in RAID 0).

DarkLogix
Texan and Proud
Premium Member
join:2008-10-23
Baytown, TX

DarkLogix

Premium Member

With SSDs there's a lot to think about when picking a stripe size:

1. How wide is each NAND channel on the SSD?
2. How many SSDs?
3. The block size of the formatting.

With VMware, when using just a normal RAID (0 or 5), you want the stripe size to be x where x = b/n (b = block size, n = number of drives). Now add to this the internal striping of the NAND channels and it gets trickier.

So now x = b/(n*c) (where c is the number of NAND channels), but unlike the upper-level stripe size, you can't change the NAND channel stripe width.

So let's say it takes 16k to span all the NAND channels. 16k would be the stripe size you want, because then each stripe would span all NAND channels. But if you have 2 SSDs in RAID 0, then having a block size of 32k would be hard to do, since Win 7 wants a block size of 4k.
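
(Rough sketch of that arithmetic, with made-up example numbers; the function name is just for illustration, and as the next reply points out, the result gets questionable once NAND channels are factored in.)

```python
# Sketch of the rule of thumb above: stripe = block / (drives * channels).
# Example values are hypothetical and for illustration only.

def suggested_stripe_bytes(block_bytes, num_drives, nand_channels=1):
    return block_bytes / (num_drives * nand_channels)

# Plain RAID version (x = b/n): 64 KB blocks across 2 drives
print(suggested_stripe_bytes(64 * 1024, 2))    # 32768.0 -> 32 KB stripes

# With NAND channels (x = b/(n*c)): 4 KB Win 7 clusters, 2 SSDs, 5 channels
print(suggested_stripe_bytes(4 * 1024, 2, 5))  # 409.6 bytes
```
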
Chrno
join:2003-12-11

1 recommendation

Chrno

Member

I very much doubt you can quantify the stripe size of an array with an equation. For SSDs, the smallest unit you can write is a page, while the smallest unit you can erase is a block. In that sense, the equation you presented already falls apart. Furthermore, it doesn't make sense even when used with traditional hard drives: hard drives typically have a block size of 512 bytes, and if you plug that into your equation, it falls apart.

Keizer
I'M Your Huckleberry
MVM
join:2003-01-20

Keizer to Octavean

MVM

to Octavean

Impressive results Octavean! I may follow your efforts myself soon.

DarkLogix
Texan and Proud
Premium Member
join:2008-10-23
Baytown, TX

DarkLogix to Chrno

Premium Member

to Chrno
It was a simplified version of a bigger equation, and by block I mean a partition-level unit.

The idea is that in a perfect setup you want each set of stripes to make one pass across all drives, and each set of stripes to make up one partition-level unit, so that each block spans all drives in the RAID and benefits from the speed of all of them.
Chrno
join:2003-12-11

Chrno

Member

OK, if that's the case, and if Octavean is using Windows, which I'm guessing he most likely is, we are dealing with a cluster size of 4K. For your equation to work (the larger or the smaller/simplified version) you need a stripe size of 2K for a 2-drive RAID 0 array. The 80GB Intel X25-M has 5-channel NAND, so let's just say the SSDNow has 5 channels each. Stick that in... and now we have a stripe size of about 409 bytes (come again)?

It's not that I don't have a concept of how RAID works, but in my experience with RAID setups it all depends on the type of data you are working with and the type of controller you are using. Cluster size, or in your terms block size, has little to do with it IMO.

DarkLogix
Texan and Proud
Premium Member
join:2008-10-23
Baytown, TX

DarkLogix

Premium Member

It's not likely useful at this point; the equation I was referencing was one used for VMware tuning.

I'm sure someone will come up with one that works with SSDs (and since you can't control the width of the NAND channels, this puts a fixed but unknown variable into the mix).

Keizer
I'M Your Huckleberry
MVM
join:2003-01-20

Keizer to Octavean

MVM

to Octavean
said by Octavean:

I've come across this, but if anyone has other ideas to maintain the drives I'd like to hear them:

»www.ocztechnologyforum.c ··· eed-this.

Have you tried this solution at all? I suppose you haven't had this RAID setup long enough to even see any decline in performance yet.

What about imaging the RAID array, and then just wiping the drive and restoring your image? Would this theoretically "clean" the drive? What about just wiping the free space?

I'm actually just starting to research SSD's so maybe I'm not understanding this TRIM technology correctly, and how to simulate it with RAID arrays.

Octavean
MVM
join:2001-03-31
New York, NY

Octavean

MVM

Well, you're quite right that I haven't been running the RAIDed SSDs for very long. I believe I set up the array on the 17th of this month, so we are talking less than two weeks. My informal plan was simply to establish a baseline by benchmarking the array early on, and then to reevaluate overall performance by running the same benchmarks again about every 30 days or so. Within that ~30-day period I would simply make general use of the drives. In other words, I wouldn't be stress testing them, just using the system normally (editing photos, editing video, playing games and so on).
Octavean

Octavean

MVM

Just a little update, since it's been almost 30 days since the SSD RAID array was put into service.

I have noticed no drop in performance empirically, so it's as fast to me as day one thus far. I repeated the same benchmarks and there was no change with respect to HD Tune. The CrystalDiskMark benchmark, however, showed a drop in sequential reads.

I should point out that I have not stressed these SSDs very much (just normal use) and I haven't installed much on them (Office 2010 beta, Unreal Tournament 3, Firefox, Chrome, RealPlayer SP, HD Tune, CrystalDiskMark and so on). I've probably used less than ~30GB of the total ~119GB array at any one time, and it's currently at about ~25GB. So most of the available storage has never been used, and TRIM therefore could not have been of much use anyway (one would think).

I don't see why HD Tune sequential reads would remain unchanged but CrystalDiskMark sequential reads would take a hit. I probably should have tested with a more diverse set of benchmarks to begin with.

Matt3
All noise, no signal.
Premium Member
join:2003-07-20
Jamestown, NC

Matt3

Premium Member

Yo Oct, would you re-run Crystal with a 100MB test? That's how most sites run it.

BTW, you still play UT3? I'm MntlCase from the old UT clan here.

Octavean
MVM
join:2001-03-31
New York, NY

Octavean

MVM

said by Matt3:

Yo Oct, would you re-run Crystal with a 100MB test? That's how most sites run it.

BTW, you still play UT3? I'm MntlCase from the old UT clan here.
CrystalDiskMark test with 100MB as requested.

Yeah I still play UT3 when I get the chance. It’s probably still my favorite game.
D4rr3n
join:2007-03-09
East Syracuse, NY

D4rr3n

Member

Sorry to do this to you guys... but here's an SSD server I built with Vertex SSDs and an LSI 9260-8i:

»img69.imageshack.us/gal. ··· xlsi.jpg

Yes, 2.4GB/s read and 2.8GB/s is as fast as it sounds; it went over 3GB/s on the weird-looking HD Tune one. Just one of my wacky creations... many overclockers here?

koitsu
MVM
join:2002-07-16
Mountain View, CA
Humax BGW320-500

koitsu

MVM

said by D4rr3n:

Sorry to do this to you guys... but here's an SSD server I built with Vertex SSDs and an LSI 9260-8i:
Yes, because most folks are going to drop US$600 on a hardware RAID SAS controller just to drive some SSDs for consumer/residential use. *sigh* :P
D4rr3n
join:2007-03-09
East Syracuse, NY

D4rr3n

Member

I actually spent about $900 total for it, and I've made a nice profit selling similar high-end PC items and small electronics. Amazingly, $600 isn't a lot of money to some people, though it may come as a shock to you.
Chrno
join:2003-12-11

Chrno

Member

I think you totally missed his point. Most people around here don't move to SSDs because they want big numbers in the sequential read/write department. They use SSDs mainly because they offer access times that traditional hard drives are incapable of reaching.

And if someone around here is going to drop big money on a SAS controller, he/she will most likely drop it to drive a storage array.

Octavean
MVM
join:2001-03-31
New York, NY

Octavean to D4rr3n

MVM

to D4rr3n
said by D4rr3n:

Sorry to do this to you guys... but here's an SSD server I built with Vertex SSDs and an LSI 9260-8i:

»img69.imageshack.us/gal. ··· xlsi.jpg

Yes, 2.4GB/s read and 2.8GB/s is as fast as it sounds; it went over 3GB/s on the weird-looking HD Tune one. Just one of my wacky creations... many overclockers here?
I appreciate your post and I hope you contribute more. However, I think you may have misinterpreted the purpose of the thread. I posted my benchmarks of a basic SSD RAID 0 array to establish a baseline, to help better understand the performance degradation over time due to a lack of TRIM support (if applicable), what techniques can be applied to combat this possible performance degradation (AKA "Tony Trim"), as well as the recommended stripe size for the RAID 0 array.

I most certainly wasn't bragging; I was simply asking for information while I experimented and researched the subject. Any information in that respect would be greatly appreciated.
D4rr3n
join:2007-03-09
East Syracuse, NY

D4rr3n

Member

I just happened to be bored one night and saw benches of something similar to what I had benched, so I decided to post it; I wasn't showing off. And I never had degradation with the Vertexes: with over 12 owned and sold, I ran them 24/7 in my main bench and never trimmed once... and my speed honestly improved over 6 months.

I'm not going to argue with OCZ and their people about TRIM and tweaks and needing a perfectly partitioned drive, but any time I looked into it I compared my benches and they were a bit high without degrading; maybe it was the 24/7 running. Anyway, thanks for trying to clear things up, and thanks to those who sent me invites. I'll be around, and I'm not trying to stir anything up.

Octavean
MVM
join:2001-03-31
New York, NY

Octavean

MVM

Well, that is helpful information.

There have been some indications that robust garbage collection on some makes/models will be enough to maintain an SSD's performance levels without TRIM. Not all models and generations may do this adequately, but some might. So I would count your input as a success story.

Did you have any input on stripe sizes, or any insight into why Intel made reference to a default stripe size of 16KB in the Intel Rapid Storage Technology documentation?