AsherN (Premium Member, join:2010-08-23, Thornhill, ON) to themagicone

Re: Need 10GB recommendations

said by themagicone:

If I did Raid 10, that would give me 12 usable drives or 36TB. That might work. We are probably going to do a 6 to 10 drive SSD array on top of this storage to provide quicker IO to the server.

See how much more the 4TB drives would cost. It may be worth it.
For the 6 to 10 SSDs, go RAID6 if possible, RAID5 if necessary. 7 drives in RAID5 is optimal.
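
For a rough sense of the capacity math in play here, a minimal Python sketch; the 24-bay chassis, the 3TB/4TB drive sizes, and the 3 x 8-drive RAID6 grouping come up in the thread, while hot spares and filesystem overhead are ignored:

```python
# Back-of-the-envelope usable capacity for a 24-bay array.
# Drive sizes are raw TB; hot spares and filesystem overhead are ignored.

def raid10_usable(drives, size_tb):
    return drives // 2 * size_tb                 # half the spindles hold mirror copies

def raid6_usable(drives, size_tb, groups=1):
    per_group = drives // groups
    return groups * (per_group - 2) * size_tb    # two parity drives per group

for size in (3, 4):
    print(f"{size} TB drives: RAID10 = {raid10_usable(24, size)} TB usable, "
          f"RAID6 (3 x 8-drive groups) = {raid6_usable(24, size, groups=3)} TB usable")
# 3 TB drives: RAID10 = 36 TB usable, RAID6 (3 x 8-drive groups) = 54 TB usable
# 4 TB drives: RAID10 = 48 TB usable, RAID6 (3 x 8-drive groups) = 72 TB usable
```
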
cramer (Premium Member, join:2007-04-10, Raleigh, NC) to AsherN

said by AsherN:

RAID5 + HS is NOT RAID6

I didn't say it was. I said RAID6 has replaced RAID5 because that's how people tended to use RAID5, i.e. with an extra spinning disk -- sure RAID5 won't touch the spare until there's a failure (real or operator triggered). Also, RAID6 didn't exist back then.

(It's also a matter of current tastes. RAID10 and RAID50 didn't exist years ago either; now almost everything supports them.)
themagicone (Member, join:2003-08-13, Osseo, MN)

The 4TB drives are a $1,700 premium, but that may still work out cheaper than the RAID 6 route. From what I've read on the I/O cost of RAID 6, it is 6 I/O cycles per write versus 2 on RAID 10. Currently checking to see whether that is within budget.
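
As a rough illustration of that write penalty, a small Python sketch; the 75 IOPS-per-drive figure is an assumed value for 7200 RPM nearline disks, not a number from the thread:

```python
# Effective random-write IOPS under the standard RAID write penalties:
# RAID10 = 2 back-end I/Os per host write, RAID5 = 4, RAID6 = 6.

PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def effective_write_iops(drives, iops_per_drive, level):
    return drives * iops_per_drive / PENALTY[level]

# Assumed ~75 IOPS per 7200 RPM nearline spindle, 24 drives.
for level in ("raid10", "raid5", "raid6"):
    print(f"{level}: ~{effective_write_iops(24, 75, level):.0f} write IOPS")
# raid10: ~900 write IOPS
# raid5:  ~450 write IOPS
# raid6:  ~300 write IOPS
```
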
LittleBill (Member, join:2013-05-24) to themagicone

I work for the LARGEST data storage array company in the world. Period.

RAID 5 is still very much an option, although the trend has swung toward RAID 6 due to rebuild concerns. It most certainly has not been removed.

That said, going to 4TB drives is going to kill IOPS performance, period.

You need to break the array up into different RAID groups to offset this.
AsherN (Premium Member, join:2010-08-23, Thornhill, ON)

With low-IOPS spindles like the OP is getting, more spindles per array is better.

I have large arrays with 2, 3 and 4 TB drives. The smallest are 16 drives; the largest is 60 x 4TB drives. All RAID 10. I consistently get aggregate IOPS of roughly IOPS per drive * number of drives.
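
A quick sketch of that rule of thumb, with an assumed per-drive IOPS figure rather than a measured one:

```python
# AsherN's rule of thumb: aggregate (read-heavy) IOPS scale with spindle count,
# since in RAID10 every spindle, data or mirror, can service reads.
# The 75 IOPS/drive figure is an assumed NL-SAS value, not a measurement.

def aggregate_read_iops(drives, iops_per_drive=75):
    return drives * iops_per_drive

print(aggregate_read_iops(16))   # 16-drive array -> ~1200 IOPS
print(aggregate_read_iops(60))   # 60-drive array -> ~4500 IOPS
```
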
LittleBill (Member, join:2013-05-24) to themagicone

lol, 60 drives? This is a joke, right? We have machines that hold anywhere from 500 to 2,500 drives.

I'm not trying to insult you, but even the richest clients cannot afford RAID 10 across the board on entire arrays.

And yes, there is a significant balancing act with RAID 6: more spindles are better, but there is a point where performance goes down the tubes, because the extra capacity means there is more of a chance the host will request data from that RAID group.
AsherN (Premium Member, join:2010-08-23, Thornhill, ON)

I stopped comparing d*** size when I finished kindergarten. We are discussing the OP's 24-drive array. At that size, RAID10 is both the more secure option and the better performer.
LittleBill (Member, join:2013-05-24)

There is no dick comparison; you're coming off like a know-it-all.

Considering you haven't even done an audit of the host data that would be going to the array, you have absolutely no idea what his performance would be.

We have seen RAID 5 beat (actually destroy) RAID 10 with specific data flows (we've even seen it with RAID 3). But what do I know; we do audits for weeks before recommending a solution. But hey, you read AnandTech.

When you want to come to the enterprise world, let me know. Stop telling people RAID 5 doesn't exist.

These are not our results, just a random grab off the internet; don't let your dick hit the floor.

»bytes.com/topic/sql-serv ··· aid-10-a

sk1939 (Premium Member, join:2010-10-23, Frederick, MD)

said by LittleBill:

We have seen RAID 5 beat (actually destroy) RAID 10 with specific data flows (we've even seen it with RAID 3). But what do I know; we do audits for weeks before recommending a solution. But hey, you read AnandTech.

When you want to come to the enterprise world, let me know. Stop telling people RAID 5 doesn't exist.

These are not our results, just a random grab off the internet; don't let your dick hit the floor.

»bytes.com/topic/sql-serv ··· aid-10-a

It all depends on the type of data and the needs of the users, but I would still not recommend RAID 5 over RAID 6 or RAID 1+0. The difference between RAID 1+0 and RAID 6 is not insignificant: RAID 6 incurs 4 more I/Os per write (6 versus 2 for RAID 1+0). That is ignoring the fact that during a rebuild, the processing power required to recalculate parity can cause significant performance degradation if you don't have a redundant array as a backup. On the other hand, RAID 1+0 has a 50% capacity overhead versus RAID 6.
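
A small sketch of the capacity-overhead side of that trade-off; the RAID6 group sizes are illustrative, not taken from the thread:

```python
# Capacity overhead (fraction of raw space lost to redundancy):
# RAID 1+0 is a fixed 50%, RAID6 loses 2 parity drives per group.

def overhead(level, group_size=0):
    if level == "raid10":
        return 0.5
    if level == "raid6":
        return 2 / group_size
    raise ValueError(level)

for n in (6, 8, 12, 16):
    print(f"RAID6, {n}-drive group: {overhead('raid6', n):.0%} overhead "
          f"(RAID 1+0: {overhead('raid10'):.0%})")
# 6-drive group: 33%, 8-drive: 25%, 12-drive: 17%, 16-drive: 12% (RAID 1+0: 50%)
```
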
AsherN (Premium Member, join:2010-08-23, Thornhill, ON)

said by sk1939:

It all depends on the type of data and the needs of the users, but I would still not recommend RAID 5 over RAID 6 or RAID 1+0. The difference between RAID 1+0 and RAID 6 is not insignificant: RAID 6 incurs 4 more I/Os per write (6 versus 2 for RAID 1+0). That is ignoring the fact that during a rebuild, the processing power required to recalculate parity can cause significant performance degradation if you don't have a redundant array as a backup. On the other hand, RAID 1+0 has a 50% capacity overhead versus RAID 6.

For spinning rust, RAID10's overhead may be worth it. With 24 drives, if you want optimal arrays, you'd go with three 8-drive RAID6 groups, losing 6 drives to parity. And with 4TB drives, 8 drives per group may be pushing it for rebuild time.

I had HDD failures in 2 small arrays recently: a 16 x 4TB NL-SAS array in RAID10 and a 5 x 300GB 15K SAS array in RAID5.

The RAID10 array took 5 hours to rebuild the drive. The RAID5 array took 3 days.

But apparently, I'm not supposed to know what I'm doing...
Shady Bimmer (Premium Member, join:2001-12-03)


said by AsherN:

I had HDD failures in 2 small arrays recently: a 16 x 4TB NL-SAS array in RAID10 and a 5 x 300GB 15K SAS array in RAID5.

The RAID10 array took 5 hours to rebuild the drive. The RAID5 array took 3 days.

But apparently, I'm not supposed to know what I'm doing...

I think the fact that it took 3 days to rebuild a 5 x 300GB 15K SAS RAID5 array speaks for itself here.

I can rebuild an 8 x 2TB 7200 RPM SATA (3Gb/s) RAID6 in just over four hours.

Edit: FWIW, many years ago, using a RHEL3 md-based 8 x 400GB PATA RAID5 array on a 1.2 GHz Pentium III server, I never had a single-drive rebuild take longer than six hours.
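
For context on these numbers, an idealized rebuild-time sketch; the sustained rebuild rates are assumed values, and real rebuilds are further slowed by concurrent host I/O:

```python
# Idealized single-drive rebuild time: replaced-drive capacity divided by the
# sustained rebuild rate. Rates below are assumptions; concurrent host I/O
# (like the busy RAID5 server mentioned in this thread) stretches these out badly.

def rebuild_hours(drive_tb, rate_mb_s):
    return drive_tb * 1_000_000 / rate_mb_s / 3600

print(f"{rebuild_hours(2, 130):.1f} h")   # 2 TB mirror copy at ~130 MB/s -> ~4.3 h
print(f"{rebuild_hours(4, 100):.1f} h")   # 4 TB at ~100 MB/s             -> ~11.1 h
```
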
nyrrule27 (Member, join:2007-12-06, Howell, NJ) to AsherN

I hate to hijack this thread, but I have some questions.

I have a QNAP 8-bay NAS with 8 x 2TB drives. When I got it, I set it up with RAID 5. I had one drive fail on me once; I just swapped it out and it rebuilt the array. I don't remember exactly, but I think it took less than a day. Now I obviously have data on it, so I can't exactly change it, but was that the wrong thing to do? It's not set up in a business; I have it in my house.
Shady Bimmer (Premium Member, join:2001-12-03)


said by nyrrule27:

Now I obviously have data on it, so I can't exactly change it, but was that the wrong thing to do? It's not set up in a business; I have it in my house.

Not sure which part you are questioning, but it sounds like you did exactly as intended: a drive failed; you removed the failed drive and replaced it with a working one; the array rebuilt.

Rebuild times depend upon many factors.
nyrrule27 (Member, join:2007-12-06, Howell, NJ)


I'm questioning using RAID 5 and not 6 or 10.
AsherN (Premium Member, join:2010-08-23, Thornhill, ON) to Shady Bimmer

The fastest I've seen a RAID5 array initialize (not rebuild, just initialize) was a 16-drive QNAP filled with 960GB M500 SSDs: 2.5 hours.
The RAID5 array that took 3 days was on a very busy server with a high rate of change; it spent more time servicing live I/Os than rebuilding.

The biggest issues with parity RAID are the write penalty, where every write operation requires multiple back-end I/Os, and the statistical possibility of a URE (a bit less so with RAID6). Because drives keep getting bigger while quality is not improving, we have reached the point where, statistically, you stand a good chance of encountering a URE during a rebuild. It becomes about the odds and the nature of the data. I have RAID6 arrays of 4TB drives, the smallest being 16 HDDs and the largest 32 HDDs. They are targets for DR-level backups. If I lose a drive, the array will attempt to rebuild; if it can't, I can recreate the data, since those backups are not for archival purposes.
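
A rough sketch of the URE odds being described; the 1-in-10^14 and 1-in-10^15 bit error rates are typical datasheet figures (assumed here), and the 8 x 4TB group size is illustrative:

```python
import math

# Chance of hitting at least one unrecoverable read error (URE) while reading
# the surviving drives during a rebuild. For RAID5 a URE at that point means
# data loss; RAID6's second parity is what covers this case.

def p_ure(data_read_tb, ber=1e-14):
    bits = data_read_tb * 8e12                    # TB read -> bits read
    return -math.expm1(bits * math.log1p(-ber))   # 1 - (1 - ber)**bits, computed stably

# Rebuilding one drive in an 8 x 4TB group reads the 7 survivors: ~28 TB.
print(f"{p_ure(28, 1e-14):.0%}")   # ~89% with a 1-in-1e14 BER drive
print(f"{p_ure(28, 1e-15):.0%}")   # ~20% with a 1-in-1e15 BER drive
```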

If you store fairly static data, with regular backups, you may be able to get away with parity RAID.