trparky
Premium Member
join:2000-05-24
Cleveland, OH

Do I need a SATA3 wire?

Can I use a wire rated for SATA1/2 for SATA3?

Camelot One
MVM
join:2001-11-21
Bloomington, IN

Yes, the cable specs have not changed.

koitsu
MVM
join:2002-07-16
Mountain View, CA
Humax BGW320-500

koitsu to trparky

Just an added backup for what Camelot One has said (since some people never trust one answer): correct, there is absolutely no difference in SATA cables. Meaning: you can run SATA150, SATA300, and SATA600 all across the same/existing cables.

MacGyver

join:2001-10-14
Vancouver, BC

MacGyver to trparky

Proof »www.maximumpc.com/articl ··· stigates

FizzyMyNizzy
Member
join:2004-05-29
New York, NY

FizzyMyNizzy to trparky

»www.youtube.com/watch?v= ··· mOk87fqY

koitsu
MVM
join:2002-07-16
Mountain View, CA
Humax BGW320-500

koitsu to MacGyver

{{koitsu OCD mode ENABLED}}

Heh, interesting test. I did read the full article, but the "facts" are somewhat scattered throughout and lacking in other areas.

The SATA specification (for all current SATA revisions) explicitly states that the longest cable length permitted is 1 metre (3.3 feet). So all their initial tests (using maximum-spec cables, testing out bends/kinks in the cable, etc.) are great, except all they said was "it worked fine". I don't know if I can believe this because they don't state how they verified that. Will it *work*, as in simply "function"? Probably, but for cables which are badly damaged or exceed maximum spec length, seeing CRC errors (tracked by SMART attribute 199) would be common.
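
If anyone wants to check this on their own drive, here's a rough Python sketch that pulls attribute 199 out of smartctl's output. It assumes smartmontools is installed and that the disk is /dev/sda (adjust to taste); on most drives 199 shows up as UDMA_CRC_Error_Count.

import subprocess

# Rough sketch: read SMART attribute 199 (interface CRC error count) via smartctl.
# Assumes smartmontools is installed; the device path is just an example.
def crc_error_count(device="/dev/sda"):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        # Attribute table rows start with the attribute ID; 199 is the CRC counter.
        if fields and fields[0] == "199":
            return int(fields[-1])  # RAW_VALUE is the last column on typical drives
    return None

print("SMART 199 (CRC errors):", crc_error_count())

If that raw value keeps climbing while a questionable cable is in use, that's the evidence a benchmark won't show you.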

I would love to have spent some time doing actual analysis of their "72-inch SATA connection" setup, but a protocol analyser would have been needed. 72 inches is roughly twice the maximum length permitted per spec. So how did this function/work at all?

I imagine that the voltage spit out by many southbridges or dedicated SATA controllers -- for the SATA data port connection (I'm not talking about the power connection) -- is probably slightly higher than what the specification demands. This would definitely help in situations where someone goes out and buys, say, a 5-foot cable. (Why such a cable would exist to begin with is beyond me, but I have seen this with SCSI, believe it or not.) The important part here is that the behaviour is going to vary per board.

Finally, their very funny 108-inch cable test made me laugh -- yes, it failed. But I loved seeing the photo. I'm just imagining some random Chinese outfit needing to solve such a problem, with these kinds of makeshift cables being used all over the place, hahaha...

Again: a SATA signal and transport protocol analyser would have really been useful in their tests, because I see no actual evidence to back up their "things worked great" claim. I'm certain at least one of those tests resulted in increased CRC errors, which are acceptable in small numbers over long periods of time, but a regularly incrementing count is bad. Excess CRC errors would also explain the 108-inch test where they "saw it fail sometimes but not others".

{{koitsu OCD mode DISABLED}}

Kilroy
MVM
join:2002-11-21
Saint Paul, MN

said by koitsu:

The SATA specification (for all current SATA revisions) explicitly states that the longest cable length permitted is 1 metre (3.3 feet). So all their initial tests (using maximum-spec cables, testing out bends/kinks in the cable, etc.) are great, except all they said was "it worked fine".

Check out the chart with read and write speeds. That is what they used to determine how well it worked. If errors increased, read/write times would be affected.
said by koitsu:

I would love to have spent some time doing actual analysis of their "72-inch SATA connection" setup, but a protocol analyser would have been needed. 72 inches is roughly twice the maximum length permitted per spec. So how did this function/work at all?

Because the spec is either underrated or very forgiving. Odds are it's underrated, so that you can actually have longer runs that work just fine but don't meet the actual spec. This helps ensure that if you stay within the spec your cables will be fully functional.

koitsu
MVM
join:2002-07-16
Mountain View, CA
Humax BGW320-500

I'm really starting to hate this damn forum software throwing out my entire post when "quote blocks are unbalanced". I need to talk to Karl/nil about improving that. It's totally user error on my part, but seriously, just throwing out everything someone wrote because of a minor typo? Sheesh.
said by Kilroy:

Check out the chart with read and write speeds. That is what they used to determine how well it worked. If errors increased, read/write times would be affected.

Not exactly. You're assuming that you would see a throughput decrease because of "increased errors" -- that's an incorrect assumption, because it presumes that actual ATA CDB timeouts would happen. Any kind of ATA timeout would absolutely impact performance, and you would see it in high-resolution graphs which plot throughput on the Y axis and time (preferably in milliseconds) on the X axis. It'd look like this: |~~|_|~~|_|~~~~|_|~~~~|_|~~~~|_| where every _ indicates a drop to zero.

That isn't what happens with CRC errors. CRC errors are detected immediately and result in a resubmission of the ATA CDB; there's no timeout. The only time CRC errors would impact performance is if there is a massive number of them compared to checksum-passed CDBs. For example, if for every 2 ATA CDBs you had 1 which resulted in a CRC error + retransmit, you would definitely see an impact on performance. However, if for (pulling numbers out of my ass here) every 10,000 ATA CDBs you had 1 CRC error, you wouldn't see an impact on performance -- though that CRC error rate is still phenomenally high (on average, a few CRC errors every 365 days is considered normal/acceptable in noisy environments).

The exact same concept/model applies to things like TCP packets.

TL;DR -- a throughput/benchmark graph does not necessarily act as an indicator of error conditions.
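
To put rough numbers on that (purely illustrative, not from the article): if the only cost of a CRC error is one retransmitted command, the hit to throughput scales with the error rate, something like this:

# Back-of-envelope sketch: assume each CRC error costs exactly one extra
# transfer of the same command and nothing else. Numbers are illustrative.
def effective_throughput(nominal_mb_s, error_rate):
    # On average each command is sent (1 + error_rate) times,
    # so useful throughput scales by the inverse of that.
    return nominal_mb_s / (1.0 + error_rate)

for rate in (1 / 2, 1 / 100, 1 / 10_000):
    print(f"1 error per {int(1 / rate):>6} commands: "
          f"{effective_throughput(550.0, rate):.1f} MB/s of a nominal 550 MB/s")

A 1-in-2 error rate costs you roughly a third of your throughput; 1-in-10,000 is lost in the noise of any benchmark -- which is exactly why a benchmark chart can't tell you whether the error counter is climbing.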
said by Kilroy:

Because the spec is either underrated or very forgiving. Odds are it's underrated, so that you can actually have longer runs that work just fine but don't meet the actual spec. This helps ensure that if you stay within the spec your cables will be fully functional.

I would say it's a little bit of both. Most specifications when created are "made strict", e.g. "we know a cable length of 6 feet works fine, but let's not take any chances -- cut that in half". That would be underrated. But likewise, as I said in my previous post, forgiveness is definitely part of the picture too, as different SATA controllers are certain to use different levels of voltage across the data lines. SATA uses low-voltage differential signalling, and I'm fairly certain some SATA controllers output "slightly more" than what is desired -- which means longer cable runs will still continue to work.

Real-life evidence of this is with SCSI: HVD SCSI (high-voltage differential) used a much higher signalling voltage and supported significantly longer cable lengths (up to around 25 metres), while single-ended SCSI topped out at a few metres and even SCSI LVD only supported something like 12 metres at most.

However: with differential signalling, more bandwidth generally requires shorter cables. But MaximumPC's tests suggest that the SATA controller they used was definitely outputting more than what the specification required, which is how they were able to accomplish more or less the same throughput with a 72" cable as with a 36" cable. So that goes back to your point about things being underrated. :-)
scubascythan
Member
join:2005-05-14

I think the point of his proof was to re-affirm that for most SATA cables 1-3 feet long, one that was certified as SATA 2 or even SATA 1 is perfectly fine for SATA 3.

Once you go over 3 feet it's impractical and out-of-spec, and you should be hooking devices up with something like Ethernet. The long-distance cables were just for the curious, to see how far they could go before it failed. You're just arguing against the proof over an out-of-spec test method for the sake of arguing and showing how much you know.

koitsu
MVM
join:2002-07-16
Mountain View, CA
Humax BGW320-500

I respect your opinion, and I agree that the "overall point" of the article was to show that there is no such thing as a "SATA600" cable (meaning any SATA cable as of this writing will work fine). I'm not combating that point.

What I'm combating (pardon me: "arguing for the sake of arguing and showing how much I know") is that the article implies that "because we got good speeds, there was no problem, even at some exceeds-the-spec lengths". Kilroy even interpreted it to mean that.

Bandwidth/throughput tests (read: benchmarks) are not a way to determine if errors are happening with SATA. It's like doing Ethernet throughput tests, then saying "because I got 9MBytes/second on a 100mbit link, this cable/network is fine" -- when looking at actual error counters and PHY statistics/counters to determine if there were frame overruns, frame checksum errors, or dropped frames, would act as an indicator as to errors. TL;DR -- good throughput doesn't necessarily indicate an errorless connection. That's all I'm trying to drive home here.