I'm really starting to hate this damn forum software throwing out my entire post when "quote blocks are unbalanced". I need to talk to Karl/nil about improving that. It's totally user error on my part, but seriously, just throwing out everything someone wrote because of a minor typo? Sheesh.
said by Kilroy: Check out the chart with read and write speeds. That is what they used to determine how well it worked. If errors increase read / write times would be affected.
Not exactly. You're assuming you'd see a throughput decrease because of "increased errors" -- that's incorrect, because it presumes actual ATA CDB timeouts would happen. Any kind of ATA timeout would absolutely impact performance, and you would see it in high-resolution graphs which plot throughput on the Y axis and time (preferably in milliseconds) on the X axis. It'd look like this:
|~~|_|~~|_|~~~~|_|~~~~|_|~~~~|_|
where every _ indicates a drop to zero.
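For fun, here's a toy sketch (Python) that generates that kind of series -- every number is invented, it's just to show the shape a timeout leaves in a plot:

# Toy throughput-vs-time series: a steady transfer rate punctuated by
# command timeouts. All numbers here are made up for illustration.
STEADY_MBPS = 300.0   # hypothetical sustained rate between stalls
TIMEOUT_EVERY = 8     # hypothetical: every 8th 100 ms sample stalls

series = [0.0 if i % TIMEOUT_EVERY == 0 else STEADY_MBPS
          for i in range(1, 33)]
print(series)   # the 0.0 entries are the _ gaps in the ASCII art above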
That isn't what happens with CRC errors. CRC errors are detected immediately and result in a resubmission of the ATA CDB; there's no timeout. The only time CRC errors would impact performance is if there's a massive number of them compared to checksum-passed CDBs. For example, if for every 2 ATA CDBs you had 1 which resulted in a CRC error + retransmit, you would definitely see a performance impact. But if for (pulling numbers out of my ass here) every 10,000 ATA CDBs you had 1 CRC error, you wouldn't see any impact -- even though that error rate is still phenomenally high (on average, a few CRC errors per 365 days is considered normal/acceptable in noisy environments).
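The arithmetic behind that is simple enough to sketch (Python again, invented numbers; it assumes a failed CDB is resubmitted immediately and that retries can fail at the same rate):

def effective_throughput(base_mbps, error_rate):
    # With immediate retransmission and independent failures, the
    # expected number of attempts per successful CDB is 1 / (1 - p),
    # so useful throughput scales by a factor of (1 - p).
    return base_mbps * (1.0 - error_rate)

for p in (1 / 2, 1 / 10_000):
    print(f"1 CRC error per {int(1 / p):>6} CDBs -> "
          f"{effective_throughput(300.0, p):7.2f} MB/s of 300.00")

At 1-in-2 you lose half your throughput; at 1-in-10,000 you lose 0.01%, which no benchmark graph would ever show.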
The exact same concept/model applies to things like TCP packets.
TL;DR -- a throughput/benchmark graph does not necessarily act as an indicator of error conditions.
said by Kilroy: Because the spec is either under rated or very forgiving. Odds are under rated so that you can actual have longer runs that work just fine, but don't meet the actual specs. This helps ensure that if you stay within the spec your cables will be fully functional.
I would say it's a little bit of both. Most specifications, when created, are "made strict", e.g. "we know a cable length of 6 feet works fine, but let's not take any chances -- cut that in half". That would be underrating. But likewise, as I said in my previous post, forgiveness is definitely part of the picture too, as different SATA controllers are certain to use different voltage levels across the data lines. SATA uses low-voltage differential signalling, and I'm fairly certain some SATA controllers output "slightly more" than what's required -- which means longer cable runs will still work.
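A crude way to picture that margin (Python; every figure is invented, and real cable loss is frequency-dependent rather than a flat per-metre number, but the shape of the argument holds):

# Toy margin model: a transmitter that drives "hotter" than the spec
# minimum leaves headroom for extra cable attenuation. Invented numbers.
SPEC_MIN_TX_MV  = 400   # hypothetical spec-minimum differential swing
HOT_TX_MV       = 600   # hypothetical "slightly hot" controller output
RX_THRESHOLD_MV = 240   # hypothetical minimum swing the receiver needs
LOSS_MV_PER_M   = 150   # hypothetical (and crudely linear) cable loss

for tx_mv in (SPEC_MIN_TX_MV, HOT_TX_MV):
    reach_m = (tx_mv - RX_THRESHOLD_MV) / LOSS_MV_PER_M
    print(f"{tx_mv} mV drive -> roughly {reach_m:.1f} m of usable cable")

The hotter driver buys you more than double the reach, which is exactly the kind of slop that lets out-of-spec cable lengths keep working.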
Real-life evidence of this is with SCSI: HVD SCSI (high-voltage differential) used much higher signalling voltages and supported cable runs up to 25 metres, while single-ended SCSI topped out at a few metres and even LVD SCSI at around 12.
However: with differential signalling, higher bandwidth requires shorter cables. But MaximumPC's tests show that the SATA controller they used was definitely outputting more than what the specification required, which is how they were able to accomplish more or less the same throughput with a 72" cable as with a 36" cable. So that goes back to your point about things being underrated. :-)