said by FFH5:
Everyone who has an axe to grind in the P2P debate and why cable companies throttle P2P should read this commentary. It is 4 pages long and technical, but it explains how the current implementation of TCP has allowed P2P software to hog bandwidth to everyone's detriment.
The article would be okay if (a) it didn't have technical issues and (b) it didn't start from the assumption that TCP is broken. The problem is not TCP, but rather how the applications are coded and the last-mile networks. P2P applications need more aggressive TCP session pruning, and last-mile networks need to implement basic, protocol-neutral QoS to ensure that everyone gets a fair slice of the network - using settings like minimum Committed Bit Rates, etc.
TCP was NEVER designed to be "fair" and does not need to be updated to make it "fair". ECN would help end-user performance if the makers of crappy SOHO routers gave a damn about respecting ECN flags. But the fact is, TCP is designed to do ONE thing and to do it well - move packets. Once we start adding additional "fairness" mechanisms into the protocol, we break the concept that makes IP, TCP and UDP work so well - Keep It Simple Stupid.
But it is hard to ignore claims like this one:
Simply by opening up 10 to 100 TCP streams, P2P applications can grab 10 to 100 times more bandwidth than a traditional single-stream application under a congested Internet link.
There are so many issues with that statement.
For example, if a P2P user already has one TCP session open, sending data at 128kbps over a connection with a 256kbps upload, he is only using 128kbps. Open a second connection uploading at 128kbps and his total usage is 256kbps, his limit. Open three or four more sessions, and all of the sessions slow down because the user has hit the maximum upload speed of his connection. Open 100 sessions, and the user still only uses 256kbps of upload bandwidth.
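A toy calculation makes the point. This is not a real TCP model - just the arithmetic of the example above, assuming long-lived flows on a shared uplink settle near an equal share of the link (the 256kbps and 128kbps figures come from the example):

```python
UPLINK_KBPS = 256
PER_FLOW_APP_RATE_KBPS = 128  # what each session would send if unconstrained

def total_upload(num_flows, uplink=UPLINK_KBPS, per_flow=PER_FLOW_APP_RATE_KBPS):
    """Each flow gets an equal share of the uplink, capped at what it wants to send."""
    if num_flows == 0:
        return 0.0
    fair_share = uplink / num_flows
    per_flow_rate = min(per_flow, fair_share)
    return per_flow_rate * num_flows

for n in (1, 2, 4, 100):
    print(n, "sessions ->", total_upload(n), "kbps total")
# 1 session -> 128 kbps; 2 or more -> 256 kbps, and it never goes higher
```

No matter how many sessions are opened, the total never exceeds the 256kbps access link - the extra sessions just slice the same pie thinner.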
And his assumption that single sessions always generate less traffic than multiple sessions is also flawed. He is apparently not familiar with IP video cameras, Slingbox, etc.
Nor are those multiple sessions immune from congestion, as the author seems to think. Just because the user has 100 sessions open does NOT guarantee that he will get 256kbps all the time. How much data a TCP session can transfer is a function of the lower layers, including the network layer and how saturated it is.
Then there is the apparent assumption that P2P TCP sessions are immune from the AIMD algorithm that the author complains about... Simply false.
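A minimal AIMD sketch shows why. Every TCP flow - P2P or otherwise - additively increases its congestion window each RTT and multiplicatively halves it on loss; opening more flows changes how often the shared link congests, but exempts no one from the back-off. The constants here are illustrative, not from the article:

```python
def simulate(num_flows, link_capacity=100.0, rtts=200):
    """Symmetric AIMD toy model: all flows share one bottleneck link."""
    cwnds = [1.0] * num_flows
    for _ in range(rtts):
        cwnds = [c + 1.0 for c in cwnds]      # additive increase, every RTT
        if sum(cwnds) > link_capacity:        # shared bottleneck congests
            cwnds = [c / 2.0 for c in cwnds]  # multiplicative decrease hits ALL flows
    return cwnds

# Ten "P2P" flows on one link: each window saws between ~5 and ~11,
# capped by the shared bottleneck - none of them escapes AIMD.
windows = simulate(10)
```

Each extra flow shrinks every flow's window; the aggregate stays pinned near the bottleneck capacity.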
Then there is this statement:
Despite the undeniable truth that Jacobson's TCP congestion avoidance algorithm is fundamentally broken, many academics and now Net Neutrality activists along with their lawyers cling to it as if it were somehow holy and sacred.
The author is the ONLY person who has labeled AIMD "fundamentally broken", let alone called that an "undeniable truth". The internet engineering community has not said this; only the author has. Sorry, but just because one guy with a blog on ZDNet (a publication known for shoddy technical articles in the past) says it does not make it so.
Then there is the whole last page on weighted TCP... The problem is that without some sort of hardware in the network to manage flows, what is proposed will never happen. TCP cannot do what is being shown on its own, even if patched. The network would have to target SPECIFIC TCP flows for the "weighted TCP" model to work. And it is essentially no different from using WFQ or CBWFQ, where class X can borrow class Y's bandwidth until class Y needs it.
Sorry, but I'll take the positions in this article more seriously when a better-written version shows up someplace like the IETF (not the IEEE), or someone writes about it on NANOG.