
factchecker

@cox.net
reply to FFH5

Re: Well worth reading on why P2P causes problems

said by FFH5:

Everyone who has an axe to grind in the P2P debate and why cable companies throttle P2P should read this commentary. It is 4 pages long and technical, but it explains how the current implementation of TCP has allowed P2P software to hog bandwidth to everyone's detriment.
The article would be okay if (a) it didn't have technical issues and (b) it didn't start from the assumption that TCP is broken. The problem is not TCP, but rather how the applications are coded and how the last-mile networks are built. P2P applications need more aggressive TCP session pruning, and last-mile networks need to implement basic, protocol-neutral QoS to ensure that everyone gets a fair slice of the network - using settings like minimum Committed Bit Rates, etc.
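To make that concrete, here is a rough Python sketch of what protocol-neutral, per-subscriber sharing with a committed minimum could look like. The function and the numbers are mine and purely illustrative, not anything an ISP actually runs:

def fair_shares(link_kbps, demands_kbps, committed_kbps):
    # Give every subscriber at least their committed rate (capped at what
    # they actually want), then hand the leftover capacity to whoever
    # still wants more, a little at a time.
    alloc = {sub: min(d, committed_kbps) for sub, d in demands_kbps.items()}
    spare = link_kbps - sum(alloc.values())
    unsatisfied = {s for s, d in demands_kbps.items() if d > alloc[s]}
    while spare > 0 and unsatisfied:
        per_sub = spare / len(unsatisfied)
        spare = 0
        for s in list(unsatisfied):
            extra = min(per_sub, demands_kbps[s] - alloc[s])
            alloc[s] += extra
            spare += per_sub - extra
            if alloc[s] >= demands_kbps[s]:
                unsatisfied.discard(s)
    return alloc

# One heavy P2P uploader and three light users on a 10,000 kbps node,
# each with a 1,000 kbps committed minimum:
print(fair_shares(10000, {"p2p": 8000, "a": 500, "b": 500, "c": 500}, 1000))

The point being that the heavy user only "borrows" capacity nobody else is asking for, and gets pushed back toward the committed rate as soon as the light users show up.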

TCP was NEVER designed to be "fair" and does not need to be updated to make it "fair". ECN would help end-user performance if the makers of crappy SOHO routers could be bothered to respect ECN flags. But the fact is, TCP is designed to do ONE thing and do it well - move data. Once we start bolting additional "fairness" mechanisms onto the protocol, we break the concept that makes IP, TCP and UDP work so well - Keep It Simple, Stupid.
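For what it's worth, ECN isn't something applications have to patch into TCP themselves - on Linux it's a kernel knob. A quick sketch, assuming a Linux host:

# Hedged sketch, Linux only: TCP ECN negotiation is a kernel setting.
# net.ipv4.tcp_ecn: 0 = off, 1 = request ECN on connections we initiate,
# 2 = use ECN only when the remote end requests it (a common default).
with open("/proc/sys/net/ipv4/tcp_ecn") as f:
    mode = int(f.read().strip())
print({0: "ECN disabled",
       1: "ECN requested on outgoing connections",
       2: "ECN used only when the peer requests it"}.get(mode, "unknown"))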

But it is hard to ignore a claim like this:
Simply by opening up 10 to 100 TCP streams, P2P applications can grab 10 to 100 times more bandwidth than a traditional single-stream application under a congested Internet link.
There are so many issues with that statement.

For example, if a P2P user already has one TCP session open, sending data at 128 kbps over a connection with a 256 kbps upload cap, he is only using 128 kbps. Open a second session uploading at 128 kbps and his total usage is 256 kbps - his limit. Open three or four more sessions and all of them slow down, because the user has hit the maximum upload speed of his connection. Open 100 sessions and the user still only uses 256 kbps of upload bandwidth.
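Put numbers on it and the point is obvious. A trivial Python sketch, assuming (roughly) that the sessions split the uplink evenly:

# Trivial sketch of the arithmetic above: no matter how many TCP sessions
# a 256 kbps uploader opens, the uplink caps the total at 256 kbps.
UPLINK_KBPS = 256
PER_FLOW_DEMAND_KBPS = 128   # what each session would push if unconstrained

for flows in (1, 2, 4, 10, 100):
    total_sent = min(flows * PER_FLOW_DEMAND_KBPS, UPLINK_KBPS)
    per_flow = total_sent / flows
    print(f"{flows:3d} sessions: ~{per_flow:6.1f} kbps each, {total_sent} kbps total")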

And his assumption that single sessions always generate less traffic than multiple sessions is also flawed. He is, apparently, not familiar with IP video cameras, Slingbox, etc.

As well, those multiple sessions are not immune to congestion, as the author seems to think. Just because a user has 100 sessions open does NOT guarantee that he will get 256 kbps all the time. How much data TCP sessions can transfer is a function of the lower layers, including the network layer and how saturated it is.
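If you want the back-of-the-envelope version, the usual Mathis et al. approximation for a long-lived TCP flow is throughput ~ MSS / (RTT * sqrt(loss)). A quick sketch with my own illustrative numbers, constant factor omitted:

# Rough sketch of per-flow TCP throughput as a function of path conditions,
# using the simplified Mathis approximation (constant factor omitted).
from math import sqrt

def approx_throughput_kbps(mss_bytes=1460, rtt_s=0.080, loss_rate=0.01):
    return (mss_bytes * 8) / (rtt_s * sqrt(loss_rate)) / 1000

for p in (0.0001, 0.001, 0.01, 0.05):
    print(f"loss {p:.4f}: ~{approx_throughput_kbps(loss_rate=p):.0f} kbps per flow")

Which is the point: how fast any one session goes is set by loss and RTT on the path, not by how many sessions the sender feels like opening.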

Then there is the apparent assumption that P2P TCP sessions are somehow immune to the AIMD algorithm that the author complains about... Simply false.
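Just to spell out what AIMD does to every flow, P2P or not, here is a toy sketch with illustrative numbers only:

# Toy sketch of AIMD: additive increase of the congestion window each RTT,
# and a multiplicative halving whenever a loss event is signalled.
# Every TCP flow, P2P or otherwise, is subject to this.
import random

random.seed(1)
cwnd = 10.0          # congestion window, in segments
LOSS_PROB = 0.05     # chance that a given RTT sees a loss event

for rtt in range(30):
    if random.random() < LOSS_PROB:
        cwnd = max(cwnd / 2, 1.0)   # multiplicative decrease on loss
    else:
        cwnd += 1.0                 # additive increase, one segment per RTT
    print(f"RTT {rtt:2d}: cwnd = {cwnd:.1f} segments")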

Then there is this statement:
Despite the undeniable truth that Jacobson’s TCP congestion avoidance algorithm is fundamentally broken, many academics and now Net Neutrality activists along with their lawyers cling to it as if it were somehow holy and sacred.
The author is the ONLY person who has labeled AIMD "fundamentally broken" as an "undeniable truth". The Internet engineering community has not said this; only the author has. Sorry, but just because one guy with a ZDNet blog (a publication known for shoddy technical articles in the past) said it does not make it so.

Then there is the whole last page on weighted TCP... The problem is that without some sort of hardware in the network to manage flows, what is proposed will never happen. TCP cannot do what is being shown on its own, even if patched. The network would have to target SPECIFIC TCP flows for the "weighted TCP" model to work. And that is, essentially, no different from using WFQ or CBWFQ, where class X can use class Y's bandwidth until class Y needs it.
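For anyone who hasn't configured it, the CBWFQ-style borrowing works roughly like this. A toy Python sketch, with made-up weights and rates:

def weighted_shares(link_kbps, weights, demands):
    # Split a link by weight, letting classes borrow guarantees that
    # other classes are not currently using.
    alloc = {}
    active = dict(weights)
    remaining = link_kbps
    while active and remaining > 0:
        total_w = sum(active.values())
        leftover = 0
        for cls in list(active):
            share = remaining * active[cls] / total_w
            take = min(share, demands[cls] - alloc.get(cls, 0))
            alloc[cls] = alloc.get(cls, 0) + take
            leftover += share - take
            if alloc[cls] >= demands[cls]:
                del active[cls]
        remaining = leftover
    return alloc

# Class Y (interactive) is mostly idle, class X (bulk/P2P) is busy:
print(weighted_shares(1000, {"X": 1, "Y": 1}, {"X": 900, "Y": 100}))

Class Y gets its guarantee back the moment it offers traffic; until then class X borrows it. That is deployable today with existing gear, no TCP patching required.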

Sorry, but I'll take the positions in this article more seriously when a better-written version shows up someplace like the IETF (not the IEEE), or someone writes about it on NANOG.


swhx7
Premium
join:2006-07-23
Elbonia
I think you're basically right. Here are some views from Slashdotters who made the point as well as I could have:

Not all sessions experience the same congestion (Score:5, Interesting) by thehickcoder (620326) * on Monday March 24, @10:44AM (#22844896)

The author of this analysis seems to have missed the fact that each TCP session in a P2P application is communicating with a different network user and may not be experiencing the same congestion as other sessions. In most cases (those where the congestion is not on the first hop), it doesn't make sense to throttle all connections when one is affected by congestion.


Re:Not all sessions experience the same congestion (Score:4, Informative) by Mike McTernan (260224) on Monday March 24, @12:12PM (#22845810)

Right. The article seems to be written on the assumption that the bandwidth bottleneck is always in the first few hops, within the ISP. And in many cases for home users this is probably reasonably true; ISPs have been selling cheap packages with 'unlimited' and fast connections on the assumption that people would use a fraction of the possible bandwidth. More fool the ISPs that people found a use [plus.net] for all that bandwidth they were promised.


FUD (Score:5, Insightful) by Detritus (11846) on Monday March 24, @11:27AM (#22845298)

The whole article is disingenuous. What he is describing are not "loopholes" being cynically exploited by those evil, and soon to be illegal, P2P applications. They are the intended behavior of the protocol stack. Are P2P applications gaming the system by opening multiple streams between each pair of endpoints? No. While we could have a legitimate debate on what is fair behavior, he poisons the whole issue by using it as a vehicle for his anti-P2P agenda.