Re: Well worth reading on why P2P causes problems
I think you're basically right. Here are some Slashdotters who made the point as well as I could have:
Not all sessions experience the same congestion (Score:5, Interesting) by thehickcoder (620326) * on Monday March 24, @10:44AM (#22844896)
The author of this analysis seems to have missed the fact that each TCP session in a P2P application is communicating with a different network user and may not be experiencing the same congestion as other sessions. In most cases (those where the congestion is not on the first hop), it doesn't make sense to throttle all connections when only one is affected by congestion.
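The per-session point can be made concrete with the standard back-of-the-envelope model for steady-state TCP throughput (the Mathis approximation, rate ≈ (MSS/RTT) × sqrt(3/2)/sqrt(p)): each flow's rate is governed by the loss rate and RTT of *its own* path. The peers, RTTs, and loss rates below are made up for illustration, not taken from the article:

```python
import math

def mathis_throughput(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Approximate steady-state TCP throughput (bytes/sec) for one flow,
    using the Mathis et al. formula: (MSS/RTT) * sqrt(3/2) / sqrt(p)."""
    return (mss_bytes / rtt_s) * math.sqrt(1.5) / math.sqrt(loss_rate)

# Hypothetical P2P sessions to three different peers: each path has its
# own RTT and loss rate, so congestion on one says little about the others.
sessions = {
    "peer_a": {"rtt_s": 0.050, "loss_rate": 0.0001},  # clean path
    "peer_b": {"rtt_s": 0.200, "loss_rate": 0.02},    # congested path
    "peer_c": {"rtt_s": 0.080, "loss_rate": 0.001},
}

for peer, p in sessions.items():
    bps = mathis_throughput(1460, p["rtt_s"], p["loss_rate"])
    print(f"{peer}: ~{bps / 1e3:.0f} kB/s")
```

The congested path to peer_b ends up orders of magnitude slower than the clean path to peer_a, which is exactly why throttling every session because one of them hit congestion makes little sense.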
Re:Not all sessions experience the same congestion (Score:4, Informative) by Mike McTernan (260224) on Monday March 24, @12:12PM (#22845810)
Right. The article seems to be written on the assumption that the bandwidth bottleneck is always in the first few hops, within the ISP. For home users this is probably true in many cases; ISPs have been selling cheap packages with 'unlimited', fast connections on the assumption that people would use only a fraction of the possible bandwidth. More fool the ISPs that people found a use [plus.net] for all that bandwidth they were promised.
FUD (Score:5, Insightful) by Detritus (11846) on Monday March 24, @11:27AM (#22845298)
The whole article is disingenuous. What he is describing are not "loopholes" being cynically exploited by those evil, and soon to be illegal, P2P applications. They are the intended behavior of the protocol stack. Are P2P applications gaming the system by opening multiple streams between each pair of endpoints? No. While we could have a legitimate debate on what is fair behavior, he poisons the whole issue by using it as a vehicle for his anti-P2P agenda.