Comments on news posted 2008-03-25 09:00:50: Several users have sent in this ZDNet discussion about a reworking of TCP (Transmission Control Protocol) congestion control. ..



tommy13v
Premium
join:2002-02-15
Niskayuna NY

No Clue

The guy has no clue as to what he is talking about.


neufuse

join:2006-12-06
James Creek, PA
Reviews:
·Comcast

1 edit

wait...

So if I am getting this right... say I have a web browser that opens 10 connections to download 10 images at once... if I use this guy's method, say the first image is on a slow server... and it is downloading at 20kbit per second... the other 9 images will not use the rest of the available bandwidth on the connection? If so, that is just plain stupid and would destroy servers' ability to serve information in a fast manner!

The guy makes it sound like currently, if we have 1 download at 6mbit and open 10 more, we now have a 66mbit connection, from the simplistic view they took on it... you never get 11x more bandwidth... you get what you were provisioned at... TCP/IP is not the problem! Overselling lines is... if you have a DOCSIS 3 node and 250 users on it, only sell them what they can sustain all at once... which is what? 1.5Mbit? Don't go "oh, we are on DOCSIS 3 now," give 250 people 50Mbit connections, and then whine "they're using up all our pipe," boo hoo... you caused the problem (ISP)...



FFH
Premium
join:2002-03-03
Tavistock NJ
kudos:5

1 recommendation

Well worth reading on why P2P causes problems

Everyone who has an axe to grind in the P2P debate and why cable companies throttle P2P should read this commentary. It is 4 pages long and technical, but it explains how the current implementation of TCP has allowed P2P software to hog bandwidth to everyone's detriment.

It also explains that the problem can be fixed without banning P2P. But to do that requires the IEEE to modify TCP protocol standards and for ISPs to develop bandwidth limiting procedures for those users who won't upgrade to the new TCP stack client software.
--
My BLOG .. .. Internet News .. .. My Web Page



Hehe

@ssa.gov

1 recommendation

reply to tommy13v

Re: No Clue

You really know how to support your opinion!



Cabal
Premium
join:2007-01-21
reply to FFH

Re: Well worth reading on why P2P causes problems

Seconded. Unfortunately, I think we're only likely to see armchair quarterback discussions on the matter.
--
Interested in open source engine management for your Subaru?



88615298
Premium
join:2004-07-28
West Tenness

1 recommendation

reply to Hehe

Re: No Clue

said by Hehe :

You really know how to support your opinion!
Just read more of his articles over at ZDNet and then come back here.


knightmb
Everybody Lies

join:2003-12-01
Franklin, TN
reply to Hehe

No, really he doesn't.

"TCP currently gives the user with 11 opened TCP streams 11 times more bandwidth than the user who only uses one TCP stream"

That statement right there is when I stopped reading his blog. If that was true, you could never download 11 files and play streaming music at the same time.



knightmb
Everybody Lies

join:2003-12-01
Franklin, TN

1 edit
reply to FFH

Re: Well worth reading on why P2P causes problems

said by FFH:

Everyone who has an axe to grind in the P2P debate and why cable companies throttle P2P should read this commentary. It is 4 pages long and technical, but it explains how the current implementation of TCP has allowed P2P software to hog bandwidth to everyone's detriment.

It also explains that the problem can be fixed without banning P2P. But to do that requires the IEEE to modify TCP protocol standards and for ISPs to develop bandwidth limiting procedures for those users who won't upgrade to the new TCP stack client software.
Maybe they should use what TCP/IP already has built in, like TOS (Type of Service) flags for lowdelay, throughput, reliability, mincost, and congestion. That stuff is already in there; adding another one like "even more uber ultra lower" is just more from the Redundancy Department of Redundancy.
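For what it's worth, here is a minimal Python sketch of what "using the flags that are already there" looks like from an application's side. The TOS values are the RFC 1349 ones (written out by hand, since Python's socket module does not define IPTOS_* names on every platform), and whether anything along the path honors them is another question entirely, as the replies below point out.

import socket

# Legacy IPv4 TOS values from RFC 1349, the same flags knightmb lists.
# Modern networks reinterpret this byte as DSCP/ECN, and routers are
# free to ignore or rewrite it.
IPTOS_LOWDELAY    = 0x10
IPTOS_THROUGHPUT  = 0x08
IPTOS_RELIABILITY = 0x04
IPTOS_MINCOST     = 0x02

def open_bulk_socket(host, port):
    """Open a TCP connection marked as bulk/background traffic."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        # Best effort: IP_TOS is available on Linux and most Unixes.
        s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, IPTOS_THROUGHPUT)
    except (OSError, AttributeError):
        pass  # platform may not expose IP_TOS, or may refuse the option
    s.connect((host, port))
    return s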


Matt3
All noise, no signal.
Premium
join:2003-07-20
Jamestown, NC
kudos:12

said by knightmb:

said by FFH:

Everyone who has an axe to grind in the P2P debate and why cable companies throttle P2P should read this commentary. It is 4 pages long and technical, but it explains how the current implementation of TCP has allowed P2P software to hog bandwidth to everyone's detriment.

It also explains that the problem can be fixed without banning P2P. But to do that requires the IEEE to modify TCP protocol standards and for ISPs to develop bandwidth limiting procedures for those users who won't upgrade to the new TCP stack client software.
Maybe they should use what TCP/IP already has built in, like TOS (Type of Service) flags for lowdelay, throughput, reliability, mincost, and congestion. That stuff is already in there; adding another one like "even more uber ultra lower" is just more from the Redundancy Department of Redundancy.
Agreed. QoS is also a viable option (it works on my $40 Buffalo router, for Christ's sake), but the big ISPs aren't interested in throttling P2P traffic; they want to either 1) kill it by making it unreliable a la Comcast or 2) implement byte caps to start hammering users with overages a la Time Warner. These are all being discussed by the companies most threatened by the advent of streaming video ... the cable companies.

There are solutions to P2P "flood" right now, but a lot of companies want it to fail because it's detrimental to their ancient and flawed business model.

axus

join:2001-06-18
Washington, DC
Reviews:
·Comcast
reply to knightmb

Re: No Clue

Well, in theory your limit is the connection you're buying... for a 5/1 connection, you aren't going to use more bandwidth than that.

But, the TCP stream issue still comes into play when you have two people who are maxing out their 5/1 connection. User #1 has 11 TCP streams, 500kbit/100kbit each. User #2 has 1 TCP stream, 5000kbit/1000kbit.

What happens when each drops a packet? User #1 has one stream drop to 250kb, the rest stay at 500. User #2 has his whole stream drop to 2500k/500k, while #1 is now at 4750k/950k. The algorithm will ratchet them back up, but packets are still going to be dropped.
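As a rough illustration of that arithmetic, here is a tiny Python sketch that treats a drop as simply halving the affected stream's rate. That is a crude stand-in for TCP's multiplicative decrease (real AIMD works on congestion windows, not rates, and the round numbers above don't add up exactly), but the shape of the result is the same.

def total_after_one_drop(stream_rates_kbit, dropped_index):
    """Halve the stream that saw the drop, then sum the user's streams."""
    rates = list(stream_rates_kbit)
    rates[dropped_index] /= 2          # multiplicative decrease
    return sum(rates)

user1 = [500] * 11                     # 11 streams of ~500 kbit each
user2 = [5000]                         # 1 stream of ~5000 kbit

print(total_after_one_drop(user1, 0))  # ~5250 kbit: barely dented
print(total_after_one_drop(user2, 0))  # 2500 kbit: cut in half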

Fixing the client side is stupid, because people can change their TCP stack to whatever they want. This guy should know better. It must be fixed on the ISP's side; the question is what the fair way to drop packets is, one that causes the least work for the router. Cisco and Sandvine would love to sell more hardware, and they don't care about efficiency, but the ISP should.

I think they should just drop a packet from a random open stream, regardless of the number of packets used by that stream. This way, the 11 stream guy has an 11/12 chance of one of his streams getting hit (for not much loss), and the 1 stream guy has a 1/12 chance of getting hit (but for a bigger loss). I believe the standard method is to pick a random packet, but random stream would be more fair when considering the TCP algorithm.
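A quick way to sanity-check those odds: a toy simulation comparing "drop from a random stream" with what the post above calls the standard method, dropping a random packet (approximated here by weighting each stream by its send rate). The stream list is made up to match the 11-versus-1 example.

import random

# Hypothetical streams: (owner, rate in kbit/s), matching the example above.
streams = [("user1", 500)] * 11 + [("user2", 5000)]

def pick_random_stream():
    # proposal: every open stream is equally likely to take the hit
    return random.choice(streams)[0]

def pick_random_packet():
    # usual case: the hit lands in proportion to how much each stream sends
    owners, rates = zip(*streams)
    return random.choices(owners, weights=rates)[0]

N = 100_000
for policy in (pick_random_stream, pick_random_packet):
    hits = sum(policy() == "user1" for _ in range(N))
    print(policy.__name__, round(hits / N, 3))
# random stream: user1 is hit ~11/12 of the time (but each hit is small)
# random packet: roughly 50/50, since both users send about the same amount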

Also, the DHCP server could put heavy users on one router gateway (or group of routers), and light users on a different one. Define use as their last month's bandwidth total. Treat the routers equally, and the light users will have plenty of space while the heavy users compete with each other. Neither suggestion inspects packets or breaks applications; I think they are network neutral.


socrplyr

join:2008-03-25
Canton, OH

1 recommendation

reply to neufuse

Re: wait...

What you are saying is somewhat true, but you're not thinking about it in the right way. First off, the situation you are proposing would actually be helped by the proposed scheme. Think of it this way: P2P works by opening up 10s/100s of TCP connections. So let's say that you are sharing a pipe with a P2P user and the pipe is saturated (together you want to use more than what is available). Let's say the pipe has a capacity of 100. Currently, if you have 1 TCP connection open and someone else has 90, you get to use a total of 1.1% of the pipe. Now let's say you open 10 connections to download your pictures (something a browser typically won't do; unless things have changed, they limit you to 2). Now you get to use 10% of the connection. Now let's move to the way the article says it should work (without the special burst addition, which would help you even more). For simplicity, again let the other user have 90 connections and you only have 1. When you go to download your first picture you get to use 50% of the connection. In this scenario you can actually download all 10 of your pictures one after the other in less than 1/5 of the time.
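For anyone who wants the shares spelled out, here is a toy Python version of that arithmetic. It assumes an idealized link that splits capacity evenly per TCP connection today, or evenly per subscriber under the proposed scheme; real AIMD only approximates this.

def per_flow_share(my_flows, other_flows):
    """Roughly how TCP behaves today: capacity splits per connection."""
    return my_flows / (my_flows + other_flows)

def per_user_share(num_users):
    """The proposed behaviour: capacity splits per subscriber."""
    return 1.0 / num_users

print(per_flow_share(1, 90))    # ~0.011 -> about 1.1% of the pipe
print(per_flow_share(10, 90))   # 0.10   -> 10% with ten connections open
print(per_user_share(2))        # 0.50   -> 50% under per-subscriber fairness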
Now for the second part of the argument: the other user on P2P is not hurt by this. Why? Let's say they are downloading a large file (the reason most people go to P2P). Your files are small in comparison and will easily finish, under either circumstance, before the other user's download. This means that you didn't actually change the total length of their download, because under the old way you would have used a smaller amount of bandwidth for longer, while under the new way you use more for a shorter time, but the total size is the same.
As for your comment about overselling the connections, I don't want to go too much into that issue, but this suggested protocol would help in those situations as well. Basically it will guarantee you an equal part of the bandwidth. Also, you are misinterpreting the problem. This has nothing to do with the speed caps that are given to you and multiplying them.
In reality the author is correct that the TCP protocol is broken in this aspect. You could argue that the protocol itself violates Net Neutrality. It depends on where you are trying to keep things equal: should it be at the TCP connection level or at the subscriber level? Basically, should we even it out between the number of connections that my neighbors and I make, or the number of neighbors? Personally I vote for the number of neighbors.



tommy13v
Premium
join:2002-02-15
Niskayuna NY
reply to Hehe

Re: No Clue

And your response certainly showed yours as well. Get back to work.



justbits
More fiber than ATT can handle
Premium
join:2003-01-08
Chicago, IL
Reviews:
·Comcast Business..

1 recommendation

reply to Matt3

Re: Well worth reading on why P2P causes problems

QOS/TOS flags are not a solution. Anybody can set those flags. Anybody can ignore those flags. QOS/TOS is only useful between you and your first hop onto the Internet, not between you and anybody else. Otherwise, everybody who is greedy would mark all of their packets as highest priority.

The proposed change to TCP can result in fair-sharing of major Internet backbones as well as fair-sharing on your home Internet router. The big win with a fair-sharing TCP stack is that the major Internet backbones that carry GB/sec of traffic won't need to deploy excessive traffic shaping or fake RST packets. And they can even detect the difference between a "greedy" TCP stack and a "fair-sharing" TCP stack, so they can continue to throttle "greedy" consumers but not excessively screw "fair-sharing" Internet users. The average case for the "fair-sharing" stack appears to result in no excessive detriment to P2P apps that would/could otherwise be more severely throttled by your ISP. And it seems to result in a huge increase in performance for lower-bandwidth single-connection users like VoIP and web surfing. However, an implementation of the protocol and testing need to be done before this can really be proven.

The key here is that people who excessively use the Internet would be fairly throttled by a change to everyone's TCP stack to help ease congestion on Internet backbones. With fair-sharing TCP stacks, ISPs wouldn't need to excessively punish people using P2P protocols that are designed to take advantage of TCP's "congestion control".

If you want to think of it another way, P2P protocols are NOT designed to be a "green" or environmentally friendly protocol. They are designed to be greedy and take advantage of known flaws in the current TCP congestion algorithm. A fair-sharing change to TCP would result in helping all traffic move more smoothly, not just on your connection to your ISP, but across the entire Internet backbone between you and your end destination.



60529262

join:2007-01-11
Chicago, IL
reply to FFH

said by not said by user TK Junk Mail but he should have :

Everyone who has an axe to grind in the P2P debate and why cable companies throttle P2P should read this commentary. It is 4 pages long and technical, but it explains how the current implementation of TCP has allowed P2P software to hog bandwidth to Cable business model's detriment.

It also explains that the Cable cash flow problem can be fixed without banning P2P or competing video services delivered via broadband. But to do that requires the IEEE to modify TCP protocol standards and for Cable ISPs to develop bandwidth limiting procedures for those Cable users who won't upgrade to the new TCP stack client software or dare to venture outside the Cable walled garden.
There. That is much more accurate.


factchecker

@cox.net
reply to FFH

said by FFH:

Everyone who has an axe to grind in the P2P debate and why cable companies throttle P2P should read this commentary. It is 4 pages long and technical, but it explains how the current implementation of TCP has allowed P2P software to hog bandwidth to everyone's detriment.
The article would be okay if (a) it didn't have technical issues and (b) didn't start with the assumption that TCP is broken. The problem is not TCP, but rather how the applications are coded and the last mile networks. P2P applications need more aggressive TCP session pruning and last mile networks need to implement basic, protocol-neutral QoS to ensure that everyone gets a fair slice of the network - using settings like minimum Committed Bit Rates, etc.

TCP was NEVER designed to be "fair" and does not need to be updated to make it "fair". ECN would help with performance to end users if the makers of crappy SOHO routers could be bothered to respect ECN flags. But the fact is, TCP is designed to do ONE thing and to do that well - move packets. Once we start adding additional "fairness" mechanisms into the protocol, we break the concept that makes IP, TCP and UDP work so well - Keep It Simple, Stupid.

But it is hard to ignore issues like this:
Simply by opening up 10 to 100 TCP streams, P2P applications can grab 10 to 100 times more bandwidth than a traditional single-stream application under a congested Internet link.
There are so many issues with that statement.

For example, if a P2P user already has one TCP session open, sending out data at 128kbps over his connection with a 256kbps upload, he is only using 128kbps. Open a second connection and upload at 128kbps and his total usage is 256kbps, his limit. Open three or four more sessions, and all of the sessions slow down as the user has hit the maximum upload speed of his connection. Open 100 sessions, and the user still only uses 256kbps of upload bandwidth.
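A trivial sketch of that point, if it helps: past the access cap, extra sessions don't multiply anything, they just divide it.

def per_session_upload(access_cap_kbps, sessions, per_session_demand_kbps):
    """Each session gets its demand until the access link becomes the ceiling."""
    return min(per_session_demand_kbps, access_cap_kbps / sessions)

for n in (1, 2, 5, 100):
    each = per_session_upload(256, n, 128)
    print(n, "sessions:", round(each, 1), "kbps each,",
          round(each * n, 1), "kbps total")
# 1 session: 128 kbps total; 2 or more: the total is pinned at 256 kbps.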

And his assumption that single sessions always generate less traffic than multiple sessions is also flawed. He, apparently, is not familiar with IP video cameras, Slingbox, etc.

As well, those multiple sessions are not immune from congestion as the author thinks. Just because the user has 100 sessions open does NOT guarantee that he will get 256kbps all the time. How much data TCP sessions can transfer is a function of the lower layers, including the network layer and how saturated it is.

Then there is the apparent assumption that P2P TCP sessions are immune from the AIMD algorithm that the author complains about... Simply false.

Then there is this statement:
Despite the undeniable truth that Jacobson’s TCP congestion avoidance algorithm is fundamentally broken, many academics and now Net Neutrality activists along with their lawyers cling to it as if it were somehow holy and sacred.
The author is the ONLY person who has labeled AIMD as being "fundamentally broken" as "undeniable truth". The Internet engineering community has not said this; only the author has said this. Sorry, just because one guy with a ZDNet blog (a publication known for shoddy technical articles in the past) said it does not make it so.

Then there is the whole last page on weighted TCP... The problem is that without some sort of hardware in the network to manage flows, what is proposed will never happen. TCP cannot do what is being shown on its own, even if patched. It would require the network to target SPECIFIC TCP flows for the "weighted TCP" model to work. And it, essentially, is no different from using WFQ or CBWFQ, where class X can use class Y's bandwidth until class Y needs it.
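To make the comparison concrete, here is a toy sketch of that borrow-until-needed behaviour. It is only an illustration of the allocation idea, not Cisco's WFQ/CBWFQ implementation; the class names and numbers are invented, and it does a single one-pass redistribution.

def allocate(link_kbps, guarantees, demands):
    """Give each class min(demand, guarantee), then lend leftover capacity
    to classes that still want more, in proportion to their shortfall."""
    alloc = {c: min(demands[c], guarantees[c]) for c in guarantees}
    spare = link_kbps - sum(alloc.values())
    wanting = {c: demands[c] - alloc[c] for c in alloc if demands[c] > alloc[c]}
    total_want = sum(wanting.values())
    for c, want in wanting.items():
        alloc[c] += min(want, spare * want / total_want)
    return alloc

# Class Y (interactive) guaranteed 400 kbps, class X (bulk) guaranteed 600 kbps.
print(allocate(1000, {"X": 600, "Y": 400}, {"X": 1000, "Y": 100}))
# X borrows Y's unused headroom (X ~900, Y 100); raise Y's demand to 400
# and X falls back to its 600 kbps guarantee.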

Sorry, but I'll take the positions in this article more seriously when a better written article shows up someplace like the IETF (not the IEEE) or someone writes about it on NANOG.


factchecker

@cox.net
reply to justbits

said by justbits:

QOS/TOS flags are not a solution. Anybody can set those flags. Anybody can ignore those flags. QOS/TOS is only useful between you and your first hop onto the Internet, not between you and anybody else. Otherwise, everybody who is greedy would mark all of their packets as highest priority.
The QoS issue only applies to the first mile anyway, where the problem is. There is no bandwidth problem at the Internet backbone level.

As well, QoS can easily be implemented in a way that prevents the "flags" issue you have mentioned.

The proposed change to TCP can result in fair-sharing of major Internet backbones as well as fair-sharing on your home Internet router. The big win with a fair-sharing TCP stack is that the major Internet backbones that carry GB/sec of traffic wont need to deploy excessive traffic shaping or deploy fake RST packets.
Major backbone providers have not even thought about using traffic shaping or forged RST packets. There is no bandwidth problem on the Internet backbones, nor will there be for a while, as there is still a lot of available bandwidth and many options to alleviate any problems (turning up new wavelengths, etc.). This is a problem in last-mile networks only.

And it seems to result in a huge increase in performance for lower-bandwidth single-connection users like VoIP and web surfing.
There are VERY few single-session TCP applications left. VoIP is not one of them. And web surfing is not one either (any longer).

If you want to think of it another way, P2P protocols are NOT designed to be a "green" or environmentally friendly protocol.
Then the problem is the P2P algorithm, NOT TCP. Fix the P2P algorithms, fix the problem. Mangle TCP, and the problem remains.

Skippy25

join:2000-09-13
Hazelwood, MO

Throttle them...

You can try to fix them any way you want, but they will find a workaround one way or another. Throttling their connection on the ISP side is the only true way to stop bandwidth hogs.

Simply set xGB tiers and then start throttling the connection back. As they go up further into additional tiers, throttle their connection further. Then reset it to normal and let the process start over.

Then they can market higher throttling points for power users who should pay more to use the additional bandwidth they are requesting.
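Something like that is easy enough to express. Here is a hypothetical sketch of the tier logic; the GB thresholds and throttle fractions are invented purely to illustrate the scheme.

# Invented tiers: above this many GB in the billing period, throttle to
# this fraction of the subscriber's provisioned rate. Sorted high to low.
TIERS = [
    (250, 0.25),
    (150, 0.50),
    (75,  0.75),
]

def throttled_rate(provisioned_kbps, month_usage_gb):
    """Return the rate to enforce for the rest of the billing period."""
    for threshold_gb, fraction in TIERS:
        if month_usage_gb >= threshold_gb:
            return provisioned_kbps * fraction
    return provisioned_kbps            # under the first tier: full speed

print(throttled_rate(8000, 40))        # light user: full 8000 kbps
print(throttled_rate(8000, 180))       # heavy user: cut to 4000 kbps
# At the start of the next period the counter resets, as suggested above.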



swhx7
Premium
join:2006-07-23
Elbonia
reply to factchecker

Re: Well worth reading on why P2P causes problems

I think you're basically right. Here are some views of Slashdotters who said it as well as I could:

Not all sessions experience the same congestion (Score:5, Interesting) by thehickcoder (620326) * on Monday March 24, @10:44AM (#22844896)

The author of this analysis seems to have missed the fact that each TCP session in a P2P application is communicating with a different network user and may not be experiencing the same congestion as other sessions. In most cases (those where the congestion is not on the first hop) It doesn't make sense to throttle all connections when one is effected by congestion.


Re:Not all sessions experience the same congestion (Score:4, Informative) by Mike McTernan (260224) on Monday March 24, @12:12PM (#22845810)

Right. The article seems to be written on the assumption that the bandwidth bottleneck is always in the first few hops, within the ISP. And in many cases for home users this is probably reasonably true; ISPs have been selling cheap packages with 'unlimited' and fast connections on the assumption that people would use a fraction of the possible bandwidth. More fool the ISPs that people found a use [plus.net] for all that bandwidth they were promised.


FUD (Score:5, Insightful) by Detritus (11846) on Monday March 24, @11:27AM (#22845298)

The whole article is disingenuous. What he is describing are not "loopholes" being cynically exploited by those evil, and soon to be illegal, P2P applications. They are the intended behavior of the protocol stack. Are P2P applications gaming the system by opening multiple streams between each pair of endpoints? No. While we could have a legitimate debate on what is fair behavior, he poisons the whole issue by using it as a vehicle for his anti-P2P agenda.


ureihcim3
Freshly made

join:2007-12-16
Miami, FL
reply to Hehe

Re: No Clue

Hey you, I know we don't just use government systems to surf whatever site we wish. Great, now we've got employees from the Social Security Administration discussing TCP/IP.


starbolin

join:2004-11-29
Redding, CA
reply to axus

Dropping packets only increases congestion. Everything between the client and you has already spent cycles processing the packet. Now the packet has to be retransmitted. So what you end up accomplishing is increasing the load on downstream links in proportion to the percentage of packets that you drop. This is the reason Comcast used the RST packets. Dropping packets would have only increased the load on their concentrators.

In addition it does nothing to address the fairness issue. The guy with one stream open waits on the dropped packet before reassembling the file while the guy with multiple streams continues to receive data in other streams.

There already exist many 'fair' protocols: Token Ring is fair, ATM is fair; those are two data-link layer protocols. DCCP is a more recently proposed transport layer protocol designed to be 'fair'. To understand why TCP persists, you need to examine why some (Token Ring) fell by the wayside, and why others (ATM) are used as carriers for TCP.

This whole debate is amusing to me, as it's very reminiscent of the Ethernet vs. Token Ring debate of the '80s. Token Ring lost for several reasons, including cost, flexibility and performance. Although, to be fair, there are still many Token Ring installs out there, mostly in applications where latency is critical.


bjbrock9

join:2002-10-28
Mcalester, OK

George Ou

Didn't you know that George Ou knows how to do everything better than everybody else?


patcat88

join:2002-04-05
Jamaica, NY
kudos:1
reply to ureihcim3

Re: No Clue

I wonder how fast BitTorrent would be on the SSA's fat Internet connection. Someone should colocate a torrent box.



funchords
Hello
Premium,MVM
join:2001-03-11
Yarmouth Port, MA
kudos:6

1 recommendation

reply to FFH

Re: Well worth reading on why P2P causes problems

said by FFH:

Everyone who has an axe to grind in the P2P debate and why cable companies throttle P2P should read this commentary. It is 4 pages long and technical, but it explains how the current implementation of TCP has allowed P2P software to hog bandwidth to everyone's detriment.
TK Junk Mail,

The only people who can call this "technical" are the people who have been adequately buffaloed into thinking that George Ou knows what he is talking about.

He does not.

I've made several comments to the article, pointing out where George gets it wrong. This time, as before, his responses indicate a lack of familiarity with the content of his own article! I am beginning to believe that Richard Bennett is George Ou's ghost writer.
--
Robb Topolski -= funchords.com =- Hillsboro, Oregon
"We don't throttle any traffic," -Charlie Douglas, Comcast spokesman, on this report.


Mchart
First There.

join:2004-01-21
Kaneohe, HI
reply to patcat88

Re: No Clue

It would barely work. As with any government connection, it can be very slow during the day while employees are simply browsing the Internet. This is why YouTube and MySpace and similar sites have been blocked by the on-site proxy. Typically a T3 to the real world is the most you can expect at any base or government facility.



bklynite
Premium
join:2001-03-18
Brooklyn, NY
reply to bjbrock9

Re: George Ou

He and Steve Gibson would have a wonderful relationship.



funchords
Hello
Premium,MVM
join:2001-03-11
Yarmouth Port, MA
kudos:6

1 edit

1 recommendation

reply to axus

Re: No Clue

said by axus:

But, the TCP stream issue still comes into play when you have two people who are maxing out their 5/1 connection. User #1 has 11 TCP streams, 500kbit/100kbit each. User #2 has 1 TCP stream, 5000kbit/1000kbit.

What happens when each drops a packet? User #1 has one stream drop to 250kb, the rest stay at 500. User #2 has his whole stream drop to 2500k/500k, while #1 is now at 4750k/950k. The algorithm will ratchet them back up, but packets are still going to be dropped.
THANK YOU!!! THANK YOU!!! Finally, someone who understands how this works!!

Two comments that I have to make quickly (time to go to an appt):

1. This "unfairness" only happens when the network is congested. What you're seeing here is a "keep the network alive" recovery method (prior to this, the network would just grind to an increasingly slow stop). A healthy ISP or transit network does not run at congestion on a constant basis -- during the busiest hours of the busiest days, maybe a few times. So George Ou is concentrating on fixing a "fairness" problem that occurs during a network exception -- this is a rather dumb thing to spend a lot of time on.

2. "Fairness" really isn't the problem at all. Regardless of how you slice the allocation during congested moments, the problem to solve is avoiding reaching that moment of congestion. Even if you implemented every suggestion that Bob B. and George are making, you will not have addressed ANY of the issues allegedly causing congestion in the first place.
--
Robb Topolski -= funchords.com =- Hillsboro, Oregon
"We don't throttle any traffic," -Charlie Douglas, Comcast spokesman, on this report.


espaeth
Digital Plumber
Premium,MVM
join:2001-04-21
Minneapolis, MN
kudos:2
reply to knightmb

said by knightmb:

That statement right there is when I stopped reading his blog. If that was true, you could never download 11 files and play streaming music at the same time.
Assuming a 6mbps broadband connection: 6,000,000bps / 12 = 500,000bps. 500kbps is still quite adequate for streaming music.

The math works out just fine.

axus

join:2001-06-18
Washington, DC
Reviews:
·Comcast
reply to starbolin

Well, I admit I'm not a network engineer, but I think dropping packets decreases congestion with the current TCP stacks. Each client tries to transmit more slowly when it sees a dropped packet. Dropping packets is the standard way for the router to say "I'm getting too much traffic". Yes there will be retransmits, but they will be slower until the router doesn't need to drop packets anymore.

Delaying packets can be an alternative to dropping them, but I thought that remembering a packet while letting others through took more work than dropping it once or twice and passing it through the second or third time. Sending RST packets for specific protocols seems to require another piece of hardware, breaks the standard, and is not "network neutral".

What is the fairness issue? Everyone should get the same proportion of throughput. The YouTube watcher, the guy downloading a CD from his work, the lady emailing a bunch of pictures, and the BitTorrent sharer should get the same total kbps. If the network can't support it, the dropped packets will cause them to slow down their speeds until it can. If the BitTorrent guy is dropping packets in proportion to the number of streams he has open, every stream is going to slow down as much as the single-stream guy.



Smith6612
Premium,MVM
join:2008-02-01
North Tonawanda, NY
kudos:24
Reviews:
·Verizon Online DSL
·Frontier Communi..

But...

TCP isn't broken! Even though I can be a bandwidth hog at times (not torrents; I've never used them), it doesn't mean I should have TCP rewritten to simply slow down my game demo downloads, timing some of them out and chewing up more bandwidth, because another person decided to do one small file download. If people really want to slow the bandwidth hogs down, just use QoS to limit how much the physical machine is able to use at once, or hog the bandwidth before the bandwidth hog goes at it.

At a Wi-Fi hotspot at a resort I go to two times a year, they have a single T1 line powering all of the free Wi-Fi access points, while a T3 line powers the pay Wi-Fi in the hotel and corporate operations. The load on the T3 is hardly anything at night (when everyone is online), mainly because only corporate is on it, since people hate to pay for Internet. But the T1 sure does take a big banging at night, with everything from 40kbps of download and upload, to packet loss, to 300+ms pings, because of everyone using the free Wi-Fi to download huge files or mess around on games/YouTube, and there are probably around 20 access areas with a few routers in each zone as well (heck, due to demand they even have their entire main parking lot covered with Wi-Fi for the summertime outdoor people and RV people). This place doesn't limit bandwidth for anyone, so everyone can max the line out, but there's never been a time where it was so slow that I couldn't connect to sites. Even with that lack of bandwidth, downloads still complete, uploads too, and the Internet still works fine.

Again, QoS is the key. Don't like bandwidth hogs on a line? Get a new line just for you.



espaeth
Digital Plumber
Premium,MVM
join:2001-04-21
Minneapolis, MN
kudos:2

1 recommendation

reply to funchords

Re: Well worth reading on why P2P causes problems

said by funchords:

The only people who can call this "technical" are the people who have been adequately buffaloed into thinking that George Ou knows what he is talking about.

He does not.
Robb,

In all seriousness I truly do respect many of the things you post, but in this case George (while not 100% correct in his position either) is a heck of a lot closer to reality than you are in your counter arguments with him. Oversubscription at the edge is common in every network design; it's a matter of efficiency. Designing for full capacity for all edge ports is like designing freeways so that there would never be rush hour congestion; you'd bankrupt yourself in the process of building it.

The TCP fairness problem exists on the segments that experience saturation on a regular basis, which is generally the segment between the CMTS head-ends and the cable modems. Contrary to what George Ou posted the problem will present itself on both the upstream and downstream segments, but will manifest itself more readily in the lower capacity upstream segments.

You've stated a few times that TCP throttling is irrelevant because people are already limited by the upstream connection of their cable modem. That's a bit like saying "I still have checks, how can my account be out of money?!" The gotcha is that there is not enough upstream capacity for everybody to transmit at once, and when congestion occurs on the common upstream channel to the CMTS head-end, the TCP sessions will scale themselves in relation to capacity on the entire channel, not according to the provisioned capacity for each cable modem. For example:

Say you have an 8/1 service on a DOCSIS 1.1 network. Say you have 9 users all uploading in a single TCP session each at the same time; each user will get 1mbps, saturating the connection.

Now say you had 18 users, again all using a single TCP session. TCP will naturally balance things out so that each connection will average about 500kbps.

Now say you had 17 users, but one of the users had 2 TCP sessions. 9mbps channel capacity / 18 TCP sessions = 500kbps per TCP connection. That means 16 people will see 500kbps, and the 1 user using 2 TCP sessions will see 1mbps.

The numbers don't break down quite that neatly in the field, but overall they're pretty close.
The numbers don't break down quite that neat in the field, but overall they're pretty close.