SpaethCo
Digital Plumber
MVM
join:2001-04-21
Minneapolis, MN

to funchords
Re: Well worth reading on why P2P causes problems

said by funchords:

The only people who can call this "technical" are the people who have been adequately buffaloed into thinking that George Ou knows what he is talking about.

He does not.
Robb,

In all seriousness I truly do respect many of the things you post, but in this case George (while not 100% correct in his position either) is a heck of a lot closer to reality than you are in your counter arguments with him. Oversubscription at the edge is common in every network design; it's a matter of efficiency. Designing for full capacity for all edge ports is like designing freeways so that there would never be rush hour congestion; you'd bankrupt yourself in the process of building it.

The TCP fairness problem exists on the segments that experience saturation on a regular basis, which is generally the segment between the CMTS head-ends and the cable modems. Contrary to what George Ou posted, the problem will present itself on both the upstream and downstream segments, but it will manifest more readily on the lower-capacity upstream segments.

You've stated a few times that TCP throttling is irrelevant because people are already limited by the upstream connection of their cable modem. That's a bit like saying "I still have checks, how can my account be out of money?!" The gotcha is that there is not enough upstream capacity for everybody to transmit at once, and when congestion occurs on the common upstream channel to the CMTS head-end, the TCP sessions will scale themselves in relation to capacity on the entire channel, not according to the provisioned capacity for each cable modem. For example:

Say you have an 8/1 service on a DOCSIS 1.1 network, with 9mbps of upstream channel capacity. If 9 users are all uploading at the same time with a single TCP session each, each user will get 1mbps, saturating the channel.

Now say you had 18 users, again each using a single TCP session. TCP will naturally balance things out so that each connection averages about 500kbps.

Now say you had 17 users, but one of the users had 2 TCP sessions. 9mbps channel capacity / 18 TCP sessions = 500kbps per TCP connection. That means 16 people will see 500kbps, and the 1 user using 2 TCP sessions will see 1mbps.

The numbers don't break down quite that neatly in the field, but overall they're pretty close.
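The arithmetic in the three scenarios above can be sketched in a few lines. This is a back-of-the-envelope model, not a simulation of real TCP dynamics: it simply assumes long-lived TCP flows converge toward an equal share of the bottleneck per *flow*, capped by each modem's provisioned rate. The function name and numbers are illustrative.

```python
# Toy model of the shared-upstream example: under saturation, TCP divides
# the channel roughly equally per flow, not per subscriber.

def per_user_throughput(channel_mbps, sessions_per_user, per_user_cap_mbps):
    """Approximate steady-state Mbps per user, assuming each TCP flow gets
    an equal share of the bottleneck, capped by the modem's upstream rate."""
    total_sessions = sum(sessions_per_user)
    share = channel_mbps / total_sessions          # fair share per TCP flow
    return [min(n * share, per_user_cap_mbps) for n in sessions_per_user]

# 9 users, 1 session each: 1 Mbps apiece, channel exactly saturated
print(per_user_throughput(9, [1] * 9, 1))          # → [1.0, 1.0, ... 1.0]

# 18 users, 1 session each: ~0.5 Mbps apiece
print(per_user_throughput(9, [1] * 18, 1))         # → [0.5, 0.5, ... 0.5]

# 17 users, one of whom opens 2 sessions: that user gets ~1 Mbps,
# the other 16 get ~0.5 Mbps each
print(per_user_throughput(9, [2] + [1] * 16, 1))   # → [1.0, 0.5, ... 0.5]
```

The point the model makes concrete: opening extra sessions buys a user a larger slice without any single flow misbehaving.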

funchords
Hello
MVM
join:2001-03-11
Yarmouth Port, MA

said by SpaethCo:

Oversubscription at the edge is common in every network design; it's a matter of efficiency.
I really do understand this, espaeth. I'm not one of these guys who think that 384 Kbps should be reserved for my account 24/7/365. Please help me by showing me where I said something different. (This is a serious request. If I screwed up and wrote something carelessly, I don't want it to be misinterpreted.)

I am having trouble with the semantics. Let me explain:

Based on past experience, a service provider observes that for every 1000 customers, no more than 100 are ever attempting to upload at any one time. During his past 20 busiest hours, this has remained true for 19 of them (95%*). Therefore, he builds his network to handle THAT capacity -- the capacity of 100 simultaneous uploads. To me, that's not oversubscription. That's simply the way that bandwidth is planned and provided.

*I don't necessarily mean to invoke the meaning of the 95th percentile here. Instead, I'm trying to say that the observation is reliable 95% of the time.

Is the above "over-subscription?" Or is that roughly the way that all data customers of any kind plan and allocate for their needs?

I have heard people say that an ISP should not sell more accounts than they could handle if everyone jumped online and started to upload/download at capacity. I've never been one of those voices.

When I say "over-subscription," I mean that the service provider has stopped trying to reliably meet the data demands of his users.

Are there more accurate or concise terms I should use?

factchecker
@cox.net

factchecker to SpaethCo

Anon
said by SpaethCo:

Now say you had 17 users, but one of the users had 2 TCP sessions. 9mbps channel capacity / 18 TCP sessions = 500kbps per TCP connection. That means 16 people will see 500kbps, and the 1 user using 2 TCP sessions will see 1mbps.
Here's the problem I see with this discussion... The emphasis is being placed on TCP when the problem is within the network. TCP, in your scenario, is functioning EXACTLY as it is supposed to -- moving data for each session as quickly as possible, and scaling back each session only when network conditions dictate that full speed ahead won't work.

TCP is not what is broken. And anyone who frames the discussion in terms of TCP being broken is missing the more fundamental problem.

The _network_ allows a single user to get more than their "fair share" of the network. In your example, one user is able to get more bandwidth because the network allows it. That is one of the problems with contention-based (or shared) networks -- one station can monopolize the network if nothing prevents it from doing so.

Instead of rewriting TCP, a much easier step can be taken.

Enable QoS in a manner that shares the bandwidth equally among uploading systems when congestion exists, and otherwise allows systems to send as fast as their connections permit.

See the following on fair queuing:
»en.wikipedia.org/wiki/Fa ··· _queuing

Routers, DSLAMs, and CMTSs all have the capability to enable this type of behavior. ISPs just need to leverage it. This solution is MUCH easier than writing, testing, and deploying patches to TCP on a myriad of platforms.