This is a sub-selection from No Clue

axus

join:2001-06-18
Washington, DC
reply to knightmb

Re: No Clue

Well, in theory your limit is the connection you're buying... for a 5/1 connection, you aren't going to use more bandwidth than that.

But the TCP stream issue still comes into play when you have two people who are both maxing out their 5/1 connections. User #1 has 10 TCP streams at 500kbit/100kbit each; User #2 has 1 TCP stream at 5000kbit/1000kbit.

What happens when each of them drops a packet? One of User #1's streams drops to 250k/50k while the other nine stay at 500k/100k; User #2's whole stream drops to 2500k/500k. So #1 is now at 4750k/950k while #2 is at 2500k/500k. The TCP algorithm will ratchet them both back up, but packets will keep getting dropped.
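In rough Python, the arithmetic works out like this (a sketch assuming each stream simply halves its rate after a single loss, per TCP's classic multiplicative decrease):

# All rates in kbit/s. A 5/1 plan caps each user at 5000 down / 1000 up.
user1_down = [500] * 10     # user #1: ten streams of 500 kbit/s down
user2_down = [5000]         # user #2: one stream of 5000 kbit/s down

user1_down[0] //= 2         # one of user #1's streams takes a loss and halves
user2_down[0] //= 2         # user #2's only stream takes a loss and halves

print(sum(user1_down))      # 4750 -- user #1 barely notices
print(sum(user2_down))      # 2500 -- user #2 loses half his throughput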

Fixing the client side is stupid, because people can change their TCP stack to whatever they want; this guy should know better. It has to be fixed on the ISP's side, and the question is what the fair way to drop packets is, one that causes the least work for the router. Cisco and Sandvine would love to sell more hardware; they don't care about efficiency, but the ISP should.

I think they should just drop a packet from a random open stream, regardless of how many packets that stream has queued. That way, the 10-stream guy has a 10/11 chance of one of his streams getting hit (for not much loss), and the 1-stream guy has a 1/11 chance of getting hit (but for a bigger loss). I believe the standard method is to pick a random packet, but a random stream would be fairer once you account for the TCP algorithm.
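The two policies, sketched in Python with a made-up stream table (stream names and queue depths are purely for illustration):

import random

# Hypothetical queue state: stream id -> packets currently queued.
queued = {"u1-s%d" % i: 1 for i in range(10)}   # user #1: 10 thin streams
queued["u2-s0"] = 10                            # user #2: 1 fat stream

def drop_random_packet(queued):
    # Standard approach: every queued packet is equally likely to be
    # dropped, so the fat stream absorbs most of the hits.
    packets = [s for s, n in queued.items() for _ in range(n)]
    return random.choice(packets)

def drop_random_stream(queued):
    # Proposed approach: every open stream is equally likely, no matter
    # how many packets it has queued (10/11 vs. 1/11 odds here).
    return random.choice(list(queued))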

Also, the DHCP server could put heavy users on one router gateway (or group of routers) and light users on a different one, defining "use" as last month's bandwidth total. Treat the routers equally, and the light users will have plenty of headroom while the heavy users compete with each other. Neither suggestion inspects packets or breaks applications, so I think they are both network neutral.
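A sketch of that grouping idea (the gateway names and the usage cutoff are made-up values):

HEAVY_GATEWAY = "gw-heavy.example.net"   # hypothetical gateway names
LIGHT_GATEWAY = "gw-light.example.net"
HEAVY_CUTOFF_GB = 100                    # assumed definition of "heavy"

def pick_gateway(last_month_gb):
    # The DHCP server hands out a default gateway based on the
    # subscriber's bandwidth total from the previous month.
    if last_month_gb >= HEAVY_CUTOFF_GB:
        return HEAVY_GATEWAY
    return LIGHT_GATEWAY

print(pick_gateway(250))   # gw-heavy.example.net
print(pick_gateway(12))    # gw-light.example.net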


starbolin

join:2004-11-29
Redding, CA

Dropping packets only increases congestion. Everything from the client to you has already spent cycles processing the packet, and now the packet has to be retransmitted. So what you end up accomplishing is increasing the load on downstream links in proportion to the percentage of packets you drop. This is the reason Comcast used the RST packets; dropping packets would only have increased the load on their concentrators.

In addition, it does nothing to address the fairness issue. The guy with one stream open waits on the dropped packet before reassembling the file, while the guy with multiple streams continues to receive data on his other streams.

There already exist many 'fair' protocols: Token Ring is fair, and so is ATM; those are two data-link layer protocols. DCCP is a more recently proposed transport-layer protocol designed to be 'fair'. To understand why TCP persists, you need to examine why some (Token Ring) fell by the wayside and why others (ATM) are used as carriers for TCP.

This whole debate is amusing to me because it's very reminiscent of the Ethernet vs. Token Ring debate of the '80s. Token Ring lost out for several reasons, including cost, flexibility, and performance. Although, to be fair, there are still many Token Ring installs out there, mostly in applications where latency is critical.



funchords
Hello
Premium,MVM
join:2001-03-11
Yarmouth Port, MA

reply to axus

said by axus:

But the TCP stream issue still comes into play when you have two people who are both maxing out their 5/1 connections. User #1 has 10 TCP streams at 500kbit/100kbit each; User #2 has 1 TCP stream at 5000kbit/1000kbit.

What happens when each of them drops a packet? One of User #1's streams drops to 250k/50k while the other nine stay at 500k/100k; User #2's whole stream drops to 2500k/500k. So #1 is now at 4750k/950k while #2 is at 2500k/500k. The TCP algorithm will ratchet them both back up, but packets will keep getting dropped.
THANK YOU!!! THANK YOU!!! Finally, someone who understands how this works!!

Two comments that I have to make quickly (time to go to an appt):

1. This "unfairness" only happens when the network is congested. What you're seeing here is a "keep the network alive" recovery method (without it, the network would grind toward an ever-slower stop). A healthy ISP or transit network does not run at congestion on a constant basis -- during the busiest hours of the busiest days, maybe a few times. So George Ou is concentrating on fixing a "fairness" problem that occurs during a network exception, which is a rather dumb thing to spend a lot of time on.

2. "Fairness" really isn't the problem at all. Regardless of how you slice the allocation during congested moments, the problem to solve is avoiding reaching that moment of congestion. Even if you implemented every suggestion that Bob B. and George are making, you will not have addressed ANY of the issues allegedly causing congestion in the first place.
--
Robb Topolski -= funchords.com =- Hillsboro, Oregon
"We don't throttle any traffic," -Charlie Douglas, Comcast spokesman, on this report.

axus

join:2001-06-18
Washington, DC
reply to starbolin

Well, I admit I'm not a network engineer, but I think dropping packets decreases congestion with current TCP stacks. Each client transmits more slowly when it sees a dropped packet; dropping packets is the standard way for a router to say "I'm getting too much traffic." Yes, there will be retransmits, but they will come more slowly, until the router no longer needs to drop packets.
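A rough model of that back-off, assuming the textbook additive-increase/multiplicative-decrease rule (all numbers are arbitrary units):

CAPACITY = 100             # link capacity, arbitrary units
rate = 1                   # sender's current rate
history = []
for rtt in range(300):     # 300 simulated round trips
    if rate > CAPACITY:    # router had to drop a packet
        rate //= 2         # multiplicative decrease: back off hard
    else:
        rate += 1          # additive increase: probe for more
    history.append(rate)
print(min(history[150:]), max(history))   # sawtooth between ~50 and ~101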

Delaying packets can be an alternative to dropping them, but I thought that buffering a packet while letting others through took more work than dropping it once or twice and passing the retransmit through on the second or third try. Sending RST packets for specific protocols seems to require another piece of hardware, breaks the standard, and is not "network neutral".

What is the fairness issue? Everyone should get the same proportion of throughput: the YouTube watcher, the guy downloading a CD from work, the lady emailing a bunch of pictures, and the BitTorrent sharer should all get the same total kbps. If the network can't support that, the dropped packets will cause them to slow down until it can. If the BitTorrent guy drops packets in proportion to the number of streams he has open, every one of his streams will slow down as much as the single-stream guy's.



funchords
Hello
Premium,MVM
join:2001-03-11
Yarmouth Port, MA
reply to starbolin

Everything you said, including the following, is true:

said by starbolin:

Dropping packets only increases congestion. Everything from the client to you has already spent cycles processing the packet, and now the packet has to be retransmitted. So what you end up accomplishing is increasing the load on downstream links in proportion to the percentage of packets you drop.
Yes, the traffic on the overall swarm increases and the throughput of the overall swarm drops. However, the traffic on the congested segment is relieved. The retransmit is delayed by a random interval, which is doubled again on each retransmit, so the pressure on that segment rapidly backs off. This behavior is quite desirable.
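In outline, the retransmit timing described above (the base delay and jitter range are illustrative, not taken from any particular TCP stack):

import random

delay = 1.0                   # assumed initial retransmit timeout, seconds
for attempt in range(1, 6):
    wait = delay * random.uniform(0.5, 1.5)   # random factor adds jitter
    print("retry %d after %.2fs" % (attempt, wait))
    delay *= 2                # doubled on each retransmit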

BitTorrent takes it a step further by ending the transfer to a peer or peers using the congested link, and trying a different peer from the list.
--
Robb Topolski -= funchords.com =- Hillsboro, Oregon
"We don't throttle any traffic," -Charlie Douglas, Comcast spokesman, on this report.


knightmb
Everybody Lies

join:2003-12-01
Franklin, TN
reply to funchords

Everyone has to keep in mind that this is from the standpoint of a single IP address. When you sit behind a NAT router with multiple computers using the Internet over an access point (DSL, T1, cable, etc.), all of the problems you experience are the limitations of NAT.

Try this same experiment with two separate IP addresses on the same link and you'll notice the problem goes away. Say you have a 1 MB/s up-and-down link (just for easy math). If you have 1 IP address with a NAT sitting in front of it burning all the available bandwidth, then yes, the issues of sharing come into play. But if the same link had 2 IP addresses sharing that 1 MB/s pipe, and 1 IP address was maxing out the link with 1 or 100 connections, the other IP address would still get exactly half of the bandwidth for its one single connection. TCP/IP is supposed to work properly from IP address to IP address, not from IP address to self.
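A sketch of the scheduling that would produce that even split, assuming the link hands out transmission slots round-robin per IP address rather than per TCP connection (the backlog numbers are made up):

# Two always-backlogged IP addresses on one link. Round-robin per IP
# gives each address half the slots, no matter how many TCP
# connections sit behind it.
backlog = {"ip1": 500, "ip2": 500}   # ip1: 100 connections, ip2: 1
sent = {"ip1": 0, "ip2": 0}
for _ in range(100):                 # 100 scheduling rounds
    for ip in backlog:               # one slot per IP per round
        if backlog[ip] > 0:
            backlog[ip] -= 1
            sent[ip] += 1
print(sent)   # {'ip1': 100, 'ip2': 100} -- an even split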

The NAT router is no different than one computer with one IP address having to determine what takes priority over what. When you don't have any traffic shaping or QoS on the NAT, then yes, it's first come, first served, because that's the whole limitation of using NAT to share multiple computers behind the same IP address.

When they wrote the TCP/IP specs decades ago, they didn't have to worry about NAT routers and how they change the rules.



funchords
Hello
Premium,MVM
join:2001-03-11
Yarmouth Port, MA

said by knightmb:

But if the same link had 2 IP addresses sharing that 1 MB/s pipe, and 1 IP address was maxing out the link with 1 or 100 connections, the other IP address would still get exactly half of the bandwidth for its one single connection.
What you are describing here is the proposal from Bob Briscoe and George Ou. I say proposal because it (reportedly) does not work the way you are suggesting it does.

Are you saying that you've run a test that shows otherwise? If so, please describe your test environment (some stacks behave differently than others). Maybe we can figure out why the results came out like that.

said by knightmb:

TCP/IP is supposed to work properly from IP address to IP address, not from IP address to self.
I don't understand this final line at all. Can you rephrase it?

said by knightmb:

When they wrote the TCP/IP specs decades ago, they didn't have to worry about NAT routers and how they change the rules.
Two responses to this:
1. TCP has definitely been revised since RFC 793. Insofar as each revision changes the protocol somewhat (a.k.a. an "update"), it's not exactly fair to say that NAT hasn't been considered.

2. (And now to contradict myself,) NAT is not yet an Internet Standard. Various implementations of NAT do not behave the same, and some do not play well together. So what NAT does, or what TCP does across a NAT device, probably varies.
--
Robb Topolski -= funchords.com =- Hillsboro, Oregon
"We don't throttle any traffic," -Charlie Douglas, Comcast spokesman, on this report.