starbolin
Member
join:2004-11-29
Redding, CA
to axus

Re: No Clue

Dropping packets only increases congestion. Everything from the client to you has already spent cycles processing the packet. Now the packet has to be retransmitted. So what you end up accomplishing is increasing the load on downstream links in proportion to the percentage of packets that you drop. This is the reason Comcast used the RST packets. Dropping packets would have only increased the load on their concentrators.
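For illustration only, here is a back-of-the-envelope sketch of the arithmetic behind this claim, assuming every dropped packet is eventually retransmitted end-to-end and ignoring (for the moment) TCP slowing down in response, which is the counterpoint raised below. The drop rates are hypothetical.

```python
# Rough arithmetic behind the "drops increase upstream load" argument:
# if a fraction p of packets is discarded at a congested hop and every
# dropped packet is retransmitted end-to-end, the expected number of
# transmissions per successfully delivered packet is 1 / (1 - p).
# This ignores the sender reducing its rate in response to the loss.

def expected_transmissions(drop_rate: float) -> float:
    """Expected sends per delivered packet, assuming independent drops."""
    return 1.0 / (1.0 - drop_rate)

for p in (0.01, 0.05, 0.10, 0.25):
    print(f"drop rate {p:.0%} -> "
          f"{expected_transmissions(p):.2f} transmissions per delivered packet")
```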

In addition, it does nothing to address the fairness issue. The guy with one stream open waits on the dropped packet before reassembling the file, while the guy with multiple streams continues to receive data on his other streams.

There already exist many 'fair' protocols: Token Ring is fair, ATM is fair, and both are data-link layer protocols. DCCP is a more recently proposed transport-layer protocol designed to be 'fair'. To understand why TCP persists, you need to examine why some (Token Ring) fell by the wayside and why others (ATM) are used as carriers for TCP.

This whole debate is amusing to me, as it's very reminiscent of the Ethernet vs. Token Ring debate of the '80s. Token Ring lost for several reasons, including cost, flexibility, and performance. Although, to be fair, there are still many Token Ring installs out there, mostly in applications where latency is critical.
axus
Member
join:2001-06-18
Washington, DC
Well, I admit I'm not a network engineer, but I think dropping packets decreases congestion with current TCP stacks. Each client transmits more slowly when it sees a dropped packet. Dropping packets is the standard way for a router to say "I'm getting too much traffic." Yes, there will be retransmits, but they will come more slowly until the router no longer needs to drop packets.
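As a rough illustration of the mechanism described here (TCP's additive-increase/multiplicative-decrease reaction to loss), a toy sketch follows; the window sizes and the `drop_rounds` schedule are invented for illustration and do not model any particular TCP stack.

```python
# Toy AIMD (additive increase, multiplicative decrease) sketch:
# the congestion window grows by one segment per round trip and is
# halved whenever a drop is observed, so senders collectively back off
# once a router starts discarding packets.

def aimd(rounds: int, drop_rounds: set[int]) -> list[float]:
    cwnd = 1.0            # congestion window, in segments
    history = []
    for r in range(rounds):
        if r in drop_rounds:
            cwnd = max(1.0, cwnd / 2)   # multiplicative decrease on loss
        else:
            cwnd += 1.0                 # additive increase per round trip
        history.append(cwnd)
    return history

print(aimd(20, drop_rounds={8, 14}))
```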

Delaying packets can be an alternative to dropping them, but I thought that remembering a packet while letting others through took more work than dropping it once or twice and passing it through the second or third time. Sending RST packets for specific protocols seems to require another piece of hardware, breaks the standard, and is not "network neutral".
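A minimal sketch of the trade-off mentioned above, under the assumption of a plain drop-tail queue: delaying a packet means the router must buffer ("remember") it until it can be sent, while dropping it when the buffer is full needs no bookkeeping beyond the buffer itself. The capacity and packet names are hypothetical.

```python
from collections import deque

# Drop-tail queue sketch: packets that fit in the buffer are delayed
# (and must be stored); packets arriving to a full buffer are simply
# dropped, with no further state kept about them.

class DropTailQueue:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buffer = deque()
        self.dropped = 0

    def enqueue(self, packet) -> bool:
        if len(self.buffer) >= self.capacity:
            self.dropped += 1          # no memory spent on this packet
            return False
        self.buffer.append(packet)     # delayed: must be remembered
        return True

    def dequeue(self):
        return self.buffer.popleft() if self.buffer else None

q = DropTailQueue(capacity=3)
for pkt in ["p1", "p2", "p3", "p4", "p5"]:
    q.enqueue(pkt)
print("buffered:", list(q.buffer), "dropped:", q.dropped)
```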

What is the fairness issue? Everyone should get the same proportion of throughput. The YouTube watcher, the guy downloading a CD from work, the lady emailing a bunch of pictures, and the BitTorrent sharer should all get the same total kbps. If the network can't support it, the dropped packets will cause them to slow down until it can. If the BitTorrent guy is dropping packets in proportion to the number of streams he has open, every stream is going to slow down as much as the single-stream guy's.
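One way to put numbers on the disagreement: if the bottleneck ends up splitting capacity roughly equally per flow (which is approximately what TCP does when all flows see similar loss and round-trip times), then a user's aggregate share scales with how many flows they open. The link capacity and flow counts below are hypothetical.

```python
# Back-of-the-envelope: under per-FLOW fair sharing, a user's aggregate
# share grows with the number of flows they open. Numbers are made up.

def per_user_share(link_kbps: float, flows_per_user: dict[str, int]) -> dict[str, float]:
    total_flows = sum(flows_per_user.values())
    per_flow = link_kbps / total_flows
    return {user: n * per_flow for user, n in flows_per_user.items()}

users = {"youtube_watcher": 1, "cd_downloader": 1,
         "email_sender": 1, "bittorrent_sharer": 5}
print(per_user_share(8000, users))   # the 5-flow user ends up with 5x the others
```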

funchords
Hello
MVM
join:2001-03-11
Yarmouth Port, MA
to starbolin
Everything you said, including the following, is true:
said by starbolin:

Dropping packets only increases congestion. Everything from the client to you has already spent cycles processing the packet. Now the packet has to be retransmitted. So what you end up accomplishing is increasing the load on downstream links in proportion to the percentage of packets that you drop.
Yes, the traffic on the overall swarm is increased and the throughput on the overall swarm is reduced. However, the traffic on the congested segment is relieved. The retransmit is delayed by a random delay interval which is again doubled on each retransmit. That way, the pressure on that segment rapidly backs off. This behavior is quite desirable.
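A minimal sketch of the retransmission backoff described here: the retransmit timer starts from some randomized interval and doubles on each successive retransmission of the same segment, so pressure on the congested segment falls off quickly. The base timeout, jitter, and cap below are illustrative values, not taken from any particular TCP implementation.

```python
import random

# Exponential backoff of the retransmit timer: the delay doubles on
# every successive retransmission of the same segment.

def retransmit_delays(attempts: int, base: float = 1.0, cap: float = 64.0) -> list[float]:
    delays = []
    timeout = base * random.uniform(0.5, 1.5)   # randomized initial interval
    for _ in range(attempts):
        delays.append(min(timeout, cap))
        timeout *= 2                            # double after each retransmit
    return delays

print([round(d, 2) for d in retransmit_delays(6)])
```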

BitTorrent takes it a step further by ending the transfer to a peer or peers using the congested link, and trying a different peer from the list.
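A simplified, hypothetical sketch of that peer-switching behavior (not actual BitTorrent client code): if a transfer from a peer stalls for too long, drop that connection and try another peer from the list. The threshold and peer names are invented for illustration.

```python
# Hypothetical "give up on a congested peer and try another" sketch:
# peers that have delivered nothing for too long are marked stalled,
# and the client moves on to the next peer from its list.

STALL_LIMIT_SECONDS = 60   # hypothetical threshold

def pick_next_peer(peers: list[str], stalled: set[str]) -> str | None:
    """Return the first peer that has not been marked as stalled."""
    for peer in peers:
        if peer not in stalled:
            return peer
    return None

peers = ["peer_a", "peer_b", "peer_c"]
stalled = {"peer_a"}                      # transfer from peer_a stalled out
print("switching to:", pick_next_peer(peers, stalled))
```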