SuperWISP
join:2007-04-17
Laramie, WY


SuperWISP to jig

Member


Re: News: Comcast to end P2P interference

said by jig:

Maybe that's what it meant in the context of dialup, and I suppose they could have had errors in their early marketing, but I don't think so.
I do. It's completely irrational and impossible for someone to expect to download an infinite number of bytes. What's amazing is how spoiled some P2Pers are. Not only do they want to break the law with impunity and steal all the intellectual property they want; they have the audacity to want to take infinite bandwidth from the ISP and slow down legitimate users as they do so. On the other hand, the lack of consideration inherent in the former sort of suggests that they wouldn't be considerate about the latter.
said by koitsu:

You should take the time to examine commonly-used programs; you will find many of them DO accept incoming connections "from all and sundry". Here's a very short list I made back in October (in this thread nonetheless!):
There was nothing in your list. I guess that means that you do not believe that any programs act as servers?

Seriously: The YIM client is just that: a client. It connects to Yahoo's server.

koitsu
MVM
join:2002-07-16
Mountain View, CA
Humax BGW320-500


said by SuperWISP:

There was nothing in your list. I guess that means that you do not believe that any programs act as servers?

Seriously: The YIM client is just that: a client. It connects to Yahoo's server.
There was nothing in the list? Yeah, I guess all that text is invisible, and funchords gave me a thumbs-up for no particular reason.

"" is right.

twistedgamer
Wasn't Me
join:2001-12-26
Mesa, AZ

twistedgamer to SuperWISP

Member

It's completely irrational and impossible for someone to expect to download an infinite number of bytes. What's amazing is how spoiled some P2Pers are. Not only do they want to break the law with impunity and steal all the intellectual property they want; they have the audacity to want to take infinite bandwidth from the ISP and slow down legitimate users as they do so. On the other hand, the lack of consideration inherent in the former sort of suggests that they wouldn't be considerate about the latter.
There was nothing in your list. I guess that means that you do not believe that any programs act as servers?

Seriously: The YIM client is just that: a client. It connects to Yahoo's server.
Oh my god it's people/companies like you that are ruining this country.

There is so much dark fiber in this country it's not even funny. Capacity isn't, and never was, the problem!!!!

Giving up that big fat Gov incentives tit is.

Providers know that if they actually start using what's out there (many thousands of miles of fiber they got incentives to lay), the Gov is going to take away its incentives tit because there's no reason in hell to give a perfectly healthy industry help by paying for the same service twice.

By the way, look up the definition of a server. Roughly 80% or more of today's applications are some form of a server+client and have been for quite a few years.

jig
join:2001-01-05
Hacienda Heights, CA

jig to SuperWISP

Member

said by SuperWISP:

It's completely irrational and impossible for someone to expect to download an infinite number of bytes.
i think you need to read my post more carefully. the part about being limited by the available speed, just like all the cell providers currently contract for.

as far as being considerate, if my isp was a co-op, then maybe. even then, if the advertising is: unlimited! 24/7! downloads! games! streaming media! 10Mbit! (oh, and we don't guarantee our advertised speed, just our Best Efforts(tm). also, upload is 512k)!, then i'm going to use it exactly as advertised.

funchords
Hello
MVM
join:2001-03-11
Yarmouth Port, MA


funchords to SuperWISP

MVM


Re: A Little About SuperWISP

All,

Let me start by saying that no matter what I post, Brett (SuperWISP) will disagree. He considers me a liar.

Brett operates Lariat.net, a wireless ISP that competes with wireline service in Laramie. More importantly, his business serves people in the surrounding area who normally would have no Internet options outside of dialup.

Someone mentioned a cooperative, above. Interestingly, Lariat.net started as a cooperative. It went private in 2003. What Brett and his group did is nothing short of wonderful Internet history:
Lariat traces itself back to a computer users' group that decided, in the early 1990s, to take into its own hands the problem of good Internet access in a city of only 25 000 residents. After setting up a bank of dial-up modems, Glass and his friends considered the needs of Laramie's small businesses.

At the time, a standard U.S. Internet connection cost $3500 a month, buying a data rate of about 1.5 Mb/s. Lariat found it could buy radios and provide 2-Mb/s connectivity for a one-time fee of $3500 and monthly charges of $600 (nowadays, only $125 for setup and $125 per month). Thus in 1994, when most people were first hearing of the Web and PC modems were just breaking the 28-kb/s barrier, Lariat was offering wireless Internet access at data rates that are still unavailable to many people today.

»www.buffalowireless.net/ ··· less.htm Broadband A Go-Go, Steven M. Cherry; IEEE Spectrum archive Volume 40 , Issue 6 (June 2003)
Brett is extremely prejudiced against all P2P-distributed services. This includes file sharing and Skype. It probably doesn't include simple P2P connections like FTP and HTTP. Facts about network impacts and efficiency matter less to Brett than the principle of the thing: Brett feels that distributing processing and bandwidth among users actually steals from their ISPs.

Most of all, Brett is extremely concerned that government Network Neutrality actions might ruin his business. His position is that any limit on what any operator can do risks the years of personal investment as well as the livelihoods of his employees.

My advice: do not try to change his mind, you cannot. He is also on a campaign to prevent any kind of government action. But learn from Brett -- he has "feet on the street" experience and his business has different dynamics than traditional wireline services like Cable/DSL/FIOS.

I'm just a user here, and the above is simply my opinion and interpretation of the situation. I've been reading Brett's messages for months. Hopefully this message helps those meeting Brett for the first time. Brett's entitled to his own opinions, and you are to yours. And while it may look like I'm speaking or advocating for Brett in the words above, remember that he considers me a liar who is intent on destroying his business. I am not.

Robb Topolski

PS: Brett and I are not on good terms, as you can imagine. Rather than inflame debate, I generally just let him have the floor alone. MY SILENCE DOES NOT INDICATE AGREEMENT, DISAGREEMENT, OR LACK OF A DIFFERENT VIEW.

SpaethCo
Digital Plumber
MVM
join:2001-04-21
Minneapolis, MN


SpaethCo to twistedgamer

MVM


Re: News: Comcast to end P2P interference

said by twistedgamer:

There is so much dark fiber in this country it's not even funny. Capacity isn't and was never the problem!!!!
... and how much of that fiber runs directly to your house?

While it is true there is a massive supply of cheap bandwidth available for backbone carrier connectivity, the capacity limitations on the last mile are very real.

NormanS
I gave her time to steal my mind away
MVM
join:2001-02-14
San Jose, CA

NormanS to twistedgamer

MVM

I don't doubt that there is a lot of fiber out there. OTOH, I know that there is no fiber within 3,000 feet of my premises; though both AT&T and Comcast have fiber in the neighborhood.

funchords
Hello
MVM
join:2001-03-11
Yarmouth Port, MA


Vuze Response: BitTorrent Inc. Does not Represent Us

From the Vuze Blog, Jay Monahan, General Counsel for Vuze, explains that BitTorrent, Inc. does not represent Vuze nor its industry; nor do side (non-)agreements made by Comcast control the behavior of other Cable Internet companies. (Agreements made by Comcast hardly control the behavior of Comcast! [see pg 10])

Key excerpts from Vuze...
For years, Comcast engaged in definitional gymnastics by denying that it was blocking “particular companies or applications,” but all the while it was engaging in “man-in-the-middle” attacks intended to interfere with seeding activities of all bit-torrent protocol based applications, like Vuze.
BitTorrent Inc. itself represents only a fraction of the bit-torrent-based applications being used today, and has no control over the many millions of bit-torrent based applications on desktop computers around the world. I have little doubt that Comcast wanted its announcement to be perceived as a sort of universal resolution of its differences with the bit-torrent world, but nothing could be further from the truth.
When we filed our petition for rulemaking with the FCC in November, 2007, we stated that both regulation and meaningful industry cooperation are necessary to protect consumer rights and foster innovation. We still believe that. Whether you believe that Comcast’s cozying up to BitTorrent, Inc. arises out of genuine enlightenment or is just a publicity stunt, in my view it changes nothing in terms of our original Petition.
mudtoe
join:2005-10-09
Cincinnati, OH

mudtoe to NormanS

Member


Re: News: Comcast to end P2P interference

It seems to me that what this all boils down to is a "disclosure" issue, and the real question is: is Comcast's advertising at odds with the fine print (i.e. the TOS agreement) and with their actions (e.g. spoofing RST packets as coming from other users)?

It does seem to be the case. From my point of view one thing that the FCC, perhaps combined with the FTC, could and should do is to require all the ISPs to be completely up front and honest about what you are buying when you sign up.

Unfortunately, for whatever reason, the whole relationship between telecommunications providers and consumers is based on advertising one thing, and delivering another. Just look at the cell phone business. They tout one price, but when you get your bill there are lines and lines of fees added that can make your bill 30-40% higher than the rate you were quoted. Some of the most complicated software programs in the world are the telecom companies’ billing system applications, which are designed to allow all kinds of “add-on” fees to be put in the bill.

What should happen here, IMHO, is that all these telecommunications providers should be required to state up front what limitations, both bandwidth usage wise, and application wise (e.g. running servers, bit torrent, etc.), are placed on the service. Also, any prices quoted have to be the "out the door" price, not some base price that gets a ton of fees attached later. All of these companies are doing everything possible to prevent the consumer from comparing apples to apples when choosing a provider. Obfuscation, misdirection, and word-smithing nuance are the name of the game today with regard to how these companies communicate with their customers. That’s what the FCC should be addressing.

mudtoe

funchords
Hello
MVM
join:2001-03-11
Yarmouth Port, MA


Comcast CTO: We'll throttle the top 0.5% to 2% @ heavy peaks

Om Malik writes in GigaOM that even he had trouble digesting whatever it was Comcast was dishing last Thursday. On Friday, he asked Tony Werner directly about it. In the end, Malik -- who has been covering Broadband and Telecom for years -- isn't buying the lip service.

Key excerpts:
In describing the deal, Comcast tried to shift the focus away from their so-called "network management" -- and by extension, the limitations of their network that prompted them to resort to traffic manipulation in the first place.
Werner said: Historically we had looked at a basket of P2P protocols during peak load times and would slow them down. In the new approach, we don't do this any more. In short, no P2P blocking!
...between one half and two percent of Comcast's customers can be described as bandwidth hogs ... According to Werner, the company is currently experimenting with software (including that from Sandvine) that would allow them to fractionally de-prioritize the traffic from these bandwidth hogs during peak load times, while at other times, leaving them alone.
Comcast will not discriminate against any protocol, but bandwidth baddies are going to be the ones to suffer. Or at least that's what I took away from our conversation.
Although a company spokesperson assured us Comcast will be clear and transparent with anything related to traffic management, my skepticism stems from Comcast's past actions. When it comes to traffic management, the Philadelphia-based operator has a checkered past.
(...to the article...)

Sunny
Runs from Clowns

join:2001-08-19

Sunny to funchords


Re: Comcast is using Sandvine to manage P2P Connections

Please see this post ---> »Richard Bennett: It'll be like DSL, only Faster

Karl Bode
News Guy
join:2000-03-02


Didn't Bennett spend the last part of the decade working for Cisco as a consultant on how to treat P2P traffic like a second class citizen? I've always found his anti-net-neutrality positions to be a little too enthusiastic to not be motivated by financial interest....

I feel like he's only slightly better when he ghost writes for George Ou over at the bankrupt ZDNet. I think it's odd that the Reg, which traditionally is sarcastic and the first to see through industry PR smokescreens, has decided to give Bennett a podium.

Unless they're thinking that his Ann Coulter-style techno-evangelism gets hits through controversy....(which works -- see John Dvorak mock Apple sometime)

Like we need another voice in the tech media whose sole function is to essentially reconstitute industry press releases. Annoying.

All we have so far is a press release and a promise from a company that's spent the last year lying. Yes, Richard. What a "win-win". I'm over-encumbered by victory!

funchords
Hello
MVM
join:2001-03-11
Yarmouth Port, MA


More News and Views -- Comcast and Blocking

Dear David L. Cohen,

Stop saying that Comcast only "delayed uploads at peak times" ...

1. You're STILL doing it.

2. Comcast use of the forged RST packets "delays" BitTorrent uploads in the same way as blowing up a bridge delays a car from reaching its destination. It might get there eventually, or perhaps not at all, but definitely not by the manner intended!

3. And unless your definition of "peak times" includes 24/7, then someone is lying to you.

Robb Topolski

Link #1: Comcast Miffed at FCC Chairman Quips: »www.broadcastnewsroom.co ··· d=347174

Story #2: FCC Continues to Probe Comcast Broadband Network Management, 28 March 2008, Warren's Washington Internet Daily (no link)

Excerpt:
The FCC hasn't suspended an inquiry into complaints that Comcast blocked peer-to-peer file transfers via BitTorrent, said four agency officials. A Thursday settlement between the cable operator and the P2P software maker hasn't prompted Chairman Kevin Martin's office to tell other commissioners of any changes to the inquiry, said agency officials. "The complaint against Comcast is pending," said an FCC spokesman. "We are following that complaint as we do any other." But Commissioner Robert McDowell said the settlement "obviates the need for any further government intrusion into this matter."

The agency still plans an April 17 hearing at Stanford University in Palo Alto, Calif., on network management, agency sources said.

SpaethCo
Digital Plumber
MVM
join:2001-04-21
Minneapolis, MN


said by funchords:

2. Comcast use of the forged RST packets "delays" BitTorrent uploads in the same way as blowing up a bridge delays a car from reaching its destination.
We need to come up with better analogies, because this one and nearly all of the existing ones (including my previous contributions) absolutely suck.

At the rate of these analogies we're going to have people picketing outside of central offices everywhere protesting for the sanctity of packets and the TCP session's right to life.

koitsu
MVM
join:2002-07-16
Mountain View, CA
Humax BGW320-500


said by SpaethCo:

said by funchords:

2. Comcast use of the forged RST packets "delays" BitTorrent uploads in the same way as blowing up a bridge delays a car from reaching its destination.
We need to come up with better analogies, because this one and nearly all of the existing ones (including my previous contributions) absolutely suck.

At the rate of these analogies we're going to have people picketing outside of central offices everywhere protesting for the sanctity of packets and the TCP session's right to life.
I thought my analogy of the situation was pretty accurate (and I haven't seen any news site, blogger, or general community individual use it -- though I have seen similar analogies, but they're always way off base).

The analogy I used was something like this:

"Imagine making a telephone call to Stan, an acquaintance of yours. Stan's phone rings and he answers. You talk for a few seconds before your telco carrier decides they don't like what it is you're discussing and decides to sever the call. You hear what sounds like Stan rudely saying goodbye, and Stan hears what sounds like you rudely saying goodbye, even though neither of you said any such thing.

You're free to call Stan again, but the situation will continue to happen indefinitely until the telco carrier stops intervening."

SpaethCo
Digital Plumber
MVM
join:2001-04-21
Minneapolis, MN


said by koitsu:

"Imagine making a telephone call to Stan, an acquaintance of yours. Stan's phone rings and he answers. You talk for a few seconds before your telco carrier decides they don't like what it is you're discussing and decides to sever the call. You hear what sounds like Stan rudely saying goodbye, and Stan hears what sounds like you rudely saying goodbye, even though neither of you said any such thing.

You're free to call Stan again, but the situation will continue to happen indefinitely until the telco carrier stops intervening."
There are a couple of problems with that analogy.

First off, you're assigning layer-7 (application layer -- voice, in this instance) attributes to the teardown. The P2P app makes an open-socket call to the TCP/IP stack of the OS, and it's the protocol stack, *NOT* the application, that initiates the TCP SYN / session and handles the ACKing and windowing -- the application never hears any kind of shutdown message when the TCP session is reset. It will get a notification that the socket has been closed, but that happens for any number of reasons and doesn't imply responsibility for who made the connection go away. In your phone call analogy, this would be the equivalent of each party hearing a "click" and assuming the other hung up -- but not actually hearing either side say "Goodbye".

The second issue is that everyone is attributing single-connection relationships to the situation without looking at the big picture. In the case of P2P, you're not talking about a single phone call between you and Stan. You get a list of connections to establish from the tracker, the same way a telemarketer is given an autodialer with a programmed set of numbers to call. If you get hung up on, you move to the next number or connection and take a shot there.
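That move-to-the-next-peer behavior is trivial for a P2P client to implement; here's a hypothetical sketch (`try_peer` stands in for whatever connect-and-download routine a client would actually use):

```python
def fetch_from_peers(peers, try_peer):
    # P2P resilience in miniature: if one connection is reset,
    # just move on to the next peer from the tracker's list.
    for peer in peers:
        try:
            return try_peer(peer)
        except ConnectionError:
            continue
    return None  # every peer failed
```

This is why resetting any single connection "delays" a swarm about as much as one hang-up delays an autodialer.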

funchords
Hello
MVM
join:2001-03-11
Yarmouth Port, MA

funchords to SpaethCo

MVM

said by SpaethCo:
said by funchords:

2. Comcast use of the forged RST packets "delays" BitTorrent uploads in the same way as blowing up a bridge delays a car from reaching its destination.
We need to come up with better analogies, because this one and nearly all of the existing ones (including my previous contributions) absolutely suck.

At the rate of these analogies we're going to have people picketing outside of central offices everywhere protesting for the sanctity of packets and the TCP session's right to life.
Oops, fair enough, I see what you're saying. I didn't think of the violence aspect.

"Comcast use of the forged RST packets 'delays' BitTorrent uploads in the same way as a forged ROAD CLOSED sign delays a car from reaching its destination."

I think it's apt. Likenesses: In BitTorrent, if there are no other sources, the transfer fails. In traffic, if there are no available alternate routes, the trip fails.
funchords

funchords to SpaethCo

MVM

First off, you're assigning layer-7 (application layer, Voice in this instance) attributes to the teardown. The P2P app makes an open socket call to the TCP/IP stack of the OS, and it's the protocol stack *NOT* the application that initiates the TCP SYN / session and handles the ACKing and windowing -- it never hears any kind of shutdown message when the TCP session is reset. It will get a notification that the socket has been closed, but that happens for any number of reasons and doesn't imply responsibility for who made the connection go away.
With respect, I disagree. The first error is "Connection reset by peer." This is the error you see caused by the forged packet allegedly from the other system. The second error is "Software caused connection abort." I haven't debugged this one except behaviorally, but I think it actually happens when the stack detects an ACK to a RST it didn't send. In Winsock, these are errors 10054 and 10053, respectively. Those signals aren't sent for a typical socket close using the FIN flag.
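The "Connection reset by peer" case is easy to reproduce without any ISP involved -- a toy sketch (Python on a POSIX stack; this only illustrates what the error looks like, not Comcast's injection mechanism). Setting SO_LINGER with a zero timeout makes close() emit a RST instead of the normal FIN handshake, and the other end's next recv() fails with exactly that error:

```python
import errno
import socket
import struct
import time

def rst_demo():
    # Server and client talking over loopback.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    cli = socket.socket()
    cli.connect(srv.getsockname())
    conn, _ = srv.accept()
    # SO_LINGER with a zero timeout makes close() send a RST
    # rather than the normal FIN teardown.
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))
    conn.close()
    time.sleep(0.1)  # let the RST arrive
    try:
        cli.recv(1024)
        return None
    except ConnectionResetError as e:
        return e.errno  # ECONNRESET: "Connection reset by peer"
    finally:
        cli.close()
        srv.close()
```

A graceful FIN close would instead make recv() return empty with no error at all, which is why the reset errors stand out in application logs.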
funchords


funchords

MVM

Open letter to Comcast: My (Mostly) Technical Analysis

In reaction to this story: »Comcast VP: We've Admitted Nothing

Attached is my letter to Comcast Executive Vice-President David Cohen. It was sent today via e-mail and is being filed in the FCC comment system (as was Cohen's letter).

In the letter, I analyze Cohen's statement as a whole and in a word-for-word or phrase-for-phrase fashion. Hopefully this will clear things up for this executive.

[1] »tinyurl.com/6mspye -- FCC Chairman Martin's Press Release
-- (links to »hraunfoss.fcc.gov/edocs_ ··· 65A1.pdf)

[2] »tinyurl.com/5mr7dm -- Comcast VP Cohen's Response -- (links
to »gullfoss2.fcc.gov/prod/e ··· 19869393)

[3]»tinyurl.com/5p3mbu -- My Response to Comcast VP Cohen --
(links to »fjallfoss.fcc.gov/prod/e ··· 19870563)

Robb Topolski
funchords


funchords

MVM

Comcast to spend 9 months Reinventing the Wheel

I'm having a nice conversation with a DSLReports member, who pointed me to a /. conversation at ARPANET Co-Founder Calls for Flow Management. (There is some interesting history about ARPANET flow management being very different in need and implementation than the Internet, but that's beside the point.)

My own take is that current TCP congestion control -- with no "value added" devices -- is already fair. Drop fake prioritization. Then which packets drop at congestion becomes a random-number game.

Imagine a blind man throwing rocks at passing vehicles on a congested freeway: He is more likely to hit 18-wheel trucks than VW Bugs. If those tractor-trailer vehicles are double-trailer or triple-trailer, he's certain to hit them!

This is the way that congestion control on TCP works today. The blind man is the packet-dropping algorithm. Small, irregularly spaced VOIP and gaming packets are less likely to suffer -- not because of technology, but because of the physics. On the other side of the coin, large steady flows are almost certain to lose a packet -- they're practically impossible to miss.

As implemented today, when a TCP packet is lost, the sender assumes the net is congested and dramatically reduces speed and relaxes timeouts. This rapid back-off continues until there are no more signs of congestion. This way, the net congestion is not maintained by retransmissions.

The net result is THE SAME BEHAVIOR that Comcast described as a replacement for Sandvine's RST interference: the biggest users of bandwidth at that moment are cut back.
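The claim that plain TCP back-off converges on a fair split can be checked with the classic additive-increase/multiplicative-decrease toy model (my own illustration, not anything from Comcast's filings): however unequal two flows start out, every shared congestion event halves the gap between them.

```python
def aimd(a=90.0, b=10.0, capacity=100.0, rounds=2000):
    # Two competing flows on one link. Each round both add one unit of
    # rate (additive increase); when combined demand exceeds capacity,
    # both halve (TCP's multiplicative decrease on packet loss).
    for _ in range(rounds):
        if a + b > capacity:
            a, b = a / 2, b / 2
        else:
            a, b = a + 1, b + 1
    return a, b
```

Starting at a lopsided 90/10 split, the two rates end up nearly equal -- no middlebox required, which is the point of the post above.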

It exists today. It is built into TCP. The technology with which Comcast plans to replace forged RSTs will eventually do something that already happens naturally -- no new technology required. So why do they need 9 months to "implement" what is already there?

I know that people forget this -- they're so concerned with ESTABLISHING fairness that they have no idea what happens right now.
funchords

funchords to Karl Bode

MVM


Re: Comcast is using Sandvine to manage P2P Connections

said by Karl Bode:

Didn't Bennett spend the last part of the decade working for Cisco as a consultant on how to treat P2P traffic like a second class citizen? I've always found his anti-net-neutrality positions to be a little too enthusiastic to not be motivated by financial interest....
I'd like to know, too.

He is being a lot more obvious about his insider status. He seems to have unencumbered access to any Comcast VIP that he wants.

SpaethCo
Digital Plumber
MVM
join:2001-04-21
Minneapolis, MN


SpaethCo to funchords

MVM


Re: Comcast to spend 9 months Reinventing the Wheel

said by funchords:

My own take is that current TCP congestion control -- with no "value added" devices -- is already fair. Drop fake prioritization. Then what packet drops at congestion becomes a random-number game.
The problem with these arguments is that you are directly extrapolating from a single occurrence to make a conclusion about the interactions of a system of connections. There's a reason that the scientific community makes the distinction between psychology and sociology when it comes to studying the behavior of humans; people in groups often exhibit completely different behaviors than they would individually. While the interactions of network protocols aren't nearly as complex as those of humans, there are variables that come about when multiple sessions interact that you cannot deduce by simply looking at the case of a single session.

I encourage you to do a bit more research on queuing theory. There have been numerous books and articles written that detail what concessions routers need to make to help TCP out. In particular, I suggest researching why fair-queuing (and subsequently weighted fair-queuing) were developed to replace FIFO queuing. Also, I recommend researching the problem of "tail drop" and why Random Early Discard was created to alleviate it.
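For a feel of what RED adds over tail drop, here is a minimal sketch of its drop-probability curve (the threshold and probability values are made up for illustration): below a minimum average queue depth nothing is dropped, between the thresholds the drop probability ramps up linearly, and above the maximum everything is dropped -- so senders start seeing loss, and backing off, before the queue is actually full.

```python
def red_drop_prob(avg_queue, min_th=5.0, max_th=15.0, max_p=0.1):
    # Random Early Detection: drop probability as a function of the
    # *average* queue depth, so short bursts aren't punished but
    # sustained congestion is signaled early.
    if avg_queue < min_th:
        return 0.0            # queue healthy: no early drops
    if avg_queue >= max_th:
        return 1.0            # queue effectively full: drop everything
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```

Tail drop, by contrast, is the degenerate case of this curve: probability 0 until the queue is full, then 1 -- which is what synchronizes all the flows' back-offs at once.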

In any case, all of the TCP flow control features including the queuing features developed to influence their operation are designed with the primary goal of attempting to create equality between TCP sessions. The problem is not that TCP doesn't balance, it's that applications disproportionately create TCP sessions to boost throughput. Again, TCP session performance is a function of bandwidth capacity, TCP window size, round trip time, and connection reliability. As long as bandwidth per TCP session is being constrained to be less than the total bandwidth capacity by the other factors, you can continue to get higher throughput by adding additional TCP sessions. Download accelerators have been around for at least a decade now, and that's the principle they work on: split your download into multiple TCP flows to boost throughput.
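The per-flow (rather than per-host) nature of that fairness is simple arithmetic; a hypothetical sketch: if a bottleneck splits its capacity evenly across TCP flows, a host opening ten flows collects ten times the share of a host opening one.

```python
def per_host_share(flows_per_host, capacity_mbps=10.0):
    # TCP fairness is per *flow*: every flow gets an equal slice of
    # the bottleneck, so a host's total share scales with flow count.
    total_flows = sum(flows_per_host.values())
    per_flow = capacity_mbps / total_flows
    return {host: n * per_flow for host, n in flows_per_host.items()}
```

With one FTP session against ten on a shared 10 Mb/s bottleneck, `per_host_share({"you": 1, "me": 10})` gives "you" roughly 0.9 Mb/s and "me" roughly 9.1 Mb/s -- the download-accelerator trick in one line of arithmetic.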

Latency increases as the line becomes more congested, creating a noticeable impact on TCP sessions well before full congestion is reached. If I have a single FTP session going and you have 10 FTP sessions going, even if our connections are provisioned identically your connection will consume more bandwidth as traffic increases on the links between us and our target server.
said by funchords:

The technology that Comcast plans to replace forged RSTs for will eventually do something that already happens naturally -- no new technology required.
This statement is, of course, completely false. If that were true companies like Sandvine would not have a marketable product, and companies like Comcast wouldn't be investing money into trying to solve a problem that according to your statement does not exist.
said by funchords:

It exists today. It is built in to TCP. So why do they need 9 months to "implement" what is already there?
The TCP mechanisms for throttling will continue to be a critical element of the equation; the magic is really in how to effectively trigger those mechanisms to achieve fairness per end-station and not simply per-flow. If Comcast is devising a method to improve fairness by reducing bandwidth capacity to heavy users to counteract the advantage of multiple TCP sessions, that could indeed take some time to develop, test, and roll out.

funchords
Hello
MVM
join:2001-03-11
Yarmouth Port, MA


Hey, I'm all for RFC 2474 (DiffServ) networks -- with Random Early Drop (RED), weighted (WRED) to provide Quality of Service (QoS) -- as long as the end-point users and/or applications set the flags. I want to get the ISPs out of the business of using DPI for routing. My theory is that if we can push the industry into QoS, the only thing Service Providers will need DPI for is to investigate whether a user is abusing the system. Cable ISPs quickly detect and clamp down on uncapping the modem. There will be definite signs of DSCP-setting abuse, and yes, policing will be necessary because some people truly do abuse the system.
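For what "setting the flags" involves at the end point: on most stacks an application can mark its own packets by writing the DSCP bits through the old TOS socket option. A minimal sketch (Python with Linux-style socket options; EF = 46 is the standard Expedited Forwarding code point used for voice):

```python
import socket

EF = 46  # Expedited Forwarding, the usual DSCP code point for voice

def marked_udp_socket(dscp=EF):
    # DSCP occupies the upper six bits of the former IPv4 TOS byte,
    # so the value passed to IP_TOS is the code point shifted left 2.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return s
```

Whether the ISP honors those bits, or re-marks them at its edge, is of course exactly the policy question being argued in this thread.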

If this is what Comcast eventually does, then color me happy. Meanwhile, turn off Sandvine anyway because what we would have as a result is incrementally better.
said by SpaethCo:

Latency increases as the line becomes more congested, creating a noticeable impact on TCP sessions well before full congestion is reached. If I have a single FTP session going and you have *[inserted: 3 or 4] FTP sessions going, even if our connections are provisioned identically your connection will consume more bandwidth as traffic increases on the links between us and our target server.
*(I'm going to use 3-4 because George Ou abused this argument and his example involves 10 P2P connections. The comparative number that he should have used in his example is 3-4. 10 would probably work, too, but I'm not allowing George Ou to steer the argument.)
When the network is not congested, the 1 flow task and the 3-4* flow task will perform relatively equally well. (Assuming no retransmits, the only difference is in the amount of overhead).

Do we agree so far?

As the route reaches a congested state, I agree with you. However, I hold that this moment of "unfairness" is brief; the amount of "backpedal" by P2P TCP connections is heavier by comparison.

Let's assume that at one particular moment a VOIP packet and a P2P packet each get dropped during a minute of congestion. Both are retransmitted.
  • The retransmitted VOIP packet is still small, and still fits into its original window size.

  • The file-transfer packet is reassembled into smaller packets, all of which need to make it to the far end undamaged.

I'd place my bets on the retransmitted VOIP packet making it. On the retransmitted file-transfer packet, my money would be on a repeat of the packet drop.

As a result, the VOIP flow had a momentary speed drop to 50%. The P2P flow's speed will drop to 25% before another retransmission is attempted.
[download accelerators] disproportionately create TCP sessions to boost throughput
C'mon, how many users use these regularly? Download for download, my guess is less than five percent. They're usually not integrated very well, and (again, my guess) users would rather keep their browser-initiated downloads in the browser. (I'm biased though, I don't like using them, either.) My memory is that their boost of throughput was because they took a bigger fraction of the web site's uplink pipe -- it had nothing to do with taking advantage of congestion-control algorithms. These accelerators ALWAYS seem to piss off the webmasters. I've never seen a complaint by a service provider about them.

SpaethCo
Digital Plumber
MVM
join:2001-04-21
Minneapolis, MN

said by funchords:

Hey, I'm all for RFC 2474 (DiffServ) networks -- with Random Early Drop (RED), weighted for Quality of Service (WRED) -- as long as the end-point users and/or applications set the flags.
RED was deployed on networks long before any kind of meaningful differentiated service started rolling out.

Having end-users mark their packets is a pipe dream that won't become a reality anytime soon; in fact, I'd argue it will never come into play. You're talking about introducing 1 metric crapload of complexity into the broadband configuration for end-users to produce what will amount to absolutely no benefit for any user except the rare few who take the time to set it up correctly.

The DSCP-to-subscriber-service values are defined by each service provider, so there would also likely be different configurations between broadband providers. (AT&T and Verizon business use different DSCP values for their Silver and Bronze classes on their MPLS service offerings; only the Gold class is common, with a marking of EF.) The markings would also be reset once they hit the Internet border router, as no Internet carrier will honor them. Companies are more likely to use this level of differentiation to support the products they sell (i.e., preference their packaged voice and video services above their HSI service traffic), as they control the end stations and can trust the marking, and it would make business sense for them to implement such a system.
said by funchords:

I want to get the ISPs out of the business of using DPI for routing.
Then you'll need to prepare yourself to pay more. You can get a completely untouched network connection dropped to your house today, the only thing is you're going to have to pay a considerable amount of money more for it.
said by funchords:

said by SpaethCo:

Latency increases as the line becomes more congested, creating a noticeable impact on TCP sessions well before full congestion is reached. If I have a single FTP session going and you have *[inserted: 3 or 4] FTP sessions going, even if our connections are provisioned identically your connection will consume more bandwidth as traffic increases on the links between us and our target server.
When the network is not congested, the 1 flow task and the 3-4* flow task will perform relatively equally well. (Assuming no retransmits, the only difference is in the amount of overhead).

Do we agree so far?
Not at all.

Most users don't tweak their TCP window size anywhere near as large as it should be. As such, each TCP session tends to be artificially limited to less than what the line is capable of. If you were to take a random sampling of machines from the Internet (particularly Windows boxes or older Linux boxes), you would definitely see a stark contrast between 1 flow and 3-4 flows.
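A quick back-of-the-envelope sketch of that window cap (the 64 KB window and 50 ms RTT are illustrative assumptions, not measurements): a single TCP flow can never move more than one window of data per round trip.

```python
# Single-flow TCP throughput is bounded by window_size / RTT,
# no matter how fast the line is. Numbers are illustrative only.

def max_throughput_mbps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on one TCP session's rate: one full window per RTT."""
    return window_bytes * 8 / rtt_seconds / 1_000_000

# The classic untuned 64 KB window over a 50 ms path:
print(max_throughput_mbps(65536, 0.050))  # ~10.5 Mbps
```

On a line faster than that, 3-4 such flows together can fill capacity that a single untuned flow tops out short of.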

Try this out: Find a server in a data center that has significantly more capacity than you have at home. Take 2 completely independent boxes attached to your network and download from the same server. You'll find that with equal network characteristics between the boxes, the traffic will divide between the boxes nearly equally. Run the same test except initiate 2 instances of the download on one of the boxes, and you should see it consume 2/3rds of your total downstream capacity. The amount of bandwidth each TCP session tends to settle on is (Total Bandwidth / # of TCP sessions), so with more sessions you have the potential to absorb a bigger fraction of the available bandwidth.
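The experiment's arithmetic can be sketched like this (the 6 Mbps bottleneck is just an example rate; TCP's rough per-session fairness is the assumption being illustrated):

```python
# Sketch of per-session fair share: TCP tends to divide a bottleneck
# roughly equally per *session*, so a host opening more sessions
# claims a bigger slice. Illustrative arithmetic, not a measurement.

def host_share(bottleneck_mbps: float, sessions_per_host: list[int]) -> list[float]:
    """Split the bottleneck proportionally to each host's session count."""
    total = sum(sessions_per_host)
    return [bottleneck_mbps * n / total for n in sessions_per_host]

# Box A runs 2 downloads, box B runs 1, against the same fast server:
shares = host_share(6.0, [2, 1])
print(shares)  # [4.0, 2.0] -> the 2-session box takes 2/3rds of capacity
```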
said by funchords:

Let's assume that at one particular moment, a VOIP packet and a P2P packet each get dropped during a minute of congestion. Both are retransmitted.
  • The retransmitted VOIP packet is still small, and still fits into its original window size.

  • The file-transfer packet is reassembled into smaller packets, all of which need to make it to the far end undamaged.

I'd place my bets on the retransmitted VOIP packet making it. On the retransmitted file-transfer packet, my money would be on a repeat of the packet drop.
The first problem with this argument is that VoIP packets are UDP, and there is no retransmission (outside of "Hey, you broke up, can you repeat that?"). I get what you're trying to say, but again you're focused on the individual connection and not the big picture:

In a congested state many packets are still getting through without being dropped. If I have a single FTP session open and you have a P2P app running with 4 connections and we each have a packet dropped, my only connection will go into congestion avoidance and throttle back. On your side, however, only 1 out of 4 of your P2P sessions will throttle back (the one that took the drop). Clearly you're not asserting that we come out equal in this scenario.
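A minimal sketch of that asymmetry, with made-up per-flow rates (the point is only the ratio: one drop costs the single-flow user 50%, but the 4-flow user only 12.5%):

```python
# Sketch of the 1-of-N backoff asymmetry: a loss halves only the flow
# that took the drop, so the more flows a host runs, the smaller the
# fraction of its aggregate rate that one drop removes.

def aggregate_after_one_drop(flows: int, per_flow_rate: float) -> float:
    """One flow halves on a drop; the other flows keep their rate."""
    return per_flow_rate / 2 + per_flow_rate * (flows - 1)

single = aggregate_after_one_drop(1, 4.0)    # 1 flow at 4 units -> 2.0 left (50%)
parallel = aggregate_after_one_drop(4, 1.0)  # 4 flows at 1 unit -> 3.5 left (87.5%)
print(single, parallel)  # 2.0 3.5
```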
said by funchords:

[download accelerators] disproportionately create TCP sessions to boost throughput
C'mon, how many users use these regularly?
Everybody who runs a P2P client, based on the mechanism of how they work.
said by funchords:

My memory is that their boost of throughput was because they took a bigger fraction of the web site's uplink pipe
Heh. Thanks for the laugh, Robb. I'm not saying that to be a jerk, but it brings me back to the meetings I've had at work all week with the key recurring theme: there is no magic in networking.

The mechanism they use to "take a bigger fraction of the pipe" is they open up more connections! In particular, they take advantage of HTTP's resume transfer function and start multiple downloads at different byte offsets into the file and assemble them back into one piece when all the transfers are done. (aside from the lack of CRC checking, this should sound eerily familiar)
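The byte-offset splitting described here can be sketched as follows (the file size and segment count are made-up examples; a real accelerator would send each `Range: bytes=start-end` header on its own HTTP connection):

```python
# Sketch of how a segmented downloader carves a file into byte ranges,
# one per connection, using HTTP's Range header (RFC 7233).
# File size and part count below are illustrative.

def split_ranges(size: int, parts: int) -> list[tuple[int, int]]:
    """Return inclusive (start, end) byte offsets covering [0, size)."""
    chunk = size // parts
    ranges = []
    for i in range(parts):
        start = i * chunk
        end = size - 1 if i == parts - 1 else start + chunk - 1
        ranges.append((start, end))
    return ranges

# A 10 MB file fetched over 4 connections:
for start, end in split_ranges(10_000_000, 4):
    print(f"Range: bytes={start}-{end}")
```

Each connection issues its own GET with one of these headers, and the pieces are stitched back together when all the transfers finish -- which is why, functionally, this is the same multi-connection trick.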

funchords
Hello
MVM
join:2001-03-11
Yarmouth Port, MA

said by SpaethCo:

Having end-users mark their packets is a pipe dream that won't become a reality anytime soon; in fact, I'd argue it will never come into play. You're talking about introducing 1 metric crapload of complexity into the broadband configuration for end-users to produce what will amount to absolutely no benefit to any user except the rare few who take the time to set it up correctly.
Nope, not at all. Just as the POP3 client uses port 110 or HTTP uses port 80, the applications themselves will set the appropriate values. It's only going to be us geeks who do programming, testing, etc., that will know any difference.
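As a concrete sketch of an application marking its own traffic: the standard sockets API lets a program set the DSCP bits via the IP TOS field. EF (DSCP 46), the conventional voice marking, is used here as an example; this works on Linux, and nothing guarantees any network will actually honor the mark -- which is exactly the dispute in this thread.

```python
# Sketch: an application marking its own packets with a DSCP value.
# EF (Expedited Forwarding, DSCP 46, RFC 3246) is the usual voice class.
# Works on Linux; whether any network honors it is a separate question.
import socket

EF_DSCP = 46
tos = EF_DSCP << 2  # DSCP occupies the upper 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184 (0xB8)
sock.close()
```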

If you want ISPs to keep playing cat-and-mouse with their users -- then keep taking control away from them. They'll keep embedding their packets as payloads inside other packets, hopping ports, and using more encryption and obfuscation. I mean, this has WORKED SO WELL FOR ISPS up until now.

SpaethCo
Digital Plumber
MVM
join:2001-04-21
Minneapolis, MN

said by funchords:

nope, not at all. Just as the POP3 client uses port 110 or http uses 80, the applications themselves will set the appropriate values. It's only going to be us geeks that do programming, testing, etc., that will know any difference.
The problem is the big picture again. It's easy to prioritize traffic on your own local connection, though outbound prioritization is a fair bit cleaner than inbound.

The key problem with traffic marking is that it would bring about competition between users over whose traffic is more important. If the provider is honoring DSCP markings, that means you truly can set your traffic to be more important than that of your neighbors. If I run 20 RTP streams through my house and voice traffic is in the top queue, should I be given priority ahead of your surfing traffic? We have this same issue setting QoS policies in the corporate world -- most port 80 traffic is user surfing that has no business benefit, so it should be deprioritized. Our top applications that run the business also run on port 80, however, so that completely invalidates our ability to differentiate traffic just on that port or application (the web browser is the same in either case).

funchords
Hello
MVM
join:2001-03-11
Yarmouth Port, MA

to SpaethCo
said by SpaethCo:

Most users don't tweak their TCP Window size anywhere near as large as it should be. As such, each TCP session tends to be artificially limited to less than what could be capable on the line. If you were to take a random sampling of machines from the Internet (particularly Windows boxes or older Linux boxes) you would definitely have a stark contrast between 1 flow and 3-4 flows.
I'm aware of that situation. Statistically, that's noise, since the chances are equally good that a box on either side of the equation would be poorly configured.
said by SpaethCo:

Try this out: Find a server in a data center that has significantly more capacity than you have at home. Take 2 completely independent boxes attached to your network and download from the same server. You'll find that with equal network characteristics between the boxes, the traffic will divide between the boxes nearly equally. Run the same test except initiate 2 instances of the download on one of the boxes, and you should see it consume 2/3rd of your total downstream capacity. The amount of bandwidth each TCP session can consume is (Total Bandwidth/# of TCP sessions). Therefore with more sessions, you have the potential to absorb a bigger fraction of the available bandwidth.
I agree with you on the outcome of the above experiment. But it doesn't happen among neighbors behind cablemodems! Both users will hit their 6 Mbps constraint before hitting any other. The user with 1 stream will reach 6 Mbps. The user with 3-4 streams will hit 6 Mbps.

SpaethCo
Digital Plumber
MVM
join:2001-04-21
Minneapolis, MN

said by funchords:

I agree with you on the outcome of the above experiment. But it doesn't happen among neighbors behind cablemodems! Both users will hit their 6 Mbps constraint before hitting any other. The user with 1 stream will reach 6 Mbps. The user with 3-4 streams will hit 6 Mbps.
Of course it happens. The sum of the provisioned rates attached to the 38 Mbps downstream channel is greater than 38 Mbps. Logic would dictate that at some point in time there is contention for resources on that channel. The probability, however, is significantly greater on the 9 Mbps upstream channel.
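The oversubscription arithmetic here can be sketched with a made-up subscriber count (the 38 Mbps channel and 6 Mbps tier come from this thread; 200 homes is purely hypothetical):

```python
# Sketch of cable oversubscription: the sum of what's sold on a shared
# downstream channel exceeds the channel's capacity. Subscriber count
# below is a hypothetical example, not a real deployment figure.

channel_mbps = 38.0
subscribers = 200          # hypothetical homes sharing the downstream
provisioned_mbps = 6.0     # per-subscriber tier from this thread

total_sold = subscribers * provisioned_mbps
ratio = total_sold / channel_mbps
print(total_sold, round(ratio, 1))  # 1200.0 31.6
```

With everything sold active at once, demand would exceed capacity roughly 31:1, which is why the business model depends on most users being idle most of the time.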

This isn't one of those "it might happen" situations -- it's a "it happens with regular frequency" situation. I sit in rush hour traffic every weekday morning going to work; that could be alleviated if they added more lanes to the highway. The problem is the freeway sits nearly idle for 20 hours out of the day. The goal is to keep contention to reasonable levels, not eliminate it completely.

funchords
Hello
MVM
join:2001-03-11
Yarmouth Port, MA

to SpaethCo
said by SpaethCo:
said by funchords:
[download accelerators] disproportionately create TCP sessions to boost throughput
C'mon, how many users use these regularly?
Everybody who runs a P2P client, based on the mechanism of how they work.
Hmmmm. Okay, but they're not download accelerators. The traffic similarity is merely at the near end. Understanding that you linked the two helps me understand the rest of your argument.

One part that bothers me about all of this is the whole concept of "fairness" that people are suddenly talking about!

Do we really want an Internet that's fair? I don't want my stupid file-sharing traffic to interrupt someone else's 911 VOIP call, even if that 911 caller is a "bandwidth hog." I don't want to slow down anyone's gaming, either. But I don't want my ISP to decide "BitTorrent Bad, HTTP Good" either. That's not their job and, even though they've assumed that God-like role, they're abusing it.
funchords

to SpaethCo
said by SpaethCo:
said by funchords:

I agree with you on the outcome of the above experiment. But it doesn't happen among neighbors behind cablemodems! Both users will hit their 6 Mbps constraint before hitting any other. The user with 1 stream will reach 6 Mbps. The user with 3-4 streams will hit 6 Mbps.
Of course it happens. The sum of the attachments to the 38mbps downstream channel is greater than 38mbps. Logic would dictate that at some point in time there is contention for resources on that connection. The probability, however, is significantly greater on the 9mbps upstream channel.
(Sigh.) True, but you keep changing the parameters of the little controlled test I was describing. Are you just trying to keep me in the "wrong"? That's not like you.

The answer is, of course, they both get 6 Mbps since there is no congestion yet.

Now, are there some moments of unfairness when congestion happens? Yes. Is this difference caused by the number of flows? Yes. How big is the difference, and is it big enough to be worried about?

My untested theories are: the difference is counted in seconds, not minutes. And: if the network remains congested, the difference quickly flip-flops so that the file-transferring streams have a DISadvantage compared to the VOIP packets.