FFH5
Premium Member
join:2002-03-03
Tavistock NJ

1 recommendation

Well worth reading on why P2P causes problems

Everyone who has an axe to grind in the P2P debate over why cable companies throttle P2P should read this commentary. It is 4 pages long and technical, but it explains how the current implementation of TCP has allowed P2P software to hog bandwidth to everyone's detriment.

It also explains that the problem can be fixed without banning P2P. But to do that requires the IEEE to modify TCP protocol standards and for ISPs to develop bandwidth limiting procedures for those users who won't upgrade to the new TCP stack client software.

Cabal
Premium Member
join:2007-01-21

Seconded. Unfortunately, I think we're only likely to see armchair quarterback discussions on the matter.

knightmb
Everybody Lies
join:2003-12-01
Franklin, TN

1 edit

knightmb to FFH5

Member

said by FFH5:

Everyone who has an axe to grind in the P2P debate over why cable companies throttle P2P should read this commentary. It is 4 pages long and technical, but it explains how the current implementation of TCP has allowed P2P software to hog bandwidth to everyone's detriment.

It also explains that the problem can be fixed without banning P2P. But to do that requires the IEEE to modify TCP protocol standards and for ISPs to develop bandwidth limiting procedures for those users who won't upgrade to the new TCP stack client software.
Maybe if they used what TCP/IP already has built in, like TOS (Type of Service) flags for lowdelay, throughput, reliability, mincost, and congestion. That stuff is already in there; adding another flag like "even more uber ultra lower" is just more from the Redundancy Department of Redundancy.
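
For reference, here is a minimal sketch of setting those legacy TOS bits on a socket. It assumes Python on a Linux host, the flag values are the classic RFC 1349 ones, and nothing in the network is obliged to honor them.

import socket

# The classic RFC 1349 TOS values knightmb is referring to (illustrative only;
# assumes a Linux host, where Python exposes IP_TOS).
IPTOS_LOWDELAY = 0x10
IPTOS_THROUGHPUT = 0x08
IPTOS_RELIABILITY = 0x04
IPTOS_MINCOST = 0x02

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask the OS to mark this connection's packets as "throughput" traffic.
# Routers along the path are free to ignore or rewrite the byte.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, IPTOS_THROUGHPUT)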

Matt3
All noise, no signal.
Premium Member
join:2003-07-20
Jamestown, NC

said by knightmb:

said by FFH5:

Everyone who has an axe to grind in the P2P debate over why cable companies throttle P2P should read this commentary. It is 4 pages long and technical, but it explains how the current implementation of TCP has allowed P2P software to hog bandwidth to everyone's detriment.

It also explains that the problem can be fixed without banning P2P. But to do that requires the IEEE to modify TCP protocol standards and for ISPs to develop bandwidth limiting procedures for those users who won't upgrade to the new TCP stack client software.
Maybe if they used what TCP/IP already has built in, like TOS (Type of Service) flags for lowdelay, throughput, reliability, mincost, and congestion. That stuff is already in there; adding another flag like "even more uber ultra lower" is just more from the Redundancy Department of Redundancy.
Agreed. QoS is also a viable option (it works on my $40 Buffalo router, for Christ's sake), but the big ISPs aren't interested in throttling P2P traffic; they want to either 1) kill it by making it unreliable a la Comcast or 2) implement byte caps and start hammering users with overages a la Time Warner. These options are all being discussed by the companies most threatened by the advent of streaming video ... the cable companies.

There are solutions to the P2P "flood" right now, but a lot of companies want P2P to fail because it's detrimental to their ancient and flawed business model.

justbits
DSL is dead. Long live DSL!
Premium Member
join:2003-01-08
Chicago, IL

1 recommendation

QOS/TOS flags are not a solution. Anybody can set those flags. Anybody can ignore those flags. QOS/TOS is only useful between you and your first hop onto the Internet, not between you and anybody else. Otherwise, everybody who is greedy would mark all of their packets as highest priority.

The proposed change to TCP can result in fair-sharing of major Internet backbones as well as fair-sharing on your home Internet router. The big win with a fair-sharing TCP stack is that the major Internet backbones that carry GB/sec of traffic won't need to deploy excessive traffic shaping or deploy fake RST packets. And they can even detect the difference between a "greedy" TCP stack and a "fair-sharing" TCP stack so they can continue to throttle "greedy" consumers, but not excessively screw "fair-sharing" Internet users. The average case for the "fair-sharing" stack appears to result in no excessive detriment to P2P apps that would/could otherwise be more severely throttled by your ISP. And it seems to result in a huge increase in performance for lower-bandwidth single-connection users like VoIP and web surfing. However, the protocol still needs to be implemented and tested before any of this can really be proven.

The key here is that people who excessively use the Internet would be fairly throttled by a change to everyone's TCP stack to help ease congestion on Internet backbones. With fair-sharing TCP stacks, ISPs wouldn't need to excessively punish people using P2P protocols that are designed to take advantage of TCP's "congestion control".

If you want to think of it another way, P2P protocols are NOT designed to be "green" or environmentally friendly. They are designed to be greedy and take advantage of known flaws in the current TCP congestion algorithm. A fair-sharing change to TCP would result in helping all traffic move more smoothly, not just on your connection to your ISP, but across the entire Internet backbone between you and your end destination.
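
To make the distinction concrete, here is a toy Python comparison of per-flow sharing (what today's TCP converges to) versus the per-user sharing a "fair-sharing" stack aims for. The link size, user names, and flow counts are made up purely for illustration; none of this comes from the proposal itself.

# Toy comparison: per-flow vs per-user sharing of one congested link.
LINK_KBPS = 9000
flows_per_user = {"web_surfer": 1, "voip_user": 1, "p2p_seeder": 30}
total_flows = sum(flows_per_user.values())

print("Per-flow fairness (what today's TCP converges to):")
for user, flows in flows_per_user.items():
    print(f"  {user:11s} {LINK_KBPS * flows / total_flows:7.0f} kbps")

print("Per-user fairness (what a 'fair-sharing' stack aims for):")
for user in flows_per_user:
    print(f"  {user:11s} {LINK_KBPS / len(flows_per_user):7.0f} kbps")
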
60529262 (banned)
join:2007-01-11
Chicago, IL

60529262 (banned) to FFH5

Member

said by user TK Junk Mail (not actually said, but he should have):

Everyone who has an axe to grind in the P2P debate over why cable companies throttle P2P should read this commentary. It is 4 pages long and technical, but it explains how the current implementation of TCP has allowed P2P software to hog bandwidth to the Cable business model's detriment.

It also explains that the Cable cash flow problem can be fixed without banning P2P or competing video services delivered via broadband. But to do that requires the IEEE to modify TCP protocol standards and for Cable ISPs to develop bandwidth limiting procedures for those Cable users who won't upgrade to the new TCP stack client software or dare to venture outside the Cable walled garden.
There. That is much more accurate.

factchecker
@cox.net

factchecker to FFH5

Anon

said by FFH5:

Everyone who has an axe to grind in the P2P debate over why cable companies throttle P2P should read this commentary. It is 4 pages long and technical, but it explains how the current implementation of TCP has allowed P2P software to hog bandwidth to everyone's detriment.
The article would be okay if (a) it didn't have technical issues and (b) didn't start with the assumption that TCP is broken. The problem is not TCP, but rather how the applications are coded and the last mile networks. P2P applications need more aggressive TCP session pruning and last mile networks need to implement basic, protocol-neutral QoS to ensure that everyone gets a fair slice of the network - using settings like minimum Committed Bit Rates, etc.

TCP was NEVER designed to be "fair" and does not need to be updated to make it "fair". ECN would help with performance to end users if the makers of crappy SOHO routers gave a damn and coded them to respect ECN flags. But the fact is, TCP is designed to do ONE thing and to do it well - move packets. Once you start adding additional "fairness" mechanisms into the protocol, you break the concept that makes IP, TCP and UDP work so well - Keep It Simple, Stupid.

But it is hard to ignore a statement like this:
Simply by opening up 10 to 100 TCP streams, P2P applications can grab 10 to 100 times more bandwidth than a traditional single-stream application under a congested Internet link.
There are so many issues with that statement.

For example, if a P2P user already has one TCP session open, sending out data at 128kbps over his connection with a 256kbps upload, he is only using 128kbps. Open a second connection and upload at 128kbps and his total usage is 256kbps, his limit. Open three or four more sessions, and all of the sessions slow down as the user has hit the maximum upload speed of his connection. Open 100 sessions, and the user still only uses 256kbps of upload bandwidth.
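
The same arithmetic as a toy Python loop, assuming the only bottleneck is the user's own 256kbps upstream and that each flow tops out at 128kbps, as in the example above:

# Toy calculation: aggregate upload never exceeds the user's own 256kbps cap,
# no matter how many TCP sessions he opens.
UPLOAD_CAP_KBPS = 256
PER_FLOW_LIMIT_KBPS = 128

for sessions in (1, 2, 4, 100):
    per_session = min(PER_FLOW_LIMIT_KBPS, UPLOAD_CAP_KBPS / sessions)
    print(f"{sessions:3d} sessions -> {per_session:6.1f} kbps each, "
          f"{per_session * sessions:3.0f} kbps total")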

And his assumption that single sessions always generate less traffic than multiple sessions is also flawed. He, apparently, is not familiar with IP video cameras, Slingbox, etc.

Also, those multiple sessions are not immune from congestion, as the author seems to think. Just because the user has 100 sessions open does NOT guarantee that he will get 256kbps all the time. How much data TCP sessions can transfer is a function of the lower levels, including the network layer and how saturated it is.

Then there is the apparent assumption that P2P TCP sessions are immune from the AIMD algorithm that the author complains about... Simply false.

Then there is this statement:
Despite the undeniable truth that Jacobson’s TCP congestion avoidance algorithm is fundamentally broken, many academics and now Net Neutrality activists along with their lawyers cling to it as if it were somehow holy and sacred.
The author is the ONLY person who has labeled AIMD as being "fundamentally broken" as an "undeniable truth". The Internet engineering community has not said this; only the author has. Sorry, but just because one guy with a ZDNet blog (a publication known for shoddy technical articles in the past) said it does not make it so.

Then there is the whole last page on weighted TCP... The problem is that without some sort of hardware in the network to manage flows, what is proposed will never happen. TCP cannot do what is being shown on its own, even if patched. It would require the network to target SPECIFIC TCP flows for the "weighted TCP" model to work. And it, essentially, is no different than using WFQ or CBWFQ, where class X can use class Y's bandwidth until class Y needs it.

Sorry, but I'll take the positions in this article more seriously when a better written article shows up someplace like the IETF (not the IEEE) or someone writes about it on NANOG.
factchecker

factchecker to justbits

Anon

said by justbits:

QOS/TOS flags are not a solution. Anybody can set those flags. Anybody can ignore those flags. QOS/TOS is only useful between you and your first hop onto the Internet, not between you and anybody else. Otherwise, everybody who is greedy would mark all of their packets as highest priority.
The QoS issue only applies to the first mile anyway, where the problem is. There is no bandwidth problem at the Internet backbone level.

As well, QoS can easily be implemented in a way that prevents the "flags" issue you have mentioned.
The proposed change to TCP can result in fair-sharing of major Internet backbones as well as fair-sharing on your home Internet router. The big win with a fair-sharing TCP stack is that the major Internet backbones that carry GB/sec of traffic won't need to deploy excessive traffic shaping or deploy fake RST packets.
Major backbone providers have not even thought about using traffic shaping or forged RST packets. There is no bandwidth problem on the Internet backbones, nor will there be for a while, as there is still a lot of available bandwidth and many options to alleviate any problems (turning up new wavelengths, etc.). This is a problem in last-mile networks only.
And it seems to result in a huge increase in performance for lower-bandwidth single-connection users like VoIP and web surfing.
There are VERY few single-session TCP applications left. VoIP is not one of them. And web surfing is not one either (any longer).
If you want to think of it another way, P2P protocols are NOT designed to be "green" or environmentally friendly.
Then the problem is the P2P algorithm, NOT TCP. Fix the P2P algorithms, fix the problem. Mangle TCP, and the problem remains.

swhx7
Premium Member
join:2006-07-23
Elbonia

swhx7 to factchecker

I think you're basically right. Here are some views of Slashdotters who said it as well as I could:
Not all sessions experience the same congestion (Score:5, Interesting) by thehickcoder (620326) * on Monday March 24, @10:44AM (#22844896)

The author of this analysis seems to have missed the fact that each TCP session in a P2P application is communicating with a different network user and may not be experiencing the same congestion as other sessions. In most cases (those where the congestion is not on the first hop), it doesn't make sense to throttle all connections when one is affected by congestion.

Re:Not all sessions experience the same congestion (Score:4, Informative) by Mike McTernan (260224) on Monday March 24, @12:12PM (#22845810)

Right. The article seems to be written on the assumption that the bandwidth bottleneck is always in the first few hops, within the ISP. And in many cases for home users this is probably reasonably true; ISPs have been selling cheap packages with 'unlimited' and fast connections on the assumption that people would use a fraction of the possible bandwidth. More fool the ISPs that people found a use [plus.net] for all that bandwidth they were promised.

FUD (Score:5, Insightful) by Detritus (11846) on Monday March 24, @11:27AM (#22845298)

The whole article is disingenuous. What he is describing are not "loopholes" being cynically exploited by those evil, and soon to be illegal, P2P applications. They are the intended behavior of the protocol stack. Are P2P applications gaming the system by opening multiple streams between each pair of endpoints? No. While we could have a legitimate debate on what is fair behavior, he poisons the whole issue by using it as a vehicle for his anti-P2P agenda.

funchords
Hello
MVM
join:2001-03-11
Yarmouth Port, MA

1 recommendation

funchords to FFH5

said by FFH5:

Everyone who has an axe to grind in the P2P debate over why cable companies throttle P2P should read this commentary. It is 4 pages long and technical, but it explains how the current implementation of TCP has allowed P2P software to hog bandwidth to everyone's detriment.
TK Junk Mail,

The only people who can call this "technical" are the people who have been adequately buffaloed into thinking that George Ou knows what he is talking about.

He does not.

I've made several comments to the article, pointing out where George gets it wrong. This time, as before, his responses indicate a lack of familiarity with the content of his own article! I am beginning to believe that Richard Bennett is George Ou's ghost writer.

SpaethCo
Digital Plumber
MVM
join:2001-04-21
Minneapolis, MN

1 recommendation

said by funchords:

The only people who can call this "technical" are the people who have been adequately buffaloed into thinking that George Ou knows what he is talking about.

He does not.
Robb,

In all seriousness I truly do respect many of the things you post, but in this case George (while not 100% correct in his position either) is a heck of a lot closer to reality than you are in your counter arguments with him. Oversubscription at the edge is common in every network design; it's a matter of efficiency. Designing for full capacity for all edge ports is like designing freeways so that there would never be rush hour congestion; you'd bankrupt yourself in the process of building it.

The TCP fairness problem exists on the segments that experience saturation on a regular basis, which is generally the segment between the CMTS head-ends and the cable modems. Contrary to what George Ou posted the problem will present itself on both the upstream and downstream segments, but will manifest itself more readily in the lower capacity upstream segments.

You've stated a few times that TCP throttling is irrelevant because people are already limited by the upstream connection of their cable modem. That's a bit like saying "I still have checks, how can my account be out of money?!" The gotcha is that there is not enough upstream capacity for everybody to transmit at once, and when congestion occurs on the common upstream channel to the CMTS head-end, the TCP sessions will scale themselves in relation to capacity on the entire channel, not according to the provisioned capacity for each cable modem. For example:

Say you have an 8/1 service on a DOCSIS 1.1 network. Say you have 9 users all uploading with a single TCP session each at the same time; each user will get 1mbps, saturating the channel.

Now say you had 18 users, again all using a single TCP session each; TCP will naturally balance things out so that each connection will average about 500kbps.

Now say you had 17 users, but one of the users had 2 TCP sessions. 9mbps channel capacity / 18 TCP sessions = 500kbps per TCP connection. That means 16 people will see 500kbps, and the 1 user using 2 TCP sessions will see 1mbps.

The numbers don't break down quite that neat in the field, but overall they're pretty close.
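
Here is the example above as a quick back-of-the-envelope in Python, assuming ideal per-session fairness on the shared channel and ignoring protocol overhead (per the caveat that the field numbers are messier):

# SpaethCo's DOCSIS example: a congested shared upstream divides roughly
# evenly per TCP session, capped by each modem's provisioned rate.
CHANNEL_MBPS = 9.0
MODEM_CAP_MBPS = 1.0

def per_user_mbps(sessions_per_user):
    per_session = CHANNEL_MBPS / sum(sessions_per_user)
    return [min(MODEM_CAP_MBPS, per_session * s) for s in sessions_per_user]

print(per_user_mbps([1] * 9))         # 9 users, 1 session each  -> 1.0 Mbps apiece
print(per_user_mbps([1] * 18))        # 18 users, 1 session each -> 0.5 Mbps apiece
print(per_user_mbps([2] + [1] * 16))  # the 2-session user gets 1.0; the other 16 get 0.5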

funchords
Hello
MVM
join:2001-03-11
Yarmouth Port, MA

1 edit

said by SpaethCo:

Oversubscription at the edge is common in every network design; it's a matter of efficiency.
I really do understand this, espaeth. I'm not one of these guys who think that 384 Kbps should be reserved for my account 24/7/365. Please help me by showing me where I said something different. (This is a serious request. If I screwed up and wrote something carelessly, I don't want it to be misinterpreted.)

I am having trouble with the semantics. Let me explain:

Based on past experience, a service provider observes that for every 1000 customers, no more than 100 are ever attempting to upload at any one time. During his past 20 busiest hours, this has remained true for 19 of them (95%*). Therefore, he builds his network to handle THAT capacity -- the capacity of 100 simultaneous uploads. To me, that's not oversubscription. That's simply the way that bandwidth is planned and provided.

*I don't necessarily mean to invoke the meaning of the 95th percentile here. Instead, I'm trying to say that the observation is reliable 95% of the time.

Is the above "over-subscription?" Or is that roughly the way that all data customers of any kind plan and allocate for their needs?

I have heard people say that an ISP should not sell more accounts than they could handle if everyone jumped online and started to upload/download at capacity. I've never been one of those voices.

When I say "over-subscription," I mean that the service provider has stopped trying to reliably meet the data demands of his users.

Are there more accurate or concise terms I should use?
funchords

2 edits

funchords to justbits

MVM

said by justbits:

If you want to think of it another way, P2P protocols are NOT designed to be "green" or environmentally friendly. They are designed to be greedy and take advantage of known flaws in the current TCP congestion algorithm.
That is simply untrue. Please do not be fooled by George Ou's vilification of P2P!

You can't exploit anything by downloading, since you're not on the sending side and you do not get any network feedback indicating congestion. The download side is also not heavily congested on these last-mile residential networks. The side an application or user can control is the upload side.

Of the four major P2P file-sharing protocols in use, DC++, Gnutella, and ED2K all deliver a requested file to a peer with a slot. These three behave more like HTTP or FTP servers when transferring data - 1 slot per file request. They don't have the behavior that you (and George Ou) accuse of being an exploit.

I know developers across several of the BitTorrent teams, and they are extremely sensitive to responding to changing network conditions.

BitTorrent, the #1 protocol world-wide, does connect to 35-60 peers in a swarm, but it only uploads simultaneously to 3-4 of them! One of those upload slots is used for optimistic unchoking (looking for better uploading peers in exchange for your uploading activity).

If network congestion occurs on a connection with any of those 3-4 peers, that peer's performance will drop and then will be choked (traffic to that peer from your client is stopped). A different peer is selected to replace it. This process takes a maximum of 30 seconds. In this way, a congested link is relieved of the burden and will be retried later when the Optimistic Unchoke gets back around to that part of the peer list.

Neither FTP nor HTTP does this. They continue to draw across the congested link for the entirety of the download.

BitTorrent is far more careful about congested links than many realize!

You can verify these claims either by testing them yourself (hard for many to do) or by looking at the protocol at http://wiki.theory.org/BitTorrentSpecification#Choking_and_Optimistic_Unchoking.
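
Here is a rough Python sketch of that choke/unchoke cycle, much simplified from the spec linked above; the peer names, rates, and slot count are invented for illustration and are not taken from any real client:

import random

UPLOAD_SLOTS = 3  # plus one rotating optimistic slot

def rechoke(peer_rates, all_peers):
    """Keep the best-performing peers unchoked, plus one optimistic pick;
    everyone else is choked (no upload). peer_rates: {peer_id: observed rate}."""
    best = sorted(peer_rates, key=peer_rates.get, reverse=True)[:UPLOAD_SLOTS]
    optimistic = random.choice([p for p in all_peers if p not in best])
    return set(best) | {optimistic}

# A peer behind a congested link shows a poor rate, so it simply isn't selected
# next round -- traffic to it stops without any help from the ISP, and the
# rotating optimistic slot retries it later.
rates = {"peerA": 80.0, "peerB": 55.0, "peerC": 3.0, "peerD": 40.0, "peerE": 25.0}
print(rechoke(rates, list(rates)))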

SpaethCo
Digital Plumber
MVM
join:2001-04-21
Minneapolis, MN

1 recommendation

said by funchords:

You can't exploit anything by downloading, since you're not on the sending side and you do not get any network feedback indicating congestion. The download side is also not heavily congested on these last-mile residential networks. The side an application or user can control is the upload side.
This is the argument where both you and Mr. Ou miss the boat. TCP is a connection-oriented protocol with guaranteed delivery and flow control. The sending side of the connection cannot release any data beyond a TCP window's worth of packets until you acknowledge the previous bundle's safe receipt. There is no difference between uploading or downloading in implementation; the key issue is congestion and how the protocol deals with it. TCP's congestion avoidance algorithm tunes itself based on round trip time calculations and packet loss. Since every TCP connection follows the exact same rules on how to deal with congestion, each TCP session is essentially equal on the network. You don't configure anything for how TCP will balance itself out; whether you are on a 56k dialup connection or have straight GigE access all the way out to the Internet, the protocol adapts itself to the conditions.

The bottom line is that in the event of congestion, every TCP session backs off in roughly the same proportional amount. As such, the people with the most TCP sessions pumping their data are going to win for throughput. The algorithm is remarkably fair for bandwidth per TCP session; where the whole thing gets hosed up is the number of TCP sessions per end user.
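
As a toy illustration of that point, here is a minimal AIMD simulation in Python -- a sketch only, with synchronized flows and made-up numbers; real TCP dynamics are messier:

# Toy AIMD: every flow adds 1 unit per round (additive increase) and halves on
# a congestion event (multiplicative decrease). Per-flow shares come out equal,
# so a user running more flows gets a proportionally bigger slice of the link.
LINK = 90.0

def simulate(num_flows, rounds=2000):
    rates = [1.0] * num_flows
    for _ in range(rounds):
        rates = [r + 1.0 for r in rates]      # additive increase
        if sum(rates) > LINK:                 # congestion event
            rates = [r / 2.0 for r in rates]  # multiplicative decrease
    return rates

rates = simulate(18)  # 18 flows sharing one link
print(f"user with 2 flows gets ~{sum(rates[:2]):.1f} units")
print(f"user with 1 flow gets  ~{rates[2]:.1f} units")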

funchords
Hello
MVM
join:2001-03-11
Yarmouth Port, MA

There is no difference between uploading or downloading in implementation...
Correct, but there is a rather physical difference. When you are uploading, you are sending fat packets back-to-back. When you are downloading, you are sending tiny packets with large gaps between. Given any random moment when congestion occurs, it's the uploader that is very likely to lose a packet and initiate congestion control responses. The downloader avoids losing a packet simply because his tiny "ACK" packets present a smaller target.
TCP's congestion avoidance algorithm tunes itself based on round trip time calculations and packet loss. Since every TCP connection follows the exact same rules on how to deal with congestion, each TCP session is essentially equal on the network.
Only if each route is the same and only if each end-system is as responsive as all of the others. IRL, this is never the case.
The bottom line is that in the event of congestion, every TCP session backs off in roughly the same proportional amount.
If and only if all connections drop a packet (or they all receive whatever other congestion cue might be available on that route).
The people with the most TCP sessions pumping their data are going to win for throughput.
Notwithstanding my arguments above, they are only going to win during that brief interval of recovery. If the network is not congested, there is no problem that Bob Briscoe's proposal fixes. It does not avoid congestion. It does not reduce the amount of bandwidth used by P2P.

It's a bit like saying -- when IRQ 3 goes up, it takes the CPU 300% more time to service than it does to service IRQ 4 or IRQ 7. Therefore, when the system starts to crawl, we're going to triple the CPU time given to IRQ 4 or IRQ 7 (or we're going to skip IRQ 3 66% of the time -- just to be fair). At that point, is the real problem fairness?

And to you I ask -- should the network be running at "congestion" often enough and for periods long enough for Bob Briscoe's suggestion to actually matter very much? Is a network that runs at "congestion" for long durations a healthy one?

And to everyone I ask -- when you read George Ou's article, is his intent to solve a problem? Or is George really trying to brand BitTorrent as some kind of an exploiter -- so that any thoughts about Network Neutrality would only apply fractionally toward BitTorrent based on how many open TCP connections it has (including the idle connections)?
The algorithm is remarkably fair for bandwidth per TCP session; where the whole thing gets hosed up is the number of TCP sessions per end user.
Which only comes into play when the network capacity is exceeded -- a condition which network providers are expected to avoid. We pay ISPs to meet user demand, not to shape it into their own vision of what user demand ought to be.

justbits
DSL is dead. Long live DSL!
Premium Member
join:2003-01-08
Chicago, IL

said by funchords:

And to everyone I ask -- when you read George Ou's article, is his intent to solve a problem? Or is George really trying to brand BitTorrent as some kind of an exploiter -- so that any thoughts about Network Neutrality would only apply fractionally toward BitTorrent based on how many open TCP connections it has (including the idle connections)?
The algorithm is remarkably fair for bandwidth per TCP session; where the whole thing gets hosed up is the number of TCP sessions per end user.
Which only comes into play when the network capacity is exceeded -- a condition which network providers are expected to avoid. We pay ISPs to meet user demand, not to shape it into their own vision of what user demand ought to be.
You've singled out BitTorrent, when it's not the only P2P protocol that's out there. The graphs, in particular, show several other P2P protocols. Modified BitTorrent clients and other P2P apps are designed to be the most greedy of all network users.

Yes, we ideally are paying ISPs to provide quality Internet service, but does your Terms Of Service Agreement say anything about quality of service? Most likely _no_. Yes, they should have been and should be upgrading their pipes continually. Yes, they likely have been shirking their responsibility on this in favor of providing returns to investors. So, if a change to everybody's TCP stack would make it less necessary for ISPs to spend money on network traffic control devices like Sandvine and instead spend money on upgrading the network, I'm all for that! Ideally, solving the problem that Sandvine solves (however excessively) results in a better network experience for all users. So, modifying all TCP stacks to have better congestion control could result in less need for network traffic devices or policies that are as aggressive as Sandvine's.

So, if network traffic becomes more self-policing/self-regulating through modifications to the underlying Internet protocols, then maybe a paradigm shift will occur. Maybe then network neutrality will become a battleground directed more at ISPs failing to provide bandwidth, instead of, as it is now, an attack against ISPs for attempting to implement their own forms of congestion control.

factchecker
@cox.net

factchecker to SpaethCo

Anon

said by SpaethCo:

Now say you had 17 users, but one of the users had 2 TCP sessions. 9mbps channel capacity / 18 TCP sessions = 500kbps per TCP connection. That means 16 people will see 500kbps, and the 1 user using 2 TCP sessions will see 1mbps.
Here's the problem I see with this discussion... The emphasis is being placed on TCP when the problem is within the network. TCP, in your situation, is functioning EXACTLY as it is supposed to - move data for each session as quickly as possible, scaling back each session only when network conditions dictate that full speed ahead won't work.

TCP is not what is broken. And anyone who frames the discussion in terms of TCP being broken is missing the more fundamental problem.

The _network_ allows a single user to get more than their "fair share" of the network. In your example, one user is able to get more bandwidth because the network allows it. It is one of the problems with contention-based (or shared) networks - one station can monopolize the network if nothing prevents it from doing so.

Instead of rewriting TCP, a much easier step can be taken.

Enable QoS in a manner that allows the bandwidth to be equally shared among uploading systems when congestion exists; otherwise, allow systems to send as fast as their connections permit.

See the following on fair queuing:
»en.wikipedia.org/wiki/Fa ··· _queuing

Routers, DSLAMs and CMTSs all have the capability to enable this type of behavior. ISPs just need to leverage it. This solution is MUCH easier than writing, testing and deploying patches to TCP on a myriad of platforms.
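
One way to picture that: a max-min fair allocation across users, regardless of how many TCP flows each one opens. This is only an illustrative Python sketch with made-up demand numbers -- real CMTS/DSLAM schedulers and the WFQ/CBWFQ mechanisms mentioned earlier are considerably more involved:

def max_min_fair(capacity, demands):
    """Share `capacity` max-min fairly across users. demands: {user: offered load}."""
    alloc, remaining, cap = {}, dict(demands), capacity
    while remaining:
        share = cap / len(remaining)
        satisfied = {u: d for u, d in remaining.items() if d <= share}
        if not satisfied:                      # everyone left wants more than a fair share
            alloc.update({u: share for u in remaining})
            return alloc
        for u, d in satisfied.items():         # modest users get their full demand
            alloc[u] = d
            cap -= d
            del remaining[u]
    return alloc

# A web surfer, a VoIP call, and a 100-flow P2P box sharing a 9 Mbps upstream:
print(max_min_fair(9.0, {"web": 0.5, "voip": 0.1, "p2p_box": 50.0}))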

funchords
Hello
MVM
join:2001-03-11
Yarmouth Port, MA

funchords to justbits

said by justbits:

You've singled out BitTorrent, when it's not the only P2P protocol that's out there. The graphs, in particular, show several other P2P protocols. Modified BitTorrent clients and other P2P apps are designed to be the most greedy of all network users.
I did this on purpose. Worldwide, BitTorrent is top dog by a long shot. Also, #2-#4 (DC++, ED2K, Gnutella/G2) behave more like FTP and HTTP -- and so they can't 'exploit' as George Ou describes. The 2006 chart is from Japan -- which not only has a very different in-country architecture than in North America, but also has a very different pattern of application adoption. North American ISPs don't want to show us their charts, even to researchers working under NDA.
said by justbits:

Yes, we ideally are paying ISPs to provide quality Internet service, but does your Terms Of Service Agreement say anything about quality of service? Most likely _no_.
Actually, now that you mention it, it does (-- in a way)! (And I had not thought of this before now, so if this idea is half-baked, it's because it's half-baked.)

The Comcast tier that I am on is called 6Mbps. Six months ago, Verizon Wireless settled with the State of New York because they were advertising Unlimited service that was not, in fact, unlimited. Now, a settlement sets no court precedent, and a New York case may only have consultative value outside of that state, but it does demonstrate that such a case would have merit.

That aside, your question has nothing to do with this topic. We're not talking about Quality of Service here (unless I missed your point)?
Ideally, solving the problem that Sandvine solves (however excessively) results in a better network experience for all users.
All? Have you read my original report? Sandvine deteriorated my network experience (link in signature). That's how Comcast got caught! Comcast improved the experience for some at the expense of others -- even if those others were completely within the boundaries of the law and their subscription terms.
So, if a change to everybody's TCP stack would make it less necessary for ISPs to spend money on network traffic control devices like Sandvine and instead spend money on upgrading the network, I'm all for that!
Upgrading the network instead of wasting money on Sandvine is a simpler solution. The logic of your thinking reminds me of something from the "Bastard Operator from Hell." It goes something like this:
Help Desk: "You would like more space?"
Caller: "Yes, please!"
Help Desk (typing commands): RMDIR %USERPROFILE% /S /Q (command to delete user's files)
Caller: "AIEEEEEEEEEEEEEEEEEEEEEH!"
Help Desk: "You should have plenty of space now!"
Secondly -- and I said this above, too -- making the proposed changes does nothing to alleviate congestion. It only changes the behavior during the brief moments between congestion and recovery. Said differently, it doesn't prevent your computer from crashing; it just changes the order of the reboot process.

SpaethCo
Digital Plumber
MVM
join:2001-04-21
Minneapolis, MN

1 edit

said by funchords:

Upgrading the network instead of wasting money on Sandvine is a simpler solution.
You keep making this statement over and over again like it's an obvious binary decision. I've said this before, but that statement just keeps reading as "To solve your debt problem you should just acquire more money rather than wasting time cutting spending."

Given where MSOs are at with frequency capacity and deployable DOCSIS technology (i.e., still pre-3.0), the only way to "upgrade the network" is to do node splits or ungodly expensive node additions (i.e., poured concrete slab, utility power, cabinet, trenched-in fiber, splitting the coax plant, re-engineering the amp placement and gain structure, etc.).

You represent the options as cost equivalent, but they're not even in the same ball park. For the cost of a single node addition Comcast can probably buy Sandvine appliances for an entire region. You're talking about a box with a couple power cords, a couple network connections, and some licensing overhead compared to right-of-way contracts, conduit & fiber, power utility installation, cabinet/equipment fees, and a significant number of engineering hours to implement.

TV services are where the cable companies make most of their money. If it were TV-related services driving the need for expansion it might have a better chance at gaining funding. Since HSI is the only product driving the need for expansion, managing the traffic is the cheaper/faster/better approach for right now. I don't have a problem with MSOs practicing this type of network management as long as they are up-front about it. The reality is that some level of filtering will always be present on the Internet, the same way that our "free" society still has laws. Management will always be required to curb "abuse", be it mitigating Denial of Service attacks, filtering SPAM, or restricting network traffic in accordance with "Fair Access Policies". The big issue is that the rules and methods of traffic management must be disclosed; wishing for a complete "hands-off" approach to network management is a pipe dream that will rapidly turn into a nightmare.

funchords
Hello
MVM
join:2001-03-11
Yarmouth Port, MA

said by SpaethCo:

You represent the options as cost equivalent, but they're not even in the same ball park. For the cost of a single node addition Comcast can probably buy Sandvine appliances for an entire region. You're talking about a box with a couple power cords, a couple network connections, and some licensing overhead compared to right-of-way contracts, conduit & fiber, power utility installation, cabinet/equipment fees, and a significant number of engineering hours to implement.
The Sandvine products containing the P2P policy enforcement have been the cash cow for Sandvine. Comcast is believed to be more than 50% of its business, according to numerous analysts. Sandvine is predicting its annual income to be $80-85 Million. How much does it cost to split a node or add a node? Nobody is talking.

Don't assume that just because it's a box with two wires it is cheap. There is a significant investment in R&D in these products, and due to patents, Sandvine can exclusively offer certain of its technologies. All this means that the cost of raw materials has nothing to do with the purchase price of the device.

And I'm not sure it will cost them much at all -- take this quote from Tony Werner, CTO of Comcast:
quote:
The return bandwidth is not on the worry list right now, for a bunch of reasons. For one, we're splitting a lot of nodes based on the success of voice, high-speed Internet, and VOD. In other words, all based on downstream requirements, not upstream.

On HSD (high-speed data), Im using two to three 3.2 MHz carriers (upstream). A lot more than that are sitting fallow in my CMTS cards. In most markets, I still have 12 MHz of bandwidth I can reclaim from circuit switched voice, once we migrate off of those platforms. So for now, the 5-42 MHz to me seems plenty adequate.

...

As we hit 70 percent utilization, we issue a work order to split the node. But it depends on utilization. Usually we set it to split to 250 homes. And for us, 65 percent of our node splits are really decoupling of nodes at the headend.

»www.cedmagazine.com/how- ··· nty.aspx

SpaethCo
Digital Plumber
MVM
join:2001-04-21
Minneapolis, MN

said by funchords:

Comcast is believed to be more than 50% of its business, according to numerous analysts. Sandvine is predicting its annual income to be $80-85 Million.
Annual income in which calendar year? They can only sell the hardware once, and recurring subscription/maintenance fees aren't going to drive the same level of return as the initial capital outlay.
said by funchords:

How much does it cost to split a node or add a node? Nobody is talking.
Comcast talks to investors all the time -- I get a nice prospectus from them every year that gives a high level overview of the company, its operation, and its financial performance. There are a few different methods of splitting a node.

Most nodes start out combined at the head-end with multiple HFC nodes sharing a common head-end port (at least on the downstream). Those can be split by simply breaking the nodes into their own CMTS ports at the head-end. According to Comcast investor numbers this costs about $7500 per split ($2500/year on a 3 year depreciation cycle).

The next split happens at the node itself where there is typically a north, south, east, and west string that branches out from the platform. Each segment can be broken out to have its own dedicated frequencies back to the CMTS with dedicated fiber. The investor numbers that were given for that are $18,000 per split (again $6k/3years).

If one segment of a node becomes heavy (ie, north) you need to inject another distribution point in the coax plant to divide the infrastructure out yet again. Comcast places a buildout value of $60,000 on that split ($20k / 3years). Personally I think that value is low, but it is possible if neighborhood platform buildout is not factored into the cost. (ie, a new development goes in and the developer establishes a concrete pad with power hookups for telco / MSO / utility equipment)

As of the investor documentation Comcast sent out in 2007, they had approximately 102,000 HFC nodes on their network.
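
Just to put those figures side by side -- straight arithmetic from the numbers above, using the 3-year depreciation cycle stated in the post:

# Annualizing the quoted node-split costs over the stated 3-year depreciation cycle.
DEPRECIATION_YEARS = 3
split_costs = {
    "head-end port split": 7500,
    "node segment split": 18000,
    "new distribution point": 60000,
}
for kind, cost in split_costs.items():
    print(f"{kind:23s} ${cost:>6,} -> ${cost // DEPRECIATION_YEARS:>6,}/year")
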
said by funchords:

Don't assume that just because it's a box with two wires it is cheap. There is a significant investment in R&D in these products, and due to patents, Sandvine can exclusively offer certain of its technologies.
My assumption wasn't based strictly on hardware, but on pricing that I know personally from having cut purchase orders. If I use boxes from F5 Networks and 8E6 Technologies as a starting point, my guess would be that each P2P appliance probably costs about $30k.
said by funchords:

And I'm not sure it will cost them much at all -- take this quote from Tony Werner, CTO of Comcast
You have to keep in mind the audience of that article -- you don't tell investors that you're backed into a corner when it comes to your infrastructure. As far as expandability goes, he's right that there are a lot of options for long-term growth, but all that development takes time. It also glosses over the strategic value of certain upgrades. For example, if you've got open ports on your DOCSIS 1.1 CMTS line cards, then doing node splits is no big deal. When you start talking about procuring hardware, things become a lot more sticky -- this gear has either a 3 or 5 year depreciation cycle, so with the company's aggressive stance on DOCSIS 3.0 deployment the last thing they want to do is sink a bunch of capital into pre-3.0 hardware and be stuck with it past the end of the decade.

funchords
Hello
MVM
join:2001-03-11
Yarmouth Port, MA

said by SpaethCo:
said by funchords:

How much does it cost to split a node or add a node? Nobody is talking.
Comcast talks to investors all the time -- I get a nice prospectus from them every year that gives a high level overview of the company, its operation, and its financial performance. There are a few different methods of splitting a node.

Most nodes start out combined at the head-end with multiple HFC nodes sharing a common head-end port (at least on the downstream). Those can be split by simply breaking the nodes into their own CMTS ports at the head-end. According to Comcast investor numbers this costs about $7500 per split ($2500/year on a 3 year depreciation cycle).

The next split happens at the node itself where there is typically a north, south, east, and west string that branches out from the platform. Each segment can be broken out to have its own dedicated frequencies back to the CMTS with dedicated fiber. The investor numbers that were given for that are $18,000 per split (again $6k/3years).

If one segment of a node becomes heavy (ie, north) you need to inject another distribution point in the coax plant to divide the infrastructure out yet again. Comcast places a buildout value of $60,000 on that split ($20k / 3years). Personally I think that value is low, but it is possible if neighborhood platform buildout is not factored into the cost. (ie, a new development goes in and the developer establishes a concrete pad with power hookups for telco / MSO / utility equipment)

As of the investor documentation Comcast sent out in 2007, they had approximately 102,000 HFC nodes on their network.
Awesome! I spent about a half-hour looking for this information in Google -- and I probably can find that prospectus on the Investor Relations site. :::SHEEESH::: (BTW, there are some interesting "physical node split" alternatives being offered -- at least interesting by title only -- when searching Google).

$60K strikes me as low, too. Quite honestly, the numbers in my head had 6 digits and they didn't start with a 1.

The depreciation thing confuses me -- does this mean they can't charge the entire expense in the same year? Or does it mean they can charge the entire expense in the purchase year and then write-off the value of the capital (as a loss) over the next 3 years? (Other than taxes, does the depreciation amount have any value to this debate?)
the last thing they want to do is sink a bunch of capital into pre-3.0 hardware and be stuck with it past the end of the decade.
If Verizon doesn't get back on the stick, perhaps they can.
funchords

funchords to SpaethCo

MVM

said by SpaethCo:

My assumption wasn't based strictly on hardware, but on pricing that I know personally from having cut purchase orders. If I use boxes from F5 Networks and 8E6 Technologies as a starting point, my guess would be that each P2P appliance probably costs about $30k.
So, based on numbers like yours, and the fact that 65% of Comcast's node splits are of the "virtual" kind (either the $7.5K or $18K type), please tell me if you think it is safe-ish to say: "Comcast spends about as much on node splits as they do on Sandvine."

SpaethCo
Digital Plumber
MVM
join:2001-04-21
Minneapolis, MN

How many Sandvine appliances do you honestly think they have? My guess would be no more than a pair per market head-end.

funchords
Hello
MVM
join:2001-03-11
Yarmouth Port, MA

Wow. I was thinking more than that -- some number higher than 1 per 10 Gbps hanging off the router at the aggregation point. (That's for the PTS 14000 model.) I have no idea regarding the 8210s, which they probably have, too.