Stanford FCC Meeting One-Sided but Useful
Lessig dominated the conversation; ISPs were absent
The FCC meeting at Stanford appears to have gone well, with multiple speakers discussing network neutrality in depth for the commission to hear. As predicted, well-known net-neutrality crusader (and Stanford professor) Larry Lessig spoke extensively at the meeting. Unlike the other speakers, Lessig was given an unlimited amount of time to make his points.
quote:
"Two of his stronger points, which you can expect to see repeated, were one, that Net Neutrality principles have been the historic base of the Internet, and have been responsible for its unbridled competition and growth. And two, that providers should be governed by clear rules that make it more expensive for them to restrict network access than to provide broadband that doesn’t differentiate or prioritize different traffic types."
Some say it's great that these points were made to the FCC. However, others say that the absence of Comcast and the other ISPs made this too much of a one-sided debate. Those who missed out on the meeting can watch some of the video footage here.

FFH5 (Premium Member, Tavistock NJ, joined 2002-03-03)

Comcast's EVP Cohen says Martin's thoughts irrelevant

»www.theregister.co.uk/20 ··· thority/
Comcast has told the chairman of the US Federal Communications Commission that he has no legal right to prevent the company from busting BitTorrents.

The FCC is currently investigating claims that the big-name ISP put a choke hold on BitTorrents and other P2P traffic. But clearly, Comcast believes this is an epic waste of time.

With a recent FCC filing (PDF), Comcast executive vice president David Cohen informed Chairman Kevin Martin that even if the ISP is guilty of violating the commission's Internet Policy Statement, the commission doesn't have the power to do anything about it.

At first, Comcast countered this argument by pointing out that the Policy Statement allows for "reasonable network management" and that its BitTorrent throttling was nothing less than reasonable. And some experts have agreed. But now the company is saying it doesn't really matter whether the practice is reasonable or not.

"There are several reasons why the Commission cannot lawfully issue an injunction against Comcast with regard to the provision of Internet services, even were it to conclude - contrary to what we have demonstrated in our previous communications with the Commission - that Comcast's behavior is inconsistent with the Internet Policy Statement," reads David Cohen's FCC filing.

At one point, Cohen says "It is settled law that policy statements do not create binding legal obligations. Indeed, the Internet Policy Statement expressly disclaimed any such intent."

When the Policy Statement was unveiled, Chairman Martin did say that policy statements were not "enforceable documents."

Cohen also says that if the FCC enjoined Comcast, it would run afoul of its own 2002 Cable Modem Declaratory Ruling and Congress' 1946 Administrative Procedure Act. And he's adamant that an FCC fine is out of the question as well.

"For all these reasons, there is no basis upon which the Commission could lawfully adopt any sort of prospective injunction in the current setting. I might add that all of these reasons - plus others that we have previously detailed - would apply to any purported assessment of monetary forfeitures based on prior conduct."
Translation: Cohen says to Martin "Your Mama".

And from the meeting at Stanford, it appears the FCC will probably do nothing anyway:

»www.washingtonpost.com/w ··· 025.html
FCC commissioners Michael Copps and Jonathan Adelstein called for the agency to strengthen its power to prevent Comcast and its competitors from unfairly discriminating against some customers, reports AP. But two others, Deborah Tate and Robert McDowell, warned against burdening the industry with additional, costly regulations.

FCC Chairman Kevin Martin played the quasi-neutral game on it, but leaned towards anti-regulation: he argued that the FCC's current Internet policy is sufficient, but that it needs to be enforced to guarantee that whatever action an ISP takes "is tailored to a legitimate purpose." He also said that Comcast and other companies should be permitted to manage their networks to ensure traffic flows smoothly, but that customers should be given notice.
Sounds like a 2 to 2 tie and Martin planning on doing nothing but jawboning for the press.

funchords (MVM, Yarmouth Port, MA, joined 2001-03-11)

Re: Comcast's EVP Cohen says Martin's thoughts irrelevant

said by FFH5:

»www.theregister.co.uk/20 ··· thority/
FCC Chairman Kevin Martin played the quasi-neutral game on it, but leaned towards anti-regulation: he argued that the FCC's current Internet policy is sufficient, but that it needs to be enforced to guarantee that whatever action an ISP takes "is tailored to a legitimate purpose." He also said that Comcast and other companies should be permitted to manage their networks to ensure traffic flows smoothly, but that customers should be given notice.
Sounds like a 2 to 2 tie and Martin planning on doing nothing but jawboning for the press.
It should be understood, however, that these remarks were prepared before the hearing started.

I think that Martin has 4 votes (he needs three) to stop this particular type of interference. He has 0 votes to stop network management, because nobody wants to stop network management.

Martin thinks that the FCC policy statement of 2005 is all that is needed, and provides citations for his position (this supports the Free Press' position). Comcast says no, and (I believe) has good arguments and citations as well.

I think it will work out this way:

The FCC will vote 4-1 or 5-0 to stop this particular type of interference, granting one of the reliefs requested by Free Press. Comcast will appeal and fail. The FCC will find that its power to make policy exists.

However, I think that the FCC will have difficulty in fining Comcast due to the lack of action it took in 2005. Courts would (or will) say, and rightly so, that you can't decide not to regulate something, say in a press release how you would have regulated it if you had, and then later declare those statements to be regulation ex post facto.

The distinction between the two is the decision in 2008 to order the stop of a particular type of interference.

Commissioner Copps is right. An enforceable policy is needed so that the FCC can practice oversight. And Chairman Martin is right that policy has a dampening effect on an industry -- but he's ignoring the fact that competition doesn't exist for wireline broadband, and access providers are abusing that fact.

SpaethCo (Digital Plumber, MVM, Minneapolis, MN, joined 2001-04-21)

Re: Comcast's EVP Cohen says Martin's thoughts irrelevant

said by funchords:

I think that Martin has 4 votes (he needs three) to stop this particular type of interference. He has 0 votes to stop network management, because nobody wants to stop network management.
Ok, here's the $10,000 question: What's the alternative management solution that can be implemented on existing hardware right now that doesn't result in non-P2P users on the network getting screwed?

funchords (MVM, Yarmouth Port, MA, joined 2001-03-11)

Re: Comcast's EVP Cohen says Martin's thoughts irrelevant

said by SpaethCo:

...that doesn't result in non-P2P users on the network getting screwed?
So it's okay to screw some customers but not others?

CAN BE DONE RIGHT NOW: Turn off the policy-enforcement part of Sandvine and let TCP Congestion Control handle it. If anyone gets screwed, everyone gets screwed relatively fairly.

And if parts of the network are truly over-oversold to the point that it's in a constant state of congestion, then the calls demanding upgrades will identify those areas and give management an idea of which ones need the fastest attention.
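
To make "relatively fairly" concrete, here is a rough sketch of per-flow sharing at a single bottleneck (Python, with numbers invented for the example -- not a model of any real node). It also shows the limit of the idea, which is part of what we're arguing about: TCP's fairness is roughly per flow, not per subscriber, so a host running many flows gets a bigger total slice.

# Toy illustration only: equal per-flow shares at one congested link.
# Real TCP (AIMD) only approximates this, and RTT differences skew it,
# but the per-flow vs. per-subscriber point still holds.

def per_flow_share(capacity_mbps, flows_per_host):
    total_flows = sum(flows_per_host.values())
    share = capacity_mbps / total_flows          # equal share per flow
    return {host: n * share for host, n in flows_per_host.items()}

if __name__ == "__main__":
    hosts = {"web_user": 2, "voip_user": 1, "p2p_user": 40}   # hypothetical mix
    for host, mbps in per_flow_share(100.0, hosts).items():
        print(f"{host:9s} ~{mbps:5.1f} Mbps")
    # p2p_user ends up with ~93 of the 100 Mbps -- fair per flow,
    # very uneven per subscriber.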

SpaethCo (Digital Plumber, MVM, Minneapolis, MN, joined 2001-04-21)

Re: Comcast's EVP Cohen says Martin's thoughts irrelevant

said by funchords:

So it's okay to screw some customers but not others?
If you're running an application that causes network degradation, absolutely.
said by funchords:

CAN BE DONE RIGHT NOW: Turn off the policy-enforcement part of Sandvine and let TCP Congestion Control handle it. If anyone gets screwed, everyone gets screwed relatively fairly.
This is absolutely false. Anyone with access to an Ixia or Spirent traffic generator and a simple network lab should be able to map this out without any issue. If you're unable to line up something out there, I'd be happy to mock up something here in Minneapolis and do a MeetingPlace screen-sharing session of the traffic generator screen to show how the flows do *NOT* equally balance out in the way you suggest. This is network 101 stuff...
said by funchords:

And if parts of the network are truly over-oversold to the point that its in a constant state of congestion
With a sufficient quantity of applications driving 24x7 full-link-speed traffic, the only network configuration that will satisfy those demands is 1:1 mapping of edge subscription to backbone bandwidth. It can be built, but I don't think most people would like the pricing.

funchords (MVM, Yarmouth Port, MA, joined 2001-03-11)

Re: Comcast's EVP Cohen says Martin's thoughts irrelevant

said by SpaethCo:
said by funchords:

So it's okay to screw some customers but not others?
If you're running an application that causes network degradation, absolutely.


Except that I'm not. I was running a peer-to-peer connection at sub-1 KB/s speeds and was still getting reset. And I wasn't doing anything abusive.
said by SpaethCo:
said by funchords:

CAN BE DONE RIGHT NOW: Turn off the policy-enforcement part of Sandvine and let TCP Congestion Control handle it. If anyone gets screwed, everyone gets screwed relatively fairly.
This is absolutely false. Anyone with access to an Ixia or Spirent traffic generator and a simple network lab should be able to map this out without any issue. If you're unable to line up something out there, I'd be happy to mock up something here in Minneapolis and do a MeetingPlace screen-sharing session of the traffic generator screen to show how the flows do *NOT* equally balance out in the way you suggest. This is network 101 stuff...
I'll take you up on that. (I always enjoy meeting someone that I respect.) However, I did use the term "relatively" on purpose (you and I have discussed even better ways than the default router behavior). And, I am also running on the assumption that the network doesn't idle on "heavily congested."

If that's a bad assumption, I'd like to know about that, too.
said by SpaethCo:
said by funchords:

And if parts of the network are truly over-oversold to the point that its in a constant state of congestion
With a sufficient quantity of applications driving 24x7 full-link-speed traffic, the only network configuration that will satisfy those demands is 1:1 mapping of edge subscription to backbone bandwidth. It can be built, but I don't think most people would like the pricing.
I also used the term "over-oversold" on purpose. As you know, I have no problem with the concept of dividing the bandwidth by 200% or 300% -- but that xxx% has to be "managed" in such a way that the users can still count on getting their subscribed speeds within a reasonable degree of confidence. I say that number is in the mid-to-high 90%s.
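
As a back-of-the-envelope sketch of what I mean (Python; every number below is invented for illustration, nobody's real plant): with a 3:1 oversell, whether people see their full tier in the "mid-to-high 90%s" of the time comes down almost entirely to how many of them are pulling full speed at the same instant.

# Rough illustration of oversubscription (all numbers hypothetical).
# 200 subscribers on an 8 Mbps tier share a link sized at a 3:1
# oversell (200 * 8 / 3 ~ 533 Mbps).
import random

def full_speed_fraction(subs=200, tier_mbps=8.0, oversell=3.0,
                        p_active=0.10, trials=20_000):
    """Fraction of sampled instants where aggregate demand fits the link."""
    capacity = subs * tier_mbps / oversell
    ok = 0
    for _ in range(trials):
        active = sum(random.random() < p_active for _ in range(subs))
        if active * tier_mbps <= capacity:        # everyone gets full tier
            ok += 1
    return ok / trials

if __name__ == "__main__":
    for p in (0.10, 0.25, 0.40):
        print(f"{p:.0%} of subs active at once -> full speed "
              f"{full_speed_fraction(p_active=p):.1%} of the time")

At 10-25% concurrency the confidence stays in the high 90s; push steady-state concurrency toward 40% -- which is what 24x7 upload traffic does to a node -- and it collapses.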

Lessig says he typically doesn't get half the speed he's paying for. I usually can get more than I'm paying for, with the exception of BitTorrent uploads, these days.

SpaethCo (Digital Plumber, MVM, Minneapolis, MN, joined 2001-04-21)

Re: Comcast's EVP Cohen says Martin's thoughts irrelevant

said by funchords:

said by SpaethCo:

If you're running an application that causes network degradation, absolutely.
Except that I'm not. I was running a peer-to-peer connection at sub-1 KB/s speeds and was still getting reset. And I wasn't doing anything abusive.
I'm only one guy with a car, it's not like I'm contributing to global warming...
said by funchords:

I'll take you up on that. (I always enjoy meeting someone that I respect.)
I'll try and set something up. The SmartBits box I want to use is out in Hartford for VoIP QoS policy testing right now, but should be back in our office in a couple weeks.
said by funchords:

However, I did use the term "relatively" on purpose (you and I have discussed even better ways than the default router behavior). And, I am also running on the assumption that the network doesn't idle on "heavily congested."
The network could very well idle on heavily congested. The problem, of course, is that not every network publishes good stats and not every user community is identical. Plusnet in the UK is a good example of a company that is publishing raw stats for how they manage their network. The blog entry shows the case in 2006 where P2P was present at all times of the day and ramped up at night when they reduced their throttling of P2P traffic.

»www.p2p-blog.com/item-116.html

If you click on the link you can see current stats for today, and while they are clearly more aggressive in limiting P2P today, the same pattern holds true: P2P usage goes up at night and the network is far from idle.

The other problem is that the user base can play a huge role in the amount of P2P traffic on the network. Take a look at the traffic analysis from Sprint Labs, where they examined traffic across the Sprintlink backbone. In their later analysis they only looked at San Jose node traffic, which is unfortunate, because if you look at the earlier work San Jose shows an unusually low amount of P2P traffic compared to New York.

»ipmon.sprintlabs.com/pac ··· p?020203

»ipmon.sprintlabs.com/pac ··· p?020806

In that regard it's difficult to gauge the impact of P2P traffic on your network without knowing the composition of the traffic generated by your segment of users. One thing comes through clearly from these graphs, though: there is no peak and non-peak concept when it comes to P2P traffic. P2P will constantly expand its traffic to fill the gap of anything made available to it.
said by funchords:

Lessig says he typically doesn't get half the speed he's paying for. I usually can get more than I'm paying for, with the exception of BitTorrent uploads, these days.
Part of the problem there is the variability. Not all modems, routers, and PC configurations are created equal. Moreover, not all coax plant signal conditions are perfect either. There are many factors that are outside of the carrier's direct control that have significant impact on how your broadband connection will perform.

You see the same thing on the wireless voice carrier side. One person's favorite wireless carrier is the next person's worst carrier they've ever used. Equipment and location variations can produce massive differences in customer experiences.

funchords (MVM, Yarmouth Port, MA, joined 2001-03-11)

Re: Comcast's EVP Cohen says Martin's thoughts irrelevant

On Plusnet, I see P2P staying the same, not increasing or decreasing. Since it's throttled, I'm not sure it's meaningful, anyway. However, it is not "constantly expand[ing] its traffic to fill the gap of anything made available to it." Check out the dips between 0000 and 1200 each day!

As to the Sprint pics -- I see a 97% uncongested network coming out of NYC. There is no "treatment" needed on an uncongested network.

Exactly what are you talking about when you say, "P2P will constantly expand its traffic to fill the gap of anything made available to it?" How is that different from HTTP or FTP or Skype? P2P is just an architecture, the apps don't all work the same and none do anything special with regard to filling a gap. On the transmit side -- how do they know a gap? On the receive side, how do they control it?

SpaethCo (Digital Plumber, MVM, Minneapolis, MN, joined 2001-04-21)

Re: Comcast's EVP Cohen says Martin's thoughts irrelevant

You're clearly missing the point. Notice on the Sprint graphs that P2P levels are constant out of NY for the entire sample interval. I didn't reference the Sprint pics to indicate congestion, but rather the constant, heavy 24x7 nature of the traffic. Considering these are backbone nodes, the lack of congestion is a good thing. It's the same deal on the Plusnet captures: as soon as they start ramping down how aggressively they throttle P2P traffic every night, the traffic rises to meet the extra bandwidth as it is made available.

The reason that P2P expands into any chunk of bandwidth that is made available is that it is always trying to fire up new connections. ALWAYS. As long as the client is running and the tracker is giving it potential clients to connect to, it's going to keep making every possible connection it can. That's a pattern that doesn't show up in human-initiated connections like standard HTTP and RTP / VoIP traffic.

funchords (MVM, Yarmouth Port, MA, joined 2001-03-11)

Re: Comcast's EVP Cohen says Martin's thoughts irrelevant

said by SpaethCo:

You're clearly missing the point.
Not on purpose -- that's why I replied back as I did.
said by SpaethCo:

Notice on the Sprint graphs that P2P levels are constant out of NY for the entire sample interval.
I agree, that's what I saw.
said by SpaethCo:

I didn't reference the Sprint pics to indicate congestion, but rather the constant, heavy 24x7 nature of the traffic. Considering these are backbone nodes, the lack of congestion is a good thing. It's the same deal on the Plusnet captures: as soon as they start ramping down how aggressively they throttle P2P traffic every night, the traffic rises to meet the extra bandwidth as it is made available.
Can you mark up one of these graphs to show me what it is that you're seeing that supports that last sentence? I'm still missing it.
said by SpaethCo:

The reason that P2P expands into any chunk of bandwidth that is made available is that it is always trying to fire up new connections. ALWAYS.

As long as the client is running and the tracker is giving it potential clients to connect to, it's going to keep making every possible connection it can. That's a pattern that doesn't show up in human-initiated connections like standard HTTP and RTP / VoIP traffic.
No, that's incorrect. There is a limit. It only tries to make new connections if its peer list is smaller than the limit configured. While clients vary, they don't vary much. Connecting to 50 peers (as low as 35, as high as 80, depending on the client) in a swarm (a download task) is the default limit. And again, that's 35-80(-ish) connections, not all of them are active, and very few (3-4) are active in the congested upload direction.

I do have an explanation for your understanding -- there is a problem where users will configure their client inappropriately -- probably 25% to 33% of them do -- thinking they can get better performance out of it by doing so. (In reality, they get worse overall performance if they do this.) Similarly, people do set Firefox and IE to make more simultaneous server connections via registry or configuration settings.

In both cases, though, an open connection doesn't mean it is being used very much. And if the congestion issue is upstream, they can open as many connections as they want but only 3-4 of them are going to be active at any one time.

SpaethCo (Digital Plumber, MVM, Minneapolis, MN, joined 2001-04-21)

Re: Comcast's EVP Cohen says Martin's thoughts irrelevant


[attached image: current Plusnet cap]
said by funchords:

Can you mark up one of these graphs to show me what it is that you're seeing that supports that last sentence? I'm still missing it.
I'm not sure if you were looking at the new caps or the shot from 2006 so I attached a current image. You can see the P2P traffic is nearly instant-on once they let up on the throttling outside of peak hours.
said by funchords:

said by SpaethCo:

The reason that P2P expands into any chunk of bandwidth that is made available is that it is always trying to fire up new connections. ALWAYS. As long as the client is running and the tracker is giving it potential clients to connect to, it's going to keep making every possible connection it can.
No, that's incorrect. There is a limit. It only tries to make new connections if its peer list is smaller than the limit configured. While clients vary, they don't vary much. Connecting to 50 peers (as low as 35, as high as 80, depending on the client) in a swarm (a download task) is the default limit. And again, that's 35-80(-ish) connections, not all of them are active, and very few (3-4) are active in the congested upload direction. ...

Similarly, people do set Firefox and IE to make more simultaneous server connections via registry or configuration settings.
That's per client though -- and each client produces a cumulative effect on the network.

Concurrency is the big problem. We as humans inject minor amounts of randomness to everything we do. Even though the majority of the US population gets up and goes to work at around the same time, we introduce just enough variance that when we turn on lights in the morning we don't all hit them at the exact same instant and cause a power dip. We shower with just enough variance that we're not all putting load on the municipal water supply at the exact same instant, and we leave our houses in the suburbs with just enough staggering that traffic manages to not back up at the same time in each neighborhood.

It's the same situation with most web traffic. HTTP and FTP traffic has the benefit of being both a finite transfer and largely human-initiated. (i.e., I click on something and it will download or upload until the task is complete, then it's done until I click again.) For VoIP traffic it is interactive -- you have to be involved with being on the phone for the time you are generating that traffic. You also have random events that determine if you can generate that traffic at all, such as both you and the person you want to call being around a phone to take the call, and the person you want to call needs to not be on the phone with someone else when you choose to call them.

For human behavior, we've seen when this routine goes wrong: natural disasters. When outside forces cause us to do things at the exact same instant, we overrun our infrastructure. We see it time and time again -- freeways packed and not moving as people all leave at the same time to flee an oncoming hurricane. Cell phone networks melting down because we're all trying to get in touch with people to make sure they're okay. Most of the infrastructure we rely on is not meant for all of us to use it at the exact same instant.

P2P-type apps are designed to follow this same pattern, with trackers ensuring that concurrency is not only possible, but damn well guaranteed. The tracker's main goal is to make sure that clients have enough connection information to keep traffic moving simultaneously between clients. If you get enough clients all sharing in the same content it is guaranteed that you are going to be transmitting non-stop. The clients line up all those connections so that there is never a period where it's sitting doing nothing. One connection freezes up? No problem -- just move on to one of the other established connections and keep the bits moving.

That's the crux of the problem -- shared connections need to be statistically multiplexed, and you can't multiplex an application that never backs off and yields the line to other network devices. If P2P authors wanted to make their applications network-friendly they would have implemented random timers to spread out the network load. If apps like BitTorrent transmitted for {x} seconds and then backed off from all uploads for {y} seconds (where "x" and "y" are random numbers between, say, 0 and 180) we wouldn't be having these discussions today.
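
To put a rough number on that idea, here's a toy simulation (purely hypothetical -- no shipping client works this way): each of 1,000 clients uploads for x seconds, then goes quiet for y seconds, with x and y uniform on 0-180.

# Sketch of the random on/off timer idea described above (hypothetical).
import random

def offered_load(clients=1000, horizon_s=3600, max_interval_s=180):
    """Count how many clients are uploading during each second of the window."""
    active = [0] * horizon_s
    for _ in range(clients):
        t = random.uniform(0, max_interval_s)      # stagger start times
        while t < horizon_s:
            x = random.uniform(0, max_interval_s)  # transmit window
            y = random.uniform(0, max_interval_s)  # back-off window
            for s in range(int(t), min(int(t + x), horizon_s)):
                active[s] += 1
            t += x + y
    return max(active), sum(active) / horizon_s

if __name__ == "__main__":
    peak, avg = offered_load()
    print(f"peak concurrent uploaders: {peak}, average: {avg:.0f} of 1000")
    # vs. 1000 concurrent, 24x7, for clients that never back off

The aggregate settles around half of what the same clients would generate running 24x7, and, more importantly for statistical multiplexing, the load is smeared out instead of every client hammering the line at the same instant.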

funchords (MVM, Yarmouth Port, MA, joined 2001-03-11)

Re: Comcast's EVP Cohen says Martin's thoughts irrelevant

said by SpaethCo:
said by funchords:

Can you mark up one of these graphs to show me what it is that you're seeing that supports that last sentence? I'm still missing it.
I'm not sure if you were looking at the new caps or the shot from 2006 so I attached a current image. You can see the P2P traffic is nearly instant-on once they let up on the throttling outside of peak hours.
Thanks! I see it now.

Plusnet has done a pretty good job of explaining their management and rationale on their site.

I wonder if the realtime P2P applications are covered under "Streaming" or "P2P" on that graph. I guess I can ask them.

Thanks for showing me that.

It would help me to know if you understand that the widely-held notion that "...P2P expands into any chunk of bandwidth that is made available is that it is always trying to fire up new connections..." is incorrect.

As for concurrency, this is a truth for any long transfer -- including the now very popular streaming video and direct-download sites. The plus.net graph shows the impact of the "prime-time" nature of our viewing habits.
said by SpaethCo:

If you get enough clients all sharing in the same content it is guaranteed that you are going to be transmitting non-stop.
I don't think that's true, and I think that the plus.net graph bears this out.

Generally, these clients come configured to upload 125% (give or take) of the download size. Again, somewhere between 1/4th and 1/3rd probably change it (wet-finger number), but in this case the law of large numbers applies -- some change it to share more, others less. There is a definite trailing off of P2P proportion before 08:00 (when throttling starts?) which continues throughout the morning into the early afternoon. This is because these transfers are completing both download and upload goals and stopping.

The other factor is that a swarm quickly reaches the point where it has aged: 99% of those wanting a copy have it, and those that don't are latecomers to the swarm. These guys won't be able to upload very fast, as there is nobody left in the swarm who is downloading.
funchords (MVM) to SpaethCo

said by SpaethCo:

That's the crux of the problem -- shared connections need to be statistically multiplexed, and you can't multiplex an application that never backs off and yields the line to other network devices.
You've lost me a bit. Is this a DOCSIS concept or limitation?
said by SpaethCo:

If P2P authors wanted to make their applications network-friendly
There is no "if" here. They do. But they have been attacked and vilified by the ISPs, and so they have no bullshit tolerance, which is why we have encryption and obfuscation. But at the same time, they do support ISP-friendly tools such as Peercache.
said by SpaethCo:

they would have implemented random timers to spread out the network load. If apps like BitTorrent transmitted for {x} seconds and then backed off from all uploads for {y} seconds (where "x" and "y" are random numbers between, say, 0 and 180) we wouldn't be having these discussions today.
They all do something LIKE this; none do exactly what you've described. None back off all uploads simultaneously, except possibly Gnutella when it is configured to support only one upload.

  • BitTorrent apps will usually stop transmitting to a given peer on average once every 30 seconds, but never at an interval shorter than 10 seconds. (Each slot sustains ~5 KB/s or so.)

  • ED2K apps will usually transfer 9.28 MB, the randomizing factor there being that each upload slot has a different speed. The app then contacts the next client in the wait queue, which causes a brief delay before that transfer starts. (Each slot sustains ~3 KB/s or so.)

  • Gnutella apps will usually transfer for a fixed number of seconds. The app then contacts the next client in the queue, which causes a brief delay before that transfer starts. (Slots can be 10 KB/s or more; the Gnutella differentiators from ED2K are smaller wait queues, fewer slots, and higher upload speeds per slot.)
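
For concreteness, the BitTorrent behavior above looks roughly like this (a simplified sketch, not any client's actual choking code; the slot count and interval are from the description above):

# Rough model of BitTorrent-style upload rotation (not real client code).
import random

def rotate_upload_slots(peers, slots=4, rounds=6, interval_s=30):
    """Keep many peers connected, but upload to only a few at a time,
    swapping one active slot roughly every re-evaluation interval."""
    unchoked = random.sample(peers, slots)
    for t in range(rounds):
        print(f"t={t * interval_s:3d}s  uploading to {unchoked}  "
              f"({len(peers)} peers connected)")
        dropped = random.choice(unchoked)             # stop sending to one peer
        unchoked.remove(dropped)
        idle = [p for p in peers if p not in unchoked and p != dropped]
        unchoked.append(random.choice(idle))          # give its slot to another

if __name__ == "__main__":
    swarm = [f"peer{i:02d}" for i in range(50)]       # ~50 connected, default-ish
    rotate_upload_slots(swarm)

In other words: ~50 open connections, but only 3-4 of them carrying upload traffic at any given moment, each handed off on the order of tens of seconds.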

Is that sufficient to meet the needs you're talking about?

user850 (Anon, @k12.il.us)

I guess you run when there is any opposition...

Bribing your way is better than trying to convince others anyway. Money speaks louder than words, given how little morality we have.

Jim Kirk (Premium Member, 49985, joined 2005-12-09)

Re: I guess you run when there is any opposition...

said by user850 :

Bribing your way is better than trying to convince others anyway. Money speaks louder than words, given how little morality we have.
So true...
nasadude (Member, Rockville, MD, joined 2001-10-05)

won't do anything anyway

this FCC has no interest in doing anything that is the least bit "regulatory".

Whether or not Martin has the power to do anything is irrelevant; any vote on any matter that benefits consumers will be 3-2 against. The Republicans on the commission are there for a reason: to support anything industry wants to do.

therefore, I predict whatever Martin does (and I think he will do something, simply because he needs to at least appear to be doing something), it will not be anything meaningful.

FCC = facilitator of crushing consumers

TIGERON (Member, Boston, MA, joined 2008-03-11)

Re: won't do anything anyway

AGREED.

A class-action lawsuit against Comcast is the ONLY way to stop this type of hidden, deceptive network management, and other actions including the twilight-zone bandwidth limits that they constantly deny.