
a333
A hot cup of integrals please
join:2007-06-12
Rego Park, NY

1 recommendation

a333 to devnuller

Member

to devnuller

Re: Is this a good thing for the net?

Yawn, here comes the typical argument... bandwidth is bandwidth, either way you look at it. All p2p does is open several simultaneous connections, splitting the user's bandwidth. Unless you horribly misconfigured your client to open up, say, 1000 ports. It's not as if the user is using any more bandwidth than if they were conducting a regular http download. P2P actually is better for a network, as (given enough peers) it completes downloads significantly faster than normal centralized server methods, thus getting heavy users off the network noticeably faster (obviously, unless the user is dumb enough to allocate their entire upstream bandwidth to seeding).
As to bypassing the "TCP congestion control" you speak of, do you think Bell's solution is ANY better? The throttling of particular packets by itself violates the principles of TCP. Not only that, it also throttles/cripples MANY legitimate applications, such as secure VPNs or other encrypted connections. Do you REALLY want that as an alternative to this so-called "problem" of p2p? I've said over and over, the ideal solution is to gracefully scale back speed for ANY upload/download if said user is using their full bandwidth for more than 20 minutes during peak hours. This actually solves the problem, unlike throttling schemes like Bell's, which render many legitimate applications useless. Let's face it, even Comcast here in the States has been forced to take a long hard look at their policy on Sandvine. Soon enough, we can only hope Bell will as well...
Do I even support the above solution? By itself, absolutely NOT!! IMHO, the ideal solution is to upgrade the core and its routers. However, that takes time and capital that companies like Bell are rather unwilling to spend; they'd rather (ab)use their position in the limited Canadian ISP market to deploy band-aid solutions like throttling p2p.
amigo_boy
join:2005-07-22

amigo_boy

Member

said by a333:

bandwidth is bandwidth, either way you look at it.
That's not true. Even p2p users employ QoS to give their interactive activity higher priority (slowing down their BitTorrent, etc.).

I don't see anything wrong with the same principle applying at a higher (wider) level.

Mark
devnuller
join:2006-06-10
Cambridge, MA

devnuller to a333

Member

to a333
said by a333:

I've said over and over, the ideal solution is to gracefully scale back speed for ANY upload/download if said user is using their full bandwidth for more than 20 minutes during peak hours. This actually solves the problem, unlike throttling schemes like Bell's, which render many legitimate applications useless.
Like this? »www.bloomberg.com/apps/n ··· NA18k1dY

a333
A hot cup of integrals please
join:2007-06-12
Rego Park, NY

a333

Member

Yep, Comcast's "protocol agnostic" approach is EXACTLY what I was talking about, which is why I mentioned them in my original post.
To amigo_boy, QoS can be enabled on many different levels. For instance, many Vonage routers let you slow down ALL your internet traffic in general to make sure your VoIP stays clear/reliable. Many download managers let you cap download speeds or schedule when stuff gets downloaded. I don't see anything special about BitTorrent/P2P. What's your point?

SpaethCo
Digital Plumber
MVM
join:2001-04-21
Minneapolis, MN

1 recommendation

SpaethCo to a333

MVM

to a333
said by a333:

bandwidth is bandwidth, either way you look at it.
That's not entirely true. It is several orders of magnitude more expensive to deliver bandwidth to residential homes than it is to provide bandwidth for servers in data centers. That's why shifting the distribution burden from data center hosted servers to end-user links is such a poor idea. The P2P architecture is one that you arrive at when you develop an application with complete ignorance to the realities of the network infrastructure it will run on.
said by a333:

It's not as if the user is using any more bandwidth than if they were conducting a regular http download.
False. An HTTP transfer would be a straight download, with the only upstream packets being TCP ACKs. In a P2P environment you upload content to other members while you download -- you will use significantly more overall bandwidth grabbing your content even if you shut down your P2P client immediately after you have completed the download.
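As a rough back-of-the-envelope check of that claim, here is a small Python sketch; the file size, segment size, ACK behaviour, and the 1.0 share ratio are all assumed numbers for illustration, not measurements from anyone's network:

# Rough comparison of upstream bytes for a plain HTTP download versus a
# BitTorrent download of the same file. All numbers below are assumptions.

FILE_BYTES = 700 * 1024 * 1024      # assume a 700 MB file
MSS = 1460                          # typical TCP payload bytes per segment
ACK_BYTES = 40                      # a bare TCP ACK (IP + TCP headers, no payload)
ACKS_PER_SEGMENT = 0.5              # delayed ACKs: roughly one ACK per two segments

# Plain HTTP: upstream traffic is essentially just the ACKs for what you download.
segments = FILE_BYTES / MSS
http_upstream = segments * ACKS_PER_SEGMENT * ACK_BYTES

# BitTorrent: the client also uploads pieces to other peers while (and after)
# downloading; assume it stops once it has uploaded one full copy (ratio 1.0).
share_ratio = 1.0
bt_upstream = http_upstream + FILE_BYTES * share_ratio

print(f"HTTP upstream:       {http_upstream / 1e6:7.1f} MB (ACK overhead only)")
print(f"BitTorrent upstream: {bt_upstream / 1e6:7.1f} MB (ACKs plus uploaded pieces)")

Under those assumptions the HTTP download sends back roughly 10 MB of ACKs, while the torrent puts an extra full copy of the file onto the upstream.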
said by a333:

P2P actually is better for a network, as (given enough peers) it completes downloads significantly faster than normal centralized server methods, thus getting heavy users off the network noticeably faster
Again, false. Your P2P client will continue uploading to other members even after you have completed the download of your file. This seeding process will keep using upstream data capacity for as long as the client is running on your machine. Considering that technical hurdles make upstream capacity the most difficult to build out at the edge, an application designed to make constant use of upstream bandwidth is exceedingly bad.
said by a333:

The throttling of particular packets by itself violates the principles of TCP.
Throttling flows is not a technical violation. This is how QoS systems work; if you are giving priority to some packets then by definition you are also reducing the priority and delaying the delivery of other packets.
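A minimal sketch of that trade-off, assuming a toy strict-priority link with made-up queue contents (real QoS schedulers are more nuanced, e.g. weighted fair queuing, but the principle is the same):

from collections import deque

# Toy strict-priority link: each tick it sends one packet, always from the
# high-priority queue if anything is waiting there. Giving priority to one
# class necessarily delays the other class -- the trade-off described above.

high = deque(f"voip-{i}" for i in range(3))   # assumed interactive packets
low = deque(f"bulk-{i}" for i in range(3))    # assumed bulk/P2P packets

tick = 0
while high or low:
    queue, label = (high, "high") if high else (low, "low")
    pkt = queue.popleft()
    print(f"t={tick}: sent {pkt} ({label} priority; waiting high={len(high)}, low={len(low)})")
    tick += 1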

a333
A hot cup of integrals please
join:2007-06-12
Rego Park, NY

a333

Member

First of all, you took my statement "bandwidth is bandwidth" COMPLETELY out of context. Nice try, though...
Next, exactly how do you think seeding works? The uploading you do during YOUR transfer in turn speeds up someone else's download of the SAME file, hence letting a lot of heavy users get their files faster, reducing strain on the network in general. How hard is it to get this? P2P software REDUCES congestion, and avoids the situations most HTTP downloads would just keep trying to hammer their way through. To distribute Blizzard patches to several million users simultaneously using the regular HTTP/unicast methods would require a port into the 'net that's the size of a national backbone.
And what is all this B.S. about upstream bandwidth? Unless you set your client to use 100% of your upstream bandwidth, and make it open up ~2000 ports, you are NOT causing any harm to the network, PERIOD. It's the same as if you had been uploading that 400 MB family reunion movie to Grandma Ginny. As I said, bandwidth is bandwidth. P2P doesn't magically make my available bandwidth a multiple of 10.

Overall, none of you network engineers/"experts" have given me a VALID reason to throttle P2P in PARTICULAR.
amigo_boy
join:2005-07-22

amigo_boy to a333

Member

to a333
said by a333:

To amigo_boy, ... What's your point?
I was just responding to the assertion that all bandwidth is equal. That's not true. Even torrent users apply traffic shaping via QoS because they don't want their torrents disrupting their own interactive applications.

I see nothing wrong with applying the same common sense further upstream. Maybe ISPs aren't doing that in the best way. I don't know what they do, or what the alternatives are. But, just claiming that all bandwidth is equal is incorrect.

Mark

SpaethCo
Digital Plumber
MVM
join:2001-04-21
Minneapolis, MN

1 recommendation

SpaethCo to a333

MVM

to a333
said by a333:

The uploading you do during YOUR transfer in turn speeds up someone else's download of the SAME file, hence letting a lot of heavy users get their files faster, reducing strain on the network in general.
The payoff on P2P only works if many people leave their connection seeding after their transfer completes. The fast downloads of a few require extra upstream capacity from many others.
said by a333:

To distribute Blizzard patches to several million users simultaneously using the regular HTTP/unicast methods would require a port into the 'net that's the size of a national backbone.
It would require an intelligent method of distribution like using content delivery networks. Microsoft has more customers than Blizzard, and they have no problems deploying massive service packs and regular patches via HTTP transfers. Linux package managers like apt, yast, yum, or up2date also grab package updates via HTTP for the tens of millions of Linux boxes out there. Same deal with antivirus updates, or really the overwhelming majority of software updates.

Blizzard uses P2P for one key reason: cost. It moves the distribution burden and expense from them to you. In reality the WoW patches would deploy significantly faster if Blizzard were to "man up" and pay for CDN delivery.
said by a333:

And what is all this B.S. about upstream bandwidth? Unless you set your client to use 100% of your upstream bandwidth, and make it open up ~2000 ports, you are NOT causing any harm to the network, PERIOD. It's the same as if you had been uploading that 400 MB family reunion movie to Grandma Ginny.
Broadband bandwidth is oversubscribed. Your "idle" bits are intended to be someone else's "used" bits. The difference here is again finite vs infinite duration transfers. You start a standard upload of that 400MB video to Grandma Ginny, walk away, and once your transfer finishes there is no more traffic on the network. Using a P2P application, on the other hand, will keep putting bits on the network for as long as you let the application run. Little Timmy queues up some MP3s to download in the morning before he goes to school -- even though the transfer will probably finish in the first 30-45 minutes, the P2P app will keep uploading to other P2P clients the entire time he's away at school, or even longer if he leaves the client running after he gets home.
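To put rough numbers on that finite-vs-ongoing distinction (the 400 MB figure is from the example above; the seeding rate and the eight-hour school day are assumptions for illustration):

# One-off upload vs. a P2P client left seeding. The 400 MB figure comes from
# the example above; the seeding rate and duration are assumed.

one_off_upload_mb = 400                       # the family video, sent once

seed_rate_kbps = 256                          # assumed average upstream seeding rate
seed_hours = 8                                # client left running during the school day
seeding_mb = seed_rate_kbps / 8 * 3600 * seed_hours / 1024

print(f"One-off upload: {one_off_upload_mb} MB, then the link goes quiet")
print(f"Seeding at {seed_rate_kbps} kbps for {seed_hours} h: ~{seeding_mb:.0f} MB, "
      f"and it keeps going until the client is closed")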

The other issue is concurrence. With standard transfers you have normal human triggers that cause the load to be randomly distributed (i.e., the chances of you and your neighbor clicking a website button to trigger a large download at the same time are relatively small). Since the distribution from P2P is constant and automated, the chances of multiple P2P users' transfers all hitting the network at the same time are significantly greater.
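The concurrence point can be sketched with a simple binomial model; the subscriber count, the "segment hurts" threshold, and the two duty cycles below are made-up illustrations, not measurements:

from math import comb

# Probability that at least k of n subscribers on a shared segment are busy at
# the same instant, for two assumed duty cycles: bursty human-triggered
# transfers vs. an always-on P2P client. All numbers are illustrative.

def p_at_least(n: int, k: int, p: float) -> float:
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_users, k_busy = 50, 10                      # 50 homes; assume pain at 10+ active at once
for label, duty in [("bursty HTTP use", 0.05), ("always-on P2P", 0.90)]:
    prob = p_at_least(n_users, k_busy, duty)
    print(f"{label} (duty cycle {duty:.0%}): P(>= {k_busy} of {n_users} active) = {prob:.4f}")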
amigo_boy
join:2005-07-22

4 edits

amigo_boy to a333

Member

to a333
said by a333:

letting a lot of heavy users get their files faster, reducing strain on the network in general.
That's illogical. If letting torrents run faster so they finish sooner (and consume bandwidth for less time) really helped, torrent users wouldn't use QoS to slow down their own torrents for the benefit of their web browsing, DNS lookups and VoIP.

I agree that distributed serving reduces network load compared to the load of multiple people downloading from one server. But, if distributed loads facilitate data transfer that wouldn't have been feasible from one server (because the provider wouldn't pay for enough bandwidth to meet the demand), then it has the effect of creating "virtual" servers which are unfunded on networks that didn't bargain for providing that kind of bandwidth.

It's an interesting challenge. But, let's not be coy about what's happening.

Mark

rawgerz
The hell was that?
Premium Member
join:2004-10-03
Grove City, PA

rawgerz to SpaethCo

Premium Member

to SpaethCo
said by SpaethCo:

Broadband bandwidth is oversubscribed. Your "idle" bits are intended to be someone else's "used" bits. The difference here is again finite vs infinite duration transfers. You start a standard upload of that 400MB video to Grandma Ginny, walk away, and once your transfer finishes there is no more traffic on the network. Using a P2P application, on the other hand, will keep putting bits on the network for as long as you let the application run. Little Timmy queues up some MP3s to download in the morning before he goes to school -- even though the transfer will probably finish in the first 30-45 minutes, the P2P app will keep uploading to other P2P clients the entire time he's away at school, or even longer if he leaves the client running after he gets home.
That's what QoS is for. Prioritize HTTP and keep P2P at the bottom. Everyone wins. Now, if everyone using P2P were running a heavily encrypted VPN, then it would be a problem. Or with "unlimited" high-speed tiers that don't have the bandwidth to support all the clients. But that's not exactly the end user's fault.
Skippy25
join:2000-09-13
Hazelwood, MO

Skippy25 to amigo_boy

Member

to amigo_boy
To the ISP, yes, it should be, because they should be nothing but the dumb pipes they are, passing packets along the network.

The rest will work itself out through the laws of physics, congestion, and consumers' willingness to deal with it.

funchords
Hello
MVM
join:2001-03-11
Yarmouth Port, MA

funchords to SpaethCo

MVM

to SpaethCo
Espaeth, FWIW, there's a reason that dedicated file-sharers flee to "private" (they're not really) "trackers" (they're mostly website-tracker hybrids), and it's because most people don't share all that constantly. So they sign up for these private sites (which run like sports leagues) that set and enforce community rules about uploading at least as much as you download. That users do this, and that the motive is the sharing imbalance, is very clear.

If we lived in the world you're describing, the average up/down "ratio" would be 5:1 and private sites wouldn't exist. My guess is the average up/down ratio is more like 1:5. Yes, users upload for longer than they download, but they also have asymmetric pipes. It takes 5-15x longer to upload the same amount as they download.
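The asymmetry point is easy to put rough numbers on; the line speeds and file size below are assumptions, not anyone's actual plan:

# With an asymmetric line, uploading as much as you downloaded takes far
# longer than the download itself. Line speeds and file size are assumed.

down_mbps, up_mbps = 5.0, 0.5                 # assumed asymmetric residential profile
file_mb = 700

download_min = file_mb * 8 / down_mbps / 60
upload_min = file_mb * 8 / up_mbps / 60

print(f"Download {file_mb} MB at {down_mbps} Mbps: ~{download_min:.0f} min")
print(f"Upload the same amount at {up_mbps} Mbps (for a 1:1 ratio): ~{upload_min:.0f} min, "
      f"about {down_mbps / up_mbps:.0f}x longer")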
patcat88
join:2002-04-05
Jamaica, NY

patcat88 to a333

Member

to a333
said by a333:

Yawn, here comes the typical argument... bandwidth is bandwidth, either way you look at it. All p2p does is open several simultaneous connections, splitting the user's bandwidth. Unless you horribly misconfigured your client to open up, say, 1000 ports. It's not as if the user is using any more bandwidth than if they were conducting a regular http download.
Yes it is. Let's take an example. TCP equalizes bandwidth across all TCP connections sharing a single bottleneck. User A and User B have 2 mbit connections to the DSLAM. The DSLAM has no one on it other than User A and User B. The DSLAM has a 3 mbit connection to the core router. User A is running P2P with 100 connections; User B is running an HTTP download with 10 connections. User A's speed will be 2 mbit (limited only by his DSL modem's 2 mbit speed), and User B's will be 1 mbit. Shouldn't it be 1.5 mbit for User A and 1.5 mbit for User B? It's not. If User A had a 3 mbit connection to the DSLAM (DSLAM to internet still 3 mbit), his speed would be 2.72 mbit ((3/110)*100), and User B would be left with 0.27 mbit ((3/110)*10). It's the same reason download accelerators that make multiple connections to download an HTTP file work.
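That arithmetic can be reproduced in a few lines, under the same idealized assumption the example makes (the bottleneck splits evenly per TCP connection, then each user is capped by their modem; real DSLAM/router behaviour is messier):

# Idealized per-connection split at a 3 mbit bottleneck, then capped by each
# user's modem. Reproduces the 2 / 1 and 2.72 / 0.27 mbit figures above, and
# contrasts them with the 1.5 / 1.5 split a per-user-fair scheme would give.

BOTTLENECK = 3.0                                    # mbit/s, the DSLAM uplink

def per_flow_split(users):
    """users: {name: (connection_count, modem_cap_mbit)} -> {name: mbit}."""
    total_flows = sum(n for n, _ in users.values())
    alloc = {u: n * BOTTLENECK / total_flows for u, (n, _) in users.items()}
    capped = {u: min(alloc[u], cap) for u, (_, cap) in users.items()}
    # Bandwidth freed up by a capped user goes to whoever still has headroom
    # (good enough for this two-user sketch).
    leftover = BOTTLENECK - sum(capped.values())
    open_users = [u for u in users if capped[u] < users[u][1]]
    for u in open_users:
        capped[u] = min(users[u][1], capped[u] + leftover / len(open_users))
    return {u: round(v, 2) for u, v in capped.items()}

print(per_flow_split({"A (100 conns)": (100, 2.0), "B (10 conns)": (10, 2.0)}))
print(per_flow_split({"A (100 conns)": (100, 3.0), "B (10 conns)": (10, 3.0)}))
print("Per-user fair split would be", BOTTLENECK / 2, "mbit each")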

The speed caps (2 mbit in this case) don't extend past the DSLAM/CMTS. After that it's a single ethernet link, and upstream routers do not see the original speed caps, nor do they see the IPs and current traffic behind each IP when deciding which traffic to toss at a congested point. The router will randomly drop traffic. The chance of a single TCP connection's packet being dropped is the same as for any other TCP packet at that congestion/dropping point. A dropped packet means TCP will slow down that connection. So the more connections a user has (assuming the destinations have effectively infinite bandwidth), the less any single drop slows down the sum/pool of all of that user's connections.

You're acting as if every router has QoS support, can see your current utilization and every other user's utilization, and can decide to split bandwidth equally among all users (not all connections). A router doesn't see users and DSL modems and cable modems; it sees a bunch of TCP connections with different source IPs. Only if the router saw each modem as a VPN tunnel (which is a single point-to-point connection) would your idea work.
P2P actually is better for a network, as (given enough peers) it completes downloads significantly faster than normal centralized server methods, thus getting heavy users off the network noticeably faster (obviously, unless the user is dumb enough to allocate their entire upstream bandwidth to seeding).
I'll be driving down to the yacht club in my Maybach to host my yacht party, laughing all the way, having thrown the datacenter/server/internet connection costs onto the users to swallow.

There is so much content on the internet that only a couple of people on your headend/DSLAM/central office/city will have the content you want. No current P2P system, and no realistic future P2P system, actively attempts to talk to local (same ASN/ISP/city/fewest hops) peers instead of distant peers. P4P is dead, since no open source coders have any incentive to help "the man" (ISPs). When Vuze and uTorrent come with ASN preference, that's when you're not BSing.

Users do allocate almost all their upstream to seeding; otherwise you get banned from your private tracker, so they seed 24/7. Nobody pumps HTTP traffic 24/7.
Do I even support the above solution? By itself, absolutely NOT!! IMHO, the ideal solution is to upgrade the core and its routers. However, that takes time and capital
If IPv6 is DOA, so are any upgrades to TCP/IP. I'm still waiting for Xcast (Explicit Multicast), or some system to let me receive or send multicast traffic in P2P. P2P traffic would decrease dramatically if consumers had access to multicast. My one upstream stream of sectors of the file could be duplicated to hundreds of peers with only one copy of the traffic on each ISP, and it could cross long-haul backbone fiber as one copy. The only problem is that if your download is slower than my upstream stream, you would have to get "makeup" packets via conventional P2P from peers who did get my packets. Sectors that the most users need (the rarest) get sent out first. So a torrent could be seeded to 1000 users in the time it takes the initial seed to upload it exactly once.
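A rough copy-count sketch of that multicast claim (peer count and file size are made up; in real BitTorrent the re-upload burden is spread across many peers rather than resting on one seeder, but the aggregate upstream bytes put onto the network are comparable):

# Idealized upstream copy count for getting one file to many receivers:
# without multicast, in aggregate roughly one full copy per receiver has to be
# uploaded somewhere; with multicast, the seeder sends the stream once and
# routers replicate it toward the receivers. Numbers are illustrative.

peers = 1000
file_mb = 350

unicast_upstream_mb = peers * file_mb         # ~one uploaded copy per receiver, in aggregate
multicast_upstream_mb = 1 * file_mb           # the stream is sent once and duplicated downstream

print(f"Aggregate upstream without multicast: ~{unicast_upstream_mb:,} MB")
print(f"Seeder upstream with multicast:       ~{multicast_upstream_mb:,} MB "
      f"(duplicated toward the {peers} receivers by the network)")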
patcat88

patcat88 to a333

Member

to a333
said by a333:

P2P software REDUCES congestion, and avoids the situations most HTTP downloads would just keep trying to hammer their way through. To distribute Blizzard patches to several million users simultaneously using the regular HTTP/unicast methods would require a port into the 'net that's the size of a national backbone.
Pay Limelight or Akamai like a proper company »www.akamai.com/html/cust ··· ist.html .
Intelligent localized caching and distribution and redirection of clients to the closest server. Datacenters all over the world. Almost no transoceanic link usage by clients connecting to a CDN.

funchords
Hello
MVM
join:2001-03-11
Yarmouth Port, MA

funchords to patcat88

MVM

to patcat88
said by patcat88:
said by a333:

Yawn, here comes the typical argument... bandwidth is bandwidth, either way you look at it. All p2p does is open several simultaneous connections, splitting the user's bandwidth. Unless you horribly misconfigured your client to open up, say, 1000 ports. It's not as if the user is using any more bandwidth than if they were conducting a regular http download.
Yes it is. Let's take an example. TCP equalizes bandwidth across all TCP connections sharing a single bottleneck. User A and User B have 2 mbit connections to the DSLAM. The DSLAM has no one on it other than User A and User B. The DSLAM has a 3 mbit connection to the core router. User A is running P2P with 100 connections; User B is running an HTTP download with 10 connections. User A's speed will be 2 mbit (limited only by his DSL modem's 2 mbit speed), and User B's will be 1 mbit. Shouldn't it be 1.5 mbit for User A and 1.5 mbit for User B? It's not. If User A had a 3 mbit connection to the DSLAM (DSLAM to internet still 3 mbit), his speed would be 2.72 mbit ((3/110)*100), and User B would be left with 0.27 mbit ((3/110)*10).
Nope, because as long as the 1 Mbps connection has upward headroom, it's going to keep creeping up on it, and some of the resulting dropped packets at the 3 Mbps choke point will belong to the other user. This means that the equilibrium will continue to creep up until some balance is reached. (This thought experiment is easier if you think in terms of 1 and 10 connections instead of 10 and 100, but the outcome is the same).

There is possibly a temporary unfairness, and that's because the 100-connection link will experience a less deep cut on a dropped packet than the 10-connection link will. But ultimately routers are stateless; they only know data packets, and balance will eventually be achieved such that A and B packets get dropped at about the same rate.

Besides, none of us have a 3 Mbps choke point. You need to apply that as well.

The relative tiny broadband modem pipe is a good unfairness equalizer. If it wasn't for that, then it actually might be more prone to work the way you describe.
It's the same reason download accelerators that make multiple connections to download an HTTP file work.
Nope. Download accelerators work because you're taking a larger share of the distant server's ports, each of which is allocated a share of the total bandwidth.

They would work the way you envision if we had 100 Mbps connections fighting for a smaller upstream pipe. But as it is we all have a small pipe competing in a larger one -- so it doesn't.
patcat88
join:2002-04-05
Jamaica, NY

patcat88

Member

said by funchords:

Nope, because as long as the 1 Mbps connection has upward headroom, it's going to keep creeping up on it, and some of the resulting dropped packets at the 3 Mbps choke point will belong to the other user. This means that the equilibrium will continue to creep up until some balance is reached. (This thought experiment is easier if you think in terms of 1 and 10 connections instead of 10 and 100, but the outcome is the same).
100 connections will all try creeping up, just the same as the 10 connections will, but the 100 connections will still end up with more bandwidth. Unless a TCP/IP stack is designed to treat all of its connections as a whole when deciding whether to increase speed (which I don't think is possible, since the stack has no way of telling whether the congestion is on the local link or somewhere out on the internet behind many routers).
There is possibly a temporary unfairness, and that's because the 100-connection link will experience a less deep cut on a dropped packet than the 10-connection link will. But ultimately routers are stateless; they only know data packets, and balance will eventually be achieved such that A and B packets get dropped at about the same rate.
You're talking about packets; I'm talking about connections. Packets A and B can be part of the same connection.
Besides, none of us have a 3 Mbps choke point. You need to apply that as well.
It's an example; it makes less sense if I talk about Users A through ZZ and a 1 gigabit link from the CMTS to the core router, a 100 gigabit backplane, a 40 gigabit link to a peering center, then a 1 gigabit link to some Tier 1, a hand-off to a Tier 2 in the same datacenter, then a trip across the country on a leased MPLS OC768 shared with a handful of Tier 2 ISPs, where at 6 PM most days of the week there is congestion. Or we can go further and say it arrives on the other coast of the USA, goes into a Tier 1's datacenter, gets split into its Tier 2's bonded 1 gigabit circuits, flies off to a colo datacenter through the Tier 2's router, goes to another floor of that datacenter, and then heads down a congested 100 mbit link to a hot new Web 2.0 video sharing site that offers videos as a torrent with 30 1U servers in each torrent, or as an HTTP download from 1 server -- which is exactly where Users A through ZZ are all trying to connect. Let's hope no line cards are out of spec and causing congestion.
Nope. Download accelerators work because you're taking a larger share of the distant server's ports, each of which is allocated a share of the total bandwidth.
That's true too. Same drowning-out effect as with TCP connections, except now for Apache threads.
They would work the way you envision if we had 100 Mbps connections fighting for a smaller upstream pipe. But as it is we all have a small pipe competing in a larger one -- so it doesn't.
But our small pipes, summed up, are much larger than the "large" pipe we are trying to get into (consumer ISP contention).

P2P is like an ISP where 75% of its customers are botnet-infected and DDoSing YouTube off the net by drowning out legitimate connections -- except replace YouTube with a peering link.

funchords
Hello
MVM
join:2001-03-11
Yarmouth Port, MA

funchords

MVM

All 100 connections will try creeping up, and more of those connections will experience packet drops (more connections, but not a bigger proportion) -- but either way, routers don't understand connections; they just deal with packets, and when congestion hits, they drop in proportion.

If B is transmitting more data than A, then B will have more drops. We have to mentally turn the situation back into connections in order to predict what happens next.

As to your last sentence:

Client-server is not more legitimate than P2P (the Internet started as P2P), and well-moneyed companies don't deserve to be the only voice on the 'net.

You might like a receive-only network -- but we've had that before, we called it "Television."

edam
@btopenworld.com

edam to a333

Anon

to a333
said by a333:

P2P software REDUCES congestion, and avoids the situations most HTTP downloads would just keep trying to hammer their way through.
Haha ha!! You've obviously never managed a network, mate...

swhx7
Premium Member
join:2006-07-23
Elbonia

swhx7 to SpaethCo

Premium Member

to SpaethCo
said by SpaethCo:

It is several orders of magnitude more expensive to deliver bandwidth to residential homes than it is to provide bandwidth for servers in data centers. That's why shifting the distribution burden from data center hosted servers to end-user links is such a poor idea. The P2P architecture is one that you arrive at when you develop an application with complete ignorance to the realities of the network infrastructure it will run on.

... Considering that technical hurdles make upstream capacity the most difficult to build out at the edge, an application designed to make constant use of upstream bandwidth is exceedingly bad.

If A and B are both necessary conditions of a bad result, and both are voluntary human actions, then it is a fallacy to treat A as if it were an inevitable fact of nature and only B as a choice for which someone is responsible. To blame B instead of A or both, one needs an argument for B being a bad action and A a good one.

The slowness of residential links in USA, and their asymmetry, are both due to severe lack of competition in broadband markets. This in turn is due to national policies of granting right of way, local monopolies, and subsidies to telcos and cable companies, and permitting them to abuse customers outrageously, with minimal corresponding requirements or enforcement of mandates on behalf of the public.

On the other hand, p2p developers have merely coded for the internet as it was meant to be, and has the potential to be, as a network of peers not reliant on big commercial content providers. It is somewhat backwards to blame p2p for not capitulating to the distortions introduced by bad policies, rather than concluding that the policies have artificially made p2p into a problem.

SpaethCo
Digital Plumber
MVM
join:2001-04-21
Minneapolis, MN

SpaethCo to funchords

MVM

to funchords
said by funchords:

There is possibly a temporary unfairness, and that's because the 100-connection link will experience a less deep cut on a dropped packet than the 10-connection link will. But ultimately routers are stateless; they only know data packets, and balance will eventually be achieved such that A and B packets get dropped at about the same rate.
Even if A & B have packets dropped at the same rate, user A has 100 flows running congestion avoidance compared to user B's 10.

When a drop lands in B's traffic it hits one of his 10 flows; when a drop lands in A's traffic it hits one of his 100 flows -- so any single flow of B is roughly 10x more likely to take the hit than any single flow of A. 1:10 vs 1:100.

If packets were uniformly dropped across flows then it might balance in the way you suggest. The problem is that congestion / drops happen without regard to specific flows, so the impact across TCP flows is not even.
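A toy simulation of that effect, purely as a sketch: idealized AIMD flows, one shared bottleneck, and drops chosen per packet in proportion to each flow's rate (i.e., the router knows nothing about users); all parameters are arbitrary:

import random

# User A runs 100 flows, user B runs 10, all sharing one bottleneck. Every
# step each flow additively increases its rate; whenever total demand exceeds
# the link, one "packet" is dropped from a flow chosen in proportion to its
# rate, and that flow halves. Only the steady-state share per *user* matters.

random.seed(1)
CAPACITY = 100.0
INCREASE = 0.01
STEPS = 5_000

flows = {"A": [0.01] * 100, "B": [0.1] * 10}

for _ in range(STEPS):
    for user in flows:
        flows[user] = [r + INCREASE for r in flows[user]]
    while sum(sum(rs) for rs in flows.values()) > CAPACITY:
        every = [(u, i) for u, rs in flows.items() for i in range(len(rs))]
        weights = [flows[u][i] for u, i in every]
        u, i = random.choices(every, weights=weights, k=1)[0]
        flows[u][i] /= 2                      # multiplicative decrease on a drop

for user, rates in flows.items():
    share = sum(rates)
    print(f"user {user}: {len(rates):3d} flows, aggregate ~{share:5.1f} "
          f"({share / CAPACITY:.0%} of the link)")

Under those assumptions the 100-flow user ends up with roughly ten times the 10-flow user's share, which is the imbalance being described.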

ezln23
@qwest.net

ezln23 to edam

Anon

to edam
P2P does not reduce traffic, because it enables the delivery of media that would otherwise cost too much money or be unavailable (music, movies, etc.). It may, in fact, be more efficient than delivering the same amount of data through a CDN, and it is certainly cheaper for the distributor of the media.
SuperWISP
join:2007-04-17
Laramie, WY

SuperWISP to swhx7

Member

to swhx7

Sorry, but espaeth is correct

It's necessarily much more expensive to deliver bandwidth to the end user via the last mile than it is to deliver it at a server farm. I know this because I'm out there every day -- on roofs, in users' homes, climbing radio towers -- to make that "last mile" link. Content providers should not be able to shift their bandwidth costs to ISPs, multiplying them in the process. See my testimony before the FCC at »www.brettglass.com/FCC/r ··· rks.html for a detailed explanation of why.

NetAdmin1
CCNA
join:2008-05-22

NetAdmin1

Member

said by SuperWISP:

Content providers should not be able to shift their bandwidth costs to ISPs, multiplying them in the process.
Nobody is magically shifting costs anywhere because all the costs are paid for by everyone connected to the network.
NetAdmin1

NetAdmin1 to a333

Member

to a333

Re: Is this a good thing for the net?

said by a333:

The throttling of particular packets by itself violates the principles of TCP.
No, it violates the End-to-end design principle. TCP is designed in such a way that it works VERY well with QoS schemes.
NetAdmin1

NetAdmin1 to SpaethCo

Member

to SpaethCo
said by SpaethCo:

You start a standard upload of that 400MB video to Grandma Ginny, walk away, and once your transfer finishes there is no more traffic on the network. Using a P2P application, on the other hand, will keep putting bits on the network for as long as you let the application run. Little Timmy queues up some MP3s to download in the morning before he goes to school -- even though the transfer will probably finish in the first 30-45 minutes, the P2P app will keep uploading to other P2P clients the entire time he's away at school, or even longer if he leaves the client running after he gets home.
I would like to point out that more BT clients are now setting defaults that eliminate this issue. Most BT clients will shut down the transfer once the user has reached an upload-to-download ratio greater than or equal to 1. Users have to manually change that option to seed indefinitely.
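A minimal sketch of that kind of default; the Torrent class, sizes, and checkpoints below are hypothetical, purely to illustrate the ratio-based stop rule:

from dataclasses import dataclass

# Hypothetical client-side stop rule: keep seeding only until uploaded bytes
# reach downloaded bytes times the target share ratio (1.0 by default).

@dataclass
class Torrent:
    downloaded: int            # bytes received
    uploaded: int = 0          # bytes sent to other peers so far
    target_ratio: float = 1.0  # stop seeding at upload/download >= this

    def keep_seeding(self) -> bool:
        return self.uploaded < self.downloaded * self.target_ratio

t = Torrent(downloaded=700 * 2**20)            # assume a 700 MB download
for uploaded_mb in (300, 300, 150):            # assumed upload progress checkpoints
    t.uploaded += uploaded_mb * 2**20
    print(f"ratio {t.uploaded / t.downloaded:.2f} -> keep seeding: {t.keep_seeding()}")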

SpaethCo
Digital Plumber
MVM
join:2001-04-21
Minneapolis, MN

SpaethCo

MVM

said by NetAdmin1:

I would like to point out that more BT clients are now setting defaults that eliminate this issue. Most BT clients will shut down the transfer once the user has reached an upload-to-download ratio greater than or equal to 1. Users have to manually change that option to seed indefinitely.
Which is great if people actually update their software. Considering the number of SQL slammer packets I still see hitting my firewall to this day, forgive me if I remain skeptical that this will make a difference anytime soon.

Bar Humbug2U
@ntl.com

Bar Humbug2U to SuperWISP

Anon

to SuperWISP

Re: Sorry, but espaeth is correct

said by SuperWISP:

It's necessarily much more expensive to deliver bandwidth to the end user via the last mile than it is to deliver it at a server farm. I know this because I'm out there every day -- on roofs, in users' homes, climbing radio towers -- to make that "last mile" link.

Content providers should not be able to shift their bandwidth costs to ISPs, multiplying them in the process. See my testimony before the FCC at »www.brettglass.com/FCC/r ··· rks.html for a detailed explanation of why.
Interesting. Taken from your text:

"I founded LARIAT -- a rural telecommunications cooperative -- to bring Internet to the community. I and other interested business owners started by borrowing a bit of bandwidth from the University to build a "proof of concept" network, and then transitioned to buying our own. At the time, a T1 line cost $6,000 a month, but we pooled our money and partnered with other providers to bring the connection into my office.

The problem, once we got it there, was how to divvy it up among all the people who were paying for it. The answer turned out to be the technology upon which I'd worked here at Stanford. We bought some of the NCR radio equipment and set up a metropolitan area network spanning downtown Laramie. As far as I or anyone else can tell, this made us the world's first WISP, or wireless Internet service provider.

Fast forward to 2003. The Internet was now well known, and the growing membership of LARIAT decided that rather than being members of a cooperative, they simply wanted to buy good Internet service from a responsible local provider. So, the Board prevailed upon me and my wife -- who had served as the caretakers of the network -- to take it private. We did, and have been running LARIAT as a small, commercial ISP ever since. But after all these years, our passion for bringing people good, economical Internet service hasn't changed. And nothing can beat the sense of achievement we feel when we hook up a rural customer who couldn't get broadband before we brought it to them -- or when we set up a customer who lives in town but has decided to "cut the cord" to the telephone company or cable company and go wireless with us. We make very little per customer; our net profit is between $2.50 and $5 per customer per month. But we're not doing this to get rich. We're doing this because we love to do it.
"

So how is it that you, with your roots in a "telecommunications cooperative" and as someone who borrowed free university bandwidth to build a "proof of concept" network, are not averse to taking others' bandwidth as long as it innovates for YOU, but YOU have NOT seen fit to TURN ON MULTICASTING to and from your paying end users today?

How is it you have not taken a very small amount of your current profits and returned it to the free initiatives -- paying a few £100 advances to the Java AZ/Vuze torrent coders and related free codebases to retrofit multicasting and a generic IP multicast tunnel for any and all P2P/torrent traffic to use TODAY -- advancing and innovating on what came before, and putting yourself in a position to save VAST amounts of local, national and intercontinental bandwidth?

Let's be clear on this: if it were not for the content providers (and that means everyone who uses and contributes to this and many other message boards, etc.), there wouldn't be anyone wanting to pay for a broadband connection in the first place. We end users create the demand, we create the content, and we pay the asking price for our connections worldwide....

Would you be happy for the likes of Vuze, BitTorrent, and the multitude of video P2P vendors that take our content to pay for server caches and related hardware space in every single ISP's racks? I'm sure you would welcome that.

But you're not willing to cache the unicast torrents inside your ISP and serve them to your paying last-mile customers if you have to buy the torrent-caching kit, or turn on multicasting and pay a coder to retrofit the required code on the cheap, or even set up a simple multicast tunnel on your web-side co-location racks for any non-US users to connect to.

"BBC's "iPlayer" P2P software is causing a similar effect. While the BBC is not a for-profit entity" -- true, but we end users in the UK have PAID for the production and delivery of that content, the very content your US end users want to see and are using their paid-for ISP connections to get.

The BBC has been, and still is, running multicast peering to ISPs that are willing to turn ON multicasting to and from the end users; alas, the world's ISPs don't want to give end users that multicast ability.

Simply put: the multicast video and torrent/P2P from the likes of the BBC are available, the working multicast codebases are there and available, and all the world's ISP routers and related kit have multicast capability as a generic feature, BUT YOU as the ISP owner have turned it all off to the end users. You're not out to innovate or to help the cooperatives of today and tomorrow; you're just looking after yourself, and you'll move on to other things if you can't make your cashflow THE EASY, antiquated, unicast IP way.

Shame -- show some innovation: take what's already available and help this really old and underused multicast grow. As the OWNER of the ISP, make multicast tunnels available for the UK BBC and its users on your local ISP network and related peered kit around the world, and tell us about it so we can use it......

NetAdmin1
CCNA
join:2008-05-22

NetAdmin1 to SpaethCo

Member

to SpaethCo

Re: Is this a good thing for the net?

said by SpaethCo:

Which is great if people actually update their software. Considering the number of SQL slammer packets I still see hitting my firewall to this day, forgive me if I remain skeptical that this will make a difference anytime soon.
There is a significant level of difference in the sophistication of the heavy BT user and the average user with an unpatched XP Home box at home. Heavy BT users are the types of people who tend to obsessively upgrade their software to be on the bleeding edge.

swhx7
Premium Member
join:2006-07-23
Elbonia

swhx7 to SuperWISP

Premium Member

to SuperWISP

Re: Sorry, but espaeth is correct

said by SuperWISP:

It's necessarily much more expensive to deliver bandwidth to the end user via the last mile than it is to deliver it at a server farm. I know this because I'm out there every day -- on roofs, in users' homes, climbing radio towers -- to make that "last mile" link. Content providers should not be able to shift their bandwidth costs to ISPs, multiplying them in the process. See my testimony before the FCC at »www.brettglass.com/FCC/r ··· rks.html for a detailed explanation of why.

Your whole presentation exemplifies exactly the fallacy I was pointing out. It amounts to "everyone must adapt to the existing bottleneck in the last mile, instead of the last mile being improved; if users make problems for ISPs' existing business model then users must be restricted instead of ISPs changing".

Just to single out one example, you claim to believe in network neutrality, but condone the ISPs' prohibitions of servers - precisely one of the most egregious violations of network neutrality.

SpaethCo
Digital Plumber
MVM
join:2001-04-21
Minneapolis, MN

SpaethCo to NetAdmin1

MVM

to NetAdmin1
said by NetAdmin1:

Nobody is magically shifting costs anywhere because all the costs are paid for by everyone connected to the network.
If your monthly subscription fee actually covered the cost of high bandwidth usage then there would be no reason for ISPs to waste time and money on network management.

It's like insurance -- you don't get $250,000 for $100/mo; you get coverage for very specific events with very specific exclusions. Once insurance companies have to start paying out too many claims they have to start adjusting premiums or start clamping down on coverage. We're seeing this same equilibrium being worked out in the ISP space now.