
devnuller

join:2006-06-10
Cambridge, MA


Is this a good thing for the net?

Is bypassing TCP congestion control a good thing for the users of the network? Why should one person's non-interactive file sharing, generating a dozen to a hundred streams, be more important than my interactive VoIP call or gaming experience?

Using it as a feature, maybe, but enabling this behavior by default is just wrong and will lead to a continuing cycle of countermeasures and counter-countermeasures, and to more justification for caps.


a333
A hot cup of integrals please

join:2007-06-12
Rego Park, NY

Yawn, here comes the typical argument... bandwidth is bandwidth, either way you look at it. All p2p does is open several simultaneous connections, splitting the user's bandwidth. Unless you horribly misconfigured your client to open up, say, 1000 ports. It's not as if the user is using any more bandwidth than if they were conducting a regular http download. P2P actually is better for a network, as (given enough peers) it completes downloads significantly faster than normal centralized server methods, thus getting heavy users off the network noticeably faster (obviously, unless the user is dumb enough to allocate their entire upstream bandwidth to seeding).
As to bypassing the "TCP congestion control" you speak of, do you think Bell's solution is ANY better? The throttling of particular packets by itself violates the principles of TCP. Not only that, it also throttles/cripples MANY legitimate applications, such as secure VPNs or other encrypted connections. Do you REALLY want that as an alternative to this so-called "problem" of P2P? I've said over and over, the ideal solution is to gracefully scale back speed for ANY upload/download if the user in question has been using their full bandwidth for more than 20 minutes during peak hours. That actually solves the problem, unlike throttling schemes like Bell's, which render many legitimate applications useless. Let's face it, even Comcast here in the States has been forced to take a long hard look at its policy on Sandvine. Soon enough, we can only hope Bell will as well...
Do I even support the above solution? By itself, absolutely NOT!! IMHO, the ideal solution is to upgrade the core and its routers. However, that takes time and capital that companies like Bell are rather unwilling to spend; they'd rather (ab)use their position in the limited Canadian ISP market to deploy band-aid solutions like throttling p2p.
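
Purely as an illustration of the "gracefully scale back sustained heavy users during peak hours" idea proposed above -- a minimal sketch, not any ISP's actual system; the peak window, thresholds, and 50% cut are assumptions:

    # Illustrative sketch of a protocol-agnostic scheme: watch each subscriber's
    # utilization, and if they have been at (or near) full line rate for more than
    # 20 minutes during peak hours, gracefully lower their ceiling instead of
    # singling out P2P packets.
    import time

    PEAK_HOURS = range(18, 24)        # assumed "peak" window, 6pm to midnight
    SUSTAINED_SECONDS = 20 * 60       # 20 minutes of sustained full-rate use
    NEAR_FULL = 0.95                  # fraction of line rate counted as "full"

    class Subscriber:
        def __init__(self, line_rate_bps):
            self.line_rate = line_rate_bps
            self.full_since = None     # when sustained full-rate use began

        def record_sample(self, bps_used, now=None):
            """Feed one utilization sample; return the rate cap to apply."""
            now = now if now is not None else time.time()
            in_peak = time.localtime(now).tm_hour in PEAK_HOURS
            if in_peak and bps_used >= NEAR_FULL * self.line_rate:
                if self.full_since is None:
                    self.full_since = now
                if now - self.full_since >= SUSTAINED_SECONDS:
                    # Gracefully scale back, regardless of protocol.
                    return int(self.line_rate * 0.5)
            else:
                self.full_since = None
            return self.line_rate      # otherwise, no cap beyond the line rate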

amigo_boy

join:2005-07-22
said by a333:

bandwidth is bandwidth, either way you look at it.
That's not true. Even p2p users employ QoS to give their interactive activity higher priority (slowing down their bit torrent, etc.).

I don't see anything wrong with the same principle applying at a higher (wider) level.

Mark

devnuller

join:2006-06-10
Cambridge, MA
reply to a333
said by a333:

I've said over an over, the ideal solution is to gracefully scale back speed for ANY upload/download if the said user is using their full bandwidth for more than 20 minutes during peak hours. This actually solves the problem, unlike throttling schemes like bell's, which render many legitimate applications useless.
Like this? »www.bloomberg.com/apps/news?pid=···NA18k1dY


a333
A hot cup of integrals please

join:2007-06-12
Rego Park, NY
Yep, Comcast's "protocol agnostic" approach is EXACTLY what I was talking about, which is why I mentioned them in my original post.
To amigo_boy: QoS can be enabled at many different levels. For instance, many Vonage routers let you slow down ALL your internet traffic in general to make sure your VoIP is clear/reliable. Many download managers let you put a cap on download speeds or on the times that stuff is downloaded. I don't see anything special about BitTorrent/P2P. What's your point?


espaeth
Digital Plumber
Premium,MVM
join:2001-04-21
Minneapolis, MN
kudos:2

reply to a333
said by a333:

bandwidth is bandwidth, either way you look at it.
That's not entirely true. It is several orders of magnitude more expensive to deliver bandwidth to residential homes than it is to provide bandwidth for servers in data centers. That's why shifting the distribution burden from data-center-hosted servers to end-user links is such a poor idea. The P2P architecture is one you arrive at when you develop an application in complete ignorance of the realities of the network infrastructure it will run on.

said by a333:

It's not as if the user is using any more bandwidth than if they were conducting a regular http download.
False. An HTTP transfer would be a straight download, with the only upstream packets being TCP ACKs. In a P2P environment you upload content to other members while you download -- you will use significantly more overall bandwidth grabbing your content even if you shut down your P2P client immediately after you have completed the download.

said by a333:

P2P actually is better for a network, as (given enough peers) it completes downloads significantly faster than normal centralized server methods, thus getting heavy users off the network noticeably faster
Again, false. Your P2P client will continue uploading to other members even after you have completed the download of your file. This seeding process will keep using upstream data capacity for as long as the client is running on your machine. Considering that technical hurdles make upstream capacity the most difficult to build out at the edge, an application designed to make constant use of upstream bandwidth is exceedingly bad.

said by a333:

The throttling of particular packets by itself violates the principles of TCP.
Throttling flows is not a technical violation. This is how QoS systems work; if you are giving priority to some packets then by definition you are also reducing the priority and delaying the delivery of other packets.
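
A minimal sketch of that last point: any strict-priority scheduler that favors one class of packets necessarily delays the other class. The class names below are illustrative, not taken from any particular QoS product.

    # Two-class strict-priority scheduler: serving "interactive" first means
    # "bulk" waits, by definition.
    from collections import deque

    class PriorityScheduler:
        def __init__(self):
            self.high = deque()   # e.g. VoIP / gaming packets
            self.low = deque()    # e.g. bulk transfer packets

        def enqueue(self, packet, interactive):
            (self.high if interactive else self.low).append(packet)

        def dequeue(self):
            """Serve high-priority traffic first; low-priority is delayed by definition."""
            if self.high:
                return self.high.popleft()
            if self.low:
                return self.low.popleft()
            return None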


a333
A hot cup of integrals please

join:2007-06-12
Rego Park, NY
First of all, you took my statement "bandwidth is bandwidth" COMPLETELY out of context. Nice try, though...
Next, exactly how do you think seeding works? The uploading you do during YOUR transfer in turn speeds up someone else's download of the SAME file, hence letting a lot of heavy users get their files faster, reducing strain on the network in general. How hard is it to get this? P2P software REDUCES congestion, and avoids the situations most HTTP downloads would just keep trying to hammer their way through. To distribute Blizzard patches to several million users simultaneously using the regular HTTP/unicast methods would require a port into the 'net that's the size of a national backbone.
And what is all this B.S. about upstream bandwidth? Unless you set your client to use 100% of your upstream bandwidth, and make it open up ~2000 ports, you are NOT causing any harm to the network, PERIOD. It's the same as if you had been uploading that 400 MB family reunion movie to Grandma Ginny. As I said, bandwidth is bandwidth. P2P doesn't magically make my available bandwidth a multiple of 10.

Overall, none of you network engineers/"experts" have given me a VALID reason to throttle P2P in PARTICULAR.

amigo_boy

join:2005-07-22
reply to a333
said by a333:

To amigo_boy, ... What's your point?
I was just responding to the assertion that all bandwidth is equal. That's not true. Even torrent users apply traffic shaping via QoS because they don't want their torrents disrupting their own interactive applications.

I see nothing wrong with applying the same common sense further upstream. Maybe ISPs aren't doing that in the best way. I don't know what they do, or what the alternatives are. But, just claiming that all bandwidth is equal is incorrect.

Mark


espaeth
Digital Plumber
Premium,MVM
join:2001-04-21
Minneapolis, MN
kudos:2

reply to a333
said by a333:

The uploading you do during YOUR transfer in turn speeds up someone else's download of the SAME file, hence letting a lot of heavy users get their files faster, reducing strain on the network in general.
The payoff of P2P only works if many people leave their connection seeding after their transfer completes. The fast downloads of a few require many others to contribute extra upstream capacity.

said by a333:

To distribute Blizzard patches to several million users simultaneously using the regular HTTP/unicast methods would require a port into the 'net that's the size of a national backbone.
It would require an intelligent method of distribution like using content delivery networks. Microsoft has more customers than Blizzard, and they have no problems deploying massive service packs and regular patches via HTTP transfers. Linux package managers like apt, yast, yum, or up2date also grab package updates via HTTP for the tens of millions of Linux boxes out there. Same deal with antivirus updates, or really the overwhelming majority of software updates.

Blizzard uses P2P for one key reason: cost. It moves the distribution burden and expense from them to you. In reality the WoW patches would deploy significantly faster if Blizzard were to "man up" and pay for CDN delivery.

said by a333:

And what is all this B.S. about upstream bandwidth? Unless you set your client to use 100% of your upstream bandwidth, and make it open up ~2000 ports, you are NOT causing any harm to the network, PERIOD. It's the same as if you had been uploading that 400 MB family reunion movie to Grandma Ginny.
Broadband bandwidth is oversubscribed. Your "idle" bits are intended to be someone else's "used" bits. The difference here is again finite vs infinite duration transfers. You start a standard upload of that 400MB video to Grandma Ginny, walk away, and once your transfer finishes there is no more traffic on the network. Using a P2P application, on the other hand, will keep putting bits on the network for as long as you let the application run. Little Timmy queues up some MP3s to download in the morning before he goes to school -- even though the transfer will probably finish in the first 30-45 minutes, the P2P app will keep uploading to other P2P clients the entire time he's away at school, or even longer if he leaves the client running after he gets home.
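
Back-of-the-envelope arithmetic for that finite-vs-open-ended point; the rates and durations below are assumptions chosen only to show the shape of the comparison, not measurements.

    UP_RATE_BPS = 512_000                      # assumed 512 kbit/s upstream

    # Finite transfer: 400 MB to Grandma Ginny, then the link goes quiet.
    video_bytes = 400 * 1024 * 1024
    finite_hours = video_bytes * 8 / UP_RATE_BPS / 3600
    print(f"400 MB upload occupies the upstream for ~{finite_hours:.1f} h, then stops")

    # Open-ended transfer: a torrent client seeding at half the upstream
    # for the 8 hours Little Timmy is at school.
    seed_hours = 8
    seed_bytes = 0.5 * UP_RATE_BPS / 8 * seed_hours * 3600
    print(f"Seeding all day pushes ~{seed_bytes / 1024**3:.1f} GB more upstream")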

The other issue is concurrence. With standard transfers you have normal human triggers that cause the load to be randomly distributed (i.e., the chances of you and your neighbor clicking a website button to trigger a large download at the same time are relatively small). Since the distribution from P2P is constant and automated, the chances of transfers from multiple P2P users all hitting the network at the same time are significantly greater.

amigo_boy

join:2005-07-22

reply to a333
said by a333:

letting a lot of heavy users get their files faster, reducing strain on the network in general.
That's illogical. If it were true that letting torrents run faster -- so they finish sooner and consume bandwidth for less time -- reduced strain, torrent users wouldn't use QoS to slow down their torrents for the benefit of their web browsing, DNS lookups and VoIP.

I agree that distributed serving reduces network load compared to the load of multiple people downloading from one server. But, if distributed loads facilitate data transfer that wouldn't have been feasible from one server (because the provider wouldn't pay for enough bandwidth to meet the demand), then it has the effect of creating "virtual" servers which are unfunded on networks that didn't bargain for providing that kind of bandwidth.

It's an interesting challenge. But, let's not be coy about what's happening.

Mark


rawgerz
The hell was that?
Premium
join:2004-10-03
Grove City, PA
reply to espaeth
said by espaeth:

Broadband bandwidth is oversubscribed. Your "idle" bits are intended to be someone else's "used" bits. The difference here is again finite vs infinite duration transfers. You start a standard upload of that 400MB video to Grandma Ginny, walk away, and once your transfer finishes there is no more traffic on the network. Using a P2P application, on the other hand, will keep putting bits on the network for as long as you let the application run. Little Timmy queues up some MP3s to download in the morning before he goes to school -- even though the transfer will probably finish in the first 30-45 minutes, the P2P app will keep uploading to other P2P clients the entire time he's away at school, or even longer if he leaves the client running after he gets home.
That's what QoS is for. Prioritize HTTP and keep P2P at the bottom. Everyone wins. Now, if everyone using P2P were using a heavily encrypted VPN, then it would be a problem. Or "unlimited" high-speed tiers that don't have the bandwidth to support all the clients. But that's not exactly the end user's fault.
--

You can't make all the people happy all of the time. But it should be common sense to shoot for the majority.


Sean8

join:2004-01-23
Toronto

reply to devnuller
I see you've been brainwashed by the anti-neutrality crowd. It's not even ABOUT which data is more important. It's about treating it all fairly.

All bandwidth should be equally distributed. If you are experiencing a slow down, then so should I, and vice versa.

Let me ask you, why should your data go faster than mine...? It shouldn't.

Perhaps the ISPs should give up some of their profits in order to maintain their business. Revenue is supposed to be re-invested.

amigo_boy

join:2005-07-22
said by Sean8:

All bandwidth should be equally distributed.
I'll believe that when torrent users stop using QoS to make their other, more interactive services usable.

Mark


dvd536
as Mr. Pink as they come
Premium
join:2001-04-27
Phoenix, AZ
kudos:4
reply to devnuller
said by devnuller:

Is bypassing TCP congestion control a good thing for the users of the network?
YES!
because if users can bypass the crap ISPs are doing to the service their subs are PAYING FOR, the ISPs might just have to throw some of those profits at their NETWORKS!
--
When I gez aju zavateh na nalechoo more new yonooz tonigh molinigh - Ken Lee


espaeth
Digital Plumber
Premium,MVM
join:2001-04-21
Minneapolis, MN
kudos:2
reply to Sean8
said by Sean8:

All bandwidth should be equally distributed. If you are experiencing a slow down, then so should I, and vice versa.

Let me ask you, why should your data go faster than mine...?
Quite simply: because not all applications are affected by latency in the same way. Latency due to congestion makes real-time applications like RDP, VoIP, or online gaming completely unusable. Injecting latency into a file transfer simply slows it down -- it doesn't break anything, it just takes longer to get your data.

Broken apps are a bigger deal than slow apps.
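
Rough numbers behind "broken vs. slow" -- the thresholds below are common rules of thumb, not measurements from any particular network.

    queue_delay_ms = 200          # assumed congestion-induced queueing delay

    # Real-time: one-way delay much beyond ~150 ms is where VoIP conversations
    # are generally considered to start breaking down.
    print("VoIP:", "broken" if queue_delay_ms > 150 else "usable")

    # Bulk: the same delay just stretches a 100 MB download slightly.
    file_bits, rate_bps = 100 * 8 * 1e6, 5e6
    print(f"Download: ~{file_bits / rate_bps:.0f} s either way, just a bit later")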


Combat Chuck
Too Many Cannibals
Premium
join:2001-11-29
Verona, PA

reply to devnuller
said by devnuller:

Is bypassing TCP congestion control a good thing for the users of the network?
I think you're misunderstanding just what TCP congestion control's purpose is. It exists primarily to keep TCP's reaction to unacknowledged packets (retransmission) from doubling the amount of bandwidth a particular stream is consuming when a router along the way starts dropping packets, which would make the situation worse. UDP doesn't care whether a packet is actually received, so it won't retransmit a packet.

TCP congestion control has little to nothing to do with bandwidth management. It's about making sure that a temporary reduction in actual bandwidth doesn't result in a permanent reduction of effective bandwidth because every TCP stream over the affected link keeps sending duplicate data.
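
A minimal sketch, assuming textbook AIMD behavior, of what that back-off looks like: slow down on loss so retransmissions don't pile more load onto an already-congested link.

    def aimd_window(loss_events, rtts=100, start=1.0, add=1.0, mult=0.5):
        """Return the congestion window after `rtts` round trips."""
        cwnd = start
        for rtt in range(rtts):
            if rtt in loss_events:
                cwnd = max(1.0, cwnd * mult)   # multiplicative decrease on loss
            else:
                cwnd += add                    # additive increase otherwise
        return cwnd

    # Example: three loss events over 100 round trips.
    print(aimd_window(loss_events={30, 60, 90}))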

--
The world’s elusive, remember
where love's the leaf
faith, the river
what's born as flame dies in ember
see for yourself!

Skippy25

join:2000-09-13
Hazelwood, MO
reply to amigo_boy
To the ISP, yes it should be, as they should be nothing but the dumb pipes they are, passing packets along the network.

The rest will work itself out with the laws of physics, congestion, and consumers' willingness to deal with it.

Kearnstd
Space Elf
Premium
join:2002-01-22
Mullica Hill, NJ
kudos:1
reply to devnuller
The problem is that ISPs saw the money portals were making and committed themselves to being portals as well. Now customers expect that portal, so there is no backing out to just a simple webmail page.
--
[65 Arcanist]Filan(High Elf) Zone: Broadband Reports


funchords
Hello
Premium,MVM
join:2001-03-11
Yarmouth Port, MA
kudos:6

reply to devnuller
said by devnuller:

Is bypassing TCP congestion control a good thing for the users of the network? Why should one persons non-interactive file sharing generating a dozen to a hundred streams be more important than my interactive VoIP call or gaming experience?
It's a very good thing for the network. This new protocol YIELDS to other streams. In other words, it's less aggressive. The idea, eventually, is that background file transfers are handled like -- well -- background transfers -- similar to the way that background processes take a lighter toll on the CPU while you're actively using the computer. P2P users have the same concerns -- this change keeps their interactive uses snappy, and during crunch time it ought to help others as well.

TCP's congestion control (actually, there are several styles, but let's pretend there's just one) is just a choice. There's nothing wrong with reproducing the same behavior in UDP -- or any other IP-based protocol.
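
A minimal sketch of how a delay-based, yielding transport of this general kind (LEDBAT-style) tends to behave -- illustrative constants, not the actual protocol's: it treats rising queueing delay as a sign that someone else wants the link and backs off before loss ever occurs.

    TARGET_QUEUE_MS = 100.0   # delay the background flow is willing to add
    GAIN = 1.0                # how aggressively to react, in packets per update

    def adjust_rate(cwnd, base_delay_ms, current_delay_ms):
        """One update step: shrink the window as queueing delay approaches target."""
        queuing_delay = current_delay_ms - base_delay_ms
        off_target = (TARGET_QUEUE_MS - queuing_delay) / TARGET_QUEUE_MS
        return max(1.0, cwnd + GAIN * off_target)   # grows when idle, yields when queued

    cwnd = 10.0
    for delay in (20, 40, 80, 160, 320):            # delay climbs as others compete
        cwnd = adjust_rate(cwnd, base_delay_ms=20, current_delay_ms=delay)
        print(f"measured delay {delay} ms -> cwnd {cwnd:.1f}")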
--
Robb Topolski -= funchords.com =- Hillsboro, Oregon
More features, more fun, Join BroadbandReports.com, it's free...


funchords
Hello
Premium,MVM
join:2001-03-11
Yarmouth Port, MA
kudos:6
reply to espaeth
Espaeth, FWIW, there's a reason that dedicated file-sharers flee to "private" (they're not really) "trackers" (they're mostly website-tracker hybrids), and it's because most people don't share all that constantly. So they sign up for these private sites (like sports leagues) to set and enforce some community rules about uploading at least as much as people download. That users do this, and that the motive is the sharing imbalance, is very clear.

If we lived in the world you're describing, the average up/down "ratio" would be 5:1 and private sites wouldn't exist. My guess is the average up/down ratio is 1:5. Yes, users upload longer than they download, but they also have asymmetric pipes. It takes 5-15x longer to upload the same amount as they download.
--
Robb Topolski -= funchords.com =- Hillsboro, Oregon
More features, more fun, Join BroadbandReports.com, it's free...

patcat88

join:2002-04-05
Jamaica, NY
kudos:1
reply to a333
said by a333:

Yawn, here comes the typical argument... bandwidth is bandwidth, either way you look at it. All p2p does is open several simultaneous connections, splitting the user's bandwidth. Unless you horribly misconfigured your client to open up, say, 1000 ports. It's not as if the user is using any more bandwidth than if they were conducting a regular http download.
Yes it is. Let's make an example. TCP equalizes the bandwidth equally for all TCP connections on the link at a single roadblock. User A and User B have 2 mbit connections to the DSLAM. The DSLAM has no one other than User A and User B. The DSLAM has a 3 mbit connection to the core router. User A is running P2P with 100 connections, User B is running an HTTP download with 10 connections. User A's speed will be 2 mbit (limited only by his DSL modem's 2 mbit speed), User B will be 1 mbit. Shouldn't it be 1.5 mbit for User A and 1.5 mbit for User B? It's not. If User A has a 3 mbit connection to the DSLAM (DSLAM to internet is still 3 mbit), his speed would be 2.72 mbit ((3/110)*100), and User B will be left with 0.27 mbit ((3/110)*10). It's the same reason download accelerators that make multiple connections to download an HTTP file work.
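
The same per-connection arithmetic, spelled out, under the simplifying assumption that the 3 mbit bottleneck splits evenly across all TCP connections:

    BOTTLENECK_MBIT = 3.0
    conns_a, conns_b = 100, 10
    per_conn = BOTTLENECK_MBIT / (conns_a + conns_b)

    user_a = per_conn * conns_a          # ~2.73 mbit if A's modem allows it
    user_b = per_conn * conns_b          # ~0.27 mbit left for B
    print(f"User A: {user_a:.2f} mbit, User B: {user_b:.2f} mbit")

    # With A capped at a 2 mbit modem, the unused share falls to B:
    user_a_capped = min(user_a, 2.0)
    user_b_capped = BOTTLENECK_MBIT - user_a_capped
    print(f"Capped: User A {user_a_capped:.2f} mbit, User B {user_b_capped:.2f} mbit")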

The speed caps (2 mbit in this case) don't extend past the DSLAM/CMTS. After that it's a single ethernet link, and the upstream routers do not see the original speed caps, or see IPs and the current traffic behind each IP, when deciding which traffic to toss at a congested point. The router will randomly drop traffic. The chance of any single TCP connection's packet being dropped is the same among all TCP packets at that congestion/dropping point. A dropped packet means TCP will slow down that connection. So the more connections a user has (assuming the destinations have infinite bandwidth), the smaller the impact any one drop has on the sum/pool of all of that user's connections.

You're acting as if every router has QoS support, can see your current utilization and every other user's utilization, and decides to split bandwidth equally among all users (not all connections). A router doesn't see users and DSL modems and cable modems; it sees a bunch of TCP connections with different source IPs. Only if the router sees each modem as a VPN tunnel (which is a single point-to-point connection) will your idea work.
said by a333:

P2P actually is better for a network, as (given enough peers) it completes downloads significantly faster than normal centralized server methods, thus getting heavy users off the network noticeably faster (obviously, unless the user is dumb enough to allocate their entire upstream bandwidth to seeding).
I'll be driving down to the yacht club in my Maybach to host my yacht party, laughing all the way, having thrown the datacenter/server/internet connection costs onto the users to swallow.

There is so much content on the internet that only a couple of people on your headend/DSLAM/central office/city will have the content you want. No current P2P system, and no realistic future P2P system, actively attempts to talk to local (same ASN/ISP/city/fewest hops) peers over distant peers. P4P is dead, since no open source coders have any incentive to help "the man" (ISPs). When Vuze and uTorrent come with ASN preference, that's when you're not BSing.

Users do allocate almost all their upstream to seeding; otherwise you get banned from your private tracker, so you seed 24/7. Nobody pumps HTTP traffic 24/7.
said by a333:

Do I even support the above solution? By itself, absolutely NOT!! IMHO, the ideal solution is to upgrade the core and its routers. However, that takes time and capital
If IPv6 is DOA, so are any upgrades to TCP/IP. I'm still waiting for Xcast (Explicit Multicast), or some system to let me receive or send multicast traffic in P2P. P2P traffic would drop enormously if consumers had access to multicast. My one upstream stream of sectors of the file can be duplicated to hundreds of peers with only one copy of the traffic on each ISP, and it can cross long-haul backbone fiber as one copy. The only problem is that if your download is too slow for my upstream stream, you will have to get "makeup" packets via conventional P2P from peers who did get my packets. The sectors the most users need (the rarest) get sent out first. So a torrent can be seeded to 1000 users in the time it takes the initial seed to upload it exactly once.
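
Rough fan-out arithmetic behind that multicast wish; the peer counts and sizes below are made-up illustrations. With unicast (or plain P2P), getting one copy to N peers costs roughly N copies of traffic somewhere; with multicast, each network ideally carries one copy.

    piece_mb = 700            # e.g. one file
    peers = 300
    isps = 10                 # assumed distinct networks the peers sit behind

    unicast_gb = piece_mb * peers / 1024
    multicast_gb = piece_mb * isps / 1024   # ideally ~one copy per network
    print(f"unicast-ish: ~{unicast_gb:.0f} GB moved, multicast: ~{multicast_gb:.0f} GB")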

patcat88

join:2002-04-05
Jamaica, NY
kudos:1
reply to a333
said by a333:

P2P software REDUCES congestion, and avoids the situations most HTTP downloads would just keep trying to hammer their way through. To distribute Blizzard patches to several million users simultaneously using the regular HTTP/unicast methods would require a port into the 'net that's the size of a national backbone.
Pay Limelight or Akamai like a proper company »www.akamai.com/html/customers/cu···ist.html .
Intelligent localized caching and distribution and redirection of clients to the closest server. Datacenters all over the world. Almost no transoceanic link usage by clients connecting to a CDN.


funchords
Hello
Premium,MVM
join:2001-03-11
Yarmouth Port, MA
kudos:6
reply to patcat88
said by patcat88:

said by a333:

Yawn, here comes the typical argument... bandwidth is bandwidth, either way you look at it. All p2p does is open several simultaneous connections, splitting the user's bandwidth. Unless you horribly misconfigured your client to open up, say, 1000 ports. It's not as if the user is using any more bandwidth than if they were conducting a regular http download.
Yes it is. Let's make an example. TCP equalizes the bandwidth equally for all TCP connections on the link at a single roadblock. User A and User B have 2 mbit connections to the DSLAM. The DSLAM has no one other than User A and User B. The DSLAM has a 3 mbit connection to the core router. User A is running P2P with 100 connections, User B is running an HTTP download with 10 connections. User A's speed will be 2 mbit (limited only by his DSL modem's 2 mbit speed), User B will be 1 mbit. Shouldn't it be 1.5 mbit for User A and 1.5 mbit for User B? It's not. If User A has a 3 mbit connection to the DSLAM (DSLAM to internet is still 3 mbit), his speed would be 2.72 mbit ((3/110)*100), and User B will be left with 0.27 mbit ((3/110)*10).
Nope, because as long as the 1 Mbps connection has upward headroom, it's going to keep creeping upward, and some of the resulting dropped packets at the 3 Mbps choke point will belong to the other user. This means that the equilibrium will continue to creep up until some balance is reached. (This thought experiment is easier if you think in terms of 1 and 10 connections instead of 10 and 100, but the outcome is the same.)

There is possibly a temporary unfairness, and that's because the 100-connection link will experience a less deep cut on a dropped packet than the 10-connection link will. But ultimately routers are stateless; they only know data packets, and balance will eventually be achieved such that A and B packets get dropped at about the same rate.

Besides, none of us have a 3 Mbps choke point. You need to apply that as well.

The relatively tiny broadband modem pipe is a good unfairness equalizer. If it weren't for that, then it actually might be more prone to work the way you describe.

said by patcat88:

It's the same reason download accelerators that make multiple connections to download an HTTP file work.
Nope. Download accelerators work because you're taking a larger share of the distant server's ports, each of which is allocated a share of the total bandwidth.

They would work the way you envision if we had 100 Mbps connections fighting for a smaller upstream pipe. But as it is we all have a small pipe competing in a larger one -- so it doesn't.
--
Robb Topolski -= funchords.com =- Hillsboro, Oregon
More features, more fun, Join BroadbandReports.com, it's free...

patcat88

join:2002-04-05
Jamaica, NY
kudos:1
said by funchords:

Nope, because as long as the 1 Mbps connection has upward headroom, it's going to take a creep at it and some of the resulting dropped packets at the 3 Mbps choke point will belong to the other user. This means that the equilibrium will continue to creep up until some balance was made. (This thought experiment is easier if you think in terms of 1 and 10 connections instead of 10 and 100, but the outcome is the same).
100 connections will all try creeping up, just the same as the 10 connections will, and the 100 connections will still have more bandwidth -- unless a TCP/IP stack is designed to think of all its connections as a whole when deciding whether to increase speed (which I don't think is possible, since the stack has no way of telling whether the congestion is on the local link or somewhere out on the internet, many routers away).
said by funchords:

There is possibly a temporary unfairness, and that's because the 100-connection link will experience a less deep cut on a dropped packet than the 10-connection link will. But ultimately routers are stateless; they only know data packets, and balance will eventually be achieved such that A and B packets get dropped at about the same rate.
You're talking about packets, I'm talking about connections. Packets A and B can be part of the same connection.
said by funchords:

Besides, none of us have a 3 Mbps choke point. You need to apply that as well.
It's an example; it makes less sense if I talk about User A through User ZZ and a 1 gigabit link from the CMTS to the core router, and a 100 gigabit backplane, and a 40 gigabit link to a peering center, then a 1 gigabit link to some Tier 1, a hand-off to a Tier 2 in the same datacenter, then a trip across the country on a leased MPLS OC-768 shared with a handful of Tier 2 ISPs -- where we find that at 6 PM most days of the week there is congestion -- or we can go further and say it then arrives on the other coast of the USA, goes into a Tier 1's datacenter, where it's split into its Tier 2's bonded 1 gigabit circuits, flies off to a colo datacenter through the Tier 2's router, then inside that datacenter goes to another floor and down a congested 100 mbit link to a hot new Web 2.0 video-sharing site that offers videos as torrents with 30 1U servers inside each torrent, or as an HTTP download from 1 server -- which is exactly where Users A through ZZ are all trying to connect. Let's hope no line cards are out of spec and causing congestion.

said by funchords:

Nope. Download accelerators work because you're taking a larger share of the distant server's ports, each of which is allocated a share of the total bandwidth.
That's true too. Same drowning-out effect as with TCP connections, except now for Apache threads.
said by funchords:

They would work the way you envision if we had 100 Mbps connections fighting for a smaller upstream pipe. But as it is we all have a small pipe competing in a larger one -- so it doesn't.
But our small pipes, summed up, are much larger than the "large" pipe we are all trying to get into (consumer ISP contention).

P2P is like an ISP with 75% of its customers botnet-infected and DDoSing YouTube off the net by drowning out legitimate connections -- except replace YouTube with a peering link.


funchords
Hello
Premium,MVM
join:2001-03-11
Yarmouth Port, MA
kudos:6
100 will try creeping up, and more connections will experience packet drops (more connections, but not a bigger proportion) -- but either way, routers don't understand connections; they just deal with packets, and when congestion hits, they drop in proportion.

If B is transmitting more data than A, then B will have more drops. We have to mentally turn the situation back into connections in order to predict what happens next.

As to your last sentence:

Client-server is not more legitimate than P2P (The Internet started as P2P), and well-moneyed companies don't deserve the only voice on the 'net.

You might like a receive-only network -- but we've had that before, we called it "Television."
--
Robb Topolski -= funchords.com =- Hillsboro, Oregon
More features, more fun, Join BroadbandReports.com, it's free...


edam

@btopenworld.com
reply to a333
said by a333:

P2P software REDUCES congestion, and avoids the situations most HTTP downloads would just keep trying to hammer their way through.
Haha ha!! You've obviously never managed a network, mate...


TC

@anheuser-busch.com
reply to devnuller
So... isn't this ultimately about service providers overselling their capacity? I mean, so what if I use all of the bandwidth I pay for? Is it my fault that my ISP can't handle what they sold me?

Just playing devil's advocate here, but I'm failing to see why all internet users should pay the price for ISPs trying to cut corners and milk as much money as possible out of their service. This is their burden to "fix", and by "fix" I mean actually provide the service they have sold to millions of users at a certain price point. If they cannot do that and intend to limit/restrict their advertised/sold service, then there will be lawsuits against them (in America at least, given our propensity for litigiousness), or at minimum they will be forced to lower their prices.

All the arguments about how much saturation a given protocol causes, connection "fairness", etc., are moot. This is an issue created by greedy companies and overzealous marketing.


iDNbitT247

@embarqhsd.net
reply to devnuller
Is this internet scare tactics? FACT: this fight will never end; it's a matter of history, so biz models are going to change or die.


swhx7
Premium
join:2006-07-23
Elbonia
reply to espaeth
said by espaeth:

It is several orders of magnitude more expensive to deliver bandwidth to residential homes than it is to provide bandwidth for servers in data centers. That's why shifting the distribution burden from data center hosted servers to end-user links is such a poor idea. The P2P architecture is one that you arrive at when you develop an application with complete ignorance to the realities of the network infrastructure it will run on.

... Considering that technical hurdles make upstream capacity the most difficult to build out at the edge, an application designed to make constant use of upstream bandwidth is exceedingly bad.

If A and B are both necessary conditions of a bad result, and both are voluntary human actions, then it is a fallacy to treat A as if it were an inevitable fact of nature and only B as a choice for which someone is responsible. To blame B instead of A or both, one needs an argument for B being a bad action and A a good one.

The slowness of residential links in USA, and their asymmetry, are both due to severe lack of competition in broadband markets. This in turn is due to national policies of granting right of way, local monopolies, and subsidies to telcos and cable companies, and permitting them to abuse customers outrageously, with minimal corresponding requirements or enforcement of mandates on behalf of the public.

On the other hand, p2p developers have merely coded for the internet as it was meant to be, and has the potential to be, as a network of peers not reliant on big commercial content providers. It is somewhat backwards to blame p2p for not capitulating to the distortions introduced by bad policies, rather than concluding that the policies have artificially made p2p into a problem.


atom_galaxy

@atomicity.org
reply to devnuller
Wouldn't this be a prime candidate to fix by properly using the TOS and priority packet fields? P2P apps should set priority to minimum, games and VoIP should set it to max, and there we are: low latency for apps that need it, in a protocol-defined, well-mannered way.

I'm asking because I honestly don't know; I hope one of you gurus can enlighten me as to why this is not a solution (because if it were, it would already have been done).
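
For reference, a sketch of what setting that field looks like from an application: the (former) TOS byte, now carrying the DSCP field, is a one-line socket option on platforms that expose IP_TOS. The catch, as the question implies, is that nothing forces routers along the path to honor it. The values below are standard DSCP code points shifted into the TOS byte.

    import socket

    EF = 0xB8        # Expedited Forwarding (DSCP 46 << 2): VoIP / game traffic
    CS1 = 0x20       # low-priority / background class (DSCP 8 << 2): bulk P2P traffic

    # Mark a UDP socket (e.g. VoIP) as expedited.
    voip_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    voip_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF)

    # Mark a TCP socket (e.g. bulk transfer) as background.
    bulk_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    bulk_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, CS1)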