a333
A hot cup of integrals please
join:2007-06-12
Rego Park, NY

Member

to SpaethCo

Re: Woo Hoo! Free capacity!

Bell's argument was that p2p users were somehow using up bandwidth way out of proportion to their numbers ("5% of their users using 90% of their bandwidth," if I remember correctly). Apparently that is not the case, given that Bell now sees fit to open its own online video store. They also seem to be fine with giving users free speed upgrades. Unless they have some new medium for getting video to their customers, one can only conclude that Bell isn't facing anywhere near the bandwidth crisis they're making it out to be...

And the use of the p2p protocol by itself does NOT bog down a network, unless you grossly misconfigure your client. This has been proven, and arguing over it is pointless given the number of debates that have already taken place about it. P2P does not magically give you 10x upload/download speeds; it just gets the file to you faster through its use of numerous simultaneous connections to different peers.
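As a minimal sketch of that last point (the 10 Mbps line rate and the per-peer rates below are invented numbers, not measurements), the aggregate throughput of a client talking to many peers is still capped by the subscriber's own access line:

```python
# Minimal sketch with made-up numbers: a torrent-style client pulling from many
# peers still cannot exceed the subscriber's own line rate.

LINE_RATE_MBPS = 10.0  # hypothetical downstream line rate
peer_offer_mbps = [1.2, 0.8, 2.5, 0.3, 4.0, 1.1, 0.9, 3.2]  # what each peer could send

def effective_throughput(offers, line_rate):
    """Sum what the peers offer, but never exceed the access line's capacity."""
    return min(sum(offers), line_rate)

print(effective_throughput(peer_offer_mbps, LINE_RATE_MBPS))  # -> 10.0, not 14.0
```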

And BTW, in case you didn't notice, Bell pissed off a LOT of customers with their failed throttling scheme, which also blocked pretty much anything encrypted, including VPN connections.

SpaethCo
Digital Plumber
MVM
join:2001-04-21
Minneapolis, MN

said by a333:

Bell's argument was that p2p users were somehow using up bandwidth way out of proportion to their numbers ("5% of their users using 90% of their bandwidth," if I remember correctly). Apparently that is not the case, given that Bell now sees fit to open its own online video store. They also seem to be fine with giving users free speed upgrades. Unless they have some new medium for getting video to their customers, one can only conclude that Bell isn't facing anywhere near the bandwidth crisis they're making it out to be...
At a high level P2P is different for one main reason: infinite-duration transfers. A P2P client's work is never done; it will continue seeding your content for as long as you allow it to run. When you download a movie over HTTP or another streaming protocol, once you have the movie you don't transfer anything more on the network. Your work is done, and the network capacity you were using goes back to the idle pool for everyone else to tap into.
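To make the fixed-versus-infinite duration distinction concrete, here is a toy model (all figures are invented for illustration, not taken from any real network) comparing how many flows stay active over a day when clients stop after a 20-minute download versus when they keep seeding indefinitely:

```python
import random

random.seed(1)
DAY_MIN = 24 * 60   # minutes in a day
USERS = 200         # assumed number of subscribers starting one transfer each
DOWNLOAD_MIN = 20   # assumed length of a one-off movie download

starts = [random.uniform(0, DAY_MIN) for _ in range(USERS)]

def concurrent_flows(t, seed_forever):
    """Count how many flows are active at minute t of the day."""
    active = 0
    for s in starts:
        if seed_forever:
            active += s <= t                     # the flow never ends
        else:
            active += s <= t < s + DOWNLOAD_MIN  # the flow ends when the download does
    return active

for label, forever in (("fixed-duration", False), ("seed forever", True)):
    avg = sum(concurrent_flows(t, forever) for t in range(DAY_MIN)) / DAY_MIN
    print(f"{label}: ~{avg:.1f} flows active on average")
```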

For fixed-duration transfers, increasing the access speed actually tends to do more good than harm. If I go to grab a Linux ISO and can download it in 7 minutes instead of 20-25, I am statistically less likely to be tying up shared capacity when someone else wants to start a large transfer of their own. Downloading it faster doesn't mean I'll download it a 2nd or 3rd time; it just means more idle time for the network after I get my stuff.
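A back-of-envelope Monte Carlo along the same lines (the one-hour window and the 7- versus 22-minute durations are assumptions chosen to match the example above) shows how much less likely two transfers are to collide when they finish faster:

```python
import random

random.seed(0)
WINDOW_MIN = 60   # both users start sometime in the same hour
TRIALS = 100_000

def overlap_probability(duration_min):
    """Estimate the chance that two same-length transfers overlap in time."""
    hits = 0
    for _ in range(TRIALS):
        a = random.uniform(0, WINDOW_MIN)
        b = random.uniform(0, WINDOW_MIN)
        # intervals [a, a+d) and [b, b+d) overlap iff the starts are closer than d
        if abs(a - b) < duration_min:
            hits += 1
    return hits / TRIALS

for d in (7, 22):
    print(f"{d}-minute download: ~{overlap_probability(d):.0%} chance of overlapping")
```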
said by a333:

And the use of the p2p protocol by itself does NOT bog down a network, unless you grossly misconfigure your client.
Again, it's not about speed, it's about duration.
said by a333:

And BTW, in case you didn't notice, Bell pissed off a LOT of customers with their failed throttling scheme, which also blocked pretty much anything encrypted, including VPN connections.
I agree the approach is poor, just like I think firing tear gas into a crowd is a horrible approach. At the same time, though, in both cases there are situations where it needs to be done. There are certain places where it is impossible to quickly build your way out of congestion -- ultimately adding capacity is the only long-term solution, but when things start hitting the wall there needs to be some action in the here and now. At a high level it's easy to decide priority, because while latency due to congestion slows down file transfers, latency due to congestion breaks real-time applications. Clearly broken is worse than slow, and if we could get everybody to agree on that, we might be able to avoid the arms-race crap between P2P developers building in better encryption and stealth and DPI engineers trying to stay one step ahead to keep the traffic manageable.
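A minimal sketch of the priority argument, assuming a simple two-queue model (the queue names and packet labels are invented for illustration): under congestion, latency-sensitive packets are served first, since delay merely slows a bulk transfer but breaks a real-time application.

```python
from collections import deque

realtime_q = deque()  # e.g. VoIP or gaming packets, latency-sensitive
bulk_q = deque()      # e.g. file-transfer / P2P packets, latency-tolerant

def enqueue(packet, is_realtime):
    (realtime_q if is_realtime else bulk_q).append(packet)

def dequeue_next():
    """Strict priority: drain real-time traffic first, then bulk."""
    if realtime_q:
        return realtime_q.popleft()
    if bulk_q:
        return bulk_q.popleft()
    return None

enqueue("p2p-chunk-1", is_realtime=False)
enqueue("voip-frame-1", is_realtime=True)
enqueue("p2p-chunk-2", is_realtime=False)
print([dequeue_next() for _ in range(3)])
# -> ['voip-frame-1', 'p2p-chunk-1', 'p2p-chunk-2']
```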
Lazlow
join:2006-08-07
Saint Louis, MO

Member

Ok, this reply goes towards the paragraph or two you added (edited into your post) after my earlier reply.

A lot of those people had no experience with the system before they bought those houses and may have honestly thought that was just the way things worked. But I agree they (the borrowers) should have known better. My point is not about the borrowers but the lenders. The lenders got themselves into trouble by making loans that they knew were far too risky. But greed drove them. High-risk loans have the greatest returns (assuming a significant number of them do not default, unlike our current situation). The reason we had to bail them (the lenders) out was because if they went under they would take the entire financial system down with them. This is "smart" business at work, instead of honest business. There were a lot of financial institutions out there that did not make these kinds of "smart" loans, and they are in much better shape than the ones that did (at least before the bailout). These are the same types of "smart" business practices that a lot of the ISPs are engaging in.

"They want to introduce you to a product at a low introductory rate so you will stay a customer going forward. This sales tactic is used universally from street drug dealers offering a free "taste" to selling game station consoles for less than the hardware manufacturing costs with the hopes of making it up in game sales after the fact. It's called a loss leader for a reason."

That would be true IF they were only offering it to new customers or to old customers signing up for new products, but they are not (which I mentioned above). What you have described is what they USED to do, which I agree was/is a great idea. They currently offer some (most?) existing customers the lower rates for the same services they are already subscribed to. It is common enough that they even have a name for it (deal hopping? something-hopping at any rate; it's late, zzzz). This relatively new behavior (a couple of years old at most?) does not fit the loss-leader theory (at least as I understand it). Many people are able to jump from one special to the next as each expires. If the ISPs were not making money at the special rates they would not allow this; they could not afford to. Since they have been doing this for quite a while and continue to do so, the only logical conclusion one can reach is that they are in fact making money even at these special rates.

a333
A hot cup of integrals please
join:2007-06-12
Rego Park, NY

Member

to SpaethCo
I think we're essentially agreeing that p2p is a reasonably efficient protocol. As to seeding, it's only the users who allocate 100% of their upload to their p2p client that end up causing network slowdowns for the rest of the subscribers. As to the part about faster being better, that's EXACTLY what is better about p2p: it gets users off the network faster.
My complaint is that Bell discriminates against p2p in PARTICULAR, even though it's just a PROTOCOL... if capacity is the problem, they should throttle EVERYTHING, not just p2p. IMHO, the ideal throttling technique would be to gradually scale back anyone using their full bandwidth for more than, say, 15 minutes during peak hours, along the lines of the sketch at the end of this post (of course, no throttling plus capacity upgrades would be the best solution, but that's a different topic).
What I am getting at here is that p2p just uses BANDWIDTH. Bandwidth is bandwidth, regardless of the protocol using it; i.e., Grandma Ginny checking e-mail and Joe the Downloader torrenting his pr0n look the same at the network level, unless Joe's p2p client somehow manages to open 5000 ports or something like that... Therefore, bandwidth should be treated indiscriminately, the way Comcast is experimenting with over here in the States with their "protocol agnostic" approach (at least, let's HOPE it works).
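Here is a rough sketch of the protocol-agnostic throttle described above; the peak window, the 15-minute allowance, and the scaling factors are made-up illustration values, not anything Bell or Comcast actually uses:

```python
PEAK_HOURS = range(18, 23)   # assumed 6pm-11pm peak window
SUSTAINED_LIMIT_MIN = 15     # minutes at full rate before scaling back
SCALE_STEP = 0.9             # shave 10% per additional minute over the limit
FLOOR = 0.5                  # never throttle below half the provisioned rate

def allowed_fraction(hour, minutes_at_full_rate):
    """Fraction of the provisioned rate a subscriber may use right now,
    based only on sustained usage, never on the protocol in use."""
    if hour not in PEAK_HOURS or minutes_at_full_rate <= SUSTAINED_LIMIT_MIN:
        return 1.0
    over = minutes_at_full_rate - SUSTAINED_LIMIT_MIN
    return max(FLOOR, SCALE_STEP ** over)

print(allowed_fraction(9, 120))   # heavy use off-peak -> 1.0, untouched
print(allowed_fraction(20, 10))   # within the 15-minute allowance -> 1.0
print(allowed_fraction(20, 25))   # 10 minutes over during peak -> clamped to the 0.5 floor
```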