said by a333:Bell's argument was that p2p users were somehow using up bandwidth that was way out of proportion for their population ("5% of their users using 90% of their bandwidth", if I remember correctly). Apparently, that is not the case, given the fact that now Bell sees fit to open their own online video store. Also, they seem to be fine with giving users free speed upgrades. Unless they have some new medium to get videos to their customers, one would only say that Bell isn't facing nearly the bandwidth crisis they're making it out to be...
At a high level, P2P is different for one main reason: infinite-duration transfers. A P2P client's work is never done; it will keep seeding your content for as long as you let it run. When you download a movie over HTTP or another streaming protocol, once you have the movie you stop transferring on the network. Your work is done, and the capacity you were using goes back to the idle pool for everyone else to tap into.
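To make the distinction concrete, here's a toy back-of-the-envelope sketch (my own illustration, not anything from a real client or from Bell): a fixed-duration transfer occupies the link for a bounded window, while a seeder left running claims capacity for the whole horizon no matter how small the file was.

```python
# Toy model: how many seconds each transfer keeps some link capacity claimed.
# All numbers are made up for illustration.

def link_busy_seconds(transfers, horizon_s):
    """Sum the seconds each transfer holds capacity within the horizon."""
    total = 0
    for kind, duration_s in transfers:
        if kind == "http":
            total += min(duration_s, horizon_s)  # finishes, then goes idle
        else:  # "p2p" seeder
            total += horizon_s                   # busy until the user stops it
    return total

# A 700 MB ISO at 5 Mbit/s is on the wire ~1120 s (700 * 8 / 5);
# a seeder left running claims capacity for the entire day regardless.
day = 24 * 3600
print(link_busy_seconds([("http", 1120), ("p2p", 1120)], day))
```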
For fixed-duration transfers, increasing the access speed actually tends to do more good than harm. If I go to grab a Linux ISO and can download it in 7 minutes instead of 20-25, I'm statistically less likely to be tying up shared capacity when someone else wants to make a large transfer. Downloading faster doesn't mean I'll download the ISO a second or third time; it just means more idle time for the network after I get my stuff.
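A rough way to see this is Little's law: the average number of concurrent transfers equals the arrival rate times the transfer duration. The arrival rate below is a made-up number purely for illustration:

```python
# Little's law (L = lambda * W): average concurrent transfers equals
# arrival rate times transfer time. Hypothetical rate, for illustration only.

arrivals_per_min = 0.5  # assume one new big download starts every 2 minutes

for minutes in (22.5, 7):  # ~20-25 min vs 7 min, as in the post
    concurrent = arrivals_per_min * minutes
    print(f"{minutes:>5} min per transfer -> ~{concurrent:.1f} active at once")
```

Same demand, same files; shorter transfers just mean fewer of them overlapping at any given moment.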
said by a333:And the use of the p2p protocol by itself does NOT bog down a network, unless you grossly misconfigure your client.
Again, it's not about speed, it's about duration.
said by a333:And BTW, in case you didn't notice, Bell pissed off a LOT of customers by their failure of a throttling scheme, as it also blocked pretty much anything encrypted, including VPN connections.
I agree the approach is poor, just like I think firing tear gas into a crowd is a horrible approach. At the same time, for both there are situations where it needs to be done. In certain places it's impossible to quickly build your way out of congestion; ultimately adding capacity is the only long-term solution, but when things start hitting the wall there needs to be some action in the here and now. At a high level the priority call is easy: congestion-induced latency slows down file transfers, but it breaks real-time applications. Clearly broken is worse than slow, and if we could get everybody to agree on that, we might be able to avoid the arms-race crap between P2P developers building in better encryption and stealth and DPI engineers trying to stay one step ahead to keep the traffic manageable.
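For what it's worth, the priority idea itself is simple enough to sketch. This toy two-class queue (my own example, not any vendor's implementation) just drains real-time packets before bulk ones, so congestion adds delay to file transfers instead of breaking VoIP or gaming:

```python
import heapq

REALTIME, BULK = 0, 1  # lower class number gets served first

queue = []
seq = 0  # tie-breaker preserves FIFO order within a class

def enqueue(packet, traffic_class):
    """Queue a packet under its traffic class."""
    global seq
    heapq.heappush(queue, (traffic_class, seq, packet))
    seq += 1

enqueue("voip-frame-1", REALTIME)
enqueue("torrent-piece", BULK)
enqueue("voip-frame-2", REALTIME)

while queue:
    _, _, pkt = heapq.heappop(queue)
    print(pkt)  # both voip frames drain first; the bulk piece waits out the burst
```

The catch, of course, is that a scheduler like this needs to know which class a packet belongs to, which is exactly what encryption and stealth take away and exactly what keeps the arms race going.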