Re: Why Data Caps Suck
Although the correlation between usage and bandwidth is not quite 1:1, pushing roughly twice as much data through a network that is approaching its peak capacity during a given period of interest (ex.: peak hours) does mean having to roughly double the infrastructure. Both peak-hours usage and peak-hours bandwidth are increasing much faster than equipment $/Gbps is decreasing.
While none of it justifies charging over $0.20/GB for usage, there is still a kernel of truth to carriers' worries about how fast peak-hour data is growing: at current rates (60%/year peak usage growth vs. 25%/year $/Gbps equipment cost reduction), it may genuinely become both economically and technologically impossible to keep up with demand.
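To put rough numbers on that gap (a back-of-envelope sketch using only the two growth rates above, not actual carrier data):

# Back-of-envelope: peak demand grows 60%/yr, equipment $/Gbps falls 25%/yr.
# Net cost of serving peak demand therefore grows by 1.60 * 0.75 = 1.20 per year.
demand_growth = 1.60
cost_decline = 0.75
cost = 1.0                      # normalized cost of serving today's peak load
for year in range(1, 6):
    cost *= demand_growth * cost_decline
    print(f"year {year}: relative cost to keep up = {cost:.2f}x")
# After 5 years the same peak-hour service costs roughly 2.5x as much to deliver.

So even with equipment getting cheaper every year, the carrier's bill for keeping up with peak demand compounds at about 20%/year under those rates.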
As far as encouraging subscribers to select slower speeds goes, that seems pretty silly, since most ISPs have plenty of spare capacity for people to use higher speeds during off-peak hours and likely through a good chunk of peak hours as well. The only real problem here is people's expectation of dedicated-like performance all day, every day. Maintaining that illusion of dedicated-like service to head off customer complaints means ISPs do have to over-build considerably compared to what the average/typical load would dictate. In HK and other exceptionally well-connected cities, it isn't uncommon for an ISP to bring 1-2Gbps to a large MDU complex and distribute it to 1000+ nearby subscribers at 100-1000Mbps speeds. That works out to 1-2Mbps of provisioning per subscriber, yet it still yields 30-70Mbps actual speeds most of the time since few of the subscribers sharing that link are ever active at the same time.
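A quick statistical-multiplexing sketch shows how 1-2Mbps of provisioning can still feel like 30-70Mbps; the 2-5% concurrency figures are my own assumption for illustration, not measured numbers:

# Statistical multiplexing on a shared MDU uplink (illustrative numbers only).
link_gbps = 1.5                               # shared uplink to the building
subscribers = 1000
for active_fraction in (0.02, 0.03, 0.05):    # assumed share of subs busy at once
    active = subscribers * active_fraction
    per_user_mbps = link_gbps * 1000 / active
    print(f"{active_fraction:.0%} active -> ~{per_user_mbps:.0f} Mbps each")
# 2% active: ~75 Mbps, 3%: ~50 Mbps, 5%: ~30 Mbps -- roughly the 30-70Mbps range.

The per-subscriber experience only degrades when the fraction of simultaneously active users climbs well above what the link was planned around.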
People keep painting oversubscription as the most evil thing ever, but it is what makes cheap broadband possible, since it would be mathematically/physically impossible to build a non-blocking network of significant scale anyway - you quickly end up wasting most of the ports on your very expensive switches/routers on interconnect rather than on providing actual service to customers. Ex.: if you want to build a 32-port non-blocking switch out of a bunch of 16-port switches, you need 6 switches, and the equivalent of four whole switches gets consumed in interconnects just to double capacity. The fundamental principle of non-blocking networking simply does not scale well beyond a single system... 6X the complexity/cost/space/power for 2X the capacity.
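The port math behind that 6-switch figure, sketched as a standard two-stage folded-Clos (leaf/spine) build where each 16-port switch splits its ports half down, half up:

# Non-blocking 32-port fabric from 16-port switches (folded Clos / leaf-spine).
ports_per_switch = 16
leaf_down = ports_per_switch // 2        # 8 customer-facing ports per leaf
leaf_up = ports_per_switch // 2          # 8 uplinks per leaf (1:1, non-blocking)
leaves = 32 // leaf_down                 # 4 leaf switches for 32 customer ports
spines = (leaves * leaf_up) // ports_per_switch   # 2 spines to terminate 32 uplinks
total_switches = leaves + spines                  # 6 switches in total
interconnect_ports = leaves * leaf_up * 2         # 64 ports (both ends of each uplink)
print(total_switches, interconnect_ports)         # 6 switches, 64 of 96 ports

Of the 96 ports you paid for, only 32 face customers; the other 64 (four switches' worth) exist purely to keep the fabric non-blocking, which is exactly why nobody builds access networks that way.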