You might recall that back in 2009, we mentioned a piece claiming that the "bandwidth hog" -- a term used ceaselessly by industry executives to justify rate hikes, net neutrality infractions, and pretty much everything else -- was a myth. The piece was penned by Yankee Group analyst Benoit Felten and Herman Wagter, who knows a little something about consumption, as he's the man largely responsible for Amsterdam's FTTH efforts. The problem wasn't bandwidth hogs, argued Wagter; the problem was poorly designed networks built by people more interested in cutting corners than in offering a quality product.
To help prove the bandwidth hog was a mythical creature, Wagter and Felten put out a request for raw data from any ISP that would like to participate -- or that wanted to contest their argument. They got only one substantive volunteer, an anonymous "mid-size company from North America" selling DSL, but they've now issued a new report after thoroughly parsing the data received.
In a blog post, Felten notes that the pair took real user data for all customers connected to a single aggregation link and analyzed the network statistics on data consumption -- in five-minute increments -- over a whole day. What they found is that capping ISPs often don't really understand customer usage patterns, and are confusing data consumption (how much data was downloaded over a whole period) with bandwidth usage (how much bandwidth capacity was used at any given point in time).
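That distinction matters because a data cap meters the first quantity while congestion is driven by the second. A minimal sketch of the two measurements, using invented per-window numbers (nothing here comes from the report's actual dataset):

```python
# Hypothetical illustration of consumption vs. bandwidth usage.
# samples[user] = bytes transferred in each 5-minute window (288 windows/day).
samples = {
    "user_a": [10_000_000] * 288,          # steady trickle all day long
    "user_b": [0] * 287 + [500_000_000],   # idle, then one short burst
}

for user, windows in samples.items():
    consumption = sum(windows)                  # what a data cap measures
    peak_mb_s = max(windows) / 300 / 1e6        # busiest window, in MB/s
    print(user, f"{consumption / 1e9:.2f} GB total, {peak_mb_s:.2f} MB/s peak")
```

In this toy case the steady user consumes roughly six times as much data over the day, while the bursty user demands roughly fifty times more capacity in his busiest window -- exactly the two quantities the report says ISPs conflate.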
That data runs in stark contrast to many of the claims put out by some familiar, larger ISPs when justifying caps and overages. Among the pair's findings: the top 1% of data consumers (which they call "very heavy consumers," instead of the already adversarial "hog") account for 20% of overall consumption. Daily data consumption for these "very heavy consumers" was 9.6 GB, while the average for all users was 290 MB -- roughly equating to monthly consumption of 288 GB and 8.7 GB, respectively.
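The monthly figures follow from simple extrapolation of the daily numbers, assuming a 30-day month:

```python
# Extrapolate the article's daily figures to a 30-day month.
heavy_daily_gb = 9.6    # top 1% "very heavy consumers"
avg_daily_mb = 290      # average across all users

heavy_monthly_gb = round(heavy_daily_gb * 30)        # -> 288 GB/month
avg_monthly_gb = round(avg_daily_mb * 30 / 1000, 1)  # -> 8.7 GB/month
print(heavy_monthly_gb, avg_monthly_gb)
```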
Data caps as currently implemented may act as deterrents for all users at all times, but can also spur customers to look for fairer offerings in competitive markets.
Looking deeper into the data, they also found that about 61% of very heavy data consumers download 95% of the time or more, but only 5% of those who download at least 95% of the time are very heavy data consumers. And while 83% of very heavy data consumers are among the top 1% of bandwidth users during at least one five-minute window at peak hours, they represent only 14.3% of that top 1% at those times.
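Those percentages are intersections between two differently ranked populations: users ranked by total data consumed, and users ranked by their single biggest five-minute spike. A hypothetical sketch of that computation -- the traffic model here is invented, so the resulting percentage will not match the report's:

```python
import random

random.seed(0)

# Invented traffic: 1,000 users, each with 288 five-minute windows of bytes.
users = {}
for uid in range(1000):
    daily_total = random.expovariate(1 / 50_000_000)   # skewed, like real usage
    users[uid] = [random.expovariate(288 / (daily_total + 1))
                  for _ in range(288)]

def top_one_percent(score):
    """IDs of the 1% of users ranked highest by the given score function."""
    ranked = sorted(users, key=score, reverse=True)
    return set(ranked[: len(users) // 100])

heavy_consumers = top_one_percent(lambda u: sum(users[u]))  # most data per day
peak_users = top_one_percent(lambda u: max(users[u]))       # biggest 5-min spike

share = len(heavy_consumers & peak_users) / len(peak_users)
print(f"{share:.0%} of the peak top 1% are also very heavy consumers")
```

The report's finding is that these two sets overlap far less than cap rhetoric assumes: being a heavy consumer over a day says little about being a heavy bandwidth user in any given window.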
Confused yet? To simplify, one of our readers puts the dreaded highway metaphor, often used by ISPs to justify caps, to work in the opposite direction:
1% of vehicle drivers on the road travel a disproportionate amount of miles compared to the average driver. But they are on the road all the time. Most of the time they are on the road there is no rush hour congestion. The heavy drivers are likely to be involved in rush hour traffic jams, but only represent a small, not terribly relevant, fraction of total drivers in the traffic jam. Limiting the amount of miles a driver can drive does nothing to widen the roads and little to keep people off the roads during traffic jams, and thus does not help with congestion.
The researchers themselves note that blunt caps simply may not work, and that they punish users who aren't really causing any network problems:
Assuming that if disruptive users exist (which, as mentioned above we could not prove) they would be amongst those that populate the top 1% of bandwidth users during peak periods. To test this theory, we crossed that population with users that are over cap (simulating AT&T’s established data caps) and found out that only 78% of customers over cap are amongst the top 1%, which means that one fifth of customers being punished by the data cap policy cannot possibly be considered to be disruptive (even assuming that the remaining four fifths are).
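The cross-check the researchers describe is straightforward to reproduce in outline: intersect the set of over-cap users with the set of peak-period top-1% bandwidth users. A toy sketch, where all user IDs and consumption figures are made up and 150 GB stands in for an AT&T-style DSL cap:

```python
# Toy reproduction of the report's cross-check: which over-cap users
# actually show up in the top 1% of peak-period bandwidth? All user IDs,
# consumption figures, and set memberships below are invented.

CAP_GB = 150  # stand-in for an AT&T-style DSL cap

monthly_gb = {1: 300, 2: 180, 3: 40, 4: 200, 5: 90}  # GB consumed per user
top1_peak = {1, 2, 5}  # users in the peak-period top 1% of bandwidth

over_cap = {u for u, gb in monthly_gb.items() if gb > CAP_GB}
capped_but_harmless = over_cap - top1_peak  # punished, yet absent at peak

share = len(over_cap & top1_peak) / len(over_cap)
print(f"{share:.0%} of over-cap users appear in the peak top 1%")
print("capped despite no peak footprint:", sorted(capped_but_harmless))
```

In this toy population, user 4 blows through the cap without ever appearing in a peak window -- the same kind of customer the report says makes up roughly a fifth of those punished by AT&T-style caps.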
Data caps, therefore, are a very crude and unfair tool when it comes to targeting potentially disruptive users. The correlation between real-time bandwidth usage and data downloaded over time is weak and the net cast by data caps captures users that cannot possibly be responsible for congestion. Furthermore, many users who are "as guilty" as the ones who are over cap (again, if there is such a thing as a disruptive user) are not captured by that same net.
Felten concludes that ISPs themselves need to better understand the difference between data consumption and bandwidth usage, or face driving their customers to more reasonable competitors. That's assuming consumers have a choice, given that caps exist in many markets largely because there is no competition.
There are several things Felten doesn't say, but which anyone who has closely watched this debate will likely have concluded on their own. One is that by vilifying the use of your product, you've already created an adversarial relationship with your customer -- one that might already be strained if you don't provide very good service at a good value in the first place.
It would also be naive to assume many of the larger ISPs -- stocked with number crunchers and network analysts -- don't already know everything Felten stated. However, there's a reason ISPs don't like bandying real, raw data about: a few large carriers like to use bogus science to justify anti-consumer behavior, most recently with AT&T's announcement of caps and overages for DSL and U-Verse users. When asked to prove that these caps and overages were necessary, AT&T couldn't -- something ignored by most general tech press coverage of the move.
As we've noted repeatedly, most carriers impose caps and overages claiming it's due to either network congestion or financial necessity. In reality, caps and overages are implemented by carriers that simply want to jack up the cost of bandwidth so they can protect TV revenues from Internet video by making Internet video more costly and less appealing. The financial "necessity" of moving away from flat-rate pricing is proven false quarterly by earnings reports. Chicken Little congestion claims (see the bogus Exaflood) are proven false by poking around real data -- which shows congestion and bandwidth hogs are not quite the bogeymen they're portrayed to be.