|reply to InvalidError |
Re: FibreMedia.ca - Residential Gigabit Internet
I don't normally comment on other ISP threads, but I've been through this several times, so I thought my experience would be relevant.
The issue isn't the average user. Say the average user draws 1Mbps at peak, for easy math. The problem with higher maximum speeds is how much more than the average a single customer can impact costs: with a 20Mbps connection, the worst a single customer can do is 20x the average, whereas with 1Gbps that same single customer can do 1000x the average. Put even more simply, a single user can cost you 1000x the average that your pricing model is based on.
This is why unlimited plans on higher speed packages become much more problematic/risky. It only takes 0.1% of the customer base maxing out at peak to double the overall average capacity cost. I.e. 1000 users @ 1Mbps = 1000Mbps, but 999 users @ 1Mbps + 1 user @ 1000Mbps = 1999Mbps, so that one user (0.1% of the customer base) can double the average cost for everyone else. If 1% of the customers use it fully at peak, it's 990 users @ 1Mbps + 10 users @ 1000Mbps = 10990Mbps, increasing the average cost by 11x for the other 99%.
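The arithmetic in those two scenarios, spelled out as a quick sanity check (same illustrative 1Mbps average and 1Gbps maximum as above):

```python
# Peak-hour capacity needed for 1000 subscribers averaging 1 Mbps each.
users = 1000
avg_mbps = 1

baseline = users * avg_mbps                      # 1000 Mbps

# 0.1% of the base (1 user) saturating a 1 Gbps link at peak:
one_heavy = (users - 1) * avg_mbps + 1 * 1000    # 1999 Mbps, ~2x baseline

# 1% of the base (10 users) doing the same:
ten_heavy = (users - 10) * avg_mbps + 10 * 1000  # 10990 Mbps, ~11x baseline

print(baseline, one_heavy, ten_heavy)
```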
I certainly won't comment on what appropriate shaping/triggers are for any given ISP, but I can't imagine anyone arguing that $89 for unlimited, guaranteed-1Gbps-speed-at-all-times is really possible at this time; if it is, then I'm severely overpaying for my transit. Overall I suspect their package will do quite well, and kudos to them for the work they are doing. As long as a company is upfront about its policies and limits, people can make an informed decision on whether that product makes sense for them, and just because it doesn't work for every person doesn't mean it won't work for most.
edit: For what it's worth, I completely agree with the comment about "as long as usage remains generally intermittent": yes, in that case 30Mbps vs 1000Mbps for the typical average user does not make a significant cost difference. The issue is that as speeds increase, the atypical sustained-usage user gains the ability to individually skew the numbers by a large factor.
|reply to SkyNetCanada |
Alright, alright, I give up!! LOL
My one and only intention in this thread was to argue for raising the throttled-down speed above 10Mbps.
I totally understand everything that everyone is saying and agree for the most part (which is a big step forward for me compared to where I used to be; thanks to rocca, paul, mark and others for helping me understand more over the last few months).
All I am saying is that I don't believe such a drastic drop in speed is right. At least drop it to something you can ensure is still useful for all of your customers. That is the only point I was trying to make, and I'm sure everyone can agree on that. I'm sorry if others took offense; that was not my intention whatsoever.
|reply to rocca |
said by rocca:
This is why unlimited plans on higher speed packages become much more problematic/risky.
It is neither problematic nor risky if you make no speed guarantees or only very low ones.
If you have 999 users pulling 1Mbps and someone comes along trying to pull 1Gbps with only 1Gbps of uplink available, everyone gets ~1Mbps. If you had 10 subscribers doing 100Mbps each and an 11th jumps in trying to do 1Gbps, all 11 of them drop to ~90Mbps. Zero change in costs: your heavy subscriber cannot magically push network usage to 200% of existing capacity, so your cost can never exceed 100% of what you provisioned.
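What's described here is essentially max-min fair sharing of a fixed-capacity link. A minimal sketch (the function name and Mbps units are mine, not anything from the thread):

```python
def max_min_fair(capacity, demands):
    """Max-min fair sharing of a fixed-capacity link: satisfy the
    smallest demands first, then split the remainder evenly among
    the flows that still want more."""
    alloc = [0.0] * len(demands)
    pending = sorted(range(len(demands)), key=lambda i: demands[i])
    left = float(capacity)
    while pending:
        share = left / len(pending)
        i = pending[0]
        if demands[i] <= share:
            alloc[i] = float(demands[i])  # small demand: fully satisfied
            left -= demands[i]
            pending.pop(0)
        else:
            for j in pending:             # everyone left hits the same cap
                alloc[j] = share
            break
    return alloc

# 999 users pulling 1 Mbps + 1 user trying 1000 Mbps on a 1 Gbps uplink:
# everyone, including the heavy user, ends up at ~1 Mbps.
print(max_min_fair(1000, [1] * 999 + [1000])[-1])

# 10 subscribers at 100 Mbps + an 11th trying 1000 Mbps: all ~90.9 Mbps.
print(max_min_fair(1000, [100] * 10 + [1000])[0])
```

Either way the total allocated never exceeds the 1 Gbps that was provisioned, which is the "cost can never exceed 100%" point.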
But to let links run at ~100% all the time without severely degrading light/medium users' experience, you need to apply CoS/QoS with appropriate PHBs across the network. Once that is done, you only need to worry about upgrading capacity when your load from light/moderate subscribers warrants it while heavy subscribers automatically get whatever additional leftovers this may create.
This approach turns the whole capacity provisioning problem upside-down: instead of worrying about provisioning for peak-hours bandwidth required to maintain the illusion of dedicated bandwidth to end-users, you simply let congestion and CoS/QoS/WRED dynamically sort speeds out and only need to arrange for extra capacity when speeds drop below acceptable levels.
What is most cost-effective? Links that run at 20-30% of their peak-hour rates 16-20h/day or links that run at 70-100% 24/7? In an ideal world, isn't (closer-to-)100% 24/7 exactly what ISPs would like to achieve with things like off-peak unlimited - eliminate the 3:1 to 5:1 disparity between peak-hours and dead-of-the-night?
That's why giving up on the illusion of dedicated speed reduces costs so much - even more so when you control your own first-mile so you do not need to worry about inefficient and fragmented incumbent interconnects.
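The utilization argument can be put in rough numbers. A sketch, assuming a hypothetical fixed $1000/month link cost (the dollar figure is made up for illustration):

```python
link_cost = 1000.0  # $/month for a fixed-capacity link (hypothetical figure)

def cost_per_delivered_unit(avg_utilization):
    """Effective cost per unit of traffic actually delivered.
    A link that sits mostly idle makes every delivered bit pricier."""
    return link_cost / avg_utilization

# Peaky load averaging 25% of capacity vs a flat 90% 24/7 load:
print(cost_per_delivered_unit(0.25))  # $4000 per utilized capacity unit
print(cost_per_delivered_unit(0.90))  # ~$1111, roughly 3.6x cheaper
```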
@ invalid: I'd agree 100% with what you're saying; I've contemplated the same thing with my network. But the end user doesn't get it or understand the intricacies of running the network. They want predictability, and it doesn't matter that it's super fast during the day; what matters is that their speed tests differ from hour to hour. It's not an easy spot to be in. From my perspective, I'm paying for transit that I'm not using most of the day. It doesn't matter if it gets put to use, as I'm still paying for it... it all comes down to the customer and their expectations.
|reply to InvalidError |
I am perfectly fine with what you have proposed. As a so-called "heavy user" (which I think is nonsense for so little traffic), I have zero expectation of full speed or anywhere near it, as long as the connection provides reasonable speeds (10-20 Mbps is not reasonable). That's unlike the "light" users, who would complain like crazy if they didn't see pretty much full speed even at peak hours. As long as the upstream backhaul links are not congested and are upgraded appropriately, that would still allow for reasonable speeds and no ridiculous arbitrary cap.
|reply to prairiesky |
said by prairiesky:
it all comes down to the customer and their expectations
Hence the importance of breaking expectations.
North America is one of very few markets in the world where people have grown to expect/demand full speed all the time. Until that sense of entitlement gets broken, there will be no way to achieve high speeds without caps for a reasonable price.
said by prairiesky:
From my perspective, I'm paying for transit that I'm not using most of the day.
ISPs face the same problem too: they have to grossly over-provision and over-build for peak hours to meet people's expectations of full speeds during that time, and most of it goes to waste for the rest of the day.
No matter how the problem gets sliced and diced, people will always find ways and reasons to feel like they're paying for someone else. There isn't anything that can be done about that other than decide which version of it is the lesser evil but that too is up to each person's "preferences."
The hydro companies are the same, really: they have to provide enough capacity for peak usage, but they're then left with extra capacity at off-peak. They've come up with "smarter" billing by discounting rates at off-peak and charging a premium rate at peak.
This does two things: it makes the heavy peak users pay more of their actual usage share while letting the smaller users pay very little at peak, and it changes consumer habits by pushing high-usage items (washer, dryer, dishwasher, etc.) to off-peak hours.
I don't really know of ISPs that do this, and I'm not even sure it would be a good thing. The only thing close is the recent free off-peak window (2-8am) that a lot of providers now offer, or some traffic shaping at peak (Acanac I think? I forget which TPIA does it).
Note: Hydro Ottawa has 3 rates, off-peak, mid-peak, on-peak.
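A time-of-use bill is just usage in each window multiplied by that window's rate. A sketch with made-up rates for the three windows (these are NOT Hydro Ottawa's actual figures):

```python
# Hypothetical $/kWh rates for the three windows (illustrative only).
RATES = {"off_peak": 0.074, "mid_peak": 0.102, "on_peak": 0.151}

def tou_bill(usage_kwh):
    """usage_kwh maps a rate window name to kWh consumed in that window."""
    return sum(RATES[window] * kwh for window, kwh in usage_kwh.items())

# Shifting a 1.5 kWh dryer run from on-peak to off-peak:
on_peak_cost = tou_bill({"on_peak": 1.5})
off_peak_cost = tou_bill({"off_peak": 1.5})
print(round(on_peak_cost - off_peak_cost, 4))  # savings per run
```

This is exactly the habit-shifting incentive described above: the appliance run costs less simply because it moved to a cheaper window.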
|reply to SkyNetCanada |
This sounds excellent! Do you plan on having lower speed tiers than gigabit as well?
|reply to jmck |
said by jmck:
the hydro companies are the same really, they have to provide enough capacity for peak usage, but they're then left with extra capacity at off-peak.
There is one fundamental difference: power companies cannot reduce voltage during peak hours, and even if they did, most modern digital loads would simply crank up their input current to feed their constant-power loads, while data networks can shape traffic rates in countless ways.
If they reduce anything it would be amps not voltage. All hell would break loose if they output something outside the 110-120V range electrical appliances expect.
said by despe666:
If they reduce anything it would be amps not voltage.
I take it you have no idea what Ohm's law is.
For simple loads like an electric heater, the load is practically a simple resistor of value 'R', and the current through that resistor is I=V/R. So if you want to change 'I', you need to change the heater's resistance, which is generally not possible unless you physically swap in a lower-power heater; but then the lower-power heater may be insufficient to heat the room it is in. "Reducing the amps" is therefore not possible either, at least not without reducing voltage.
Most digital hardware, on the other hand, works in constant-power mode and will draw however many amps it needs from the line to deliver the required power to its load, so "reducing amps" there is not possible either.
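The two load behaviours can be sketched numerically (figures are for a hypothetical 1500 W heater at 120 V nominal):

```python
def resistive_current(volts, ohms):
    """Ohm's law: a fixed resistance draws LESS current as voltage drops."""
    return volts / ohms

def constant_power_current(volts, watts):
    """Constant-power load: current RISES as voltage drops."""
    return watts / volts

# A 1500 W heater at 120 V behaves like R = V^2 / P = 9.6 ohm.
R = 120**2 / 1500

print(resistive_current(120, R))          # 12.5 A at nominal voltage
print(resistive_current(110, R))          # ~11.46 A: less current, less heat
print(constant_power_current(110, 1500))  # ~13.64 A: more current, same power
```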
So, since power companies cannot arbitrarily lower voltage or do anything about how much current customers are drawing, the only thing they can do when the network is overloaded is rolling blackouts: controlled disconnects that bring total load within the available power budget, rotated every hour or two to reduce the chances of spoiling people's fridges and freezers.
Power companies regulate the voltage continually, and in emergency situations as outlined above, relatively drastically.
As the load on the grid increases, the voltage will reduce (sag); as the load decreases, the voltage will increase. Ohm's law again.
Power companies keep the voltage within established guidelines by directly manipulating it, either by changing tap positions on transformers or by switching VAR resources like capacitor banks or reactors in and out of service, in response to those continual changes in load and therefore in grid voltage. To say power companies cannot arbitrarily lower voltage just isn't quite correct.
|reply to InvalidError |
And what does a rolling blackout do? Reduce the amp (or kW/h) output of the network. You're right, most newer digital devices would adapt to the new voltage output. But go in your kitchen or wherever and play with your dimmer. You'll see what would happen if the utility started messing with the voltage.
|reply to NotQuite |
That link talks about reductions of 3 to 5%, which is half the 110V-120V normal operating range of appliances. Of course voltage fluctuates; that's why there's an operating range in the first place.
|reply to despe666 |
said by despe666:
And what does a rolling blackout do? Reduce the amp (or kW/h) output of the network
While a blackout reduces the load on the network, it does not magically reduce your load on the network; it cuts you off altogether, so you cannot draw any power whatsoever for the blackout's duration.
A rolling blackout would be like your ISP disconnecting you for the rest of your month once you bust your cap instead of rate-shaping you to some arbitrary speed or billing extra for usage beyond your cap.
BTW, the load on power networks is in watts not "kW/h" and the bills are in kW*h, not kW/h. If your power company bills in "kW/h" then their billing unit is mathematically incorrect.
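The unit distinction is just power multiplied by time. A trivial check with illustrative numbers (the flat rate is made up):

```python
power_kw = 1.5                  # instantaneous load: kilowatts (not "kW/h")
hours = 3.0
rate_per_kwh = 0.10             # hypothetical flat $/kWh rate

energy_kwh = power_kw * hours   # energy: kW * h = 4.5 kWh
bill = energy_kwh * rate_per_kwh

print(energy_kwh, bill)
```

"kW/h" would be a rate of change of power, which is not what either loads or bills measure.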
|reply to rocca |
said by rocca:
I certainly won't comment on what appropriate shaping/triggers are for any given ISP, but I can't imagine anyone can argue that $89 for unlimited guaranteed-1Gbps-speed-at-all-times is really possible at this time? If so then I'm severely overpaying for my transit.
Google does for Google Fiber (or relatively close to it, since they can't really guarantee 1Gbps when using only GPON)... but Google doesn't really pay for transit anyhow. IIRC they rely almost entirely on settlement-free peering, so even if they do have to pay for some transit, their average costs would be minuscule.