34764170 (banned)
Member
join:2007-09-06
Etobicoke, ON
to jmck

Re: FibreMedia.ca - Residential Gigabit Internet

said by jmck:

it's not a joke really. the speed of the connection shouldn't affect what you need, it's just faster to get what you want when you want it. it's also extremely amazing for sharing and sending large files to people.

it's almost ridiculous that you think it costs them nothing from transit providers to provide a constant gigabit per second. i know you and most of us here likely wouldn't use it, but there's always that jack ass that ruins it for the others, so limits have to be set.

furthermore i don't think simply being reduced to 500Mbit with no cap at all is realistic for them in terms of cost.

anyways, compare this to 150/10 cable service and caps or even 250/250 and 175/175 FTTH services from Bell/Rogers and it's insanely better, not only the top speed (4-5 times faster) but the overall monthly costs and increased caps.

is there room for a more expensive package from them with double the cap? sure but they're just launching now.

It is a joke, period. I don't care about a GigE connection that is that crippled. I'd rather have a 200Mbps connection with no cap over a crippled ass GigE connection.

No one is expecting a GigE connection with sustained throughput. Only you are thinking like this.

Except it is realistic. Stop drinking the kool-aid of the incumbent carriers who act as if their networks are built of gold and covered in diamonds.

It isn't insanely better to me when it's taking a step backwards. The monthly costs and caps are not better. I don't care about being on a connection with a link speed of GigE. That's only for bragging rights; e-penis +1.

jmck
formerly 'shaded'
Member
join:2010-10-02
Ottawa, ON

so stay with Rogers or Bell then? i don't know what to say. some people want the massively fast upload, and some people want to be able to get all their cloud data at a moment's notice when restoring a system.

so it's not for you, great. move along or rather stay with what you have and be happy. i'm sure they'll be tweaking and making changes going forward, no reason to get all butt-hurt because of it.
34764170 (banned)
Member
join:2007-09-06
Etobicoke, ON
1 edit

said by jmck:

so stay with Rogers or Bell then? i don't know what to say. some people want the massively fast upload, and some people want to be able to get all their cloud data at a moment's notice when restoring a system.

so it's not for you, great. move along or rather stay with what you have and be happy. i'm sure they'll be tweaking and making changes going forward, no reason to get all butt-hurt because of it.

I don't even live in Montreal so it doesn't matter. But they should be offering options other than just a burstable GigE connection with a small cap.

I wasn't getting butt hurt. You're misinterpreting what I am saying.

rednekcowboy
Member
join:2012-03-21
to jmck
said by jmck:

so stay with Rogers or Bell then? i don't know what to say. some people want the massively fast upload, and some people want to be able to get all their cloud data at a moment's notice when restoring a system.

so it's not for you, great. move along or rather stay with what you have and be happy. i'm sure they'll be tweaking and making changes going forward, no reason to get all butt-hurt because of it.

No one is getting all butt hurt at all, with the exception of maybe you. We are all having an intelligent discussion about a system not yet set in stone, and I would imagine part of the reason Skynet started this thread is to get some feedback.

I even said that I'm happy that someone is providing this option; however, I don't agree with severely crippling a connection, to the point that it's almost unusable, because someone is a heavy user. If you are that concerned about it, don't offer that high a speed, increase the cap, and then deal with the heavy users through traffic shaping - but again, not down to 10Mbps. Be reasonable about it, and if someone is constantly over-using, terminate them.

Going from a GigE connection down to 10Mbps for using 500GB worth of data really is laughable, or at least it would be if it weren't being seriously discussed.

Companies like Acanac offer unlimited (albeit at a much lower speed) with a stipulation that they may have to reduce speeds during peak (when it costs the most), but even then they would only cut the speed in half, and they have never actually had to do it yet.
despe666
Member
join:2009-06-20
Montreal, QC

I guess we can just go to all the other providers that offer any kind of GigE fiber connection for $89. Oh wait... I have no particular reason for defending Skynet, but you guys are acting like spoiled brats.
InvalidError
Member
join:2008-02-03
1 recommendation
to rednekcowboy
said by rednekcowboy:

I even said that I'm happy that someone is providing this option; however, I don't agree with severely crippling a connection, to the point that it's almost unusable, because someone is a heavy user.

I don't see that much of a problem with it. If you use a residential connection much more heavily than intended, something has to happen.

Why should a company refrain from offering high speeds to occasional/burst users just because there exist heavy users whose idea of "normal" use differs from the ISP's?

As for what the minimum speed might be, if SkyNet gets their transit for ~$1/Mbps, has internal distribution costs of ~$1/Mbps and amortization on their fiber plant costs them ~$50/sub/month, they should be able to afford aiming for ~20Mbps.
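
For what it's worth, here is a rough sketch of that arithmetic in Python. The $89/month figure is the package price being discussed in this thread; the per-Mbps and amortization numbers are just the assumptions above, not SkyNet's actual costs.

# Back-of-envelope minimum-speed math using the assumed figures above.
price_per_month = 89.0      # $/sub/month, the package price discussed in this thread
fiber_amortization = 50.0   # $/sub/month assumed for the fiber plant
transit_cost = 1.0          # $/Mbps/month assumed for external transit
internal_cost = 1.0         # $/Mbps/month assumed for internal distribution

margin = price_per_month - fiber_amortization   # $39/sub/month left for capacity
cost_per_mbps = transit_cost + internal_cost    # $2 per sustained Mbps per month

print(f"Affordable sustained rate per sub: ~{margin / cost_per_mbps:.0f} Mbps")
# -> ~20 Mbps, which is where the ~20Mbps minimum-speed target comes from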

If you look at Google Fiber, even Google does not like the fact that their RESIDENTIAL internet service is getting a lot more attention from businesses and would-be online entrepreneurs setting up shop in residential neighborhoods to abuse Google's service than from the normal people Google was aiming for. How this will ultimately play out remains to be seen.

rednekcowboy
Member
join:2012-03-21

said by InvalidError:

As for what the minimum speed might be, if SkyNet gets their transit for ~$1/Mbps, has internal distribution costs of ~$1/Mbps and amortization on their fiber plant costs them ~$50/sub/month, they should be able to afford aiming for ~20Mbps.

20Mbps wouldn't be bad. It would still allow the connection to be usable.

All you other guys getting offended, there is no need to take this so personally. We're just suggesting possible alternatives/better ways to implement this. Personally, I don't see a need for a GigE connection for personal use whatsoever other than to say you have a GigE connection, ie bragging rights.

Actually, if you really want to get into it (and I'm not talking about skynet here so don't get your panties all into a bunch):

There is no way a personal user needs this kind of speed. I run a lot of devices on my network (4 laptops, an HTPC, 3 gaming consoles, 2 cell phones, 2 iPods, and soon 2 IPTV boxes and VoIP) and I have more than enough with my 30Mbps connection.

If they just kept the speeds at a reasonable level, we could have caps at a reasonable level and costs would be, well, reasonable. These insane speed packages that the few use and abuse are driving up the cost for the rest of us.
InvalidError
Member
join:2008-02-03
1 recommendation

said by rednekcowboy:

If they just kept the speeds at a reasonable level, we could have caps at a reasonable level and costs would be, well, reasonable. These insane speed packages that the few use and abuse are driving up the cost for the rest of us.

Actually, lower speeds to light/moderate use subscribers may ultimately be more expensive to provide when all the GbE equipment is already there since providing lower speeds means (even more) grossly under-utilizing the equipment most of the time and increases missed opportunities to complete one usage burst before another begins.

By giving everyone Gbps access by default, this allows everyone to complete whatever they are doing as fast as available capacity will allow at any given time, which is the most efficient way to run a network when managed properly.

What is expensive is not the burst speed. It is the North American consumer's strong sense of entitlement to full-time full-speed that raises costs due to extra over-building and reduces speeds to mitigate the magnitude of that over-building. Once you get rid of that, things get much cheaper and more efficient.

Practically no speed guarantees is how ISPs all over the world manage to offer 100+Mbps service for as low as $30/month.

rednekcowboy
Member
join:2012-03-21

said by InvalidError:

said by rednekcowboy:

If they just kept the speeds at a reasonable level, we could have caps at a reasonable level and costs would be, well, reasonable. These insane speed packages that the few use and abuse are driving up the cost for the rest of us.

Actually, lower speeds to light/moderate use subscribers may ultimately be more expensive to provide when all the GbE equipment is already there since providing lower speeds means (even more) grossly under-utilizing the equipment most of the time and increases missed opportunities to complete one usage burst before another begins.

By giving everyone Gbps access by default, this allows everyone to complete whatever they are doing as fast as available capacity will allow at any given time, which is the most efficient way to run a network when managed properly.

What is expensive is not the burst speed. It is the North American consumer's strong sense of entitlement to full-time full-speed that raises costs due to extra over-building and reduces speeds to mitigate the magnitude of that over-building. Once you get rid of that, things get much cheaper and more efficient.

Practically no speed guarantees is how ISPs all over the world manage to offer 100+Mbps service for as low as $30/month.

On a hardware level, I'd have to take your word for it. I am in no way, shape or form a network engineer, but I will say that, even taking into consideration what you say, the more GigE connections you have, the more hardware and maintenance cost you would expect to incur vs. having the same number of subs on smaller packages. How that translates into administrative cost and time, I have no idea...

I'm talking about the billing/cap ratio. By offering an insane speed that no one needs, paired with a small cap, they can charge more for the "normal speed" packages while pairing those with even lower caps.

If you provide a GigE connection with a 1TB cap (instead of a 500GB cap), that allows for higher caps on the lower speed packages too. However, providing a GigE connection with a 500GB cap means you will not get a 500GB cap on a 30Mbps package but probably only 100GB or less (theoretically), because incumbents tie caps to the speed offerings. The lower the speed, the lower the cap. So you want as high a cap as possible on the highest speed tier so that the lower speed packages also get the advantage of a higher cap...

But alas, we are severely derailing this thread as this part of the discussion really does not relate to what Fibremedia is offering.
InvalidError
Member
join:2008-02-03

said by rednekcowboy:

On a hardware level, I'd have to take your word for it. I am in no way, shape or form a network engineer, but I will say that, even taking into consideration what you say, the more GigE connections you have, the more hardware and maintenance cost you would expect to incur vs. having the same number of subs on smaller packages. How that translates into administrative cost and time, I have no idea...

What is the cost difference of providing a 1Gbps ONT with a 1Gbps port on the OLT to a subscriber who wants 30Mbps vs one who wants 1Gbps? The same since it is all the same equipment anyway, just like how wholesale VDSL2 costs the same for 15/10 and 50/10 - same equipment, hardware and administrative costs regardless of speed and things are more or less the same on cable too. The ISP saves little to nothing from offering lower speed tiers there.

If the OLT has 432 ports with 2x10GbE uplinks, what is the interconnect cost difference between allowing the 432 ports to run at a full 1Gbps vs 30Mbps? None, since the two uplink ports are required for redundancy regardless of whether or not they are required for bandwidth. This means ~45Mbps of uplink available per port, and with a utilization ratio likely much lower than 10:1, typical performance should be well over 500Mbps at this point.
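
To put numbers on that, a quick sketch using the hypothetical 432-port OLT above; the 10:1 ratio is purely illustrative.

# Uplink share per port for the hypothetical 432-port OLT with 2x10GbE uplinks.
ports = 432
uplink_mbps = 2 * 10 * 1000              # two 10GbE uplinks, kept for redundancy anyway

per_port = uplink_mbps / ports
print(f"Uplink share per port: ~{per_port:.0f} Mbps")          # ~46 Mbps (the ~45Mbps above)

# If only 1 port in 10 is actually transferring at any instant (a 10:1 ratio),
# each active port can typically burst to roughly ten times that share;
# a lower ratio pushes the typical burst even higher.
print(f"Typical burst per active port: ~{per_port * 10:.0f} Mbps")   # ~460 Mbps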

Beyond that, we enter the core network, where bandwidth within a given chassis is dirt-cheap, so the name of the game is merely balancing available transit capacity across core routers against potential demand from attached subscribers and the minimum speed target.

So what is the total cost difference between providing 30Mbps and 1Gbps over infrastructure meant to carry 1Gbps? Nearly $0, since most of the equipment is required regardless of speed, until average load per subscriber exceeds 40Mbps or so, at which point additional interconnects between OLTs and routers would be required at a one-time cost of $7.5-10k per 10Gbps, amortized over 5+ years and 2000+ subscribers (less than $0.50/sub/month each). Until then, the only significant capacity-bound cost is external transit.

How soon is average load likely to reach 40Mbps? Sandvine and Cisco's reports say average usage is still under 20GB/month which would be under 0.1Mbps average load; shockingly low considering how much of a vested interest they have in inflating figures as much as possible - that's less than 1Mbps even if you squeeze it in a 4h peak-hours window. Most people on DSLR likely still use less than 300GB/month and that would work out to 1Mbps average use, 6Mbps if you lump it all in a 4h window. So, average usage from average people reaching 40Mbps is not going to happen any time soon.
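
A quick sanity check on those figures (decimal GB, 30-day month; the usage volumes and the interconnect amortization are the numbers quoted above, not measurements):

# Convert monthly usage volumes into average line rates.
SECONDS_PER_MONTH = 30 * 24 * 3600
PEAK_WINDOW_SECONDS = 30 * 4 * 3600      # all usage squeezed into a 4h/day window

def average_mbps(gb_per_month, seconds):
    return gb_per_month * 8e9 / seconds / 1e6

for gb in (20, 300):
    print(f"{gb} GB/month -> {average_mbps(gb, SECONDS_PER_MONTH):.2f} Mbps average, "
          f"{average_mbps(gb, PEAK_WINDOW_SECONDS):.1f} Mbps if squeezed into 4h/day")
# 20 GB/month  -> ~0.06 Mbps average, ~0.4 Mbps over a 4h/day window
# 300 GB/month -> ~0.93 Mbps average, ~5.6 Mbps over a 4h/day window

# And the extra-interconnect amortization mentioned above ($10k over 5 years, 2000 subs):
print(f"${10000 / (60 * 2000):.2f}/sub/month per additional 10GbE link")   # ~$0.08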

The reason why incumbents lump higher speeds with higher caps is because higher speeds do not really cost them anything more to provide as long as usage remains generally intermittent.

rocca
Start.ca
Premium Member
join:2008-11-16
London, ON
1 edit

I don't normally comment on other ISP threads, but I've been through this several times so thought it'd be relevant.

The issue isn't with the average user; let's say the average user is 1Mbps at peak for easy math. The problem with higher maximum speeds is how much more than the average a single customer can impact costs: with a 20Mbps connection the worst a single customer can do is 20x the average, whereas with 1Gbps that same single customer can do 1000x the average. Put even more simply, a single user can cost you 1000x the average that your pricing model is based on.

This is why unlimited plans on higher speed packages become much more problematic/risky. It only takes 0.1% of the customer base using it maxed out at peak to double the overall average capacity cost. I.e. 1000 users @ 1Mbps = 1000Mbps, whereas 999 users @ 1Mbps + 1 user @ 1000Mbps = 1999Mbps - that 1 user (or 0.1% of the customer base) can double the average costs for everyone else. Or if 1% of the customers use it fully at peak, then it's 990 users @ 1Mbps + 10 users @ 1000Mbps = 10990Mbps, increasing the average cost by 11x for the other 99%.
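
Spelled out in code, using the 1Mbps-per-light-user figure above (this just re-does the same arithmetic):

# How much one full-speed user skews the peak-hour total.
customers = 1000
light_mbps = 1.0        # assumed average per light user at peak
heavy_mbps = 1000.0     # one user actually sustaining the full gig

def total_mbps(heavy_count):
    return (customers - heavy_count) * light_mbps + heavy_count * heavy_mbps

base = total_mbps(0)                           # 1000 Mbps
print(total_mbps(1), total_mbps(1) / base)     # 1999 Mbps, ~2x the base capacity cost
print(total_mbps(10), total_mbps(10) / base)   # 10990 Mbps, ~11x the base capacity cost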

I certainly won't comment on what appropriate shaping/triggers are for any given ISP, but I can't imagine anyone can argue that $89 for unlimited guaranteed-1Gbps-speed-at-all-times is really possible at this time? If so then I'm severely overpaying for my transit. Overall I suspect their package will do quite well and kudos to them for the work they are doing. As long as any company is upfront about their policies or limits then people can make an informed decision on whether that product makes sense for them and just because it doesn't work for every person doesn't mean it won't work for most.

edit: For what it's worth, I completely agree with the comment about "as long as usage remains generally intermittent", yes, in that case 30Mbps vs 1000Mbps for the typical average user does not have a significant cost difference. The issue is that as speeds increase the non-typical-sustained-usage-user has the ability to individually skew numbers by a large factor.
InvalidError
Member
join:2008-02-03

said by rocca:

This is why unlimited plans on higher speed packages become much more problematic/risky.

It is neither problematic nor risky if you make no speed guarantees or only very low ones.

If you have 999 users pulling 1Mbps and someone comes along trying to pull 1Gbps with only 1Gbps of uplink available, everyone gets ~1Mbps. If you had 10 subscribers doing 100Mbps each and an 11th jumps in trying to do 1Gbps, all 11 of them drop to ~90Mbps. Zero change in costs - your heavy subscriber cannot cause network usage to magically rise to 200% of existing capacity so your cost can never exceed 100% of what you configured.
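
Here is a small sketch of how that plays out under max-min fair sharing - the general principle, not any particular vendor's scheduler:

# Max-min fair share: every flow gets an equal slice of the remaining capacity;
# flows that want less than their slice keep what they asked for, and the
# leftover is redistributed among the rest.
def max_min_share(demands_mbps, capacity_mbps):
    allocation = [0.0] * len(demands_mbps)
    remaining = {i: d for i, d in enumerate(demands_mbps)}
    capacity = capacity_mbps
    while remaining and capacity > 1e-9:
        share = capacity / len(remaining)
        for i, want in list(remaining.items()):
            given = min(want, share)
            allocation[i] += given
            capacity -= given
            if given >= want:
                del remaining[i]             # demand fully satisfied
            else:
                remaining[i] = want - given
    return allocation

# 999 users pulling 1Mbps plus one user trying to pull 1Gbps on a 1Gbps uplink:
print(max_min_share([1.0] * 999 + [1000.0], 1000.0)[-1])   # heavy user gets ~1 Mbps
# 10 users doing 100Mbps each plus an 11th trying to do 1Gbps:
print(max_min_share([100.0] * 10 + [1000.0], 1000.0)[0])   # everyone gets ~90.9 Mbps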

But to let links run at ~100% all the time without severely degrading light/medium users' experience, you need to apply CoS/QoS with appropriate PHBs across the network. Once that is done, you only need to worry about upgrading capacity when your load from light/moderate subscribers warrants it while heavy subscribers automatically get whatever additional leftovers this may create.

This approach turns the whole capacity provisioning problem upside-down: instead of worrying about provisioning for peak-hours bandwidth required to maintain the illusion of dedicated bandwidth to end-users, you simply let congestion and CoS/QoS/WRED dynamically sort speeds out and only need to arrange for extra capacity when speeds drop below acceptable levels.

What is most cost-effective? Links that run at 20-30% of their peak-hour rates 16-20h/day or links that run at 70-100% 24/7? In an ideal world, isn't (closer-to-)100% 24/7 exactly what ISPs would like to achieve with things like off-peak unlimited - eliminate the 3:1 to 5:1 disparity between peak-hours and dead-of-the-night?

That's why giving up on the illusion of dedicated speed reduces costs so much - even more so when you control your own first-mile so you do not need to worry about inefficient and fragmented incumbent interconnects.
prairiesky
Member
join:2008-12-08
Canada

@invalid: I'd agree 100% with what you're saying; I've contemplated the same thing with my network. But the end user doesn't get it or understand the intricacies of running the network. They want predictability, and it doesn't matter that it's super fast during the day; what matters is that their speed tests differ from hour to hour. It's not an easy spot to be in. From my perspective, I'm paying for transit that I'm not using most of the day. It doesn't matter if it gets put to use, as I'm still paying for it... it all comes down to the customer and their expectations.
34764170 (banned)
Member
join:2007-09-06
Etobicoke, ON
to InvalidError
I am perfectly fine with what you have proposed. As a so-called "heavy user" (which I think is nonsense for so little traffic), I have zero expectation of full speeds or anywhere near them, as long as the connection is providing reasonable speeds (10-20Mbps is not) - unlike the "light" users who would complain like crazy if they are not seeing pretty much full speed even at peak hours. As long as the upstream backhaul links are not congested and are upgraded appropriately, that would still allow for reasonable speeds with no ridiculous arbitrary cap.
InvalidError
Member
join:2008-02-03
to prairiesky
said by prairiesky:

it all comes down to the customer and their expectations

Hence the importance of breaking expectations.

North America is one of very few markets in the world where people have grown to expect/demand full speed all the time. Until that sense of entitlement gets broken, there will be no way to achieve high speeds without caps for a reasonable price.
said by prairiesky:

From my perspective, I'm paying for transit that I'm not using most of the day.

ISPs face the same problem too: they have to grossly over-provision and over-build for peak hours to meet people's expectations of full speed during that time, and most of that capacity goes to waste for the rest of the day.

No matter how the problem gets sliced and diced, people will always find ways and reasons to feel like they're paying for someone else's usage. There isn't anything that can be done about that other than deciding which version of it is the lesser evil, but that too is up to each person's "preferences."

jmck
formerly 'shaded'
Member
join:2010-10-02
Ottawa, ON

the hydro companies are the same really, they have to provide enough capacity for peak usage, but they're then left with extra capacity at off-peak. they've come up with 'smarter' billing by discounting rates at off-peak and charging more for the premium peak rate.

this obviously does two things: it makes the heavy peak users pay more of their actual usage share while letting the smaller users pay very little at peak, and it changes consumer habits by pushing them to run high-usage items at off-peak (washer, dryer, dishwasher, etc).

i don't really know of ISPs that do this, and I'm not even sure if it would be a good thing. the only thing close is the recent free window that a lot of providers are now offering at off-peak (2-8am) or even some traffic shaping at peak (Acanac I think? I forget which TPIA does it).

Note: Hydro Ottawa has 3 rates, off-peak, mid-peak, on-peak.
InvalidError
Member
join:2008-02-03

said by jmck:

the hydro companies are the same really, they have to provide enough capacity for peak usage, but they're then left with extra capacity at off-peak.

There is one fundamental difference: power companies cannot reduce voltage during peak hours, and even if they did, most modern digital loads would simply crank up their input current to feed their constant-power loads, whereas data networks can shape traffic rates in countless ways.
despe666
Member
join:2009-06-20
Montreal, QC

If they reduce anything it would be amps not voltage. All hell would break loose if they output something outside the 110-120V range electrical appliances expect.
InvalidError
Member
join:2008-02-03

said by despe666:

If they reduce anything it would be amps not voltage.

I take it you have no idea what Ohm's law is.

For simple loads like an electric heater, the load is practically a simple resistor of value 'R', and the current through that resistor is I = V/R. So if you want to change 'I', you need to change the heater's resistance, and that is generally not possible unless you physically swap out the heater for a lower-power one - but then the lower-power heater may be insufficient to heat the room it is in. So "reducing the amps" is not possible either, at least not without reducing voltage.

Most digital hardware, on the other hand, works in constant-power mode and will draw however many amps it needs from the line to deliver the required power to its load, so "reducing amps" is not possible there either.

So, since power companies cannot arbitrarily lower voltage or do anything about how much current customers are drawing, the only thing they can do when their network is overloaded is rolling blackouts - controlled disconnects that bring the total load within the available power budget and rotate every hour or two to reduce the chances of spoiling people's fridges and freezers.
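
To put numbers on the resistive-load case, a quick worked example with a hypothetical 1500W baseboard heater on a nominal 120V circuit:

# A fixed resistance draws power proportional to the square of the supply voltage.
rated_power = 1500.0                 # W, hypothetical baseboard heater
nominal_voltage = 120.0              # V

resistance = nominal_voltage ** 2 / rated_power     # R = V^2 / P, about 9.6 ohms

for voltage in (120.0, 114.0):                       # 114 V is roughly a 5% sag
    current = voltage / resistance                   # I = V / R (Ohm's law)
    power = voltage * current                        # P = V * I
    print(f"{voltage:.0f} V -> {current:.1f} A, {power:.0f} W")
# 120 V -> 12.5 A, 1500 W
# 114 V -> 11.9 A, 1354 W  (a ~5% sag only trims ~10% off a purely resistive load)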

NotQuite
Anon
@198.254.255.x

»www.ieso.ca/imoweb/siteS ··· p?sid=bi

Power companies regulate the voltage continually, and in emergency situations (as outlined above) relatively drastically.
As the load on the grid increases, the voltage will drop (sag); as the load decreases, the voltage will rise. Ohm's law again.
Power companies keep the voltage within established guidelines by directly manipulating it - changing tap positions on transformers, or switching VAR resources like capacitor banks and reactors in and out of service - in response to those continual changes in the load, and therefore the voltage, of the grid. To say power companies cannot arbitrarily lower voltage just isn't quite correct.
despe666
Member
join:2009-06-20
Montreal, QC
to InvalidError
And what does a rolling blackout do? Reduce the amp (or kW/h) output of the network. You're right, most newer digital devices would adapt to the new voltage output. But go in your kitchen or wherever and play with your dimmer. You'll see what would happen if the utility started messing with the voltage.
despe666
Member
1 edit
to NotQuite
That link talks about reductions of 3 to 5%, which is half the 110V-120V normal operating range of appliances. Of course voltage fluctuates; that's why there's an operating range in the first place.
InvalidError
Member
join:2008-02-03
to despe666
said by despe666:

And what does a rolling blackout do? Reduce the amp (or kW/h) output of the network

While a blackout reduces the load on the network, it does not magically reduce your load on the network; it cuts you off altogether so you cannot draw any power whatsoever for the blackout's duration.

A rolling blackout would be like your ISP disconnecting you for the rest of your month once you bust your cap instead of rate-shaping you to some arbitrary speed or billing extra for usage beyond your cap.

BTW, the load on power networks is in watts not "kW/h" and the bills are in kW*h, not kW/h. If your power company bills in "kW/h" then their billing unit is mathematically incorrect.

Guspaz
MVM
join:2001-11-05
Montreal, QC
to rocca
said by rocca:

I certainly won't comment on what appropriate shaping/triggers are for any given ISP, but I can't imagine anyone can argue that $89 for unlimited guaranteed-1Gbps-speed-at-all-times is really possible at this time? If so then I'm severely overpaying for my transit.

Google does for Google Fiber (or relatively close to it, since they can't really guarantee 1Gbps when using only GPON)... but Google doesn't really pay for transit anyhow. IIRC they're almost entirely relying on settlement-free peering, so even if they do have to pay for some transit, their average costs would be minuscule.