bemis
Premium Member
join:2008-07-18
United States

Google Fiber and oversubscribing

My general understanding of how ISPs work is that they oversubscribe their connections to the external internet...

e.g., Comcast sells thousands of "16 Mb/s" connections in a given area. I assume that area is connected to a Comcast backbone of sorts via a certain-sized pipe to some central location they operate, which is in turn connected by different-sized pipes to various other networks.

For a given customer, Comcast knows the potential speed is capped at whatever they're selling. In my example, say all customers are 16 Mb/s; with a 10:1 oversubscription, 1,000 customers with 16 Mb/s connections would nominally need a 16,000 Mb/s connection to the backbone for everyone to operate at the max speed sold, but Comcast may actually only provision a 1,600 Mb/s connection (10:1) because statistically not all customers will saturate their connections all the time. I understand the actual ratios may be very different, but I assume that on the whole providers maintain relatively similar ratios to each other...
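
Just to put that arithmetic in one place, a minimal sketch in Python, using only the hypothetical numbers from this post (not anything a provider actually publishes):

    # Back-of-the-envelope oversubscription math (hypothetical numbers from this post).
    def backbone_needed_mbps(subscribers, rate_mbps, oversub_ratio):
        """Uplink capacity an ISP would provision for a group of subscribers."""
        peak_demand = subscribers * rate_mbps   # if everyone maxed out at once
        return peak_demand / oversub_ratio      # what actually gets provisioned

    print(backbone_needed_mbps(1000, 16, 1))    # 16000.0 -- no oversubscription
    print(backbone_needed_mbps(1000, 16, 10))   # 1600.0  -- the 10:1 example above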

So my question is, with Google Fiber providing 1 Gb/s connections... 10-100X the average of other providers... do they maintain a similar oversubscription ratio to other ISPs? I would imagine that even though the KC experiment may be a special case, if they intended to roll out something similar to other areas it would need to be dramatically oversubscribed on average, right?

BTW, I understand that on the whole a connection to an individual server out on the internet is subject to many conditions, so sustaining 1 Gb/s to anything besides Google's local servers there is pretty much not going to happen...
AVonGauss
Premium Member
join:2007-11-01
Boynton Beach, FL

The Internet itself is oversubscribed. Most providers probably do not work on a ratio system but rather by monitoring utilization of links. At a cable provider, a given segment may have 250 subscribers, but if the utilization stays below certain thresholds (say 70%) they're fine. Both Google Fiber and any other ISP would also have to monitor utilization on their uplinks to the general Internet, again as a percentage of capacity used. Good providers add capacity as sustained high utilization appears; others are a bit slower to do so.
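
A minimal sketch of that kind of threshold-based check, in Python; the 70% figure and the link size here are purely illustrative, not any provider's real policy:

    # Illustrative utilization-threshold check, not any provider's actual tooling.
    def needs_upgrade(sustained_mbps, link_capacity_mbps, threshold=0.70):
        """Flag a link for a capacity upgrade once sustained utilization crosses the threshold."""
        return sustained_mbps / link_capacity_mbps >= threshold

    # A 10 Gb/s uplink with a sustained 6.5 Gb/s is still under the 70% line; 7.5 Gb/s is not.
    print(needs_upgrade(6500, 10000))   # False
    print(needs_upgrade(7500, 10000))   # True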

bemis
Premium Member
join:2008-07-18
United States

Right, I'm just thinking of the differences in potential speeds...

If Comcast is selling a range from 16-100 Mb/s for downloads, heavily weighted toward the slower end, they can make some assumptions about the total bandwidth required to keep their users close to the total bandwidth being sold... and that range is generally considered sustainable for internet connections today...

1 Gb/s, on the other hand... that's off-the-charts fast for many small/medium businesses, let alone a residential user. So I'm just curious if anyone has any insight into specifically how Google is handling this situation. Are they simply assuming 90% of the sites out there with any significant data to transfer are not going to sustain anything above 30-50 Mb/s to a single user? What about torrents? If they have a few hundred of their users seeding torrents to people out there, they could easily have several hundred Gb/s worth of data to spit out of their network.

I guess what I'm trying to understand is whether the 1 Gb/s symmetric connection is more of an "in your face" gimmick aimed at the other ISPs out there, to act as a catalyst for the average user to get 100 Mb/s for $100/mo, so to speak? We can already see it has caused TW to roll out faster speeds in the KC market... Or did they (Google) actually build the infrastructure to genuinely support those levels--and if they did, is it unique to their KC test bed, or would they intend to create the same structures elsewhere?
AVonGauss
Premium Member
join:2007-11-01
Boynton Beach, FL

I think only time will really tell. Right now it's all still new, so between users probably banging on it hard and Google wanting it to be successful, it's well maintained and sufficiently provisioned. A couple of years from now, or in newer markets if they expand, will really tell.
chomper87
Member
join:2012-02-22
Clearwater, FL

to bemis
Nothing is really all that simple in this case.

This really isn't a technology problem, but a business decision. ISPs do assume that not everyone will use 100% of their bandwidth 100% of the time to external connections. They use some method to determine the appropriate amount of capacity to have on backbone links; I have no clue how they come up with it.

Even AT&T and Verizon, who are also Tier 1 / NSP / backbone operators, do this. My Verizon FiOS 75/35 connection doesn't guarantee I get 75/35 to every single server, especially one that is off Verizon's network. Even a carrier-grade 100/100 connection from Verizon, AT&T, Level3, etc. can't guarantee that. Once traffic leaves their network, they lose control. And obviously, if I'm downloading from a 10 Mb server that uses Verizon's network, I don't magically get a 75 Mb download.

Just as a cable company has arguments with a TV network over pricing and whatnot, the same thing happens on the Internet. It's just rarely front-page news.
»en.wikipedia.org/wiki/Pe ··· epeering

and probably the most famous is Comcast VS Level 3 / Netflix

»arstechnica.com/tech-pol ··· oorstep/

I think it's a tremendous move by Google to spur innovation and also gives the Cable internet companies and DSL providers a wake up call. I trust Google, I don't think they invested into Google Fiber just for it to turn into a crap product in 4 years. Google has power, ability, and money to deliver the infrastructure. But again business decisions will determine this in the future, and not any technology limit. Verizon has the ability to keep expanding FiOS, they chose to instead expand 4G/LTE as it's immensely more profitable. Let's hope Google doesn't go down that same path.

bemis
Premium Member
join:2008-07-18
United States

said by chomper87:

ISPs do assume that not everyone will use 100% of their bandwidth 100% of the time to external connections. They use some method to determine the appropriate amount of capacity to have on backbone links; I have no clue how they come up with it.

Of course not, and I understand that -- sorry if I didn't make that clear in my other posts.

My question is about how Google is dealing with over-subscription, which we both agree all ISPs do.

Google's connections to customers are 50-100X higher bandwidth than average residential connections today, so I'm trying to understand if they have taken a different approach to over subscription or not.

Maybe translating to a water pipe analogy--
The average user has a 1" water main. The water company (Comcast/VZ/AT&T) knows it cannot supply full pressure to every 1" water main at the same time, but it has balanced the system so that, on average, everyone using water at any given moment gets full pressure. During times of abnormal usage the pressure drops--and abnormal is not a typical morning when most households are running a shower; it's, say, a spring day when by coincidence most people decide to water their lawns or fill their pools. The system is overtaxed and a pressure drop is noticed, but it's not a big deal...

Now Google comes along and installs 50" water mains to everyone's home. My question is whether Google, at their pump stations (central offices), is designing their system to supply full pressure to 50" water mains on average or not.

If they ARE, then that means they must be looking at 50X the capacity at their pumping stations.

If they are not, then on any given average day people will notice huge swings in pressure... one minute, when no one happens to be using any water, their 50" main is at full pressure; another minute, when an average amount of consumption is occurring in other households, the 50" main's pressure drops drastically...

I'm not saying the 2nd case is bad, or worse, just that it will be that way. In fact, I'd argue it's actually the better case. Right now, if I'm downloading a 500 Mb file I will saturate my 20 Mb/s connection for 25 seconds. If I had a 1,000 Mb/s connection, the connection would be saturated for just 0.5 seconds. So with periodic bursts of data, I would argue that everyone having a 1,000 Mb/s connection--even if the backbone is oversubscribed at the same level as if everyone had 20 Mb/s connections--would be a superior overall experience, even if there were periods of frustration where my 1,000 Mb/s connection was only averaging 20 Mb/s due to an average number of users also attempting to download data.
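
Putting rough numbers on that burst argument (assuming the 500 Mb figure is megabits, and ignoring protocol overhead):

    # Ideal transfer time for a 500 Mb burst at different access speeds (no overhead).
    def transfer_seconds(size_megabits, rate_mbps):
        return size_megabits / rate_mbps

    print(transfer_seconds(500, 20))     # 25.0 s on a 20 Mb/s connection
    print(transfer_seconds(500, 1000))   # 0.5 s on a 1 Gb/s connection
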
said by chomper87:

I think it's a tremendous move by Google to spur innovation and also gives the Cable internet companies and DSL providers a wake up call. I trust Google, I don't think they invested into Google Fiber just for it to turn into a crap product in 4 years. Google has power, ability, and money to deliver the infrastructure. But again business decisions will determine this in the future, and not any technology limit. Verizon has the ability to keep expanding FiOS, they chose to instead expand 4G/LTE as it's immensely more profitable. Let's hope Google doesn't go down that same path.

I think VZ's decision to focus on 4G/LTE makes enormous sense, not just from a profit standpoint but from a usage-model standpoint. There is a practical limit to the existing cellular technologies, just as there is with DSL/cable. It's clear that "always connected" and portable are the direction that consumers and devices are taking. So VZ has taken the bull by the horns (as it did with FIOS) and has aggressively beaten all the competition in rolling out their network.

I don't agree with the decision to slow/stop FIOS roll-outs, because I think FTTH represents the next 20+ years of home broadband, but I think the realities of operating a business caught up with them... they were facing high roll-out costs and delays for various reasons, including town-to-town franchise battles. Wall Street and the analysts could see this, did not see short-term profits, and wanted a shift (it was the previous CEO who was gung ho for FIOS).

Google's decision making is entirely based on profit and their goals are to reach those profits--just like all companies--so be careful who you trust, because they can turn on a dime.

The current price points for Google Fiber are $0 and $70...

If Comcast & VZ can make money selling a $70 connection then I don't see why Google cannot. Ignoring some potentially higher equipment costs, the various fixed costs for building, billing, and supporting these networks don't change based on how much bandwidth is offered; really, only the cost to actually supply that bandwidth does. Hence the whole point of my question: if Google, with their 1 Gb/s connections to the home, are provisioning their backbones at the same levels as Comcast and Verizon, with their average 10-50 Mb/s connections to the home, then I would imagine Google can make a profit just like the other guys. But I also think doing that will lead to a more varied/peaky user experience as the service is expanded. They will have to reset the mentality of the average (i.e. non-DSLR) customer.

BTW, I believe at the moment Google Fiber's price points are somewhat unsustainable and are more pet-project level. They are literally giving away the slower service...

But I also believe that VZ FIOS's current price structure is gouging to some extent. At the lower levels each speed jump is about $10 more, which makes sense... but the jump to the 300 Mb/s level is outrageous.

To go from the base 15/5 for $65 to 150/65 at $100 makes sense--you get 10X the speed for about 1.5X the price... but to go to 300/65 it's $210; that's doubling the price while only doubling the speed! The ONT is the same between the 150/65 and 300/65 service, so what changes do they need to make to their network that justify such a high cost jump vs. the other tiers? I would have expected the 150/65 level to have the larger jump, because that at least requires an ONT replacement for people like me (since my ONT has a 10/100 ethernet port)... So these are the cases I'm hoping Google Fiber can help shake out--the gouging for higher bandwidth.
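
For what it's worth, here's the same tier comparison as dollars per megabit of download speed, using only the list prices quoted above:

    # $/Mbps (download only) for the FiOS tiers quoted above; prices change over time.
    tiers = {"15/5": (65, 15), "150/65": (100, 150), "300/65": (210, 300)}
    for name, (price, down_mbps) in tiers.items():
        print(f"{name}: ${price / down_mbps:.2f} per Mbps down")
    # 15/5:   $4.33 per Mbps
    # 150/65: $0.67 per Mbps
    # 300/65: $0.70 per Mbps -- the per-Mbps price stops improving at the top tier
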
bemis
Premium Member

One last thing to point out...

Google literally whipped up cities and towns into a frenzy to get their services--mayors were ceremoniously changing town names, parades were held, etc... The governments of these areas were fighting for Google love because Google made it an exclusive award to be coveted and desired.

FIOS got consumers excited, but the governments and managers of various localities were not nearly as accommodating as I imagine KC was for Google Fiber.

When Apple was rolling out their first stores it was similar... areas got excited that /they/ had an Apple store, because it was exclusive. At the time no one could give two shits if Best Buy opened, or Dell opened a branded store... because they were not exciting and exclusive.

I'm curious if Google DOES decide to become a viable national TV/internet provider, I'm sure the first roll outs will be equally exciting, but by city #28 or so I think things will become less exciting... maybe we'll start hearing the BS that many towns have been pulling on VZ--like trying to charge them property taxes for the utility poles, etc... i.e. money grabs by towns and cities.
davidhoffman
Premium Member
join:2009-11-19
Warner Robins, GA

"I'm curious if Google DOES decide to become a viable national TV/internet provider, I'm sure the first roll outs will be equally exciting, but by city #28 or so I think things will become less exciting... maybe we'll start hearing the BS that many towns have been pulling on VZ--like trying to charge them property taxes for the utility poles, etc... i.e. money grabs by towns and cities."

Google has created a template for cooperation by other cities wanting Google Gigabit. Those that wish to closely adhere to that template will be strongly considered for service. Those that want to significantly deviate from the template will not be considered as strongly. In the KCMO and KCKS area other municipalities are already using the known agreements with KCMO and KCKS to prepare proposals for getting Google Gigabit. Google has taken the experiences it had with KCMO and KCKS into consideration and has created a more detailed and comprehensive template, so that the problems of the past have a reduced probability of occurring in other cities.

The excitement will continue if Google Gigabit's arrival creates a viable third or fourth competitive option in a particular ISP market. TWC was not planning on doing anything with regard to better customer service in the KCMO area. Then Google arrived, and after a period of time TWC appears to have decided to make better customer service a funded and staffed priority. Existing TWC subscribers report serious efforts by TWC to resolve long-standing technical problems with both cable TV and HSI service.
prairiesky
Member
join:2008-12-08
Canada

to bemis
Google is using 10G PON for their networks, and can have a max of probably 32 subs per OLT (maybe 64, but doubtful). They probably can't go higher than 1 Gbit due to Ethernet adapter limitations (don't know if the modems have integrated switches). So just on packages, they have a max oversubscription ratio of 3.2:1, which basically means they don't have to worry about it.

They could backhaul each OLT with a 10 gig link and not have to worry, because the OLT can't handle over 10 gig. Somewhere up the line the 10G links will all come together, but like the end-user situation, each one of them won't be full all the time, so the same ratio of oversubscription occurs at the next level.

GPON has 1.25 gig / 32 customers.

Cable has, I think, 128/node and each channel is 55 Mbit, so 8 channels is 440 Mbit of capacity (may be wrong on max subs).

Basically, the further back in the system you go, the higher the ratios get.
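
A quick sketch of that split-level arithmetic, using only the figures in this post (10G shared downstream, 32 subs sold 1 Gb/s each, and the 1.25 gig GPON figure); actual deployments vary:

    # Oversubscription at the PON split, per the assumptions in this post.
    def pon_oversub(subs, sub_rate_gbps, shared_capacity_gbps):
        return (subs * sub_rate_gbps) / shared_capacity_gbps

    print(pon_oversub(32, 1, 10))     # 3.2  -> 3.2:1 on a 10G PON with 32 subs
    print(pon_oversub(32, 1, 1.25))   # 25.6 -> the "1.25 gig / 32 customers" GPON case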

Killa200
Premium Member
join:2005-12-02
TN

said by prairiesky:

Google is using 10G PON for their networks, and can have a max of probably 32 subs per OLT (maybe 64, but doubtful). They probably can't go higher than 1 Gbit due to Ethernet adapter limitations (don't know if the modems have integrated switches). So just on packages, they have a max oversubscription ratio of 3.2:1, which basically means they don't have to worry about it.

They could backhaul each OLT with a 10 gig link and not have to worry, because the OLT can't handle over 10 gig. Somewhere up the line the 10G links will all come together, but like the end-user situation, each one of them won't be full all the time, so the same ratio of oversubscription occurs at the next level.

GPON has 1.25 gig / 32 customers.

Cable has, I think, 128/node and each channel is 55 Mbit, so 8 channels is 440 Mbit of capacity (may be wrong on max subs).

Basically, the further back in the system you go, the higher the ratios get.

XG-PON1 (10G asymmetric) has a split ratio of up to 128:1 on fiber, making gigabit downstream at most 12.8:1 oversubscribed and upstream 51.2:1.

GPON (2.5 / 1.25) has a maximum split of 64.

Cable DOCSIS is unlimited within reason of the hardware; within reason works out to ~256 max. A 256-QAM channel in a 6 MHz slot is 38 Mbps of usable bandwidth.
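
The same arithmetic with the XG-PON1 figures above (10 Gb/s down, 2.5 Gb/s up, 128-way split, each sub sold 1 Gb/s):

    # XG-PON1 worst-case ratios from the figures above (128 subs at 1 Gb/s each).
    subs, sub_rate_gbps = 128, 1.0
    print(subs * sub_rate_gbps / 10.0)   # 12.8 -> 12.8:1 downstream on 10 Gb/s
    print(subs * sub_rate_gbps / 2.5)    # 51.2 -> 51.2:1 upstream on 2.5 Gb/s
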
prairiesky
Member
join:2008-12-08
Canada

said by Killa200:

XG-PON1 (10G asymmetric) has a split ratio of up to 128:1 on fiber, making gigabit downstream at most 12.8:1 oversubscribed and upstream 51.2:1.

GPON (2.5 / 1.25) has a maximum split of 64.

Cable DOCSIS is unlimited within reason of the hardware; within reason works out to ~256 max. A 256-QAM channel in a 6 MHz slot is 38 Mbps of usable bandwidth.

It varies greatly; the numbers you're showing really aren't real-world numbers. They're based on receive sensitivities and the theoretical splits that can happen, and they don't take into account insertion losses, cable losses, etc. So they're basing those claims on a 21 dB spread between send/receive sensitivities. A) it's not recommended to go right to the edges; B) it's highly improbable it would happen.

So if you get it to 18 dB you'd get a max of 64 subs, and within 15 dB you'd get 32, which is much more likely and gives you some room for error.

As for cable, I was remembering the EuroDOCSIS specs, which are 55 Mbps gross / 50 usable.
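
One rough way to see where those split counts come from: each 2-way split stage costs about 3 dB of optical power in the ideal case (real splitters lose a bit more), so the loss budget caps the split. This sketch deliberately ignores fiber and connector losses, which is exactly the caveat being made above:

    # Max PON split for a given optical loss budget, assuming an ideal 3 dB per 2-way split stage.
    def max_split(budget_db, loss_per_stage_db=3.0):
        stages = int(budget_db // loss_per_stage_db)
        return 2 ** stages

    print(max_split(21))   # 128 -- the theoretical spec-sheet split
    print(max_split(18))   # 64
    print(max_split(15))   # 32 -- leaves margin for real-world losses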


racer9876
Defender Of The Universe
Premium Member
join:2000-07-03
Lancaster, CA

to bemis
North American DOCSIS uses a 6 MHz wide channel while European DOCSIS uses an 8 MHz wide channel. That's why a North American DOCSIS channel is 42.88 Mbps total with 38 Mbps usable, and a Euro channel is 55 Mbps total with 50 Mbps usable.
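
Those raw rates fall out of symbol rate times bits per symbol: 256-QAM carries 8 bits per symbol, DOCSIS runs roughly 5.36 Msym/s in a 6 MHz channel and EuroDOCSIS roughly 6.95 Msym/s in 8 MHz; the usable figures are lower because of FEC and framing overhead:

    # Raw 256-QAM channel rates = symbol rate x bits per symbol (overhead not subtracted).
    bits_per_symbol = 8                  # 256-QAM
    print(5.360537 * bits_per_symbol)    # ~42.88 Mbps, 6 MHz DOCSIS channel
    print(6.952 * bits_per_symbol)       # ~55.6 Mbps, 8 MHz EuroDOCSIS channel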

ProFiber
Anon
@swbell.net

to bemis
I think it is good to consider the technical implications of this, but I think the long-term plans for Google's business model are not focused on the cost of xyz pipes connected to abc subscribers. Their motivation is to drive more content that they can index and manage, with a better network to deliver it. While the cost of the network is certainly a consideration, the difference for Google is that they can offset it with advertising against content--something TWC/Comcast, VZ, ATT, etc. have issues with. While TWC can make some money from cable advertising and such, Google gets paid for access to content that is interesting and valued by people... not what some corporations work out via Nielsen ratings and such.

I think Google is going to be much more likely to provide redundant connections with larger available bandwidth, and to distribute load better, than most (maybe all) other providers out there... simply because their business model is fundamentally different. Therefore some comparisons, while warranted, are not going to work out the same as our previous experience suggests. Ultimately, when they start killing the competition, the competition will inevitably improve. Honestly, I don't think Google really has it in mind to completely overtake all providers in the US, just to put everyone on edge to improve bandwidth so they (Google) can make more money on content, IMO.