
Jamal

@omnicode.com

Internet topology & energy usage

These are a couple of questions I've had for a while. The first is: how much bandwidth is really there in the internet? For instance, Google offers 1 Gbps connections in Kansas City. What does that mean in terms of topology? Does it mean you have 1 Gbps to the street and then share twenty 1 Gbps fibers with your 200 neighbors? And what about the connections in and out of Kansas City? I can't imagine they can support hundreds of thousands of simultaneous 1 Gbps connections.
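One common way to reason about this sharing question is the oversubscription ratio: access networks sell more aggregate subscriber bandwidth than the shared uplink can carry at once, betting that not everyone transmits simultaneously. A minimal sketch; the GPON rate and subscriber count below are illustrative assumptions, not Google Fiber's actual design:

```python
# Rough oversubscription math for a shared access network.
# All figures are illustrative assumptions, not any ISP's real topology.

def oversubscription(subscribers, rate_per_sub_gbps, uplink_gbps):
    """Ratio of total bandwidth sold to what the shared uplink can carry."""
    return subscribers * rate_per_sub_gbps / uplink_gbps

# e.g. 32 homes, each sold 1 Gbps, sharing a ~2.5 Gbps GPON downstream
ratio = oversubscription(32, 1.0, 2.5)
print(f"oversubscription ratio: {ratio:.1f}:1")  # 12.8:1
```

The same arithmetic repeats at every aggregation level (neighborhood, city, backbone), which is why a city's uplinks don't need to carry every subscriber's full rate at once.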

The second question is about energy usage in fiber and routers. How much energy does it take to, say, push a gigabyte of data down a 10-mile stretch of fiber? Does anybody know? The same question applies to routing, but that one I can probably test for myself with a power meter and a router.

Just a note about what a 1000 Gbps cable would entail. I looked at a 12-strand fiber optic cable and it seemed to be about 1 centimeter across. For 1000 strands that would mean about 84 such cables, roughly 84 sq cm, or a 9x9 cm bundle. A big cable to serve 1000 simultaneous 1 Gbps connections.
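The back-of-envelope estimate above can be checked numerically, assuming (as in the post) a 12-strand cable about 1 cm across, packed on a square grid:

```python
import math

# Assumptions from the post: a 12-strand cable is ~1 cm across,
# and cables pack on a square grid (one 1 cm x 1 cm cell each).
strands_needed = 1000
strands_per_cable = 12
cable_width_cm = 1.0

cables = math.ceil(strands_needed / strands_per_cable)
area_cm2 = cables * cable_width_cm ** 2
side_cm = math.sqrt(area_cm2)

print(f"{cables} cables, ~{area_cm2:.0f} sq cm, ~{side_cm:.1f} cm on a side")
```

This confirms the rough 9x9 cm figure, before accounting for multiplexing (see below), which makes the whole bundle unnecessary.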

Thanks.



Anav
Sarcastic Llama? Naw, Just Acerbic
Premium
join:2001-07-16
Dartmouth, NS
kudos:5

A thinly disguised homework question; I suggest you research the net.


HarryH3
Premium
join:2005-02-21
kudos:3
Reviews:
·Suddenlink
reply to Jamal

Also read up on multiplexing. By using lasers of different wavelengths (colors), multiple links can share the same physical fiber. The last article I recall reading said 256 links could run over a single fiber.

It's similar to how the cable company sends 500 channels to your house over a single wire, except that the signals are light instead of electrical.
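The multiplexing arithmetic is simple: aggregate fiber capacity is the channel count times the per-channel rate. A sketch using the 256-wavelength figure above; the 10 Gbps per-channel rate is an assumption for illustration, not from the article:

```python
# Wavelength-division multiplexing: capacity = wavelengths x per-channel rate.
wavelengths = 256        # figure mentioned in the post
gbps_per_channel = 10    # assumed line rate per wavelength, for illustration
total_gbps = wavelengths * gbps_per_channel
print(f"{total_gbps} Gbps ({total_gbps / 1000} Tbps) on one strand")  # 2560 Gbps
```

At those rates a single strand dwarfs the 1000 x 1 Gbps requirement in the original question.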



Jamal

@omnicode.com

So the short answer is that the only people who know the answer consider it proprietary information. There's very little detailed public information about private ISP and IXP topology, and claiming you can get it from a publicly available source is pure ignorance.



Anav
Sarcastic Llama? Naw, Just Acerbic
Premium
join:2001-07-16
Dartmouth, NS
kudos:5
reply to Jamal

Great answer! 0/10



LazMan
Premium
join:2003-03-26
canada
reply to Jamal

Google is your friend...

Things like the power consumption of various pieces of hardware are easily accessible.

Various fibre architectures - GPON, DWDM, etc. - are also easily found...

It's possible to put 1.6 Tbps of data on a single strand of fibre, so the physical size has basically no bearing on the capacity.
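To connect that figure back to the cable-size estimate earlier in the thread: at 1.6 Tbps per strand, a single strand could in principle carry all 1000 of the hypothetical 1 Gbps connections with room to spare. A pure capacity calculation, ignoring access-network architecture:

```python
import math

strand_capacity_gbps = 1600   # ~1.6 Tbps per strand, per the post
demand_gbps = 1000            # the 1000 x 1 Gbps connections from earlier in the thread

strands = math.ceil(demand_gbps / strand_capacity_gbps)
print(f"strands needed for 1000 x 1 Gbps: {strands}")  # 1
```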



cablegeek01

join:2003-05-13
USA
kudos:1
reply to Jamal

No, it's out there; you just need to use the right search terms. Take a look at carrier switching and carrier routing, as well as DWDM fiber optics and long-haul fiber. Try looking up headend power consumption and central office power consumption. Fiber regeneration sites are also worth a look.

As far as power consumption for fiber goes... the lasers themselves range from a few milliwatts up to a few watts, depending on the wavelength and distances involved. However, the electronics that modulate that laser (switch it on and off really, really fast) can draw multiple kilowatts of power, depending on how many transceivers (lasers) the unit is controlling.

I know at the MSO I work for, we have a few Cisco CRS-3 routers with multiple 100 Gbps connections (using 4 wavelengths on a single fiber). Those routers use a few watts for the lasers, but aggregating all the traffic from tens to hundreds of thousands of customers onto the 100 Gbps connections takes about 10 kW (the switching and routing hardware uses a lot of juice).

For power consumption vs. capacity, I'd say pull up a white paper on your average 10 Gbps SFP+ ZR module: 10 Gbps at distances up to 70-100 km, with 1.5 W power consumption. Again, the transmission power is really low; the processing needed to aggregate 10 Gbps of data from several customers is where you burn up a lot of energy.

»www.cisco.com/en/US/prod/collate···693.html
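The point above (transmission is cheap, switching is expensive) can be put in joules per gigabyte moved. A sketch using the figures in this post; treating the 10 kW router as aggregating 4 x 100 Gbps is my assumption for illustration, not a stated CRS-3 configuration:

```python
def joules_per_gigabyte(watts, gbps):
    """Energy to move one gigabyte (8e9 bits) at a sustained line rate."""
    bits = 8e9
    seconds = bits / (gbps * 1e9)
    return watts * seconds

optics = joules_per_gigabyte(1.5, 10)       # SFP+ ZR module: 1.5 W at 10 Gbps
router = joules_per_gigabyte(10_000, 400)   # assumed: 10 kW router, 4 x 100 Gbps total
print(f"optics: {optics:.2f} J/GB, router: {router:.0f} J/GB")
```

Even under these rough assumptions, the routing/aggregation energy is more than a hundred times the optical transmission energy per byte, which matches the poster's observation.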



Jamal

@omnicode.com

Thanks cablegeek01. It's interesting that routing and switching are the most expensive in terms of energy consumption. I wonder if that's a fundamental limit from information theory or something that more advanced technology can solve.

The question in my mind is whether having everybody asynchronously accessing a few distant clouds is a workable solution. I think Google has already weighed in on part of that question: in spite of their claim of 1 Gbps connections for all, they offer TV service which, if I'm not mistaken, is broadcast rather than individually streamed.

I'm trying to predict the future, something that internet searches can't do yet. I'm wondering whether all apps and data will move to a few clouds, whether they will move to regional clouds, or whether there will still be a use for local office and home data storage, aggregation, and application devices.

Internet topology, and ultimately energy usage, should have an important impact on this question.

Thanks.


aryoba
Premium,MVM
join:2002-08-22
kudos:4

said by Jamal :

Thanks cablegeek01. It's interesting that routing and switching are the most expensive in terms of energy consumption. I wonder if that's a fundamental limit from information theory or something that more advanced technology can solve.

One reason why power consumption is so high for those routing and switching processes is that the electronic boards and components are not (yet) light-based; they are still metal-based (to oversimplify), which generates heat and electromagnetic interference.

said by Jamal :

The question in my mind is whether having everybody asynchronously accessing a few distant clouds is a workable solution. I think Google has already weighed in on part of that question: in spite of their claim of 1 Gbps connections for all, they offer TV service which, if I'm not mistaken, is broadcast rather than individually streamed.

I'm trying to predict the future, something that internet searches can't do yet. I'm wondering whether all apps and data will move to a few clouds, whether they will move to regional clouds, or whether there will still be a use for local office and home data storage, aggregation, and application devices.

Google (like many other large hosting companies) is known to run multiple data centers globally that serve as one giant cloud, so you could say the giant cloud can be broken down into smaller clouds. Your equipment (i.e. PCs, smartphones, IP-based TVs) would use the data center (the smaller cloud) at the least distance from an IP-network perspective.


cablegeek01

join:2003-05-13
USA
kudos:1
reply to Jamal

said by Jamal :

Thanks cablegeek01. It's interesting that routing and switching are the most expensive in terms of energy consumption. I wonder if that's a fundamental limit from information theory or something that more advanced technology can solve.

The question in my mind is whether having everybody asynchronously accessing a few distant clouds is a workable solution. I think Google has already weighed in on part of that question: in spite of their claim of 1 Gbps connections for all, they offer TV service which, if I'm not mistaken, is broadcast rather than individually streamed.

I'm trying to predict the future, something that internet searches can't do yet. I'm wondering whether all apps and data will move to a few clouds, whether they will move to regional clouds, or whether there will still be a use for local office and home data storage, aggregation, and application devices.

Internet topology, and ultimately energy usage, should have an important impact on this question.

Thanks.

As far as the cloud discussion goes, this is something that MSOs and content providers are discussing in depth.
Bandwidth consumption is growing exponentially, and the older internet model of keeping your content at the core of the network and sending it to the edge is rapidly shifting.
Many internet service providers are peering with content delivery networks and caching companies that store content at the network edge, so the load on the internet backbone isn't as great. (Google "Akamai" and "Netflix CDN"; Google is doing the same with YouTube.)
Whether this trend will reverse as terabit backbone connections become more common is unknown in most circles.
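The backbone saving from edge caching comes down to the cache hit ratio: only misses cross the backbone. A sketch; the 80% hit ratio and 100 Gbps demand below are assumed illustrative figures, not published CDN numbers:

```python
def backbone_gbps(edge_demand_gbps, cache_hit_ratio):
    """Traffic that still crosses the backbone after edge caching.

    Only cache misses (1 - hit ratio) have to be fetched from the core.
    Figures passed in are illustrative assumptions.
    """
    return edge_demand_gbps * (1 - cache_hit_ratio)

# 100 Gbps of subscriber demand at the edge, 80% served from local caches
print(f"backbone load: {backbone_gbps(100, 0.8):.0f} Gbps")  # 20 Gbps
```

This is why CDN peering scales better than hauling every stream from a central data center: backbone load grows with the miss rate, not with total demand.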