FFH
Premium
join:2002-03-03
Tavistock NJ
kudos:5

1 recommendation

reply to DarkLogix

Re: Completely unrelated to anything end-users will see

said by DarkLogix:

Hey, I want UPS to be able to get my packages to the local UPS location faster so that I might have stuff I order sooner.

Funny you should say it that way, because a faster, less congested, more efficient backbone allows Verizon to reduce latency for users of its network.
--
Record your speedtest.net results in DSLReports SpeedWave
»www.speedtest.net/wave/afe201cb84d45c88


espaeth
Digital Plumber
Premium,MVM
join:2001-04-21
Minneapolis, MN
kudos:2

said by FFH:

Funny you should say it that way, because a faster, less congested, more efficient backbone allows Verizon to reduce latency for users of its network.

Not necessarily -- the difference in serialization delay for a 1500-byte packet is negligible (at least in terms of milliseconds) between a 10Gbps and a 100Gbps interface.

The big gain here is being able to drastically reduce interface counts. Today we're pushing 15-16Gbps of constant storage replication traffic between our main data centers, but to transit that traffic we have to spread it across a minimum of 3x10GigE links (not accounting for redundancy). To guarantee in-order packet delivery, the traffic is hashed across the multiple links using layer 2/layer 3/layer 4 headers. The downside to that approach is that you don't get an even distribution of traffic, so you end up needing more interfaces than you'd think to handle the traffic volume; a rough sketch of that hashing is below. In our case, if we drop from 3 down to 2 links, the traffic tends to take a 70/30 split and the wheels fall off the wagon.
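Here's that hashing in a minimal sketch (not any particular vendor's implementation; the header fields, hash function, port number, and flow counts are illustrative assumptions):

# Minimal sketch of hash-based link selection (LAG/ECMP style).
# Header fields, hash function, and flow parameters are illustrative
# assumptions, not a specific switch vendor's implementation.
import hashlib
import random
from collections import Counter

def pick_link(src_ip, dst_ip, src_port, dst_port, proto, num_links):
    # Every packet of a flow hashes to the same link, which preserves
    # in-order delivery but can load the links unevenly.
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_links

# Simulate a handful of fat replication flows spread over 2 links.
random.seed(1)
flows = [("10.0.0.1", "10.0.1.1", random.randint(1024, 65535), 3260, "tcp")
         for _ in range(8)]
print(Counter(pick_link(*f, num_links=2) for f in flows))

With only a few large flows there's nothing to average out, which is how you end up with splits like 70/30 instead of 50/50.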


rawgerz
The hell was that?
Premium
join:2004-10-03
Grove City, PA
reply to FFH

In reality they could dramatically reduce latency if they didn't route all traffic through Virginia.



pende_tim
Premium
join:2004-01-04
Andover, NJ
kudos:1

I thought all internet traffic was routed through a secret room in SF?



heat84
Bit Torrent Apologist

join:2004-03-11
Fort Lauderdale, FL

said by pende_tim:

I thought all internet traffic was routed through a secret room in SF?

No. It's routed through Al Gore's house.
--
Bit Torrent is my DVR.


DarkLogix
Texan and Proud
Premium
join:2008-10-23
Baytown, TX
kudos:3
reply to espaeth

Well, in the full article they mention the comparison of 10x 10GE vs 1x 100GE as having latency benefits due to (in simple terms) the data actually moving faster, like how gig copper uses a higher frequency than 100-meg copper.



espaeth
Digital Plumber
Premium,MVM
join:2001-04-21
Minneapolis, MN
kudos:2

said by DarkLogix:

Well, in the full article they mention the comparison of 10x 10GE vs 1x 100GE as having latency benefits due to (in simple terms) the data actually moving faster, like how gig copper uses a higher frequency than 100-meg copper.

That affects the least important component of latency, namely serialization delay, which is the amount of time it takes to clock all of the bits of a complete packet out of an interface.

10GigE = 10,000,000,000 bits/sec = 1,250,000,000 bytes/sec. So clocking out a single 1500-byte packet takes 1500 / 1,250,000,000 of a second, or 1.2 microseconds (0.0012 milliseconds).

With 100GigE being 10x faster, serialization delay drops to 0.12 microseconds, or 0.00012 milliseconds.
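For anyone who wants to check the arithmetic, a quick sketch using the packet size and line rates from this post (in Python):

# Serialization delay: time to clock one packet's bits onto the wire.
PACKET_BYTES = 1500

def serialization_delay_us(bits_per_sec, packet_bytes=PACKET_BYTES):
    bytes_per_sec = bits_per_sec / 8
    return packet_bytes / bytes_per_sec * 1e6  # microseconds

print(serialization_delay_us(10e9))   # 10GigE  -> 1.2 us  (0.0012 ms)
print(serialization_delay_us(100e9))  # 100GigE -> 0.12 us (0.00012 ms)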

The big component of latency is propagation delay, which is set by the signal traveling at some fraction of the speed of light depending on the medium in use -- be it photons traveling through glass or electrons traveling through copper. That value is limited by physics, so it's never going to get better whether you have 100Gigabit interfaces, 100Terabit interfaces, or 100Exabit interfaces.

Propagation delay in the US for traffic between the coasts is about 80 milliseconds. I think it's pretty safe to say that nothing on the Internet is sensitive enough to network latency that the difference between 80.0012ms and 80.00012ms is going to have any kind of impact on how an application functions.
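To put the two components side by side, here's the same comparison in a couple of lines (the ~80 ms figure is the coast-to-coast propagation delay quoted above; the serialization numbers are the ones just computed):

# Total one-way latency for a 1500-byte packet on a coast-to-coast path.
PROPAGATION_MS = 80.0            # US coast-to-coast figure from this post
SERIALIZATION_10G_MS = 0.0012    # 1500-byte packet at 10 Gbps
SERIALIZATION_100G_MS = 0.00012  # 1500-byte packet at 100 Gbps

print(f"10GigE : {PROPAGATION_MS + SERIALIZATION_10G_MS:.5f} ms")   # 80.00120 ms
print(f"100GigE: {PROPAGATION_MS + SERIALIZATION_100G_MS:.5f} ms")  # 80.00012 ms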