

espaeth
Digital Plumber
Premium,MVM
join:2001-04-21
Minneapolis, MN
kudos:2
reply to FFH5

Re: Completely unrelated to anything end-users will see

said by FFH5:

Funny you should say it that way. Because having a faster, less congested, more efficient backbone allows Verizon to reduce latency for users of its network.

Not necessarily -- the difference in serialization delay for 1500-byte packets is negligible (at least in terms of milliseconds) between a 10Gbps and a 100Gbps interface.

The big gain here is being able to drastically reduce interface counts. Today we're pushing 15-16Gbps of constant storage replication traffic between our main data centers, but to carry that traffic we have to spread it across a minimum of 3x10GigE links (not accounting for redundancy). To guarantee in-order packet delivery, the traffic is hashed across the links using layer 2/3/4 headers. The downside to that approach is that you don't get an even distribution of traffic, so you end up needing more interfaces than you'd think to handle the volume. In our case, if we drop from 3 links down to 2, the traffic tends to take a 70/30 split and the wheels fall off the wagon.
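To make the hashing point concrete, here's a minimal Python sketch of per-flow link selection. The flow tuples, traffic volumes, and MD5-based hash are all made-up illustrations (real routers use vendor-specific hardware hash functions over the L2/L3/L4 fields), but the effect is the same: with a handful of large flows, per-link load comes out lumpy rather than even.

```python
# Hypothetical sketch of why per-flow hashing gives uneven link utilization.
# Flows and volumes below are invented for illustration.
import hashlib

# (src_ip, dst_ip, src_port, dst_port, bytes_per_sec) -- a few large
# storage-replication flows, the worst case for flow-based hashing.
flows = [
    ("10.0.1.10", "10.8.1.10", 49152, 3260, 6_000_000_000),
    ("10.0.1.11", "10.8.1.11", 49153, 3260, 5_000_000_000),
    ("10.0.1.12", "10.8.1.12", 49154, 3260, 4_000_000_000),
]

def pick_link(flow, num_links):
    """Deterministically map a flow's 4-tuple onto one member link, so all
    packets of that flow stay in order on a single physical interface."""
    key = ":".join(str(f) for f in flow[:4]).encode()
    return int(hashlib.md5(key).hexdigest(), 16) % num_links

for num_links in (3, 2):
    load = [0] * num_links
    for flow in flows:
        load[pick_link(flow, num_links)] += flow[4]
    print(num_links, "links ->", [f"{b / 1e9:.0f} Gbps" for b in load])
```

A single 100GigE link sidesteps the problem entirely: no flow ever has to be pinned to a slower member link, so there's nothing to balance.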


DarkLogix
Texan and Proud
Premium
join:2008-10-23
Baytown, TX
kudos:3
Well, in the full article they mention the comparison of 10x 10GigE vs 1x 100GigE as having latency benefits due to (in simple terms) the data actually moving faster, like how gig copper uses a higher frequency than 100Meg copper.


espaeth
Digital Plumber
Premium,MVM
join:2001-04-21
Minneapolis, MN
kudos:2
said by DarkLogix:

Well, in the full article they mention the comparison of 10x 10GigE vs 1x 100GigE as having latency benefits due to (in simple terms) the data actually moving faster, like how gig copper uses a higher frequency than 100Meg copper.

That affects the least important component of latency, namely serialization delay: the amount of time it takes to clock all of the bits of a complete packet out of an interface.

10GigE = 10,000,000,000 bits/sec = 1,250,000,000 bytes/sec. So clocking out a single 1500-byte packet takes 1500 / 1,250,000,000 of a second, or 1.2 microseconds (0.0012 milliseconds).

With 100GigE being 10x faster, serialization delay drops to 0.12 microseconds, or 0.00012 milliseconds.
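For anyone who wants to check the arithmetic, here's a tiny Python helper (the function name and layout are just for illustration):

```python
# Reproducing the serialization-delay arithmetic above.
def serialization_delay_ms(bits_per_sec, packet_bytes=1500):
    """Time to clock one packet's bits out of an interface, in milliseconds."""
    bytes_per_sec = bits_per_sec / 8
    return packet_bytes / bytes_per_sec * 1000

print(f"10GigE:  {serialization_delay_ms(10e9):.5f} ms")    # 0.00120 ms
print(f"100GigE: {serialization_delay_ms(100e9):.6f} ms")   # 0.000120 ms
```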

The big component of latency is propagation delay, which is a value that is some fraction of the speed of light based on the media in use -- be it photons traveling through glass, or electrons traveling through copper. This value is limited by physics, so it's never going to get better whether you have 100Gigabit interfaces, 100Terabit interfaces, or 100Exabit interfaces.

Propagation delay in the US for traffic between the coasts is about 80 milliseconds. I think it's pretty safe to say that nothing on the Internet is sensitive enough to network latency that the difference between 80.0012ms and 80.00012ms is going to have any kind of impact on how an application functions.
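As a rough back-of-the-envelope check on that 80ms figure: assume light in fiber travels at about 2/3 of c (~200,000 km/s) and a coast-to-coast fiber route of roughly 8,000 km (real routes run longer than the straight-line distance, and the 80ms above is presumably round-trip). Both numbers are assumptions for illustration:

```python
# Rough propagation-delay estimate; constants are illustrative assumptions.
SPEED_IN_FIBER_KM_PER_MS = 200   # ~2/3 of c, i.e. ~200,000 km/s
path_km = 8000                   # assumed coast-to-coast fiber route length

one_way_ms = path_km / SPEED_IN_FIBER_KM_PER_MS
print(f"one-way: {one_way_ms:.0f} ms, round-trip: {2 * one_way_ms:.0f} ms")
# -> one-way: 40 ms, round-trip: 80 ms
```

No interface speed changes those numbers; only a shorter path through the glass does.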