said by DarkLogix:
Well, in the full article they mention the comparison of 10x 10GE vs 1x 100GE as having latency benefits due to (in simple terms) the data actually moving faster, like how gig copper uses a higher frequency than 100meg copper.
That affects the least important component of latency: serialization delay, the amount of time it takes to clock all of the bits of a complete packet out of an interface.
10GigE = 10,000,000,000 bits/sec = 1,250,000,000 bytes/sec. So clocking out a single 1500-byte packet takes 1500 / 1,250,000,000 of a second, or 1.2 microseconds (0.0012 milliseconds).
With 100GigE being 10x faster, serialization delay drops to 0.12 microseconds, or 0.00012 milliseconds.
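As a quick sanity check on those numbers, here's a minimal sketch of the serialization-delay arithmetic (assuming a standard 1500-byte packet and the link rates above):

# Serialization delay: time to clock one packet's bits out of an interface.
def serialization_delay_us(packet_bytes, link_bits_per_sec):
    return packet_bytes * 8 / link_bits_per_sec * 1e6  # microseconds

for label, rate in [("10GigE", 10e9), ("100GigE", 100e9)]:
    print(label, round(serialization_delay_us(1500, rate), 2), "microseconds")
# 10GigE 1.2 microseconds
# 100GigE 0.12 microseconds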
The big component of latency is propagation delay, which is determined by the signal velocity in the medium in use -- some fraction of the speed of light, whether that's photons traveling through glass or electrons traveling through copper. This value is limited by physics, so it's never going to get better whether you have 100Gigabit interfaces, 100Terabit interfaces, or 100Exabit interfaces.
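To put the "limited by physics" point in concrete terms, here's a rough sketch. The ~1.47 refractive index is typical for single-mode fiber; the ~8,000 km routed path length is a hypothetical number chosen for illustration, since real coast-to-coast fiber routes are much longer than the straight-line distance:

C_KM_PER_SEC = 299_792          # speed of light in a vacuum, km/s
FIBER_INDEX = 1.47              # typical refractive index of single-mode fiber
ROUTE_KM = 8000                 # hypothetical coast-to-coast routed fiber path (assumption)

signal_speed = C_KM_PER_SEC / FIBER_INDEX        # ~204,000 km/s in glass
one_way_ms = ROUTE_KM / signal_speed * 1000
print(round(one_way_ms, 1), "ms one way")        # ~39 ms one way, ~78 ms round trip
# The same result no matter how fast the interfaces at each end are.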
Propagation delay in the US for traffic between the coasts is about 80 milliseconds. I think it's pretty safe to say that nothing on the Internet is sensitive enough to network latency that the difference between 80.0012ms and 80.00012ms is going to have any kind of impact on how an application functions.
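Putting the two components together with the numbers from this post (a sketch, using the ~80 ms coast-to-coast figure above and the per-packet serialization delays already computed):

PROPAGATION_MS = 80.0                                    # coast-to-coast figure from above
SERIALIZATION_MS = {"10GigE": 0.0012, "100GigE": 0.00012}

for link, ser_ms in SERIALIZATION_MS.items():
    print(link, PROPAGATION_MS + ser_ms, "ms total")
# 10GigE 80.0012 ms total
# 100GigE 80.00012 ms total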