justin
..needs sleep
Mod
join:1999-05-28

Data center speeds

I'm getting curious about the typical speeds of cross-country internet traffic when the last mile and all that jazz is taken out of the loop. In the good old days, when 3mbit was about the fastest anyone at home got, pretty much any data center could deliver that to an end-user. Now with FiOS, Optimum Online, SpeedBoost and so on, I think I'm seeing that the only reliable way to max out the connection is to test against a data center in the same city, or very close by. Cross-country speeds are often not good enough.

When I test download speeds data center to data center, some are pretty slow. Less than 10 mbit. So I'm wondering whether this is a common pattern and how it relates to distance.

the request:

If anyone has a host in a data center that can run a pretty short perl script on a cron job, then let me know by posting or IM. If there is some interest I will write one that will report results in a grid where every data center on the list tests to every other data center twice per day (peak and off peak). I'll also need to know the city and data center your box is in. Such a test won't even have to use much bandwidth. 10 data centers testing each other twice a day every day is less than 6GB. We could even do a weekly cycle and cut that down to 1GB or less.
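A minimal sketch of the kind of cron job being described is below. Everything in it is a placeholder (the peer URLs, the roughly 10 MB test file and the report endpoint are assumptions, not the real setup). With a 10 MB transfer per test, 9 peers twice a day works out to about 5.4 GB of downloads per box per month, which lines up with the figure above if that was meant per box per month.

#!/usr/bin/perl
# Sketch only: peer URLs, test file and report endpoint are made up.
use strict;
use warnings;
use LWP::UserAgent;
use Time::HiRes qw(gettimeofday tv_interval);

my $me    = 'dallas-layeredtech';   # label for this box (placeholder)
my @peers = ('http://peer-one.example/test10mb.bin',
             'http://peer-two.example/test10mb.bin');

my $ua = LWP::UserAgent->new(timeout => 120);
for my $url (@peers) {
    my $t0  = [gettimeofday];
    my $res = $ua->get($url);            # timed download of the peer's test file
    my $sec = tv_interval($t0);
    next unless $res->is_success && $sec > 0;
    my $mbit = length($res->content) * 8 / ($sec * 1_000_000);
    # phone the result home to the grid (placeholder URL)
    $ua->post('http://grid.example/report',
              { from => $me, to => $url, mbit => sprintf('%.2f', $mbit) });
}

Run from cron twice a day, once in a peak slot and once off-peak, it stays comfortably inside that kind of budget.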
P1398750
join:2006-10-01
NA

Member

Never mind, I don't want to do anything I have no clue about.

twizlar
I dont think so.
Premium Member
join:2003-12-24
Brantford, ON

to justin
I've got one in Dallas, Miami, and Orlando. Let me know if you need them.
EDIT: forgot to add the datacenters.

Dallas is LayeredTech.
Miami is SAGO Networks in the Digiport Datacenter.
Orlando is DimeNoc.

Djdeadly
join:2000-11-03
San Jose, CA

Member

to justin
I can do it when it drops down to 1gb a week. I'm in Colo4Dallas.
Turbocpe
Premium Member
join:2001-12-22
IA

to justin
I've got one in the 1&1 data center in NY. I think it's the same one where you have some of your BBR stuff, so it may not be of any use.

nil

join:2000-11-27

to justin
Btw, with the larger data centers, you'll have to take into account who the particular host is buying bandwidth from as well.

drmorley
MVM
join:2000-12-20
Three Lakes, WI

to justin
I own a server in an XO datacenter here in Chicago. Let me know if you need any help--I'm happy to do so.

justin
..needs sleep
Mod
join:1999-05-28

OK, the first pass at this project is done. The results from running the perl fragment are here:

»/colocspeeds

So now I know what has to be installed. If anyone else can participate on an ongoing basis and wants to see their data center added to the grid, drop me a line.

snappakazoo
Inconceivable
join:2000-09-10
Montclair, NJ

to justin
The differences in speed are impressive. Sorry, I can't help you beyond that!

justin
..needs sleep
Mod
join:1999-05-28

The difference is worrying. It means that for many or most data centers, cross-country traffic is much slower than what many home broadband users get.
JoelC707
Premium Member
join:2002-07-09
Lanett, AL

Yeah, I was just going to say the same thing. Not only is speed a problem on many of them, but latency is pretty high on a few too. But then again, a couple of them look like they are really in the same DC, with 1 and 2 ms pings.

I wonder, did you find out what kind of connection each user has? I know most have an allotted amount of transfer, but some may be on an unmetered connection, and not everyone can afford 100 megs unmetered. In particular, if it is a colo, those almost always start out as unmetered, so you may be seeing their unmetered port speed.

Joel

justin
..needs sleep
Mod
join:1999-05-28

Well apart from two of them (pihost and dimenoc), they are all my machines, and I know they are all on 100mbit switches. There is also no metering that I know of. None of the ports are showing any collisions or errors, none are busy in a way that would impact maximum speed.

Apart from one or two, most of them only show slow speeds when talking cross-country. It makes me think that that is just the nature of the internet currently. But I'm going to check with Megapath and Speakeasy to see if these rather slow long-distance speeds are what they expect, and are paying for.

The speed between 1and1 and NAC isn't because they're the same DC; the locations are NJ and PA respectively. They are just well connected.
JoelC707
Premium Member
join:2002-07-09
Lanett, AL

Ahh, so they are all your servers. That's a good way to get a baseline, because you know how your servers are connected and utilized.

No collisions or errors... so it's not your equipment or the connecting switch. I wonder if a traceroute between the DCs would be useful? I'm sure that unless the DCs are owned by the same company, just in different cities, they do not have direct interconnects. Maybe you could identify an overloaded router/circuit or simply a bad route.

Yeah, I would definitely find out if they know about the slow speeds. I doubt it is the nature of the internet. As far as I know, backbone providers use at minimum OC48s between their routers, with many going to GigE or 10 GigE. I know there's an insane amount of traffic flowing over the internet now, but there's more bandwidth left than that if residential users are able to max out 8 meg plus connections.

1and1 and NAC aren't the only two that have low pings. Speakeasy and Megapath have 1ms pings too. I know the speed of light is fast and could traverse the US in an instant, but the lowest ping I have seen on a PtP OC48 was about 4ms.
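As a rough sanity check on those 1ms pings (the distance and fiber-speed numbers below are ballpark assumptions, not measurements):

#!/usr/bin/perl
# Back-of-the-envelope minimum cross-country RTT.
use strict;
use warnings;

my $km      = 4000;       # very rough east-coast-to-west-coast fiber path
my $c_fiber = 200_000;    # km/s; light in fiber travels at roughly 2/3 of c
my $one_way = $km / $c_fiber * 1000;    # milliseconds
printf "one-way: %.0f ms, minimum RTT: %.0f ms\n", $one_way, 2 * $one_way;
# prints roughly "one-way: 20 ms, minimum RTT: 40 ms", so a 1ms ping can
# only mean both boxes are in the same metro area, not cross-country.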

justin
..needs sleep
Mod
join:1999-05-28

Well, for example, the traceroute from 1and1 to NAC:

 1  10.255.255.253 (10.255.255.253)  2.186 ms  1.905 ms  1.918 ms
 2  v998.gw-core-a.nyc.schlund.net (217.160.229.125)  0.687 ms  0.575 ms  0.357 ms
 3  ge-511.gw-backbone-b.nyc.schlund.net (217.160.229.66)  0.339 ms  0.384 ms  0.301 ms
 4  4.78.164.1 (4.78.164.1)  0.553 ms  0.516 ms  0.555 ms
 5  att-level3-oc192.NewYork1.Level3.net (4.68.127.150)  0.895 ms  ggr2-p360.n54ny.ip.att.net (192.205.33.93)  1.664 ms  att-level3-oc192.NewYork1.Level3.net (4.68.127.150)  0.778 ms
 6  tbr2-p033901.n54ny.ip.att.net (12.123.0.94)  1.758 ms  3.022 ms  2.011 ms
 7  gar1-p370.nwrnj.ip.att.net (12.123.0.157)  1.306 ms  1.274 ms  1.303 ms
 8  att-gige.esd1.nwr.nac.net (12.119.140.26)  1.555 ms  13.670 ms  1.538 ms
 9  3.ge-3-0-0.gbr2.nwr.nac.net (209.123.11.189)  1.360 ms  1.225 ms  1.160 ms
10  0.so-0-3-0.gbr1.oct.nac.net (209.123.11.233)  2.768 ms  2.811 ms  2.921 ms

Looking at that, it occurs to me that the 1and1 DC might be in New York, not PA. I looked at their website and PA was their home base, but perhaps I'm in New York with that box. And I know where NAC is, about 20 miles to the west. But there you go: 10 hops and yet a ping time of 1-2ms.

edit: I corrected the locations shown by »/colocspeeds. I think this makes a bit more sense ping-time wise, if not max-speed wise.
Turbocpe
Premium Member
join:2001-12-22
IA

said by justin:

Looking at that, it occurs to me that the 1and1 DC might be in New York, not PA. I looked at their website and PA was their home base, but perhaps I'm in New York with that box.
I use 1and1 (shared) and I'm pretty sure that the box is in NY.

Since you have a server there, I guess you probably have no use for another 1and1 customer in the same DC. As long as the bandwidth wasn't too much, I would have been happy to help contribute to your project.
JoelC707
Premium Member
join:2002-07-09
Lanett, AL

to justin
Wow, even some hops with less than 1ms pings, pretty impressive.

I'd be curious to see some traceroutes for the ones with slow data rates and/or high ping times.
JoelC707 to Turbocpe

Yeah, I've got two shared hosting accounts with 1and1, both probably in NY. One MS and one Linux, both developer packages, and they use very little of the bandwidth. It might help to have other test points in the same DC, but maybe not. The only thing it might help eliminate is a possible overloaded server or switch, but that's already covered with your server at 1and1.
togtogtog9
join:2003-12-16
Key Largo, FL

Member

to justin

Fat pipe to nowhere

One thing you'll find far too often, especially nowadays, is a lot of colo providers using the cheapest bulk bandwidth available, such as Global Crossing and Level3. (Though Level3 has recently become extremely expensive even though their actual backbone hasn't gotten any better.)

These are basically what I call a "fat pipe to nowhere".

Though you can certainly find a few good routes, far too often backbones like L3 and GBLX route your traffic to some peering point that neither backbone on either side of the peer cares enough about, and that is thousands of miles in the wrong direction from your destination.

For a typical example, an associate of ours in Miami just turned up a Global Crossing gigabit port. We're connected to Savvis Miami. I am 60ms from him and he's only a few blocks away. Our traffic travels up to DC and back. It's ridiculous.

Savvis has limited connectivity to L3 and GBLX, but from what I've seen GBLX and L3 have limited connectivity to the majority of the rest of The Internet in general. If you want to communicate with someone on L3 or GBLX, you're often going to take a little trip thousands of miles in the wrong direction.

I single those two out because providers like InterNAP and Savvis generally have decent connectivity in more than just a few places to a good majority of the rest of The Internet, and GBLX and L3 are a stark contrast to that.

justin
..needs sleep
Mod
join:1999-05-28

That's interesting, and it makes sense. But it's also probably news to most users on fast connections that unless the server they are talking to is close by or part of the club, they are not going to max out their speed.

I don't think a higher ping time necessarily means a limited download speed, so are these "fat pipes to nowhere" artificially throttling the conversations they carry? Or do you think that a 100ms conversation automatically means a 3mbit transfer rate?

I'm going to add a few bells and whistles to the grid; in particular, sort the data centers by median speed so we can see which ones are doing best both for getting and for putting data. I'll also probably add traceroutes (chopping the beginning and end off, for obvious reasons).
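The median sort itself would be simple enough; a sketch along these lines, where the data-center labels and numbers are invented examples rather than real grid results:

#!/usr/bin/perl
# Sketch of sorting data centers by their median observed speed.
use strict;
use warnings;

my %speeds = (                         # made-up example data
    'nac-nj'    => [ 45.2, 38.0, 51.3, 12.1 ],
    'speakeasy' => [ 3.1, 2.8, 4.0, 3.5 ],
);

sub median {
    my @s = sort { $a <=> $b } @_;
    my $mid = int(@s / 2);
    return @s % 2 ? $s[$mid] : ($s[$mid - 1] + $s[$mid]) / 2;
}

# highest median first
for my $dc (sort { median(@{ $speeds{$b} }) <=> median(@{ $speeds{$a} }) } keys %speeds) {
    printf "%-12s median %.1f mbit\n", $dc, median(@{ $speeds{$dc} });
}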
togtogtog9
join:2003-12-16
Key Largo, FL

Member

to justin

Re: Data center speeds

They do not artificially throttle anything; these are busy backbone routers, and they usually don't do complex CPU-intensive operations like that. All they do is copy packets from one interface to another and update routing tables.

Actually, instead of being "part of the club", it's more that particular backbone providers don't bother doing enough peering, or they set idiotic policies that make it difficult for others to peer with them, and, last but not least, they're too cheap to buy transit when necessary to improve their connectivity.

TCP without tuning will use FAR less than the actual available bandwidth across a higher-latency cross-country or international link.

Traceroutes both ways would definitely provide a more complete picture. Internet performance is about more than just physical proximity, as I demonstrated earlier.

justin
..needs sleep
Mod
join:1999-05-28

said by togtogtog9:

TCP without tuning will use FAR less than the actual available bandwidth across a higher-latency cross-country or international link.
I know there are issues with tuning TCP for LFPs (long fat pipes?), but I wouldn't call a ping time of 90-150ms so problematic that it would require stack tuning to get more than 4mbit out of it?

e.g. pihost can pull 12mbit from NAC at 70ms,
yet the Speakeasy coloc struggles to pull 3mbit at the same ping time?
So clearly something else is going on other than just TCP stack tuning. It sounds like you are saying that for those kinds of super slow results cross-country, somewhere along the route something is nearly full?

gtdawg
Premium Member
join:2002-03-17
united state

said by justin:

said by togtogtog9:

TCP without tuning will use FAR less than the actual available bandwidth across a higher-latency cross-country or international link.
I know there are issues with tuning TCP for LFPs (long fat pipes?), but I wouldn't call a ping time of 90-150ms so problematic that it would require stack tuning to get more than 4mbit out of it?

e.g. pihost can pull 12mbit from NAC at 70ms,
yet the Speakeasy coloc struggles to pull 3mbit at the same ping time?
So clearly something else is going on other than just TCP stack tuning. It sounds like you are saying that for those kinds of super slow results cross-country, somewhere along the route something is nearly full?
It could be something on that route that is near capacity and doesn't have much room for bursting.

One thing that sticks out is that the data being _pulled_ from SW&D San Jose is extremely slow, but the ping times are in line with the other 2 Bay Area centers. There is probably rate limiting going on there.

justin
..needs sleep
Mod
join:1999-05-28

The S&D speeds turned out to be a misconfigured NIC/switch combo. It has been fixed and speeds are good, but until the other hosts get around to testing it again, it still shows badly for data getting pulled.

gtdawg
Premium Member
join:2002-03-17
united state

That would make sense, I'll re-run my side!
JTY
join:2004-05-29
Ellensburg, WA

Member

to justin
said by justin:

the request:

If anyone has a host in a data center that can run a pretty short perl script on a cron job, then let me know by posting or IM. If there is some interest I will write one that will report results in a grid where every data center on the list tests to every other data center twice per day (peak and off peak). I'll also need to know the city and data center your box is in. Such a test won't even have to use much bandwidth. 10 data centers testing each other twice a day every day is less than 6GB. We could even do a weekly cycle and cut that down to 1GB or less.
Is that 6GB of traffic per month? Or?
JoelC707
Premium Member
join:2002-07-09
Lanett, AL

Most likely per month.

So Justin, just how is this accomplished? Is there a test file on each server that the others download? The test runs in both directions, I assume? It seems like it does. Also, what is the "loss 20%" seen on e2000Paul from Megapath and NAC? I assume overall packet loss, or perhaps it is a server that didn't "check in"? I think it is packet loss, because there are several that have no reading at all.

What about still showing the data rate there and putting the loss down next to the ping? You wouldn't have to put the word "loss" either; you could simply have "10ms 20%" or something like that. Most people would probably assume you are talking about packet loss (assuming that's what it is) with it being next to the ping, so the word "loss" is unnecessary.

state
stress magnet
Mod
join:2002-02-08
Purgatory

said by JoelC707:

Is there a test file on each server that the others download? The test runs in both directions, I assume?
Correct.
said by JoelC707:

Also, what is the "loss 20%" seen on e2000Paul from Megapath and NAC? I assume overall packet loss, or perhaps it is a server that didn't "check in"?
A blank entry means that the host listed on the left did not test against the site listed above. Loss is calculated at test time.

justin
..needs sleep
Mod
join:1999-05-28

to JoelC707
There are a number of things I want to do with this; I've just been distracted, so it is on hold for a few days.
One obvious thing: the data center speeds listed are largely a function of ping time. The computers involved are mostly not tuned for large bandwidth-delay products. To transfer at 40mbit over a 100ms connection requires a huge TCP receive window.

So instead of worrying too much about speeds, I'm going to average out the latencies, and color code the grid for speeds that are significantly less than the optimal assuming a 64k tcp window size. This normalizes the speeds shown but will highlight any that are below par.
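For reference, the ceiling being assumed there is simply window size divided by round-trip time. A quick sketch of the math and of the kind of below-par check the color coding might use (the half-the-cap threshold is an arbitrary choice here, not something stated above):

#!/usr/bin/perl
# Untuned TCP throughput is capped at roughly window / RTT.
use strict;
use warnings;

my $window_bytes = 64 * 1024;   # the 64k receive window assumed for the grid
my $rtt_ms       = 100;

my $cap_mbit = ($window_bytes * 8) / ($rtt_ms / 1000) / 1_000_000;
printf "64k window at %dms caps out near %.1f mbit/s\n", $rtt_ms, $cap_mbit;
# about 5.2 mbit/s, which is why 40mbit at 100ms needs a window of roughly 500 KB

# Possible color-coding rule: flag a measured speed only if it is well
# under the cap for its ping time (half the cap is a made-up threshold).
sub below_par {
    my ($measured_mbit, $rtt_ms) = @_;
    my $cap = (64 * 1024 * 8) / ($rtt_ms / 1000) / 1_000_000;
    return $measured_mbit < 0.5 * $cap;
}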

I'll also add traceroutes so people can see the transits that the data centers use to talk to one another. Perhaps some transits will consistently show up as having poor latencies for the distances involved.

Steve
I know your IP address

join:2001-03-10
Tustin, CA

said by justin:

Perhaps some transits will consistently show up as having poor latencies for the distances involved.
*Absolutely*, though this may produce some slightly misleading results during outages.

Let's say that a data center has two transit providers and takes full BGP routes. It figures out that the best path to some other data center is via provider A; that's what gets shown in the graphs, with whatever performance that happens to deliver.

But if that transit provider is flakey, then the routers will fall back to the next-best route via provider B. The graphs will then show sometimes provider A and sometimes provider B, but they won't properly reflect that provider A is flakey.

This is happening now at our data center, and though I won't name the lousy provider, I can say that Level3 is *fantastic*.

I'm not sure how one would fix this without getting into a big hairy deal on modeling of traffic.

Steve

justin
..needs sleep
Mod
join:1999-05-28

Well, if it samples often enough, then we can just display the longer-term averages or medians. If one data center suffers from route flaps more than another, well, you'd rather be in the one that doesn't, no?

But yes, this is really just a tiny keyhole view into a very complex picture. I'm not sure expanding it is very practical given the limits on traffic per month that participants would have. (Although traceroutes and packet loss estimators are cheap to run.)
But yes this is really just a tiny keyhole view into a very complex picture. I'm not sure expanding it is very practical given the limitations on traffic per month participants would have. (Although traceroutes and packet loss estimators are cheap to run).