FFH5
Premium
join:2002-03-03
Tavistock NJ
kudos:5

Servers on ISPs' backbones would give best results

The ISPs' real desire is to make sure the speed tests run against servers on their own backbones to maximize performance results. I really don't think the goal is to alter the actual results; only to make the path from customers to the speed test servers as short as possible.

A good compromise would be for the ISPs to do a deal with M-Lab to place the M-Lab servers on ISP backbones - just like the ISPs do with Ookla for the speedtest.net servers.
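
Just to make the on-net vs. off-net difference concrete, here's a rough Python sketch (the hostnames are made up, not real M-Lab or ISP servers) that times a few TCP handshakes to each kind of server. A shorter path to an on-net server should show up as a lower round-trip time, which is exactly the effect being described.

import socket
import time

def tcp_rtt_ms(host, port=80, samples=5):
    # Time several TCP handshakes and return the median, in milliseconds.
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        times.append((time.perf_counter() - start) * 1000)
    times.sort()
    return times[len(times) // 2]

targets = [
    ("on-net (hypothetical)",  "speedtest.myisp.example.net"),
    ("off-net (hypothetical)", "mlab.far-away.example.org"),
]
for label, host in targets:
    try:
        print(f"{label:24s} {tcp_rtt_ms(host):6.1f} ms")
    except OSError as err:
        print(f"{label:24s} unreachable: {err}")
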
--
»www.mittromney.com/s/repeal-and-···bamacare
»www.mittromney.com/issues/health-care

ISurfTooMuch

join:2007-04-23
Tuscaloosa, AL
I don't have a problem with placing servers that way as long as it's done in conjunction with testing at the end-user level. After all, it's the end users who have to use this service, and you could have great connectivity from the Internet backbones to the ISPs but lousy connections to the users, especially if you're dealing with old copper and/or oversold networks. Actually, testing in both places is a good idea, since it'd show you where the bottlenecks are.
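
Something like the sketch below (placeholder URLs, not real test servers) is what "testing in both places" would look like: pull the same file from an on-net and an off-net server and compare throughput. If the on-net number is fine but the off-net one isn't, the bottleneck is probably beyond the ISP; if both are slow, the last mile is the likely suspect.

import time
import urllib.request

def throughput_mbps(url, max_bytes=5_000_000):
    # Download up to max_bytes from url and return observed throughput in Mbps.
    start = time.perf_counter()
    got = 0
    with urllib.request.urlopen(url, timeout=30) as resp:
        while got < max_bytes:
            chunk = resp.read(65536)
            if not chunk:
                break
            got += len(chunk)
    return (got * 8 / 1_000_000) / (time.perf_counter() - start)

on_net  = throughput_mbps("http://speedtest.myisp.example.net/5MB.bin")
off_net = throughput_mbps("http://files.offnet-test.example.org/5MB.bin")
print(f"on-net: {on_net:.1f} Mbps   off-net: {off_net:.1f} Mbps")
if on_net > 1.5 * off_net:
    print("Last mile looks OK; the slowdown is probably beyond the ISP's network.")
else:
    print("Both paths look similar; any bottleneck is likely the last mile or the home.")
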


mmay149q
Premium
join:2009-03-05
Dallas, TX
kudos:48
reply to FFH5
said by FFH5:

A good compromise would be for the ISPs to do a deal with M-Lab to place the M-Lab servers on ISP backbones - just like the ISPs do with Ookla for the speedtest.net servers.

See, I don't see how this would be a good compromise. Obviously, if the servers are located across the huge pond you're going to consistently get flawed results; however, placing the servers in the ISP's network would, in my opinion, be a BAD idea. Why, you ask? For one, it doesn't show REAL WORLD routing outside of the ISP's network (which I know has NOTHING to do with the ISP itself), so it doesn't accurately show the end user's experience on the network.

With that said, and yes it would be costly, I think the best compromise would be to place the servers on the ISP's network, but also have those in-network servers that record data from the user's home run tests against servers outside the ISP's network. That way we could not only monitor the data INSIDE the ISP's network, but also keep the ISP honest when they "blame it on someone else," help them forge closer relations with CDNs, 3rd-party backbones, etc., and improve the internet overall by holding the 3rd parties in the equation accountable for their networks.

I totally understand that most of the time ISPs are to blame for issues with advertised speeds, but it does no good for the end user if we only fix half of the problem. This kind of testing could even lead to new routing protocols, better DNS servers, or changes to the way DNS works. At the very least we would have a good idea of the different things that can make that YouTube video load slowly, or why you're only getting 5 Mbps on a 1 GB file when you pay for 30 Mbps service.
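
For what it's worth, the math on that last example works out like this (treating 1 GB as roughly 8000 megabits):

file_megabits = 1 * 8000          # 1 GB is roughly 8000 megabits
for plan_mbps in (5, 30):
    minutes = file_megabits / plan_mbps / 60
    print(f"{plan_mbps:>2} Mbps -> about {minutes:.1f} minutes for the download")
# 5 Mbps  -> about 26.7 minutes
# 30 Mbps -> about 4.4 minutes
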

Matt
--
I am no longer an AT&T Employee. Check out my kudos! »/profile/1626573
Have U-verse questions? Please email uversecare@att.com and they will assist you!!


FFH5
Premium
join:2002-03-03
Tavistock NJ
kudos:5
said by mmay149q:

With that said, and yes it would be costly, I think the best compromise would be to place the servers on the ISP's network, but also have those in-network servers that record data from the user's home run tests against servers outside the ISP's network. That way we could not only monitor the data INSIDE the ISP's network, but also keep the ISP honest when they "blame it on someone else," help them forge closer relations with CDNs, 3rd-party backbones, etc., and improve the internet overall by holding the 3rd parties in the equation accountable for their networks.

Sounds like a good idea.
--
»www.mittromney.com/s/repeal-and-···bamacare
»www.mittromney.com/issues/health-care


whiteshp

join:2002-03-05
Xenia, OH
The problem is that under the ISP's plan they own and maintain the monitoring equipment, so they can cherry-pick how and what it monitors. They can't do that with an FCC contractor's box in a customer's house. Plus, just because speed looks great at the CO does not mean the same effort went into the lines going to the customer's house.


jlivingood
Premium,VIP
join:2007-10-28
Philadelphia, PA
kudos:2
reply to mmay149q
said by mmay149q:

See, I don't see how this would be a good compromise. Obviously, if the servers are located across the huge pond you're going to consistently get flawed results; however, placing the servers in the ISP's network would, in my opinion, be a BAD idea. Why, you ask? For one, it doesn't show REAL WORLD routing outside of the ISP's network (which I know has NOTHING to do with the ISP itself), so it doesn't accurately show the end user's experience on the network.

This is one of the reasons for the HTTP tests to the top 10 websites.
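
Roughly what I imagine an HTTP test against popular sites looks like; this is not the actual SamKnows methodology, just a sketch that times a simple GET to a few stand-in sites:

import time
import urllib.request

# Stand-ins for the "top 10" list; the real target set isn't specified here.
sites = ["https://www.google.com/", "https://www.youtube.com/",
         "https://www.wikipedia.org/"]

for url in sites:
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=15) as resp:
            body = resp.read()
        print(f"{url:32s} {len(body):>8d} bytes in {time.perf_counter() - start:5.2f} s")
    except OSError as err:
        print(f"{url:32s} failed: {err}")
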

said by mmay149q:

With that said, and yes it would be costly, I think the best compromise would be to place the servers on the ISP's network, but also have those in-network servers that record data from the user's home run tests against servers outside the ISP's network. That way we could not only monitor the data INSIDE the ISP's network, but also keep the ISP honest when they "blame it on someone else," help them forge closer relations with CDNs, 3rd-party backbones, etc., and improve the internet overall by holding the 3rd parties in the equation accountable for their networks.

This is more or less what happens now. There are on-net ISP servers (130 across all ISPs, 35 in Comcast's network) and off-net servers (M-Lab's). Thank goodness we have both, since having on-net servers helped identify the March and April issues with tests run against the M-Lab servers.

So I agree that having two as a check and balance is a good thing. And the data is not stored on these servers; the test units in customer homes record test results and send them to the SamKnows servers.
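
So the flow is roughly the sketch below: the home unit runs the test, keeps the result, and uploads it straight to the measurement collector, so whoever hosts the test server never touches the data. The endpoint URL and field names here are invented for illustration, not the real SamKnows API.

import json
import urllib.request

def report_result(result, collector="https://collector.example.net/api/results"):
    # Upload one finished measurement from the home unit to the collector.
    req = urllib.request.Request(
        collector,
        data=json.dumps(result).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.status

measurement = {"unit_id": "unit-1234", "target": "on-net-server-07",
               "download_mbps": 28.4, "upload_mbps": 5.9, "latency_ms": 21}
# report_result(measurement)  # would POST the result to the collector
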
--
JL
Comcast


jlivingood
Premium,VIP
join:2007-10-28
Philadelphia, PA
kudos:2
reply to whiteshp
said by whiteshp:

The problem is that under the ISP's plan they own and maintain the monitoring equipment, so they can cherry-pick how and what it monitors. They can't do that with an FCC contractor's box in a customer's house. Plus, just because speed looks great at the CO does not mean the same effort went into the lines going to the customer's house.

The way the SamKnows system works is actually good in this regard. The test units in homes record the results and send them to SamKnows, so the administrator or host of the servers does not have that capability.
--
JL
Comcast


mmay149q
Premium
join:2009-03-05
Dallas, TX
kudos:48
reply to jlivingood
said by jlivingood:

said by mmay149q:

See, I don't see how this would be a good compromise. Obviously, if the servers are located across the huge pond you're going to consistently get flawed results; however, placing the servers in the ISP's network would, in my opinion, be a BAD idea. Why, you ask? For one, it doesn't show REAL WORLD routing outside of the ISP's network (which I know has NOTHING to do with the ISP itself), so it doesn't accurately show the end user's experience on the network.

This is one of the reasons for the HTTP tests to the top 10 websites.

said by mmay149q:

With that said, and yes it would be costly, I think the best compromise would be to place the servers on the ISP's network, but also have those in-network servers that record data from the user's home run tests against servers outside the ISP's network. That way we could not only monitor the data INSIDE the ISP's network, but also keep the ISP honest when they "blame it on someone else," help them forge closer relations with CDNs, 3rd-party backbones, etc., and improve the internet overall by holding the 3rd parties in the equation accountable for their networks.

This is more or less what happens now. There are on-net ISP servers (130 across all ISPs, 35 in Comcast's network) and off-net servers (M-Lab's). Thank goodness we have both, since having on-net servers helped identify the March and April issues with tests run against the M-Lab servers.

So I agree that having two as a check and balance is a good thing. And the data is not stored on these servers; the test units in customer homes record test results and send them to the SamKnows servers.

Thanks, I actually didn't know that's how it worked with the Comcast infrastructure. I just assumed (you know it's a fail when you start a sentence that way) that the servers were scattered randomly throughout the USA and held the info to send off to the SamKnows people on the other side of the pond. I'm actually glad Comcast is doing that, because for as many things as I think they do wrong (caps with overages, caps without overages, or just caps in general, plus the ridiculous hoops to jump through to get the cheapest broadband plan offered), this is one thing I'm glad to say has been done right.

I'm really hoping other ISPs will adopt this kind of model, since it would help them identify a lot of the odd issues that come up. (We used to get some strange ones in U-verse, like being unable to route to certain websites because the IP was blocked at one of the hops along the way.) If a lot more companies would host these servers internally, it would be a nice check-and-balance system for who is really at fault, and it would help resolve those odd issues quickly, even if they are rare.

Oh well, a pipe dream is a pipe dream. Hopefully one day we'll have a self-detecting system for the internet that can break down problem areas better than the current tools do today.

Matt
--
I am no longer an AT&T Employee. Check out my kudos! »/profile/1626573
Have U-verse questions? Please email uversecare@att.com and they will assist you!!