
NOYB
St. John 3.16
Premium
join:2005-12-15
Forest Grove, OR
kudos:1

[FiOS] Very High Latency During File Upload

Latency During File Upload

46 packets transmitted, 46 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 3.139/162.264/251.553/81.924 ms
 

Latency During File Download
46 packets transmitted, 46 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 3.219/8.084/41.763/7.470 ms
 

Latency During Idle Time
46 packets transmitted, 46 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 3.159/4.277/5.327/0.694 ms
 


Latency to the first hop (Frontier FiOS gateway)

--
Be a Good Netizen - Read, Know & Complain About Overly Restrictive Tyrannical ISP ToS & AUP »comcast.net/terms/ »verizon.net/policies/
Say Thanks with a Tool Points Donation


darcilicious
Cyber Librarian
Premium
join:2001-01-02
Forest Grove, OR
kudos:4
Reviews:
·Frontier FiOS

Re: [FiOS] Very High Latency During File Download

Are you saturating your download link?

(Also, either your images are mislabeled or there's actually poor latency during uploading, not downloading per the thread title...)
--
♬ Dragon of good fortune struggles with the trickster Fox ♬



NOYB
St. John 3.16
Premium
join:2005-12-15
Forest Grove, OR
kudos:1

3 edits

Re: [FiOS] Very High Latency During File Upload

Ah, you replied before I corrected the title. Upload is the problem.
And yes, both directions are very near saturation.

But the point is that saturating traffic is allowed to starve out other traffic. This has been discussed and apparently fixed for downloads, but it remains an issue for uploads.

»[Internet] Frontier FIOS - Latency and QoS - Where they fail



gozer
Premium
join:2010-08-09
Rochester, NY

This post got my attention because it reminded me of when I noticed the same thing on my DSL service. This is the thread that covers it:
»[Internet] Frontier FIOS - Latency and QoS - Where they fail

I say this because that is right around the time I noticed this massive ping spike at the start of an upload. I can also say for a fact that this was a change from earlier tests run under the same conditions. It seems like Frontier set their network to give download traffic priority over upstream, maybe to make Netflix work better.



cdru
Go Colts
Premium,MVM
join:2003-05-14
Fort Wayne, IN
kudos:7
reply to NOYB

Your connection is working exactly how it's supposed to work.

Frontier provides you with an interstate highway for your connection. When your connection is idle, there aren't any cars on the highway. Your vehicle can easily merge onto the highway, get off at the next exit, then come back to you without any traffic to slow it down.

When you are saturating your connection, it's like a rush-hour traffic jam. The traffic is moving, but it's hard for another vehicle to slip into the driving lanes from the on-ramp. So you have to wait until there's a gap between two cars before your vehicle can go.

When all the traffic is saturated going one direction, it affects the speed of traffic going the other direction, because the acknowledgements can't be returned in a timely manner. You're uploading a very large file, so you have lots of semis with fully loaded trailers headed away from you. You want to transfer as much as possible as quickly as possible, but you don't want to risk one of your semis breaking down and never making it. So you send as many as you can to clog the highway, but you wait for FedEx to return a message saying that the destination received X number of semis successfully before sending more. As long as your destination can receive them as fast as your interstate allows, the limit is just how many trucks you can jam onto the highway.
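The semi-and-FedEx picture is TCP's sliding window: the sender keeps only as much data "on the highway" as one round trip can hold. A quick sketch of that ceiling, using the 35 Mbit/s tier mentioned later in the thread and the ~162 ms average RTT from the loaded ping test above (both figures are just illustrative here):

```python
# Bandwidth-delay product: how much data can be in flight at once.
# 35 Mbit/s link, ~162 ms average RTT (from the loaded ping results).
link_mbps = 35
rtt_s = 0.162

bytes_per_s = link_mbps * 1_000_000 / 8     # link rate in bytes/second
bdp_bytes = bytes_per_s * rtt_s             # data "on the road" per round trip
print(f"In-flight data at saturation: {bdp_bytes / 1024:.0f} KiB")
```

If the sender's window is smaller than that, the link never fills; if the queue is deeper, latency balloons instead.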

However, because you are sending all this traffic out, any traffic that's coming in (i.e. downloads) needs to send its acknowledgements back out to the server that sent it. But since your upload shipment is currently clogging the highway, those acknowledgements can't be returned as quickly as possible. The sending server sees this delay and interprets it as its packets not arriving as fast as they could, so it sees no need to send at full speed, since you appear to be receiving at only a fraction of that rate.

The "cure" for this is to implement QoS. QoS is a traffic cop on the highway that looks at the packets and decides the order in which they should be let onto the highway. The cop can look at what each vehicle is carrying and determine whether it should be allowed on ahead of other traffic, or whether it should have to wait its turn with the majority of traffic. If acknowledgement traffic is given higher priority than general data traffic, then in theory (simplifying things somewhat) you could reach peak throughput in both directions without the slowdowns.

There are multiple QoS strategies that can be implemented, and there isn't a one-strategy-fits-all that works for everyone. The simplest is FIFO queuing: first in, first out. It's fair, predictable, and doesn't require much thinking. However, it can result in packet delay and can't prioritize some traffic higher than other traffic.

Other, more advanced queuing strategies divide the traffic up into multiple buckets of data. The buckets can be based on protocol, type of packet, who the data is going to or coming from, etc. Simpler strategies like this just process those buckets in priority order: any traffic in the high-priority bucket is sent before the medium bucket, and medium-bucket traffic before the low bucket. This can cause lower-priority traffic to be unfairly delayed if there is a steady stream of higher-priority traffic. More advanced implementations can rate-limit each of the buckets so that, say, high-priority traffic gets 50% of the bandwidth, medium 30%, and low 20% when there is data waiting in each bucket. Or they process the queues in a weighted round-robin fashion (e.g. HMHLHMHL or HHML). That way even the lowest-priority data gets some bandwidth, even if high-priority traffic would otherwise consume it all.
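That weighted round-robin idea fits in a few lines. This is just a sketch: the bucket names, the packet labels, and the HHML service pattern are made up for illustration, and real implementations typically weight by bytes rather than packet count:

```python
from collections import deque

# Three priority buckets of queued packets (labels are illustrative).
buckets = {
    "high":   deque(["dns1", "ack1", "ack2"]),
    "medium": deque(["web1", "web2"]),
    "low":    deque(["p2p1", "p2p2", "p2p3"]),
}
# Service pattern per cycle: high gets two slots, medium and low one each.
pattern = ["high", "high", "medium", "low"]

sent = []
while any(buckets.values()):
    for name in pattern:
        if buckets[name]:
            sent.append(buckets[name].popleft())

print(sent)
```

Note that a low-priority packet goes out every cycle, so bulk transfers are slowed but never completely starved by the high-priority stream.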

My particular setup on my Frontier connection is provided by my router running Tomato firmware. ICMP, DNS, and other "service" protocols get the highest priority. Web, mail, etc. are rated in the middle. File transfers and P2P-type transfers are the lowest and get the leftovers. I've saturated my 35 Mbit connection with a file download while holding a VoIP conversation that was perfectly clear. I've also uploaded a large file to work while my kids gamed online, and they didn't experience additional lag.


jamesonnorth

join:2012-12-22
Modoc, IN

1 recommendation

reply to NOYB

The very long post about cars is an excellent explanation, but for anyone who would like a slightly shorter version, here you go:

Let's say you have a 35/35 plan and you're uploading at a full pipe. When you send pings, those ICMP packets have to fit into an already full pipe somewhere, even if they are only 32 bytes. Fitting them in increases latency for that particular transmission. That's why you implement QoS on your network. If you cap uploads at 34 Mbps or even 30 Mbps, you leave room for overhead and general web browsing. It annoys home users to have to give up any bandwidth, but it can really improve your experience.
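The reason a tiny ping still waits is that it queues behind full-size packets, each of which must be serialized onto the link first. A rough worked example, where the 256-packet buffer depth is purely an assumption (real router buffers vary widely):

```python
# Queuing delay a ping sees behind a full upload buffer at 35 Mbit/s.
link_bps = 35_000_000      # 35 Mbit/s upload
packet_bytes = 1500        # typical full-size Ethernet payload
queue_depth = 256          # assumed router buffer depth (illustrative)

per_packet_ms = packet_bytes * 8 / link_bps * 1000   # serialization time
queue_delay_ms = per_packet_ms * queue_depth         # wait behind a full queue
print(f"One packet: {per_packet_ms:.3f} ms; full queue: {queue_delay_ms:.1f} ms")
```

Under those assumptions a full buffer alone adds tens of milliseconds, which is in the same ballpark as the loaded-upload pings at the top of the thread; capping the upload rate keeps that queue from filling in the first place.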

But really, if you're using all of your connection for something, are you surprised it's not lightning fast for something else at the same time?
--
CompTIA Net+ Network Administrator - I know networks!
Professional Photographer - www.jnphoto.biz - Weddings and Senior Photos
Nice and comfy with Frontier DSL: I can help with your issues!
»speedtest.net/result/2472459013.png



kontos
xyzzy

join:2001-10-04
West Henrietta, NY
reply to gozer

Actually, it's because of the asymmetrical nature of the connection. Going back to the highway metaphor, there are more lanes in the download direction than in the upload direction. Therefore, when there's a traffic jam, it's worse when the jam is in the upload direction. When more lanes are available, it's more likely there's a short gap for the next car to get on...