WebMaka (Member) | join:2013-01-19 | Niceville, FL
[FL] Latency issues - Is Cox working on their back-end systems?

I'm running a few machines on Cox Business HSI (basic tier), one of them being a file (for work) and game/voicechat (for after-hours) server. The principal machine is an AMD Bulldozer 8120 eight-core box with 16GB of RAM running Windows Server 2008 R2, connected to an Asus RT-N66 router and Cisco DCP3010 modem, both of which are only a few days old. The previous modem and router were not doing well (both were 3+ years old and dropping data like mad) so upgrades were afoot.

For the last week or so, latency and packet loss have been all over the place, but usually only at night (after about 7:30PM Central time). During the day, latency to the server seems to average about 80ms but is somewhat unstable (smokeping shows a deviation range of around 60-150ms). Once the clock ticks over to "most businesses are closed," though, the latency is suddenly all over the map (smokeping's 5-minute average climbs over 500ms) and packet loss hits 5% or more. Calls to Cox support have so far been relatively fruitless, as the problem seems to magically clear up for 15-30 minutes while I'm on the phone with the CSR, and a few minutes after I hang up it goes crazy again.

The amount of "crazy" (read: how long it acts up, and how bad the latency and losses get) seems to be traffic-dependent - when we load the game and voice servers (Minecraft & Mumble) with only 5 people, we get several-minute-long bursts of 500+ms pings and 5%+ loss, but with nobody on Mumble and only a couple people in-game it's much better (200+ms pings and 2% loss) but still of questionable stability.
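For anyone who wants timestamps rather than graphs, something like this rough Python sketch is the kind of logging I mean; the hostname is a placeholder, not the actual server:

import re
import subprocess
import time
from datetime import datetime

HOST = "example-server.example.com"  # placeholder, not the real server
INTERVAL = 10                        # seconds between probes

while True:
    # one Windows-style ping per probe; -n 1 sends a single echo request
    out = subprocess.run(["ping", "-n", "1", HOST],
                         capture_output=True, text=True).stdout
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    match = re.search(r"time[=<](\d+)\s*ms", out)
    if match:
        print(f"{stamp}  {match.group(1)} ms")
    else:
        print(f"{stamp}  LOST")
    time.sleep(INTERVAL)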

Current smokeping run:
»/r3/sm ··· a84cb0dc

Residential is showing the same pattern to a lesser degree, and it looks like some of that is being masked by downstream data compression.

I've noticed that whenever Cox does any sort of upgrading to their back-end systems, residential service goes to crap until the work is finished, and the behavior here looks similar. Is Cox doing something after-hours (read: upgrades and maintenance) that is driving my business-HSI connection crazy?
bryant313 (Member) | join:2011-05-24 | Las Vegas, NV
Re: [FL] Latency issues - Is Cox working on their back-end systems?

That sounds exactly like what I have been experiencing as well.
WebMaka (Member) | join:2013-01-19 | Niceville, FL
Still getting nonzero packet loss and sporadic latency jumps, as evidenced by the ongoing smokeping ( »/r3/sm ··· a84cb0dc ). Not nearly as bad as it's been, but nonzero packet loss is not a good thing.

Does anyone have any suggestions on how I might resolve this?

CoxTech1 (Member) | join:2002-04-25 | Chesapeake, VA | to WebMaka
Is it WoW you're having issues with, by any chance? There have been some latency issues under investigation closer to their end.
WebMaka (Member) | join:2013-01-19 | Niceville, FL
Minecraft, actually, and it really gets bad when 6-8 people are in the Mumble server and another 4-5 on the Minecraft server. Something about Mumble in particular seems to make the connection go completely crazy.

CoxTech1 (Member) | join:2002-04-25 | Chesapeake, VA
Perhaps if you could post some traceroute data to the hosts in question, that would reveal some clues as to what is going on.
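Even something as rough as the Python sketch below, which just captures a timestamped trace every few minutes, would help cover the bad intervals; the hostname is a placeholder for the affected host:

import subprocess
import time
from datetime import datetime

HOST = "example-server.example.com"  # placeholder for the affected host
INTERVAL = 300                       # one trace every 5 minutes

while True:
    # tracert -d skips reverse DNS so each run finishes faster
    trace = subprocess.run(["tracert", "-d", HOST],
                           capture_output=True, text=True).stdout
    with open("tracert_log.txt", "a") as log:
        log.write(f"===== {datetime.now().isoformat()} =====\n")
        log.write(trace + "\n")
    time.sleep(INTERVAL)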

JUser (Anon) | @optonline.net
Seems like a lot of Cox customers are having issues connecting to IB right now...
WebMaka (Member) | join:2013-01-19 | Niceville, FL | to CoxTech1
The only problem is that traceroutes aren't proving very helpful in pinning down what's causing the erratic latency: it's tough to actually catch, and the results we're all getting are contradictory. For example, the line quality/jitter testing from DSL Reports consistently shows 2% loss at the last hop, but none of the folks connecting in from the outside are seeing that on their ends. I pretty much can't traceroute it myself, since I'm on Cox Home HSI and it's only three hops out the residential gateway and back in through the business side, and my Home HSI connection is also being affected to a less obvious extent.

I'm thinking that the only way this is going to get caught and fixed is to have the connection actively monitored from Cox's end for a few days, during which I'll see about getting everyone to pile in and load the connection down so we maximize the chances of catching it acting up.

In the meantime I'll see if I can enlist everyone using the servers to ping and traceroute the snot out of the server when the connection goes screwball and see if we can find anything to point at.
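Something like this rough Python sketch is what I have in mind for them to run: fire a burst of pings at the server and post the one-line loss/latency summary. The hostname here is a placeholder, not the actual server address.

import re
import subprocess
from datetime import datetime

HOST = "example-server.example.com"  # placeholder, not the actual address
COUNT = 100                          # pings per burst

out = subprocess.run(["ping", "-n", str(COUNT), HOST],
                     capture_output=True, text=True).stdout
times = [int(t) for t in re.findall(r"time[=<](\d+)\s*ms", out)]
loss = 100.0 * (COUNT - len(times)) / COUNT
avg = sum(times) / len(times) if times else 0.0
print(f"{datetime.now():%Y-%m-%d %H:%M}  loss={loss:.0f}%  avg={avg:.0f} ms")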
WebMaka (Member)
Oh, let's throw out some additional details for clarification:

This is a home-office installation, with both residential and business service at the same address - the physical server machine is about ten feet away from me. Both networks are isolated, and there are two drops off the pole. I'd imagine that both are connected to the same node, and I'm wondering if the node's overloaded or flaky.

Home-HSI smokeping:
»/r3/sm ··· b706b771

Business-HSI smokeping:
»/r3/sm ··· b0dc.CA1

Manse (Anon) | @alionscience.com
I'll add myself to the "having this problem" list. Unfortunately, I'm at work and can't post details right now. But I have pingplotter data across multiple nights I can share when I get home. All of the packet loss appears to be at 68.1.4.133 or (occasionally) 68.1.4.159.

I've scheduled two service calls and cancelled them both because I've received calls from Cox saying "Oh, we found the problem last night and corrected it, you should be fine." The problem is corrected for anywhere between 1 and 4 hours, then is back full force.

The PingPlotter runs I have show losses of 10-50% at the above IP, most typically sitting at 20%. I'll data dump when I get home tonight.
cookiesowns (Member) | join:2010-08-31 | Irvine, CA | to WebMaka
said by WebMaka:

Minecraft, actually, and it really gets bad when 6-8 people are in the Mumble server and another 4-5 on the Minecraft server. Something about Mumble in particular seems to make the connection go completely crazy.

Hold up, you're saying you're on the basic tier? The issue is that you're hitting your upstream cap, which causes packet loss.

I suggest turning down your outbound quality on Mumble then. Mumble can eat up quite a bit of bandwidth. Maybe try hosting a local murmur server?
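To put rough numbers on it (assuming default-ish 72 kbit/s voice streams and a ballpark 2 Mbit/s basic-tier upstream, neither measured from your line), the server has to re-send each speaker's audio to every other connected client:

# Back-of-the-envelope estimate of Murmur's upstream use. All numbers here
# are assumptions, not measurements from this connection.
def murmur_upstream_kbit(clients, speakers, per_stream_kbit=72):
    # the server forwards each active speaker's stream to every other client
    return speakers * (clients - 1) * per_stream_kbit

clients = 8               # assumed: people connected to Mumble
speakers = 3              # assumed: talking at once
upstream_cap_kbit = 2000  # assumed basic-tier upstream, roughly 2 Mbit/s

used = murmur_upstream_kbit(clients, speakers)
print(f"~{used} kbit/s of ~{upstream_cap_kbit} kbit/s upstream")
# 3 speakers * 7 listeners * 72 kbit/s is about 1512 kbit/s, before Minecraft,
# the file server, or per-packet overhead are counted.

Lower the client bitrate, or cap it server-side (Murmur has a bandwidth limit in its config, if I remember right), and every term in that math shrinks.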
said by Manse:

I'll add myself to the "having this problem" list. Unfortunately, I'm at work and can't post details right now. But I have pingplotter data across multiple nights I can share when I get home. All of the packet loss appears to be at 68.1.4.133 or (occasionally) 68.1.4.159.

I've scheduled two service calls and cancelled them both because I've received calls from Cox saying "Oh, we found the problem last night and corrected it, you should be fine." The problem is corrected for anywhere between 1 and 4 hours, then is back full force.

The PingPlotter runs I have show losses of 10-50% at the above IP, most typically sitting at 20%. I'll data dump when I get home tonight.

Are you getting any loss past Cox's CMTS/router? I've seen plenty of times where maintenance on the CMTS or the router will disable ICMP responses for a given amount of time. Too many requests to it will also cause it to drop ICMP packets, but it will forward traffic just fine.
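A quick way to check: ping the hop that shows the loss and the actual destination side by side and compare loss rates. If only the middle hop drops, it's almost certainly the router de-prioritizing ICMP rather than real path loss. Rough Python sketch below; the hop IP is the one Manse posted, and the destination hostname is a placeholder.

import re
import subprocess

TARGETS = {
    "suspect hop": "68.1.4.133",                  # IP from Manse's PingPlotter runs
    "destination": "example-server.example.com",  # placeholder end host
}
COUNT = 50  # pings per target

for label, host in TARGETS.items():
    out = subprocess.run(["ping", "-n", str(COUNT), host],
                         capture_output=True, text=True).stdout
    replies = len(re.findall(r"time[=<]\d+\s*ms", out))
    loss = 100.0 * (COUNT - replies) / COUNT
    print(f"{label:12s} {host:32s} loss={loss:.0f}%")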