Quick update. After running through the standard tests, I have some more details:
East Coast Line Monitor: » ny-monitor.dslreports.co ··· and=3905
West Coast Line Monitor: » 64.81.79.40/r3/cricket/g ··· and=3905

Pings have been TERRIBLE in the past 72 hours. I cannot do anything latency-sensitive. Gaming and VoIP (Skype) are both non-functional at this point.
Tracing route to google.ca [74.125.226.55]
over a maximum of 30 hops:
1 23 ms 11 ms 11 ms 10.125.96.1
2 422 ms 19 ms 35 ms 69.63.254.53
3 377 ms 99 ms 7 ms kitchener3.cable.teksavvy.com [24.52.255.86]
4 31 ms 9 ms 10 ms kitchener1.cable.teksavvy.com [24.246.55.33]
5 47 ms 10 ms 8 ms gw-google.torontointernetxchange.net [206.108.34.6]
6 38 ms 14 ms 49 ms 216.239.47.114
7 505 ms 239 ms 9 ms 64.233.175.132
8 456 ms 29 ms 11 ms yyz06s06-in-f23.1e100.net [74.125.226.55]
Tracing route to ns.teksavvy.com [206.248.182.3]
over a maximum of 30 hops:
1 31 ms 10 ms 9 ms 10.125.96.1
2 413 ms 22 ms 62 ms 69.63.254.53
3 457 ms 20 ms 11 ms kitchener3.cable.teksavvy.com [24.52.255.86]
4 11 ms 9 ms 38 ms kitchener2.cable.teksavvy.com [69.165.168.225]
5 21 ms 10 ms 60 ms 2120.ae0.agg01.tor.packetflow.ca [69.196.136.77]
6 32 ms 11 ms 56 ms ns.teksavvy.com [206.248.182.3]
Notice the terrible pings on the second hop; I think this is the CMTS? The POI link also looks congested. What's weird is that the traffic goes over one link and then over another, which seems backwards. I would have thought traffic would be handed off from the TSI link to another segment further down the network.
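For anyone who wants to eyeball which hops are actually spiking (as opposed to a router just deprioritizing ICMP replies to itself, which only affects that one hop), here's a rough sketch that parses Windows tracert output like the runs above and flags hops whose worst RTT crosses a threshold. Function names and the 200 ms cutoff are my own picks, not anything official:

```python
import re

def parse_hops(text):
    """Parse Windows tracert output into (hop_number, [rtts_ms], endpoint).
    Expects lines like: '  2   422 ms    19 ms    35 ms  69.63.254.53'."""
    hops = []
    for line in text.splitlines():
        m = re.match(r"\s*(\d+)\s+(.*)", line)
        if not m:
            continue
        rtts = [int(ms) for ms in re.findall(r"(\d+) ms", m.group(2))]
        if not rtts:
            continue  # skip timeouts / non-hop lines
        endpoint = m.group(2).rsplit("ms", 1)[-1].strip()
        hops.append((int(m.group(1)), rtts, endpoint))
    return hops

def flag_spiky(hops, threshold_ms=200):
    """Return hops whose worst sample exceeds the threshold. A spike that
    clears up on later hops is usually just ICMP deprioritization; one
    that persists downstream points at real congestion on that link."""
    return [(n, max(r), host) for n, r, host in hops if max(r) >= threshold_ms]
```

In the first trace above, this would flag hops 2, 3, 7, and 8, but since hops 4-6 recover, the persistent suspect is hop 2 (the CMTS/POI side), which matches the gateway ping results below.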
ping -n 100 -l 1000 (IPv4 gateway)
Reply from 24.212.140.161: bytes=1000 time=497ms TTL=255
Reply from 24.212.140.161: bytes=1000 time=504ms TTL=255
[snipped]
Reply from 24.212.140.161: bytes=1000 time=483ms TTL=255
Reply from 24.212.140.161: bytes=1000 time=482ms TTL=255
Ping statistics for 24.212.140.161:
Packets: Sent = 100, Received = 99, Lost = 1 (1% loss),
Approximate round trip times in milli-seconds:
Minimum = 6ms, Maximum = 555ms, Average = 402ms
Nice average ping of 402 ms to my gateway... yuck.
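Worth noting: 402 ms average with only 1% loss is the classic congestion/bufferbloat signature, where packets are being queued rather than dropped. A tiny helper (hypothetical names, just a sketch) makes that split explicit from any ping run:

```python
def ping_summary(rtts_ms, sent):
    """Summarize a ping run: loss percentage plus min/avg/max RTT.
    rtts_ms holds one entry per reply actually received."""
    received = len(rtts_ms)
    return {
        "sent": sent,
        "received": received,
        "loss_pct": 100.0 * (sent - received) / sent,
        "min_ms": min(rtts_ms),
        "max_ms": max(rtts_ms),
        "avg_ms": sum(rtts_ms) / received,
    }
```

High avg_ms with near-zero loss_pct (like the run above) points at a saturated/buffered link; high loss with normal RTTs would point at a signal or plant problem instead.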
I'm not sure as to the cause of this, but service had been rock solid in terms of speed and latency since installation in June. Something locally must have changed in the past week or so.