|
VOIP.ms Seattle3 issue
Since about 10:00 a.m. PST, all of my customers who are using IP Phones on seattle3.voip.ms are reporting that outbound calls are failing. Inbound calls are working fine. When I move them over to another geographic server, their phones work fine.
Voip.ms says that some ISP somewhere is mangling the UDP packets, and that changing to a single codec (or to TCP) will solve the problem.
I've confirmed that doing so works, but for some reason some of my Aastra/Mitel 6737 IP phones won't keep a single-codec selection. I make the change and click Save, but when I reload the page, I'm back to Codec 1 = "All."
Is anyone else experiencing problems? Any ideas on other ways to resolve this?
Does anyone know about this 6737i bug in choosing codecs? |
|
lilarry Premium Member join:2010-04-06 |
lilarry
Premium Member
2018-Dec-13 6:01 pm
We are experiencing the issue (indeed, I believe we were the first to report it). At first we thought it was limited to the Washington servers, but since then we have seen it on others, though not all. For example, the New York servers appear to be working. It does not appear to be "some ISP somewhere": we have already observed that users on Optimum, Comcast, Lightpath, FiOS, and Spectrum are all experiencing the issue. Also related are issues with BLF. I do understand that switching to a single codec reduces the size of the SIP packet, so there may be a clue there. It worked on a couple of phones we have here in the office. But most of the phones we have in the field were already restricted to PCMU exclusively and they are still experiencing the issue, so it may be the act of changing the codec setting that triggers a fix. Just conjecture, though. Regardless, it appears nearly certain that the problem is on the Voip.ms end of this, and I know they are working hard as I type this to figure it out and get it corrected.
Meanwhile, try a server in another city or two (or three). See if that fixes it. |
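For anyone wondering why trimming the codec list helps: each advertised codec adds a payload number to the `m=` line and an `a=rtpmap` line to the SDP inside the INVITE, so fewer codecs means a smaller UDP datagram. A rough sketch (this is a toy SDP, not VoIP.ms's actual INVITE; the payload types, port, and 192.0.2.x documentation addresses are illustrative):

```python
# Illustrative only: a toy SDP offer showing how advertising fewer codecs
# shrinks the packet. Payload types/names below are common RTP/AVP examples.
CODECS = {0: "PCMU/8000", 8: "PCMA/8000", 9: "G722/8000",
          18: "G729/8000", 3: "GSM/8000", 4: "G723/8000"}

def sdp_body(payload_types):
    """Build a minimal SDP offer advertising the given RTP payload types."""
    pts = " ".join(str(pt) for pt in payload_types)
    lines = ["v=0",
             "o=- 0 0 IN IP4 192.0.2.1",   # 192.0.2.0/24 is documentation space
             "s=call",
             "c=IN IP4 192.0.2.1",
             "t=0 0",
             f"m=audio 10000 RTP/AVP {pts}"]
    lines += [f"a=rtpmap:{pt} {CODECS[pt]}" for pt in payload_types]
    return "\r\n".join(lines) + "\r\n"

all_codecs = sdp_body(list(CODECS))
one_codec = sdp_body([0])                 # PCMU only
print(len(all_codecs), len(one_codec))    # the single-codec offer is smaller
```

A real INVITE also carries SIP headers on top of the SDP, so on a verbose configuration the whole datagram can approach the roughly 1,472 bytes of UDP payload that fit a single 1500-byte Ethernet frame.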
|
1 edit |
I'm glad to hear from you. Support told me that it was an ISP and not their problem. I don't think that they gave me any indication that they were working to resolve it, though they did say that they were opening a ticket. |
|
removed Premium Member join:2002-02-08 Houston, TX |
to advocate99
I'm having the same issue with Comcast and the Houston server. Changing to TCP transport did help, but I have several users (on Comcast, too, strangely) where the Grandstream phones simply refuse to register with anything other than UDP transport. |
|
lilarry Premium Member join:2010-04-06
2 recommendations |
lilarry
Premium Member
2018-Dec-13 7:24 pm
The issue appears to be with their servers located at SoftLayer Data Centers. That would include Washington, Seattle, Houston, Dallas and others. Try switching your server to one of their servers at other data centers (try New York, Atlanta, Montreal, Vancouver, Denver among others). Let us know if that did the trick (without having to resort to TCP). Worked for us. |
|
|
Denver is working for me. |
|
removed Premium Member join:2002-02-08 Houston, TX |
to advocate99
Are any of you able to see if this was fixed? voip.ms is closed at the moment and the issue still has an Open status with them. I'm trying to figure out whether this is fixed, or if I should reconfigure 50 phones from UDP to TCP to avoid a shitstorm tomorrow morning. I only have AT&T service at home and in my office so I can't test this for myself... |
|
|
lilarry Premium Member join:2010-04-06 |
lilarry
Premium Member
2018-Dec-13 11:41 pm
Seems to be working now. Tested so far on Washington, Houston and Seattle3. Even if it is actually fixed, they likely won't update the issue tracker until morning.
We were going to redirect several hundred phones to non-SoftLayer servers during the overnight hours (I think that's a better solution than TCP) but now I think we will wait until morning. |
|
1 edit |
said by lilarry: Seems to be working now. Tested so far on Washington, Houston and Seattle3. Even if it is actually fixed, they likely won't update the issue tracker until morning. We were going to redirect several hundred phones to non-SoftLayer servers during the overnight hours (I think that's a better solution than TCP) but now I think we will wait until morning.
We're still working with the providers, but we don't want to update the tracker until we're sure the issue is fixed properly. Edit: Working, and so far it's stable. Good night everyone. |
|
removed Premium Member join:2002-02-08 Houston, TX |
removed
Premium Member
2018-Dec-14 10:42 am
I'm being told that outgoing calls are still failing on the Houston servers. Any ideas? |
|
lilarry Premium Member join:2010-04-06 |
to advocate99
We have found that while some devices began working, most are still failing. We are in the process now of migrating devices to non-SoftLayer servers, which ARE working. |
|
removed Premium Member join:2002-02-08 Houston, TX |
removed
Premium Member
2018-Dec-14 12:25 pm
said by lilarry: We are in the process now of migrating devices to non SoftLayer servers, which ARE working.
Can confirm. I just moved a bunch of Grandstream GXP2170s from Houston to Atlanta2 and all is well. |
|
lilarry Premium Member join:2010-04-06
1 recommendation |
to advocate99
Just shooting from the hip here: the issue appears similar to the SIP-ALG problems we see when SIP-ALG is not disabled on local routers. SIP-ALG can cause all sorts of weird intermittent issues, like calls not completing or one-way audio. We have found that SIP-ALG issues can often be worked around on a local router by shrinking the SIP packet (for example, by limiting the device to one codec), or by switching to TCP. I wonder if SoftLayer did something to their routers yesterday that, in effect, enabled SIP-ALG. Again, just conjecture. |
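If anyone wants to test the ALG theory, one rough check is to send a SIP OPTIONS probe over UDP and compare the Via header you sent with the one echoed back in the response; an ALG typically rewrites the address or port in transit. A minimal sketch of building such a probe (the branch, tag, and Call-ID values are made up for illustration):

```python
# A rough SIP-ALG check: send this OPTIONS probe over UDP and compare the
# Via header you sent with the one echoed back in the response. The branch,
# tag, and Call-ID values below are made-up example values.
def build_options(server, local_ip, local_port):
    """Build a minimal SIP OPTIONS request as a byte string."""
    via = f"SIP/2.0/UDP {local_ip}:{local_port};branch=z9hG4bKprobe1;rport"
    msg = (f"OPTIONS sip:{server} SIP/2.0\r\n"
           f"Via: {via}\r\n"
           f"From: <sip:probe@{local_ip}>;tag=probe\r\n"
           f"To: <sip:{server}>\r\n"
           "Call-ID: alg-probe-1\r\n"
           "CSeq: 1 OPTIONS\r\n"
           "Max-Forwards: 70\r\n"
           "Content-Length: 0\r\n"
           "\r\n")                 # empty body: headers end with a blank line
    return msg.encode()
```

You would send the bytes with `socket.sendto()` to port 5060 and diff the Via line in the reply against what you sent: the server legitimately appending `received=`/`rport=` parameters is normal, but if the address or port itself has changed, something in the path is rewriting SIP.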
|
1 recommendation |
They updated their Issues list to indicate that at 9:20 a.m. PST today they opened a ticket with their Data Center. IMO, it seems as if they should have done that yesterday.
---------
Update Dec. 14, 2018, 12:20 EST
A ticket has been opened with our Data Center provider to find more about this situation.
We will provide more updates as they become available. |
|
advocate99 1 edit |
One more thing: I've had calls from clients who were still on Seattle (they were trying to wait it out) indicating that some inbound calls were coming in with no CID.
I also observed that on phones that were able to make outbound calls (only one codec configured), some of the outbound calls were being delivered with a CID of 0000000000. |
|
advocate99 |
A client who is using a FreePBX setup with only one codec enabled on seattle3 (and is thus able to make calls) has advised me that several calls came in today from known US customers, but the calls were accompanied by CIDs from Russia. |
|
MangoUse DMZ and you get a kick in the dick. Premium Member join:2008-12-25 www.toao.net
1 recommendation |
Mango
Premium Member
2018-Dec-15 1:41 pm
How are things going today?
Out of curiosity, does using a non-standard port number (42872 on VoIP.ms's end, anything between 20000 and 65535 on yours) change the symptoms? |
|
removed Premium Member join:2002-02-08 Houston, TX
1 recommendation |
to advocate99
Steve VoIPms (or anyone else with knowledge) -- any updates on this? |
|
|
said by removed: Steve VoIPms (or anyone else with knowledge) -- any updates on this?
For some customers using our IBM/SoftLayer servers over UDP, we are receiving packets that get chopped when the initial SIP INVITE is too big. We've been on this for days. A few workarounds:
1. Switch to TCP
2. Reduce your configuration to one codec (the one you prefer)
3. Switch to a non-SoftLayer server
The head-scratcher is that only a small percentage of customers are affected, so I suspect one of SoftLayer's upstream providers is responsible for breaking these bigger UDP packets. I would appreciate it if some of you could send us a trace route (tracert, I believe, on Windows) to the server that is affecting you. We've been working day and night on this with not much help from SoftLayer. Thanks |
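To put numbers on "chopped if too big": assuming a standard 1500-byte Ethernet MTU (PPPoE or tunnels lower it), a UDP payload over 1472 bytes forces IPv4 fragmentation, and any middlebox that drops or mishandles fragments will deliver a truncated INVITE. A back-of-envelope sketch:

```python
# Back-of-envelope fragmentation check. Assumption: a standard 1500-byte
# Ethernet MTU with 20-byte IPv4 and 8-byte UDP headers, no IP options.
import math

MTU = 1500
IP_HDR, UDP_HDR = 20, 8

def udp_fragments(payload_len, mtu=MTU):
    """Number of IPv4 fragments needed to carry a UDP payload of payload_len."""
    first = mtu - IP_HDR - UDP_HDR          # data room in the first fragment
    if payload_len <= first:
        return 1
    # Later fragments carry up to mtu - IP_HDR bytes each, rounded down to a
    # multiple of 8 because IPv4 fragment offsets are in 8-byte units.
    per_frag = (mtu - IP_HDR) // 8 * 8
    return 1 + math.ceil((payload_len - first) / per_frag)

print(udp_fragments(900))    # a lean INVITE fits in one packet
print(udp_fragments(1600))   # a codec-heavy INVITE needs two fragments
```

That would be consistent with the symptoms in this thread: trimming to one codec or switching to TCP keeps the message under (or out of reach of) the fragmentation threshold, while verbose UDP INVITEs lose their tail.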
|
removed Premium Member join:2002-02-08 Houston, TX
1 recommendation |
removed
Premium Member
2018-Dec-17 10:37 am
Several clients, all using Comcast as their ISP with houston.voip.ms as the server, have had issues with this. I use AT&T at home and we have AT&T at the office, which haven't had this issue at all - not with us and not with any clients. Here's the traceroute from one of the affected clients, where we fixed the problem last week by forcing TCP:
[root @ router] ~ # traceroute houston.voip.ms
traceroute to houston.voip.ms (173.193.85.18), 30 hops max, 60 byte packets
[...]
2 96.120.17.125 (96.120.17.125) 7.102 ms 7.295 ms 7.506 ms
3 ae-108-rur02.royalton.tx.houston.comcast.net (69.139.209.189) 7.963 ms 7.934 ms 7.907 ms
4 ae-2-rur01.royalton.tx.houston.comcast.net (162.151.134.65) 7.618 ms 8.779 ms 8.993 ms
5 ae-29-ar01.bearcreek.tx.houston.comcast.net (68.85.245.85) 9.503 ms 9.715 ms 9.690 ms
6 4.68.71.109 (4.68.71.109) 9.898 ms 8.859 ms 9.281 ms
7 4.35.207.138 (4.35.207.138) 13.810 ms 13.613 ms 13.793 ms
8 ae5.dar01.sr02.hou02.networklayer.com (173.192.18.223) 13.767 ms 16.022 ms ae5.dar02.sr02.hou02.networklayer.com (50.97.18.243) 15.748 ms
9 po2.fcr01.sr02.hou02.networklayer.com (173.193.118.133) 16.360 ms 16.574 ms 16.551 ms
10 * * *
11 * * *
12 * * *
13 * * *
14 * * *
15 *^C
Another client with a similar setup (Comcast & houston.voip.ms) reports that everything is fine as of this morning. Here's their traceroute:
[root @ office] ~ # traceroute houston.voip.ms
traceroute to houston.voip.ms (173.193.85.18), 30 hops max, 60 byte packets
[...]
2 96.120.16.185 (96.120.16.185) 7.964 ms 7.146 ms 12.642 ms
3 ae-106-rur02.royalton.tx.houston.comcast.net (68.85.254.193) 12.555 ms 12.462 ms 13.182 ms
4 ae-2-rur01.royalton.tx.houston.comcast.net (162.151.134.65) 12.280 ms 12.168 ms 12.076 ms
5 ae-29-ar01.bearcreek.tx.houston.comcast.net (68.85.245.85) 12.752 ms 18.063 ms 17.949 ms
6 4.68.71.109 (4.68.71.109) 23.444 ms 9.701 ms 9.449 ms
7 4.35.207.138 (4.35.207.138) 17.537 ms 14.619 ms 14.386 ms
8 ae5.dar01.sr02.hou02.networklayer.com (173.192.18.223) 14.402 ms 14.237 ms 14.091 ms
9 po2.fcr01.sr02.hou02.networklayer.com (173.193.118.133) 14.210 ms 15.253 ms po1.fcr01.sr02.hou02.networklayer.com (173.193.118.131) 25.660 ms
10 * * *
11 * * *
12 * * *
13 * * *
14 * * *
15 *^C
Edit: I'm happy to help you guys troubleshoot this as much as possible. Feel free to IM me via DSLReports if you want access to one of the affected networks. I'll get you remoted in and you can run any diagnostic you'd like. |
|
skyhook Premium Member join:2004-06-30 Columbus, OH |
to Steve VoIPms
I'm currently experiencing problems with outgoing calls. The call starts off OK but after a few seconds the audio disappears for a time and then returns again for a few more seconds. Rinse and repeat. Using atlanta.voip.ms.
  1     1 ms     1 ms     1 ms  192.168.20.1
  2     3 ms     2 ms     2 ms  192.168.7.1
  3    13 ms    12 ms    11 ms  d60-65-1-128.col.wideopenwest.com [65.60.128.1]
  4    10 ms    10 ms    10 ms  d4-50-48-122.nap.wideopenwest.com [50.4.122.48]
  5    13 ms    10 ms    11 ms  76-73-167-97.knology.net [76.73.167.97]
  6    19 ms    12 ms    10 ms  76-73-167-89.knology.net [76.73.167.89]
  7    12 ms    11 ms    11 ms  76-73-166-126.knology.net [76.73.166.126]
  8    13 ms    12 ms    11 ms  76-73-166-125.knology.net [76.73.166.125]
  9    22 ms    32 ms    21 ms  user-24-214-131-160.knology.net [24.214.131.160]
 10    42 ms    21 ms    33 ms  dynamic-75-76-35-8.knology.net [75.76.35.8]
 11    21 ms    22 ms    22 ms  4-1-4.ear3.Chicago2.Level3.net [4.16.38.157]
 12    49 ms    48 ms    47 ms  ae-2-3514.edge2.Atlanta4.Level3.net [4.69.150.165]
 13    48 ms    48 ms    46 ms  HOSTING-HOL.edge2.Atlanta4.Level3.net [4.53.238.10]
 14    53 ms    47 ms    65 ms  63.247.69.38
 15    51 ms    48 ms    46 ms  atlanta1.voip.ms [75.127.65.130]
Trace complete. |
|
tyrodome Premium Member join:2004-02-18 USA 1 edit |
to advocate99
Here are the current providers for VOIP.MS's points of presence, per » www.ip-adress.com (search box in upper right). Due to mergers and name changes, some providers' names might be wrong. Please correct them here.
My clients and I have had the best reliability with Chicago. We've had subpar reliability with Dallas, Houston, Seattle, San Jose, and Los Angeles. Four of those five are Softlayer, for whatever that's worth. I hadn't noticed that until now.
atlanta.voip.ms - Global Net Access / zColo / NETWORK TRANSIT HOLDINGS
atlanta2.voip.ms - Global Net Access / zColo / NETWORK TRANSIT HOLDINGS
chicago.voip.ms - Steadfast
chicago2.voip.ms - Steadfast
chicago3.voip.ms - Steadfast
chicago4.voip.ms - Steadfast
dallas.voip.ms - Softlayer
dallas2.voip.ms - Softlayer
denver.voip.ms - Handy Networks
denver2.voip.ms - Handy Networks
houston.voip.ms - Softlayer
houston2.voip.ms - Softlayer
losangeles.voip.ms - QuadraNet
losangeles2.voip.ms - QuadraNet
newyork.voip.ms - Internap
newyork2.voip.ms - Internap
newyork3.voip.ms - Internap
newyork4.voip.ms - Internap
newyork5.voip.ms - Steadfast
newyork6.voip.ms - Steadfast
newyork7.voip.ms - Steadfast
newyork8.voip.ms - Steadfast
sanjose.voip.ms - Softlayer
sanjose2.voip.ms - Softlayer
seattle.voip.ms - Softlayer
seattle2.voip.ms - Softlayer
seattle3.voip.ms - Softlayer
tampa.voip.ms - NOC4Hosts / HIVELOCITY VENTURES CORP
tampa2.voip.ms - NOC4Hosts / HIVELOCITY VENTURES CORP
tampa3.voip.ms - NOC4Hosts / HIVELOCITY VENTURES CORP
tampa4.voip.ms - NOC4Hosts / HIVELOCITY VENTURES CORP
washington.voip.ms - Softlayer
washington2.voip.ms - Softlayer |
|
3 recommendations |
to advocate99
IBM has confirmed the issue has been fixed. Thanks all for helping. |
|