|
[voip.ms] Is it rotten with bugs or is it incompetent users?
I moved the reply here to stop hijacking the original thread.
said by PX Eliezer704:Why should servers be going down at all?
When did you hear of servers going down at Voipo, CallWithUs, Localphone, Voxbeam, Ooma, Anveo, Junction Networks, Packet8, and so forth? Not with any regularity, for sure. Maybe once or twice at most in several years. I don't believe Martin said the servers had gone down; he said they sometimes had problems, like a DDoS hitting a server, which caused it to be erratic, and that because they had so many servers it only affected 10 percent of their users instead of all of them.
By the way, while voip.ms could use some fixing, taking all the problems I see reported here together, a lot of the time they have little or nothing to do with voip.ms. In the last few days I saw 4 complaints here, bunched together one after the other; it looked like voip.ms was reeling with bugs in its system, but it turned out that only one of the problems was the fault of voip.ms. Yet they took up pages of discussion here and give the impression the company does a poor job. In the end they turned out to be:
- a bad power supply
- a poor ISP
- a problem with the carrier
and, in the one instance where it was the fault of voip.ms: the infamous Toronto server |
|
Trev (AcroVoice & DryVoIP Official Rep) Premium Member join:2009-06-29 Victoria, BC
2 recommendations |
Trev
Premium Member
2012-Dec-19 12:08 pm
That's kind of the problem with BYOD. It's very difficult to determine where the fault is until the problem has been found and corrected. At that point, it's poor practice for the provider to publicly blame the customer and often the customer doesn't bother posting back to admit it was their own fault (or neglects to mention it, instead stating "support fixed it").
So the end result is a skewed perception.
With a full-service provider, the provider is responsible for everything except the Internet connection itself (and even then, sometimes they manage that too). You hear fewer horror stories about these because the user is not able to change configuration and cause problems, so the perception is that it's much more reliable. Also, if the provider has trouble, they can usually change configuration remotely on all devices within a short span to work around the problem. That is something BYOD has a more difficult time with.
TL;DR: it's hard to tell. |
|
|
to Blunderbuss
What about this fellow, are you calling [him] incompetent? » Re: [Voip.ms] Delay in Answering Calls
------------
[voip.ms] Is it rotten with bugs or is it incompetent users? I don't think the company wants it limited to just those two choices. |
|
1 edit
1 recommendation |
to Blunderbuss
Just FYI... Industry-standard servers have an average incident rate of about 8%. The way to read that is that, given a large enough sample size, you can expect 8% of your servers to have an 'incident' in any given 12-month period.
Now, what defines an incident? It can range from a simple error message to a more catastrophic failure. DIMM, power supply, fan, and HDD failures are the most common physical failure points. Not all of these bring a server down; that depends on the design of the server and how resilient to failure it is. This of course does not take into consideration external factors (cooling issues, AC power disruption, IP connectivity issues, etc.).
And then, of course, you have to take the infrastructure design into account. For example, when you hear the nebulous term 'cloud', one attribute is that failures are accounted for in the system architecture: a complete server can go down with no impact to the application or service. Each provider will likely have their own architecture, and the characteristics of the design will dictate whether the end user has any idea that there was a failure or not. |
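To make that 8% figure concrete, here is a back-of-the-envelope sketch of what it implies for a fleet. This assumes incidents are independent across servers, which is a simplification (shared power, cooling, or network can break that independence):

```python
# Rough arithmetic on the ~8% annual per-server incident rate quoted above.
# Assumes incidents are independent across servers (a simplification).

def expected_incidents(servers: int, rate: float = 0.08) -> float:
    """Expected number of servers with an incident in a 12-month period."""
    return servers * rate

def prob_at_least_one(servers: int, rate: float = 0.08) -> float:
    """Probability that at least one server has an incident in a year."""
    return 1 - (1 - rate) ** servers

if __name__ == "__main__":
    for n in (1, 10, 50):
        print(f"{n:3d} servers: expect {expected_incidents(n):.1f} incidents, "
              f"P(at least one) = {prob_at_least_one(n):.0%}")
```

So a provider running even ten servers should expect roughly one incident a year, and the odds of a completely incident-free year shrink quickly as the fleet grows; what the user sees then depends entirely on the architecture around the failure.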
|
|
Excellent information, thank you.
----------------------
Indeed, it is clear that all situations are subject to this (not only VoIP providers).
So the metric we would have to be talking about is disruptions visible to the end user. |
|
cell14 join:2012-01-04 Miami Beach, FL
1 recommendation |
to Trev
said by Trev:That's kind of the problem with BYOD. It's very difficult to determine where the fault is until the problem has been found and corrected. At that point, it's poor practice for the provider to publicly blame the customer and often the customer doesn't bother posting back to admit it was their own fault (or neglects to mention it, instead stating "support fixed it").
So end result is a skewed perception.
With a full service provider, the provider is then responsible for everything except the Internet connection itself (and even then, sometimes they do manage it). I absolutely agree with you. The VoIP provider, the ATA/VoIP adapter/VoIP phone, the user's other equipment, the ISP, the ISP's provided (sometimes proprietary) equipment like gateways: all of that can cause problems, and then the blame game usually starts. It's one of the main reasons people are skeptical about VoIP. |
|
|
said by cell14:The VOIP provider, ATA/VOIPadapter/VOIP phone, user's other equipment, ISP, ISP's provided( sometimes proprietary) equipment like gateways- all of that can cause problems and then usually the blame game starts. Right, that's why it's so important to look for patterns.
---------------------------------------
You have to piece together the data. Suppose a man goes to the doctor complaining that one time he has sex, he's boiling hot; the next time, he's freezing cold. A puzzle, until his wife comes in and explains that they had sex in August and then again in February.
---------------------------------------
Well, I gotta go out and "get a life". I'm hoping to find one, 9 miles west of Amherst. Wish me luck. |
|
|
said by PX Eliezer704:Well, I gotta go out and "get a life".
I'm hoping to find one, 9 miles west of Amherst.
Would that be Grand Island or Canada? |
|
|
|
It's near Springfield. |
|
nitzan Premium Member join:2008-02-27
1 recommendation |
to gweidenh
said by gweidenh:Industry standard servers have an average incident rate of about 8%. How you should read that is that given a large enough sample size, you can expect 8% of your servers to have an 'incident' in any given 12 month period. And that's why they invented high availability and (automatic!) failover. |
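The automatic-failover idea nitzan is pointing at can be sketched very simply: probe each POP in priority order and route to the first one that answers. This is only a toy illustration; the hostnames are hypothetical and a real provider would use DNS SRV records, anycast, or registrar-level redirection rather than a loop like this:

```python
# Toy sketch of health-check-based failover. Hostnames are hypothetical
# examples, not any provider's real infrastructure.

import socket

SERVERS = ["pop-primary.example.net", "pop-backup.example.net"]  # hypothetical

def is_up(host: str, port: int = 5060, timeout: float = 2.0) -> bool:
    """Crude health check: can we open a TCP connection to the SIP port?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_server(servers, probe=is_up):
    """Return the first healthy server in priority order, or None."""
    for host in servers:
        if probe(host):
            return host
    return None  # total outage: nothing answered
```

A supervisor process would run `pick_server` on a timer and repoint traffic whenever the answer changes, so the human-intervention step Martin describes below is exactly what this kind of loop is meant to remove.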
|
MartinMVoIP.ms Premium Member join:2008-07-21 |
MartinM
Premium Member
2012-Dec-20 8:27 am
said by nitzan:said by gweidenh:Industry standard servers have an average incident rate of about 8%. How you should read that is that given a large enough sample size, you can expect 8% of your servers to have an 'incident' in any given 12 month period. And that's why they invented high availability and (automatic!) failover.
Not to bash on you, Nitzan, but I don't think you're handling the same amount of traffic that we have to handle. Anyway, let's not get sidetracked. Our failover will be automatic soon.
The problems we've experienced with Toronto in the last few days are a capacity issue with the bandwidth routing at the data center level (out of our control). We're moving away from them. The various times Toronto went down, it was in an erratic manner (back/gone/back/gone) because the provider was experiencing a DDoS attack on one of their customers (it has happened many times, hence why we're moving away). Automatic failover / "high availability" would not have done much here; it required human intervention each time. But we won't shift the blame: we are responsible for being hosted there in the first place.
Last time Seattle went down for a whole weekend, we didn't even get a single thread in this forum, because we handled the situation well by redirecting the traffic to another POP in a timely and transparent manner.
However, it's always pleasant when another provider sneaks into one of the threads for a cheap poke. There are now about 10 redundant threads about Toronto. I guess we can all choose our flavour and post in any of them. |
|
nitzan Premium Member join:2008-02-27 |
nitzan
Premium Member
2012-Dec-20 9:12 am
said by MartinM:Not to bash on you Nitzan but I don't think you're handling the same amount of traffic that we have to handle. However, it's always pleasant when another provider sneaks into one of the threads for a cheap poke. Pot, meet kettle. I wasn't trying to "cheap poke", by the way; I was just commenting on a technical aspect. I don't even know what kind of failover mechanisms you have (or don't have) in place. |
|
|
to gweidenh
said by gweidenh:Just FYI... Industry standard servers have an average incident rate of about 8%. How you should read that is that given a large enough sample size, you can expect 8% of your servers to have an 'incident' in any given 12 month period. Interesting incident rate you quote there (8%), covering everything from a minor glitch to the "Lucy, you'se got some 'splaining to do" level. I just wonder what kind of percentages are seen in other industries by comparison, and what users' reactions would be in those cases. I'm going to dig up some reliability stats for the auto industry now and see what kind of numbers I get. (I'm betting the incident percentage there is higher, and I don't see calls for a return to the horse & buggy.)
NefCanuck |
|
nonymous (banned) join:2003-09-08 Glendale, AZ |
nonymous (banned)
Member
2012-Dec-20 10:25 am
said by NefCanuck:said by gweidenh:Just FYI... Industry standard servers have an average incident rate of about 8%. How you should read that is that given a large enough sample size, you can expect 8% of your servers to have an 'incident' in any given 12 month period. Interesting average incident rate percentage you quote there 8%, covering everything from the minor glitch to the "Lucy you'se got 'some 'splaining to do" level. I just wonder what kind of percentages are seen in other industries by comparison and what would users reactions be in those cases? I'm going to dig up some reliability stats in the auto industry now and see what kind of numbers I get (I'm betting the incident percentage number there is higher and I don't see call for the return to the horse & buggy) NefCanuck But for critical needs, like banking or Visa cards, failovers are in place. End users will not see any long failures. |
|
|
That depends on the severity of the situation. You can try to plan for everything, but nothing in life is 100% guaranteed (other than death and taxes).
For example, last year during the holiday season, one of the major banks in Canada had its entire ATM network crap its pants for the better part of a day. That meant no access to cash via ATM or debit card services. It was horribly inconvenient, and I know a lot of businesses lost sales that day that they never made up, but it does happen.
The question is how much more are you willing to pay for the service to get closer to that impossible to achieve 100% uptime?
NefCanuck |
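NefCanuck's cost question has a well-known arithmetic behind it: each extra "nine" of availability cuts the allowed downtime by a factor of ten, and the price tends to climb accordingly. A quick sketch of what each tier means in minutes per year:

```python
# What "more nines" of uptime means in allowed downtime per year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability: float) -> float:
    """Allowed downtime per year, in minutes, at a given availability."""
    return (1 - availability) * MINUTES_PER_YEAR

if __name__ == "__main__":
    for a in (0.99, 0.999, 0.9999):
        print(f"{a:.2%} uptime -> {downtime_minutes(a):,.0f} min/year of downtime")
```

At 99% uptime a service can be dark for roughly 87 hours a year; at 99.99% it gets under an hour. Whether the jump is worth the extra cost is exactly the trade-off being debated here.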
|
SCADAGeo Premium Member join:2012-11-08 N California |
to Blunderbuss
said by Trev:it's poor practice for the provider to publicly blame the customer and often the customer doesn't bother posting back to admit it was their own fault (or neglects to mention it, instead stating "support fixed it").
So end result is a skewed perception. Sometimes, neither the provider nor the customer will admit fault; unfortunately, that's human nature. said by MartinM:Last time Seattle went down for a whole week-end we didn't even get a single thread in that forum because we handled the situation well by redirecting the traffic to another POP in a timely and transparent manner. I believe good communication also had an effect, because the Seattle outage was posted in the Issue Tracker, so there was no need to ask others whether they were experiencing the same issue. |
|
decx Premium Member join:2002-06-07 Vancouver, BC |
decx
Premium Member
2012-Dec-21 12:50 am
said by SCADAGeo:I believe good communication also had an effect because the Seattle outage was posted in the Issue Tracker, hence there was no need to ask others if they were experiencing the same issue. I agree. That time, the issue was in the tracker quite quickly, which removes some of the impetus to post about it. |
|