eeeaddict

join:2010-02-14
Reviews:
·WIND Mobile
·Distributel Cable

WIKIPEDIA down?

No one I know can get to Wikipedia. I think it's a problem with Cogent.

Traceroute has started…

traceroute to wikipedia.org (208.80.152.201), 64 hops max, 72 byte packets
1 10.0.8.1 (10.0.8.1) 248.779 ms 132.760 ms 155.627 ms
2 dsl-67-55-0-1.acanac.net (67.55.0.1) 163.997 ms 119.414 ms 157.404 ms
3 te0-3-0-7.mpd22.yyz02.atlas.cogentco.com (38.111.102.65) 154.418 ms 130.653 ms 193.419 ms
4 te0-1-0-0.ccr22.yyz02.atlas.cogentco.com (154.54.43.165) 187.183 ms 123.503 ms 90.996 ms
5 te0-6-0-3.ccr22.jfk02.atlas.cogentco.com (154.54.25.105) 120.634 ms 208.324 ms 221.328 ms
6 te0-1-0-4.ccr22.jfk05.atlas.cogentco.com (154.54.3.70) 137.105 ms 156.477 ms 126.369 ms
7 * * *
8 * * *
9 *


Phorkster
Premium
join:2004-06-27
Windsor, ON
kudos:1

Fine here.



Cliffy
Premium
join:2003-06-29
Kitchener, ON
reply to eeeaddict

Anything hosted on or routed through the east coast will be a mess.
--
there's a fine line between a rut and a groove.



aurora on

@bell.ca

no luck here


eeeaddict

join:2010-02-14
reply to eeeaddict

It seems Rogers-land people are fine, but anyone on Bell or DSL is down.


Guru

join:2008-10-01
kudos:2

Fine here also!



jasmo34

join:2008-03-20
London, ON
reply to eeeaddict

said by eeeaddict:

It seems Rogers-land people are fine but... anyone on Bell or DSL is down

Well, not quite! That's quite a statement!

No problem here surfing to/around Wikipedia, or running a trace to it.

That's via Start DSL, using Start's auto-DNS.

Tracing route to wikipedia.org [208.80.152.201]
over a maximum of 30 hops:

1 1 ms 1 ms 1 ms 192.168.1.1
2 9 ms 9 ms 8 ms 64.140.112.15
3 8 ms 9 ms 9 ms core2-london1-ge1-120.net.start.ca [64.140.120.2]
4 9 ms 9 ms 9 ms 204.101.4.193
5 24 ms 26 ms 26 ms bx4-chicagodt_POS2-0-0.net.bell.ca [64.230.186.174]
6 23 ms 22 ms 22 ms ge5-0-9d0.cir1.chicago2-il.us.xo.net [206.111.3.185]
7 55 ms 59 ms 60 ms 207.88.14.201.ptr.us.xo.net [207.88.14.201]
8 46 ms 47 ms 46 ms 207.88.13.69.ptr.us.xo.net [207.88.13.69]
9 47 ms 47 ms 47 ms 209.48.42.50
10 80 ms 80 ms 80 ms xe-0-0-1.cr1-sdtpa.wikimedia.org [208.80.154.210]
11 83 ms 83 ms 84 ms wikipedia-lb.pmtpa.wikimedia.org [208.80.152.201]

Trace complete.


xsbell

join:2008-12-22
Canada
kudos:8
Reviews:
·Primus Telecommu..
reply to eeeaddict

said by eeeaddict:

It seems Rogers-land people are fine, but anyone on Bell or DSL is down.

No. It's your awesome provider using awesome Cogent transit.


Guspaz
Premium,MVM
join:2001-11-05
Montreal, QC
kudos:23
reply to eeeaddict

I haven't had any trouble, but then again, it hasn't even started raining in Montreal yet.
--
Developer: Tomato/MLPPP, Linux/MLPPP, etc »fixppp.org



squircle

join:2009-06-23
Oakville, ON

1 recommendation

reply to eeeaddict

Fine in Waterloo over IPv6, can't say the same about IPv4.

*cough cough* this is why everybody should be using IPv6 *cough*

whistler:~ tyson$ mtr -wr en.wikipedia.org
HOST: whistler.lan                                  Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- cypress.lan                                    0.0%    10    0.5   0.4   0.3   0.5   0.1
  2.|-- hurricane-electric-tunnel-server-endpoint.lan  0.0%    10   11.6  11.7  11.3  12.0   0.2
  3.|-- gige-g2-5.core1.tor1.he.net                    0.0%    10   13.8  14.9  10.3  17.3   2.0
  4.|-- 10gigabitethernet4-1.core1.nyc4.he.net         0.0%    10   19.5  20.8  19.1  26.3   2.8
  5.|-- 10gigabitethernet2-3.core1.ash1.he.net         0.0%    10   25.3  27.0  24.8  29.2   1.7
  6.|-- xe-5-3-3-500.cr1-eqiad.wikimedia.org           0.0%    10   25.1  27.9  25.1  51.8   8.4
  7.|-- wikipedia-lb.eqiad.wikimedia.org               0.0%    10   25.3  25.3  25.1  25.4   0.1
 
whistler:~ tyson$ mtr -w4r en.wikipedia.org
HOST: whistler.lan                              Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- cypress.lan                                0.0%    10    0.5   0.4   0.3   0.5   0.1
  2.|-- v250-rn-rt-uwt.uwaterloo.ca                0.0%    10    2.1   2.1   1.9   2.3   0.1
  3.|-- rn-rt-mc-1-1-a-rn-rt-uwt.uwaterloo.ca      0.0%    10    2.1   2.1   1.9   2.3   0.1
  4.|-- te2-15-dist-rt-mc.uwaterloo.ca             0.0%    10    1.1   0.9   0.7   1.2   0.2
  5.|-- 172.25.1.105                               0.0%    10    0.9   0.8   0.6   0.9   0.1
  6.|-- gi1-29.mag01.yyz02.atlas.cogentco.com      0.0%    10    9.6  12.0   9.1  23.9   5.7
  7.|-- te0-3-0-0.mpd22.yyz02.atlas.cogentco.com   0.0%    10    9.5   9.4   9.3   9.5   0.1
  8.|-- te0-6-0-13.mpd22.jfk02.atlas.cogentco.com  0.0%    10   21.3  21.2  21.2  21.4   0.1
  9.|-- te0-7-0-12.ccr21.jfk02.atlas.cogentco.com  0.0%    10   21.4  21.2  21.0  21.4   0.2
    |  `|-- 154.54.3.166
    |   |-- 154.54.25.150
    |   |-- 154.54.6.50
    |   |-- 154.54.46.254
    |   |-- 154.54.31.6
    |   |-- 154.54.7.6
    |   |-- 154.54.5.210
 10.|-- te0-1-0-3.ccr22.jfk05.atlas.cogentco.com   0.0%    10   21.4  21.3  21.2  21.5   0.1
    |  `|-- 154.54.3.161
    |   |-- 154.54.3.70
    |   |-- 154.54.31.9
    |   |-- 154.54.7.10
 11.|-- ???                                       100.0    10    0.0   0.0   0.0   0.0   0.0
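
If you want to check which protocol you're actually reaching Wikipedia over, here's a quick comparison (rough sketch, assuming curl is installed; the -4/-6 switches are standard curl and mtr options, nothing specific to my tunnel setup):

# Fetch headers only, forcing each protocol in turn
curl -4 -I http://en.wikipedia.org/
curl -6 -I http://en.wikipedia.org/

# Same comparison with mtr: -4/-6 select the protocol, -r prints a one-shot report, -w keeps hostnames unabbreviated
mtr -6 -wr en.wikipedia.org
mtr -4 -wr en.wikipedia.org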
 
 


Acanac Inc
Premium
join:2007-03-05
Mississauga, ON
reply to xsbell

said by xsbell:

said by eeeaddict:

It seems Rogers-land people are fine, but anyone on Bell or DSL is down.

No. It's your awesome provider using awesome Cogent transit.

We have more than one peer. Our Inteliquent route cannot reach Wikipedia either.

sunday8pm

join:2010-05-24
Reviews:
·Bell Sympatico
·voip.ms
reply to eeeaddict

Same here on Bell 25

Bell:

traceroute to wikipedia.org (208.80.152.201), 30 hops max, 40 byte packets using UDP
1 tomato.lan (192.168.1.1) 0.811 ms 0.744 ms 0.677 ms
2 bellbox.lan (192.168.2.1) 2.280 ms 1.685 ms 2.854 ms
3 bas5-montreal28_lo0_SYMP.net.bell.ca (64.230.197.109) 24.493 ms 24.808 ms 24.881 ms
4 agg2-montreal02_GE0-0-2_143.net.bell.ca (64.230.203.197) 16.949 ms 17.012 ms 16.593 ms
5 bx4-newyork83_pos_2_0_0.net.bell.ca (64.230.187.18) 24.435 ms 23.812 ms 24.084 ms
6 nyc9-core-1.gigabiteth15-1-3.tele2.net (130.244.200.69) 24.327 ms 23.908 ms 24.402 ms
7 ash1-peer-1.xe-0-2-1-unit0.tele2.net (130.244.39.233) 31.384 ms 31.531 ms 31.668 ms
8 * * *
9 * * *
10 * * *
11 * * *
12 * * *
13 * * *
 

Rogers cell:
traceroute to wikipedia.org (208.80.152.201), 30 hops max, 38 byte packets
1 74.198.64.254 (74.198.64.254) 74.598 ms 78.806 ms 141.730 ms
2 172.25.199.65 (172.25.199.65) 76.131 ms 63.752 ms 172.25.199.81 (172.25.199.81) 89.512 ms
3 10.118.20.10 (10.118.20.10) 74.821 ms 10.118.23.22 (10.118.23.22) 80.368 ms 10.118.20.10 (10.118.20.10) 119.133 ms
4 10.118.20.77 (10.118.20.77) 77.889 ms 79.224 ms 77.443 ms
5 192.168.1.79 (192.168.1.79) 91.289 ms 77.944 ms 77.343 ms
6 192.168.1.18 (192.168.1.18) 89.348 ms 90.196 ms 77.025 ms
7 172.25.192.176 (172.25.192.176) 89.911 ms 82.666 ms 79.446 ms
8 10.118.2.209 (10.118.2.209) 89.722 ms 79.339 ms 89.839 ms
9 74.198.64.34 (74.198.64.34) 121.672 ms 78.436 ms 77.361 ms
10 69.63.248.133 (69.63.248.133) 95.450 ms 92.232 ms 87.551 ms
11 69.63.249.165 (69.63.249.165) 113.562 ms 99.002 ms 95.741 ms
12 24.156.156.45 (24.156.156.45) 87.477 ms 113.859 ms 95.616 ms
13 66.185.83.62 (66.185.83.62) 113.884 ms 99.048 ms 111.401 ms
14 ash-core-2.tengigeth8-0-0.swip.net (206.223.115.161) 109.626 ms 107.519 ms 148.533 ms
15 ash1-peer-1.xe-0-2-0-unit0.tele2.net (130.244.64.147) 102.426 ms 105.845 ms 111.179 ms
16 130.244.6.243 (130.244.6.243) 109.169 ms 147.711 ms 107.874 ms
17 xe-0-0-1.cr1-sdtpa.wikimedia.org (208.80.154.210) 143.363 ms 158.645 ms 137.493 ms
18 wikipedia-lb.pmtpa.wikimedia.org (208.80.152.201) 139.600 ms 141.595 ms 139.956 ms
 

graniterock

join:2003-03-14
London, ON
Reviews:
·WIND Mobile
·TekSavvy Cable
reply to Cliffy

said by Cliffy:

Anything hosted/routed through east coast will be a mess.

For sure. Funcom's data centres are currently running on backup generators. I'm sure others are affected too. Wouldn't be surprised if things go down and pop back up as lines break and traffic needs to be rerouted.

»forums.thesecretworld.com/showth···t1443134


creed3020
Premium
join:2006-04-26
Kitchener, ON
kudos:2
reply to eeeaddict

Great information regarding outages at NYC datacenters can be found here: »www.webhostingtalk.com/showthrea···=1205042



Nagilum
Premium
join:2012-08-15
Kitchener, ON
Reviews:
·TekSavvy Cable
reply to eeeaddict

Hurricane Sandy has taken a number of hosting providers and at least one backbone provider offline in the greater New York area. Expect downed websites and routing issues until this is fixed. One of my coworkers found that switching from his ISP's DNS servers to Google's fixed some of the routing issues (a quick way to check whether that would help in your case is sketched below).

Ars Technica did an article yesterday on the situation.

»arstechnica.com/information-tech···outages/
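
To check whether a resolver switch would help you, here's a rough sketch, assuming dig is installed; 8.8.8.8 and 8.8.4.4 are Google's public resolvers:

# What your current resolver (whatever is in /etc/resolv.conf or handed out by the router) returns
dig +short wikipedia.org

# The same query sent straight to Google public DNS
dig +short wikipedia.org @8.8.8.8

# If the second answer comes back but the first times out or differs, pointing your
# system or router DNS settings at 8.8.8.8 / 8.8.4.4 is worth a try.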



Guspaz
Premium,MVM
join:2001-11-05
Montreal, QC
kudos:23
reply to eeeaddict

Any website or business that is down for an extended period of time due to a datacenter falling off the map has only itself to blame. There's no excuse for NOT having a plan for this exact contingency.

Any business that puts any kind of emphasis on its internet presence should have a disaster recovery plan that includes offsite data backups and geographically distinct backup hardware or deployment capacity.

If you've built yourself on a cloud platform with an appropriate level of redundancy, it's easy. If you only have reserve redundancy, you switch to your backup plan and start spinning up instances in the backup location. If you have operational redundancy/resiliency (like Netflix), your system automatically compensates for a chunk of its instances disappearing.

Even if your plan is a semi-manual approach, one that involves downtime during the changeover, a few hours of downtime is much better than the days of downtime some companies (like Callcentric) are currently suffering.

Otakuthon's server was in the affected area, and its facility in New Jersey also lost power. However, we've got a DR plan in place that can recover from a total datacenter failure within an hour or two. Our backups consist of block-level on-site backups (allowing us to survive host-level hardware failure with no more than a day of data loss and only a few minutes of downtime as we spin up a new instance from the backup) and data-only off-site backups (allowing us to survive complete facility failure with no more than a day's data loss and only maybe two hours of downtime to configure a new instance and migrate the data). In our case, we decided that this level of data gap (up to a day) was acceptable.

In this instance, we found out that our facility was on backup power. I immediately cloned our instance to another facility in California to serve as a cold backup (could be booted in seconds if the main facility went down, without doing the restore-from-off-site-backups approach). If I had not had the opportunity to do that (perhaps the generators had failed or run out of fuel before I noticed the mains outage), I could have started a manual rebuild from our offsite backups and been up and running in just a few hours.

In fact, if I put in the effort, I could automate much of the recover-from-off-site-backups process and get us up and running in perhaps half an hour instead of two hours...
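
That automation wouldn't need to be fancy. A minimal sketch of the idea, assuming the off-site backups are nightly dump files reachable over SSH; the host names, paths and database name below are placeholders, not our actual setup:

#!/bin/sh
# Rough restore sketch for a freshly provisioned instance: pull the newest
# off-site dump set and load it. All names and paths here are placeholders.
# Assumes rsync/SSH access to the backup box and that MySQL and the web
# server are already installed and running on the new instance.
set -e

BACKUP_HOST=backups.example.org     # off-site backup box (placeholder)
BACKUP_DIR=/srv/offsite/latest      # where the nightly dump set lives (placeholder)
RESTORE_DIR=/var/restore

mkdir -p "$RESTORE_DIR"

# Copy the newest dump set down from the off-site box
rsync -a "$BACKUP_HOST:$BACKUP_DIR/" "$RESTORE_DIR/"

# Unpack the web root and reload the database dump
tar -xzf "$RESTORE_DIR/webroot.tar.gz" -C /var/www
gunzip -c "$RESTORE_DIR/db.sql.gz" | mysql example_db

# Bring the front-end back up against the restored data
service nginx restart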

And we're a small non-profit running a once-a-year convention. My employer at my day job has a hot-backup DR plan that can recover from complete facility failure in minutes, with only seconds of data loss...

What excuse do people like Gawker or Callcentric have?
--
Developer: Tomato/MLPPP, Linux/MLPPP, etc »fixppp.org



milnoc

join:2001-03-05
H3B
kudos:2

Employees and customers of Peer1 in NYC were actually hauling buckets of diesel fuel up 17 flights of stairs to keep their generators running!

Knowing the geography of NYC, I wouldn't have considered it a viable location for a datacenter. Most of Manhattan Island's subways are already below the water line, and parts of the city flood during even the occasional heavy rainstorm.
--
Watch my future television channel's public test broadcast!
»thecanadianpublic.com/live