A Swedish web-site monitoring company has published a worldwide map of Google's data centers. And people love looking at it. Today, as reported by just about every tech-happy news source on the web, the official Pingdom blog got all graphical with a new Google Data Center FAQ from Data Center Knowledge. Pingdom's Map of All …
Africa is on the way to being the new MSLand (thanks to extensive briber^H financial help for development -Mandriva story anyone?), so no Google allowed. Kangaroos and wombats don't need Google, they just go to NZ when they want to have some fun (http://www.theregister.co.uk/2008/03/28/wombat_incident/). As for Wyoming (and the surrounding area), well... do they even have computers there?
I understand Wyoming. Look, the state has half the population of San Jose (California).
They have house numbers that measure distances in 100ths of miles. My brother is on 690 XXXX road, which is 6.9 miles from the "main road". Still a ways from the "city" (small bump in the road).
Then again, the outback of OZ is "wide open!" and Africa needs OLPC to get any computers. Maybe "Google knows best"!?!
DC in Australia
Well, if they do have a DC in Australia, they're certainly not routing my traffic to it, judging by the times and names in my traceroutes.
[root@xento ~]# traceroute www.google.com.au
traceroute: Warning: www.google.com.au has multiple addresses; using 184.108.40.206
traceroute to www.l.google.com (220.127.116.11), 30 hops max, 40 byte packets
1 adsl-gw (192.168.10.1) 0.878 ms 0.661 ms 0.609 ms
2 nexthop.vic.ii.net (18.104.22.168) 14.670 ms 14.739 ms 14.402 ms
3 gi1-1.mel-pipe-bdr2.ii.net (22.214.171.124) 15.625 ms 14.457 ms 14.796 ms
4 gi0-15-1-4.syd-stl-core1.ii.net (126.96.36.199) 28.854 ms 29.718 ms 29.549 ms
5 Gi11-1.gw2.syd1.asianetcom.net (188.8.131.52) 29.817 ms 30.377 ms 30.182 ms
6 po0-1.cr1.syd1.asianetcom.net (184.108.40.206) 29.866 ms 30.230 ms 30.266 ms
7 po8-0.gw2.sjc1.asianetcom.net (220.127.116.11) 186.767 ms 187.163 ms 186.820 ms
8 po2-0.gw1.sjc1.asianetcom.net (18.104.22.168) 186.788 ms 187.046 ms 186.819 ms
9 22.214.171.124 (126.96.36.199) 187.011 ms 187.145 ms 187.537 ms
10 188.8.131.52 (184.108.40.206) 192.616 ms 188.533 ms 188.729 ms
11 220.127.116.11 (18.104.22.168) 188.816 ms 190.376 ms 198.637 ms
12 cf-in-f104.google.com (22.214.171.124) 188.934 ms 188.962 ms 190.599 ms
Appears to go to San Jose to me.
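The giveaway in that trace is the latency jump between hops 6 and 7 (~30 ms to ~186 ms), which is the trans-Pacific crossing. A minimal sketch of that heuristic in plain Python — the RTT samples are copied from the trace above; the 100 ms jump threshold and the function name are my own assumptions, not anything Google or traceroute defines:

```python
from statistics import median

def find_ocean_hop(rtts, jump_ms=100):
    """Return the index of the first hop whose median RTT exceeds the
    previous hop's median by more than jump_ms, or None if no such jump.
    A big single-hop jump usually marks a long-haul (e.g. submarine) link."""
    meds = [median(samples) for samples in rtts]
    for i in range(1, len(meds)):
        if meds[i] - meds[i - 1] > jump_ms:
            return i
    return None

# Per-hop RTT samples (ms) taken from hops 4-8 of the traceroute above
hops = [
    [28.854, 29.718, 29.549],     # syd-stl-core1.ii.net
    [29.817, 30.377, 30.182],     # gw2.syd1.asianetcom.net
    [29.866, 30.230, 30.266],     # cr1.syd1.asianetcom.net
    [186.767, 187.163, 186.820],  # gw2.sjc1.asianetcom.net -- San Jose
    [186.788, 187.046, 186.819],  # gw1.sjc1.asianetcom.net
]
print(find_ocean_hop(hops))  # → 3 (the Sydney -> San Jose leg)
```

The ~27 ms traces elsewhere in this thread show no such jump, which is the whole argument for a local presence.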
I doubt australia
I doubt they've got anything in Aussie. For a start, all connections to Google services go international, and secondly I reckon we'd have heard about it if they were to build locally.
There aren't really that many big data centres in Aus; I think there are probably fewer than 5 major ones in each capital (on the eastern side). In the west, well, much less :P
Although Google bought out a Sydney-based mapping service, they don't have much inside Aus either AFAIK... nothing like what they have in the US/Europe.
Not so surprising
when you consider the cost of transport networks in Australia. By not having a data center in Australia, the cost of that expensive trans-Pacific access is met by the consumer.
GOOG has the b*ll*x to string new fibre across the Pacific. Perhaps they have the willpower to drop kick multiple DC-in-a-box into covertly connected places as well? (Calling Dr. "Indie" Jones!) It's not simply a patent eh?
Re: Oz, Horace Greeley said it best: Go west, young man (but send your money - and market goods - east). GOOG, may you enjoy your mirth on Earth for the worth of your perch down under. But next, mebbe try one without being flagged?
Per usual, amanfromMars must be the defining authority in this matter. Bless.
Nah... they only invented Google maps
Why have a data centre there as well ....
Transcontinental network links?
I'm guessing that they would need to put data centres in areas with really good, really high-capacity network links. What does that map look like compared to where they've put their centres? (China and other such countries not included, of course, as they have to put data centres locally in those jurisdictions in order to comply with local censorship requirements.)
They've got *something* in Australia...
$ traceroute www.google.com
traceroute: Warning: www.google.com has multiple addresses; using 126.96.36.199
traceroute to www.l.google.com (188.8.131.52), 64 hops max, 40 byte packets
1 192.168.0.1 (192.168.0.1) 1.209 ms 0.772 ms 0.734 ms
2 lns2.cbr1.internode.on.net (184.108.40.206) 96.443 ms 79.954 ms 98.675 ms
3 gi1-3.cor2.cbr1.internode.on.net (220.127.116.11) 104.171 ms 56.014 ms 48.689 ms
4 pos2-0.cor2.mel4.internode.on.net (18.104.22.168) 237.563 ms 39.715 ms 39.718 ms
5 gi5-2.cor3.mel4.internode.on.net (22.214.171.124) 39.445 ms 39.741 ms 39.205 ms
6 pos3-0.bdr2.adl2.internode.on.net (126.96.36.199) 213.392 ms 185.399 ms 600.878 ms
7 po3.cor1.adl6.internode.on.net (188.8.131.52) 296.328 ms 39.243 ms 39.713 ms
8 g213.internode.on.net (184.108.40.206) 39.710 ms 39.503 ms 42.208 ms
I thought New Zealand had it
here's my pathping result (minus the obvious)
I'm in Queensland
5 TenGigabitEthernet4-1.cha36.Brisbane.telstra.net [220.127.116.11]
6 Pos0-0-0-0.cha-core4.Brisbane.telstra.net [18.104.22.168]
7 Bundle-Ether3.ken-core4.Sydney.telstra.net [22.214.171.124]
8 Port-Channel1.pad-gw2.Sydney.telstra.net [126.96.36.199]
9 10GigabitEthernet2-0.sydp-core02.Sydney.reach.com [188.8.131.52]
10 i-0-0.wil-core02.net.reach.com [184.108.40.206]
11 i-0-0.paix-core02.net.reach.com [220.127.116.11]
12 g4_15-pax06.net.reach.com [18.104.22.168]
13 Google.peer.paix05.net.reach.com [22.214.171.124]
20 tw-in-f104.google.com [126.96.36.199]
No data center in Australia due to legal risk
Google Australia have stated that a data centre in Australia is not possible due to the risk to Google from Australia's DMCA++ copyright laws.
Australia .. the backwards country
We don't need your fancy high speed networks down here. We're still on a high about cable TV and being able to phone people from outside cities. We're a bit slow down here. Deep in our main telephone carrier's documents is their guarantee to supply 2400kbits over a phone line. Woo baby! Google could put a high speed data centre down here and everybody would have to queue up over the single copper cable connecting it to the rest of the world.
Well, something's terminating Google.com HTTP in South Australia; from my Internode ADSL:
Tracing route to www.l.google.com [188.8.131.52]
over a maximum of 30 hops:
1 <1 ms <1 ms 1 ms my.router [192.168.1.1]
2 20 ms 18 ms 18 ms lns5.mel4.internode.on.net [184.108.40.206]
3 27 ms 27 ms 28 ms vl13.cor3.mel4.internode.on.net [220.127.116.11]
4 28 ms 28 ms 27 ms pos3-0.bdr2.adl2.internode.on.net [18.104.22.168]
5 27 ms 28 ms 28 ms po2.cor3.adl2.internode.on.net [22.214.171.124]
6 27 ms 27 ms 27 ms g216.internode.on.net [126.96.36.199]
This isn't necessarily evidence of a data center in SA, and oddly enough, from an AAPT/Connect.com connection in Melbourne we do indeed go to the US:
$ traceroute -S 3 www.google.com.au
traceroute to www.l.google.com (188.8.131.52), 64 hops max, 40 byte packets
3 atm4-0-8.cor4.bur.connect.com.au (184.108.40.206) [AS2764] 157.240 ms 249.243 ms 295.130 ms
4 ge-0-1-1.dst1.bur.connect.com.au (220.127.116.11) [AS2764] 6.640 ms 6.727 ms 7.077 ms
5 so-4-0-0.cre1.mel.connect.com.au (18.104.22.168) [AS2764] 21.682 ms 7.520 ms 6.727 ms
6 as0.cre1.bur.connect.com.au (22.214.171.124) [AS2764] [MPLS: Label 103264 Exp 1] 7.947 ms 8.370 ms 7.730 ms
7 so-5-1-0.cre1.hay.connect.com.au (126.96.36.199) [AS2764] 20.797 ms 21.276 ms 22.130 ms
8 so-1-0-0.bdr5.syd.connect.com.au (188.8.131.52) [AS2764/AS1221] 29.717 ms 21.799 ms 21.727 ms
9 so1-0-0.sybr4.global-gateway.net.nz (184.108.40.206) [AS9901] 21.095 ms 25.513 ms 22.625 ms
10 * * *
11 * * *
12 * * *
13 220.127.116.11 (18.104.22.168) [AS15169] 187.466 ms 191.343 ms 195.544 ms
14 22.214.171.124 (126.96.36.199) [AS15169] [MPLS: Label 449927 Exp 4] 207.420 ms 188.8.131.52 (184.108.40.206) [AS15169] [MPLS: Label 764124 Exp 4] 194.304 ms 220.127.116.11 (18.104.22.168) [AS15169] [MPLS: Label 491383 Exp 4] 196.383 ms
15 22.214.171.124 (126.96.36.199) [AS15169] 195.796 ms (TOS=128!) 188.8.131.52 (184.108.40.206) [AS15169] [MPLS: Label 643948 Exp 4] 189.918 ms (TOS=0!) 195.034 ms
16 220.127.116.11 (18.104.22.168) [AS15169] 196.809 ms (TOS=128!) 22.214.171.124 (126.96.36.199) [AS15169] 199.066 ms 188.8.131.52 (184.108.40.206) [AS15169] 197.451 ms
17 220.127.116.11 (18.104.22.168) [AS15169] 199.799 ms 22.214.171.124 (126.96.36.199) [AS15169] 202.251 ms 188.8.131.52 (184.108.40.206) [AS15169] 194.406 ms
18 220.127.116.11 (18.104.22.168) [AS15169] 194.484 ms (TOS=0!) 22.214.171.124 (126.96.36.199) [AS15169] 199.816 ms 188.8.131.52 (184.108.40.206) [AS15169] 204.417 ms
19 po-in-f99.google.com (220.127.116.11) [AS15169] 199.225 ms 198.853 ms 231.052 ms
They could do a solar powered data centre
But it would need to be fire proof too.
Answer: There's nothing here.
Seriously. 20-something million people makes us as big as, what, 1/3rd of London?
Of those 20-million-or-so, about 9 of them can get a decent broadband connection, thanks to the head-in-the-sand antiquated so-called 'telecommunications network' we have here. Seriously, after having our business offline for the past 8 days due to the local ADSL exchange being 'dead' (nah, mate, the East Sydney CBD Exchange, not that important, mate) and going through the hoop-jumping ritual that is moving flats, I am quite prepared to say that Google would be better off putting a DC on the Dark Side Of The Planet Coosbane than in Australia.
Unless, of course, it complies with RFC1149 which is FULLY SUPPORTED by the Australian Government!
nothing in OZ? internode has something...
Tracing route to www.l.google.com [18.104.22.168]
over a maximum of 30 hops:
1 2 ms 2 ms 2 ms 192.168.25.1
2 140 ms 28 ms 27 ms lns5.mel4.internode.on.net [22.214.171.124]
3 58 ms 175 ms 71 ms vl13.cor3.mel4.internode.on.net [126.96.36.199]
4 114 ms 38 ms 38 ms pos3-0.bdr2.adl2.internode.on.net [188.8.131.52]
5 51 ms 73 ms 130 ms po3.cor1.adl6.internode.on.net [184.108.40.206]
6 51 ms 100 ms 133 ms g209.internode.on.net [220.127.116.11]
Rob Pike gave an impromptu presentation at AUUG 2 years ago (when one of the scheduled speakers became unavailable) on building a Google Datacentre, which touched on a lot of the Google way of doing things. One of the things he noted was that there's a fair bit of management around controlling bandwidth use as a new DC is being synched up to the rest of the Google cloud, and that they go to fair lengths to cap it so as not to screw up the rest of the DC's operation as it comes up to speed.
Now recall that's all in an environment of high speed fibre backbones in the US of 100's if not 1000's of Gb/s. Now contrast with Oz which sits at the end of a bunch of shared ocean cables. Each cable is what 3-5Gb/s as I understand things? And there are what 10, 20 maybe 30 of them feeding in here from various directions. I suspect there simply isn't the bandwidth to keep a Google DC running here.
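That kind of bandwidth cap is classically a token bucket. A minimal sketch in plain Python — the class name, rates, and API are entirely my own illustration; Google's actual sync-throttling mechanism is not public:

```python
import time

class TokenBucket:
    """Cap a transfer to `rate` bytes/sec, allowing bursts up to `burst` bytes.
    Tokens refill continuously; a send is allowed only if enough tokens exist."""

    def __init__(self, rate, burst):
        self.rate = rate        # refill rate, bytes per second
        self.burst = burst      # maximum bucket size, bytes
        self.tokens = burst     # start full
        self.last = time.monotonic()

    def consume(self, nbytes):
        """Try to spend nbytes of budget; return True if allowed now."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False  # caller should sleep and retry

# Example: cap a hypothetical DC sync stream at 1 MB/s with 64 KB bursts
bucket = TokenBucket(rate=1_000_000, burst=64_000)
```

A sender would call `bucket.consume(len(chunk))` before each write and back off when it returns False, which keeps the sync traffic from starving everything else on the link.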
But they do have the Sydney office from which Google Maps was born. And (like most other places I suspect) they recruit fairly vigorously here for both local people and people to relocate to the US.
@ Adrian Esdaile
"Seriously. 20-something million people makes us as big as, what, 1/3rd of London?"
No, it makes you 3 times the size of London, and about a third of the size of the UK.
Whilst that cesspool on the Thames seems big (particularly if you live in a village of 560), it's only just over a tenth of the population of the UK. Which is why people that live outside of the M25 get a tad miffed at the many comments from those inside that seem to indicate that nothing exists north of Watford or west of Reading.
As I understand it the M25 marks the border between culture and agriculture.
But unfortunately, thanks to government policy and the various inept departments, there is damn all agriculture. But then, the height of culture now is the return to Eastenders of Bianca, and Rickaaaay