I for one
Welcome our mutually-annihilating anti-matter-based alien overlords. Tom Cruise or Mel Gibson would simply have to shake hands with them rather than expose them to water....I'm sure the world would be better for it too.
"Jobs reply: "Oh yes they do. We don't track anyone. The info circulating around is false.""
Quick bit about Jobs' reply. Google tracks users' web habits, and likely collects "anonymized" usage data (no different from Microsoft being able to determine which buttons in IE are used most often). Apple, on the other hand, is keeping an indefinite log of a user's physical location. Do they send it back to Cupertino? Likely not. But the info circulating is not false, as anyone can pull up the specific files (depending on iOS version) and see for themselves. Jobs is a liar in this case.
If you have an iPhone for work (because you requested it over a BlackBerry, for instance), then your employer would have every right to physical access to your device. They can see where you've been, even during off hours. That in itself is a violation of privacy, plain and simple. You might as well have a tracker in your work-issued badge (which, perhaps, is sitting in your glovebox right now). You don't have to have something to hide for this to be a problem. There's already a privacy uproar over internet browsing, so much so that "Do Not Track" methods are being implemented in browsers now. Simply put, there's no need to keep an indefinite log of location on a phone. The last few hours? Sure: "take me home" and the like. But not "show me where I went on vacation last year."
1) Yes and no. SSD awareness is more of an operating system feature. Windows Vista/7, newer Linux versions (I don't remember the kernel version number), and yes, even Mac OS X (to some extent) recognize SSDs and behave differently (i.e. pass TRIM commands to the drive). As far as MBR and the like go, yes, they work the same.
2) Read my #1 response.
3) It's not imperative, but it is definitely SSD-debilitating if your OS "defrags" your SSD regularly. Debilitating, meaning it reduces lifespan (unnecessary writes) and can leave your drive running in a "dirty" state, like an unTRIMed drive.
Best bet is to run a TRIM-capable SSD and OS, or at least have garbage collection capabilities for the SSD.
A few other notes:
1) Glad the author used a V+100 Kingston drive. Their older counterparts (the SSDNow 64GB and such non-V+100 drives) are horrific performers compared to other like-priced SSDs.
2) "There's just one fly in the ointment – the age of the upgradeable computer is vanishing." - I would just like to refute this concept outright. Most PC laptops come with easy-access bays for hard drives and RAM, and manufacturers are making it /easier/ to reach such components. It's in the world of Apple that the "upgradeable computer" is vanishing. They go so far as to (attempt to) require Apple-branded, marked-up SSDs (via drivers) to support TRIM. This quote is from the skewed perspective of an Apple user.
The sad fact about all of this is that it will take SETI finding a viable signal for ANY government to take space travel seriously. Within months of finding a true extraterrestrial signal, I'm sure we'd see the beginning of a "to the moon"-style race over who can first exploit... erm, "contact and get to" the source.
"On a Microsoft report of a Microsoft product?"
Once software is written with 8+ cores in mind (note that many programs are simply dual-core or quad-core capable and gain no benefit from, say, a Phenom II X6), then we can have 16 cores. Currently, the better poke would be a more capable pipeline and faster clocks. Hence the Sandy Bridge parts are rather tasty: you can pump a 3.4GHz part to 4.6GHz reliably, which gives substantial performance gains across the board.
Hopefully ARM's entry into the low end (which will likely hurt AMD more than Intel, btw), and AMD's potential (please say it's so!) counter-punch with Bulldozer, will cause Intel to innovate and stop pulling its punches. It likely has the ability to pull a Pentium-to-Core2 jump again, but is trickling out the performance gains due to lack of competition in the high end. No sense playing all your cards at once, right?
Nah, FFS doesn't quite work as RTFA (A for Article, as opposed to the common "M" for Manual).
Just full-encrypt the PlayBook and it might be able to play host to email. BBs get away with security via password, so why not a PlayBook?
"'iPad? well, we dont know if thats secure, and it needs itunes installed, and its too shiny'"
Apple has shown next to no interest in making the iPad a "secure" device. It's not in their market goals, but if corporate users insist on utilizing the iPad for work, it's the IT dept that suffers, having to work around the lack of security on the device.
There would also be a significant price bump, since Apple would be out in the cold and any potential supplier would know they could take Apple to the cleaners on pricing, just for the privilege of doing business with them. No deal? No products. Ouch. Shoe, meet other foot.
"It should be strictly for government to person business for tax, health and benefits and not for general inquiries, monitoring / tracking or frivolous uses (e.g. lending libraries)."
In case you missed it, they're recommending it for online banking and the like too. It's supposed to be an "online identity," like Microsoft's single sign-on (Live ID) or the like. Once your username (likely an email address?) and password are phished, keylogged, or leaked in a DB hack, your life is an open book, with access to every account in the system plus government services.
As for the SSN bit, yes, Americans (mostly) do have it memorized. However, a hacker getting your SSN isn't going to get them into your bank account (without some social engineering at least...). Basically this online identity will exacerbate the problems we have with SSNs.
The government should invest more time in proper fraud protection schemes and less in helping end users reduce password re-use by implementing a single password for everything. At least with password re-use, there's no convenient list of all the places you use said password. (Yes, your email account would be such a list, but if you lose your email account you're toast anyway.)
Get some F4s while you can, and sit on your hands for 2 years to see if Seagate incorporates Samsung reliability or if Spinpoint drives incorporate Seagate's "quality"... *sigh*
If you notice, it's a percentage of users. Therefore, if new accounts are created by people wanting to host their iPhone snaps, the share of every other camera type (by the very nature of percentage statistics) must drop. Of note: the Nokia share is still increasing, even alongside the iPhone increase.
What should be shown? The number of users of each type. Of course that would expose their userbase count, but they could hide the raw numbers and simply give us a chart of "relative" camera use by user count. That would give us a true idea of which cameras are actually dwindling in use and which are increasing, with no bearing on each other's numbers.
Statistic fail, merely for hype.
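To see why, run the percentages on some made-up counts (hypothetical numbers, not Flickr's actual data): every camera can gain users while still losing share.

```python
# Hypothetical user counts for two survey periods -- NOT real Flickr data.
before = {"Nokia": 400, "Canon": 300, "iPhone": 100}
after = {"Nokia": 440, "Canon": 330, "iPhone": 330}  # every camera grew

def shares(counts):
    """Express each camera's user count as a percentage of the total."""
    total = sum(counts.values())
    return {k: round(100 * v / total, 1) for k, v in counts.items()}

print(shares(before))  # {'Nokia': 50.0, 'Canon': 37.5, 'iPhone': 12.5}
print(shares(after))   # {'Nokia': 40.0, 'Canon': 30.0, 'iPhone': 30.0}
# Canon's share "drops" 7.5 points despite gaining 30 users.
```

A chart of the raw counts (even rescaled to hide the absolute userbase) would avoid this artefact entirely.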
Makes you wonder if this includes their losses from the Cougar Point "recall." They make excellent processors. Unfortunately they don't have any competition in the high-end market. With the Sandy Bridge parts it's pointless to buy a non-"K" series part, since a 3.4GHz i5 can OC to 4.4GHz reliably with the stock fan that comes with it... who wouldn't want an extra ~30% bump in single-thread performance? If there were competition, the non-"K" parts would likely ship with higher base clocks...
But I digress...12.8bn should allow them to continue improving their CPUs. Can't wait for the next-gen architecture. Even a die shrink will be a nice bit of thermal/power reduction.
I've been running F2, F3, and F4 drives for quite a while now. I also have an assortment of Seagate and WD, and a solitary Hitachi drive. Care to know which ones have failed? Two Seagate 320GB drives. That's all. Granted, they lasted nearly two years and their replacements have lasted another two years now. However, I do back them up quite regularly to my F4 drive....
hypothesis: a proposition assumed as a premise in an argument
theory: a proposed explanation whose status is still conjectural, in contrast to well-established propositions that are regarded as reporting matters of actual fact.
fact: something known to exist or to have happened
Therefore, "I believe there is no God" is not a statement of "fact" but, at best, could be considered a theory. However, a theory (or theorem, for the maths people) is something that seems to work but lacks the definitive proof that would make it a "law" or "fact." So the statement actually takes the role of "hypothesis," since there has been no supporting evidence for or against the existence of the beardy sky-man.
However, I think everyone is missing the point that a religious leader has denounced humankind's push to control the world around us and stated we should all give it up. This is definitely blind devotion if I've ever heard of it. Any (other) religious person would suggest beardy sky-man would want us to learn and grow in knowledge....or was that passage simply skipped over in bible study?
"considering that grandfather plots are approximately the third most hackneyed cliche in all of science fiction."
And the most misconstrued, considering the alternate-timeline theory that resolves the grandfather paradox. Granted, physicists don't call it time travel; they speak of "closed timelike curves."
Likely, they figured "PlayBook" would evoke (American) football and could be used in meetings and the like to suggest productivity (since the "playbook" holds all your tactics and "plays"). Will it work that way? Likely not. Easier to say "It's a BlackBerry" and get instant "OOOoooo"s from your business associates. About the same as whipping out an iPad 2 in a coffee shop will do.
As for the 350(insert your currency here) Honeycomb 10.1" tablet, my money will quickly follow yours. My requirements for a tablet worth my money will be: dual core, 1GB RAM+, 16GB onboard storage (apps and whatnot), SD slot (32GB+ capable) for videos/music/docs (flash stick replacement potential basically), and capacitive touchscreen (obviously) with a decent viewing angle (where every tablet [minus iPad2 and a single Android tablet] so far fails). I'd even be willing to sacrifice 3D game performance for better battery life (retaining video decode and the like of course).
I couldn't call the Xoom cheaper, and not quite better either, compared to the iPad 2's gfx capability (still superior) and screen. Now, the Samsung 8.9" and 10.1" Tabs are/will be superior (with a performance hit in gfx compared to the iPad 2). By year's end, the iPad 2 will look quite antiquated, just like the iPhone or MacBook Air does now. That's Apple's biggest problem: they're a high-end market player without necessarily having high-end components/features. Their "high end" is in "oooh, shiny": a status symbol (phallus waving) and UIs.
First interview person Henry Wertz: "The big one? Price; tapes cost about 1/10th the cost per byte of hard disks."
1.5TB Maxell LTO5 tape from Amazon: $67.95. 1.5TB Western Digital Elements external HDD from Amazon: $78.62. If you want to get really technical, you can get one of those HDD docking stations (analogous to needing a tape drive for tapes [which runs about $2,600 for LTO5, btw]) and buy raw disk drives: Western Digital Caviar Green 1.5TB from NewEgg for $59.99. If you want to get really picky, you can assume no compression on the hard disk and an optimal 2:1 compression for the tape to reach its 1.5/3.0TB capacity; then you have to compare against a Hitachi Deskstar 3TB (NewEgg, $139.99). Mind you, compression on disk is quite easy, and 2:1 is by no means difficult to achieve using even low-overhead on-the-fly streaming compression, while backing up a video or JPEG library will get you only 1.5TB out of the tape.

Therefore, even the worst case (no compression for disk, optimal 2:1 for tape) lands at 2.06x the cost of tape. The best case is only 88% the cost of the LTO5 tape for like capacity. So no, not 1/10th the cost. Sorry. Especially when you factor in the $2,600 tape drive versus a moderately priced disk dock such as the Cavalry EN-CAHDD2BU3-ZB at $64.99 on NewEgg.
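To check the ratios, here are the comment's own prices run through the arithmetic (prices as quoted above, circa 2011; nothing else assumed):

```python
# Prices quoted above (USD).
lto5_tape = 67.95       # 1.5 TB native, 3.0 TB at an optimal 2:1 compression
wd_green_1_5tb = 59.99  # raw 1.5 TB disk (best case for disk)
hitachi_3tb = 139.99    # raw 3 TB disk (worst case: tape gets full 2:1)

best_case = wd_green_1_5tb / lto5_tape  # like-for-like native capacity
worst_case = hitachi_3tb / lto5_tape    # disk uncompressed vs tape at 2:1

print(f"best case:  disk is {best_case:.0%} the cost of tape")    # 88%
print(f"worst case: disk is {worst_case:.2f}x the cost of tape")  # 2.06x
```

Either way, nowhere near the claimed 10:1, and that's before counting the ~$2,600 tape drive against a ~$65 disk dock.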
Second interviewee Evan Unrue: "but also, disks keep spinning, so doing this comes with a larger physical footprint in the datacenter and a larger power bill. Tape scales by adding cartridges which don't spin when not being used and don't take up space in the IT room as they scale"
Why does everyone assume that a disk-based solution mandates the drives always being on? Sure, the first target in the D2D2T or D2D2D chain must spin, but not the last stage. Disks would work as removable media just as effectively as tapes in this regard. I would suggest that disks are also less vulnerable to environmentally caused "bit rot," since platters aren't prone to going brittle the way tape is; at the very least, a disk can better withstand a less-than-ideal storage location (think the attic of the IT director's house, or the like) if necessary.
I applaud the third interviewee Chris Evans for pointing out some of the shortcomings of tape solutions. Granted, disk has disadvantages too, and as Chris said, it comes down to finding a balance between the two based on your RTO/RPO requirements. The key is finding the best spot to use the appropriate medium. For enterprise environments with hundreds (or even tens) of TBs to backup, you can't beat a tape library for convenience. For anyone with 3-6TB or less for a full backup set, anything more than tape drive or external HDD is likely overkill, especially for the sub-1TB market.
As always, check the logs on your backup jobs frequently. If that's too much of a pain, find a way to have the results emailed to you (the modern equivalent of being paged) upon completion or failure. For those willing to roll up their sleeves (such as the ZFS/CopyFS commenter above), there are plenty of methods you could employ to produce a better setup for your organization than BackupExec or the like could provide, and using HDDs makes that solution even easier and more feature-rich.
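For the "email me the results" part, a minimal sketch using Python's stdlib (the addresses and job names are placeholders; actually sending the built message is one `smtplib.SMTP(...).send_message(msg)` call away):

```python
from email.message import EmailMessage

def build_report(job_name, ok, log_text, to_addr="admin@example.com"):
    """Build a pass/fail notification carrying the tail of the job log."""
    msg = EmailMessage()
    msg["Subject"] = f"Backup {'OK' if ok else 'FAILED'}: {job_name}"
    msg["From"] = "backup@example.com"
    msg["To"] = to_addr
    msg.set_content("\n".join(log_text.splitlines()[-50:]))  # last 50 lines
    return msg

msg = build_report("nightly-full", ok=False,
                   log_text="Job started...\nVerify failed: tape03")
print(msg["Subject"])  # Backup FAILED: nightly-full
```

Hook a call like this into the end of whatever script drives your backup jobs and the "check the logs" chore comes to you.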
From the article: "for most of us, a 4k TV set still remains years away. So too does suitable content. Most broadcasters are still not solely operating in HD, and Blu-ray capacities aren’t high enough for 4k video right now."
OP: "great that technology is taking leaps forward but not really viable for the residential consumer."
Even though the storage medium isn't there to feed this "4K" set natively, upscaling can be more than useful as a stop-gap. Many people are quite happy with how their DVDs upscale to their 1080p displays; so much so that Blu-ray uptake might have been hampered by the prolific upscaling support in modern DVD players. I've seen 1080p look grainy even on a 48" screen (from a Blu-ray over HDMI, btw), with clearly visible pixels (dot pitch was the likely culprit, I'll admit). 4K with upscaling will put Blu-ray quality leaps and bounds above DVD (DVDs would likely look quite horrid upscaled to 4K compared to Blu-ray).
Once a format with proper capacity comes out (multi-layer Blu-ray, or the return of the superior-capacity HD DVD [yes, unlikely, I know]), people can start having native 4K video. But until then, start mass-producing these things so that by the time 4K content arrives, the TV prices will be within reason.
"(1) DIMM slot does not properly signal power-loss"
The article said they put a supercap on the board to cover power loss.
"(2) memory hub has not been designed with microsecond-level time-out in mind"
You can be forgiven for thinking it uses the memory controller as an interface; the article wasn't very clear on that point. It mounts into, and is powered by, the memory slot, but data likely travels over a SATA port soldered onto the SATADIMM board.
"he hardware specs to pull these features off aren't in the current phones"
So, if the hardware specs are not on current phones....then why was the alpha build of Mango being demoed on a "current" HTC phone?
Still slower CPUs per cycle and a half-step behind on die shrink :(
"I'm still picking up the monthly patch update from MS, but was there any news on the SNAFU that was W7 SP1."
SP1 worked fine for me. On all 7 machines I've patched with it so far.
"Many of the devices are locally branded tablets from from Asian manufacturers, and how many of them will ensure Honeycomb support without Google's say-so or Nvidia's aid?"
If you look at tablets like the Advent Vega or Viewpad, the ability to download and build your own Android firmware is what gives groups the ability to provide those desired updates to antiquated devices. Will 3.0 be able to be hacked to work on these older devices? Perhaps. Will I buy one on that hope? Definitely not. But it is nice to know that there are options to upgrade your firmware, even if your manufacturer has long since (since it was launched?) abandoned your device.
That's a large price premium for network (non)connectivity.
Find a larger, or cheaper TV without worrying about network connectivity and pick up a Western Digital WD TV Live Plus 1080p HD Media Player. Then, in 2 years when it's obsolete, ditch it and buy a newer one. Loads cheaper than just ditching your TV and buying a newer one....
The computers of 5 years ago weren't even fast enough for the menial office tasks of yesteryear. People merely suffered along because their kit was as fast or faster than most of what was out there. Now that home computers have surpassed office kit, people feel like they're stepping into the dark ages when they use their work computer. It IS slower. They can't have 3 memory-intensive applications open at once (Windows, Excel, and Internet Explorer... :P) like they do at home.

Granted, the CPU has been more than powerful enough since the Core2 line came out, but RAM has been insufferable on "business class" machines even now. I'm hard pressed to find a vPro-enabled "business class" machine from HP with more than 2GB of RAM without going to the $800 mark. Most businesses likely buy the kit stock or request an extra stick of RAM (if they're smart). This will definitely help things a little bit.

What's the other problem? The biggest bottleneck in modern computers: the hard drive. Business machines get a bog-standard cheap hard drive. A Western Digital Black would be a decent slot-in for an extra $20 over the de facto choice. However, many business machines don't store files locally; they host their OS and a handful of assorted programs. The best idea would be to slap in an SSD of the ~60GB variety (or even less) in lieu of a spindle drive. This puts the performance bottleneck back where it should be: the CPU. I've seen my old Pentium D machines out-perform newer quad-core Core2-based machines with just an SSD swap-in. (Disclaimer: "out-perform" is entirely user perception, based not on CPU benchmarks or the like but simply on Windows boot time and application load time.) Granted, code monkeys and other compute-intensive users need better hardware all around, but the receptionist's computer would be a new beast with just a bit of RAM and an SSD.
You're expecting a data centre that requires less power than it takes to run the data centre? A PUE of 1 means zero energy is used on cooling and other overhead (the denominator is the power required by the computing equipment, mind you). So unless the servers are generating their own power from an alternate universe (see Stargate for precedent), you won't see a PUE below 1.
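PUE is simply total facility power divided by IT equipment power, so 1.0 is the hard floor; a quick illustration with invented numbers:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

print(pue(1500, 1000))  # 1.5 -- 500 kW going to cooling and other overhead
print(pue(1000, 1000))  # 1.0 -- the theoretical floor: zero overhead
# A PUE below 1 would require the whole building to draw less power than
# the IT kit inside it, i.e. power appearing from nowhere.
```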
The difficulty with the "hot water" idea is that you can only heat the water up to the temperature of the hot-air exhaust (perhaps ~48°C or so if you're running dense), which is still about 12°C colder than the energy-conscious "low" setting of a hot water boiler. Heating the water beyond that would require spending energy to push heat into already-warmer water.
Now, they already heat the building the data centre is in, but the idea of heating a surrounding residential zone is kind of interesting. Granted, they can't do so during the months when most would prefer AC over heating....and they'd have to figure in the cases of "what if most of the homes are already 'hot enough' and turn off their heat at the same time?" It starts adding complexity when you can't for-sure dump your heat. Ground pipes have the same general problem as the water heating method: once the immediate surrounding ground is saturated, the cooling effects become less efficient, and you're forced to dump heat elsewhere. Unfortunately for your "reclaim it in winter" idea, the heat would have long since dissipated by the time the season changes.
Your "joined-up thinking" would work, if your view of thermodynamics was actually accurate...
They've already thought of that. Look up DC Bus Bars or the like. They have a single large UPS to high-volt bus bar transformer, and it's from that bus bar that a step-down transformer powers each server.
Could just turn off the webshield. It would catch the script in the web cache, but you'd at least have been able to surf the internet.
....so it gives meaning to the premium price they paid for the closed architecture.
"I'd say the ban should extend to "any weapon where you cannot see directly that the enemy combatant you are about to kill or maim is a human being"."
Likely desired so that he can say a quick "Hail Mary" before he's jibbified?
"The kinetic energy generation system is smaller than the laser and uses a loading and targeting system that is completely immune to computer failures and ECM."
Don't the launcher rails destroy themselves after about 3 shots?
It's a shame The Reg didn't crunch numbers for the Vertex 3 drives. The Crucial m4 does have some nice specs, but it eats more than 3 times the power of the F120 under load (3 watts, one of the highest among these models), so it's not necessarily the best for a "laptop." I'd be more inclined to get the Samsung for that, but I don't have wattage numbers for it. The F120 holds its own against any of the others with real workloads (rather than synthetic) due to its on-the-fly compression; Crystal uses incompressible data, I believe, so these numbers are a worst case for SandForce-based drives. Going on performance and power consumption, a SandForce-based drive (like the OCZ or F120) or the Intel 320 would be best. If you want raw performance, the Samsung or Crucial would definitely be on the table. However, with those price margins, the Vertex 3 240GB would be a forceful contender, if not the leader.
Anand or Toms has the numbers you'll need. :)
"It just works"
Loads of things they didn't control for in their test subjects. They likely just excluded all forms of cancer that could be attributed to other causes (skin/lung cancer) and focused on others (stomach, perhaps?). They should have found a source group that didn't sunbathe or tan, didn't smoke, do drugs, or drink coffee, and had no family history of cancer. Then perhaps they'd have a better subject group to split out by drinking habits.
"Why would you want to Start a Shutdown????"
It hasn't been a "start" button since XP was replaced by Vista. It's the "Windows" button. Hence the icon on it.
""Windoze" is as full of holes as a Swiss cheese."
Wow. Quotes AND slang spelling. Grow up much? Either way, if you don't like all the leaky holes in your Windows box, pull up your Windows Firewall and close those open holes. You don't need SMB? Close the ports. The difference between Windows and Linux in this instance is that Linux asks you which ports you want open during install (since it has them all [well, almost all] closed by default), whereas Windows just assumes you'll want all the "it just works" file sharing, printer sharing, DLNA, etc. to work.
Windows hasn't been "swiss cheese" since WinXP.
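If you want to see which of those "holes" are actually reachable before and after tightening the firewall, a plain TCP connect probe does the job (stdlib only; the host and ports here are just examples):

```python
import socket

def port_open(host, port, timeout=0.5):
    """True if a TCP connection to host:port succeeds within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# e.g. is SMB (445) or RDP (3389) listening on this machine?
for p in (445, 3389):
    print(p, port_open("127.0.0.1", p))
```

Run it from another box on the LAN to see what your firewall actually exposes, rather than what the settings dialog claims.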
...Lewis didn't write the research paper, nor the conclusions therein. He merely brought it to our attention.
"to $895 (£548) for a fully specced model with 1TB of storage, 8GB of memory, built-on 2.4GHz 802.11n Wi-Fi and a Blu-ray drive."
Seriously, who wants to fork over $900 to chain down 8GB of RAM and blu-ray to an Atom CPU? Would be as useful as dumping that 8GB into the original Commodore....
"Well no one takes these people down the shops at gun-point and forces them to buy these things at these stupid prices."
Actually, consider that they're (at the cheapest) $500 into the platform; if they want the "extra" features, such as HDMI, they either have to change platforms or shell out for the adapters. Another $50 is small compared to a shift to something like the Xoom. Hence, they're effectively forced to buy these magical add-ons to get the functionality out of their iDevice. Granted, they could simply live without such features. The sad thing is, no one seems to care what the down-the-road costs of their devices will be.
New marketing idea for Android tablets:
Cost of iPad2: $500
Cost of our tablet: $450
Cost to make the iPad2 able to do the things WE can out of the box....hook up to a set of speakers or connect to your car stereo, play 1080p (impossible, but still...) across HDMI to your TV, connect to your digital camera, work as a mass storage device or read an SD Card: $XXX.
True cost of the iPad2: $700 (we'll call it an even $200 for the adapters)
Our tablet: Still $450. (plus a fiver for your HDMI cable).
"...to support its iTunes video service, according to report citing an "inside source"."
So, does this mean that the Netflix App is the next one to get face-punched?
"allowing tune junkies to store their music collections on Apple's service and access them from any device"
Wouldn't it be a LOT easier just to keep a database of the music they've "purchased" and present it to them as "virtual" files? The rest of the stuff that might be uploaded (assuming they allow non-DRM content to be uploaded...) could be de-duped in the cloud based on song/video metadata, or by the time-tested block-level method...
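The block-level method is content-addressed storage: hash each block, store it once, and give every "owner" a list of references. A toy sketch of the general technique (not Apple's actual implementation):

```python
import hashlib

store = {}  # block hash -> block bytes, shared across all users

def add_file(data, block_size=4096):
    """Store a file as blocks; identical blocks are kept only once."""
    refs = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # no-op if the block already exists
        refs.append(digest)
    return refs  # a user's "file" is just this list of references

song = bytes(10_000)     # stand-in for an MP3 that two users both upload
refs_a = add_file(song)  # first upload stores the blocks
refs_b = add_file(song)  # second upload stores nothing new
print(len(refs_a), len(store))  # 3 references, but only 2 unique blocks
```

A million users "uploading" the same chart-topper would then cost the cloud one copy of its blocks, which is presumably the whole point of the exercise.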
Perhaps the Reality Distortion Field emitted by the iPhone causes users of said iPhone to overlook things like dropped calls and still be "overall happy" with their iPhone experience....of course, if it was WiFi that kept dropping out, causing forced page re-requests to be the norm, I'm sure there'd be a bit more of an uproar.
"...as well as flooding the area with NFC handsets and SIM chips."
Does that mean they're discounting NFC-equipped Droids? I'll buy one! :)
For future reference, if you're going to infringe a patent, you better just infringe ALL the patents from that company at the same time, so that you'll only get slapped for one infringement, but benefit from their entire portfolio....good to see we now have precedent.
"Clean, cheap, abundant fuel..."
As long as "cheap" is in the equation, your vision won't happen. Money is the driving force for any of this. Columbus sailed to the "new world" to find a better TRADE ROUTE so his backers could make money. Until something with the economics of "unobtainium" is found, there won't be a massive drive. Show the world that an asteroid is 50% gold, 25% titanium, and 25% platinum and you'll have scores of people trying to mine it. Heck, even at half that. Oh, and with leniency for the "acceptable loss" (sorry, "tragic loss" for the supporters) that is seen in coal mining. Columbus had deaths on his voyage, and we can mitigate better now; just because someone dies doesn't mean we should halt our progress for 10 years while we hold a tribunal.
Back to the point: cheap. Your "Star Trek" ideal world won't happen with the driving force being capitalism. As long as it is expensive to get it, the base line cost will always be high. So, no, you won't be seeing cheap space resources until it becomes more economical to get them; which is the point of SpaceX in case you haven't noticed. Of course, the other option would be adopting the Star Trek form of government, which only works in (Science) Fiction.
"The flat mirrors at irregular distances is a better chance at approximating a parabolic shape focusing at the single point. But I think the engineering required to get the precision alignment of the mirrors makes it impossible for it to have been done in ancient times."
"The focal point for the parabolic mirror is too close to shore to have an effect."
Very good. The point of this experiment was toasting the ships/sails of an invading fleet, which would require a minimum distance of 150 feet, if not 150 METERS, just to make it more useful than, say, FIRE ARROWS. As stated previously, a single parabolic dish would need such a slight curvature that their tools (likely just a hammer and heated metal, though the blacksmiths of the day were no doubt skilled nonetheless) could not reproduce one with the required focal distance. Then there's the obvious problem that heating a point on the ship takes more than a few seconds, which would require the ship to be stationary. In the Mythbusters experiment, the ship was stationary and sealed with commonly used (and ideal) pitch, and the mirrors were barely 150 feet away. After their burn attempt, they did manage to char the wood, but nowhere near the necessary 2-second flash burn. A modern analogue is using a laser to destroy an ICBM en route...
>Ancient< Death Ray - definitely busted. Computer tracking and megawatt lasers are having a hard enough time as it is. :P
...to destroy a CD in a "normal" drive is to have a fracture in the disc. I've seen some (stupidly) attempt to play their FF7 or somesuch computer game with a crack running half the radius of the disc. Only seen or heard of one catastrophic failure though.
I have a friend who claims to have "OC"ed his CPU by (stupidly) splicing in an extra PSU to his ATX mobo connectors. Said his CPU (being bound on top by his heat sink) actually popped through the base of his motherboard and through the side of his case, sticking into the wall. I believe his story about as much as one would believe Kill Bill's version of "punching" through 6 feet of dirt.... which Mythbusters attempted as well, incidentally.
"What if the live data and backups are in the same datacentre?"
The point of a global network of datacentres is precisely so the "backup" isn't in the same datacentre. Not only that, but the "live" data is redundant across multiple datacentres in the event of an outage. They likely split data/parity between the 3 or 4 datacentres closest to your location, so in the event of an outage, there's not a lot of data to push around to rebuild the "lost" information, or in the case of mirroring, much data to push to a new centre to maintain the mirror.
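The simplest form of that data/parity split is XOR parity, RAID-5 style but spread across datacentres; a toy sketch of the principle (three sites, any one of which can vanish), not how any particular vendor actually shards:

```python
def xor(a, b):
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

blob = b"customer mailbox data!"     # 22 bytes of "live" data
half = len(blob) // 2
dc1, dc2 = blob[:half], blob[half:]  # two datacentres hold the data halves
dc3 = xor(dc1, dc2)                  # third datacentre holds the parity

# Any single site can go dark and the blob is still recoverable:
assert xor(dc2, dc3) == dc1          # dc1 lost: rebuild from dc2 + parity
assert xor(dc1, dc3) == dc2          # dc2 lost: rebuild from dc1 + parity
print(xor(dc2, dc3) + dc2 == blob)   # True
```

Real systems use fancier erasure codes tolerating multiple site failures, but the rebuild traffic argument in the comment above is exactly this: only the missing shard has to be regenerated and shipped.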
A single vendor then only becomes a problem if you're bound to their services (for whatever reason) and they "adjust" their fees and ToS (like Mozy did). Or if they go out of business (like Mozy might, however unlikely).