Quick thought for you:
I'd rather have a few % of CPU taken from an i5 (or X4, depending on your faith) than offload to what is likely a 400MHz chip (if it's lucky). Most hardware RAID cards bottleneck themselves by not being able to compute parity fast enough and not being able to handle the volume of data in transit. Even high-end RAID cards can't handle the throughput that software RAID can. If you don't believe me, go get two high-end RAID cards, stick 8 SSDs in RAID-0 on them, then software RAID-0 the two RAID volumes together. It scales quite linearly, which you wouldn't see if the software RAID were the bottleneck.
"You'll be lucky, BTW, to get a Macbook Air for £650 after a year or have a Dell depreciate by only 40% of it initial value."
The reason? Line-up refreshes. In one year, you may get a speed bump (proc/RAM) or an HDD size upgrade in the MBA/MBP line-up. In six months, the Dell, et al, laptop will have gone through 2 revisions, and in a year it will be shipping with the next-gen CPU (Ivy Bridge in this case). Who would want to shell out £350 for a second-rate, old-tech PC laptop when you could drop another £500 for the New Shiny and still pay barely more than the original purchase cost of that MBA? Granted, if you're speaking portability, the MBA does get around fairly easily, but there are PC counterparts to that. The point is more along the lines of why you can still get last year's MacBook for a high price: you can't get anything (much) better now. They just don't move that fast. Hence the MBA used a slow, underclocked, ultra-low-voltage Core2Duo (for three years) up until the SNB refresh a couple of months ago.
A few corrections
As stated in an earlier reply by someone else, but with a lot more bias: you're wrong.
The new AMD kit carries a mere $5-10 premium over the "on sale" i3-2100. If you compare against the i3-2105 (with the HD 3000 graphics, rather than the far worse HD 2000 of the original i3-2100), the price matches exactly.
Platform is also a consideration, since A75 boards, when features are compared, are generally cheaper than their 1155 counterparts. Not by much, mind you.
Also, as a correction to the biased reply: it is true the i3 is a dual core with hyperthreading; however, hyperthreading gains a lot more than a mere 10% (the OP stated "2.2", factoring in the dual cores). There's upwards of a 40-60% increase in threaded situations over running with hyperthreading disabled. What was never touched on is that the i3 cores, MHz-for-MHz, perform better than the AMD cores.
In the end, the i3-2105 (yes, better GPU than the 2100) has better performance in single-threaded or general light loads. The AMD A8 chip does better when you're taxing the system with heavily-threaded loads (far better, BTW). However, you're not likely to do that unless you're encoding video or doing many things at once, at which point you bought the wrong CPU either way. The GPU core in Sandy Bridge doesn't hold a candle to the A8's GPU core. The A8 has 2x the performance of the HD 3000, hands down. There's just no comparison. The only advantage the HD 3000 has is QuickSync. But then again, the A8 has DX11.
Who's the victor? Anyone who buys the $500 Walmart machines. Why? Because they'll have an AMD CPU.
"I'm pretty sure that if I could use a platform working 10 years ago it would absolutely fly on today's hardware and honestly not lose that much in the way of functionality."
You do realize that 10 years ago you barely had USB support in Windows; you definitely didn't have TRIM, SATA 2/3, PCIe, or effective multi-CPU computing (no, most programs were, and some still are, single threaded); and Windows 2000 had a nasty 128GB(ish) hard drive limitation requiring a hack (to enable 48-bit LBA) to work around. And yes, this is the same 10-year-old equipment you're speaking of.
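For the curious, the "128GB(ish)" number falls straight out of 28-bit LBA addressing — a quick sanity check in Python:

```python
# 28-bit LBA addressing: 2^28 addressable sectors of 512 bytes each
sectors = 2 ** 28
limit_bytes = sectors * 512

print(limit_bytes / 10**9)  # ~137.4 GB in decimal units
print(limit_bytes / 2**30)  # exactly 128 GiB in binary units, hence "128GB(ish)"
```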
One other thing to mention: "I could use a platform working 10 years ago it would absolutely fly on today's hardware" — sorry, no you can't. Just try installing Win98/2k natively on the metal. You'll quickly realize that your 10-year-old "platform" is now relegated to VM-only status. Might as well claim that playing the original Super Mario World is all you need, because the graphics were good enough and it would simply fly if played on a Wii.
You forget FCoE and the NAS/SAN solutions that run over Ethernet. Simply running no-local-storage servers with 128GB+ of RAM hosting 10-20+ VMs is enough to saturate 1Gb, and likely 10Gb, Ethernet.
That and you still believe BG actually said the 640KB quote I assume?
They could just support community development of Open servers, such as is happening with the reverse-engineering people and World of Warcraft....but sanction it.
Could just pick "Run all from disk" (or whatever) option rather than the "Install on first use" defaults.
For once, a social network that makes sense. The big problem MyFaceSpaceBook has is that your work "colleagues" and your BFFs get the same view of your posts, rants, raves, and terrible cell-phone pics from the night before. "Circles" would allow you to tier that into "this is a picture of me, some basic work/edu history" and "this was me last weekend on a bender."
"And we're not talking IE6 here either; the guy in the article didn't even get his release through testing before support was dropped. That's not cool."
All this in mind, it's when you're forced to update from old versions (v3.6+) to the newest that they shoot themselves in the foot.
Have to start somewhere
"because they don't have the necessary education to work in a knowledge economy (unemployment among graduates has been almost unaffected by the supposed crises)"
What about the young undergrads who are trying to pay their way to this "necessary education" you speak of? What will they do when fast-food establishments no longer require burger flippers and fry tossers? Where are the uneducated going to work when there is no need for "sandwich artists" anymore? Just more people to support on welfare, meaning tax hikes, meaning those with the higher education still end up paying for it, perhaps?
Well, not quite....
"the Air is far from being a toy in its performance"
Up until the most recent refresh a couple of months back, which brought the MBA up to Sandy Bridge, it was running an underclocked, undervolted Core2Duo mobile chip. While not a sluggish processor (compared to Centrinos of the day), it was by no means a zippy CPU. Fortunately, Apple is no longer selling netbook-esque (read: dated, slow) internals in its MBA.
So, what happened to....
"The same source also spoke of a new iPad display with a resolution one-third higher than that of the iPad 2."
So, what happened to that mythical 2X-resolution screen at 2048x1536 that all the iFans were cheering about for the iPad 2?
Yes, it's a rhetorical question.
"...and then being able to buy additional features as cheap add-ons would be awesome"
Now if only Apple sold anything at a "cheap" price.... C'mon, these are the 100% markup people.
I, for one....
"What "thing" is it that gets "worse" because of this?"
I'll head out and buy some various great domain names, like the following:
.m and .om (for obvious above reasons)
Perhaps I'll have paypa.lcom and amazo.ncom in my list. If people can't tell the difference between:
what makes you think they can tell the difference with:
You laugh, but....
Look at technology. What are the main two drivers? War and Sex. I would welcome holodecks, even if their primary driver was porn.
....from a company that sues others for using "icons" in a "tiled array" on a "handheld device"....
"...so they probably didn't even see it"
I think you failed to read the article, where it stated he got a personal call from Apple in which they said they'd shown it to the developers, who "were impressed." 'Nuff said? Thought so.
/where's my RTFA (a for article) icon?
MS Office is written in C/C++ (MS Visual or otherwise). MS has written APIs to utilize Office in .NET or the like, but it doesn't mean the code is written in .NET. Might as well claim MS Access is written in VBA....
Nah, Chrome's already covered: Native Code plugin FTW!
This keyboard isn't being pitched as ergonomic (thankfully, because it's not). Having no physical buttons to feel and press is actually worse, from an ergo standpoint, than a 'real' keyboard. This issue came up with the infrared, projected keyboards.
...the original tricorder didn't need blood, breath, or pee. Likely used some form of portable MRI to take the readings though, as it was just waved near the bodily injury. Now, Next Generation had a detachable mini-wand that was used as a remote sensor, rather than simply waving the whole device itself. The other trick is that the tricorder acted almost like a read-only (no way to act on the environment) Sonic Screwdriver, being able to detect lifesigns, mineral composition, air quality, etc. There's quite the challenge ahead for those seeking the $10mil.
Now the down side is that it will take years (as stated) for the drug to be put through additional non-human trials, then a (few) human trials, FDA approval, etc before it will be available to the many individuals staring death in the face during that time, as most die by the age of 22 or so.... I'd certainly hate to be a sufferer just hitting the 20yr mark, knowing I'll die just a year or two before being able to be treated.
Considering that (most) MLC manufacturers rate their 22nm/34nm MLC NAND at a 5,000 write/erase cycle lifespan, 18,251 is quite exceptional. Definitely a boon for enterprise markets, but hardly useful over traditional flash for consumer markets.
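To put 18,251 cycles in perspective, here's a rough total-bytes-written comparison. Note the 240GB capacity and the write-amplification factor of 1 are illustrative assumptions on my part, not figures from the article:

```python
# Illustrative figures -- capacity and write amplification (assumed = 1)
# are my assumptions, not numbers from the article.
capacity_gb = 240
rated_cycles = 5_000      # typical 22nm/34nm MLC NAND rating
tested_cycles = 18_251    # the figure quoted above

# Idealised total writes = capacity x P/E cycles
print(capacity_gb * rated_cycles / 1000, "TB")    # 1200.0 TB
print(capacity_gb * tested_cycles / 1000, "TB")   # 4380.24 TB
```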
"How the hell do you get a shipminds visible manifestation, and multi-kilomiter long ships hull, probably concealed by layer upon layer of 'fields', to emote in a fasion an audience can appreciate?"
It's simple. Ever seen Tank Police?
This is clearly MS attempting to attract users by offering something akin to FaceTime (how well FaceTime actually works is beside the point...). They'll likely keep support for alternative platforms (likely since it's already there), but I could see them being lax on emerging platforms (MeeGo or some other). But Ballmer did get one thing right, it's about market penetration. The more platforms he continues to support, the more users he'll have, and the more people will want to have Skype to talk to them "for free." I'm just hoping his adverts don't become intrusive. A side bar of ads is tolerable. The popup-in-front-of-video that YouTube does deserves to be shot in the head. (I suggest a sidebar since most screens are the usual 16:9 nowadays, making vertical real-estate fairly valuable)
Drop the price down to the 200 range and put 32GB of flash in there instead of whatever HDD it's likely pimping, and it would be worth considering. Could always stuff a USB-based HDMI port off the back too...
Go a step further
Not only would they not have a "members list," they likely wouldn't log their (likely illegal) chats for the FBI, et al, to find later. Black hats like these might lack some (hopefully) common social well-being characteristics, but they're by no means idiots. The 650 IP addresses are likely usernames and attached last-logon IPs at worst, be it to the site or just to IRC channels hosted on the server.
Intel has already released the info on their Ivy Bridge sockets. Yep. Pin compatible with 1155. You could drop Ivy Bridge in a P67 mobo or a Sandy Bridge in the new 7-series chipset. (BIOS updates apply, as usual).
I guess they actually learned something about compatibility from AMD's AM socket. Although, I'd suggest doing your research before looking the fool by spouting based on your speculation. Perhaps you got suckered into buying a 1366 socket?
Was doing so well
This started out as a great, well-balanced article. Even had comments and clarifications such as: "But disk, constantly online disk, is always at risk....", note the "constantly online disk" clarification.
Then, disregarding all this preamble, Chris writes his conclusion thus:
"Tape's cost/GB stored blows disk away. Tape's reliability, with today's media and pre-emptive media integrity-checking library software is far higher than disk. Tape cartridges don't crash. Tape cartridges aren't spinning all the time, drawing electricity constantly, vibrating themselves slowly to death, generating heat that has to be chilled, and – most importantly – are not always online, always susceptible to lightning-quick data over-writing by dud data or file deletion."
He again compares to tape and states "and – most importantly – are not always online" (after spending a good half of a paragraph on the effects of disks being always online). He also assumes that pre-emptive media integrity-checking is a common feature, rather than the new (old?) idea that just got implemented in a single tape library from a single vendor....
Also, once again, he asserts "Tape's cost/GB stored blows disk away," which, in fact, it doesn't. A "FUJIFILM LTO Ultrium G5 - LTO Ultrium 5 - 1.5 TB / 3 TB" (from Amazon) lands at $67USD, being one of the cheaper options, but you can get a "Seagate Barracuda 7200 1.5 TB 7200RPM SATA 3Gb/s" (from Amazon) for $69.99USD (regardless of how you feel about Seagate; want WD? It's only $64.99 atm). That's bit-for-bit the same size (the 3TB figure assumes 2:1 compression, which can be done with HDDs as well; the effective bits are the same 1.5TB). Sorry, Chris, cost evaluations are required before claiming a cost difference that "blows disk away."
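The per-gigabyte arithmetic behind that comparison, using the Amazon prices quoted above (media cost only — the tape drive itself is extra):

```python
# Street prices quoted above (USD), both media at 1.5TB native capacity
tape_price, disk_price, capacity_gb = 67.00, 69.99, 1500

print(round(tape_price / capacity_gb, 4))  # 0.0447 $/GB for the LTO-5 cartridge
print(round(disk_price / capacity_gb, 4))  # 0.0467 $/GB for the Seagate
```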
As for longevity, I'm willing to bet disk has a higher in-use lifespan too. Have a tape that has as many on-hours as an HDD, and tape will lose. Granted, this has no relevance since most backup storage is used perhaps 30 times before being permanently archived/retired.
When you're looking at solutions such as "Overland Storage REO 9100C VTL" or the equivalent Tape variety, you're definitely going to do better with the traditional tape option, due to the always-on disks and the like, but use a JBOD disk-spanning option, and you could take your backup targets offline (the disks) once your backup job is complete (referring to the last D in a D2D2D option).
Disks have pros and cons compared to tapes, but currently, it's more of a user-preference than any technical or "better than" mythos that delineates the use of either.
@ Sir Spoon
Even with a reduced presence, all they need is one correctly identified marker to shoot you down (both stealth and personal profile).
"I will never use facebook, but my wife does - so I banned her from mentioning me or including any photo's that I might be in :)"
Even though you put a :), I'm still inclined to believe you truly think you're shielding yourself. All it takes is a family member or friend who happens to have taken a picture of you, uploads said picture to Facebook, and then "tags" you and types in your name. You don't have to be a Facebook member to be manually tagged. If you are a member, you at least get a courtesy notification that you've been tagged in a photo so you can delete said tag. As a non-member, you don't get that privilege. Then the CIA could somehow *wink wink* get hold of such tagged images and process you into some facial-recognition database.
Welcome to the computer age. 1984 may not have been nearly accurate enough.
"The point is Android sends a unique identifier. iOS doesn't no matter what settings you have enabled. Google shouldn't be allowing such information to be collected. Its obvious they do it to make another few bucks out of you."
The unique ID is likely a quality-control mechanism. If a bunch of bogus data is sent from a single UID, they can purge their system by deleting all data from that UID. With the data untagged, there's no way for them to purge the bogus data short of cross-validating (and they would have no way of knowing how many times said UID had reported). It would likely be a trivial matter to have your rooted 'droid spam thousands of bogus data packets, and without the UID, Google would only see that a certain bit of data has been "upvoted" thousands of times, and thus presume it accurate. This has plenty of bad connotations if abused. With UIDs (albeit potentially falsified, but easily blocked when checked against valid UIDs), it would be easy to spot such (single-UID) spam.
"If Anon did do it, there'd be no cause for alarm. Anon hacks things because they can (tm) and would not use or resell the personal information collected."
But it could be held for ransom.
"But some of those Android tablets cost more than the iPad.
So cutting corners doesn't seem to significantly reduces the cost."
Don't forget also that Android tabs can do a significant bit more than a stock iPad 2 without you having to buy dongles — HDMI out and an SD card slot, to name two. Both dongles are fairly pricey for the iPad 2. HDMI out I can see leaving off a tab in most uses, but the SD card is almost a necessity for content consumption (my music collection alone could easily fill an iPad 2).
So, what is the cost of that ultra-cheap $500 iPad2 after the adapters, i*branded attachments, etc? Yes, the Galaxy Tab 7" is way overpriced, as is the Xoom for the most part. We'll see what Samsung delivers with their 8.9" and 10.1" models.
The iPad 2 actually has 3 cores if you look at it from nVidia's viewpoint. The new Tegra chip has a dual-core ARM CPU and a dual-core GPU, all in the same SoC. The iPad 2 has a dual-core ARM chip and a separate single-core GPU (albeit baked into the SoC, I believe). Contrast that with current desktop GPUs, which are highly parallel single-core chips. The move to dual-core likely gives it an entire core of extra oomph it can whip out when you load up a 3D game, but that it can shut off entirely when it simply needs to handle your eBook-reader interface. Power savings without (much) GPU performance sacrifice. Sounds good to me.
"take him off to Guantanamo Bay for "questioning" and issuing some "justice" under the pretence that he was already dead...."
This is simply lowering one's self to the level of those you're fighting against. Should the Allies have captured and put into extermination camps all Germans, simply because that's what they did to Jews in WWII? No. Ethics dictate that one would be held for crimes they committed and punished (executed for war crimes likely) in a humane way. Capturing a terrorist and torturing them for the sole relish of exacting some form of vengeance would make us no better than them.
The real rip-off
Even though the originals were "digitally remastered" and given extra CGI and such, there's still one problem with the Blu-ray version: the original film was still shot in crappy standard-def, so the video will still be like up-converting a VHS to 1080p. Fortunately, the new Eps 1-3 may actually come out in better quality, but die-hard "I must have a Blu-ray player for the superior image" people will be woefully disappointed when they realize that if it wasn't shot in high-def, you don't get high-def quality automagically from Blu-ray.
And some additional maths too:
"That makes you suspicious, and then the numbers themselves are seriously weird: 64MB/sec for the 300GB card, 65MB/sec for the 600GB card and a near-derisory 19MB/sec for the 1.2TB card."
Assuming these are random 4k writes....
300GB drive: 64MB / 4K = 16384 IOPS
600GB drive: 65MB / 4K = 16640 IOPS
1.2TB drive: 19MB / 4K = 4864 IOPS
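Those IOPS figures fall straight out of the quoted throughputs (taking 1MB = 1024KB and a 4KB operation size, which is what yields the round numbers above):

```python
def iops_4k(mb_per_s):
    # Sustained MB/s -> 4KB operations per second (1MB = 1024KB here)
    return mb_per_s * 1024 // 4

for size, mbps in [("300GB", 64), ("600GB", 65), ("1.2TB", 19)]:
    print(size, iops_4k(mbps))  # 16384, 16640, 4864
```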
Yes, that 1.2TB drive is certainly fishy. Likely, they quadruple-booked the flash channels on their cards. The older 1st-gen SandForce controllers (the SF-1565 cited in the article) could only drive one chip at a time per channel (not PCIe lanes, but the data channels to the NAND chips, of which SandForce supports only 8 per controller, hence 4 controllers per board in this design). The new 2nd-gen controllers can drive all chips in a channel simultaneously, so this bottleneck will be eliminated by a bump to a 2nd-gen controller (once they sell enough of these 1st-gen devices).
Just a note
"At least Mac users chose their OS"
Mac users get OSX. PC users get Windows. The choice of OS is made by what hardware you buy. Then there are the free-thinkers that can install Linux (or another OS) on either set of hardware just the same.
However, the difference is the cost of such hardware. The Core i7 system punted at the Apple Store costs a fair amount more than the Windows-laden Core i7 system punted at the local shop, even though the internals are (roughly) the same. It just depends on whether you want a white computer or an assorted-color/style one and don't mind having Windows (hopefully 7 at least) on it.
Even modern weapons....
Modern rapid "throw a metric tonne of lead down-field in three seconds" weapons have barrel heat issues too. The solution? Don't shoot as much, or swap barrels in-field (using oven mitts apparently). The new cobalt-based barrel should remedy this, and we'll see if they can find such a solution for lasers too. Until then, keep the shots to less than 1 per 3 seconds (or so :P)
Give it a year
"El Reg would like to point out that, if Seagate stayed with its 5-platter design, it could produce a single drive with an awesome 5TB capacity."
We won't see such a drive until they push out a 4-platter 4TB drive in a few months (once other drive manufacturers can [and do] put out 4TB drives). Then they'll likely trickle out a 4.5TB and perhaps the 5TB drive. Right now, they'll make enough of a premium pitching the same 3TB capacity at slightly less cost than competitors, and make a tidy profit.
"and, its statisticians having given up, "virtually countless hours of music"."
2.88MB per 3min song (128Kb/s bitrate), and assuming 5TB and not 5TiB:
5,000,000 / 2.88 = 1,736,111 (1.7 million) songs; × 3 (minutes) = 5,208,333 minutes, 86,805.6 hours, ~3,616.9 days, or ~9.9 years of audio. Hence why their stats people just gave up. It's not worth listing the number of MP3s anymore.
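The working, step by step, in Python (365-day years, decimal units as above):

```python
capacity_mb = 5_000_000          # 5TB in decimal MB
song_mb = 2.88                   # one 3-minute song at 128Kb/s

songs = capacity_mb / song_mb
hours = songs * 3 / 60           # 3 minutes per song
years = hours / 24 / 365
print(int(songs), round(hours, 1), round(years, 2))  # 1736111 86805.6 9.91
```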
How much space is this in Libraries of Congress?
"Imagine the truly, gob-smackingly awful RAID-rebuild times of such horrible disk drives."
Yes, a massive 10TB disk spinning at 7200RPM will still take A LONG TIME to rebuild, even sequentially. However, this does not mean we don't NEED the capacity. If the Gb/sq-in density increases, more data flows under the heads per second at the same 7200RPM, improving the drive's potential sequential read/write speed. We saw this with the jump to PMR. Placing more read/write heads on the arm will further improve read/writes (yes, someone plans on doing this; I just don't remember the company....). However, assuming a mundane 100MB/s to a 5TB drive, that's still ~13.9hrs to fill the drive.
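That fill-time estimate checks out, and since it scales linearly with capacity, a 10TB drive at the same rate would double it:

```python
capacity_mb = 5 * 1_000_000    # 5TB in decimal MB
rate_mb_s = 100                # the mundane sequential throughput assumed above

hours = capacity_mb / rate_mb_s / 3600
print(round(hours, 2))  # 13.89
```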
Then there's SSDs. Flash chip density is only limited by the die shrink and how many chips (and channels) you want to stuff into a standard casing (2.5"/3.5" or PCIe card, etc). More channels = more performance (roughly), assuming the controller and drive interface can handle it. Eventually, it might become cheaper to litho (or the like) our storage space rather than BPM a platter, but the endurance of SSDs only gets worse as the die shrinks, hence the research into making nano-levers and the like for more resilient storage.
Is a spinning platter the way forward? Likely not. Is NAND flash? Most certainly not. There are other technologies in development that are likely going to carry us out of our current rust-disk rut and hold us over until the Next Big Thing comes along. Until then, the new 3-platter Seagate 3TB drives will be a welcome product, hopefully prompting some 4-5TB drives to show up in the next year or so (not due to tech, as Seagate can simply make a 5-platter drive at any time, but due to the "I want your money" factor).
AC and more :)
From one who liked PoP and Silent Hill, more adaptations would be great.
"....including Assassin's Creed, Ghost Recon and Splinter Cell."
Turning the "Assassin's Creed Lineage" short movies into a feature would be a good start. Ghost Recon, perhaps, but Splinter Cell would be great. :)
Tomb Raider wasn't too great, but still a decent show to watch in the Theater.
It's also full-duplex, whereas serial connections (like SATA and USB) are still half-duplex. Apple did a great thing and started stuffing this chippery onto their boards. Intel did a FAIL thing and built neither into its CPU/chipset: it shunned USB3 and left Light Peak (Thunderbolt) out for now as well. I would have loved a Thunderbolt-capable external RAID box (think NAS).
The Archos 32 has been out for some months now. I'm surprised it's taken this long for El Reg to report on it. I'd still go for their Archos 101 just for the size and screen, let alone the SD slot and all the other tablety goodness.
"...and it was due to be included with Android as well, but both Google and Apple chose to replace the service with their own location systems"
One has to wonder just what type of licensing terms (money) Skyhook was seeking in order to be shrugged off by both Apple and Google. The "we'll just collect the data ourselves" mentality likely didn't surface because the Boss was seeking to save a few cents a device....
"My bet's on shared blame, fuck-ups abound."
Absolutely. The Rocket system claimed port-forwarded addresses to the unsecured DVR. Which means they likely had VPN capability. They likely had static IPs, since they would have no need to port-forward if they didn't know the IP to connect to. The simple solution was to set up VPN to the station and eliminate the need for port-forwarding from the internet at large. At the very least, they should have only allowed connections from the police station's IP range (yes, spoofing is a possibility, but it's still more secure than what was set up).
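As a sketch of that last point — port 37777 (a common DVR port) and 203.0.113.0/24 (a documentation address range standing in for the station's real one) are both illustrative assumptions, not details from the article — the forwarding on a Linux-based router could have been scoped like this:

```shell
# Allow the DVR port only from the station's address range; drop everyone else.
# 37777 and 203.0.113.0/24 are illustrative values, not from the article.
iptables -A FORWARD -p tcp --dport 37777 -s 203.0.113.0/24 -j ACCEPT
iptables -A FORWARD -p tcp --dport 37777 -j DROP
```

Not spoof-proof, as noted, but far better than an any-source forward to a default-credentialed box.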
Not changing a default password on the DVR is simply crap pre-testing and validation/configuration. FAIL for that.
"NTFS has some interesting issues like, constant writes are almost guaranteed as it keeps updating last access etc. Not really into windows+ssd but that is what they said when I suggested ntfs on usb sticks."
Just a note about this comment: NTFS does keep "last access" info in a file's metadata. However, WinXP can disable updating it (a simple registry change from 0 to 1), and Vista/7 disable the update by default. Most of my WinXP tunings (such as this) have been made irrelevant in Win7. I use NTFS on my USB sticks simply due to the limitations of vfat.
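For reference, the toggle in question is the NtfsDisableLastAccessUpdate value; either of these standard Windows commands flips it (run from an elevated prompt):

```bat
:: Disable last-access timestamp updates on NTFS (1 = disabled)
fsutil behavior set disablelastaccess 1

:: Or set the registry value directly:
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v NtfsDisableLastAccessUpdate /t REG_DWORD /d 1 /f
```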
I'm surprised no one commented on:
"GoogleServer: Likely somewhere near 37.0625,-95.677068"
Yes, it's a real lat/long coord, purposely picked. Part of the joke that seems one step too far (having to look it up) to catch. :)
As for the battery drain of GPS, the previous commenter is correct: WiFi is intermittent and can go several seconds (or longer, depending on power-saving mode and other software configs) between polls. GPS has to monitor and number-crunch streaming data from at least three (usually 4) satellites nearly continuously. (By "number-crunch" I mean: calculate each satellite's current location via its almanac of flight paths, update said almanac from satellite data [gravitational forces cause orbit shift], constantly adjust its internal quartz clock to account for inaccurate timing versus atomic timekeeping, etc.) This is orders of magnitude more intensive than processing a simple SSID beacon packet (and consequently discarding all other unnecessary packets) during a finite window of time every few seconds (or more).
I'd prefer only a few days of local caching, or at the very least, having the cache purged (or securely "scrubbed" [which depending on the flash storage controller may be difficult]) when you disable location services. That way, if you're truly paranoid (or want to keep something from the snoopy <insert person/organization X here>) you can just flip the services off during/after and be fine. Now there's the pesky problem of potential records that cell towers can keep of what towers your SIM card has talked to and when.... Perhaps this is all a matter of just turning the phone off (pop the battery perhaps, just to be safe? Sorry iPhone users, you can't do this) so that you're "off grid" during your times of "required privacy."
Cloud computing is the "new thing" that can be a life saver for businesses. However, rare events like this outline just how much of your business you're risking. There are still several advantages to using a cloud service, especially for small businesses using outsourced IT or the like, but there's just no beating a local network setup with a competent IT staff (even if that staff is just one person). A smaller business can likely handle 30 minutes of a server (even an entire VM host) being offline while a critical component is replaced (competent implies being smart enough to stock spares for non-redundant server hardware, if the risk assessment is high enough). Likely your ISP will have an outage before a cloud provider has downtime, so if your local servers have less downtime than your ISP (fairly doable, actually), you could be better off with a local setup.

Disaster recovery, you say? If your business burns to the ground, why would you need access to your servers anyway? Your competent IT staff would have an offsite backup from the day prior, so access to the data is there. Sure, you won't be able to bring all your systems back online until you replace your server(s) (unless you have a very inventive IT staff), but with the "disaster" hobbling your place of business, that wouldn't be required anyway.
Money and skill are the primary show-stoppers for a decent local setup. Your budget can't afford the ideal redundancy, infrastructure, internet connectivity, or staff that a cloud provider can. It really comes down to whether you can afford the once-in-a-blue-moon Amazon-style snafu (with the potential loss of your data), or whether you prefer to rely on your potentially less adequate DR plan.