Perhaps the Reality Distortion Field emitted by the iPhone causes users of said iPhone to overlook things like dropped calls and still be "overall happy" with their iPhone experience....of course, if it was WiFi that kept dropping out, causing forced page re-requests to be the norm, I'm sure there'd be a bit more of an uproar.
"...as well as flooding the area with NFC handsets and SIM chips."
Does that mean they're discounting NFC-equipped Droids? I'll buy one! :)
For future reference, if you're going to infringe a patent, you better just infringe ALL the patents from that company at the same time, so that you'll only get slapped for one infringement, but benefit from their entire portfolio....good to see we now have precedent.
You had a good point up until....
"Clean, cheap, abundant fuel..."
As long as "cheap" is in the equation, your vision won't happen. Money is the driving force for any of this. Columbus sailed to the "new world" to find a better TRADE ROUTE so they could make money. Until something with economics akin to "unobtainium" is found, there won't be a massive drive. Show the world that an asteroid is 50% gold, 25% titanium, and 25% platinum and you'll have scores of people trying to mine it. Heck, even if it is only half that. Oh, and with leniency for the "acceptable loss" (oh, "tragic loss" for the supporters) that is seen in coal mining. Columbus had deaths on his voyage, and we can mitigate better now, but just because someone died doesn't mean we should halt our progress for 10 years while we have a tribunal.
Back to the point: cheap. Your "Star Trek" ideal world won't happen with the driving force being capitalism. As long as it is expensive to get, the baseline cost will always be high. So, no, you won't be seeing cheap space resources until it becomes more economical to get them; which is the point of SpaceX, in case you haven't noticed. Of course, the other option would be adopting the Star Trek form of government, which only works in (Science) Fiction.
Thumb up to Tom 13
"The flat mirrors at irregular distances is a better chance at approximating a parabolic shape focusing at the single point. But I think the engineering required to get the precision alignment of the mirrors makes it impossible for it to have been done in ancient times."
"The focal point for the parabolic mirror is too close to shore to have an effect."
Very good. The point of this experiment was toasting the ships/sails of an invading fleet, which would require a minimum distance of 150 feet, if not 150 METERS, just to make this more useful than, say, FIRE ARROWS. As stated previously, a single parabolic dish would have to have such a slight curvature that their tools (likely just a hammer and heated metal, even though the blacksmiths of the day were likely quite skilled nonetheless) could not reproduce one with the required focal distance. Then there's the obvious problem that taking more than a few seconds to heat a point on the ship would require the ship to be stationary. In the Mythbusters experiment, the ship was stationary and sealed with commonly used (and ideal) pitch, and the mirrors were barely 150 feet away. After their burn attempt, they did manage to char the wood, but got nowhere near the necessary 2-second flash burn. A modern example of this is using a laser to cause an ICBM to explode en route....
>Ancient< Death Ray - definitely busted. Computer tracking and megawatt lasers are having a hard enough time as it is. :P
All it takes....
...to destroy a CD in a "normal" drive is to have a fracture in the disc. I've seen some (stupidly) attempt to play their FF7 or some such computer game with a crack running half the radius of the disc. I've only seen or heard of one catastrophic failure, though.
I have a friend who claims to have "OC"ed his CPU by (stupidly) splicing in an extra PSU to his ATX mobo connectors. Said his CPU (being bound on top by his heat sink) actually popped through the base of his motherboard and through the side of his case, sticking into the wall. I believe his story about as much as one would believe Kill Bill's version of "punching" through 6 feet of dirt.... which Mythbusters attempted as well, incidentally.
"What if the live data and backups are in the same datacentre?"
The point of a global network of datacentres is precisely so the "backup" isn't in the same datacentre. Not only that, but the "live" data is redundant across multiple datacentres in the event of an outage. They likely split data/parity between the 3 or 4 datacentres closest to your location, so in the event of an outage, there's not a lot of data to push around to rebuild the "lost" information, or in the case of mirroring, much data to push to a new centre to maintain the mirror.
A single vendor then only becomes a problem if you're bound to their services (for whatever reason) and they "adjust" their fees and ToS (like Mozy did). Or if they go out of business (like Mozy might, however unlikely).
"All the built in apps use it. You know, the ones that get replaced by a more professional version by those who use them a professionally."
Yes, and Windows Explorer has quite the number of "more professional" versions that can replace it too. Will we? Not likely.
"The only problem being that it only works for those Big Companies, meanwhile everyone else is shut out or ends up being forced to pay and pay and pay in order to be able to compete in the marketplace with the "big boys"."
Fortunately, for those little guys, they don't have to invest billions into R&D to come up with that base tech that they get to license.
With the Vega's poor viewing angles, it isn't really worth it. I'm holding out to see if the Samsung 8.9 or 10.1 offers a decent price/quality ratio. The iPad was never in the running due to its lack of a use-as-mass-storage-device mode, lacking SD card slot, and restriction of certain software types ("network utilities" to name one) from the iTunes App Store (yes, I'm prefixing "App Store" properly).
Someone buy this writer an Ergo (split) keyboard!
It failed (or will, at least) because it used Petfinder.org as the source for "human categorized images of cats and dogs." They even put a link to Petfinder.org under the CAPTCHA as a head-nod credit and to "support adoption" of pets. The obvious problem with this? Image trolls scripting the crap out of Petfinder, causing poor performance and expense, and effectively hash-verifying the images or something similar. If they're smart, they'll convert the scraped images to a 120x120 JPEG or the like to prevent direct hashing, but the scrapers could do likewise, plus any cropping needed to mimic what they see on live sites (or per the Asirra specs), or, if it comes down to it, random area sampling to compare images to known ones. In short, very easy to game the system. CAPTCHAs are a difficult system to construct.

I would posit it far more likely that the cat & dog solution gets used based on breed and/or coat color. The user says "my 'password' will be German Shepherds or Chocolate Labradors" and each random sampling of 12 JPEGs will contain at least one German Shepherd or Chocolate Lab (yes, Chocolate Lab is not a specific breed; I'm working off the "or coat color" bit) for the user to choose from. It would require a human continually refreshing your log-in page to see which breed(s) always show up in each set, which is why you have a password requirement linked to it. Enter the password first, which then displays the images (regardless of whether the password is correct). If the password was wrong, a random sampling of images is shown; if the password is correct, the random sampling includes your password image. It doesn't get rid of passwords altogether, but CAPTCHAs aren't really meant for log-in authentication anyway, just as prevention for automated sign-ups and the like.
Once you are required to show a certain image or image type for authentication purposes (since the human will have to pick something standard), using random-image-based CAPTCHAs for anything more than a minor deterrent to brute-forcing the password becomes not-fit-for-use.
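The breed-as-password scheme above can be sketched in a few lines. Everything here (the image pool, the breed names, the function name) is a hypothetical stand-in for illustration, not the real Asirra API:

```python
import random

# Hypothetical image pool keyed by breed/coat color.
IMAGE_POOL = {
    "german_shepherd": ["gs1.jpg", "gs2.jpg", "gs3.jpg"],
    "chocolate_lab":   ["cl1.jpg", "cl2.jpg"],
    "tabby_cat":       ["tc1.jpg", "tc2.jpg", "tc3.jpg", "tc4.jpg"],
    "beagle":          ["bg1.jpg", "bg2.jpg", "bg3.jpg"],
}

def challenge_images(password_ok, secret_breed, sample_size=12):
    """Return a shuffled image sample. The secret breed is guaranteed to
    appear only when the supplied password was correct, so an attacker
    refreshing the page with wrong passwords learns nothing."""
    decoys = [img for breed, imgs in IMAGE_POOL.items()
              if breed != secret_breed for img in imgs]
    if password_ok:
        pick = [random.choice(IMAGE_POOL[secret_breed])]
        pick += random.sample(decoys, min(sample_size - 1, len(decoys)))
    else:
        # Wrong password: show a plausible all-decoy set instead.
        pick = random.sample(decoys, min(sample_size, len(decoys)))
    random.shuffle(pick)
    return pick
```

The key design point is that a wrong password still produces a believable grid, which is what stops the refresh-and-watch attack described above.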
Have yet to get any
No emails yet. Guess I dodged a bullet... or it could be from calling in and requesting they "do not use my personal information for soliciting by third parties." If you make that request, and honoring it doesn't block their ability to give you their primary service, they're required by US law to comply.
No, MS buying Red Hat would be just as obvious. Perhaps "Microsoft Accidentally Posts Windows Kernel Source on Own CodePlex Site."
They stated there are 3 read/write heads on a single arm (well, more inferred, but still obvious). The 3 heads will still be bound to data that's near them. If they're a fairly wide-spread array (about 1/3 the radius of the disk wide), it would potentially cut the seek time by a good quarter (most of the latency is likely due to initial movement/stopping at that point, rather than travel time). However, I would have proposed a dual-head solution (granted, 3 heads might be just as simple/complex) combined with a dual-arm setup. Then you'd have 2 independent arms with two heads per arm. You could read/write up to 4 tracks at once without being constrained by the physical location of the data, whereas a single arm with 3 heads is only useful if other to-be-read-simultaneously data happens to fall under one of the other heads. Two arms would likely act as quickly as a RAID1 setup with an intelligent controller, where the two read requests are handled simultaneously, rather than speeding up one read, then both moving to the other read.

Granted, two arms = worse MTBF since there are more moving parts, but it's likely the best way to get better random I/O. Parallelism is why SSDs get such great data rates (yes, and access/"seek" time, but that's practically impossible to eliminate on spindle drives). HDDs will have to think in terms of parallelism before they can start to approach SSDs in performance. 3 heads is a good start, but it will only help in a limited number of situations. The largest benefit will be the (modestly) reduced seek times and concurrent read/write ops (assuming there's a blank track below one of the 3 heads).
"I need adblock for my eyes."
Easy enough, just gouge them out with Fire(fox).
...break out those aviator glasses and fake mustache from the attic....
60fps gives video a "soap opera" effect, according to PowerDVD. Having seen it, I quite agree. The characters move very smoothly and the colors are notably brighter. Makes it even more odd when a viewer such as PowerDVD "upscales" old DVD content not only to 1080p, but to 60fps as well. Had to pop the DVD in the normal player just to make sure the movie really was as crappy-looking as I remembered it.
As a whole, I whole-heartedly agree with your post. However, two comments to note:
"If you have a copy of IE, you have been screwed by monopoly power.
...But, you still buy IE, right?"
True that since MS Windows is a "monopoly" on PCs, having a copy of IE hiding on your hard drive means a monopoly shoved it down your throat. However, having IE in Windows is NO DIFFERENT than having Chrome in ChromeOS (they're both highly integrated, right?). My Android phone forces me to use Chrome! Oh noes! How about an iP*d or iPhone forcing me to use their Safari browser that came pre-bundled against my will?
Providing a browser by default is not, in my opinion, a problem. You have to get on the internet somehow to be able to download your real browser, right? They may not have the distribution rights, or likely the desire, to stuff a competing product into their OS by default. Imagine the lawsuits that would occur if MS Security Essentials and the Office 2010 web apps were free and bundled with every copy of Windows. Last I checked, Apple bundles all sorts of iThingies in their OS X. Is anyone complaining? Have they been sued for providing Safari by default, as MS was over IE? No.
""If there are significant areas of caesium-137 soil concentration of the order of 100,000 Bq kg−1, evacuation of these areas could be effectively permanent," says Smith."
If you actually read the article, they did not find 100,000 Bq/kg of caesium-137. This statement would be similar to saying "if Chernobyl had contaminated the area with 100,000 Bq/kg of caesium-137," or if we found that amount under your front porch. It's not saying they are likely to find such, nor that they had found such. It's giving you a rough figure for where the "permanent evac" level is.
"In short, irrational fear of nuclear technology is what has stolen away the brilliant Jetsons-style future that was envisioned for us 50 years ago – and may yet steal it from our children."
I remember hearing about a period of time where irrational thought prevented technological improvement....oh yeah, it's called the Dark Ages. Perhaps some of these fearmongers (such as the OP) should come out of their quasi-religious delirium and actually learn something about the situation.
Associative arrays are a likely cause of slowness:

cout << arr["name"];

-- vs --

spot = lookupIndex("name");
cout << arr[spot];
// yes, it can be combined into one line, but >>most<< will do a sanity check on spot before use....

Text searching can be fairly intensive if you don't have decent indexing methods. This is just one example of where PHP can run dog-slow compared to a language that doesn't have such conventions, and thus puts the overhead on the programmer of knowing the actual indexes for his variables beforehand.
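The same overhead is easy to demonstrate in Python, standing in for PHP here: a hashed, associative lookup in the hot loop versus an integer index resolved once up front. Exact timings are machine-dependent; the point is that the associative version pays for a hash on every access:

```python
import timeit

# Associative (hashed) container vs plain indexed container.
assoc = {f"field{i}": i for i in range(1000)}
indexed = list(range(1000))

# Hashed lookup performed on every access.
t_assoc = timeit.timeit(lambda: assoc["field500"], number=200_000)

# "lookupIndex" step done once, before the hot loop.
spot = 500
t_index = timeit.timeit(lambda: indexed[spot], number=200_000)

print(f"associative: {t_assoc:.4f}s, indexed: {t_index:.4f}s")
```

Both return the same value; the difference is purely where the lookup cost is paid.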
"This was dropped because that energy gets added to the fuel bill of the car. These will have the same effect."
Since the tyres are already being forced to rotate and stretch/compress, placing these around the circumference (or on the tyre wall, depending on the optimal location) would cause no extra fuel consumption, minus the minuscule additional weight added at the outside circumference of the tyre, requiring a very, very small additional force to turn it. You'd see more work required to spin your tyre with a nail stuck in it. This tech would be a great way to replace an alternator, if the vehicle could disengage the alternator while the tyres were producing enough 'leccy to run the car's components. 5 of these layered could equal an AA battery, and they're smaller than a stamp, so you could likely outfit a fairly decent power source considering the surface area of a single tyre. Just stick it all in one tyre and leave the other 3 as normal rubber, and at least you'll reduce the odds of having to replace an expensive tyre to 1 in 4 in the event of a blowout/puncture/etc.
"Watmore told the committee that in his personal view, the government should use more Apple products, just like the ones he uses at home."
Could have guessed the "because it works at home" angle. Last I checked, a Windows Domain was a lot easier to manage and lock down. Of course, this says just how much he knows about Apple or IT:
"which is all about smaller, more agile, more efficient projects with a bigger emphasis on open source."
While Open Source perhaps does make projects more agile, it definitely doesn't always lead to "more efficient," nor "smaller." You can likely save some money if a current staffer is already familiar with the FOSS in question.... And no, Apple is not "Open Source"; they just built a GUI on BSD.
"All we know is...."
"...It was us who darkened the skies."
Sounds like a solar cell generating electricity to actively split water molecules, with a bit of chemical reaction thrown in. Likely this bucket of water would need to be under a clear plastic O2 and H2 catcher, with an apparatus that can separate the two-gas mixture into O2 and H2 and pressurize it into canisters. Likely the energy produced from the O2/H2 burns could provide more power than the system requires, depending on the amount of water/sunlight involved. Some actual numbers would be useful.
Shame we only have press release drivel of "he showed that an artificial leaf prototype could operate continuously for at least 45 hours without a drop in activity." and "Right now, Nocera's leaf is about 10 times more efficient at carrying out photosynthesis than a natural leaf. However, he is optimistic that he can boost the efficiency of the artificial leaf much higher in the future." We need proper numbers! Would be like saying "the new Sandy Bridge chips perform fairly faster than the older Pentium 4 model." Bleh.
Since Apple sued Nokia first, your statement thus implies Apple is the one floundering around and seeking to gain the upper hand by suing world+dog. Unless you're implying that Nokia, who has been in the cell phone industry for a Very Long Time (tm), has a minuscule patent pool to draw from, while newcomer Apple has loads of highly original cell-phone patents which would give Apple the upper hand in any patent suit.
"Our customers have told us they don't want to download music to their work computers or phones because they find it hard to move music around to different devices."
They must be using iTunes....
I, personally, find no problem moving my mp3s on and off my devices. Burned disks for the car. Drag & Drop to my shows-up-as-mass-storage mp3 player. Put a copy on the wife's computer. Simples. Of course, it requires that you rip the music from your CD collection yourself....but that's another matter entirely.
Die DRM. Die a horrible death.
From the Piece
As quoted: "Of course, I'm not a coder. Someone else would have to look at the code and make that judgment."
Someone has: Linus Torvalds.
Correction to AC @13:26
"Oh come on, who in their right mind would design a major safety system that *required* electricity to function?
I mean, hell, can you imagine someone proposing that for, say, a nuclear power station?"
Jumping on the nuclear fearmongering isn't in good taste. So, here's a correction for you: the nuclear power station you're referring to had its "safety system" work flawlessly: the control rods were slammed into place and stopped the nuclear reaction. As for the COOLING system, that has the obvious power requirement of keeping the liquids moving. Now, if you can come up with a way to keep liquid in a closed system circulating with no outside power requirements, I'm sure there's a long list of people who would like to talk to you.

Back in the real world, the cooling system had 4 tiers of power: grid power, generator power, battery power, and plug-in generator power. Only the battery power worked. Generators that could plug into the system could not be sourced in the 8hr battery-operating window. They could also dump seawater into the system, as they did, as a "safety system" backup for the cooling. It is unclear whether this was pumped by generator power or more of a water-pressure-based system.

So no, your attempt at a joke is both in bad taste and wrong. Sorry.
This experiment demonstrates the effect of carrying an EM-radiation-emitting device on the hip vs no device/weight at all. As stated, the control group should have been given a hip-holstered, exact-weight/size replica of a dummy device. As commented above, you do not need a radiation-emitting dummy device to be an effective "control." The control group is designed to be as exactly the same as the test group(s) as possible. The best way to do this would be to give the control group a dead phone of the same model as the test group.

The bone density and mineralization differences could be due to different stresses on the bone from walking with an object on the hip which is (even subconsciously) "in the way" of a normal arm swing. This can cause one to favor the opposite leg and likely lean to the left (in this case), or swing the arm out a bit further on the right (hence the slight left lean to compensate). Thus, a dummy (dead) device would compensate for this potentiality. Short of EMF being generated by a magnetic field inducing power in the circuitry, a "dead" device should work just fine.
The 25nm cells are rated for about 3,000 writes. The older 34nm was about 5,000. Wear leveling and write amplification being as good as they are (in SandForce controllers at least) means you could optimally write a total of 360TB to the flash drive before you see cells give out. That's 3.8MB/s of writes, sustained, for 3 years. Since OCZ reserves 23% or 13% (depending on whether it's the "E" series or not) of extra non-user-accessible storage, and from what I understand the Intel drives have 7%, that 360TB figure is quite attainable. Even if you assume you'll only pull 2,000 writes per cell on average, you're still looking at 2.5MB/s for 3 years.
Needless to say, the MTBF is likely going to be due to failure of the controller chip(s) or the like rather than cell wearing. Even an OCZ drive can withstand an entire (single) flash chip death.
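The arithmetic above is easy to sanity-check. The 120GB capacity below is my assumption (it's the drive size that makes the numbers land on 360TB), not something stated in the article:

```python
# Back-of-envelope check of the SSD endurance figures discussed above.
capacity_gb = 120                   # assumed drive size
pe_cycles = 3000                    # rated 25nm program/erase cycles
total_writes_tb = capacity_gb * pe_cycles / 1000   # lifetime writes in TB

seconds_in_3yr = 3 * 365 * 24 * 3600
sustained_mb_s = total_writes_tb * 1_000_000 / seconds_in_3yr

print(f"{total_writes_tb:.0f} TB total, ~{sustained_mb_s:.1f} MB/s for 3 years")
```

Which comes out at 360TB and roughly 3.8MB/s sustained, matching the figures in the comment.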
Just post us the screenshot of 256 CPU threads in Task Manager!
"The real underlying question again is why choose only one?"
Exactly. The best bet to be fully protected would be to use a combination of the two. After all, how much is your data really worth?
However, I do agree the comparison is highly unfit for purpose:
"You have to get to the airport two hours before the flight, undergo a lengthy security check, face cabin baggage restrictions or wait at the hold baggage carousel the other end, and then get from your arrival airport to your actual destination, adding time and cost."
Unless the "two hours before the flight" part is disk formatting, not much else applies here. Backup Exec (as much as I dislike parts of it) will do D2D2D without much hassle, do compression on the fly, and encrypt the results as well. That should handle the "baggage carousel" and "security check" parts just dandy. With a bit of proper tooling, you can eliminate the "arrive two hours early" bit by scripting a format on first use.
Now, why use tape over disk? One can suggest cost. However, I just picked up some 1.5TB HDDs (external, USB) for $70. 1.5TB of tape would be ~$55 as two LTO4 800GB tapes, or ~$74 as a single LTO5 tape. So no, price isn't really a comparison point, especially when you factor in a 24/48-tape rack unit or stand-alone tape drive. Even a JBOD 24-disk rack costs less than a 24-tape rack.

The key point is that disk drives are quite reliable when not in use. Every time I hear discussion of "hard disk backups," everyone automatically assumes "live and spinning" drives. That may be a good idea for your standard backup target; for archiving, however, the disks do not need to be continually plugged in. My "archived" drives have a grand total of 10 "on" hours. Just 10. Granted, I'm only in charge of ~500GB of actual data (minus the VM images, ISOs, etc).

The pleasant thing about my drives (being of the 1TB or 1.5TB variety) is that I store more than just a "month end" on each "archived" drive. They actually hold a backup from early in the month (perhaps the 8th) as well as mid-prior-month (about the 17th). Therefore, if my previous month-end drive is unreadable for ANY reason (including getting lost, damaged, eaten, or otherwise destroyed), I can snag data from the next-best point in time (likely the 8th of the following month) from the next month's drive. Tape can do this by appending data to the end of the backup set, so it's not a "big" selling point. However, doing this EASILY is the advantage of disks. You can't simply delete a backup set from tape by deleting a folder and writing new data; you'd have to purge the whole tape if your data set started before an appended dataset.
Lifespan? It's a moot point. Companies should be doing D2D or T2T data refreshes at least every 3-5yrs. Depending on storage conditions, the readability of either disk or tape over that span should be the same: just fine. The hard disk platters aren't going to be geomagnetically zeroed, nor is the tape. Granted, bringing tapes or drives from storage for their "refresh" cycle does incur a risk of accident. Which would fare better sitting on the shelf for a full 10 years? Well, tape has made strides in the resiliency of the magnetic ribbon the data is stored on, but I personally wouldn't trust it past 10 years. I've read articles about one company's "data viability" checks, which found their tapes weren't very readable (20% in one case) after 8 years or so in "ideal" storage. But that was 4 years ago or more, meaning at least 12-yr-old tape tech.
Honestly, the best reason I can think of for using disks is rapid recovery. You don't have to read in an archive manifest and find your way to the data you need, then spend the tape seek-time recovering it. Depending on your disk-drive backup method, you may even be able to use OS tools (such as full-text search) to help you find the data you need. Very rarely when I'm asked to recover something does the person know the date it was deleted or the name of the file. They tell me "a few months ago" and "it has this in the text" (for Word docs, for instance). A very simple "file contains" search in Windows will find it on a "raw files" backup (what you'd get with a copy/paste of a folder tree). You simply can't do this with a tape; you'd have to restore your whole dataset and then search through it.
For those still insistent on tapes: you likely have a tape library and other robotic or automated tools, hence your investment keeps you tied to the technology. D2D2T is always an option, and a rather good one in my opinion. However, depending on your know-how, you can set up a D2D2D system that is worlds better than simply using that middle "D" as a landing space to be promptly shuffled off to tape oblivion. My middle-"D" solution holds a 6-month rotating archive on-site, with about 120 days of rotating daily snapshots, all while still spawning off month-ends to the final "D" for offsite storage. Only once have I had to grab our off-site backups because of this setup. It was to fetch a file nearly 2 years old, and the best guess they had was "it was in my files over a year ago." It took hooking up 3 USB hard drives (the first try at 1.5yrs, the second at 2yrs, the third for the "most recent" copy at about 1.75yrs), and about a 30min full-text search on the first drive. The file was recovered within about an hour of the offsite backup disks arriving.
True and Sad
It's quite sad how true the OP's statement really is. However, if you've spied the Samsung 8.9" and 10.1" follow-ups to the Galaxy Tab (which was terrible btw), you'd likely be singing a different tune in the way of competing tablets.
Credits in Space
Yes, there ARE credits in space. Haven't you seen Star Wars? (Of course, that was merely storyline... perhaps Star Trek: TNG would have been a better example, with its opening credits.)
"Forget the tone of the news reporting, the facts have largely been reported correctly."
From the "nearing Chernobyl levels" part:
"which Wotawa theorises may have been emitted from Fukushima in amounts "20-60" per cent of those seen at Chernobyl"
So, based on a sketchy model, iodine at 20%, possibly up to 60%, is "nearing Chernobyl levels." I would venture that "nearing" implies constant growth that is currently at 60% or greater with no signs of stopping. In the case of radio-iodine, it may have been detected at certain quantities, but it is continually decaying and thus will NEVER reach anything "near" Chernobyl levels, since it is not being emitted anymore.
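To put numbers on why radio-iodine fades rather than accumulates: iodine-131's half-life is roughly 8 days, so once emission stops the remaining inventory collapses quickly. A quick decay sketch (illustrative math only, not a dose model):

```python
# Iodine-131 half-life is about 8.02 days.
HALF_LIFE_DAYS = 8.02

def fraction_remaining(days):
    """Fraction of the original I-131 inventory left after `days`,
    assuming no further emission."""
    return 0.5 ** (days / HALF_LIFE_DAYS)

# Roughly ten half-lives later, under 0.1% of the inventory remains.
print(fraction_remaining(80))
```

So any detected I-131 peak is a snapshot of a shrinking quantity, which is why "nearing" is the wrong word.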
Yes, the reactor situation was a disaster. But so was the tsunami and earthquake. The disaster was that the facility was partially ruined, not that people are "glowing." In terms of preventing a REAL disaster (meltdown) from occurring, or harmful, long-term radiation escaping, this incident was a triumph. Not bad for a worn-down, 40-yr-old reactor design. I tip my hat to these dedicated workers for their calm in this.
"Sorry no, you're completely wrong. If a company is contributing to open source software and implement a number of features they do not have to release the code feature by feature, they can release it when they deem it to be reasonably ready for release."
However, releasing your "production binaries" and STILL not releasing the "reasonably ready for release" source code makes your "open source" technically "closed" until it is released. Open Source releases the source at the time of binary distribution.
Fortunately for Google, their Android license allows them to pull this crap and optionally go closed source at any time. I'm quite disappointed with their tactic on this. It simply slows adoption of their new Android 3.0 since they're only allowing (paid for, likely) "partners" to use it until a certain point in time. This will slow take-up of Android-based tablets and be counter-productive to Android's goal of being prolific. Such a shame.
Easier than that: just have a NIC with an unplugged ethernet cable. When that (random!) time of day comes to process certs, plug it in, hit the xfer button to dump the requests onto the machine, unplug the cable, process, plug in and xfer the results, unplug. Not as sure-fire as a "never plugged in" computer (which, btw, has to get updates somehow, right?). Likely better to have said computer behind a firewall and physically plug/unplug the firewall from the rest of the network when needed. Then, if there's a lurking h4x0r program on the network, it will have only a couple of minutes to defeat a firewall that was practically unknown to begin with.... It beats having to sneakernet everything, but likely isn't secure enough for the NSA or CIA to be happy....
All well and dandy....
"Desktop recovery can be even quicker with virtual desktops"
All well and dandy for a 1000+ machine single-location business. But pushing VDI over a WAN? You'd need a separate VDI server for each location, and if they only have 10-20 desktops, it starts to look quite bleak indeed. Any solutions for the conglomerates of "lots of little locations"?
"Rsync may well be fine for backing up a couple of linux boxes and storing the data on disk, possibly even moving it off to a standalone tape drive, but it really doesn't cover the vast majority of functions of modern backup software."
As one who does use rsync for backups: it's actually quite nice until you have to back up monolithic databases such as MSSQL or Exchange. Then you start getting into scripts for SQL dumps or Exchange mailbox backups, and things get a little dicey. For file servers and the like, however, it's a breeze, and what you do with the "backed up" data is a completely separate process. I've had "backup management software" die at this point because the tape I was trying to use wasn't configured as a scratch tape, or it had data on it already and I hadn't given the system the go-ahead to overwrite indiscriminately. It is a sickening feeling to come in after an evening backup and realize the 6hr dump to tape failed and your data isn't even cached on the backup server for a second attempt. Rsync will give you that cache. The other option would be to schedule a D2D2T, and hopefully be able to redo the 2T part if the initial attempt fails. Oh, and there's that nasty bit about recovering data from a tape you thought was good... but that's a whole other story.
@Duncan's second post
"and three failover cooling systems failed in succession leading to helicopter drops of seawater, radiation leaks and sizeable explosions."
No, the "three failover" part is completely wrong. There's only 1 "cooling system." The primary power was lost, and the secondary backup power, the generators, were flooded. The THIRD form of power (batteries) worked PERFECTLY. They ran for 8 hours until they ran out of juice. During that time, mobile generators had been brought in as the fourth source of power, but their "plugs" wouldn't fit.
As for the helicopter drops of seawater, those were for the cooling pools. The actual reactor core had seawater pumped through the normal cooling system; no helicopter drops for that. This seawater coolant, which needed to be vented due to steam buildup, had impurities which increased the likelihood of additional radiation, not to mention the very short-lived radioisotopes carried on the steam.
The explosions were probably the worst of what happened, causing 14 injuries. However, with water super-heating, one tends to get a breakdown of its molecular components, hydrogen and oxygen. The hydrogen is what exploded.
It's a shame that these fanatics and "willy-waving" commentards don't even have correct information and simply spout off their sound bites in a semi-coherent, though highly distorted, form.
All they're doing is "automating" a process that was (or should have been!) done manually already. Of course, as mentioned, since the tapes are "archived" "off-site" rather than in the tape library, you have to ship them back to the company for validation... on a regular schedule... and insert them manually... hrm. Well, there's a lot to be said for having two backups: one on site, and one off....
...because it will be revolutionarily different from RC2 released the day before go-live....
@Jon52: You forgot one key component
"30% is not a trivial amount, if it is essentially free to sell on my own website but 30% in iStore, I am forced to overprice my items on my own website, just so they are not cheaper."
Don't forget that most of these subscription services are direct competitors to iTunes, The Daily, and iBooks. Therefore, your now-increased price is easily matched or undercut by Apple. You'll inherently have a weaker business (less margin, or fewer sales) just to "compete," and will likely lose out to Apple in the long run.
Whitelists and Blacklists
The general rule of thumb for whitelists and blacklists is that everything on the blacklist gets blocked, but the whitelist pre-empts it to let "acceptable content" through (think web-filtering "net nanny" stuff). With tracking websites, however, the logic should be reversed: the blacklist gets priority over the whitelist. The flow should be "all sites are blocked, except whitelisted sites; if a site is specifically mentioned in a blacklist, it is blocked regardless of the whitelist." Granted, anyone who wants to stay on the tracking websites' good side allows all sites to track by default, lets blacklists block sites, and lets whitelists override blacklists. But tracking should be treated more like NTFS permissions: it doesn't matter how many "allow" permissions you have, all it takes is one "deny" and you are denied. That is how whitelists and blacklists should be handled for privacy. MS is simply committing a logic fallacy by applying the old-fashioned whitelist/blacklist mindset here.
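The deny-wins evaluation described above can be sketched in a few lines; the function name and site names are purely illustrative:

```python
def tracking_allowed(site: str, whitelist: set[str], blacklist: set[str]) -> bool:
    """Deny-wins logic, like an NTFS 'deny' ACE: a blacklist entry
    always blocks, and everything else is blocked unless whitelisted."""
    if site in blacklist:        # explicit deny beats everything
        return False
    return site in whitelist     # otherwise, default deny

wl = {"news.example.com", "tracker.example.net"}
bl = {"tracker.example.net"}

print(tracking_allowed("news.example.com", wl, bl))     # whitelisted only -> True
print(tracking_allowed("tracker.example.net", wl, bl))  # on both lists: deny wins -> False
print(tracking_allowed("ads.example.org", wl, bl))      # on neither list: default deny -> False
```

The key design choice is the order of the checks: the blacklist test comes first, so no whitelist entry can ever "rescue" a blacklisted tracker.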
The problem with the "people pay for unlimited" argument is the Terms of Service says "unlimited" for the device in question. Not the device plus 5 friends. That's what the tethering package is sold for. You, as the user of the service, are not entitled to use it outside the scope of the Terms of Service. You might as well argue with your internet provider (cable, DSL, etc) as to why you can't resell your connection to your neighborhood and act as an ISP. You're paying for the internet connection, so why can't you share the love? Perhaps you're not even pushing something back out, merely letting your neighbors share your cable TV. Is this within your rights as a subscriber? No. Sorry. Doesn't work that way.
You buy a device and sure, you should have full power to do with it as you will (up to and including hitting it with a sledgehammer), but as a subscriber to a service, you can not use the service as if you own it. You're merely paying a fee to be able to use the service in a very finely described way.
"Compared to Apple's Safari 4.0 for Windows released in June 2009, IE9's holding its own. Apple claimed, again without any external verification, that Safari 4.0 landed six million downloads on Windows in its first three days. Again, simple math would put IE9 ahead of Safari 4.0.
IE9 is not as popular as Opera 11 and Firefox 3.0, however. Opera Software in December 2010 claimed 6.7 million downloads for its latest browser on the first day. Mozilla claimed eight million downloads in the first 24-hours following the Firefox 3.0 release in June 2008."
Most organizations (those that would actually be aware of updates available outside of Microsoft Update) won't download IE9 until a full review process. Which is why they're likely still on IE6. Well, that and old software that requires IE6 to function properly. Firefox and Opera don't require vetting since software doesn't commonly embed them, nor are web GUIs designed to work with them (think Cisco's web interface for their switches, for one). This allows FF and Opera to be freely downloaded and run from day 1 on home and work computers (where allowed).
As for the Safari 4 numbers, those are due to it being bundled with the iTunes installer (QuickTime is in there too if you click the usual button), much like toolbars are stuffed into the likes of a DivX (Ask Toolbar?) or Acrobat Reader (Google Toolbar) install. It's offered through the Apple updater as well.
IE9 was likely NOT in Windows Update for the same reason IE8 and IE7 were not pushed out on day 1; likewise Win7 SP1. They're giving system admins time to disable the update on their WSUS servers, or at least to vet the browser first. MS caters to more than just home users, you know.
"Even machines on the high end are not that different fundementally.
Multiple cores, MMX/SSE, 64-bit, large memory, 3D acceleration, video playback acceleration?
What exactly did you have in mind that's in XP-hardware that isn't in Win7-hardware?
It's a WEB BROWSER we're talking about here, not Crysis.
It's not like the beginning of the 90s versus the end of the 90s."
ASLR, for one. The method of sandboxing, perhaps? The DirectWrite text-rendering path? Perhaps jump lists and taskbar previews? There's a whole proverbial boatload of underlying APIs that only work (or work best) on Win Vista/7, DirectX 10+ included, which doesn't run on XP either, btw.
Other, more hardware-related things? SSDs, for one. WinXP starts partitions at sector 63, whereas an aligned start such as sector 64 prevents unnecessary write amplification. Win7 can tell the difference between a "virtual" (Hyper-Threading) core and a real core in your CPU, and gives priority to the real cores, rather than what XP does: mindlessly chucking your Crysis process onto a Hyper-Threading core rather than a "real" one. Great game performance there, I bet....
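The alignment point is just modular arithmetic. Assuming 512-byte logical sectors and a 4 KiB NAND page size (a typical figure, used here purely for illustration), you can check whether a partition's starting sector lands on a page boundary:

```python
SECTOR_BYTES = 512   # traditional logical sector size
PAGE_BYTES = 4096    # assumed SSD/NAND page size for this illustration

def partition_aligned(start_sector: int) -> bool:
    """A partition avoids extra read-modify-write cycles (write
    amplification) when its byte offset falls on a page boundary."""
    return (start_sector * SECTOR_BYTES) % PAGE_BYTES == 0

print(partition_aligned(63))    # XP default: 63 * 512 = 32256 bytes -> False (misaligned)
print(partition_aligned(64))    # 64 * 512 = 32768 bytes -> True (aligned)
print(partition_aligned(2048))  # Vista/7 default 1 MiB offset -> True (aligned)
```

With a sector-63 start, every filesystem cluster straddles two NAND pages, so a single 4 KiB write can touch two pages instead of one.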
Perhaps you should actually research what underlying changes were made before laughing that XP could inherently support modern tech. It fails on many counts. Just try loading SATA drivers for WinXP without a floppy or slipstreaming the ISO. Yeah, thought not. Win7 allows drivers from USB if they're even necessary.
Viruses are executables. I can just as easily run one on my Linux box as on my Windows box. The problem is, by and large, the people who hit "Yes" to the "are you sure you want to run this potentially harmful program?" prompt. Be it Windows, Linux, or Mac, the user can still hit "Yes."
Disclaimer: this would require a "virus"-like program written for each of the operating systems mentioned. It's easy enough to do for each of these platforms, but virus writers can't be bothered to target such small markets.
"That means as much as admitting your machines were the cause in the first place..."
That's because most of the home computers out there run Windows. It's merely the best way to get zombies, due to the population size. No, it's not due to some inherent flaw making Windows vulnerable to these infections. Usually these bots get onto the computer the same way Flash Player or Firefox does: they're downloaded and run. Granted, they're downloaded from less-reputable places. Very few viruses nowadays actually install themselves on your computer with no user intervention.