63 posts • joined 22 Jun 2009
Re: I think Mr Carl Bass is making the same mistake Steve Balmer did with tablets
There's one problem with this approach. Scale.
Think about it. You can go and buy a nice model aeroplane for, say, 50 quid, then put it together.
3D printing would allow you to print the same nice model aeroplane for, say, 25 quid, plus an up-front cost of 5,000. That's not including energy prices, either, and if it takes days to print at several hundred watts, it starts to look rather cost-ineffective.
Worse still, you have to file off the bits sticking out, then wonder why the wings are uneven.
3D printing at a local shop will be a different matter. Let's say a plastic part failed in some appliance (say, a washing machine latch). The manufacturer will either say they don't stock spares, or they'll try to force you to buy the latch assembly, or the whole door, or at the very least, sell you the plastic bit, but charge 20% of a new washing machine.
One of the reasons is that you have to use a 3D printer regularly, or the resin clogs the nozzles. I had the rather sobering experience of using epoxy two weeks ago: it cured in the mixing nozzle within two minutes of being squeezed into it. I think 3D printers use a thermoplastic compound for this, but there are two problems with it. If it doesn't re-plasticize after being warmed, you have to clean out the nozzle mechanically or with solvents, and either method may damage the delicate nozzle. If it does re-plasticize, there's the question of how you heat all the pipes that hold it, and whether it will be stable in any useful application.
That might be exactly what Carl Bass implied all along -- 3D printing won't be ubiquitous at home, but local businesses will be built around it for sure.
Sony is a media manufacturer, too
Beyond AIT and SAIT, which were always niche products meant more to showcase Sony's media manufacturing capabilities than to turn a huge profit, Sony OEMs a lot of LTO media and sells some under its own brand as well.
Nobody in their right mind ever suggested that Fujifilm would release a new tape format when they announced joint work on Barium Ferrite. Now they even dedicated a site to BaFe:
Sony stated that their technology would allow storing 74 times more data on a standard BaFe LTO-6 tape. Fujifilm demonstrated a 35 TB tape; Sony now claims they could manufacture tape holding up to 185 TB in the same format -- exactly 150 TB more.
Since Sony is a media supplier, they are naturally interested in being the chosen media provider. The LTO Consortium decided to adopt BaFe for LTO-6 (and presumably LTO-7). If Sony plays this right, they can get the LTOC board to adopt this as the media of choice for LTO-8 (and Oracle's T10000x, and presumably IBM's future 3592 drive) and then make money licensing the manufacturing to other media suppliers. Right now, all LTO-6 cartridges *must* be BaFe. Every cartridge sold is solid extra profit for Fujifilm, and Sony rightly wants to jump on that bandwagon.
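A quick sanity check of the capacity figures above. The 2.5 TB LTO-6 native capacity is my assumption (it's the standard figure, not stated in the post):

```python
# Sanity-checking the tape capacity claims.
# Assumption: LTO-6 native capacity is 2.5 TB (standard figure, not from the post).
lto6_native_tb = 2.5
sony_claim_tb = 185.0
fujifilm_demo_tb = 35.0

print(sony_claim_tb / lto6_native_tb)    # 74.0 -> "74 times" a standard LTO-6 tape
print(sony_claim_tb - fujifilm_demo_tb)  # 150.0 -> exactly 150 TB more than Fujifilm's demo
```

Both claims are internally consistent if the baseline is a 2.5 TB LTO-6 cartridge.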
Re: Ready market...
Wrong. If margin goes UP by a percentage and they're fine to report it, it means it was positive in the first place.
10% margin up 26% is 12.6% now. -10% margin up 26% is negative 12.6% after going up.
Note that nobody uses these statements when margin is negative and drops further down.
And what you're describing is a change that would necessarily be expressed in percentage points, not raw percentage.
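A minimal sketch of the arithmetic above, treating margin as a signed percentage:

```python
def grow_margin(margin_pct, growth_pct):
    """Scale a signed margin by a relative growth figure (both in percent)."""
    return margin_pct * (1 + growth_pct / 100)

print(round(grow_margin(10, 26), 2))   # 12.6  -- a positive margin "up 26%"
print(round(grow_margin(-10, 26), 2))  # -12.6 -- a negative margin scaled the same way stays negative
```

The point: relative growth preserves the sign, so a margin that "went up 26%" must have been positive to begin with.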
There is just one problem with Putin. He was a KGB colonel. Have you ever seen a colonel commanding generals?
Putin may be deranged. He may believe himself to be the Czar. However, that makes him all the easier to manipulate and I can't shake that nagging feeling there's another power behind the throne and it's not even that hard to find.
Just goes to show you how little you can do with a MacBook Pro rather than the other way around. Seriously, why did you have a high performance laptop in the first place if you managed to completely replace it with a commodity appliance? For show?
What gives with just 45 MB/s sequential performance? With RAID 0, no less. A single drive can feed and digest 100 MB/s sequentially; with RAID 0, that should scale nearly linearly to 400 MB/s.
Apparently this is a problem with all hardware solutions. I had a Linux soft-RAID setup with five 1 TB WD Green drives in RAID 5 and could get nearly 500 MB/s out of them. Then I switched to an LSI 9260-8i and performance dropped by a factor of at least ten, which is ridiculous; I'm considering going back, despite the sunk costs.
I can see the NAS boxes are even worse.
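Back-of-envelope for what RAID 0 should deliver, using the ~100 MB/s per-drive figure from the post; the 95% scaling efficiency is an assumption on my part:

```python
single_drive_mbps = 100      # per-drive sequential throughput (figure from the post)
drives = 4
scaling_efficiency = 0.95    # assumed; striping overhead for sequential I/O is small

expected_mbps = single_drive_mbps * drives * scaling_efficiency
observed_mbps = 45

print(expected_mbps)                                  # 380.0
print(round(observed_mbps / expected_mbps * 100, 1))  # 11.8 -- percent of expected
```

45 MB/s is barely a tenth of what a four-drive stripe should manage, so something other than the drives is the bottleneck.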
Aren't ARM SOCs going for as low as 50 cents? Let's imagine a proper dual core goes for 2 bucks, how much can they save by going lower?
First, the slave drivers there will not allow him to dig the stuff.
Second, nobody would buy from him. Companies do not buy such small amounts because they are not sustainable, and because it's hugely unlikely the ore would be of a useful grade (if it's ore at all). You really need a lot of buckets (= a lot of hands), and you need some ore preprocessing on site -- hence organized operations are set up.
Having recently read about the proceedings in Congo myself, I am appalled by this situation. Unsurprisingly, the corrupt government is not interested in resolving this problem, since they benefit from the minerals one way or another, and probably are involved in illegal dealings themselves.
Cue someone going there, trying to enforce some humane standards, and being labeled warmongering world police.
50 MW megaphone?
Man, that could be legally classed as a weapon of mass destruction. Or maybe something Atreides would use.
Re: Good luck to you Penny Arcade
By all means apply. The problem is, they've got millions of readers while PA has a budget of several hundred thousand dollars, perhaps a few million now. The job will definitely suck hard. Of course it won't be a 160-hour-per-week job -- they'll expect you to squeeze the 160 hours into 40.
Sure, prioritization -- but I seriously doubt they have a ticketing system or any SLAs. You'll likely be doing several things at once: working out the network, because there are looming problems that might bring down the whole operation, while also fixing a printer because the printouts for next month's presentation come out a bit too dark for the art director's liking. Guess who will demand you drop everything and solve his problem immediately. You'll end up fixing the network after hours, which nobody will notice, since you're managed directly by people with no IT experience whatsoever.
If you are a professional, if you prioritize and never take overtime, you'll have a working network, but you'll be kicked out for ignoring the key person in the company. Hint: in a small organization they are *all* key people, and their problems come first. If they do retain you, expect no raises, because you're "not doing your job well."
I wonder who will fall for that ad
I read PA regularly and I appreciate their comic strip, but this ad really shows how easily you can lose touch with reality. I'm sure Mike and Jerry are perfectly happy to work their asses off for a measly reward since they own the operation and I assume they'd be happy with it even if they only barely broke even. But working behind the scenes, however important your job is, you'll never get the accolades and you'll never be in the spotlight, while you're still expected to put all of yourself into the job. Might as well get paid for it, no?
What would be the perks? Being able to talk to the owners? Play video games with them? Seriously? If that's supposed to cover for the ridiculous salary, they're really stretching it.
Problem is, it's a McJob. Expect poor pay, worthless experience and constant patting on the back, telling you how important you are to them. The problem with experience is that it will never be appreciated properly. It will be appreciated by other small operation webcomics, who will line up to fleece you at the same job just like PA is likely to, but any serious employer might actually balk at this, fully expecting that you just goofed off at the job. Finally, for all the work you put in there, you'll be laid off if they're ever merged into something larger.
Re: Well, it's Penny Arcade...
Sure. A 'normal' and 'boring' job ad, outlining the actual challenges and any items specific to the job.
Once you accept a funny ad, expect your job to be funny (but of the black humor variety). Oh, and prepare to accept funny money for that, too.
Re: Good luck to you Penny Arcade
I couldn't agree more. Especially when looking at this:
- Annual Salary: Negotiable, but you should know up front we’re not a terribly money-motivated group. We’re more likely to spend less money on salary and invest that on making your day-to-day life at work better.
Improving day-to-day life at work? That still requires money, which I would rather spend on things like a car or a good lunch. The job should pay no less than four salaries minus the overhead of three extra people, so something like 200-300 thousand dollars. And frankly, the ad reads more like they expect someone to come in and work for 20-30 thousand.
Somebody doesn't get the concept of preorders
So the mobes will hit the retail channel in November, and only then will they be sent out to those who preordered -- who should get them before the end of the year?
And to think they only paid €100 for the privilege...
What's an IOP? The article uses the pluralization "IOPs". Obviously IOPS is I/O Operations Per Second -- but an IOP?
Re: Slightly fruity comparison
Fruit flies? Of the radioactive mutant variety?
What about tape?
Let's discuss the flash solution. It could be made of small or large modules; each approach has its own advantages and drawbacks. Even though it's frugal, flash still needs power to function, and the larger the basic module, the more power it uses. Then there's reliability: larger modules would fail more frequently per module than small ones. And they would ultimately be more expensive per byte, because they would need more complex controllers. These factors favor smaller modules. However, smaller modules require more complex routing and switching, and finally very complex controllers for each brick of modules.
Would it be cheaper than current flash technologies? Sure. Would it be cheap? Not by a long stretch. Flash is still 8-10 times more expensive than spinning drives. TLC doesn't bring the cost down far enough.
It's also not a matter of density. At the same node, I suppose flash makers could make features denser, but even if they were twice as dense (which is rather unrealistic), we're looking at only four times the raw capacity -- which is still more expensive than spinning media.
Interference would become a greater problem, and it would probably cause the usable capacity to not increase as fast as raw capacity did. Durability would suffer, of course, but as the guy said, it's not a problem for them, especially since they already don't delete the content, but keep it hidden.
Nevertheless, it's still not a solution. Perhaps Facebook will be happy with the resultant module, even if it's expensive, if they think it will save power, or if it would be less complex to build and maintain, but I don't think so.
Which brings me to tape. There are T10000C drives that offer 5 TB per tape, and T10000D on the horizon which will offer more -- that's beyond the LTO roadmap at the moment, so I'm not talking about LTO. Tape has the nice property that when it's idle, it's not using up power and when a cartridge is needed, automation takes care of picking it up and mounting on a drive.
That said, I realize that if he said that waiting times for spinning up disks are too long, waiting half a minute or so to access a tape would probably be much too long for a user to wait. Caching part of the content on disk to wait until a tape is mounted would probably alleviate some of this concern. However, the service is free of charge, so Facebook pretty much has all power to set SLAs for it.
Re: Why when Radio is already free?
What's so bad about DAB?
Re: Wow! 75 times faster than... whaaat?
@Steve.T: Reading comprehension, man. It was obviously irony. Should I have used HTML5-compliant <sarcasm> tags?
They can be used for light gaming, assuming you're happy with 1366×768 at the absolute lowest settings (some games provide an Intel-specific preset, with quality even below the standard minimum).
Oh, and funny you should mention AMD buying ATi. Remember the Intel740? Thought not. Intel bought Real3D and released their GPU in 1998 -- eight years before AMD bought ATi. They had EIGHT MORE YEARS to develop that (admittedly rubbish at the time) solution into a solid product. When AMD bought ATi, ATi was struggling with its lineup, slowly recovering from the 2000 series debacle with the notably improved 3000 series, and wasn't well entrenched again until the 4000 series and Evergreen. ATi's integrated GPUs were already vastly better than Intel's at that time, and they became excellent without much prior support from AMD. Intel's GPUs continued to lag behind AMD's, and when AMD integrated them into APUs, Intel was outstripped again.
Between 1998 and 2006, Intel had time to improve their GPUs. They failed. They had eight years of opportunity to integrate the CPU and GPU in hardware, even when the GPU resided in the northbridge, but they didn't care. Since 2006 they have slowly improved, roughly doubling performance with each generation, but it was still way behind the curve. Seeing Intel's lack of initiative, I have to call bullshit on this 'Iris'. Maybe Haswell isn't going to bring anything new to the table in terms of graphics (aside from increased clocks), and Iris is just a way to paper over the lack of performance by doubling the number of GPUs.
As for playing video streams -- Intel's CPUs DO NOT use the GPU portion for decoding the stream. The CPU has a dedicated processing unit for this. And although it is impressive in its own right, it is supposed to play high numbers of video streams without breaking a sweat.
And your last paragraph -- as long as Intel keeps trying to stick x86 in everyone's face, they will continue to fail. It's funny how Intel continuously claims that their target is ahead of them. When the i740 was released, high-performance GPUs were their future. They failed. Then they said their goal was the best integrated graphics. They failed. Then they were supposed to release Larrabee, which was going to introduce Intel to the enthusiast GPU market. When that failed, they said Larrabee had been intended for heterogeneous computing all along and was never meant to be a GPU. Now you're saying their goal is the best performance in tablets? Ain't gonna happen. Iris isn't going to convince anyone, either.
Wow! 75 times faster than... whaaat?
Seriously, who are they kidding? Why not claim they are seventy-five HUNDRED (7500) times faster than ViRGE, the 3D decelerator? While they're at it, why not remember that their Core i7 CPUs are several hundred thousand times faster than 8088?
75× rubbish is just rubbish, only more of it. Their drivers are bad, and the performance is in the basement compared to AMD's integrated GPUs. They could be on to something with Iris only if the competition had stood still for the last five years. Wake-up call, Intel! You are NOT competing with a 2006 chipset-integrated Radeon or GeForce! You're going to compete with 2014 APUs, which will include hUMA (which for most users will mean PS4-like GDDR5 system memory). Your GPU may well be 75 times faster than in 2006, but AMD's GPUs have improved even more over the last 7 years, and you are not going to fool anyone.
I can see nobody mentioned Bicom yet
It launched in 1993. I remember reading the review, but frankly there's scant info on the Internet now and it's hard to dig anything up.
Nevertheless, I did find information on it. There were two models: 240i and 260i, differing in HDD capacity (40 and 60 MB, which was a lot for a notebook back then). Specifications:
- Am286LX at 16 MHz (1.5 µm node)
- 2 MB RAM
- Dimensions: 223×161×31 mm (smaller than original EeePC!)
- Weight: 1 kg (which is less than some EeePC models, at almost 1.5 kg)
- 7.5" monochrome display (640×400 resolution, line-doubled CGA)
- Battery life: 3-4 hours on 5 AA batteries (!), you could use rechargeables (Ni-Cd at the time).
- Price: I can't remember now, but for what it did it was cheaper than the cheapest regular notebooks, at some $300-400.
It's hard not to draw parallels between then and now. The subnotebook was based on technology that was two generations behind the mainstream (486, color displays), which is about where netbooks are in relation to notebooks.
Obviously, technology has progressed since then, but in 15 years between this and the original EeePC, what did we get in return? Frankly, not much! Larger hard drives, color displays (sometimes they are even larger), more memory. But feature bloat caused the netbooks to not perform better than their old rivals. If you used 700 mAh Ni-Cd rechargeables with the Bicom, you got 3 hours battery life. With 2700 mAh NiMH rechargeables, you would get 12 hours -- compare that to 3 hours on an EeePC with batteries rated at 5600 mAh with higher voltage. Displays obviously draw the most energy, but 15 years of progress should have brought them at least to parity. If anything, turning off backlight (or the display altogether, and running on an external monitor), should allow the netbook to work considerably longer, but it doesn't.
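A rough energy comparison behind the battery-life claims above, as a sketch under assumed nominal voltages (NiCd/NiMH cells at 1.2 V, the EeePC pack at 7.4 V -- neither voltage is stated in the post):

```python
def pack_wh(cells, mah, volts_per_cell):
    """Pack energy in watt-hours: cell count x capacity (Ah) x nominal voltage."""
    return cells * (mah / 1000) * volts_per_cell

bicom_nicd = pack_wh(5, 700, 1.2)   # ~4.2 Wh over ~3 h  => ~1.4 W average draw
bicom_nimh = pack_wh(5, 2700, 1.2)  # ~16.2 Wh at ~1.4 W => roughly 12 h
eeepc_pack = pack_wh(1, 5600, 7.4)  # ~41.4 Wh over ~3 h => ~13.8 W average draw

print(round(bicom_nicd, 1), round(bicom_nimh, 1), round(eeepc_pack, 1))  # 4.2 16.2 41.4
```

So the EeePC burns an order of magnitude more power than the old subnotebook to achieve the same runtime, which is the whole point about feature bloat.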
Is it unreasonable to expect that you should be able to get a 9-inch notebook running a shrunk CPU two generations old (hey, it would be the original Nehalem now) -- not downclocked, mind you, with an SSD, weighing in at less than 0.5 kg, with dimensions of an A5 sheet of paper and at most 5 mm thick?
Re: Hmmm, what about tape?
You're welcome. And no, I didn't mean absolute leadership, but some of the names and companies should have never made the list.
Hmmm, what about tape?
What about StorageTek/Sun/Oracle and tape? What about IBM, for that matter? You mentioned HP, which might go bankrupt with its assets dispersing among various other companies, but IBM and Oracle tape equipment currently holds more data on tape (an order of magnitude more) than is kept on disk. Tape still has the edge over flash in overall TCO, and flash vendors have to catch up with tape, not vice versa.
I understand you may be enamored with the new technologies, but as it stands the list is woefully inadequate. Dropbox, Facebook and Amazon? Pretty much equivalent in terms of web storage -- pick any one of the three, or add many more to the list if you really believe they matter. How about adding Rapidshare, then? Going further, I see Fusion-io, which is apparently struggling, as you yourself reported:
Is this the financial outlook of a market leader and a successful company?
It doesn't seem that Fusion-IO is leading in benchmarks, either. Or are you considering adding OCZ to the list as well? It certainly doesn't seem that competition has to chase Fusion-IO in anything.
Is this a contest of who makes the presentation most devoid of content and crams as many cliparts to a page as possible?
What do the gears even represent? How the hell are two completely different domains -- sourcing and monitoring -- supposed to drive one another???
Or is it just a classroom project of one of the execs' children?
Re: The success of capitalism
I've yet to see Capitalism at work. The bailouts are precisely where the problems lie -- it's not Capitalism, it's thinly-veiled Socialism with tolerated Personal Property (unless said Personal Property needs to be taken over by the Government to support its own interests).
Why would it matter whether they are experts on global climate? The only conclusion of the research is the unprecedented scale of carbon sequestration in peat, and the enormous rate of growth of mires and bogs. The additional conclusion that the amount of carbon sequestered might be high enough to offset human industrial CO2 emissions is added almost as an afterthought.
By the way, annual anthropogenic CO2 emissions are within the margin of error of estimates on amount of CO2 emissions from a medium-sized volcano eruption. How could human-made CO2 be responsible for anything? That's the thing that I've never seen climate scientists refute. It's like they're trying to explain how we can heat an ocean using a candle.
Given that indeed.com is a global site, and contains many more jobs in other sectors than IT, it's a big deal.
Even if it is not a huge percentage within IT, it's still significant.
Compare it to other IT jobs: programming, where the basic skill is a language; generic IT jobs where no particular skills are required; web coding, with the usual fare of CSS and PHP. Hadoop will be a significant part of the remaining critical jobs -- like mainframe or Unix administration -- which may seem archaic to some, but which bring in a lot of money.
Not everyone will have to know Hadoop inside-out, but those who do, and whose skills are required, will rake in the big cash.
Re: To quote Top Gear... How Hard Can It Be™
Okay, so I'm risking a reply to what may be an obvious troll, but... the Soviets were thought to have an enormous advantage in rocket design 50 years ago -- and, well, look up the N1.
The fact that the Soviets couldn't build a "simple rocket" to reach the Moon makes the American Saturn V that much more remarkable. (Some call it a lucky break, but the perfect safety record says otherwise.)
Re: "squeezed the juice" out of the two papers...
Unfortunately (yes, I'm Polish), not much. It's not that a lot of the theoretical foundations weren't laid by Polish mathematicians, it's that certain political decisions caused them to fall by the wayside.
However you want to twist it, siding with the French in their code-breaking efforts cost them the chance to work at Bletchley Park. Turing was a brilliant mathematician and computer scientist and he did a lot more work in breaking the code than any other man.
Cheers to that!
Well, the writer lives on the assumption that Apple's market share is growing and is significant everywhere in the world.
That's not the case in roughly 90% of the world, though. OS X requires Apple hardware, and people don't want to pay the Apple tax for an otherwise ordinary PC. Not to mention the exorbitant prices for parts and the limited upgradeability. Sorry, but for the price of a Mac I can get a much more capable PC and run whatever I wish on it.
Re: Re: Re: huh?
Aside from being a shiny toy, what can a tablet do that a PC cannot at half the price?
Re: Re: "Windows is dead."
Yes, that is, assuming all those users decide to either:
1. Accept 10-12 inch screens on their tablets.
2. Accept 10-12 kg tablets with 27 inch screens.
Everybody seems to be under the impression that screen size no longer matters. And if you add in a keyboard, mouse, external screen and a power brick to an otherwise svelte slate, the sum becomes vastly more cumbersome than a desktop, vastly more expensive, and vastly less capable.
Or has everybody forgotten that tablets cost twice the amount of a more capable desktop PC?
How much are they suing for?
It misses the most important detail -- how highly do they rate their moral losses, and how much do they want from Tesco/Apple/world+dog?
Mine's the one with the disclaimer not to use as parachute on the tag.
Lucky then that the financial analysts covering Oracle are able to tell what SPARC is, considering those covering IBM aren't able to tell anything about one of IBM's most valuable assets.
Firefox? They did 3.0, jumped to 3.5, then sanity apparently struck for a while, since they did 3.6 before jumping to 4.0 -- but they lost it all by skipping to 5.0 in three months, 6.0 in about two, and 7.0 in just one more. If they release something in mid-November, it's going to be version 11...
All those searches have to pay for themselves, you know
They don't make (much) money when you search for a search engine.
Furthermore, using a specialized search engine implies that you are shopping, and context ads in those engines are much more effective (as opposed to context ads shown while you search for whatever). Context ads generate the most revenue -- precisely the revenue Google would otherwise lose.
It's Intel's GPU drivers, which are not up to scratch.
Quite frankly, the last post is an insult to intelligence. Which of the three companies would you prefer to own (assuming you were looking for longevity):
Company A, $100 bn revenue, $150 bn costs, $50 bn loss
Company B, $80 bn revenue, $60 bn costs, $20 bn profit
Company C, $50 bn revenue, $10 bn costs, $40 bn profit
Going by your logic, you would believe company A has the best outlook of the three above.
Yes, because as we all know, increasing debt limits and doing nothing about the spending is the right way to go wrt budgets.
@E 2 : Ummm, no
The beta drivers were there earlier. Plus, the older drivers worked (sort of). And let's not forget that it was 6 weeks after the *announcement* was made. General availability wasn't until a few weeks after the release.
Well, how is *your* English?
"Ellison said in the Wall Street call today that Oracle has installed over 1,000 Exadata clusters (not racks, but distinct clusters) so far, and that it can triple the base to more than 3,000 machines in fiscal 2012."
In your own words:
What Oracle said is they have "installed" over 1000 Exadata servers. They made it clear it is not 1000 clusters.
To me, it's pretty clear Ellison said clusters. Care to re-read? BTW, you were replying to a post made at 9 p.m. and accused the guy of early drinking on Friday. You made your post at 9 a.m. on Saturday. What does that make you? Pot? Kettle? Black?
Installed vs. sold
Now, correct me if I'm wrong, but isn't this more a matter of how many Exadata clusters are installed versus sold but not yet installed? I.e., the sales team could have closed the deal already but put it on three-month backorder; then it takes time to install, lab-test and put into production.
You seem to be taking the common definition of something being installed or sold, and not the one that's used by businesses.
Why doesn't Tilera compare their CPU to SPARC?
Interestingly, Tilera didn't seem to show whether there are any advantages in the merge-sort when compared to a SPARC T3 CPU. Since T3 is also made on 40 nm, the comparison would make sense in that regard, especially since both CPUs are RISC designs.
Back when Itanic was being actively backed by Intel and major software vendors, its notoriety as a triumph of marketing over CPU design, pursued for the sole purpose of bringing the general-purpose CPU market under an Intel monopoly, was pointed out and rightly criticized by all major tech sites, El Reg included.
Then, one company after another stopped supporting it, including Microsoft and Red Hat, and Intel was ostensibly lukewarm about continued development of Itanium -- yet all remained fine and well, and the attacks on Itanic continued.
Now, when Oracle finally jumped the gun and announced the termination of development for Itanic, everyone is suddenly rushing to Itanium's defense (and to bashing Oracle)? Come on! I know the general attitude of most sites is that Oracle is a greater evil than even Microsoft, not to mention hp or Intel (which, bafflingly, is still presented in a good light despite its uncovered monopolistic practices), but it's gotten ridiculous at this point. Microsoft and Red Hat were never subjected to a fraction of the criticism that's being leveled at Oracle.
Itanium was never a good chip, plain and simple. Intel has shown an ambivalent attitude towards it in recent years, and I can hardly believe it will retain any edge over x86, much less a significant one. Coupled with dwindling market share, this was to be expected. But to defend Itanium all of a sudden? I'm baffled.
We really need an angelic/demonic Larry icon...
Let me clarify a few things
First of all, those customers did not migrate off of Siebel or Peoplesoft. The original assertion was that those customers migrated from Oracle support to SAP support, but did not change the software (which would be ridiculous otherwise and Oracle would have no case there).
If you are considering damages, it does matter how many customers have been lured away. However, the number of customers lured away is inconsequential (it could very well be zero) when it comes to determining whether SAP was guilty of IP theft or not.
Ellison did admit that the worst case scenario was averted, but you can't claim 300 customers (out of 300 thousand, give or take 299 thousand) is just a small bunch. The largest customers make up the bulk of the contract value.
He does make a lot of sense
The anti-Oracle tone I see all over the press would be ridiculous if it wasn't so damaging to Oracle. Sun was going down and nobody seemed to care what would happen to the IP if it went down completely. IBM certainly didn't care, but Oracle did. They bought out Sun with cold, hard-earned cash, and it was obvious then and is as obvious now they want to extract as much value out of Sun as possible.
The first such obvious negative publicity was over OpenSolaris's demise. That was fair game, though, especially as OpenSolaris apparently did not live up to its potential: most code donations came from within Sun, not from outside developers, save for small pieces.
Then, recently, it was OpenOffice -- true, at least that one was called out by Sun, but the fork still appears to have been done haphazardly and certainly without proper funding. I doubt that without corporate backing (especially monetary), LOo will get anywhere. They'll probably try to fly back under Oracle's wing before 2011 is over. I may be wrong -- it depends on how hard they want to drive the point -- but I'm fairly sure there will be a lot of stagnation in development in the meantime.
And now about Java -- what's wrong with the roadmap that Oracle laid out? Nothing, apparently, apart from the fact that Oracle was the one that laid it out and Oracle is against Google, which automatically makes Oracle evil and all their decisions null and void?
Is it really bad that Oracle tries to recover the money they spent on Sun?
He does make a lot of sense
1. When flash reaches high enough capacity for home use at low enough prices, the market will slowly abandon spinning drives. With less traditional drives sold, losing economies of scale will slowly hike the price of HDDs, closing the gap even further in a positive feedback loop. As consumer drives go up in price, so will enterprise drives. This does not affect tape, which was always niche compared to disk.
2. Bit density on tape still has ample room to grow. T10K cartridges hold a tape surface area of about 75,000 cm^2. Compare this to about 456 cm^2 maximum for 4-platter 3.5" disks (I'm assuming 3.5" platter diameter with a 1" diameter hub). The bit density of T10KB-formatted tape is about the same as that of a 6 GB disk. There's ample room for growth. Assuming the bit density of a four-platter 1 TB disk, a typical (4x5x1") cartridge could hold over 150 TB of data.
3. T10KB has 240 MB/s native throughput, not 120 as in the article (that's the throughput of the original T10K). A 20 TB cartridge would store data at 20 times the density. Assuming there would be 144 tracks (compared to T10KB's 36), the linear bit density would be 5 times higher, so 1.2 GB/s throughput should be achievable. Assuming 100,000 slots means 10 connected SL8500 libraries with 64 drives each, that 1,380 TB/hour translates to almost precisely 600 MB/s per drive (the rounding error is insignificant).
4. As opposed to LTO, Storagetek drives maintain backward and forward compatibility with the same cartridges usable on various generations of equipment (based on the formatting), regardless of technology or format changes in between. It can be expected that the T10K cartridge will be usable on T10KC or T10KD drives, depending on their underlying technology. Obviously, Fowler may have meant 20 TB compressed capacity, which makes it perfectly viable -- 10 terabytes in 2015 seems almost like a breeze. Assuming a 2 TB T10KC is released before May 2011, 4-5 TB T10KD in 2013, 10 TB T10KE is certainly possible in 2015. 20 terabytes native is significantly more involved and would possibly require Storagetek to break backwards compatibility.
5. At some point, it may be possible that flash becomes significantly cheaper (although it's doubtful that progress would be notably faster than Moore's observation suggests, though 3-bit MLC could allow flash to overtake Moore's, as could 3D cells suggested by some people), and tape storage will be on the way out, possibly replaced by switched SATA/SAS in a MAID (zero spin-up time could make it possible). This of course assumes that the high-density storage is indeed cheaper to make and that there will be people willing to pay for lower tier (slower, but higher capacity and/or significantly cheaper) SSD storage.
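Reworking the arithmetic from points 2 and 3 above. The areas, capacities and drive counts are the figures quoted in those points; the equivalences are just arithmetic:

```python
# Point 2: recording-area ratio between a T10K tape and a 4-platter 3.5" disk.
tape_area_cm2 = 75_000     # quoted tape surface area in a T10K cartridge
disk_area_cm2 = 456        # quoted 4-platter disk area, both surfaces
t10kb_native_gb = 1_000    # T10KB native capacity: 1 TB

area_ratio = tape_area_cm2 / disk_area_cm2         # ~164x more recording area
equiv_disk_gb = t10kb_native_gb / area_ratio       # the "6 GB disk" equivalence
print(round(area_ratio), round(equiv_disk_gb, 1))  # 164 6.1

# Point 3: 10 SL8500 libraries x 64 drives sharing 1,380 TB/hour.
drives = 10 * 64
per_drive_mb_s = 1_380e12 / 3600 / drives / 1e6
print(round(per_drive_mb_s))                       # 599 -- almost precisely 600 MB/s
```

At a 1 TB disk's areal density, that ~164x area ratio is what yields the 150+ TB cartridge figure in point 2.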
That's just fantastic
I just love this double standard. On one hand, you are urged to be successful in what you do. On the other hand, if you're "too" successful, the likes of those watchdogs will want to punish you for trying.
This is ridiculous. Google built an empire from scratch, pretty much without any competition. Now that their business model is proven and successful, freebooters want to simply copy them and get rich in the process. But one after another fails, and then blames Google (rather than their own ineptitude and lack of originality and distinguishing features) for the failure.
So it's a Sun Ray
Only it consumes 3-6 times the power and requires expensive software to run?