302 posts • joined Wednesday 10th June 2009 13:49 GMT
Mark Lockareef says
From Mark Lockareef:
Just read your most recent article on Nexenta. As you know, Evan did a great job building the company in the early days but needed to pass the baton on to a leader who could take the company to the next level. I joined Nexenta as our interim CEO in February to help the company find our next permanent CEO. You state that I "departed in pretty short order" but I was actually the interim CEO for 7 months ... longer than I expected it would take to find the permanent CEO (I thought my role would last about 3-4 months). Turns out that great CEO candidates in the next-generation storage space are few and far between. We actually got a lot done in those 7 months wrt building the foundation for growth. Now that Tarkan is on board, it will be fun to watch how he continues the story.
Just wanted to make sure you got the straight scoop.
Death of the business Desktop
The business PC desktop is facing death by a thousand VDI cuts, augmented by a BYOD bashing.
Business desktop death pointers:
- EMC’s XtremIO all-flash array, twinned with VMware View or Citrix Xen Desktop and backend VNX arrays, can support 7,000 or more virtual desktops
- All-flash SolidFire arrays can support large-scale, 1,000+ VDI roll-outs
- Atlantis and all-flash Violin arrays can support 1,000s of virtual desktops
- Hybrid flash/disk array startup Tintri can support 1,000s of virtual desktops
- Startup Pivot3 supports VDI
- There is a VDI-focussed Vblock from VCE
I know this is repetitious; that's the point. These aren't just a few pointers; this is a flood, a veritable tidal wave of systems all focussed on replacing pricey, complex-to-manage business desktops with centralised virtual desktop systems.
Some other suppliers with VDI capabilities: Pure Storage, Tegile, Nimble Storage and Fusion-io. A combination of flash storage and deduplication is making possible cost-efficient, capacity-efficient VDI set-ups with the responsiveness of actual desktops, or better.
Set this VDI blitzkrieg to one side and consider BYOD - Bring Your Own Device, in which users bring their own notebook computers to the office. This is the guerilla war assaulting the business PC with VDI being a full-on, frontal assault.
The net result could be a multi-year reduction in business PC use with, in some businesses, desktop PCs literally disappearing.
We haven’t any numbers, beyond the general PC annual shipment numbers decline. Our sense of it is that the business desktop is an endangered IT species facing year-on-year shipment declines, wilting under the impact of artillery barrages from the massed ranks of VDI howitzers and BYOD sharpshooters.
In that case there will be a consequent decline in business PC component shipments: hard disk drives, power supplies, DRAM, motherboards and CPUs.
If 200,000 business desktops go away each year for five years that’s a million fewer hard drives shipped. And it could be worse; 500,000 fewer desktops each year means 2.5 million fewer drives over five years.
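The back-of-envelope sums above are trivial, but here they are as a sketch (`drives_lost` is my hypothetical name, and one drive per desktop is an assumption, not market data):

```python
# Fewer business desktops shipped means fewer component drives shipped.
# Assumes one hard drive per desktop - a simplification for illustration.
def drives_lost(desktops_per_year, years=5, drives_per_desktop=1):
    return desktops_per_year * years * drives_per_desktop

print(drives_lost(200_000))  # a million fewer drives over five years
print(drives_lost(500_000))  # 2.5 million fewer drives over five years
```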
I think we’re at a VDI/BYOD tipping point and a storm surge of virtual desktop instances is going to wash increasingly unwanted and unloved business desktops right out of the offices they’re anchored to, never to return.
Is this true? Will it happen? Am I smoking pot?
I think not Sherlock.
The SMART flash DIMM announcement opened up a major server memory redesign period. The idea of packing NAND chips tightly together and accessing them in the same address space as main memory is highly attractive to server manufacturers looking for an edge in running applications faster than, for example, PCIe flash allows.
SanDisk has bought SMART and now has a DIMM future (sorry). My understanding is that all the major server suppliers are looking at non-volatile memory DIMMs and designing future servers with storage memory, and not just with NAND but envisaging post-NAND technologies such as Phase Change Memory (PCM), Spin Transfer Torque (STT) RAM or some flavour of Resistive RAM (ReRAM) technology.
This technology transition will make storage memory byte- instead of block-addressable; the programming model would change. There would need to be a software layer, like Memcached, to present storage memory as pseudo-RAM to applications.
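To illustrate the byte-addressable idea, here is a minimal sketch using an ordinary memory-mapped file as a stand-in for a storage memory DIMM. This is my analogy, not any vendor's API; the filename is invented:

```python
# Sketch: persistent storage presented to an application as byte-addressable
# pseudo-RAM via a memory map, rather than through block read/write calls.
import mmap, os

path = "storage_memory.bin"          # hypothetical persistent region
with open(path, "wb") as f:
    f.truncate(4096)                 # reserve one page

with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)
    mem[0:5] = b"hello"              # byte-addressable write, no block I/O call
    mem.flush()                      # persist, analogous to a cache-line flush
    mem.close()

with open(path, "rb") as f:
    assert f.read(5) == b"hello"     # the bytes survived outside RAM
os.remove(path)
```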
We could see x86 motherboards populated with storage memory DIMMs.
Cisco’s UCS servers are known for having large amounts of RAM. Building on its Whiptail all-flash array acquisition it would not be surprising if Cisco were to announce storage memory-using servers in 2014. We’re surely going to see Whiptail arrays using UCS servers instead of the Supermicro mills they currently employ.
Dell, IBM, and HP server engineers and designers must be actively looking into the same storage memory technology.
And it’s not just server manufacturers. Storage suppliers with an interest in PCIe flash are also looking at this topic. For example, I’m convinced that WD with its Virident PCIe flash acquisition is looking at the field, as well as Fusion-io. There is a go-to-market issue for the non-server suppliers, as in, who do they sell to?
Do they pursue OEM deals with the server suppliers, or retrofit deals with independent system vendors?
Moving on, in some scenarios a bunch of clustered storage memory DIMM servers could avoid the need for an external flash array and talk to persistent external disk drive arrays for bulk capacity.
I’m seeing storage memory DIMMs as predominantly a server supplier play, and one that limits the applicability of all-flash arrays. Am I smoking pot here? Have my hack’s table napkin-class ideas gone way past reality? Tell me what you think is real here - and if reality bites my ass then I’ve learnt something, which will be good.
Re-purposing old arrays
A German IBM customer, the Ernst Strüngmann Institute (ESI) for Neuroscience in Frankfurt, has dumped the EMC Isilon OS from three 36NL nodes and replaced it with SUSE Linux using IBM's GPFS as the filesystem.
Each node has 36 internal disk drives in a RAID-6 configuration. The InfiniBand adapters involved work with RDMA enabled for native GPFS - version 184.108.40.206 to be precise.
In effect, 2011-era (not that old) Isilon hardware is being re-used in a three-node cluster to function as a 55TB filestore using IBM software. Cool.
Are there other examples of storage array re-purposing that beat this in coolness factor terms?
3D Read/write heads
Would it be theoretically possible to 3D print a disk drive read/write head?
I think you'd need a 3D printer that could print small numbers of molecules ...
Answers on a postcard ..... to this forum please.
BT's press office sent me this note:
How much free storage you get is dependent on what package you choose; Unlimited Broadband Extra and Unlimited BT Infinity 2 customers get 50GB.
StorageTek not Sun
Sent to me so I'm passing it on:
Randall Chalfant was at StorageTek when acquired by Sun. Looking at his dates of service, it would be more accurate to identify him as ST rather than Sun.
Seems unlikely he was deeply connected to any of the ZFS work which was all done in CA at the time.
Re: Is this article just the result of being sloppy or is it a blatant shill?
No shilling here. My info is that WD Reds have 5400-5900 rpm through Intellipower and so aren't true 5900rpm drives. Are you saying yours is 5,900rpm constantly? In which case my files are wrong :-)
Re: WD 5900rpm
I have WD's 4TB Red drive going at 5400-5900rpm with IntelliPower, so I didn't class it as a true 5900rpm drive.
What's an enterprise SSD sale?
Does building SSDs for your own use count as an enterprise SSD sale? Gartner says it does and includes Google as an enterprise SSD supplier with a revenue share because it builds SSDs for its own use. Is that right?
Hook, line and sinker
Sent to me anonymously:-
Congratulations on taking the press release bait hook, line and sinker. This really is a complete troll release.
Do I have to spell it out to you? They're claiming they can be "maintaining up to 100 per cent bandwidth utilisation", which means simply stuffing the line as full as it will go. This is emphatically not a design goal of rsync; quite the opposite, really.
If this is their design goal then they ought to try and compare with bittorrent, a protocol designed to exploit weaknesses in TCP's ideas of "fairness" (confused? ask Andrew for the Briscoe paper redux) to stuff the line as full as it will go.
By contrast, rsync tries to reduce the need to transfer anything to the absolute minimum, preferring to leave the line idle while it works out what can be safely skipped. So this comparison is a little dishonest, to be quite unduly charitable about it.
Yet you bought it and wrote a nice little piece regurgitating their lies. How nice.
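For what it's worth, the delta-transfer idea the note credits to rsync can be sketched with a toy block-matching scheme. This illustrates the principle only; real rsync uses a rolling weak checksum plus a strong hash, and `weak_checksum` here is my own simplified stand-in:

```python
# Toy sketch of rsync-style delta transfer: checksum the fixed-size blocks
# the receiver already holds, then skip re-sending any block of the new
# file whose checksum matches - leaving the line idle for skipped blocks.
def weak_checksum(block, M=1 << 16):
    a = sum(block) % M
    b = sum((len(block) - i) * byte for i, byte in enumerate(block)) % M
    return (b << 16) | a

old = b"the quick brown fox jumps over the lazy dog"   # receiver's copy
new = b"the quick brown fox leaps over the lazy dog"   # sender's copy
size = 8
have = {weak_checksum(old[i:i + size]) for i in range(0, len(old) - size + 1, size)}
blocks = [new[i:i + size] for i in range(0, len(new) - size + 1, size)]
matches = sum(weak_checksum(b) in have for b in blocks)
print(f"{matches} of {len(blocks)} blocks need not be re-sent")
```

Only the one block containing the "jumps" to "leaps" change would cross the wire.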
Amount of DRAM cache
1GB of DDR3 DRAM cache. That was left out - oops!
Are SPC Benchmarks useful?
A commentard ripped into me over the HP SPC-1 benchmark win story - No Dell, no EMC? Well, HP's storage champ then.
Here's what the comment said (between arrow lines):-
You're kidding me right?
Chris, have you even read the specs of the arrays you're drawing comparisons against? Do you understand how the SPC benchmarks work and the impact particular type of resources have on the different workload profiles used?
Storage performance and scalability is largely dependent on a number of different resource types and the ability to distribute different types of workloads across available resources (utilization rate).
From a workload distribution perspective, in simplistic terms we can think about frontend and backend. For the frontend, the 3PAR 7400 is using more FC ports than both the V7000 and the HUS150. We could get extremely technical about the impact more ports yields from a buffering and queuing perspective, but I think it's pretty clear that more ports ultimately means that a host can process more I/Os in parallel.
From a backend workload perspective it's more about the disks to offload the workload to. In some workload profiles it's almost always about the disks, with other parts of the I/O chain in between almost running at line speed. The 7400 again has more disks (SSDs) than both the V7000 and the HUS150; in both cases we are talking double digits more, and at 2,500 IOPS a pop that's 25,000 raw IOPS we can handle without cache!
When it comes to dealing with the different workload profiles, cache (for all but random read workloads) is king; a well designed array's performance scaling profile is based largely on this single resource type, all other things being equal. The 7400 has more than four times the raw cache of the V7000 and double the cache of the HUS150. When it comes to random reads, SSD is our saviour and, as I said above, we have more SSD spindles in the 7400.
SPC comparisons that are designed in this way prove nothing. Please stop harping that X is better than Y because you've looked at some summaries at the SPC website; it's false (performance) economics and just wrong.
So, says the commentard, the SPC-1 benchmark result summaries are not a valid way of comparing different vendors' systems.
The SPC web site home page includes this text; "SPC-1 benchmark results provide a source of comparative storage performance information that is objective, relevant, and verifiable. That information will provide value throughout the storage product lifecycle, which includes development of product requirements, product implementation, performance tuning, capacity planning, market positioning, and purchase evaluations. The SPC-1 Benchmark is designed to be vendor/platform independent and are applicable across a broad range of storage configuration and topologies."
The vendors agreed the SPC-1 benchmark and submit systems to it and publish results. So it's valid, I strongly submit, for us hacks to write about them. In other words I disagree with the comment above.
Is that a reasonable line to take?
Re: Actually, yes...
This is all getting interesting and I think I'm getting polite wrist slaps from people who say the demand for in-array processing is more complex and more widespread than my simple little article says.
Yes, okay, it is. And, yes, okay, the idea of a storage array with spare server engines running VMs looks good and sensible ... but will Cisco, Dell, HP and IBM, who own most of the server market between them, do this?
It's okay for EMC and DDN and Violin Memory to push the in-array processing idea because it marks out their arrays as having more value, but general adoption needs a big system vendor to jump on board and, so far, none of them has.
In-array compute ....
.... is not going to fly. If networked storage takes too long to access then bring it closer to the servers. Don't start putting servers in storage arrays; that's like putting a car's engine inside its petrol tank. If fuel takes too long to get to the engine bring the petrol tank closer to the engine. Simple.
In-array compute is a dead duck, a dodo, an ostrich, a bird that isn't going to fly except in small market niches where buyers are array-centric in their approach rather than server-centric. To mix the metaphors, in-storage processing is for the birds.
Is this view pot-smoking or realistic?
NetApp layoffs insane
NetApp is no longer the best place to work. Its latest layoffs include a number of long-term senior cloud strategists and enterprise architects who were hired to help provide the company with direction and focus on cloud.
One can only assume that the entrenched incumbent sales leaders have stuck their heads in the sand, preferring to continue to sell in outdated modes, via storage-based solution sales, rather than helping their partners and their channel design and build truly competitive cloud services. Their ignorance of service strategy, design and the basic tenets of ITIL and ITSM will serve their competition well, and white box storage behind well designed cloud services based on open source will continue to eat their lunch.
Sent anonymously to me.
HDD vendor way back
Your piece on the death of traditional HDD vendors is interesting, but I wonder if there is a way back for the likes of Seagate and WD? It seems to me that today we have something of a cartel in the supply of flash memory chips, as a result of which we see artificially inflated prices for emerging SSD drives (£350-£450 for a Samsung 840 Pro 512GB, for example, which is many multiples of the price of a top-of-the-range 2 or 3TB HDD).
This is because the emergent dominant players in this market are currently milking it for all they are worth. It's often the way, and if a market stagnates with 2 or 3 major players, we see little or no commercial pressure to innovate or reduce prices (think Microsoft in desktop, or even say Canon/Nikon in cameras, Nintendo/Sony/Microsoft in games consoles, etc,etc ).
So a possible route back for Seagate and/or WD would be to purchase a flash memory fabrication plant and then run out a line of decent quality drives that seriously undercut the existing cartel. Give the market, say, 1TB flash drives for an initial £500 [aiming to drop to £400 when the inevitable fightback begins] and there's the chance of winning back market share.
If they wanted to do this, Seagate/WD could fight back with, say, last year's SSD technology (a die size larger, slightly lower clock rates), which may enable them to buy up fabrication gear on the cheap. They won't be able to compete with Samsung/Kingston/etc in performance terms, but as long as they offered a drive that was markedly faster than the best HDD and seriously undercut the cartel, they may succeed.
The one thing SSD doesn't have today is a high-capacity drive. Seagate/WD could do it. Question is, after the relatively recent floods in Thailand, and the time, trouble and effort they undoubtedly put into recovering their HDD fabrication capacity, have they got enough left in the tank to scale up a serious SSD challenge? I suspect we'd learn that the flooding is a much larger factor here than merely being caught asleep at the switch...
[Sent to me]
Are Disk Drive Vendors screwed (by flash)?
Read Mike Shapiro's view that HDD vendors are screwed by flash (included below). What do you think?
- Is Seagate clawing its way back?
- Is Toshiba sitting pretty with NAND foundry and HDD manufacturing operations?
- Can WD claw its way back?
- Are hybrid SSHDs enough for the HDD vendors enabling them to effectively ignore SSDs?
------------------------Mike Shapiro interview-------------------
The disk drive vendors have been utterly screwed by mismanaging the disruptive force of solid state drives: that's the view of Mike Shapiro - lately a storage bigshot at Sun and Oracle.
Mike Shapiro was a Sun Microsystems Distinguished Engineer, CTO, and VP of Storage for Sun and then Oracle. He is most recently a founder at a stealthy storage startup about which we know nothing. We spoke to Mike about his views on flash drives and the HDD suppliers.
El Reg: Did the disk drive companies enter the solid state market in a timely manner?
Mike Shapiro: Despite the emergence of STEC as the first enterprise SSD vendor in 2006 (when we first talked to them at Sun) it is remarkable to me that Seagate, HGST, and WD all failed to enter the SSD marketplace until 2011 (5 years later, by which time it was essentially too late). And when they did, they did the obvious dumb thing of pricing the SSDs above the 10k and 15k RPM drive lines - i.e. they made the classic error of thinking that they could solve a disruption by just organising their own revenue streams regardless of other market forces.
El Reg: So what happened?
[Picture of Mike Shapiro, Bryan Cantrill and Adam Leventhal]
Mike Shapiro: As a result of this massive screwup, a raft of solid state drive companies entered the market, which in turn I believe spurred the NAND vendors to say, 'gee, if that's all it takes to mark up our NAND, surely we can do a better job of that'. Furthermore, the NAND companies are entirely staffed by ex-DRAM people whose major lesson in life was that DRAM was killed from a margin point of view by letting someone else (the x86 CPU) commoditise the interface to it. So the idea of keeping the deep NAND interfaces secret, building their own controller or firmware or drive, while doling these secrets out very cautiously, made sense.
El Reg: Can you split SSD market development into phases?
Mike Shapiro: So we really see three phases of the SSD market using the rear-view mirror -
(1) STEC creates the market (first customers EMC and Sun)
(2) Startups enter the market, partnered with the NAND suppliers
(3) NAND suppliers become SSD suppliers, kill off startups (and STEC)
El Reg: How did the HDD suppliers mis-read things?
Mike Shapiro: How different it might have been if they (the HDD suppliers) had acted in stage (2). Furthermore, the disk vendors assumed that all of their volume (i.e. small servers and laptops and desktops) would come from the 2.5in disk drive form factor for client [products], and that it would become the dominant form factor around 2009-10. But instead, thanks to cloud and tablets and iPhones, that entire transition has in fact been killed - the server market is in decline. All of the mobile computing and devices use 100 per cent flash, and so in fact the remaining use case for disks will be 3.5in (bulk storage).
El Reg: Bulk storage disk drive sales prospects look okay though?
Mike Shapiro: [Yes] thanks to the hyperscale customers like Google and Facebook and Apple, the market for bulk disk is now a direct sale (i.e. literally directly to the end customer) rather than an indirect one (through HP, Dell, IBM, Oracle etc). So we see an ability for the HDD guys to (temporarily) grow margin as they adapt to this opportunity, yet over the long term the opportunity to keep their position in the volume client storage device business has been entirely squandered.
El Reg supposes that Shapiro's criticisms are directed mostly at Seagate and Western Digital. The third HDD supplier is Toshiba, and it operates flash foundries in partnership with SanDisk. Seagate has just widened its flash storage offering with three SSDs and a PCIe card powered by Virident, and Seagate is now almost 10 per cent owned by flash foundry-operating Samsung. WD has an investment in all-flash array startup Skyera and is expected to widen its SSD range soon.
Can Seagate and WD catch up? Shapiro would think not. They are utterly screwed. ®
Why have CommVault shares outperformed EMC's?
Although CommVault and EMC have both exhibited steadily climbing annual revenues and profits, CommVault shares have outperformed EMC's. Why? Are investors and the stock market commentators, analysts and influencers crazy?
A coming story about CommVault's results has charts comparing the two companies' annual results and their share price movements. How is it that their share prices have moved so differently?
From the paper's author, Mario Blaum
In your note you write: "The paper, by IBM Almaden researcher Mario Blaum, professes to solve a problem where RAID 5 is insufficient to recover data when two disks fail."
This is an incorrect statement. I never say that in the article. RAID 5 cannot recover data when two disks fail. For that you need RAID 6. The problem addressed in the paper is one disk failure and up to two "silent" failures in sectors. That is, when a disk fails and you start reconstructing using RAID 5, you may find out that you cannot read a sector, and then you have data loss.
In order to handle this situation, people use RAID 6. But this is a very expensive solution, since you need to dedicate a whole second disk to parity. For that reason, we developed the concept of Partial MDS (PMDS) and Sector-Disk (SD) codes, which have redundancy slightly higher than RAID 5 but can handle the situation described.
Let me point out that Microsoft researchers in the Azure system came up independently with a similar solution, though the application was different (our main motivation was arrays of flash memory). Both the IBM and the Microsoft solutions involve computer searches. A theoretical solution was an open problem, which I provide in the paper you mention. The solution involves mathematical concepts like finite fields (to which you refer ironically as a mathematical sideline with no real-world applicability). I will make no apologies for the math and you are certainly free to believe that this is just a mathematical curiosity. However, we recently presented our results at FAST13 (Plank, Blaum and Hafner) and there was great interest. Jim Plank and I are preparing an expanded version at the request of ACM. Best regards.
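To make the failure mode Blaum describes concrete, here is a toy XOR-parity sketch of my own (nothing to do with his PMDS/SD construction, which needs finite-field maths) showing why plain RAID-5 loses data when a sector error surfaces mid-rebuild:

```python
# RAID-5 keeps one XOR parity strip per stripe, so any single lost strip
# can be rebuilt from the survivors.
from functools import reduce

data = [b"AAAA", b"BBBB", b"CCCC"]               # three data strips
parity = bytes(reduce(lambda x, y: x ^ y, t) for t in zip(*data))

# Disk 0 fails: rebuild its strip from the surviving strips plus parity.
rebuilt = bytes(reduce(lambda x, y: x ^ y, t) for t in zip(data[1], data[2], parity))
assert rebuilt == data[0]                        # single failure: recovered

# But if disk 0 has failed AND a sector of disk 2 turns out to be
# unreadable, the stripe has two unknowns and only one parity equation:
# that sector's data is gone. This is exactly the case RAID-6, or the
# cheaper SD/PMDS codes, are built to handle.
```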
Nail. Head. On the. Hit.
I couldn't understand the paper as I don't have 90 per cent of the maths knowledge to do so - cerebral insufficiency. Mario couldn't explain it to me so I could understand it because of my limitations - so that would be no use in trying to get the paper's contents described on the Reg'.
This way is much better - and more fun.
Posted for Grant from US Army
Very simple answer concerning the RAID 5 paper -
The math is just to validate the findings and is immaterial to his basic premise, which is that by adding additional parity bits to a RAID 5 array, the fault tolerance of RAID 5 exceeds RAID 6, and at a much reduced cost.
Chris comment: Wow; I wish the abstract had said that!
Comment from Forum member who has forgotten his password
Basically, the paper gives the theoretical proof and underpinnings for SD codes. SD codes are more efficient than RAID-6 in that you do not have to dedicate two disks to parity in order to tolerate the failure of one disk and one sector; i.e. RAID-6 protects you from a disk failure plus an URE (unrecoverable read error) during a RAID rebuild.
With SD codes, a disk and a sector within a RAID stripe are dedicated to parity, which is more efficient and actually maps to how devices fail; i.e. entire disks rarely fail - what is more likely is the failure of a sector within the device. The USENIX paper quoted in the comments has more information. A key quote is below:
"We name the codes “SD” for “Sector-Disk” erasure codes. They have a general design, where a system composed of n disks dedicates m disks and s sectors per stripe to coding. The remaining sectors are dedicated to data.
The codes are designed so that the simultaneous failures of any m disks and any s sectors per stripe may be tolerated without data loss"
The research paper offers the proof for why this is so.....
Chris comment - perhaps we need erasure codes to recover forgotten passwords....
Crowd-sourcing interpretation of IBM RAID 5 extension paper
The thread for comments describing and interpreting and reviewing IBM Almaden Researcher Mario Blaum's paper: "Construction of PMDS and SD Codes extending RAID 5" which can be downloaded as a pdf from here.
My insufficiency of cerebral matter prevents me so doing. Help please.
I'm taking a long view here and assuming these problems will be ironed out.
"As a matter of interest, where did you get the idea that "Every byte stored in Amazon's cloud, or Azure, or the Googleplex, or Rackspace, is a byte not stored in a private VMAX, VNX, Isilon, FAS-whatever, VSP or HUS, StoreServ, StoreVirtual, StoreWhatever, V7000, XIV, or DS-whatever."?
Because it seemed self-evident. Am I talking rubbish?
Cloud storage & legacy storage supplier vertical disintegration
This topic is for comments on the notion that public cloud storage growth will cause legacy storage product sales to collapse, with existing storage suppliers becoming cloud storage service operators, if they can, or cloud storage service component suppliers. They will have to vertically disintegrate.
Re: LTFS and ugly ducklings: LTFS pitfalls
A vendor sent me these points about LTFS:
I would like to invite you to examine a list of caveats that ALL LTFS adopters need to pay attention to before they simply abandon whatever "proprietary" software they are currently using and move all of their eggs into the LTFS basket.
In truth, the issues with tape among the general storage population were more related to capacity and performance than to any problems with vendor lock-in. When a user chose a vendor's solution, they generally standardized on that solution - whether a tape technology or a software model - so not being able to read a DLT tape in a VXA drive was not at the heart of any displeasure on the part of the user. Rather, it was that they needed a week, plus major automation or staffing investments, to create a backup to the existing tape technologies when they could accomplish the same apparent backup to a disk array in hours with no additional staff or education requirements.
The sad fact is that the tape drive vendors solved the primary issues with the advent of LTO-5 technology. With a proven throughput of 140MB/sec - 200MB/sec and capacities of 1.5TB to 2TB per tape (real numbers, not mythical marketing fluff), the capacity and performance issues became non-existent.
It was actually IBM's unexpected and undisclosed announcement at NAB in 2011 (the remaining LTO.ORG members weren't even aware it was happening at the time) that caused further fracturing in the market space, as many existing tape software vendors were improving their tape support and offering much more robust solutions thanks to the combined capacities and performance of LTO-5 technology. Now the LTO.ORG members had just placed a shot across their bows, warning that the work that so many had done for so long was no longer applicable.
There are many aspects of tape that LTFS does NOT take into account, however. No verification of data written to the tapes. No mechanism for spanning writes across multiple tape volumes. Serious recovery issues if a reset or power glitch occurred during the writing of data to an LTFS tape. No easy way to track tapes that are not currently mounted on your system.
And my favorite glitch - there's no single point of support for an LTFS user when things go wrong (and they quite often go VERY wrong). Since it's open source, it's pretty much a case of "you broke it, you get to keep all the pieces" when you need help. The response is generally "the source code is freely available..." But how many small businesses or production companies have staff who are familiar with low-level C/C++ coding at the kernel level, with a complete understanding of the low-level operation of tape devices? On the other hand, that "openness" can also result in many splinter implementations as users decide that they can do this or that better.
I've anonymised the post in case the vendor meant it for me privately - but the points are the points.
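As an aside, the LTO-5 figures quoted in that note work out to tape fill times of roughly three hours (`hours_to_fill` is my own throwaway name for the calculation):

```python
# Rough fill-time arithmetic for the quoted LTO-5 numbers: capacity in
# decimal terabytes divided by sustained throughput in MB/sec.
def hours_to_fill(capacity_tb, mb_per_sec):
    return capacity_tb * 1_000_000 / mb_per_sec / 3600

print(round(hours_to_fill(1.5, 140), 1))  # 1.5TB at 140MB/sec: ~3 hours
print(round(hours_to_fill(2.0, 200), 1))  # 2TB at 200MB/sec: ~2.8 hours
```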
LTFS and ugly ducklings
If it quacks like a duck, looks like a duck and swims like a duck then it's a duck, and not a swan. So what is LTFS?
LTFS is a way of providing file:folder type access to files on tape using drag and drop operations. You are no longer forced to use a backup application or equivalent software to move files to and from tape and so, the story goes, an obstacle to wider tape usage is removed.
My query is: how much of an obstacle is it? If I, as a user, have file:folder access both to an external disk and to a tape drive, on which device will I store my files? It will be disk, natch, because access is faster and there will be a backup, probably also on disk, in case disk numero uno goes tits up. Tape is still tape, still slow compared to disk, even inside an LTFS wrapper.
If the company I work for already has a tape system and it implements LTFS then yes, I could use the tape, but why would I want to do that? If I was forced to then, fine, reluctantly I'd use the damn thing, but I'd sneak in USB sticks to make life easier where I could.
There's a rumour that one large tape system-supplying vendor has not one LTFS-using customer in Europe. It wouldn't be surprising. For everyday access to files, putting LTFS access on tape in a disk-using world is like putting lipstick on a pig. It's still a pig.
Am I right or am I being a dickhead about this and missing a point or points?
What is the real problem here?
Sent anonymously to me:-
A Reg reader has the following comments to make on the story Rise Of The Machines: What will become of box-watchers, delivery drivers?. The request to send this message came from the IP address 220.127.116.11.
This is more or less a transitory problem. Even if this now causes 10M people extra on welfare, the problem will go away in wotsit 55-odd years. Of course, maybe there'll be no state in 55 years.
And that's assuming there really are no alternatives. With the screaming about needing more foreign workers, well, maybe this pool of labour can fill that need, who knows.
That there's little manufacturing on US soil left, I can't really be arsed to care. Partially their own fault for letting that happen. Then again, as cheap becomes popular it becomes more expensive, making manufacturing elsewhere interesting again. There's plenty of room for innovation here.
I don't think we should frame it as an insurmountable problem. There'll be change, and it'll be painful, to be sure. But with a little looking forward for opportunities instead of problems, a lot can be done.
Also, Google has already lost all credibility as "doing no evil". But I still don't think I'm going to be enamoured of the idea of holding back true technical invention for the sake of the incompetence of the representatives failing to care for their large pool of workers.
Besides, it's a wider problem, much wider. Both in the geographical sense -- Foxconn building factories run entirely by robots, nary a human in sight -- and the technology sense. You already pointed out the Luddites (and they did have a point), and we've been doing nothing but putting people out of work.
Alright, not entirely true. We've also created a lot of cubicle potato jobs, both as "knowledge worker" (arguably positive) and as what looks like machine minders but really are minded-by-machine drones, barely thought capable of clicking an icon.
And that, that is much worse, for it makes us string puppets of our own technology. The same is happening in a lot of security applications, but the redmondian desktop is perhaps the most widespread insultingly patronizing string puppeteering of human workers available today.
But anyway. We've automated the shit out of many a thing, putting ever more people out of work, and in the meantime the population has done nothing but grow.
What, now, is the real problem here?
Facebook's OCP is unrealistic - for the rest of us
Can Facebook's OCP vision succeed in turning back time and disaggregating the IT server, storage and networking industry into separate component developments linked by support of common interfaces?
Re: Math not quite right
That seems very good thinking to me. I should have thought of it myself.
Re: Software Ecosystem for Infrastructure
Naah, that seems unlikely. Mainstream customers will surely buy (pursuing low risk options) from mainstream vendors and only the biggest and most competent will "roll their own" with non-mainstream components. Just my two cents worth.
Re: Software Ecosystem for Infrastructure
I think any rise in SDDC will necessarily bring a restriction in the number of server, storage and network products supported to the subset of those available that "play nice" with VMware, Microsoft's Hyper-V and Red Hat Linux. The SDDC is a logical extension of the HW abstraction layer whose time "may" have come if top-level data centre IT component suppliers support it. But I feel these suppliers may well want to have their proprietary software layered on top of the data centre virtualisation software. It's what they've done with open abstraction layers in the past. Can they do this with VMware data centre virtualisation? We'll see.
Re: EVA end-of-life and no 3PAR replacement products???
Well, that was then and HP has fixed its mid-range hole with the new StoreServ products and EVA-->StoreServ migration facilities. Dell now has a harder job to do, competing with HP.
The future of storage
I've been talking to Jean-Luc Chatelaine, EVP strategy & technology for DataDirect Networks, and I'd like to check his view of things with you, since it's surprising.
He thinks that, starting in 2014 and gathering pace in 2016, we're going to see two tiers of storage in big data/HPC-class systems. There will be storage-class memory built from NVRAM, post-NAND stuff, in large amounts per server to hold the primary, in-use data, complemented by massive disk data tubs with form factors of up to 8.5 inches and spinning relatively slowly, at 4,200rpm. They will render tape operationally irrelevant, he says, because they could hold up to 64TB of data with a 10 msec access latency and 100MB/sec bandwidth.
He claims contacts of his in the HDD industry are thinking of such things and that it would be a disk industry attack on tape.
So .... what do you think of JLC's ideas?
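A quick sanity check on those figures. The capacity and bandwidth numbers are JLC's from the chat above; the streaming-time arithmetic is my own back-of-envelope, not his:

```python
# Back-of-envelope on the drive JLC describes: 64TB capacity and
# 100MB/sec bandwidth. How long would streaming the whole drive take?
capacity_bytes = 64 * 10**12      # 64 TB
bandwidth_bytes_s = 100 * 10**6   # 100 MB/sec

seconds = capacity_bytes / bandwidth_bytes_s
days = seconds / 86400
print(round(days, 1))  # 7.4 -- days to read or write the full drive
```

Roughly a week to drain one tub at full tilt, which tells you these are archive-class beasts, not primary storage, much as JLC positions them.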
Sent to me anonymously and posted here as the quickest way to get the comment known:-
In your story, "Are you ready for the 40-zettabyte year?", you write, "The amount for 2020 would be 43.2ZB by interpolation." The increase is doubling every 2 years so you cannot use linear interpolation.
Halfway between 2019 and 2021 the data will be closer to the 2019 amount than the 2021 amount. So 40 zettabytes looks reasonable for a continued doubling every two years.
Is he right? Am I innumerate?
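The reader's arithmetic checks out. The story's linear estimate of 43.2ZB for 2020 implies endpoints of roughly 28.8ZB (2019) and 57.6ZB (2021), since their average is 43.2; those implied endpoint figures are my assumption, not from the story. With doubling every two years the right midpoint is geometric, not linear:

```python
import math

# Implied endpoints (assumed): a linear midpoint of 43.2 ZB means
# (x + 2x) / 2 = 43.2, so x = 28.8 ZB in 2019 and 57.6 ZB in 2021.
zb_2019 = 28.8
zb_2021 = 2 * zb_2019  # doubling every two years

linear = (zb_2019 + zb_2021) / 2           # straight-line interpolation
geometric = math.sqrt(zb_2019 * zb_2021)   # correct for steady doubling

print(round(linear, 1))     # 43.2 -- the story's figure
print(round(geometric, 1))  # 40.7 -- closer to the 2019 amount
```

The geometric midpoint lands at about 40.7ZB, closer to the 2019 figure, which squares with the reader's 40-zettabyte estimate.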
You make good points and your comment was an interesting and enjoyable read. Ombudsman we are not and wouldn't pretend to be. But we do like reporting interesting things and this surely is interesting.
Did you know that customers worried about latency between their NetApp DC and AWS can deploy Riverbed Steelhead WAN optimisation? All they need is an appliance or software version of Steelhead in the DC and a Cloud Steelhead (CSH) at the AWS side and the NetApp replication is deduped along with any other data directly between DC and AWS. No need at all for the Colo site that NetApp specify and the CSH is set up as easily as buying an instance from Amazon in the first place via an online portal.
Jim Morris Group Business Development Manager
Zycko Benelux BV Smart Business centre, Daalwijkdreef 47, 1103 AD, Amsterdam The Netherlands
Sent to me and posted anonymously
What about what IBM does with their object based file system on IBM i? It seems to work pretty effectively at putting the data that will benefit the system most on flash and leaving the rest on the spinners.
Do their patents ( or the radically different nature of the storage management ) mean that no one else can do this? I seem to recall the BeOS file system having some of the same features, too.
Impractical SPEC sfs2008 NFS benchmark win by Huawei
Huawei soared to the top of the SPEC sfs2008 NFS benchmark ranks with a 3 million+ IOPS score. Previous record-holder Avere didn't think much of it and here's what a spokesperson said:-
Unlike Avere's SPEC posting that used a single namespace (as well as the top results from both NetApp and EMC Isilon), Huawei used 24 filesystems and thus requires 24 mount points for each client. This is completely impractical from a management standpoint since the client has to somehow know which file system to look into for their data. It is like having 24 different internets and having to know that espn.com is on internet #17.
And as the data grows, you need to add new file systems (e.g. 25, 26, etc.) and move data between file systems as they fill up. This causes downtime as data is moved between the file systems and all the clients and application servers need to be constantly updated with the new location of the data. There is no abstraction layer separating the physical from the logical. So, while the solution they provided has high performance, it is just not practical in a real-world scenario.
There are many other gotchas in their test build that make it more of a science project than a real-world system, such as the requirement for two networks, both 10 GbE and Fibre Channel, limited scaling with 16 NAS engines max and more.
Seems a realistic point to me.
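The 24-mount-point complaint can be sketched concretely. The export and mount paths below are hypothetical, purely to show the client-side difference the spokesperson is on about:

```python
# With 24 separate file systems, every client needs 24 mounts, and the
# mapping from data set to file system is the administrator's problem.
multi_fs = [f"mount -t nfs filer:/fs{i} /mnt/fs{i}" for i in range(1, 25)]

# A single-namespace cluster needs one mount per client, full stop.
single_ns = ["mount -t nfs cluster:/ /mnt/data"]

print(len(multi_fs), len(single_ns))  # 24 1
```

Every extra file system added as data grows (25, 26, ...) means another mount rolled out to every client, which is the management cost Avere is pointing at.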
Pedantry in a humorous vein
Sent to me and I couldn't resist posting it here:-
Dear Mr. Mellor,
I realize it is a bit of a quibble, but shouldn't that be 'about 125 miles', 'a bit under 125 miles', or (most accurately), 'a bit over 124 miles'?
I will restrain my pedantic metric tendencies, and not grumble about the different sizes of mile...
Anon - name withheld by me.
PMT is now an acronym for Pedantic Metric Tendencies :-)
Options for Quantum - exiting tape?
Activist investor Starboard Value has popped up out of the blue, claiming it owns 15.6 per cent of Quantum's shares, wanting a seat on the board, and proffering advice on how to raise Quantum's share price that is being taken seriously. What should Quantum do, and what would Starboard Value's advice be?
One suggestion is that Quantum could sell off its Scalar tape library business and exit the tape business altogether. This could enable it to pay off a lot of debt and stop making, presumed, losses from its tape business.
Is this a good thing to do?
Who might buy the tape business? HP, IBM, Oracle, Qualstar, SpectraLogic or Tandberg Data?
What would a Quantum tape exit do to the Linear Tape Open consortium?
Have at it :-)
HP Memristor prospects
I had an e-mail from Blaise Mouttet re the notion that the memristor could be an important part of any HP recovery. Here it is:-
In your story last month "Flashboys: HEELLLP, we're trapped in a process size shrink crunch" [http://www.theregister.co.uk/2012/10/12/nand_shrink_trap/] Bennet commented "If it works who cares?" in spite of the fact that the memristor models appear wrong. Imagine if Violin Memory tried to manufacture flash memory arrays using incorrect models of transistors. Somehow I don't think that would work out too well for product development. In engineering, good models are required to manufacture reliable products. The fact that Bennet does not understand this point is illustrative of either his incompetence or his inability to grasp what the "memristor" argument is really about.
Regarding the comment by Gartner's [Valdis] Filks that the memristor could represent the saving of HP, this is probably based on a misunderstanding of HP's patent position. Samsung owns the patent for the TiO2 device HP originally claimed to be a memristor (see US Patent 7417271 - http://www.google.com/patents/US7417271). HP does not have a basic patent for metal oxide ReRAM and most of the metal oxide ReRAM patents are held by other companies (Unity Semiconductor, Panasonic, Sharp, Samsung). Meanwhile Stan Williams' basic memristor patent (application 11/542,986 filed in 2006 - http://www.google.com/patents/US20080090337?dq=electrically+actuated+switch+williams&hl=en&sa=X&ei=FOqTULHMDNDU0gG804CwDw&ved=0CDgQ6AEwAg) has been repeatedly rejected by the US patent office, and Hynix's relevant patents are almost completely devoted to phase change memory based on chalcogenide materials rather than the metal oxide ReRAM which HP claims to be a memristor. How exactly does the memristor represent the "saving of HP" if they don't have any fundamental patents and their alleged manufacturing partner is devoted to a different technology?
Is HP delusional over its memristor technology and IP?
Oracle RAC on NetApp FlexPods
Here's a mail I received about this story:- "NetApp and Cisco waggle shrunken ExpressPod at Hitachi and friends"
[Re] your article about FlexPod. Although I find Oracle's support of RAC on VM to be a massive step forward I also see it as slightly underhand. They still do not support processor pinning in any virtual environment other than their own, so in my view the support of RAC is negated by still having to licence every core in your VM farm or buy a site licence. Where I work we have been forced down the OVM\OVS route for this very reason. We do however run a mixed environment and, given the current state of OVM\OVS, I would much rather stick with physical tin for Oracle and a VMware farm for the rest. Unfortunately, due to our requirements to stand systems up rapidly, etc., we have had to go down the OVM\OVS route.
What are your thoughts?
Indeed, what are your thoughts? Physical or VMware virtual tin for Oracle?
- Xmas Round-up Ten top tech toys to interface with a techie’s Christmas stocking
- It's true, the START MENU is coming BACK to Windows 8, hiss sources
- Google embiggens its fat vid pipe Chromecast with TEN new supported apps
- Pic NASA Mars tank Curiosity rolls on old WET PATCH, sighs, sniffs for life signs
- Microsoft: Don't listen to 4chan ... especially the bit about bricking Xbox Ones