Posts by Chris Mellor 1

357 publicly visible posts • joined 10 Jun 2009


It's big, it's expensive and it's an audiophile's dream: The Sonos Sub

Chris Mellor 1

SUBstance abuser

Loved the comments, especially audiophile = hifi enthusiast with credit card. The whole set of comments has opened doors into music playback fields I didn't know existed. Sort your room out before sorting the audio out - that notion is, well, interesting. Music playing in my house is something we do in a room alongside other things - eating, reading, TV, meeting friends, chilling out and so on. It isn't just a listening room.

But I have got to listen to a high-end audio system to see/hear the difference. Are there demo centres anywhere near Croydon, UK I could visit?

Chris.

Chris Mellor 1

Well no. But a good wind-up comment.

Chris Mellor 1

Ah, never realised the matt black version was a time-limited offer. Couldn't see that it had ever happened anyway.

Chris.

Chris Mellor 1

Re: Virtual purchase

Damn right!!!!

Chris Mellor 1

Re: Well..

Oh. Bach to my sources - sorry.

Chris Mellor 1

Re: Standard

Pair of Play:3s. Before that a cheap Sharp surround system. Progression is a wonderful thing.

Chris.

Chris Mellor 1

Re: Virtual purchase

YES!!!!

Chris.

Should Seagate offload the Xyratex disk array business?

Chris Mellor 1

Should Seagate offload the Xyratex disk array business?

Seagate is buying Xyratex and gaining:-

1. HDD manufacturing test equipment business,

2. Storage array disk enclosure business with customers Dell/EqualLogic, HP/3PAR and IBM XIV & Storwize,

3. ClusterStor HPC storage array business.

Seagate spun out its previous disk array business as Xiotech in Nov 2002. It sold its disk-enclosure-focussed Advanced Storage Architectures group to Xiotech in Nov 2007. Now it's back in the storage array OEM and end-user business. Should it be? Should it just offload the disk enclosure and ClusterStor businesses to X-IO (the renamed Xiotech) and have that form a better growth path for its revenues?

Evan Powell exits Nexenta as Wyse guy strides in

Chris Mellor 1

Mark Lockareff says

From Mark Lockareff:

Just read your most recent article on Nexenta. As you know, Evan did a great job building the company in the early days but needed to pass the baton onto a leader that could take the company to the next level. I joined Nexenta as our interim CEO in February to help the company find our next permanent CEO. You state that I "departed in pretty short order" but I was actually the interim CEO for 7 months...longer than I expected to find the permanent CEO (I thought my role would last about 3-4 months). Turns out that great CEO candidates in the next-generation storage space are few and far between. We actually got a lot done in those 7 months wrt building the foundation for growth. Now that Tarkan is on board, it will be fun to watch how he continues the story.

Just wanted to make sure you got the straight scoop.

Best,

Mark Lockareff

Death of the business Desktop

Chris Mellor 1

Death of the business Desktop

The business PC desktop is facing death by a thousand VDI cuts augmented by a BYOD bashing.


Business desktop death pointers:

I know this is repetitious; that's the point. These aren't just a few pointers; this is a flood, a veritable tidal wave of systems all focussed on replacing pricey and complex-to-manage business desktops with centralised virtual desktop systems.

Some other suppliers with VDI capabilities: Pure Storage, Tegile, Nimble Storage, and Fusion-io. A combination of flash storage and deduplication is making cost-efficient, capacity-efficient VDI set-ups possible, with the responsiveness of actual desktops, or better.

Set this VDI blitzkrieg to one side and consider BYOD - Bring Your Own Device, in which users bring their own notebook computers to the office. This is the guerrilla war assaulting the business PC, with VDI being the full-on frontal assault.

The net result could be a multi-year reduction in business PC use with, in some businesses, desktop PCs literally disappearing.

We haven't any numbers, beyond the general decline in annual PC shipments. Our sense of it is that the business desktop is an endangered IT species facing year-on-year shipment declines, wilting under the impact of artillery barrages from the massed ranks of VDI howitzers and BYOD sharpshooters.

In that case there will be a consequent decline in business PC component shipments: hard disk drives, power supplies, DRAM, motherboards and CPUs.

If 200,000 business desktops go away each year for five years that’s a million fewer hard drives shipped. And it could be worse; 500,000 fewer desktops each year means 2.5 million fewer drives over five years.

I think we’re at a VDI/BYOD tipping point and a storm surge of virtual desktop instances is going to wash increasingly unwanted and unloved business desktops right out of the offices they’re anchored to, never to return.

Is this true? Will it happen? Am I smoking pot?

I think not, Sherlock.

Storage Memory

Chris Mellor 1

Storage Memory

The SMART flash DIMM announcement opened up a major server memory redesign period. The idea of packing NAND chips tightly together and accessing them in the same address space as main memory is highly attractive to server manufacturers looking for an edge in running applications faster - faster than PCIe flash, for example.

SanDisk has bought SMART and now has a DIMM future (sorry). My understanding is that all the major server suppliers are looking at non-volatile memory DIMMs and designing future servers with storage memory, and not just with NAND but envisaging post-NAND technologies such as Phase Change Memory (PCM), Spin Transfer Torque (STT) RAM or some flavour of Resistive RAM (ReRAM) technology.

This technology transition will make storage memory byte-addressable instead of block-addressable; the programming model would change. There would need to be a software layer, something like Memcached, to present storage memory as pseudo-RAM to applications.
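A minimal sketch of what that byte-addressable programming model could look like - my illustration only, not anything a vendor has shipped, and /dev/pmem0 is a hypothetical device node for the NVDIMM region: map the storage memory into the process address space and use ordinary loads and stores instead of block I/O calls.

```python
# Sketch only: byte-addressable storage memory reached through mmap.
# /dev/pmem0 is a hypothetical device node exposing the NVDIMM region.
import mmap
import os

fd = os.open("/dev/pmem0", os.O_RDWR)
region = mmap.mmap(fd, 4096)        # map one page of storage memory

region[0:5] = b"hello"              # an ordinary byte-level store, no block I/O
region.flush()                      # push the dirty bytes back to the media

print(region[0:5])                  # read back with an ordinary load
region.close()
os.close(fd)
```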

We could think of x86 motherboards populated with storage memory DIMMs.

Cisco's UCS servers are known for having large amounts of RAM. Building on its Whiptail all-flash array acquisition, it would not be surprising if Cisco were to announce storage memory-using servers in 2014. We're surely going to see Whiptail arrays using UCS servers instead of the Supermicro mills they currently employ.

Dell, IBM, and HP server engineers and designers must be actively looking into the same storage memory technology.

And it’s not just server manufacturers. Storage suppliers with an interest in PCIe flash are also looking at this topic. For example, I’m convinced that WD with its Virident PCIe flash acquisition is looking at the field, as well as Fusion-io. There is a go-to-market issue for the non-server suppliers, as in, who do they sell to?

Do they pursue OEM deals with the server suppliers, or retrofit deals with independent system vendors?

Moving on, in some scenarios a bunch of clustered storage memory DIMM servers could avoid the need for an external flash array and talk to external disk drive arrays for bulk capacity.

I’m seeing storage memory DIMMs as predominantly a server supplier play, and one that limits the applicability of all-flash arrays. Am I smoking pot here? Have my hack’s table napkin-class ideas gone way past reality? Tell me what you think is real here - and if reality bites my ass then I’ve learnt something, which will be good.

Re-purposing old arrays

Chris Mellor 1

Re-purposing old arrays

A German IBM customer, the Ernst Strüngmann Institute (ESI) for Neuroscience in Frankfurt, has dumped the EMC/Isilon O/S from three 36NL nodes and replaced it with SUSE Linux, with IBM's GPFS as the filesystem.

Each node has 36 internal disk drives in a RAID-6 configuration. The InfiniBand adapters involved work with RDMA enabled for native GPFS - version 3.5.0.11 to be precise.

In effect, old - 2011-era, so not that old - Isilon hardware is being re-used in a three-node cluster to function as a 55TB filestore using IBM software. Cool.

Are there other examples of storage array re-purposing that beat this in coolness-factor terms?

3D Read/write heads

Chris Mellor 1

3D Read/write heads

Would it be theoretically possible to 3D print a disk drive read/write head?

I think you'd need a 3D printer that could print small numbers of molecules ...

Answers on a postcard ..... to this forum please.

Chris.

BT doles out measly 2GB to customers in Dropbox-alike BT Cloud

Chris Mellor 1

BT's press office sent me this note:

How much free storage you get depends on what package you choose; Unlimited Broadband Extra and Unlimited BT Infinity 2 customers get 50GB:

http://www.productsandservices.bt.com/products/broadband/online-storage

http://www.btplc.com/news/Articles/ShowArticle.cfm?ArticleID=87EF70E6-C043-487D-9D0A-C17E79BA559E

Chris.

Ex-Sun Micro CTO reveals Greenbytes 'world-beating' dedupe

Chris Mellor 1

StorageTek not Sun

Sent to me so I'm passing it on:

Randall Chalfant was at StorageTek when it was acquired by Sun. Looking at his dates of service, it would be more accurate to identify him as StorageTek rather than Sun.

Seems unlikely he was deeply connected to any of the ZFS work which was all done in CA at the time.

Seagate drops new summer spinners, bares 'quiet', 'fast' models

Chris Mellor 1

Re: Is this article just the result of being sloppy or is it a blatant shill?

No shilling here. My info is that WD Reds run at 5,400-5,900rpm through IntelliPower and so aren't true 5,900rpm drives. Are you saying yours is 5,900rpm constantly? In which case my files are wrong :-)

Chris Mellor 1

Re: WD 5900rpm

I have WD's 4TB Red drive going at 5,400-5,900rpm with IntelliPower, so I didn't class it as a true 5,900rpm drive.

Chris.

What's an enterprise SSD sale?

Chris Mellor 1

What's an enterprise SSD sale?

Does building SSDs for your own use count as an enterprise SSD sale? Gartner says it does and includes Google as an enterprise SSD supplier with a revenue share because it builds SSDs for its own use. Is that right?

Panasas: We'll move the earth for you SIXTEEN times faster than FTP

Chris Mellor 1

Hook, line and sinker

Sent to me anonymously:-

Congratulations on taking the press release bait hook, line and sinker. This really is a complete troll release.

Do I have to spell it out to you? They're claiming they can be "maintaining up to 100 per cent bandwidth utilisation", which means simply stuffing the line as full as it will go. This is emphatically not a design goal of rsync, quite the opposite really.

If this is their design goal then they ought to try and compare with bittorrent, a protocol designed to exploit weaknesses in TCP's ideas of "fairness" (confused? ask Andrew for the Briscoe paper redux) to stuff the line as full as it will go.

By contrast, rsync tries to reduce the need to transfer anything to the absolute minimum, preferring to leave the line idle while it works out what can be safely skipped. So this comparison is a little dishonest, to be quite unduly charitable about it.

Yet you bought it and wrote a nice little piece regurgitating their lies. How nice.

------------------------------

Chris.
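PS: for anyone wondering why rsync "prefers to leave the line idle", here's a toy sketch of the delta-transfer idea - mine, not the commenter's, and nothing like the real rsync rolling-checksum algorithm: hash fixed-size blocks of the receiver's existing copy and only ship the blocks it doesn't already hold.

```python
# Toy sketch of delta transfer (NOT the real rsync algorithm): only blocks the
# receiver doesn't already hold need to cross the wire.
import hashlib

BLOCK = 4096

def block_digests(data):
    """Digest each fixed-size block of the receiver's existing copy."""
    return {hashlib.md5(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)}

def blocks_to_send(new_data, receiver_digests):
    """Pick out the blocks whose digests the receiver has never seen."""
    return [new_data[i:i + BLOCK]
            for i in range(0, len(new_data), BLOCK)
            if hashlib.md5(new_data[i:i + BLOCK]).hexdigest() not in receiver_digests]

old = b"A" * BLOCK * 100                                       # receiver's stale copy
new = old[:BLOCK * 50] + b"B" * BLOCK * 2 + old[BLOCK * 52:]   # two blocks changed
print(len(blocks_to_send(new, block_digests(old))), "of 100 blocks need transferring")
```

The bandwidth-utilisation figure in the release measures the exact opposite behaviour: keeping the pipe full, whether or not the bytes needed to move at all.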

Buffalo herds DDR3 RAMs into DriveStation's spinning rust corrals

Chris Mellor 1

Amount of DRAM cache

1GB of DDR3 DRAM cache. That was left out - oops!

Chris.

Are SPC Benchmarks useful?

Chris Mellor 1

Are SPC Benchmarks useful?

A commentard ripped into me over the HP SPC-1 benchmark win story - "No Dell, no EMC? Well, HP's storage champ then".

Here's what the comment said (between arrow lines):-

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

You're kidding me right?

Chris, have you even read the specs of the arrays you're drawing comparisons against? Do you understand how the SPC benchmarks work and the impact particular types of resources have on the different workload profiles used?

Storage performance and scalability is largely dependent on a number of different resource types and the ability to distribute different types of workloads across available resources (utilization rate).

From a workload distribution perspective, in simplistic terms we can think about frontend and backend. For the frontend, the 3PAR 7400 is using more FC ports than both the V7000 and the HUS150. We could get extremely technical about the impact more ports yields from a buffering and queuing perspective, but I think it's pretty clear that more ports ultimately means that a host can process more I/Os in parallel.

From a backend workload perspective it's more about the disks to offload the workload to. In some workload profiles it's almost always about the disks, with other parts of the I/O chain in between running at close to line speed. The 7400 again has more disks (SSDs) than both the V7000 and the HUS150; in both cases we are talking double digits more, and at 2,500 IOPS a pop that's 25,000 raw IOPS we can handle without cache!

When it comes to dealing with the different workload profiles, cache (for all but random read workloads) is king; a well designed array's performance scaling profile is based largely on this single resource type, all other things being equal. The 7400 has more than four times the raw cache of the V7000 and double the cache of the HUS150. When it comes to random reads SSD is our saviour and, as I said above, we have more SSD spindles in the 7400.

SPC comparisons that are designed in this way prove nothing. Please stop harping that X is better than Y because you've looked at some summaries at the SPC website; it's false (performance) economics and just wrong.

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

So SPC-1 benchmark result summaries, according to this comment, are not a valid way of comparing different vendors' systems.

The SPC web site home page includes this text: "SPC-1 benchmark results provide a source of comparative storage performance information that is objective, relevant, and verifiable. That information will provide value throughout the storage product lifecycle, which includes development of product requirements, product implementation, performance tuning, capacity planning, market positioning, and purchase evaluations. The SPC-1 Benchmark is designed to be vendor/platform independent and are applicable across a broad range of storage configuration and topologies."

The vendors agreed the SPC-1 benchmark, submit systems to it and publish the results. So it's valid, I strongly submit, for us hacks to write about them. In other words, I disagree with the comment above.

Is that a reasonable line to take?

Chris.

In-array compute ....

Chris Mellor 1

Re: Actually, yes...

This is all getting interesting and I think I'm getting polite wrist slaps from people who say the demand for in-array processing is more complex and more widespread than my simple little article says.

Yes, okay, it is. And, yes, okay, the idea of a storage array with spare server engines running VMs looks good and sensible .... but will Cisco, Dell, HP and IBM, who own most of the server market between them, do this?

It's okay for EMC and DDN and Violin Memory to push the in-array processing idea because it marks out their arrays as having more value, but general adoption needs a big system vendor to jump on board and, so far, none of them has.

Chris.

Chris Mellor 1

In-array compute ....

.... is not going to fly. If networked storage takes too long to access then bring it closer to the servers. Don't start putting servers in storage arrays; that's like putting a car's engine inside its petrol tank. If fuel takes too long to get to the engine bring the petrol tank closer to the engine. Simple.

In-array compute is a dead duck, a dodo, an ostrich, a bird that isn't going to fly except in small market niches where buyers are array-centric in their approach rather than server-centric. To mix the metaphors, in-storage processing is for the birds.

Is this view pot-smoking or realistic?

Chris.

Analyst warns NetApp is prepping layoffs

Chris Mellor 1

Lay-offs took place

I was told by one person, anonymously, that 900 people were laid off by NetApp in the US yesterday, mostly older people. It's not verified.

Chris.

Chris Mellor 1

NetApp layoffs insane

NetApp is no longer the best place to work. Their latest layoffs include a number of long term senior CLOUD strategists and Enterprise Architects who were hired to help provide the company with direction and focus on Cloud.

One can only assume that the entrenched incumbent sales leaders have stuck their heads in the sand, preferring to continue to sell in outdated modes, via storage-based solution sales, rather than helping their partners and their channel design and build truly competitive Cloud services. Their ignorance of Service Strategy, Design, and the basic tenets of ITIL and ITSM will serve their competition well, and white box storage behind well designed Cloud Services based on Open Source will continue to eat their lunch.

-------------------------

Sent anonymously to me.

Are Disk Drive Vendors screwed (by flash)?

Chris Mellor 1

HDD vendor way back

Your piece on the death of traditional HDD vendors is interesting, but I wonder if there is a way back for the likes of Seagate and WD? It seems to me that today we have something of a cartel on the supply of flash memory chips, as a result of which we see artificially inflated prices for emerging SSD drives (£350-£450 for a Samsung 840 Pro 512GB, for example, which is many multiples of the price of a top-of-the-range 2 or 3TB HDD).

This is because the emergent dominant players in this market are currently milking it for all they are worth. It's often the way, and if a market stagnates with 2 or 3 major players, we see little or no commercial pressure to innovate or reduce prices (think Microsoft in desktop, or even say Canon/Nikon in cameras, Nintendo/Sony/Microsoft in games consoles, etc,etc ).

So a possible route back for Seagate and/or WD would be to purchase a flash memory fabrication plant and then run out a line of decent quality drives that seriously undercut the existing cartel. Give the market say 1TB flash drives for an initial £500 [aiming to drop to £400 when the inevitable fightback begins] and there's the chance of winning back market share.

If they wanted to do this, Seagate/WD could fight back with say last year's SSD technology (one die size larger fabbing, slightly lower clock rates) which may enable them to buy up fabrication gear on the cheap. They won't be able to compete with Samsung/Kingston/etc in performance terms, but as long as they offered a drive that was markedly faster than the best HDD and seriously undercut the cartel, they may succeed.

The one thing SSD doesn't have today is a high-capacity drive. Seagate/WD could do it. Question is, after the relatively recent floods in Thailand, and the time, trouble and effort they undoubtedly put into recovering their HDD fabrication capacity, have they got enough left in the tank to scale up a serious SSD challenge? I suspect we'd learn that the flooding is a much larger factor here than merely being caught asleep at the switch...

[Sent to me]

Chris Mellor 1

Re: Are Disk Drive Vendors screwed (by flash)?

Me too, TeeCee. I'd have thought hybrids (SSHDs) would be being taken up enthusiastically by tablet and ultra-thin notebook makers.

Chris.

Chris Mellor 1

Are Disk Drive Vendors screwed (by flash)?

Read Mike Shapiro's view that HDD vendors are screwed by flash (included below). What do you think?

- Is Seagate clawing its way back?

- Is Toshiba sitting pretty with NAND foundry and HDD manufacturing operations?

- Can WD claw its way back?

- Are hybrid SSHDs enough for the HDD vendors enabling them to effectively ignore SSDs?

Chris

------------------------Mike Shapiro interview-------------------

The disk drive vendors have been utterly screwed by mismanaging the disruptive force of solid state drives: that's the view of Mike Shapiro - lately a storage bigshot at Sun and Oracle.

Mike Shapiro was a Sun Microsystems Distinguished Engineer, CTO, and VP of Storage for Sun and then Oracle. He is most recently a founder at a stealthy storage startup about which we know nothing. We spoke to Mike about his views on flash drives and the HDD suppliers.

El Reg: Did the disk drive companies enter the solid state market in a timely manner?

Mike Shapiro: Despite the emergence of STEC as the first enterprise SSD vendor in 2006 (when we first talked to them at Sun) it is remarkable to me that Seagate, HGST, and WD all failed to enter the SSD marketplace until 2011 (5 years later, by which time it was essentially too late). And when they did, they did the obvious dumb thing of pricing the SSDs above the 10k and 15k RPM drive lines - i.e. they made the classic error of thinking that they could solve a disruption by just organising their own revenue streams regardless of other market forces.

El Reg: So what happened?

[Picture: Mike Shapiro, Bryan Cantrill and Adam Leventhal]

Mike Shapiro: As a result of this massive screwup, a raft of solid state drive companies entered the market, which in turn I believe spurred the NAND vendors to say, 'gee, if that's all it takes to mark up our NAND, surely we can do a better job of that'. Furthermore, the NAND companies are entirely staffed by ex-DRAM people whose major lesson in life was that DRAM was killed from a margin point of view by letting someone else (the x86 CPU) commoditise the interface to it. So the idea of keeping the deep NAND interfaces secret, building their own controller or firmware or drive, while doling these secrets out very cautiously, made sense.

El Reg: Can you split SSD market development into phases?

Mike Shapiro: So we really see three phases of the SSD market using the rear-view mirror -

(1) STEC creates the market (first customers EMC and Sun)

(2) Startups enter the market, partnered with the NAND suppliers

(3) NAND suppliers become SSD suppliers, kill off startups (and STEC)

El Reg: How did the HDD suppliers mis-read things?

Mike Shapiro: How different it might have been if they (the HDD suppliers) had acted in stage (2). Furthermore, the disk vendors assumed that all of their volume (i.e. small servers and laptops and desktops) would come from the 2.5in disk drive form factor for client [products], and that it would become the dominant form factor around 2009-10. But instead, thanks to cloud and tablets and iPhones, that entire transition has in fact been killed - the server market is in decline. All of the mobile computing devices use 100 per cent flash, and so in fact the remaining use case for disks will be 3.5in (bulk storage).

El Reg: Bulk storage disk drive sales prospects look okay though?

Mike Shapiro: [Yes] thanks to the hyperscale customers like Google and Facebook and Apple, the market for bulk disk is now a direct sale (i.e. literally directly to the end customer) rather than an indirect one (through HP, Dell, IBM, Oracle etc). So we see an ability for the HDD guys to (temporarily) grow margin as they adapt to this opportunity, yet over the long term the opportunity to keep their position in the volume client storage device business has been entirely squandered.

El Reg supposes that Shapiro's criticisms are directed mostly at Seagate and Western Digital. The third HDD supplier is Toshiba and it operates flash foundries in partnership with SanDisk. Seagate has just widened its flash storage offering with three SSDs and a PCIe card powered by Virident, and Seagate is now almost 10 per cent owned by flash foundry-operating Samsung. WD has an investment in all-flash array startup Skyera and is expected to widen its SSD range soon.

Can Seagate and WD catch up? Shapiro would think not. They are utterly screwed. ®

Why have CommVault shares outperformed EMC's?

Chris Mellor 1

I reckon if CommVault's share price is realistic then EMC is dreadfully under-valued by the market. On the other hand the opposite could be true! I just can't make sense of it.

Chris.

Chris Mellor 1

Why have CommVault shares outperformed EMC's?

Although CommVault and EMC have both exhibited steadily climbing annual revenues and profits, CommVault shares have outperformed EMC's. Why? Are investors and the stock market commentators, analysts and influencers crazy?

A coming story about CommVault's results has charts that show the two companies' annual results and their share price movements compared. How is it that their share prices have moved so differently?

Crowd-sourcing interpretation of IBM RAID 5 extension paper

Chris Mellor 1

From the paper's author, Mario Blaum

Chris,

in your note you write: "The paper, by IBM Almaden researcher Mario Blaum, professes to solve a problem where RAID 5 is insufficient to recover data when two disks fail."

This is an incorrect statement. I never say that in the article. RAID 5 cannot recover data when two disks fail. For that you need RAID 6. The problem addressed in the paper is one disk failure and up to two "silent" failures in sectors. That is, when a disk fails and you start reconstructing using RAID 5, you may find out that you cannot read a sector, and then you have data loss.

In order to handle this situation, people use RAID 6. But this is a very expensive solution, since you need to dedicate a whole second disk to parity. For that reason, we developed the concept of Partial MDS (PMDS) and Sector-Disk (SD) codes, which have redundancy slightly higher than RAID 5 but can handle the situation described.

Let me point out that Microsoft researchers in the Azure system came up independently with a similar solution, though the application was different (our main motivation was arrays of flash memory). Both the IBM and the Microsoft solutions involve computer searches. A theoretical solution was an open problem, which I provide in the paper you mention. The solution involves mathematical concepts like finite fields (to which you refer ironically as a mathematical side line with no real world applicability). I will make no apologies for the math and you are certainly free to believe that this is just a mathematical curiosity. However, we recently presented our results at FAST13 (Plank, Blaum and Hafner) and there was great interest. Jim Plank and I are preparing an expanded version under request for ACM. Best regards.

-Mario Blaum
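Chris comment: to make the failure mode Mario describes concrete, here's a toy sketch - mine, not his, and nothing like the PMDS/SD code construction in the paper - of why single-parity RAID 5 cannot cope with a dead disk plus an unreadable sector on a surviving disk.

```python
# Toy sketch: a four-strip RAID 5 stripe (three data strips + one XOR parity
# strip) survives a single disk loss, but not a disk loss plus a latent
# sector error found during the rebuild.
from functools import reduce

def xor(blocks):
    """Bytewise XOR of equal-length blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor([d0, d1, d2])

# Disk 0 dies. With every other strip readable, XOR of the survivors
# reconstructs it exactly.
assert xor([d1, d2, parity]) == d0

# But if a sector of d1 also turns out to be unreadable mid-rebuild, the one
# parity equation now has two unknowns and the stripe is unrecoverable.
# RAID 6 - or the cheaper SD/PMDS codes in the paper - adds the extra
# redundancy that covers exactly this case.
readable = [d2, parity]   # d0 lost with its disk, d1 lost to a media error
print(f"{len(readable)} of 4 strips readable: one XOR equation, two unknowns")
```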

Chris Mellor 1

Posted for Grant from US Army

Very simple answer concerning the RAID 5 paper -

The math is just to validate the findings and is immaterial to his basic premise, which is that by adding additional parity bits to a RAID 5 array, the fault tolerance of RAID 5 exceeds RAID 6, and at a much reduced cost.

-------------------------

Chris comment: Wow; I wish the abstract had said that!

Chris Mellor 1

Comment from Forum member who has forgotten his password

Basically the paper gives the theoretical proof and underpinnings for SD codes. SD codes are more efficient than RAID-6 in that you do not have to dedicate two disks to parity to tolerate the failure of one disk and one sector; i.e. RAID-6 protects you from a disk failure and a URE (unrecoverable read error) during a RAID rebuild.

With SD codes, a disk and a sector within a RAID stripe are dedicated to parity, which is more efficient and actually maps to how devices fail, i.e. entire disks rarely fail; what is more likely is the failure of a sector within the device. The USENIX paper quoted in the comments has more information. A key quote is below:

"We name the codes “SD” for “Sector-Disk” erasure codes. They have a general design, where a system composed of n disks dedicates m disks and s sectors per stripe to coding. The remaining sectors are dedicated to data.

The codes are designed so that the simultaneous failures of any m disks and any s sectors per stripe may be tolerated without data loss"

The research paper offers the proof for why this is so.....

-------------

Chris comment - perhaps we need erasure codes to recover forgotten passwords....

Chris Mellor 1

Crowd-sourcing interpretation of IBM RAID 5 extension paper

The thread for comments describing, interpreting and reviewing IBM Almaden researcher Mario Blaum's paper, "Construction of PMDS and SD Codes extending RAID 5", which can be downloaded as a PDF from here.

My insufficiency of cerebral matter prevents me so doing. Help please.

Chris.

Reg boffins: Help us answer this Big Blue RAID data recovery poser

Chris Mellor 1

Re: Really?

Nail. Head. On the. Hit.

I couldn't understand the paper as I don't have 90 per cent of the maths knowledge to do so - cerebral insufficiency. Mario couldn't explain it to me so I could understand it because of my limitations - so that would be no use in trying to get the paper's contents described on the Reg'.

This way is much better - and more fun.

Chris.

Cloud storage & legacy storage supplier vertical disintegration

Chris Mellor 1

Hi SeymourHoltz,

I'm taking a long view here and assuming these problems will be ironed out.

Chris.

Chris Mellor 1

"As a matter of interest, where did you get the idea that "Every byte stored in Amazon's cloud, or Azure, or the Googleplex, or Rackspace, is a byte not stored in a private VMAX, VNX, Isilon, FAS-whatever, VSP or HUS, StoreServ, StoreVirtual, StoreWhatever, V7000, XIV, or DS-whatever."?

Because it seemed self-evident. Am I talking rubbish?

Chris.

Chris Mellor 1

Cloud storage & legacy storage supplier vertical disintegration

This topic is for comments on the notion that public cloud storage growth will cause legacy storage product sales to collapse, with existing storage suppliers becoming cloud storage service operators, if they can, or cloud storage service component suppliers. They will have to vertically disintegrate.

LTFS and ugly ducklings

Chris Mellor 1

Re: LTFS and ugly ducklings: LTFS pitfalls

A vendor sent me these points about LTFS:

I would like to invite you to examine a list of caveats that ALL LTFS adopters need to pay attention to before they simply abandon whatever "proprietary" software they are currently using and move all of their eggs into that LTFS basket.

In truth, the issues with tape and the general storage population were related more to capacity and performance than to any problems with vendor lock-in. When a user chose a vendor's solution, they generally standardized on that solution - whether a tape technology or a software model - so not being able to read a DLT tape in a VXA drive was not at the heart of any displeasure on the part of the user. Rather, it was that they needed a week and major automation or staffing investments to create a backup to the existing tape technologies when they could accomplish the same apparent backup to a disk array in hours with no additional staff or education requirements.

The sad fact is that the tape drive vendors solved the primary issues with the advent of LTO-5 technology. With a proven throughput of 140MB/sec - 200MB/sec and capacities of 1.5TB to 2TB per tape (real numbers, not mythical marketing fluff), the capacity and performance issues became non-existent.

It was actually the unexpected and undisclosed announcement by IBM at NAB in 2011 (the remaining LTO.ORG members weren't even aware it was happening at the time) that has caused further fracturing in the market space, as many existing tape software vendors were improving their tape support and offering much more robust solutions thanks to the combination of capacities and performance of the LTO-5 technology. Now the LTO.ORG members had just placed a shot across their bows, warning that the work that so many had done for so long was no longer applicable.

There are many aspects of tape that LTFS does NOT take into account, however. No verification of data written to the tapes. No mechanism for spanning writes across multiple tape volumes. Serious recovery issues if a reset or power glitch occurred during the writing of data to an LTFS tape. No easy way to track tapes that are not currently mounted on your system.

And my favorite glitch - there's no single point of support for an LTFS user when things go wrong (and they quite often go VERY wrong). Since it's open source, it's pretty much a case of "you broke it, you get to keep all the pieces" when you need help. The response is generally "the source code is freely available..." But how many small businesses or production companies have staff who are familiar with low-level C/C++ coding at the kernel level, with a complete understanding of the low-level operation of tape devices? On the other hand, that "openness" can also result in many splinter implementations as users decide that they can do this or that better.

--------------------------

I've anonymised the post in case the vendor meant it for me privately - but the points are the points.

Chris.

Chris Mellor 1

LTFS and ugly ducklings

If it quacks like a duck, looks like a duck and swims like a duck then it's a duck, and not a swan. So what is LTFS?

LTFS is a way of providing file:folder type access to files on tape using drag and drop operations. You are no longer forced to use a backup application or equivalent software to move files to and from tape and so, the story goes, an obstacle to wider tape usage is removed.

My query is: how much of an obstacle is it? If I, as a user, have file:folder access both to an external disk and to a tape drive, on which device will I store my files? It will be disk, natch, because access is faster and there will be a backup, probably also on disk, in case disk numero uno goes tits up. Tape is still tape, still slow compared to disk, even inside an LTFS wrapper.

If the company I work for already has a tape system and it implements LTFS then yes, I could use the tape, but why would I want to do that? If I was forced to then, fine, reluctantly I'd use the damn thing, but sneak in USB sticks to make life easier where I could.

There's a rumour that one large tape system-supplying vendor has not one LTFS-using customer in Europe. It wouldn't be surprising. For everyday access to files, putting LTFS access on tape in a disk-using world is like putting lipstick on a pig. It's still a pig.

Am I right or am I being a dickhead about this and missing a point or points?

Rise Of The Machines: What will become of box-watchers, delivery drivers?

Chris Mellor 1

What is the real problem here?

Sent anonymously to me:-


This is more or less a transitory problem. Even if this now causes 10M people extra on welfare, the problem will go away in wotsit 55-odd years. Of course, maybe there'll be no state in 55 years.

And that's assuming there really are no alternatives. With the screaming about needing more foreign workers, well, maybe this pool of labour can fill that need, who knows.

That there's little manufacturing on US soil left, I can't really be arsed to care. Partially their own fault for letting that happen. Then again, as cheap becomes popular it becomes more expensive, making manufacturing elsewhere interesting again. There's plenty of room for innovation here.

I don't think we should frame it as an insurmountable problem. There'll be change, and it'll be painful, to be sure. But with a little looking forward for opportunities instead of problems, a lot can be done.

Also, google has already lost all credibility as "doing no evil". But I still don't think I'm going to be enamoured of the idea of holding back true technical invention for the sake of the incompetence of the representatives failing to care for their large pool of workers.

Besides, it's a wider problem, much wider. Both in the geographical sense -- Foxconn building factories run entirely with robots, nary a human in sight -- and the technology sense. You already pointed out the Luddites (and they did have a point), and we've been doing nothing but putting people out of work.

Alright, not entirely true. We've also created a lot of cubicle potato jobs, both as "knowledge worker" (arguably positive) and as what looks like machine minders but really are minded-by-machine drones, barely thought capable of clicking an icon.

And that, that is much worse, for it makes us string puppets of our own technology. The same is happening in a lot of security applications, but the redmondian desktop is perhaps the most widespread insultingly patronizing string puppeteering of human workers available today.

But anyway. We've automated the shit out of many a thing, putting ever more people out of work, and in the meantime the population has done nothing but grow.

What, now, is the real problem here?

Facebook's OCP is unrealistic - for the rest of us

Chris Mellor 1

Facebook's OCP is unrealistic - for the rest of us

Can Facebook's OCP vision succeed in turning back time and disaggregating the IT server, storage and networking industry into separate component developments linked by support of common interfaces?

Brit disk biz Nexsan out of the frying pan and into the firing line

Chris Mellor 1

Re: Math not quite right

That seems very good thinking to me. I should have thought of it myself.

Cheers.

Software-defined data centre. Any takers?

Chris Mellor 1

Re: Software Ecosystem for Infrastructure

Naah, that seems unlikely. Mainstream customers will surely buy (pursuing low risk options) from mainstream vendors and only the biggest and most competent will "roll their own" with non-mainstream components. Just my two cents worth.

Chris Mellor 1

Re: Software Ecosystem for Infrastructure

I think any rise in SDDC will necessarily bring a restriction in the number of server, storage and network products supported to the subset of those available that "play nice" with VMware, Microsoft's Hyper-V and Red Hat Linux. The SDDC is a logical extension of the HW abstraction layer whose time "may" have come if top-level data centre IT component suppliers support it. But I feel these suppliers may well want to have their proprietary software layered on top of the data centre virtualisation software. It's what they've done with open abstraction layers in the past. Can they do this with VMware data centre virtualisation? We'll see.

Fixing Dell Storage

Chris Mellor 1

Re: EVA end-of-life and no 3PAR replacement products???

Well, that was then and HP has fixed its mid-range hole with the new StoreServ products and EVA-->StoreServ migration facilities. Dell now has a harder job to do, competing with HP.

Chris.

Chris Mellor 1

Fixing Dell Storage

Dell storage revenues have declined for two years and its head, Darren Thomas, has just resigned. How should Dell storage be fixed so it delivers on its promise?

The future of storage

Chris Mellor 1

The future of storage

I've been talking to Jean-Luc Chatelaine, EVP strategy & technology for DataDirect Networks, and I'd like to check out his view of things with you; it being surprising.

He thinks that, starting 2014 and gathering pace in 2016, we're going to see two tiers of storage in big data/HPC-class systems. There will be storage-class memory built from NVRAM, post-NAND stuff, in large amounts per server, to hold the primary, in-use data, complemented by massive disk data tubs, ones with an up to 8.5-inch form factor and spinning relatively slowly, at 4,200rpm. They will render tape operationally irrelevant, he says, because they could hold up to 64TB of data with a 10 msec access latency and 100MB/sec bandwidth.

He claims contacts of his in the HDD industry are thinking of such things and that it would be a disk industry attack on tape.

-------------------------

So .... what do you think of JLC's ideas?

Are you ready for the 40-zettabyte year?

Chris Mellor 1

Wrong interpolation

Sent to me anonymously and posted here as the quickest way to get the comment known:-

In your story, "Are you ready for the 40-zettabyte year?", you write, "The amount for 2020 would be 43.2ZB by interpolation." The increase is doubling every 2 years so you cannot use linear interpolation.

Half way between 2019 and 2021 the data will be closer to the 2019 amount than the 2021 amount. So 40 zettabytes looks reasonable for a continued doubling every two years.

-------------------------

Is he right? Am I innumerate?
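For what it's worth, a quick numerical check - the zettabyte figures below are made up purely to show the shape of the argument: with volume doubling every two years, the halfway point between two years is the geometric mean of the endpoints, not their linear average.

```python
# Sketch: linear vs geometric interpolation of a quantity that doubles
# every two years. Endpoint figures are hypothetical.
from math import sqrt

v2019 = 28.3                # hypothetical ZB figure for 2019
v2021 = 2 * v2019           # doubling every two years

linear_2020 = (v2019 + v2021) / 2       # straight-line interpolation
geometric_2020 = sqrt(v2019 * v2021)    # what exponential growth actually gives

# The geometric figure sits closer to the 2019 value, just as the
# commenter says; linear interpolation overshoots it.
print(f"linear: {linear_2020:.1f} ZB, geometric: {geometric_2020:.1f} ZB")
```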

Chris

Private cloud user in dialogue with Atmos, el Reg facilitating

Chris Mellor 1

Re: Follow-on?

Guus,

You make good points and your comment was an interesting and enjoyable read. Ombudsman we are not and wouldn't pretend to be. But we do like reporting interesting things and this surely is interesting.

Best wishes,

Chris.
