May I just say:
"How'd you like to drive this pulling machine, billy boy?"
UK-based data storage start-up DataSlide has announced potentially revolutionary hard drive technology, and a Partnership Network agreement with Oracle for the Berkeley DB database to be embedded into the device. DataSlide's Hard Rectangular Drive (HRD) does not use read-write heads moving across the recording surface of a …
"How'd you like to drive this pulling machine, billy boy?"
So sad. They just missed (by two years) the 75th anniversary celebration of the first time it was invented.
Apparently an HDD has zero dimensions, at least according to their PowerPoint circa 2006.
Whilst if this works it looks like SSDs could have been the shortest-lived fad ever, where is the proof? The slide set talks about all the concepts being proven in July 2006, and the article says all the technology is proven and doesn't need big changes to the manufacturing process. So how come this hasn't already come out and defeated SSDs? If it really were that easy I'd expect a product in a range of capacities to be on the market already. I'm guessing the catch is coordinating all the heads. Otherwise I would have expected someone like EMC to have taken their arm off at the elbow in the race to get this into their SAN arrays ahead of their rivals.
If this really works it truly is revolutionary. It makes you think: why the hell did they make hard discs the way they did back when they did it? It all seems rather silly, really.
If it does what it says on the box, at a reasonable price, it will make a lot of hard drive manufacturers go "hmm", and then "ohshit".
I remember seeing this in textbooks (about mainframes) when I was first at college in the late 80s. I have occasionally wondered why (with the advantages they promised) they never seemed to make it to the mainstream.
I always assumed the head density required was beyond the technology available.
...because then you'd have to machine a curved surface. Very difficult for HDD tolerances. Also says the surface oscillates not rotates.
...position the boom then read like stink from 64 flying heads. This was ca. 1970, when the drums were already rotating at 7200 RPM.
Sounds much like the old Burroughs head-per-track disk drives in use up to at least 1975.
Pah, old news, you've been able to buy Rabbit vibrators like that for years.
Paris, she told me so...
There sure are a lot of dismissive comments in here; I guess that's easier than actually reading the article.
This isn't a drum, or a hard disk with multiple heads. It is an 8x8 array of heads shimmying about a static piece of magnetic recording medium.
The technology is the same as that of a hard disk - magnetic heads over a magnetic medium but the execution is a genius 'why didn't I think of that' step.
I'll be interested to see how this pans out.
I love that it says 'CONFIDENTIAL' on each slide. Slide 10 mentions 36 Gbytes and Dataslide Capacity. Is this the intended capacity of initial production models? If so, and if they can deliver the performance, then they would be very good as system disks for both high-end desktops and embedded systems. If not, then we won't hear much more about them.
Slide 14 explains that a spinning HDD doesn't use the centre or the corners of the storage surface; could that be because the storage surface has a spindle in the centre and ...er...doesn't have any corners?
Slide 16 claims it has no seek time, but if it 'moves' the surface by oscillating at 800 Hz, that will give an effective seek time of 1.25 ms maximum and less than 1 ms on average; which is small but non-zero.
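For what it's worth, that arithmetic is easy to check. A minimal Python sketch, using only the 800 Hz oscillation figure from the slides (everything else is derived):

```python
# Effective "seek" (really latency) implied by an 800 Hz oscillating surface.
osc_hz = 800
period_ms = 1000 / osc_hz      # one full oscillation cycle
max_wait_ms = period_ms        # worst case: data just went past, wait a whole cycle
avg_wait_ms = period_ms / 2    # on average, wait half a cycle

print(period_ms, max_wait_ms, avg_wait_ms)  # 1.25 1.25 0.625
```

Small, as the comment says, but definitely not zero.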
The entire presentation has the look of a VC-wank show, created by marketing after they've had the usual very short chat with the engineers (running away as soon as they can before any possible infection takes hold).
How do they maintain the extremely accurate positioning required between all 64 heads when they only move the media?
You would think that on a conventional 3-platter, 6-head drive you could use all 6 heads at the same time and get 6-way RAID-like performance from a single drive, but you can't, because the variable misalignment between heads means the single actuator can only keep one of them accurately on track at a time. They have the same problem. Can they keep the heads and media aligned, and thermal expansion matched, well enough to allow the heads to be small enough to give a useful storage capacity?
Finding the web site reveals a few more things. There aren't 64 heads - there are (so it says) millions of the things. However, only 64 of them can be working at any one point. That at least covers the point of being able to move heads independently, although it does require the fabrication of vast numbers of read/write heads over quite considerable lengths. An 8cm square plate would require 6 such assemblies. In order to address 36GB, each square would have around 4.5Gbits, or a matrix of about 67k x 67k bits. In practice it would probably be more like 80k x 80k bits to allow for error correction and all the other stuff. The "track width" would be about 125 nanometres (of course the read/write heads would have to be quite a lot smaller). If the plate were smaller then the dimensions would have to be reduced appropriately for the same capacity. Note that's about 5 million heads in all.
An areal density of 4.5Gbits per sq cm is fairly modest - there are drives with about 10 times that areal density. In principle, the square plate could be smaller by a factor of about 3 in each dimension without exceeding what is done on some high-density drives. Also, the above assumes that the bits have roughly the same linear density in each direction. It would be possible to have fewer tracks with more bits per unit length. However, do the maths at the 800Hz reported oscillation speed (from the pdf): taking each "track" at about 9KB, we get 64 heads x 9KB x 800Hz = 460MBps, or close enough to their 500MBps. Average latency will be about 0.6ms (assuming one-way read/write and, like a disk, that average latency is half the cycle time).
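That back-of-envelope sum checks out. A minimal Python sketch, assuming the comment's figures (64 active heads, roughly 9 Kbytes read per head per stroke, 800 Hz from the pdf):

```python
# Sequential throughput and average latency implied by the comment's assumptions.
heads = 64
bytes_per_stroke = 9 * 1000    # ~9 Kbytes transferred per head per oscillation
osc_hz = 800

throughput_mb_s = heads * bytes_per_stroke * osc_hz / 1e6
avg_latency_ms = 0.5 * (1000 / osc_hz)  # half the cycle time, as for a disk

print(round(throughput_mb_s), round(avg_latency_ms, 3))  # 461 0.625
```

So roughly 460MB/s and 0.6ms, as stated, given those assumptions.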
As the heads appear to be held in dead reckoning with the data plate (unless there is some form of micro-adjustment per group of heads), the tolerance for differential expansion will be virtually zero - a relative expansion of 125 nanometres across a width of 8cm is a tiny amount. That's about one part in 640,000.
I'll be convinced when I can buy one of these things. My money is still on flash...
Quite apart from this not even being a working prototype - more a concept model with a few of the principles having been lab-demonstrated - I really have to question how on earth some of those random-access IOPS figures are justified. With 64 heads operating in parallel, it's very easy to see how 500MBps can be reached (less than 8MBps per head).
The PowerPoint slide set implies that in latency terms it is equivalent to a 96,000 RPM hard drive. That would give an average rotational latency of about 0.3ms (ignore seek time for these purposes). If the oscillating plate can be read/written in both directions (a challenge), it would have to oscillate backwards and forwards about 1600 times per second (which is going to use a lot of power for a slide of any reasonable mass). Even then, to get the very high number of random IOPS quoted, each head would have to be individually movable at right angles to the direction of movement of the rectangular plate (an essential, but huge, engineering challenge of great complexity). So it is just about possible to imagine that you could get 64 x 3200 = about 200,000 theoretical random IOPS. Knock off a few, because zooming from one side of a rectangular sector to another will take time, so there will be a few "oscillation" misses, and inevitably there will be times when some heads have no IOPS to do and others more than one; then you might come up with a figure of 160k IOPS.
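A quick Python check of that chain of reasoning, using only the figures assumed in the comment (96,000 RPM equivalence, 1600 oscillations/s, an I/O opportunity on each stroke, 64 heads):

```python
# Latency and theoretical IOPS implied by the "96,000 RPM equivalent" claim.
rpm_equiv = 96_000
avg_rot_latency_ms = 60_000 / rpm_equiv / 2  # half a revolution, in ms

osc_per_s = 1600                  # oscillation rate needed for that latency
strokes_per_s = osc_per_s * 2     # one I/O opportunity per stroke (both directions)
heads = 64
theoretical_iops = heads * strokes_per_s

print(round(avg_rot_latency_ms, 4), theoretical_iops)  # 0.3125 204800
```

So about 0.3ms and just over 200k theoretical IOPS, before knocking off the misses, which is consistent with the 160k figure quoted.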
The very considerable engineering challenges here are :-
What mass is this slider, and how much power does it take to oscillate it at the kHz+ level? Note that rotary movement is relatively efficient once spun up (subject to frictional drag) - the constant acceleration/deceleration of an oscillating plate is not.
How easy is it to move 64 heads individually, even in just one dimension (and if you can't move them individually, then you can't get the random IOPS)?
How can you keep the read/write heads at the right distance from the slider? Traditional hard drives float heads on a cushion of air. That isn't going to be possible here - is this a contact system, or is the separation finely engineered mechanically?
What is the data capacity of this beast? It's going to be expensive on a per-plate basis, and it doesn't seem amenable to stacking lots in an enclosure due to the complexity. The total surface area can be bigger than a 3.5" single platter, but something (say) 8cm x 8cm (a little over 3 inches) is going to require the plate and heads each to be movable about 1cm (a big ask at those access rates). Reduce the plate size and you have less of a problem, but it cuts capacity - a 3cm square plate has about a quarter the capacity.
I'd also worry about the reliability of this thing with all those mechanical parts. Even then, the latency on I/Os comes nowhere near the best flash drives. This thing would appear to offer latencies in the region of several hundred microseconds. The best flash drives are down at 10s of microseconds (albeit you need direct PCIe attach to see it).
So lots and lots of questions about this and surely the days of macro-sized mechanical storage beasts have got to be numbered. Moving parts are just bad - oscillating moving parts probably worse than rotating ones.
I would have thought that there would be quite a few difficulties with this kind of technology.
It does say that the surface oscillates.
In a conventional harddrive, when the read heads move, there is a "settling down" time for the heads to come to a proper rest so that you can start reading reliably. It seems to me that they are going to have big problems since the surface is in almost perpetual movement and changing direction to boot.
Sounds quite hard to me. But if they can get it to work, it would be pretty quick.
Reading their website (much more informative than the ppt), it appears that they have a "massively parallel" array of R/W heads, and that 64 of them can be active at one time. It's "one head per sector". Still, claiming 160,000 ops/s, and 64 heads in parallel, requires 2500 ops/s/active head. Either the oscillation rate has been increased from the 800 Hz (from the ppt) or more than one I/O per stroke is possible, perhaps both.
>It is an 8x8 array of heads
Look again at page 14 of the presentation. It says "one head per sector". Although elsewhere it says that there can be 64 heads active at any one time, it doesn't say that there are only 64 heads. That also explains the lack of seek time. All the controller has to do is activate the head that corresponds to the sector you want to read, then start the piezo jobby and read the data. The heads seem to be lithographed (if that's a word) onto the substrate. Although I can't say what so many heads would do for the cost of the device.
Presumably, also, the 64-active-heads number is just a placeholder - it could be more, if you paralleled up the number of controllers - though if 64 would saturate a PC's bus, there would be little point in having more, until you want to address the storage server market.
So far as the comment about corners and centres goes, it looks like all the presentation is trying to say is that when you fit a circular platter into a rectangular disk tray, there's wasted space around the outside and in the middle - a disadvantage that this rectangular plate, whizzing up and down, doesn't have. I'd also guess that you could sandwich lots of head/data layers together in one box to multiply up the storage, too.
To deal with each of the above in order:
It is not a drum; it is a (very) flat rectangle of Corning ULE glass, which has the mechanical properties of aluminium and a CTE of 10^-9, hence no thermal instability or misalignment issues.
Actually, a read/write head is a 'point' source, and a point or locus is by definition dimensionless; a line has one dimension, a square two dimensions, etc. Two dimensions are one of the foundations of relational calculus, hence SQL and most databases, and hence architecturally useful in such applications.
All the heads are lithographed (at micron feature size) in a very similar manner to LCD production, using precisely the same techniques as current HDD heads. However, by using a novel (hence patented) but fundamental magnetic flux concentration principle, the flux produced is both orthogonal to the head matrix surface and ideal for perpendicular media, and significantly more efficient per unit of current.
The huge reduction in process steps and component parts means that it will be possible to manufacture at a similar price point to current high-end HDDs, especially if one takes into account the short-stroking and switching off of the cache of such drives at Tier 0.
The novel flux 'focus' and the simplicity of the design mean most of the lithography is self-aligning, hence very low mask costs.
There is a very considerable amount of prior art from recent decades attempting just this; however, the majority of these efforts concentrated on solving problems of thermal misalignment in particular, and the use of the Corning ULE glass removes that issue.
The heads are all fixed; the entire matrix moves, about 100 microns, and the random access is achieved by time-slice switching during each half oscillation.
Similar to any number of head-per-track etc. devices, but in fact a head per sector, and essentially a RAM-mapped architecture.
The current first product feature size and surfaces provide between 40 and 80 GBytes. This size of device at Tier 0, at very low energy per IOPS, is a premium and growing market. Also, it is plug and play: it can be put into a server rack in the DMZ and you can let your SysOps loose... it won't be long before it moves down the storage Tiers, and curiously that maps to our proposed manufacturing road-map :-)
True, a disk wastes about half of the available linear space for media in an HDD case; not only that, but it wastes the 'depth' of the case, which dataslides can fill with head and media units to give increased capacity in a standard device - hence 'spatial' as opposed to 'areal' density.
There is no seek, since each head is located over the sector of media (and data) to which it is registered; it does however have a LATENCY of 0.5 ms at 1 kHz. Also, with some MRAM in the CMOS and some appropriate firmware it is possible to reduce this considerably.
Curiously, as to the order in which the various professions were introduced to the concept: this was actually invented by SysOps tired of managing server farms... scarily, the engineers are still very much in the ascendant :-)
The basic alignment process is done with piezoelectric actuators, which are currently found in any number of industrial applications, especially the IC industry; nanometre accuracy is standard, and it is a direct voltage-step process.
Also, the CTE of 10^-9 means about an atom width per kelvin, I believe... so not an immediate problem...
In fact it is likely that the first product, because of the largest/optimal IC stepper size, will be a RAID device on each surface...
Nice idea though
if it ever appears, is going to need to be cheaper than an ordinary hard drive to stand any chance.
10,000 SSD re-writes, anyone?
I can think of a workaround for every single problem I can find with this design but it ends up being a lot of workarounds. It's not so simple and robust in the end. The web site looks fishy to me. All that talk of IOPS and bandwidth without mentioning density reminds me of cars and trucks that quote crankshaft torque without the RPM.
Mine's the bicycle with over 300 foot-pounds of crank torque (if I pull really hard on the handlebars).
Interesting reply by Mr Barnes. The only questions still lurking for me are:
how is the read/write plate kept off the storage surface?
how are the read plate and storage surface kept aligned in the x axis as the storage surface vibrates away in the y axis?
I accept that it only moves microns, but surely, ever since analogue computers, we have known that electro-mechanical devices are inherently more prone to failure than solid state?
Why is this device better than solid-state memory / memory sticks?
Has it a longer re-write life, and why?
And can't we now do core memory in flatpack construction?
Thanks for the detail - it fills in a few gaps and it certainly makes the position look very different from the PowerPoint slides and press release stuff. So it looks very much like this is going to maximise the area of the slide within the 3.5" form factor (which maybe allows for something close to 10cm x 15cm, or 150 sq cm). That means the areal density will be moderately low - 36GB over 150 sq cm gives about 1.92 Gbits/sq cm, or a pitch (assuming equal density in both directions) of about 0.2 microns, allowing for a small overhead for error correction bits and so on (a micron-level pitch would only allow for about 15 Gbits, or 2 Gigabytes, on a single surface). However, I think it almost certain that the bit density along the line of oscillation is much higher than that between tracks, as that would reduce the number of heads considerably. Even so, the number of heads is going to be vast - many tens of millions, maybe heading towards the 100 million mark.
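The density arithmetic above is easy to verify. A small Python sketch, assuming the comment's figures (36GB spread over a roughly 150 sq cm plate, equal bit pitch in both directions):

```python
import math

# Areal density and implied bit pitch for 36GB over ~150 sq cm.
capacity_bits = 36e9 * 8
area_sq_cm = 150

density_bits_cm2 = capacity_bits / area_sq_cm       # bits per sq cm
pitch_microns = 1e4 / math.sqrt(density_bits_cm2)   # 1 cm = 1e4 microns

print(round(density_bits_cm2 / 1e9, 2), round(pitch_microns, 2))  # 1.92 0.23
```

So about 1.92 Gbits/sq cm and a pitch in the region of 0.2 microns, as the comment says.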
The between-track density might be fairly low - the pitch perhaps an order of magnitude bigger than the best semiconductor fabrication - but it is surely something of a challenge to produce such large matrices of heads over quite large surface areas of tens of square centimetres.
I await products - I still put my money on flash at the moment.
Over the many years I've been in the IT industry there have been countless discussions of when the hard drive would finally become obsolete.
Well, it hasn't happened yet, and if this stuff really does do what it says on the tin then it's not going to happen anytime soon either.
Mine's the one with a Hard One in the pocket.
The surfaces are in full-face contact, with a coating of ta-C (tetrahedral amorphous carbon). This is, simplistically, a solid mixture of sp3 diamond and graphite with a CoF of 0.01 - 0.001 depending on nitrogen doping and humidity; it is therefore one of the hardest and slipperiest materials known. It has a very high heat transfer rate, and in long-term tests has shown no detectable heat from friction, no stiction, and no detectable wear in high-abrasion tests. The effective 'fly-height' of the heads is the RMS of the surface of the head matrix, which Corning can provide as a raw material at <1 nanometre RMS and <10 nanometre curvature.
The head matrix plate and the media surface are kept aligned in the X axis by an active piezoelectric actuator and by spring-loaded side bearings, also coated in ta-C - essentially a 'square bearing'. It is also important to consider that, apart from the ULE glass, the amount of skew necessary to misalign mechanically is quite large over the entire slide.
It may appear pedantic, but the drive is a direct step-voltage-driven oscillation, not a vibration; the former is deterministic and can be predictably controlled, the latter is neither.
Curiously, and often counter-intuitively, solid state comes with a built-in cause of failure: many recent small-feature-size chips have as little as 10,000 re-writes. Obviously this is not an issue for your photos and music etc. on a memory stick, but at Tier 0 of a server stack, logging, metadata and snapshot management can use this up very rapidly. Wear-levelling is used, but this also comes with re-write issues.
Dataslide media is standard magnetic media and has a very well proven history in terms of longevity.
Also piezoelectric actuators are a very mature technology and have been tested for many billions of cycles.
There are in fact 'flat-pack' products; they do not however deal with the issues of re-write, power consumption, price/performance or heat dissipation (which becomes a more significant issue as the heat produced in inner layers has to be removed through the outer layers).
My apologies, the capacity is rather deep in the technical 'stuff'. The current capacity, at the current sampled head feature size of 1 micron, is 80 GBytes for a 2.5-inch form factor; the road map is 2 TBytes in two lithography 'turns', which the IC people tell me is pretty unambitious... ?
Incidentally, the potential problem of noise from the oscillation is dealt with by having two units (each consisting of one media plate and two head matrix surfaces) physically coupled with an anti-phase drive signal; this effectively acoustically couples both, and the result is a silent drive.
Indeed, let us turn our eyes away from the 100 year + lifespan of SSDs and look at the new toy.
That will break down in a few years.
Sad little bit of distraction, isn't it?
100-year lifespan? I'd like to meet the maker of that gem of marketing BS.
Let me see if I now understand your product at least somewhere nearly adequately:
80 GB is about 160 million sectors at the currently-standard sector size of 512 bytes (granted that may change to 4 KB before long but if you're sampling now that's what you're stuck with in most environments). At your stated one head per sector that's 160 million heads (80 million per side if you're using two-sided media). Given the cost of current conventional disk heads that would be a show-stopper in itself, so you must obviously have decreased this cost by many orders of magnitude (certainly by being able to manufacture heads in huge batches per chip and possibly aided by the significantly different mechanical environment in which yours operate).
You don't need perfect 2 in x 2.5 in head plates or media surfaces, since it really won't matter much if even many thousands of heads per plate (and/or many thousands of sectors per surface) are defective as long as you detect them and map them out before shipping the product.
If the amplitude of your oscillation is 100 microns as you stated and you must fit 5K - 10K bits (one 512 byte sector plus overhead and gaps) into that space that's a linear density of 10 - 20 nm per bit which is comparable to linear densities on the densest contemporary conventional drives and approaches the size of a single magnetic grain on the media (perhaps your different mechanical environment makes this density easier to achieve). You could fit 660 such sectors along the 2.5-inch long dimension of your plate and would then need about 120,000 'tracks' across the 2-inch dimension (60,000 tpi, which is only about 1/4 that of the densest contemporary conventional drive) resulting in track spacing of about 0.40 microns - perhaps consistent with your statement above that the heads are manufactured 'at micron feature size'.
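A quick Python sketch of those linear-density sums, using the figures assumed in the comment (100-micron stroke, one 512-byte sector plus overhead per stroke, 120,000 tracks across a 2-inch plate):

```python
# Linear bit density and track pitch implied by the comment's assumptions.
stroke_nm = 100_000                 # 100 micron oscillation amplitude, in nm
bits_per_sector = 512 * 8 + 1000    # data bits plus a rough overhead/gap allowance
linear_nm_per_bit = stroke_nm / bits_per_sector

tracks = 120_000
plate_width_in = 2.0
tpi = tracks / plate_width_in                        # tracks per inch
track_pitch_microns = plate_width_in * 25_400 / tracks  # 1 inch = 25,400 microns

print(round(linear_nm_per_bit, 1), int(tpi), round(track_pitch_microns, 2))
# 19.6 60000 0.42
```

That lands within the 10 - 20 nm per bit range and at about 60,000 tpi with roughly 0.4-micron track spacing, matching the comment's figures.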
What the above (particularly the relationship to media grain size) seems to imply is that you'll need to get any short-term density improvements almost wholly from tpi increases and will be lucky to approach a factor of 10 there even with significant improvements in lithography (in contrast to the factor of 25 which you hope for). This means that you may remain at a significant rack density disadvantage when compared with conventional disks (especially when comparing against 2.5" conventional disks, where your power advantages become less significant as well).
As long as your side bearings don't allow side-to-side movement beyond a few dozen nm you don't need head alignment perfection in the head plates either, since each head sees only the portion of the media surface which belongs to it (but you do need to keep the heads adequately separated - in both dimensions). The side-to-side tolerance becomes tighter commensurately with the tpi increases mentioned above.
On the IOPS front, with 64 heads accessible in parallel on each of two surfaces you can achieve 128,000 random sector transfers per second with 1000 oscillations per second. Does the 160,000 IOPS claimed in the article mean that you can transfer in both stroke directions of a cycle?
The IOPS that you describe are not, however, directly comparable to conventional disk IOPS:
1. Disk IOPS are normally measured using 4 KB transfers. If you used 4 KB transfers that would decrease your IOPS by a factor of 8.
2. Real-world disk IOPS often involve even larger random transfers (e.g., 8 KB or 32 KB for older Oracle databases, quite possibly larger ones for newer ones or other contemporary applications). Again, that would cut your real-world IOPS commensurately for such applications.
3. Disk IOPS improve with queuing depth with only a sub-linear increase in latency. Your IOPS improve far less with queuing depth and your latency increases linearly.
In other words, you do have a dramatic advantage over a conventional disk in the IOPS area but you're currently overstating it significantly.
At 1000 oscillations per second and 64 heads in parallel on each of two surfaces you can transfer 64 MB/sec using 512-byte sectors (or 128 MB/sec if you can read/write on both stroke directions of the cycle). So how do you achieve the 500 MB/sec figure that the article claims?
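That bandwidth ceiling is straightforward to check. A minimal Python sketch, assuming the comment's figures (1000 oscillations/s, 64 heads in parallel on each of two surfaces, one 512-byte sector per head per stroke):

```python
# Sequential bandwidth ceiling under the comment's assumptions.
osc_per_s = 1000
heads_per_surface = 64
surfaces = 2
sector_bytes = 512

one_way_mb_s = osc_per_s * heads_per_surface * surfaces * sector_bytes / 1e6
both_ways_mb_s = one_way_mb_s * 2  # if both stroke directions can transfer

print(round(one_way_mb_s, 1), round(both_ways_mb_s, 1))  # 65.5 131.1
```

That is close to the ~64/128 MB/sec figures quoted, and well short of the claimed 500 MB/sec.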
You quote a latency of 0.5 ms, but it's not clear what that's supposed to mean when compared to what latency means for conventional disks. Your best possible access time for a request is clearly 1/2 cycle (0.5 ms). Your worst possible access time for a request (in the absence of queuing) would be 1.5 cycles (1.5 ms) if a) you can read/write in only one direction and b) you can start the transfer only at the start of the sector and c) the request hits you just after you've passed the sector start, resulting in an average access time of 1 cycle (1 ms). If you can read/write in both directions *and* start a transfer anywhere within the sector, the worst-case access time drops to 1 cycle (1 ms), resulting in an average access time of 0.75 cycle (0.75 ms).
Incidentally, there's nothing that MRAM can do for that latency, and any effect that it has on *perceived* average latency can be applied to disks as well - more effectively, in fact, since their actual latency is so much higher.
And your suggestion that your two-dimensional medium layout is somehow 'architecturally useful' to SQL and the relational calculus sounds like pure poppycock, but I'd be happy to listen to you explain why that might not be the case.
The bottom line appears to be that your product offers about 1/10th the random-access latency and 10 to 100 times the practical IOPS of a conventional enterprise drive (or 1/20 the random access latency and 20 - 200 times the practical IOPS of a conventional SATA drive) with comparable bandwidth, rack density, and power consumption (especially when compared with 2.5" conventional drives).
That means that for capacity- and bandwidth-driven applications you'd need to sell your current 80 GB units for well under $10 apiece to compete with SATA drives (which are currently running about $80/TB at Newegg) - which doesn't strike me as providing the profit margin that you'll likely be seeking.
You would, however, appear to satisfy a niche for high-IOPS applications which don't require large amounts of storage more cost-effectively than conventional disks can even if your units are priced at well over $1000 apiece (at least as long as those applications can't share their storage with a great deal of 'cold' data and hence reap the benefits of the many disk arms that would otherwise be largely idle). In that environment, however, you need also to compete with flash storage that offers comparable bandwidth, IOPS, and latency (far better read latency, in fact - and probably better shock-resistance too) at a far more attractive price point.
So while your technical approach is really neat, unless you can make it inexpensive as well it's not clear that it will fly competitively. But I'd be delighted to be convinced otherwise.
The basic 'unit' of a dataslide is a double-sided media plate with a head matrix on each side. Two of these are then held in opposition to each other and driven by the same signal in anti-phase (to acoustically couple them and therefore remove any sound from the oscillation), so the working 'unit' has four media surfaces. Because this unit is a few mm in depth it is possible to fit at least 2, and in a full-height form factor 3, of them into that space. I believe that this may help with capacity calculations; also, read and write is done on each half cycle.
The immediate advantages are to Tier 0 as observed.
Bravo that man, a very interesting counterpoint and one which doesn't nay say, rather it puts the tech in its relevant light.
Going back about twelve years, I knew some clever bunnies. They had an idea to replace disks with "3D storage CDs". The idea started similar to this disk idea - a square CD that didn't spin, but had a very fast and accurately aimed laser that danced all over the surface to write or read data. Seek times were amazing, and the only moving part was the laser turret. Once they had cracked the laser aiming problem, they looked at a 3D version, putting the laser on a mount in the centre of a box made of six similar squares, giving almost 360-degree coverage and almost six times the density (you lost a little area where the laser's mounting arm entered the box). A big problem was that the CD tech of the day didn't have the data density required to take on hard drives, and the 3D box idea took up too much space compared to a 3.5in drive. But what really killed it was that they couldn't get VC funding, and this was in the pre-Y2K boom times. If this has got VC money behind it in a downturn then maybe it has got further down the development trail than my friends got.
It seems to me that instead of using one of these 36GB devices, if one desires IOPS they should put their database in DRAM.
Considering that 4GB DIMMs now wholesale for less than $35, and that with both Metaram and ZRAM now being pushed into the channel we will be up to 32GB DIMMs by the end of next year, why would I buy a bunch of magnets?