Intel has rejected a reported finding that the clever controller algorithms that deliver terrific write I/O performance on its X25-M solid state drives actually contribute to the drives slowing down over time. The issue is fragmentation. The Intel controller combines many small file writes, smaller than the X25-M's block size, into a …
Other systems which use roll-up write techniques to optimise write performance see this sort of effect. NetApp OnTap will do it if you run at high space utilisation levels: the available free space gets fragmented, and efficient writes become harder because there are few "clean" areas left to write into. The worst access patterns are those with lots of small, random updates (typical of a database); if the workload is largely sequential writes then it is much less of an issue. However, given that the random read speed of these things is a lot better than real disks, decent algorithms which avoid fragmentation could be implemented a lot more efficiently than with real disks.
Isn't this what a wearlevel system is all about? Mapping logical blocks to random physical blocks so that all cells get an equal amount of write/erase cycles.
What could contribute to the slowdown is that for some reason the dirty cells aren't erased again quickly, so at some point a write operation has to wait for an erase cycle to free up a cell.
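The two comments above (wear levelling plus dirty cells waiting on erase cycles) can be sketched as a toy flash translation layer. This is a minimal illustration, not Intel's actual controller design; all class and method names here are made up for the example:

```python
# Toy sketch of a flash translation layer (FTL) doing wear levelling:
# logical blocks are remapped to whichever erased physical cell has the
# fewest erase cycles, so all cells wear evenly. If no erased cell is
# available, the write has to wait for a bulk erase pass, which is the
# stall described above. Purely illustrative names and structure.

class ToyFTL:
    def __init__(self, num_cells):
        self.erase_counts = [0] * num_cells      # wear per physical cell
        self.free_cells = set(range(num_cells))  # erased, writable cells
        self.mapping = {}                        # logical block -> physical cell
        self.dirty = set()                       # stale cells awaiting erase

    def write(self, logical_block):
        if not self.free_cells:
            self.garbage_collect()               # the write stalls here
        old = self.mapping.get(logical_block)
        if old is not None:
            self.dirty.add(old)                  # old copy is now stale
        # wear levelling: pick the least-worn erased cell
        target = min(self.free_cells, key=lambda c: self.erase_counts[c])
        self.free_cells.remove(target)
        self.mapping[logical_block] = target

    def garbage_collect(self):
        # erase stale cells in bulk; on real flash this is the slow part
        for cell in self.dirty:
            self.erase_counts[cell] += 1
            self.free_cells.add(cell)
        self.dirty.clear()
```

Rewriting the same logical block repeatedly lands on a different physical cell each time, and only forces an erase once the pool of clean cells is exhausted.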
Did they refute it, or did they just deny it, without proving their point? Refer to a dictionary for the difference, please.
Re: Anything new?
Good idea, but unfortunately they saw honking levels of performance degradation in read operations too, so it isn't about writes having to wait for an erase to occur.
Conventional fragmentation makes no odds to an SSD anyway: it doesn't matter which cells you read in what order, the access time is the same, as there's no physical geometry or moving head to take into account. That leaves Intel's "packing" strategy as the likely culprit, and the authors here make a compelling argument for it.
I went looking for the Intel X-35, thinking I had missed the news and bought an obsolete model...
Are you sure you got that right?
Uh, help me here. Got to disagree, because that's not what happens with random writes. The *best* access patterns are those with lots of small, random updates, because the system turns those into sequential writes in new space; ONTAP doesn't overwrite blocks.
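The "turn random updates into sequential writes" behaviour described above can be sketched in a few lines. This is a hypothetical toy of the general log-structured idea, not NetApp's implementation: updates never overwrite in place, they append to the next free area, and a map tracks where each logical block currently lives:

```python
# Minimal sketch of a log-structured (no-overwrite) store: random
# logical updates become purely sequential physical writes, and stale
# copies simply become garbage to clean up later. Illustrative only.

class LogStructuredStore:
    def __init__(self, size):
        self.media = [None] * size   # physical blocks, filled sequentially
        self.head = 0                # next free position on the "disk"
        self.where = {}              # logical block -> physical position

    def update(self, logical_block, data):
        # a scattered logical update lands at the sequential write head
        self.media[self.head] = data
        self.where[logical_block] = self.head
        self.head += 1               # any old copy is now stale

store = LogStructuredStore(8)
for lb in (5, 1, 3, 1):              # scattered logical block numbers...
    store.update(lb, f"v{lb}")
# ...land in physical positions 0, 1, 2, 3: purely sequential.
```

The flip side, as noted in the thread, is that those stale copies fragment free space as utilisation climbs, which is where background reallocation comes in.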
And for fragmentation (which any fule kno happens on any highly utilised system) there's reallocation: a background, user-schedulable process that "defrags". I'd like to see your proposal for avoiding fragmentation. You could make a fortune...
didn't some review find out recently...
that SSDs using more than 80% of their capacity start to slow down? The solution: format the drive to 75% capacity and leave 25% unformatted!
@ Ricky H
If the slowdown was present at all, that solution will just shift it to the point of 80% of the 75%-sized partition, which arrives even sooner than not doing it.
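If that objection holds (i.e. the slowdown tracks utilisation of the partition the OS actually writes to, and the unformatted space doesn't help), the arithmetic is straightforward:

```python
# 80% utilisation of a 75%-sized partition corresponds to
# 0.8 * 0.75 = 60% of the drive's raw capacity, i.e. the claimed
# threshold arrives earlier than the 80% you'd hit on a full-sized
# partition. Only illustrates the objection above, not a measurement.
raw_capacity_fraction = 0.80 * 0.75
assert abs(raw_capacity_fraction - 0.60) < 1e-12
```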
Bin as media hype
Until we see some numbers and methodology we can't really have a clue what's going on. If the reviewer of that other article has been hesitant about publishing serious stats, I'd say it raises questions about the validity of his review. Until then I feel it should be: innocent until proven guilty.
Sounds like WAFL to me :-) Call the patent police.
Actually, it was known before
Someone should point Intel to http://www.bigdbahead.com/?p=71