It's Windows Vista's fault that solid-state storage isn't performing as well as its proponents predicted. So said SanDisk CEO Eli Harari, but at least he didn't go as far as saying it's Microsoft's problem to fix. SSDs are viewed as the heir apparent to the hard disk, particularly for laptops and other mobile computers. SSDs are …
I don't know
I think bloated operating systems can be blamed to some extent if they thrash like hell. My Linux boxen don't run 'small, basic' sorts of software but 99% of the I/O is real, unavoidable I/O, not virtual memory thrashing. None of this constant chug, chug, chug of the disk I see on my XP box (3GB of RAM) seems to happen on my Ubuntu machines (512MB to 1024MB of RAM - sorry, core).
We have brought out this shiny new technology and it doesn't work with existing technology, so it's their fault, not ours.
Next you'll be telling me my USB3 device is not compatible with DOS 3.22
That'll be why
That'll be why tests have shown the MacBook Air's performance is better when you have the SSD, right?
He may as well have said "our vista drivers are borked"
It's not MS's responsibility to make drivers for hardware, it's the hardware manufacturer's. Just about every other hardware company out there knows this.
Even if the likes of Nvidia cause 26% of the crashes on Vista with theirs...
"We'd say they're the SSD's shortfalls. "
"Vista works the way it does because of its long hard disk heritage. If SSD makers want their products to replace HDDs, it's up to them to develop drives that can be slotted into existing systems and deliver real benefits."
Bullshit. If one wants to utilize a technology to its best, one needs to take its limitations into account. This especially applies to a new technology. Why bother with sectors, file fragmentation and other filesystem crap on storage that is just a flat memory address space? Why tax the OS more than is strictly required?
Of course, this being Vista and Microsoft with all "we can run all the 20th century programs" heritage, this is what some are expecting. And some are loathing.
and yet the benchmarks are on Mac OS X?
Vista is to blame for bad benchmarks on OS X?
I really don't think it's anything to do with MS.
Seek time and transfer rate
What else do you need?
Seek time should be zero, giving you a few ms advantage, then transfer rate - well, that should be limited by the interface, no need to get a spinning disk past a read head to get at the data...
Just because Vista needs 40GB of swap space (which it uses badly) shouldn't compromise the performance of an SSD - surely...
What have I missed?
Can he back this up?
Does he mean it's slower or consumes more energy on Vista? Is that compared to XP or Linux?
What's Vista doing differently? Does it hit its page file more?
So do SSDs perform well on the MacBook Air?
How about Linux (e.g. ReiserFS)?
Come on El Reg, how about beefing up the content of the article?
It's not just the memory cacheing...
Vista's disk-activity issues extend well beyond simple swap files - there's Shadow Copy and background indexing also. As many others have already discovered, "optimised" Vista tends to mean XP
It's entirely down to the disk manufacturers. They either need to build this into the drive controller (as they're conceding) or write specific drivers for the OS - which seems like overkill as the IDE/SATA drivers are pretty much standard. Hardly MS' fault!
Call me dumb....
But why not have a hybrid device - put a (relatively) small HD to be used solely for swap and mount the rest of the storage as SSD.
I only wish M$ would bring out a better hard disc filing system rather than the current spray-and-pray one they use with all their existing OSes!
OK, you're dumb...
Because some small hard drive would be dog slow and make for crappy swap space...
A flash SSD *should* be faster and be much more suited to being a swap file...
Most device manufacturers have gone the other way round, cache frequently accessed data on a small fast SSD which is joined to a massive hard drive for the less often used pieces of data...
Not run well on MacBook Air
Someone asks how it runs on the Air - not as well as the spinning counterparts.
Though it does sum it up quite well in the linked article - SSDs perform better for random access, but conventional hard disks work better in sequential mode, which is what you'd expect really. I guess this is what the argument boils down to - "Everyone's realised how to tweak performance out of normal hard disks and adapted the store procedures to match - no fair!"
Remove the trick of storing large blocks of similar data contiguously and SSDs will win - however why would they get rid of that advantage?
...or you could just have enough RAM to completely disable the swap file in Windows. I've been running XP with 2GB RAM and no paging file for about a year now, with zero problems and zero swap thrashing. If you need more RAM, just buy more RAM- don't complain your hard drive/SSD isn't fast enough. RAM's cheap enough for there to be no excuse for having less than 2GB of it. Isn't it?
Easy target - sloppy thinking
yeah, yeah - as always, blame your problems on Microsoft. The meeja loves to spout away about how climate change is Microsoft's fault, and people who can't think for themselves think Maddy McCann was vapourised by some evil code in Vista.
Reminds me of a few years back working where a large HBA company acting as sub-contractor on a project couldn't get their code working. They swore blind for a couple of months that the problem wouldn't go away until Microsoft re-wrote all the device recognition code inside Windows.
Needless to say, the end customer bought the excuse until I wrote my own version of their code that did things properly.
Actually NTFS is very good and robust.
It's the way MS programmers have been dumbing down the OS since NT3.5x that is the problem.
The faster seek time is only an advantage for single-threaded/single-user use. You don't see a problem with SATA HDDs till you have 4 or 5 network users; then an old 40Mbps SCSI RAID5 array with 5 drives striped outperforms it by 10x.
(On dual drive desktops I always make sure Swap and Data are on different drives).
SSD tend to be used in applications where random access seek is not an issue.
Turning off virtual memory in NT/2K/XP/Vista is a little more complex than disabling swap file space. It's trivial in Win3.x/Win9x if you have enough RAM; on the NT family you need to edit some registry stuff too. NT (3.1 and later) was never designed to be used without swap. One difference is that Win3.x/Win9x pages everything, even application code. The NT/2K/XP/Vista family uses the original exe file as part of virtual memory, hence should never write exe code to the swap file. Technically on NT/2K/XP/Vista it's not swap at all, but part of virtual memory, which is not the same thing.
Why linux is different
Yes, I can see dozens of people exploding in front of their screens at the mention, yet again, of linux - get over it.
The difference with Linux is that swap is only used when it has to be, i.e. when you run out of physical memory. On my 2GB laptop, with full-screen video running on one monitor, browser, IRC, email, a compiling job running (yes, I'm a programmer), an IDE with 20 files open plus a whole bunch of background applications, I'm only 2MB into swap.
This is very different to the model Windows uses where swap is used constantly, even if it's not required.
SSDs are for storage
If the SSD is for storage then great. If the SSD is for 'OS' then it is a different kettle of fish.
Storage with SSD is excellent: open a file, make lots of changes and save. The changes need to be stored somewhere _temporary_ and not on something with a limited number of r/w cycles. Use throw-away USB sticks. Actually that's the way my EEE will be configured.
As many point out - over time the 'OS' has developed the need for virtual files, often due to limited memory. But since the available RAM is now 2-4GB and expanding, do we really need an OS which still requires *MORE* virtual space? Oh wait - modern 'OS's are not designed to be lightweight and small-footprint any more.
Need to stick with Windoze...? Start getting your admins to look towards embedded OS systems, which have been designed to work on solid-state (Compact Flash) systems for some time. XP Embedded... or some of the new variant Penguins.
Changing the SSD technology to use 2-4GB of RAM to mirror regularly used files would solve the problem of hot exchange (e.g. switching a HDD to SSD). Getting the OS redesigned to use less RAM and less HDD thrashing (or cached space; remember the good old RAM discs of MS-DOS) would make much more sense.
Longing for the day Microsoft wakes up to small, lightweight, fast, modular operating systems.
It *is* an OS issue too
I think it would be fundamentally stupid to build SSDs that behave like HDDs just to cope with the HDD driver design in current OSs. HDD drivers have stuff like an elevator algorithm to deal with the fact that a HDD needs to track across cylinders and the algorithm optimises the order in which disk transfers are done. With a SSD the access time is essentially independent of location, just like RAM, so simpler access algorithms are required. The OS driver writers need to work with the SSD manufacturers to create appropriate drivers to replace HDD drivers.
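To illustrate the point about drivers (a toy sketch only - not any actual driver code): an elevator/SCAN scheduler reorders pending requests into one sweep to minimise head travel, whereas for an SSD, where access time is essentially location-independent, plain arrival order is about as good as anything.

```python
def elevator_order(pending, head):
    """SCAN-style ordering for a spinning disk: serve everything ahead
    of the head in one sweep, then everything behind it on the way back,
    minimising total head travel (toy illustration)."""
    ahead = sorted(c for c in pending if c >= head)
    behind = sorted((c for c in pending if c < head), reverse=True)
    return ahead + behind

def ssd_order(pending):
    """On an SSD the 'seek' cost is (nearly) the same everywhere,
    so simple FIFO order loses almost nothing."""
    return list(pending)

requests = [95, 10, 40, 70, 20]
print(elevator_order(requests, head=50))  # [70, 95, 40, 20, 10]
print(ssd_order(requests))                # [95, 10, 40, 70, 20]
```

The request numbers here are arbitrary cylinder positions; the point is only that the reordering machinery HDD drivers carry around is dead weight on an SSD.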
spawn of the devil and should be destroyed wherever possible - get more RAM.
SSD can and do out perform HDD
OCZ are dropping their pants and shaking their butts at the competition. Faster than all but the fastest of HDDs - we're talking VelociRaptor/Raptor territory here.
Better one tomorrow
It's business, get used to it.
Welcome to the free market, crippleware as standard.
the world would end if you can't sell an upgrade...
Typical. First the Death Stars, and now Super Star Destroyers...You can tell Imperial hardware is running Vista.
Mine's the one with SW FTW on the back.
SanDisk is the issue
I think the main problem is that SanDisk's crappy SSDs are incredibly sluggish: something like 50MB/s read, 40MB/s write. It's not an SSD problem - it's a SanDisk problem.
This sounds a bit petty
Instead of simply blaming MS (or vice versa), surely the most productive route would be for the OS maker(s) and hardware manufacturers to WORK TOGETHER to get the best out of their respective products?
Come on people, peace and love, share a Kit Kat, hands across the ocean, united we are stronger, etc...
In real life...
Flash is supposed to have zero seek time. In reality the transfer rate is so slow that there is an effective seek time while you wait for the data for the last read/write to trickle through the interface. Old flash devices are slow. Plenty of modern flash interfaces are slow too. Then there is the misinformation about USB2: modern machines have one USB2 interface, about three USB1 interfaces and about six ports. When you plug a USB1 device into a port, the USB port multiplexer assigns it a USB1 interface (if one is free). Otherwise the device shares the USB2 interface. The USB2 interface is fast enough for one storage device, but not for accessing two at high speed. Your only real hope for getting speed out of flash is with PATA (one device per interface), SATA, memory-mapped, or fast CompactFlash in a fast controller in a CardBus slot under ideal conditions.
Linux has a choice of flash-optimised file systems that can only be used on a tiny minority of flash devices, because USB/CF/PATA/SATA flash pretends to be a hard disk (because Windows does not understand anything else). Next we will have to pay for a Vista work-around controller on every flash device. Please can we have a SCSI command for telling a flash device that a sector no longer contains valid data. While we are at it, Linux can handle sector sizes over 512 bytes. The SCSI command set already supports large sector sizes (SCSI commands are used for SATA and USB). The wear levelling algorithms built into flash would be simpler and more effective if they told the operating system the real sector size of the device. At the moment they have to lie for compatibility with XP/Vista.
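The wear-levelling idea mentioned above can be sketched as a toy model (purely illustrative - real controllers are far more involved, with static wear levelling, block remapping and so on): send each write to the least-erased block so erase counts stay balanced across the device.

```python
def wear_level_writes(n_blocks, writes):
    """Toy dynamic wear levelling: each logical write lands on the
    physical block with the fewest erases so far, keeping wear even."""
    erases = [0] * n_blocks
    for _ in range(writes):
        blk = erases.index(min(erases))  # least-worn block wins
        erases[blk] += 1
    return erases

# 10 writes spread over 4 blocks stay within one erase of each other
print(wear_level_writes(4, 10))  # [3, 3, 2, 2]
```

Without the hard-disk pretence the controller could do this over the device's real erase-block geometry instead of faked 512-byte sectors, which is the commenter's point.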
One day hardware manufacturers are going to catch on to the fact that they can demonstrate all the advantages of their kit far more quickly on Linux than by waiting years for people to buy a new version of windows to get support for new tech.
RE: Why linux is different
"The difference with Linux is that swap is only used when it has to be, i.e. when you run out of physical memory"
So does Vista. My page file use hovers around the 0 mark. Not completely zero; I don't think Vista automatically swaps data back in if there are free pages available in physical memory (which makes sense; if a program isn't going to use that page again for an hour, say, I'd rather the page wasn't constantly being loaded back into memory until needed; just use the RAM for disk cache instead)
As noted by mage, XP and Vista _will_ run without any page file at all - they'll just chuck program pages out of RAM instead. So if you have enough on the go, your RAM will be full of potentially-hardly-used data while your disk thrashes loading in your programs 64k at a time. Bad if you have a fast inner loop straddling two pages.
Which brings me to:
"Why bother with sectors, file fragmentation etc filesystem crap on a storage that is just flat memory address space? To tax the OS more than is strictly required?"
Because memory hasn't been used like that in a long while :) File fragmentation is irrelevant in memory, which is totally fragmented into chunks as it is, because there are no seek issues like with hard drives.
Windows dumb swapping
I haven't used Vista, but the swapfile usage in XP is dumb beyond belief. My home machine has 4GB of RAM (3.25GB usable in XP), and I've seen Windows swapping things out to disk when I have over 1.5GB of free RAM!
It's not unusual for me to come back to the computer after a break and see that I've got 2GB of free memory, 1.2GB used, and 1GB swapped out. I don't want to disable the swapfile because I risk windows crashing if a rogue app eats all the memory, yet performance can be horrendous and there is no need for it at all.
Fair enough, XP was designed when people had less memory than this, but surely you would expect some kind of sanity check in the system to prevent it swapping things out to disk when there is tons of memory free?
It's interesting that I have only one device with an SSD available, and it's a Sony Vaio running Vista. The (admittedly) deeply unscientific act of running the Windows Performance Rating tool suggests it's easily the best performing component in the box, scoring a solid 5/5.
I certainly haven't noticed any disk subsystem performance issues with that SSD under Vista; in fact it feels very snappy. It's worth noting in passing that the drive doesn't appear to be Sandisk though ;)
@frymaster - Shut Up!
and stop bringing facts into the debate. Truthiness should be enough for anyone: why would a freetard want to let the cold light of day into their world? After all, actually having a clue takes effort: far simpler to parrot something they read on /.
Incidentally, anyone know what "I've seen Windows swapping things out to disk when I have over 1.5GB of free RAM" means in myxiplx's post above? The only conclusion I can come to is that he's attached a kernel debugger and is watching calls to the appropriate Nt... APIs, or is glued to a logic analyser hanging off the SATA interface.
Who Supplies Future Memory is a Leading Question?
"when they come with Windows XP, have virtual memory disabled."
How do they enable virtual memory or is it nowadays AIdDynamic Feature and Virtualised Facility ....Almost AutoMatic? Which would be Real SMART HAI Technology. Full of Eastern Promise and Gracious Endeavour.
But Nothing Really New for IT is just an Advanced Turing Test, for New Fangled Entanglement in Virtual Realities, which Tends to be the Reality of Things which we Think to be New, although they are invariably All only Beta Versions of All that has gone Before. .... which is QuITe Heartening for IT means that Everything is Available in the Past to Build the Future on the Present. You just have to Search for all the Pieces/Bit and Bytes which Frame the Picture you See when Reading Words Building Futures and Derivative Virtual Trading InfraStructures in FreeSpace Red Zones for Hot Protection. AI Safe Haven for Seventh Heavens.
ITs EduTainment. :-)
That's because 32-bit Windows XP can only use around 3GB of RAM, regardless of whether you have any more. It just ignores the rest and uses the swap file.
Also, a single 32-bit Windows process can only use 2GB of RAM by default, regardless of how much you have. So things like photo-editing software can't use all your memory. This is one of the reasons graphics people prefer OS X, as its Unix base does not have these silly limitations.
"So does vista."
Previous versions didn't, XP/2000 and those before it used the swap file all the time (in a default install). If they've finally fixed this in Vista, then good for them! To be accurate I never mentioned Vista, since I've never used it I wouldn't dare invent 'facts'.
Of course Vista, from all accounts, suffers another problem which nullifies the swap fixes they've made - it uses a lot more memory meaning you'd have to have quite a bit more installed to avoid using swap. However at least you can work around the problem if you throw enough cash at it :)
Saintly Bill, 'cause it looks like they got at least one thing right in Vista.
I do not see how it could be Vista's fault here, I'm afraid... Although I've read Vista IS probably stupid when handling swap and all.
"spawn of the devil and should be destroyed where ever possible - get more ram."
Well, it might not be advisable anyway, according to some tests. A Vista computer with 4 GB of RAM started swapping after having just above 1 GB RAM used (therefore, 3 GB still free, for the math challenged). Why on Earth is it already swapping?
This is mentioned in a simple test here: http://www.itwire.com/content/view/19553/1141/
And a better one at Tom's Hardware:
Does No Swap File Equal Better Performance?
The wibbly wobbly world of Windows memory use.
Windows memory management is a right pig to understand. I’ve wasted hours reading up on it and I admit that I don’t understand it fully. Here’s what I have gleaned, remember that this is my take on it and I’m happy to be wrong. If a Windows guru can shed more light on the matter, feel free.
Windows treats all your memory as fair game, when a program loads, it does not have any idea what the memory requirements are for the program. So it gives the program a fat dollop of memory for it to play in. When you think about it, it does make some sense, why bother to give a program a small amount of memory when one of the first things it does when it runs is ask for some more memory? Windows does not know how much memory the program needs and plays safe by giving it a chunk of the available memory. If the program never requires more memory to run, Windows does not have the overhead of having to allocate more memory to it. In systems with gigabytes of memory, why let 70% of it never do anything useful until the system becomes more stressed?
For instance, I can create a program that’s nothing more than a window, with a button on it that displays “Hello, World” in a message box when the button is clicked. That was given 14.5 megabytes of memory, according to TaskManager. Is my program using that memory? Of course not.
What Windows giveth, it also takes away. As Windows gets more stuff loaded, it will remove memory from applications. I'm not sure how it works out which programs get memory removed; like I said, it's a pig to understand.
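A toy model of that give-and-take behaviour (purely illustrative - the real Windows working-set manager is far more involved and, as said, a pig to understand): pages are tracked by how recently they were used, and when the system wants memory back, the coldest pages are trimmed first.

```python
from collections import OrderedDict

class WorkingSet:
    """Toy per-process working set: pages kept in recency order,
    least recently used trimmed (swapped out) first under pressure."""
    def __init__(self):
        self.pages = OrderedDict()  # oldest first

    def touch(self, page):
        self.pages[page] = True
        self.pages.move_to_end(page)  # mark as most recently used

    def trim(self, n):
        """Release the n least recently used pages, as the OS might
        when another program needs the RAM."""
        victims = []
        for _ in range(min(n, len(self.pages))):
            page, _ = self.pages.popitem(last=False)
            victims.append(page)
        return victims

ws = WorkingSet()
for p in ["a", "b", "c", "d"]:
    ws.touch(p)
ws.touch("a")      # "a" becomes the hottest page again
print(ws.trim(2))  # ['b', 'c'] - the two coldest pages go first
```

This also matches the lunch-break observation further down: idle programs are exactly the ones whose pages look coldest.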
A few years back, I got into the mindset of how to make Windows use less RAM. I finally realised that it made very little difference having a computer that, after boot, had 1.4GB free or 1.8GB free. The biggest impact I could have on performance was to keep badly written software off it in the first place. I went for software that does a job well, and does not pretend to be all things to all people. I ask myself questions like: do I really need an instant messenger? No? Disable it. Do I need a bloody Google toolbar on my browser(s)? No? Don't install it, or Yahoo's effort. iTunes? No; if I want to buy music, then I go to a record store and buy a CD or order it from Amazon. Then I rip it to MP3 at better quality.
Sorry, getting off topic. A poster commented that, after getting back from lunch, XP had swapped his apps to the swap file. Yep, Windows does that, if a program is doing bugger all, why keep it in ram? Bung it in the swap file and let programs that are doing work have the ram.
I’m not saying this good or bad, but for a SDD manufacturer to moan that it’s the operating systems fault for their poor performance is silly.
And, memory does get fragmented, if program A uses 10mb and you load some more programs, then program A requests more ram, its likely to get ram that’s nowhere near the programs original address space. Fragmentation on a SDD drive is no different than fragmentation on a HDD. The only difference is that one is reading and writing ones and zeros in a block of memory and the other is moving ones and zeros across magnetic platters.
why on earth would you use flash for swap?
Even with wear-levelling, flash memory is really only appropriate for few-write, many-read use as already noted. Why on earth would anybody (let alone a technologically sophisticated company like Microsoft) recommend its use for virtual memory swapping?
I was, frankly, baffled when I first heard about this, and remain so to this day.
"Vista is not optimised for SSDs" vs "ReadyBoost (tm)"
So when a USB stick says "ReadyBoost compatible" (this being a Vista feature which, in brief, is supposed to enable paging/swapping to SSD in preference to disk-based page/swap space), what it's actually saying is "We, that is Microsoft and us, the USB Flash vendor, lied to you".
How on earth could that happen?
More importantly, wtf isn't ReadyBoost's uselessness mentioned in the original article or any of the comments to date?
I think Vista sucks just as much as the next guy, but I hope the Vista-blaming SanDisk CEO Eli Harari notices that the benchmark in this article was done on Macs! It does not make sense!
If you go away for lunch, just what exactly would be running that requires more memory than the idle applications you left it with? Sounds like Windows is merely clearing space for the sake of it and not because it's actually required. All that happens is that you get back to your desk to find things sluggish because everything has to be moved back from swap - that's just stupid design.
Lower power consumption?
All the tests and benchmarks I have come across seem to show that SSDs offer nothing in the way of power efficiency.
Longer battery life is not one of the benefits you gain from an SSD. At least not any drive that is designed to actually match or exceed the performance of a real hard drive.
It may well be true...
Let's take the case of the Aspire One, which has an 8GB SSD and comes preloaded with Linpus Linux. The machine runs fine with the provided Linux distro, but slows to a crawl with Windows because of SSD access.
The reason is that while the SSD has a decent read speed, the write speed is abysmal (about 5MB/sec), and apparently this affects Windows much more than Linux.
It seems that in the world of flash, buying chips with good write performance costs a lot, meaning that if you want a cheap SSD you'll have to sacrifice write speed.
ASUS also had to deal with the problem with the EEE. In the case of the EEE 900, you get 2 disks: one fast-but-expensive 4GB SSD for your OS and an 8 or 16GB slow-but-cheap SSD for your data. The much cheaper 16GB EEE 900 drops the fast-but-expensive 4GB SSD and only keeps the slow-but-cheap 16GB one, but then it's only available with Linux as far as I know...
The conclusion is that Windows seems to have a lot more trouble dealing with the slow write performance of cheap SSDs than Linux does, requiring either expensive chips or delivering low performance.
Actually Mac OS X is not much better there. The kernel can only use 1GB, even on 64-bit platforms.
Performance is less, so the OS does have an impact, with increased and often unnecessary I/O - indexing on Windows, file access times on *nix, logs in general (flush them to disk at system shutdown), etc., etc.
The ideal solution would be, in effect, a hybrid of swap and caching.
Here's how I envisage it working - Windows, when idle, would copy (not swap) pages from RAM to disk, effectively creating a swapped page which is pre-cached in RAM for next use.
Then, if those pages are needed again in the near future, you don't have to wait for them to be swapped back from disk and, if the pages were written to, Windows would delete the disk copies and mark the pages for re-copying next time it was idle.
If, on the other hand, Windows received a program request for more RAM than was free, it would free up RAM by trashing the RAM copies of the least recently used pages, which should already have been copied to the swap file. In caching terms, it would eliminate the least recently used data from the cache.
I don't know if this is how it works, but it is, to me, the logical solution.
As a refinement, Windows could allow programs to request that certain pages not be removed from RAM. For example, a game might do this with graphics textures that it has preloaded, knowing they are likely to be required in the next room. Again, it wouldn't surprise me if this has already been implemented.
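The scheme envisaged above can be sketched as a toy model (hypothetical - this is the commenter's proposal, not how Windows actually behaves): pages get shadow copies on disk while the machine is idle, so freeing RAM later means just dropping the in-RAM copy, with no write at all.

```python
class HybridSwap:
    """Toy copy-on-idle swap/cache hybrid: pages stay in RAM but gain
    a disk shadow copy during idle time; under memory pressure the
    coldest pre-copied pages are simply dropped from RAM."""
    def __init__(self):
        self.ram = {}    # page -> data
        self.disk = {}   # page -> pre-copied data
        self.lru = []    # least recently used first

    def write(self, page, data):
        self.ram[page] = data
        self.disk.pop(page, None)  # disk copy (if any) is now stale
        if page in self.lru:
            self.lru.remove(page)
        self.lru.append(page)

    def idle_precopy(self):
        # copy (not move) every page to disk while the system is idle
        for page, data in self.ram.items():
            self.disk.setdefault(page, data)

    def free_ram(self, n):
        # under pressure, drop RAM copies of the coldest pre-copied
        # pages - no write needed, they are already safely on disk
        freed = 0
        for page in list(self.lru):
            if freed == n:
                break
            if page in self.disk:
                del self.ram[page]
                self.lru.remove(page)
                freed += 1
        return freed

hs = HybridSwap()
hs.write("app", b"code")
hs.write("doc", b"text")
hs.idle_precopy()
hs.free_ram(1)
print(sorted(hs.ram))  # ['doc'] - 'app' dropped, already on disk
```

The names `write`, `idle_precopy` and `free_ram` are made up for the sketch; the point is that eviction under pressure costs zero disk writes, which is exactly the property the comment is after.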
SSDs aren't a lot of things makers make us believe
Many of the current SSDs use MLC, and this automatically means slower transfer speeds, higher power consumption and lower cell endurance than when single-level cell memory is used - and than what SSDs "should" be like (what manufacturers try to make us think they are like).
SSDs can also be more sensitive to magnetic fields and electric or static charges than HDDs. And, unbelievable as it may seem, they can even cut the battery life of your laptop (http://www.tomshardware.com/reviews/ssd-hard-drive,1968-11.html).
So, while the way OSes use drives may have something to do with how they perform (though random writes are their weakest point, and regular HDDs are also slower at random operations than at sequential ones), the cruel truth is that current SSDs don't live up to the hype simply because they can't.
Tony Smith is wrong
No, it is NOT the SSD manufacturers that have it wrong.
Is Tony saying that hardware manufacturers should bend their systems over backwards to fit in with MS's dodgy practices? I think not.
MS should pay attention to the SSD specifications, write appropriate controller software with SSD-specific timings etc, and use it when they detect an SSD.
Lower power - I don't think so
Of course the manufacturers claim it's lower power, but the reality is a little different.... There's a good article on Tom's hardware here: