Microsoft has done some tuning of Windows 7, so that it will make much better use of PCs and laptops with solid state drives than the lamentable Vista. It's nothing very clever, but it should make for a snappier Windows. A Microsoft blog on the topic says that the main problem areas are the limited life of SSDs, extended by wear …
I have a better idea
Have a PCI-E card that can hold 20 memory sticks and can also hold 3 lithium batteries for extended life.
Add all the memory you want, and the batteries will kick in when the PC is turned off so the memory can be used as a virtual hard drive. Real memory is faster than SSD anyway, so SSD is not the way to go.
Now I just need to get an engineer to build my device and I'll be a happy camper.
Solid state is a step up from the old magnetic record-player technology we currently use, but it clearly still has its downfalls.
Being an electro-chemical storage system, it is guaranteed to wear out (as are the moving parts in HDDs), but I'd be interested to see how the two techs' durabilities stack up.
I'm wondering what on earth happened to the nano-tech spintronics we heard about so many years ago - where they found they could put a positive or negative spin on individual atoms?
@AC: I have a better idea
That already exists: RAM drives. Several companies already produce them (e.g. Gigabyte). They cost a lot, have limited capacity, and the battery lasts (depending on the model) only around 20 hours between power-off cycles (at least in the reviews I have read), so you would have to ensure you have a UPS unit attached to your PC as well.
RAM-based SSDs already exist
And very fast they are too.
http://www.tgdaily.com/content/view/42283/135/ for example.
I've been running windows 7 beta for ages now, and benchmarks showed my SSD hard disk running well over 10% faster than XP, even on top of the optimisations and partition alignment I did in XP.
TRIM command and "better idea"
It's good that Win7 is getting TRIM command support. Well, hopefully they already have it, and are not throwing it in at the last minute between the RC and final release -- that time is for bug fixes, not potentially bug-inducing new features. (Throwing in WinXP integration at the last moment already throws that out the window, though.) Linux kernels got TRIM support a few months ago...
Re: "I have a better idea", look up the Gigabyte I-Ram. It is (was?) PCI, but then used SATA to actually transfer the data, and is battery backed. Pricey though -- it was supposed to be released at $50 (back in 2005), came out at about $80, then when they IMMEDIATELY sold out were re-released at $150, and they're almost that much now.
The two big problems:
1) There's nothing else like it on the market (anyone who thought of making a competing product saw SSDs approaching the $1000 mark and dropping fast, and so didn't bother).
2) Since there are no successor products, it's pretty obsolete by now: PCI, SATA-150, and it takes DDR with a max of 4GB. A PCI-E follow-on (probably using DDR2, although DDR would have plenty of speed) would be fantastic.
They're in use in hard drives. Giant Magnetoresistance and Spin Valves are how their densities keep growing (that and bits are now recorded perpendicular to the platter rather than parallel). I assume you're talking about MRAM. Quite simply, it doesn't shrink well.
RE: I have a better idea
Not sure if you are aware of this:
Gigabyte i-RAM (www.gigabyte.com.tw/Products/Storage/Products_Overview.aspx?ProductID=2180)
I know it's only PCI 2.2 and is limited to 4GB, but...
I haven't tried Win7, but here are a few obvious performance enhancements for SSDs that all OSs should be doing:
1. Make sure the MBR and each partition reserves 64 sectors, not 63. Because SSDs are a completely new technology, they aren't built to the old-style Cylinder/Head/Sector format or the 512-bytes-per-sector format. SSDs have a "page" size of 32KB. For backwards compatibility, OSs format a drive so that the first 63 sectors (one entire track/head) are reserved. This was done because in CHS mode there can be a maximum of 63 sectors per track (for some reason, sector numbering started at 1 instead of 0). Reserving 63 sectors is bad for SSDs because the data would start on sector 64, which is the last 512 bytes of the first 32KB page (in other words, more read/write cycles than otherwise needed).
2. More importantly, make sure the cluster/node size is 32KB (to match the smallest write the SSD can do). NTFS's default size is 4KB. Yes, using a size of 32KB will result in more wasted space, but it also means 32KB writes instead of 4KB writes, so it should improve performance due to fewer read/write cycles (by writing the full 32KB, you write the entire page instead of only a portion of it, which would require a read-modify-write).
3. Make sure that the size of all non-file data (directory entries, file allocation / node tables, etc) is a multiple of 32KB to ensure that it doesn't cross a page boundary (which could result in additional read/write cycles).
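The alignment rule from points 1 and 3 can be sketched in a few lines of Python. Note that the 32KB page size and 512-byte sectors are the figures assumed in this comment, not universal constants:

```python
# Sketch of the alignment rule above: with the legacy 63-sector offset the
# first partition starts in the last 512 bytes of an SSD page; rounding up
# to a multiple of 64 sectors fixes it.

SECTOR = 512
PAGE = 32 * 1024                       # assumed SSD page size, in bytes

def aligned(start_sector: int) -> bool:
    """True if the partition's first byte sits on a page boundary."""
    return (start_sector * SECTOR) % PAGE == 0

def align_up(start_sector: int) -> int:
    """Smallest page-aligned sector number at or after start_sector."""
    spp = PAGE // SECTOR               # 64 sectors per page
    return -(-start_sector // spp) * spp

print(aligned(63), align_up(63))       # False 64
```

The same check applies to metadata structures: as long as every on-disk region starts at a multiple of 64 sectors, no write straddles a page boundary.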
----- Battery-backed RAMdrives
As for the idea of battery-backed RAMdrives, it's not really a feasible idea. It would be great from a performance standpoint, and it's definitely a good idea for short-term storage. But for long-term storage it's too risky (too high a risk of data loss due to loss of power), too little capacity per price point (though perhaps better than or comparable to SSDs), and too heavy (due to battery weight).
Let's use as an example the Kingston 2GB 1066MHz DDR2 memory module, part number KHX8500D2/2G. That module runs at 800MHz at 1.8V and uses 1.584W, giving us an amperage of 0.88A. It runs at 1066MHz at 2.2V (using approximately 1.936W). To get a 64GB RAMdrive, we would need 32 modules, giving us a total power draw of 50.688W (800MHz) or 61.952W (1066MHz).
A high-capacity Ni-Cad rechargeable AA battery yields 2000mAh at 1.2V, giving us 2.4Wh. To power that 64GB RAMdrive at 800MHz for just one hour, you would need 22 batteries; to power it at 1066MHz for one hour, you would need 26.
A 12-cell Dell Li-Ion battery yields 96Wh at 14.8V, at a cost of $142, weighing 1.3 lbs. This battery could power that 64GB RAMdrive for only 1h53m (800MHz) or 1h33m (1066MHz).
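The arithmetic above can be reproduced as a back-of-envelope script. The module wattages are the commenter's Kingston KHX8500D2/2G figures, and the NiCd cell is treated at its nominal 2.4Wh (2000mAh × 1.2V), so the results are rough estimates only:

```python
import math

# Back-of-envelope check of the RAM-drive power figures above.

MODULES = 32                                  # 32 x 2GB = 64GB RAM drive

watts_800 = 1.584 * MODULES                   # per-module draw at 1.8V
watts_1066 = (1.584 / 1.8) * 2.2 * MODULES    # same 0.88A current at 2.2V

nicad_wh = 2.0 * 1.2                          # one AA NiCd cell, in Wh
dell_wh = 96.0                                # 12-cell Li-Ion pack, in Wh

cells_800 = math.ceil(watts_800 / nicad_wh)   # AA cells to run for 1 hour
hours_800 = dell_wh / watts_800               # runtime on the Li-Ion pack

print(f"{watts_800:.3f}W / {watts_1066:.3f}W")        # 50.688W / 61.952W
print(f"{cells_800} AA cells for one hour")           # 22
print(f"{hours_800 * 60:.0f} min on the Dell pack")   # 114
```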
As you can see, a high-capacity, battery-backed RAMdrive just isn't feasible. It would draw too much power and it would be too heavy. Compare this 64GB RAMdrive at 51W (over $1000) to a WD (model WD5000BEVT) 500GB 2.5" SATA-II hard drive at 2.5W ($100) or the Intel X25-E Extreme (model SSDSA2SH064G1C5) SLC SSD at 2.6W ($799).
One of the statements needs to be better qualified
You make this statement about random writes: "Some random write sequences on poorly designed SSDs can cause a similar kind of problem, with erase:write sequences for blocks backing up because the incoming random writes come in too fast. Multiple blocks have to first be erased and then written, slowing the system down."
And then later state to "not defrag" because of the good random read performance from SSDs. I agree with this from a "read" perspective. But, you should know that large contiguous LCN ranges in NTFS prevent/minimize split I/Os and hence sequential writes from becoming random writes. A good defrag will consolidate free space. This is also why write-logging programs (e.g. Steady State) provided benefits to earlier generation SSDs.
I agree that better (Intel X25-series) SSDs don't need software/defrag, but you should better qualify your statement to "not defrag" and avoid generalizations. "Poorly designed SSDs" are almost all of the products put on the market in 2008, and still most of those this year.
Argh "The total effect of this SSD-specific tuning is that Windows 7 will use SSD speed much, much better than previous versions and avoid exacerbating SSD problems by excessive random read operations." - obviously this guy does not understand the topic of his article, nor the point he was trying to make.
On SSD defragging
Sent to me by mail and posted here: It is not correct to say that you don't need to defrag SSDs. Mapping fragmented files consumes kernel pool memory, and in extreme cases on 32-bit Windows this can be fatal to the OS.
Reading or writing a fragmented file is inherently less efficient than with a contiguous file because each fragment has to be accessed via a specific IO operation. More IOPs, more system overhead.
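The "more IOPs" point can be illustrated with a toy extent counter. This is a simplification assuming one IO per contiguous run of sectors, with no coalescing or read-ahead:

```python
# A file stored in N non-adjacent fragments needs N separate reads,
# while a contiguous file of the same size needs only one.

def io_ops(extents):
    """Count IOs for a file laid out as (start_sector, length) extents,
    merging runs that happen to be adjacent on the medium."""
    ops, prev_end = 0, None
    for start, length in sorted(extents):
        if start != prev_end:          # new run -> one more IO
            ops += 1
        prev_end = start + length
    return ops

contiguous = [(1000, 64)]
fragmented = [(1000, 16), (5000, 16), (9000, 16), (2000, 16)]
print(io_ops(contiguous), io_ops(fragmented))   # 1 4
```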
The point I think you were trying to make is that SSDs have zero seek time and do not suffer from rotational delay and missed rotations etc. Crappy applications or a badly fragmented filesystem can still make an SSD based system look bad. The example there being Vista.
Good idea! shame they can't do the bleeding obvious things too!
Why is it they can THINK and come up with clever things all on their own, yet they can't do the "bleeding obvious" things, like not hiding file extensions!
That's going to scupper the Scientologists' plans then ;)
re: On SSD defragging
"The point I think you were trying to make is that SSDs have zero seek time and do not suffer from rotational delay and missed rotations etc."
Actually, no. The point is that SSDs do NOT map data like a hard drive does. On a hard drive, sector 12345 will always be located at physical sector 12345. That's not true with an SSD. On an SSD, because of wear-leveling, what the computer sees as sector 12345 may be physical sector 4678 one day and physical sector 19483 the next day. I agree with you about fragmentation slowing things down due to additional overhead, but defragmenting an SSD will kill it much faster than normal by causing millions of unnecessary writes. When you're dealing with a medium which can only handle 10,000 writes per page/location, that's a huge problem. Even for SLC drives, rated at 100,000 writes, defragmenting will kill the drive much quicker than necessary.
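A toy model of that remapping makes the point concrete. This is purely illustrative and nothing like a real flash translation layer, but it shows why the logical layout of sectors says nothing about their physical layout on an SSD:

```python
# The same logical sector migrates between physical pages under
# wear-levelling, so "defragmenting" logical sectors is meaningless.

class ToyFTL:
    def __init__(self, physical_pages):
        self.mapping = {}                    # logical sector -> physical page
        self.erase_counts = [0] * physical_pages

    def write(self, logical):
        # Naive wear-levelling: always write to the least-worn page.
        target = self.erase_counts.index(min(self.erase_counts))
        self.erase_counts[target] += 1
        self.mapping[logical] = target
        return target

ftl = ToyFTL(physical_pages=4)
first = ftl.write(12345)    # logical sector 12345 lands on one page...
second = ftl.write(12345)   # ...and on a different page when rewritten
print(first, second)        # 0 1
```

Every defrag pass is just another burst of writes through this mapping, which is why it burns through the per-page write budget without improving the physical layout at all.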