A flash device that can put out 100,000 IOPS shouldn't be crippled by a disk interface geared to the 200 or so IOPS delivered by individual slow hard disk drives. Disk drives suffer from the wait while the read head is positioned over the target track: about 11 ms for a random read and 13 ms for a random write on …
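As a rough sanity check on those figures (my own back-of-the-envelope sketch, not from the article): a drive's random-IOPS ceiling at queue depth 1 is roughly the reciprocal of its average access time.

```python
# Back-of-the-envelope: random-IOPS ceiling ~= 1 / average access time.
# Numbers below are illustrative, taken from the latencies quoted above.

def max_random_iops(access_time_ms: float) -> float:
    """Upper bound on random IOPS at queue depth 1."""
    return 1000.0 / access_time_ms

print(round(max_random_iops(11)))  # 11 ms random read -> ~91 IOPS
print(round(max_random_iops(13)))  # 13 ms random write -> ~77 IOPS
# By contrast, 100,000 IOPS implies ~0.01 ms per operation:
print(1000.0 / 100_000)            # -> 0.01 (ms)
```

This is why a protocol designed around mechanical seek times leaves so much flash performance on the table.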
I had wondered what all this SCSI Express murmuring was all about!
It used to be "pick the one with the best performance."
Now I guess the best answer would be "pick the one least likely to get you sued over patent violations."
Dumb question time
The SSD in my netbook plugs into the motherboard (or some daughterboard - haven't checked) via what I believe is a mini-PCIe socket. Now I'm not claiming that's a suitable replacement for plugging your expensive SSDs into your expensive server hardware...but is my netbook doing something that could be theoretically scaled up, or is it doing something dumb like ATA over PCIe and the host controller is somewhere on the other side of the socket?
It's using SATA over mini-PCIe
Hi, it's using SATA over the mini-PCIe connector.
It's the dumb option.
Probably for the best too, since real PCIe would require special BIOS and operating system support just to boot, which isn't something you'd want to give to non-technical users. The flash and controller used in those applications aren't that speedy either.
It's been a while since I've been intimately involved with low-level details of microcomputers...
... but my money is on "it's doing something dumb." After all, dumb has been the ethos of "industry standard" microcomputer design since the original IBM PC.
Must be too close to christmas
I could have sworn that read...
It's using sata over mincepie
Real PCIe would not require special BIOS or OS support. There is no way to interface PCIe directly with a slew of flash chips, so you still have to have a controller, which can report itself as a bootable device conforming to standards. The same is true if you pop a SATA controller card into a PCIe slot: it's not what the card actually does, it's how it represents itself to the outside world logically and with I/O requests. For example, OCZ makes a RevoDrive product that does this, though it uses more PCIe lanes (and costs as much as an entire netbook).
With the typical netbook PCIe SSD, the performance is horrible, due mostly to a slow controller with little if any DRAM cache and little if any parallel flash-chip access... there simply isn't enough room on the card for many chips unless you start stacking them, but then the cost gets beyond the price point of a netbook.
RE : Dumb question time
The mini-PCIe SSD in the notebook is NOT actually using PCIe!
It's a SATA connection using the mini-PCIe physical socket.
I think Intel does a mini-PCIe socket SSD that works in a standard mini-PCIe slot, but I'm not sure!
But can you boot from it?
The problem, as far as I understood it, is that PCIe SSD products such as Fusion-io's can't be booted from. Until that changes, their growth in the desktop and laptop sectors will be limited.
Does anyone know if that has changed?
Some recent ones are bootable, but to do so they need to carry a BIOS extension and you have to get the OS to load a driver right after boot. So it isn't a plug in, install the system and play solution.
I think the attempt to standardise mentioned in the article is a step to resolve that problem.
Isn't the Intel Thunderbolt interface
a through road to the PCIe bus?
Someone needs to develop an entirely new storage interface (maybe they're already working on it) thereby making all existing computers obsolete.
Now there's a name I haven't heard for a decade or two, not since the old SCSI interface on my towerified Amiga 1200. Brings back fond memories of huge ribbon cables wide enough to drive a London bus on, that put the piddling little parallel IDE cables to shame. Looking forward to using SCSI again, if only for old times' sake!
If you've used an LTO tape drive recently then you're very likely to have used SCSI recently. Or a server-class RAID array, for that matter.
Decade or two?
Have you been hiding under a rock? SCSI never died, and has been king in servers ever since. It has now effectively evolved into SAS as the de facto server standard.
iSCSI is still very much in vogue too.
I never stopped using it.
In the home-user world it hasn't been popular for some time, but those of us in the professional world have been using SCSI throughout the intervening years: sometimes on a parallel bus, sometimes over Fibre Channel, sometimes running the Fibre Channel protocols on top of an Ethernet transport, but mostly now over Serial Attached SCSI.
This is just another in a long line of SCSI transports we have used continuously over the years, albeit a nice one. (SCSI has always been a much better engineered protocol than ATA, and it is much better suited to the unique properties that SSDs can bring to the table.)
Check out the definition of SAS when you have some free time.
What is stopping you?
Serial Attached SCSI or SAS.
Solid State and Scale-Out
Great article; I think you're exposing the next big debate in storage. Your point that scale-out architectures depending on Ethernet connectivity will limit solid-state performance is spot on. I spent several years working for a scale-out storage vendor and am very familiar with that shortcoming. That raises an interesting question: will any of these new interfaces allow connections between systems over moderate distances, enabling scale-out without limiting solid-state performance?
Multi-channel Thunderbolt FTW!
Better yet, Light Peak for an out-of-box experience.
Am I right in thinking thunderbolt doesn't offer enough bandwidth for SSD?
Ribbon fiber-optic cables anyone?
Perhaps we'll see more x16 slots on server motherboards.
This has the makings of a format war. Not good for anyone who backs the wrong horse in the meantime.
Perhaps it's time to remind our elected representatives that they have the power to nip this sort of thing in the bud, by pre-emptively annulling a few patents here and there.
Not sure I understand this article; PCIe flash drives are already available.
I am using this one: http://www.ocztechnology.com/ocz-z-drive-r2-p88-pci-express-ssd-eol.html and I am sure newer, quicker versions are being made now.
Surely OCZ are not the only company doing this.
It does cut out (some of) the middlemen, but it still presents itself as a SATA interface with a hard drive attached, with all the associated overheads. Have a look at your control panel (or whatever) and you'll see a SATA interface (SandForce?). Seeking is blisteringly fast, but the transfer rate isn't much better than my 15K SAS array (and of course mine is much cheaper per GB).
...am I the only person to find this neologism offensive?
So it is a bit like having both SAS and SATA interfaces
SCSI Express seems to be a way to allow fast SSDs to be (hot-)plugged into SAN controllers without having to rewrite the software on the SAN controllers, while still permitting SAS disks to be plugged into any slot.
NVMe seems to be more aimed at plugging fast SSDs directly into PCs and servers without adding much to the cost of the SSDs or the servers.
So it is a bit like having both SAS and SATA interfaces at the two ends of the market.
SATA already has hot-plug written into the protocol
It's a great article by the way - very informative.
It bemuses me that we aren't moving away from SAS to SATA 3; a lot of the issues with SAS have already been addressed.
HP's P410i controller takes SATA and SAS together
I did an interesting experiment on a DL360. The disk slots are physically SATA-style bays regardless of protocol; anyway, I have four 10K 300GB SAS disks in RAID 0, and in the spare slot I put an OCZ Agility 3 60GB drive in a caddy and slotted it into the server, created it as a logical disk and so on. OK, it's only SATA 1, so 1.5 Gbit/s, but it still put the RAID array to shame on physical IOPS.
What I'm saying is that we have the facility to get the performance from SSDs now using existing protocols; random IOPS with sub-millisecond latency is what I'm after as a database specialist.
Is it a conspiracy that the controller is limited to 1.5 Gbit/s for SATA drives? If it were SATA 2 I'd have 300 MB/s per drive with sub-millisecond IOPS, and with SATA 3, 600 MB/s... all on one drive! That just negates the need for so many 15K disks, so the vendors start losing money. Ever wondered why the enterprise SSD drives that go in your kit cost 5x or more the price of the commodity ones that actually outperform them?
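For what it's worth, those MB/s figures follow directly from SATA's 8b/10b line encoding: every 8 data bits travel as 10 bits on the wire, so usable throughput is the line rate divided by 10. A quick sketch (my own arithmetic, not from the thread):

```python
# SATA usable throughput from line rate, assuming 8b/10b encoding:
# 8 data bits are sent as a 10-bit symbol, so bytes/s = line_rate / 10.

SATA_LINE_RATES_GBPS = {"SATA 1": 1.5, "SATA 2": 3.0, "SATA 3": 6.0}

for gen, gbps in SATA_LINE_RATES_GBPS.items():
    mb_per_s = gbps * 1e9 / 10 / 1e6  # divide line rate by 10, convert to MB/s
    print(f"{gen}: {gbps} Gbit/s link -> {mb_per_s:.0f} MB/s usable")
```

This reproduces the 150/300/600 MB/s per-drive ceilings the comment is working from.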