Posts by Erik (TMS)

10 publicly visible posts • joined 29 Feb 2012

IBM pours $1 BEELLION into flash SSDs

Erik (TMS)

Re: IBM have gone the way of Oracle..

Re AC: specs and capabilities for the new FlashSystem products are clearly stated in the datasheets linked under the second section on the new IBM Flash page (http://ibm.com/systems/storage/flash).

Direct links:

FlashSystem 810/710: http://public.dhe.ibm.com/common/ssi/ecm/en/tsd03166usen/TSD03166USEN.PDF

FlashSystem 820/720: http://public.dhe.ibm.com/common/ssi/ecm/en/tsd03167usen/TSD03167USEN.PDF

Costs can be easily determined by contacting your friendly IBM seller.

What other questions do you have? (I'm a new IBMer/ex-TMSer.)

Fusion-io flies into flash SAN space

Erik (TMS)

Re: Hurry up

You don't need to have Fusion-io cards to get speedy SAN access. My company, Texas Memory Systems, has been making fast Flash-based SANs for several years, and there are no Fusion-io cards or SAS/SATA SSDs inside.

Facebook smacks away hardness, sticks MySQL stash on flash

Erik (TMS)

Re: This doesn't make sense

If the database is CPU-bound, TheCreditCruncher's thoughts make perfect sense. But if the database is I/O-bound, switching to a faster storage medium could get more performance out of fewer CPU licenses. Most solid state storage effectively makes I/O wait times disappear, so CPU utilization goes up, overall database performance climbs past the target metrics, and the number of per-CPU or per-server DB licenses required goes down. This doesn't apply to all workloads, of course, but many heavily hit databases follow this pattern.
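To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python; the throughput target, per-core rate, and utilization figures are purely hypothetical assumptions, not measurements of any real system:

import math

def cores_needed(target_tps: float, tps_per_core: float, cpu_busy_fraction: float) -> int:
    # Cores (and hence per-core licenses) required to hit a throughput target
    # when each core is only busy cpu_busy_fraction of the time and spends
    # the rest waiting on I/O.
    effective_tps_per_core = tps_per_core * cpu_busy_fraction
    return math.ceil(target_tps / effective_tps_per_core)

# Hypothetical disk-backed database: cores wait on I/O 60% of the time.
print(cores_needed(target_tps=20_000, tps_per_core=1_000, cpu_busy_fraction=0.40))  # 50 cores
# Same workload on solid state storage: I/O wait largely disappears, cores ~95% busy.
print(cores_needed(target_tps=20_000, tps_per_core=1_000, cpu_busy_fraction=0.95))  # 22 cores

Under assumptions like these, the same throughput target needs less than half the cores, which is where the licensing savings come from.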

Chip alchemists 'turn cheap silicon into longer-lasting flash'

Erik (TMS)

Re: Further Commoditization Plus Helps Keep the Shorts at Bay

1. The storage hypervisor that IBM is pushing: are you referring to the SAN Volume Controller/Storwize V7000 platform? That's definitely not based on VSL; it predates Fusion-io by a few years and was never tied specifically to Flash technology.

2. SCSI Express and NVM Express are emerging standards that push PCIe-connected Flash (cards or drive modules) further down the path of commoditization by standardizing how the host sees and operates such PCIe devices. Neither standard is specifically tied to VSL, though Fusion-io did show a proof-of-concept SCSI Express module at HP Discover last year.
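The commoditization point is that once the host-side interface is standardized, an application can treat a PCIe Flash device like any ordinary block device, with no vendor-specific driver or SDK. A minimal sketch of that idea in Python, assuming a Linux host with a standards-compliant NVMe drive exposed at the hypothetical path below (requires root):

import os

DEVICE = "/dev/nvme0n1"  # hypothetical NVMe namespace exposed by the stock kernel driver

def read_first_block(path: str, size: int = 4096) -> bytes:
    # Read the first logical block through the generic block-device interface;
    # nothing here is specific to any one Flash vendor.
    fd = os.open(path, os.O_RDONLY)
    try:
        return os.read(fd, size)
    finally:
        os.close(fd)

print(len(read_first_block(DEVICE)), "bytes read with no vendor-specific software")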

Regarding the "deconstructing the mainframe" idea: both centralized and distributed storage have places in enterprise environments, and the pendulum has swung both ways. (That idea applies to compute power as well.)

Violin and Fusion separation lessening

Erik (TMS)

Re: Violin and Fusion separation lessening

These new announcements (and, in Violin's case, rumors) that Violin and Fusion-io can accelerate in-memory databases by bringing them onto their storage platforms do make the two suppliers look similar, and potentially superior to others, but only for that class of applications. It is clear that Violin and Fusion-io are interested in reaching further into the software stack, which is a net win if you're betting on differentiated software.

Still, I would wager that most enterprise datacenters are not focusing on in-memory databases as their biggest IT priority, with the exception of shops like high-frequency traders and some HPC environments.

Running applications in the storage array

Erik (TMS)

Re: Running applications in the storage array

I believe it is relatively trivial for storage appliances that already include a heavy x86-based software layer to run more software on them, up to and including actual business applications. I don't believe that actually gives significant advantages, because network latency is usually trivial compared to the latency in the aforementioned software layers or in the array hardware designs, except in truly extreme performance cases: think of some high-frequency traders that have skipped Flash and store everything in DRAM, or some HPC environments.
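For a sense of proportion, here is a rough latency-budget sketch in Python; the per-component figures (in microseconds) are illustrative assumptions, not measurements of any particular product:

latency_us = {
    "host software stack":          100,
    "array controller software":    200,
    "Flash media access":           100,
    "SAN network hop (round trip)":  20,
}
total = sum(latency_us.values())
for component, value in latency_us.items():
    # Show each component's share of the end-to-end latency budget.
    print(f"{component:30s} {value:4d} us  ({value / total:5.1%} of the total)")

Under assumptions like these, eliminating the network hop by running the application on the array saves only a few percent of end-to-end latency, which is why the idea mostly matters in the extreme cases mentioned above.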

In my opinion, converging compute and storage resources is a cyclical change, not a secular change in the industry. I don't think there is a right answer or a consensus among enterprises as to whether the converged approach is better. The pendulum has swung in both directions if you look at the history of computing.

Most storage manufacturers would LOVE to have servers integrated into their storage. Most server manufacturers would LOVE to have storage integrated into their servers. Manufacturers want to own the whole stack, and complete integration can offer some benefits. The magnitude of those benefits varies widely. It is nearly always possible to build non-vendor-integrated systems, for the same price or less, that work just as well or better than integrated stacks.

So let's look at it from the IT shops' perspective. There are basically two camps here. One says that vendor-integrated solutions are great because they reduce complexity. The other says that vendor-integrated solutions are lousy due to lock-in and interoperability concerns. There is no right answer for 100% of workloads.

(Disclaimer: I work for TMS.)

Flash Array Frenzy

Erik (TMS)
Thumb Up

At this point, I feel this is built primarily on rumors and speculation.

Nonetheless, let's say the rumors are true. If that's the case, then the *other* solid state storage companies should get very excited. Unless XtremIO has magic that 10+ other start-ups have not yet produced, an acquisition by EMC for almost half a billion dollars sets the stage for a feeding frenzy at relatively high revenue multiples. (In XtremIO's case, I believe their multiplier is approximately infinity, since they have essentially no revenue yet.) There are a lot of companies out there with products theoretically similar to XtremIO's; some of those companies have customer revenue, too.

(Disclaimer: I work for TMS.)

VSAs and flash

Erik (TMS)

Re: VSAs and flash

Both are often bottlenecks. In general, native PCIe SSDs provide fast hardware performance that is then limited at some level by software running on the host.

Erik (TMS)

Re: VSAs and flash

(Disclaimer: I work for Texas Memory Systems.)

My perspectives on your points:

1. Almost all storage array vendors run a software-based front end on x86 platforms. I would expect all the software-heavy array players to produce VSAs as it is a trivial way to get more market presence.

2. VSAs will never deliver the performance of dedicated hardware. You cannot add performance with software. Given that potential VSA customers are therefore not performance-centric:

a. VSAs allow physical storage to be managed like virtualized servers.

b. Therefore, VSAs allow consolidation of hardware and personnel.

c. High-end arrays and virtualized arrays already provide that consolidation appeal, so there is little benefit in virtualizing the virtualized.

d. So VSAs primarily threaten low-end arrays that carry high licensing costs.

3. A VSA would generally serve as a bottleneck for PCIe-attached SSDs. Dedicated hardware SSD arrays will nearly always win when performance is the #1 goal.

Texas flash: TMS Ramsan-820 born in the USA

Erik (TMS)
Stop

Re: Price?

Disclaimer: I work for TMS.

You may know TMS as the "premium price" option, and that was true to some degree a few years back, but it is no longer the case.

The list price for the 820 (and 810) is $12.50/GB of capacity, which is similar to the cost of storage capacity in other enterprise rackmount (and PCIe) MLC/eMLC storage products on the market today. If you'd like to see for yourself, request a free quote at the TMS web site.
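For a quick sense of scale, a tiny arithmetic sketch in Python; the capacity below is a hypothetical example, not a quoted configuration:

price_per_gb = 12.50   # list price in USD per GB, from the figure above
capacity_gb = 10_000   # hypothetical 10 TB usable configuration
print(f"${price_per_gb * capacity_gb:,.0f} list for {capacity_gb:,} GB")  # $125,000 list for 10,000 GB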