IBM and Violin have announced a great big GPFS numbers record: the software scanned 10 billion files in a flash – well, 43 minutes – using four Violin flash memory arrays. This was 37 times faster than a previous GPFS record of scanning one billion files in three hours, but that was with the file system metadata stored, like the …
I am not too surprised you got a response,
because doing so allows them to show off their system again. And Big Blue does have a lot of smart people, and this gives them a chance to talk to the outside world.
Smart is relative.
And yes, while IBM does have a few smart people, they are vastly outnumbered by arrogant smug gits who couldn't pass a Turing test.
The reason they are talking is that if they didn't, no one would notice the product exists.
I really do feel sorry for all of those smart people who are outnumbered by their smurt counterparts.
I hate to be the one to ask, but why is this really news?
GPFS fills the same niche that is already filled by MapR's implementation of HDFS and, of course, Apache's HDFS.
As to speed and performance... I would be interested in seeing it benchmarked against MapR's Hadoop release.
Here we go...
GPFS provides a POSIX-compliant file system ... HDFS is not POSIX-compliant and never will be ... big fat difference ...
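To make the distinction concrete: POSIX semantics let an application seek into an existing file and overwrite bytes in place, which GPFS (as a POSIX file system) supports, while HDFS files are write-once/append-only, so no in-place update is possible through its API. A minimal sketch of the POSIX side, using a temporary file as a stand-in for a GPFS mount (any real GPFS path such as a `/gpfs/...` mount point is hypothetical here):

```python
import os
import tempfile

# Illustrates POSIX in-place update semantics: seek() then write()
# over an existing byte range. This works on any POSIX file system
# (including a GPFS mount); the HDFS API offers no equivalent,
# since HDFS files are append-only once written.

path = os.path.join(tempfile.mkdtemp(), "data.bin")

# Create a 10-byte file.
with open(path, "wb") as f:
    f.write(b"AAAAAAAAAA")

# Reopen read/write and overwrite bytes 4-6 in place.
with open(path, "r+b") as f:
    f.seek(4)
    f.write(b"XXX")

with open(path, "rb") as f:
    print(f.read())  # b'AAAAXXXAAA'
```

The same seek-and-overwrite pattern is what lets databases and many legacy applications run unmodified on GPFS, whereas they cannot target HDFS directly.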
Watch Violin, they're going to explode any day now... I've worked in data storage for 12 years and I've never heard buzz like there is around Violin at the moment.