World+dog agrees that Hadoop is a very fine tool with which to tackle map reduce chores, but the software has a couple of constraints, especially its reliance on the Hadoop Distributed File System (HDFS). There's nothing wrong with HDFS, but its integration with Hadoop means the software needs a dedicated cluster of computers on …
"“We abstracted out an HDFS layer but underneath that it is actually talking to lustre."
Err, Hadoop has a specific abstract class, FileSystem, designed to let anyone implement a filesystem underneath it — the local filesystem being a key example. All you have to do is implement it, then pass tests like FileSystemContractBaseTest to convince yourself you got it right.
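For context, plugging a new filesystem under Hadoop is mostly a matter of subclassing FileSystem and registering the implementation against a URI scheme in core-site.xml. A minimal sketch — the class name and the lustre:// scheme here are purely my assumptions, since Intel hasn't published theirs:

```xml
<!-- core-site.xml: register a hypothetical Lustre-backed FileSystem
     under the lustre:// URI scheme. The implementation class name
     com.example.hadoop.LustreFileSystem is an illustrative placeholder. -->
<configuration>
  <property>
    <name>fs.lustre.impl</name>
    <value>com.example.hadoop.LustreFileSystem</value>
  </property>
</configuration>
```

With that in place, any lustre://host/path URI handed to Hadoop resolves through the registered class, and the MapReduce layer neither knows nor cares that HDFS is out of the picture.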
While Intel makes it sound like they did some heavy engineering ("we abstracted out an HDFS layer"), what they probably mean is that they took the ASF-supported LocalFileSystem class and tweaked it to get locality information out of Lustre, then ran some (how many?) tests to show it worked. Hearing them talk about those tests — that would be interesting. Put that question to them (or to any other "we swapped HDFS for -" vendor); so far only EMC/Pivotal have owned up to testing on a 1000+ node cluster.