World+dog agrees that Hadoop is a very fine tool with which to tackle MapReduce chores, but the software has a couple of constraints, especially its reliance on the Hadoop Distributed File System (HDFS). There's nothing wrong with HDFS, but its integration with Hadoop means the software needs a dedicated cluster of computers on …
"We abstracted out an HDFS layer but underneath that it is actually talking to lustre."
Err, Hadoop has a specific class/interface, FileSystem, designed to let anyone implement a filesystem underneath it: the local filesystem being a key one. All you have to do is implement it and then pass tests like FileSystemContractBaseTest to convince yourself you got it right.
While Intel make it sound like they did some heavy engineering ("we abstracted out an HDFS layer"), what they probably mean is they took the ASF-supported LocalFileSystem class and tweaked it to get locality information out of Lustre, then ran some (? how many?) tests to show it worked. Hearing them talk about those tests would be the interesting bit. Ask them (or any other "we swapped HDFS for -" vendor) that question, as only EMC/Pivotal have owned up to testing on a 1000+ node cluster.
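For context, swapping out HDFS is mostly a wiring exercise: Hadoop resolves the FileSystem implementation for a URI scheme from the fs.&lt;scheme&gt;.impl configuration key. A minimal sketch of what that looks like in core-site.xml, assuming a hypothetical LustreFileSystem class and a lustre:// scheme (both names are illustrative, not anything Intel has published):

```xml
<!-- core-site.xml: hypothetical wiring for a lustre:// scheme -->
<configuration>
  <!-- Map the lustre:// URI scheme to an (assumed) FileSystem subclass -->
  <property>
    <name>fs.lustre.impl</name>
    <value>org.example.hadoop.fs.LustreFileSystem</value>
  </property>
  <!-- Make it the default filesystem instead of hdfs:// -->
  <property>
    <name>fs.defaultFS</name>
    <value>lustre:///</value>
  </property>
</configuration>
```

Any class named there has to extend org.apache.hadoop.fs.FileSystem, and passing FileSystemContractBaseTest against it is the minimum bar before claiming the swap works.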