While Hadoop is all the rage in the technology media today, it has barely scratched the surface of enterprise adoption. If anything, we are still on the first few steps of the Big Data marathon, a race that Hadoop seems set to win despite its many shortcomings. The big question will be whether the market will keep …
Oh, it's Java. Never mind then
Man who owns a business that claims to solve $technology's only major flaw thinks $technology will do well?
True. But this does seem to be a much better article than the usual stuff he writes.
Lots of lingo and spin. So like any bored IT tech patching servers, I went away to read for a bit. Sounds like it's a big fish in a very small pond, although the distributed filesystem part may be useful. I used to work at the human genome project over a decade ago, and we had clusters then that dealt with the masses of data by breaking it up into smaller jobs and spreading it across many nodes with shared storage - so what's really new here?
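For what it's worth, the split-the-data-and-spread-the-work pattern described above is essentially what Hadoop's MapReduce formalizes. A minimal sketch in Python, using a thread pool to stand in for cluster nodes (the function names here are illustrative, not Hadoop APIs):

```python
from collections import Counter
from functools import reduce
from multiprocessing.dummy import Pool  # thread pool stands in for cluster nodes

def map_chunk(lines):
    # "map" step: each worker counts words in its own chunk of the input
    return Counter(word for line in lines for word in line.split())

def merge(a, b):
    # "reduce" step: combine partial counts from two workers
    return a + b

def word_count(lines, workers=4):
    # split the input into roughly equal chunks, one per worker
    size = max(1, len(lines) // workers)
    chunks = [lines[i:i + size] for i in range(0, len(lines), size)]
    with Pool(workers) as pool:
        partials = pool.map(map_chunk, chunks)
    return reduce(merge, partials, Counter())

counts = word_count(["the cat sat", "the dog sat"])
# e.g. counts["the"] == 2, counts["cat"] == 1
```

The genome-cluster setup sounds like the same idea; what Hadoop adds is mainly the fault-tolerant distributed filesystem (HDFS) and scheduling on commodity hardware, rather than shared storage.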
This hammer is terrible because it sucks as a screwdriver.
Matt, I agree we are seeing more efforts from Big Data companies trying to make Hadoop a mature and complete solution. As an alternative to Hadoop, LexisNexis has open sourced the HPCC Systems platform, a complete enterprise-ready solution. Designed by data scientists, it provides a single architecture, a consistent data-centric programming language (ECL), and two data processing clusters. Its built-in analytics libraries for Machine Learning and BI integration provide a complete integrated solution from data ingestion and data processing to data delivery. This all-in-one platform means there is only one thing to support, and with significantly fewer resources. In contrast, the complexity of the Hadoop ecosystem requires a huge investment in technology and resources up front and throughout.