Upstart database MongoDB has reached its 2.6 release armed with technologies that its backing company says represent "a foundation for the next decade of database innovation." The 2.6 release of the NoSQL document-oriented database became generally available on Tuesday. With this version, the database's eponymous steward MongoDB …
Why either/or? Postgres has recently merged support for binary JSON, which means you can have all the flexibility you want with the added goodness, speed, and reliability of indexes grounded in relational algebra.
Can we have less coverage of industry PR and more DBA meat, please?
> Postgres has recently merged support for binary JSON
I doubted that you could index on a json sub-part, but to my surprise, you can <http://stackoverflow.com/questions/17807030/how-to-create-index-on-json-field-in-postgres-9-3>.
Assuming you wanted to do that, I mean. I have to say I still don't understand what Mongo is supposed to be giving us that a typical SQL DB can't. The one claimed difference, 'unstructured information', is entirely blobbable, and apparently now indexable by subcomponents (in PG anyway), so what's left? Anyone?
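For what it's worth, a minimal sketch of what that Stack Overflow answer describes: Postgres (9.3+) lets you build an expression index on a JSON sub-field. The `events` table and `payload` column here are made-up names for illustration, not anything from the article:

```sql
-- Hypothetical table with a json column (Postgres 9.3+)
CREATE TABLE events (id serial PRIMARY KEY, payload json);

-- Expression index on one sub-field of the JSON blob
CREATE INDEX idx_events_user ON events ((payload->>'user_id'));

-- Lookups filtering on that sub-field can now use the index
-- instead of scanning every row
SELECT * FROM events WHERE payload->>'user_id' = '42';
```

Note that `->>` extracts the field as text, so the index is on the text value; the query's right-hand side has to be text too for the index to match.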
> Can we have less coverage of industry PR and more DBA meat, please?
Yup, in spades.
The ability to scale to very large levels without having to pay extremely expensive experts to kludge together some make-do solution in a relational DB: that's basically the reason.
Re: Scoffing @bigtimehustler
Postgres is free, but I guess that's not your point. Genuine experts are going to be proportionately expensive in either technology, I'd have thought; however, you can probably do with *fewer* of them in an RDBMS, because the intelligence (in the form of the optimiser) is built into the software, so there's likely less effort in writing complex queries.
> ...kludge together some make-do solution...
Empty emotive words.
I am still trying to figure out how any meaningful application can be written without proper transactions.
Mongo has document (aka record) level transactions, but that's no good when transferring money, or other sensitive values.
(Oh, and don't get me started on the "feature" of reading uncommitted data. Yuck.)
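The money-transfer case the poster alludes to is the textbook argument for multi-record transactions: two updates must commit or roll back together, which any SQL database gives you for free. A sketch, assuming a hypothetical `accounts` table (not from the article):

```sql
-- Hypothetical accounts table; both updates succeed or neither does
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
-- Atomic: no other session ever observes the money
-- missing from both accounts or present in both at once.
```

With only document-level atomicity, each `UPDATE` would be its own unit, and a crash between the two leaves the books unbalanced unless you hand-roll compensation logic.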
"The ability to scale to very large levels"
Scale Mongo to "very" large levels and what you've got is a very expensive database that's little better than a tape-based cold storage system. Mongo simply isn't scalable. Mongo is what you use if you're writing a cupcake recipe website for your mum. It's lovely if what you want is simplicity and flexibility, but scalability? See: Scoff. Its scaling model is ye olde sharding, and the database-wide locking makes it genuinely worse than a traditional SQL system, most of which now feature document-oriented extensions and in-memory systems as standard.
If what you want is scalability and flexibility, you choose HBase or Cassandra. Simple as that.
(Anonymous for obvious reasons.) I'm a self-appointed relational DB expert and I charge very high fees (sadly not as often as I'd like), yet so far only two customers have approached me with MongoDB-related questions. In both cases, the answer was "you can do it better with a SQL DB, and one whose license would not cost you a penny". I'd still charge them for tuning Postgres^H^H^H^H^H^H^H that DB, but not as much as for tuning Mongo. And at least I'd not have to fight with database-wide write locks.
Not saying there aren't places where MongoDB is a good fit, only that I can't see any where a SQL alternative couldn't work at least as well as Mongo. Lack of experience, perhaps, but mentioning real-world examples would surely help.
I don't think Oracle is losing any sleep over Mongo.
The KISS principle can be applied to everything. Yeah, even data models and the ensuing databases. Actually, in my experience this is the most important place to apply it when designing software that stores data.
When you don't need transactions. (Oh yes, your "serious" applications can be designed, sometimes even better, without them.) When the relations you need can be counted on the fingers of one hand. (Yes, that means you have to know the requirements before coding, not after :P.) When data volumes are medium (half a trillion smallish documents) on day one. When reads outnumber writes 100+ to one (most data is written once and accessed orders of magnitude more times).
Then you can take MongoDB, install it on your cheapo x86 server, and run it with the stock configuration with everything working, without any previous knowledge of the DBMS (the first time we tried it in production). It also means you don't need the config hook in your code to turn on autocommit mode for your Oracle/Postgres database.
You guys just need to widen your horizons: there is never one best tool for every job. Why use scissors when all you need is a knife?
PS: I have used Postgres, Mongo, and Oracle extensively on multiple big projects with different goals.