I saw Hugh Darwen give a presentation last year on the history of SQL, the problem with NULL, and Tutorial D. It was fairly enlightening, even if I found myself sceptical as to its practicality.
As I understood it, one of the problems they see with NULL is that it fails to distinguish between 'information unknown' and 'does not have a' - or rather, that the semantic meaning of a NULL is held in the code that deals with it, rather than being understandable from the schema.
The proposed solution wouldn't work with a modern RDBMS - it seemed to involve splitting every NULLable column off into its own table - but it's only our experience of RDBMS performance that makes us think this is such a bad idea.
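A minimal sketch of what I understood that decomposition to look like, using SQLite and an entirely hypothetical person/middle-name schema of my own invention (not taken from the presentation): the nullable attribute moves into its own table, so the schema itself - not application code - says whether a missing value means 'unknown' or 'has none'.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE person (
    person_id INTEGER PRIMARY KEY,
    name      TEXT NOT NULL
);
-- A row here means the middle name is known.
CREATE TABLE person_middle_name (
    person_id   INTEGER PRIMARY KEY REFERENCES person(person_id),
    middle_name TEXT NOT NULL
);
-- A row here asserts the person has no middle name at all,
-- as opposed to one we simply haven't recorded.
CREATE TABLE person_no_middle_name (
    person_id INTEGER PRIMARY KEY REFERENCES person(person_id)
);
""")

cur.execute("INSERT INTO person VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cal')")
cur.execute("INSERT INTO person_middle_name VALUES (1, 'Marie')")
cur.execute("INSERT INTO person_no_middle_name VALUES (2)")

# Person 3 appears in neither auxiliary table: 'information unknown' -
# a state the schema now distinguishes without any NULL.
known = dict(cur.execute(
    "SELECT person_id, middle_name FROM person_middle_name"))
```

No column is nullable, and each of the three states (known, known-absent, unknown) is representable - at the cost of a join per formerly-nullable column, which is exactly the performance worry above.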
I think a lot of the concepts in Tutorial-D would help close the supposed 'Object-Relational' mismatch - it seems to have a closer fit to the notion of inheritance.
But overall, I share your pessimism - the reality is that there is a generation of 'database hostile' programmers out there, who would rather pull thousands of rows back into an OO language, modify them with an iterator, and push them all back down to the database layer, than use a set-based update statement.
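To make the contrast concrete, here is a small sketch of both styles against SQLite, with a made-up account table (the names and the 5% figure are my own illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL NOT NULL)")
conn.executemany("INSERT INTO account VALUES (?, ?)",
                 [(i, 100.0) for i in range(1, 1001)])

# Row-at-a-time style: pull every row into the host language,
# modify each one with an iterator, push each one back.
rows = conn.execute("SELECT id, balance FROM account").fetchall()
for acct_id, balance in rows:
    conn.execute("UPDATE account SET balance = ? WHERE id = ?",
                 (balance * 1.05, acct_id))

# Set-based style: one declarative statement does the equivalent
# work inside the database engine, in a single round trip.
# (Applied here on top of the loop above, so balances rise twice.)
conn.execute("UPDATE account SET balance = balance * 1.05")
conn.commit()
```

The row-at-a-time loop issues a thousand statements and ships every row across the boundary twice; the set-based statement issues one and ships nothing.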
(Equally, I think it is little appreciated how much relational theory stands on very solid mathematical foundations, far more so than the heuristic approach of object modelling. Then again, consider the success of mathematically sound formal languages against the programming languages that have actually been successful.)