The unified query (YUK) paradigm, essentially a whopping big space containing lists of objects (i.e. tables), events (i.e. immutable, time-sensitive effective sequences), and generalised pools of objects, has come a long way in a short time. The functional results of unified query (i.e. optimally generating efficient but very complicated resultsets) can only really be achieved by having a super huge data space and a parallel despatcher engine, which fires off multiple implementations of each resultset process and terminates the lot when the most efficient one has finished.
The derived data component of unified query also provides the ability to build and throw away enterprise service datasets (i.e. derived data which can be reported, but whose underlying data can't, and whose cost of generation is too expensive to bear multiple times anyway). An example of the latter would be generating average speeds of cars by registration number: the host system knows the locations and times, and can compute running averages which can be replicated out individually or as a set, while withholding the roads on which the cars were clocked.
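A minimal sketch of that idea: a derived dataset that accumulates a running average speed per registration number while deliberately discarding the road each fix came from, so only the derived averages are reportable. All names here (SpeedTracker, observe, snapshot) are invented for illustration, not part of any real unified query API.

```python
# Sketch of a derived "enterprise service dataset": running average speed
# per registration number, computed from raw (road, speed) fixes.
# The raw fixes are never stored; only the derived averages survive.

class SpeedTracker:
    def __init__(self):
        self._stats = {}  # reg -> (count, running mean); no roads kept

    def observe(self, reg, road, speed_mph):
        # `road` is used transiently and deliberately not stored, so the
        # derived dataset cannot leak where the car was clocked.
        count, mean = self._stats.get(reg, (0, 0.0))
        count += 1
        mean += (speed_mph - mean) / count  # incremental running mean
        self._stats[reg] = (count, mean)

    def snapshot(self):
        # The reportable derived data: registration -> average speed.
        return {reg: round(mean, 1)
                for reg, (count, mean) in self._stats.items()}

tracker = SpeedTracker()
tracker.observe("AB12 CDE", "A40", 60.0)
tracker.observe("AB12 CDE", "M25", 70.0)
print(tracker.snapshot())  # {'AB12 CDE': 65.0}
```

Because the derived set is cheap to serialise and the raw fixes are gone, the snapshot can be replicated out freely; regenerating it would mean re-observing the traffic, which is the "too expensive to do multiple times" part.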
A vast array of virtual memory can answer ridiculously complex queries, thanks to a scale of fast space allocation that conventional computing can't match.
It would be ridiculous to ridicule this in the fashion of Eigen, just because the maths hasn't yet got an application.
*** Resultset process - a set of processes which all do the same thing in different ways.
e.g. a simple example would be sorting unknown amounts of data. The despatcher contains a list of functions (e.g. shell sort, quicksort, bubble sort) that sort data, and gives them all a go at it. They all work in different ways, so one of them is likely to be more efficient for the data at hand. When one finishes, the others are aborted, causing their enterprise datasets to be deleted and the space reclaimed; the one which finished has its data kept (or deleted, depending on cost-benefit analysis: a resultset is just a function of DRY (don't repeat yourself) data anyway). This is the exact opposite of a current RDBMS, which chooses a single algorithm based on clever guessing.
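The racing despatcher above can be sketched in a few lines. This is only an illustration under simplifying assumptions: a real engine would hard-kill the losing processes and reclaim their space, whereas here the also-rans are merely cancelled or ignored. The function names are invented for the example.

```python
# Sketch: race several sort implementations, keep the first to finish.
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED
import random

def bubble_sort(data):
    data = list(data)
    for i in range(len(data)):
        for j in range(len(data) - 1 - i):
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
    return data

def quick_sort(data):
    if len(data) <= 1:
        return list(data)
    pivot, rest = data[0], data[1:]
    return (quick_sort([x for x in rest if x < pivot]) + [pivot]
            + quick_sort([x for x in rest if x >= pivot]))

def despatch(data, implementations):
    with ThreadPoolExecutor() as pool:
        futures = {pool.submit(f, data): f.__name__ for f in implementations}
        done, not_done = wait(futures, return_when=FIRST_COMPLETED)
        for loser in not_done:   # abort the also-rans; their enterprise
            loser.cancel()       # datasets would be deleted and reclaimed
        winner = next(iter(done))
        return futures[winner], winner.result()

data = [random.randint(0, 999) for _ in range(200)]
name, result = despatch(data, [bubble_sort, quick_sort])
print(name, result == sorted(data))
```

Note that `Future.cancel()` cannot stop a function that is already running, which is exactly why a production despatcher would run each implementation in its own killable process rather than a thread.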
At the enterprise scale, the sort will be one of a hierarchy of functional results, composed in a Lisp-like defun() style but with Linq-like functionality: a hierarchy of functions providing the answer. This simply cannot be done using conventional architectures.
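One reading of that hierarchy, sketched under the assumption that each "functional result" is just a function over the underlying data, composed Lisp-style and queried Linq-style. Every name below is invented for illustration.

```python
# Sketch: a resultset as a hierarchy of composed functions over the data.
from functools import reduce

def compose(*fns):
    # Lisp-style composition: compose(f, g, h)(x) == f(g(h(x)))
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

# Leaf "functional results" over a pool of records (Linq-like steps)
only_speeding = lambda rows: [r for r in rows if r["mph"] > 70]
by_reg        = lambda rows: sorted(rows, key=lambda r: r["reg"])
regs_only     = lambda rows: [r["reg"] for r in rows]

# The hierarchy: the answer is a function of the data, never stored
speeders_report = compose(regs_only, by_reg, only_speeding)

pool = [{"reg": "ZZ9", "mph": 80}, {"reg": "AA1", "mph": 60},
        {"reg": "BB2", "mph": 75}]
print(speeders_report(pool))  # ['BB2', 'ZZ9']
```

Because the report is a function rather than stored data, it can be recomputed (or raced across implementations) on demand, which is the cost-benefit trade-off the text describes.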