I find the idea of the split supercomputer interesting. Although it seems silly at first glance, I think it is actually sensible. The much lower bandwidth between the two halves* would absolutely cripple certain workloads. A weather model, for instance: traditionally each node ends up walking through a fairly large portion of system memory as it calculates, so a shared-memory supercomputer is a must. But in practice this type of system doesn't tend to run one giant workload anyway; the tendency is to have a number of jobs running at any given time, and many of those jobs would not walk through system memory the way a weather model does. Either way, this arrangement lets each university have a local resource with (presumably) automatic use of excess capacity on the other university's system, which is great.
*I don't know what the speed will be, but it will certainly be below the 56 Gb/s InfiniBand local to each half!
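To make the bandwidth argument concrete, here is a back-of-envelope sketch of the per-timestep halo (ghost-cell) exchange cost for a stencil-style job like a weather model. All the numbers are illustrative assumptions on my part (a 512³ subdomain per node, 6 neighbors, one double per cell, a hypothetical 1 Gb/s inter-site link), not measurements of the real system:

```python
# Back-of-envelope: why a slow inter-site link hurts tightly coupled jobs.
# All numbers below are illustrative assumptions, not real measurements.

def halo_exchange_time(face_cells, bytes_per_cell, neighbors, link_bits_per_sec):
    """Seconds spent shipping ghost-cell halos for one timestep."""
    bytes_total = face_cells * bytes_per_cell * neighbors
    return bytes_total * 8 / link_bits_per_sec

# Assume each node owns a 512^3 subdomain and exchanges one 512^2 face
# with each of 6 neighbors, 8 bytes (one double) per cell.
face = 512 * 512
local = halo_exchange_time(face, 8, 6, 56e9)  # 56 Gb/s local InfiniBand
wan   = halo_exchange_time(face, 8, 6, 1e9)   # hypothetical 1 Gb/s inter-site link

print(f"local link: {local * 1e3:.2f} ms per step")   # ~1.8 ms
print(f"inter-site: {wan * 1e3:.2f} ms per step")     # ~100 ms
```

Under these made-up numbers, a timestep that spends ~1.8 ms on communication locally would spend ~100 ms across the inter-site link, so a tightly coupled job spanning both halves would spend most of its time waiting on the network. Loosely coupled or embarrassingly parallel jobs do no such per-step exchange, which is why overflow scheduling across the two sites still works for them.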