Re: Writing parallel code doesn't have to be any harder than writing sequential code.
"But when you start talking about true parallelisation, with multiple threads working on the same data set, these approaches don't work. HPC code writers have struggled with this problem for many years."
I suspect that we're in violent agreement. You pretty much hit the nail on the head with respect to threads hammering away at some shared data.
The point I'm trying to make is that writing a bit of code to do something in parallel isn't hard in itself. In fact, languages & tools that have a concept of parallelism make a lot of problems much easier to solve. :)
On the other hand, breaking the problem up into nice discrete computational units that run well in parallel at run-time is hard. In essence I'm saying the mechanics of writing parallel code are actually straightforward; the most intractable bits lie in the logical domain.
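To illustrate the point about the mechanics being the easy part, here's a minimal sketch in Python (my choice of language and the `square` worker are just for illustration): once the work has been carved into independent units, farming them out in parallel is a one-liner. All the genuinely hard thinking went into making `square` a self-contained unit with no shared state in the first place.

```python
from multiprocessing import Pool

def square(n):
    # The easy part only works because this "unit of work" is fully
    # self-contained: no shared mutable state, no ordering dependencies.
    return n * n

if __name__ == "__main__":
    # The mechanics: distributing independent units across workers
    # is trivial once the decomposition has been done.
    with Pool(4) as pool:
        results = pool.map(square, range(10))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The moment `square` needs to touch a shared data set, this tidy picture falls apart and you're back in the HPC world of locks, partitioning, and communication costs.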
With respect to VISC, it's a step in the right direction, but AFAICT it seems to be rooted in the tightly coupled thread world. As you know, component failure in a distributed system is almost guaranteed, and in the real world you usually have to share your system with other workloads at runtime. So what I'd like to see is this kind of tooling scale from threads on the same die right up to balancing multiple workloads across a few hundred racks. In my mind's eye that magic toolset would stitch all the pieces together, so a developer/ops/SA can take a kernel / dataset, move it around, and refactor it to fit the hardware it's running on at run-time.
I know that does sound like a bit of wishful thinking, but many pieces of the puzzle have already been built over the past 30 years or so.