It is, perhaps, one of those forgotten facts that computing is still a relatively young technology, made all the more poignant by the realisation that many of the people driving the High Performance Computing (HPC) business, like Burton Smith, Microsoft's technical fellow in charge of advanced strategies and policies for the …
InterNetworking Computer Grid Systems ... The Quantum OS
"while atomic memory transactions implement dependence awkwardly. The use of such technologies in mainstream computing is new ground and he acknowledged that atomic memory transaction technology already has critics claiming it is doomed to be too slow. He also pointed out that this was still to be shown as a permanent condition."
"implement dependence awkwardly"? Is that the same as registration of assets is measured and personally cultivated for faultless performance? How very wise in any NEUKlearer Situation.
As Smith observed: "Although operations seldom commute with respect to state, transactions give us commutativity with respect to the invariant, and it would be nice if the invariants were available to the compiler if programmers can provide them readily."
..... Sounds just like a Singularly Sophisticated Swingers Site.... A Virtual Reality Pioneers Home Base Honey Store/Arsenal. AI Virgin Venture for CyberIntelAIgents.....
"We have to rethink the basics of computing, but thanks to HPC we have a good starting point. It does mean, however, that many applications will have to be re-modeled and re-engineered from the strategy downwards." .... thus to reinvent them as a Beta Mirror Image of Parallel Paths and Processes, which will also, Co Laterally increase processor power/activity at Zero Cost? ...... as RSS/Cookie type Feeds refine Enrichment of Information into Intelligence and to ITs CyberIntelAIgent Feed Facilities.
Which would Actually be a whole new Root Server System, NeuReal Server System
AI Quantum Server......in AI Coalition of the Ready, Willing and S.M.A.R.T.er Enabled 4Peer2Peer Ennoblement/Stock Enrichment?
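Setting the riffing aside, the atomic-transaction idea Smith describes — operations need not commute on raw state, so long as every committed state preserves the invariant — can be sketched with a compare-and-swap retry loop. This is a minimal illustration, not anything Smith or Microsoft shipped; packing two balances into one 64-bit word is an assumption made purely so a "transaction" can commit in a single atomic step:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical sketch: two 32-bit "account" balances packed into one
 * 64-bit word, so a transfer can commit with a single compare-and-swap.
 * The invariant (a + b stays constant) holds in every state any thread
 * can observe, even though transfers do not commute on the raw values. */
static _Atomic uint64_t accounts;

static uint64_t pack(uint32_t a, uint32_t b) { return ((uint64_t)a << 32) | b; }
static uint32_t get_a(uint64_t v) { return (uint32_t)(v >> 32); }
static uint32_t get_b(uint64_t v) { return (uint32_t)v; }

/* Atomically move 'amount' from a to b; retry until the CAS commits.
 * On failure, atomic_compare_exchange_weak reloads 'old' for the retry. */
void transfer(uint32_t amount) {
    uint64_t old = atomic_load(&accounts);
    uint64_t next;
    do {
        next = pack(get_a(old) - amount, get_b(old) + amount);
    } while (!atomic_compare_exchange_weak(&accounts, &old, next));
}
```

A real transactional-memory system generalises this to arbitrary read/write sets, which is exactly where the "doomed to be too slow" criticism comes from.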
Hah, pipes weren't even mentioned. No wonder M$ had a large interest in the "conference".
Been there, done that.
It's called "re-entrant" code. Had your boffins spent any time in the big-iron mainframe world, you'd know it's been in place since the mid-70s. Why is it that the micro-heads always think they've come up with something new when it's been in place for over a quarter of a century in the mainframe world?
Ditto for workload management. Cutting edge in the micro world, but old-hat on the mainframes.
It's a shame that the cultural and "religious" differences between mainframers and micro-heads prevent building upon previous knowledge like you normally see in any other scientific discipline. Tsk, tsk.......
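The re-entrancy point the commenter makes can be shown in a few lines of C (the function names here are invented for the example): a routine that keeps its result in static storage cannot safely be shared between concurrent tasks, while the caller-supplies-the-buffer variant keeps no state of its own and can be:

```c
#include <ctype.h>
#include <string.h>

/* NOT re-entrant: the static buffer means two concurrent callers
 * clobber each other's result -- exactly what mainframe re-entrancy
 * rules were designed to rule out decades ago. */
char *to_upper_static(const char *s) {
    static char buf[64];
    size_t i;
    for (i = 0; s[i] && i < sizeof buf - 1; i++)
        buf[i] = (char)toupper((unsigned char)s[i]);
    buf[i] = '\0';
    return buf;
}

/* Re-entrant: the caller supplies the storage, so the function holds
 * no state between calls and any number of tasks can use it at once. */
char *to_upper_r(const char *s, char *out, size_t n) {
    size_t i;
    for (i = 0; s[i] && i < n - 1; i++)
        out[i] = (char)toupper((unsigned char)s[i]);
    out[i] = '\0';
    return out;
}
```

The same split survives today in POSIX pairs like `strtok` versus `strtok_r`.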
Problems solved for over 30 years
Over three decades ago all of these parallel programming problems were solved. Nothing new here in the interim. Even the 8086 went multi-processor on the Intel development stations. Once again it's all a lot of pandering to lazy programmers who can't be bothered to read the available literature. Take a browse in a good bookstore sometime. (Has anybody noticed how closely ML languages resemble Jackson Design Methodology?)
He's there to talk about HPC, but having taken the Microsoft shilling, a dig at Apple has to be included, doesn't it?
"This is what I call the Apple approach, where a great new technology is introduced with not much thought given to the pain it might cause users of an earlier technology. But we have to take existing users with us."
Now I'm not a Mac user, never have been, probably never will be, but one thing I do know is that through the two radical changes of processor architectures, Apple have provided excellent backwards compatibility for applications and migration paths for developers.
Microsoft, on the other hand, have been on the same architecture from day 1, and you are extremely lucky if you don't have to start again from scratch with every new version of Developer Studio and its associated cruddy class library or ill-thought-out framework.
There are other languages in the field
When mentioning NIAL and SISAL, SAC (Single Assignment C, www.sac-home.org) deserves a mention as well. SAC produces efficient parallel code from seemingly sequential program specifications. Several case studies on its website suggest that SAC yields runtimes competitive with, if not better than, code produced by modern Fortran compilers.
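For readers unfamiliar with the single-assignment idea, here is a rough C analogue (this is not SAC syntax — SAC's own notation uses with-loops) of why such code parallelises so well: every output element is defined exactly once, from the inputs alone, so there are no loop-carried dependences and a compiler is free to run the iterations in any order or in parallel:

```c
#include <stddef.h>

/* Single-assignment style: out[i] is written exactly once and each
 * iteration reads only the input arrays, never another out[j]. With no
 * loop-carried dependences, the loop is trivially parallelisable --
 * the property SAC's compiler exploits automatically. */
void scale_add(const double *a, const double *b, double *out, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = 2.0 * a[i] + b[i];
}
```

Contrast a loop such as `out[i] = out[i-1] + a[i]`, where the dependence on the previous element serialises execution.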