* Posts by Louis Savain

53 posts • joined 18 Mar 2008

Meet the man building an AI that mimics our neocortex – and could kill off neural networks

Louis Savain

Re: model a neurone in one supercomputer

I disagree. We can certainly model the brain or any physical system in software using current computers. The only barrier is the complexity of the model.

3
2

Rock star physicist Cox: Neutrinos won't help us cheat time

Louis Savain
Thumb Down

RockStar Physicists Have No Clue

So physicists have no idea why the speed of light is the speed limit. Was it a guess all along? It sure seems that way, doesn't it? Otherwise, they would know that the neutrino FTL result was in error and the whole nonsense would have never made the news.

Truth is, if physicists really had a clue about the nature of motion, they would know that the speed of light is not just the fastest speed in the universe but the *only* speed in the universe. Nothing can move faster or slower. A body that seems to be moving at a much slower speed than c is really making a series of quantum jumps at c, with lots of wait periods in between. This is true no matter how smooth motion appears at the macroscopic level.

And if physicists really understood the causality of motion, they would also know that we are moving in an immense lattice of energetic particles without which there could be no motion. Google "Physics: The Problem with Motion" if you really want to understand motion. Believe me, you don't understand motion even if you think you do. And while you're at it, Google "Why Space (Distance) Is an Illusion" for more surprises.

0
9

Father of Lisp and AI John McCarthy has died

Louis Savain
Unhappy

Sad But...

Yes, John was a thinking man, a giant in his day. I don't like Lisp (or any functional language) but it could just be me. Others swear by it. Still, McCarthy did as much as anybody (certainly as much as Minsky) to get AI research stuck in the hopeless rut of symbol manipulation for half a century. What a waste of time and brains! It is only recently that new thinkers in the field have begun to realize that intelligence is at its core a temporal phenomenon.

That being said, "artificial intelligence" has a nice ring to it. I thank McCarthy for that because the phrase has caused more people to become interested in the field than anything else.

0
4

Russia, NASA to hold talks on nuclear-powered spacecraft

Louis Savain

Primitive!

The use of reaction mass for space propulsion, whether nuclear or chemical, is about as primitive as can be. Not to mention slow, dangerous and frustratingly expensive. Sure, the blue glow of spaceship exhaust in space movies may look cool. But rest assured that humanity is not going to colonize the solar system with mass-ejecting rockets based on Newtonian physics, let alone the star systems beyond. Unless there is some sort of breakthrough in our understanding of motion at the fundamental level, there will be no mass migration off the earth any time soon.

The truth is that we will not come out of the current dark age of space travel until and unless physicists pull their collective head out of the sand and realize that they do not really understand motion, even if they think they do. A new analysis of the causality of motion reveals that Aristotle was right to insist that motion is caused and that, as a result, we are immersed in an immense lattice of energetic particles without which there can be no motion. Soon, we'll learn how to exploit the lattice for clean energy production and extremely fast propulsion. New York to Beijing in minutes, earth to Mars in hours; that's the future of transportation. Imagine vehicles that can move at prodigious speeds and negotiate right angle turns without slowing down and without suffering any damage due to inertial effects. It will happen in your lifetimes.

Google "Physics: The Problem with Motion" if you're interested in the amazing future of travel.

0
5

China picks MIPS for super-duper super

Louis Savain

Another Chinese Me-too Project

Interesting but what a waste of good research talent! The hard reality is that any new processor that does not solve the parallel programming crisis is on a fast road to failure. No long march to victory in sight for the Loongson, sorry.

China should be trying to become a leader in this field, not just another me-too follower. There is an unprecedented opportunity to make a killing in the parallel processor industry in the years ahead. Intel may have cornered the market for now but they have an Achilles' heel: they are way too big and way too married to last century's flawed computing paradigms to change in time for the coming massively parallel computer revolution. Their x86 technology will be worthless when that happens. The trash bins of Silicon Valley will be filled with obsolete Intel chips.

Here's the problem. The computer industry is in a very serious crisis due to processor performance limitations and low programmer productivity. Going parallel is the right thing to do but the current multicore/multithreading approach to parallel computing is a disaster in the making. Using the erroneous Turing Machine-based paradigms of the last sixty years to solve this century's massive parallelism problem is pure folly. Intel knows this but they will never admit it because they've got too much invested in the old stuff. Too bad. They will lose the coming processor war. That's where China and Intel's competitors can excel if they play their cards right.

The truth is that the thread concept (on which the Loongson and Intel's processors are based) is the cause of the crisis, not the solution. There is an infinitely better way to build and program computers that does not involve threads at all. Sooner or later, an unknown startup will pop out of nowhere and blow everybody out of the water.

My advice to China, Intel, AMD and the other big dogs is this: first invest your resources into solving the parallel programming crisis. Only then will you know enough to properly tackle the embedded systems, supercomputing and cloud computing markets. Otherwise be prepared to lose a boatload of dough. When that happens, there shall be much weeping and gnashing of teeth but I'll be eating popcorn with a smirk on my face and saying "I told you so".

How to Solve the Parallel Programming Crisis:

http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-programming.html

0
0

Intel puts cloud on single megachip

Louis Savain

Intel Is Delusional

Seriously, Intel needs to take a close look at IBM's big failure with its much ballyhooed Cell Processor that recently came to naught. I mean, who are they kidding? You don't design a parallel processor and then wait around for the software engineers to come up with a software model for it. This is as ass-backward as you can get. It should, of course, be the other way around:

First you come up with the right parallel software model and then you design a processor to support it. But that would be asking way too much from Intel's engineers. They have way too much invested in the failed paradigm of the last century to do the right thing. Maybe some unknown startup will see the writing on the wall.

How to Solve the Parallel Programming Crisis:

http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-programming.html

1
6

Cambridge string theorist to succeed Stephen Hawking

Louis Savain
Boffin

Simple Question for the Lucasian Professor...

... from way in the back of the class. A very simple question for the 18th Lucasian Professor of Mathematics: Why do two bodies in relative inertial motion stay in motion?

A. Nothing is needed to keep them moving. Newton proved that a long time ago.

B. It is all caused by inertia and momentum. Newton proved that one too.

C. Physics is not about the why of things but the how.

D. I don't know.

E. All of the above.

F. None of the above.

Soon, a reevaluation of our understanding of the causality of motion will reveal that we are immersed in an immense lattice of energetic particles. Floating sky cities, unlimited clean energy, earth to Mars in hours, New York to Beijing in minutes. That's the future of energy and travel.

The Lucasian professor could not have made this prediction because he does not really understand motion. If he did, he would know that Aristotle was right to insist that motion requires a cause, and he would not have accepted the position. But then again, neither did his predecessors.

ahahaha... AHAHAHA... ahahaha...

The Problem with Motion:

http://rebelscience.blogspot.com/2009/09/physics-problem-with-motion-part-i.html

0
0

There's water on the Moon, scientists confirm

Louis Savain

Great News But Rocket Propulsion Blows

Great news indeed. It's depressing, though, to think that we're still using an ancient, dangerous, primitive and very expensive space transportation technology: rocket propulsion. One thing is sure: we'll never colonize the solar system with rockets at the rate we're going.

But rejoice. Soon, a new form of transportation will arrive, one based on the realization that we are immersed in an immense ocean of energetic particles. This is a consequence of a reevaluation of our understanding of the causality of motion. We'll have vehicles that can move at tremendous speeds and negotiate right angle turns without slowing down and without incurring damage due to inertial effects. Floating cities, unlimited clean energy, earth to Mars in hours, New York to Beijing in minutes... That's the future of energy and travel. Check it out.

The Problem With Motion

http://rebelscience.blogspot.com/2009/09/physics-problem-with-motion-part-i.html

0
0

Intel execs shift seats as Gelsinger departs

Louis Savain
Thumb Up

Time for the Baby Boomers at Intel to Retire

Chipzilla needs to reorganize its parallel processing/multicore division. They can start by forcing the baby boomers (the Turing Machine worshippers) to retire so that they can forge a new future. Being a behemoth is no guarantee of survival in this business, and hanging on to the antiquated computing paradigm of the past is a sure recipe for failure.

And by the way, that hideous heterogeneous monster, a.k.a. Larrabee, has got to go.

How to Solve the Parallel Programming Crisis:

http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-programming.html

0
0

The multicore future, and how to survive it

Louis Savain
Grenade

@BlueGreen

Yo, BlueGreen. Identify yourself if you got any gonads.

0
0
Louis Savain

Re: Unconventional Thoughts

Yeah, but I'm right about parallel programming.

0
0
Louis Savain
Thumb Up

Between a Rock and a Hard Place

Interesting article. The powers that be at Intel know that multithreading is not the answer to the parallel programming crisis, but they also know that they have wedged themselves between a rock and a hard place. They have way too much invested in last century's legacy technologies to change strategy at this late stage of the game. It makes sense for them to acquire outfits like Rapidmind and Cilk. The former specializes in domain-specific application development tools for multicore processors while the latter makes clever programs that discover concurrent tasks in legacy sequential programs. Intel's intelligentsia figure that this should give them enough breathing room while they contemplate their next move. Question is, is this a good strategy? I really don't think so.

In my opinion, Intel, AMD and the others are leading their customers astray by making them believe that multithreading is the future of parallel computing. Billions of dollars will be spent on making the painful transition by converting legacy code for the new thread-based multicore processors. The problem is that multithreading is not just a bad way to program parallel code (do a search on Dr. Edward Lee of Berkeley), it is the cause of the crisis. Neither Rapidmind nor Cilk is going to change that simple fact. These acquisitions are just a way of trying to force a round peg into a square hole as far as it will go. Sooner or later, a real maverick will show up and do the right thing and there shall be much weeping and gnashing of teeth. Being a behemoth is no guarantee of long-term survival in the computer business. Sorry, Intel.

Google or Bing "How to Solve the Parallel Programming Crisis" if you're interested in finding out about the real solution to the crisis.

0
0

DNA-carbon nanotube microprocessors — small hope for a big shift?

Louis Savain
Thumb Up

The Memory Bandwidth Monster

Nice article. IBM researchers may indeed be close to a giant leap in miniaturization and cost efficiency. This could theoretically lead to extremely powerful, cheap and energy efficient consumer electronics. This is exciting because future computer applications will rely heavily on brain-like neural networks that are based on huge numbers of parallel processing entities. Future consumers will demand portable intelligent assistants that would fill a warehouse if based on today's technology.

However, there is a monster that threatens to destroy these exciting developments. It's called memory bandwidth. It is a monster that gets meaner and nastier every time you add another core to a processor. The reason is that all the cores must use a single data bus and a single address bus to access a single piece of data at a time and this creates a paralyzing bottleneck. Current memory systems would be hard pressed to keep up with a processor with a hundred cores, let alone the thousands and millions of cores that are being predicted by the pundits.
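A quick back-of-the-envelope calculation makes the scale of the mismatch clear. Here is a Python sketch; every number in it is an illustrative assumption of mine, not a vendor spec:

# Back-of-the-envelope: aggregate demand from many cores vs. what a
# single shared memory interface can deliver. All numbers are
# illustrative assumptions, not measured figures.
cores = 100
clock_hz = 2e9           # 2 GHz per core
bytes_per_cycle = 8      # one 64-bit operand per core per cycle (worst case)

demand = cores * clock_hz * bytes_per_cycle   # bytes/second the cores want
supply = 25e9                                 # assume ~25 GB/s for one shared bus

print(f"demand: {demand / 1e12:.1f} TB/s")    # 1.6 TB/s
print(f"supply: {supply / 1e9:.0f} GB/s")     # 25 GB/s
print(f"shortfall: {demand / supply:.0f}x")   # ~64x

Even if the worst case overstates real traffic by an order of magnitude, the single shared interface still loses.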

It goes without saying that the industry must eliminate the bottleneck. The core/bus/memory approach to computing is fast approaching the end of its usefulness. We must come up with a completely different type of computer, one that solves the bandwidth problem by embedding huge numbers of elementary processors directly into the memory substrate. Decentralization will be the only game in town. Biological brains get around the bandwidth problem by growing as many nerve fibers as necessary. However, this is a very slow process, one that would be inadequate for the fast random access requirements of modern software. See link below if you’re interested in this subject.

Parallel Computing: The Fourth Crisis

http://rebelscience.blogspot.com/2009/01/parallel-computing-fourth-crisis-part-i.html

0
0

AMD beta seeks CPU-GPU harmony

Louis Savain
Thumb Up

Very Interesting Article

"All well and good, but our experience with OpenCL has shown it to be a bear to program with."

**

Exactly. This is essentially what it comes down to. Does OpenCL make parallel programming of heterogeneous processors easy? The answer is no, of course, and the reason is not hard to understand. Multicore CPUs and GPUs are two incompatible approaches to parallel computing. The former is based on concurrent threads and MIMD (multiple instructions, multiple data) while the latter uses an SIMD (single instruction, multiple data) configuration. They are fundamentally different and no single interface will get around that fact. OpenCL is really two languages in one. Programmers will have to change their mode of thinking in order to take effective advantage of both technologies and this is the primary reason that heterogeneous processors will be a pain to program. The other is multithreading, which, we all know, is a royal pain in the arse in its own right.
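To make the SIMD/MIMD distinction concrete, here is a deliberately crude Python illustration (the tasks are made up, and a list comprehension only mimics what real SIMD hardware does in lockstep):

import threading

# SIMD flavour: one instruction stream applied to many data elements.
data = [1, 2, 3, 4]
print([x * 2 for x in data])         # the same operation on every element

# MIMD flavour: independent instruction streams on independent data.
results = {}
def task_a(): results["sum"] = sum([1, 2, 3])
def task_b(): results["max"] = max([7, 2, 5])

threads = [threading.Thread(target=t) for t in (task_a, task_b)]
for t in threads: t.start()
for t in threads: t.join()
print(results)                        # two different programs ran concurrently

One mindset per half: the first is about picking the data, the second about scheduling the programs. OpenCL asks you to juggle both at once.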

***

Obviously, what is needed is a new universal parallel software model, one that is supported by a single *homogeneous* processor architecture. Unfortunately for the major players, they have so much money and so many resources invested in last century's processor technologies that they are stuck in a rut of their own making. They are like the Titanic on a collision course with a monster iceberg. Unless the big players are willing and able to make an about-face in their thinking (can a Titanic turn on a dime?), I am afraid that the solution to the parallel programming crisis will have to come from elsewhere. A true maverick startup will eventually turn up and revolutionize the computer industry. And then there shall be weeping and gnashing of teeth among the old guard.

**

Google "How to Solve the Parallel Programming Crisis" if you're interested in an alternative approach to parallel computing.

0
0

Intel cozying up to Google Chrome OS

Louis Savain
Boffin

Does "Reactive" Ring a Bell?

Greenhalgh,

Your entire rant is pointless because you are arguing out of ignorance. The COSA software model is reactive and synchronous, which means that nothing happens unless there has been a change. It further means that process timing is exact and that a read always immediately follows a write. Changes are automatically propagated to every component that needs them. It means that contention is a non-issue. See ya around, baby boomer.
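For anyone actually interested, here is a toy Python sketch of the change-driven part (the class and the names are mine, purely illustrative; it does not capture the exact process timing of the real model):

# Reactive update: a cell recomputes only when an input changes,
# and the change is propagated to every dependent automatically.
class Cell:
    def __init__(self, compute, inputs=()):
        self.compute, self.inputs = compute, list(inputs)
        self.dependents, self.value = [], None
        for src in self.inputs:
            src.dependents.append(self)
        self.refresh()

    def refresh(self):
        new = self.compute(*(src.value for src in self.inputs))
        if new != self.value:              # nothing happens without a change
            self.value = new
            for dep in self.dependents:    # propagate to whoever needs it
                dep.refresh()

a = Cell(lambda: 2)
b = Cell(lambda: 3)
total = Cell(lambda x, y: x + y, (a, b))
print(total.value)             # 5
a.compute = lambda: 10         # change an input...
a.refresh()                    # ...and the change ripples through
print(total.value)             # 13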

Phew! Turing Machine heads and thread monkeys are a dime a dozen. Can't get rid of them. But take heart, Greenhalgh. It's never too late for you to repent of your manifold sins. ahahaha... AHAHAHA... ahahaha...

0
0
Louis Savain
Boffin

It's not Rocket Science

Greenhalgh, you're wasting my time.

copsewood, what I'm proposing is not rocket science. It's plain common sense. If you don't get it, it's because something is wrong with your brain. A baby boomer, are you? You got Alzheimer's, maybe? Oh well. Many people do get it. And some people are indeed working on their own implementations. I've got other things to be busy with. But I do enjoy getting on your nerves every so often.

0
0
Louis Savain
Grenade

You Ain't my Peers

@James Greenhalgh

You know what you can do with your peer review, don't you, Greenhalgh? You can pack it where the sun don't shine. You (the computer academic community) are not my peers. Why would I want you as my peers? You are failures. You turned computer science and programming into a tower of Babel, a big fucking pile of crap. You were wrong about computing from the start. This is something I realized the first day I opened a book on computer programming. Your computing model (the Turing Machine) is crap. You have shot computing in the foot. Big time. The parallel programming crisis is just the chickens coming home to roost, as they say in the USA. With a vengeance, I might add, if only because billions of dollars of rich folk's money are in the balance. I'd be scared if I were you.

Y'all better get ready to face your computing KARMA. The exit is on the left and don't let the door hit your behind on the way out, just in case. LOL.

0
0
Louis Savain

It's the Fucking Threads, Stupid!

Neil Standbury,

Your rant is worthless. It's the fucking threads, stupid! It has always been the fucking threads. And not just because they are prone to deadlock and are hard to debug but also because they are a pain in the ass to program and understand. The solution demands that we get rid of the fucking threads. Read this over and over till it sinks in.

The goal is to make it as easy as possible to produce rock solid software applications as rapidly as possible. It's not about making a bunch of aging baby-boomer geeks feel good about their ugly and hopelessly flawed cryptic code. In other words, it's about money and profit. It's time for the Turing worshippers of the last century to retire. You caused the crisis. You failed, goddamnit!

That being said, it's good to see that you, at least, identify yourself. That's gotta count for something. More than I can say for the anonymous cowards. LOL.

0
0
Louis Savain

Is that you, PZ Myers?

Yo, Anonymous,

Is that you, PZ Myers? Come on man, go get some gonads and identify yourself. LOL.

0
0
Louis Savain

Intel Is Scared

Otellini has a big problem on his hands. He can sense that the computer industry is ripe for a seismic change. The parallel programming crisis threatens to unleash a frenzy of innovation that may render Intel's processors obsolete. So Otellini figures that the best way to leverage Intel's heavy investment in last century's technology is to support as many OSes as possible in order to lock in as many customers as possible. If a major change happens, that should give him some breathing room.

The problem with operating systems, however, is that they will all become obsolete in a few years. This includes all the dinosaurs from the 20th century: Windows, Unix, Linux, MacOS, etc. And let's not forget the processors they run on. They will all join the buggy whip and the slide rule in the pile of abandoned artifacts. Why? Because the coming solution to the parallel programming crisis will not suffer a bunch of primitive and inferior technologies to survive.

So Google's Chrome OS is yet another Linux OS? Please, don’t make me laugh. Linux is a mummy, a decrepit museum piece from a soon-to-be-forgotten age. Eric Schmidt is clearly delusional. Google’s mountain of cash is not enough to guarantee success in this cutthroat business. Chrome OS is doomed before it is even born. Heck, Google’s own future is precarious because the computer industry is at a dangerous crossroads. A wrong turn may turn out to be very painful if not fatal. My advice is: Y’all should think carefully before deciding on which way to proceed.

How to Solve the Parallel Programming Crisis:

http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-programming.html

0
0

Google Oompa-Loompas dream of virus-free OS

Louis Savain

Delusional Schmidt

Nice article. Google is about to learn a painful lesson. The stuff that's out there, including Linux (the basis of Chrome OS), is inherently insecure. The problem with operating systems is that they will all become obsolete in a few years. This includes all the dinosaurs from the 20th century: Windows, Unix, Linux, MacOS, etc. They will all join the buggy whip and the slide rule in the pile of abandoned technologies. Why? Only because the coming solution to the parallel programming crisis will not suffer a bunch of primitive and inferior technologies to survive.

Another Linux OS? Please, don’t make me laugh. Linux is a decrepit museum piece from a soon-to-be-forgotten age. Eric Schmidt is clearly delusional. Google’s mountain of cash is not enough to guarantee success in this cutthroat business. Chrome OS is doomed before it is even born. Heck, Google’s own future is precarious because the computer industry is at a dangerous crossroads. Y’all should think carefully before deciding which way to proceed.

How to Solve the Parallel Programming crisis:

http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-programming.html

0
0

Sun killing 'Rock' Sparc chip?

Louis Savain
Thumb Up

It's not too late

Very interesting article. Sun's Rock failed because it was not innovative enough, in my opinion. Anybody who thinks that the multithreading CPU technologies of the last century are viable in the age of massive parallelism should lay off the dope. This is a lesson for all the big players in the business and it will not be the last big chip failure either. Get ready to witness Intel's Larrabee and AMD's Fusion projects come crashing down like so many Hindenburgs.

As I wrote on Ashlee Vance's NYT blog (http://bits.blogs.nytimes.com/2009/06/15/sun-is-said-to-cancel-big-chip-project/?hp), after the industry has suffered enough (it’s all about money), it will suddenly dawn on everybody that it is time to force the baby boomers (the Turing Machine worshippers) to finally retire and boldly break away from 20th century’s failed computing models.

Sun Microsystems blew it but it’s not too late. Oracle should let bygones be bygones and immediately fund another big chip project, one designed to truly rock the industry this time around and ruffle as many feathers as possible. That is, if they know what’s good for them.

How to Solve the Parallel Programming Crisis:

http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-programming.html

0
0

Google: The internet is 'the right programming model'

Louis Savain

The Cloud Is Not a Programming Model

Google wants us to believe that cloud computing is the natural order of everything in programming only because Google's business model hinges on cloud computing. This may be true for certain applications (spreadsheets, word processing, etc.), but the reality is that the world is quickly moving toward a massively automated age in which cars drive themselves (or, at least, react to dangerous situations to prevent collisions) and personal robots do chores around the house, including the babysitting. The computing horsepower that will be required for this sort of stuff can only be achieved via massive parallelism and gargantuan random access memory. There is no way the public is going to trust their mission- and safety-critical computing needs to an invisible cloud in the sky. What if the system suddenly becomes overwhelmed with processing requests or runs into technical difficulties? Imagine the damage that a cloud blackout or brownout could do to a modern economy that is completely hooked to it.

So no, Eric. Cloud computing is a very good thing but it is not a programming model. It's just a business model. As soon as some maverick startup comes out with a solution to the parallel programming crisis (see link below), get ready for an explosion of super-complex and environmentally-aware apps that would tax the bandwidth of any cloud infrastructure.

How to Solve the Parallel Programming Crisis:

http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-programming.html

0
0

Intel's Otellini plots little growth path

Louis Savain

Short-term Growth vs. Long-term Success

Short-term growth is no guarantee of long-term success. Sooner or later, the market will come to its senses and realize that neither Intel's antiquated CPU technology nor Larrabee (a GPGPU hybrid built on Intel's antiquated CPU technology) is the solution to the parallel programming crisis. Larrabee sounds powerful and cool until you come face to face with the nasty problem of programming the heterogeneous beast. In the end, neither GPU nor CPU (nor a hybrid) will survive the coming paradigm shift. What is needed is a new type of processor built to support a universal and deterministic parallel programming model. Intel (and those who invest in Intel's technology) should tread carefully because being a market behemoth is no guarantee of invincibility. A visionary startup or a true maverick may sneak up behind this sleeping giant and steal the pot of gold.

How to Solve the Parallel Programming Crisis:

http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-programming.html

0
0

Apple setting up chip division for next-gen iPhones?

Louis Savain

Be Careful, Apple: Paradigm Shift Ahead

Unless Apple comes out with a hardware and software solution to the parallel programming crisis, this is an investment that will come back to bite them in the ass. Hard.

How to Solve the Parallel Programming Crisis:

http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-programming.html

0
0

The long slog to multicore land

Louis Savain

Marcus Levy's Got it Ass Backwards

You don't create a programming model to support an existing hardware model. It should be the other way around. First you come up with a correct programming model; then you devise the hardware to support the model. Sheesh.

The Multicore Association is supposed to lead and suggest new ways to get us out of the crisis. Kissing Intel's or AMD's ass is not the way to go. It does not matter if you depend on them for financial support. The main leaders of the multicore processor industry have let it be known that they want to support the multithreading model of parallelism. The only problem is that multithreading is the cause of the crisis, not the solution. Marcus Levy needs to wake the hell up and be a real leader. There are enough ass kissers in the business already. Go to the links below for enlightenment.

How to Solve the Parallel Programming Crisis:

http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-programming.html

Encouraging Mediocrity at the Multicore Association:

http://rebelscience.blogspot.com/2008/05/encouraging-mediocrity-at-multicore.html

0
0

Nvidia seeks 'PC soul' software

Louis Savain

A Sign of Weakness

Not that this is a bad move by Nvidia but it leads me to conclude that neither CUDA, nor OpenCL nor GPUs are the answer to the parallel programming crisis. If monetary incentives are needed to motivate people to write applications for a processor, my bet is that something is wrong with that processor. What I mean is that there is something about GPUs that prevents them from being an ideal solution to general purpose parallel programming. That something is obvious: They are not universal and they are a pain in the arse to program.

The powers that be at Nvidia realize that they are in trouble. It is obvious that GPUs are not the answer and traditional CPUs are worse. We need a new programming paradigm and a new processor architecture to support the new paradigm. Nvidia should continue to milk as much money as possible with their current technology but they should invest their R&D money into something else. One thing is certain: Nvidia cannot say that nobody warned them.

How to Solve the Parallel Programming Crisis:

http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-programming.html

0
0

Exploding core counts: Heading for the buffers

Louis Savain
Thumb Up

Dump Multithreading Now!

Excellent article. The powers that be at Intel, AMD, IBM, Sun, HP and the other leaders in multicore processor technology know that multithreading is not part of the future of multicore computing. In fact, multithreading IS the reason for the parallel programming crisis. The big boys and girls are pushing multithreading because their current crop of processors is useless without it. However, this is a disaster in the making because the market will be stuck with super expensive applications that are not compatible with tomorrow's computers. Fortunately, there is a way to design and program multicore processors that does not involve the use of threads at all (see link below).

Having said that, I agree with the article's author. The biggest problem facing the processor industry is not the parallel programming crisis. The worst problem of them all has to do with memory bandwidth. As the number of cores continues to increase, memory subsystems will be hard pressed to keep up. The memory bandwidth problem is the real showstopper because it threatens to repeal Moore's law regardless of whether or not the programming problems are solved.

I suspect, however, that a correct programming model will open new avenues of research that may lead to a solution to the memory bandwidth crisis.

How to Solve the Parallel Programming Crisis:

http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-programming.html

0
0

RISC daddy conjures Moore's Lawless parallel universe

Louis Savain

The Thread Concept Is the Real Problem

It is becoming painfully obvious to all but the most hard-core addicts of last century's technology that the multithreading approach to parallel computing is not going to work as the number of cores per processor increases. That the computer industry continues to embrace the multithreading model in the face of certain failure is a sign that computer science is still dominated by the same baby boomer generation who engineered the computer revolution in the 70s and 80s. It is time for those gentle folks to either step aside and allow a new generation of thinkers to have a turn at the helm or read the writing on the wall.

We must go to the root of the parallel programming problem to find its solution. Multithreading is a natural progression of the single thread, a concept that was pioneered more than 150 years ago by Babbage and Lovelace. The only way to solve the problem is to get rid of the thread concept once and for all. There is no escaping this. Fortunately for the industry, there is a way to design and program computers that does not involve threads at all. Here are a few links with more info if you're interested in the future of computing.

How to Solve the Parallel Programming Crisis:

http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-programming.html

Nightmare on Core Street:

http://rebelscience.blogspot.com/2008/03/nightmare-on-core-street.html

Like it or not, we need a paradigm shift. The industry will be dragged kicking and screaming into the 21st century if necessary. The market always dictates the ultimate course of technology.

PS. Patterson should listen to one of the members of his PCL team, Dr. Edward Lee, who has had a lot to say on the evils of multithreading.

0
0

Supercomputing past masters resurface with coder-friendly cluster

Louis Savain

The Baby Boomers Have Failed. Sorry.

Let me be the first to congratulate Wallach on having solved the parallel programming crisis. Not!

The baby boomers, due to their infatuation with Turing machines and their addiction to everything sequential and algorithmic, gave us the parallel programming and software reliability crises. Now they are old and they've run out of ideas. It is time for them to peacefully retire and let a new generation of thinkers have a go at it.

How to Solve the Parallel Programming Crisis:

http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-programming.html

0
0

Intel rallies rivals on parallel programming education

This post has been deleted by a moderator

Louis Savain

@John Savard

You wrote, "But the idea of 'not teaching' sequential programming, although silly when taken literally, can still mean something that does make sense: to avoid teaching bad habits, introduce looking for potential parallelism very early in teaching programming."

We will not solve the parallel programming crisis until we stop looking for parallelism in our sequential programs. A day will come when we will, instead, look for sequences in our parallel programs. That is to say, parallelism will be implicit and sequences will be explicit. Until then, we are just pissing in the dark. Just a thought.
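Here is a toy Python sketch of what I mean (my own illustrative example, not any existing system): the two loads run concurrently by default, and the only ordering in the program is the explicitly stated dependency.

# Parallel by default, sequence explicit: tasks with no dependency run
# concurrently; a stated dependency is the only thing that orders them.
from concurrent.futures import ThreadPoolExecutor

def load(name):
    return f"data:{name}"

def merge(a, b):
    return a + " + " + b

with ThreadPoolExecutor() as pool:
    fa = pool.submit(load, "A")    # implicit parallelism:
    fb = pool.submit(load, "B")    # these two run concurrently
    # explicit sequence: merge cannot start before both inputs exist
    print(merge(fa.result(), fb.result()))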

0
0
Louis Savain

Multithreading Is a Monumental Mistake

"Sequential is so over" but so is multithreading. Unfortunately, nobody at Intel, Microsoft and AMD seems to have received the news. Multithreading is the reason for the parallel programming crisis. It is not the solution. It bothers me to no end that the main players in parallel programming and multicore technologies have not learned the lessons of the last three decades. Their choice of multithreading as the de facto parallel programming model is a monumental mistake that will come back to haunt them. There is nothing wrong with making a mistake but forcing your entire customer base to switch to a hopelessly flawed computing model that they will eventually have to abandon is not something that will be easily forgotten or even forgiven. Many billions of dollars will be wasted as a result.

There is an infinitely better way to design and program parallel computers that does not involve the use of threads at all. The industry has chosen to ignore it because the baby boomers who started the computer revolution are still in charge and they have run out of new ideas. Their brains are stuck in 20th century mode. Indeed, the parallel programming crisis is their doing. The industry needs a change of guard and they need it desperately.

How to Solve the Parallel Programming Crisis:

http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-programming.html

See also:

http://www.theregister.co.uk/2008/06/06/gates_knuth_parallel/

0
0

North Korea photoshops stroke from Kim Jong Il

Louis Savain

That photo is faked too!

@Andrew Alcock:

"A Korean source has a subtly different version of the same image:

http://english.chosun.com/w21data/html/news/200811/200811060007.html"

Thanks for posting the link. This image is also photoshopped. Notice that the sunny areas on both sides of Kim are not as bright as the others. Me suspects that Kim Jong-il is probably dead or in a coma.

0
0
Louis Savain

Look Even Closer

It's not just the angle of the shadow on the wall. If you look carefully, it's not even the same wall.

0
0

Tilera gooses 64-core mesh processor

Louis Savain

Transforming the TILE64 into a Kick-Ass Parallel Machine

The CPU is a soon-to-be dinosaur, an ancient technology that somehow escaped from a museum of the last century. Tilera should design its own processor core. The CPU, a sequential core, has no business doing anything in a parallel processor. It does not make sense. You're either parallel or sequential, take your pick. What is needed is a pure MIMD vector processor and dev tools based on a true parallel programming model, as opposed to the old multithreaded sequential crap. Tilera should get in bed with Nvidia, and Nvidia should change its SIMD GPU into a pure MIMD vector processor. This way, they can have a homogeneous multicore processor that can handle anything, not just graphics. Tilera's Imesh technology is just what is needed to solve the cache coherency problem and will serve as an ideal vehicle for effective hardware-enabled load balancing. Heck, Nvidia should immediately acquire Tilera by making them an offer they can't refuse. Nvidia has the chance of a lifetime to dominate the computer industry for decades to come.

The writing is on the wall. The CPU is dead. Good riddance. MIMD vector processing is the name of the new multicore game.

Transforming the TILE64 into a Kick-Ass Parallel Machine:

http://rebelscience.blogspot.com/2008/08/transforming-tile64-into-kick-ass.html

0
0

Big and bendy e-ink displays on way

Louis Savain

Awesome

This is pretty damn cool. Although I can't wait for the full-color, magazine quality version (hehe), this will do fine for browsing the news or reading a novel. Hopefully, it will have wi-fi internet capability.

0
0

Intel says 48 core graphics is just over the horizon

Louis Savain

48 Cores vs. 500 cores in AMD's FireStream 9250 and 240 cores in Nvidia's Tesla 10P

Way to go, Intel. Intel needs to go to computer science rehab.

Larrabee: Intel’s Hideous Heterogeneous Beast:

http://rebelscience.blogspot.com/2008/08/larrabee-intels-hideous-heterogeneous.html

0
0

Cuil feasts on Salmon of Nonsense

Louis Savain

Cuil Is Probably a Latin Import

Several romance languages have a similar word for arse. Culo in Spanish and cul in French. The French 'culotte' means pants. These words apparently came from a Latin word for tail (queue in French) but I could be wrong. Hopefully someone more knowledgeable can find out the true origin of the word. Interesting choice anyway. Maybe it's cuil.com's way of saying they are going to kick Google in the cuil.

0
0

Sun may or may not be about to obliterate Oracle and Microsoft

Louis Savain

Neither Erlang Nor TM Will Do

There are major problems with Erlang and functional programming in general. You can find out why at this link:

http://rebelscience.blogspot.com/2008/07/erlang-is-not-solution.html

Transactional memory may solve a couple of problems but, as Professor Edward Lee explained in "The Problem with Threads", multithreading is worse than most of you suspect. The problem has to do with the inability to deterministically prescribe the order of transactions. This makes debugging a nightmare. Transactional memory does not help. Indeed, TM makes the problem even worse, in my opinion, because of the need to reprocess the transaction in case of corruption.
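If you doubt the nondeterminism point, try this trivial Python demonstration (the random sleep is a stand-in for work of variable length): the program text does not prescribe the order of the operations; the scheduler does.

# Two threads logging to a shared list: the interleaving is decided
# by the thread scheduler, not by the program text, so it can differ
# on every run.
import threading, time, random

log = []

def worker(name):
    for i in range(3):
        time.sleep(random.random() * 0.01)   # stand-in for variable work
        log.append((name, i))                # a "transaction" of unprescribed order

threads = [threading.Thread(target=worker, args=(n,)) for n in "AB"]
for t in threads: t.start()
for t in threads: t.join()
print(log)                                   # a different interleaving almost every run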

The solution to the parallel programming problem is to do away with threads altogether. There is a way to implement parallelism that is 100% threadless. It is a method that has been around for decades. Programmers have been using it to simulate parallelism in such apps as neural networks, cellular automata, simulations, video games and even VHDL. Essentially, it requires two buffers and an endless loop. While the parallel objects in one buffer are being processed, the other buffer is filled with the objects to be processed in the next cycle. At the end of the cycle, the buffers are swapped and the cycle begins anew. Two buffers are used to prevent race conditions. This method guarantees 100% deterministic behavior.
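Here is a minimal Python sketch of the technique (the toy update rule and the names are mine, for illustration only):

# Double-buffered parallel update: every object reads the current
# buffer and writes its result into the next buffer, so the outcome
# is independent of the order in which the objects are visited.

def step(current):
    # One cycle: all cells update "simultaneously".
    nxt = [0] * len(current)                   # the second buffer
    for i in range(len(current)):
        left = current[i - 1]                  # reads see only the old state
        right = current[(i + 1) % len(current)]
        nxt[i] = (left + right) % 2            # toy rule: XOR of the neighbors
    return nxt                                 # swap: next becomes current

buf = [0, 0, 1, 0, 0, 1, 0, 0]
for cycle in range(4):                         # an endless loop in a real system
    buf = step(buf)
    print(cycle, buf)

Because nothing ever writes into the buffer being read, the result is the same no matter how many workers process the cells or in what order.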

In order to solve the parallel programming problem, it suffices to implement this method at the instruction level and make it an inherent part of the processor itself. To find out why multithreading's days are numbered, read "Parallel Computing: Why the Future Is Non-Algorithmic":

http://rebelscience.blogspot.com/2008/05/parallel-computing-why-future-is-non.html

Admittedly, this solution will require a reinvention of the computer and of software construction methodology as we know it. But there is no stopping it. The sooner we get our heads out of the threaded sand and do the right thing, the better off we will be.

0
0

Intel juices parallel programming for the masses

Louis Savain

@Jim Crownie

"I agree, but given where we are, what do you suggest?"

Jim, I suggest we do away with threads altogether and there is no need to wait for the academics to sort it out. The academics are the ones who gave us the damn threads in the first place. However, it's good to see that not all academics are clueless arse kissers. UC Berkeley's Professor Edward Lee is a case in point.

There is a simple and infinitely better way to implement parallelism that does not involve threads at all. The only thing is that it will require a reinvention of the computer as we know it and a repudiation of the Turing machine as a viable computing model. This is the future of computing. Whether you or Intel like it or not, there is no stopping it.

Read "Parallel Computing: Why the Future Is Non-algorithmic" at the link below to find out why multithreading is not part of ther future of computing.

http://rebelscience.blogspot.com/2008/05/parallel-computing-why-future-is-non.html

0
0
Louis Savain

@BlueGreen

Why be a Wintel anonymous coward? You are a gutless thread monkey and you're full of crap, that's all. See you around.

0
0
Louis Savain

The Professor vs. the Thread Monkeys

The strange thing about Intel's addiction to multithreading is that Professor Edward Lee, the head of UC Berkeley's Parallel Computing Lab, which is partially funded by Intel and Microsoft (another infamous bastion of thread monkeys), is known for rejecting threads as a viable solution to parallel programming. The Professor made big news a couple of years ago when he published his now famous "The Problem with Threads".

One is left to wonder, will Professor Lee cave in to the thread monkeys (like the gutless cowards at Stanford's Pervasive Parallelism Lab) or will he have enough huevos to tell them to their faces that they're full of crap? Time will tell.

UC Berkeley’s Edward Lee: A Breath of Fresh Air

http://rebelscience.blogspot.com/2008/07/uc-berkeleys-edward-lee-breath-of-fresh.html

0
0

Here's the multi-core man coding robots, 3-D worlds and Wall Street

Louis Savain

The Future of Parallel Programming Is Non-Algorithmic

The rabid Turing Machine worshipers on Slashdot are censoring my comments as usual. LOL. So, I'll post my message here. It is a good thing The Reg believes in freedom of expression.

It is time for Professor Olukotun and the rest of the multicore architecture design community to realize that multithreading is not part of the future of parallel computing and that the industry must adopt a non-algorithmic model (see link below). I am not one to say I told you so, but one day soon (when the parallel programming crisis heats up to unbearable levels), you will get the message loud and clear.

http://rebelscience.blogspot.com/2008/05/parallel-computing-why-future-is-non.html

0
0

AMD loses $1.19bn and CEO Ruiz

Louis Savain

AMD Can Kick Intel's Ass But Only By Forging a New Market

Replacing Ruiz with Meyer does AMD no good unless Meyer takes the opportunity to point AMD toward a new direction. It is time for AMD to realize that, even though it has the best engineering team in the world, parroting Intel's x86 technology is a losing proposition. Nobody can beat a behemoth like Intel by playing Intel's own game in Intel's own backyard.

Now that the industry is transitioning away from sequential computing toward massive parallelism, AMD has the opportunity of a lifetime to take the bull by the horns and lead the world into the next era of computing. Intel is making a grave mistake by adopting multithreading as its parallel programming model. AMD must not make the same mistake. There is an infinitely better way to design and program multicore processors that does not involve threads at all. To find out why multithreading is not part of the future of computing, read "Parallel Computing: Why the Future Is Non-Algorithmic":

http://rebelscience.blogspot.com/2008/05/parallel-computing-why-future-is-non.html

0
0

Yahoo! exec! exodus! continues!

Louis Savain

Sinking Ship?

Sinking ship? Are you kidding? Jerry Yang is to Yahoo what Steve Jobs is to Apple. Yang is the man and he did the right thing to save his company. He should be applauded. There is a need for an antidote to the Google monster and Yahoo is it, not Microsoft.

If the jumping rats were all that good, why would the ship be sinking? It is a good thing that the old guard is leaving. They failed. Now, what Jerry needs to do is attract a true visionary who understands what is wrong with the search and advertising business (and there is a lot that is wrong with it other than it being kinda boring) and come up with an exciting and elegant solution that will leave the competition in the dust. Jerry should also consider diversifying Yahoo's business a little. There are a lot more fish in the computer ocean than just search and advertising. Watch Yahoo bounce back like a tiger and confound all the doomsday prophets.

Louis

http://rebelscience.blogspot.com/

0
0

Multi-threaded development joins Gates as yesterday's man

Louis Savain

I Am a Crank and Proud of It

Yo, Cloudberry,

I am a crank and a crackpot. I say so on my blog. Click on the link, "Who Am I?".

0
0
Louis Savain

@Anonymous Coward

Yo, Coward,

COSA is based on a well-known technique that has been used by programmers for decades to emulate deterministic parallelism in such apps as neural networks, cellular automata, video games, simulations, and even VHDL. It's not rocket science. It requires nothing more than a collection of elementary objects to be processed, two buffers and an endless loop. Nobody can lay claim to having invented this technique since it is pretty much obvious and has been around from day one. So don't tell me that your academic buddies invented it. That is simply bull. Besides, I want neither help nor approval from academia. COSA is dear to me but if its success must depend on academic approval, I would rather see it fail.

Again, the technique I mentioned above is not rocket science. I simply took it down to the instruction level and added some bells and whistles and a reactive envelope in order to turn it into a programming model. However, since current processors are designed and optimized for the algorithm, COSA would be way too slow because COSA is non-algorithmic. It must be implemented on its own COSA-compliant processor for performance reasons.

Even though I could demonstrate the reliability aspect of COSA on an existing machine, it turns out that nobody really cares about anything in the computer business other than performance, productivity and low power usage. I found that out the hard way. COSA would be way too slow if implemented in software. However, given a specially designed processor, it would blow everything out of the water in terms of speed and universality. It would make both thread-based multicore processors and GPUs obsolete. You will not believe how many times the engineering folks at Intel, Nvidia, AMD, Texas Instruments, Berkeley, Stanford, UIUC, etc. (and even financial houses like JP Morgan and Morgan Stanley) have visited my blog and the COSA site lately. I keep a record. These guys know what I got and they know I'm right but they're scared to death. I also make enemies because I tell it like I see it, so I'm sure I get badmouthed a lot. It's funny but I don't care.

So again, write me a fat check and I will deliver. Unless I see some real cash, I ain't lifting a finger because cash tells me two things: 1) My idea is good enough to attract a sponsor and 2) the sponsor/investor is serious. If the industry is not willing to invest in COSA, it does not deserve it. After all, several universities in the US and abroad are getting tens of millions of dollars from the industry to solve the parallel programming problem, and they don't even have a viable solution, just the same old crap. Heck, the computer science community is the reason that we are in the mess that we are in. If they can attract money, so can I, because I do have something that they don't have: a freaking solution to their freaking problem. Until then, I'll just keep writing stuff on my blog until the pain gets to be so unbearable for Intel and the rest that they'll just have to come knocking on my door or acknowledge that I'm right. I can wait. I have learned to be very patient. Besides, it's a lot of fun.

0
0
Louis Savain

@Anonymous Coward

Coward writes: "Be constructive".

I am being constructive. I have put together a comprehensive alternative to multithreading. Problem is, the kind of paradigm shift that I am promoting must be built on top of the ruins of the current programming model. There are no two ways about it. The current programming model simply sucks and someone has to say it. I don't mind doing it. Unlike some people I know, I ain't a coward. CSP is a failure because it is too hard to learn and it is not intuitive. It's a nerd language.

As far as implementing COSA is concerned, unfortunately, I was not born with a silver spoon in my mouth, nor do I have a sponsor with deep pockets (this may change soon enough though). However, you can always write me a check for a few million bucks and I'll deliver a COSA-compliant multicore CPU (that will blow everything out there out of the water), a COSA OS, a full COSA desktop computer and a set of drag-and-drop development tools for you to play with. It should not take more than two years. There is no reason that a multicore processor cannot do fine-grain, deterministic parallel processing and handle anything you can throw at it with equal ease. There is no reason to have specialized GPUs or heterogeneous processors. One processor should be able to handle everything if it is designed properly. A fast universal multicore processor is what the market wants.

In the meantime, I will continue to bash everything and everybody that needs to be bashed, including CSP and Hoare. If you think this is bad, you haven't seen me bash the functional programming crowd (e.g., Erlang and the rest). LOL. BTW, if you are offended by my Bible stuff, don't read what I write and stay away from my blog. It's not meant for you.

0
0
Louis Savain

Occam, Transputer & Academia

Occam and the transputer failed because academics have a way of taking the simplest and most beautiful concepts and turning them into complex and ugly monsters. Sorry, Hoare. I always tell it like I see it. Besides, the transputer was a multithreaded computer, AFAIK. The ideal parallel computing model already exists in nature. It is called the brain. The brain is a pulsed (signal-based) neural network. It is a reactive parallel system consisting of sensors and effectors. Our programs should likewise consist of sensors and effectors.

Programmers have been simulating parallel systems like neural networks and cellular automata for decades. And they do it without using threads, mind you. All it takes is the following: a collection of elementary objects to be processed, two buffers and a loop. That is all. It is not rocket science. It is the most natural and effective way to implement parallelism. Two buffers are used to prevent the signal racing that would otherwise occur. It suffices to take this model down to the instruction level and incorporate the buffers and the loop in the processor hardware. The programmer should build parallel programs with parallel elements, not sequential elements. He or she should not even have to think about such things as the number of cores, scaling or load balancing. They should be automatic and transparent.
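To see why the two buffers matter, compare an in-place update with a double-buffered one (a Python toy of my own making, nothing more):

# Why two buffers: updating in place lets early writes corrupt later
# reads (signal racing), so the result depends on visiting order.
# A second buffer keeps every read pointed at the untouched old state.

def shift_in_place(cells):
    for i in range(1, len(cells)):
        cells[i] = cells[i - 1]        # reads a value already overwritten
    return cells

def shift_buffered(cells):
    nxt = cells[:]                     # the second buffer
    for i in range(1, len(cells)):
        nxt[i] = cells[i - 1]          # reads only the old state
    return nxt

print(shift_in_place([1, 2, 3, 4]))    # [1, 1, 1, 1] -- corrupted by racing
print(shift_buffered([1, 2, 3, 4]))    # [1, 1, 2, 3] -- a true one-step shift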

If we had been using the correct programming model from the start, we would not be in the mess that we are in and you would not be reading this article and these comments. Transitioning from a single core to multiple parallel cores would have been a relatively simple engineering problem, not the paradigm-shifting crisis that it is. Multithreading (and its daddy, single threading) is a hideous and evil monster that was concocted in the halls of academia. Academics may jump up and down and foam at the mouth when I say this but it is true: the computer academic community has shot computing in the foot. It is time for programmers and processor manufacturers to stop listening to the failed ideas of academia and do the right thing. We want fast, cheap, rock-solid supercomputing on our desktops, laptops, and even our cell (mobile) phones. And we want it yesterday.

0
0
