RISC daddy conjures Moore's Lawless parallel universe

The oft-cited Moore's Law is the fulcrum of the IT industry: it has given us ever-faster and more sophisticated computing technology over the decades. This in turn allowed the IT industry to convince us that every one, two, or three years we need new operating systems, better performance, and new …

COMMENTS

This topic is closed for new posts.

  1. Anonymous Coward

    why are PCs still so slow then?

    The simple answer to that is the bloatware we call operating systems, full of backwards-compatible crap that hasn't been used in 10 to 15 years, plus the data transfer speed of hard drives.

    What REAL improvements have we had in hard drives in the last 5-7 years? Average transfer speed and seek times have only modestly improved, but capacity has increased incredibly (along with HD failure rates). Compare that to CPU, GPU, and memory speeds... well, there is no comparison. Hard drives can't keep up, and they haven't for years!

  2. Anonymous Coward

    @nobby

    "I don't have to buy a new chair just because Cushion 2.0 doesn't fit...."

    Not met my wife then...

  3. Anonymous Coward
    Joke

    @ Charles Manning

    Compiling a million lines a second sounds nice, until you realize it's still Pascal...

  4. Anonymous Coward
    Thumb Down

    the multi-core PC con

    >The market always dictates the ultimate course of technology.

    BS.

    The market wants faster chips, not more chips or cores. Who uses several apps concurrently that can all saturate the CPU? Who runs multithreaded apps? Only specialised professionals (and gamers, but they have dedicated GPUs for that), regardless of the author's wishful thinking.

    The rest (99.99%) of the PC market wants fast single-threaded processing, for fast boot and snappy windows, fast spell check, etc. Give me an 8GHz, or even a 4GHz, single-core CPU any day over a 2GHz quad-core. No doubt more apps will come that require more power (accurate voice recognition or OCR, where are you?), but the bottom line is that multi-core CPUs are shoved onto us because chip manufacturers take the easy way, not because they are wanted or even useful. Multi-threaded apps are difficult to write, and even more difficult to tune. Fast single-core CPUs would have the preference of users and developers. OK, maybe 2 cores are alright, in case an app is CPU-bound and the OS can't do multitasking properly (you know who you are...), but we don't need more cores. We need more speed. And we are all waiting....

  5. jon

    AC 28 Nov 03:20 is right...

    What all CPU engineers need to start working on is a multicore design that will ALSO include some way to take the simplest program and spread the load evenly across the cores...

    Is this possible??? That would then enable all the basic stuff to run faster, without complex, expensive stuff... (a sketch of the nearest thing we have today follows at the end of this post)

    'The market' is tied up by the salesmen, who keep pushing the numbers... they just take the clock speed and multiply it by the number of cores, giving a nice big number to impress Joe Average... successfully, due to large sales pushing the price down, and single cores either disappear or become 'uneconomical' due to low sales and a lack of boards that will take them!!

    - then he gets that home, and gets even more dissatisfied at the real speed...

    You may say different, but try walking into PC World and asking for a Pentium 1... even Dabs has none... and I would not want second-hand stuff...
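
    By way of an answer to the question above: fully automatic parallelisation of arbitrary programs is still an open problem, but for the simple, regular cases (loops over independent data) spreading the load is routine once somebody writes the split. A minimal pthreads sketch, with a made-up work() function standing in for any independent per-element job:

        /* A minimal sketch: splitting one simple loop across a fixed number of
         * worker threads. work() is a made-up stand-in for any independent
         * per-element job; nothing here is automatic, the split is hand-written.
         * Build: cc -O2 -pthread parsum.c */
        #include <pthread.h>
        #include <stdio.h>

        #define N        1000000
        #define NTHREADS 4                 /* pretend this is the core count */

        struct slice { long lo, hi; long long sum; };

        static long long work(long i) { return (long long)i * i; }  /* placeholder job */

        static void *run(void *arg)
        {
            struct slice *s = arg;
            long i;
            for (i = s->lo; i < s->hi; i++)
                s->sum += work(i);
            return NULL;
        }

        int main(void)
        {
            pthread_t tid[NTHREADS];
            struct slice part[NTHREADS];
            long long total = 0;
            long chunk = (N + NTHREADS - 1) / NTHREADS;
            int t;

            for (t = 0; t < NTHREADS; t++) {
                part[t].lo  = t * chunk;
                part[t].hi  = (t + 1) * chunk > N ? N : (t + 1) * chunk;
                part[t].sum = 0;
                pthread_create(&tid[t], NULL, run, &part[t]);
            }
            for (t = 0; t < NTHREADS; t++) {
                pthread_join(tid[t], NULL);
                total += part[t].sum;
            }
            printf("summed %d items on %d threads: %lld\n", N, NTHREADS, total);
            return 0;
        }

    The catch, and the reason "the simplest program" doesn't just go faster on its own, is that someone still has to write that split and make sure the iterations really are independent.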

  6. Marian Csontos

    It is obvious. Or is it?

    I thought the impossibility of infinite exponential growth was obvious to everyone but economists and EU politicians with their infinite GDP growth, stock-exchange players expecting shares to grow by 5%+ every year, and people expecting the interest on their savings in banks to be higher than (inflation + constant).

    How many doubling cycles will adding cores last? 5, 10, maybe 16. Certainly not 32.

    Fortunately there is enough crapware out there that even 100 years is not enough to fix it, and there will certainly be more in years to come. ;-)

    Marian

  7. Singlewhip
    Alert

    At the risk of being considered a self-promoter...

    I've been blogging about this for quite a while, preparatory to writing a successor to "In Search of Clusters" about the issue. See http://perilsofparallel.blogspot.com/ .

    Servers are no problem. They'll just get smaller and more efficient. They use huge numbers of cores already, just in separate machines. Virtualization rules.

    Clients are the problem, and they're a big one because they have the combination of volume and high price that funds a good part of the industry. Most microprocessors are $5 units, like the one that runs your dishwasher. Intel gets hundreds of dollars, sometimes thousands, per chip for first-run parts, and so does AMD.

    And programming... see my posting about 101 parallel languages, all current, absolutely none of them in use.

  8. Singlewhip
    Unhappy

    Oh, and by the way...

    John Cocke was the person who got the ACM Turing Award for inventing RISC architecture. See http://awards.acm.org/citation.cfm?id=2083115&srt=alpha&alpha=&aw=140&ao=AMTURING

    Dave Patterson is a great guy, a really smart guy, an acquaintance of mine, and a great namer: he came up with the term RISC. But he's not the original daddy. That's John Cocke.

    This misunderstanding was brought to you by (*humph*) zealous IBM security, fostered by people in IBM Research who thought keeping it secret made it seem more important. (The mainframe guys weren't buying that, but that's another tale.) But the ACM got it right.

  9. Anonymous Coward
    Boffin

    Niagara nonsense @Matt Bryant

    I've only just read this falsehood from Matt Bryant:

    "This failure to ramp up the infrastructure is perfectly demonstarted by Sun's Niagara chips, where they have effectively given up on the idea of keeping a core spinning and instead settled for having lot of cores idle and waiting whilst a few work"

    This is precisely the opposite of the truth; the Niagara chips use many thread contexts to keep the cores busy while some threads are waiting for memory. For applications like webservers, the impact is dramatic, e.g., Zeus:

    http://www.zeus.com/assets/default/Site/en/images_user/image/Zeus_Price_Perf_Grph24_11_2008.PNG

    To do this, they have more memory bandwidth (including a crossbar on chip) than typical CPUs because they effectively transform a latency problem (individual threads waiting for memory access) into one of bandwidth (lots of threads accessing memory while some are executing).

    The result is that individual thread performance isn't great, but for workloads comprising many threads or processes the throughput is much greater than anything else around right now, simply because so little hardware is idle.
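
    For anyone wondering what turning latency into bandwidth looks like from the software side, here is a rough analogy rather than anything Niagara-specific: one memory-bound walker spends most of its time stalled on cache misses, while many independent walkers overlap their stalls, so the aggregate access rate climbs until the cores and the memory system are saturated. Sizes and the access pattern below are made up for illustration.

        /* A rough software analogy for the Niagara approach: one memory-bound
         * walker stalls on nearly every access, while many independent walkers
         * overlap their stalls, so aggregate throughput rises until the cores
         * and the memory system saturate. Sizes and the access pattern are
         * made up for illustration. Build: cc -O2 -pthread walkers.c */
        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define SIZE     (1 << 24)   /* 16M longs, far larger than any cache */
        #define ACCESSES (1 << 22)   /* accesses performed by each walker    */

        static long *data;

        /* Chase pseudo-random indices through data so almost every access misses cache. */
        static void *walker(void *arg)
        {
            unsigned long long idx = (unsigned long long)(size_t)arg + 1;
            long sum = 0;
            int i;
            for (i = 0; i < ACCESSES; i++) {
                idx = (idx * 6364136223846793005ULL + 1) & (SIZE - 1);
                sum += data[idx];
            }
            return (void *)(size_t)sum;          /* keep the result live */
        }

        static double run(int nwalkers)
        {
            pthread_t tid[64];
            struct timespec t0, t1;
            int t;

            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (t = 0; t < nwalkers; t++)
                pthread_create(&tid[t], NULL, walker, (void *)(size_t)t);
            for (t = 0; t < nwalkers; t++)
                pthread_join(tid[t], NULL);
            clock_gettime(CLOCK_MONOTONIC, &t1);
            return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        }

        int main(void)
        {
            int counts[] = { 1, 4, 16, 64 };
            int i;

            data = calloc(SIZE, sizeof *data);
            for (i = 0; i < 4; i++) {
                double secs = run(counts[i]);
                printf("%2d walkers: %.0f accesses/sec\n",
                       counts[i], counts[i] * (double)ACCESSES / secs);
            }
            free(data);
            return 0;
        }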

  10. Julian
    Coat

    Back to the mid-80s

    So, let's get this straight: Patterson, who kick-started RISC architectures in the early 80s, is talking up new paradigms of parallel processing, a hot topic from the mid-80s.

    Thing is, we solved it in the mid-80s, with the INMOS Transputer. INMOS was therefore sold off by the Tories as soon as possible. The Transputer was pure genius: it could easily map a program that ran internally on a simulated multi-processor onto an actual multi-processor environment, so the language encouraged parallel programming and it scaled from 1 to 1000s of devices (a toy sketch of the model follows at the end of this post).

    Let's do a bit of maths. The early Transputers ran at 20MHz (giving 20 simple MIPS of performance) and probably had about 100K transistors each, with at least 4K of on-chip RAM (plus off-chip RAM too). In 1989 I ran my dissertation project on a 9-transputer rack, giving me 20 × 9 = 180 MIPS of performance.

    Let's scale that by 2 decades. Instead of 20MHz we have 3GHz (× 150) and instead of 100K transistors we have 2 billion transistors (× 20,000). That's equivalent to 20 × 150 × 20,000 = an astonishing 60 million MIPS of performance per Transputer-equivalent (with internal memory equivalent to about 80MB). My equivalent transputer rack would have over 500 TIPS of power!

    Instead we decided to base the future of computing on the (literally) back-of-an-envelope design which has set us back 20 years. I'll grab my coat.

    -cheers from julz @P
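
    For readers who never met occam: the Transputer's programming model was a set of independent processes that communicate only over synchronous channels, and the same program shape mapped onto one chip or onto many. A toy rendition of that shape in C with pthreads (nothing Transputer-specific survives; the channel here is just a one-slot rendezvous):

        /* A toy rendition of the occam/Transputer model: independent "processes"
         * (threads here) that communicate only over synchronous channels. The
         * channel is a one-slot rendezvous built from a mutex and a condition
         * variable; on a Transputer the same program shape mapped onto real
         * links between chips. Illustration only.
         * Build: cc -O2 -pthread pipeline.c */
        #include <pthread.h>
        #include <stdio.h>

        struct chan {
            pthread_mutex_t lock;
            pthread_cond_t  ready;
            int value, full;
        };

        static void chan_send(struct chan *c, int v)
        {
            pthread_mutex_lock(&c->lock);
            while (c->full)                 /* wait for the receiver to take the last value */
                pthread_cond_wait(&c->ready, &c->lock);
            c->value = v;
            c->full  = 1;
            pthread_cond_broadcast(&c->ready);
            pthread_mutex_unlock(&c->lock);
        }

        static int chan_recv(struct chan *c)
        {
            int v;
            pthread_mutex_lock(&c->lock);
            while (!c->full)
                pthread_cond_wait(&c->ready, &c->lock);
            v = c->value;
            c->full = 0;
            pthread_cond_broadcast(&c->ready);
            pthread_mutex_unlock(&c->lock);
            return v;
        }

        static struct chan nums    = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER };
        static struct chan squares = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER };

        static void *producer(void *arg)    /* process 1: emit 1..10 */
        {
            int i;
            for (i = 1; i <= 10; i++)
                chan_send(&nums, i);
            return arg;
        }

        static void *squarer(void *arg)     /* process 2: square whatever arrives */
        {
            int i;
            for (i = 0; i < 10; i++) {
                int v = chan_recv(&nums);
                chan_send(&squares, v * v);
            }
            return arg;
        }

        int main(void)                      /* process 3: print the results */
        {
            pthread_t p, s;
            int i;
            pthread_create(&p, NULL, producer, NULL);
            pthread_create(&s, NULL, squarer, NULL);
            for (i = 0; i < 10; i++)
                printf("%d\n", chan_recv(&squares));
            pthread_join(p, NULL);
            pthread_join(s, NULL);
            return 0;
        }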

  11. TMS9900
    Coat

    Great comments

    Really enjoying this stuff.

    A few things I would like to add to the mix, in no particular order:

    1) IMHO, one of the biggest problems is the skill sets of the 'new generation' of programmers. I had a graduate who apparently was a Java guru. Yet he had no concept of ASCII. He did not understand *how* toLower (or LCase in VB) *actually* worked. To him, it was just 'magic black box' stuff.

    2) Given the above, if we gave that graduate, say, a 40-core Intellasys processor (which is available now, off the shelf; yes, *FORTY* cores), what would he do with it?

    3) All of the above does not mean that this graduate is thick/stupid/whatever. Actually, he was really bright, and has gone on to do well. However, the standard of his degree course at university was appalling. Until we can get back to 'brass tacks' in the educational side of things, we are not going to produce people with the *knowledge* (note: not talent; you are born with talent) to take the latest multi-core processors and do something truly radical and ground-breaking with them.

    4) One day, I got two graduates together. I put the following to them:

    "We need to build a computer system that can control a radio telescope. A big huge fucking radio telescope. Not only will it control the movement of the dish in real time in order to track moving objects in the sky, it must also gather the data received from the telescope and store it so that it can be reviewed in real time, online, by multiple users at the same time. Furthermore, the data should be stored historically and available for instant recall so that comparisons can be made with older data. All this, while ensuring that the telescope is moved efficiently, without burning out the motors in the drive gear. What do you suggest?"

    They came up with credible solutions, none of which were particularly wrong, and which were a reflection of modern programming/systems-analysis thinking...

    "Well, we'll use a few computers... One for an SQL database, one for tracking the telescope, and one for viewing data."

    "Ok, great. But that's an awful lot of processing power. How will they communicate with each other?"

    "Using XML over a LAN."

    "Yes that will work. But if you use XML, you will need an XML parser, and code to package your data into XML packets - some sort of object model..."

    "Yes, we will abstract each item of data into objects, these can remoted over the LAN using SOAP."

    "Ok, its sounding pretty cool. XML is really only useful though when you need to share your data with third parties, where it needs to travel through firewalls, and be parsable by another machine that may not necessarily be running the same platform as you. We're talking about a system that is self contained, connected via a switch. Couldn't we just use sockets and our own protocol? Wouldn't that be much more efficient?"

    "Well yeah, but, that would be difficult..."

    Then I leave them goggle-eyed when I say, "Actually guys, I'm pulling your chain. This problem has already been solved. In 1971. By Chuck Moore. He did the whole thing on one PDP-11 with a single disk drive and 32K of RAM."

    Sometimes, I really do think we've gone backwards.

    Mine's the one with the "Threaded Interpretive Languages" (1981) book in it. Sometimes we should go back and read the old stuff, lest it be forgotten. It might teach us something.

    Mark
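
    For the record, "sockets and our own protocol" really is only a few dozen lines. A minimal sketch with a made-up wire format (a 4-byte big-endian length prefix, then the payload); the same read/write pair works unchanged over a TCP connection across the LAN, and a local socketpair just keeps the demo self-contained:

        /* "Sockets and our own protocol": a made-up wire format consisting of a
         * 4-byte big-endian length prefix followed by the payload. The same
         * read/write pair works unchanged on a TCP socket across the LAN; a
         * local socketpair keeps the demo self-contained.
         * Build: cc -O2 proto.c */
        #include <arpa/inet.h>      /* htonl / ntohl */
        #include <stdint.h>
        #include <stdio.h>
        #include <sys/socket.h>
        #include <unistd.h>

        static int write_msg(int fd, const void *buf, uint32_t len)
        {
            uint32_t hdr = htonl(len);
            if (write(fd, &hdr, sizeof hdr) != sizeof hdr) return -1;
            return write(fd, buf, len) == (ssize_t)len ? 0 : -1;
        }

        static ssize_t read_full(int fd, void *buf, size_t len)
        {
            size_t got = 0;
            while (got < len) {
                ssize_t n = read(fd, (char *)buf + got, len - got);
                if (n <= 0) return -1;
                got += n;
            }
            return (ssize_t)got;
        }

        static ssize_t read_msg(int fd, void *buf, size_t max)
        {
            uint32_t hdr;
            if (read_full(fd, &hdr, sizeof hdr) < 0) return -1;
            hdr = ntohl(hdr);
            if (hdr > max) return -1;
            return read_full(fd, buf, hdr) < 0 ? -1 : (ssize_t)hdr;
        }

        int main(void)
        {
            int sv[2];
            char reply[64];
            ssize_t n;

            socketpair(AF_UNIX, SOCK_STREAM, 0, sv);    /* stand-in for a TCP connection */
            write_msg(sv[0], "dish azimuth: 42.0", 18); /* "telescope" end sends a reading */
            n = read_msg(sv[1], reply, sizeof reply);   /* "database" end receives it */
            printf("received %zd bytes: %.*s\n", n, (int)n, reply);
            close(sv[0]);
            close(sv[1]);
            return 0;
        }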
