US boffin builds 32-way Raspberry Pi cluster

Boise State University PhD candidate Joshua Kiepert has built a 32-way Beowulf cluster from Raspberry Pis. Kiepert says his research focuses on “developing a novel data sharing system for wireless sensor networks to facilitate in-network collaborative processing of sensor data.” To study that field Kiepert figured he would need a …

COMMENTS

This topic is closed for new posts.
  1. tabman
    Thumb Up

    Just sitting down

    and getting the thing to work is awesome. Big Thumbs Up, Kiepert

    1. agricola
      Thumb Up

      @tabman: THANK YOU!

      A huge "THANK YOU!" for changing the subject!

      I am continually amazed at the amount of time and bandwidth wasted when someone sees, and creates, an opportunity to bash an object of their disaffection - and then to bash the bashers.

      1. agricola
        Thumb Up

        Re: @tabman: THANK YOU!

        Less than one hour, and already one downvote. How about that?

        My point made!

        Get a life, you gits.

        Regards to all, even you mindless, know-nothing (except how to play games; this IS a great game, isn't it!), web-enabled super-heroes.

  2. Richard Wharram
    Thumb Up

    Looks really cool

    But how well does it run Crysis?

    Maybe Quakeworld?

    1. Professor Clifton Shallot

      how well does it run Crysis?

      Dunno but imagine a Beowulf cluster of these things!

      1. Anonymous Coward
        Anonymous Coward

        Re: how well does it run Crysis? @Professor Clifton Shallot

        "Dunno but imagine a Beowulf cluster of these things!"

        It is a Beowulf, isn't it?

      2. Anonymous Coward
        Anonymous Coward

        Re: how well does it run Crysis?

        "Dunno but imagine a Beowulf cluster of these things!"

        Can one run a cluster of clusters or is that just a cluster?

      3. Jes.e

        Re: how well does it run Crysis?

        "Dunno but imagine a Beowulf cluster of these things!"

        Darn. You beat me to it.

        (Upvote anyway)

        The first step to building the world's most advanced system.. through recursion!

  3. JDX Gold badge

    Very neat

    Interesting point about the network cabling being an issue, they are quite heavy in comparison to the units themselves.

  4. John 98

    Ermm - wot's this got to do with Apertures (tm)?

    I haven't had much coffee yet, but I do not see the W word anywhere in this article. And isn't the writer aiming to draw our attention to a noteworthy project? Confused ...

  5. Horridbloke
    Thumb Up

    Proper computing

    Based on my playing with them I'd say the Pi is broadly comparable to a mid-to-late nineties workstation. That makes this perspex box-of-stuff surprisingly similar to the lab-full of Sun sparcs we used for distributed computing experiments back in college.

    Nice work.

  6. John Robson Silver badge

    Fans overkill?

    Is it just me that thinks that 4 fans is overkill for a device which basically doesn't need cooling in normal operation?

    I know it will be running flat out more than most, but 4 large fans?

    1. auburnman

      Re: Fans overkill?

      That's an eighth of a fan per RPi. Keep in mind they'll be in close proximity, so the heat radiating out from one will meet heat from another 7+ coming the other way.

    2. Stuart Castle Silver badge

      Re: Fans overkill?

      Surely it is better to have too much cooling than not enough?

    3. Down not across

      Re: Fans overkill?

      Probably not so much overkill, considering there are 32 RPis, all running in Turbo mode. In terms of CFM it might not be necessary to have 4 fans, but getting a good airflow across all of them is easier with 4.

      And keeping things cool will help with stability.

    4. Fred Flintstone Gold badge
      Thumb Up

      Re: Fans overkill?

      Is it just me that thinks that 4 fans is overkill for a device which basically doesn't need cooling in normal operation

      The nice thing about having 4 of them is that you can run them at greatly reduced speed and still have enough airflow to matter, while making less noise doing so.

      I believe the rack is open front & back, so all he really needed to do was to mount the thing at an angle (say 45 degrees) and there would be enough natural convection to keep things cool. However, it wouldn't look half as cool and you'd have a larger footprint - another advantage of his approach is that the whole thing isn't much bigger than a normal PC case.

      Thumbs up from me.

      1. Andy Miller

        Re: Fans overkill?

        OK, so what if you mounted the boards at 45° within the rack? Would that generate a convection current (and look quite cool without increasing the footprint much)?

        1. Anonymous Coward
          Anonymous Coward

          Re: Fans overkill?

          Good point, you'd only increase the height then. However, the mechanics could become like hard work, and that is incompatible with the BOFH ethics we ought to instil in newbies.. :)

  7. Len Goddard

    Lateral thinking needed

    You have to cool them because you have piled them into a rack in close proximity.

    Now, if you suspended them from the ceiling by ethernet cables of varying lengths you would have a combination passively cooled Beowulf cluster and hi-tech mobile. A really practical and useful addition to office decor.

    1. Fred Flintstone Gold badge
      Thumb Up

      Re: Lateral thinking needed

      I like that idea. In addition, with all those LEDs you could also cut down on office lights in its vicinity :)

    2. rlphillips
      Happy

      Re: Lateral thinking needed

      Daddy would you like some sausage. Daddy would you like some sau sa ges.

      http://www.youtube.com/watch?v=8ZYrutVyZ-A

    3. TeeCee Gold badge
      Happy

      Re: Lateral thinking needed

      You may be on to something. ISTR that there were moves afoot in the Pi world for the things to support power over ethernet at some point in the future, which would make this eminently practical as well as outrageously cool.

      You'd have the only cluster in the world where the failure of one of the little plastic tang things on an ethernet cable causes it to drop a node..........both figuratively and literally.

  8. Lee D Silver badge

    Spent $2000 on RPis. Wasted $50 on absolute junk (LEDs, unnecessarily flashy fans, etc.). Then taped it together with electrical tape. Spent a huge amount on a managed switch, which I can't see being needed on an "all-machines-the-same" internal network (a few 8-port switches with Gb ports would probably have been better, actually).

    Got performance basically in the range of, say, an 8-core x86 processor. For $2000 he could have just bought direct from here:

    http://www.titanuscomputers.com/Quad-AMD-Opteron-up-to-64-Cores-HPC-Workstation-Computer-s/26.htm

    And wiped the floor with it on a single machine, without worrying about electrics, cabling, multiple machines, etc. (and got a GPU for free, 8GB of RAM, and stupendous storage not on SD cards). And his system takes 200W-ish, so you don't actually use much more power (the above would complete the task in less time, hence less power, and you could also use it as your main desktop, so you don't have to recompile everything and chuck it over to the RPis for computation from your laptop anyway).

    I'm sure it's all a nice experiment and good "experience" for a quick play / setup with MP systems, but it's really nothing interesting - especially not from a PhD candidate. Hell, I'd be rather peeved at him wasting his time and money on building that system and having the gall to write it up, compared to buying a computer that runs his dissertation work directly without having to faff about. Especially the "performance" section of his write-up, which successfully manages to imply that his system is actually worse than the others (performance per dollar) but with graphs that try to convey the opposite, and then ploughs on to describe how much faster having more RPis is than having just one (without any comparison to the alternatives).

    If a 15-year-old had done this, I'd be saying good on them and well done for experimenting. But a PhD? Really? This is children's toys, and the reason the "real machines" he wanted to use are booked up and expensive is that they wipe the floor with this for much less overall cost. Even the term Beowulf cluster died out many years ago when people realised that, actually, joining lots of commodity machines together wouldn't really beat the performance of whatever-the-next-most-expensive processor was, and if it did, then not by much and only for highly-parallel workloads. And nowadays, you find that the average desktop GPU will wipe the floor with even such a system (unless you have a cluster of GPUs of course, but that's an actually *interesting* project even if it's still old hat).

    1. lurker

      Way to completely miss the point. This wasn't built in order to provide a lot of processing power for the price. It was done as an economical way to provide a platform on which the PhD student could develop massively parallel software without having to share access to the university's "real machines". In case you still don't get it, this was NOT his PhD project, it was something he built to help him with his PhD project.

      1. JEDIDIAH
        Linux

        Cub Scout projects

        Like the other guy said. The problem is that this is a project not for adults but for novices and children. It's a proof of concept system at best. In that regard, it's very much like any other cluster built out of ancient or woefully underpowered hardware. It's the butt of every joke about building a Beowulf cluster of something.

    2. Anonymous Coward
      Anonymous Coward

      The point of this cluster is not to see how fast it can go, but to work with a real (though slow) cluster. Clustering has its own issues, e.g. where the data lives and how you make sure the right data is in the right place at the right time (there's a tiny sketch of what that looks like in code at the end of this comment). Simulating a cluster on a big single-CPU box is not the same as doing it for real.

      There are lots of useful lessons to be learnt from doing this; perhaps one big one is that people incorrectly think a cluster is simply one big computer. It's not.
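
      To make that concrete, here's a minimal MPI sketch in C of the sort of thing you end up writing on a real cluster. It's purely illustrative and assumes an MPI implementation such as MPICH or Open MPI on every node - it's not taken from Kiepert's own code - but it shows the basic point: the root node owns the data, and nothing useful happens until you explicitly ship it out and gather the results back over the network.

        #include <mpi.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define CHUNK 4   /* elements handled by each node, purely illustrative */

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);

            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            /* Only the root node holds the full data set; every other node
               has nothing until it is explicitly shipped over the network. */
            double *all = NULL;
            if (rank == 0) {
                all = malloc((size_t)size * CHUNK * sizeof(double));
                for (int i = 0; i < size * CHUNK; i++)
                    all[i] = (double)i;
            }

            double local[CHUNK];
            MPI_Scatter(all, CHUNK, MPI_DOUBLE, local, CHUNK, MPI_DOUBLE,
                        0, MPI_COMM_WORLD);

            /* Each node works only on the slice it was sent... */
            double sum = 0.0;
            for (int i = 0; i < CHUNK; i++)
                sum += local[i];

            /* ...and partial results have to be gathered back explicitly too. */
            double total = 0.0;
            MPI_Reduce(&sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

            if (rank == 0) {
                printf("total = %f from %d nodes\n", total, size);
                free(all);
            }

            MPI_Finalize();
            return 0;
        }

      You'd launch it with something like mpiexec -n 32 ./scatter plus a host file listing the 32 Pis (the exact launch syntax depends on the MPI implementation). The same source runs happily on 32 VMs, but only the real cluster shows you the real network behaviour.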

    3. Charlie Clark Silver badge

      Re: wasting money

      For $2000 he could have just bought direct…

      This is a custom build with lots of unnecessary duplication for a particular requirement.

      I imagine any systems builder would be able to provide a rack-based system using a similar configuration for a fraction of the cost. Of course, they'd want to increase chip & core density to make it commercial. But being able to build systems at this price makes it a lot easier to try things out.

      1. James Hughes 1

        Re: wasting money

        The Raspi Foundation has always said that, from a pure power point of view, clusters of Pis do not make sense.

        However, they do make sense as a teaching tool, as this student has so helpfully shown.

    4. Imsimil Berati-Lahn
      Happy

      Fnarr fnarr...

      "http://www.titanuscomputers.com"

      hehehehehee... he said "tight anus"

  9. Tom 7

    Any performance figures

    I mean, how cost effective is it? If it costs the same as the Xeon Linguine PC, can it outperform it?

    1. Geoff Mackenzie

      Re: Any performance figures

      Guess the performance will compare pretty nicely once your Xeon PC is running 32 VMs so that it looks like a Beowulf cluster. There's a lot of point-missing going on in this comment thread; this is not built for throughput, but to work like a Beowulf cluster. The same goes for Glasgow's Raspberry Pi cloud - they're not planning to replace Google's data center anytime soon, just to understand and model it so they can develop for something that 'shape' without tying up the big one that does the real work.

      1. Tom 7

        Re: Any performance figures

        No, but figures would be nice and presumably not too hard to get - they may even be in the PDF, but I can't be arsed to read that.

        The Parallella bunch have just got Linux running on their chip, so clusters should be even cheaper soon enough.

        That's $99 for 90 GFLOPS...

  10. phil mcracken
    Thumb Up

    It's a shame the Pi doesn't support PoE (Power over Ethernet)...

    ... If that was the case he could have used any PoE switch to power all of them and wouldn't have needed the 5V pinout.

    Regardless, this looks like a brilliant (and fairly inexpensive) way to experiment with distributed computing.

  11. J.G.Harston Silver badge

    English fail

    "That's a lovely facility and is therefore much in-demand"

    No it's not. The facility is much in demand. It's an in-demand facility.

  12. Christian Berger

    It might be interesting once the main processor is usable

    Currently you can only use that tiny little ARM core on the Raspberry Pi, while the main processor, a large DSP used for its video outputs, is largely unusable, running some closed realtime OS.

    Opening up that DSP would enable a whole new set of applications. For example, you could do fast data processing, such as sonar or even radar, on that little board.

    1. James Hughes 1

      Re: It might be interesting once the main processor is usable

      No, not really. The GPU runs at 250MHz, and is a twin-core, 16-way vector/scalar unit (not a DSP). It's not hugely faster, and is ONLY faster when you can SIMD your code, which is not easy in itself, and harder since it's all done in assembler.

      The realtime OS on the GPU is ThreadX, btw.

      Opening it up wouldn't get the benefits many seem to think it would. There would be some, but not much for the majority of users.

  13. Anonymous Coward
    Anonymous Coward

    Turbo mode might be risky if he's running off SD cards. There are known issues with the corruption of SD cards when overclocking.
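
    For reference, the overclock lives in /boot/config.txt; something along these lines is roughly what the raspi-config Turbo preset applies. The values below are quoted from memory, so treat them as an assumption and check the official documentation rather than copying them blindly:

      arm_freq=1000      # ARM core, up from the stock 700MHz
      core_freq=500      # GPU/core clock
      sdram_freq=600     # SDRAM clock
      over_voltage=6     # extra voltage to make the higher clocks stable

    The dynamic turbo mode only ramps the clocks up under load; force_turbo=1 pins them at maximum all the time and, combined with over_voltage, sets the warranty bit, so the dynamic mode is the safer way to experiment.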

    1. James Hughes 1

      The SD card corruption is actually quite rare - I've never seen it in many months of overclocking. It is being looked into though, as it is definitely there. It may be power or interference related.

  14. Bushman1234
    Thumb Up

    It looks like Boise State University have their very own Orac - Excellent!

    1. Long John Brass
      Unhappy

      +1 for the Blake's 7 reference

      Doubt many of the kiddies here will get it though :(

  15. batfastad
    Linux

    Boffin?

    Did the boffin follow the guide published on this very website by any chance?

    http://www.theregister.co.uk/2012/09/12/raspberry_pi_supercomputer/

    Thanks to the awesome work of real boffins at Southampton Uni...

    http://www.southampton.ac.uk/~sjc/raspberrypi/

  16. Jim 59

    Nice

    Nice project. NB those addressing Eadon's posts: DON'T. Trolls want to turn every thread into a discussion about them, generally have nothing to say, offer no insights, and contribute nothing. Referring to his/her comments merely enables that.

    1. Anonymous Coward
      Anonymous Coward

      Re: Nice

      NB those addressing Eadon's posts. DON'T.

      .. but, but, but ..

      Who are we going to bait then?

      1. Anonymous Coward
        Anonymous Coward

        Re: Nice @AC 12:21

        "Who are we going to bait then?"

        Fair point, but I'd say point at and laugh rather than bait. For Eadon all the bait that's required is "Microsoft" or "Windows" in the headline. Then he's off before most people have even read said headline. He's such a card :) He sounds fairly sane in his posts on space exploration stuff, so the Windows stuff is evidence of some major OCD thing, I'd guess. Sometimes I can picture him sat in front of the screen and splattering it with blood when the throbbing vein in his forehead goes pop ...

        1. Geoff Mackenzie

          Re: Nice @AC 12:21

          I used to be like that too. I grew up on little micros, first met MS properly when I got my first PC, and gradually fell out of love with them between DOS 5.0 and Windows 95. After that I was borged by the penguin army, and moving from the (basically functional, but) irritatingly crude platforms I'd been used to over to this rather beautiful Unix derivative, combined with very strong opinions and over-confidence in my knowledge (I was a teenager, after all), made me quite the zealot for many years. It took half a decade of coding for Windows to convince me that it's actually a plucky little platform and even its quirks are quite endearing when you've seen it soldier on, day in and day out, for years, doing useful work with respectable consistency.

          So, as a former MS hater myself, I think it might have to do with an overestimate of the extent to which other platforms' advantages over MS products are actually *news* to anyone at this point, combined with a refusal to recognise that Microsoft actually do produce usable software. I still run Linux on my own boxen, with only one willing to occasionally boot Windows 7 for Kinect development; I still consistently prefer Linux when it's an option but no longer feel the need to ram it down everyone's throats to quite the same extent.

          1. JEDIDIAH
            Mushroom

            Re: Nice @AC 12:21

            > It took half a decade of coding for Windows to convince me that it's actually a plucky little platform and even its quirks are quite endearing when you've seen it soldier on, day in and day out, for years, doing useful work with respectable consistency.

            Soldier on? Consistency?

            You've got to be kidding.

            Microsoft lowering the bar with Windows is a very big part of why ARM devices are so trendy right now.

            1. Geoff Mackenzie

              Re: Nice @AC 12:21

              Yup, I'm only reporting what I saw. Several tens of Windows XP machines (mostly aging Dell desktops) running critical systems (the older stuff mostly written in VB6, the younger stuff mostly in C#). They were shielded from most threats by running few applications (mostly, one in-house application) and not having full access to the network (and no access to the internet), never running a browser and so on, but soldier on they did, surviving patches apart from a couple (I exaggerate not, I remember 2 occasions in 5 years across multiple versions where an OS or Antivirus update floored the machines briefly), and better management of the roll-out would have avoided those minor hiccups too.

              My own development laptop was a Windows XP machine and never once fell prey to malware in the 3 years of general bashing it took (including plenty of temporary software installations and removals, which left a little cruft as we've come to expect but didn't break anything) and ran absolutely consistently (a few little niggles, nothing serious) until it was replaced by a faster model (also on XP) which performed similarly well every day until I left the job. Obviously as a 'tech savvy' user I wasn't taking risks with it, but I wasn't treating it all that kindly either since I knew I could re-image it in half an hour if necessary. It never was necessary though.

              I know MS software has some weaknesses; all the Windows boxes were rebooted hilariously frequently from a Linux fan's perspective, and there were a million little silly faults (the file copy dialog's pointless animation stayed squint for *YEARS* between NT4 and Vista, where it was finally replaced IIRC) and the overall user experience is really quite bitty and unaesthetic in my view, but these are not serious problems. The machines did their actual jobs flawlessly, and the development tools (even lowly old VB6) were great.

              I'm still a GNU/Linux fan myself, as I say, and prefer it when I have the option. Usually, when I use Windows, it's for a bad reason (mostly platform lock-in, the Kinect being a case in point), and I don't like the way MS license their software or behave towards other businesses, but I'm not going to pretend the flaws are worse than they are and I no longer gurn when someone wants something built for Windows. Fair enough, it's a perfectly decent target for development and will run what I write just fine.

              1. Geoff Mackenzie

                Re: Nice @AC 12:21

                ... Just to be clear, if I'd been asked what platform these systems really ought to run on, I'd have recommended GNU/Linux. Any of my former colleagues would confirm that I raised the possibility at every excuse, ad nauseam. The machines were more powerful (and power-thirsty, and physically fragile) than they would have needed to be running Linux, and disabling virtually everything on the Windows boxes meant we were barely running any of the code we were (as far as I was concerned, needlessly) paying for licenses to use. It's not the way I would have designed the system if it had been up to me.

                Integration with other aspects of the technical environment would have been prohibitively difficult. We needed to talk to old COM drivers which would have needed to be replaced or wrapped somehow, and it just wasn't worth it given that the Windows boxes worked, and the infrastructure people liked the remote management and monitoring features they had. The development teams all had Win32 experience, with relatively few of us having done much with Linux. Overall, for that business, Windows made sense for the bulk of what they did. (There were a few Linux boxes in the mix, too, and more were arriving around the time I left).

                I don't think there's any sense being dogmatic.

  17. Geoff Mackenzie

    Oooh! Oooh! 'Us' too!

    Glasgow Uni has a Raspberry Pi cloud also:

    http://raspberrypicloud.wordpress.com/

    And they used Lego! I wasn't involved in the project (hence the apostrophes around 'us' in the title - just trying to give the impression I'm a real part of this institution, rather than a 31-year-old undergrad desperately trying to hang on for another year, heh) but saw a couple of presentations about it recently as all the level four projects were winding up.

    This isn't a Beowulf cluster of course, but it's another 'big' stack of tasty Pi so I thought I'd mention it.

  18. Anonymous Coward
    Anonymous Coward

    cheaper than a PC?

    I can't help thinking he would have been better off with a Mac Mini i7 upgraded to 16GB and an SSD, simply running 32 VMs on it for the sensor simulation. Smaller, more powerful, easily reconfigured, and as cheap or cheaper.

    As for those talking about running Windows on it: this is 32 separate and independent machines each with 512MB of RAM, not a 32-core SMP box where all cores have shared access to the same memory space. You'll need 32 licences plus some cluster layer on top to do anything useful (HTCondor, perhaps).

    1. JEDIDIAH
      Linux

      Re: cheaper than a PC?

      This too is a pretty old idea. There have been ready-made cluster-node VMs available for a number of years now. Before that, you could roll your own if you wanted.

      1. Dave Hilling

        Re: cheaper than a PC?

        That was the only thing I was thinking: why not build an OK PC and put 32 Linux VMs on it? Same concept, still pretty cheap. If you only went with, say, 128MB per VM you'd still only need 4GB of RAM. Sure, disk I/O would be an issue, but as he said it's not fast on the Pi cluster either. Neat idea and all, but I guess the cost-consciousness in me says use a spare PC with ESXi or KVM and do the same thing without spending anything, most likely.

        1. No, I will not fix your computer
          Stop

          Re: cheaper than a PC?

          Although it's not super-clear from the article, it's specifically the low-level IO that he wanted to get at with the cluster, which you would of course have to emulate in the hypervisor (as the whole point of a hypervisor is to present an agnostic interface); there's a sketch of the kind of access involved at the end of this comment.

          Of course the other reason to do it (just as valid in my opinion) is "because".
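
          For the curious, "low-level IO" on the Pi generally means poking the BCM2835's registers directly. The sketch below is purely illustrative - the GPIO base address is the one for the original Pi and the pin number is an arbitrary choice of mine, not anything from the write-up - but it's exactly the kind of access a hypervisor would have to trap and fake:

            #include <fcntl.h>
            #include <stdint.h>
            #include <stdio.h>
            #include <sys/mman.h>
            #include <unistd.h>

            #define GPIO_BASE 0x20200000u  /* BCM2835 GPIO block (original Pi) */
            #define PIN       17           /* arbitrary pin, for illustration */

            int main(void)
            {
                int fd = open("/dev/mem", O_RDWR | O_SYNC);  /* needs root */
                if (fd < 0) { perror("open /dev/mem"); return 1; }

                volatile uint32_t *gpio = mmap(NULL, 4096,
                                               PROT_READ | PROT_WRITE,
                                               MAP_SHARED, fd, GPIO_BASE);
                if (gpio == MAP_FAILED) { perror("mmap"); return 1; }

                /* GPFSELn: three function-select bits per pin; 001 = output. */
                gpio[PIN / 10] = (gpio[PIN / 10] & ~(7u << ((PIN % 10) * 3)))
                               | (1u << ((PIN % 10) * 3));

                gpio[7]  = 1u << PIN;  /* GPSET0: drive the pin high */
                sleep(1);
                gpio[10] = 1u << PIN;  /* GPCLR0: drive it low again  */

                munmap((void *)gpio, 4096);
                close(fd);
                return 0;
            }

          Under a VM there simply is no GPIO block at that physical address, so every access like this has to be trapped and emulated (or the code rewritten against whatever the hypervisor exposes), which is the point being made above.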

  19. All names Taken
    Happy

    Hee Hee intel?

    Beat that doodz @ intel :-)

  20. ericstob

    GPU

    I think to get anywhere near decent performance from this you will need to use the GPU, not the CPU.

  21. Wind Farmer

    Err: rpi00-rpi32 = 33

    Please explain node rpi00 outside the enclosure.

    1. This post has been deleted by its author

      1. Wind Farmer

        Re: Err: rpi00-rpi32 = 33

        If you examine the accompanying photo, each node is numbered rpiXX, where XX is in the range 01 to 32, inside the enclosure. There is also rpi00 on top of it. Better?

    2. No, I will not fix your computer
      Meh

      Re: Err: rpi00-rpi32 = 33

      My guess is that it's an "admin" machine, possibly running a different OS, perhaps even vanilla Raspbian, not required for the running cluster (possibly manageable from any OS with a browser).

      It's probably all documented in the PDF.

  22. agricola
    Gimp

    Massively-Parallel Computing: the elusive goal (so far....)

    One can only hope that this new generation of ultra-low-cost computers will keep resulting in the design and construction of more and more massively-parallel (MP) computers.

    Why? Because what is and has been missing from the MP scene in order to realize the full potential of MP computers is the SOFTWARE to fully make use of the hardware. As the cost of the hardware approaches zero--relatively speaking--we self-important computer engineers are going to have to stop kidding ourselves that we are doing something REALLY IMPORTANT by designing and building the MP machinery; we've got to finally 'fess up to the fact that all along, that's been the easy part. Now we've got to do the really HARD work: design the assemblers, compilers, and high-level languages to make it all really, REALLY WORK.

    And one more nice feature of low, low cost MP computers: don't bet against a ten- or twelve-year-old boy OR girl--or a group of them--doing the seminal work on the critical MP software (their advantage over us? They, unlike us, don't know that it can't be done).

    Regards...

  23. Steven Burn
    Thumb Up

    Ooooh

    Performance may not be stellar, but damn does that thing look good!

  24. Master Rod
    Linux

    Hmmmm, add a SATA RAID 5 to the cluster with openSUSE 12.3, twin screens, and, and droooool, drooool. Arrrghhh! I want one....

    Master Rod

    ps Uhm, sorry, must have lost it there.....

  25. Master Rod
    Linux

    Performance Numbers

    Will we get an update with performance numbers? I am curious cuz I have 48 Timex Sinclairs I want to cluster........

    Master Rod

