Google Go strikes back with C++ bake-off

In early June, Googler Robert Hundt published a paper comparing the performance of four programming languages: C++, Java, Scala, and a rather new addition to the world of systems programming, Google's own Go. Go is designed to provide the performance of a compiled language like C++ and the "feel" of a dynamic language like …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Holmes

    Some coder

Even I could tell the benchmarks were bullshit; I don't understand how they got so hyped up.

  2. nemo20000
    Headmaster

    How much?

    You cannot put the word “only” before the phrase “257MB of memory”.

    1. Anonymous Coward
      FAIL

      Computer science much?

      So you're saying the size of the problem being solved doesn't enter the equation at all?

    2. Anonymous Coward
      WTF?

      Of course you can!

      Otherwise how would anyone make money from this computing business?

I mean Windows and Office have approximately bugger all new features compared to 10 years ago for a good 80% of their users. I fell back on an old 3.11 / Office 6 laptop a year or two ago to get a report written and - aside from the crappy black-and-white DSTN screen - I didn't notice a huge difference. It still does styles, it still has the same fonts, it still has the same font/paragraph dialogues and it still does word count. What else do most people need?

      I'm really not sure what makes up all the bloat in modern software.

      1. sT0rNG b4R3 duRiD

        "bloat in modern software"

        In a monolithic kernel, I suspect it's device drivers/hardware support. Won't mention microkernels and such, that's such a thorny issue in itself.

Also consider the amount of work needed to drive a display screen. The deeper the bitplane, the more memory you'll need. Probably a lot of file buffering too.

        Then there are libraries.

        It's quite likely your old windows 3.1 programs were written the old school C way. There may have been C++ frameworks available then but I'm not completely sure, I certainly don't remember using them. I'm pretty sure MFC came much later.

        I'm not sure whether there was Delphi for Win 3.1 but you'd have to google that to be sure.

So ultimately, in those days there was less of that OOP stuff, and DLLs were probably neither that numerous nor that large.

        Still though, remember the DLL hell that existed even back then?

Then there are programmers who chop and change and probably don't even know wtf 90% of the rest of the source does. I'll bet most people working on Windows don't know what most of it does. Because it's not open to scrutiny, we'll never know. Hell, it's hard enough just to try and understand the Linux kernel, laid bare open.

        1. E 2

          I could be wrong

          Windows 3.0 was mostly written in Pascal, and Windows 3.1 also. The Win32 API at that time had a C binding but the Pascal heritage was there to see in the headers.

Me, I think the bloat has more to do with (1) collusion btwn M$ and Intel and hardware makers: every new Win release drove hardware sales... kickbacks are hardly unheard of in the computer and software business; (2) marketing - people believe(d?) that if Office N needed twice the hardware of Office (N-1) then the newer Office must be better; (3) ever newer languages and bindings from M$ - Win7 spends hideous amounts of time pre-compiling assemblies for .NET, and those assemblies take up a heck of a lot of disk; (4) and yes, DLL hell - Win7 maintains a DB of software titles and versions, and different versions of DLLs for each software title/version - the DLL Hell problem is now a fully managed feature of Windows.

        2. shayneoneill
          Alien

          Delphi

Yeah, there was a Delphi for Win16, and it ran like the clappers. We used to actually use it for writing little device-driver DLLs for our hardware, because Object Pascal was paranoid as all hell about type and pointer safety and would throw up all over its shirt then tell you to f*** off if you even looked at your structs in an unhealthy way. The compiler itself produced amazingly fast code, comparable to Borland's C++ compiler, which at the time was up there with Microsoft's and Intel's offerings - back in the day when Borland actually gave a damn about developers.

I actually miss Delphi a lot. It had a fantastic developer community that cared about its code and open-sourced a shit tonne of libraries to make devs' jobs easier, but we were always at the whim of a company that seemed to be slowly losing touch with its developer base and increasingly pricing its dev tools out of reach even for many professional developers, let alone the student base that's utterly vital for keeping a language alive. Such a shame too.

          1. Colin Wilson 2

            Delphi

            You miss Delphi? How come? It's still there. It still rocks. Faster, simpler & more powerful than ever. It's still the best language for writing anything that's not '.net'

            Ok - the job market for it is probably approaching zero - so I probably answered my own question :(

  3. Beebs

    shenanigans!

    "the Go program ran slightly faster – though the C++ program was slightly shorter and easier to write because the C++ code uses automatic deletes and allocation instead of a cache."

    So in fact the same set of optimisations were not applied to both the Go and C++ code, since memory caching is clearly an optimisation.

    1. Stephen 1
      Thumb Up

      DMA

      That's the bit that caught my attention. That isn't just an optimisation, that is the mother of optimisations. If the Go version had optimised memory caching and still didn't wipe the floor with the C++ version that didn't, then they should probably have kept quiet about it.

      1. cloudgazer

        I think you're misunderstanding

The problem was one of garbage collection, so the only way they could prevent their objects being deleted was to explicitly stick them in a cache - in C++ they could probably simply allocate a vector at the start and be done.

        To put this another way, in C++ it's easy to move allocation out of the inner loop, whereas in a garbage collected language it isn't.
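The idiom cloudgazer describes - hoisting allocation out of the inner loop by reusing one buffer - might look like this in Go. A toy sketch, not code from the benchmark; `process` is an invented stand-in for the real work:

```go
package main

import "fmt"

// process fills buf with results, reusing its backing array across calls.
// Because the buffer lives outside the loop, the garbage collector never
// sees per-iteration garbage.
func process(buf []int) []int {
	buf = buf[:0] // reset length, keep capacity
	for i := 0; i < 1000; i++ {
		buf = append(buf, i*i)
	}
	return buf
}

func main() {
	buf := make([]int, 0, 1000) // allocate once, up front -- the "cache"
	for iter := 0; iter < 100; iter++ {
		buf = process(buf)
	}
	fmt.Println(len(buf), cap(buf)) // 1000 1000
}
```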

  4. Buzzword
    Coffee/keyboard

    It's the Dvorak keyboard all over again

Remember the Dvorak keyboard? It was supposed to make us x% faster at typing, and thereby more efficient and productive. The only problem was that everyone was familiar with the old QWERTY layout. So nobody changed over.

C++ might not be perfect, but in this example Go is only slightly faster and the C++ code was actually easier and shorter to write. There doesn't appear to be a compelling advantage to Go. For existing C++ coders the new C++0x standard will be an easier transition.

    You owe me a new keyboard - a Dvorak one.

    1. Notas Badoff
      Megaphone

      It's a valid point and question

      I don't understand the downvotes. Go as a reaction to the faults of C++ and other languages is 'okay!', but C++0x as a reaction to the faults of C++ and other languages is 'bad'?

It will be interesting to compare the new language Go with the new version, C++0x. When of course it is available. But then, we waited how many years for Go? So I can rely on the Go-ites to give it a chance, right?

    2. Marvin the Martian

      What have we learned from the article?

      1/ languages that are more established are easier/faster to write a quite-efficient program for ["round 1" of C++ vs Go; Java and Scala not mentioned much anymore],

2/ more time programming/optimizing may make it faster. "I only hacked on it an hour; if I'd known... I'd have spent many hours" [and subsequently he/they did].

So we've not really learned a lot.

      Question remains, how many hours of programming did it cost to speed up that Go thing? And in the real world, how many is it worth to speed up any given computing task?

      In other words, nothing learned. Move on, nothing to see.

  5. John 137

    quoted

    "we are not skilled at writing efficient programs in either of those languages, so the comparison would be unfair" - Russ Cox

    If only Hundt had come to the same conclusion before trying to write a Go program.

  6. Tchou
    Pint

    Benchmarks

are nice for PR because it's so easy to make them say what you want.

But real-life applications can show different results because they have to deal with real-life complexity, which is NOT the mere sum of (theoretical) academic tests.

    For now, any serious work is done either in C or in C++.

    It sure does not mean it will be forever so.

    Show us a major serious application coded in Go (Chrome OS done in Go?)

    1. Destroy All Monsters Silver badge
      Facepalm

      "any serious work is done either in C or in C++"

      For varying values of "serious", I reckon.

      1. Destroy All Monsters Silver badge
        Meh

        A downvote?

        The parochialism of certain people is pathetic.

        "Incompetent and unaware of it" comes to mind.

        1. Tchou
          WTF?

          Parochialism

          or merely the way a computer *actually* works?

    2. E 2

      That's because...

      C is an excellent abstract model of a load/store CPU.

      C++ is an excellent OO and template-meta-programming extension of C.

      No scripting language I've ever used could be called a model of a load/store CPU. Thus serious work tends to get done in C or in C++.

Worth noting that FORTRAN is still used a lot for similar reasons to C.

  7. Anonymous Coward
    Mushroom

    Go ... nowhere

    Title says it all.

    Should have been named Goo. That's what you get when you mix Python with C++.

    Yes, flamebait.

  8. steve75oz

    When in doubt....

    "Then, they made the same changes to Hundt's C++ program, and in the end, the two programs ran at similar speeds."....

    ........Move the goal posts.

  9. Paul Shirley

    another unbalanced comparison

Now optimise the C++ and try to back-port it to Go... which still won't be any fairer a comparison! FFS, optimise the damn things natively or don't bother comparing them. One or the other. And if you aren't going to start with an optimised algorithm (ie no low-hanging fruit), go get a job cleaning toilets - you aren't a crack programmer.

    While I find the best way to optimise the Java I'm writing is to strip the native Javaisms and use the same strategies I would in C/C++, when it hits the 'extreme' level Java just can't express the evil hackery I resort to in C++ or assembler.

    It's going to be the same in any pair of languages, a certain level of code tweaking simply won't be translatable.

  10. jake Silver badge

    This is getting tiresome.

    Those of you who have been following along with us commentards for a while will know I prefer K&R C with inline assembler for serious coding. But I'm a hardware guy.

    I also do COBOL, FORTRAN, Forth, Smalltalk, perl, (ba)sh, yadda, yadda ...

But the basic bottom line is that bricks are bricks. All are used to build structures. Some are small and red, and used to build small red structures (your house, perhaps). Some are slightly larger and grey, and used to build largish grey structures (your dorm, perhaps). Some are longer, and made of iron, and used to build skyscrapers. Some are irregular, and used to build massive structures like Machu Picchu. Some are more massive, and used to build even more massive structures like Egypt's pyramids ... And some are even more massive, like Ada ;-)

    ALL of them have their place. Playing one off the other is a fool's errand ...

    1. Shakje

      While I agree with your general point...

      All of them have their place? What about whitespace?

      1. jake Silver badge

        @Shakje

Well ... I learned a lot from Intercal thirty years earlier ... whitespace has its place, if you understand why: writing your own compiler, in ANY language[1], teaches you more about programming in general than any number of university hours or years in the industry.

        [1] Except BASIC, of course ;-)

  11. Ian 3
    Trollface

    But isn't...

    ...c++0x just an attempt to make c++ a bit more like c#?

    /me runs away.

    1. E 2

      No it isn't.

      C-octothorpe syntax is quite different from C++0x, and C++0x does not introduce automagic garbage collection. C++0x just makes C++ somewhat more convenient to use.

    2. Tomato42
      Thumb Down

      C#

The bit that makes C# bad is the MS stamp on it (a "portable" language that works on one platform, please). I have to agree that you can write code fast in it, and I hate the darn thing.

  12. Anonymous Coward
    FAIL

    25 years of optimisations vs theWorld's biggest Advertising Hucksters

    Who do you think wins?

  13. Daegroth
    Thumb Up

    Google+ as a platform

With the release of the Google+ beta (which I didn't manage to register for before they closed it off), I am intrigued by Go - it's Google's own flava that they've been trying to push for some time now. If Google+ is to be at least as successful as Failbook then it'll need 3rd-party application support, and which language do you suppose they'll use to develop their own proprietary system?

Google have been very smart about all this, and love them or hate them you've got to admire the skill with which they can come late to a market and blitz the competitors through good positioning and a bit of forethought.

I can't say I'd like another variant language out there - I'm a standards kind of guy - but at least there's plenty of choice.

    1. trstooge
      Happy

      Google+ API(s) and language(s)

      +1 to your post... I did get to open my Google+ account using an invite sent directly from someone at Google and it's nothing short of amazing. It's better than failbook, no doubt about it. Of course now they'll need to get people on board but if the thing takes off, it's going to be amazing.

      I was wondering exactly the same: which API / language will they give to developers to write 3rd party apps?

      They're indeed persevering (after the Buzz and Wave fiasco) and they're moving very, very fast. More power to them.

  14. Pperson

56.92s to 3.84s? 1604MB to 275MB?

Reducing the Go code from 56.92s to 3.84s? That's a 15x improvement! And 1604MB to 275MB is 6x better. Unless the original Go code was rather inefficient, this almost sounds more like they changed the fundamental algorithms rather than merely tweaking the code. And as Paul Shirley said above, porting this back to C++ is not a useful comparison, since specific optimisations chosen for the Go version may well do nothing in C++.

But in any event, these new benchmarks only tell us how fast this particular algorithm can be made to run if you are intimately familiar with Go. I'd rather know how easy it is to write "good-yet-fast" code. Personally, I find it fairly easy to write decent, fast code in C++ but hard/nasty to write very fast code. C tends to be speedier but a bit clunkier; Java is a bit easier than either but a little slower for some things and a lot slower for others; and C# is better than Java on both counts but not as fast as C/C++. I'm inclined to believe the original Go results, which suggest Go is not there yet with the efficiency of its compiler output (Java got a lot better on this score over the years), so 'normal'/easy code is slower. You can of course tailor your code to do the compiler's tricks yourself, in which case it will run fast as well, but who wants to do that every time? Especially when you'll have to maintain the mess later!

    1. Joe Cooper

      no title

This kind of jump is normal when you optimize things. I once had some Java code that did some processing on a text file; it needed about 3 to 5 minutes to run on some systems, and it was getting annoying.

I rewrote it by tossing the regex and replacing it with a bespoke function, and the run time shrank to 10 to 15 seconds. Roughly a 20x improvement.

      It had nothing to do with anything except comparing some code I pulled from my rear to code that was carefully crafted according to the specifications.
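For a feel of the kind of rewrite Joe describes, here's a rough Go sketch (a hypothetical log-scanning example of mine, not his original Java): both functions count lines starting with "ERROR:", but one pays for a general regex engine while the other just compares bytes.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// countWithRegex runs a compiled pattern against every line.
func countWithRegex(lines []string) int {
	re := regexp.MustCompile(`^ERROR:`)
	n := 0
	for _, l := range lines {
		if re.MatchString(l) {
			n++
		}
	}
	return n
}

// countWithScan is the bespoke version: a plain prefix comparison,
// no matching engine involved.
func countWithScan(lines []string) int {
	n := 0
	for _, l := range lines {
		if strings.HasPrefix(l, "ERROR:") {
			n++
		}
	}
	return n
}

func main() {
	lines := []string{"ERROR: disk full", "all fine", "ERROR: timeout"}
	fmt.Println(countWithRegex(lines), countWithScan(lines)) // 2 2
}
```

Same answer either way; the bespoke scan simply does far less work per line.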

    2. Paul Shirley

      and it's still low hanging fruit

      You should probably go look at the code and see just how little they did to get that 15x. By my standards they're still picking off obvious low hanging fruit - how does any programmer not notice they could use a Vector and choose a Map? The code is now at about the state serious optimisation should be starting, good enough to use but still improvable *if needed*.

      I think all this demonstrates is how dangerous adopting the idioms of a language+support infrastructure can be, it seems to stop programmers thinking about whether the component they choose is the most appropriate one for the job. ...and that GC is evil ;)
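The Vector-vs-Map point translates directly to Go: with small, dense integer keys, a slice does a map's job with no hashing at all. A minimal sketch (my example, not the benchmark code):

```go
package main

import "fmt"

// tallyWithMap is the "reach for a Map" habit: general-purpose, but every
// insert hashes the key and may allocate.
func tallyWithMap(keys []int) map[int]int {
	m := make(map[int]int)
	for _, k := range keys {
		m[k]++
	}
	return m
}

// tallyWithSlice exploits dense keys in [0, size): one allocation,
// direct indexing, cache-friendly.
func tallyWithSlice(keys []int, size int) []int {
	s := make([]int, size)
	for _, k := range keys {
		s[k]++
	}
	return s
}

func main() {
	keys := []int{0, 1, 1, 2, 2, 2}
	fmt.Println(tallyWithSlice(keys, 3)) // [1 2 3]
}
```

Whether the slice is the *appropriate* component is exactly the judgement the idioms can lull you out of making.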

    3. trstooge

      The problem with Java

The problem with Java is not that it's slow: it can seriously rock. I'm doing scientific number-crunching in Java; its multi-threading facilities are very convenient, etc. But the problem with Java is still, after all these years, exactly the same: the GC is slow. So slow. People will keep telling you that nowadays it's fast but, no, it isn't. Not by any stretch of the imagination. The entire Java mindset and ecosystem has been built around the "objects-are-cheap-to-instantiate-and-GC" waste mentality, and as a result the GC needs to kick in constantly. Writing efficient Java software means trying to instantiate as few objects as possible and using libraries that do not instantiate objects. There's a reason why a HashMap<Integer,Long> is put to shame by Trove's TIntLongHashMap. 99.5% of Java programmers out there don't understand that. Even those that think they do really don't. That's why we've got no Photoshop written in Java. That's why there's hardly any serious desktop software written in Java. There are exceptions, of course, like JetBrains and their amazing IntelliJ IDEA (well, guess what, they're using Trove ; )

      1. Joe Cooper

        Java GC

        @trstooge

I've run into this before. I had some media-playback code that would load frames and display them, and there was a visible lurch due to garbage collection, so I wound up recycling image objects manually to circumvent the create-and-collect approach. It worked splendidly after that.
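Recycling objects instead of letting them become garbage is the same trick Go's standard library offers via sync.Pool. A rough sketch (the frame size and function names are invented for illustration):

```go
package main

import (
	"fmt"
	"sync"
)

// framePool hands out reusable frame buffers. Instead of allocating a
// fresh buffer per frame (and paying for its collection), finished
// buffers go back into the pool.
var framePool = sync.Pool{
	New: func() interface{} { return make([]byte, 1024) },
}

func renderFrame() []byte {
	buf := framePool.Get().([]byte)
	// ... decode pixel data into buf ...
	return buf
}

func releaseFrame(buf []byte) {
	framePool.Put(buf)
}

func main() {
	for i := 0; i < 100; i++ {
		f := renderFrame()
		releaseFrame(f) // recycle rather than create-and-collect
	}
	fmt.Println("rendered 100 frames")
}
```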

  15. Giles Jones Gold badge

    Learn something and stick with it?

Isn't this the whole problem? People keep throwing in new languages left, right and center just when some of us have begun to master the ones we are using.

It takes quite a while to become proficient in a language, and what is needed is refinement of what we have rather than new languages.

    Just like the world doesn't really need another human spoken language every five years.

  16. Anonymous Coward
    Anonymous Coward

    Qualitative differences

    Reminds me of the old adages about lying with statistics. It's hardly surprising to see this stuff coming out of Google, a company built on fuzzy statistics. These papers are also a reminder that computer scientists aren't necessarily great coders.

    What struck me about Hundt's paper was the clarity, brevity, and relatively good performance of Scala, apparently with less implementation/optimization effort than all 3 other languages. The Go code looked cryptic by comparison. I'm not switching languages because of this, but I do keep an eye out for C/Python alternatives. Scala just keeps looking better and better. Now if only it allowed inline assembly... :)

  17. Anonymous Coward
    Anonymous Coward

    Is there a runtime or not?

    Does Go use a runtime or does it compile to native code?

A runtime-based language as fast as C++ would be somewhat impressive.

    If it compiles to native code then I would expect speed similar to C++ but the compiler is as important as the language.

    1. cloudgazer

      those aren't mutually exclusive

Every language has a runtime, even C++. Go clearly has a more sophisticated runtime as it is garbage collected, but then there are GC'd languages that compile to native code - such as Objective-C. I think what you are really asking is: is it interpreted bytecode or is it native?

  18. -tim
    Go

    Go's future

I'll repeat this from another Go thread... I like the direction Go is headed, but it lacks the proper tools to do currency right. If it had a BCD or fixed-point type (like every modern CPU supports), then it could be very big in many fields that are still fighting over floating-point money.
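Pending a built-in type, the usual workaround is integer fixed point. A toy Go sketch (Cents is my invention, not a language or library feature):

```go
package main

import "fmt"

// Cents stores money as an integer count of hundredths, so addition is
// exact -- unlike binary floating point. (Toy type: no negatives,
// overflow checks or rounding modes.)
type Cents int64

func (c Cents) Add(d Cents) Cents { return c + d }

func (c Cents) String() string {
	return fmt.Sprintf("%d.%02d", c/100, c%100)
}

func main() {
	price, tax := Cents(10), Cents(20) // 0.10 and 0.20
	fmt.Println(price.Add(tax))        // 0.30, exactly

	a, b := 0.1, 0.2
	fmt.Println(a+b == 0.3) // false -- float64 cannot say the same
}
```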

  19. JDX Gold badge

    The solution to programming being hard...

    ...is NOT to keep making more and more programming languages.

  20. Aitor 1

    What we need is the algorithm

We should know what kind of algorithm implementation we are comparing... otherwise it is moot.

As for optimizing... these days most code goes to live servers unoptimized, and it is only optimized if the servers can't cope, as hardware, for low user load, is way cheaper than people.

  21. Francis Fish
    Paris Hilton

    Long term maintenance?

    I wonder how readable each was? It's long term maintenance that bothers me. The fact that you can optimise the hell out of some code doesn't mean it's readable or won't be so brittle it blows up in your face if you want to make a minor change.

I'm very leery of optimising code. Profilers can often point to subtle bugs where you're doing things too many times or initialising something expensive a lot, but I'm always skeptical about optimising things too much, as it's usually me that has to pick things up afterwards.

  22. Torben Mogensen

    Pointless comparisons

    It is pointless to compare languages for speed, as what you really measure are specific implementations of these languages. And since the programs are not identical, this reduces the measurements to comparing two different programs in two different implementations of two different languages.

    This just leads to a coding war where the proponents of each language compete to find better algorithms and tune their programs to better fit the respective implementations of the languages. Claiming that this is a meaningful comparison of the languages themselves is silly.

Obviously, there are some very slow implementations of some languages around, and some languages have features that are difficult (though not impossible) to compile to efficient code, so you can easily have cases where the standard implementation of one language gives faster code than the standard implementation of another language. But, again, it is not really the languages that are being compared. Even so, I think language designers should try to avoid such features if they care about speed.

    So what are these features? Certainly not GC -- this only adds a couple of % overhead over stack allocation and can even have benefits over C-style manual heap allocation due to lower fragmentation. Here is my take on some features that matter more:

- Reflection and dynamic types: These require all values to carry type information at runtime, so each newly created value needs to be tagged with a description. In all but the simplest cases, this overhead is very difficult to get rid of during compilation.

    - Implicit null pointers in every type: This forces tests on every dereference and, like above, these are hard (though not impossible) to get rid of during compilation.

- Low-level features: While these may be easy to compile for the specific hardware the feature is derived from, they may be an extremely bad fit for other hardware. For example, pointer arithmetic is easy enough on systems that have a single flat memory space, but if you have distributed or segmented memory, it may be very costly to implement.

The first is typical of scripting languages, the second of most OO languages and the third of C-like languages. So none of these families are, IMO, ideal if you want your programs to run fast - especially if you want this to be true over a wide range of hardware.
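The cost of the first item - values carrying type tags at runtime - is visible even in Go, where the reflect package opts into exactly that kind of dynamic inspection. A small sketch of mine:

```go
package main

import (
	"fmt"
	"reflect"
)

// sumTyped knows its argument is []int at compile time: the loop is
// plain integer adds.
func sumTyped(xs []int) int {
	total := 0
	for _, x := range xs {
		total += x
	}
	return total
}

// sumReflect accepts any slice and must interrogate runtime type
// information on every element -- the per-value overhead described above.
func sumReflect(xs interface{}) int64 {
	v := reflect.ValueOf(xs)
	var total int64
	for i := 0; i < v.Len(); i++ {
		total += v.Index(i).Int()
	}
	return total
}

func main() {
	xs := []int{1, 2, 3, 4}
	fmt.Println(sumTyped(xs), sumReflect(xs)) // 10 10
}
```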

