Google pits C++ against Java, Scala, and Go

Google has released a research paper closely comparing the performance of C++, Java, Scala, and its own Go programming language. According to Google's tests (PDF), C++ offers the fastest runtime of the four languages. But, the paper says, it also requires more extensive "tuning efforts, many of which were done at a level of …

COMMENTS

This topic is closed for new posts.
  1. DZ-Jay

    Title

    >> C++ "also requires more extensive "tuning efforts, many of which were done at a level of sophistication that would not be available to the average programmer"

    But the question is: did tuning programs in the other languages improve them over the stock C++ version? If not, then it doesn't really matter that C++ is hard to optimize, when you get the speed virtually for free.

    dZ.

    1. sT0rNG b4R3 duRiD

      Agreed

      But concurrency in C or C++ is a real bitch. C++, when it comes to it, is downright fugly.

      And Java is a real bitch too... One of my pet peeves - I hate it when the GC cuts in just when you don't want it to and slows things down. But it is slightly more elegant than C++.

      Scala looks interesting but I've not the time for the moment.

      Go?

      I'll stick with C and C++ and bitch about that instead.

      1. Giles Jones Gold badge

        Fix C++?

        This is what is annoying though. Big companies and individuals look at C++ and then list its weaknesses before embarking on creating yet another language instead of improving C++ and moving things forwards.

        Improving C++ improves games and productivity software which are never going to be written in a language that requires a VM or doesn't compile to machine code.

    2. Destroy All Monsters Silver badge
      Boffin

      No use putting go-faster stripes on your family van

      "But the question is, did tuning programs in the other languages improved them over the stock C++ version? If not, then it doesn't really matters that C++ is hard to optimize, when you get the speed virtually for free."

      No. The tradeoff is:

      Java may be slower and have larger memory footprint

      but

      you get rid of the C++ "writing time" memory management problems, debugging efforts and all-around shoot-yourself-in-the-foot possibilities. The skillset needed is also lower [concomitantly, the "do not interrupt me now" requirement is weaker], which, believe me, is a _very_ good thing.

      Ok, back to writing servlets in Groovy.

      1. E 2

        You have heard of RAII?

        If you know how to write in C++ then you know how to write in C++.

        I'm quite good at coding in C++ - I do not assume that ability and the methods I use carry over to, say, Java.
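
        For anyone who hasn't: the idiom in a nutshell (a toy sketch; the class and file name are invented for illustration):

        #include <cstdio>

        // RAII: acquire the resource in the constructor, release it in the
        // destructor, so cleanup happens automatically when the object goes
        // out of scope - even on early return or exception.
        class File {
            std::FILE* f_;
            File(const File&);              // copying disabled (pre-C++11 idiom)
            File& operator=(const File&);
        public:
            explicit File(const char* path) : f_(std::fopen(path, "r")) {}
            ~File() { if (f_) std::fclose(f_); }
            std::FILE* get() const { return f_; }
        };

        void use() {
            File f("data.txt");             // acquired here
            // ... work with f.get() ...
        }                                   // released here, no manual fclose()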

    3. jibal

      "But the question is ..."

      It's only a question for those who didn't read the paper.

      "If not, then it doesn't really matters that C++ is hard to optimize, when you get the speed virtually for free."

      If 1 == 2, then all sorts of things follow.

  2. peyton?
    Paris Hilton

    Optimization

    Is this strictly humans tweaking it, compiling a la "-O3," all of the above...?

    1. Anonymous Coward
      IT Angle

      that would be

      Very limiting for C++. I know C++ programmers who don't use compiler optimisations, because it would slow down their code. That's because they have a deep understanding of the language architecture at the machine level.

      I want to point out that C is even better for this, because the language is infinitely simpler.

      With higher-level languages, especially the ones that run on top of a virtual machine, it is simply impossible.

      To answer your comment, the article said that the C++ programs were optimized beyond the capabilities of average programmers. I guess the average C++ guy can do -O3.

      1. Ken Hagan Gold badge

        Re: that would be

        "I know C++ programmers who don't use compiler optimisations, because it would slow down their code. That's'because they have a deep understanding of the language architecture at the machine level."

        Er, no. That's because they are C programmers. The C++ standard library is made up of templates that are written to be as broadly applicable as is feasible. Consequently, they require function inlining, constant propagation and the removal of unreachable code to even get close to acceptable performance levels.

        Even in pure C, it would be somewhat heroic to write code that was already as good as the optimiser can do for free, and vanishingly unlikely that you could beat a modern compiler on a large body of code. (Even where you can, it is *then* unlikely that you couldn't do better still by dropping down to assembly language for that hotspot.) Perhaps your friends made some measurements about 30 years ago and haven't revisited their assumptions since.

        1. Anonymous Coward
          Anonymous Coward

          I agree for most but

          Not all. You're right about C.

          You're wrong about compilers.

          These observations are made on actual products.

          See Trustleap.com for more info.

          And I am not tied to Trustleap or their products in any way.

          1. Ken Hagan Gold badge

            Re: trustleap.com

            Looks like snake oil to me. They claim that they are over 5 million times more efficient than IIS in serving web pages, but also that they aren't bottlenecked on the CPU or network. Assuming a network pipe measured in Gigabits, that means that IIS cannot manage more than a few kilobits of network traffic under any circumstances.

            I know this is IIS we're talking about, but that sounds a little harsh.

      2. Anonymous Coward
        Anonymous Coward

        Compiler Optimizations make faults in production systems harder to debug

        An ISV I previously worked for didn't optimize a lot of their C/C++ code as it was difficult to debug using a core file from a customer's critical enterprise production system.

        When debugging a production problem, time is of the essence. You also have the issue of having to be familiar with different debuggers/compilers for each platform too.

        Sure C/C++ could be faster than Java, but Java has a lot of benefits, such as security built-in, less chance of hanging yourself, platform portability, tools common across platforms, huge availability of libraries. Java is not always suitable and C/C++ sometimes can be justified.

      3. jibal

        Your statement could only be made ...

        ... by someone who knows nothing about how compilers work and has never benchmarked their code. I grew up on assembly language and wrote C for 30 years and I know for a fact that your statement is nonsense because I can't control the details of the generated code unless my C program consists entirely of asm statements.

        You also know nothing of how virtual machines with JIT compilation work. And finally, you know nothing of how higher level languages like Scala make it more feasible to use much more efficient algorithms.

  3. cum grano salis
    FAIL

    yay

    Yay for irrelevant benchmarks.

  4. Rocketman
    Boffin

    C++ > The Rest

    Hm, nearly irrelevant. The very few places where code must run fast must be coded to be fast, optimization done by way of intelligent design. All the rest of the code doesn't really matter.

    1. Ken Hagan Gold badge
      Mushroom

      Re: nearly irrelevant

      Agreed, as long as you accept that *most* code needs to run faster than it does at the moment.

      I'm fed up with people telling me that code speed no longer matters, when I can *still* out-type programs on a machine that is several orders of magnitude faster than the one I was using 20 years ago.

      Google's paper is quite interesting. The raw results are that C++ is about 2.5 times faster than the best of the rest and the worst is about the same factor further behind. That's quite a big hit for the worst case (Go).

      Furthermore...

      When the sample programs were offered to Google employees to tune, a roughly similar improvement (3x) was seen in every case. For C++, there were easy pickings by replacing O(n) methods with O(1) methods in the standard library, and changing data structures to improve locality. I'd call these "low-hanging fruit" rather than "sophisticated". For Java and Scala, one could tune the garbage collection. For Scala, one could adopt a more functional programming style. I don't know how clever those changes were, because I don't use those languages, but let's assume they are *not* (in Google's words) "complicated and hard to tune".
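
      To give a flavour of those C++ changes (a made-up sketch in the same spirit, not the paper's actual code):

      #include <list>
      #include <vector>

      // Before: std::list<int>::size() may be O(n) in pre-C++11 libraries,
      // and list nodes are scattered through memory (poor locality).
      bool any_edges_slow(const std::list<int>& edges) {
          return edges.size() > 0;
      }

      // After: an O(1) emptiness test, and contiguous storage that the
      // cache actually likes.
      bool any_edges_fast(const std::vector<int>& edges) {
          return !edges.empty();
      }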

      The point is that we're talking about a factor of "several" performance improvement that is available with code reviews or a change of language, and probably an order of magnitude that is available if you do both. It would take Intel or AMD *years* to deliver the same performance improvement, and then you'd have to pay for the new hardware, so it is clearly worthwhile, but it doesn't happen for some reason.

      Maybe the average programmer is just crap?

      1. David Dawson
        Joke

        average programmer

        Maybe the average programmer is just crap?

        -----

        could be truer than you think... for a given type of average, half of them are going to be even worse than that...

    2. E 2
      Joke

      @Rocketman

      So are you suggesting that God, not humans, created software optimization?

    3. Charles Manning

      Fast code is very important

      Whether that be in:

      Embedded devices - where fast code can achieve more with a small micro, reducing product cost.

      Or mobile devices (phones & laptops) - where fast code uses fewer clock cycles, so the battery lasts longer or you can use a smaller and cheaper battery.

      Or data centres - where fast code means more work can be done with fewer CPUs, reducing costs and power consumption.

  5. Tom 7

    Ah the old WTWSDNMS bottleneck

    Wishing Things Were Simpler Does Not Make It So.

    No computer language can make the problem you are trying to solve any less complicated. A language that makes certain aspects of problem solving 'easier' will just mean you spend more time hoping the problem you are trying to solve consists only of those things your language has 'simplified'.

    Think of it a bit like weightlifting - lifting 5kgs is easy but if you have to lift 1000kg then you are going to have to make 200 lifts. If you practice a bit so you can lift 100kg in a go then it only takes 10 lifts. 'Modern' languages encourage you to leave the weights alone and go and stand on the sides of the running machine watching a video instead.

    1. disgruntled yank

      lifts

      Perhaps when you have a refrigerator to move you put in some time at the gym first. Myself, I'd be more inclined to find a two-wheeled dolly, maybe one with a strap to secure the load (an "appliance jack" as they called them when I was a skinny stockboy).

      But I do admire those who lift their weights while standing in the middle of the running machine...

    2. jibal

      When *comparing* programming languages,

      that's equivalent to saying that no computer language can make the problem you are trying to solve any more complicated, which is obviously not true.

      "'Modern' languages encourage you to leave the weights alone and go and stand on the sides of the running machine watching a video instead."

      This is the sort of statement that someone who wrote assembly language on 24x80 terminals 40 years ago but hasn't written a line of code since might make.

  6. bazza Silver badge
    Angel

    Caution - old git moan

    Once upon a time C/C++ were the primary languages of choice for software development. A C/C++ programmer was an 'average' software developer because that was almost the only language in use. Now Google are saying that they're effectively superior to the 'average programmer'!

    Sorry about the gap, was just enjoying a short spell of smugness.

    @sT0rNG b4R3 duRiD. Concurrency in C is just fine, it's no better or worse than any other language that lets you have threads accessing global or shared memory.

    I don't know about yourself, but I prefer to use pipes to move data between threads. That eliminates the hard part - concurrent memory access. It involves underlying memcpy()s (for that's what a pipe is in effect doing), which runs contrary to received wisdom on how to achieve high performance.

    But if you consider the underlying architecture of modern processors, and the underlying activities of languages that endeavour to make it easier to have concurrency, pipes don't really rob that much performance. Indeed, by actually copying the data you can eliminate a lot of QPI / Hypertransport traffic, especially if your NUMA PC (for that's what they are these days) is not running with interleaved memory.

    It scales well too. All your threads become loops with a select() (or whatever the Windows equivalent is) at the top, followed by sections of code that do different jobs depending on what's turned up in the input pipes. However, when your app gets too big for the machine, it's easy to turn pipes into sockets, threads into processes, and run them on separate machines. Congratulations, you now have a distributed app! And you've not really changed any of the fundamentals of your source code. I normally end up writing a library that abstracts both pipes and sockets into 'channels'.
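
    In outline, every worker loop looks something like this (a toy sketch; error handling and thread creation omitted):

    #include <unistd.h>        // pipe(), read(), write()
    #include <sys/select.h>    // select()

    // One pipe per worker: block in select() until something turns up on
    // the input fd, then dispatch. The read() is the memcpy() in disguise.
    void worker_loop(int in_fd) {
        char msg[256];
        for (;;) {
            fd_set fds;
            FD_ZERO(&fds);
            FD_SET(in_fd, &fds);
            if (select(in_fd + 1, &fds, NULL, NULL, NULL) <= 0) break;
            ssize_t n = read(in_fd, msg, sizeof msg);
            if (n <= 0) break;
            // ... dispatch on message type and do the job ...
        }
    }
    // Swap the pipe fd for a socket fd and the same loop runs distributed.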

    Libraries like OpenMPI do a pretty good job of wrapping that sort of thing up into a quite sophisticated API that allows you to write quite impressive distributed apps. It's what the supercomputer people use, and they know all about that sort of problem with their 10,000+ CPU machines. It's pretty heavyweight.

    If you're really interested, take a look at

    http://en.wikipedia.org/wiki/Communicating_sequential_processes

    and discover just how old some of these ideas are and realise that there's nothing fundamentally new about languages like node.js, Scala, etc. The proponents of these languages who like to proclaim their inventions haven't really done their research properly. CSP was in effect the founding rationale behind the Transputer and Occam. And none of these languages do the really hard part for you anyway: working out how a task can be broken down into separate threads in the first place. That does need the mind of a superior being.

    1. John Smith 19 Gold badge
      Thumb Up

      @bazza

      "and discover just how old some of these ideas are and realise that there's nothing fundamentally new about languages like node.js, SCALA, etc. "

      It's a point *well* worth reminding people about.

      This process of threads -> processes and pipes -> sockets sounds almost like a candidate for pairs of macro definitions with a flag to shift (SOLO?) to determine which set of definitions gets used.

    2. Destroy All Monsters Silver badge
      Meh

      Your smugness will cause your downfall, little one.

      "there's nothing fundamentally new about languages like node.js, SCALA, etc. The proponents of these languages who like to proclaim their inventions haven't really done their research properly"

      These people are done their research quit well, thank you. They are even saying so explicitly:

      http://www.scala-lang.org/node/143

      "Scala rests on a strong theoretical foundation, as well as on practical experience. You can find below a collection of papers, theses, presentations, and other research resources related to the Scala language and to its development."

      And then: http://www.scala-lang.org/node/143#papers

      1. Ken Hagan Gold badge

        Re: smugness

        I think the OP's smugness was directed against the proponents (fanbois, in register-speak) rather than the creators of these languages.

        If that is the case, then I think his point stands. There has been very little fundamentally new in programming language design for several decades.

        1. bazza Silver badge
          Pint

          @Ken Hagan

          Thank you Ken; one's smugness was indeed primarily derived from Google implying that C/C++ programmers were superior beings...

          My beef with proponents of languages like Scala and node.js is that yes, whilst they are well developed (or on the way to being so) and offer the 'average programmer' a simpler means of writing more advanced applications, they do not deliver the highest possible performance. This is what Google has highlighted. Yet there is a need for more efficiency in data centres, large websites, etc. Lowering power consumption and improving speed are increasingly important commercial factors.

          But if that's the case, why not aim for the very lowest power consumption and the very highest speed? Why not encourage programmers to up their game and actually get to grips with what's actually going on in their CPUs? Why not encourage universities to train software engineering students in the dark arts of low-level programming for optimum computer performance? C++, and especially C, forces you to confront that reality, and it is unpleasant, hard and nasty. But to scale as well as is humanly possible, you have to know exactly what it is you're asking a CPU+MMU to do.

          From what I read, the big successful web services like Google and Amazon are heavily reliant on C/C++. We do hear of Facebook, Twitter, etc. all running into scaling problems; Facebook decided to compile PHP (yeeuuurk!) and Twitter adopted Scala (a halfway house in my opinion). The sooner services like them adopt metrics like 'Tweets per Watt' (or whatever), the sooner they'll work out that a few well paid C++ programmers can save a very large amount off the electricity bill. Maybe they already have. For the largest outfits, a 10% power saving represents $millions in bills every single year; that'd pay for quite a few C/C++ developers.

          A little light thumbing through university syllabuses reveals that C/C++ isn't exactly dominating degree courses any more. It didn't when I was at university 22 years ago (they tried to teach us Modula-2; I just nodded, ignored the lectures and taught myself C. Best thing I ever did). Google's paper is a clear demonstration that the software industry needs C/C++ programmers, and universities ought to be teaching it. Java, Scala, JavaScript, node.js plus all the myriad scripting languages are easy for lazy lecturers to teach and seem custom designed to provide immediate results. However, immediate results don't necessarily add up to well engineered scalable solutions. Ask Facebook and Twitter.

          1. This post has been deleted by its author

      2. bazza Silver badge

        @ Destroy all monsters; Less of the little one, more of the old one

        My whole point is that there's nothing really new in Scala's concurrency models. Both the Actor and CSP concurrency models date back to the 1970s. Pretty much all that fundamentally needs to be said about them was written back then. Modern interpretations have updated them for today's world (programmers have got used to objects), but the fundamentals are still as was.

        [As an aside, I contend that a Communicating Sequential Process is as much an 'object' as any Java class. It is encapsulated in that its data is (or at least should be) private. It has public interfaces; it's just that the interface is a messaging specification rather than callable methods. And so on.]

        No one in their right mind would choose to develop a programme as a set of concurrent processes or threads. It's hard, no matter what language assistance you get. The only reason to do so is if you need the performance.

        CSP encouraged the development of the Transputer and Occam. They were both briefly fashionable from the late '80s to the very early '90s, when the semiconductor industry had hit a MHz dead end. A miracle really; their dev tools were diabolically bad even by the standards of the day. There was a lot of muttering about parallel processing being the way of the future, and more than a few programmers' brows were mightily furrowed.

        Then Intel did the 66MHz 486, and whooosh, multi-GHz arrived in due course. Everyone could forget about parallel processing and stay sane with single-threaded programmes. Hooray!

        But then the GHz ran out, and the core count started going up instead. Totally unsurprisingly, all the old ideas crawl out of the woodwork and get lightly modernised. The likes of Bernard Sufrin et al do deserve credit for bringing these old ideas back to life, but I think there is a problem.

        Remember, you only programme concurrent software if you have a pressing performance problem that a single core of 3GHz-ish can't satisfy. But if that's the case, does a language like Scala (that still interposes some inevitable inefficiencies) really deliver you enough performance? If a concurrent software solution is being contemplated, perhaps you're in a situation where ultimate performance might actually be highly desirable (like avoiding building a whole new power station). Wouldn't the academic effort be more effectively spent in developing better ways to teach programmers the dark arts of low-level optimisation?

        1. Destroy All Monsters Silver badge
          Holmes

          @bazza: I see what you mean...

          Apologies for the earlier flaming. Been twitchy for the last few months. Information overload probably.

          >> My whole point is that there's nothing really new in Scala's concurrency models. Both the Actor and CSP concurrency models date back to the 1970s.

          Well ... yes. Although Milner's "Communicating Mobile Processes" added something. No, I haven't managed to fully get through his book yet.

          >> CSP encouraged the development of the Transputer and Occam. They were both briefly fashionable from the late '80s to the very early '90s, when the semiconductor industry had hit a MHz dead end.

          Sure did. I had two of those PC-ISA transputer evaluation boards. The T400 CPU [2 links only] is still in my "collection", not yet encased in lucite.

          >> Remember, you only programme concurrent software if you have a pressing performance problem that a single core of 3GHz-ish can't satisfy. But if that's the case, does a language like Scala (that still interposes some inevitable inefficiencies) really deliver you enough performance?

          Mnnno... The trend toward less powerful ("green/power-saving") cores in multicore packages, as well as the demand for less-specialized applications for which multiple processes make sense (servers that need more than a single event-handling loop, for example), pushes in the direction of giving developers tools that enable them to actually exploit all this hardware, with abstractions that are better than the ones standard Java itself provides.

          Nothing that could not be had in earlier approaches, to be sure (Occam. Limbo. Linda for IPC. Or you could whip out the MPI library), but now the demand for easy multi-processing can be satisfied with something that is in the general orbit of the Java Mass [i.e. runs where the JVM runs, can use the Java libraries, can integrate with existing code, can be sold internally, can be used with a known IDE, has a somewhat familiar syntax], so it's arousing interest.

          Thus Scala. A bit further, with less-familiar syntax, Clojure with its "transactional memory". And even further, with less-familiar syntax and on a non-Java VM, Erlang.

          >>Wouldn't the academic effort be more effectively spent in developing better ways to teach programmers the dark arts of low level optimisation?

          When you write Scala code, it will run on a VM, yes. But then again, the VM will compile it down at runtime, and if you need to, you can optimize that. If the language-level abstraction is well chosen, that should give you all the optimization you need.

          Obligatory references:

          Java developer in a multi-core era:

          http://kadijk.net/interviews/Java%20developer%20in%20a%20multi-core%20era.pdf

          Communicating Sequential Processes for Java:

          http://www.cs.kent.ac.uk/projects/ofa/jcsp/

          Clojure and concurrent programming:

          http://clojure.org/concurrent_programming

          Communicating Mobile Processes. Introducing occam-pi:

          http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.159.3693&rep=rep1&type=pdf

          1. James Hughes 1

            Multiprocess programming

            Every mobile phone (not talking apps - talking the code that makes the phone work) uses multiple cores and multiple processors to get the required performance at low power (and hence GHz) rates. There are quite a few very good coders out there working in that area (usually C rather than C++, dropping down into assembler where necessary). It's not just mobiles, of course; many embedded devices are just the same. Being competent in concurrency is more common than a lot of people think...

  7. Christian Berger

    How did they find multiple C++ experts?

    I mean, there are maybe about 10 real C++ experts around. The rest are people who believe they know C++, but instead only know a subset of an older version of C++, thus being likely to fall into one of the many pitfalls.

    The main problem is that people tend to believe that C++ is a high-level object oriented language when it's instead just a macro assembler with a really strange syntax. Mind you, it would only be half the problem if the standardisation people would get it. Instead they add more and more non-orthogonal features every few years.

    I wonder why nobody talks about OOPascal anymore. There are now several free compilers around. It's fast and has just the features you need for C++-style OO. Most importantly some misfeatures like implicit object copies have been removed. The := operator only copies a handle to the object, not the object itself.

    There's even a platform independent GUI toolkit coming with it. It even looks native on every platform.

    1. Destroy All Monsters Silver badge
      Go

      The market, Chris...

      "I wonder why nobody talks about OOPascal anymore"

      For the same reason that no-one was talking about Object Oberon or Oberon 2 before Java 1.0 downloads clogged the T1 lines.

      I was amazed at the uptake back then. People were torturing themselves with C++ like crazy and bitching and moaning about it then all of a sudden...

    2. Ken Hagan Gold badge

      multiple C++ experts

      You clearly aren't one of the ten. Rather a lot of standardisation effort has gone into making the existing features more orthogonal. You'd be hard-pushed to find any non-orthogonality in the major features of the language now.

      Oh, and in C++, if your class is an object type, you can prohibit copying with a single line in the declaration of your object base class. If it is a value type, deep copying is exactly what you want. This has been known for about half a century, ever since somebody managed to change the value of 1.0 in their (very early) Fortran program. C++ has no "mis-features" in this area. It merely gives you the tools to support more than one style of programming.
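
      Concretely, something like this (essentially what boost::noncopyable does):

      // Object types: forbid copying once, in a base class.
      class noncopyable {
      protected:
          noncopyable() {}
      private:
          noncopyable(const noncopyable&);              // declared, never defined
          noncopyable& operator=(const noncopyable&);
      };

      class Widget : private noncopyable { /* ... */ }; // the single line

      // Value types: do nothing; the compiler-generated deep copy is
      // exactly the semantics you want.
      struct Point { double x, y; };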

      1. wayne 8

        1.0 = x

        "This has been known for about half a century, ever since somebody managed to change the value of 1.0 in their (very early) Fortran program. "

        I had forgotten all about that feature from Fortran 101, good for a laugh. Thanks.

  8. Anonymous Coward
    FAIL

    Lost the plot

    I stopped programming in C++ when I switched to Windows programming. A simple "hello world" was at least 500 lines of code, and frameworks like MFC only complicated things further. To get anything done you had to code in VB6, which took away all the power. Then C# came out and it was a reasonable compromise. I'm now writing Java, Javascript, Flex, and a myriad of other languages, wishing they all just had the speed (and pointers) of C++ but with the libraries of C#.

    I really think that is where they lost it (Microsoft anyway), in the complete mess of libraries. C#'s biggest advantage was not pointer abstraction or garbage collection, but a huge Java-like framework that let you do anything with a supporting class, from image codecs to network access.

    I've seen LLVM, provably secure compilers, that company that has a hardware "VM" for Java, protected mode processors, etc. I am looking at it now going, "what the hell is so great about a VM you can't do in a real processor?"

    C++ had its faults, but it wasn't the language, it was the libraries. Let's just circle back round to 20 years ago and catch up.

    1. Ken Hagan Gold badge

      Re: Lost the plot

      #include <windows.h>

      int WINAPI WinMain(HINSTANCE, HINSTANCE, LPTSTR, int)
      {
          return MessageBox(NULL, "Hello World!", "", MB_OK);
      }

      1. John Smith 19 Gold badge
        Thumb Up

        @Ken Hagan

        Nice.

        Often forgotten.

  9. John Smith 19 Gold badge
    Happy

    Always remembering *the* golden rule

    *Premature* optimisation is the root of most evil.

    1. Ken Hagan Gold badge

      And the silver bye-law

      Refusal to optimise is the root of Windows Vista, and changing your mind after shipping is the root of Windows 7.

    2. Charles Manning

      but don't leave it too late

      Many of the performance issues are an inherent result of the architecture. If you leave things too late you probably can't change and optimise easily.

      1. John Smith 19 Gold badge
        Happy

        @Charles Manning

        "If you leave things too late you probably can't change and optimise easily."

        True.

        My old copy of Code Complete pointed out that for best results you need to start with the actual *algorithm* you're going to use.

        Poor choice (bubble sort anyone?) here will stuff *any* amount of code tuning.

        However a well *partitioned* architecture will let you swap out the poorly performing modules and replace them with something better. Doing this partitioning well seems to be quite tricky.

  10. Stephen Booth
    Boffin

    Very hard to do this right

    I looked into something similar myself a number of years ago. It is very, very hard to do this in an unbiased way.

    My results were that the actual compilers (I'm including the JVM JIT in this) tend to be equally good at optimising code. The difference between languages is that the different language features tend to cause the programmer to make different design choices (some choices are just not available in some languages), and these affect performance.

    If you can identify the performance-critical sections of the code and encapsulate them (and their data) then it is possible to re-write them for performance; however, these sections tend to look very different to "normal" code in that language (e.g. Java code that uses arrays and looks more like C than Java).

    I'd be willing to bet that the winning C++ version was making heavy use of templates (template metaprogramming); this is really a language all of its own and can give very good performance but (IMHO) damages the code in terms of maintainability and intelligibility.

    1. Ken Hagan Gold badge

      Idiomatic code

      "I'd be willing to bet that the winning C++ verision was making heavy use of templates (template metaprogramming)"

      The article linked to the paper which in turn gave a URL for the code. I suggest you check it out before accepting any bets.

      The actual coding exercise under study was a graph algorithm or two. The C++ code used the collection classes in the standard library. These /are/ templates, but require no template meta-programming to use them.

      The C++ code did not use a dedicated graph library, despite the fact that Boost has one and it is almost certainly bug-free and tuned to within a gnat's arse of divinity. But even that wouldn't have required any meta-programming for this exercise.

      Since templates are endemic in the standard library, it is almost impossible to write idiomatic C++ without using templates. OTOH, it is quite easy to write rather a lot of code without much meta-programming on your part. If you are simply averse to C++ syntax, then by all means avoid it, but don't assume that everyone else feels the same way.
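
      For instance, the whole data structure for the exercise can be this sort of thing - templates throughout, meta-programming nowhere (a sketch, not the benchmark code):

      #include <vector>

      // An adjacency list built from standard containers: node u's
      // neighbours live in g[u] (assuming g was sized to the node count).
      typedef std::vector<std::vector<int> > Graph;

      void add_edge(Graph& g, int u, int v) {
          g[u].push_back(v);      // directed edge u -> v
      }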

  11. TonyHoyle

    pascal is not the answer

    If C/C++ is uber now.. my assembler skills must be godlike! (smug mode on).

    To the poster above.. Pascal is a complete nonstarter. It's like programming Java with one hand tied behind your back.

    Object Pascal? Maybe the poster only used it in a school environment? I help maintain a 1.5-million-line project. Suffice to say, its single-pass nature cripples it fatally. (I had a whole rant here but deleted it..)

    Most of the 'good' ideas from Delphi ended up in C# - you'd be surprised how similar they are (right down to identical function names).

    90% of the time the language you use is determined by the task. I write Android in Java, iOS in ObjC, my main job in Delphi, and maintain others' code in C or C++. One isn't 'better' than another. Every language has its 'WTF?' moments. What matters is that you get the job done, and you don't write an unmaintainable behemoth that will drive the guy who comes after you quietly insane.

    1. Paul Shirley

      It's like programming Java with one hand tied behind your back.

      No. It's like programming Java with the other hand tied behind your back AS WELL.

  12. Frank_M

    FORTAN

    The only "high" level language that regularly beats C++ is FORTRAN. 50 years old and still the champ. Of course the FORTRAN code must be recompiled for each machine architecture and the FORTRAN compiler is written in C++.

    1. Anonymous Coward
      Anonymous Coward

      Re: FORTAN (sic)

      I guess you really mean Fortran 90 or a descendant of it, in which case it's really a C-like language with a vague resemblance to FORTRAN 77 to keep the old farts happy. The FORTRAN of fifty years ago, or even of 20 years ago, hasn't been in widespread use since the early 1990s.

      1. Pigeon

        You might be right

        I used to do most of my computational stuff in a macro assembler (under Primos). When I disassembled compiled code, Fortran 77 compilers were almost as good as assembler. Other compilers produced utter tripe (especially C). The Fortran model is good, although I really didn't like using it. F77 was far enough: all the guys with problems could still present me with non-indented (etc) POS's and ask what's wrong with them.

        A previous post mentioned orders of magnitude. This is my recollection. I haven't got the hang of assembly programming in Unix. It just seems like too much work.

        Down with complexity, I say.

        1. sT0rNG b4R3 duRiD

          @Pigeon re: Assembler.

          Writing an assembler program in Unix is not difficult at all.

          In fact it's probably a heck of a lot easier than, say, for a ZX81 or whatever. For one, there's tons of libraries. Once you've got the ABI figured out, you're set. That part is not hard (learning the libraries is). Of course, you're no longer on bare metal; you're in userland on top of the OS, which both makes things easier and more difficult.

          But ultimately, you will stop and think... damn, I'd get pretty much the same and more done much quicker in C.

          Also, I would caution against too liberal use of assembler, for the very reason that, by and large, compilers today (and I can only speak mainly of C compilers) generally do a pretty good job. Chips today, even with the same instruction set, are so heterogeneous - think out-of-order Intel 'cores' and in-order Atoms - these kinds of issues.

          Don't get me wrong. I grew up having to learn assembly (in fact, before I learnt C). I've just grown to respect the fact that chips now are so complicated, and they keep changing so quickly. C compilers today are also MUCH better than they were before. I'm even talking about gcc, not just Intel's.

          I'll be honest: in the past few years, there's not been many an occasion that I've been able to beat tuned C code out of a C compiler with assembly to any degree of significance. I can't think of any instance off the top of my head, apart from correcting the occasional silly redundant thing a compiler does, but that's really improving on the code put out by the compiler.

          I still believe, however, that a programmer should start learning his craft from the bottom up.

  13. @thecoda
    Thumb Up

    What *really* happened

    The optimisations involved were all performed by humans - rewriting the code with the explicit goal of making it run faster. In the case of the two languages on the Java platform (Java and Scala), the optimisation also involved tuning GC parameters.

    Interestingly, all of the changes made in the Scala code to speed it up were available to the Java code (They both compile down to the same bytecode). So what happened here is that Scala took techniques which would just be too verbose and otherwise impractical in Java, and made them more generally accessible.

    Now *that's* a result.

    1. breakfast Silver badge
      Happy

      Too verbose in Java?

      Are there any techniques that are *not* too verbose in Java?

      I've used it often enough, but if there was ever a language designed for people who *really* like typing, it's that one.

  14. Richard Taylor 2
    Megaphone

    Oh gawd

    BCPL it really is - let's just not get too excited

  15. Sentient
    Joke

    C++ vs all the other

    C++ is the best language ever when I am writing the code.

    When I am reading somebody else's code it sucks.

    (10+ years experience in offshoring)

  16. Rolf Howarth

    @Ken Hagan

    "*most* code needs to run faster than it does at the moment"

    I would say "some" rather than "most". If computers are too slow it's only because we continually push them to do new things which weren't previously necessary until they break, in which case they will ALWAYS be too slow by definition. I mean, I'm sure it's very clever that I can write a Unix emulator in a Javascript interpreter running in a browser running under Linux running in a virtual environment running under Windows running under Mac OS X, but really, that's hardly something we NEED to be able to :-)

    I'm continually amazed at how fast Java is these days. You can do quite serious graphics or scientific programming in Java and effortlessly have it run faster than a heavily optimised native program from only a few years previously. My favourite adage is still that CPU cycles are cheaper than developer cycles!

    1. bazza Silver badge

      @ Rolf Howarth; Not always...

      "My favourite adage is still that CPU cycles are cheaper than developer cycles!"

      Not when you're having to build your own power stations to run your data centre they're not.

      http://www.wired.com/epicenter/2010/02/google-can-sell-power-like-a-utility/

      1. Tom 7

        You don't want to start from here...

        You're right, bazza - Google can save a bucketload by optimising C++ code, but for a lot of apps and companies it would be a lot cheaper in the short term to just bang in another processor - moving from a single core to a dual core would cost about 10 minutes of a good C++ programmer's time.

        But by doing that you are potentially putting in roadblocks for the future. If you think your app is going to go worldwide then you have to make it 'enterprise' compatible from the start; this may cost you a bit more, but in the long term it will potentially save you billions.

  17. Werner McGoole

    Languages all have their own tricks

    An important factor is whether the language allows (or entices) you to use constructs that defeat compiler optimisation. For example, much of the speed of old-fashioned Fortran came from the absence of anything like pointers - so the compiler could more accurately assess the scope within which a variable might be referenced.

    C's use of pointers is probably the main thing that makes it slower than old fashioned Fortran, but if you code carefully and don't use pointers, that gap will close. With C++ you're one more step removed from knowing whether the compiler will be able to optimise what you write, so more frequently it can't.

    To give another example from Java, garbage collection can be a real problem but can often be mitigated by avoiding unnecessary object creation/destruction. Unfortunately, this is again something that a compiler is unlikely to manage on its own as it's part of the logical design of the software and at a higher level than compilers work at.

    So while compiler optimisation is always a good thing, good software design also lies at the heart of run-time efficiency. Those who say you can't beat the compiler at optimisation may be right at the level of loops and method calls, but at a higher level it's easy to design something so it runs slowly in any language you like. Knowing what the language does fast and what it does slowly is where the solution lies. So there's no real substitute for experience with the particular language in question.

    Of course, this does make C++ programmers superior beings, as few are able to gain much experience with this language without shooting off both their feet at some point.

  18. E 2

    @Werner McGoole

    "C's use of pointers is probably the main thing that makes it slower than old fashioned Fortran, but if you code carefully and don't use pointers, that gap will close."

    Can you elaborate on that? I ask because I am thinking about the use of pointers in C to avoid e.g. copy-on-call when passing large structs to functions...

    1. Alan Firminger

      Query

      Are there published benchmarks to demonstrate that pointers increase speed? My tests showed that they are about 20% slower than arrays with incrementing indices.

      I was surprised because pointers look as if they should be faster. Certainly pointers were often closer to the concept, so faster to code.
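
      The two styles in question look like this (a sketch; a modern optimiser will often emit identical code for both):

      // Array with an incrementing index.
      long sum_indexed(const int* a, int n) {
          long s = 0;
          for (int i = 0; i < n; ++i) s += a[i];
          return s;
      }

      // Walking pointer.
      long sum_pointer(const int* a, int n) {
          long s = 0;
          for (const int* p = a; p != a + n; ++p) s += *p;
          return s;
      }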

    2. Ken Hagan Gold badge

      Re: Can you elaborate on that

      Aliasing. If I have a function...

      void f(int* a, int* b) { ... }

      then a C or C++ compiler has to assume that a and b might point to the same storage. (Or if they are arrays, that their ranges might overlap.) Therefore, whenever it has written through *a and subsequently needs the value of *b, it has to reload it from memory. That reduces opportunities for keeping values in registers - not just the values of the function arguments, but any numerical results that were computed from them.

      In Fortran, the compiler is allowed by the rules of the language to assume that no overlap exists. If that is not true, you need to code the function differently. The advantage this gives is probably the main reason why Fortran still has the edge on numerical codes, and the motivation for "noalias" style pointer qualifiers added in more recent versions of C and C++.
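
      The spelling of that promise is C99's 'restrict'; most C++ compilers accept __restrict as an extension:

      // The programmer guarantees that a and b never alias, so the compiler
      // may load *b once, outside the loop, instead of re-reading it after
      // every write through a.
      void f(int* __restrict a, const int* __restrict b, int n) {
          for (int i = 0; i < n; ++i)
              a[i] += *b;
      }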

      1. sT0rNG b4R3 duRiD
        Thumb Up

        ^ what he said (Ken Hagan)

        But I wouldn't let it put you off using them in C. A lot of people say pointerless languages can be faster because of this possible indeterminacy. But, with my limited experience, I can't say if it is that much of a liability.

        Just consider what you're doing. Avoid pointers if you can but realise what power you have in them.

        Nothing wrong in passing pointers to large structs, imho.

  19. maccy
    Trollface

    the real problem with Java is...

    Did anyone notice it needed the greatest number of lines of code? Verbose doesn't begin to cover it.

  20. Morten Bjoernsvik
    Pint

    simple used to be faster

    Legacy Fortran code is fast because the F77 compiler was really simple: no function stack and no dynamic memory allocation, just fixed arrays and static libraries for everything. Basic stuff like recursion was not allowed.

    Pure C compilers used to be fast too, before all the OOP stuff was added.

    Not to mention all the kernel, multitasking, GUI and interrupt libraries you have to incorporate.

    For many simple tasks todays computers are too complex.

  21. John Lilburne

    The 10% of C++ programming

    We are a C++ shop and we limit the use of C++ language features. We have some 20+ years of developing in C++, and our standards are built from field experience in our application area and constant profiling of new algorithms. If I were to outline some of the restrictions we put on using language features, there'd be a swarm over this post giving it the thumbs down. Yet in our application area, creating tooling for the manufacture of 3D objects, we are the fastest and most accurate in the world.

    1. sT0rNG b4R3 duRiD
      Thumb Up

      Hehehehe...

      If you guys are doing what I think you are doing, good for you! Less is more.

      What I can't stand about C++ is not being able to see in my head what the compiler is likely to be doing. And the messier the code (or the more C++ features used), the bigger my headache gets.

  22. Peter Galbavy
    Facepalm

    It's all just philosopher's stones

    While the academics and the clever folk put lots of work into developing new languages and the frameworks/ecosystems they live in, to me the users of these new systems always seem to go through a very specific cycle; namely, the "wannabes" rush to every new thing in the hope that it will contain the secret that prevents them from having to learn, think and do.

    It's the new Philosopher's Stone. The kiddies all think "this will make my code into gold!!"... Nope, sorry. Education, experience, hard work. The language is a tool, not a magic rock.

  23. E 2
    Go

    @John Lilburne

    Mr Lilburne:

    By all means please post your list of restrictions. I am very curious to see it, and I promise not to flame or down vote you.

  24. Sandtreader
    Thumb Down

    NPOV?

    "C++ and Java require statements being terminated with a

    ’;’. Both Scala and Go don’t require that. Go’s algorithm

    enforces certain line breaks, and with that a certain coding

    style. While Go’s and Scala’s algorithm for semicolon

    inference are different, both algorithms are intuitive and

    powerful."

    I don't think that would pass Wikipedia review... One might argue that inference of syntactic elements from whitespace is ugly and error prone, and enforcement of K&R style doubly so - unless you do it properly and get rid of braces altogether, like Python. Adding semis is like breathing, you don't even know you're doing it; so why mess with it?

    Also, in terms of conciseness, it hardly seems fair to compare ISO C++ with something brand new like Scala and Go: why not C++0x, which instantly gets rid of a lot of the verbosity with 'auto'? And Scala's fancy for-comprehension structure was the first thing they threw out when optimising it!

  25. -tim
    Go

    Go's future?

    I like the direction where Go is headed, but it lacks the proper tools to do currency right. If it had a BCD or fixed-point type (like every modern CPU supports), then it could be very big in many fields that are still fighting over floating-point money.
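
    Until then, it's the usual integer-cents dodge (a toy sketch):

    #include <cstdio>

    int main() {
        double wrong = 0.10 + 0.20;   // 0.30000000000000004 in binary floating point
        long long cents = 10 + 20;    // hold the smallest unit as an integer: exactly 30
        std::printf("%.17f vs %lld cents\n", wrong, cents);
        return 0;
    }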

  26. Torben Mogensen

    Comparison to non-OO language?

    All the languages in the test are object oriented to some degree -- though C++, Go and Scala less than Java. OO makes a language difficult to compile for fast execution speed and difficult for humans to optimize their code, so I would really like to see a similar experiment include languages with no OO features.

    Garbage collection is also less efficient in OO languages than in non-OO ones, partly because updating old objects to point to newer objects is prevalent and partly because the GC is required to call finalizers on collected objects. So the mentioned problems with GC on the JVM need not apply to non-OO languages.

    1. jibal

      Um, Scala is most certainly not less OO

      than Java. And C++ code can be entirely free of objects, so "languages with no OO features" were already included, because C was essentially included. Also, Go isn't even OO, it's only object-based.

  27. Shoddy Bob
    Megaphone

    Functional languages

    Surprise, surprise: a functional language comes out best, even a half-hearted one that is crippled by running on the JVM.

    Maybe they should have tried a decent functional language like Haskell or ML that would have been a fraction of the code of Scala and yet compiled to speeds close to C++. It could have even auto-parallelized some tasks to run across multiple CPU cores.

    Look up 'The Great Computer Language Shootout' for a much bigger comparison of languages across various useless benchmarks.

  28. Anonymous Coward
    Anonymous Coward

    Scala FTW

    Google supposedly hires the smartest of programmers, and Scala supposedly requires programmers to be smart ... it should be a good match. But sadly Google uses a lot of Java and C++ and is focusing on Go, a new language whose design ignores almost everything we have learned about language design over the last decades, repeating the major mistakes of Java that leave the programmer to do a great deal of the work that the language should be doing, especially not promoting code reuse. If Larry and Sergey were personally to learn Scala and sit with Martin Odersky and learn just what it offers and how it is so much superior to Go, Java, and C++, Google could revolutionize their practices and knock the programmer productivity ball out of the park.

  29. David Martin
    Meh

    Buried in the report

    "Jeremy Manson brought the performance of Java on par

    with the original C++ version. This version is kept in the

    java_pro directory. Note that Jeremy deliberately refused

    to optimize the code further, many of the C++ optimizations

    would apply to the Java version as well"

    But no "Java Pro" line in the benchmark table...?
