Sutter: C++11 kicks old-school coding into 21st century

There's a new C++ in town: C++11 has been approved and published by international standards chiefs. C++11 is the first major revision to one of the world's most popular programming languages in 13 years. The update will position apps using the language for coding for the next two decades. C++11 was published by the ISO on …

COMMENTS

This topic is closed for new posts.
  1. Grandcross
    FAIL

    Redefinition is not standardization

    I've never understood how breaking every existing code base out there can be called standardization. This is something the C/C++ standards body have been doing for years.

    Change can be good and constructive, but this is no more C++ than a Prius is a horse. They're not interchangeable.

    Create a new language and be done with it. Stop breaking my builds.

    1. Anonymous Coward
      Anonymous Coward

      What has been redefined?

      I'm interested to know what breaking changes you have come across. Please tell me!

      (I assume you are being flippant when you say "breaking every existing code base out there").

    2. Field Marshal Von Krakenfart
      Trollface

      "I've never understood how breaking every existing code base out there can be called standardization"

      That's how mickysoft defines standardisation.

      I'd also like to see the criteria they used to determine that C++ was one of the most popular languages in the world.

    3. Tomato42
      Facepalm

      then compile your programs with --std=c89, as you obviously did for the past 20 years

  2. E 2

    ... clean and safe ...

    If the programmer knows what s/he is doing then the code is clean and safe regardless of the language.

    1. Gordon 10

      Missing the point

      90% of programmers don't know what they are doing to that level of detail.

      A lot of programmers have learnt on the job and never covered these areas. Hell most of us don't even have time to.

      If OS programmers employed by the likes of MS - who would be by no means stupid - still leave gaping holes what hope is there for the average code monkey?

      Safer all round but much less efficient and elegant to have the language plug most of the holes for you.

      Downside of course is that ultimately it's less safe than a programmer who truly knows what they are doing at a low level.

  3. kissingthecarpet
    Thumb Up

    Any reports

    of the death of C++ have been greatly exaggerated. Well written C++ is a thing of beauty and a joy forever.

    1. Steve McPolin

      A programmers dream....

      It really is! Code written in many other languages goes stagnant over years of being ignored. Nobody can remember how it works if it doesn't require routine maintenance every few months. C++ has integrated mechanisms to ensure every bit of code requires constant touch up and frequent overhauls.

      I reckon that without C++, most of the regular programming work would have dried up, and we'd be forced to design new SCMs to keep ourselves employed.

    2. Voland's right hand Silver badge
      Devil

      I just choked on my coffee

      If you like a grotesque monster which grows template warts and shi(f)ts to output all over the place - yeah... It is beautiful...

  4. lyngvi

    uh... details?

    Kind of lacking in detail here. I'm going to latch onto the one talking point I did see mentioned:

    "One of the biggest changes in the spec helps make C++ a little more Java-ier: the introduction of a standardised multi-core-friendly memory model that Java has had since 2005."

    I've found C++'s legacy memory model perfectly amenable to multi-core systems, so long as volatile variables are declared volatile, variables are shielded correctly by critical sections (locks), etc. What is different about C++11's memory model, how does it avoid breaking existing C++ code, and why should I want it instead of what I know works (however clumsy the syntax may be)? What makes it like Java's, and what makes its (or Java's) merits superior?

    1. Steve the Cynic

      Actually...

      "volatile" isn't good enough by itself. Read about memory barriers. The executive summary:

      I declare the singleton pointer to be volatile, OK, fine. I put locks around it, OK, fine. Thread A on core1 goes lock/read and decides that it is NULL, creates the singleton and writes the pointer/unlock. Thread B on core2 dives in and does lock/read. Without memory barriers you do not have an absolute guarantee that the memory has been written before core2 reads it. (Granted, if core1 and core2 are on the same chip, it will most likely come from on-chip cache, but if there are actually two chips, this isn't guaranteed, and a single programming language should not mandate that full-whack inter-chip cache coherence linkage is present everywhere on all machine architectures.)

      1. lyngvi

        focus

        Yes, yes. The only time I've ever used the volatile keyword is for flat integer variables, in conjunction with InterlockedIncrement/Decrement calls or __sync_fetch_and_*. These generate full memory barriers on Intel systems, can't speak for other archs. So far as I'm aware, they're also the basis for most locking primitives (eg CRITICAL_SECTION, pthread_mutex, etc) on the above.

        But my question was not "How do I correctly write threading primitives in C++", it was "How is the C++11 memory model similar to Java's and different from the existing C++ model?" I'm not even quite clear on what is meant by 'memory model' in this context (describing "allocate whatever memory you need and clean up after yourself" as a "memory model" sounds rather self-congratulatory)... Maybe I'll just have to read the spec.

        1. Tinker Tailor Soldier
          Go

          Isn't that the point?

          InterlockedIncrement etc are technically win32 APIs and __sync_fetch are GNU extensions to C++. Now these barriers are standardized into the language and guaranteed to be the same way for all compilers. It helps more of your code be platform/compiler/processor neutral.

          People always forget that for locks to work, they also have to imply memory barriers, so the gnashing of teeth around optimistic code that subsequently acquires a lock is normally unwarranted (unless it is functionally incorrect).

      2. fxmcbob
        Linux

        Correct, to a point.

        Volatile is indeed insufficient, and actually not even necessary. Barriers are what ensure cache consistency between threads (and are very expensive). However, locking is actually sufficient, as any lock worth its salt is going to use a barrier.

  5. Christian Berger

    So... does it fix any of the bugs?

    Does it now separate code from data so invalid array accesses won't overwrite the stack?

    Does it now allow me to check the bounds of an array or any allocated memory region at runtime?

    Does it automatically check memory accesses? (Pascal does this now optionally, and I've never seen any degradation of performance)

    Does it now stop making implicit object copies?

    No? Then I still see no reason why I should use it.

    1. James Hughes 1

      So don't use it.

      Meanwhile people who do use it and like it can continue to do so.

      Just because it doesn't do what YOU want (or you cannot make it do what you want), doesn't mean it's no good for anyone else, who find it DOES do what they want.

    2. Ru
      Boffin

      Bugs?

      Why should everyone else be encumbered by a fancy heavyweight memory management system just because your coding style is somewhat lacking? If you cannot be trusted to write code that does relatively low level memory access, perhaps you should stop trying to do so.

      Learn to use the STL correctly (between std::vector and std::array buffer access, allocation, deallocation and size limits are pretty painless) and you'll be rather better off. Your code may make implicit object copies, incidentally, but that's because of the way you've written it. Instead of restricting the language to fit your demands, you've now been given the opportunity to use rvalue references instead. The STL classes handle those just fine, incidentally, and if you didn't want to learn about them you could make more use of normal object references instead.

      And all those other wonderful languages you use instead, do they manage RAII? How about explicit memory management? Or little syntactic luxuries like const correctness? No? Didn't think so. Realistic arguments against C++ should probably revolve around things like ease of refactoring, or static analysis perhaps. These are serious concerns and major impacts on programmer productivity and bugfinding... everything else you've listed is just whining.

      1. DrXym

        Encumbered?

        Adding GC does not "encumber" anyone. If it were defined in C++ it would probably be invoked by a special construct that if you did not use, you would not incur a penalty from. I also assume that any gc behaviours would be defined as C++ templates that could be overridden or otherwise hooked.

        It is also obvious from the popularity of Boehm (a poor man's gc) that it is a highly sought after feature.

        So no it wouldn't hurt C++ to have GC. Quite the opposite in fact.

        1. Tinker Tailor Soldier
          Thumb Down

          GC tends to be viral.

          At least, there aren't really any languages that aren't either predominantly GC'ed or explicitly managed (perhaps behind smart pointers). But, yes, getting a workable optional GC'ed mechanism would be interesting.

          I think the bigger issue is support for RAII. Sorry, but I've written code in lots of languages and it's just useful. C# comes somewhat close with IDisposable and using.

          1. Ru

            Re: GC tends to be viral

            RAII implies deterministic cleanup of resources; this is often very important, at least in the fields I usually work in. IDisposable is a horrible hack that's been stuck onto the language almost as an afterthought... its semantics are largely undefined, and using (...) {...} is basically syntactic sugar for try...finally { foo.Dispose(); } and requires that the coder remember they are using a resource that requires cleanup and add the extra code to ensure that happens.

            By contrast, knowing that a stack-allocated object will be destroyed when it goes out of scope or the stack is unwound means I can put cleanup logic in its destructor and be happy to know that it will be run regardless of whether any future coders remember to use the using syntax. Relying on them "doing it right" isn't really enough for me.

            Of course, they could just create a new heap allocated object and avoid that nice feature, but there's just no helping some people. Though I suppose a private constructor and a factory method that returned a smart pointer to an instance might work... but I ramble.

        2. Ru

          Re: Encumbered?

          You may note neither I nor the original commenter mentioned garbage collection. The original comment was talking about checked memory access; I'd expect you to get that for free from any language with a greater level of abstraction (eg, no pointers), and in turn those generally imply garbage collection, but they're not essential.

          I'm talking about the kind of memory management system where every memory access needs to be validated by the userland before it happens; the STL containers and smart pointers perhaps fulfil this role. Their overhead is optional, and choosing to avoid it for whatever reason is relatively painless (eg, you could just get pointer to the first element of a vector and treat it like a normal array, if you wanted).

          Making a GC and an unmanaged memory model work together seems to be clunky at best... C++/CLI does indeed use extra templates such as pin_ptr and gc_root and safe_cast, as well as new keywords like gcnew and the addition of object finalizers. It's effectively two languages crudely grafted together, though I'd like to think it is possible to do better.

          GCs are popular because they are a crutch... they take away some immediate hassles, and replace them with magic and nondeterministic behaviour, but this is a tradeoff you may be happy to make. They are not essential for C++ by any means, even less so under C++11. If you feel that you absolutely cannot work without one, or that whatever project you are currently involved with requires one, then you are probably trying to use the wrong language for the job, and you should look to Java or C# or whatever else instead.

  6. Herby

    One of these days...

    We will all go back to a proper language, Fortran 66 comes to mind. In the meantime, the next version of C++ will probably be close to PL/1 in attempting to be all things to all people. This will be confirmed when it includes picture formats (see your friendly Cobol or PL1 manual) in the standard library (assuming they aren't already there somewhere).

    When will we get an update to Bjarne's book, increasing it beyond the inch-thick tome it already is?

    1. Ross R

      New Book

      It's not by Stroustrup, but C++ Primer Plus, Sixth Edition is due to be published next week and covers the new standard. It's 1200 pages!

    2. John Styles

      To my mind

      ... the biggest tragedy in computing is that Fortran stuck at 77 for so long because of in-fighting, allowing the weenie land-grab of C (fine as a platform neutral symbolic assembler with expressions, hopeless as an application programming language), with Fortran marginalised to hard-core numerical stuff. If there had been a Fortran 84 or thereabouts this could have been headed off.

  7. Eddie Edwards
    Joke

    Two decades?!

    "The update will position apps using the language for coding for the next two decades."

    Well that seems like a step backwards, since a regular C++ app can be written in only two years.

  8. James 47

    @John Styles

    How is C hopeless as an application programming language???

    1. Anonymous Coward
      Anonymous Coward

      I think he means

      if you use VB, Java, something like that you can knock out a program that does what you want nice and quick. Then, if necessary, you can go back and debug it, sort out error handling, that sort of thing.

      Take VB - want to do a TCP transfer? Create a new TCPClient and do [TCPClient].Write("[message]"). Half a dozen lines of code and you're blasting data across the Internet.

      With C, by the time you've finished writing the program the whole thing's written. And generally a proper programmer- one who knows enough to use C properly- will take time to incorporate error checking etc as he goes. So if it's time-sensitive C can be an utter ballache. If you just want to knock up some code to sort a problem or test an idea, it's useless for all but the simplest of tasks. Yes you get nicer code, but it takes longer.

      While typing this comment (taken.. ooh, 15 minutes?), I've also been VB-ing and can now stream multiple channels of serial data to my cousin in Australia from some equipment. Yes, it's slow compared to C and yes, it's crap code. But you try doing that in C in that timeframe!

      Real programmers use Assembler anyway :P

      1. Sam Liddicott

        "Real programmers use Assembler anyway :P"

        Your final statement "Real programmers use Assembler anyway :P" is possibly true but ironic as C is portable assembler + a standard library which you don't have to use...

  9. Anonymous Coward
    Boffin

    No Garbage Collection => No Object Orientation

    If the coder is still worrying about the bytes being assigned and released by their data structures then you're kidding yourself calling it Object Oriented; it's just automated namespace management. Which is nice as far as it goes, of course.

    1. Ben 42
      WTF?

      Eh, what? That really makes no sense. You seem to be conflating object orientation and memory management, and they're not the same thing.

      Besides which, any C++ programmer worth their salt knows to use smart pointers (my understanding is that they're part of C++11 now too) these days and so _doesn't_ have to worry about allocating and deallocating memory. But if they have a need to they still can, which is kind of the point.

    2. Ru
      Boffin

      Automatic memory management => no deterministic behaviour

      Leaving aside your failure to understand object orientation, you're still quite incorrect. I can create block, function, class or application-scoped object instances and know exactly when they will be destroyed. I can use shared_ptr and unique_ptr to conveniently handle most other kinds of memory management. I can use (r)value references to pass around object instances without ever having to use new and delete.

      Garbage collectors generally imply an absence of RAII, a very useful pattern, and nasty hacks like .NET's using {} and IDisposable stuff for trying to do the same thing. They also make it very difficult to offer any kind of execution time guarantee, or memory footprint restrictions. These aren't important for many kinds of application, but when you do need them you will definitely notice their absence from your happy shiny garbage collected managed memory model.

      1. Kristian Walsh Silver badge
        Thumb Up

        +1,000,000 (all instantiated into the same reused block, of course)

        Garbage collection is pure, distilled and concentrated evil for interactive or real-time systems. The best it can offer you is a statistical value for event latency, but with such a wide deviation that you need to run the software on a grossly overspecified platform in order to make sure that even if your event occurs during a GC run, you'll still get to handle it in time.

        And I still haven't come across a good argument for why you'd need GC, let alone one that would make me give up "Resource Acquisition Is Initialisation". Most arguments for GC grossly overestimate the amount of direct dynamic allocation a typical C++ application programmer does, and never fully take into account the deferred cost of cleaning up later.

        Fifteen years later, you can *still* tell that an application was written in Java. It's got that "sticky" feeling, like watching a machine with a slipping gear, as that wonderful GC jumps in at the most inopportune moments. It's fine for quickly building client/server applications that can hide behind network latency, but you'd need your head examined to write something like a real-time embedded operating system in Ja--- oh, never mind...

        On C++11 itself, while the memory model work is great, very few will notice it, and while that does sum up most of the new features, there are some standouts: As someone who had to teach C++ many years ago, I welcome the new meaning of the "auto" keyword with open arms: it sure beats typing "std::map<std::string, std::map<std::string,std::vector<std::string> > >::const_iterator" . On that note, the official recognition of what the two characters ">>" together should mean when found inside a template declaration is a small, but also very welcome, addition.

        Other high-points for efficiency are the introduction of move construction and rvalue references, both of which remove the spurious extra objects that can occur when passing by value. Sure, they're mostly of interest to library authors, but they've been added in a way that often won't require rewrite of client code.

        Before complaining, I'd suggest a read of the FAQ (http://www2.research.att.com/~bs/C++0xFAQ.html) might be in order. Of course, if you're already convinced that Java or C are the world's best languages, there's not a lot that would convince one otherwise.

        1. Anonymous Coward
          Anonymous Coward

          I don't think that GC is the problem on most Java systems, I think it's more to do with using a VM. You can't beat native code when push comes to shove (and not that big a shove either)

          1. This is my handle
            WTF?

            Not a full-time java guy myself...

            ... but I have been writing code in it since JDK 1.1 or thereabouts and I think GC actually can be an issue. It's gotten much, much better over time, but if you can't afford to have an application arbitrarily running a bunch of free()'s when it's least desirable you'd better stick w/ C or C++.

            OTOH, as has been said, a java programmer of skill level X can get the job done in a third the time it will take an X-level C++ programmer, which leaves lots of time for performance tuning, refactoring, porting small critical parts of your app to C++, etc.

        2. Anonymous Coward
          Anonymous Coward

          Java is fine when written by competent coders.

          GC is not a major problem in Java provided coders decide which objects should be short or long lived to reduce the load on the chosen GC algorithm and tuning parameters; yes, we can choose this! Some JVM GC algorithms (especially in advanced JVMs) can actively de-fragment memory in ways that are just not possible in C++, because C++ uses raw pointers. C++ code is also not session portable, thus far less scalable, unlike code on advanced JVMs.

          e.g. A lot of the slowdowns in GUI code are due to coders not realising that Java has the Event Thread, that all GUI changes must be jobs on this thread to avoid thread-safety issues, and that busy work must run in another thread to avoid stalling the Event Thread, and thus the GUI.

          I would not want to go back to C++ because it is so easy to trip yourself up with null pointer issues, array underflow and overflow issues, casting issues, and vastly less _standard_ libraries than found in Java.

          Java provides far more protection from bugs, crashes and various security issues than C++; issues which many programmers would not fully understand and are not adequately protected against by defensive C++ compiler and library extensions.

  10. Osvaldo
    Thumb Up

    The point of a well-defined / modern Memory Model is...

    1) You can implement full-userland concurrency (locks, semaphores, atomic objects etc.), with "high-level" code in the given language (no recourse to ASM), and no costly context switch to use OS concurrency facilities unless really necessary (inter-process sync).

    2) You can do very dangerous and complex, but very fast, concurrency algorithms: code that is tolerant of controlled data races, lock-free concurrent data structures, etc. See for example Java's Disruptor, or even the internals of many java.util.concurrent APIs. Notice that at this level the Memory Model is not useful to most language end-users (as I say, it's really advanced, highly complex programming that's hard to do right even with a good, well-defined MM - and impossible without one). It's a feature targeted at experts who write low-level libraries. Most programmers are better off using higher-level things like volatile variables and concurrency APIs.

    3) Even for high-level code, the MM makes it easier to have compiler optimizations that are powerful and portable, because they can happen at the HIR level (dealing with happens-before or similar attributes) and not in the code generation phase (where every CPU is different); each CPU back-end only needs to provide its own lowering of memory model pseudo-instructions. So this is good for multiplatform C++ compilers like GCC and Clang. Java VMs have long relied on the MM in order to do all sorts of interesting optimizations, such as lock elision (lock/unlock operations are simply discarded when the JIT can prove it's safe... and yeah, this happens VERY often). Even long before 2005, JVMs did many dirty concurrency optimizations; it was just not standardized, so you could write advanced code that relied on well-defined behavior of races, but it would not be portable.

  11. DrXym

    Hmmm

    A new C++ standard is always welcome and this release looks far more suitable for multithreading, but let's be honest here. It took 13 years to produce this standard, and it will probably be another 2 or 3 years before implementations are close enough to rely on the new features. And it still doesn't support garbage collection, nor many of the other features that other languages have taken for granted for years.

    While garbage collection in C++ is bound to be contentious, the reality is it's absolutely necessary before it can claim to be as safe to program in as higher level languages. Even Objective-C 2.0 has garbage collection, demonstrating it's possible; so does managed C++/CLI. If it takes another 13 years for the next revision, I think C++ will be a relic consigned to the lowest layers of the OS.

    1. Alfred

      There is no one true language. Use the right tool for the job.

      "While garbage collection in C++ is bound to be contentious, the reality is it's absolutely necessary before it can claim to be as safe to program as higher level languages."

      Is there some special bonus for being as safe as some other language? Use the right tool for the job. If you need that level of safety provided by some other language, use that other language. If you don't, then don't. I do not want the extra overhead of such things, and am prepared to pay the extra time and care in coding to ensure I don't need them. Other people have different needs and should use a different tool.

      1. AndrueC Silver badge
        Boffin

        I don't mind if there's an optional GC available but in my experience RAII will solve most issues because most processes are fundamentally hierarchical. Comes from having a program stack :)

        It's funny how things change that way. Back when I was developing for DOS everything had to be on the heap and the new/dispose keywords were being used every five minutes. But as soon as I moved to Windows and had an effectively infinite stack they were consigned to being rarely used. Having the STL was just the icing on the cake. I'm not a huge fan of std::basic_string but it's a helluva lot better than 'char *' and it doesn't take more than an hour to knock up a friendly wrapper class for std::basic_string.

        If you want safe memory usage in C++ just learn how/when destructors are called, then train yourself to treat 'new' and 'delete' the same way you treat 'goto' :)

    2. Richard 12 Silver badge

      Garbage Collection is fundamentally *unsafe* in many contexts.

      In a lot of contexts, you really need to know how long a given operation is going to take.

      Not because the program internals might change, but because the external environment might change. (Or the user might get bored)

      A GC can and will pop in at any moment and clean up - thus you cannot ever know "This procedure will take 180 to 200 time units to complete", you can only say "Most of the time it will take 180 to 200 time units, but sometimes it'll take 10,000 to 20,000 because the GC swept by"

      Alternatively, if you are responsible for cleaning up your own garbage, then you can do that cleanup at known points in the program, and as you know exactly how much garbage there is you know how long it will take.

      It gets even more fun on multi-processor, because if Process A gets paused by the GC while Process B is waiting on it, then B has to wait ages. Without GC, each process can do its cleanup at a time of the programmers' choosing - eg once B has the data A can clean up because nobody is waiting anymore.

      Garbage Collection is like having your manager pop round every so often and insist that you stop and clean your desk *right now*, regardless of what you're doing at the time. You also never, ever clean your desk at any other time.

      Is it not better to clean your desk when you've finished a task?

      1. Anonymous Coward
        Anonymous Coward

        GC FUD!

        Cleaning up at the end of the job is fine, provided you have enough stable desk space, however you often need to do several mini cleanups even during a job, otherwise mistakes can occur or you lose vital stuff under piles of paper!

        GC does not have to be slow; there are some very fast implementations which can handle massive memory pools without slowing down and without fragmentation.

        1. AndrueC Silver badge
          Boffin

          What amuses me about .NET GC is the amount of trouble it can get you into. MS have published at least two white papers explaining how it works and what you should do to avoid upsetting it (boxing v. not-boxing etc.).

          http://msdn.microsoft.com/en-us/library/ms973837.aspx

          Maybe .NET just has a bad implementation but when you are advised to do things a certain way to avoid 'upsetting' your runtime then something has gone wrong. RAII is simple and (once you've written the class) automatic.

          Now writing an RAII class can be tricky, but the thing is - you only have to write it once. Then you re-use it. GC seems the other way around: zero upfront effort but continuous effort using it.

  12. AndrueC Silver badge
    Boffin

    Lambda/anonymous functions

    I've never really liked anonymous functions.

    They seem to go against the grain of code re-use and I think they make it harder to read code, not easier. Granted, with a named function, if you want to know how it does what it claims you have to go elsewhere in the source to find out, but if a function is well named and works, why should you care how it does what it does?

    Can anyone explain what is saved in not bothering to write a named method other than a bit of typing?

  13. dajames

    C++ has supported GC for years.

    If you REALLY want to use GC in C++ there is a quite nice library written by Hans Boehm at HP that you can download and just use. It works with standard C++ today and doesn't need any change in the standard.

    http://www.hpl.hp.com/personal/Hans_Boehm/gc/

    When people talk about "adding GC to the standard" they mean making GC built-in rather than something that you'd use an external library like Boehm's for. They're not talking about any language change being necessary to support GC.

    Do try and keep up at the back!

  14. Anonymous Coward
    Anonymous Coward

    Who supports it?

    Which compilers support it (and which test suites)? I assume that most vendors have tracked it but I am too lazy to google at the moment.

    As most commentators have noted with regards to the quote from Sutter adorning his pic, s/is/can be/.

    1. Bronek Kozicki
      Boffin

      RE: Who supports it?

      I'll refer to Scott Meyers : http://www.aristeia.com/C++0x/C++0xFeatureAvailability.htm

  15. JDX Gold badge

    @Ru

    In my experience anyone who plays the "I'm good enough I don't need fancy features" card is the one to watch for bugs. It's like driving - most of us think we're better than average.

    1. Richard 12 Silver badge

      This is true, but irrelevant

      Ru was saying "RAII works better in many cases than GC"

      I'd agree. Most large programs written in a GC'd environment end up having to jump through hoops to get deterministic cleanup anyway.

      Actually, RAII is pretty much a 'fancy feature' - well-written constructors and destructors, use of fancy pointers instead of dumb ones etc.

      The weird part is that if you use constructors and destructors properly, GC causes pauses. If you don't, then GC doesn't really help you anyway 'cos it can't ever collect something you never let go of.

      I can't really think of a good reason to use GC at all.

      1. Anonymous Coward
        Anonymous Coward

        What about double-free or early-free crashes for non-stack variables?

        GC will never allow that to happen.
