They're talking about moving an array from one scope to another. They are specifically not talking about copying something within a scope.
C++ 11 is “far better than previous versions”, says the inventor of the language Bjarne Stroustrup. He was speaking at an online event marking the launch of Embarcadero's C++ Builder XE3, a rapid application tool targeting Windows and Mac OS X. C++ Builder XE3 is a promising but curious product. Delphi and C++ Builder were …
You can't do that in C.
He's talking about declaring an array inside a function and then outside said function being able to move (not copy) that array over to another variable.
Show me how you do that in C without copying the contents of the local stack array into a passed pointer or reference.
>>or you just used a smart pointer
smart pointers are officially part of C++ already. But if you want to transfer ownership of a chunk of memory from one object to another, you still have to do some mucking about - it's not simple reference counting as that achieves something different.
How will this be achieved/implemented in the new version anyway, anyone got a code snippet?
>Every time the developer is responsible for memory, an opportunity for an error is introduced.
Every time the developer is responsible for anything, an opportunity for error is there. Until the day "programming" consists of asking the computer to do something for you, nothing will change. IMO memory handling is no harder - and conceptually a lot simpler - than plenty of other parts of modern development that language geeks rave about. E.g. OO, generics, etc.
Is he talking about the Perl 5 source?
I lurve Perl - you can make as much mess as you want and nobody tells you off. :-)
Hahaha. True. But it's not clear to me why a Perl mess should evoke merely meh, whereas a C++ mess (or a C mess, for that matter) - and it is invariably code you didn't write - evokes the reaction that the author was an incompetent idiot who shouldn't be allowed anywhere near a serious compiler.
Because Perl is executable line noise used by admins to cobble small helpers together. When the language has no decent syntax or rules to start with, why bother about the code?
I programmed in C++ from 1987 until 2005, when we switched to C#. I still do have to maintain our old legacy C++ code from time to time.
My personal opinion is that switching from C++ to C# was the best thing we could have done! Most of what we were doing was UI code, and C++ is frankly totally crap for that. It's pretty good for device drivers and graphics libraries and other low level stuff. But for anything else, it's truly awful. And I say that from a position of having a GREAT DEAL of C++ experience.
Yes, I read - and understood - the seminal C++ books "C++ Templates" (Vandevoorde/Josuttis) and "Modern C++ Design" (Alexandrescu). Oh. My. God. To think I used to think it was all so cool - now I just think the language crawled up its own arse and died... ;)
Nah, now you have managers who can write a "hello world" program in 20 minutes, so they're sure it'll only take you an hour to write and test all that code they want.
Oh dear god, that Alexandrescu book. I found myself thinking that I'd rather shovel pig shit for a living than have to work on a code base where ideas from that book had been used. It took the complexity and obscurity of template meta-programming to a point where I had to wonder about the author's sanity.
There was a time (my glory years as far as C++ is concerned) where I actually understood most of Alexandrescu's book. The object factory template stuff was weird but I got the gist of it.
Sadly since then I've been moved onto C# and r&d so my C++ skills are rusty.
C++ does take more effort than most languages but it also has the power and flexibility lacking from those languages. The trick to C++ is to code up your own 'meta language'. Wrap all the low-level arcane C++ in classes. Getting things like copy ctors and assignment operators working can be a pain but you only have to do it once.
The STL and Boost provide templates for most things you'll need - although I never liked basic_string and friends as it seemed too picky. String handling is so common that you don't want to be fighting a template to get the job done. One thing I loved about Borland's 'AnsiString' was that it had constructors for every type under the sun.
AnsiString wibble(10) gave you a string containing "10".
displayString(AnsiString("The answer is ") + a*b);
The ambiguity rarely caused me any problems.
"It's pretty good for device drivers and graphics libraries and other low level stuff."
If I need to write something that has to manipulate data as quickly as possible, I'd opt for C++ and all of the wonderful memory control it provides; if I need a GUI or high-level RESTful web service (etc.), I'd go for Java or C# depending on any existing infrastructure.
My time is money, as is the time of the people that need to maintain my code in future, and C++ just costs more of both.
Sod the "best" arguments, the purpose of a programming language is to abstract away the problems you don't need to solve for your current work. If memory efficiency isn't important, why include it within your current solution when you could spend the time on other things?
That's the reason I don't do assembly at work, anyway. YMMV.
"Wrap all the low-level arcane C++ in classes."
While I noted your qualification there, and I certainly have and do indulge in a bit of wrapping myself, I am finding it less and less necessary now, because the standard libraries bundled with most languages are pretty mature and there seems to be quite a bit of feature swapping going on (this is a good trend IMO). :)
Frequently I have found wrapping to be a major source of obfuscation and screw-ups (this isn't unique to C++ either). Essentially you're adding functionally useless code to a system that is almost guaranteed to have bugs of its own.
It's not uncommon to find multiple wrappers for the same stuff in a large and/or old project simply because successive generations of developers have decided they don't like what's gone before but are too lazy/time starved to change it appropriately. :)
Also with respect to Borland's AnsiString thing, I liked it too. But I quickly worked out that the << and >> were just as easy (for me) - and the best bit is that they are ubiquitous.
"I found myself thinking that I'd rather shovel pig shit for a living than have to work on a code base where ideas from that book had been used."
Unreal Engine 3 used ALL of them.
Then wrapped them in macros for the win.
Your pig shit metaphor is too kind.
Sadly I've just not done much C++ work of late. Mostly it's been C# - although that does show you the strengths of wrapping things up. Most of C#'s speed comes from having generally very good class libraries.
What I love about C# is the reduction in hassle. Having reliable syntax error highlighting and no headers/LIBs to locate and track is great. The hiding of pointers is good as well - if something compiles it nearly always runs. You almost never get the surprise crashes of C++.
But I also miss scrabbling around under the covers. I suppose vehicle engineers have the same feelings. They probably love the ease of swapping out faulty components and moving on to the next job whilst being a little sad that there's no carb to take apart and clean :-/
Oh and for the record garbage collection will never supplant RAII in my affections :)
It will be expressive, efficient, and have an elegant simplicity. People will be able to master all its features in a week, while being able to easily express powerful abstractions and complex data structures. It will neither leak memory, nor perform garbage collection at random intervals. It will be automatically and transparently thread safe, and support inter process communication and synchronisation as part of the language. It will be fully deterministic, so that it can be used in real time applications, as well as being provably correct, for safety critical uses.
In the meantime, just pick whatever is least damaging to your sensibilities and most suited to your purposes.
That's why in most programming languages, particularly in C, people tend to define their own languages for their problem. For example by coding the logic into data and having a simple interpreter for that data. That data can then easily be understood.
Hmm... yes I have seen attempts at that.
There was one in the early 80s called "The Last One". Sure enough it didn't last.
Fully deterministic? There goes multiple threads.
Nor will it ever be.
F# comes pretty close though... If only it didn't need the .net or mono vm it would be even closer.
There will never be a best language. Languages are tools, and we use the right tool for the job. Or sometimes, the wrong one, for lack of a choice, or out of familiarity and/or prejudice. "If the only tool to hand is a hammer, all problems look like a nail".
"It is easy to teach a list of features, but hard to teach good programming."
Which was my opinion of Stroustrup's "C++ Programming Language" - piss poor as a tutorial or a reference, more of a stream of consciousness as he walked through the myriad ways of accomplishing any one thing in C++. I have very little confidence that his new book will be any better, and I don't even see a need for it when Koenig and Moo have already written an exceptionally good introductory book on C++.
I taught myself C from the Kernighan and Ritchie book, which was a delightful jewel of simplicity (both the book and the language).
Later, I learnt C++ from the Stroustrup book, which by comparison seemed to comprise a large collection of complex ideas dumped from author's brain to paper with very little organisation in between.
The best thing I could say about Stroustrup's book is that it spawned a lucrative C++ publishing industry. Many authors must have made a good living selling us more books to explain what the hell Bjarne was on about.
Stroustrup's book is the worst computing book I ever bought, mostly because he just isn't a good writer.
You better start with "Programming Principles and Practice Using C++"
After that you will be ready for "The C++ Programming Language".
Well, that would be a great idea, except that Bjarne didn't get round to publishing it until 2008. I was learning C++ in the early 1990s so it wasn't really an option was it?
Clearly you *can* learn C++ from that book. I and many others did, because that was all there was. The book just made learning harder than it could have been.
Mr Stroustrup is undoubtedly a brilliant computer scientist, but that doesn't make him a great writer. You can't simply excuse a disjointed, turgid and dense writing style by saying it's the reader's fault for not understanding the material. That's just lazy criticism.
The preprocessor is awesome. Macros and definitions can be incredibly useful and make central points to change program behaviour that have no runtime cost, they're all processed at compile time.
From my cold, dead, C-programming hands!
Hey, I've WRITTEN preprocessors but even I can see the danger in having huge amounts of code spewed out at compile time :)
Ironic, given C++ was originally a preprocessor...
"...they're all processed at compile time."
Except when they're not. Like when mathematical expressions involving floats are concerned.
A lot of "programs" are better expressed as data rather than procedural code (Finite State Machines come to mind). It's a pity few preprocessors approach the flexibility of BLISS macros (or even Macro-11) for this purpose, but you can usually make some headway with C/C++.
A lot of other programs are rendered almost unmaintainable by a tangled nest of conditional compilation that ought to have been abstracted out a much lower level.
Most of the use of the preprocessor I've seen in C++, unfortunately, falls firmly in the second category.
I found that the m4 macro processor is much more powerful than the crapola named CPP. You can even use it to replace that slow abomination called "template classes" in most cases. Debugging will be a breeze because you will debug into the generated source code, which is fully expanded and trivial to understand as compared to debugging into template code.
CATIA, for example, uses only macro-based collection classes.
“A lot of people look at C++ and want to understand every dark corner. This is what you do if you want to be a compiler writer, but most people should look at what is the easiest way to write a good solution for their problem," said Stroustrup.
Oh no. Heck no. Hell no. F*** no.
That's hacking, not programming. It leads to training people to hack, rather than to program.
If I appear to be repeating myself, it's because it's a Very Important Point.
The world has enough different languages already, generating lots of incompatible code. When your language encourages people only to learn subsets of itself, it becomes effectively a myriad of closely related languages, but undefined and uncontrolled languages. "So you're a C++ programmer. Does that mean C++.subset(a,b,c), .subset(d,g,z) or .subset(f,m,dribble)?"
I suppose it may be fairer to call it "dialectisation", but the end result is that code is harder to share and maintain, because programmers' expectations and assumptions vary so far from each other's.
People hack the solution from the coding techniques they know, because the right way to do it is obscured.
If you really want a language that supports multiple subsets of functionality, you have to find a way to segregate and mark them clearly and unambiguously, so that programmers are able to identify what they know and look for what they don't know.
I'd like to ask that on C++ it's also important to know how the system works. The language kinda tries to hide the system from you, luring you into a false hope that you don't need to know what kind of machine code is being created.
So it's a big question whether you are a C++ programmer who thinks that out of index array accesses create exceptions, or one who knows what really will happen.
"So it's a big question whether you are a C++ programmer who thinks that out of index array accesses create exceptions, or one who knows what really will happen."
Ah, the joys of creating fencepost errors and wandering pointers, then wondering why your program is behaving "oddly", cannot be overstated.
"Oh no. Heck no. Hell no. F*** no."
Mmm, yes. He's kind of being forced into that position though. He can hardly turn around and say "Yes C++ is big and complex and has warts - DEAL WITH IT!!!" can he? I actually bought his "Programming Principles and Practice Using C++" out of interest and it is actually pretty good. His time as a lecturer does seem to have improved his writing and he is definitely acutely aware now of the issues involved with teaching programming.
He is very keen to attract the crowd that would otherwise run straight off to Python or C# etc. That's why he's trying to keep it simple for newcomers. Besides, learning through hacking is no bad thing. The problem is if one never gets beyond that stage! :-)
I can't see how C++11 will compete with Java or C# for ease of programming and rapid application development. C++ is designed for embedded and systems programming, allowing the programmer to dive into assembly when needed. The language is perfect for operating systems and some compute-intensive calculations. The Java and C# languages are better suited for application programming and information-systems programming. Compute-intensive problems can be coded in C++ and accessed as modules from Java.
The reason some people have problems writing good computer programs is that they approach the subject from a bad perspective. The education system develops their minds to rote-learn answers to questions. So if you ask them what the square root of -1 is, they will correctly say i. This extends to programming too, so an exam asking them to describe the difference between passing variables by reference and passing variables by value would most likely be answered correctly by most. What the people that can't program don't understand is that when writing a computer program there is no right or wrong answer. It's more like an art, where you craft your own kind of science to logically break down a problem and deal with it. The overall programmed solution is deemed correct if it is internally logically consistent with itself and is able to correctly solve the problem outlined.
I'm sorry, but for embedded systems you want to use C, not C++. C++ is just too resource-intensive... unless you ignore most of it and code in C. And even C is overkill for most embedded systems (think of washing machines and electric toothbrushes).
There always seems to have been this misconception amongst embedded systems programmers (of which I am one, although seemingly more enlightened) that C++ is somehow inherently more resource hungry than C. This simply is not true. There are actually very few language features that impose memory or execution overhead (basically just exceptions and RTTI) and only do so if used.
"There are actually very few language features that impose memory or execution overhead (basically just exceptions and RTTI) and only do so if used."
And, in the case of exceptions, the overhead is typically no more than the size of the error-checking code you'd have to write if you didn't use exceptions.
That is, it's only overhead if you weren't going to do any error checking.
C Primer Plus and C++ Primer Plus, by Stephen Prata, were both great introductory books. I haven't looked at that type of work in 12 or 13 years, but I see there are newer editions. Both books explained issues I've seen expressed above in highly understandable language.
PS I am not related to or have any relationship with Stephen Prata.
Glad someone mentioned these books. Cut my teeth on the old C Primer Plus when it was pre-ANSI. Excellent book, second only to the K&R C book.
The father of C++ is just coming round to the idea that passing pointers (or references) is a better idea than passing full-scale objects.
Maybe too much believing that the OO paradigm is the one true path?
What was once the domain of Fortran is now almost exclusively C++.
The computing core, not the GUI, of course.
There are many high-quality, high-performance frameworks that can be used to write amazingly complex simulations that would be just impossible to do in a single lifetime (OK, I exaggerate) if you started from scratch.
If you're curious about what can be done with C++, check these things out:
These are just a few I have used one way or another; there are many, many more. They mostly use very advanced C++ techniques, template metaprogramming and all that horrible stuff, to provide tools that do very sophisticated operations with a reasonably simple interface. It's basically what one of the first posters to this discussion said: you have to encapsulate the C++ complexity. For many domains in scientific computing, this has already been done for you.
It really is quite unthinkable today to consider developing a large, complex simulation code in another language (but of course there are a lot of people who do that; I just think they are crazy :)
Have you ever seen Hollerith format used in anger? I have :)
I know about the Fortran holdouts, they're the crazy people I mentioned :)
Things are of course different when you have a code base in Fortran that has been developed for 35 years, you'd be crazy to just chuck that out of the window and restart from scratch in C++. Perhaps the majority of academic and scientific programming is writing and modifying a couple of subroutines in an existing program. Even if it takes the student six months to make heads or tails of the program, it's still better than rewriting from scratch. It'd be years before you had anything working, without any scientific advance, and no M.Sc., no Ph.D., and no papers during this period. All around Career Seppuku.
However, every once in a while, people do start new projects from scratch. It's a good time to start looking around for alternative development environments. If the project involves computational mechanics and finite elements, it's quite likely that by using FEniCS/Dolfin (for instance), you'll get results a lot faster.
At the end of the day, I think we'll see evolution in action: if you depend on writing computational mechanics code to get scientific results, you'll get them so much faster with a proper environment (say FEniCS/Dolfin or OpenFOAM) that you'll leave old-school scientists/programmers behind. You can't just wait for the old professors to die out (that hurts a little bit because I think I can be considered one of them), because they (we) tend to leave our disciples in our places. It's necessary to out-evolve the strain :)
I'm pretty language agnostic as well, and I also use Python a lot (and matlab, and fortran, and macro languages of commercial finite element programs, and pretty much whatever it takes..) It's not really a question of C++, it's much more a question of the supporting frameworks, that nowadays are mostly written in C++. In fact, some of these have excellent interfaces to Python, using these interfaces really is the best of all worlds.
A few people code in Python - and quite a lot of these people use it for intensive tasks, which makes me quite worried for the laptops they then try and run it on.
If they know what they are doing, they are using NumPy and SciPy and suchlike. That means that you do the high-level stuff in a high-level language (Python), but delegate the number-crunching (inner loops, BLAS) to library code written in whatever low-level language the library author chose to use. Actually, there are multiple choices: the same library interfaces compiled with different optimisations (say for Athlon, or Intel Xeon, or rewritten using CUDA for offloading onto an NVidia GPGPU).
It's a VERY productive way to go.
The principle was much the same back in the 1970s, when a sensible scientist called a NAG library routine whenever he could. Do as little coding as possible in (back then) FORTRAN, and leave the details of how to get the most out of the hardware and how to tame the numerical methods to the experts. All that's really happened is that a much more powerful and expressive language (Python) has displaced FORTRAN, C and C++ as far as stringing together calls to library code is concerned.
Incidentally the overhead of doing inner loops and all in Python is a factor of ten at worst. In cases where the prospect of getting ten times as much computing done isn't attractive because human thought, not CPU-hours, is the rate-determining factor, then why care about efficiency?
Firstly, one has to specify which FORTRAN. 77? 95? 2008? The language has evolved a great deal, possibly even more so than C++
Secondly, people who attack it fail to realize that even FORTRAN-77 had two huge advantages over its competitors.
One was for the scientist/programmer. The compiler could/can autogenerate code to check array subscripts at runtime. Given the declaration REAL A(100,200) then any reference A(I,J) is invalid if I<1 or I>100 or J<1 or J>200. With the compiler generating subscript checks, many bugs immediately crashed the program, rather than randomly corrupting random data elsewhere. C compilers couldn't do this.
Note also that (say) I=101 and J=1, or I=2000 and J=-1, are detected as programming errors even though in both cases the result of blind address arithmetic will land within A.
And when the program was debugged and ready for use in anger, you recompiled with checks off and other optimisations on. Which meant that number-crunching code in FORTRAN could be faster than in C, the second huge advantage.
In particular, a FORTRAN compiler is permitted to assume that in a subroutine
SUBROUTINE FRED( A, B, M, N)
REAL A(M,N), B(M,N)
there is no memory overlap between the arrays A and B, which allows for many optimisations of statements like
A(I-1,J-1) = A(I-1,J-1) + B(I,J)
In C-style languages A and B are pointers to chunks of memory, and the no-overlap assumption can't be used in nearly so many contexts.
Since F77, FORTRAN has advanced so that now many operations on arrays can (and should) be expressed as a single statement with no sequencing specified by the programmer. The compiler is free to perform whatever sequencing and parallelisation it thinks will work best. Your FORTRAN 2008 code is hardware-architecture-independent. Your compiler generates a realisation that best exploits whatever it's running on, be that a pair of Intel Xeons with four cores apiece and a single RAM address space, a cluster of a few such beasts, or tens or hundreds of them, that you wish to use in parallel.
Automatic parallelisation is a big and hot topic and probably still in its infancy. Recent FORTRAN languages have at least freed the compiler from arbitrary constraints accidentally imposed by a programmer who previously had to specify an arbitrary sequence and who couldn't indicate that he really didn't care about the ordering of this or these loops.
That said, I'd still choose to write the outer parts of my programs in Python (using NumPy, SciPy, and suchlike) and call from there to number-crunching codes written by experts.
Hey, don't educate all these C++ retards who think that their language is "more modern" than Fortran. You could cause serious depression when they discover that all these low-level features like "every pointer can be used like an array variable" are actually a regression.
Don't tell them about Fortran's automatic for-loop reordering by the compiler; that could damage their adolescent brains forever due to an inferiority complex.
And please, don't ever pair Fortran with a powerful macro language (such as m4) or custom code-generators to rip the crap out of C++. You have to think about that disadvantaged youth who never had time to get a real education into advanced numerical processing ! You want to make them unemployed or what ???
Biting the hand that feeds IT © 1998–2017