Hi Verity, are we up for some more frolics and fun? Nope. It's a spot test today. Using your neatest handwriting, write out a standard C++ loop on the nursery blackboard. No copying. This will count towards your final grade. Joy. What brought this on? Oh, ok – gimme the chalk, let's get it over with: for (int i = 0; i < …
Thank god - some common sense!
Damn right about standard library algorithms. You spend all your time looking up preconditions, postconditions, provisos and excuses when you could read the for loop in a second or two and know all you need to know.
They are even worse for maintainability than they are for initial development. First, there's the 'oh, it's a standard library call, it must be ok' effect. Then, when you do spot the problem, you need to figure out which slightly differently named algorithm you should have called. And carefully compare all the preconditions, postconditions etc. to make sure that you're only changing what you intend to. The traditional 'oh - that < should have been a <=' is apparently just too quick and easy to tolerate.
And iterators are very likely the reason all this happened in the first place. If you read Stroustrup carefully, you will notice that inserts and deletes into containers may invalidate iterators. Not just change which item they refer to, *invalidate* them. Including the current position and end position for that iteration you are doing.
You have no guarantee about what data structure implements the container. The standard doesn't say. In fact, it explicitly states that any data structure can be used so long as it meets certain requirements. And there are few guarantees about what iterators end up pointing to when you modify a container.
Very likely the only real advantage to the standard library algorithms is that they presumably have specialisations to make sure that they work for all containers. In other words, the algorithms are a bad fix for a problem that shouldn't exist in the first place.
For me, there are three fundamental rules that no library should ever break. It must be easy to know when and how to use the library. The resulting code must actually be (not just superficially appear) readable and maintainable. And using the library in combination with normal everyday coding patterns must work reliably. If C++ container iterators are not robust enough to cope with a for-loop that modifies the container, then C++ container iterators are broken, pure and simple. And if you spend more time looking up the precise specification of a library algorithm than it would take to read or code it directly, that library is worse than pointless and should never be used.
By the way, I *have* written my own container library. It is a bit higher level than the standard one. The iterator equivalent, for instance, is 'maintained' so that it keeps on referring to the intended item irrespective of insertions and deletions. There are slight overheads in this, of course. A million random inserts followed by a million random deletes may actually take a second or two longer. But a major part of the rationale for high level languages is that CPU cycles are cheap, whereas programmer hours and a reputation for producing buggy, unreliable software are expensive.
Implementing containers like this isn't trivial. Even after a couple of years, I still find the occasional bug in them. But I virtually never get an error from using them wrong. Whereas bugs in code that uses the standard containers are, in my experience, not that unusual. Recent example - a sorted vector was replaced with a set. This resulted in dozens of bugs as the iterator behaviour changed - iterators pointing to items above the insert/delete point are no longer affected by vector insertion since the container is no longer implemented by a big array. Of course no-one should have been relying on those iterators anyway, according to the standard, but that's just weasel words. In the real world, a lot of programmers don't know this. Sure they should know better, but better still the library shouldn't be so flakey in the first place.
One last thing - about postfix ++. Expressions with side-effects are generally a bad thing. Personally, when I override assignment operators, they almost always return void since I'm unlikely to ever use that return value anyway. So there's no overhead since there's no need for a copy.
The problems that result from a bad habit (such as the overuse of side-effects) are a good reason to change the bad habit. Changing other patterns to mitigate some of those problems is missing the point.
Rant over - time for some valium ;-)
Strikes a chord.
An afternoon wrestling with for_each and bind1st/bind2nd etc. where a 3-line for loop would have sufficed taught me something very useful.
Rest assured that it was nothing to do with working out the correct syntax.
Great article btw.
I preferred what Scott Meyers said:
That sometimes loops work well and other times algorithms work well - algorithms if you can use a standard functor, or if it makes sense to decouple your loop body into a functor (it normally will for a non-trivial body). Once you do it a few times you don't waste time trying to get it to compile.
And the downside of not using algorithms is that you clutter up what the code does with the mundane affair of stepping from element to element.
Reply to Dan Re: element-stepping clutter
Come on. If the containers were written properly and iterators were robust, the only code anyone would ever need to step from element to element would be ++i or --i. Hardly clutter.
One particularly annoying case with standard library containers is removing all items that meet a condition. The problem being that an iterator cannot refer to a deleted item. With my own library, this is easiest to handle as follows...
for (bool ok = i.Find_First (l_Container); ok; ok = i.Step_Next ())
if (condition) i.Flag_Delete ();
Where Flag_Delete tells the cursor to delete the item before moving to another item or destructing. This requires a bit of overhead in the cursor object, which is why there's a separate 'c_Del_Cursor' class for when it's needed.
If I'm in a mood where I don't like these deferred deletes, there are various options including...
for (bool ok = i.Find_First (l_Container); ok; ok = ((condition) ? i.Del_Step_Next () : i.Step_Next ()));
The step can be written as '++i' and the cursor has an 'In_Range' method, but I prefer the find and step methods with the boolean return values.
This is robust code. It works reliably and identically on vector-like, map-like, set-like etc containers. When you switch container types, your code still works. Even if I change the underlying data structures, the code will still work.
And it doesn't need a separately defined predicate object, so it is more readable than using std::remove_if with standard containers as well.