Experts reboot list of 25 most dangerous coding errors

Computer experts from some 30 organizations worldwide have once again compiled a list of the 25 most dangerous programming errors along with a novel way to prevent them: by drafting contracts that hold developers responsible when bugs creep into applications. The list for 2010 bears a striking resemblance to last year's list, …

COMMENTS

This topic is closed for new posts.

    1. Boris the Cockroach Silver badge
      Flame

      Paul 4 : A good thing

That little factoid just depresses me even more than I already am: that 3 people in IT just make it up and fumble along as best they can.

And that I can't get a job in IT with 20 years in industrial robots, qualifications from the OU, and a damn good knowledge of 8-bit assembler code.

My depression is lifted, though, by my anger at the "buffer overflow" bug, or rather the "array bounds checking" bug. Come on, people... this is 2010.

But then maybe it's the managers saying, "We haven't got enough time inside your buffer-writing routine to see if it's overrun, so leave out the checking."
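For the record, the checking in question is typically a single comparison per call. A minimal sketch (hypothetical code, not from any particular product):

#include <cstddef>
#include <cstring>

// Bounded copy: refuses to write past the destination buffer. The
// "checking" being argued over is the single comparison below.
bool copy_bounded(char *dst, std::size_t dst_size, const char *src)
{
    std::size_t n = std::strlen(src);
    if (n >= dst_size)            // bounds check: src plus NUL must fit
        return false;             // caller decides how to handle the refusal
    std::memcpy(dst, src, n + 1); // copy includes the terminating NUL
    return true;
}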

Perhaps every programmer should spend a year writing life-critical software like flight control systems or industrial robot control programs.

That would separate the good from the useless.

However... would I want to fly on a plane or operate that robot afterwards?

      1. Anonymous Coward
        Pint

        Keep the faith, the work is out there.

Management still has nothing but deadlines/money in mind (security are people that watch CCTV monitors). For example, a company owned by an Australian with a Scottish name bought D/C racks to run automated tests in an attempt to improve the abysmal quality rate of their HD STB. The pointy-haired manager decided that the code written by developers would be maintained by help-desk staff with no programming training. To effect this cunning plan, "none of that programmer nonsense" like design, structure, meaningful identifiers, OO, unit tests, or indeed tests of any kind, would be used on a code base that provided OCR/image recognition over a network while attempting to compensate for latency when simulating user interaction with a set-top box.

I walked out at the end of the first day, never to return. The other chap, who'd foolishly taken a little longer to notice the smell of fail emanating from the boss's office, stayed a month, by which time the project had failed and the manager had been promoted on the back of his *genius* cost-cutting plan.

So it's hard to find decent employment, but I'm largely self-taught; I did a couple of City & Guilds software development courses back when they offered Unix/C++ and Portable/C qualifications (circa 1999). I got my first contract completely by chance: loitering outside an internet cafe, I ended up chatting to a random, a cyber-squatter/domain broker, and hacking out a little application. A grand for a week's work; he made rather more with the software, but that's life. Since then, I've written encryption software for the embedded market, parallel-processing software for the HPC market, and, for my sins, financial software (never again).

I've written polished, unit/integration-tested code in C, C++, Perl, Ruby, Java, Python, Pascal (Delphi) and SPARC/x86 asm (these tend to be inline in C apps rather than complete asm, except for the smallest of boards). I've also written really shonky code in VB and made all the mistakes on the list in various languages, including a major missing one: not *just* writing the simplest code that would work, on the basis it'll come in handy at some unspecified time in the future. I've also spent some big chunks of time out of work, but I use that time to develop my skills and underlying codebase (I've some interest in code generation/toy compilers). So long as you know what you're doing, and you keep your head up, it'll be OK.

As for industrial robotic skills, are you anywhere near Bristol? There are always people looking for embedded/industrial development staff round there.

        as the title, sed.

        Beer for the west country pubs.

  1. Anonymous Coward
    Happy

    Changing focus

    Several of the items in the list used to be called exploits. Now they are programming errors. Is there a danger of shifting focus from the real villains here - the exploiters?

    If it is legitimate to blame the coders, when do the ISPs, hosters of all these error-filled webpages, have to take any responsibility for what they allow on their servers?

  2. FoolD
    Badgers

    Blame >>= 4

    Web design = coding ? Bleh

    Most of those 'vulnerabilities' could/should be fixed at a lower level so sloppy coders can't break anything. Try pestering the language/platform makers to be more secure in the 1st place - not the poor saps trying to make the best of insecure tools available to them.

    The rest is a matter of you get what you pay for - hire experienced staff and train them properly. You won't get the contract if you do though - the sweat shop next door will undercut you.

    In other news: Software vendors agree a contract to stop night following day [small print: or shift the blame when it does]

  3. Anonymous Coward
    FAIL

    "top 25 programming errors"?

    Perhaps top 25 WEB DEVELOPMENT programming errors.

    We're not all hacking out our code for the moronic hordes who frequent Facebook et al.

  4. CD001

    some of this stuff...

    I just looked at the examples on some of these - like the PHP include/require one - and thought, "oki - that might be a serious vulnerability but who the hell in their right mind would actually do that?"

    Then I remembered some of the god-awful, shonky, half-arsed, crap PHP code I've seen over the years and sighed. If web-devs want to stop being mocked by "real programmers" it might help if they actually put some effort into learning their trade properly - there are some very good web-devs but there also seem to be quite a few feckless tossers who really couldn't care less.

Having said that, though, "real programmers" often make the same mistakes as many "web-devs" (when they're forced to write web-apps): an inability to write good (X)HTML/CSS, and forgetting they're coding in a warzone where they have no control over the user environment or interface software (browser), and everyone from here to Dubai and back again can have a crack at breaking their system.

    Paranoia is not a mental health problem when you code for the web - it should be a way of life :)

Your ideal web-dev should be an expert in server/client architecture, able to write, optimise and load-balance applications, a security expert and a part-time lawyer (your system needs to conform to DPA and DDA legislation in the UK)... so it's maybe not surprising that the good ones are really good (and rare) whilst the poor ones are awful (and in plentiful supply), considering a web-dev will earn maybe 50-66% of what a Java programmer does, for example...

    1. Anonymous Coward
      Pirate

      They should also...

Be devious sods, able to think of things that people might try to do that a normal person would only expect a villain from James Bond to think of...

  5. Red Bren
    Gates Horns

    Commodity Software

Part of the problem is that companies have been led to view software as a commodity product, to be bought off the peg and customised, rather than something designed and tailored to the individual business's needs. Unfortunately the licence generally exempts the supplier from any liability if the software doesn't perform, while preventing anyone from fixing it in-house.

It's the same mentality as the chav putting a bean-can exhaust on his Vauxhall Corsa and expecting it to perform like a Ferrari! Quality costs more upfront, but pays for itself in the long term.

  6. Stevie
    Thumb Up

    Hurrah!

    This "holding responsible" thing is the greatest idea ever. But why stop at developers?

Let Microsoft, Apple, Adobe, Symantec, Quicken and a raft of others be held responsible for material losses incurred as a direct result of their software not coming up to snuff, no matter what weasel words are written in the EULA.

    Let those insidious wreckers of reputations, the credit bureaus, be held responsible for the crappy state of their records and the reprehensibly wide latitude in their queries used to construct the credit reports that are forwarded to banks, employers, police etc. I've never seen such shoddy work.

    Let the IRS be held responsible for proving what they allege as to your financial cheating of the state *before* they are allowed to enact draconian measures to "ensure compliance".

Let idiots who for years ignore the noises coming from an apartment and the obviously battered appearance of a child who lives there, then criticize the welfare authorities and/or police when that child is killed by a "guardian", take responsibility for their callous disregard for the consequences of "minding their own".

    Let jurors take responsibility for their verdicts without the now-mandatory "not *my* fault" interview on national TV after the case is over.

    Let the Police rather than the taxpayers take responsibility for monetary damages awarded in respect of injuries sustained as the result of misconduct. Let people sue the pension funds instead of the state and the blue wall of silence would soon crumble.

By Jiminy, this anything-but-obvious idea has legs!

  7. Graham Bartlett

    Problem solved - if you want to pay for it

For any real programming issues on that list, talk to an embedded software engineer. Especially talk to anyone who's ever worked at the higher SIL levels, or under DO-178B, or on similar high-reliability systems.

It'd be nice to be able to report that it's easy. It isn't - it's a constant battle, all the time. But a standard QA plan for high-reliability software sets out how the battle will be fought, so that the chances of a bug making it through are as close to zero as humanly possible. The downside is that this costs money. A lot of money. Multiply your worst-case coding estimate by 10, and you've got the total time it'll take for design, coding, reviewing, testing and auditing.

But there are still no-brainer solutions to a lot of problems. Static analysis, for example - if you're coding in C and you're not running Lint (or equivalent) on your code, you're preparing to fail. Or incomplete requirements - if you're sending stuff over a serial link and your spec doesn't say what the endianness is (been there), you're preparing to fail. Or testing - if your test spec doesn't quote a source requirement for every test you're doing, and you haven't done a cross-reference to make sure your test spec has at least one test for each requirement, you're preparing to fail.
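For illustration, a hypothetical fragment of the sort a Lint-class tool flags immediately (an uninitialised read) while a plain compile may let through:

#include <cstdio>

int main()
{
    int limit;                  // never initialised
    if (limit > 0)              // lint-style warning: 'limit' may be used before being set
        std::printf("%d\n", limit);
    return 0;
}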

  8. Keith Doyle
    FAIL

    Coding Malpractice Insurance anyone?

If you're going to treat coders like doctors and sue them for malpractice, you have to give them the absolute authority to do it right. That means the authority to determine how long it will take, and what techniques and tools will get used. The coders don't make those decisions now, except in a few rare instances; management does, in the name of "getting the product to market in a timely fashion."

    Not only that, coding is a team effort, and often ancient preexisting code and libraries are foisted upon coders who have neither the time nor expertise to fully understand what risks may be contained in their newly-found inheritance.

    If you're going to treat them like doctors, they have to have the same sort of authority, the authority to actually make the decisions relevant to their responsibilities. And you'll have to pay them about three times as much. Any takers? I thought not.

    1. DPWDC
      Thumb Up

      RE:Coding Malpractice Insurance anyone?

Yup, overworked + underpaid = errors. Always going to happen when the contract goes to the lowest bidder.

  9. Anonymous Coward
    Thumb Down

    Meh.

    Wake me up when someone publishes the list of 25 most dangerous management idiots.

  10. Neil Cooper

    Ridiculous use of the word dangerous.

    Lol.. this article lists a bug that allows cross-site-scripting as the most _dangerous_ coding error.

    I work as a software developer on avionics systems. Some girlie little website bug is never going to be considered even slightly dangerous compared to what we can screw up.

    1. Oninoshiko
      Coat

      True.

Anyone who has even a passing understanding of the Therac-25 case study knows better. The problems that caused it are mostly all listed here, but XSS never killed anyone; the Therac-25 killed at least three people, rather gruesomely, through radiation poisoning.

      mine's the one lined with lead... thanks.

  11. Anonymous Coward
    WTF?

    More FUD against open source.

Microsoft seems to be on a crusade these days touting its own process security. This study seems like more FUD along those lines. Follow the money.

The ideas put forward border on the ridiculous. I'd like to see Linux kernel developers lining up for background checks. And guess what the prevalent platform is for the gears that power the tubes of the Internet nowadays?

  12. SisterClamp
    Grenade

    Just the start...

    C'mon guys, I know there are a lot of real techies here. Aren't you just the least bit sick of "developers" who wouldn't know a pre-test from a post-test loop? Managers who don't know the difference between a web server and a mail server? As a computer scientist, I've seen the industry go to hell in a handbasket, and it's not just due to the Indians. Tell me you haven't seen a complete dunce get employed just because they know the hiring manager? I've seen History graduates, secretaries and frickin' carpenters hired as IT developers and consultants. Where's the quality under such conditions?

    HP used to have a policy that only people with computing degrees were employed within the company. (Okay, they also had one against hiring women, but let's just go with the positives for the moment.) I say we get back to that and maybe claw back some of the self-respect that disappeared when our promotion got given to someone who could swim 200m faster than anyone else. Bitter? Not much.

    1. Anonymous Coward
      Stop

      "people with computing degrees"

      They're half the bloody problem.

      Incompetent IT staff who are employed just because they have a piece of paper are ten-a-penny.

      Give me experienced staff (regardless of qualifications) any day.

  13. John Smith 19 Gold badge
    Coat

    Demonstrates the shifting sands

    #1 is cross site scripting.

    Which only matters in a *web* environment.

20 years ago (I'm guessing) they'd be looking at memory management (particularly unassigned pointers and memory freed before its last use, mostly in C).

Things change. Had Borland included function pointers (yes, full Pascal *does* allow you to construct a table of functions, like C) before (IIRC) version 5, perhaps the world would be a *very* different place.

    Mine's the one with "Code Complete" in the pocket, which demonstrates that construct.

    1. Anonymous Coward
      Anonymous Coward

      Bit rusty on delphi

If memory serves, the syntax is something like the following; it's been years since I touched Delphi, so this might all be way off base.

What does Code Complete say on the subject?

type
  TFunc = function(n: integer): integer;
  TFuncTab = array[0..1] of TFunc;

var
  fptr: TFuncTab;

function my_square(n: integer): integer;
begin
  my_square := n * n;
end;

begin
  fptr[0] := my_square;
  fptr[0](10);
end.

      1. Beelzeebub
        Flame

        Beelzebub@hotmail.com

        Eh?

int c = 0;
int b = 1;

int add()
{
    c++;
    b++;
    return (c + b);
}

Answer = 3

    2. John Miles

      re: had Borland included function pointers

      But Borland did -

Delphi event model processing is based around pointers to "functions/procedures on an object instance" and hasn't changed since version 2 (probably 1); it is, I believe, an extension of the Pascal function pointers (I think they were in Turbo Pascal, but as I haven't used it for >15 years I really can't be 100% sure).

  14. Anonymous Coward
    FAIL

    Completely backasswards

    Devs do what customers pay them to do, not one bit more. Hold the system owners responsible, preferably with high costs associated with security breaches, and devs will automatically be tasked with increasing security. Setting up development standards in a vacuum is not going to change anything at all. But of course, if you're sitting high up in an ivory tower the dirt and drudgery down here on the ground looks quite random. Whip the bastards, that'll teach 'em to code right!

  15. Annakan

Maybe just scrapping the LAWS that prevent COMPANIES from being held responsible

Would do a GREAT deal of good.

It is as if these developers were working in a vacuum... not hired by anybody who imposes on them the language, the schedule, the training... just evil, incompetent devils...

Remember the DMCA and such? They remove software companies from liability BEFOREHAND, something we would not accept from any other industry (Toyota mess, anyone?).

Maybe THEN we would see a move to memory-managed languages that would remove 80% of those bugs and vulnerabilities, and a better emphasis on doing it RIGHT before doing it NOW.

NOBODY asks you to do it right in a development team; security and quality are things you have to shove in yourself, on top of the workload. Even though that quality would pay down the road in support and maintenance, it has ZERO priority. So it is like blaming the rail worker for the route of the railroad: he could fix the rails more solidly in the ground, sure, but only IF he were not asked to lay them on sand.

And to have those things changed, software shops need to be responsible for the damage they cause, like any other industry.

I need NONE of the "new" capabilities of Office 20XX; I would prefer to have it more logical, simpler to use and safer, and that goes for operating systems and all the rest.

Drop C and C++, for Christ's sake; these are NOT professional languages, just glorified macro assemblers.

    1. Anonymous Coward
      Badgers

      Memory_management != the_issue

First off, memory management is *not* the issue with C/C++; *concurrency* is the issue, and that's solved by using concurrent languages (Erlang etc.), not a garbage collector.

You can use *deterministic* finalization to have the stack manage *allocated* memory using RAII patterns; 'C' also lets you do this with a bit more work. Try that in, say, Java... oh wait, no deterministic finalization.
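To make the RAII point concrete, a minimal hypothetical sketch in C++ (pre-C++11 style to match the era): the resource is tied to an object whose destructor runs deterministically at end of scope, on every exit path.

#include <cstddef>
#include <cstdlib>

// RAII: acquire in the constructor, release in the destructor. Cleanup is
// deterministic: it runs the instant the object goes out of scope, whether
// by return, exception, or fall-through.
class Buffer {
public:
    explicit Buffer(std::size_t n) : p_(static_cast<char *>(std::malloc(n))) {}
    ~Buffer() { std::free(p_); }   // deterministic finalization
    char *get() const { return p_; }
private:
    char *p_;
    Buffer(const Buffer &);             // non-copyable (pre-C++11 idiom)
    Buffer &operator=(const Buffer &);
};

void demo()
{
    Buffer buf(4096);      // acquired here
    // ... use buf.get() ...
}                          // released here, on any path out of demo()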

I'm a fan of V-HLLs (Ruby/Perl etc.) for lots and lots of things, but language-as-panacea for bad resource management, whether the resource is memory, DB connections, sockets, threads, open files etc., is unhelpful in my view.

Finally, there are huge amounts of existing code in C/C++; if you want to rewrite/wrap it all in some managed language du jour, go right ahead.

C/C++ are languages that you build your layers on; if you use the naked stdlib/STL you'll end up making more mistakes than if you write a handful of decent wrappers, like the following for realloc.

Almost all the stuff on the list, in every language, comes down to design issues; you can't get rid of error, but you can design out most of the causes, aside from users.

/* Resize a chunk of memory obtained by a previous call to malloc().
 * The behaviour differs from stdlib realloc in that 'old' is always freed.
 * Null pointers and zero sizes are not supported; use malloc/free directly
 * if that behaviour is desired.
 * This means that p = utility_realloc(p, size) is safe, while as we all know
 * p = realloc(p, size) is a leak waiting to happen.
 * On success: returns a pointer to size bytes of uninitialized memory, freeing 'old'.
 * On failure: returns NULL, modifying errno AND freeing 'old'.
 *   EINVAL: invalid args passed; 'old' is NULL or size is 0.
 * The function may also fail and set errno for the same reasons as realloc(). */
void *
utility_realloc(void *old, size_t size)
{
    void *p;
    int err;

    errno = EINVAL;
    if (!old || !size)
        return NULL;

    /* old is freed on success */
    if ((p = realloc(old, size)))
        return p;

    /* old is not freed on stdlib realloc failure */
    err = errno;
    /* it is now */
    free(old);
    errno = err;
    return p;
}
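A hypothetical calling sketch (not part of the wrapper itself) showing the leak it designs out:

#include <cerrno>
#include <cstdio>
#include <cstdlib>

extern "C" void *utility_realloc(void *old, std::size_t size); // the wrapper above

int main()
{
    char *buf = static_cast<char *>(std::malloc(64));
    if (!buf)
        return 1;

    /* The classic leak: plain realloc() returns NULL on failure WITHOUT
     * freeing buf, and the assignment clobbers the only pointer to it:
     *     buf = (char *)realloc(buf, 1 << 20);
     * With the wrapper the same one-liner is safe: on failure buf is NULL,
     * errno is set, and the old block has already been freed. */
    buf = static_cast<char *>(utility_realloc(buf, 1 << 20));
    if (!buf) {
        std::perror("utility_realloc");
        return 1;
    }
    std::free(buf);
    return 0;
}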

      1. BlueGreen

        @sed gawk

        > Memory management is *not* the issue with C/C++,

        muppet

        > *concurrency* is the issue,

        muppet

        > solved by using concurrent languages (erlang etc) not a garbage collector

        muppet

        > try that in say, java, oh wait no deterministic finalization

        find out why, muppet

        1. Anonymous Coward
          Grenade

          @bluegreen

          Thank you for your reasoned and well thought out critique, you have a genuine flair for language and the sheer variety of your response was most refreshing. ;)

Well, since you ask: Java doesn't do deterministic finalization because Java's designers made design choices in the generational GC not to equate garbage collection with destruction.

This means:

1) Post-GC objects can become reachable again, in a rise from the dead, leading to all sorts of fun and games in unwary code.

2) The JVM prefers to dump all memory on exit (i.e. without finalizing) rather than run GC if possible: a fast path with no finalize stub rather than a slow finalize path which might never run but adds overhead anyway.

3) Language scoping rules can't be used to implicitly bound object lifetime, imposing implicit ordered finalisation (see point two); it has to be done explicitly using weak references, making some useful design strategies cumbersome to implement. Say, throttling a resource using object reference counts (think peer-node associations) to dynamically drive load management in a distributed system: an additional ref means an extra notch on the power, one less reference means you step it down. Scope-managed ref counting is a nice way of implementing that sort of throttling principle.

Try that with a Java generic: add_ref() on construction is easy, but how do you make sure release() is *always* called *for* you rather than *by* you? Can you be sure you caught every edge case? Is it exception-proof? How easily can you test it?

I use whatever *tools* work best for the *job* in hand; I'm not picking on Java so much as saying:

4) *Memory* is no different from any other sort of *resource*.

5) Strategies exist in languages with deterministic finalization, e.g. allowing resource management to be implicitly bound to object lifetime, with the language doing the work rather than the developer.

6) A data point: there are plenty of garbage collectors for C/C++, yet GC is not that widely used there, for whatever reason; perhaps because of point 5).

7) These strategies are examples of designing out problems rather than coding round them with, say, explicit calls to synchronization primitives like mutexes.

8) Some of these strategies are less effective/more difficult to implement without language-level support for deterministic finalization, hence the Java reference (I've had to do this before and it's a pain).

9) Most of the items on the secure coding list are design problems; for example, failing to sanitize user input is really a failure to have/use user-facing functions that sanitize input for you.

10) realloc() is a source of leaks in C code because people treat it as "free(old); return malloc(size);" when it's really "return (pnew = malloc(size)) ? free(pold), pnew : NULL;"

11) The realloc wrapper I posted plugs that *really common* class of realloc leak simply by including a header with a macro, i.e. no source change needed.

12) Exactly that realloc leak turned up in the JVM, no less: http://gcc.gnu.org/ml/java-patches/2008-q1/msg00092.html (fourth Google result for "realloc leak"). Easy fix: #define realloc utility_realloc (such a shim header is sketched at the end of this comment).

13) I don't think resource/memory management is too big a deal, so much as I think some interfaces need wrapping for sanity, including my own sometimes; not too often, I hope.

I'm not advocating one language over another, just saying we need better implementation designs.

Memory *really* is just another *resource*, is it not? And almost all the coding flaws on the list come down to bad design, whether architectural, interface or implementation.

Granted, some library interfaces in C/C++ make it easy to make mistakes, but as I said, wrap them with the safer/saner interface of your choice. Why throw such a flexible tool away simply for lack of one of the thousands of decent library interfaces for whatever resource management issue you have? Or, here's a thought: write a C/C++ extension for Perl/Ruby/Python/whatever and get the best of both worlds: memory-managed access to all the C/C++ libraries through a thin shim layer.

14) Shared-memory concurrency, even with automatic management of resources and all the tools we have, is a right pain. It's even worse trying to do anything even vaguely realtime and parallel that way in C/C++, for a few reasons, some of which are changing with Boost/C++0x, but some of which are basic language issues.

The dominant model with C/C++ is shared-memory concurrency (pthreads and the like), and everyone does it slightly differently. Scaling to large numbers of cores/GPUs cries out for language/library support for expressing things as clean message-passing co-processes, without having to manage the details of the concurrency explicitly or give up the portability/expressiveness/compatibility of C/C++.

As I said previously, I think concurrency is the issue; those "muppets" over at Intel seem to agree that language support is needed: http://software.intel.com/en-us/articles/intel-concurrent-collections-for-cc/

You've heard of Intel, right?

I don't have memory leaks; I have spread-work-across-distributed-processes issues. Erlang solves that for me by interconnecting C/C++/whatever consumers/producers and letting them ignore concurrency completely. This thing from Intel looks interesting too.
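The shim header promised under point 12 might look like this (hypothetical; take care not to include it in the file that defines utility_realloc, or the wrapper's own realloc() call would be rewritten into a call to itself):

/* utility_realloc_shim.h: drop-in realloc replacement, per point 12 */
#ifndef UTILITY_REALLOC_SHIM_H
#define UTILITY_REALLOC_SHIM_H

#include <stdlib.h> /* pull in the real prototypes before renaming anything */

void *utility_realloc(void *old, size_t size);

/* Every realloc(p, n) in the including file now routes through the
 * leak-proof wrapper; no other source change needed. */
#define realloc(p, n) utility_realloc((p), (n))

#endif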

          wow that was far too much.

          1. BlueGreen

            @sed gawk

            > Thank you for your reasoned and well thought out critique, you have a genuine flair for language and the sheer variety of your response was most refreshing. ;)

            Positively Wildean, I grant

            > Well since you ask java doesnt' do deterministic finalization because [reasons]

It's more complicated than you made out. Generational, mark-sweep, compacting -- none of these can immediately pick up all dead objects. The only thing that comes close is reference counting, which has other problems (speed, overhead) and still can't make immediacy guarantees (consider cycles).
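A minimal illustration of the cycle problem, using C++'s reference-counted shared_ptr (std::tr1/Boost in 2010 terms; a sketch, not from the article):

#include <memory>

struct Node {
    std::shared_ptr<Node> next;  // strong (counted) reference
};

int main()
{
    std::shared_ptr<Node> a(new Node);
    std::shared_ptr<Node> b(new Node);
    a->next = b;
    b->next = a;  // cycle: each Node holds the other's count above zero
    return 0;
}   // a and b go out of scope; each count drops to 1, never 0: both leak.
    // The usual fix is to make one direction a std::weak_ptr<Node>.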

More: <http://msdn.microsoft.com/hi-in/magazine/bb985010%28en-us%29.aspx>; look for "There are several reasons for this" for a summary, though that doesn't do it justice. Conflating garbage collection and object finalisation was recognised as a bad idea back in the 80s by Modula-3's designers, but Java's creators were too witless to learn the lesson (like so many others they failed to learn), so Microsoft had to follow on, and we all move backwards, again.

            > think peer nodes associations

            I would if I knew what they were.

            > 5) Strategies exist in languages with determinstic finalization e.g. allowing resource management to be implicitly bound to object lifetime with the language doing the work rather than the developer.

            I think you are thinking of C/C++ where object lifetimes are explicitly and sharply delimited by free() or implicitly by subroutine returns. It is reasonable, I suppose, to ask that Java provide some kind of equivalent to smart pointers so things happen automatically on subroutine returns, but there's not much you can do about other objects you intend to have longer lives. If you want deterministic deallocation for "local" variables, you have to wrap a try/finally around the routine body.

            > 6) A data point, there are plenty of garbage collectors for C/C++ yet GC is not that widely used for what ever reason, perhaps because of point five).

            These are conservative garbage collectors (I'm sure wiki has an article. Read up on them and have nightmares) and they don't provide the behaviour you are asking for.

            > 7) These stategies are examples of designing out problems rather than coding round them, using say explicit calls to synchonization primitives like mutexes.

I don't know quite what you're saying there; for my part, I have a tendency to build frameworks to hide grubby detail, in the same way that you suggested wrapping realloc(), but typically on a bigger scale. I guess that's easy to say though.

> 13) I don't think resource/memory management too big a deal,

            then you're a better person than I.

            > or here's a thought write a C/C++ extensionn for perl/ruby/python whatever and get the best of both worlds, memory managed access to all the C/C++ libraries through a thin shim layer.

            I don't see how this would get you the kind of deterministic memory management you want.

            > (Stuff about concurrency)

I'm not saying that concurrency is easy or irrelevant, only that you were speaking too generally; and if you like Erlang's model, perhaps you should recognise that it is a model and not a language, and see if there is a framework to support what you want. Perhaps this is of interest: <http://www.google.co.uk/search?hl=en&source=hp&q=actor+framework+%22c%2B%2B%22&btnG=Google+Search&meta=>

            > (re. erlang) I don't have memory leaks,

            Hmm. Is that because Erlang has a garbage collector?

            > erlang solves that for me by inconnecting C/C++/whatever consumer/producers and letting them ignore concurrency completely

If it's that simple, why are you using Erlang? Just for producers/consumers? I'm missing something.

            If you want deterministic finalisation, here's how to do it: work out the exact rules that fulfil the requirements, then slap a preprocessor over Java to provide it. How's that?

            1. Anonymous Coward
              Pint

              @bluegreen

              > Well since you ask java doesnt' do deterministic finalization because [reasons]

              >It's more complicated than you made out. Generational, Mark-sweep, compacting -- none of these can immediately pick up all dead objects. The only thing that can approach this is reference counting which has other problems (speed, overhead), and still can't make immediacy guarantees (consider cycles).

              >More: <http://msdn.microsoft.com/hi-in/magazine/bb985010%28en-us%29.aspx>, look for "There are several reasons for this" for a summary, but this doesn't do justice. Conflating garbage collection and object finalisation was recognised as a bad idea back in the 80s by Modula 3's designers, but Java's creators were too witless to learn the lesson (like so many others they fail to learn), so Microsoft had to follow on and we all move backwards, again.

              Agreed, more to it than meets the eye.

              Re Actor, I've come across that before, but thank you for the link.

Re design/frameworks/language stuff:

I suppose my point is just that most errors, security or otherwise, are design flaws that can be eradicated if you want to throw enough time/money at the problem.

The MSDN link has a nice example in it; take the part comparing C# finalizers to C++ destructors: "Don't let the identical syntax fool you." (This is a design flaw: two separate concepts, easily conflated, with the same syntax.)

Re Erlang/concurrency:

It's more that there are message buses like RabbitMQ that do the heavy lifting. I know you can do the same thing in other languages, but this works out of the box for my application's needs; YMMV.

Using a message broker, e.g. RabbitMQ, just makes the producers/consumers simpler to write; they aren't aware that Erlang/RabbitMQ is used.

Nice. That aside, it's quite a balanced article; again, ta for the link.

Re Java pre-processor:

I made a crude attempt at this years ago (pre-Java-generics), using Java as the output from a simple generation language that added explicit calls to allow *more deterministic* code. Generics replaced 95% of the benefit of my little tool, so I mothballed it. But Java pre-processing is already here, I think (too lazy to verify); at least one of the Google tools pre-processes some other language into Java, and annotations, while not preprocessing in the trad sense, surely blur the line.

Using C++ would work too.

              > think peer nodes associations

This patent troll explains it quite well: http://www.faqs.org/patents/app/20080307094

              > 5) Strategies exist in languages with determinstic finalization e.g. allowing resource management to be implicitly bound to object lifetime with the language doing the work rather than the developer.

              >I think you are thinking of C/C++ where object lifetimes are explicitly and sharply delimited by free() or implicitly by subroutine returns. It is reasonable, I suppose, to ask that Java provide some kind of equivalent to smart pointers so things happen automatically on subroutine returns, but there's not much you can do about other objects you intend to have longer lives. If you want deterministic deallocation for "local" variables, you have to wrap a try/finally around the routine body.

              I was, and I accept that smart pointers won't solve everything.

              > 6) A data point, there are plenty of garbage collectors for C/C++ yet GC is not that widely used for what ever reason, perhaps because of point five).

> These are conservative garbage collectors (I'm sure wiki has an article. Read up on them and have nightmares) and they don't provide the behaviour you are asking for.

              I know what they are, cheers.

              > 7) These stategies are examples of designing out problems rather than coding round them, using say explicit calls to synchonization primitives like mutexes.

> I have a tendency to build frameworks to hide grubby detail, in the same way that you suggested wrapping realloc(), but typically on a bigger scale. I guess that's easy to say though.

That realloc bug is present in a decent proportion of C code, from compilers to virtual machines; that's a pretty big ROI for ~15 lines of code that design out the error arising from the disparity between how people perceive realloc() and how it's specified by the C standard.

Exactly as C# finalizer syntax reintroduces the "realloc bug", by virtue of the disparity between how people perceive ~foo in C# and how it's specified by the C# specification/standard.

Design again:

For example, when the user enters a numeric value in an application UI (this is more about preventing cockups than malice), you can either:

1) let them enter a number into a text input field and then validate it, or

2) have them pick from pre-verified data, e.g. valid numbers from a drop-down.
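A minimal sketch of option 1 (hypothetical helper; std::strtol does the heavy lifting):

#include <cerrno>
#include <cstdlib>

// Accept the field only if the *entire* string parses as a base-10 number.
// Rejects empty input, trailing junk, and out-of-range values.
bool parse_int_field(const char *s, long *out)
{
    char *end;
    errno = 0;
    long v = std::strtol(s, &end, 10);
    if (errno != 0 || end == s || *end != '\0')
        return false;
    *out = v;
    return true;
}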

> 13) I don't think resource/memory management too big a deal,

> then you're a better person than I.

I'll take your word for it ;) but I only meant that shared-resource management problems are well documented and understood; distributed resource/concurrency access issues are less well understood.

              > (re. erlang) I don't have memory leaks,

> Hmm. Is that because Erlang has a garbage collector?

No, it's because it implements the non-shared-state concurrency model in a functional language which *has a garbage collector* :)

Pint, because I need one; why don't you join me in raising a glass?

              1. jake Silver badge
                Pint

                @sed gawk & BlueGreen

                See my comment to Chris & Trevor earlier ...

                Regardless, beers all around, and I never use icons :-)

                1. BlueGreen
                  Paris Hilton

                  @sed gawk, @jake

@sed gawk: as I'm not fond of beer and there's no whiskey icon, it'll have to be the good lady. If you're interested in reliability, check this <http://en.wikipedia.org/wiki/SPIN_model_checker>, which totally fails to explain what it can do, so try this <http://www.albertolluch.com/research/promelamodels> and look for "deadlock". One day I'll actually read the book I bought on it.

                  @jake: cheeky bastard, it's paris hilton for you too.

  16. John F***ing Stepp

    Drop C?

    Annakan, really.

    The language does not make the problem.

About 35 years ago I was working in Assembler; not a nice-guy language.

Not a language at all.

Your language is just an abstraction layer, and just one that you know.

So bashing C or C++ is about like me saying that Pascal is retarded (but it is, dammit). We have to do what we can with what we are given.

    "Why did my creator give me two left hands?"

    Thank you MarrCy Shelly.

    (little insertion error check above; Hey Dave, it is still full of bugs; HAL.)

  17. Anonymous Coward
    Anonymous Coward

    They should start by

Getting management to read The Mythical Man-Month and realise that though software development techniques have moved forward in the last 40 years, most management of it hasn't.

  18. Anonymous Coward
    Stop

    LOL @ Mythical Man Month

    Because I have read it...

But still, in software you expect bugs and insecurity. To say that nobody makes a mistake is just insane. Software takes time, and in this case, if you are to hand all responsibility over to the developer, then you need to increase the amount of testing. Guaranteeing no problems could take a very long time, which a lot of management don't like to hear.

Like others have said, especially in a team environment, and with object-oriented programming languages that use inheritance, someone could make a change that makes what someone else has written insecure. So who would be in the wrong there?

I mean, sure, if you make idiotic mistakes then you will probably just be fired instead.

  19. Dodgy Dave
    WTF?

    Can I have some of their drugs, please?

In what universe does a customer go to a vendor, ask to buy their software, and then try to impose contractual conditions on how that software came to be written?

    "I'd like a copy of Microsoft Office, written in Ada, using ClearCase for source control, developed entirely by US citizens who were wearing ties at the time."

  20. John Smith 19 Gold badge
    Happy

    @SED GAWK, @John Miles

Oh dear. I merely meant to point out that the focus of any kind of consensus coding-error list would shift over time, and to suggest what it *might* have been had it been compiled a couple of decades ago.

Yes, I was aware that Delphi relies quite deeply on function pointers (as AFAIK did the original C++ macro processor that Bjorn thingy used to implement his language at Bell Labs).

My point was that Turbo Pascal dates from 1983, while Delphi dates from 1995, and AFAIK TP did *not* incorporate function pointers until version 5. This feature seems to be a *very* popular implementation idiom for C programmers. I speculated that had it been in the most common version of Pascal at *launch*, the benefits of Pascal and of the developers' approach (IDE, longer variable names, better type checking while retaining the fast edit/compile/link cycle, and the availability of the compatible but better-optimising Stoneybrook compiler) would have been obvious, and the world would be a *very* different place.

Cracking open my copy of "Code Complete" to page 276 onward. I'll be truncating the code compared to the book; I'm presuming you'll know when I'm skipping stuff, as you know Pascal. The example reads records composed of multiple (and varying) fields by breaking them down into their individual fields and processing them. A new record type can then be defined as a list of fields to be processed.

Start by defining an enumerated type for the data fields:

type
  FieldTypes = (FloatingPoint, Integer, TimeOfDay);

Define a "procedure type":

type
  HandleFieldProc = procedure( FieldDescription: String;
                               var FileStatus: FileStatusType );

Define an array of this type:

var
  ReadAndPrintFieldByType: array[FieldTypes] of HandleFieldProc;

Initialise the array:

ReadAndPrintFieldByType[FloatingPoint] := ReadAndPrintFloatingPoint;
ReadAndPrintFieldByType[Integer] := ReadAndPrintInteger;
ReadAndPrintFieldByType[TimeOfDay] := ReadAndPrintTimeOfDay;

Not shown (in the book) is the master array FieldDescription, indexed by a message ID number and consisting of NumFieldsInMessage, FieldType and FieldName entries.

Setting MessageIDx gives the following loop:

MessageIDx := 1;
while ( MessageIDx <= NumFieldsInMessage ) and ( FileStatus = OK ) do
begin
  FieldType := FieldDescription[MessageIDx].FieldType;
  FieldName := FieldDescription[MessageIDx].FieldName;
  ReadAndPrintFieldByType[FieldType]( FieldName, FileStatus );
  MessageIDx := MessageIDx + 1;
end;

Hey presto: a processing loop extensible to any number of record formats or field types.

I note the use of variable names with the same names as the arrays that hold instances, and (possibly Pascal's *most* annoying feature) its insistence that the case of variables matters.

    Hope that answers any questions as I am exhausted.

    1. Anonymous Coward
      Pint

      Cheers

Thanks for the effort; have a pint.

  21. John Smith 19 Gold badge
    Coffee/keyboard

    @ Dodgy Dave

    "in what universe does a customer go to a vendor, and asks to buy their software, then tries to impose contractual conditions on how that software came to be written?"

Congratulations: you chose the #1 language where the #1 customer would do *exactly* that.

Standard US Govt contracts are *highly* prescriptive, and the purpose of Ada was to reduce the number of languages supported by the DoD (around 1,900, IIRC, when they did the survey that decided on one language to run them all), the aim being that their contracts *would* specify exactly that.

This is not as draconian as it seems. Firstly, you'd have been pretty dumb not to have seen this in the RFP, and dumber still to bid if you had *no* Ada skills; secondly, you could probably whine to the DoD (as a US con-tractor) and get funds to re-train your code monkeys (especially if they already had security clearance).

    If you're talking about shrink wrapped software I agree. It's a done deal.

Personally, I think you can write garbage in *any* computer language. Hard-coding the contents of an array is permissible in any language I can think of, but I don't think anyone reckons it's a good idea to *do* so, *except* newbie programmers who've never had to fix/upgrade the software afterwards.

    OTOH it is very quick to do and if they get promoted for doing such quick work they never have to sort out their mess.

I also believe that robust, secure code can be written in any language, *provided* tools exist which support that language and recognise the coding errors people are prone to in it. But selecting (or building) such tools does not get the program written; it only gets it written *faster* and more robustly once you start. This will continue as long as managers don't get paid on how many bugs they don't make, or on how much time they *avoid* wasting by fixing them. A recipe for some truly "Dodgy Software (TM)".

OTOH, if you're in the software business and make your money on the support contracts most of your customers sign, why *make* that investment? From this perspective a software house is a machine for *making* bugs. Like any parasite, as long as the host is not damaged too much, the relationship can continue indefinitely.
