How secure are your applications?

Let’s be blunt. The fine heritage of application development has not traditionally incorporated the pre-emptive creation of secure code, i.e. programs that are built from the ground up to be secure. There are a number of potential reasons for this – not least that in the old days, before every system was connected (either …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    It just increases cost

    and bear in mind that most developers, especially those in the Windows world, don't see what they do as an art; it's closer to barbarism.

    There are many tools to help automate the checking of software, but really it starts with design: you can have either a flaw in the design that leads to insecurity, or a bug that can be exploited.

    The design flaw is hard to find once the system has been implemented, or it can even be championed by some deluded know-it-all, despite other people's best efforts to dissuade them.

    But most of this pales in comparison to socially engineered attacks, if the objective is to breach a specific target. Insecure software can fall victim to wide, automated attacks, so there is an argument for sorting it out, but doing so increases the overall time to produce an application, and frankly the skills just aren't out there.

  2. Pete 2 Silver badge

    Simply put: there's no money in it.

    The next tech-spec you get, check to see if there's anything like "the program must not leak data from any interface when incorrect or malicious data is applied". It's a bit like saying "the program should not contain any bugs". All fine sentiments, but in practice not realistic.

    Since development cycles for a lot of major software are measured in months, with new releases coming twice a year, there simply isn't time to test every combination of duff or malicious data entry - not when the commercial imperative of getting it out the door to make money (or to fix bugs in the earlier release) is looming.

    Putting aside the philosophical problem of being unable to prove a negative (in this case, that there are no security holes), the time it would take and the money it would cost to detect, fix and retest every single possible security problem far outweigh the commercial benefits. Any company that tried it would go bankrupt before its first release (and every competitor's product would have added several generations of new features in the meantime, too).

    As it is, even if someone, somewhere came up with a clean, secure product, it wouldn't make much difference. Most security problems are caused by people: users, installers and even hackers. The best you can hope for is that the supplier will be true to their word and fulfil the promise "fixed in the next release".

  3. Anonymous Coward
    Pint

    I beg to differ

    "in the old days, before every system was connected (either directly or indirectly) to some kind of network, a certain code of conduct was assumed between developers, operations staff and users, that nobody would try to break anything."

    If the 1970s count as the old days, then my experience was that developers had to assume that users were akin to the proverbial infinite number of monkeys and would eventually stumble across the slightest chink in the armour. Admittedly, this was usually out of stupidity rather than spite, but defensive coding and extensive validation were the norm, even with only kilobytes rather than gigabytes of program memory available.

    The icon? Developer fuel...

  4. Roger Lee 1
    Megaphone

    Let's not forget the tool vendors' part in this...

    One snag is the way that some tool vendors keep releasing significantly different toolset upgrades. Although I'm a fan of Microsoft's development tools/environments/languages etc., and most development shops can take these changes in their stride, organisations that do, or commission, a significant amount of internal development end up with problems. Not only is there the continual cost of training, but the underlying framework on which a given application relies often becomes obsolete in fairly short order.

    Given that most organisations don't refactor their own apps EVER, but do try to keep abreast of new developments, they inevitably end up with a load of mismatched, soon-to-be-legacy liabilities.

    This breeds two evils. Firstly, even if the code doesn't have holes in it, the underlying frameworks in use probably will for the first couple of years; secondly, and more importantly, end users will start to use "workarounds" that may involve all sorts of spreadsheet and MS Access nastiness, to say nothing of things like exposing old databases that were designed for internal use to the Internet, with huge attention paid to the graphics but absolutely none to their basic suitability and security.

    When I see these adverts promising "manageable code", "long term support", and all the other nonsense designed to tempt IT managers to part with their inadequate budgets, I say a small prayer that the ad's target market are a completely unreconstructed bunch of cynics who have to get wet before they'll believe it really is raining.

    Windows carries a huge amount of bloat to ensure backwards compatibility, for perfectly sensible reasons - couldn't tool vendors make a few more compromises here too?

    Change isn't always good.

  5. Bassey

    Re: Anonymous Coward @ 10:54

    I'm with you. I've been coding since the early 80s and the assumption has always been that users are morons and there is no such thing as common sense, so you have to verify EVERY piece of data you allow into your system (see the sketch below).

    I can't remember who said it but I think "Idiot Proof? Idiots are surprisingly resourceful!" sums it up quite well.
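
    To make that concrete, here is a minimal sketch of that kind of belt-and-braces validation. The field names, formats and limits are made up purely for illustration, and Python is used here only as convenient pseudo-code for the idea, not as anything the posters above were actually using:

        import re

        MAX_NAME_LEN = 100
        ACCOUNT_RE = re.compile(r"[A-Z]{2}\d{6}")  # assumed account-number format

        def validate_order(form):
            """Return a cleaned copy of the submitted form, or raise ValueError."""
            errors = []

            # Never trust the length or content of free-text fields.
            name = str(form.get("name", "")).strip()
            if not name or len(name) > MAX_NAME_LEN:
                errors.append("name missing or too long")

            # Whitelist the exact format rather than blacklisting "bad" characters.
            account = str(form.get("account", "")).strip().upper()
            if not ACCOUNT_RE.fullmatch(account):
                errors.append("account not in expected format")

            # Numbers arrive as text; convert explicitly and range-check.
            try:
                quantity = int(str(form.get("quantity", "")).strip())
            except ValueError:
                quantity = None
            if quantity is None or not (0 < quantity <= 10_000):
                errors.append("quantity not a whole number in range")

            if errors:
                raise ValueError("; ".join(errors))
            return {"name": name, "account": account, "quantity": quantity}

    Nothing clever there: it simply refuses anything that isn't exactly what was expected, which is the point both posters above are making.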

  6. Anonymous Coward
    Coffee/keyboard

    Re: Bassey

    It was drummed into me as "it's all very well trying to make your software idiot proof, but the problem is that the world keeps creating bigger and better idiots".

