Sane people, I BEG you: Stop the software defined moronocalypse

A raft of potential vulnerabilities was found in whitebox Software Defined Networking (SDN) equipment. This is the beginning of the saga, not the end. The issues with the Internet of Things promise to be far, far worse. SDN solves a lot of problems and is broadly applicable (once it gets cheap enough). This brings with it a …


  1. Cronus

    Leaky analogy

    It's a good analogy except for the slight oversight that you don't tend to get fired for refusing to give someone the keys to their car.

    1. sabroni Silver badge

      Re: Leaky analogy

      But back in the days when some people thought drunk driving was acceptable you might have been.

    2. Anonymous Coward
      Anonymous Coward

      Re: Leaky analogy

      But you may get some broken bones for the effort.

  2. Anonymous Coward
    Coat

    The REAL problem is liability...

    Because it's so much easier to blame a computer or a piece of software instead of admitting your own incompetence !

  3. rcoombe
    Pint

    As my father-in-law always said

    "Thank god I brought the car: I'm far too drunk to walk home"

    1. DropBear

      Re: As my father-in-law always said

      There is some truth in there in the sense that it does indeed take far less effort / sobriety to _drive_ (sitting in a seat) than it takes to _walk_. One can perform the former long, long after the latter is categorically out of the question. Not that this should be practically demonstrated, of course - but it is definitely true.

      1. Charles 9

        Re: As my father-in-law always said

        While this may be true, the consequences of going astray are usually far less severe for a drunk pedestrian than for a drunk driver. Drunks on foot are rarely in a condition to adversely affect other people in contrast to one commanding a one-tonne rolling mass of metal.

      2. quartzie

        Re: As my father-in-law always said

        There is a basic problem with the definition of driving in your response.

        Technically speaking, you can sit in a car and make it move without ever going through driving school, but actually *driving* a car means you take into account road rules, traffic around you and general safety.

        With those included in "driving", then no, it isn't easier to do when drunk.

  4. sandman

    Might take a while

    Drink driving is single-point stupidity - easy to identify and offering a simple (albeit tough) nut to crack. Poor software development, on the other hand, is polymorphous stupidity - tricky to define and hard to legislate against. Also, let's face it, developers are usually highly intelligent people, quite capable of inventing 6 new and impossible forms of stupidity before breakfast. Why yes, I have done testing and QA, how did you guess?

    1. Alistair
      Coat

      Re: Might take a while

      I've heard of DEVOPs types. I'm an SA for the hardware. You can thank folks like me for not seeing some of the @#% that we killed before it got to QA.

    2. keithpeter Silver badge
      Windows

      Re: Might take a while

      "Poor software development on the other hand is polymorphous stupidity - tricky to define and hard to legislate against."

      Is civil engineering a better possible model?

      Big, complex projects that take a long time to complete and which will have an impact on their surroundings for decades or centuries into the future. Most legislatures have evolved a legal framework and set of contractual relationships that define liability. Civil engineers have professional training and standards of (professional) conduct. That all took a lot of case law and quite a few major business failures and (physical) collapses to get to. A few say the regulation has now gone too far and is stifling innovation.

      How would the process get started?

      1. JLV

        Re: Might take a while

        Can we please stop the specious engineering analogies? Lots of engineering relies on known, quantifiable methods to achieve nearly identical results across 100s or 1000s of nearly identical projects. Even if not identical, components are limited in number, don't change as quickly and have known physical characteristics. Know your field, have a lot of talent, apply a generous amount of overengineering and you should have a somewhat predictably safe product. If it's not, then you're in trouble, but the next iteration will fix that flaw and leave most of the rest of the system the same.

        If it's super complex, a la space shuttle, dev time is in decades and 1000s of folks check and recheck everything.

        Even complicated risks like earthquakes are gradually addressed by years of aggregated wisdom in cookbook recipes, i.e. building codes. Overarchingly you have proven mathematical models to check your systems with.

        Many of these conditions apply very differently to software development. Wishful thinking and self-flagellation don't make it an easily transferable model.

        We are faced with nearly the same level of complexity, constantly evolving threats and dev tools, and essentially operate on a custom artisan model where everything is always new. And we most certainly don't have formal mathematical verification methods. And security vs ease of use is not nearly as much in tension in most engineering fields.

        Agree with the article though, we need to seriously up our game.

        1. Trigonoceps occipitalis

          Re: Might take a while

          "If it's super complex, a la space shuttle, dev time is in decades and 1000s of folks check and recheck everything."

          Except the O Rings.

          1. WolfFan Silver badge

            Re: Might take a while

            "If it's super complex, a la space shuttle, dev time is in decades and 1000s of folks check and recheck everything."

            Except the O Rings.

            That is not completely true.

            In the first place, the whole reason why there were O-rings was due to politics. A certain Senator from Utah insisted that NASA place some work for the Shuttle project in Utah, or else he'd hold up funding. So they did. Originally, the solid rocket boosters were supposed to be built near the East Coast, or in Missouri, and would have been barged down to Florida, via the Mississippi if built in Missouri. This couldn't happen in Utah. The boosters would have to be transported by train. Problem: they couldn't be transported in one piece, as there were several curves on the rail lines (there be mountains in Utah, and more mountains between Utah and Florida) and the boosters were too long to negotiate the turns. This meant that they had to be cut in seven pieces, several of which were joined together at the factory so as to have the largest sections which could be moved by rail, which in turn required the O-rings to seal the remaining pieces.

            In the second place, the O-rings worked (mostly) as long as they were used according to specs. They misbehaved only when used out of spec, such as when very cold. There were problems with the initial design, but a rebuild handled most of them. Except when the temperature dropped below freezing. The manufacturer's engineers on site told NASA before the launch that launching when that cold wasn't a good idea. NASA launched anyway.

            In the third place, later investigation showed that the primary fault was in the attachments for the solid rocket boosters and the way they had been placed, which caused stress on the booster bodies. This, in turn, was not a problem except when it was cold... and the O-rings froze up and didn't do their job properly. If the attachments had been placed slightly differently, the flexing which allowed the spurts of hot gas (a.k.a burning rocket exhaust) to hit the external fuel tank (a.k.a big bomb full of liquid hydrogen and liquid oxygen) would not have happened. There was a redesign of the boosters which changed the attachment points. It should also be made clear that the spurts of hot gas burned away at least one attachment point, causing the booster to actually hit the external tank, not something that you want to happen while you're under acceleration.

            Quote from the Wiki article:

            The loss of Space Shuttle Challenger originated with a design flaw and system failure of one of its SRBs. The cause of the accident was found by the Rogers Commission to be a faulty design of the SRB joints compounded by unusually cold weather the morning of the flight.[11][12] The commission found that the large rubber "O-rings" in SRB joints were not effective at low temperatures like those of the January 1986 morning of the accident (36 °F (2.2 °C)). A cold-compromised joint in the right SRB failed at launch and eventually allowed hot gases from within that rocket booster to sear a hole into the adjacent main external fuel tank and also weaken the lower strut holding the SRB to the external tank. The leak in the SRB joint caused a catastrophic failure of the lower strut and partial detachment of the SRB, which led to a collision between the SRB and the external tank. With a disintegrating external tank and severely off-axis thrust from the right SRB, traveling at a speed of Mach 1.92 at 46,000 feet, the Space Shuttle stack disintegrated and was enveloped in an "explosive burn" (i.e. rapid deflagration) of the liquid propellants from the external tank.[13]

            During the subsequent downtime, detailed structural analyses were performed on critical structural elements of the SRB. Analyses were primarily focused in areas where anomalies had been noted during postflight inspection of recovered hardware.

            One of the areas was the attachment ring where the SRBs are connected to the external tank. Areas of distress were noted in some of the fasteners where the ring attaches to the SRB motor case. This situation was attributed to the high loads encountered during water impact. To correct the situation and ensure higher strength margins during ascent, the attach ring was redesigned to encircle the motor case completely (360 degrees). Previously, the attachment ring formed a 'C' shape and encircled the motor case just 270 degrees.

            Additionally, special structural tests were performed on the aft skirt. During this test program, an anomaly occurred in a critical weld between the hold-down post and skin of the skirt. A redesign was implemented to add reinforcement brackets and fittings in the aft ring of the skirt.

            These two modifications added approximately 450 lb (200 kg) to the weight of each SRB. The result is called a "Redesigned Solid Rocket Motor" (RSRM).

            https://en.wikipedia.org/wiki/Space_Shuttle_Solid_Rocket_Booster

            https://en.wikipedia.org/wiki/Space_Shuttle_Challenger_disaster

            1. Snafu1

              Re: Might take a while

              Try the BBC 'documentary' currently available on YouTube: https://www.youtube.com/watch?v=DT7Yx5kxYco

              Dramatised, but fairly accurate

        2. Destroy All Monsters Silver badge
          Holmes

          Re: Might take a while

          Lots of engineering relies on known, quantifiable methods to achieve nearly the same exact results as 100s or 1000s of nearly identical projects.

          Only for pre-built housing. Each bridge (or ship) is its own development. Corners may be cut even there, of course, but the corner-cutting in development projects is beyond ridiculous - frankly, mafia-style building (as seen in such countries as Italy, Greece, Southern France, Japan, Afghanistan etc.)

          And we most certainly don't have formal mathematical verification methods.

          We most certainly do, and they are getting better. The fact that people don't bother to learn about these ("I'm a developer, not a mathematician") and prefer to start hacking wildly (going so far as to ignore compiler warnings and fart in the general direction of lint) is just testimony to the utter immaturity and unrealism prevalent in the "industry".

          And security vs ease of use is not nearly as much in tension in most engineering fields.

          This is best solved by applying a label on the box "consumer-grade, use at own risk" vs "pretty good, comes with assurance and insurance, pay more". This already happens today but the message is intentionally mixed. For example, a pretty expensive but rather lousy WinNT is targeted to the whole range of demands, with the sole differentiator the price (a "feel good about this" pricing model). In all cases, if something happens, you are on your own. That's not the way to do it.

          1. JLV

            Re: Might take a while

            First, let me start out by saying that I agree we do need to learn to be better and safer as devs. And, yes, engineering has some things to teach us, because it is a technical discipline in problem-solving like ours. On that we don't disagree.

            However, I think we could also learn from the medical profession in how to problem-solve under conditions of uncertainty, complex interactions, evolving threats and an emerging base of knowledge. It sounds like less of a fit than engineering, so just maybe we wouldn't be gulled into thinking that it was THE answer.

            >Each bridge (or ship) is its own development

            Well, maybe, but they generally do the same thing, don't they, and we've been building bridges for thousands of years. 20 years ago there was no JavaScript or generalized public-facing access points, which is what websites are. 30 years back, nearly no networks to attack via.

            A bridge engineer will design many bridges in her life, but she won't necessarily switch to building airports. Yet, devs are mostly expected to switch subject matter and languages quite quickly.

            >We most certainly do and they are getting better.

            I took a quick looksee at wikipedia for formal methods, just to make sure I hadn't missed any new development about those and I found, as expected, that they are very costly and very limited in use. Hardly a solution for the average dev, is it? A bit like arguing that the design technique, funding and sheer expertise applied to building the Burj Khalifa are available to your local house builder.

            They could be, if cost was no constraint.

            On the other hand, the Reg has had several recent articles about applying automated analysis of software behavior in order to highlight possible security weak points. Now, the prospect of getting that to work gets me all hot and bothered, not your formal methods, sorry.

            >I'm a developer, not a mathematician

            Before becoming a dev I graduated with a BS in Electrical Engineering. What you don't get is how intrinsic math is to engineering. You start out building a solid foundation of math and then you learn how to apply the equations relevant to your particular field. Engineering is a combination of applied mathematics and then creativity/skill in the field, but math is foundational to any engineering field and the formalism that the mathematical underpinning allows is what gives engineering the qualities which the OP suggests we borrow from civil engineering.

            There is no such underpinning mathematical foundation in general software.

            So, no, we agree to disagree on that too.

            Also, to be fair, many of the big hacks wouldn't be helped just by better coding. Heartbleed? Yes. Apple's Yosemite root hack? Definitely.

            Bradley Manning, copying hundreds of MB of classified docs? The system was working as designed. The OPM hack, 22M profiles hacked? How many IT subsystems in the US Federal government needed to trawl through 22M records? Shouldn't access on that scale have raised alarm bells? Shouldn't those records have been guarded like Fort Knox? Ditto Target's 40M CC hack. What about Ashley Madison? Why did those morons really need to keep the CC info on such a lucrative and publicity-shy bunch of users? What about all the security issues due to compromised and reused passwords?

            Better coders, yes. But how about deploying systems with better heuristics about normal vs anomalous use? Rate-limiting access to sensitive data? Watch-dogging the networks to see when data flows from particularly sensitive nodes are unexpectedly high? Most of all, how about borrowing, from the military, not the engineers, the notion of need-to-know? As in, limit which systems can access which data, and at what rate. And, even more so, limit the data that your own organization retains in the first place. If you don't have some info, then getting hacked will not have compromised that info. The marketing gals will hate you for it, but it should be our first line of defense.
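            The rate-limiting idea can be sketched in a few lines. A purely hypothetical illustration in Python: the subsystem name and the 1,000-records-per-window cap are invented, and a real deployment would add a sliding time window and alerting:

```python
from collections import defaultdict

# Hypothetical cap on how many sensitive records any one
# subsystem may pull per window; anything over it is refused.
RECORDS_PER_WINDOW = 1000

class RecordRateLimiter:
    def __init__(self, limit=RECORDS_PER_WINDOW):
        self.limit = limit
        self.pulled = defaultdict(int)  # records pulled so far, per subsystem

    def request(self, subsystem, n_records):
        """Allow the pull, or refuse it (and, in real life, raise an alarm)."""
        if self.pulled[subsystem] + n_records > self.limit:
            return False  # nobody should need 22M profiles in one go
        self.pulled[subsystem] += n_records
        return True

limiter = RecordRateLimiter()
print(limiter.request("payroll", 200))    # True  - normal use
print(limiter.request("payroll", 5000))   # False - anomalous bulk pull
```

            Refusals then become the anomaly signal: a subsystem that suddenly asks for millions of records trips the limit long before the bulk of the data leaves the building.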

            >Friends don't let friends code in flash

            Extremely good point, but that is not a dev call. That is a system architect call, and to be frank, if all IT security weaknesses had as simple a solution as not using obviously unfit tools like Flash, then we wouldn't be in the shit storm we are in.

        3. Vic

          Re: Might take a while

          And we most certainly don't have formal mathematical verification methods

          We most certainly do. Formal Methods has been taught at University for decades.

          The trouble is, it's extremely expensive; it means that you don't produce very much code, and there is a huge amount of work to be done for every line you do produce. And that means Management calls you into a meeting room for a little chat about your productivity...

          We're not going to improve general software quality until we get TPTB to care about it - and that's not going to happen whilst the criteria for promotion to management positions largely consist of bullshitting and cajoling. We need to get some integrity into our management chains - but integrity seems to be the one thing that is actively avoided by those that pick the promotions...

          Vic.

          1. ecofeco Silver badge

            Re: Might take a while

            Exactly, Vic. Well said.

      2. channel extended

        Re: Might take a while

        Code of Hammurabi, for software - If it has a security fail you die.

        1. Charles 9

          Re: Might take a while

          But then who gets the axe? Such complicated projects tend to have so many developers, usually working across each other, that assigning blame is going to be an exercise in futility. And you can't do a blanket execution because that would catch innocents in the crossfire, making the work too risky to undertake. IOW, go too draconian and you'll soon find yourself without developers.

          1. Graham Dawson Silver badge

            Re: Might take a while

            It was traditionally the architect. I think that would translate to senior management.

          2. deadlockvictim

            Re: Might take a while

            The project managers. It is their job to get the complex organized. If it fails, it is their fault. They failed in management. Project management is stressful - on account of the unknowns that inevitably occur, on account of the slippage in time that inevitably occurs, and so on.

            If the requirements or resources are insufficient, they need to say so at the earliest possible time that the project can not be met with the given requirements and resources. Devs are resources. If they are working sloppily or not to spec, they are a poor resource and should be given a task force on website usability.

            1. Anonymous Coward
              Anonymous Coward

              Re: Might take a while

              "If the requirements or resources are insufficient, they need to say so at the earliest possible time that the project can not be met with the given requirements and resources"

              And if you're like those Atari programmers who were faced with a hard deadline (because they just had to make the critical holiday shopping season) and are told by the top brass, "The deadline's absolute, the budget's tapped out, and there's no other staff. You WILL make do or else..."?

  5. heyrick Silver badge

    Driving drunk is an obvious stupidity. Coding insecure software is not the same thing. People may not be aware of the potential risk of an attack vector they never considered or don't have experience of [1], plus large projects are split across many people, and a potential flaw may arise from but a tiny hiccup in just one piece of code.

    1 - For example, I know nothing about SQL injections but I'm not that bothered as I don't do anything with SQL...

    1. Destroy All Monsters Silver badge

      For example, I know nothing about SQL injections

      Just use the correct library which will do the escaping for you. The delta between OUCH and GOOD is sometimes very narrow and just needs a bit of coaching.
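      The delta between OUCH and GOOD in the comment above can be shown in a few lines. A minimal sketch using Python's built-in sqlite3 module; the toy table and the attacker string are invented purely for illustration:

```python
import sqlite3

# Toy table with one row, invented for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "' OR '1'='1"

# OUCH: string concatenation lets the input rewrite the query itself.
unsafe_sql = "SELECT role FROM users WHERE name = '" + attacker_input + "'"
unsafe_rows = conn.execute(unsafe_sql).fetchall()   # matches every row

# GOOD: a parameterised query treats the input as data, never as SQL.
safe_rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (attacker_input,)
).fetchall()                                        # matches nothing

print(len(unsafe_rows), len(safe_rows))             # 1 0
```

      Binding the value as a parameter (or letting an ORM / escaping layer do it for you) is exactly the narrow delta being described: the input stays data and can never rewrite the query.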

  6. Naselus

    Most people can relate to the idea of drink driving being dangerous. Most have been drunk, most can drive, and therefore most can see that doing the two at the same time is bloody stupid. Meanwhile, the general population do not understand the notion of attack surfaces, escalation of privilege, or IT security in general. Before we can even start to train the next generation of software developers to write secure code, we need to train the next generation of users in the basics of IT security. We don't need the CISSP to be put on the year 10 curriculum... but the GCSE Sec+ wouldn't hurt. Hell, this may even lead to politicians who understand that re-running the crypto wars is a Bad Thing.

    Rather than a campaign against writing shit code, I'd sooner we had a campaign against running shit code. Writing shit code will dry up pretty quickly once the user base are culturally conditioned to refuse to use it.

    1. Trevor_Pott Gold badge

      So you want us to just turn off the internet and pretty much our entire manufacturing and power generating capacity, not to mention all of our tanks, jets, warships, satellites and our entire bloody society?

      That'll go over well...

      1. Charles 9

        Besides which, how will the average Joe be able to tell the difference? If it weren't so complicated, perhaps it would be better to insist on a formal standard of code screening for as many vulnerabilities as is practical. I mean, it's too much to ask for a formal proof of everything (there's only one formally-proved OS available today, and its proof only holds if there's no external DMA access, which hurts performance). And KISS is running into the brick wall of necessary complexity (either because it's part of the core function or because it's a necessary evil in order to close the sales and make the money to stay running).

      2. JLV

        How about putting penetration testing/blackhat stuff formally on CS curricula? And having more, lots more, training offerings for it post-school. To stop a thief it helps to think like one.

        In a way, admins are more security-aware than many devs, because you are exposed to threats daily, more than we are. For the majority of us devs who don't work on public websites, we may know, somewhat, about best security practices, but they remain theoretical and we don't get pen-tested.

        Until we f*** up and it's too late. That's why more early exposure to how the other side operates would help.

        1. heyrick Silver badge

          And having more, lots more, training offers for it post-school.

          Didn't the Americans answer that with parts of the DMCA? Otherwise known as the ostrich approach. See no evil, hear no evil...

          https://www.eff.org/wp/unintended-consequences-under-dmca

  7. RonWheeler

    Completely daft article

    IoT does not equal software defined anything.

    1. Charles 9

      Re: Completely daft article

      He's saying they're two branches of the same problem: increased vulnerability. SDN is the proverbial one basket with all the eggs, while the IoT is basically a war on a hundred fronts.

      1. Destroy All Monsters Silver badge
        Trollface

        Re: Completely daft article

        IoT = "software defined accidents"

  8. Nate Amsden

    more automation more than just a security issue

    For me security is less of a concern when it comes to more automation. Systems and networks are already often built to be how do they say, "soft" on the inside? Generally trusted zones..

    For me massive automation comes down to more fear of large scale breakage. Humans are of course error prone but software can fail in pretty spectacular ways too. It's really difficult to make software *really* robust.

    As time goes on, the more software I see, the less faith I have overall in the quality being put out by just about anybody (open or closed source, whatever). Also as time goes on, my standard for quality continues to inch higher.

    The last 15 years of my career have been spent supporting software development teams (from an infrastructure perspective, generally) making products for others to use, primarily in "SaaS" types of environments.

  9. Anonymous Coward
    Anonymous Coward

    The 80% of us who think they are "better than average" drivers

    This always infuriates me. It is quite possible for 80% of a sample to be above the average.

    Don't believe me? What is the average of {1, 10, 10, 10, 10}? A little bit of skew in your distribution works wonders...

    Hyperbole aside, while a normal distribution isn't necessarily a bad assumption, it is still an assumption.
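    The claim is easy to check with the sample just given (plain Python, illustrative numbers only):

```python
scores = [1, 10, 10, 10, 10]               # the sample above
mean = sum(scores) / len(scores)           # 8.2
above = sum(1 for s in scores if s > mean)
print(above / len(scores))                 # 0.8 - 80% of the sample beats the mean
```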

    1. Chemist

      Re: The 80% of us who think they are "better than average" drivers

      "while a normal distribution isn't necessarily a bad assumption, it is still an assumption."

      Quite. The number of times I've argued with HR people on just that point.

    2. Charles 9

      Re: The 80% of us who think they are "better than average" drivers

      The point is psychologists can easily point to the phenomenon of self-inflation: overconfidence in our own capabilities. If you ask drivers on a scale of 1 to 10 how good a driver they are and then put them down on a road test and grade them more objectively on the same scale, odds are the self-assessment will be higher than the road assessment. Perhaps as a psychological survival instinct, humans are usually innately self-optimistic, and since everything we do is colored by this perception, it can lead to problems.

      1. JeffyPoooh
        Pint

        Re: The 80% of us who think they are "better than average" drivers

        "...a road test and grade them more objectively on the same scale..."

        How can any experienced driver not get a perfect score on a formal road test? Every point deduction is a clear cut mistake, it's not judged on style or clipping the perfect apex. Perfection is just knowing the rules and not showing up drunk.

        Once upon a time, I had to go in for a re-test (due to too many tickets), and during the road test - just for laughs - I shifted gears so as to keep the RPM below 1200 the whole time. Almost, but not quite, lugging the engine. For the sole purpose of adding some amusing challenge to the test. Still got a perfect score, but she commented on the low engine RPM.

        1. The Grinning Duck

          Re: The 80% of us who think they are "better than average" drivers

          It's not the formal test that's the problem, people tend to get it right when they know they're being watched. It's the informal one, otherwise known as the real world (that you clearly cocked up, hence the 'too many tickets' part), where the laziness, bravado, stupidity, whatever creeps in.

      2. Anonymous Coward
        Anonymous Coward

        Re: The 80% of us who think they are "better than average" drivers

        > humans are usually innately self-optimistic

        Not just self-optimistic, but optimistic in general.

        Otherwise there would be no lotteries, casinos or bookies.

        1. Charles 9

          Re: The 80% of us who think they are "better than average" drivers

          "Not just self-optimistic, but optimistic in general.

          Otherwise there would be no lotteries, casinos or bookies."

          No, self-optimistic. Lotteries, Casinos, and Bookies are all fed on the idea of, "I can beat the odds." I admit, I play them once in a while, but only as a tiny "one-in-something-other-than-a-million" longshot with my loose change.

        2. nijam Silver badge

          Re: The 80% of us who think they are "better than average" drivers

          > Otherwise there would be no lotteries, casinos or bookies.

          Or marriage.

    3. Tony Haines

      Re: The 80% of us who think they are "better than average" drivers

      //This always infuriates me. It is quite possible for 80% of a sample to be above the average.//

      Not if the average in question is a median - and that's a better average to use for this purpose.

      Looking at the sample data-set you gave, should the four '10' drivers say "I'm about the same as everyone else - except that one guy who doesn't have a licence, never uses indicators, uses the mirror only to shave, and tends to stop by running into something - therefore I'm definitely better than average", or "I'm about the same as most of the other drivers, therefore I'm about average".
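      For the same illustrative data-set, the mean and the median tell opposite stories; a quick check with Python's statistics module:

```python
import statistics

scores = [1, 10, 10, 10, 10]            # the sample quoted above
mean = statistics.mean(scores)          # 8.2 - four of five are above it
median = statistics.median(scores)      # 10  - nobody is strictly above it
above_median = sum(1 for s in scores if s > median)
print(mean, median, above_median)
```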

  10. The_Idiot

    Once upon a time...

    ... organisations paid people to spend hours, days and weeks optimising machine code to shoe-horn being-able-to-do-something-useful into bugger-all hardware.

    Because hardware, in mass-production terms, cost five arms, four legs, and a mortgage.

    So we got, for only one example, a working spreadsheet that was only 27K bytes big, and ran on a 32K byte Apple II.

    Er - how big is Excel now?

    Feature creep aside, hardware is cheap. As the article says, people are not. So if (and it's not necessarily a given, I'd suggest, but what the hey) we accept the idea that doing things without people leads to a greater attack surface, and if we want to make the alternative approach viable, we have to make it more expensive (arms, legs and mortgages) to do it without humans.

    So I'd suggest, and remember I'm an Idiot (blush), stop caring. About people or not people. Just make it !@#$#$%%^&^ expensive to mess it up. Make the whole arm-leg-mortgage thing a real cost of being all or part of the vector for any successful attack or penetration.

    Of course, there's likely to be a small side effect - that of the whole industry collapsing :-(.

    Failure, the consequences of failure, and being part of the vector for such failures, part of the attack surface creation, has become such a widely accepted 'cost of doing business' that by and large, it carries no real cost. Some bad publicity? Sure, but we can always say 'the X did it, and they're nasty'. Or 'well, it happens to everybody'.

    In today's environment, if it don't mean dollars lost, it don't make cents to care - or so it seems (and yes, I should be shot for puns like that (blushes again :-) )).

    1. Charles 9

      Re: Once upon a time...

      "Failure, the consequences of failure, and being part of the vector for such failures, part of the attack surface creation, has become such a widely accepted 'cost of doing business' that by and large, it carries no real cost. Some bad publicity? Sure, but we can always say 'the X did it, and they're nasty'. Or 'well, it happens to everybody'."

      Unfortunately, what you describe is a natural human progression. Especially in business, one of the driving goals is to reduce risk. Because when you reduce risk, you raise the odds of a payoff, and when you do that, you encourage investment, so risk becomes a two-way amplifier, especially in a competitive market. So it becomes second nature for businesses to dodge risk. Why do you think the Limited Company (in America, the Limited Liability Company) was established in the first place? Because people weren't willing to risk the farm on an investment.

  11. Badger Murphy

    You'll never get the public to understand enough to care

    The general public often doesn't even know who most of their heads of state are, and that has plenty to do with their life, future, and well-being. I think it is a bridge too far, even with some of the proposed educational models, to expect the general populace to be savvy enough in infosec to drive a demand for greater security.

    This is the exact type of scenario where regulation must step in. Look at SOHO routers as a prime example. Most of them are totally ownable right out of the box, and manufacturers keep on making them, and we keep on buying them. The manufacturers don't fix it, because that costs money, and since the customer doesn't know enough to care, that situation continues in perpetuity.

    I don't claim to have the magic bullet to kill this problem, but I do believe it starts by making it financially painful, via fines, to an organization for getting owned, provided that it can be demonstrated that their security is a joke.

    This is where the devil in the details hides, though. We have to punish the negligent without blaming those that are genuinely victims. Only then will we see security treated with the importance it deserves.

    1. Charles 9

      Re: You'll never get the public to understand enough to care

      "I don't claim to have the magic bullet to kill this problem, but I do believe it starts by making it financially painful, via fines, to an organization for getting owned, provided that it can be demonstrated that their security is a joke."

      But this kind of regulation can only go as far as sovereign borders. And increasingly businesses are going trans-national, meaning they can play the shell game to get around regulators. We're reaching a point close to Gibson's Sprawl where transnational businesses can transcend national borders and basically become sovereign entities in their own right, at which point the rules get changed yet again.

  12. Stevie

    Bah!

    Do you really think the problem is morons coding in the workplace? Because I don't.

    *I* think the problem is the culture that has arisen in which Google is cheaper than training, and experience is seen as too expensive compared to new graduate hires.

    And *I* think that anyone who enjoys their job even a tiny bit would rather do it well than be found to be doing it badly. *Everyone* takes pride in what they enjoy doing. That's what *I* think after thirty-five decades in IT.

    And I think the security issues we see are more the result of lack of knowledge in the front-line troops than advice offered and rejected by middle management, though I'm sure elements of that pollute the waters too.

    See, it's my experience that upper management gets the message "why pay for the cow when the milk is free" just as well as your average teenaged freetard does, if not better. They have also been educated over a three-decade period in which the value of intangible assets, like experience of the enterprise's information systems, is held so near zero as to be indistinguishable from financial statistical noise.

    Gotta love what the MBA has done for Big Business, eh?

    1. Philip Stott
      Childcatcher

      Re: Bah!

      35 decades in IT

      Crikey, who would have guessed Charles Babbage is still alive and reading The Reg :-p
