Confessions of a sysadmin

I would like to say that it has been a few days since my last malware-infected computer. I have been dealing with a string of these lately, and I’ve had quite enough of them for now, thank you. I would also like to say my network was the epitome of configuration perfection, with every system fully patched, and a team of …

COMMENTS

  1. Anonymous Coward
    Thumb Up

    Full marks for the hands-up, but...

    Don't "management" ever turn their machines off? Default once a windows update is downloaded is "Install Updates And Shut Down". If they're switching their machines off at night, then they're actively changing that in order to not install the updates.

    B'sides, there's settings in WSUS to not nag for a restart after installing - that should sort them quite happily with auto-updates. Yes, the odd update might cause a little odd behaviour - but the good ol' helldesk "Have you tried restarting your computer?" will rectify it in short order.
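
    For the curious, the no-nag behaviour mentioned above maps onto the documented Windows Update client policy values. A minimal sketch (not the author's setup), assuming the standard policy key and an elevated Python on a WSUS-managed Windows box:

      import winreg

      # Standard Windows Update client policy key (assumes a WSUS-managed client)
      AU_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"

      with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, AU_KEY, 0,
                              winreg.KEY_SET_VALUE) as key:
          # 1 = never auto-restart while a user is logged on; they get prompted instead
          winreg.SetValueEx(key, "NoAutoRebootWithLoggedOnUsers", 0,
                            winreg.REG_DWORD, 1)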

    1. Trevor_Pott Gold badge

      @AC

      Nope; they never turn their machines off. We live in a VDI environment, and they *all* have external RDP access. Look for my upcoming articles on VDI; one of the major pains in the neck is that your users can simply "disconnect" instead of rebooting. (Though this can be solved with GPOs...)

      Anyways, I don't want to give too much away, because then I'll be out of material for the VDI article set!

  2. Obvious to me...

    Heard of Solidcore?

    Solidcore have a good solution for this type of 'defenceless' system - at least they did before being bought by McAfee. You can still get to the Solidcore website at Solidcore.com for info, or look at the McAfee 'Application Control' product.

    Should tick a few boxes - Low system overhead, AV, no signature downloads.

    Might be worth a look.

    I hasten to add that I have no connection (and never have had) to McAfee or Solidcore - I just think Solidcore came up with a good solution.

    1. Trevor_Pott Gold badge

      @Obvious to me...

      Never heard of it, but based on your recommendation I will take the time to check it out. :D

      Honest and true; when you filter the noise out of the signal, the real value of El Reg is in its readers. <3 commenttards.

  3. Anonymous Coward
    Anonymous Coward

    Ho hum.

    You can do quite a lot to isolate things beyond subnetting. Modern switch hardware has all sorts of separation and even packet blocking features.

    I don't agree it would be a nice idea to move to static IPs for a few machines. That's like saying (and I used to know someone who actually claimed) that DHCP was fine for small networks, but not usable on big ones. Uh, sure mon. DHCP was invented to distribute configuration across the network. It's up to the admin to properly configure the tools, TYVM.

    Then there's the question why large and dangerous machinery requires extremely brittle control software that imposes domain-external requirements on the underlying OS, making it more brittle still. Apparently that comes with the choice of using an essentially unsuitable product for the OS. That assessment is justifiable solely on observed "should not happen" problems, q.v. obscure patch interactions not relevant to the function of the control device. I'd say the same if I observed that on, say, a Solaris box.

    1. Anonymous Coward
      Stop

      "Why large and dangerous machinery requires..."

      Quite simple. Because it's bloody expensive kit, and - when it was new - that was just the way the world worked. W2k. That sets a rough age to it...

      Just because IT has short asset life, doesn't mean that other kit does too...

      1. Trevor_Pott Gold badge

        @AC 12:52

        Yar. In this case something on the order of 15-20 years...

    2. Anonymous Coward
      FAIL

      Static=Small

      Wow, sounds like it was opposite day for him.

      If anything Static is for small networks.

      DHCP with Mac filtering is the way to go, IMO.

    3. Trevor_Pott Gold badge

      @AC 11:22

      "You can do quite a lot to isolate things beyond subnetting. Modern switch hardware has all sorts of separation and even packet blocking features."

      Yes it does. In fact, even my old-ass semi-managed switches offer VLANing. There are many features that high-end switches (such as Cisco or Procurve) offer that could solve this problem. It's just COMPLETE OVERKILL for this situation.

      Subnetting will put my system “out of reach” for any system under my control, because the OS will honour the subnet and refuse to allow randoms to connect. It will also prevent anyone plugging into my network from simply getting a DHCP address and finding the system. Viruses and attackers can configure the network card to NOT respect subnets, though, so the subnet really should be combined with a VLAN.

      Being sensible, I will subnet the system *and* VLAN it. Anything beyond that is both totally unnecessary...and not possible with my current switching gear. While I could easily toss together a bit-flinging box to be a router and do more than vlans, I just don’t see what it gets me that simple subnetting and a VLAN don’t.

      My switching gear is okay for an SME; but Cisco it is not. Replacing my network infrastructure with Cisco would eat my IT budget for the next two solid years, and I frankly fail to see any benefit. Cisco (or their trained minions) have yet to show me a single thing I would actually care about doing on my network that their gear can do better than a well-configured Linux or BSD box. And for the cost of one of their routers, I could build a multi-system RAIS* 5 (with hot spare) Linux or BSD routing cluster. Since I only have to route gigabit, a bloody Atom can fling all the bits I need.

      *RAIS: Redundant array of inexpensive systems. Fancy clusters on steroids.
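
      To make the on-link logic concrete, here is a sketch of the decision a well-behaved IP stack makes; addresses are illustrative, not the author's actual ranges. The caveat in the comment is the point: malware can ignore the mask entirely, hence the VLAN.

        from ipaddress import ip_address, ip_network

        equipment_net = ip_network("192.168.50.0/24")  # hypothetical isolated subnet
        desktop = ip_address("192.168.1.23")           # ordinary user machine
        controller = ip_address("192.168.50.10")       # brittle legacy box

        for host in (desktop, controller):
            # A compliant stack delivers directly only to on-link peers;
            # anything else needs a route, and none is offered here.
            print(host, "on-link" if host in equipment_net else "off-link, unreachable")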

    4. informavorette
      Badgers

      why large and dangerous machinery requires extremely brittle control software

      It's kinda a company culture problem.

      I recently had dealings with the software creation process at one of the biggest engineering manufacturers in the world. They will sell you anything from a single PLC to a nuclear plant. They are full of engineers - you know, real engineers, of the kind who smirk when they hear the programmer wimps call themselves "software engineers". Of course, they cannot get around the fact that their equipment has to be controlled by some kind of software. And of course real engineers who can design a whole power plant can easily oversee the creation of some kind of software for this. The actual coding is done in India anyway.

      Of course there is some kind of QA for the software. It definitely works on the developer's machine, manually tested by hordes of cheap offshore workers. And our manufacturer, unlike Microsoft, cannot afford to sell products with software bugs in them, unless he'd like his nuclear plant to replace Ariane-5 in the software engineering textbooks. So he just goes and tells his customer that he guarantees flawless work - as long as the configuration specifications are met. Which sometimes happen to include a Pentium II, or whatever the developer had at the point of writing the software.

      Now imagine you are their customer. You want to purchase a new fully automated production plant for manufacturing your product, worth more than Zimbabwe's annual GDP. Believe me, you won't even glance at the IT specifications for the controller hard- and software. If it requires you to run a Lisp.NET application on a Condor-managed grid of 139 iPads, the cost of finding a specialist who understands this configuration is still only a zillionth of your budget. And besides, there are fewer than 10 companies in the whole world who can build this for you, and probably none of them offers better software. So you just buy based on other criteria, and your admin has to make sure your purchase is usable, no matter how. And once you have it working, even if your production plant builder miraculously offered you a new, upgraded version of the controlling software which runs on a newer system, you have no reason at all to change. It will cost a lot, and you are paying an admin to make sure everything runs smoothly.

      Or do you, personally, find the idea of your local nuclear plant updating the controlling software from a working Win2000 version to a new shiny Vista version (bleeding edge, as yet not used outside simulations) appealing?

      1. Anonymous Coward
        Black Helicopters

        Tried and Trusted

        Isn't the UK Navy's Windows for Warships based on W2000?

      2. Anonymous Coward
        Anonymous Coward

        Re: why large and dangerous machinery requires extremely brittle control software

        Thank you for the nice explanation, even if I already knew most of that, and I assume most of us do. I perhaps shouldn't have left the comment stand rhetorically as I did. So, to correct the oversight: I think we, and that's a far more general ``we'' than just the IT janitors guild, but including smirky engineers, senior management, and so on and so forth, must recognize that brittle software itself is inherently dangerous as well as needlessly expensive. And then we need to figure out what to do with it.

        Even if the smirky kind of engineer certainly does have a point about those who cobble together software, to the point that outsourcing to India is still an option in the minds of those who commission it. I've been on the receiving end of that, and especially outsourcing the system design (and obstinate project manglers and highly irritating and utterly clueless ``business consultants'') was a key factor in making the project into a spectacular if classic project failure.

        We really need to do better. Cue any comment on software quality by the late E.W. Dijkstra. And it's up to the IT janitors on the receiving end to start communicating this back up the chain and eventually back to the smirky engineers. Read up on management (_The Essential Peter Drucker_ is a nice and condensed overview of the whole thing if a tad abstract and not entirely free of faults) to see how that yanking the chain works, and why management had better listen.

        1. Trevor_Pott Gold badge

          @AC

          Systems Administration is a blue collar gig. The brass never listen to blue collar folks; they just tell 'em to "get it fixed." They don't care about the details of how it gets fixed, or why it happened in the first place; that's why they are hiring you. When was the last time anyone actually paid any attention to what the plumber or the electrician or the janitor said? Only if something bites them in the ass multiple (expensive) times do they begin to ask “why did this happen” as opposed to just barking out the orders “make it better.”

          Sysadmins are digital janitors, and if they agitate too much about things like crappily designed software they risk being replaced; after all, thanks to things like outsourcing, offshoring and cloud computing…there’s a massive oversupply of them.

          So every sysadmin has to walk a terrifying line between making enough noise to cover their ass when things go boom, and not speaking out so loudly that, when cutbacks roll around, theirs is the first face people think of getting rid of.

          1. Anonymous Coward
            Anonymous Coward

            Yes, no, pay attention now.

            Well, that's the point. You have to talk management talk. It's not so much a class issue as a communication issue. Which amounts to the same thing, except the former has a mindset problem tacked onto it, but bear with me.

            It's "IT"'s job to tell "brass" that things work /this/ way, and not /that/ way. Management gets a simple choice, put to them in simple terms. Do you want $very_bad_thing_for_business to happen? Then do that. No? Then do this. In business terms.

            Arguably this interfacing stuff is the line manager's job, but if you are your own line manager, well, sucks to be you. I was; in fact I was the only one left standing, and even the VP I reported to got booted, leaving me in a void for over a year, and I burned out. Then I read up on management. I'm not there yet, I'm still learning. But getting management right certainly does have hack value.

            1. Trevor_Pott Gold badge
              Unhappy

              @AC

              Agreed. For everything there must be a business case, and it must be presented properly. The issue that blue collar types in general, and Systems Administrators in particular have is that even with the most rational presentation using all the right buzz words and a solid business case...

              ...the brass reserve the right to be batshit crazy. At some point you just have to walk away from the situation and understand that there are some battles you flat out can't win. No matter how rational the argument or how strong the evidence, when you run up against the preconceptions of some people, it's like bouncing off a force field.

              Go here: http://arstechnica.com/science/news/2010/05/when-science-clashes-with-belief-make-science-impotent.ars

              Read that. Then apply the principles and ideas therein to working with management, and you see the difficulties. Dealing with the brass is a large part of my job. Both at my day job, and the various additional networks I administer on the side.

              Blue collar folks (which includes most Systems Administrators) are generally task or problem oriented: they look at the issue in front of them, and deal with that. The better ones are task oriented, but can visualise how their particular task affects the larger picture. That comes with experience, and it’s how you tell a good worker from a great one.

              Management on the other hand plays politics. Feelings and egos come into play; presentation and perception, “buy in,” how any given project, admission of guilt, mismanagement or requirement for change will reflect on them, their bosses, customers, the company and a host of other factors.

              To an unfortunate amount of upper management, solving a problem is never about the problem itself, but rather about who gets the benefit from solving a particular problem, and who might in some way be offended.

              A Sysadmin will want to solve a problem because a problem exists to be solved. An upper manager might order you to leave the problem in place because it does more harm to their rivals than it does to them. I have fortunately been largely spared during my career, but I have many friends and colleagues who have not been so lucky...

              1. Anonymous Coward
                Anonymous Coward

                UI: The higher science of incompetent management

                Yes, you're quite right that management are not as technically or even as rationally minded as sysadmins. I'm not sure I'd like to call that blue collar work, even though the IT janitor comparison is sadly apt in practice. Blue collars, to my mind, with long experience excel at practicality, but not necessarily rationality.

                Is, oh, stone masonry blue collar work? Definitely. Does it bring the far-reaching impacts IT ops so often has? Well, a cathedral with walls of clay will come down with the first rainfall. Still, it's the architect that has the vision, the mason'll just do his thing.

                The sheer complexity forces sysadmins into either chicken waving or if that doesn't work (more chicken waving, ad nauseam, and then) taking the thing apart, understanding how it ought to tick, fixing it, and on to the next problem. --Tangentially, that's very close to the original definition of engineer back when it was a military term. But I digress.-- In that, we're conditioned to use an extremely classical view. Contrast this with the romantic view that many of the rest of us have. These terms are from Pirsig's _Zen and the art of motorcycle maintenance_ where he explains them quite well, and they're good to understand here.

                In that light I don't think scientific incompetence is much at work here. Rather, it's a symptom of information overload, perhaps somewhat akin to why people turned to religion in the first place: for security. Science is the very epitome of insecurity, so it's easy to vilify. As long as scientists insist on mumbling jargon, it stays an easy target. The assumption that human beings are rational is largely false for the simple reason that thinking is hard. Capable of rational thought, arguably, but rational beings? A very different thing. Don't believe me? Look at the energy costs. It's much easier to re-use a canned good-enough solution. Even if it isn't. Is what? Good enough? Solution? Easier to use? True? Making sense? Yes.

                You're right that (bad) management plays politics, something that gets easier if you don't look too deeply into the matter at hand, qv romantic view. I think that as head of IT even of a small department there's more management involved than what a mason'd stomach. I liked the following two blog posts for their not to be taken entirely seriously analysis of televised fiction:

                http://www.ribbonfarm.com/2009/10/07/the-gervais-principle-or-the-office-according-to-the-office/

                http://www.ribbonfarm.com/2009/11/11/the-gervais-principle-ii-posturetalk-powertalk-babytalk-and-gametalk/

                I for one am willing, regardless of being able, to deeply cynically (ab)use whatever makes the brass tick. A good sysadmin is obsessive about making systems work, after all, and if that includes the management system... well, alright, there are limits. But I'd try.

                But I'm not afraid for my job. I've survived before and I can survive again on no money at all. I've gotten out, I've taken the pay cut, I'm happier than I was. Absolutely worth it. Of course, I'll keep looking for an even better spot, a fatter paycheck, both.

                It's mindset that does you in, and it's mindset that gets you out. I *know* I'm good enough to run a shop, regardless of what my CV says. And if the management is finicky and stupid about it, well, let them. Please fire me, I'll find a better place. Or perhaps it'll find me. It has happened.

                If the economically incompetent insist on running themselves into the ground, that's good news for everyone else. Except the investors. But that's their risk, so tough cookies.

                If your problem is romantic management that refuses or is entirely incompetent to take a classical view, well, you may have to learn to play their game. Learn to negotiate, learn to communicate, even if that means focusing on superficialities to get the noses pointing in the direction dictated by underlying causes only you understand. You'll likely have to keep abreast of at least what's happening and where the company is going. Having to play politics, however, is a sure sign of rotten management (even for management), so it's time to get out. (Drucker again.)

                Your problem is management and believing they ought to behave rationally. They're the boss, they're under no such constraint at all. But, on the other hand, you're free to use science on them.

        2. informavorette
          Unhappy

          there's more to it.

          I must say that we have quite an interesting general discussion here. I'd like to see the admins succeed in convincing the management of some sound principles, but I'm not very optimistic, based on my own observations of admins and (even IT-friendly) managers.

          Then we have even more complex situations, like the one discussed here. Namely, you can be the admin of the company which buys a large machine together with its embedded controller. And let's say that you're even able to convince your own manager that solid software would be a good thing. Trouble is, your manager has absolutely no leverage with the development team at the machine building company. He is on the demand side of a market with a supply oligopoly - if not a virtual monopoly, because if he's once bought some automation from, say, GE, he cannot just add some Siemens machines to his setup later on. If he goes to GE and tells them "we want to get better controller software from you next time", they'll laugh in his face.

          Don't get me wrong, I'd love to live in a world where everyone from a Fortune500 CEO to the flower seller in the market realizes that when software is ubiquitous, it had better be good software or else we're in for some bad surprises. I'd even settle for a world where average people know the difference between good and bad software - as of now, most of the CS students I've met don't know it. Only the big question is: how do we get such a world?

          1. Anonymous Coward
            Anonymous Coward

            Re: there's more to it.

            I don't have a simple solution, but the key is communication. This for the appallingly simple fact that deals are made by communicating. So even if you have no leverage you have to make clear that you expect better software. How to get leverage, make the communication lines work, et al., that's management, which is why I mentioned Drucker earlier on. He's dead now, but he spent a lifetime thinking and writing about it, if not exactly from a techie's viewpoint, and he certainly didn't get everything right, as he'd be the first to admit. Management as a formal sport is quite the new kid on the block. Which is understandable, seeing how large corporations, and therefore a pressing need for multi-layer management that works smoothly, are relatively new, as such things go.

            As to CS students, well, we've commoditised higher learning and dumbed down the courses so as to make no difference between what graduates are ``produced'' (yuck) here and in areas with several generations' backlog of technological development. Deciding whether that was a good idea is left as an exercise for the reader.

          2. Trevor_Pott Gold badge
            Megaphone

            @informavorette

            Well, I believe we are now entering into the realm of philosophy. I can be fairly clear on things that can be observed and catalogued; behaviours that can be qualified as well as quantified. I am good with IT because “action A provokes response B” within a reasonable margin of error. If action A provokes a response other than B, or fails to provoke a response at all then it’s troubleshooting time.

            Computer systems are supposed to work within norms. It’s impossible to say “they should work the same way every time,” if for no other reason than that our hardware is imperfect. Still, they should fail in predictable ways; if our software is good, in graceful and predictable ways.

            Your comment on the other hand veers off of this track. “I'd even settle for a world where average people know the difference between good and bad software - as of now, most of the CS students I've met don't know it. Only the big question is: how do we get such a world?”

            Change the world? If you find the answer let me know. At the end of the day it’s all greed. Software is shoddily designed either due to expediency or lack of knowledge. Lack of knowledge is usually due to the individual being too greedy with their leisure time to devote the proper skills to their craft. There’s a margin of error there for folks who genuinely aren’t greedy, but make the critical mistake of overextending themselves, resulting in an inability to learn the craft or finish the project.

            In large part though, shoddy software seems to stem from the very human desire to cut every corner possible in the attempt to do it faster, cheaper or what-have-you. We are all of us guilty; each and every person on the planet. The man who tells you he never cuts corners is not only lying as he says it, he’s so utterly terrified of appearing to be a failure that he will completely overcompensate for it. (A liability in technical circles, a potential ally in political ones.)

            The little things in life are usually where it shows the most. Take cooking for example; when you are alone, and cooking for yourself…how many corners do you cut? If you are cooking for friends or family, you generally try your very best to make a satisfying and delicious meal, but when alone you reach for the box of bachelor chow and the microwave.

            This approach doesn’t end there; it’s extended to every facet of life. Software developers will cut any corners they don’t feel are important, and managers won’t put pressure on developers to not cut corners unless something makes doing that important to them. The company investing in software development needs an incentive to not cut corners, and on it goes. People are greedy not only with their money, but with their time and effort as well.

            The only advice or philosophical truth I can offer is that it is because of the recognition of these very facts about our human nature that the field of ENGINEERING was born. There is a difference between a builder and an engineer. There is weight and value to that Iron Ring. An engineer belongs to a fraternity of people sworn NEVER to let such concerns cause the failure of their projects. You don’t have Adam the cheap labour builder create your train bridge across the river valley; you rely on an engineer because it is important and it has to be done right, with zero cut corners. If that engineer ever cut a corner, just a single one, and was found out, he would lose his livelihood. The rules of his profession are strict, and they are final.

            Just as being disbarred is a career death sentence for a lawyer, or a doctor can lose their license to practice medicine, an engineer can have that iron ring taken away. It is what makes these more than trades, but honoured and venerable PROFESSIONS.

            Too many people misuse the term engineer; for that matter, people misuse the term “profession.” If you want a world where software is designed right, the first time, with zero cut corners, then you need to make development of software an actual profession. You need to make “software engineers” real, actual engineers, and they need to be bound by that iron ring.

            If you want a world where all software is developed along these lines, then development truly must be a profession; common people are forbidden to practice it unless they belong to an accredited organisation. Just as I cannot claim to be an MD and practice medicine, or proclaim myself counsellor and practice law, I cannot claim to be an engineer and practice engineering. It is illegal, and were I to attempt to do so and get caught, I would go to jail for it. (It is not a crime you merely get a fine for.)

            This is the world that would have to exist for software to be “done right, the first time.” Software would be enormously expensive, but it would last for decades. It would be slow to evolve and change, but it would cope with a variety of issues.

            If you want a world without the cut corners, then all of us IT folk, be they developers or systems administrators, need to accept who and what we are. Whether you have an MCP, a one year certificate, two year diploma or even a bachelor’s degree, unless there is an iron ring on your hand, you aren’t an engineer. You aren’t even close; don’t claim to be one, don’t pretend you are one, and don’t think that you have the slightest idea of the difference between what you do and what a real engineer does.

            The IT industry’s collective need to pad our egos is probably responsible for more terrible design issues, implementation failures, buggy code and downright asshattery than anything else in all of human history. For decades we have told ourselves that we are “new” and “disruptive” and that we “weren’t recognised as being legitimate.” We told ourselves anything possible to convince ourselves we were “as good as” doctors or lawyers or engineers.

            Yet we have *never* held ourselves to the same standards. We have never put on the IT equivalent of the iron ring. We have never sworn an oath, and we have never collectively walked away from jobs because what is being asked can’t be done without cutting corners.

            We are tradesmen; good at what we do, and capable of making computers and software function in ways they were simply never designed to. We are problem solvers and tinkerers all, but we are most emphatically not engineers.

            So while I know there are many people who will disagree with this post; in large part because we all want to see ourselves in the best possible light, I’ll hit the submit button anyways, and let the flames fall where they may.

            My name is Trevor Pott, I’m a Systems and Network administrator, and I stand before you to say this is a trade, not a profession. I wish I was an engineer, and if I could do it all again I would have become one, but I am not. I am a digital janitor; a plumber of the tubes.

            Who are you?

  4. Anonymous Coward
    Anonymous Coward

    Good stuff!

    Excellent article!

    What about private VLANs, assuming you are using Cisco gear for your LAN switches? These would enable you to keep the offending W2k workstations on the same subnet but allow you to lock down where they can broadcast to.

    Also thanks for malwaredomains.com, I had no idea such a site existed for malware!

  5. Matthew 3
    Thumb Up

    Useful article.

    Many thanks for the honest article. I think that we all learn a lot more from when things go wrong and the typical temptation to gloss over failures makes any lessons less useful. By describing your 'warts and all' situation the article is far more useful and relevant than a bland best-practice guide.

  6. Anonymous Coward
    Thumb Up

    Cracking article...

    ....thanks.

    R.

  7. Anonymous Coward
    FAIL

    Defence in Depth

    Excuse me - what about IDS / IPS updates...?

    These are now available from all the main security vendors as UTM or xTM appliances. I.e. cheaply.

    The Conficker "vulnerability" - not the "exploit" - was patched by some of those vendors months before MS released the patch or the worm was released into the wild. This means the worm would have been blocked at the perimeter.

    Also, put a bridge mode firewall between the office LAN and the equipment. PVLAN is *not* enough.

    1. Trevor_Pott Gold badge

      @AC

      Wait for the next article on the IDS stuff...

      As to "bridge mode firewall between the office LAN and the equipment," I would like to be enlightened as to why you feel this will provide more security than a separate subnet + VLAN on the switch. (Especially if the equipment will be getting its own firewall.)

      What real protection does your approach provide that mine doesn’t?

  8. Anonymous Coward
    Flame

    Pointy Haired Bosses

    ...will probably remember this and be a little bit more willing to support good security measures.

    Why they think they don't need automatic updates, I don't get. Probably current MBA courses also include one year of Windows Admin training ??

    Seriously, Powerpoint and Excel are well-tested against Windows patches by MS themselves, so these "exec" PCs should have auto-update, too.

    1. Anonymous Coward
      Anonymous Coward

      Forgot about that, whoops.

      ``Seriously, Powerpoint and Excel are well-tested against Windows patches by MS themselves, so these "exec" PCs should have auto-update, too.''

      Oh yes, that's what I originally wanted to note well before I went off on DHCP and other tangents.

      To management, much like the rest of the warm body brigade, their boxes are supposed to *work*, not bother them with upgrades. So those would actually be the first I'd patch if I was into running a windows ``network'' (which I very much am not). Whether to run them automatically or do it manually during their extended lunch break is something else entirely. And how you communicate that (``down for updates, guv, better extend that break some more''), well, that's site specific, innit? Communicating is key though, but do find a message that intrudes on their nice and quiet routine the least.

      On another tangent, it never ceases to amaze me how the ``but it has to just woohooork'' crowd puts up with a company notorious for crappy software that excels in finding new annoying messages to pop up and have its users click on to ignore. You can give me all the perfectly sensible reasons in the world, I can name a hundred myself, and I'd still not get it. Why are we putting up with this, exactly?

  9. Anonymous Coward
    Megaphone

    Another Thing - Compartmentalization

    All network Administrators should think about useful partitioning strategies for the network they manage. Most of the time the marketing people don't need to contact the PCs and the servers of the finance or the HR people. If they need access to something, make that available selectively.

    R&D systems contain a lot of valuable data, so why does HR need access to that? Different divisions often don't have much to communicate to each other, so this would again be a good opportunity to set up isolated subnets.

    This kind of compartmentalization is very similar to modern ships, which can tolerate quite a few leaks because they consist of waterproof compartments.

    Some companies configure their servers on the intranet the same way they configure them for the internet - lock down all unused ports and allow only the IP addresses/ranges that really need access. Also a pretty good idea.
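
    As a purely illustrative sketch of that allow-list style, here is how the resulting rules might be generated; the subnets, flows and ports are invented for the example, not taken from any real network:

      # Emit iptables commands: default-deny between departments,
      # then allow only the flows that are actually needed.
      SUBNETS = {"finance": "10.0.10.0/24", "hr": "10.0.20.0/24",
                 "marketing": "10.0.30.0/24", "rnd": "10.0.40.0/24"}
      ALLOWED = [("hr", "finance", 443)]  # e.g. HR may reach the payroll web app

      for src, dst, port in ALLOWED:
          print(f"iptables -A FORWARD -s {SUBNETS[src]} -d {SUBNETS[dst]} "
                f"-p tcp --dport {port} -j ACCEPT")
      print("iptables -A FORWARD -j DROP  # everything else stays watertight")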

    1. Trevor_Pott Gold badge

      @jlocke

      I couldn't agree more! In larger enterprises this is an honestly top-notch approach. Sadly, in smaller companies every staff member is "an exception." Everyone is wearing two or three hats. Just setting up the Windows security permissions is a nightmare; it's probably as complex (or more so) for an SME of 150 people as it is for a corporation of 1000.

      When your staff wear so many hats that most of them can't even be reasonably given an actual job title...network security via compartmentalisation becomes a pipe dream.

      I can isolate some of the back-end equipment, but isolating desktops, file servers and similar equipment just ain't gonna happen.

      Still, wherever you can compartmentalise...do so!

  10. Anonymous Coward
    WTF?

    WSUS Optional?

    Madness...

    I have it running and enforced for everyone, myself included - patches are done once approved at 7am every morning, and then nag every 15 mins for a restart.

    I sold this to the office on the basis of Administrator rights - they can have those if they are willing to put up with AV being locked down and strict and the WSUS being aggressive.
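
    For reference, that schedule maps onto a handful of documented Windows Update policy values. A sketch that renders them as a .reg file; the numbers mirror the setup described above, but treat it as an illustration rather than a drop-in config:

      POLICY = {
          "AUOptions": 4,               # auto-download and schedule the install
          "ScheduledInstallDay": 0,     # 0 = every day
          "ScheduledInstallTime": 7,    # 07:00 local time
          "RebootRelaunchTimeoutEnabled": 1,
          "RebootRelaunchTimeout": 15,  # minutes between restart nags
      }

      lines = ["Windows Registry Editor Version 5.00", "",
               r"[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]"]
      lines += ['"%s"=dword:%08x' % (name, value) for name, value in POLICY.items()]
      print("\n".join(lines))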

  11. A J Stiles
    Stop

    More radical solution required

    As long as anyone continues to buy software which is shipped without the Source Code, this sort of thing is going to keep happening and happening and keep happening again.

    If something is closed so only one person can fix it, then it's *not* secure.

    If something is open so anyone can fix it, then you are damn well going to build it properly in the first place rather than risk being laughed at.

    It's time for drastic measures. The Nuclear Option: Legislation to enforce user access to Source Code. Sure, it would be great if The Market worked and it was enough just to demand the Source Code or threaten to take your business elsewhere; but that hasn't happened in practice. Keeping Source Code hidden has done nothing to prevent rampant piracy, while inconveniencing everyone along the way.

    Just how much more collateral damage is it going to take before anyone realises there's a problem?

    1. Filippo Silver badge

      Re: More radical solution required

      I fail to see how providing source code would help. The typical factory business that uses heavy machinery does not employ anyone who's even remotely qualified to modify control software, and given the extremely specialized nature of said software (it's usually custom-made) you can't hope somebody on some Internet forum will make a patch. You could hire a consultant, but he will just pretend to look at it, tell you the program can't be fixed, and would you like our own solution instead. The source is useless.

      As for why businesses use brittle and insecure control software that only runs on Win2K, it's probably because said software was custom-made for DOS in 1985, it would have to be rebuilt from scratch, and this would cost money.

      The outrage isn't the software, it's that somebody was allowed to use the machine to check his email. It's OK for a box to be unpatched because of old software that understandably wasn't designed to run on OSes 20 years in the future, but at least keep it off the internet.

      1. Trevor_Pott Gold badge

        @Filippo

        The e-mail checking did not occur on the brittle Win2K box. It occurred on the user's virtual machine. (The one he is assigned to use.) That VM happens to be on a subnet capable of reaching the brittle equipment, and then all things went *poof*.

        The real hell of it is that I can’t take “Local admin” away from the guy because he’s a manager. So if you have local admin on your VM, and decide to open an attachment that you shouldn’t….there’s really **** all I can do about it. That system will get pwned in about 0.2 seconds flat. And it ain’t just Windows this happens on. (Seriously, I am getting SO BLOODY SICK of cleaning up after Macs that keep getting pwned by these damned “download this file and execute it” Safari exploits.)

        So yeah, PHB opens mail, pwns his local system…that reaches out to our network and pwns anything not fully patched. As my article points out: “mea culpa.” That box should have been on its own subnet, one that regular desktops or personal VMs can’t route to.

        1. Anonymous Coward
          Anonymous Coward

          Email Virus Checking?

          Did the infected email come through your corporate email system or a private/webmail system? If you have a centralised corporate email system I would have thought it was possible to scan incoming emails for viruses at the server, so the viruses don't make it to the users' inboxes. This would have prevented this infection, wouldn't it?

          HTH

          1. Trevor_Pott Gold badge

            @Stephen Roberts 2

            Check my replies later in this thread...I do explain it all more in detail...

      2. A J Stiles

        How it would help

        Providing Source Code would help by forcing the people who wrote it to write it properly in the first place, lest other people point and laugh and say things like "Aha! What a delightfully stupid schoolboy error!" (Early Mozilla and OpenOffice.org, being formerly-closed code that was opened up, were full of these. OOo 1.x wouldn't even build on 64-bit systems, because someone had assumed that (1) an int and a pointer were always the same size and (2) nobody would ever see the code and find out about this horrendous bodge.)

        Also, if it went wrong, or even if it just didn't quite suit your existing workflow, you could get someone besides the manufacturers to fix it.

        And finally, if the underlying OS changed, it would be possible to get the code to run on the new OS. Maybe just as simple as re-compiling, but at any rate a lot less bother than you're used to.

        1. Trevor_Pott Gold badge

          @AJ Stiles

          "Providing Source Code would help by forcing the people who wrote it to write it properly in the first place, lest other people point and laugh."

          Have you ever used any open source anything? Better yet, have you ever actually looked at the source of open source anything?

          The /vast/ majority of it is as badly coded as anything proprietary, and a truly unfortunate amount of it is utter crap. It is largely coded to solve a specific problem, stupendously inefficient and with zero consideration for extensibility, dealing with errors or unexpected input.

          So essentially identical quality to proprietary code then.

          The only difference is other coders can tear it up /if they choose/.

          They very rarely choose to do so, because frankly it would be less bother to code it from scratch anyways. I think you have some overly idealistic views of open source, sir…

          1. A J Stiles

            Yes

            Yes, I run as near as possible 100% Open Source and have been doing so for the best part of 8 years now. I have even occasionally done some tinkering with the code. Our company now runs almost exclusively on Open Source and in-house written applications accessed through a web browser.

            I really would not have it any other way.

    2. Brian Miller
      Alert

      Yeah, sure, local sysadmins rewrite all software

      I've had the unenviable job of working with software that had ten years of cruft on it. Yes, 1/3 assembly and 2/3 C, running on MS-DOS on a 286. No sysadmin has the skills and time to rewrite and port crufty code to another platform.

      As for solutions to problems like Conficker, I would place little NAT firewalls in front of all of these old machines. Then the problem is solved, because the worm can't access the computer. Of course, using a NAT box isn't feasible if the machine has an exposed file system. The Conficker worm uses NetBIOS vulnerabilities to propagate, so you'd need a different solution.
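
      A quick way to see which legacy boxes still expose those services is a plain TCP connect check; a minimal sketch with made-up host addresses, probing the RPC/NetBIOS/SMB ports Conficker spreads over:

        import socket

        LEGACY_HOSTS = ["192.168.50.10", "192.168.50.11"]  # hypothetical legacy subnet
        WORM_PORTS = [135, 139, 445]  # RPC endpoint mapper, NetBIOS session, SMB

        for host in LEGACY_HOSTS:
            for port in WORM_PORTS:
                with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                    s.settimeout(1.0)
                    state = "OPEN" if s.connect_ex((host, port)) == 0 else "closed"
                    print(f"{host}:{port} {state}")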

    3. Trevor_Pott Gold badge

      @AJ Stiles

      Your solution, while very passionate and open-sourceily noble, would leave us without any of the equipment that actually makes our company money. When there isn't an open source version available, you buy what you can.

      IT exists to serve the business; the business does not exist to serve IT's ideology. This equipment is what it is, and it is what pays the wages of everyone I work with. It is my job to make sure it runs, regardless of ideology.

      I am pretty certain this is true everywhere else in the entire world, with the possible exception of certain areas of California.

    4. Nagy, Balázs András
      FAIL

      Re: More radical solution required

      Ever heard of this?

      http://en.wikipedia.org/wiki/Thompson_hack#Reflections_on_Trusting_Trust

      No, using open sourced software is not inherently safe from built-in "extra" features. Plus reviewing, rewriting or even understanding all the code you use is completely and utterly impossible. Especially in heavy industry, where a bug or miscalculation will cause millions in damage and possibly the loss of lives.

  12. jake Silver badge

    This is why friends don't let friends ...

    ... put computers/OSes with a single user, single-tasking, "this desktop only" mindset onto a world-accessible network. It's only asking for trouble.

    Yes, I know, it is done world-wide. Doesn't make it secure. Or the right way to do it.

  13. Ian Sawyer 1

    Dufus

    Your an amateur

    1. foo_bar_baz
      Grenade

      What an insightful and constructive post

      "Your" one eloquent SOB, I wish we all could reach your high standards.

  14. plrndl
    Linux

    Windows for WSUSies

    Anyone who has a WUSS server on their network is asking for it.

  15. Robert Carnegie Silver badge

    Efficient management practice

    If you log off at the end of the working day, you just have to start all your applications again the next day. The smart manager can save time and just switch the monitor off. The "mon-i-tor" is the television thing on top of the box thing.

    1. Keith Williams
      Thumb Down

      Efficient management practice

      You forgot to mention that they have 20 to 100 emails open that "they are working on"

    2. Trevor_Pott Gold badge

      @Robert Carnegie

      No desktops. VDI. That means you can just disconnect from your session. THAT MAKES IT FUNNER.

      *sob*

  16. Anonymous Coward
    Happy

    Critical Systems, Network Access

    If the aforementioned is a system that controls something via an ancient SCSI cable, and you can no longer patch it, what's the need for network access?

    Just have it run that one application, nothing else. If you are so worried about performance, then that's the only function it should be performing, thus decreasing the number of applications installed/running at once!

    No network = no virus

    Disable drive access too, unless admin.

    This seems VERY preventable via common sense, not even patches! :)

    1. Trevor_Pott Gold badge

      @yehasher

      This system exists to receive files from a master command and control system. It then takes those files and Does Stuff (tm). Without the ability to use the files it is getting over the network it is a quarter million dollar paperweight.

      1. M Gale

        Quarter million dollar paperweight

        Not a vinyl router is it? Only I remember being called up by a friend in a signage firm asking if I could help fix the thing. I thought, and said "A.. err... what?"

        Turns out that what I was expected to fix was a snooker table-sized thing that looks for all the world like a giant air hockey table with an equally oversized plotter bolted onto the top of it. The C&C machine in this case was a PC from the year dot running (or rather, failing to run) DOS v5.00.

        Took a new motherboard/CPU/RAM combo in this case. Cheapest thing in the shop did fine. As far as I know, it still runs DOS...

        1. Trevor_Pott Gold badge

          @M Gale

          Not a vinyl router...but close. Different kind of media production, same basic idea.

  17. Nick Ryan Silver badge

    LitePC

    Problem with old system not patching, or not being allowed to patch?

    Disable EVERYTHING that is not required for it to operate. And that specifically includes all the "features" in "file and printer sharing" that MS commingled together in a splendid effort to make their systems as insecure as possible. This may not utterly prevent infections, but it reduces the chances hugely.

    LitePC... a good tool of choice for disabling everything that isn't required.
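
    In the same stripping-down spirit, the services behind "file and printer sharing" can be switched off with the stock sc tool; a hedged sketch using the standard service names, to be tested on the machine in question before trusting it:

      import subprocess

      # Windows services commonly behind file/printer sharing and NetBIOS chatter
      SERVICES = ["lanmanserver",  # Server (file and printer sharing)
                  "lmhosts",       # TCP/IP NetBIOS Helper
                  "browser"]       # Computer Browser

      for svc in SERVICES:
          subprocess.run(["sc", "config", svc, "start=", "disabled"], check=True)
          subprocess.run(["sc", "stop", svc], check=False)  # may already be stopped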

    1. Trevor_Pott Gold badge

      @Nick Ryan

      Thanks! Will look into it...

  18. Mike Bird 1
    Flame

    Letting Management Decide to Patch

    What I found worrying was that the author permits management to decide when to implement patches that have been downloaded to their computers via WSUS.

    Zero use in doing that.

    We use WSUS and all the user workstations get patches ENFORCED.

    Its not a "please would you mind installing that patch .. thank you very much" its more of a "HERE IS A PATCH - YOU WILL DEPLOY PATCH NOW ! " method.

    1. Trevor_Pott Gold badge

      @Mike Bird 1

      The author likes to get paid. Jumping up and down on the egos of those who sign the cheques is not an efficient way to do so. You can get away with it once in a while...

      ...but you pick those battles carefully.

  19. Anonymous Coward
    Anonymous Coward

    Sorry, but our sysadmin failed

    Knowing his network is a fundamental duty of anyone who deserves to be called a sysadmin. Our victim here knew he had those special purpose machines that for various reasons cannot be patched and cannot run anti-virus, so why didn't he separate them from the rest of the network?

    I've worked in several places where there was exactly this type of equipment: custom PCs running the no-service-pack version of Windows, with a drastic service contract that prevented you from installing anything on the machine. So guess what? They were all put on a network segment with no access from/to the outside world. To transfer information in and out, we put a Unix host on the same segment doing SCP with the outside world and Samba with those pesky machines. And the only information transferred was in plain old ASCII text or the manufacturer's proprietary format.

    As for the workers who needed to surf for pr0n, everyone got a second PC with a fully patched and protected Windows OS. This system worked for years without a single glitch.
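
    The transfer host described here is simple enough to sketch. The following is illustrative only, with hypothetical paths, host and key, and it assumes the share directory is exported read-only to the isolated segment via smb.conf:

      import pathlib
      import shutil
      import subprocess

      DROP = pathlib.Path("/srv/incoming")       # scp landing zone (outside-facing)
      SHARE = pathlib.Path("/srv/samba/legacy")  # Samba share the legacy segment sees

      # Pull the day's files from the outside world over scp
      subprocess.run(["scp", "-i", "/root/.ssh/transfer_key",
                      "user@outside-host:/outbox/*.txt", str(DROP)], check=True)

      # Hand only plain text files over to the isolated machines
      for f in DROP.glob("*.txt"):
          shutil.move(str(f), str(SHARE / f.name))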

  20. M Gale

    I am not a network admin..

    ..so I'm really rather curious as to how IPv6 won't let you use NAT. Since IPv4 doesn't "support" NAT either, as NAT is something of a hack to begin with. What's to stop the same old "create table of connections, map internal requests through the table and make external request on internal machine's behalf" approach of IPv4 (s)NAT?

    Granted, 2**128 addresses means NAT isn't going to be needed unless you absolutely do not want a machine with a "real" IP address, but no need to chuck the baby out with the bathwater.

    Maybe I need an IPv6 primer or something. I'm confused!
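
    The table M Gale describes is easy to model; a toy sketch of the mechanism, with illustrative addresses and no timeouts or port collisions handled:

      import itertools

      PUBLIC_IP = "203.0.113.1"
      next_port = itertools.count(50000)   # pool of public-side ports
      table = {}    # (internal ip, internal port) -> public port
      reverse = {}  # public port -> (internal ip, internal port)

      def outbound(int_ip, int_port):
          """Map an internal flow to the shared public address."""
          key = (int_ip, int_port)
          if key not in table:
              table[key] = next(next_port)
              reverse[table[key]] = key
          return PUBLIC_IP, table[key]     # what the remote end sees

      def inbound(pub_port):
          """Route a reply back in, or drop it if unsolicited."""
          return reverse.get(pub_port)

      print(outbound("192.168.0.5", 31337))  # ('203.0.113.1', 50000)
      print(inbound(50000))                  # ('192.168.0.5', 31337)
      print(inbound(60000))                  # None: unsolicited, dropped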

    1. Trevor_Pott Gold badge
      Flame

      @M Gale

      IPv4 grew beyond its original spec. There were standards agreed upon for NAT, NAT-PT, NAT traversal, UPnP etc.

      It Has Been Decided By Those That Know Better Than Anyone Else that IPv6 shall not NAT. This means that the open source community won't design an IPv6 NAT (largely because they all agree that stuffing their idea of how the internet should work down everyone else's throat is a Grand Idea). Most commercial organisations won't touch the idea of IPv6 NAT until there is a standard to code around. Interoperability on something as fundamental as internet protocol is something (most) companies don't **** with.

      I say most because it should be noted that Cisco told Those That Know Better Than Anyone Else to stuff it and made a NAT for IPv6 anyways. We will see if this ever becomes any form of standard, or if They Who Sit In Ivory Towers win the battle and it becomes an evolutionary dead end.

      For now, there are no “private subnets” in IPv6…no equivalent to 10.X.X.X or 192.168.X.X. (There used to be, but it was removed from the spec because private addressing, NATs etc. “break the internet” and are bad.) Similarly there is no NAT you can hunker down behind; the idea behind IPv6 is that all your kit is online all the time. The argument from Those That Know Better Than Anyone Else is that, well, /everyone/ should have a hideously complex firewall sitting on the edge of their network that you configure, change and tweak on a regular basis to protect yourself from the internet. Anyone who doesn’t know how to use one shouldn’t be allowed on the internet! Simples!

      In essence, the folks behind IPv6, those who currently control the spec, believe that “the internet belongs to everyone.” The idea of anything that breaks their precious “end to end” model makes them apoplectic. Disregard the part where the few ISPs that do native IPv6 are handing out only 2-5 IPs and charging for more, or the fact that significant chunks of the corporate world don’t WANT their devices addressable from the outside.

      The answer from Those That Know Better Than Anyone Else is, and always will be, stubborn insistence that The Proper Way is a big stonking firewall on your edge device that controls access to your systems, and that this will be The Only Way.

      So for all intents and purposes, unless Cisco can defeat the entire internet establishment, the days of hiding your systems in a private address space are over. No longer will only your edge devices be addressable from the outside, but so will your desktop, your cell phone and that internet connected toaster you need so very badly.

      But your firewall (which will never, ever be misconfigured) will save you.

      Yep.

      It will.

      And I in no way disagree with this approach. Not at all.

      Flames, because I will soon be covered in them after this post.

      1. Rob Swarbrick
        Coffee/keyboard

        Guess I'll be keeping...

        ...some of those wee 4-port IPv4 router/NAT boxes that are so common now. IPv4 to IPv6 bridging will go on for quite some time.

      2. Jay Daley
        Stop

        @Trevor_Pott

        IPv6 was specifically designed not to need NAT. There is even an RFC that explains that part of the design and how to do with IPv6 what you used to use NAT for:

        http://tools.ietf.org/html/rfc4864

        Worth a read.

        1. Trevor_Pott Gold badge

          @Jay Daley

          Read it. Still don't agree with their approach. IMHO it’s not even close to enough. I want address space that simply isn’t routable on the public internet, space that I can assign either manually or via DHCP so I control rigidly what lives on my network, and I want these systems to be able to get at Internet resources without those resources seeing anything about my network. What I want is NAT. Not some touchy-feely “if people code their routers and applications to respect the rules, then you will have your privacy and security,” but actual “the only information you are getting about my packets is that they come from my edge device.”

          I don’t happen to *like* the end-to-end model, and see no value in preserving it. There is simply a fundamental philosophical difference at work here. I don’t want the internet ever able to address anything behind my edge systems, I don’t want the internet to be able to uniquely identify systems behind my edge, and I want to be able to run around behind my firewall with as few shields as possible.

          ULAs aren’t a help at all because ULAs are still supposed to be globally unique. I don’t want globally unique addresses. I want a block of private addresses that *everyone* uses and that for all intents and purposes can’t be routed across the internet. Even if you put your own router into place that would disobey the rules, other routers along the way would simply refuse to forward those packets because they are private address space.

          “Untraceable IPv6 Addresses” is smoke and mirrors. There’s no actual anonymity. There’s simply “well, there are just SO MANY addresses that you can randomly assign them!” I don’t want to randomly assign them. I want them assigned sequentially in a way that makes sense TO ME, THE HUMAN WHO HAS TO RUN THE DAMNED NETWORK. But I don’t in any way want you seeing the structure of my network, or the local addresses of my systems. You should see *nothing* except the address of my edge system.
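
          For a sense of what the spec actually offers here, a sketch of RFC 4193 ULA generation versus the sequential numbering argued for above; the prefix below is randomly generated, which is exactly the complaint:

            import os
            from ipaddress import IPv6Address, IPv6Network

            # RFC 4193: fd00::/8 plus a random 40-bit global ID gives a /48 prefix
            global_id = int.from_bytes(os.urandom(5), "big")
            prefix = IPv6Network((0xFD << 120 | global_id << 80, 48))
            print("ULA prefix:", prefix)  # unique-ish, but not memorable

            # Sequential, human-friendly assignment within one /64 is still possible
            subnet = next(prefix.subnets(new_prefix=64))
            for host_num in range(1, 4):
                print(IPv6Address(int(subnet.network_address) + host_num))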

          For that matter, stateless auto-config can die in a fire too. I don’t want devices on my network auto-configuring. If you aren’t handed an address, then you languish in .169 and don’t get to communicate with anyone. Network control is eleventeen squillion times more important than being friendly or neighbourly. My network; no one else’s.

          Unless my understanding of IPv6 is wrong, absolutely nothing prevents someone from coding a router or network stack that takes what should be “private” or “privacy assured” IPv6 addresses and harvests the unique information out of them. I don’t trust my ISP, do you? Why should I trust any router past the one on the edge of my network? I learn to work with IPv6 as it is because I am forced to; I have no vote or say in how it is deployed.

          But the complete disregard for the concerns of folks like me makes me a Sad Panda. I have nothing but disdain and zero respect for the attitude of the folks who oversee IPv6. I never thought I’d say this, but I really hope Cisco wins. The internet establishment needs to be taken down a peg on this. Have you READ these documents? “The perceived benefits of NAT.” Who the heck are these folks to determine whether someone else finds benefit in a technology or not? They can run their networks their way; let us run our networks our way. With any luck, NATPT6 will be widely available regardless of the desires of the IPv6 committees.

          AND I WILL CACKLE WITH GLEE.

          Until then, I will shield every system I have, and direct as much negative mental energy as I can in the direction of the Ivory Tower folks who decided they knew better than the rest of us.

          If that sounds bitter and angry, then so be it.

      3. Anonymous Coward
        Anonymous Coward

        @Trevor

        I was offered as many IPv4 addresses (within reason) as I wanted and a /48 IPv6 prefix when I ported my home ADSL over to AAISP. That /48 is rather more than one or two addresses - it's about 1.2 x 10^24, which is enough for my home usage.

        I took 7 IPv4s to start: a /32 for my router and a /29 for other devices. I probably won't have to ask for more, but I think I'd get them.

        I'll grant you, I do take firewalls somewhat more seriously now at home. I probably won't give an external address to my Wii.

        I also get to see the latency of my link from the ISP's perspective, and a button to restart it from the BT or ISP end.

        Now I quite like the convenience of NAT, but it is a menace when it comes to, say, SIP. IPv6 has certain safeties built in to address security (IPsec, for example) and lots of other things that I am still learning.

        Just because your router does NAT does not mean your LAN is secure. Most routers are pretty sophisticated embedded devices:

        Do you keep yours patched?

        Does your vendor even bother with security patches?

        Do you even know?

        These last questions are not directed at you personally but at any admin who thinks WSUS is the last word in patching and a NAT gateway will sort out the rest of their security.

        1. Trevor_Pott Gold badge

          @gerdesj

          I never once claimed that "because my router does NAT my LAN is secure."

          I point out that it is common practice, ESPECIALLY inside a home or very small business. For these people it may not be "secure" in the truest sense of the word, but it is GOOD ENOUGH. Aunt Tilly the Home User or Ma & Pa's 2 PC Bakery don't have the resources, training or time to deal with the kind of network security and management that IPV6 is going to bring to their doors.

          They don’t care about SIP, or any of the crap that the end-to-end model makes easier. They want to get to a couple of web pages, get their e-mail, and upload their taxes once a year.

          IPV6 in its current format will be a nightmare for these folks, and the entire internet establishment doesn’t seem to care. All you get is “well, the way they are doing it is just plain WRONG, so they should learn better.”

          The way they are doing it has worked for FIFTEEN YEARS; these people are going to get their hackles up when told to change, and for no good reason they can understand.

          I am sorry, but to me it drips of arrogance and a complete disregard for real end-user usage scenarios.

      4. Nagy, Balázs András
        Thumb Up

        Re: @M Gale

        Well, that's one of the best summaries I've read so far about IPv6 NAT. Kudos, for it was funny and actually factually correct.

  21. heyrick Silver badge

    I've met conficker

    A USB key plugged into an internet kiosk terminal at work got infected. Avast noticed on my machine, but I would have quickly noticed an autorun.inf that I didn't put there (I don't use autorun myself). I told management, and printed out and translated some info on it, given the terminal is plugged into the company network. This was met with an extreme lack of interest. I suggested (since the terminal is an old XP with IE6 and Flash 0.1 or something) that I could set up a bootable USB device with Ubuntu and Firefox. Silence...

    You'd have thought they might have been interested in the discovery of a network-happy virus on a machine on their network, but...

    Don't ask about their IT support. There's a myriad of things I'd slap down, like NOT leaving yourself logged into your account with a VNC server active. Like NOT using the work computer with company sensitive documents to access Yahoo! mail. Like... no, I'm sure you all know the list. It's pretty universal. And it's pretty much not my job, and I think they've made it clear how much my input is wanted. Oh well. That's one less headache.

  22. Anonymous Coward
    FAIL

    NAT IS NOT A FIREWALL

    You wrote:

    "Even on a 'trusted' network like your home or corporate one it’s just not a good idea to be running without firewalls enabled on the local systems any more. (Get used to it folks; IPV6 doesn’t support NAT...)"

    For God's sake... NAT is not a firewall. Equating NAT with a firewall is mistake #0 in your entire mindset. ABSOLUTELY NOTHING precludes you from installing a firewall, with or without NAT and with or without IPv6. You really need to go do some homework, I think.

    1. Trevor_Pott Gold badge
      Thumb Up

      @AC

      I never said NAT was a firewall. Not once. I commented on the common current practice of people only running firewalls on their edge systems. Why? Because only their edge systems have externally addressable interfaces. The rest live in private address space that isn't externally routable, and a good number of sysadmins leave internal-only systems running with no firewalls. (They trust systems on their local network, whereas they wouldn't trust systems on the internet.)

      IPV6 on the other hand promises to bring us the wonderful life of every system on our entire network being externally addressable! Thus the only sane network management approach will be a big stonking firewall on the edge as well as firewalls on each and every system behind that edge, because they too are now externally addressable.

      IPV6 means your entire network has become "the edge," but you can sort of deal with this in that it will all eventually pass through a router, on which you can have a ridiculous set of intrusion detection systems and a far more badass firewall than you are now required to have on all your other systems.

      IPV6: where your ma & pa shop can have network design and security requirements that are the equal of present-day large multinational corporations.

      Thumbs up because that's just awesome.

  23. Terry Walker
    Happy

    I'm surprised ...

    ...That you don't have SCCM implemented along with WSUS. On the patching front that would give you the ability to send WOL packets to the switched-off machines (provided the NICs support WOL and that it's switched on, of course), compliance reports, and installation deadlines so that patches that aren't installed manually by the user are automatically installed regardless!

    Excellent Article btw!

    1. Trevor_Pott Gold badge

      @Terry Walker

      SCCM = $. :(

  24. Ben Jackson

    Admin Rights

    Interesting article. Regarding dealing with admin rights, how about BeyondTrust's Privilege Manager? http://pm.beyondtrust.com/products/PrivilegeManager.aspx

    1. Trevor_Pott Gold badge

      @Ben Jackson

      Worth looking at. Thanks!

  25. Robin 1
    Grenade

    NAT

    "Flames, because I will soon be covered in them after this post."

    Yup, probably. A little deservedly so though...

    I won't flame you, but in truth, NAT does break things, especially in the IPTel world.

    I was a network guy before I became a security guy. From that perspective I can see both sides of the argument. While I understand what NAT does to simplify security (especially for home users), the benefits of eliminating it to simplify networking outweigh that. And, to be fair, there really are some solid security benefits to eliminating NAT too. (Transparency, at the very least!)

    1. Trevor_Pott Gold badge

      @Robin 1

      I don't care if NAT breaks things. I want them broken. I don't care about IPTel or other people's applications. Broken by default is good. I don't care about anyone's network but my own. If the lives of other people trying to peer into my network or run applications that affect systems beyond my edge are made miserable...I'm strangely okay with that.

      My network doesn’t belong to the internet. It belongs to me.

      1. Anonymous Coward
        Anonymous Coward

        Aw

        Don't be a sad panda. You can run those things link-local if you wish. And have the gate drop all external traffic with them. And a couple other little tricks. When done properly you're better off than with NAT.

        There are annoying side effects that make even your NATed network not entirely yours as soon as even part of it is connected to the wider internet. Transparency is a good thing indeed for network admins that have to figure out where the deluge is coming from.

        Like, oh, multicast floods, leaked reverse lookups for private space (AS112, anyone?), and a host of other little things that shouldn't happen but in the real world do so anyway.

        Isn't your basic lament that the world is not ideal? Isn't that all of ours?

        No point insisting on keeping what is destined, for cause, to join the dodo. That's what lusers do.

        1. Trevor_Pott Gold badge

          @AC

          Keeping your stuff link local doesn't give it access to outside resources. I want access to those outside resources, but don't want people having access to my resources, or being able to track my internal machines. As far as I am concerned NATPT is ideal. What I lament is that because someone thought their preferred approach was better, the choice is removed from the rest of us. What harm to those people if NATPT on IPV6 were allowed to exist? There is enough address space they could choose to implement it, or not, as THEY saw fit. That people who prefer to limit the choices available should be given the “right” to tell the rest of us how to run things is what I lament.

          1. Anonymous Coward
            Anonymous Coward

            Building a network together.

            Well, that's the difference between a temporary solution and a well-engineered solution, innit?

            NATPT is fine as long as it works properly, but if it doesn't, you get extremely hard-to-debug problems, weird nasties, and other stuff, generally affecting some other network, not the perpetrating one. Plus, you know, the alternative is supposed to be less of a dirty hack. So the rationale is all there.

            And yes, the IETF does get to decide how the overall system will work. That's sort-of how the model works. You're free to join in and have your say.

            Personally I think IPv6 suffers from a rather perverse kind of second-system effect (they tried to not let it affect them) and it doesn't surprise me much that it hasn't been deployed widely despite pressing need. And the fact that it already feels a tad stale doesn't make it more appealing. Perhaps the main problem with IPv6 in general is that it is a bit outsized. Like, 128-bit address fields? Ya hafta be effin' kiddin' me! And so on. That's not rational, but hey.

            I think that a suitable stateful ``one way mirror'' type firewall will give you most of what you want and in fact will even work nearly the same way (same state table), but it can be a bit more efficient perhaps and is transparent for other network admins too. And that is largely a good thing.

            1. Trevor_Pott Gold badge

              @AC 23:05

              "I think that a suitable stateful ``one way mirror'' type firewall will give you most of what you want and in fact will even work nearly the same way (same state table), but it can be a bit more efficient perhaps and is transparent for other network admins too. And that is largely a good thing."

              Why? Why is the transparency of my network to other admins a good thing? It's advantageous to them, a security risk for me. Your reasoning makes no sense.

              As to "well, you can put a stateful firewall at the edge and get all the protection you want," it's not a question of "the protection I want." It's the ease of that protection. I can personally maintain an IPV6 network in my sleep. I'm a trained systems administrator.

              What about my parents? My Aunt and Uncle? What about that nice older couple who own the soup shop? To say “well, they just need to learn proper network security” is trash. Existing network security, (which is far easier and more forgiving than life under IPV6) is still far too complicated for these people.

              But it’s okay because the nerds want it that way? IPV6 is beautiful on paper; but the actual details of implementation and ease of use were totally ignored, and it’s too late to go back now.

              Unless we can add a layer (like NATPT) to IPV6 to make it more forgiving for people who understand less. Those who like IPV6 see the end-to-end model as the only important part: this is what they want to be forgiving, because they care about their pet applications, and making life easier for developers.

              I look at it and say “the end-to-end model doesn’t matter; developers get PAID to deal with the fact that life sucks.” As far as I am concerned the only people who matter are the end users; whatever is settled on must be simple, easy to use, forgiving, and most of all EASY TO UNDERSTAND for them.

              So far all I have ever seen from the anti-NAT crowd is “well someone will create a device that’s easy to use, and just as forgiving as NAT without any drawbacks whatsoever.”

              I have yet to see one. Or even some beta software. Or anything really that makes me think IPV6 is going to be even AS EASY as IPV4 for these people. Let alone actually EASIER.

              In other words: show me the business case. One that benefits me, and my users.

              Not some developers I don’t care about who are paid to deal with it anyways.

              You want to make networks more complicated? You are putting money in my pocket; but you are taking it out of the pockets of my friends and family and the small businesses that can't afford it, and for no good reason that I can see.

              1. Anonymous Coward
                Anonymous Coward

                The title is required, and must contain letters and/or digits.

                I say you're being a mite unfair here: the ubiquity of NAT-IPv4 ``router'' boxes didn't come up in a day either. Given enough market pressure there will be similar ``zero configuration'' boxes that offer IPv6, and they'll be just as crappy, if perhaps a bit faster. IPv6 merely follows a different model to provide much the same. *shrug*

                Transparency is a good thing in the sense that it helps pinpoint problems, and it was the norm before the advent of NAT --called a dirty, dirty hack by the internet engineers, and rightly so-- which was necessitated by scarcity, not security. Your insistence on keeping NAT for its ``security'' side effects is to misunderstand, as a network professional, the hows and whys of NAT.

                To wit: Despite NAT, it's still easy to build large botnets. Most of the payload delivery is done by piggy-backing on connections the victims initiate themselves. You have a point that attackers being able to see through the mirror might notice juicy targets, except it doesn't matter.

                For one, that's not how the attack model works. If you're really concerned, include ICMP ECHO in the one-way stateful filter, and blind the attacker. For another, if you do that and they still can see you, they can also look through NAT. It's not that much extra work to infer what machine initiated which connections in that case. The only difference it makes is that a NOC operator somewhere else will just see the external IPA in case of NAT, and otherwise can give you better error reports. The latter can save you from ``we don't know what's happening at that IPA, we'll just drop you off the net instead'', possibly. So yeah, you can win in continuity.

                Plus, the added perception pressure might cause vendors to take more care in providing better software, which long term is the better option, against realistically negligible direct trouble for a decent admin. Such a firewall (packet filter, really; firewall implies higher-level smarts) is, er, a few lines of configuration on all the firewalls with state tables I've seen so far, and they don't change much, or at all, across sites.

                There's no reason you can't have IPv6 ``zero configuration'' router boxes that come standard with a firewall that does everything NAT does except the actual translation step. And really, if that isn't (and it isn't) adequate for securing the kind of industrial walking disasters we talked about earlier, then neither is NAT. So where gramps et al. would be happy with such a box, you'd still have to insert a bastion host to contain the disasters.
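
                To put numbers on ``a few lines'': here's a sketch, assuming a Linux router running ip6tables, with eth0 as the WAN side and eth1 as the LAN side (interface names are made up).

                  # One-way mirror on the forwarding path: inside may initiate,
                  # outside gets nothing it did not ask for. NAT's side effect,
                  # minus the translation step.
                  ip6tables -P FORWARD DROP
                  ip6tables -A FORWARD -i eth1 -o eth0 -j ACCEPT
                  ip6tables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
                  # IPv6 falls apart without ICMPv6 (path MTU discovery and
                  # friends), so let it through -- optionally blinding pings
                  # from outside, per the echo point above.
                  ip6tables -A FORWARD -p icmpv6 --icmpv6-type echo-request -i eth0 -j DROP
                  ip6tables -A FORWARD -p icmpv6 -j ACCEPT

                That's the whole table, and it doesn't change from site to site.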

                Not saying IPv6 is necessarily better. I note the ten-year lack of uptake despite obvious looming need. But if NAT dies tomorrow I'll shed no tears. And I daresay IPv6 can do much the same, even if in a slightly different way.

                Also, as a sidenote, you need to visit the scary monks if you haven't done so already.

  26. garetht t

    A/V?

    Thank you for your honesty - one of the best articles I've read on the Reg for a while.

    I have one question... what was the A/V software running on the user PC that allowed conficker to have a LANparty?

    1. Trevor_Pott Gold badge

      @garetht t

      At the time, both NOD32 and Microsoft Security Essentials.

  27. The Original Steve
    Happy

    Great article

    I want more!

    No matter how "good" a sysadmin is, elements outside of our control (such as patching regime, security budget, network budget, politics, enforced applications that suck, etc.) can lead to a disaster; and I thank you for writing about it for your peers to see.

    One small question though Trevor... what happened to your AV? Both at the desktop and gateway level? Shouldn't that have stopped the malicious email in the first place?

    On a side note, it's often only times like these that management in SME listen to the sysadmin and you can get your concession to enforce WSUS patching.

    Also, not knowing your desktop estate, I'd suggest looking heavily into Windows 7 (or even a Linux desktop if you're that way inclined). Between Vista, 7 and a LOT of hard work I've managed to pry the special S-1-5-32-544 SID membership from management's hands.

    Look forward to the next one - I'm thinking about VDI for the future and about to re-deploy our IDS so I'll be keeping an eye on the RSS.

    Cheers.

    1. Oninoshiko

      VDI v. App Virt

      Been looking at VDI myself. Turns out once you start talking to MS about it, licensing gets ugly fast. From what I have found so far, just virtualizing the apps is easier (also much more efficient on the servers).

    2. Trevor_Pott Gold badge
      Pint

      @The Original Steve

      Well, you asked for details, and here they are.

      The e-mail is protected by a forcefield consisting entirely of a CentOS VM. This VM runs ClamAV, SpamAssassin, Pyzor, what-have-you. It filters and scans all inbound mail, and then passes it along to the Exchange server. It drops any mail found to contain a virus, and marks spam with [SPAM ASSASSIN DETECTED SPAM] in the subject. The Exchange server (through some PowerShell jiggery-pokery) dumps every e-mail it sees with that tag in the subject into the user’s Junk E-mail folder. The Junk E-mail folder cleans itself every night, deleting anything older than some preset age whose number I cannot remember off the top of my head.

      When it is working, this works like a HOT DAMN, letmetellyou. It’s free, stupendously simple to set up, and catches bloody-near everything. So how in the nether fnord did this little gem squeak by?

      Well that would be the fact that CentOS is complete and utter PANTS at keeping a halfway up-to-date install of ClamAV in its repositories. Of course my system is set to auto-yum-update certain packages every night, and run freshclam twice a day…but eventually the good folks who run ClamAV decide your version is too old, and they stop supplying new defs. Since CentOS updates ClamAV with a frequency similar to that of planetary glaciation events, this means that eventually the time will come when my ClamAV is too old, and it Just Doesn’t Work Anymore. This spam server has since been replaced.

      To cope with this, my latest spam server is based not on CentOS but on the latest Fedora (at the time of last build, 12), with the addition of the RPMforge repository (http://dag.wieers.com/rpm/FAQ.php#B). This is because RPMforge do me the favour of almost keeping up with ClamAV's spectacular update pace. We will see if this new one is any better than the old one.

      What was running on that user’s desktop that allowed the little zipped ball of yuck to walk right into his system and pwn the living crap out of it? Both NOD32 and Microsoft Security Essentials. Don’t get your hopes up that any other scanner is any better; I have similar horror stories for AVG, Trend Micro, McAfee, F-Secure, ClamWin, Kaspersky, BitDefender, Avira, G-Data and of course Symantec. Useless, the lot of them. You have to keep them around in the vain hope they will at least let you know that you’ve been pwned, but frankly it’s safe to just rebuild every six months on general principle.

      As to migrating to Windows 7; that project is currently underway.

      Now, understand that these are all preventable issues. I should have been all over that spam server the instant the clamav defs refused to update. (Naughty me for not configuring the cron job to e-mail me the results. I get a slap on the wrist for that.) I also should have run around with a cluebat telling people DO NOT OPEN E-MAILS WITH ZIPPED ATTACHMENTS CLAIMING TO BE FROM COURIER COMPANIES. (Okay, actually I did that several times, but really you have to do it on a regular basis or they forget.)
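
      (For the record, that cron fix is a one-liner. A sketch of the sort of entry I should have had, with the address made up and a working local MTA assumed:)

        # /etc/crontab fragment: cron mails all output to MAILTO, so a failed
        # update gets shoved in my face instead of rotting silently in a log.
        MAILTO=sysadmin@example.com
        # min hour  dom mon dow user  command
        0     6,18  *   *   *   root  /usr/bin/freshclam --quiet || echo "freshclam FAILED on $(hostname)"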

      There are other things I should have done; regular threat sweeps, actually checking my damned IDS software…you know the drill.

      The truth is…we missed it. There are three of us (myself, a senior sysadmin and a junior sysadmin). We’re looking after a network with 130+ server VMs, 60+ desktop VMs, 60+ physical (non-thin-client) desktops and 45ish physical servers. We deal in data volumes measured in terabytes of network traffic a day across the internal network. Peak external traffic input to the network topped 100GB a day in 2009. There are scads and scads of industry-specific software (and even hardware) to deal with, as well as some pretty demanding functionality requirements handed down from They Who Sign The Cheques.

      I am the head of IT, but the company has a CTO and a CEO who both have some reasonable say in IT policy. I have to design the network, make sure it’s implemented, maintained, secured and all the other things. The other sysadmin I work with is fantastic, and our bench tech knows his stuff. (He’s learning the trade, and even taking on junior sysadmin tasks.)

      You get one project “working,” but not “fully complete and polished,” and you toss it on the line because it needs to be in service *EFFING NOW* and then move on to the next pile of combusting feces on your desk. By the time you even think about the first project thrown into service at 80% completion, it’s been MONTHS, and there are not only configuration changes that need to be made, but it’s now a critical service item, and you have to schedule your downtime as well. (That gets interesting when you have 5 networks in 4 cities stretched across 3 provinces covering 1900km. Oh, and there are roaming users based in cities all over Canada.)

      Somewhere in there I also have to deal with vendors, prototype new systems, vet patches and system updates, design and maintain websites and intranet services, keep the printers and the phones and the blackberries and the gods-only-know what else running…

      So the really short answer is: in all the cloud of things to do, I simply missed the warning signs, and neglected parts of my network. I could sit here and list for you eleventeen squillion things wrong with my network. They are like an infinite number of needles in my eye; I know my network very well, and that includes knowing what’s wrong with it. I could spend three months just locking down the desktops, or cleaning out the AD. I could spend a year cleaning up the Linux estate, and doing All The Things That Should Be Done.

      Truth is though, I’d never ever in a million years get caught up. So at the end of last year, I decided enough was enough: I’m burning the entire network down and replacing it. Every server, every desktop. The research I have put into this, the testing, the vetting, the experiences of it all are being documented, so that I can share my experiences with those who might benefit from what I have learned. Some of it I have been (and will be) documenting on my personal blog (http://www.trevorpott.com), but the majority of it ends up here. Somehow someone decided that they had had enough of me running around the comments section of this website shooting my mouth off at everyone and everything, and I should instead be putting my experiences to use for the benefit of Vulture Central.

      And that’s how you get El Reg’s desktop management blog.

  28. Sean Kennedy

    A few points

    There are some steps that any admin can take RIGHT now to protect against most viruses out there. The best part? They're mostly free.

    1) Update A/V. You have one installed, right? Then this is a no-cost solution.

    2) WSUS + GPO + forced updates. 'Nuff said.

    3) Windows Firewall.

    The above combination will block just about anything that gets loose on an internal network. Now that you've survived a conficker infection, you will need to check that the BITS and Automatic Updates services are still running. They probably aren't, and most A/Vs won't re-enable these services, meaning no patches from MS.
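
    A quick sketch of that check in PowerShell (BITS and wuauserv are the standard service names, but verify against your own build before trusting it):

      # Conficker knocks the update plumbing over; make sure it's back up.
      Get-Service -Name BITS, wuauserv | Format-Table Name, Status

      # Put anything stopped or disabled back the way it belongs:
      foreach ($svc in 'BITS', 'wuauserv') {
          Set-Service -Name $svc -StartupType Automatic
          Start-Service -Name $svc
      }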

  29. Sean

    Management being a PITA

    I get the sense that a few of the people who've posted here have delusions of BOFH-dom.

    As a consultant and project engineer - I routinely go to sites where I set up IT policies and deploy initial configurations and have the conversation with managers about forced patching. Mostly I get my way and everything gets patched in a sensible fashion.

    The cold reality is though - if I get push back - there's only so much I can do before the management types just replace me with another IT monkey who'll STFU about forced patching.

    One bad patch deployment event in '02 that's made the management team patch-shy, or a managerial culture of 'it's a nuisance and it doesn't matter', are both common. I do my best to educate, and I get sign-off on recommendations and warnings about what happens if they don't keep their environment well patched, so I can say 'I told you so' and bill them at emergency call-out rates for the clean-up when it does go pear-shaped - but sometimes getting paid means that the solution isn't right or proper. The reality of the workplace is that the IT guy is very rarely the one authorizing cheques - and if you like getting paid - sometimes you need to put up with disastrously bad practice in order to get through the role.

    IT/Business Management interaction in industrial situations is the fine art of getting paid enough to put up with this crap.

  30. John 75

    Simple solutions to problems like this

    Systems that can't be patched need to be isolated. I don't mean cut off from network communication; I mean isolated so that only designed communication to specified machines is allowed. That is what internal firewalls are for.

    No browsing from production non-patchable computers, enforced by a firewall.

    It really is not that hard; I have done this in a production (factory/manufacturing) environment. It does not cost much and you don't need Cisco.
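
    A sketch of the idea with plain iptables (the subnet, server address and port are made up; substitute whatever the machines were actually designed to talk to):

      # Unpatchable production kit lives on its own segment and may only
      # speak to the one control server it was built to speak to.
      iptables -A FORWARD -s 192.168.50.0/24 -d 192.168.10.5 -p tcp --dport 502 -j ACCEPT
      iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
      # Everything else from that segment -- web browsing included -- stops here.
      iptables -A FORWARD -s 192.168.50.0/24 -j DROP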

    John

  31. Chris in NZ
    FAIL

    crap os

    I know this doesn't help the debate one bit, as we use what we have, but when will we have well-built desktop OSes? No DLL hell, no rebooting to install an app, or to install a patch (unless the very kernel is patched, and even then, mainframes have done that for many, many years).

    The last improvement from MS in that area was removing the need to reboot to change screen resolution (remember this one? Ha!).

    Yeah, bring Aero and UAC instead. Pathetic!

    1. A J Stiles
      Linux

      We have well-built desktop OSes already

      We already have desktop OSes with things like privilege separation (meaning normal user accounts can't poke about with things they aren't supposed to) and differences between executable and non-executable files (meaning you can't just execute any old file).

      They just don't come from Microsoft.

    2. Trevor_Pott Gold badge

      @Chris in NZ

      Actually, Vista/7 have mostly done away with DLL hell. I abhor the solution, but their "WinSXS folder" does seem to solve that particular issue.

      Windows 7 can take many more kinds of patches in stride without reboots than XP, but it admittedly falls *far* behind its competition in this regard; this is largely due to the outdated fundamental security model of the OS…something that Windows 8 is supposed to address.

      Windows 7's approach to UAC and privilege separation is as good IMHO as anything Linux or Mac have to offer, and if you wonder when Windows will catch up with Linux and Mac on the rest of the security model, please go read everything Mary Jo Foley has to say on the topic of Midori. If the rumours are right, and they are using Midori as the basis for Windows 8, then things are going to get VERY interesting in a couple of years.

      Midori is considered to be Microsoft’s first “post-Windows” operating system, and will likely form the basis of the next generation of the Windows brand. The model for the operating system is so radically different from what exists now that I frankly haven’t found myself this excited for a new OS since OSX was about to be launched.

      In any case, I can’t do this topic justice; get thee hence to Mary Jo Foley’s blog, and read what she has to say.

      http://www.zdnet.com/blog/microsoft/on-the-road-to-midori-redhawk-minsafe-and-sapphire/1477 is a good place to start, but there is plenty more…

  32. Chris in NZ
    Happy

    crap os still :)

    WinSXS, like UAC and the rest, is just another layer of plaster on top of a rotten limb. For the same reason you decided to toss your network, MS must toss its trashy OS foundations, and I am not optimistic. I envy you for being excited by an MS initiative, so I will look into W8 (thx for the link), just in case I can share the excitement. As far as I am concerned, the last excellent thing coming out of Redmond was Midtown Madness.

    Also, don't be put off (you don't seem to be) by some commenters' after-the-fact "easy to avoid" advice or, stupidly, their contempt. Your "By the time you even think about the first project thrown into service at 80% completion, it’s been MONTHS" says it all, and the vast majority of us share it.

    Side note: That's why IT mostly is NOT engineering. A bridge, a plane, a car or any other machine doesn't stop at 80% design and test completion. The first two typically are 100% completed and verified, the latter not far behind.

    On the other hand, except for things like chips and some specific equipment where IT costs are zip compared to the consequences (space, health, military, traffic control, manufacturing machines like your vinyl machine, etc), cheap and good enough is the rule in IT, especially in IT departments (and their suppliers) where IT is a means; Few "engineers" there.

    1. Trevor_Pott Gold badge
      Pint

      @Chris in NZ

      It's impossible to be "put off" by "some commenters' after-the-facts "easy to avoid" advice, or stupidly, their contempt."

      I've been a reader of El Reg for 10 years, and a commenter since the system was first put into place. I’m from the internet. Enough bad mojo directed at me might get me down…but you develop a thick skin fairly quickly once you’ve been “tossed into the pool.” I’m not exactly at the epic levels of trolling achieved by some of the commenttards, but I am certainly no digital saint either.

      Given the number of times I have taken a shot at Andrew O, torn up another commenter, harassed the Moderatrix or generally been a fantastic pain in the ass, I don’t begrudge any of these commenters a shot or two at me.

      After all, if El Reg’s fantastic commenttards didn’t partake in the time honoured tradition of taking the piss out of absolutely everyone all the time…

      …who would I have to argue with?*

      *The real question sometimes has to be: do I start arguments in the comment sections of various articles around here because I am bored, or because I actually believe fervently in whatever I am arguing about, or some combination thereof? Oh wait; I’m a commenttard, so it’s obvious that the real answer is Paris Hilton.

      1. jake Silver badge

        Not arguing. Pointing out the obvious.

        "The real question sometimes has to be: do I start arguments in the comment sections of various articles around here because I am bored, I because I actually believe fervently in whatever I am arguing about, or some combination thereof?"

        I have no idea if you really believe it, but clearly you are bored. I can't think of any other reason for you to have spent the amount of time you have spent replying to comments here, in this single thread, over the last three days ... during the work-week, no less. Don't you have a bad-by-design corporate network to redesign?

        Side note: I have never, in my over a third of a century of working with computers, heard completely rebuilding corporate infrastructure described as "burning it down". In fact, the only place I have ever heard that phrase was in a Mel Gibson movie. Next, you'll be telling us about how you're putting things into place to "avoid a fire sale".

        While I'm at it, Paris Hilton is *NEVER* the answer. Never was, never will be. Even her grandfather has disinherited her. Do the math.

        1. Trevor_Pott Gold badge
          Pint

          @jake

          It is burning it down. It's all virtualised, so you spin up your VMs on a bunch of test hardware, make sure your configs are good, and then one day *snick* old network turns off, new network turns on.

          Probably some minor re-wiring required here and there, but since it's all in racks, the whole of the network changeover from old to new will probably be doable in a single night.

          To me, that’s burning it down and replacing it.

          As to “bad by design,” I would not really be capable of describing the network as it stands in such a fashion. Bad by LACK of design maybe; or perhaps “the result of evolution rather than revolution.” The last time I truly redid the network from scratch was about 7 years ago; and it was WONDERFUL. I also had two servers and 14 desktops. In one site.

          It’s rather easy to work the bugs out of a stable network and evolve it to perfection. It’s a whole other thing to take a rapidly (and sometimes chaotically) expanding network and evolve it even towards /functionality/, let alone perfection. Remember that in the real world, IT doesn’t make all the decisions about what applications, hardware or whatnot will be put into place. Sometimes you come to work and someone has made a decision (and signed a purchase order) for completely non-IT-related reasons.

          For example: “this device was shown on the demo room floor doing exactly what we want it to do, for the price we want to pay. Let’s buy it.” It is then brought back to IT with the expectation that it be integrated, because that is what we get paid for. How it affects the purity of the network design is of zero concern to anyone outside IT. Once, twice…any network can take that. When this happens several times a year for the better part of a decade…

          As to being bored…sometimes I am. I keep El Reg open in my home VM, which I am usually RDPed into from work. Sometimes, particularly after a bit of research where my brain is still processing the information, I need to disconnect from what it is I am doing. Some of that time is wandering around outside, some is going to get a coffee…but I prefer to take my brain off task by poking at El Reg. It’s at least IT related, and thus at least tangentially related to my employment.

          Besides, when not arguing with contrarians, I get some really fantastic feedback from other commenters. Some of the things they have suggested as applications or procedures have really helped over the years.

          And Paris Hilton is always the answer! Just like most of the super-idealistic nerds that become discontent forum trolls, there are some people only the internet can ever love…

  33. Ed Gould
    Megaphone

    Never say never

    I have been in IS for about 40 years. I learned long ago never to sit down and say my job is done. There is ALWAYS a problem to work on, and if you are not looking, it will land you behind the 8-ball so fast you will never recover. The price of a good clean conscience is to be observant and always on the lookout for issues. A good operating system lets you know about issues before they become major ones.

    The first thing I did after starting a new job was to start looking at logs. Logs are absolutely the greatest thing since sliced bread, as they say. Hardware and software logs are a daily must. I cannot tell you the number of times I saw issues coming before anyone else knew there was an issue, just by looking at the logs. My absolute number one priority every morning was to go through the logs from the previous 24 hours. The number of items that I found was just fantastic.
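
    (On a Unix box, even a crude morning sweep beats nothing. A sketch, assuming syslog-style files and a working local mail setup; the paths and address are made up:)

      # Morning triage: the most frequent error-ish lines from the main logs,
      # counted and mailed before the day's fires start.
      grep -ihE 'error|fail|denied|refused' /var/log/messages /var/log/secure \
        | sort | uniq -c | sort -rn | head -50 \
        | mail -s "log triage $(hostname)" sysadmin@example.com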

    The next item was to look at the operator notes of issues from the night before; those were just filled with good information, although I could tell the good operators from the bad ones by their notes. That was excellent feedback, and went to how much of a raise the operator got that year.

    It should be no surprise that today's sysadmin is a faint shadow of yesterday's.

    1. Anonymous Coward
      Anonymous Coward

      RE: Never say never

      A query for you: when you started looking at logs, how many systems were you directly responsible for? What were your responsibilities?

      I understand that being pro-active is a good thing. However, in many situations a person simply does not have the time to look through the log files of every system they are responsible for daily, amongst their other duties.

      Yes, there are additional utilities which can monitor these things and send alerts if they detect something, but that requires time + money (purchase, install, configure OR develop, install, configure), and in most SMEs neither of those resources is readily available for additional projects beyond those handed down from above.

      1. Trevor_Pott Gold badge

        Now that...

        ...sounds like an Anonymous Coward who has been in the SME trenches. Your assessment very much mirrors my own experience.

