Why is it that whenever we see the word "exploit" or the phrase "security problems/issues", the article is always about Microsoft?
People need to give themselves a shake and stop using MS products!
Underwriters are reportedly refusing to insure energy firms because poor security controls are leaving them wide open to attacks by hackers and malware infestations. Lloyd's of London told the BBC they had seen a surge in requests for insurance from energy sector firms but poor test scores from security risk assessors means …
Did you see the words "SCADA" or "Air Gapped"? I've looked through the article and can find no mention of "Microsoft". Or were you talking about "McIntosh", who is actually one of the people interviewed for the article?
I believe that if you read up on the history behind this article without any prejudgement about why they've got these problems, you'll find the problem really has nothing to do with Microsoft. Rather, it's to do with custom PCBs running very old firmware controls, originally installed in an environment where you needed to be fairly close (geographically) to the system in order to make changes to it, whereas now you only need to be fairly close in network terms. This means that the security vulnerabilities, which were probably ease-of-use/access features at the time, leave the system wide open to anyone who can breach the outer defences and get onto the internal network.
Oh please, give it a rest.
I hate Microsoft probably as much as you appear to, but save the criticism for when it is truly deserved.
Unless the SCADA system itself is using Windows as the operating system, or uses Windows to effect day-to-day control, you are way off base.
So many of these systems were designed to connect via dedicated circuits, then the brain-dead PHBs looked at the costs and ordered IT to find ways to cut them. Consequently, shit never designed to be connected to the public net finds itself wired up. Thank the fucking beancounters for that one.
"Unless the SCADA system itself is using Windows as the operating system, or uses Windows to effect day-to-day control, you are way off base."
Actually, that is true in most cases today. SCADA uses standards like OPC (OLE for Process Control), which are built on legacy Windows technologies (COM/DCOM).
Those old legacy systems are probably much less of a problem, since they were simple. Unless you are a total idiot and connect your internal bus to the Internet, you are not likely to have any problems.
The newer stuff is much more of a problem, since it's not just Windows-based, but done by that breed of 1990s Windows programmers we thought had died out with the .com crash: the people who think their C++ compiler does bounds checking, who believe in security through obscurity, who think SQL databases are a great way to store settings for desktop software, and who believe in software licensing files which need to be regularly updated (even though you've already bought the hardware the software is tied to, at a cost of millions).
When insurance is cheaper than security, it's generally a no-brainer for management. Limbs, lives, and livelihoods are abstract liabilities that can be insured away if someone is willing to underwrite, but the bottom line, now that's a hard, sacred reality.
It's weird to find myself cheering for an insurance company, but if refusing to cover someone with dangerously bad security practices forces an improvement that's in the public good, I don't care whether they decide on principle or on profitability.
"SCADA systems have not been patched in years for various reasons: isolation of SCADA networks making the process of patching awkward; lack of motivation to perform what is sometimes seen as a risky process to a critical plant component; terms of software support contracts".
Or, as a business mentor told me early in my career, "nobody gets promoted for preventing 'screwing-up'. Nobody gets promoted for taking preventative actions"
"nobody gets promoted for preventing 'screwing-up'. Nobody gets promoted for taking preventative actions"
'Twas ever thus: take the time to do it right and you get yelled at by management for being slow, then you're either laid off or forgotten.
Slap something together in no time that falls over every six weeks, and to management you're the hero who saves the day every six weeks.
"My take is that the energy company IT dept finally gets the question of security up to board level, whereupon it is immediately thrown back with the instruction to just get some insurance cover."
I doubt it. We look to manage all risks to the business, and that means taking precautions and insuring against worst case scenarios.
However, there's an important reason why operational systems may be out of date - because of the policy disaster that afflicts energy (courtesy of politicians), most thermal plant is now out of the money, and that situation is getting worse. In Europe, relatively recent CCGT plant is achieving load factors of 25%, with a drop to 20% expected next year. The UK's not quite that bad, but it's getting worse.
Why would you bother to spend money on system updates when the plant stands a good chance of being decommissioned and sold to China in the next few years, assuming it isn't already facing a finite short term life under the EU Industrial Emissions directive?
Having decided that you're not going to spend money, seeing if you can insure is a logical next step; if you can't do that, you factor it into your plans for managing the plant down and try to hedge your imbalance risks through the trading arm.
So you see, another unintended consequence of the Greenpeace energy policy that has been foisted on the happy bill payers of Europe. Who would have thought that some fool mistaking correlation for causation on a chart would eventually lead to a chance of you and me being plunged into darkness by state-sponsored hackers from the other side of the world?
"So you see, another unintended consequence of the Greenpeace energy policy that has been foisted on the happy bill payers of Europe. Who would have thought that some fool mistaking correlation for causation on a chart would eventually lead to a chance of you and me being plunged into darkness by state-sponsored hackers from the other side of the world?"
Well you say that, but not that long ago none of this was connected to the internet at all; the internet didn't exist! Yet we were able to generate quite a lot of electricity back then with no problems at all.
So how and why did hooking it all up to the internet become a business imperative? There's clearly no particular benefit (because we managed perfectly well without it being netted). Whatever business improvements it has brought could almost certainly have been achieved another way (e.g. point-to-point dial-up? Seriously, just how much datacomms bandwidth does an oversized kettle or a big switch actually need just to say whether it's on or off?).
Using the Internet as a default choice seems to have been a lazy and 'cheap' solution to needs easily satisfied by other cheap alternatives that are inherently far harder to abuse from the other side of the world.
"Well you say that, but not that long ago none of this was connected to the internet at all; the internet didn't exist! Yet we were able to generate quite a lot of electricity back then with no problems at all."
Re-read the article. Air gapping is used, but as has been comprehensively demonstrated in Iran, that's no defence. We're not talking about script kiddies bringing down power plants, or Romanian thieves after your online banking details; we're in the realm of state-sponsored expert hackers, who possibly have access to stolen (or simply bought) SCADA source code, and if they don't have that, they probably have the resources to reverse engineer it if they so wished. If they've got the will, then circumventing an air gap is going to be easy.
You seem to assume that Olde Worlde SCADA was not connected. What the f** is the point of supervisory control and data acquisition if you still need all your experts on each site to pull the levers and twiddle the knobs? In fact, SCADA systems were running over PSTN before the semiconductor era, and the main defence was security through obscurity (plus an even stronger firewall of ignorance of the idea that somebody might want to maliciously interfere). We know better than that now.
Why the hell does an energy company need home workers with access to systems that can seriously compromise security in the first place? If it is just access to customer records (anything related to plant running being online is utter madness, so I hope it is just that) or HR-style information, then why aren't they using one of the many, many solutions out there that are already secure?
It reminds me of the time Homer got fatter and worked from home...and that ended well.
It's 3am on a Monday morning in January and you can't get your first oil burner in because the PLC is seeing a problem with an instrument that is locking the start-up sequence. You're the Shift Manager and you've confirmed there is no safety risk. Do you wait for the senior C+I engineer to drive in to work, log on, find the problem and "frig out" the sequence, exposing yourself to missing your sync time for generation and a cash-out in the market of £100k's? Or do you ask the C+I engineer to log onto the company intranet from home and change a 1 to a 0 from the comfort of his own home, and more importantly within 10-15 minutes of you identifying the need?
The UK power industry is on its knees; it can't afford the manpower to keep shift C+I engineers (or any other type, for that matter) to fix problems. So problems that can be fixed remotely need remote solutions.
In my experience, SCADA systems aren't used to update Mrs Smith's Direct Debit.
"The UK power industry is on its knees; it can't afford the manpower to keep shift C+I engineers (or any other type, for that matter) to fix problems. So problems that can be fixed remotely need remote solutions."
You say it can't afford an extra £60k a year to keep an engineer on site at all times (one extra guy to share the night shift), yet it can seemingly afford the infrastructure and software changes to put the systems online and then pay several million pounds a year in insurance. I'm stroking my Jimmy Hill chin at the moment.
No, they can't afford an extra £60k to maintain shift C+I engineers. With the glorious exception of the nuclear industry, and maybe embedded co-generation at an oil refinery, I would be amazed if there is a thermal power station in the UK with a shift C+I engineer. The systems will have been online for years because they can't afford the lost man-hours of the specialist engineer at the central engineering headquarters driving back and forth to highly distributed assets to update PLCs or SCADA systems. That facility also allows the site engineer to log on remotely if needed.
Plenty of power companies are downsizing like there is no tomorrow. Didcot A, Cockenzie, Kingsnorth, Ferrybridge C, Isle of Grain, Fawley, Littlebrook, Iron Bridge, Keadby and Tilbury are all closed or closing soon. The remaining UK conventional fleet is trimming numbers because power generation is not profitable, and influenceable costs like maintenance and staff numbers are being squeezed. First they cut the fat, then they trimmed the meat; they've sucked the marrow and now they're gnawing at the bone. It's the same in Europe too: in the Netherlands two brand-new, super-efficient gas stations have been mothballed because there is no profit. If you aren't sucking at the Government teat taking subsidies, you don't make a profit in the UK power industry any more.
> If companies can't make money out of generating electricity then the market clearly isn't working
Correct. It's not a market as you or I would recognise it.
It's like going into a shop to buy a bottle of whisky. You look, and on the shelf there are loads of bottles at £10, some more of the same at £20, a few more at £50, and a couple at £100. Naturally you want to buy one of the £10 bottles - why pay ten times as much for the same thing?
But the cashier tells you that you can't buy any of the £10 bottles until all the £20 ones have been sold, and so on up to the £100 bottles. Apparently the only difference is that they came from different suppliers. The £100 bottle supplier is really unreliable, meaning they have to keep loads of the £10 bottles in stock "just in case" even though they can't sell many; and the supplier of the £10 bottles is fed up, because he has to keep stopping and starting his production line as customers expect him to supply when the £100 supplier doesn't, but won't buy from him when the £100 supplier bothers to deliver.
Those £100 bottles are the wind farm output, down to the £10 bottles coming from coal and gas. The rules over here are that the energy companies have to buy all the output from the wind farms whether they need it or can even handle it - worst case, they have to pay the windfarms to "turn down" their output!
At the other end of the chain, operators of gas and coal plants have to turn up and down (or on and off) to balance the grid - when the wind blows (but not too much) they lose their market, but when the wind doesn't blow (or blows too much) then they are expected to fill in the gap. All this is over and above the daily variation in demand.
Because of the extra stops/starts and power changes, maintenance and wear-and-tear costs on the plant go up - but because they spend a lot of time shut down, they don't get to make as much money. Costs increased, income reduced; sometimes the profit margin is negative! It's hard to make a profit when you have to pay the customer to take your product off your hands.
Of course, none of this is factored into the "aren't we cheap" numbers put out by the wind industry.
For a better view, try this link :
Power stations used to have sufficient manning that external day-to-day support was not needed, and there was no connection between the control systems and the outside world. However, skilled manpower costs money, so to reduce costs a lot of the on-site staff were made redundant and much of the monitoring was done remotely instead. In an ideal (no-threat) environment this makes sense, as grouping the monitoring function makes it possible to manage more generators with the same number of people.

However, this (and the demand for computer-based remote control of generator output to meet the trading systems' requirements) requires communication from the power stations to the control and monitoring locations. For cheapness this is done over TCP/IP, and often over the internet.

The power station control systems were designed as isolated systems with no outside connection, so security was never a design requirement. Given the difficulty of making the control systems themselves secure (downtimes of months to years could easily occur), the security needs to sit between the power station system and the outside connection.
Minimum requirements for reasonable security
1) NO UNUSED USB PORTS (disable any unused non-removable ports by filling them with epoxy or by using a locked cover over the ports). (Note that some plant interfaces and printers may be connected by USB.)
2) Dedicated non-Windows system (Linux, Unix or OpenVMS) running a stringent firewall application as the sole interface between the power station control system and the external site(s)
3) Encrypted comms between the firewall system and the external site(s)
4) No public TCP/IP address for the firewall system or any part of the power station control system
5) Enough trained staff at the power station to allow continued operation (including requested changes of output) if the remote link fails.
For the people who say that the control systems should have been designed with security as a prime requirement: this is like saying that a WW1 ship should have been designed to stop sea-skimming missiles. At the point where many of these systems were designed, the current threats did not exist, and even if they had, the isolation of the power station control network from the rest of the world would have made them of negligible significance.
New systems being designed now (or that were designed in the last 5 years) should have security as a major design requirement.
A dedicated non-Windows system as a firewall? You think using an OpenVMS(!) system as a firewall would be better than using, you know, an actual firewall product like one of Cisco's? That would only be true under a "security through obscurity" rationale, thinking that there aren't many hackers familiar with OpenVMS.
"However, insurance is only a plaster over these underlying weaknesses"
...surely insurance isn't a plaster at all, it's just a way of moving the risk onto someone else?
A plaster would be to put in some decent firewalls or air-gap the networks from the internet in the first place. Then replace with secure systems.
I would guess, though, that a lot of these systems are many years old and the coders who knew what they were doing have had their jobs off-shored to improve the bean-counters' profit margins.
"Legacy systems, often built before the internet existed, were simply not designed with the levels of interconnection and security threat we see today."
While I'm sure it's theoretically possible to compromise them, surely legacy systems that predate the internet (Jesus Christ, critical infrastructure is practically running on abaci, btw) have a strong level of inherent security unless they have been specifically modified to take remote instruction?
"have a strong level of inherent security unless they have been specifically modified to take remote instruction?"
Other than those directly involved, no one knows how the Stuxnet infection was introduced into the core system. The prevailing theories are either introduction via an infected USB stick (involuntarily or voluntarily) or by infecting an engineer's laptop that was then connected to the 'secure' local network, from which it propagated.
Once infected, the central control system sent 'valid' messages to the equipment being controlled. These 'valid' messages forced the physical equipment (i.e. the centrifuges) to work outside its design parameters, either creating over-pressure or speeding up past design limits.
So it seems that air-gapped systems still need to be physically secure, and the local networks they inevitably rely upon also need to be secured. It's not as easy as just saying the control systems shouldn't be accessible from the internet.
"So it seems that air-gapped systems still need to be physically secure, and the local networks they inevitably rely upon also need to be secured. "
For traditional utilities that's not the point. They used to use hard-wired control systems which could not respond to software redirection.
Having systems which cannot be controlled, or have their configurations changed, by software is inherently a much safer option. Until there's guaranteed security, that's how it should be.
"They used to use hard-wired control systems which could not respond to software redirection. Having systems which cannot be controlled, or have their configurations changed, by software" (etc)
I find this comment somewhat puzzling.
Go back to the 1980s and your PLCs from Modicon, and their equivalents from Allen Bradley, Siemens, GE, and others, could all be remotely accessed, remotely controlled, remotely reconfigured, and so on. Not rocket science, even then.
Thirty years or more, the stuff's been somewhat remotely vulnerable.
In the late 1980s, I was a hired hand helping commission the first multi-site SCADA network at a major utility. At the site where I was just starting work, my sponsoring employee was off sick and security, understandably, wouldn't let me on site in the absence of authorisation. So I went to another site (same company) where the sponsors were more helpful. Using the intersite LAN (TransLAN, in fact) I continued to remotely access and remotely configure their automation kit, despite not being allowed onto the site in question. This isn't really a new issue. Mind you, that was a VMS-based setup, so lots of other security was in the picture.
That was before the SCADA world started using Windows for the "programming panels" and MMIs, which was a bad idea.
Using Windows for SCADA was a *seriously* bad idea. The PHBs thought it would be cheap and cheerful. Nobody asked the insurers back then (or maybe the insurers didn't understand, at that time).
2015. Five years since most of the industry started to pretend Stuxnet didn't change anything. What could possibly go wrong?
"Thirty years or more, the stuff's been somewhat remotely vulnerable."
Correct, I remember that time and even before that. However, I was referring to a time pre-1980s—a time before microprocessors when industrial control consisted essentially of pre-wired banks of relays, mercury switches and such.
Remember, the microprocessor came of age during the 1980s—the period to which you are referring. That was the Reagan/Thatcherite era, thus it's little wonder the latest electronics was readily adopted by all and sundry to help implement the new political economy, and utilities were about the first targets in the gunsights.
Nevertheless, utilities as we know them were around for at least 150 years before the 1980s—back to the 19th C. days of Bentham and J.S. Mill. I can assure you I remember a time in the '70s when everything was hardwired and most important procedures were still manual—opening a dam sluicegate, syncing power station generators and even hooking up a police telephone wiretap to a Strowger-switched exchange—all had to be done manually.
(That last example is the quintessential one, just compare the effort required to do just one manual wiretap on a Strowger exchange with that of the global reach that the NSA has now achieved since the introduction of the AXE and other computerized exchange switching equipment. This NSA example beautifully illustrates how computerization has enabled and empowered the hacker by many millionfold.
In my opinion, there's no better example (technically speaking) than NSA spying to show why critical infrastructure should be both hardwired and totally offline!)
"I was referring to a time pre-1980s—a time before microprocessors when industrial control consisted essentially of pre-wired banks of relays, mercury switches and such"
Your picture may well be right but your timeline may be out by a decade or so, if you look at pioneers rather than mass market.
Have a read of Modicon's 1972 patent for their 084 PLC (I don't know if this was the first in the industry).
Their 084 industrial controller was programmable by someone used to the language of relays and switches, although the heart of its "computer" was actually a PDP8-compatible. And it was remotely accessible via telephone for diagnostics, management, programming, and configuration purposes (all of which were inhibited if the front panel keyswitch was set to "secure" (or equivalent)).
Some PLC vendors even offered their customers central archival facilities where they'd connect remotely to your PLC and a copy of the program could be uploaded to paper tape for safe offsite storage (in the equivalent of "the cloud"). I believe Modicon's UK facility was in Basingstoke (Jays Close?).
So even if the products hadn't been adopted in volume in the 1970s, the concepts were known in the industry.
I'm not disagreeing with any of that. However, in the '70s PDP8s were thin on the ground--at least where I was. The only access I had was the one used for the university's student batch processor and that was at the end of the decade, '79 or so. It had to be fed with penciled-in Hollerith cards which were batch-processed. A decade earlier, I at least had access to IBM KP26 and KP29 card punches (much better than penciling-in), but the mainframe was only one of about six in a city of 2 million.
The Modicon PLC wasn't the first; there was stuff made in the '60s that used core memory and discrete transistors. I recall a contraption built as a demo to compete with the museum's tic-tac-toe/noughts & crosses relay-driven exhibit, but it ran somewhat slower than the electromechanical monster.
The first inkling of change was when the EPROM became available in the very early '70s, but it wasn't until about '75 that I got my hands on one. Things really took off with Intel's Multibus controller card system, which came out in about '75, but these early ones were really only toys. It took until about '79 for Multibus to be taken seriously (about that time I was purchasing cards with 8085s), and by '82 I knew Multibus had made it because I'd seen it used in railway signalling systems (even so, it was a pretty primitive arrangement; the logic was simple and the speed far from fast, but certainly fast enough for signalling). The main purpose wasn't to replace existing railway signals--no one would have trusted Multibus over long-established railway signalling practice--rather, it tracked the electromechanical signals to provide status and readout indicators.
It wasn't until the mid '80s that industrial controllers came into their own, and when they did, they took off like wildfire. There were 8080s, 8085s, Z80s, 6800s, 68000s and 8051s everywhere. However, they weren't being used for truly serious work such as syncing power station generators; rather, they were confined to jobs such as TV camera remotes, although by then the telephone industry was using literally millions and millions of 8048s in switching equipment. In reality, the '80s was the decade of learning how to use microprocessors; it wasn't until the '90s that things got serious, when people had become sufficiently proficient for controllers to complement or replace workers (which business and industry wanted, as downsizing and outsourcing had become the economic mantra of that time).
I'm an enormous fan of industrial controllers, and the latest incarnations are truly amazing devices. Nevertheless, all too often, and from years of experience, I've seen instances where they've been installed as interfaces between human operators and machines and have made things worse; alternatively, they've changed the paradigm to the extent that new workers have no practical feel for the equipment they're operating and rely totally on the controller for everything. It is this phenomenon that makes hacking industrial infrastructure so pernicious, as nowadays operators are isolated from the equipment. (It is extremely difficult to replace an experienced operator who is familiar with both the feel and foibles of his analog gauges with 'equivalent' digital counterparts, as there are things--sounds, vibrations, sensations, meter dynamics (is the gauge critically damped, and such)--which are important but are never digitized.)
Frankly, I'm horrified by the way many young electrical engineers have taken to industrial controllers without an adequate understanding of the analog world that's often behind the controller. For example, I have almost come to expect that a young engineer skilled in digital electronics will not understand the importance of damping in electromechanical instruments. In large, complex environments (power stations, chemical plants, etc.) this increasing specialization, seemingly unavoidable because of increased complexity and because workers are no longer taught related skills such as those on the other side of the interface, only strengthens my view that we've come too far too fast when it comes to controlling critical infrastructure.
It's little wonder the insurance industry is getting twitchy, from the evidence, it has good reason to be.
Hey, if they can even figure out the OS running most of the applications at your typical insurer, they're doing better than the department supporting it.
These are all very insecure; when they were built, it simply was not a requirement. However, the number of people with the know-how is minuscule, and there is far more money to be made exploiting Bitcoin transactions.
Back in the early '90s I wrote an OS for an embedded system to control a water purification process.
Back then, the idea was to install the box, attach a computer via the serial port, configure the process, and let it run.
Security wasn't a major concern, because these were standalone systems placed in secure environments. It simply was never an issue.
Now that everyone wants everything connected, these 20+ year old systems need to be rebuilt. Either the software source code is gone, or there needs to be a serious rewrite to update and upgrade, along with adding security.
That's a very expensive proposition and not an easy one.
The skills required to write low level and embedded software and RTOS are lacking.
You can't go and pull in an untrained body from India who barely groks Java to do this.
Not just the language skills, but the hardcore engineering discipline that goes along with making it error-free...
Do they even teach C these days in universities?
"The skills required to write low level and embedded software and RTOS are lacking."
They're not completely lacking. The people with them are just not willing to work for the same shit wages and conditions as
"an untrained body from overseas who barely groks Java"
"Not just the language skills but the hardcore engineering discipline that goes along with making it error free..."
"Do they even teach C these days in Universities?"
Not much, as far as I can see from the local graduate intake of the last few years. Though I was rather pleased when one of them this year admitted to having read The Mythical Man-Month. He's leaving, though; he's worked out that the existence of a decent corporate graduate recruitment scheme does not necessarily mean there is a medium-term future for smart, conscientious people.
People fitting this description are still available. Some of them are even willing to travel given sufficient motivation. They mostly do not come straight from college. They are mostly willing to share their skills and experience with younger staff, if given the opportunity. These 'old timers' do not cost a fortune in comparison with a badly delivered project, but these people may want more than the minimum wage that the IT Director thinks is appropriate for the usual Windows-centric IT staff and presentation layer people.
"These legacy systems are increasingly being connected to the internet, essentially to make them easier to manage remotely."
From the beginning I have been completely perplexed (and still am) as to why a controversy ever developed over the matter of internet security in connection with the control and running of utilities, power generation and distribution etc.—the internet should never have been connected to utilities until it was guaranteed to be totally secure.
Long before the internet, power utilities etc. had very sophisticated systems in place to run their distribution networks. For example, there were well-established communications systems and procedures for syncing/phase-locking generators at remote and disparate locations, as well as interlocks to stop switching yards connecting networks unless they were fully in phase. Such procedures are absolutely essential, or the consequences would be disastrous.
There's nothing wrong with updating and modernizing, and in an ideal world using the internet would make sense. However, it makes no sense to retrofit an insecure internet onto fully functioning legacy networks just because it can be done.
To unify everything onto the internet just for the sake of it/fashion, or just to save a few dollars, makes little or no sense, especially given the potentially disastrous consequences. Keeping utility networks on their pre-internet control systems is the best assurance of protection available.
Whatever has gone wrong with the great tradition of practical common sense amongst professional engineers?
We certainly know they've fucked-up big-time when insurance actuaries have to intervene.
To the best of my knowledge, syncing and connecting generators to the grid still requires manual control at all "significant" UK power stations. OCGTs and wind farms will have automatic and internet-controlled syncing arrangements, but other plants still require a person to hold down a button on a physically connected electrical control loop, not a networked one.
That's my understanding too. It's the same here in Oz, where I am, but I'm not sure how much longer that'll be the case, given the pressure on 'deregulated' operators to make money with networks that have been left without sufficient maintenance since the '80s. It's different in the US, however, where just about all technical infrastructure has succumbed to control via the internet.
(What I find alarming is the new breed of engineers who want to automate everything, without questioning whether it's necessary or whether it's going to be reliable. Practical reliability testing/state analysis already shows that it's nigh on impossible to fully model all the states/conditions in something as prosaic as a domestic VCR, let alone a sophisticated control network, yet these engineers are quite prepared to take such risks with the added complexity. That's very different to the belt-and-braces, "keep it simple for reliability and efficient maintenance" environment in which I was educated.
Even august institutions such as the IEEE aren't as independent and prepared to speak out against bad practices as they once were. Again, this trend goes back to the '80s, when many engineers were ousted from corporate management in favour of accountants, economists and lawyers--a time when profits won out over the need for engineering excellence.)