Researchers have identified a kernel-level vulnerability in Windows that allows attackers to gain escalated privileges and may also allow them to remotely execute malicious code. All versions of the Microsoft OS are affected, including the heavily fortified Windows 7. The buffer overflow, which was originally reported here, can …
Just check ALL buffers and sacrifice a little performance, ok? Goddam this has to end. lol
So even chunks of the "all new" Vista kernel were just recycled code...
Not necessarily recycled code. It could have been freshly minted code. It did, however, have to conform to the same spec (or else it wouldn't be compatible with existing Windows apps) and was therefore susceptible to the same design errors, particularly if the new version was written by someone familiar with the old version.
Parts of the Windows API were devised in the 80s for a machine with less than 640k of memory and no protection. (CreateDIBPalette isn't quite that old, but close.) The former point encouraged "packing" structures and "re-using" fields for different purposes depending on the values of other fields. The latter point meant that programmers had to be trusted anyway. If you'd insisted on a tighter spec then the resulting product wouldn't have fitted on the target platform and wouldn't actually have been any more secure as a result. That's a cost with no benefit. Cutting corners with the Windows API in the 1980s was a perfectly rational thing to do.
Fast forward twenty years and Microsoft probably don't *have* a mathematically rigorous spec for the Windows API. If they did they'd probably find that it was self-inconsistent and provably insecure. The twist, however, is that the closed source ecosystem means that after you've found a problem you may find you can't fix it without breaking existing apps and pissing off your customers even more. Closed source ecosystems are intrinsically less secure than open ones because sometimes you aren't allowed to fix them.
Which brings us back to the "all new" Vista kernel you mentioned, since one of the big criticisms leveled against it was the fact that MS took the plunge and redesigned all the kernel interfaces, with the result that zillions of hardware devices were no longer supported. Those hardware vendors that were still in business eventually issued new drivers for their more recent offerings, but that still left a lot of hardware unsupported. (And as we've read this week, XP's market share is still larger than Vista+7 put together. Co-incidence?)
And in other news
Bears defecate in the woods.
one legged ducks swim in circles :o)
Seriously why is anyone surprised?
This is such a regular occurrence that I'm astounded that people are still surprised by this kind of news.
Let's face it, the "all new" Vista kernel was more of an "XP kernel with some old junk removed"; likewise the Win7 kernel is just the same again with even more derelict and forgotten code removed or tweaked. You don't have to work for M$ to know that, it's pretty much common knowledge.
I didn't know there was supposed to be evidence to the contrary, after a cursory google I can't find anything to suggest that vista's kernel was supposed to be all new. Got a good link for that?
(all I can find is random blogs and stuff about "Longhorn")
Re: AC at 00:05 GMT
win32k.sys isn't a necessary part of kernel per se - it's a lot of the functionality that in *NIXes is provided by desktop environments, but moved to kernel mode in NT 3.5 (?) times to improve performance.
If I wrote code...
I'd check for buffer overflow problems. You'd think they'd have caught onto this sort of thing by now.
I do write code
I also teach students to code. High performance languages such as C and C++ are highly susceptible to this kind of error. I, and just about any coder I have ever taught to use such languages, have written code with buffer and heap overflow possibilities, probably many times over. Most of the code you create won't ever be used in a hostile environment - until this use context creeps up on you, and these security bugs really matter.
You will minimise occurrence by improving programmer education and by maximising code peer review, in some cases helped using automated code analysis tools. Even very experienced coders with deadlines to meet and insufficient time for peer review will create buffer overflows.
A good defence is likely to include opening up the source code to all interested. This doesn't defend against such bugs in open source code which isn't being inspected by many interested eyeballs. It does defend open source code which is being openly inspected. In this case there will still be some eyeballs finding these bugs and more interested in covert criminal or intelligence agency use of them than in reporting them and providing fixes upstream.
The security case for closed source is worse than this. Those with access to closed source code who are not the mainstream developers are more likely to have a covert interest than in reporting problems to other users and developers. Software development shops are rarely leak free and programmers with criminal intent are not deterred by closed source intellectual property restrictions. Also governments won't purchase Windows unless their intelligence agencies have access to the source code.
"You will minimise occurrence by improving programmer education and by maximising code peer review, in some cases helped using automated code analysis tools. "
It might help if you also point out that the *fastest* piece of software is the one that ignores whatever is input and finishes immediately, i.e. it does nothing useful but boy, does it do it *very* quickly.
Response speed (which is *one* kind of performance; there are others) is *rarely* critical in the real world (much of Windows appears to be interpreted through the Common Language Runtime).
However *when* it is, it's usually linked to other issues like security and reliability. Control systems for everything from machine tools to boilers (neither of which, unlike cars and avionics, AFAIK has specific development standards to work to, but which could kill someone if their software was written by a halfwit) spring to mind. So does low (or relatively low) level wrapper code (IIRC in Windows the DIB prefix means "Device Independent Bitmap").
So if you're training people to develop for the lower levels of mass market products (the Windows OS) which *will* be attacked, or deeply embedded systems (which *might* be attacked and would probably hurt or kill someone if they go wrong) where "performance" *is* an issue, and you're *not* instilling in them a *deep* interest in things like testability, sanity checking input data and even (dare I even breathe it) verifiability of their code, their future employers should get ready for a whole lot of fail.
Even very experienced coders with deadlines to meet and insufficient time for peer review will create buffer overflows.
No doubt. They might find that writing a tool (well in principle I'd guess some C macros) that spits out outlines of functions in either a full parameter checking or a take-everything-at-face-value version might be a worthwhile investment.
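A rough sketch of what such a tool might boil down to, using nothing but a standard C macro (entirely hypothetical; the real tool would presumably generate the function outlines from a spec). One switch selects either the full-parameter-checking build or the take-everything-at-face-value build:

```c
#include <stddef.h>

/* Normally you'd set this with -DPARANOID on the compiler command line;
 * it's defined here so the sketch is self-contained. */
#define PARANOID

/* Expands to a full parameter check in the PARANOID build,
 * and to nothing in the face-value build. */
#ifdef PARANOID
#define CHECK(cond, ret) do { if (!(cond)) return (ret); } while (0)
#else
#define CHECK(cond, ret) ((void)0)
#endif

/* Example function written once, buildable in either flavour. */
int copy_pixels(unsigned char *dst, size_t dstlen,
                const unsigned char *src, size_t srclen)
{
    CHECK(dst != NULL, -1);
    CHECK(src != NULL, -1);
    CHECK(srclen <= dstlen, -1);   /* the check that prevents the overflow */
    for (size_t i = 0; i < srclen; i++)
        dst[i] = src[i];
    return 0;
}
```

The attraction is that the checks cost nothing in the release build if you decide you can afford to drop them, and everything is still there for debug and test builds.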
Mine's the jacket with "Premature optimization is the root of most evil. DE Knuth" on it.
copsewood, John Smith 19
I have another slogan for you: "Assumption is the mother of all fuckups".
Sanity checking takes time. Yet as a beginning coder I found myself writing code to check everything, everywhere. That's reasonable, though at some point I stopped silently folding invalid input into some default value; better to just give up and teach the programmer to make sure the inputs are correct. That way you can, carefully, lift most of the sanity checking and speed up the code. It also shows very clearly where you must do a lot of sanity checking: right where your code receives inputs it cannot afford to assume anything about. It also encourages you to perform every check exactly once.
It's also why I believe that encapsulation - walling off areas of responsibility, creating well-defined interfaces between parts - is the most useful thing that OO gave us. The rest, like hierarchical inheritance or even multiple inheritance, and polymorphism, is window dressing. Useful window dressing, not always but often enough, but window dressing. And yes, "I believe". That's highly opinionated personal opinion right there.
Anyway. As a programmer you're free to assume whatever you want, as long as you've thoroughly checked and confirmed your assumptions. That ought to be basic practice for everyone from architect down to "cheap indian" implementor. I trust our instructors teach that nowadays?
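A minimal hypothetical C sketch of that "check exactly once, at the boundary" discipline (function names invented for illustration): the public entry point does the full validation, and the internal worker is allowed to assume valid input.

```c
#include <assert.h>
#include <stddef.h>

/* Internal worker: assumes the caller has already validated its inputs,
 * so it carries no checks of its own (an assert restates the contract
 * in debug builds only). */
static int sum_internal(const int *vals, size_t n)
{
    assert(vals != NULL);
    int total = 0;
    for (size_t i = 0; i < n; i++)
        total += vals[i];
    return total;
}

/* Public boundary: the one place untrusted input arrives, and therefore
 * the one place the full check is performed - exactly once. */
int sum_checked(const int *vals, size_t n, int *out)
{
    if (vals == NULL || out == NULL)
        return -1;                /* reject bad input here, once */
    *out = sum_internal(vals, n);
    return 0;
}
```

The `static` keyword is doing the encapsulation work here: nothing outside this file can call the unchecked worker directly.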
To hear the informed opinion of your self and the posters to which you replied!
I'm a Software Engineer as well and can appreciate the nightmare it CAN be to ensure all POTENTIAL vulnerabilities are captured and dealt with accordingly.
Refreshing as well because usually these sorts of stories start with some Linux zealot banging on about how it could never happen on their platform of choice or a Fanboi giving it the same. Well, news for you chaps. It can and does!
"Assumption is the mother of all fuckups".
Agreed. Especially the ones about the interfaces between different levels of code modules written by different programmers and the fact that no user or developer wants to do Bad Stuff (TM).
"That way you can, carefully, lift most of the sanity checking and speed up the code. It also shows very clearly where you must do a lot of sanity checking: Right where your code receives inputs it cannot afford to assume anything about. It also encourages to perform every check exactly once."
That would be *appropriate* optimization based on data collection of a *running* system and analysis of the results.
Note: the function referred to is part of the Windows API. It's in the manual and publicly accessible. It is *definitely* a part of Windows that *will* receive input from *almost* anywhere. It is likely to be a wrapper for a bunch of device specific stuff, but called rarely enough (AFAIK "Device Independent" bitmaps are not the *performance* option for anything) that checking its parameters should not hit performance (premature optimization again).
My gut feeling is that it should be feasible to generate a lot of the sanity check code automatically from a spec of the function's definition. It would seem the sort of thing macro processors were written for. Note that working through an API manual and feeding functions pretty much anything *but* valid parameters, as a way to break the OS (ideally to a state useful to bad guys), has been SOP since the late 1960s.
If it can be called from user space it's fair game.
On premature optimisation
Personally I like to work toward optimisation wherever I can. Note that this is different from getting down with the code and tweaking so it'll run faster, which is where the cost is. There is a big difference between doing that and planning ahead for possible later optimisation. And, of course, just as better algorithms trump bit twiddling any day, so better architecture trumps better algorithms. The best optimisation possible is not making sure that all the work is done in the most efficient manner, but twisting things so that the work ceases to be necessary.
Knuth basically says "don't waste your time", and I like to think big about not wasting my time. The fact that the optimisation (in the previous example, lifting input checking on non-API subroutines) is now possible is the salient point. Do that in a timely manner, and the now-superfluous sanity checking code didn't need to be written in the first place. What do you mean, lose time optimising?
I probably should've clarified. So here I do.
As to feeding invalid input into APIs, that should be SOP. Apparently it isn't; it's quite amazing what you can still break with even simple "fuzzers". Just about everything will break down eventually. Not too long ago someone figured out how to tickle TCP stacks (remotely) such that the system ran out of *timers*. OSes don't much like that, no. It's also quite hard to prevent that sort of thing through simple input filtering. But as we've seen, even that regularly doesn't happen in the right places. And, by certain reports, lots and lots happens in all the wrong places. Common sense has it that shotgunning sanity checks is still effort well spent. I think it's just as much a waste of time, though justifiable if finding the right places turns out to be too hard. It'd still have me ask hard questions like: Why? So you don't actually know what the data flow in your code is like? Oh?
That's why I like the encapsulation part of OO programming: It gives me language tools that help provide guarantees about the structures I'm manipulating and provide barriers against meddling from the outside after the input checking has been done. That and the (same) tools that let me clearly define boundaries and interfaces between program parts.
Whew! That was close
"from Windows XP SP 3 to Windows Vista, 7, and Server 2008"
Thank goodness my beloved MS BOB is safe.
Vista or Win7 is XP with code removed? Obviously you haven't noted the installed code size. They bolted on every possible insecurity they could think of, in addition to the regular pre-existing bugs.
MS worse than Apple
Apple has only reached Lemonaid Version 4 and still hasn't got it right. But MS, after over 10 years, seven versions and service packs beyond my fingers and toes, still can't get it right.
Makes Linux look pretty solid.
Are you honestly saying Linux doesn't get this kind of vulnerability? If so, I'm glad you're not in charge of my systems. I suggest you subscribe to your distribution's security list, and start reviewing it.
Granted, an issue as generally exploitable as this comes up rarely, but "local attacker can get root if you're using n" issues come up all the time. I've got a "PAM vulnerability" from 7/7/10 (Ubuntu) in my inbox right now, for instance.
"local attacker can get root .."
Give them some credit
As a Mac user I have some sympathy for MS, genuinely I do. This is code that is very complicated, hundreds of thousands of lines of spaghetti code. Quite a bit is probably so old that most of the devs that wrote it have long since moved on; it would need loads of manpower to sort out. Couple that with the marketers and PHBs bearing down on the devs to get the latest greatest code out the door, and these poor devs, who I am sure would love to fix it properly, are simply not given the time to do the job properly.
I am not advocating they do an OSX, dump the current line and start from scratch, but sooner or later most companies reach a point with their lead product where they simply have to cut their losses and start again. MS need to sit down and have a serious think about how much time and effort is going into fixing and fending off these bad press stories. Then again, I suppose so long as the license revenue keeps flowing, it far outweighs the cost of a few hundred developers' time to fix a few small niggles every few weeks.
Time is money, and as they say money talks and BS...you know the rest.
I'd be surprised if they had the ability
MS are mostly a technology acquisition company now; they buy stuff they can't write themselves. That has been true since the FrontPage, and I think even the MS Office, days...
They do some development, sure, but rewrite a new OS? Doubt they could do it. Not without hiring some programmers ;o)
"dump the current line and start from scratch"
.NET could be the tool to allow for this! (No - don't laugh!)
As a layer of abstraction between applications and the OS, in theory the OS could be replaced for something newer/more secure, but still keep application compatibility at the .NET API level.
I grant you, it is a big IF, but technically possible. It would require all major apps (including MS Office) to be re-written against .NET, but this could be done today, before the OS is actually switched out.
I guess it all depends on how much longer the fat desktop paradigm remains fashionable. With cloud computing and better browser apps (HTML5 et al) looming, who needs Windows?
I, for one, don't like the idea of putting my data into the hands of a company that either wants to sell my privacy to the highest bidder or just doesn't give a damn about my stuff OR puts my stuff beyond an unreliable net connection. But I do believe this time will come - it's pretty inevitable.
So maybe the onerous task to abstract the OS via .NET is pointless - Windows is a dead OS walking?
Forget I ever said anything.
Start from scratch?
What do you think .net is? Ultimately that will be the OS and any legacy native apps will run in virtual machines. Phone 7 is just the beginning.
@it would need loads of manpower to sort out
ms has that manpower. tells customers they are designing from the ground up.
ms took the time to write vista and produced a camel.
the main selling point for 7 apart from security and stability is you can run it in xp mode
remember this is the os you pay for. why isn't it way better than all those free oses?
MS already tried to do this...
MS already tried to do this... it's called .net. It's secure, easy to manage, fast and extremely efficient. At least, that's what the marketing BS MS released said anyway.
Back in the real world, it's DLL hell overload, bodged APIs layered on top of the old existing APIs, and is so inefficient it's comical. The first versions missed out half of what real developers actually required so struggling developers had to lever in place so many bodges and kludges to call normal APIs it was unfunny. There are now around 5 different versions of .net to download, install and maintain on every system and that's before MS start the shenanigans of certain versions only working on certain underlying OSes... Apparently all this is good.
Wasn't Vista and Windows 7 meant to be the 'dump it all and start again' product though?
Re: .NET could be the tool to allow for this!
Already being worked on , see Midori
It's not rocket surgery!
Preventing buffer overflows isn't exactly that difficult. You know how big your buffer is, so you only accept as many bytes as will fit. If somebody throws a huge mess of bytes at you, you just take what will fit and send the rest to the bit bucket. Problem eliminated.
Of course, finding every, single, last place you've used a buffer and correcting the code in something as big as Windows is going to be a long, hard, difficult job; no question. However, there's no reason in the world to add new code with a potential overflow issue.
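The "take what will fit, bit-bucket the rest" policy above can be sketched in a few lines of C (hypothetical helper, name invented; whether truncating is the right policy, as opposed to rejecting outright as argued earlier in the thread, depends on the interface):

```c
#include <string.h>

/* "Take what will fit and send the rest to the bit bucket":
 * copy at most dstlen-1 bytes, always NUL-terminate, and report
 * how many bytes were actually kept. */
size_t bounded_fill(char *dst, size_t dstlen, const char *src, size_t srclen)
{
    if (dstlen == 0)
        return 0;                      /* nowhere to put anything */
    size_t keep = srclen < dstlen - 1 ? srclen : dstlen - 1;
    memcpy(dst, src, keep);
    dst[keep] = '\0';                  /* the terminator always fits */
    return keep;
}
```

Comparing the return value with `srclen` tells the caller whether anything was silently dropped.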
Why the news story? Because some researcher is blowing their own trumpet?
To see how common buffer overflow issues are, follow some Linux security advisories - e.g. there are only 4 mentions of buffer overflow on http://www.debian.org/security/ currently (showing 11 July on), with more sitting under "several vulnerabilities" items, such as:
It was discovered a buffer overflow in libpng which allows remote attackers to execute arbitrary code via a PNG image that triggers an additional data row.
It's a feature.
Really, it allows end users to bypass the kernel DRM and install open source drivers in the kernel.
Do you really think
that Microsoft don't sit down and have serious thinks? Or that they don't have plenty of software engineers who do nothing but review and fix code? Considering the cost of a Windows license in the Western hemisphere, compared to the average wage of a software engineer in, for example, India, I think that's highly likely.
It's fantastic that there are so many people out there who either genuinely want to fix exploitable code, or who want to bash Microsoft so much, that they find these weaknesses for us and Microsoft.
Lifts Head, Sniffs The Wind And Detects..............
...........the inevitable, tiresome comments about Windows and MS.
Like Pavlov's dogs you respond with saddening predictability.
Headline: "vuln affects all Windows versions". Woof woof, a chance to repeat what's been said countless times before. Is Linux perfect? Is OSX? Is BSD? A "vuln" in Windows, sorry Windoze, has the potential to be more damaging because of usage numbers. But that's the way things are. I agree that there should be more balance. That people should be using Linux more. But they're not. If they were, we'd be reading headlines about "vulns" in Linux.
How many of you MS, sorry M$, bashers actually know how to write "code"? You talk the talk. As if, were you given the chance, you would be able to write a completely secure operating system.
Please, either be original or do us a favour, give it a rest. Instead of preaching to the converted, try to convince the un-converted or rather the un-aware.
@ Lifts Head, Sniffs The Wind And Detects
C null terminated strings
C null terminated strings and similar stupidity.
C, C++ and all related languages suffer from this. In Modula-2 you can put inline compiler directives to turn RUN TIME array bound checking on or off. Since 1983 or earlier.
Software engineering wise, PC programming is still living in the 1970s. The only useful thing (misused) was C++ "objects", which is really just an automatic way of hiding a pointer to an instance of a struct where some members are pointers to functions. 1987.
Windows 7 is the latest version of NT 3.1 (1993), based on 1985's IBM/MS OS/2 (MS had an OS/2 version of their own in 1989 with LAN Manager added, hence NT's first version number is 3.1).
So OS programmers in MS (and Solaris, Linux, Mac OS X) have been knowingly writing insecure unreliable software even though "technology" existed since early 1980s to avoid this well known problem even then.
In 1986 you didn't just get an application error; an array bounds (buffer overflow) error did everything from rebooting the PC to erasing the disk.
All versions of Windows, or all currently supported versions of Windows?
The article says both and implies that they mean the same thing, when obviously they don't.
To be fair
to m$: if this is legacy code, then it will have been written without the overflow checking, but then what's it doing in the latest and greatest products?
More than likely the PHBs decided "screw this, we haven't got time to write a decent secure .dll, shove the old code in so we can get the product out of the door".
But then again, even in ye olden days of times past (i.e. the 1990s), how much extra processing would it have taken to go:
Query: what size is your data
Println "Oi your data is bigger than you say it is fek off"
But then I'm using linux so I'm happ.... bugs in that too? aawww s**t
Give MS a chance ...
I am no MS lover, but from what I can see the finders of this bug have gone public as soon as they found the problem. They should have reported it to MS and gone public some time later - a month would be OK. That would allow MS to fix & get the patch out. What they have done is to make it easier for crackers to attack end user systems.
Having said that: it does seem that part of the problem is that MS has too much running at kernel level, things that do not need to be there. Thus problems in code have greater consequences than they ought to. This is a big design error in MS systems.
Oh: it is NOT a remotely exploitable problem as El Reg suggests.
No Innocents Here
This is a right-across-the-board piece of stupidity that affects all mainstream OSs - No exceptions.
I recognised buffer overflows as an important issue in the early 1980s doing assembler programming on a BBC model B, so how is it that major software devs STILL write unsafe code? If you don't know for an absolute certainty that underlying code has length checks, do it on your own code. Don't be a lazy bastard!
1) Writing a new version of Windows from scratch is just the dumbest thing they could do. You don't drop a code base of 100k+ lines (Windows is probably several million lines of code) because all those fixes you put in over time will be lost.
2) When was the last time anyone here wrote a giant software project, running for several years, that doesn't get bugs reported even in old features... and that's without world+dog specifically looking for such things. I bet any code ANY of us have written has a higher number of bugs per line than Windows... but our software doesn't get relentlessly hammered.
3) I always have wondered - there _must_ be similar holes in MacOSX and *nix variants. It's simply impossible there are not. And in fact don't they have hacking contests on different platforms? What happens about them... do *nix people have an equivalent of Windows Update or are they reliant on updating the OS to a new version, or what? Genuine question...
so speaks the uninformed....
sure, this article is several years old.
just taking the numbers from this, but you can check the article yourself.
linux had an average of 0.17 bugs per 1000 lines of code.
commercial software has about 20-30 bugs per 1000 lines of code.
when that was written XP had around 40 million lines of code. there are issues with the study in that they weren't able to look at the source code for XP,
but are you seriously going to claim that M$ are going to have a "much" lower bug count than linux...
and talking about hacking contests, I point you towards
vista and OS-X were beaten, ubuntu wasn't.
sudo apt-get update && sudo apt-get upgrade (the first refreshes the package lists; the second updates ALL applications and the OS)
Apt based packaging from central repositories makes updates easy.
This is why I love Ubuntu.
Actually the default is to run this for you each week as "Update Manager", so you don't get swamped with updates each day.
Security updates are pushed out immediately.
Re: 3 things (3)
Most Linux distros update the entire installed software base as often as you like (check once a day or once a week). The updates can be automatic or user authorized (root password needed on most). Because only kernel updates need reboots, in most cases this can all happen without the user being aware until, for example, a new version of Firefox is started and announces the fact.
Is the current code base of windows untenable?
"1) Writing a new version of Windows from scratch is just the dumbest thing they could do."
I don't know really. Sometimes the 'dumbest' solution is the best.
Granted, it will take people who really understand what the hell is going on behind the scenes, and smart people at that, and it would be 'somewhat' labour intensive, but sometimes, when a code base becomes too ugly, it may just as well be time to nuke the entire thing from orbit, as it's the only way to be sure...
I use windows (I am platform agnostic somewhat). This is what I would like to see.
Re: Give them some credit
AC says: "As a Mac user ... I am not advocating they do an OSX, dump the current line and start from scratch,"
Well, OS X is simply recycled UNIX from NeXTSTEP with a shiny new UI added. Nothing to be proud of, and after 10 years it still has < 5% market share. It's really an inferior security model, just less attacked.
Certainly not from scratch. It suffers from the same bloat as Windows, and from cruft and stupidity going back to 1976, just as Linux does.
Windows 7 is the latest version of a 17 year old OS based on work started in the 1980s.
OS X is a 10 year old OS based on work started in the 1970s.
I'll go now. Mines the one with Knuth's "Fundamental Algorithms" in the pocket.
GDUG FTW! GNU/Debian/Ubuntu/gnome :-)
Every OS has flaws but windows is a bundle of flaws masquerading as an OS.
Checking and quick patching are the key points.
GNU/Debian/Ubuntu/gnome is spot on in this regard.
With open code lots of ppl are looking and checking and fixing.
Big names like Google and IBM rely on it.
Apt based packaging from central repositories makes updates easy.
Windows is a nasty mess with previous compromises between security and marketing coming back to haunt them.
A first step for anyone running windows is to do the easy obvious stuff, get a Linux based router and run windows behind it, for a start.
Run firefox or chrome not IE, install Security Essentials. Don't use dodgy copies of XP.
If you are short of cash, use Ubuntu, or at a push download the 120 day copy of MS Server 2008 and turn it into a workstation; you can bump it to 240 days legally.
But seriously unless you have corporate IT keeping an eye on security for you, just take the time to learn Ubuntu.
Using windows is like wearing a big target on your back, all the viruses are targeting YOU!
Get a Xbox for games.
Running Ubuntu... use Virtualbox and run a copy of XP in that for any apps you need.
Run "sudo stop qemu-kvm" before you start it and you are good to go.
Systematic search of code patterns?
No company can keep *all* their developers active *all* the time.
Logic says there can't be *that* many *patterns* of code that have the profile that will give this failure.
Here's the point: pick up the bugs (and this *is* a bug) in the *source* code, rather than running loads of tricky (but ultimately ineffective, given how many versions it took to find this) program exercisers, stress testers etc.
Hint. This is not *just* a bug in a function. It's a bug in your development *process*.
Speaking of which how long ago was that root and branch code review of Windows?
credit, but only where due
@Mage: Unix certainly has roots in the 70's and some of the Mac OS X underpinnings therefore go back that far. But don't criticize code *just* because it's old. The big advantage of old open-source code (Mac OS X's Unix underpinnings are all open-source) is that it's open, and has been extensively inspected and tested.
I don't criticize Apple for not rewriting the kernel every couple of revisions - that would be silly.
But Microsoft DID put a big, sparkly "all new" sticker on Vista. Now we find that, well, maybe they fudged the truth. Or possibly, to one poster's point, it *was* rewritten but compatibility issues meant that the bug remained. Humbug. If they rewrote the code and retained the unchecked-buffer-overflow behaviour then they deserve all the wrath they're getting, and more.
As to .NET being the saviour, MS has very, very infrequently cut off backward-compatibility. If they continue this with their next OS-cycle, whatever it is, then they're just carrying the same bag of fail.
Apple cut it off after decent transition periods - and handles the transitions very well IMHO. But 68K is no more, PPC is gone, OS 9 is dead and they only have to support Leopard and Snow Leopard.
Yet H.M. Gov't still prefers IE6. God help 'em for few others will.
>win32k.sys isn't a necessary part of kernel per se - it's a lot of the functionality
>that in *NIXes is provided by desktop environments, but moved to kernel mode
>in NT 3.5 (?) times to improve performance.
If it's in kernel-space then, as far as bugness goes, it's part of the kernel.
This is typical MS behaviour - trade security for speed.
There is a whole lot of stuff in Windows kernel space that shouldn't be there. They put it there to make it faster (so why is it still slow?) at the expense of security.
Bad trade, gentlemen. Bad trade.
Re: Bad trade.
The thing is that in theory they had a better case than the simplistic model of "unix": they tried to actually make use of the four-ring security model x86 got from Multics. Turns out that the way they did it was a bit too detrimental to performance for everyone's tastes. There's a lesson here.
What the lesson is? I don't know. Maybe that yes, context switches are awfully expensive, though a microkernel like QNX still manages to do quite well. Or maybe that one shouldn't make graphics performance that integral to overall system performance? As I say, "real servers are headless".
The fact that micros~1 finds itself so often making such onerous tradeoffs itself is another sign that something is fundamentally fishy with what they're doing. It's not just bad management or poor programming; it's also a bad approach to engineering.