Who uses Safari on W7 anyway?
Safari Web browser...no danger of this bug causing any real damage then!
An unpatched critical flaw in 64-bit Windows 7 leaves computers vulnerable to a full 'blue screen of death' system crash. The memory corruption bug in x64 Win 7 could also allow malicious kernel-level code to be injected into machines, security alert biz Secunia warns. Fortunately the 32-bit version of Windows 7 is immune to …
Why would anyone want to use Safari, full stop!
Posting on Win7 64bit with Safari right now.
If you don't hear from me again you'll know why :(
Personally I think Safari for Windows is a piece of crap, not least because it inflicts a pale imitation of the OS X look & feel onto Windows. But I suppose some people might use it, for example Mac users who are running it from work or whatever.
Still, it sounds like an edge case. I assume that, if it's something to do with the height of an IFRAME, Safari is blindly trusting the content to be sane and then doing something stupid, such as an allocation that exhausts all physical memory or some other system resource.
The fact that Safari is needed to make this happen is irrelevant - the fact that it happens when it shouldn't be able to is the big issue here.
Those who don't trust IE, don't like Firefox (it has become bloated) and don't trust Google with their privacy perhaps?
Makes you wonder how a user-space application can crash an OS. Usually it is code running in kernel space that does that: drivers and the like.
testing their code so that Safari users get the same experience as Firefox users, IE users, Chrome users, Opera users.
I have 5 browsers installed on this box. I only browse the Internet with Firefox, the other 4 only ever run code from my dev server or local files.
Now if all browsers were standards compliant and all supported the same feature set, I would only need one.
"Those who don't trust IE, don't like Firefox (it has become bloated) and don't trust Google with their privacy perhaps?"... use SRWare Iron?
Hmmm, because rather ironically given Apple's involvement, people are still free to choose what they can run on their own machines and not just go along with the default supplied or the big names?
I do quite a bit, because:
- I don't trust IE
- I strongly dislike what Firefox has become
- I still like SeaMonkey but it leaks memory like hell
- I like Chrome but not enough to let Google know every move I make (so Srware Iron is my friend too)
- In my experience, Safari is rock solid, keeps a reasonable memory footprint, very seldom renders sites wrong and still has a UI classic enough to fit my old-timer's taste
Same here, I develop in Firefox then check through IE6/7/8, Chrome, Safari and Opera to ensure everyone can at least use the basic functionality. The amount of time I waste ensuring that each browser's "quirks" are attended to verges on the farcical.
IE, don't like Firefox (it has become bloated) and don't trust Google with their privacy perhaps...
So use either Iron or Opera.
Can't believe how many techies have never heard of Iron!
why you were voted down. Perhaps the voter thinks farcical means funny?
It's far from funny; in fact there is no humour at all in messing about with CSS or conditional HTML just to get it to look right on several different browsers.
Have an up vote to negate the down vote
"Edge cases" are exactly what an OS is supposed to be able to deal with.
The user or an incompetent programmer can always do something stupid. When this happens, it should not bring the whole system down. It's no longer 1984.
The system is there to be a gatekeeper, to manage resources, and to deal sensibly with problems, including inept and malicious users.
Pretty sure that was Opera's doing, actually.
I don't do any online banking from Safari, nor from any other browser running on top of Windows. No way. So your comment is entirely irrelevant.
Secure browsing on Windows is an oxymoron, anyway.
I don't even use Safari on my Mac let alone any of my windows machines. I try to keep Apple's crappy software off my machines so no iTunes, Quicktime or Safari for me.
"and don't trust Google with their privacy perhaps?"
because Apple's hands are clean...
Farcical amount of time on browser quirks?
I know that one, trying to point out to a (publicly funded/charitable) customer that they are wasting over £600k a year on browser support.
The response: "well, corporates like us use XP/IE6 and no way is IT going to let an update change the browser".
God, I don't even want to keep thinking about how wrong this policy is on every level.
I don't care if you ARE an IT admin, you're failing to perform your duties, deliberately causing your employer to waste money and exposing them to the oldest collection of security flaws, which they pay you to mitigate.
and if you happen to be the boss, then more shame on YOU because you're paying this worthless "£$%wit good money and letting THEM tell YOU how your business should operate and blindly trusting them to keep your company safe while ignoring the fact they shun the advice of EVERY major player, including the people that wrote the software in the first place and every worthwhile security consultant ever.
Mine's the one with the blinding fit of rage spilling out the pocket.
...and that would be difficult.
This could be a Safari thing, in that it is installed to run as administrator, allowing it to do silly things like trash the OS; or, more likely, this being Apple, it has installed a whole bunch of other crap you don't need and were not told about that is running as privileged services in the background.
I installed iTunes on a Vista laptop a few years back, then uninstalled it; after which I was constantly getting issues with the machine because Apple had deleted system DLLs!
I decided it was easier and safer to re-install the OS than try to find everything iTunes had screwed up.
...your last para! Oddly, in my experience (& I have to use the 'big five' browsers daily, across an installed base of several hundred PCs), Safari blows. Hard. I'm an old-skool dev, weaned on assembly language & C, but I appreciate good UI design - which Safari just doesn't have.
Incidentally, no MS lover here, but IE9 is shockingly good. Really, really good. So the motivation to go hunting to install a 3rd-party browser on new machines just isn't there any more, for me. And it's simply not enough of a religious issue to get steamed up over, any more. IE is no longer the whipping boy of browsers.
" "and don't trust Google with their privacy perhaps?" because Apple's hands are clean..."
I'm more inclined to trust Apple because I am their customer. If they don't deliver what I want, they lose business. With Google, I'm the product: its aim is to please the people who pay them, and that often means intruding as far as they can get into my life.
The deeply depressing thing about windows has always been that it trained users to expect a level of quality far below what was the norm before windows existed.
It is no longer 1984, but I used computers in 1984 and the general-purpose OSes available at that time were generally very solid and would not let a user program crash the OS. I have run computers on a variety of operating systems of that age for up to ten years, with the only reboots following power or hardware failures. Can anyone say that about Windows? It was only when Windows became widespread that people began to accept that computer software was inevitably unreliable. Windows has become much, much more reliable than it was historically, but incidents like this continue to reinforce the impression that it was extremely poorly designed (if designed at all), with reliability and security being low priorities if they were considered at all.
I'm open to the possibility that you were running an OS on some serious iron (for 1984) back then, and not an Apple/IBM clone/microcomputer, but except for IBM mainframes and things like that, the OSes of the time pretty much surrendered all control to any program that was run. Depending on the platform, free access to memory and registers wasn't that uncommon; heck, some microcomputers unloaded the OS when a program was loaded, and you had to restart the machine to get back into the OS. In the meantime your code had direct access to whatever hardware it fancied.
U sure your specs haven't gotten a little bit rosy since then? ;)
One or two new hardware devices have been added to the mix since 1984...
Most of the bluescreens that occur are triggered by third-party device drivers. Windows (unless something has changed recently) is the OS that "enjoys" the most third-party support. The number of developers qualified to write good device drivers can probably be counted on one hand. Fortunately the development kits come with good, fleshed-out examples that often require only minor modifications to support simpler hardware devices.
That said, it is of course extremely serious if a usermode app running as a non-admin user is able to trigger a BSOD. I do not recall something like that happening to me in 17 years of Windows NT/2000/XP/Vista/7 usage. I have of course experienced many BSODs due to badly written device drivers though.
"except for IBM mainframes and things like that the OS's of the time pretty much surrendered all control to any program that were run"
Except for UNIX (including Xenix). And Multics. And VMS, TOPS-10, TOPS-20, MCP, Pick, and GCOS. And those are just only some of the non-IBM-mainframe OSes of the time (that sprang to mind). Many ran on minis and workstations; Xenix ran on PC-class machines.
For that matter, 1984 was only three years before OS/2 1.0, and only four before Windows/386.
Lumping all general-purpose non-PC computing into "mainframes and things like that" is a bit like saying "except for cars and things like that, motor vehicles all have two wheels".
There have been a number of Windows user-mode BSODs. There was an entertaining one a few years back where just printing a short sequence to the screen of a "command prompt" (shell) window would crash the OS (due to a bug in CSRSS, if memory serves). I verified that one myself, in part just to see how easy it would be to exploit it.
For that matter, many of the escalation vulnerabilities in Windows (and there have been plenty of those over the years) can crash the system when fuzzed sufficiently. I suspect you can do it using Tavis Ormandy's #GP Trap Handler exploit, for example, with an unprivileged user-mode program.
In other words, this needn't have anything to do with drivers; nor is it the fault of Safari. (Impressive how many commentators can't understand that simple fact.) It's a bug in the OS, of a sort that's been seen before and will be seen again. It's noteworthy, but not unique.
And this happens to other OSes too. I remember a bug in the Pyramid flavor of UNIX that was triggered by an erroneous pipe() system call and caused a kernel panic.
Clearly an Apple plot.
Hmmm, makes you think, doesn't it. When a non-Apple OS can't even keep control of a badly behaved rogue app, no matter how shitty it becomes, and falls in a heap due to a small overrun in a browser, you just have to wonder.
Could be a duff graphics driver told by an app to allocate some ridiculously sized surface, as a consequence of which it crashes and BSODs the OS. Something somewhere should be catching the error, though.
> Could be a duff graphics driver told by an app to [ ... ]
No, it couldn't, because apps don't speak directly to graphics drivers; they aren't allowed to. They speak to the win32k subsystem of the OS, which translates the graphics APIs they invoke into various calls to drivers, and it is that win32k subsystem (like every kernel-mode subsystem or driver) that is completely responsible for validating the input parameters it is passed and for ensuring that it doesn't submit any bad requests to drivers. In other words, a user-mode app attempting to allocate a hugely oversized bitmap or canvas or whatever should not cause the win32k subsystem to generate an insanely huge allocation request to a device driver. (And the device driver should reject it also, and in fact probably does; we have no evidence, just your supposition that the crash is happening in a driver rather than the core kernel.)
It is not up to the OS to make size checks before asking the driver to perform allocations. That is the driver's job. If the driver can't handle the allocation then the driver should return an error.
The rationale behind this design is that only the driver really knows how big an allocation it can make. The OS should not know. If it did it would have to know the max allocation size for all different graphics cards which is clearly not a good idea.
Yes, you are correct that this is merely a supposition. That's why (s)he wrote 'maybe'.
Dave Cutler is going to realize that "Personal Computing" isn't exactly what he was doing at DEC ... The world has moved on, Dave. The cognizant use the UNIX[tm] model.
I do have some sympathy for him. He was sucking on Ken "Unix is snake oil" Olsen's kool-aid teat during his formative years, so seems likely that the Olsen kool aid was still in his blood stream when he went off to Microsoft to re-invent UNIX poorly^W^W^Wwrite NT.
Also I am pretty sure that Dave had nothing to do with putting "Win32" on top of the kernel, which seems to be where most of these fuck ups come from.
At the end of the day Dave Cutler et al could have saved everyone the bother by implementing POSIX properly in the first place, like Linus did.
It's just as well that no UNIX-inspired OS in the world has ever had a kernel vulnerability exposed to userspace, otherwise your comment would make no sense.
But you're right, the "everything is a file, except there's ioctl in case it isn't a file, plus random synchronization primitives which aren't files" model is obviously way better than the "everything is a polymorphic HANDLE" model, unless you're some kind of incognizant fool!
I think "implementing POSIX properly" is an oxymoron, isn't it?
Besides, based on the number of kernel updates I get on this box, it doesn't look like Linus has even implemented Linux properly, yet.
I am certainly no Windows fan, but I have met and spoken with Dave Cutler and had an interesting conversation with him back in the NT 3.x days. He's a bloody genius.
Yes it really sucks that Windows has its own APIs and does not do POSIX, but that was a command decision from above. Dave certainly had the nous to make POSIX happen.
Back in the NT3.x days, MS was pushing NT for servers as a simpler to use and cheaper alternative to the 386 *nix offerings of the day. To woo over customers they offered POSIX compatibility (that almost worked) and a pushable streams module interface. These were needed to allow companies to port their products to NT.
The POSIX implementation was really crap though. Performance was horrid. But once companies, like the one I was working with at the time, had got sucked in and committed to NT based products it was too late. You had to port your apps to the Windows API to make them run properly. Even still, we needed a 100MHz 486 to perform a job that had previously needed a 25MHz 386SX.
I'm impressed that you know Dave Cutler's name, given your lack of knowledge about operating systems or history. When NT was written in 1989, it had dozens of modern OS features that UNIX either did not have or had only in a very confused state. Cutler's impact on UNIX is what is not properly appreciated. I was at Bell Labs when the UC Berkeley folks ported BSD to the VAX, and it was pretty clear to us that BSD was UNIX + VMS (virtual memory, and a file system that was not journaled but at least did not horribly suck like UNIX V7's).
Linux is basically UNIX + NT. So many ideas in modern UNIX come from Microsoft - the use of dynamic linked libraries, device driver interfaces, asynchronous file I/O, journaling file system, etc. All things done by MS operating systems before UNIX. UNIX still lacks the systems-engineering design that Cutler and Microsoft brought to operating systems: the modularization and formal interface (e.g. COM) structure. Using BSD in the late 1980s, we had to edit tables and recompile the whole kernel to install a new device driver, since drivers were simply subroutines in a monolithic program.
Readers who are interested should take a look at Hart's book on Win32 Programming, to see what a kernel design should really look like.
Yep, SCO OpenServer 5.0.4 required a re-link of the kernel to change the IP address. Maybe not as drastic as recompiling the entire kernel, but still hardly something you can do on the fly.
No idea how DHCP worked on that OS. I have a feeling it didn't, and nor did any Internet link with a dynamic IP address.
>>So many ideas in modern UNIX come from Microsoft - the use of dynamic linked libraries, device driver interfaces, asynchronous file I/O, journaling file system, etc
Actually, dynamic linking was implemented long before VMS was transplanted onto MS soil, even before Unix: namely, in Multics. (BTW, has DLL hell been fixed yet?)
--jfs (journaled fs) was the first one...
As a matter of fact, is MS suing anyone for infringement of these (patented) ideas? No, so it is safe to disagree with you. Even if MS has had some technological influence on Unix, POSIX and Linux, the counter-influence prevails. MS is also known to persevere in resisting that influence... though not forever. PowerShell was introduced in 2006, decades after csh, ksh, the Bourne shell, bash, etc. Now Win8 is promised to be available headless, without a GUI.
>>the modularization and formal interface (e.g. COM) structure
This is the funniest part. Where is this modularity? Windows is modular on paper only; it cannot be tweaked the same way as other systems can.
Re: "So many ideas in modern UNIX come from Microsoft - the use of dynamic linked libraries, device driver interfaces, asynchronous file I/O, journaling file system, etc."
All those ideas predate Windows and some even Unix. Dynamic linking came from the MULTICS project, like almost every other idea in "modern" operating systems.
Solaris had dynamic linking very early on (not sure if before or after NT), but in any case the Solaris implementation, which is what Linux copies, is superior to Windows DLLs in that creating and using dynamic libraries is practically identical to using static libraries. In Windows you had to jump through hoops. (Export modules? Strange non-standard C extensions? What exactly are the sharing semantics of global variables in DLLs across different Windows versions?)
I agree that older Unix like that 1980's BSD was inflexible in that recompiling or at least relinking the kernel was often needed for minor changes. It was simply showing its age. NT could avoid much of this as a new design, and so could Linux. It does not mean that Linux copied Microsoft, they were both just taking advantage of "new" (actually by that time well-known) techniques.
Dave Cutler is often given far too much credit for VMS, as he was just part of a large team of designers (both hardware and software) that created VMS and the VAX hardware together, and that is why VMS is probably the most stable and feature-rich OS ever written.
Unix didn't go 32bit with virtual memory until about a decade after VMS was shipped.
On VMS the C RTL/POSIX libraries were nothing but a tiny user-mode layer that VMS developers avoided if they wanted to do anything clever, complicated or efficient.
Just about all the kernel features of NT were copies of what VMS had been doing since 1978, the only big difference being that VMS didn't go multithreaded until Version 7, on Alphas (not on VAX hardware).
If anyone wants to read how a truly secure kernel/OS should be implemented, look around for a copy of the (Open)VMS "Internals and Data Structures" book; it covers everything from the boot sequence onwards.
For those that think not doing multithreading is bad: probably less than 1% of the VAXes made had more than one CPU, so there was no benefit to be had, and the OS used ASTs (Asynchronous System Traps) that made writing software in which one process could handle hundreds of devices, timers, etc. all at once so simple you wouldn't believe (plus it's much more efficient than threading on one CPU).
PS: The "Open" seen with VMS is silent.
"Just about all the kernel features of NT were copies of what VMS had been doing since 1978"
They're not, you know, they're really not. But the number of people who realise where they do originate is negligible. I have the privilege of having seen inside where many came from.
Other Cutler-inspired projects included a PDP11 OS, and a distributed realtime environment for VAXes called VAXELN. VAXELN had threads and the like before threads were even heard of, *and they were useful* even in the one-processor case.
VAXELN was an embedded environment where you could think about designing the application rather than driving the hardware (and the network, and all the other non-productive stuff that other RT kernels used to require, and which some still do).
*That's* as much where the NT kernel concepts come from as from VMS.
Sadly there's very little written about VAXELN, but I do believe Custer's book mentions it.
Re AC @ 13:11: "Unix didn't go 32bit with virtual memory until about a decade after VMS was shipped."
Honestly, would it kill you to do five minutes of research?
VAX-11/VMS was announced in 1977, with the VAX-11/780. BSD3 UNIX had paging virtual memory; it was released in 1979. (UNIX System 7 had swapping, but not paging, in the same year; it spawned UNIX/32V, and then BSD3, before the year was out.)
1979 - 1977 = 2. 2 < 10.
VMS is a perfectly good OS - though I suggest you don't try that "most stable and feature rich OS ever written" line in a room full of TOPS-10 or TOPS-20 fans. And personally I think OS/400 / System i gives it a run for its money too, on the stability/features front, even if VMS is probably more fun to work in.
Most Win 7 64 bits I have seen chose Firefox. Including my work PC.
Coming soon to a browser near yo$%#^CCU NO CARRIER
Ever so secure and crash free...NOT!
It is Safari that is the exploited software here; that is what causes Windows to crash.
Biting the hand that feeds IT © 1998–2018