To be honest, the NT kernel is not too bad - a bit slow and clunky.
But the rest...
And don't get me started on the network stack. Jesus.
Windows NT has hit an important milestone. Its launch is now closer to the first Moon landing than it is to today. With its debut in July 1993, Windows NT ushered in a gloriously pure 32-bit future, free of the 16-bit shackles of the past. While the majority of PCs at the time were running MS-DOS, often with Microsoft's …
Quote: I didn't mind NT - did the upgrade from NT 4.03 to Windows 2000 AD.
I did that once: a 4-hour upgrade and a 5-minute client outage. Took the BDC for the domain, put it on an isolated network, promoted it to PDC, upgraded it to Windows 2000 AD, took a workstation from the prod network, put it on the isolated network and logged in as if nothing had happened.
Shutdown the production NT PDC, powered on the Windows 2000 AD server in production, and voila, upgraded to AD from NT4...
Of course, if it didn't work I was mostly fubar'd because there was no real backout... once the client machines connected to the 2000 AD and logged in, only a drop from the domain and a rejoin back to the NT 4 PDC would have recovered the system if I'd had to back out.
Was praying it wasn't going to be a brown underwear day :)
And don't get me started on the network stack
I think they eventually re-wrote it, using code stolen^W 'inspired by' BSD..
And on the HAL front - that's something OS/2 also did. In fact, OS/2 effectively took over the whole BIOS too - which is why there were so many odd errors early on: the IBM programmers expected manufacturers to actually design their hardware and BIOS according to the spec.
He brought just about everything else. At one time it was reckoned that the best tutorial guide to writing a device driver for NT was the VAX/VMS Device Driver manual.
@ Dave Pickles
When Win NT was first released we were doing side-by-side comparisons with VMS, and two things that Cutler didn't bring over were puzzling. First up, Logical Names, which gave you a level of indirection for practically everything, along with protected name spaces. Then there was the big omission of Installed Images, which allowed privileges to be assigned to trusted pieces of code (amongst other things: sharing, fast startup, etc.), removing the need for users to have privileges. Both were probably out of place in a PC operating system.
Interix/SFU/SUA/whatever - I tried, REALLY TRIED, to make it work so I could build things with it. But X11R5 was just TOO out of date, and autotools didn't have the capability of handling the lack of compatibility. And writing those changes myself proved to be a frustrating (if not impossible) task.
Didn't even have 'tar' - only 'pax', which pathetically didn't support a lot of things (like compression).
I gave up on it. Cygwin just works better.
As for Windows NT needing 16Mb of RAM: compare that to Win-10-nic, which seems to run poorly with 100 times as much RAM... especially 'the Metro' / UWP garbage.
I had the unfortunate experience of having to create a VirtualBox VM running Win-10-nic to test an application on. I had an easier time installing the latest ReactOS. (Multiple crashes, and looping in the 'OOBE WELCOME' menu thing. After effectively disabling audio - switched to the AC'97 driver, which apparently isn't supported at ALL - I was actually able to install it.) Then I went to give MS feedback on how pathetically something worked, and the performance of the UWP text box was SO bad, I could type LITERALLY! TWICE! AS! FAST! as the text rendering of what I'd typed. Meanwhile, the CPU on _TWO_ _CORES_ was being _MAXED_ _OUT_ the _ENTIRE_ _FREAKING_ _TIME_!!!
I'd take that old clunky NT 3.1 *ANY* day over CRAP-WARE like Win-10-nic.
Oh, and my windows application ran JUST FINE (even though I had to test it to make sure). I'll need to fire that thing up occasionally to test it out AGAIN and AGAIN, of course. But I think I'll "unplug" the virtual network drivers when I do it, so that it doesn't spend unnecessary time and bandwidth UPDATING itself whenever I do it... and use a diskette or CD image to transfer the application EXE file whenever I test it. Heh.
What the hell?
I had 8MB in my Windows 95 machine! Watched the Hindenburg crash many times on the free encyclopedia CD on the 2X CD-ROM drive that came with it.
Did 98, 98SE, straight to Windows 2000 Pro (we don't talk about ME), XP, Win7, Win 8.1 and stopped before Windows 10...
Makes you wish OS/2 Warp won, doesn't it?
Makes you wish OS/2 Warp won, doesn't it?
It does indeed - except for one thing - I strongly doubt that IBM would have been any more pleasant to deal with than Microsoft. After all, just look how they treated OS/2 once they finally decided they couldn't be bothered with it - they didn't let anyone else have it for *years*
And even then, they charged so much for it that all the follow-ons (eComStation et al.) have been unaffordable.
Pre NT it didn't so much matter how much memory you stuffed in but rather how well you managed that first 640K. Lots of LOADHIGH statements, judicious use of memory managers and, frankly, witchcraft marked my career as a neophyte sysadmin back in the mid '90s. Happy days, seen through the lens of comfortable middle-age.
Oh, the good old days. Things seemed to get a lot better when the ODI drivers came out. I eventually managed, with much cursing, to get a PC simultaneously running IPX, NDIS and DECnet stacks - with the icing on the cake being getting enough up into high memory for Windows to start. Getting Windows to start was the benchmark...
Of course, if you wanted to add an extra device like a scanner you were stuffed and had to enter the world of Quarterdeck. QEMM me up, baby!
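For anyone who never fought that battle, a typical incantation looked something like this - a from-memory sketch, not a canonical config, with paths and driver names varying from machine to machine:

```
REM CONFIG.SYS - memory managers first, then push drivers into upper memory
DEVICE=C:\DOS\HIMEM.SYS
DEVICE=C:\DOS\EMM386.EXE NOEMS
DOS=HIGH,UMB
DEVICEHIGH=C:\DOS\ANSI.SYS

REM AUTOEXEC.BAT - LOADHIGH (LH) for TSRs such as the CD-ROM extensions
REM (assumes a CD-ROM device driver with /D:MSCD001 loaded in CONFIG.SYS)
LOADHIGH C:\DOS\MSCDEX.EXE /D:MSCD001
LOADHIGH C:\DOS\DOSKEY.COM
```

Half the witchcraft was EMM386's NOEMS vs RAM switch: NOEMS freed the 64K page frame for more upper memory blocks, but broke anything that actually wanted expanded memory.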
Had 3 weeks trying to cram a certain UK bank's upgraded DOS front-end teller system onto their PCs using every trick I could think of. It finally came down to the load order of the different modules. Finally got it to load reliably on every PC.
Gave myself a big pat on the back and was moved on to a different project. I was only told later that they didn't use it, and were instead upgrading to XP. I think they were running out of compatible network cards for their 15 year old branch PCs, and decided to bite the bullet and upgrade the hardware.
Windows NT took so long to build because Gates insisted on compatibility with MS-DOS, MS-Windows and (16-bit) OS/2. Compared to MS-Windows it was stable and performed half decently if you had lots of RAM. I once ran - err, crawled - NT 3.51 and Exchange Server in 16 MB.
MS-Windows 4.00 was usually packaged with MS-DOS 7.00 and sold as Windows '95. That crap was hastily launched as 32-bit OS/2 started to gobble up MS Windows market share, and NT had too high hardware requirements for the unwashed masses until the Home Edition of NT 5.1, a.k.a. XP - not to mention the selling price. It took Redmond until 2000 to create a usable server edition. Compared to Unix it still lacks (pseudo-)terminal support.
Funny how NT 4.00 complained about the presence of a disc in the CD-ROM drive when it was labeled C:, but not after renaming it to H:.
Sure, with AD it was far better for larger networks, and could compete with Netware.
Still, before it, your options for a server were expensive Unix licenses (and the hardware to run them), or Netware - and developing for Netware was more complex, you couldn't reuse your Windows skills, and there was less software available.
NT4 was OK if the network wasn't large. It just made more sense with NT4 clients; Win9x wasn't really designed for networks.
OS/2 remained a small niche.
"How do you make a Windows NT server 4x quicker? Stick Netware on it."
But in doing so you make it 4x more expensive. Not that NT was cheap, but Netware was pretty pricey and the difference was enough to pay for a substantial hardware upgrade.
>But in doing so you make it 4x more expensive. Not that NT was cheap, but Netware was pretty pricey and the difference was enough to pay for a substantial hardware upgrade.<
The genius thing that MS did to beat NetWare was not to bother enforcing user licence counts...
You could run 100 or more users on NT with only a 5 CAL setup as long as you weren't worried about the legalities - and many companies weren't. NetWare enforced the user count very strictly, so you HAD to buy the appropriate number of (very expensive) licences.
That's why you rarely see Netware any more.
>I'd go back to W2K.
I did for one of my old laptops (a very old Thinkpad) since I needed a true 9-pin COM port I could lug around easily on site. Since the software also needs the VB6 runtime, double the reason not to upgrade. Of course that laptop is used for instrumentation testing only and never goes on the real network. Still runs like a champ, and I even shoehorned an old version of Cygwin on there.
Meant to say 9-pin serial COM port, as the USB-to-serial nonsense is a dog's breakfast and not worth the hassle, especially when diagnosing comms issues with old equipment where you want fewer variables, not more. Went W2K instead of XP due to lower memory usage, which was sometimes an issue before I increased the laptop's memory.
I did notice the icon, but that is not how things were at the time. Although the Linux kernel pre-dates NT, practical Linux distributions arrived at a similar time. PCs came with DOS/Windows/95/98/ME bundled. NT cost extra, and if you wanted Linux you near enough had to assemble your own PC from supported components. To start with, Penguins were few and far between and likely to have dual-boot machines.
I think the hatred (from both sides) started around the time of XP. Microsoft decided that home users would use a cheap DOS-based OS and that business users had to buy XP. Linux was a minor irritation because it came with all the software you needed for free, had a proper CLI and a better GUI, but few people had even heard of it. Microsoft knew they had to crush it so they could charge monopoly prices for XP.
For me it was no contest: my legacy DOS software would not work with XP but did work with DOSEMU on Linux. Microsoft trying to trash the boot loader to prevent dual-boot systems was not appreciated. The hate came from Microsoft forcing OEMs to bundle their OS with new computers. That could be avoided by building your own desktop, but Linux laptops (when you could find them) had the extra cost of Windows without the price reduction from crapware. The FUD and lies from Microsoft got tiresome really fast, went on for years and became SCO vs World.
At some point, Microsoft noticed they were fighting the wrong battle and their real problem was Android - Linux with the best bit (the license) circumvented. My own hatred for the Microsoft tax faded years ago because I no longer need a desktop, Vista meant I got my laptop cheap, and now that it is starting to fall to pieces I am building a sturdy wooden modular replacement from components. (That project is more because of OEMs deciding that I have to buy thin and fragile than anything I can blame Microsoft for.)
And it originates from long before XP, and even before 95. ‘Cos, for example, the favoured word processor early on became WordPerfect, and while its series of owners tried to stay ahead of and then match Word feature for feature, MS built undocumented APIs into each iteration of Windows to make WordPerfect explode.*
By comparison, OS/2 had an excellent basic set of workhorse programs (Faxworks, for example, had a built-in graphics subsystem so you could amend and annotate faxes, add signatures etc. on screen, 20 years ago or more). Then the developer's website would suddenly and "inexplicably" disappear, the supposition being that the developer, understandably, succumbed to the Microsoft shilling.
*Remember there was an OS/2 version (5.2 if memory serves) of WordPerfect but at a time when cross-platform compatibility was essential (that is, until MS succeeded in making compatibility unnecessary) it was ill-formed. Basically a pig’s ear.
But then there was a Linux version too, which did not justify the name WordPerfect. Not just the ear - the whole pig!
What alternative reality do you come from - or maybe you were born around 2000? Your post is full of factual errors.
Just to start: when XP was released, DOS and the Win 9x line were killed - both consumer and business users were to use an NT kernel. Anyway, MS had already been hated for a long time by then.
The way it moved to crush competition in the 1990s brought a lot of it - mostly deserved - but then the competition were the likes of IBM, DR, Lotus, WordPerfect and Borland, not the then-unknown Linux, which became usable for generic users only towards the end of the decade - i.e. the KDE project was started only in 1996, when MS had already achieved its goals - and then it incurred the antitrust investigations.
Don't know what DOS software you were running, because thanks to the virtual 8086 mode of the CPU, supporting DOS software was much easier - and I don't know about NT 3.x, but it was easier to support DOS games (in 1994 I was running LucasArts' "Tie Fighter" without issues in OS/2 3.x's DOS box, and Turbo Pascal 7) than Win9x ones, which required DirectX support NT didn't have (NT4 supported DirectX 3 only) and which would come later in 2000 and especially in XP.
Direct access to the hardware was not an issue because virtual 8086 mode trapped it and gave the OS a way to emulate the operations. Of course, devices that were not emulated could have been an issue.
Android would come much, much later; Linux was able to erode the lucrative server OS market much earlier, as soon as it became a viable replacement for more expensive Unixes on powerful-enough hardware.
Microsoft have been shits for a very long time, but the comment I was replying to was about 25 years of hate. I started with TᴇX on Unix before OS/2 existed. One large project with Microsoft Word sent me running back to TᴇX, and these days I prefer python/reportlab. I was aware of the OS/2 and WordPerfect battles, but as they did not affect me directly I do not know if they are a good match for the 25-year time frame. OS/2 started 31 years ago and the last release was 16 years ago. Perhaps it is a reasonable fit if MS started their attack 25 years ago and you are really persistent at holding a grudge. WordPerfect started 39 years ago and, much to my surprise, is still going. Did Microsoft's hate against WordPerfect start 25 years ago? Have they done anything about WordPerfect in the last five years?
The software I had problems with was cross compilers. I do not know why they did not work with NT and XP, but Microsoft's technical support was particularly unhelpful. They said: "God hates you."
Oops: dinner time ... got to AFK
Not hatred exactly, but I thought at the time that NT -- no matter what its technical merits -- was a dubious idea. The problem I anticipated was that NT was never likely to be the server OS that Unix was even back then and migrating the user OS away from a small, minimal core (i.e. MSDOS) would mean that when the next generation of low end devices came along, Microsoft wouldn't have an ecosystem that could be shoehorned into them.
Pretty much what happened. Your cell phone doesn't run an NT-derived system because by the time the hardware became capable enough to support one, other OSes owned that market. And neither does all the annoying IoT stuff - largely for the same reason.
Actually by the time the hardware was capable of running Android, it could run an NT kernel as well. Windows Phone 8 was less resource hungry than Android.
DOS was so limited, and so often bypassed, I wouldn't call it a core nor a kernel.
Neither Unix, Linux nor Windows is a microkernel. The Windows kernel is not large - it's just that until a few years ago you couldn't make a "compact" install of Windows; it installed a lot of code and services anyway, which you could only disable later. For example, Windows Server has full Active Directory support installed, and you can make a Domain Controller just by running the dcpromo utility.
It is true you can often run Linux on older, less powerful hardware - but for simpler tasks as well. As soon as you have similar needs, the hardware is more or less the same.
As soon as you have similar needs, the hardware is more or less the same
My last place, we had a number of webservers - two linux boxes (main and failover) and one IIS box. The IIS box cost 4 times more than the linux boxes because of the spec 'required' to run IIS rather than Apache & OpenCMS..
I can tell you when my hatred started. Late '90s, some complete and utter eejit in their advertising pestering department decided on a gimmick. They would get a magazine publisher to put a gob of the sticky stuff used to attach floppies between two pages, with the slogan "Don't get stuck with Microsoft".
I suppose said eejit in his idiocy thought it would simply peel off with no harm done. It didn't always do that on magazine covers and stood no chance of being got off the flimsy pages without tearing. The eejit also hadn't realised the slogan was ambiguous. As a reward for such an arrogant tampering with what I'd paid good money for (and to the other advertisers who'd paid good money to buy space on the same pages) I decided to take the meaning they didn't intend and avoid getting stuck with them as far as possible in the future.
Back in the early days I had their FORTRAN for CP/M which seemed a bit of a miracle although I suppose even a Z80 box had more memory and storage than I was allotted on the University mainframe a few years earlier. And Windows itself was quite welcome when it first arrived: I could run an X-server on it to connect to the HP-UX boxes I was responsible for or, later, just multiple terminal sessions.
But Microsoft, over the years, have brought the hate on themselves through the sheer arrogance of their behaviour.
You could copy the contents of the floppies to a directory and install from there. I ran an OS/2 2.11 farm and would copy the floppies to one server then run the install across the network from there. Saved eons of time having to install a FP on 80 servers.
Developed that out to a repository for all of the packages (Notes, CM/2, ADSM, etc) and fixes on the servers. Once I was done with the base OS install on a new machine I didn't have to touch a floppy or CDROM again.
Warp came on a CD too - I got it that way - although IIRC it still required five or six floppies to boot before it could read the CD.
Two floppies were all that you needed unless you had some *really* exotic hardware..
(My copy of OS/2 Warp came bundled with a SB16 sound card and a CD-Rom drive to hook up to it).
(My copy of OS/2 Warp came bundled with a SB16 sound card and a CD-Rom drive to hook up to it).
That reminds me of one of the fixes I saw while browsing through the bug list. It stuck in my mind because it showed how much of an effort IBM was making to ensure VDM<->DOS compatibility. And it sounds like 'cool geeky programmer stuff' :)
There was a very good golf game for DOS. For its time, graphically excellent. Digitised images for the course and contours for the greens. While you were playing there'd be bird song and occasionally running water. The fix I remember seeing was for the sound card. Apparently the game was causing problems because it tried to send the samples to the card 10,000+ times a second and the VDM couldn't service the interrupts that fast. I think the fix IBM implemented was to have OS/2 take over controlling the sound card so that the VDM didn't need to raise interrupts. So presumably they emulated the SB hardware for the VDM. Cool stuff :)
Mind you, I also remember them getting snippy because so many joysticks of the time were not programmable, and that broke their driver model. A similar problem was that they expected all printers to be connected using Centronics cables with the Acknowledge pin wired up and functional. They seemed quite offended when they discovered that most printers of the time had that feature disabled and/or the owner was using a cheap cable that didn't have the pin connected.
"There was a very good golf game for DOS. For its time, graphically excellent. Digitised images for the course and contours for the greens. While you were playing there'd be bird song and occasionally running water."
Sounds like Access Software's Links. Played it quite a bit in the 486 days along with its successor Links 386Pro (which allowed SuperVGA resolutions). Eventually acquired by Microsoft and rolled into Microsoft Golf, came and went (though I don't blame Microsoft for this--the push to realtime 3D golf rendering by the 5th console generation rendered the Links engine obsolete IINM).
"21 3.5" disks, and the installer insisted on *every single one*."
I don't remember what Xenix used but I don't think it was quite that many.
I had a SCO install which came on a CD but needed a floppy to boot. It wouldn't install on VirtualBox even if you could get a copy of the floppy onto it - it didn't like the emulation. I had a few clients with Informix on SCO (the staple of a lot of small businesses at one time), so having that on a laptop was quite useful. About the time laptops no longer had floppies, Linux became mature enough to use without spending more time fiddling with it than doing actual work (KDE 5 is making me start thinking that things are going backwards).
I threw out a set of SCO Xenix/386 install disks not that many years ago that had been used in anger circa 1991/92. They were on 5.25" disks and there were shitloads of them - can't remember quite how many but I remember them plus the manuals in their boxes took up a couple of shelves in a big bookcase in our office.
Lol, reminds me of the one time I raised a support ticket with IBM. I noticed that on the UK version of Warp there was a solitary full stop below the copyright message. Being young and naive, and therefore a fanboi(*), I reported it. Two months later, out of the blue, I got a parcel. It was a Jiffy bag with the latest service pack on floppy disks and a note saying that the SP included a fix for my issue. To this day I don't know whether to consider that extremely good customer service or a pointless waste of IBM's resources :)
(*)I am no longer so naive as to be a fanboi (a good thing) but also not as young (a bad thing) :)
"NT 4, in 1996, is peak Windows as far as this grizzled hack is concerned, before NT was retooled for consumers with the launch of Windows XP in 2001."
You missed W2K?
I migrated my W2K VM from my old to new laptop this morning. It runs the one application I can't get running under Wine and couldn't find a decent replacement for under Linux. I'm trying to decide whether to migrate the W7 VM. Probably not.
"Just out of interest, does ReactOS run the software?"
Not tried but probably not. It actually fails to install properly as far as I can tell and the bastard vendors had no interest whatsoever in fixing it. It needs to contact their servers to register although IIRC there was a means to register it by contacting them off-net. But it's a long time since I bought it and I don't know if I could even register a re-install so the easiest thing is simply to keep it on a VM where it's registered and working.
Just look at where Linux graphics code runs now - the same ring. Since a lot of processing now happens in the graphics card's acceleration hardware, and that's strictly tied to the driver, which needs to run in the kernel to talk to the hardware, there's little choice but to run most of the graphics code there - unless you like slow performance. Ring transitions can be very costly - just look at how the fixes for Meltdown slowed things down... and why OSes took the shortcut of mapping kernel code into user space, protecting it just with the paging mechanism - *and all OSes did it*, Linux included.
Stability depends mostly on the quality of the driver, and especially in the past a lot of cheap graphic cards came with bad drivers. The real downside is security, because things like font processing may be used as attack vectors.
So what about microkernel environments, where as little of the kernel is exposed as possible? And virtualization, which necessarily involves a lot of time in Userland? Hasn't there been something of a push to pull more performance-intensive stuff (including graphics and low-latency networking) into Userland to avoid the costly context switching and insulate against rogue processes?
Ask yourself why microkernels like MINIX or Hurd went nowhere, and the only commercial implementations are at best "hybrid" ones like macOS.
Intel envisioned something like a microkernel with its four rings (0 - kernel, 1 - I/O, 2 - OS services, 3 - applications), but the cost of so many transitions was too high. AFAIK, nobody used such an architecture (which was not portable either, since most CPUs had only two levels). More separation and slower "communication" mechanisms across levels mean slower performance, albeit better security.
Modern virtualization allows a guest OS to run in ring 0 for performance (avoiding emulation, which itself runs in ring 0) - and that's why "ring -1" was added.
The problem with graphics and networking is that they need to talk to the hardware. In the Intel architecture, you can talk directly to the hardware via I/O ports in any ring <= the IOPL setting (but that is set to 0 in most OSes), or when you can access physical memory where the hardware is mapped - and that, again, is usually something only the kernel can do.
If, and only if, you can do most processing in user space and then move the processed data to the kernel to be sent to the device, you can avoid most switches - otherwise you need the other way round.
Take font display: every time you need to create a glyph you need to compute its image, antialias it, etc. You can send just the glyph "code" to the kernel and do all the processing there, or you'll need to go back and forth to the kernel to compute the correct image (because you may not want to duplicate a lot of the kernel state in user space, which would anyway need to be kept in sync...), and then send the whole image to display.
Networking, on the other hand, if most of the protocol data can be computed in user space, may just need to send the bytes to be transmitted to the driver in the kernel, which depending on the transport protocol may do very little but pass them to the hardware to be transformed into electrical or optical signals.
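That split - assemble everything in user space, then make a single kernel transition to hand the bytes over - can be sketched in a few lines of Python, with a pipe standing in for the device and os.write() for the user/kernel crossing (the function names here are illustrative, not any real driver API):

```python
import os

def send_per_chunk(fd, chunks):
    """One os.write() - i.e. one user/kernel transition - per chunk."""
    for chunk in chunks:
        os.write(fd, chunk)

def send_batched(fd, chunks):
    """Assemble the payload in user space, then a single os.write()."""
    os.write(fd, b"".join(chunks))

# Demo: both deliver the same bytes, but the batched version crosses
# the user/kernel boundary once instead of len(chunks) times.
r, w = os.pipe()
send_batched(w, [b"HTTP/1.0 200 OK\r\n", b"\r\n", b"hello"])
os.close(w)
data = os.read(r, 1024)
os.close(r)
print(data)  # b'HTTP/1.0 200 OK\r\n\r\nhello'
```

The same trade-off shows up everywhere in this thread: the work is identical either way, but the per-chunk version pays the transition cost over and over.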
I was speaking about generic desktop/server OS - embedded and other niche uses may be better served by other architectures, where the issues that may make them bad on a generic desktop/server OS may not exist at all, while they can solve others.
You can't easily pre-compute each font glyph (at any size?) - because its actual display may depend on what's before, after and below. And often it needs to happen in real time. Vector font formats are very complex, and are designed for high-end typography needs. Maybe overkill for many users, but there is also the DTP crowd and the like - and you may want a PDF to be displayed as designed. You zoom, and the glyphs have to be re-computed.
"You can't easily pre-compute each font glyph (at any size?) - because its actual display may depend on what's before, after and below."
But what about computing font glyphs in userland THEN pushing it on to the kernel for compositing like a layer? I can see things like 3D rendering necessarily being kernelland because it's the GPU that does the actual lifting; same with video acceleration. But fonts?
Too many ring switches, I'm afraid. For example, antialiasing needs to know what's behind the glyph (which may not be known by the application, if it's not under its control). Font antialiasing may use "hints" inside the font data to keep glyphs from looking wrong.
Also, font rendering may be hardware-accelerated as well. With the increased size and resolution of display devices, even displaying 2D objects had to be accelerated. Windows' old GDI is often too slow for some tasks, and that's why the hardware-accelerated Direct2D was introduced.
Still, you can rasterize fonts in user space and send the result to the kernel for display - AFAIK there are some libraries that do that - but it's simpler when you can pre-render a whole static output (i.e. a PDF page) than when you need to manipulate a dynamic output.
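A toy sketch of that user-space approach, assuming nothing about any real font library: the expensive rasterisation step is memoised, so repeated glyphs are computed once and only finished bitmaps would ever be handed over for display (the rasterise function here is a made-up stand-in, not FreeType):

```python
from functools import lru_cache

# Toy stand-in for an expensive rasterisation step (hinting, scaling,
# antialiasing an outline). The "bitmap" is just a deterministic tuple;
# a real renderer would produce a pixel buffer.
@lru_cache(maxsize=4096)
def rasterise(glyph_code, size):
    return tuple((glyph_code * size + i) % 256 for i in range(size))

# Repeated glyphs at the same size come straight out of the user-space
# cache; no recomputation, and no round trip into the kernel for it.
first = rasterise(ord("e"), 12)
again = rasterise(ord("e"), 12)
print(rasterise.cache_info().hits)  # 1
```

This is exactly the "simpler when the output is static" case: a cache like this breaks down once the glyph's appearance depends on what's around or behind it, as noted above.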
It is true that maybe the new wave of flat, solid color designs may not need it, but remember Windows 7 enabled the "aero" interface only if the underlying hardware was good enough, and some effects could be disabled if there's not enough power.
"It is true that maybe the new wave of flat, solid color designs may not need it, but remember Windows 7 enabled the "aero" interface only if the underlying hardware was good enough, and some effects could be disabled if there's not enough power."
I believe the key requirement here is GPU compositing. That's why Aero automatically turns off if you use something like a screen mirror driver (like DFMirage, recommended for use with VNC on Windows), because the screen buffer has to be in main memory for a mirror driver to work properly.
"Ask yourself why microkernels like MINIX or Hurd went nowhere, and the only commercial implementations are at best "hybrid" ones like macOS."
What about QNX, then, used in BB10? Even the memory manager in QNX is a separate process, yet it doesn't seem to have performance issues..
Not exactly a successful example <G>.
Still, it's a phone - it doesn't have the multiprocessing/multitasking needs of a server. I think Canon's DRYOS is also a microkernel, and it runs many millions of cameras - but again, it's a specific OS for very specific needs.
"Still, it's a phone - it doesn't have the multiprocessing/multitasking needs of a server. I think Canon's DRYOS is also a microkernel, and it runs many millions of cameras - but again, it's a specific OS for very specific needs."
But it having a GUI means it has to tackle one of those bug-a-boos: graphics performance. And based on what I've read, a BB10 phone CAN do some decently-demanding stuff like 1080p video. Either BB10 breaks the QNX microkernel segregation or they found a way to get good hardware-accelerated performance out of a microkernel. Which?
It's much easier when you have a single application taking over the whole screen. Windowing makes processing heavier. Even with DirectX or the like, performance is often better when you switch to full screen than when running the application in a window, especially for games.
Also, it runs on ARM, not Intel. Intel ring transitions imply a lot of security checks, structures lookup and loads, it's one of the reasons they are slow. ARM has a simpler model, and it could be faster.
That's why faster instructions like SYSCALL/SYSENTER were introduced later - far less versatile, but created to support the way OSes used the CPU to call into kernel code. Also, if you can't pause, or give far lower priority to, other processes/threads, switches happen far more often.
Anyway, from the Wikipedia page (don't know how reliable it is): "Later versions of QNX reduce the number of separate processes and integrate the network stack and other function blocks into single applications for performance reasons."
"but they moved print drivers out of the kernel, or the default was out, you could still run old ones and Kill your terminal server with just 1 print preview!"
As you say - but apparently one of the chief complaints about early NT was terrible graphics performance, even for the most basic stuff. And that was because graphics were originally kept in Userland as much as possible, BUT because of the necessities of graphics hardware, that caused massive context thrashing. It's not like Microsoft was entirely to blame for moving graphics drivers back into Kernelland - they were under pressure to get performance back up to speed or people would stick with the old Windows, even at a time when 2K represented the final push before XP deprecated 9X.
but they moved print drivers out of the kernel, or the default was out, you could still run old ones and Kill your terminal server with just 1 print preview!
Assuming a shop used HP printers (back when HP still made printers, as opposed to today's cheap consumables vacuums), you only needed two drivers to support all of the terminal users: the HP LaserJet II PCL driver and the HP LaserJet II PS driver. HP printers through at least LaserJet 6 could use those two drivers just fine. Didn't have Terminal Server print driver problems (though I did have my fair share of other problems).
Printer drivers are not "interactive" - unless you still use line printers and like to see a character at a time. If you're generating PS/PCL or the like, maybe directly from data already stored in user space, and then sending it to the printer, there's little need to do it in the kernel.
"Interactive" computation, like performing 3D model transformations in real time (games... but not only), may work better if you send everything to the kernel driver once, and then just tell it what transformation you need each time you need it displayed, maybe several times per second...
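To illustrate the print-path point: a driver whose only job is emitting a page description language can live entirely in user space; the kernel just ships the resulting bytes to the port or spooler. A toy sketch (the function name and the minimal PostScript here are made up for illustration, not any real driver's output):

```python
def render_text_page(lines, font="Courier", size=12):
    """Build a one-page PostScript job entirely in user space."""
    body = ["%!PS-Adobe-3.0",
            f"/{font} findfont {size} scalefont setfont"]
    y = 720  # start near the top of a US-letter page (points)
    for line in lines:
        # escape the characters PostScript string literals treat specially
        safe = (line.replace("\\", r"\\")
                    .replace("(", r"\(")
                    .replace(")", r"\)"))
        body.append(f"72 {y} moveto ({safe}) show")
        y -= size + 2
    body.append("showpage")
    return "\n".join(body).encode("ascii")

job = render_text_page(["Hello, LaserJet", "from user space"])
print(job.decode().splitlines()[0])   # -> %!PS-Adobe-3.0
```

Nothing here ever needs to run at kernel privilege - which is exactly why a buggy printer driver crashing the whole machine (or a terminal server) was such an avoidable failure mode.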
Thank you, I was wondering when / if someone else would mention this! OLE to Ring 0 calls?! Really??!
At the time I sent the full technical details of that change, plus plenty of writeups from technical journals, to the U.S. Navy Training Support Center in San Diego, CA. Knew a flyer stationed there and it turns out that they were educating [him] on the new NT4 and had no idea about the compromised kernel! Got them straightened out in a hurry! :p
One of the main selling points for any version of any product from Microsoft was that you could run your software from previous versions.
That's why Windows NT still contained the incredibly messy WinAPI which, because it had no way of generalizing things, had a function for every feature imagined by its creators, as well as data structures where the things you were interested in were declared "reserved do not use". The API was so bad that people resorted to reading the stack in callback functions to get more information from the system.
Then you had features deliberately put in to harm your competition by making it harder for them to implement them. SMB is said to have quite a lot of feature duplication; apparently the developers didn't read their own code.
The problem for Microsoft is that they cannot get rid of this. Any change means losing backwards compatibility. Any loss in backwards compatibility means that Wine and ReactOS will look like better alternatives.
SMB has its roots in a lot of IBM code... and dates back before Windows, when IBM designed a protocol that could work over NetBIOS/NetBEUI - back then it had to work over non-TCP/IP transport protocols as well (especially IPX).
Some of the "reserved" stuff was "reserved" because it could change in later versions. The fact that people messed with them just made Win32 even more messy when MS had to cope with compatibility - read Raymond Chen's "The Old New Thing" blog for many examples.
But it is true it was also used as a competitive advantage.
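The "reserved because it could change in later versions" point is the classic Win32 struct-versioning pattern: a leading `cbSize` field tells a newer implementation which version of the structure the caller compiled against, so new fields can be added without breaking old binaries. A ctypes sketch of the idea (the structure and field names here are invented, not a real API):

```python
import ctypes

class OPTIONS_V1(ctypes.Structure):
    """The original struct an old caller compiled against."""
    _fields_ = [("cbSize", ctypes.c_uint32),
                ("flags",  ctypes.c_uint32)]

class OPTIONS_V2(ctypes.Structure):
    """A later release appended a field; old layout is a prefix of the new."""
    _fields_ = [("cbSize",     ctypes.c_uint32),
                ("flags",      ctypes.c_uint32),
                ("timeout_ms", ctypes.c_uint32)]

def api_call(opts):
    # A newer implementation checks cbSize before touching new fields,
    # so structs from old callers are still handled correctly.
    if opts.cbSize >= ctypes.sizeof(OPTIONS_V2):
        return ("v2", opts.timeout_ms)
    return ("v1", None)

old = OPTIONS_V1(cbSize=ctypes.sizeof(OPTIONS_V1), flags=1)
new = OPTIONS_V2(cbSize=ctypes.sizeof(OPTIONS_V2), flags=1, timeout_ms=500)
print(api_call(old))   # -> ('v1', None)
print(api_call(new))   # -> ('v2', 500)
```

The "reserved" fields were the other half of the same trick: space pre-allocated so a later version could give it meaning - which is exactly why poking at them was asking for trouble on the next release.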
> The problem for Microsoft is that they cannot get rid of this. Any change means loosing backwards
> compatibility. Any loss in backwards compatibility means that wine and ReactOS will look like better
Actually, we're already there.
An old 16-bit windows game that we tried to run under Windows 7 - ah, no 16-bit system, runs perfectly under Wine on Linux.
Admittedly, small potatoes yet, but Microsoft are shackled to compatibility as one of their key marketing advantages.
That article has some glaring errors. DR GEM, IBM, MS & Apple all copied Xerox more than each other. It was the Lisa (the pilot Mac :) ), not the Apple II, that made any real step forward in HW & SW. The Apple II was dreadful and a success mostly due to VisiCalc. I had one, as well as later an RM380Z, ACT Sirius 1, original IBM and Apricot.
We still had an NT box doing actual work until very recently (i.e. last 18 months).
Coincidentally I found a genuine copy of W2k just the other day when I was tidying out some storage. I couldn't throw it away. I remember 2k very fondly having used it with my first proper job after uni in about 2001.
NT 4.0 was running fine as server for us with about 20M RAM. But screen was only 800 x 600 @ 8bits.
XP needed about 90M RAM. Each SP needed more RAM. Double buffering a 1600 x 1200 @ 24 bit or 32 bit screen adds a lot more RAM usage and slows it.
AV software or crapware slowed XP, otherwise it didn't "slow". I have a laptop I only stopped using regularly 18 months ago with XP, bought in 2002 and re-installed once in 2003. It never "slowed down". Its 1.8GHz P4 & 1600 x 1200 screen is still superior to average laptop sold in a supermarket with win 10.
I have some Win10 gear, but everyday use is now Linux Mint, Mate desktop and customised TraditionalOK theme on Lenovo E460.
I never actually used XP on any of my machines. I kind of hated it, but I had to support it on friends', other soldiers' and family members' computers for about five years. I still think it looks like a butt-ugly Fisher-Price OS unless it's being run with the classic theme.
My Windows 2000 desktop system ran like a dream until we had a weird January thunderstorm and lightning strike which nailed my apartment building, resulting in a massive hardware failure the same damned day Vista came out. After getting new hardware I couldn't find my Windows 2000 install media, which had been lost in one of the Army's famous Permanent Change of Stations when I was junior enlisted and active component. It's probably still in some Army warehouse in Texas or California.
I hated it even more than I had hated Windows XP until Windows 7 came out.
"NT5 for me. AKA Windows 2000. The first (somewhat) stable desktop OS I encountered, and it didn't slow down quite like XP (NT5.1) did."
Ditto. I used to dual boot NT4 and 95/98, until 2000 came along, with decent hardware support (eventually), DirectX support, multiple monitor support without the need for a specialist card, a satisfactory level of stability. Nothing but fond memories.
ZFS isn't available on Windows.. I think you mean NTFS (or possibly VFS)
I meant that ZFS has some of the capabilities of the VMS filesystem, 30 years later, plus a bunch of more complicated improvements. But you still don't have such an easy way for a user to get back previous versions of a file.
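For anyone who never met it: the VMS filesystem kept FILE.TXT;1, FILE.TXT;2, ... automatically on every save, so "getting an old version back" was just a matter of naming it. A toy in-memory sketch of those semantics (not the real Files-11 behaviour - no version limits, purging, or on-disk format here):

```python
class VersionedStore:
    """Toy VMS-style store: every write creates NAME;n+1, old versions stay."""

    def __init__(self):
        self._files = {}  # name -> list of contents; index i holds version i+1

    def write(self, name, data):
        self._files.setdefault(name, []).append(data)
        return f"{name};{len(self._files[name])}"   # e.g. 'LOGIN.COM;2'

    def read(self, name, version=None):
        # No version given -> highest version, VMS-style.
        versions = self._files[name]
        return versions[-1] if version is None else versions[version - 1]

store = VersionedStore()
store.write("LOGIN.COM", "old contents")
store.write("LOGIN.COM", "new contents")
print(store.read("LOGIN.COM"))       # -> new contents
print(store.read("LOGIN.COM", 1))    # -> old contents
```

ZFS snapshots get you something similar, but they capture the whole dataset at a point in time you chose in advance - not an automatic per-file version on every save.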
That was one of a handful of things that OS/2 did a lot better. OS/2 VDMs were almost hypervisors. So low level that you could actually boot them off a floppy disc. They could run just about any DOS application you cared to including games and still get crash protection. I remember playing Geoff Crammond's first F1 simulator while downloading from CompuServe in the background.
The other thing it did better (at least in concept) was having an object oriented shell. The implementation was a bit rough but conceptually a very powerful idea.
I also thought its memory management was better, being similar to that of Unix. RAM was just the fastest form of storage and no attempt was made to keep it free. Unfortunately it led to a lot of support queries from people wondering why they never had any free RAM but I prefer the idea of letting RAM backing 'evolve' rather than the original Windows idea of continually trying to trim working sets.
And of course OS/2 had REXX.
Ah well - that was then and this is now :)
My personal file server is running NT4 Server (SP6a) and has since 1998 or so. It has only ever been shut down due to hardware failure (fans, drives) or extended power outages. Uptime is normally measured in years. It just sits in the corner quietly doing its job of domain controller, DNS, DHCP, VPN, file sharing, printer sharing without complaining. Previously hosted a website getting 100,000+ visitors a day and the only issue with that was MDAC falling over because (shocking) Access DBs aren't designed to be hammered like that. I keep thinking I will shut it down and replace it with something more modern, then I think "It works, why downgrade it to something that needs to be rebooted every 2nd Tuesday for updates?".
NT 3.51 was better, no GDI in Kernel. There was even a preview version of the Explorer shell. But MS wanted people to buy upgrades.
Also why there was no retail SP for USB on NT4.0. I had a preview of the cancelled SP that the USB worked. MS was worried the SP with USB would hurt Win2K sales, yet it wasn't completed. XP is finished version but was rushed and by SP3 got bloated. Also some stupid gratuitous GUI / location changes on W2K and then XP.
W7 is simply a SP fix of Vista, because by 2003 (NT5.2) MS had lost the plot on Vista (NT6) development.
Now NT (aka Win10, and really Win7.2, as Win8 is really Win7) is pointless.
Why not compare with Xenix, OS/2, BSD Unix, AT&T UNIX and VMS resources?
Comparison with MS-DOS/PC-DOS, Concurrent CP/M, CP/M 80, CP/M 86, Intel ISIS II, Apple II DOS etc is pointless.
Curiously there was an MS OS/2 which included MS LAN Manager in 1989 - is that why NT starts at 3.x? Also, NT ran OS/2 text mode (console) programs on an OS/2 subsystem, MS-DOS command instead of NT cmd on an NTVDM, and 16-bit Win 3.x programs using WOW translation of the 16-bit WinAPI to the 32-bit NT API plus an NTVDM. So NT 3.51 & NT 4.0 ran Win3.x & Win95 mixed 16-bit/32-bit programs faster on the Pentium Pro than Win95 did. Win9x killed the Pentium Pro.
NT was held back for 10 years by success of Win9x and badly written windows programs that ignored security APIs and needed Admin mode. Properly written Win32 programs, even written for NT3.1, worked fine on NT4.0, Win2K, XP, Vista/Win7 without being Admin.
there was an MS OS/2 which included MS LAN Manager
Which was itself a clone of the IBM OS/2 LAN Manager (we used that as our file server back in the old token-ring days).
It worked quite well (for those days) and only fell over twice in about 5 years - once when the aircon broke and the room it was in hit 55C, and the second time when I hit the power button by mistake. (The monitor was on top of the server and had exactly the same power button, just 4 cm away from the server power button. While going out the door, I pushed what I thought was the monitor power button but, before I took my finger off, realised that the texture my other fingers were on wasn't the monitor. I stood there for an age while my colleagues went around the office telling people to save their work. I think they did it as slowly as possible in order to teach me a lesson. We taped over the server power button after that.)
IIRC LAN Manager was a 3Com and Microsoft collaboration running on OS/2 - when MS and IBM were still collaborating on it. IBM resold it as well.
That's why Windows had (and partially still has) many "LM" features, including the infamous hash for password storage.
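The LM hash's infamy is mostly down to its preprocessing, which can be sketched without the DES step (each 7-byte half is used as a DES key to encrypt the constant "KGS!@#$%"; that part is omitted here since Python's stdlib has no DES):

```python
def lm_halves(password: str) -> tuple:
    """LM hash preprocessing: uppercase, truncate/pad to 14 bytes, split in two.

    Each 7-byte half is then DES-keyed against "KGS!@#$%" *independently*,
    so an attacker only ever cracks two 7-character, case-insensitive
    halves - never one 14-character password.
    """
    data = password.upper().encode("ascii", "replace")[:14].ljust(14, b"\x00")
    return data[:7], data[7:]

a, b = lm_halves("Passw0rd")
print(a, b)   # -> b'PASSW0R' b'D\x00\x00\x00\x00\x00\x00'

# Bonus tell: any password of 7 chars or fewer leaves the second half all
# zeros, so its hash is a well-known constant visible in any dump.
print(lm_halves("short")[1])   # -> b'\x00\x00\x00\x00\x00\x00\x00'
```

Case-folding plus the independent halves collapse the keyspace so badly that LM hashes fell to rainbow tables decades ago - hence "infamous".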
Especially since DEC used to sell them below cost to push the NT license and to get people off VMS - for reasons that made sense to somebody, you could buy an Alpha with NT for about 1/2 the price of the same HW with VMS!
So we bought the NT machines and installed Linux, almost 2x the power of a Sparc for 1/2 the price.
“Windows NT .. Originally intended as a successor for IBM's OS/2, before the collaboration between the two companies foundered on the rocks of the success of Microsoft's Windows”
Haaa .. you're a funny guy :]
“let your plans be dark and impenetrable as night, and when you move, fall like a thunderbolt”, Donald J. Trump
No. It was another example of how much applications availability matters - people need and use applications, not operating systems.
Windows 3.x - which was also cheaper - soon got many applications, while OS/2 didn't.
IBM was too slow and clumsy at filling that gap, even after it bought Lotus, and its development tools were also inferior (VisualAge, anybody?)
Also for a while, especially before Warp, IBM was still dreaming of using OS/2 to promote its own hardware at the expense of clones, while Windows supported whatever you liked. Often you couldn't find drivers for non-IBM hardware.
OS/2 was probably a triumph of *bad marketing* from IBM, together with an ill-conceived strategy; they didn't own the PC market, and failed to understand it.
Slight correction. IBM's desktop salesmen understood the market very well. They were desperate to sell boxen, which meant Windows, and had no incentive to push OS/2.
AND MS would wield their monopolistic power by threatening to withhold information about the latest version of Windows (so it could actually run properly) unless IBM turned its back on OS/2. Ergo market opportunities for Compaq et al and the rest is history.
>>Nothing less than a 386-class processor (for the Intel iteration) would do, and running it in less than 16MB would make for a very sub-par experience – astonishingly excessive for the time.
Considering the 486 had been around for almost 4 years at that point, I really don't think it was that crazy to ask for a 386. Now the 16 MB of RAM was a little on the WTF side, because memory was bonkers expensive back then, but asking for a 386 by 1993 wasn't at all excessive. Especially when considering that NT was marketed for workstations and servers and not for general-purpose consumer level stuff like 3.1 and the 9x versions of DOS-wrapper "Windows"
Just my two cents.
Especially since you needed it as well to run Windows 3.x decently - it could run on a 286, but with many limitations, especially for running DOS programs. I don't remember anybody running it on a 286.
Even some advanced DOS programs using a DOS Extender often required a 386.
Still, desktop machines with more than 4MB were rare - and probably many low-end 386s didn't have enough power to run the more demanding NT well.
I ran a 286 (and DOS) until 1994 - but just because I was a student with no income, and my parents couldn't afford a new one. Not really the target for NT.
Despite their tremendous success in the "consumer market" as well, in the early 1990s PCs were still essentially business-oriented machines - most software was also expensive.
And NT was aiming at the high-end market, 3.x would do for the low-end one.
As soon as I got a job, I got a Pentium and 8MB of RAM...
And yet the WorkPlace Shell trumps Windows' GUI...
I miss it. It was so flexible. You could have a dozen folders, each with their own colour schemes and fonts.
And you could have workspaces. Assign apps etc to a workspace (folder) - open it, and all the apps/docs associated with that folder opens.
The only weak point was the .INI files... if one went corrupt, you had a jolly time recovering from that.
NT4 SP6 was very stable, but not secure. In contrast Win10 is secure (certainly compared to NT4), but stability is a 50/50 affair, what with the latest tomfoolery from Redmond in insisting it MUST be updated every so often.
I sometimes wonder what would the world have been like had OS/2 and Novell gained serious traction - Novell for file servers and OS/2 for desktop/servers....
When I built my first PC in 97, NT4 workstation went straight on it. This was because I was an A-level student and MS were doing an offer where you got Win95/NT4wks for ~ £35 I think it was. The dumb thing though was that the Windows 95 license was upgrade only, and my machine had no OS to upgrade, so I went for NT4 instead and it was pretty damned good. Although I did have to throw ram upgrades at it pretty sharpish.
I was actually a little excited by what I'd heard and read about the guts of NT---as an old VMS hand a lot of it felt familiar. But I never had a chance to actually use it until the ill-considered decision to move the video drivers into the kernel. I had to warn users every time I needed to make a change to the network user database that was running on one of the company's NT systems---there was a better than 50/50 chance that clicking on "Save" would cause the database application to crash and bring the system down with it. Eventually, we decided that making simple changes like this could only be done after normal business hours. All that solid-as-a-rock VMS lineage wiped out by one silly decision.
Many years ago, I worked for a major financial institution on the European mainland. We were evaluating RDBMS solutions. We were a Netware house and wanted the solution to run as an NLM.
When we contacted our local MS rep for the MS SQL Server NLM they told us that they were discontinuing support, but we could take the NT version out on a spin. When I said 'we don't have NT,' he said 'you can take NT on a spin as well.'
The very next day, he showed up with a stack of 37 3.5" floppies. I started the install. Cluster error on disk 12. A day later, I had a new disk 12. Cluster Error on disk 14. A day later, I had a new disk 14. Eight days later, I called the rep and suggested he 'stuff it.' The next day, he arrived with three floppies and a CD-Rom - I'll never understand why he didn't give us the CD option on day one.
I inserted boot floppy #1 and rebooted. The machine whirred and asked for floppy #2 and 'hit any key'. Then floppy #3. And finally, I received the message: 'Please insert the Windows NT CD-Rom into Drive A: and hit any key to continue.' and there was no way to edit the drive letter. So I did just what it said and inserted the CD-Rom into drive A: a bit at a time.
Remember how touching almost anything in the network settings (including IP address) required a server reboot? Tell the kids of today that, and they won't believe you....
I still have rather fond memories of this era though, supporting NT server, Exchange 5.5 (isinteg -patch forever ingrained), Proxy Server etc. when the more glamorous sites had these revolutionary ISDN lines...
These days servers are so dumbed down and 'user friendly' that you can't work out which of the sugar-coated answers to a wizard leads to the configuration you know you want, and you live in perpetual fear of the unwanted side effects that it will decide you must surely want....
I worked there then. I was asked to look at the MS white paper on NT; what I said was that it had all the right words, but there wasn't enough info to tell whether the code worked the same way. The sub-geniuses (to be polite) at Acorn didn't follow up, so the next thing I heard was when an ex-Acorn employee was working on the 68k port, sometime after I had left the sinking ship. The ARM port could, and should, have happened at the start of the '90s; if the management had actually followed up as opposed to BS'ing it WOULD have happened then, because MS really did have the shyte. It would have happened; back then I cared.
So far as I can tell (I later worked for MS, but not in the OS division) NT seems to have lost and found its way several times since then. It is a damned good micro-kernel but it is beset by the *F*F*F* shell; Windows Explorer (apparently a pseudonym for DOS 3.0) takes it down every time. BUT that is an application. You can do everything you want (and quite a lot that any sane OS vendor doesn't want) if you escape from the Win32 API.
Then there is NTFS. I LOVE NTFS. Sorry, I probably shouldn't say that in public.
John Bowler jbowler acm.org
Biting the hand that feeds IT © 1998–2019