Happy 20th birthday, Windows NT 3.1: Microsoft's server outrider

It started on the server, became the desktop, and it's still there in Windows 8 today; it just turned 20 years old: Happy birthday, Windows NT. Windows NT 3.1 was released to computer manufacturers on 26 July 1993, and initial sales of Microsoft’s debut server operating system were modest – fewer than 500,000 units sold in …

COMMENTS

This topic is closed for new posts.
  1. Piro Silver badge
    Pint

    Next Technology? Damn..

    I always thought it was "New Technology", which was a fantastically silly name for anything.

    1. Seanmon
      Windows

      Re: Next Technology? Damn..

      I thought it was "New Technology" too. However, rumour has it that "Windows New Technology" was a backronym created for WNT - WNT being to VMS as HAL is to IBM. Not impossible, I guess, given the number of ex-VMSers on the project.

    2. hplasm
      Devil

      Re: Next Technology? Damn..

      Wasn't it called "Not Tested" when it first limped out?

      1. captain veg Silver badge

        Re: Next Technology? Damn..

        I remember that in the contemporary computer press it was often referred to as "Not There" on account of the delays.

        Guy Kewney penned an article on the then new OS/2 version 2 ("better DOS than DOS, better Windows than Windows", remember?) in which he begged IBM to call it anything other than OS/2. Tracy, even. So for me, NT always stood for "Not Tracy".

        The Windows 2000 startup screen proudly declared "Built on NT Technology"; so that's "N[ew]|[ext] Technology Technology". Right.

        -A.

        1. Michael Wojcik Silver badge

          Re: Next Technology? Damn..

          > I remember that in the contemporary computer press it was often referred to as "Not There" on account of the delays.

          When OS/2 2.0 won the race against NT to get the first boxes on shelves - the race that famously had Ballmer saying he'd eat his hat if Microsoft lost - IBM ran magazine ads with a picture of the NT logo and some sarcastic expansion of "NT" - might have been "Not Today" or "Nice Try" or something like that. I think I may still have one around somewhere, but I'd have to dig through piles of stuff to find it.

    3. mkc

      Re: Next Technology? Damn..

      I thought it stood for Northern Telecom, a reference to the networky part of Windows?

      1. Destroy All Monsters Silver badge
        Headmaster

        Turned me into a Newt..echnology

        From the Hallowed Halls of the Troll Towers El Reg Editing Room:

        http://www.theregister.co.uk/2013/06/10/openvms_death_notice/

        The architect of RSX-11M and VMS was Dave Cutler, who planned a portable, object-oriented successor, Mica, running on hardware codenamed Prism. When DEC wasn't interested, he and some of his team decamped to Microsoft, where they were given the project of reviving the moribund OS/2 3 project after the IBM-Microsoft split. While OS/2 2 was the Intel 386 version, OS/2 3 was to be portable to non-x86 processors. Cutler drew upon his previous Prism and Mica work to bring Microsoft's OS/2 3 to the Intel i860 CPU, a RISC/VLIW chip Intel had hoped might be a successor to the x86 line.

        There were two versions of the chip – the basic i860XR, codenamed the N10, and the enhanced i860XP, codenamed N11. Microsoft built its own i860 workstations for the development effort, based around the i860XR and consequently nicknamed the "N-Ten". The initials of these – NT – are where the eventual name for Cutler's finished OS came from: Windows NT.

        Don't know whether true....

      2. Daniel von Asmuth
        Headmaster

        Re: Next Technology? Damn..

        NT was called 'Not There', because it was released well past schedule, just like Windows NT 6.0 (called Longhorn or Vista or even 8).

        Microsoft's first Operating System was Xenix (not counting DOS), which later became SCO Unix. They tried to build a server Operating System, called LAN Manager, to beat Novell Netware. Some of the functionality later went into Windows for Workgroups, Windows 95 and NT, but it was just peer-to-peer networking. LANMAN mostly failed.

        WinNT 3.1 was purely a Workstation OS. By Win NT 3.50 Redmond came up with a server edition, which was inferior to Netware, OS/2 or Unix. It took until NT 5.0 (Windows 2000) for the server edition to become worthy of its name.

      3. Anonymous Coward
        Anonymous Coward

        Re: Next Technology? Damn..

        "I thought it stood for Northern Telecom, a reference to the networky part of Window"

        That would be Nortel you are thinking of....aka Bay Networks

    4. TheVogon
      Mushroom

      Re: Next Technology? Damn..

      Add one to each letter of VMS....

    5. NateG
      Pint

      Re: Next Technology? Damn..

      It was my understanding that the origin of "NT" was N-Ten, aka N10, a codename for the Intel i860 processor that the 32-bit version of Windows was originally meant to run on.

  2. Captain Offensive

    Virtual NT

    Tim. In a fit of curiosity a few months back I not only managed to get a copy of NT Server (3.51 rather than 3.1, admittedly) running using VMware Player, I have since upgraded it to NT4, 2000 and 2003. Not made it to 2008 (R1) yet, which of course would be the end of the line due to R2 and later being 64-bit only. But it works! :-)

  3. ElNumbre
    Pint

    Not quite old enough.

    I wasn't quite old enough to be exposed to NT 3, but I do remember seeing NT4 after Windows 3.11 for Workgroups and recognising it was the future. I could never understand why MS persisted in trying to develop two separate windows lines and was glad when they decided to unify them in Windows 2000.

    The NT line was pretty robust compared to the 9x kernel line, which would crash if a butterfly in Taiwan flapped its wings at an inopportune time.

    I feel like I've grown up with the NT kernel; we're both older, wiser and better for it, although still prone to occasionally doing something unexpected for no real reason.

    1. Tony Pott

      Re: Not quite old enough.

      IIRC the issue was games. They had great difficulty creating an environment under NT that would support DOS/Win9x games adequately, or at all. This was not really sorted till WinXP, so while many business machines moved to Win2K, MS introduced WinME as a (horrible) stopgap for home computing until XP was ready.

      1. kittyjess

        Re: Not quite old enough.

        I always thought that Win ME was a quick drop-in replacement put out when Windows 2000 Home (Neptune) ran late: a quickly rewarmed hack job of 98SE that shouldn't have been put out there.

        I did really like the watercolour theme from Neptune and it's a shame that they didn't use that for XP, rather than that kiddy Luna theme....

    2. Tony Pott

      Re: Not quite old enough.

      I should have corrected you in my reply: the existence of WinME means, of course, that they maintained 2 lines until XP, not until Win2K.

  4. Nigel 11

    Misleading

    Windows NT 3.1 was the biggest remake of the Windows family until Windows 8 came along

    True, if you are looking "under the hood", i.e. at the kernel. (But note, the Win 8 kernel is still derived from NT.) However, the kernel is not the first place most Windows users look. The other revolution was the replacement of the Windows 3.1 GUI by the Windows 95 / 98 / 2000 / XP GUI, which pretty much defined a (small-w) windows desktop until Windows 8 was dumped on us.

    NT 3.5 (at the time it shipped) was unbelievably stable, but still ran the 3.1 desktop. (It was basically NT 3.1 with most of the bugs fixed). NT 4.0 ran the newer desktop, which had required driving a coach and horses through the carefully designed VMS-like security model of NT 3.x. The system's architect, David Cutler, formerly architect of VMS at Digital, left Microsoft around the NT 4.0 release, possibly because of Microsoft putting image above security considerations. Microsoft has probably been paying the price ever since!

    1. Fred Goldstein
      FAIL

      Re: Misleading

      Amen. They started with a good idea but screwed it up.

      NT 3.1 was basically the alpha test; 3.51 the usable beta. It had HAL, the hardware abstraction layer, which helped make it compatible with the DEC Alpha and later chips. But by 4.0, they threw it away and put the GDI into the kernel, making the whole thing unstable, in exchange for (I am told) about 15% more speed. So a few months' worth of Moore's Law was the payoff for ruining a much better system.

      1. Admiral Grace Hopper
        Windows

        Re: Misleading

        "... put the GDI into the kernel, making the whole thing unstable, in exchange for (I am told) about 15% more speed" - and this was the first time that I started to wonder exactly what it was that the Emperor was wearing.

        I was a huge fan of Windows NT. Having moved from a mainframe environment to writing stuff for Windows via a brief experiment with Macs, Windows NT and especially v3.51 felt like a return to a world run by adults after being sent back to the playpen. I'm sure that MS would still have been able to make dumb decisions even if they hadn't moved from the original design philosophy, but for that brief period they seemed to be getting things right and moving in the right direction. Every time MS do something brain-dead I think, "GDI moved from ring 3 ...", sigh and shake my head.

        I feel old again.

        1. Kebabbert

          Re: Misleading

          But as of today, Windows does not run the graphics in the kernel anymore. The graphics runs outside the kernel. Windows 7 is the most stable Windows I've tried. It works well. Sure, it is not as stable as Unixes, but it is stable enough.

          Strangely enough, Linux has lately moved the graphics into the kernel to gain more speed. This move has made Linux more unstable.

          1. Anonymous Coward
            Anonymous Coward

            Re: Misleading

            "It works well. Sure, it is not as stable as Unixes, but it is stable enough."

            Excluding Linux. It is rarer to get a stability issue with a recent Windows server than a Linux system in my experience across thousands of boxen of various flavours...

            1. Anonymous Coward
              Anonymous Coward

              Re: Misleading

              I disagree; I think that Windows and Linux are pretty much on a par in terms of stability. I think that the majority of stability issues come about for similar reasons: a lack of education on each system.

              Too many Windows administrators are unfamiliar with the command line, and just jab away at the GUI until whatever they want to do is done.

              Too many Linux administrators think that because they know commands by rote (ie: if X happens, issue command Y) they somehow understand what they're doing.

              In both cases there is a lack of understanding of how the system actually works. In the case of Windows the mindset tends to be, if the GUI lets me do it, it's safe. In the case of Linux the mindset seems to be, I know the command to run, therefore I understand what I'm doing.

              1. Anonymous Coward
                Anonymous Coward

                Re: Misleading

                The real difference is that the GUI lets an unskilled administrator work somehow with Windows systems. It's not that a Linux admin knows the commands and therefore understands what he is doing, it's vice versa - to know what command to run he (or she) needs to have an idea of what needs to be done and how, and that cuts out many unskilled admins (although I've seen some just cutting and pasting from Google, performing some tasks without a clear idea of what they were doing, copying someone else's settings without properly adapting them to their needs).

                With a GUI, on the other hand, an unskilled admin can look for something that appears to do roughly what he (or she) needs, and get something done somehow, not always in the proper way.

                Both GUI and command line are good for the tasks they were designed for, and a good admin uses both, depending on which allows a given task to be performed in the best and fastest way. And a good administrator checks that brain is connected before using fingers.

            2. Anonymous Coward
              Anonymous Coward

              Re: Misleading

              Yes, today if you use good hardware and certified drivers it is very, very rare to get a BSOD; I've several Windows machines that get rebooted only for monthly patches.

              Machines that become ridden with malware competing to obtain full control, cracked software, or cheap hardware with bad drivers are usually unstable, but any OS would be in such a situation.

    2. Anonymous Coward
      Anonymous Coward

      Re: Misleading

      > NT 3.5 (at the time it shipped) was unbelievably stable, but still ran the 3.1 desktop.

      We ran NT 3.51 at the first company I worked for. It was rock solid, albeit a bit slow even on the dual Pentium Pro 200MHz Compaqs that we used. Scripting was definitely a problem, but ActiveState produced a Perl 5 module that allowed a certain degree of control. Then NT 4.0 came along and things became very unstable. I was happier using the SunOS stuff that the majority of the company was running on, so when it looked like they were going to switch wholesale to Windows I jumped ship to a Solaris based outfit.

    3. jackbee

      Re: Misleading

      What do you mean by "David Cutler left Microsoft"? The man is still around...

    4. Anonymous Coward
      WTF?

      Re: Misleading

      That's not the only misleading comment.

      "was so good it also changed the direction of computing"

      Sure, as long as you'd never used anything other than DOS and your idea of "computing" is a desktop PC. Solaris blew it out of the water GUI-wise, and TBH still does on the server side even today. Along with most Unixes, frankly.

  5. h3

    They did something for performance with NT 4.0 that broke things.

    Don't count Windows 8 as a big change due to that hack that makes Metro apps run on the Desktop.

    If that is possible, the change is no more than something like WPF.

    1. Anonymous Coward
      Anonymous Coward

      "They did something for performance with NT 4.0 that broke things."

      Understatement of the last century. MS let every man and dog run processes in ring 0, initially so they could make graphics faster, which started the bad spiral of bad drivers crashing the whole system and ended, now, with anyone pwning your system.

      1. Anonymous Coward
        Anonymous Coward

        You can't run code in ring 0 except by writing a driver. Now unsigned drivers raise a big red dialog box asking you to accept them. And again, to install a driver you need to be an administrator. So if your systems are configured to let every man and dog run as "root", sure, you have a security issue...

  6. Mage Silver badge

    Security

    The security was actually good, and is still good on later NTs...

    But there were three HUGE problems.

    By default there was no Ordinary User account created, only the Admin account.

    People didn't write applications properly so they could be installed by Admin and used by User. This was especially an issue from NT3.51, when people started to use the Workstation product and applications written by WFWG / Win95 developers.

    Security was only good with PROPERLY configured permissions on NTFS. Out of the box, the permissions on directories were not set ideally.

    The Token-based scheme and ACLs were very powerful for people that bothered to use them properly. The problem was that folks treated it like WFWG / Win9x (and increasingly MS themselves from Win98). Other often ignored features of serious value:

    Named pipes (can't be created on Win9x, but even DOS clients can connect to them)

    Using files as Arrays (sort of persistent virtual memory)

    Streams in Files (a little like Apple Resource Forks).
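
    To make the first of those concrete, here is a minimal sketch of a named-pipe server using the documented Win32 calls (assuming any Win32 C toolchain; the pipe name, buffer sizes and message are purely illustrative, and error handling is kept to a bare minimum):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Create one instance of a local named pipe; the name is made up. */
        HANDLE hPipe = CreateNamedPipeA(
            "\\\\.\\pipe\\reg_demo_pipe",   /* \\.\pipe\<name>            */
            PIPE_ACCESS_DUPLEX,             /* read/write                 */
            PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
            1,                              /* max instances              */
            512, 512,                       /* out/in buffer sizes        */
            0,                              /* default timeout            */
            NULL);                          /* default security (ACL)     */
        if (hPipe == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "CreateNamedPipe failed: %lu\n", GetLastError());
            return 1;
        }

        /* Block until a client connects (a DOS or Win9x box can open it
           over the network as \\server\pipe\reg_demo_pipe). */
        if (ConnectNamedPipe(hPipe, NULL) ||
            GetLastError() == ERROR_PIPE_CONNECTED) {
            const char msg[] = "hello from the NT side\r\n";
            DWORD written;
            WriteFile(hPipe, msg, sizeof msg - 1, &written, NULL);
        }
        CloseHandle(hPipe);
        return 0;
    }

    Passing NULL for the security attributes just takes the default ACL from the caller's token, which is exactly the sort of per-object security Win9x never had.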

    The problem was that most people never bothered to learn how to configure it or how it worked, even 1/10th as much as a Linux/UNIX admin/user would. Eventually this applied to MS too, which is why they did REALLY STUPID stuff (GDI to kernel in NT4.0) and gratuitously moved stuff around (W2K, XP, Vista/W7, Win8) for no good reason. Buggy Explorer. Stupid defaults on share and device names and security.

    So the BIGGEST problem is the install defaults. 2nd biggest was the similarity to WFWG & Win9x. Win95 should NEVER have been released. It and Win98 helped degrade NT4.0, Win2K, XP, Vista/Win7 and Win8 into becoming ever more bloated, unreliable, less secure and more broken.

    NT4.0's major security & reliability flaw was GDI moved to the kernel to make video 10% faster. Stupidity, given how fast PC performance was improving from 1995 to 1996.

    I did have NT3.5 on a 386DX-16 MHz with 6M of RAM. Worked fine as a file server. NT4.0 was fine with Internet Proxy (wingate), Mdaemon for Mail, MS-SQL server, File & Printer server etc in 20M RAM on a 486.

    So NT3.1 wasn't "bloated" or "Slow" for a 32 bit server, nor even was NT4.0.

    NT 4.0 ran on Alpha, PPC, MIPS and 64bit Alpha as well as x86. It had Clustering (developed by DEC) from 1998/1999 that could be implemented really cheaply with two ordinary Servers, SCSI controllers with two channels, two external storage shelves.

    Where did MS go wrong? Concentrating on eye candy instead of real suitability, and REALLY badly done installer Wizards with BAD silent defaults. STILL. Why is EVERY service on by default?

    1. Jim 59

      Re: Security

      "This was the moment when Microsoft could have enforced isolation between system files, application files and data, but perhaps for the sake of compatibility with legacy Windows applications, it is too lax: an enormous amount of effort was needed later to patch up its vulnerability to DLL version issues, malware and user security."

      And that's why we are still menaced by botnets of badly maintained legacy Windows boxes.

      1. Anonymous Coward
        Anonymous Coward

        Re: Security

        Yes, the problem is the amount of badly written code around. Too many Windows programmers learned to code with Windows 3.1 and stubbornly refused to learn to code properly for later versions.

        IMHO with Windows 7 MS should have started to block such code wholly, and show a large dialog box telling "This application was coded by a moron who refuses to write modern code. Please replace it with a better one".

        1. Yet Another Anonymous coward Silver badge

          Re: Security

          Trouble with NT was that you could do bugger-all as an ordinary user.

          You had to be admin to open the network settings dialog to find your own IP address

          And with no "sudo" the only way was to log off and back on as admin

          1. kain preacher

            Re: Security

            So you never tried adding yourself to the Power Users group?

      2. Anonymous Coward
        Anonymous Coward

        Re: Security

        "And that's why we are still menaced by botnets of badly maintained legacy Windows boxes."

        That phone home to armies of remotely hacked Linux based command and control servers...

    2. Anonymous Coward
      Anonymous Coward

      Re: Security

      People maybe don't understand that in protected mode only code running at or below IOPL can access the hardware through I/O ports - and usually physical memory is accessible by highly privileged code only as well.

      The problem is that, due to the privilege checks and other operations needed when a ring transition occurs (switching stacks, copying parameters, moving data between user space and kernel space, switching CPU state, etc. etc.), the more calls that need a transition (back and forth...), the slower the code is. That's also one of the reasons most OSes running on Intel hardware don't use the full 4 rings, but only two. Using all the rings would lead to more secure and stable code, but also slower code.
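
      As a rough illustration of that cost, here is a Linux-flavoured sketch (illustrative only; getpid and the loop count are arbitrary, and absolute numbers vary wildly by CPU and kernel) comparing a call that stays in user mode with one that has to cross into the kernel:

      #define _GNU_SOURCE
      #include <stdio.h>
      #include <time.h>
      #include <unistd.h>
      #include <sys/syscall.h>

      static long stay_in_user_mode(void) { return 42; }   /* no ring transition */

      static double ns_per_iter(struct timespec a, struct timespec b, int n)
      {
          return ((b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec)) / n;
      }

      int main(void)
      {
          enum { N = 1000000 };
          struct timespec t0, t1;
          volatile long sink = 0;

          clock_gettime(CLOCK_MONOTONIC, &t0);
          for (int i = 0; i < N; i++) sink += stay_in_user_mode();
          clock_gettime(CLOCK_MONOTONIC, &t1);
          printf("plain user-mode call : %.1f ns\n", ns_per_iter(t0, t1, N));

          clock_gettime(CLOCK_MONOTONIC, &t0);
          for (int i = 0; i < N; i++) sink += syscall(SYS_getpid);  /* ring 3 -> 0 -> 3 */
          clock_gettime(CLOCK_MONOTONIC, &t1);
          printf("real system call     : %.1f ns\n", ns_per_iter(t0, t1, N));

          return (int)(sink & 1);   /* keep the compiler from discarding the loops */
      }

      The gap between those two numbers is the per-call transition overhead the GDI-in-kernel change was trying to avoid paying on every drawing call.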

      To minimize these transitions, MS moved most of the GDI code and video drivers into the kernel, so when GDI code must call the video driver code it does not have to go through a transition (drivers in user mode would need a kernel-mode counterpart to access the hardware). The real problem was (and is) that Windows drivers can be complex to write, and with many small companies writing drivers without employing skilled developers the risk of a bad driver was high (that's why it was always better to buy from reliable companies).

      But ask yourself - where are Linux graphics drivers? In user space or kernel space?

      1. Jim 59

        Re: Security

        You're saying that the system call interface was slow and put an overhead on software. Improvements such as threading and memory mapping have come about partly to improve that situation. It's not the drivers that are at fault, it seems to me, but rather the system call / privileged access interface, which is inefficient. OK, maybe bad drivers too.

        Basically, Microsoft appeared to rush Dave Cutler into doing a bad job, releasing NT without the proper multi-user safeguards a grown-up OS ought to have. Result: 20 years of virus anarchy.

    3. stizzleswick
      FAIL

      Multi-platform misunderstanding

      "NT 4.0 ran on Alpha, PPC, MIPS and 64bit Alpha as well as x86. [...]."

      The problem that I experienced with one company was that they actually were trying to unify their x86, PPC and Alpha machines under NT. Which did not work, because the various compiles of Windows would simply not run much software compiled for one of the other hardware platforms--I remember the simple un-zipping of a ZIP file created with FreeZIP on an x86 became an unsurmountable challenge on an Alpha--so we ultimately decided to split between Solaris for the servers and MacOS on the workstations at the time... with a few, rare x86/NT machines for the bookkeeping crew.

      I guess the main problem was that most software distributors simply did not go along with the idea of supporting multiple hardware platforms and so, for the most part, only offered compiles for NT on x86 and/or MacOS on PPC, and if lucky, HP/UX on Alpha.

  7. Lionel Hutz

    Stability for a time

    I well remember running NT3.1 and NT3.5 as my DESKTOP OS (remember NT Workstation?) because I just couldn't deal with the garbage that was WFW/Win32s at the time - constant crashing and running out of "resources" caused by poor GDI heap management. You needed what was, at the time, considered just GOBS of RAM to get the job done. However, these OSs finally let me run my development environment for more than 24 hours straight :-). I stayed in the server OS world for my desktop through NT4, WIN2K, and WIN2K3 because WIN95/WIN98/WINXP were much less stable. Windows 7 finally delivered what I would consider to be equal stability to the server counterpart. However, as many have already mentioned, the NT4 architecture put the graphics handling into ring 0, which was a bad move. NT4/WIN2K definitely suffered more crashes because of this move. Anyway, here's one guy that's happy they loaded the desktop GUI into their "server" product. It made my work day much more productive.

  8. Version 1.0 Silver badge

    Stability

    NT was not as stable as VMS by quite a long way but it was incredibly stable when compared to anything running on a PC in those days. I've had up-times of well over a year with NT server boxes - generally you just have to turn them off to clean out the fans and power supplies.

    I turned the last NT box (running a mail server and FTP) off about two years ago - never hacked even once.

    1. Mr Anonymous

      Re: Stability

      "I turned the last NT box (running a mail server and FTP) off about two years ago - never hacked even once."

      FTP on NT4, how we laughed (not) when users with bad passwords got hacked; the anons created directory names like com0 and filled them up with warez that you couldn't see or access with File Manager.

    2. Anonymous Coward
      Anonymous Coward

      Re: Stability

      Windows NT, while much better than what came before, was appallingly unstable. We considered using it for a product but stopped when we were seeing 2 or 3 crashes a day running light workloads of standard MS apps. The idea this was the best on PCs at the time is a joke. As an example, we were running an RTOS with a Unix process model on PCs at the time and never saw any crashes except during driver development, over a three-year period and thousands of installs. NT used as part of the IT environment, with carefully managed restricted workloads and frequent preventative reboots, managed uptimes of 2 to 3 days.

      MS have now achieved the reliability that used to be the norm 30 years ago.

      1. Anonymous Coward
        Anonymous Coward

        Re: Stability

        "We considered using it for a product but stopped when we were seeing 2 or 3 crashes a day running light workloads of standard MS apps."

        Your experience and mine differ. Most of my colleagues were using the IT-supplied Win98 on their desktops and laptops. I was using my own NT setup (unsupported by corporate IT). The W98 users were frequently "out of memory" or unproductively blocked in some other W98-specific way that my proper 32bit-OS environment just treated as routine. So if their spreadsheet was too big to print, they came to me to get it printed. Etc.

      2. deadmonkey

        Re: Stability

        Carefully managed by whom?

        Someone who couldn't fix their way out of a paper bag it seems.

    3. Yet Another Anonymous coward Silver badge

      Re: Stability

      >NT was not as stable as VMS by quite a long way

      To be fair, Stonehenge wasn't as stable as VMS.

      The only way to stop a VMS machine was to put a stake through its CPU and bury it at a crossroads at midnight. And even that didn't work if it was part of a Vaxcluster

      1. Anonymous Coward
        Anonymous Coward

        Re: Stability

        "The only way to stop a VMS machine was to put a stake through its CPU and bury it at a crossroads at midnight. And even that didn't work if it was part of a Vaxcluster"

        You (and others) might enjoy the video at www.hp.com/go/disasterproof - VMS and some other stuff

        VMS is still around, despite HP's best efforts, but if you want to buy it new, you have to buy it on an IA64.

        If you just want to play, there are lots of zero-cost emulators for VAXes and Alphas. At least one blog has details of how to set one up on a Raspberry Pi, and there's another one packaged for Android. The software is available at zero cost via a hobbyist program for OS and tools ("layered products" in DECspeak).

    4. MQADM

      Re: Stability

      I still pine for the days of my VAX/VMS and OpenVMS servers. The reliability of those environments was truly amazing. System uptimes could range into the years! You just couldn't kill them. They may not have been pretty, i.e. no graphical interface, but you got things done. The eventual add-on of the CDE interface was okay but seldom used.

  9. Anonymous Coward
    Anonymous Coward

    "proved bad both for security and for the ability to script common tasks."

    I only ever came across one thing that I couldn't script on WinNT 3.x/4, and that was the "user can logon using dialup networking" checkbox in the User Manager applet, IIRC.

  10. Roger Greenwood
    Go

    Expiry of patents in NT

    Surely by now some of the major ones have expired, or did they apply late for many?

  11. Anonymous Coward
    Anonymous Coward

    The new people

    I remember, having switched company, working with Unix and NT as before. I then had a slight problem with a Unix server and decided I should get acquainted with the technical service in that company. In steps a young girl, asks for the server and, without asking a question, goes and shuts it down from the power switch. Rather annoyed, I tell her that is not a proper way to deal with a Unix server. Startled, she replies, oh, I thought it was an NT.

    In those years quite a few customers decided to switch from Unix to NT. The lack of proper scripting was a big problem for us. Fortunately some university in Utah had made quite good tools for that - sed, grep and similar.

    One problem all those customers had was that although the new Intel NT hardware had a lot more MHz, it was slower than their old Unix machines. More or less every customer was forced to upgrade the hardware. Another totally new experience for them was viruses and people breaking into their systems. The way I felt it then was that a new sort of people had stepped into the server rooms, all with a fairly poor understanding.

  12. sawatts
    Devil

    POSIX Subsystem

    Early releases of Windows NT (3.1 or 3.5?) included a "POSIX" subsystem - the absolute *minimum* required to claim any compliance to the standard.

    Why?

    As I was told at the time, from rather disgruntled folk affiliated with DISA - the US Gov/DoD had raised a tender for an operating system which was required to be POSIX compliant...

    (you can see where this is going)

    The end users were expecting a replacement for their trusty UNIX systems; but the tender didn't stipulate *what* level of POSIX was required - so MS satisfied the requirements with their minimal (and useless) subsystem, undercut the iron mongers, and were duly awarded the contract.

    Cue lots of annoyed end users, sitting in front of BSODs, on piles of unusable source code...

  13. Stephen Channell
    Happy

    Sun Microsystems called NT a Mainframe OS for a PC

    When NT launched, it (together with IBM MVS, DEC VMS, SunOS) was only the fourth multi-threaded, multi-processor OS. Scott McNealy described it as a mainframe OS for PCs, and really not needed for the desktop. Scott also said it would take ten years before the multiprocessor performance matched SunOS.

    The OS was very stable until NT shoved the device drivers into the kernel (with NT4) and allowed graphics to crash the box... which is ironic, because kernel graphics is one of the reasons Linux is better for GPGPU & HPC (intervening Server updates switched off the GPU if not being used interactively).

    1. Kebabbert

      Linux graphics runs in the kernel, too.

      "...The OS was very stable until NT shoved the device drivers into the kernel (with NT4) and allowed graphics to crash the box.. which is ironic because Kernel graphics is one of the reasons Linux is better for GPGPU & HPC..."

        You do know that Linux today runs its graphics in the kernel?

      1. Anonymous Coward
        Anonymous Coward

        Re: Linux graphics runs in the kernel, too.

        so why did you compile it into your kernel?

        Your choice, obviously, but why?

        1. Anonymous Coward
          Anonymous Coward

          Re: Linux graphics runs in the kernel, too.

          I'm just wondering if the "it's in the Linux kernel, we're doomed" people actually understand the difference between "it's in the kernel", and "it runs in kernel mode". And if they understand the architectural differences between the way an app on Linux communicates with the (optional) graphics subsystem hardware on Linux, and the architectural equivalents on Windows.

          Basically, how close (ie how exploitable) is the relationship between a generic non-priv user app on Linux, and the graphics device driver on Linux, with or without a GUI environment (a busybox-centric setup seems to manage quite nicely without graphics)? And what's the equivalent on Windows?

          1. Anonymous Coward
            Anonymous Coward

            Re: Linux graphics runs in the kernel, too.

            NT 3.51 was very much like Linux is today in that respect. GDI was part of the OS, but it was pushed out to ring 3. With 4.0 it moved to ring 0, making the stability of the whole O/S vulnerable to bad graphics driver code -- there was a fallback to the generic driver available, but on first boot you could certainly be treated to a horrible surprise. Which kind of reminds me of the experience I had just a couple of days ago bringing up Fedora 19 after installing the latest Catalyst drivers for an add-on AMD card (did I mention yet that I think systemd sucks?).

            GDI issues aside, NT 4 was a real boon to most of us because of the then awful state that both Netware and Unix were in. Novell was a nightmare to deal with, at least for junior sysadmins like myself (trying to get the product code to do an emergency 3 AM re-image of a key file server whose backups had too long been neglected). The Unixes of the time were creatures of the hardware they were included with (e.g. Solaris ran on Sun hardware, AIX on IBM, Irix on SGI, etc.). When it came to deploying individual file servers across a continent (a strategy mandated by the then current cost of network bandwidth) those cheaper Intel boxes were really the only way to go, and that's where NT 3.51, and later, NT 4, kind of saved the day.

        2. Anonymous Dutch Coward
          Meh

          @AC 01:00 Re: Linux graphics runs in the kernel, too.

          Compile it straight into the kernel or have it as a loadable module that gets loaded into the kernel... the driver ends up running in kernel mode anyway....

    2. Anonymous Coward
      Devil

      Re: Sun Microsystems called NT a Mainframe OS for a PC

      Where are Sun and Solaris today? Maybe they weren't so good, after all....

      1. Kebabbert

        Re: Sun Microsystems called NT a Mainframe OS for a PC

        Solaris is the most widespread Unix. There are something like 10 million downloads of Solaris 10 alone. It runs on x86, that is, a lot of servers. It also runs on all SPARC machines.

        IBM AIX runs on POWER servers. There are not many of them, in comparison. HP-UX runs on Itanium; maybe there are fewer Itanium servers than POWER servers?

        The most innovative Unix today is Solaris. Everybody wants or has copied Solaris tech, such as ZFS, DTrace, SMF, Crossbow, Containers, etc. You name it. For instance, let us talk about the lesser known DTrace:

        -The Linux clone of DTrace is called Systemtap.

        -IBM AIX clone is called Probevue

        -Mac OS X has ported it

        -FreeBSD has ported it

        -NetApp engineers talked about porting it, on a blog post. (NetApp ONTAP is a FreeBSD derivative)

        -VMware clone is called vProbes

        -QNX has ported it

        These are just the OSes off the top of my head, that I know. If I google a bit, maybe I could add some more OSes that ported/cloned DTrace. It is almost like every major OS has got DTrace, in one way or another. DTrace is a must-have, they think.

        So, can you name some cool IBM AIX tech that everybody has cloned or copied? Or cool HP-UX tech? Or cool Linux tech? No? There is no must-have tech? Then surely Solaris is the most innovative Unix today. And the most widespread Unix too. Your post is, in other words, totally off, split from reality.

        1. Anonymous Coward
          Anonymous Coward

          Re: Sun Microsystems called NT a Mainframe OS for a PC

          Please support your assertion that Solaris is the most widespread Unix with actual data - 10 million downloads mean nothing, especially if no timeframe is given.

          1. Kebabbert

            Re: Sun Microsystems called NT a Mainframe OS for a PC

            "...Please support your assertion that Solaris is the most wide spread Unix with actual data - 10 million downloads mean nothing, especially if no timeframe is given...."

            I don't have links right now. But this is so well established that I don't bother. Everybody knows it. This site sometimes presents the state of Unix, and other reports say the same. Just read them. It goes something like this: IBM made the most money (but they are selling few expensive servers). Solaris sells the most servers (but cheaper). Bla bla. Just read the reports.

            But if you think about it, Solaris runs on x86, and there are a lot of Solaris distros on x86: Nexenta is a storage vendor company selling enterprise servers, Tegile is an enterprise storage company competing with NetApp, Greenplum, Coraid, etc. etc. - there are a lot of x86 distros and hardware vendors selling open-sourced Solaris products. Then we have Oracle Solaris, which is closed; it is widespread on x86. And of course, all those SPARC servers too. In comparison, POWER servers are few. And Itanium too. This is so well established I don't bother; google it yourself. Everybody knows this.

  14. P Taylor

    SP ?

    The only Windows OS ever to get up to Service Pack 6.

    By which point it was pretty darn good.

    1. Mage Silver badge
      Linux

      Re: SP ?

      I even ran a prototype USB stack on NT 4.0; it was rumoured it would be in SP7. Of course most drivers for it had to be manually installed, because programmers IGNORED MS advice to look for FEATURES and not just test for OS Ver > nn. Allegedly SP7 was canned to help Win2K sales, as people were not upgrading from NT 4.0.
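
      For the youngsters, that "look for FEATURES" advice amounts to probing for an entry point at run time rather than comparing version numbers. A minimal sketch (the API probed for here is just an arbitrary example of something older NT lacks; any Win32 C toolchain will build it):

      #include <windows.h>
      #include <stdio.h>

      int main(void)
      {
          /* kernel32 is always loaded, so GetModuleHandle is enough here. */
          HMODULE k32 = GetModuleHandleA("kernel32.dll");
          FARPROC p = k32 ? GetProcAddress(k32, "GetNativeSystemInfo") : NULL;

          if (p)
              puts("feature present - use it");
          else
              puts("feature absent - degrade gracefully, don't refuse to run");
          return 0;
      }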

      Best Windows NT versions

      1) NT3.5 (NT3.51 was just a patch to include gratuitous APIs stuck in Win95 to stop Office 95 running on Win3.11/WFWG3.11). NT3.51 even had an Explorer beta as an option. Only economical for most small offices as a server.

      2) NT 4.0 after SP1. Enterprise Server Edition could break the 2G/4G barrier for 16G RAM. Most CPU types, 1st 64-bit version and Clustering. Also more than basic POSIX via MS Services for UNIX, though you needed a 3rd-party X server. Cool running X and Windows GDI seamlessly on one desktop with CMD console and UNIX shell console.

      3) XP and Server 2003

      MS OS/2 1989 (not the IBM or joint IBM/MS version) had LAN Manager built in rather than as an add-on, so it is the predecessor of the 1st NT, NT3.1. It was intended for servers only. Soon with Win 3.0 clients, delayed till 1990. Is it the reason NT starts at 3?

      Networking in NT was a LANmanager subsystem and NT also supported OS/2 console mode applications (worked on NT4.0, probably dropped on Vista? Never tested XP).

      The Linux kernel beta was released about the same time as NT in 1993, much to the disgust of Andrew S. Tanenbaum, who had released Minix in 1987. I played with Minix in 1991 but deployed DR Multidos.

      One huge shortcoming in the NT design wasn't security, but the lack of multi-user, unlike UNIX, Xenix, Cromix and Linux etc. This is partly why there is no sudo. XP added half-baked user switching via the Terminal Services subsystem (I always disable those services). It "bites" MS in the Hosted/Cloud market: it's so much easier to put multiple users on shared Linux hosting, while Windows needs the massive RAM overhead of multiple VMs and multiple Windows OS instances to achieve the same properly.

  15. Anonymous Coward
    Anonymous Coward

    Thank you commentards

    I'd just like to say thank you to the author and commentards (sorry if that causes offence anywhere :)).

    Normally I seem to have to be the first to write the bits about the difference between performance and productivity, and how Gates' insistence on NT benchmark performance being reasonably close to W98 benchmark performance led to a major loss of robustness, stability, and security, because too much stuff got moved to kernel mode so it could share data more "efficiently" and thus make benchmark performance appear faster (at the cost of decreased productivity due to reduced stability).

    1. Anonymous Coward
      Anonymous Coward

      Re: Thank you commentards

      At least they learnt from it. Microsoft OSs now have one of the lowest vulnerability counts of comparable OSs, and one of the most powerful / flexible security architectures. Linux is positively archaic in comparison.

      1. Anonymous Coward
        Anonymous Coward

        Re: Thank you commentards

        "Microsoft OSs now have one of the lowest vulnerability counts of comparable OSs"

        I don't understand what that means. What OSes are comparable with Windows, other than Windows? Which vulnerability counts are you comparing? Website defacement counts, which were briefly a fashionable measure with MS folk a couple of weeks back, are not relevant as a measure of datacentre server security. You could perhaps try CVE, if you knew what that meant.

        "one of the most powerful / flexible security architectures."

        But as has already been pointed out (by Mage?), Windows still doesn't understand multi-user environments at all, let alone multiuser security. Which in a server OS is surely a bit of a dropoff.

        "Linux is positively archaic in comparison."

        Which Linux is archaic? Is the kernel archaic, or the GNU and other stuff on top of it, or both?

        Or are you saying that because something is based on tried tested and proven mature technologies, it is necessarily worse than anything which is new and shiny and different?

  16. Anonymous Coward
    Anonymous Coward

    3.51

    Always preferred 3.51, which was more of a bugfix pack for NT 3.1.

    It still had the Win 3.1 shell, which could be used to confuse people when you were running Office 97 (which drew its own controls) on it!

  17. welshie

    Server component still has security bugs from before then

    So NT 3.1 was a Netware killer? Not quite. It was positioned to steal market from another of Microsoft's products, er, OS/2 Lan Manager.

    For what it was, OS/2 Lan Manager was reasonably good. It had some sort of security model, with file system permissions and access control lists, which is pretty much what Windows NT stuck with until Active Directory came along, and it was only then that the Netware market share started falling.

    The Server and Workstation services in Windows NT were a direct port of the OS/2 Lan Manager code, and took all the bugs with them. To this day, there's a security bug that I raised with Microsoft for OS/2 Lan Manager in 1991 that still isn't fixed in the latest Windows server product. Microsoft's solution to this seems to have been to hide the exploitable functionality from the GUI and documentation, but it's still all there.

  18. gkroog

    New Technology?

    I thought it stood for something a bit more technical and IT specific like "Network Technology."

    As for the chance that Windows 8, like Windows NT 3.1, will be remembered with fondness in 20 years' time: no chance; the masses already hate it now...

  19. Andy 70

    humph

    Back in the day, when Microsoft decided they had had enough of pratting about with IBM developing OS/2, Microsoft decided to release their own 32-bit OS.

    So they started casting about for anyone making a 32-bit OS with linear memory access and pre-emptive multitasking: lightweight, fast, expandable.

    Impressed with Amiga's datatype file handling, library system, and device handling, they approached Commodore and asked if they could license AmigaOS v2/3. Microsoft would handle the 68k to x86 rebuild, drivers, tech, marketing, the works, and push it as their entry to the 32-bit market.

    Commodore management did the normal Commodore thing when someone waved bucket loads of potential cash at them, and told Microsoft to stuff it.

    Microsoft then built WindowsNT and took over the world.

    Goddamnit.

    1. Kebabbert

      Re: humph

      MS asked Commodore to license Amiga OS? Never heard this before. Do you have links, or is it just hearsay?

  20. Don Mitchell

    NT had a good solid API for systems programming

    There's a nice little book by Johnson Hart, "Win32 System Programming", that shows off the kernel API. When NT was being designed, UNIX in its various inconsistent forms was a mess, still had very poor features for memory allocation, threads, concurrency, etc. The problem was that nobody with a professional understanding of operating systems and requirements could sit down and do UNIX over again; NT was exactly that sort of fresh start. Asynchronous I/O, threads, multiple heaps, completion ports, a real event mechanism (UNIX signals were really useless in the old days), a wide variety of concurrency mechanisms, choices to wait for events, poll for events, etc. Windows was also modular, with well-defined interface mechanisms, dynamic linking, device driver interfaces. All things UNIX did not have in the 1980s (it was just a monolithic C program in those days; you had to recompile the OS to insert a new driver). So NT was a systems programmer's dream. Today, Linux has most of these features, but NT deserves a nod of respect from those of us who remember how backwards UNIX was when NT came out.
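
    A tiny sketch of the flavour of that API - a kernel event object, a thread and a timed wait, all of which were there from NT 3.1 (the timeout and messages are obviously just for illustration):

    #include <windows.h>
    #include <stdio.h>

    static HANDLE g_done;                      /* manual-reset event */

    static DWORD WINAPI worker(LPVOID arg)
    {
        (void)arg;
        Sleep(100);                            /* pretend to do some work */
        SetEvent(g_done);                      /* signal completion */
        return 0;
    }

    int main(void)
    {
        g_done = CreateEventA(NULL, TRUE, FALSE, NULL);   /* not signalled yet */
        HANDLE t = CreateThread(NULL, 0, worker, NULL, 0, NULL);
        if (!g_done || !t)
            return 1;

        /* Wait for the worker to signal, or give up after five seconds. */
        if (WaitForSingleObject(g_done, 5000) == WAIT_OBJECT_0)
            puts("worker finished");

        WaitForSingleObject(t, INFINITE);      /* join the thread */
        CloseHandle(t);
        CloseHandle(g_done);
        return 0;
    }

    The same WaitForSingleObject/WaitForMultipleObjects calls work on processes, threads, mutexes, semaphores and so on, which is roughly the "real event mechanism" being contrasted with UNIX signals above.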

  21. Anonymous Coward
    Anonymous Coward

    NT = Mica

    At DEC, Dave Cutler developed a new operating system called Mica for the new processor Prism. When the project was cancelled, Dave took the tape and walked over to Microsoft. Mica is the basis for NT. Some years later Microsoft compensated DEC/Compaq $200M for this.
