Windows Server 2012: Smarter, stronger, frustrating

Microsoft has released Windows Server 2012, based on the same core code as Windows 8. Yes, it has the same Start screen in place of the Start menu, but that is of little importance, particularly since Microsoft is pushing the idea of installing the Server Core edition – which has no Graphical User Interface. If you do install a …

COMMENTS

This topic is closed for new posts.


  1. RICHTO
    Mushroom

    NT has never been a monolithic-type system. It has always been a hybrid microkernel design. Only really old UNIX legacy based things like Linux are monolithic these days.

    1. Anonymous Coward
      WTF?

      @richto

      "Only really old UNIX legacy based things like Linux are monolithic these days."

      You mean those old unix legacy things that MS has been desperately playing catch-up with ever since it released NT? When - *gasp* - Windows went 32-bit protected mode & multi-user. Not simultaneous multi-user mind, that had to wait. Along with proper remote login. And networked graphics. And then, after years of being told the GUI is all you need, they finally catch the clue train and come up with PowerShell. An oxymoron when you compare it to the unix shells, but better than nothing. Now we have - TA DAA! - Server Core! Wow! An OS that can be run without a GUI - I assume - remotely. Now where have I seen that before... Naturally it won't be via ssh - that would be too easy and standard for MS. No doubt it will be some overcomplicated roll-your-own solution, probably involving some GUI-for-idiots on the client.

      1. h4rm0ny

        Re: @richto

        So looking at your list of things that UNIX and Linux had that Windows Server 2012 now has too, it seems you think that Server 2012 has now caught up with UNIX and Linux?

        1. Anonymous Coward
          Linux

          Re: @richto

          "So looking at your list of things that UNIX and Linux had that Windows Server 2012 now has too, it seems you think that Server 2012 has now caught up with UNIX and Linux?"

          When Windows gets a decent multi-root file system, can load and unload drivers on the fly without a reboot, allows you to control EVERYTHING on the command line with full process daisy-chaining and job control (mainly backgrounding jobs), has full remote command line access (ie not requiring a dozen different single-task GUI clients), dumps that idiotic registry, implements a sane version of sudo, setuid bits in the filesystem and fork(), and allows different desktops running under different users simultaneously on separate screens (X Windows on various versions of unix has managed that since the early 90s), then yes, it will have caught up.

          1. h4rm0ny

            Re: @richto

            I think with Server 2012 and PowerShell you have the remote command line access you ask for, but I've modded you up for a good answer.
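
            For what it's worth, a rough sketch of that remote access via PowerShell remoting (the server name below is a placeholder, and remoting has to be enabled on the target first):

              # Run once on the server to allow remote PowerShell sessions
              Enable-PSRemoting -Force

              # From an admin workstation: run a one-off command against the server
              Invoke-Command -ComputerName SRV2012 -ScriptBlock { Get-Service | Where-Object Status -eq 'Running' }

              # Or drop into an interactive remote shell
              Enter-PSSession -ComputerName SRV2012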

      2. Anonymous Coward
        Anonymous Coward

        Re: @richto

        @Boltar, have you forgotten your meds? You need to calm down a bit.

    2. Anonymous Coward
      Anonymous Coward

      Tired old troll is old and tired.

      Whatever next? An explanation of how RISC is so much better than CISC?

    3. Hi Wreck
      Coat

      Nt

    NT started off as a nice microkernel, but that all got tossed out of the pram when they integrated the GUI stuff into the kernel. It has taken them eons to fix that one. Look at QNX for a microkernel and OS that would do Tanenbaum proud.

    4. Anonymous Coward
      Anonymous Coward

      Monolithic kernel isn't necessarily bad

      Just as much as a microkernel isn't necessarily good. Even dear "AST" would admit this today, even after telling Mr Torvalds that he wouldn't receive many marks for a monolithic kernel submitted as an assignment. :-)

      They're different ways of tackling the same problem. There are advantages in both. Performance is one disadvantage of the microkernel model; it took Microsoft quite some time to get their "layered" kernel right. The earlier versions of Windows NT weren't exactly high performers, and Windows NT 4 lumbered along a bit... Windows 2000 was better. Then they started piling on the rubbish in Windows XP and Vista. I observe some of this rubbish is noticeable by its absence in Windows 8.

      Portability is one of the strengths of a microkernel. It's therefore ironic that Windows NT, being largely microkernel-based, runs on so few platforms compared to Linux, which is, as you rightly point out, monolithic. Windows NT did run on more, but I suppose they decided it wasn't worth pursuing the others. Does make you wonder what it'd look like had they decided to keep an ARM port of Windows NT going, though.

      Where Windows is considered "monolithic" is more to do with the fact that the user land and front-end seems to be conjoined at the hip with the back-end kernel. I can take a Ubuntu Linux desktop, and completely strip away the GUI environment leaving only the command line. Indeed, I did this very act today.

      Try that with Windows XP, or 7, or Server 2008. No dice, the GUI and kernel are inseparably linked. Same with MacOS X, although MacOS X without its GUI is essentially Darwin, so probably doable, just not obvious. Windows has been that way since NT was first released. Consumer Windows has been like this since Windows 95.

      The fact that Microsoft are recognising this as a limitation of their platform, however, and are now taking steps to remedy it, I can say is a good thing. Now clean up the POSIX layer a bit, and we might even have a decent VMS-like Unix clone that will make running applications designed for Linux a lot easier.

      1. Anonymous Coward
        Anonymous Coward

        Re: Monolithic kernel isn't necessarily bad

        @Stuart. You've been able to run Windows without a GUI since 2008; also, a GUI is not the millstone it once was to an OS.

    5. Don Mitchell

      The NT microkernel architecture was rejected very early. It was going to support Win16, Win32 and even POSIX, but no such system was ever released. NT and even DOS/Windows 9X were still much more modular than UNIX. Microsoft used DLLs, COM interfaces and device-driver interfaces (DDI) long before UNIX had them. Even more complex technology was developed later to support extensible GUI interfaces (linking and embedding, Visual Basic and such stuff). Microsoft did this to allow them to update small parts of the OS instead of shipping a whole OS release to their customers every time.

      I remember installing device drivers on my SUN workstation in the late 1980s, and you had to edit the interrupt vector tables and recompile the kernel, because at that point in time UNIX was still just a giant monolithic C program. NT was much more advanced when it came out in 1989 -- it supported threads and concurrency much better than UNIX, it had async I/O and events and light-weight coroutines (fibers), I/O completion ports, etc. These features were more or less added to Linux much later on (I've heard async I/O in Linux is pretty dodgy).

      1. Anonymous Coward
        FAIL

        "device-driver interfaces (DDI) long before UNIX had them."

        You have heard of the unix /dev directory, right? What do you think that is?

        1. Ken Hagan Gold badge

          Re: the unix /dev directory

          It is a way of *naming* devices. Beyond the ability to open and close handles to the device (and thereby read and write data) it offers *nothing* in the way of interface standardisation.

          1. Anonymous Coward
            WTF?

            Re: the unix /dev directory

            "It is a way of *naming* devices. Beyond the ability to open and close handles to the device (and thereby read and write data) it offer *nothing* in the way of interface standardisation."

            The device handle IS the interface. Go read up on ioctl() & fcntl(); I don't have time to have a discussion with an idiot.

            1. Ken Hagan Gold badge

              Re: the unix /dev directory

              Why don't you read up on "interface"? It doesn't just mean "hole through which you can pass unspecified data".

      2. Anonymous Coward
        Headmaster

        The NT microkernel architecture was rejected very early. It was going to support Win16, Win32 and even POSIX, but no such system was ever released.

        That's not what defines a microkernel. You're describing a system call interception layer, like those that many Unix and Unix-like systems have had since the early 1990s. For example, FreeBSD uses this to provide the ability to run Linux binaries, and NetBSD uses a similar technique to maintain backwards compatibility. I do recall a POSIX compatibility toolkit for NT, but it wasn't usable and simply allowed MS to sell software to the US government.

        NT and even DOS/Windows 9X were still much more modular than UNIX. Microsoft used DLLs, COM interfaces and device-driver interfaces (DDI) long before UNIX had them.

        DLLs are just shared libraries (albeit done in a very awkward way). Unix had shared libraries well before MS copied the idea. COM is vaguely similar to the concepts that Unix-derived microkernels such as Mach implemented years before, but MS did it in a way that wasn't clean enough and resulted in many security vulnerabilities.

        Even more complex technology was developed later to support extensible GUI interfaces (linking and embedding, Visual Basic and such stuff). Microsoft did this to allow them to update small parts of the OS instead of shipping a whole OS release to their customers every time.

        VB has nothing to do with modularity - it's just a programming language and framework similar to Tcl/Tk in the same timeframe. As for OLE, I believe ToolTalk on Unix systems that used CDE predates it.

        I remember installing device drivers on my SUN workstation in the late 1980s, and you had to edit the interrupt vector tables and recompile the kernel, because at that point in time UNIX was still just a giant monolithic C program. NT was much more advanced when it came out in 1989 -- it supported threads and concurrency much better than UNIX, it had async I/O and events and light-weight coroutines (fibers), I/O completion ports, etc. These features were more or less added to Linux much later on (I've heard async I/O in Linux is pretty dodgy).

        So you're comparing SunOS to early versions of NT? On the one hand those early versions didn't offer many of the features you describe, and on the other hand they lacked many features of Unix - some of which NT is only now gaining in the 2012 release, 23 years later.

  2. durbans
    Stop

    Exchange 2012?!

    "The new Server Manager is in many cases a wrapper for PowerShell, something that will be familiar to Exchange 2012 administrators."

    I'm going to assume that you meant Exchange 2010, which would make perfect sense :-)

  3. durbans
    Happy

    But nitpicking aside.....

    Server 2012 looks fantastic...

  4. h4rm0ny
    Thumb Up

    The de-duplication is awesome. Run 64 Linux VMs on it and have all the redundant OS code in each of those VMs exist only once on the disk, massively reducing storage space. Deduplication services already exist, but this is integrated and standard and based on conversations with others, it looks like it's significantly superior.

    Also love how all the GUI elements are now wrappers for PowerShell and are off in the preferred install. Makes it more like the power you get with the Linux CLI. Very impressive.
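
    For anyone who wants to try the dedup, here's roughly what switching it on looks like for a data volume using the 2012 cmdlets (the drive letter is just a placeholder):

      # Install the deduplication role service, then enable it on a data volume
      Install-WindowsFeature FS-Data-Deduplication
      Enable-DedupVolume -Volume E:

      # Kick off an optimisation pass and check how much space was reclaimed
      Start-DedupJob -Volume E: -Type Optimization
      Get-DedupStatus -Volume E: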

    1. Keith 72

      The de-dup isn't useful for the VMs you are running - they'll all be rehydrated when you run the VMs. It's great for libraries of VMs though.

  5. rich0d

    When you say "up to two virtual instances", does that mean Hyper-V in 2012 Std is *limited* to 2 Windows VMs? Or is that what's included in the licensing? Just that, IIRC, with 2k8 R2 Enterprise you were entitled to 4(?) "free" 2k8 VMs.

    1. Anonymous Coward
      Anonymous Coward

      Standard licence...

      MS give you two VMs included in the licence costs for the server install and, much like RDP sessions, you can run a dedicated server with licences to run more.

      1. wayneme

        Re: Standard licence...

        (I am the Windows Server PM for Microsoft UK)

        Correct on the licensing rights mentioned above for Standard; for Datacenter, the customer has an unlimited number of VMs within their usage rights.

        There are a couple of key things which have been missed from this article:

        1) Windows Server 2012 goes beyond just machine virtualization. WS2012 addresses virtualization of compute, storage and networking, with features such as Shared-Nothing Live Migration and Hyper-V Network Virtualization.

        2) Windows Server 2012 delivers simplicity in managing a server estate, with a large investment in PowerShell: 2000+ new cmdlets plus IntelliSense.

        3) A platform to enable multi-tenancy, high-density websites (CPU throttling for websites) and hybrid applications. System Center + Windows Azure complete the story by giving customers a complete private, public and hybrid experience.

        4) A simplified, rich VDI experience for users and IT managers, and dynamic access control to enable compliance within the organization.

        The GUI is pretty :) but for many IT Pros and folk using servers every day it is the power of what is under the hood that really makes Windows Server 2012 special....
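
        As a rough illustration of point 1, a shared-nothing live migration can be kicked off from PowerShell along these lines (just a sketch; the VM name, destination host and path are placeholders):

          # Move a running VM, including its storage, to another Hyper-V host
          # with no shared storage or cluster between the two machines
          Move-VM -Name "WebFrontEnd01" -DestinationHost "HYPERV02" `
                  -IncludeStorage -DestinationStoragePath "D:\VMs\WebFrontEnd01"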

        Try Windows Server for yourselves at www.microsoft.com/windowsserver

        1. h4rm0ny

          Re: Standard licence...

          "System Center + Windows Azure complete the story but giving customers a complete Private, Public to hybrid experience."

          How does this work and what can you do with it?

  6. Destroy All Monsters Silver badge
    Windows

    Hmm.... sounds good. Really good. I am intrigued.

    Wait ... did I say that?

  7. /dev/null
    Meh

    Does Windows Server Core 2012 really not have a GUI?

    Or is it like Server Core 2008, which has a desktop, just no Windows Explorer?

    1. Captain Scarlet Silver badge

      Re: Does Windows Server Core 2012 really not have a GUI?

      Its "Modular" (This is advertising so take with a pince of salt)

      You can have what you want, either a shell or no shell.

    2. h4rm0ny

      Re: Does Windows Server Core 2012 really not have a GUI?

      There are actually three modes: a pure GUI-less instance, a version that has a desktop but no real GUI per se, and then the full GUI thing with all the tools, et al. It's my understanding that the first is the preference if you're managing multiple servers - you just leave it without a GUI and manage it remotely using PowerShell scripts or the Server Manager tool, which will handle all your instances.

      They are different modes rather than different installs. So if you want to put the GUI on one of your instances temporarily, you can do that, and it will be removed when you switch back to the GUI-less mode. I don't think it really makes a difference with a single server, but when you have a lot, it's significant. Just as I don't install KDE on all my Linux servers because I manage them remotely, why have a GUI on Server 2012 for the same scenarios?
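
      For reference, the switch between modes is itself just a feature install/uninstall from PowerShell, roughly like this (feature names as documented for Server 2012):

        # Drop from full GUI back to Server Core (the shell goes away on reboot)
        Uninstall-WindowsFeature Server-Gui-Shell -Restart

        # Put the full GUI back temporarily when you need it
        Install-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart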

    3. Anonymous Coward
      Anonymous Coward

      Re: Does Windows Server Core 2012 really not have a GUI?

      As someone else mentioned, there are three interface modes:

      1) full GUI, 'nough said

      2) minimal server interface. This will give you a PowerShell window, a task manager window, and best of all, the Server Manager. This is great if your system is up and running and you just need to check on things regularly.

      3) Core. Just PowerShell.

      Best thing about this new setup is, you can completely remove the features you don't want; e.g. if you're not using IIS on a particular server, you can delete the installer files. It allows you to trim the OS down to the bare essentials. Use PowerShell Web Access to perform actions on the server (there's even a nifty new PowerShell menu which will let you control the Server Manager features through the console).
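
      Roughly, that trimming looks like this (a sketch using the documented -Remove switch; the Web-Server role is just an example):

        # Uninstall a role and also delete its installation payload from the disk
        Uninstall-WindowsFeature Web-Server -Remove

        # Features whose payload is gone show up as "Removed"; reinstalling later
        # pulls the files back from Windows Update or a -Source path
        Get-WindowsFeature Web-Server
        Install-WindowsFeature Web-Server -Source D:\sources\sxs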

      Frankly, this is the server they should have built years ago. I've never been a fan of Microsoft, but even I have to admit this one is pretty great.

    4. Anonymous Coward
      Anonymous Coward

      Re: Does Windows Server Core 2012 really not have a GUI?

      It's hardly a GUI: yes it has a mouse, but all you have to look at is a Cmd Prompt or Notepad (oh, and Task Manager for when you forget and close that Cmd Prompt!).

  8. TeeCee Gold badge
    WTF?

    "You can even assign virtual disks to folders rather than drive letters........."

    You've been able to mount disk partitions as folders in Windows since Jesus was a lad. Why would the virtual ones be any different?
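
    For what it's worth, a rough sketch of doing it with the 2012 storage cmdlets (disk and partition numbers and the path are placeholders; the older mountvol/Disk Management routes still work too):

      # Mount an existing partition into an empty NTFS folder instead of a drive letter
      New-Item -ItemType Directory -Path "C:\data" -Force | Out-Null
      Add-PartitionAccessPath -DiskNumber 2 -PartitionNumber 1 -AccessPath "C:\data"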

    1. Anthony Chambers

      I thought this

      I've no experience of virtual disks though, so I assumed that they had been different up to now?

      1. david 12 Silver badge

        Re: I thought this

        Virtual disks mounted as folders work on my Windows 7. Not on WinXP or 2003 -- except using a virtual mount point for a network drive, as used by roaming profiles.

    2. P. Lee

      > You've been able to mount disk partitions as folders in Windows since Jesus was a lad.

      but have they sorted out the mess that causes with checking disk sizes?

      50Gig: c:\

      1000Gig: c:\mybigdisk

      I want to install a new program in c:\mybigdisk and the installer checks the disk size of c: and doesn't allow it.

      I haven't tried it in a while, so it may have been fixed - answers on a postcard.

      That said, I'm really annoyed at KDE, the way it holds volume mounts in the GUI layer rather than updating the underlying OS. At least OSX bungs everything in /Volumes so the whole OS has access and not just the GUI.

      1. Anonymous Coward
        Anonymous Coward

        Re: > You've been able to mount disk partitions...

        @P Lee, what you describe is a problem with lazily written installers, not with the OS.

  9. wanderson

    Windows Server 2012 functionality?

    Earlier this year it was reported that Microsoft was attempting to update/replace its aging and limited NTFS file system with most, if not all, of a licensed Zettabyte File System owned by Oracle.

    Nowhere in this or any other article written about the new Windows 2012 by the Microsoft wow guys has this issue been addressed. More stories about a perpetual "point & click" administration interface that is nothing new on Windows, and "updated" Hyper-V virtualization that in September 2012 remains years behind VMware and Red Hat KVM virtualization, are not worth reading for those looking for substantive information on Microsoft's "great, improved, new OS".

    Continuing gee-whiz propaganda about anything Microsoft is a waste of time for The Register and for technologists looking for true innovation. Everything said about this new Windows - its "technical features" - has been available from Linux and *BSD for several years, with significant performance, reliability and security advantages over Windows by every metric.

    1. Anonymous Coward
      Anonymous Coward

      Re: Windows Server 2012 functionality?

      Seriously? I find that very hard to believe indeed.

      NTFS is a mature filesystem; new features arrive in it with every version of Windows. It's hardly limited or "aging", assuming that the aging of a product is in some way bad. What makes you think it is limited?

      I don't really care about the "my OS is better than your OS" thing. What does it matter as long as you've got an appropriate tool for the job?

  10. George of the Jungle

    Windows Home Server anyone?

    A lot of this sounds pretty good for the corporation and small business, where admittedly MSFT wants to be.

    Pity that those of us with Windows Home Servers will probably be borked on this release. MSFT seems to have given up on that space.

  11. vordan
    Paris Hilton

    Stupid question

    This may be a stupid question, but does it still have the Registry? How are the configurations persisted?

    1. Ken Hagan Gold badge

      Re: Stupid question

      Last I heard, Windows still has a registry and Linux still has an /etc directory tree full of poky little text files all in different formats. Also last I heard, *neither* is a single point of failure, since both Windows' implementation of registry hives and Linux's implementation of filesystems are generally pretty damn robust. Both, however, can be easily trashed by end-users who don't know what they are doing.

      I believe that "best practice" on Linux is to run some sort of version control software on the /etc tree. This at least allows for reverting bad changes and documenting the reasons for good ones. I don't think Windows has anything equivalent. Linux therefore has an architectural edge in my book, but this is pretty marginal.

  12. Anonymous Coward
    Windows

    Where are the admin tools?!

    I smell a little fail here.

    While I applaud the course Microsoft is taking where they're basically stripping the GUI (or what's left of it anyway) from the server environment I get the feeling they're ignoring an important part of the process.

    Say I grab Server 2012 and put it on my network, which currently contains Win7 and WinXP clients and some older Win2k3 servers. So how exactly am I going to administer this server, considering that there are new tools and scripts to be used? (Put differently and more on topic: new roles to keep in mind, new specific admin features to use (which should be addressed within the MSC admin scripts), etc.)

    Usually you'd get yourself 'RSAT' (Remote Server Administration Tools), which is a very fine collection of tools you can use to administer specific aspects of your server remotely. But guess what? When it comes to Server 2012 there is an RSAT version which specifically addresses this, but it's only available for the customer preview of Windows 8.

    The Windows 7 version of RSAT still sits on version 1.0 SP1, and is thus only able to administer servers ranging from Win2k3 up to the current Server 2008 release.

    Wouldn't it have made a /little/ more sense to release both the server and the admin tools at the same time, especially considering that the default installation behaviour is core mode?

    Oh, and don't take my word for it. Simply search the MS download center for RSAT yourself (link to MS download center).

    1. wayneme

      Re: Where are the admin tools?!

      (MSFT - Windows Server PM)

      hey there,

      Just to clarify, you get the choice of how you wish to deploy Windows Server: you can choose between a Server Core installation and Server with a GUI.

      Here is an article which helps describe it: http://technet.microsoft.com/en-us/library/hh831786.aspx

      http://technet.microsoft.com/en-US/windowsserver.

      There is another great resource for FREE training at www.microsoftvirtualacademy.com; feel free to also register for the tech launch event on the 25th Sept: https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032523367&Culture=en-GB&community=0

      1. KroSha
        FAIL

        Re: Where are the admin tools?!

        Not what he asked. "We" want to be able to install as GUI-less Core, but still be able to run RSAT from "our" desktops. Y'know, remotely? Isn't that the point of Remote Tools; that they plug straight into the command line, without using the command line?

        Minor FAIL.

        1. NogginTheNog

          Re: Where are the admin tools?!

          Doh! You shouldn't be running admin tools from your desktop as the same user you get your email and surf the web with.

          Why not build a Server 2012 VM (locally or remotely) and administer from that? Or a Windows 8 one, come to think of it?
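
          If you go that route, standing the admin VM up on a 2012 Hyper-V host is only a couple of cmdlets, roughly like this (names, sizes and paths are placeholders):

            # Create a small VM to act as an admin/jump box, then start it
            New-VM -Name "AdminBox" -MemoryStartupBytes 2GB `
                   -NewVHDPath "D:\VMs\AdminBox.vhdx" -NewVHDSizeBytes 40GB -SwitchName "LAN"
            Start-VM -Name "AdminBox"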

          Agreed this is a bit of a biatch, but MS do have form for this: didn't they do something similar with previous Server 2008/R2 releases, making the admin tools only work properly on Vista or Win7?

  13. This post has been deleted by its author

    1. Anonymous Coward
      Anonymous Coward

      Re: Expensive

      Get a Mac?

      The server software is cheap and you can graduate to a more capable server *nix when you grow

      /coat

      1. Anonymous Coward
        Anonymous Coward

        Re: Expensive

        Mac as a server?

        SRSLY?

  14. Prof Denzil Dexter

    I hate per core licences

    Despite their proliferation, I hate them.

    I pay you for the right to use the code. What I run it on shouldn't affect the price. Why charge me more for running it on 4 cores? That's like Shell charging me £1.50 a litre on an Astra but £6 a litre if I drive a DB9.

    1. David Dawson
      Trollface

      Re: I hate per core licences

      Well, more like if you were charged some kind of registration fee for being able to drive cars on the road, and if you had a more powerful/bigger car, you paid more even though you don't particularly take up more space or go over the road more.

      Yes, imagine if the government levied different levels of tax on more powerful cars! The tragedy....

    2. Jean-Luc
      Joke

      Re: I hate per core licences

      > That's like Shell charging me £1.50 a litre on an Astra but £6 a litre if I drive a DB9.

      Shell is happy enough with the extra litres that sweet DB9 will be guzzling. Heck, it might even give ya a volume discount.

      Plus, your local garage will correct Shell's oversight and not forget to overcharge you.
