I'm having trouble thinking of a valuable use case for this 'OS'. It looks like a solution looking for a problem. Why run a whole OS - nano as it may be - to bootstrap an application?
Microsoft has slipped out a preview of Nano Server, its ultra-slim deployment option for Windows Server 2016, along with the full preview versions of the forthcoming operating system. Nano Server is “by far the most important change we have made in server since Windows NT", according to Microsoft Distinguished Engineer Jeffrey …
You need an OS underneath your application, or else you end up with a Hell of a lot of extra code to write, test and maintain.
Amongst other things, the OS serves as an abstraction layer between the application and the hardware. Without it, the application would be required to take care of the complexity of handling interrupts, drivers, memory management, disk or peripheral IO, etc.
Unless your application was a driver, or did nothing more than make an LED flash, I would have difficulty understanding what useful purpose it could serve.
It's MS-DOS Server!
It has just enough OS to talk to the hardware. They should have included a C:\> prompt.
What are the License Terms?
It was pretty clearly created in response to the similar (but smaller) "just enough OS" versions of Linux that have been around for a while, which are intended for use in packaged cloud or VM applications. For these, you include only the minimum of stuff necessary to make a specific application work.
Windows Nano sounds like it would be best suited if software vendors bundled it as part of their application for installation into a VM. That is, you buy (license) the software from vendor 'x', and it comes with Windows Nano included in the virtual machine image, along with just the dependencies needed to run that application. This would put the app dependencies issue back on the software vendor, rather than on the customer. Whether or not this can happen will depend of course on Microsoft's licensing terms and conditions.
The lack of features (including a fully functional shell) may be intentional, as Microsoft may wish to severely limit what can be done with Windows Nano to ensure that it doesn't cannibalize too much revenue from traditional forms of Windows. On the other hand, they may simply be bumping up against having painted themselves into a corner by putting all their eggs into the DotNet basket for the past decade. The corresponding stuff in Linux tends to be written in C, Python, Perl, or Ruby, and those languages are quite happy on trimmed down Linux systems.
"There is no IIS (Microsoft’s web server) on Nano Server, but it will support ASP.NET 5.0, as well as (according to Microsoft) PHP, Nginx, Python 3.5, Node.js, Go, MySQL, Java in its OpenJDK guise, Ruby 2.1.5 and the SQLite local database."
Looking at that software list, aside from ASP DotNet it's all stuff ported over from Linux, and supposedly even ASP DotNet will be running on Linux some time in the future. They're not giving customers a lot of reasons to run Windows. If vendors package the OS in a VM image with their software (see the first paragraph), then those vendors will be looking at which is cheaper and how to bargain for price cuts. I can't see that idea being too popular with Microsoft. It's possible they may be working on IIS and MS SQL Server versions which will run on this, in order to satisfy the customer base that is locked in to them and give them a reason to not simply port to Linux for this market and be done with it.
Overall it's an interesting idea, but success will depend on a lot of factors which aren't apparent yet.
Re: What are the License Terms?
It was pretty clearly created in response to the similar (but smaller) "just enough OS" versions of Linux that have been around for a while, which are intended for use in packaged cloud or VM applications
Maybe Microsoft wants to get into the NAS market? The irony of that would be brutal, as Linux became known and used in production for delivering file serving with Samba back when Microsoft's stuff was far from robust. I think I'll stick with what I know and don't have to worry about. The last thing I need is *more* license management overhead.
Yes yes very good
But does it have SSH or some sort of standardised remote access?
Plenty of things can run Python, nginx et al.
That's not a killer offering. Slick remote access would help.
I can sell Windows easily to clients, but I don't, because it makes my job as a techie that bit trickier.
There are a couple of aphorisms quoted in the world of Docker:
"Don't reconfigure, rebuild."
"If your Docker instance runs an SSH host, you're doing it wrong."
It looks like Microsoft are on board with this concept. But as a previous poster pointed out: why run Windows at all, given the supported software list? The usual reason - that it is familiar to Windows sysadmins - would now seem to be absent.
Re: Very Docker-ish.
Why run Windows at all, given the supported software list?
Because there are limits to Nadella's "liberal orientation". The primary use case for this server is cloud deployments and Azure. Suggest throwing out Windows as the base OS for Azure? Sure, you and I may agree, but try doing that one in Redmond.
What is interesting is the preparation of this OS for customer release. I smell a BIG "Azure On Premises" and Hybrid Cloud push coming up.
the usual reason
... is also that it runs applications that are familiar to Windows sysadmins. Things like Exchange and IIS, things that this version of "Windows" can't do either. Very odd premise. Why would I want to pay licence money for this in preference to the better-established and entirely free alternatives?
Re: Very Docker-ish.
I can think of many uses where a reconfig would be better than a rebuild / redeploy.
MS needs to sort out remote access, full stop.
PowerShell is great, but if you have to use remote desktop to get to it, it's wasted.
Re: Very Docker-ish.
Properly configured, you can run PowerShell locally and have the commands execute on a remote machine. You don't need to remote desktop to get to it.
Sounds like it's very closely related to the IoT version of Windows 10 - similar functionality levels, currently at least.
So it's so stripped down that none of the usual Windows management tools work and it doesn't support their own software, only Unix ports.
It's difficult to think of a reason why anyone would go for this or why this even exists.
>It's difficult to think of a reason why anyone would go for this or why this even exists.
I don't think MS expects *nix users to come flocking. They are trying to stop the flow of Windows users who need *nix tools jumping to a proper *nix OS.
I think the idea is that you run your OS-independent business-logic cloudy app on a cheaper version of Windows.
If you need IIS, you can jolly well carry on paying for Windows as usual.
It runs the usual minimal web stuff. It looks aimed at supporting one web application per server, to better isolate them. From a security perspective, that could be a good idea. It's not a way to run complex, large server-side applications yet. Anyway, larger, more powerful databases and the like can still be hosted on standard servers and accessed from Nano Servers, if needed.
Also, being unable to access a machine directly when issues hinder the network connection is not something I'd like on some types of servers. It's OK for something you can easily replace with a copy.
Add me to the confused list....
In common with most of the posts so far, I'm wondering..... why? and what for?
This seems so typical of what you have to put up with when MS rush products to market without finishing them properly: massively convoluted installation procedure with a "cobbled together at the last minute" feel about it? Limited subset of features guaranteed to trip you up the moment you need to use something you assume is there, but then find out is missing? Can only connect to the server using esoteric proprietary Windows tools from a Windows PC that has them installed?
And using 1GB of memory for an install doesn't exactly sound like "nano" to me - I've run proper Windows server installations successfully on half of that in the past. All in all, this sounds like something that will have the Linux people chuckling away at it, whilst some poor sod at Microsoft Press has to churn out yet another breeze-block-sized textbook that makes it usable, but spends half its length going on and on about how brilliant the product is.
Re: Add me to the confused list....
> And using 1GB of memory for an install doesn't exactly sound like "nano" to me - I've run proper Windows server installations successfully on half of that in the past
Windows in 64MB - what was it, XP? 2000?
Re: Add me to the confused list....
I got Apache, PHP and MySQL running on Windows 2K Server, the whole kaboodle running with 35MB of RAM in production (4x16MB chips on the mobo), for a small CRUD data entry app for 2 people, with a PHP script that pulled photo processing information from Kodak's servers in the US. Worked like a charm, and booted fast too, though I did have to deinstall more apps and services than Microsoft would have liked. I would have used NT 4, but that did not have native USB support and the PC's disk drive was naff.
From this, what has MS just invented that is so earth shattering?
Re: Add me to the confused list....
What was it? Linux in 8MB? Or was that 4? It worked quite well in 1998 - including X Window and software development, with a mail server and DNS even.
The why and what for
Per Microsoft's Jeffrey Snover (chief architect for Windows Server), Nano Server is *primarily* a scratch-your-own-itch refactoring.
The biggest user of Windows Server - by far - is Microsoft Azure. If you can save 20% on the size, you can increase VM density by 25%; if you can save 50%, you can double the VM density.
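The density arithmetic is easy to sanity-check. A back-of-envelope sketch in Python (the function name here is just for illustration): if every VM's footprint shrinks by some fraction, the same host capacity holds proportionally more VMs.

```python
# Back-of-envelope check: if each VM's footprint shrinks by a fraction
# `saving`, the same host capacity holds 1 / (1 - saving) times as many VMs.

def density_multiplier(saving: float) -> float:
    """How many times more VMs fit after shrinking each by `saving`."""
    return 1.0 / (1.0 - saving)

print(density_multiplier(0.50))            # 2.0  -> a 50% saving doubles density
print(round(density_multiplier(0.20), 4))  # 1.25 -> a 20% saving gives 25% more
```

Note the asymmetry: the density gain grows faster than the saving, which is why a shave of the OS footprint pays off disproportionately at Azure scale.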
Hard disk footprint has been reduced by a factor of 20. That is a *massive* saving once it scales across Azure.
OS RAM usage is down considerably as well. Fewer features mean fewer patches (both bugfixes and security patches) and consequently fewer reboots. Microsoft investigated how many of the 2014 patches touched the components in Nano and concluded that 80% of them would not have been required, as they concerned components not in Nano.
But to turn the question around: why does a *server* - by definition a machine whose primary task is to run a workload - need a *command interpreter*, a *shell* and an *editor*, even a very basic one?
Why should you need to log in to a machine over SSH, start a command interpreter on the server and issue commands? Why would you want a Ruby interpreter? All extra components have to be maintained and add to the attack surface.
Microsoft has come to this realization late, but at least they are now going the whole way, and may very well take it a bit further.
Ideally the remote server is "just a server", with a standard interface to control and configure it and no way to log in locally. That is what Nano Server is.
Btw, PowerShell has this nice property that it can submit "script blocks". Script blocks are semi-compiled script, so while MS will still need some PowerShell infrastructure on Nano, they could very well cut away the *shell* part of it, leaving only the execution engine. Already today, if you use PowerShell remoting, you can send scripts to the remote that are not just text: they are parsed and turned into a PS script block locally, and then sent to the remote PS engine. The upshot is that you can create scripts that refer to *local* script files but execute them remotely; PS will send the parsed script blocks for those files across the wire.
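For readers outside the PowerShell world, here is a rough analogy of that "ship pre-parsed code, not text" idea, sketched in Python rather than PowerShell (so the mechanism and names are illustrative, not what PS actually uses): compile the script locally, serialize the semi-compiled form, and execute it on the "remote" side without re-parsing.

```python
# Rough analogy of script-block remoting: parse/compile locally,
# send the serialized code object, execute remotely. Illustrative only.
import marshal

script = "result = sum(range(10))"

# "Local" side: parse + compile, then serialize the semi-compiled form.
code_obj = compile(script, "<local-script>", "exec")
wire_bytes = marshal.dumps(code_obj)   # this is what crosses the wire

# "Remote" side: rebuild the code object and run it - no text re-parsing.
remote_ns = {}
exec(marshal.loads(wire_bytes), remote_ns)
print(remote_ns["result"])  # 45
```

The point of the analogy: once the remote end only ever receives pre-parsed code, it needs an execution engine but no interactive shell, which is exactly the part PowerShell could drop on Nano.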
Re: The why and what for
Exactly my thinking. It seems like Microsoft has reinvented something well known in other ecosystems and very useful, namely a headless server platform optimized for running open source server software (e.g. the open source version of .NET, nginx, Python etc.) on top of a minimal kernel. Since a large number of Windows shops do not understand the idea of a "headless machine", they will obviously complain, but that does not make the whole idea wrong at all.
Is it something I could put to use personally? Probably not... but I could toy with it, especially if it ran bare metal on a cheap-and-cheerful ARM box. Or on top of a kvm hypervisor on a regular PC.
Re: The why and what for
As long as a Nano Server runs in a VM, you may have little reason to log on directly. But if it runs on its own hardware - say, some kind of appliance - you may need some direct access if the network components don't work for some reason. Otherwise, no matter how good PowerShell is at remoting, you're cut off, and you'll never know why...
Pets or cattle?
> But if it runs on its own hardware, say some kind of appliance, you may need some direct access if the network components don't work for some reason.
The mantra is: you should not manage your servers like your pets; you should manage them like your cattle. As Snover said: "If one gets ill you do not check it into the animal hospital - you fire up the barbecue". While I personally would not like to eat a sick animal, I totally get the idea when it comes to servers.
If a server becomes unresponsive, you nuke it and re-install it using whatever method you used originally (PXE). Your environment, based on PowerShell DSC or Chef or Puppet, should ensure that the server comes up configured like the rest of the herd. If that fails, you discard it.
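The DSC/Chef/Puppet approach mentioned above boils down to declaring a desired state and converging towards it. A minimal sketch of that idea in Python - every name and setting here is made up for illustration, and this is not any real tool's API:

```python
# Minimal sketch of the "desired state" idea behind DSC/Chef/Puppet:
# declare what the server should look like, compare with what it is,
# and apply only the differences. All names here are illustrative.

desired = {"hostname": "web-07", "role": "nginx", "port": 8080}

def reconcile(actual: dict, desired: dict) -> dict:
    """Return the settings that must change to reach the desired state."""
    return {k: v for k, v in desired.items() if actual.get(k) != v}

# A freshly re-imaged box reports only its defaults:
actual = {"hostname": "localhost", "role": None, "port": 8080}
drift = reconcile(actual, desired)
print(drift)  # {'hostname': 'web-07', 'role': 'nginx'}
```

Because the reconcile step is idempotent (running it on an already-correct server changes nothing), a herd member that fails mid-configuration can simply be nuked and re-run from scratch - which is exactly the cattle model.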
You have to consider that the target for Nano is not your basement hobby server. It is servers on (huge) datacenter scale *and* single workload VMs.
When the datacenter is built from containers with hundreds or thousands of servers in each container, you do NOT send in a repairman (veterinarian) when one misbehaves. You disable it and chalk it up to the cost of doing business. When enough servers have failed, you may consider refurbishing them.
Your infrastructure should already be resistant to server failures. As soon as a server fails, the workload should shift to other servers, either as part of clustering or hot-standby or super-fast provisioning. Either way, the only reason to try to salvage a server should be HW savings, not to make services available again. If you depend on salvaging a bad server for service availability, you are doing it wrong.
Which means that you should be in no rush. Whatever was on that server was redundant (in the sense that it is available elsewhere), and you can just re-commission it at a time of your choosing and with no regard to data. I.e. re-install.
"What was it? Linux in 8MB? Or was that 4? It worked quite well in 1998 - including X Window and software development, with a mail server and DNS even."
Not 4. I ran Linux on a 4MB system, and X + xterm was enough to soak it. To be honest, 8MB was pretty minimal once you were using X. (X was known even in the 1980s for its "extreme" hardware requirement of needing 8-16MB of RAM to run decently.)
Anyway... if I were Microsoft, I would make the MSI installer service an installable/deinstallable package. You install MSI, install whatever, then deinstall MSI (so you don't have to worry about the potential bloat and security implications of having an installer present). I really can't understand not having some kind of local command prompt either. I would bet the reason there is no GUI at all (not even a screen with a command prompt) is that they found a massive wad of interdependent spaghetti code, had no possible chance to separate it, and to get the size down they had to rip out the whole enchilada.
It'll be interesting to see if they get this to some kind of reasonably usable state; it'll certainly be better from a security standpoint than the status quo.
if I were Microsoft, I would make the MSI installer service an installable/deinstallable package.
If I were Microsoft, I would port dpkg or rpm to Windows, then put apt or yum on top. This would cost them a couple of hours of some junior engineer's time, and the rest of the plan then slots into place...
The code is readily available, and can be shipped with Windows just so long as MS satisfies the 3(b) promise. It costs zero cash. And porting it really won't be a big deal...
Unless you _really_ need something that only runs on Windows (and probably won't run on this anyway), why use Nano if you can roll out a free bare-metal Linux distro with very well known and well documented tools?
This is going to be a very tough sell for Microsoft.
Be a lightweight host for Hyper-V VMs and containerised applications
So is this replacing the current Server Core install option?
I suspect the primary use you'll see for Nano Server is as a Hyper-V host, where the reduced reboots and smaller footprint will be a welcome change, making it similar to VMware ESXi.