I'm more curious to see the security they've deployed to ensure VXers can't use it to modify the Kernel behind our backs.
A collaboration between SUSE and Red Hat is going to bring relief to Linux users the world over: they'll be able to patch their systems without reboots. The live patching infrastructure looks set to become available in version 3.20 of the Linux kernel. The two organisations introduced their distribution-specific live patching …
Just because it's in the mainline kernel doesn't mean that it will be turned on by default in all distros. More than likely it will be configurable via one or more kernel options at compile time. So if there are security concerns in some use cases you can just turn it off.
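For what it's worth, the infrastructure that eventually merged is indeed behind a compile-time switch; a sketch of the relevant .config lines (option names as I understand them from the merged common core, so treat as illustrative rather than gospel):

```
# Illustrative kernel .config fragment - a distro with security
# concerns can simply build with this switched off.
CONFIG_LIVEPATCH=y
# Live patching reroutes function entry points via the ftrace
# infrastructure, so it also needs:
CONFIG_DYNAMIC_FTRACE_WITH_REGS=y
```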
At any rate it sounds good to me in theory. We'll have to see how it does in practice.
Viruses already commonly live patch OS kernels. What the sort of live patching talked about in this article does is to figure out how to patch kernels automatically rather than requiring maintainers to craft each live patch by hand.
This is different from a rootkit, which mostly intercepts and "detours" system calls. Here you're going to change how the kernel itself works, and usually the most difficult thing to handle is what is in memory. Code may hold pointers to memory areas holding data structures. Those data structures may change size due to a patch, and may also need to be relocated because of such size changes.
And to make it even worse, some data may be shared across different code. The tricky part is to ensure everything is updated correctly, without "split brain" situations that can lead to havoc.
"Just because it's in the main line kernel"
It's not. And won't be anytime soon.
Hey @jake, are you watching over Linus's shoulder as he types the alleged "... and won't be anytime soon"? Because no reply to Jiri's message can be seen on the LKML list at this moment.
"an early shot at live patching called kSplice was acquired and turned into a proprietary service." - KSplice was acquired by Oracle, who offer it for their own Linux distro.
Solaris had this way back
Of course, Solaris had hot patching of the kernel years before linux. Back in... Solaris 8? Or earlier?
DLL hell comes to Linux
Very clever stuff
But if you're worried about restarting the machine then you've architected your service badly. Uptime is the enemy of good design practice. Instead, high availability should be used so you don't care about restarting hosts, losing hosts, moving servers in the rack, or any other interruption at the server level. There are very few tasks out there that need a single thread to stay alive for lengthy periods of time without moving. There are a few, of course, but they are quite rare.
Re: Very clever stuff
It's not a matter of worrying about it, it's about making further economies. Whilst it's true that a system might need to be resilient to one of its servers going away, there's almost always an incentive to keep as many of them up and running as possible. If you size the system to provide the required level of service despite a worst-case rate of equipment failure, power cuts, patching reboots, upgrades, etc., then eliminating patching reboots means you need less hardware.
Re: Very clever stuff
Yes, because I never have a service that I just run on one machine, because any HA architecture would be pointlessly expensive.
OK - I'm open to a whole pile of possible failure modes, but removing one whole set of "scheduled maintenance" outages would actually remove all the outages I've had over the last 18 months (possibly longer, but I can't remember).
Re: Very clever stuff
If HA is pointlessly expensive then you clearly have the ability to take the service offline, which was my point. If you can't take the service offline to patch then it ought to be highly available. Realistically it will take years for this to prove itself enough for production anyway, but as I said it's interesting and clever stuff.
Re: Very clever stuff
Face it - if you can't allow the little time needed to patch and reboot, well, you can't really allow for the far greater time needed to repair/rebuild when hardware fails badly, especially if you don't have spare parts readily available - which means you need *more* hardware anyway.
Re: Very clever stuff
Posting anon because.. well.. obvious.
My day-to-day exception nightmares are the NOAA senior duty meteorologist workstations. (These are the guys who monitor all the things. And the weather, in theory, but as far as I know they mostly just watch home-grown infrastructure monitors and call sleeping people to restart supercomputer jobs.)
They each have 3 workstations, so in theory you can turn any 1 off at any time without issue. Theory is pretty. In reality, they'll scream if anything at all happens (and if we don't jump, they scream to the union.. cuz feds..) Being able to apply security patches (usually) without destroying the (stupid redhat hack) nvidia drivers will be a godsend. And for all that RHEL is "locked" inside each rev, they release new kernels about every week so every patch cycle involves a reboot.
Linux never needed reboots, or at least that's what a lot of clueless Linux users repeated here over and over...
Maybe now they will repeat over and over that they never said it....
If you genuinely believe that anyone with brains ever claimed that Linux never needed a reboot, you're really as bad as the people who claimed it. Every kernel update, unless you wanted to use the early kernel trampoline patches or these patch's predecessor, required a reboot.
However, that said, how many reboots are needed to do simple things like patch office suites, do the initial install, etc.? I'd say "fewer", not none. Windows reboots (is it three times?) on every deployment of it that I push out to my network. It seems to be unavoidable after sysprepping. But I can roll out a Linux deployment with a single reboot (i.e. the one to get into Linux from whatever deployment tool I've used).
And now how many reboots do Linux installs need if we have official live patching - something that MS just doesn't offer in their software? That's the point.
Nobody sensible has ever claimed that you don't need to reboot Linux. But it's been disgustingly easy to get 400+ day uptime for years, long before the MS offerings stabilised, if you're that way inclined (Why would you do that? It means 400 days of no kernel update!). However, now, even a kernel update doesn't necessitate a reboot.
However you argue it, I have a few dozen more updates today as part of Microsoft Patch Tuesday, which is going to necessitate rebooting every computer, including servers, on-site at least once. However, the VMs I have of Linux-based stuff only reboot when I decide they need a kernel update, which at the moment, because they are internal, non-critical and non-privileged, is rare.
"But it's been disgustingly easy to get 400+ day uptime for years"
We recently found a few Windows 2003 servers that have been running for over 8 years without a reboot! They are still working just fine. Obviously they have not been patched though - we will be updating them to 2012 R2 shortly.
Microsoft's application deployment, configuration, installation and patching technologies are generally well ahead of what is available on Linux - but reboots and not needing to patch multiple times to bring a system up to date are definitely an area where Linux still has a lead. It's long overdue for Microsoft to fix this...
"Microsoft's application deployment, configuration, installation and patching technologies are generally well ahead of what is available on Linux"
Windows stops IIS for every .Net update for each version; then other updates require a reboot that is slow, as it completes the install during the startup process, sometimes restarting again. You cannot have two nines uptime on a single windows server as the patching process takes too long. Linux downloads and installs without stopping Apache; if necessary, a one-time reboot is required for a new kernel.
Windows downloads updates in series, rebooting a number of times as each update session completes. Installing a new server without a slipstreamed DVD is an ugly process. Linux, with no extra work, is: install, download the _latest_ patches, install them and reboot.
Both systems have their advantages, I run both, just stop posting the shite about your favorite.
We run a few VMs here, mostly Linux but the odd Windows one is powered up for testing - usually after having been off for a month or two. When I've done my testing and come to power down the Windows VM (Windows 7 currently) I invariably have to wait half an hour for the patches it's been busily downloading in the background to finish installing. It's got to the point where I have to remember not to shut it down at the end of the day, otherwise I'm hanging around for it to finish...
You need to find the little link in the shutdown dialogue that says "shut down without installing updates".
You must have missed the bit where it says: "but reboots and not needing to patch multiple times to bring a system up to date are definitely an area where Linux still has a lead" - which are the only flaws that you identify.
Otherwise Microsoft application and patch management technology is leagues ahead of Linux - good examples would be AppLocker, SCCM, streaming installs and application virtualisation via App-V.
"You cannot have two nines uptime on a single windows server as the patching process takes too long"
Clearly you have no idea what you are talking about. Uptime measurements do not include planned downtime, and even if they did, that would suggest nearly 4 days of downtime for patching a year! Windows Server patch installations would typically take 2-3 minutes of planned OS downtime a month including a reboot. For anything critical you would be clustering or load balancing anyway, so there would normally be near zero impact to the service availability...
For Linux servers - the many kernel updates also result in similar downtime at present too...
@Anon Re: Useless...
"Windows stops IIS for every .Net update for each version"
And Apache stops every time you update in-built PHP modules. That isn't really much of a problem; the problem is that a simple .NET patch is often 200MB of MSIs that trawl through all your .NET assemblies before they'll do anything, then change a handful of files, while sometimes writing at 30MB/s for 10-15 minutes (note: actual figures in front of me as I update a server!)
And Linux updates to Apache DO stop Apache. They have to, or you'll still be running the old version - even if the underlying files are (sometimes) updateable without having to stop Apache to update them. Until you restart the service, you'll still be on the old version. Same for SSH, email servers, etc. on Linux. This isn't an argument.
As far as I can tell, Linux updates are done in series too. Otherwise dependencies are a nightmare to resolve properly. However, application updates do not require a reboot, that's the advantage. And a fresh Linux install from the stable ISO can easily take an hour or more to update to the latest version and slipstreaming isn't something that the average Linux sysadmin would do (though it may be easier to do so, I don't know).
Re: Useless... (BS)
"""Microsoft's application deployment, configuration, installation and patching technologies are generally well ahead of what is available on Linux"""
BS, do not make us laugh.
I never said that "anyone with a brain" said that. There's also the "Joke Alert" icon.
But you can look at many "Linux fans" here claiming that in lots of posts - feel free to look for them, it's not difficult to find them - obviously, as per your own words, they are brainless people without a clue about how software, especially a kernel, works.
Nobody ever denied Windows needs reboots - and it asks you explicitly, so you know. Linux needs reboots as well; brainless people just believe it doesn't because it doesn't ask explicitly (some update managers do ask, though). Anyway, if you slipstream a Windows installation, you can install it with a single reboot as well.
But as long as people like you measure their sysadmin skills on "uptime" and not on how well machines are really cared for and properly set up, there's little to do - sure, that's a simple number even brainless people can understand - it just means nothing....
And "now" really means "in the near future"... when kernel 3.20 is ready. I don't know if the Windows kernel architecture will allow for something like that - one of the reasons Windows requires reboots is to avoid memory corruption due to different versions of the same piece of code trying to access something which is no longer equal between them - say a memory structure that changed in size or the like - it could happen for internal structures. It will be interesting to know how Linux handles such situations and moves data from the old to the new layout.
Re: Useless... (BS)
Try using a WSUS server once in your life; you could be surprised by the way you can manage what gets which patches, and also get reports about that... while easily syncing your local patch repository with the remote one.
A bit better than reprepro or other solutions...
You're still defending that Windows rubbish software in 2015? Get a life.
"I don't know if the Windows kernel architecture will allow for something like that - one of the reasons Windows requires reboots is to avoid memory corruption due to different versions of the same piece of code trying to access something which is no longer equal between them - say a memory structure that changed in size or the like - it could happen for internal structures."
"Windows cannot update files because they're in use..." seems to be the standard excuse for needing reboot after patching (on a desktop/laptop anyway, can't speak for servers since all mine are various unix variants).
Still, at least Windows did get past (long time ago) the most annoying oddity of requiring a reboot when changing network settings. :-) That used to be so very annoying.
> that would suggest nearly 4 days of downtime for patching a year!
You have clearly never rebooted an HP server. The best part of 10 minutes doing 8 layers of BIOS setup before it even THINKS about booting...
Re: @Anon Useless...
We build production-ready VMs in under 30 minutes, even in RHEL. (Kickstart, basically.) It lets RedHat do their thing (for some reason, that includes only the last point release..) then yum update -y, then reboot.
Re: Useless... @ LDS
Are you saying my Xubuntu LiveDVD system staying online for 156/172 days at a time (last 2 sessions) is wrong?
My quest to destroy this laptop by using it, before its extended warranty runs out and they have to give me a new one, again, is wrong? Maybe I should reinstall Windows; it made the fans run all day, just sitting there...
My quest for the "Eternal Server" is a mad, hopeless dream..??
I did have to reboot recently as I changed iptables firewall settings, but strangely NOT when I updated Wine to 1.7, while a win32 app continued to run in a Wine window with no restart. After I started installing the update I realised Wine was running, but thought "oh well, see what happens", and it was fine...
But I agree with other users here; I will be looking at how they secure the "system" away from "the others" on the internet...
Re: @Anon Useless...
No really it doesn't. You don't run mod_php - I'm not aware of anyone sensible running mod_php any more. As for stopping Apache - do a graceful restart and then no, you're not stopping Apache, it lets any old children serve their requests and then they die off, being replaced by new child processes.
Linux updates - yum downloads all the required updates; you only need to run it once, no need to run it 3 or 4 times. As for a fresh install taking an hour? Hahahaaa, really? Even if you're going through and doing the install manually, it takes about 15 minutes to go from no OS to a completely up-to-date OS - unless you're on dialup.
Are we going back more than 15 years here?
Re: @Anon Useless... - Lee D
I keep the "SYSTEM" as one item and apps as a separate install using apt-offline. So if a machine needs an update from, say, Xubuntu 11.04 to 14.10: just back up personal data, install 14.10, reinstall apps with apt-offline, and restore the personal backups, with a script, usually in under 45 mins. But these are just basic internet/home boxes/laptops, so my main desktop can take a while longer...
Re: @Anon Useless...
"We build production-ready VMs in under 30 minutes, even in RHEL."
Takes us well under 10 minutes with Server 2012 R2 building from scratch including installing last months patches that have not yet been slipstreamed, or a few seconds via a Hyper-V clone...
Re: @Anon Useless...
"As for stopping Apache - do a graceful restart and then no, you're not stopping Apache, "
Yes you are, the clue is in the name - restart means that you stop and then start. IIS has a similar 'restart' option.
Hands up everyone who expects Slackware ...
... to adopt this kind of idiocy any time soon.
I mean, X86? Really? With no MMU? Are they serious?
On a RHEL course many years ago the instructor said (semi-seriously/jokingly) that "rebooting is a sign of weakness", the gist being that there is a fair bit you can change on the fly in Linux. However he also pointed out that's all well and good until the server reboots... so it's best to make changes permanent. And as part of the RHEL certification exams they'll reboot servers to make sure.
Meanwhile there are some people who seem to think that uptime is a bad thing and insist on "maintenance reboots", which to me are the work of the devil and usually cause more problems than they (attempt to) solve. If there's a problem on a Linux box that requires frequent reboots to "fix" (i.e. postpone it until the next time), then looking into the problem is probably a better idea than rebooting to mask it.
Reboots, and some thoughts on patching....
I tried, very successfully, to avoid rebooting a nice but cheap ACER Aspire 7520 laptop for over a year. Kubuntu was updated regularly, I just skipped anything involving the kernel or drivers, as there was no indication whatsoever of problems. Then it went very slow....
On investigation, the cooling arrangements, fan and heat exchanger on the end of a very nicely made heatpipe, were heavily clogged with dust, and the CPU temperature sensor was throttling the speed to avoid a meltdown.
So I concluded that it did really need a shutdown, clean and reboot every six months.
I ran OpenBSD on a small tower of relatively low performance level, I think AMD K6/II-450, continuously for 3 years without a reboot. No delicate airways to clog. That was a successful trial to show that OpenBSD, plus KDE, was perfectly usable as a desktop OS, but the lack of nice (compared to apt-get, synaptic and now muon) package management tools brought it to an end, for now.
I have yet to see Windoze, even 7, which has been the best so far, stay up and working fully for more than two months. Something always goes wrong. If you turn on automatic updates, they force tediously slow reboots, and if you leave it off, you get hacked....
On modern hardware, I now always plan for a maintenance reboot on occasion. If it is a vital server, you can always configure another machine to substitute for it when needed.
Oh, and although I fully appreciate the use of live kernel patching, my instinct tells me that (on any kind of architecture, even a humble microcontroller), new code must be patched in via something like jump instructions, or changed subroutine call addresses, leaving unused code in situ, and adding bloat as well as some performance loss, compared to a freshly compiled kernel. But don't let that stop you, if minor performance loss is acceptable, as it very often will be. All I am saying is that a full, fresh rebuild every so often may be advantageous.
Having said that, I used to work with a fine piece of hardware of about 1985 vintage, a Tektronix 8560/8540 system. The 8560 was basically an LSI-11/23+ plus various extra bits, essentially a UNIX machine (truly wonderful in 1985!), but the 8540 was for in-circuit emulation, using dedicated hardware cards with a real 8086, 68000 or whatever, and lots of trace memory. Its control processor was a humble Signetics 2650, with, presumably, EPROM memory. (I think it was copied into static RAM at run time, then any patches overlaid.) There was about 2k (very expensive in those days!) of EEROM, which was used to hold patches.
We used to get a letter in the snail mail from Tektronix occasionally (no email in those days, and only 300 baud modems!), with an ASCII text listing of what patch to enter; it was duly entered and fixed some bug or other (many did not apply to what we were doing). So I have successfully, and even very happily, used a system which overlaid patches onto its OS, a very long time ago.
It doesn't matter if Windows gets this feature sometime in 2050 as the system will still need reboots, therefore making live kernel patching redundant.
Linux: Wants to be an Erlang VM, can't be an Erlang VM.
Re: Oh desire!
"Linux: Wants to be an Erlang VM, can't be an Erlang VM."
Installed latest CentOS. Erlang is no longer included! Had to find some repository and add it. Linux is just so crappy to use...
" the most annoying oddity of requiring a reboot when changing network settings"
THIS was the most annoying:
"Your mouse has moved. Windows has to reboot for changes to take effect."
@crayon - rebooting after network change
Yes, I have always wondered about that. Considering how M$ originally were too inept to write their own TCP/IP stack, and borrowed it from xBSD (no problem with that, at least they got some good code), and how, as far as I have seen, no version of xBSD, Linux, Apple OS X, Solaris or anything vaguely UNIX-related, needs a reboot after changing network settings, I have to wonder how they managed to achieve this absurdity. Perhaps the ever-incompetent Billy-boy was "illegally commingling" * the code again? As always, enquiring minds need to know...
At least one example of this remains, apart from the original IE abomination, and that is some "Telephony" service which when killed also kills networking. There are probably umpteen more spurious and inappropriate dependencies in Windoze.
* (illegally commingling is a quote from the judge in the Monopoly trial, and relates to the practice being "illegal" in properly constructed software. The judge did NOT say that it was illegal in the way that law works in his courtroom, but it was a wonderfully succinct way of saying what he thought about the coding practices at M$.)
Re: @crayon - rebooting after network change
"borrowed it from xBSD (no problem with that, at least they got some good code)"
Apart from all the security holes. Remember the ping of death?
"how, as far as I have seen, no version of xBSD, Linux, Apple OS X, Solaris or anything vaguely UNIX-related, needs a reboot after changing network settings"
Neither do current versions of Windows.