I'll be sure it's closed & the battlements are full of English Longbowmen to defend the castle from the MS barbarian horde. Thanks for the warning!
In March, when Microsoft announced plans to release SQL Server for Linux, Scott Guthrie, EVP of Microsoft's cloud and enterprise group, said, "This will enable SQL Server to deliver a consistent data platform across Windows Server and Linux, as well as on-premises and cloud." The release of the first public preview of SQL …
"SQL Server for Linux runs atop a Drawbridge Windows library OS – a user-mode NT kernel – within a secure container called a picoprocess that communicates with the host Linux operating system through the Drawbridge application binary interface."
Cool - so basically they are using Linux as a container / hypervisor. So presumably all the security and ease of use advantages of SQL Server and Windows - for instance Kerberos Delegation - will still work just fine? I guess probably it's slower though on Linux....
>Cool - ...
>I guess probably it's slower though on Linux....
Who needs performance or scalability (or even stability, this is beta as hell after all) in a database solution? This smells of checking boxes in marketing material (see we are cross platform, not locking you in wink wink) more than anything.
"Why do we care about desktop advantages on a server? "
Ease of use doesn't have to only be a desktop feature! It makes for a lower TCO via a lower cost of entry, lower training costs, users / DBAs prefer a simple life, etc, etc... However "ease of use" for SQL can also be via the remote management toolsets, not actually on the server...
Does this include a full version of PowerShell (which many remote admin tools rely on!)? That would be a massive improvement on, say, BASH...
"So let's find a fancy new name so we can pretend to innovate!"
Hooooooold on, thar! [vague Quick Draw McGraw reference]
Let's step back about 2 meters and see what this REALLY means...
THIS means that MICROSOFT has a 'Wine-like' CONTAINER that could be used by *ANY* windows application maker to, *ahem*,
*QUICKLY*! *PORT*! *THEIR*! *WINDOWS*! *APPLICATION*! *FOR*! *LINUX*!!!
Which implies... that a LINUX PC could (theoretically) run *ANY*! *WINDOWS*! *APPLICATION*! if Micro-shaft would *BOTHER* *TO* *RELEASE* *THIS* the way they had a nice XP subsystem for Mac OSX a decade or so ago... remember?
Maybe Linus could get that built in to Linux. Killer app for Linux on the desktop, finally making it the year of Linux on the desktop (as well as losing the command line and replacing it with a GUI for people who are too old, stupid and/or lazy to learn the command line - like me).
"Maybe Linus could get that built in to Linux."
What they've done is simply provide a library-level emulator on top of the Linux system calls and libraries. A non-proprietary version of that has been about for years. It's called Wine.
"too old, stupid and/or lazy to learn the command line - like me"
I'll hazard a guess that I'm a good deal older than you. I've installed Linux for a number of even older relatives who never need to use the command line. As you seem to be admitting to being stupid, I'd suggest that that lies in not realising that you don't need to use it unless you choose to.
Which implies... that a LINUX PC could (theoretically) run *ANY*! *WINDOWS*! *APPLICATION*! if Micro-shaft would *BOTHER* *TO* *RELEASE* *THIS* the way they had a nice XP subsystem for Mac OSX a decade or so ago... remember?
Most Linux geeks would refuse to touch it. One of the biggest drives for switching to Linux (at least on the desktop/workstation side of things) is to get away from Microsoft. Sure, we've got Wine, but that mostly just gets used for games these days.
Wine is quite different to Drawbridge as it's not running an NT Kernel.
Useful for old Windows programs with no Linux alternative, and may run ones that won't work on Win7 / Win8 / Win10.
The only point to Windows was the legacy compatibility. They have badly broken it, and MS now emulates some APIs used by traditional "Forms"-based GUIs in favour of GUIs written with Direct3D APIs, which is crazy for ordinary non-media programs. Does MS think users only want video and games, and force devs to use higher-RAM-resource APIs for "ordinary" applications? Mad.
Drawbridge is a proprietary way for MS to distribute to paying Linux users, it's not a good idea for users that want to run legacy applications.
We're talking about GUI vs command line, etc, but yesterday I had to use Powershell to get replication to work in a HyperV platform because the GUI wouldn't find the correct certificate for HTTPS authentication.
At least on Linux servers, you don't get a shock when the GUI doesn't work properly, because you don't expect to be using a gui full stop.
It also doesn't try to reboot itself once every couple of weeks unless you perform some low level registry hacks....although I believe they (eventually) hotfixed that.
Obviate has two accepted meanings (listed in most dictionaries)
1. Remove (a need or difficulty)
2. Avoid or prevent (something undesirable).
So "obviates the need for" is not incorrect under meaning #2. Arguably redundant but then so is "PIN number" and "wipe that smile off your face". Zero redundancy doesn't necessarily make for good writing/journalism
I can't help wonder who their intended target audience is going to be. Because I can't imagine that there'd be a large market for this. When it comes to databases then you can use all the performance you can get (depending on the database of course). So layering it with virtualization seems like a lot of overhead to me. Especially when the Linux box itself is already running in some kind of virtual environment.
Is it possible that Slurp is hearing through their back channels that many 'bloat server' customers are planning to migrate to Linux? Relational databases behave so similarly (some differences in the SQL dialect and the DDL) that changing one for another is relatively straightforward. If Slurp wants to stay relevant, their products will probably need to run on other server OSes.
> I can't help wonder who their intended target audience is going to be
Organizations that want 'free' but don't care about 'open-source'? SQL Server Express running on Linux means a proper database for no money. Postgres is lovely but if you have SQL Server skills in-house you'd probably prefer to use them.
Whether we can trust Microsoft to stay invested is another matter — I'd be very worried about being abandoned after a year and yet another change in strategy.
I do not think there would be much of a virtualisation overhead, since the technology does not virtualize a machine, only an OS. As you move up the abstraction layers, the optimization opportunities are more obvious. In other words, it is system calls which are virtualized now, not the CPU. For majority of Windows APIs there is a very simple relation to Linux system calls (especially if you control also the application code, i.e. SQL Server itself). This means a wrapper will add very little overhead. This also includes IO (at least with the most popular options, including asynchronous IO) which is the largest source of virtualization overhead and coincidentally also major source of database performance issues (next to CPU cost of running queries). Also, Microsoft is obviously aiming this as competition to Oracle on Linux, so they cannot really afford large overhead.
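To make the "thin wrapper" point concrete, here is a toy sketch (not Drawbridge's actual code; the `write_file` name and signature are invented for illustration): a Win32-style positional write maps almost one-to-one onto a single POSIX call, so the translation layer can be little more than an argument shuffle rather than CPU emulation.

```python
import os
import tempfile

def write_file(handle, data, offset):
    """Hypothetical stand-in for a Win32-style write-at-offset call.

    The 'ABI translation' here is a single os.pwrite on the host:
    no CPU virtualization, just a syscall-for-syscall mapping."""
    return os.pwrite(handle, data, offset)

# Usage: write through the shim, then read the bytes back natively.
fd, path = tempfile.mkstemp()
written = write_file(fd, b"hello", 0)   # 5 bytes via the 'translated' call
os.close(fd)
with open(path, "rb") as f:
    contents = f.read()
os.remove(path)
```

When the wrapper is this thin, the per-call overhead is a function call and some argument marshalling, which is consistent with the claim that the main cost centre (IO) need not suffer much.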
For the most part, if you follow a traditional technologist POV, the entire idea is entirely preposterous. This smells like one of M$'s latest forays into a growth area because their revenue is not meeting growth expectations, and may even be on the decline.
Linux has always been the database platform of choice due to its lack of fat layered abstractions that turn code which should execute in one CPU instruction into fifty.
Taking this approach with porting to Linux is absurd, and like M$'s feeble efforts to enter into the healthcare EHR market (which was yet another M$ miserable failure) this looks like yet another desperate attempt to right the sinking ship.
First of all, if I have a SQL Server db and need to increase my performance without spending ridiculous $$ on hardware - which will only provide minor performance gains on the sluggish windoze server platform - I have a nearly identical database management system that already affords near seamless migration to a platform that is superior in every way for database performance:
For you yonkers and M$ groupies out there, SQL Server is a hacked-up, dumbed-down, platform-neutered version of Sybase, which is now owned by SAP. Sybase has been around since 1987 and cut its teeth on Unix platforms. When M$ decided to enter the DB space - a totally new space for them - they had absolutely bupkus to offer the marketplace. So they did what big, fat, incompetent, arrogant companies do: they buy - or in this case license - a superior product that already exists and rebrand it.
I can take my SQL Server database TODAY, and migrate it to Sybase with minor modification, and I'm up and running on a platform that boasts superior memory management, vastly superior filesystem performance options, and dramatically less code bloat / platform layered abstraction. Not to mention a DB platform that didn't get frozen in its development in the early '90s, but rather a db that has excelled in nearly every area over its dumbed-down cousin.
The only reason that this might sell is because most M$ customers are ignorant groupies who don't understand the options available to them and are religious about their platform and language of choice - because they don't know any better. M$ operates on the crack cocaine principle - once you're doped up and hooked on the drug - you will eventually become too stupid to even bother to validate whether or not the crap that I'm selling you is good for your business or not. You are simply looking for your next fix, and are addicted to the label.
While SQL Server - in all fairness - is somewhat competitive with other DB's out there, I have numerous choices that already run on a number of platforms that boast superior performance. So this is a total non-starter...
I was the lone Sybase DBA / developer for a 23/6 trading firm for a good half-dozen years or so. Until ASE 15. At that point I realized their product roadmap had gone so far in the wrong direction that they were destined to have their marketshare sink to where the mindshare was (nowhere). They were making themselves more like Oracle than like MS. Sybase always provided decent performance and great reliability straight out of the box, with low licensing and very low admin costs relative to Oracle, which always seems to need two DBAs, a SysAdmin, and a storage guy just to install it; MS was the opposite; I know MS shops that don't even have a full-time DBA on speed-dial; not saying it's a good thing, but there you have it. Further, since about SQL Server 2005 MS started introducing cutting-edge features way ahead of Sybase. I'd have switched to MS long ago if only it ran on an enterprise-grade OS. Like Linux.
Having worked in Unix, Novell, and M$ shops for 24 years I had the opportunity to learn a lot about the differences between the OS's and the products that ran on them.
I hear you concerning a great company taking a great product in the wrong direction; I've seen this with other products as well. While it is frustrating, it usually isn't worth jumping ship over.
As for those 'cutting edge features' that are supposedly 'way ahead of Sybase' I would be remiss if I didn't remind that cutting edge features or not, SS still runs on NTFS; Windows still uses the archaic, technologically ancient and thoroughly eclipsed page file; all applications have some portion of their memory paged out to disk, no matter how much physical RAM is available. This is why when I build Win servers and clients I always double-triple the memory requirement, create a large RAM disk, and create the only page file on the RAM disk; this allows me to work around - in the only way possible - these crude limitations that harken back to the late '80's. It's still not as fast as the UNIX memory models, but it's better than the stock Win approach.
So cutting edge features aside, due to the fact that Sybase has always been UNIX aware, it can leverage hardware resources that aren't even available to the Windows world, such as XFS and JFS for speedy large database file access. So those "cutting edge features" are more than offset by the platform performance potential alone that is natively available to Sybase.
"Windows still uses the archaic, technologically ancient and thoroughly eclipsed page file"
Just like a Linux uses swap file you mean?
"This is why when I build Win servers and clients I always double-triple the memory requirement, create a large RAM disk, and create the only page file on the RAM disk; this allows me to work around - in the only way possible"
Or you could simply disable paging to disk....
Linux doesn't use a swap file.
Linux uses a swap partition. But the normal 'nix algorithm (varies from distro to distro) typically will not swap out anything until the physical RAM is in danger of being completely used up. Windoze, on the other hand, swaps out memory to disk when there is no need to swap it out; as soon as you run one program part of that program's memory will be swapped to disk.
Windoze without a page file works about as well as a car without brakes. Run Windoze without a page file and as soon as Windoze runs out of RAM - and sometimes sooner - it will crash.
Take some time to learn not only how Windoze works, but how other OS'es work as well. You might be surprised at what you find.
"Sybase always provided decent performance and great reliability straight out of the box, with low licensing and very low admin costs relative to Oracle, which always seem to need two DBA's, a SysAdmin, and a storage guy just to install it"
And therein lies (one of) the rubs. Sybase completely lost its way with licensing and threw away the fairly reasonable licensing in favour of ridiculously inflated licensing costs, in some aspiration of imitating Oracle, I suppose.
"Organizations that want 'free' but don't care about 'open-source'? SQL Server Express running on Linux means a proper database for no money"
This only makes sense if SQL Server is the only thing you recognise as a "proper database". Some of us not only recognise other "proper databases"* but regard SQL Server as a Johnny come lately.
*Database engine or database server to be correct. Databases are just the collections of data that the engines manage.
""Organizations that want 'free' but don't care about 'open-source'? SQL Server Express running on Linux means a proper database for no money""
That's probably where this is headed - let devs that like trendy OSS stuff play with this to develop / test on, but when you need to scale up / care about security / want production grade clustering, you can uplift to Windows Server....
care about security ...windoze...
Would be cool if El Reg would publish the shill's IP addresses. I think Vogon would very quickly change the tune on MS and security. Or disappear forever under the load of attacks from the rest of us.
(Although we'll probably find out that while TV is paid to promote MS "security" [cough] they cannot pay enough for TV to use MS's shiteware!)
[says he who is currently building a Win7 box and is posting from it during day 2 of the ordeal]
"That's probably where this is headed - let devs that like trendy OSS stuff play with this to develop / test on, but when you need to scale up..."
...then you keep it on 'nix. Over the last few years we've migrated a shedload of server stuff to CentOS, and this just adds yet another service that can now be migrated.
It's saved us a bucketload in costs, not just up front; it's also no less reliable while being easier to manage, and doesn't involve the added cost of having to actually manage licenses, which is non-trivial (esp. with virtualisation in the mix).
In fact the only thing left on Windows Server is AD plus a few legacy corp apps including SQL Server. Most desktops are still Windows too, obviously, thanks to the likes of Office, but now that the board has acknowledged the very real cost savings we've managed on the servers without loss of reliability, they will be the next to be reviewed.
There's some software which pretty much insists on SQL Server (looking at you Sage), but if you prefer linux on all your servers, this might be the solution.
(Although any company that insists on SQL Server will probably not support their product on SQL Server for linux for years anyway)
A couple of years back my place of work would have been a perfect example - server-wise a Linux shop (no licensing minefield!), but some commercial applications require SQL server and there is no sensible/affordable alternative.
In the end we hosted a Windows VM on one of the Linux VM hosts, but it was always a security worry. If we had found an equivalent functionality/equivalent price application that directly supported MySQL or Postgres we would have certainly gone for that - this new approach from MS is a useful solution to the problem we faced.
They don't have an application to convert a Linux admin to a Windows admin.
Probably because it would see little use.
They have, however, spent a great deal of time and money working on converting Windows users and admins to Linux ones. To list a few: GWX nagware, Win 10, Win 8 & 8.1, the constant breaking of working systems with untested "updates", their removing software that people paid for because MS don't like it, their utter hatred of their customers, their lack of decent security
So what's the practical overhead of this? The memory sandboxing does sound like a useful security step over running it on a Windows box (and I'm trying to work out if that's ironic).
I know I don't use my SQL server to anywhere near its limits on the VM it's running on now. If I could port the underlying VM to Linux without the gripes I got from users when I suggested switching to MySQL, I can at least save on the licence cost of the host VM.
(we're a relatively small business, so appreciate our use case differs from most of the market)
Reminds me of the ancient project tangerine, which tried for a common binary layer above the multiple CPU instruction sets once available, so applications needed to be compiled once and would run on many different versions of Unix. Died in the Unix Wars of the late 1980s, I think. How many layers of virtualisation are being run these days? Hypervisor, OS, container with this inside?
>How many layers of virtualisation are being run these days? Hypervisor, OS, container with this inside?
Lashings. Consider that Mac OS X has been entirely a VM for many years now, running inside LLVM. And I seem to recall that (some?) linux also moved to the same LLVM-virtualised approach.
If well written, the performance penalties can be staggeringly light. One hard number I know: linux clients on Xen (using the paravirtualised drivers) run with only a 3% penalty to the bare metal. That's... jaw-dropping.
What in the name of holy god are you talking about? LLVM might have 'VM' in the name but it isn't actually a virtual machine in e.g. the VMWare sense. It's the back end of a compiler, basically. In goes intermediate representation, out comes native x86/ARM/etc code.
Good heavens. You're right. Now.
When did that happen?
Honestly, you turn your back and get on with other stuff for a decade or two, and things just go off into la-la land.
But yeah, I cocked up re LLVM. It's now purely a compile-time notional/target-machine "vm", not a run-time vm.
I do apologise.
For interpreting my other statements: I was last full-time hands-on Xen 2 years ago (one of the few to get four-way DRBD replication working), and the perf. figure comes from a book we had which included replicating/testing some Clarkson Uni work (corroborated hand-wavingly by my own gut-feel observations comparing the bare metal servers with the Xen VPSs).
Um, no. LLVM is a compiler technology that abstracts away the underlying assembly language of the platform. macOS has been native x86 since the transition. Nor is it pure Mach, opting more for a hybrid approach to a kernel.
LLVM isn't even platform-independent. It simply treats function calls and φ-operations as primitives during code generation. That the bytecode can be interpreted rather than translated to the targeted assembly is a happy accident, not a goal of the project.
It may be only a matter of time before Microsoft release a version of Windows that has a unix/linux backend. Windows GUI. Linux kernel.
I am talking in the 50 - 100 years timescale here.
You techie guys tell me. What is the next big leap in OS? Who is working on it? When is it going to happen? We just seem to be stuck in a horrible hole at the moment with user friendly but insecure and bloaty Windows; versus slick secure Linux that has poor app support and in my experiencce requires you to learn an archane annd unintuitive commmand line language to make it work. (And when writing a support model I have to double the server support costs for Linux versus Windows (and yes thatss offset by licensing, though Red Hat aren't that cheap).
" in my experiencce requires you to learn an archane annd unintuitive commmand line language to make it work."
Look, if you're going to shill, at least learn to spell.
Every time - every damn time - Linux and Windows stuff gets mentioned, we have this spurious notion that Linux doesn't have GUI interfaces. It has a whole stack of them. For any given version of the kernel you can choose any of them. Windows may have had a selection of interfaces, sometimes overlapping in terms of being in current support, but it's strictly one to one between the core and the interface. Yes, Linux also has the command line option; remind me what it's called. It's not cmd.exe or Powershell, is it?
"this spurious notion involved that Linux doesn't have GUI interfaces"
Linux on the server generally involves a lot more editing of text files and use of the command line / shell than Windows Server does. Especially when configuring any associated software.
For instance, just compare installing Oracle DB on Windows to installing on Linux...
>For instance, just compare installing Oracle DB on Windows to installing on Linux...
What with that ? Notice that it (yes, IT, the Oracle INSTALLER) is a gui app, ouch ... runs on HP-UX, Solaris, AIX as well, mate ... needs an x server ...
You sir, just shot yourself in the head.
Then, you have the Oracle express RPM/DEB ... even easier than any windows app to install .. double click, watch, done. ... No "Next+untick search&home page hijacker"/Next/Next/Next/Finish BS ...
"What with that ? Notice that it (yes, IT, the Oracle INSTALLER) is a gui app, "
There is lots more to installing Oracle than running the GUI - it's far less effort on Windows.. But for a specific example - in Windows the installer fully sets up RAC clustering. In Linux you have to do lots of keyboard bashing....
A spurious comment at best, as Oracle on Linux vs. Oracle on Windows is like comparing an F-22 Raptor to an F-105 Thunderchief. While Oracle has done its level best to bring advanced DB performance & scalability to the Windows platform, on Unix it has always had it.
So let's see, we've had over 30 years to mature scalability on Unix platforms, but Windows after 26 years still doesn't scale well for Enterprise DB work... not a valid comparison at all.
For those who run enterprise db's... the installation process is completely irrelevant. It only matters to those who don't understand what serious db's are used for...
Yes, yes I've seen the M$ trolls tout these numbers over and over. I've also worked in Enterprise Oracle and M$SQL shops, and have seen the difference in scalability in both systems.
TPC is a TEST SUITE which runs CANNED BENCHMARKS. This is not reflective of actual, long term enterprise scalability, reliability, performance, maintainability etc. in the REAL WORLD.
It does make for some good trolling though...
"user friendly but insecure and bloaty Windows; versus slick secure Linux that has poor app support and in my experiencce requires you to learn an archane annd unintuitive commmand line language to make it work"
uh, not exactly. mac OSX has users for whom you show a command shell and they're like "WTF?" and blank stares, etc. most recently I was trying to explain how to use command tools to ssh into an embedded device that has an RPi controlling it, to a mac guy. he's got a mac. that means he has ssh. Well, he gave up and said "I'll just bring it by so YOU can do it..."
"It may be only a matter of time before Microsoft release a version of Windows that has a unix/linux backend. Windows GUI. Linux kernel."
I can only see that happening if Linux ditches its dated and inflexible monolithic kernel model, and moves to a micro or hybrid micro kernel design...
As for M$ supporting SQL Server: given the number of times I've seen W10 updates turn it off and stop software we support that runs on it, I'd say they are looking to drop it long term. I suspect that they'll be "dropping it off into the community" at some point. It'll become abandonware.
Getting it on Linux is good: lack of cross-platform releases like this is one reason the producers of specialised software have had no exposure to it, and the embedded systems I've been supporting would be far better off on Linux anyway.
Canny approach by Microsoft. The "virtualisation" overhead should be minimal: windows's (NT) kernel has been POSIX compliant from the get-go (which is why things like the rather wonderful CygWin can exist), and SQL Server is just a massaged/bugfixed(!!) Sybase -- which was IIRC written originally on/for the unix architecture. Certainly my first Sybase work was on unix, back in '95. So Drawbridge could well be doing buggerall to translate SQL Server's OS calls to POSIX; it may be that they were actually left as POSIX in the first place.
Bodes well for other bought-in apps being "ported" to Linux. (Win-architected things like MS Office are likely to remain problematic.)
[If you ran it on NT or Windows 3.1, you were running version 4, not 1. v.4.21 was the first to run on NT or Win 3.1.]
They did not rewrite it from scratch. They DID clean it the hell up (I actually banned the use of Sybase in the Australian office of the company I was working in at the time, after discovering that simply rearranging words' order in predicates would invoke wholly different codebases, with wildly varying bugs of typically semantically catastrophic nature ("why am I suddenly getting a cartesian product?") Switched to using SAS as the SQL front-end to Sybase's raw data -- that worked well. SAS kicks arse in every direction.)
But, as someone who worked in the R&D team for a RDBMS, the chance of them rewriting the core _architecture_ from scratch is effectively zero. Replacing the words with their own -- yes. Changing the architecture ... if they'd had appetite for that, they'd have simply started from scratch. Vastly cheaper/"easier". RDBMSs are non-trivial. There's a reason why Oracle/Sybase/SQLServer are still not SQL92 compliant.
So I hear what you're saying, but I stand by what I said about the core architecture almost certainly being dominated by its original POSIX assumptions. And hence, relatively friendly to a translation layer sitting on a POSIX API.
SQL/Server 7 was a modular re-write separating the transaction coordinator, parser/optimizer and execution engine. The Sybase cooperative threading model (it wasn't multi-threaded until much later) was moved onto NT Fibres for lightweight work units. Later versions of Windows introduced list-IO to batch gather and scatter IO operations together to reduce kernel transitions.
A picokernel of some sort would be needed because Linus wouldn't allow these kernel extensions in Linux.
"A picokernel of some sort would be needed because Linus wouldn’t allow these kernel extension in Linux."
Or, leverage the Linux ability to do all of that write-collection FOR you via cacheing and 'lazy flush' instead of "paranoid writes" which is what Winders seems to be doing...
and then, it's FASTER!
so you do a bunch of 'teeny writes', then hit 'flush()' for your completed transactions. So simple.
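The "teeny writes, then flush" pattern above can be sketched like this (a minimal illustration of write-behind caching plus an explicit durability point, not SQL Server's actual IO path):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    # Many tiny writes: each lands in the OS page cache, not (necessarily)
    # on disk yet - the kernel is free to coalesce them.
    for chunk in (b"tx1;", b"tx2;", b"tx3;"):
        os.write(fd, chunk)
    # One durability point for the whole batch of completed transactions.
    os.fsync(fd)
    size = os.path.getsize(path)   # all three chunks now persisted
finally:
    os.close(fd)
    os.remove(path)
```

The trade-off the thread is arguing over: this relies on the filesystem cache and a blocking fsync, whereas a DBMS doing its own non-blocking async IO bypasses the cache precisely so it controls when each page hits the platter.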
All DBMS bypass the filesystem cache for fast non-blocking asynchronous IO. The Linux asi_write and writev glib functions either do async or scatter/gather, but not both. MS added both to NTFS for SQL/Server (much to the annoyance of Oracle).
Granted, Linux syscalls are lighter than NT's, but list-IO is still advantaged... which is why Oracle still recommends RAW volumes.
"All DBMS bypass the filesystem cache for fast non-blocking asynchronous IO."
you sure about ALL? that's a very 'broad' categorization.
" The Linux asi_write and writev glib functions either do async or scatter/gather, but not both"
'asi_write' doesn't exist in FreeBSD [I just checked], at least not the version I'm running. So using it is not truly portable for POSIX systems (a big 'minus' if it's required for performance). Anyway, if _I_ needed that kind of I/O specialization, I'd write a kernel module to do it. And if you use threads in a very clever way, async I/O isn't all that hard... [been there, done that]. But I'd rather let the OS handle all that for me, then use 'flush()' to make sure it writes. In a thread. So it doesn't block anything else. It just blocks "that transaction" waiting on the I/O to complete.
And then, maybe, you FIX the operating system so it's faster? As long as the patches are not written using a crap-code style, heh. - what was the term Linus used, 'compiler masturbation'?
So yeah, I've been there with the async I/O stuff, and though 'gathered' writes do have their appeal, I'd have to wonder what kind of performance boost you really get from that, over [let's say] a memory mapped file, or multiple threads making separate but parallel I/O requests. You definitely get the kernel layer speed benefit from making a single I/O request, though, using 'writev' to get them all done at once. I just haven't measured the differences in actual practice to see how much of a boost you'd get.
in any case, translating "all that" across a layer between Micro-shaft way and POSIX way might cause a performance bottleneck on its own...
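For reference, a gathered write looks like this in practice (a POSIX-only sketch; `os.writev` wraps the `writev(2)` syscall): three separate buffers are submitted in a single kernel crossing instead of three, which is the per-call saving being debated above.

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    # Scatter/gather: one syscall submits all three buffers, in order.
    buffers = [b"header|", b"payload|", b"crc"]
    n = os.writev(fd, buffers)     # returns total bytes written
finally:
    os.close(fd)
with open(path, "rb") as f:
    data = f.read()
os.remove(path)
```

Note this is synchronous gather; as the comment above says, combining it with async submission is exactly the part plain writev does not give you.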
Frankly I'm amazed anyone is surprised by Microsoft's approach here.
They had three options here:
1/ a total re-write of SQL Server, changing it to allow it to sit on either Windows or Linux architecture
2/ Production of a whole new Linux variant of SQL Server, with support and future development obligations
3/ Just adding a Windows kernel to the bottom of the existing version.
Seems like a very obvious decision to me. It's not as if they are expecting a mass switch to SQL Server by Linux users, and the profits to come rolling in. So of course they're going to do it the easiest way possible.
I think this is a great move and we should all applaud. The more competition for Oracle the better. It's still going to be an oligopoly, though, and unlikely to have a big impact on pricing. The more enterprise-ready DB platforms available to us the better. Currently we have SQL, Oracle and DB2 (well, I have seen big enterprises using it to save money).
We aren't going to drive costs out of IT until we get more.
After years of using MS OSs (from MS-DOS 5.5 to XP), I now use Linux Mint 64bit, quite happy with it.
And as I don't need to use more than two or three MS based applications which will never be ported, I use VirtualBox running XPSP3.
But my gut instinct tells me this is not a good thing, for Linux or the Linux community.
"Where once Linux was a cancer to Microsoft, now Windows is growing inside of Linux."
Ergo, Microsoft is now a cancer to Linux.
We all know what happens next and it's not a question of 'if': it's a question of 'when'.
No way I am (consciously) letting MS code inside my Linux box.
There are going to be some teething issues, such as Active Directory membership. A lot of the security used within SQL Server installs relies heavily on AD authorization via users and groups; that's going to be a bit of a stumbling block in the early days. I can see why they might do this: load instances onto headless Linux servers and create well-performing farms of DB servers.
Where I am, we're a 50/50 Oracle and SQL Server shop and we like the idea of wresting control from the Windows admins with regard to the backend server builds our DBs run on. We'd like to move all DBs to Unix backends, and this might be an answer to that, but I'd be cautious about approaching it for at least the first two years of this project, and only then if MS decide to seriously pursue SQL Server on Linux platforms with as much gusto as they do on the native Windows platform.
1) Did anyone seriously expect Microsoft to produce a fully native re-write of SQL Server? That's a huge undertaking. Also it would be kinda embarrassing for them if they did that and it ran better on the LINUX version.
2) I wouldn't say Wine was strictly speaking a good example of "virtualisation" - it's a set of libraries intended to simulate the MS ones, not a full hosted OS!
These actions from Microsoft - porting, or rather running, a virtualized Windows-based SQL Server on Linux - do not answer some fundamental questions.
What are the Performance, Reliability, Scalability, Flexibility and especially Security differences between running SQLServer on Linux versus running enterprise PostgreSQL as EnterpriseDB, OracleDB or IBM Enterprise Informix?
While these competitors all excel over SQLServer in the categories mentioned above, only PostgreSQL/EnterpriseDB offers a significantly better Return on Investment (ROI) than SQLServer, both initially and in long-term cost and support service outlays.
Besides which, all SQLServer competitors run on many more Operating Systems and hardware platforms, including Linux and Windows, than SQLServer ever will.
>> While these competitors all excel over SQLServer in categories mentioned above, only PostgreSQL/EnterpriseDB offers a significantly better Return on Investment (ROI) over SQLServer, both initially and in long term cost and support service outlays.
Did you cut and paste that from your marketing brochure? FYI - SQL Server has a way better security record than any of the above. Also, Postgres performance majorly sucks in comparison.
One of the first things that Microsoft did after getting hold of QDOS was to extend the original CP/M-like system calls, adding clones of those used by Unix in MS-DOS 2.0 and up. When MSFT acquired DEC's software technology they inherited a very nice POSIX-compliant kernel lurking under the Windows crust, a kernel that unfortunately disappeared recently in yet another of their revisionist moves to force programmers to use their own proprietary calls (a mish-mash of often competing, overlapping calls with seemingly little meaningful structure). This might signal a return to sanity on their part.
Biting the hand that feeds IT © 1998–2019