Did somebody fight through the cobwebs and go into the necropolis to do incantations over the old Xenix/Data Server/Sybase codebases?
Yes, I am really that old >>===============>
Microsoft's long, gentle embrace of Linux continues with the first release candidate of SQL Server 2017. Microsoft said the early release would land in the middle of this year. Arguably, since this is only the RC1-level release, Microsoft's SQL-Server-on-Linux is running late. There's not much detail on what's in the box, …
"You didn't seriously expect SQL Server to be written against WinRT/UWP, did you?"
maybe not initially, but... they seem to have a "one windows-track" mentality in Redmond. Assuming, of course, that THIS person has influence over SQL Server, too:
I wouldn't be surprised in the LEAST to see "the METRO" and "UWP" and ".Not Core" take over, and be implemented on the Linux side to compensate. Bad ideas have a habit of REPLICATING when the one who CAME UP WITH THEM IN THE FIRST PLACE is in THAT influential of a position within the company.
SQL Server heavily uses scatter/gather async I/O, which does not have a consistent API across Linux filesystems. And just because they've documented Drawbridge doesn't mean Oracle lacks an equivalent.. they do.
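For anyone unfamiliar with the term, scatter/gather I/O means filling (or draining) several separate buffers in one system call. A minimal sketch using Python's `os.preadv` wrapper over the POSIX readv family; the file contents and buffer sizes here are purely illustrative and have nothing to do with SQL Server's actual internals:

```python
import os
import tempfile

# Write a small file whose layout we know: a 6-byte "header"
# followed by a 13-byte "body".
data = b"HEADERpayload-bytes"
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(data)
    path = f.name

fd = os.open(path, os.O_RDONLY)
header = bytearray(6)
body = bytearray(13)

# Scatter read: one preadv() call fills both destination buffers
# from offset 0, instead of two separate read() calls.
n = os.preadv(fd, [header, body], 0)

os.close(fd)
os.unlink(path)
print(n, bytes(header), bytes(body))  # → 19 b'HEADER' b'payload-bytes'
```

Note that `os.preadv` itself is only available where the underlying syscall is (Linux, the BSDs), which is exactly the kind of platform variation the comment is pointing at.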
The reason for the Linux release is Docker containerisation, not Linux servers specifically. Expect to see Drawbridge used with other Microsoft server software.. before it is open-sourced, once Windows Server Hyper-V is ready for containerisation prime time.
I've been playing with it since its first release on Linux and it's not bad. However, I just get the feeling that Microsoft is creating a solution for a problem that only exists for Microsoft, not the consumers. Anyone with good SQL Server skills is generally a reasonable Windows tech too; they're not going to want to learn a whole new operating system, especially one like UNIX (cut the crap, Linux is UNIX!) which is not always the easiest to work with.

I came from Oracle on Unix in the 80's, then Oracle on Windows, and up through SQL Server on Windows, so I've had to learn to be a "jack of all trades" to survive. However, my personal experience of the SQL-Server-only admins I've met is that they're happy on Windows and they like it that way. The only people likely to want to use this are accomplished UNIX admins with solid DBA skills in lots of different DBs who have to support apps that only install on SQL Server and nothing else. I can't see many Windows admins and/or SQL Server admins jumping on this.

This is just a project that proves MS knows Windows is slowly being hit in the server rooms, both on-prem and especially in the cloud, where every single inch of rack space needs to earn its keep.
@Amorous C - I sort of agree that SQL Server on Linux is a solution looking for a problem. There are enough good relational databases that run natively on Linux that another is a yawn, even though SQL Server seems to be a pretty good relational database. So the question is why it is being released for Linux, and reading the tea leaves right now is a frustrating exercise.
Short of an SMB, I'm trying to figure out who has a homogeneous Linux datacenter. Heterogeneity is a fact of life these days. The last and only time I had gigs using the same operating system, database, or compiler was in college. Budget and requirements determine the available tooling, and the tooling determines the solution. Same for every one of my engineering disciplines.
Still, nice to have the option and Lord knows, it's a database my tools know well.
"The last and only time I had gigs using the same operating system, database, or compiler was in college"
In my case, that was when I worked at Apple Computer. Our University had very diverse computing resources; it even built its own micros before the invasion of the IBM PC.
If MS makes a serious attempt to get SQL Server running on Linux, along with the related APIs needed to deploy existing Windows applications in combination with SQL Server on Linux, then it is a very viable platform. There is a lot of space between free MariaDB and the $10,000-per-CPU, punch-you-in-the-face-with-arrogance Oracle products.
For people developing serious web applications needing a database and transactional integrity, it could be a good alternative.
There is a lot of space between free MariaDB and the $10,000-per-CPU, punch-you-in-the-face-with-arrogance Oracle products.
And PostgreSQL fills that space very nicely.
Not sure anyone who has used any other database would want to touch SQL Server. OK, maybe some people still have memories/nightmares of db4 or their only previous experience of an open source db is LibreOffice Base. If that is you, rest assured, SQL Server is NOT the way to go.
As has been said already - the reason for this is some people have to use SQL server, like it or not, and the writing is on the wall for Windows.
"Not sure anyone who has used any other database would want to touch SQL Server."
TCO, performance, ease of use, scalability, features, integration and security? For instance SQL Server on Windows Server has had the fewest vulnerabilities of any enterprise DB + OS stack every year for the last decade.
"There is a lot of space between free MariaDB and the $10,000-per-CPU, punch-you-in-the-face-with-arrogance Oracle products.
For people developing serious web applications needing a database and transactional integrity, it could be a good alternative."
To be fair, SQL Server sits much closer to Oracle than to the free end, being only slightly less awful in terms of license wrangling.
As middle ground we have PostgreSQL which is a very solid solution, and enterprise support is available for that for those with fat wallets.
It's aimed at the cloud market. Almost all the new cloud software stacks today run on Linux first and foremost, with Windows only as an afterthought, if at all. Note the additional announcements of analytics using Python and R. With Python in particular, people use it because there are loads of third party libraries to do nearly everything you can imagine, and many of them aren't supported on Windows. You have to think not only of the database, but also what else will be used with the database.
If Microsoft limited MS SQL Server to Windows only, it would be a sentence of a slow death for it. This would be like if Oracle limited Oracle DB to running only on Solaris. By porting it to Linux, it gives it a chance at continuing relevance in the future.
On premises versions in the small to medium size market are probably intended to remain running on Windows for the foreseeable future. However, Microsoft's focus for now is on the cloud.
" Almost all the new cloud software stacks today run on Linux first and foremost, with Windows only as an afterthought, if at all"
Azure doubled in sales last quarter alone and that runs on Windows. Current annual cloud run rate at Microsoft is $15 billion. That means it just overtook AWS!
I think you will find despite throwing millions of dollars at lawyers, The SCO Group could not prove that Linux was Unix. You are very welcome to try, but bear in mind that TSG went bankrupt because they dedicated their entire business to this false premise and had nothing left when the rest of the world caught on that they were lying.
Linux aims for Posix compliance. The compliance is sufficiently good that a great deal of software is source code compatible with other Posix compliant operating systems (with minimal porting effort).
I am sorry that you find Linux difficult to work with. Various educational resources understood by young children can be found here. If that is still too difficult for you, McDonalds are hiring.
A bit too close to home?
I wish you'd use the same argument with some of the clever young things hereabouts who think AIX is Unix, and get very bent out of shape when it turns out not quite to be.
And my child was working with Windows before she could write. She learned basic arithmetic by playing with it on a Windows machine.
Finally, anyone claiming a child could understand the man pages is pushing it a bit. The one I like to cite as being willfully dense is the one on ln, but there are many others.
I think that you need to say why you don't regard AIX as UNIX.
If you take Unix certification, AIX is very much UNIX, being certified as conformant to the Unix 03 standard.
If you take Solaris as UNIX, then AIX is not Solaris, although Solaris is UNIX (as is macOS 10.12, HP/UX 11i Release 3 and Huawei EulerOS 2.0 and one or two others).
Interestingly, if you look at the UNIX 98 certified systems, then z/OS V2.1 was at that time certified as UNIX, even though there was no UNIX kernel involved (this is also the case with macOS).
Unfortunately, Linux is not UNIX, whatever way you look at it. It may have some form of Posix compliance, but nowadays, that does not give you UNIX branding, or even much in the way of confidence that you can port applications around.
Where I have problems is when Linux application writers have difficulty porting to a UNIX platform, because there is so much in modern Linux distributions that goes beyond what a UNIX provides. Examples include DBus, KMS, SystemFS etc. all of which are useful, but which are not in any UNIX system.
"I think that you need to say why you don't regard AIX as UNIX."
Then I will.
Let's start with the logs. If you really know about AIX I need say no more other than to cough "binary format".
Then there are the processes (like volume management) that are much easier if you use smitty, which will show you the scripts it generates - scripts that look only superficially like what my clever young things want to code, because real Unixmen don't do menus. Or something.
I should point out that I work in an environment with Solaris, AIX and Red Hat Linux and am often confronted by naive coding stupidities:
a) scripts that assume Linux versions of utilities that have been wildly expanded in capability and are therefore "broken" good and proper when they propagate out into the enterprise
b) scripts that fail to take into account differences in default behavior of said utilities on different *nixes and are therefore "broken" a bit more creatively - df scripts will sometimes feed back a number after being stroked by awk or cut, but since df has different columns under AIX than Solaris, the numbers aren't what the script-laddie thinks they are and comedy ensues. The sort of comedy that wakes people up at night a few weeks after script deployment.
c) more of the same.
One can approach (most of) AIX like it was BSD or SVR4, but life is a lot easier if one comprehends that IBM wants their "Unix" to look like a mainframe O/S, and the jobs are quicker and safer to complete if one does it their way instead of the generic Unix way. One also has to understand that even if AIX has a "This Is A Real Unix" sticker on it, the utilities don't always behave the same way they would on the SPARC box running an O/S directly derived from SVR4.
Given the differences between BSD family and SVR3 family Unixes, one might be on safe ground to say "Unix isn't Unix". It certainly would make for better preconceptions of engagement in the Bright Young Things hereabouts, who use the "Unix is Unix" mindset and send all output to /dev/null by default so figuring out what went wrong with their brilliant script and when the wheels fell off is a crapshoot.
Having grown up across VAX, PDP, DOS, S36, *cough* windows, BSD, HPUX, Solaris, AIX, IRIX, Linux, MVS, VMS, and a couple of others, your problems are not with *NIX (my preferred reference), but with BYT that have not run into a wall or three yet. We all have to earn our bruises to learn our uses.
( Yes boss, the script is 12,000 lines long. No, it is NOT 12,000 lines of code. It's 4,500 lines of code. Yes, the rest is commentary. NO YOU MAY NOT REMOVE THEM)
So, your idea of a UNIX system is that it needs to be configured using flat files, and using CLI commands? And log files need to be in plain text?
Short of the AIX error system used by errpt, most log files are plain text. Errpt is not part of standard UNIX, although I seem to remember that the Bell Labs/AT&T 3B2, 3B10, 3B15 and 3B20 UNIXes also had a binary error log. It seems to me RedHat Linux has no hardware log at all. Which is better: a binary log with utilities to read and export errors, or no hardware logging at all?
AIX runs syslog, so if you want the same sort of logging from BSD utilities, turn on and configure syslog! You can even get the binary errors from the error logger written into syslog if you want.
I have been using UNIX for many, many years (in fact, you will struggle to find anybody who has made a career out of UNIX for nearly 40 years in the way I have), and have used UNIXes from Bell Labs, AT&T, Sun, HP, Data General, Perkin Elmer, Digital Equipment Corporation (DEC), ICL, IBM, Pyramid, Sequoia, SCO (the original one, Xenix), SCO (the new one - UnixWare), and these are just the ones I remember! I was also offered a job at Unix System Laboratories, although the money and location taken together were just not right.
The one thing I will say is that ALL of them have had some form of menu driven assist, be it Sysadm, SAM, Smit/smitty, or even Admintool on SunOS. In fact, the one that has probably been most prevalent is Sysadm, which was in AT&T SVR2, and was often taken to the other SVR2, 3 and 4 derived ports. Smitty is more of the same.
Often, the script that smit/smitty generates only looks complicated because of the way the parameters are broken out of the menu. Everything run from smit can also be done from the command line, and more often than not, by one or two commands with quite sensible parameters.
The individual commands may look unfamiliar, but then many of the Solaris or HP/UX commands are similarly unfamiliar (and not standardized). Most AIX admins I know normally use smit/smitty to work out what command needs to be run, then work out the parameters from the man pages, and then use them from the command line forever more.
From my (very extensive) experience, I would say that there is absolutely no standard way of administering a UNIX system from the command line. They're all different. Even down to the way that the System V rc scripts are implemented.
What I think you are doing is leaping to the assumption that SunOS/Solaris is the standard UNIX, and everything else is not. This is really not the case, and if you wanted a standard for a true UNIX, I suggest that you unpack a version of UnixWare, which I believe still uses sysadm.
You missed out possibly the biggest criticism of AIX. The ODM in AIX is a binary database of configuration information, but you can actually treat it much as you would stanza driven flat files, because in reality, that is what it is. You would not believe how much scorn even the internal IBMers had for the ODM when it first appeared, which is why it never got more complicated than it is.
AIX is derived from SVR2, with some SVR3 additions. The SVID up to issue 2 is based on SVR3, as is POSIX 1003.1. UNIX 03 is based on the SVID issue 3, and AIX has had those changes incorporated into it to remain compliant. But nowhere in these standards does it say anything about core OS administration.
I would actually have loved SVR4 to become the main porting base. I was working for AT&T at the time, and attended the SVR4 Developer Conference (1988?). I also ran the internal AT&T version R&D UNIX 4.03, which was based on SunOS 4 - (SVR4) on Sun 3/280 and 3/60 systems. I liked the look of SVR4, but to claim that only systems that are like SVR4 are UNIX is almost as stupid as me claiming that BSD is not UNIX (although in truth, that is something I might actually say).
Remember, neither RedHat (or any other Linux), nor HP/UX, the other UNIX you mention, are SVR4 based, so using SVR4 as your definition also excludes the other OSs you administer.
Anyone claiming anything a child does on a computer needs them to read the man pages is lying through their teeth. And suggesting or implying that "Windows Help" had ever helped anyone is not going to help their case either.
Once the GUI is up and running, pretty much everything is easier on most OSS GUIs than Windows, and your learning is a better long-term investment too, since OSS GUIs are not in the habit of dying - hell, you can even get CDE on almost every Linux or BSD if you really want to*.
Your OS was installed by an adult (or possibly a teenager) - children do not install OSes, and even if they did, Linux only requires you to know what country you live in, what keyboard you use (try it if in any doubt), and what your name is. Windows needs to know a whole heap more - probably requiring knowledge and understanding of buzzwords like "product activation key" and "EULA", a working Internet connection, and a phone call which puts you on hold for hours.
* And for the utterly desperate, there is always fvwm95 - although I admit you would probably have to compile it yourself. If you are a child, you probably don't remember Windows95 anyway.
I agree that Linux != Unix, but I disagree that the SCO Group (which I will shorten to SCO, even though this is a bit of a misnomer) thought that it was.
What they were trying to prove was that Linux incorporated code from the Unix code base, and that as such, there was copyright and possibly patent infringement happening in every Linux instance. They also made noises about revoking certain Unix providers' (particularly IBM's) source code licenses, because they believed that IBM et al. were guilty of contaminating the Linux code base. Because the Unix source licenses were granted in perpetuity, SCO had no right to claim this. It was all FUD.
Their business model was that they were trying to convince large Linux users that to remain out-of-court, they needed to purchase Unix licenses if they wanted to continue to run Linux, with a side line of attempting to do the same for AIX customers, because in their view, IBM no longer had a license allowing them to provide Unix derived works to their customers.
Some organizations were taken in and did purchase licenses, just to be safe. In the mean time, IBM thumbed their nose at SCO and told them to take them to court.
After much arguing, and with full sight of the AIX source code, SCO failed to persuade any of the judges of their claims. They were unable to point to any common code between AIX and Linux other than some ancient code that came from Unix Edition 7, which SCO themselves had put under a fair-use license.
Worse than that, they awoke Novell, who waded into the fray to point out that SCO did not actually own the Unix IP, but had purchased the rights to use the Unix source code and collect the license fees, part of which SCO should have, but had not, been paying to Novell. Once ownership was established, Novell issued an indemnity to Unix licensees, which effectively pulled the rug out from under SCO's feet.
Somehow or other, SCO managed to draw the process out, and it's only in the last 18 months or so that the last of their claims that had any potential monetary value was thrown out, leaving only a couple of claims to appeal against the court's judgments. Effectively, The SCO Group Inc. is finally dead.
In the meantime, I cannot see who now owns the Unix IP, as Novell has been sold, and some of its assets divested to companies like Microsoft and Attachmate/Micro Focus and maybe HP?
If anybody actually has any real idea about who owns the core Unix IP, I would be very interested in their thoughts.
> the SCO Group (which I will shorten to SCO, even though this is a bit of a misnomer)
The usual TLA is TSG.
> If anybody actually has any real idea about who owns the core Unix IP
It is unlikely that there are any protectable copyrights in Unix source code. The Novell-TSG case concluded that no IP had transferred from Novell to SCO. While they did phrase it as 'Novell owns the IP', there are many barriers to this actually being true: early Unix versions were not registered when registration was a requirement; some versions were put into the public domain; there were agreements between Unix Labs and the Regents at Berkeley; and there were many third-party contributors who did not assign their copyrights.
For these reasons, and others, Novell did not attempt to collect together copyrights in order to sell them to SCO and instead simply stated in the Bill of Sale that they didn't get them.
The listing on the NASDAQ was SCOX, but as it is no longer listed, that tag is not used.
I prefer to avoid TSG, because of the number of other organizations that I've personally come across that uses that abbreviation.
As I understand it, the original SCO, when they were trying to negotiate the deal for UNIX IP, could not raise enough money to buy the rights wholesale. Novell offered them the right to use the source code, and collect license fees, and left open the possibility that the full rights could be purchased at a later date.
It would appear that some within SCO did not read the agreement fully, and never offered the extra money for the complete rights, so they remained with Novell. I'm sure Darl McBride probably regards this as the worst oversight that happened in the whole mess.
What The SCO Group got was a right to use the source code, and develop and sell derivative works, although they would have to go to X/Open or the Open Group to get any derivative works that deviated from what they had licensed called UNIX. They also got the job of collecting the money for the licenses.
The reason why I am asking is that I would very much like to see the source code for SVR4 released under an open, or at least a permissive, license. I don't even know who you would apply to to get a commercial source code license any more. I know that The Unix Heritage Society has the full source code for some ancient and niche UNIXes, and even some partial source code for System III and System V, but I would like to see something a little more recent, and would love an actual buildable system.
I want the more recent code preserved before the last systems and tapes containing the source are dropped in a dumpster!
> They also got the job of collecting the money for the licenses.
My understanding, based on following Groklaw and reading the APA, was that SCO would collect the annual licence fees, pass them to Novell and Novell would then return a 5% collection fee. Novell never received any of this, SCO kept it all. The court case ruled that Novell was entitled to these royalties.
cut the crap, Linux is UNIX?
I think you will find despite throwing millions of dollars at lawyers, The SCO Group could not prove that Linux was Unix.
They couldn't prove it wasn't derived from the System V sources, but it turned out Novell owned them anyway so it was a non issue. (An expensive non issue mind!)
I'm not willing to start throwing things over the semantics of 'UNIX' 'POSIX' and 'Linux'. Yeah, admittedly, you're right, BUT.. from a user perspective, NOT from the actual software, kernel, or standards compliance, running applications on a UNIX system, a BSD system, a Linux system, etc. is all very similar [until you get to systemd and then all hell breaks loose]
/me operates on FreeBSD most of the time.
The solution has found its problem - at least here.
Do not underestimate the number of legacy applications written in .NET (or even classic ASP), many of them running on nothing newer than Windows Server 2003, with an urgent need to move to a more sane and scalable foundation.
The biggest hurdle here is that they are quite often heavily tied into SQL server - and for many customers SQL Server is quite acceptable, while Windows is not.
Fwiw, Microsoft don't believe Windows is where the $$ will be in the future. The desktop market is being reduced year on year, some leaking to Windows laptops but a lot is going iOS/Android/Chromebook too. Their foray into mobile failed, and web search, well yeah. They have moved on and don't see Google/Apple as their primary competitors. Their primary competitor is Amazon. That is where they are trying to carve out their next ecosystem.
Look at some of their acquisitions like Xamarin, and the .NET Core work that sits beside this, to see how seriously they are taking this. Even Android gets Office apps. Don't hear me wrong, they still want you to buy Windows, but in this brave new world of containers and serverless architectures, they are much more interested in staying relevant than in locking you in at every point. It really is quite a contrast from a decade ago.
"Look at some of their acquisitions like Xamarin, and the .NET Core work that sits beside this, to see how seriously they are taking this. Even Android gets Office apps. Don't hear me wrong, they still want you to buy Windows, but in this brave new world of containers and serverless architectures, they are much more interested in staying relevant than in locking you in at every point. It really is quite a contrast from a decade ago."
^ I'd agree with all of the above. The hardline fundamentalism of Gates and Ballmer has given way to the pragmatism of Satya Nadella, and long may that stance continue.
Why not opt for that dependable workhorse, the old AS/400, err System i, where the Operating System and the database are integrated into an easy-to-manage system, just like Windows Longhorn?
Now we are curious about the performance of MS SQL on W2016, MS SQL on Linux, MariaDB, Postgres, Oracle on Linux, Oracle on Solaris, and IBM on System z.
I see this as a good thing, and actually have a use for it.
My long term goal is to get Windows completely out of our enterprise. Getting all of our desktops on Linux isn't completely viable yet. We are getting close, but I can't quite make it happen. Windows 10 Spyware Edition has accelerated this move, however.
Where we can get rid of Windows is in the data center. We've been able to get most of our servers over to Linux. We do however have some legacy systems that require SQL Server for their back end. With SQL Server available for Linux, we can get almost all of our Windows servers migrated.
Microsoft is a rather heterogeneous company. My guess is that the department making SQL Server sees Windows as a lost cause, as it's more and more aimed at mobile systems and the consumer, despite recent developments like the terminal window supporting colour and more than 80 columns. (whoo hoo)
Now there are many IT departments which try to get away from Windows whenever possible. Usually that means buying newer web-based applications running on Linux, typically with MySQL.
It's a bit like in the 1980s when minicomputer manufacturers realized that the future would be in microcomputers and they all released versions of their minicomputers in microchip form. Suddenly you could have a PDP-11 that's just a single board (plus RAM).
In both cases we have companies trying to keep their products relevant for longer. In both cases the problem ultimately was that their products had no user-relevant advantages over cheaper competitors.
If Ken Olsen had not believed "Unix is snake oil" and had sold VMS for a more reasonable price (with a version for the 486 when it came into being), DEC would be alive today, and Windows probably would have been strangled at birth. Or maybe DEC would be in a position like Apple today - with the market share of the more discerning user.
Two things happened at once: (a) there were fantastically cheap machines on the market which could do some of what the bigger ones could, and (b) the price drop meant there was a market 1,000 times bigger within a year. That meant 999 out of every 1,000 users had no previous computer experience, and did not expect a machine which was only down for minutes in the month rather than needing reboots several times an hour, or software that generally worked and, when it didn't, gave enough information to figure out whether the problem was with the hardware, software or user.
Olsen and DEC should have known and understood this: it was the exact same situation they created when the PDP8 came out in the face of mainframes!
DEC could and should have aggressively sought to compete, rather than saying "This stuff is a pile of shite" and expecting the users to know the difference.
People were simply not able to compare the PDP11 with the PC, and realise that it was worth more, and it was never going to be perceived as worth the "more" that Olsen claimed once the volume brought in the cash to fix the bugs (a bit). Few PC users ever saw a DEC machine or used VMS (or Unix).
> Two things happened at once: (a) there were fantastically cheap machines on the market which could do some of what the bigger ones could, and (b) the price drop meant there was a market 1,000 times bigger within a year.
What you have claimed to be 'at once' and 'within a year' actually played out over a couple of decades.
Micro computers started to be available from the mid 70s. The initial IBM PC was just another micro that cost more than your car and was very limited compared to others already in the market (no hard drive, no networking, poor performance). It was only in the mid 80s that clones started making the pricing much cheaper and the market expanded.
> DEC could and should have aggressively sought to compete, rather than saying "This stuff is a pile of shite" and expecting the users to know the difference.
You obviously weren't around in the early 80s when DEC were selling their Rainbow PC systems.
I accept that it took a bit more than a year or two to pan out fully, but
I am not talking about home users - that was a whole generation later, as you say, and a second round of revolution.
In 1976, I was building both DEC and Intel systems - and maintaining software on PDP11's in a well known London software house. (And learning Unix).
I cannot state the exact year, but when we knew there was going to be an IBM PC, almost the entire company was told not to start any new project, as, regardless of what it was performance-wise, it was going to be IBM and a PC. And in those days, no one was ever fired for buying IBM.
We sat doing not much for almost a year before it was released. (Actually, I was using 8080's to replace PDP11/10s as comms multiplexors for a French client).
Psychologically, the PC was a "Business Machine" and not a toy - to the PHB. And it's the PHBs that control the money. The users I am talking about were medium-sized businesses, not home users.
The day we got the specs, everyone in the company who could write code was porting our business software from the PDP11 and/or ICL mainframe to it. Whatever it cost, it was way less than a PDP11/60 - which one of our customers bought for more than my mum paid for a 5 bedroom house in Islington. Companies with a staff of under 20 could run their own payroll on a machine that cost less than a contract to run it on someone else's ICL1900 and still have the machine to do other things for the rest of the week.
DEC could have gone in hard, much earlier than they did, and with more vigour. They did not. A single-board PDP8 would have been cheaper than the PC when the PC came out - if DEC had decided to go that way. The PDP8 is way simpler than the 8088 (I think it has fewer transistors than an 8080), and there was a ton of DECUS software fully debugged - what was there on the original PC? People around me were proposing a single-chip PDP8 in 1972 - before the 8008 was even released. As an embedded processor, the PDP8 would have been viable in situations where the 8008 was not - mainly because of the vast amount of well-tried software. (I was in a team which decided not to use an 8008 in 1974).
I admit the VMS vs Unix battle was a much later round of the same battle - roughly the mid 80's as you say. It was not til the 486 that Intel had a proper MMU that could run a proper OS. At that time, there was no other established OS that could have been (a) ported to the 486, and (b) was widely used in Industry. DOS was not exactly great! And PC hardware crashed daily - hourly for the cheaper machines.
However, this was also a time when the market expanded by a factor of 1,000 - for a second time.
Yes I knew about the Rainbow - and the Heathkit PDP11 - I could not afford them either.
> It was not til the 486 that Intel had a proper MMU that could run a proper OS. At that time, there was no other established OS that could have been (a) ported to the 486, and (b) was widely used in Industry.
You may have a particular definition of 'proper OS', but I was running multiuser/multitasking MP/M on 8085s and Z80s with bank switching quite effectively in the very late 70s. Later I switched to DRI's Concurrent on 8088/8086 with EEMS (e.g. AST RAMPage) and derivatives, such as DR-Multiuser-DOS (386/486). These, and DRI's other range, FlexOS, were quite widely used in industry.
I was making decisions about this stuff then. The perception at the time was that Unix was not as secure, stable or as capable as VMS, and the "UNIX Wars" had started, so Ken had a point. One of the things that derailed the company was the unappealing DEC Rainbow, which could run DOS but often needed a special version of vendor software like Lotus 123; it could also run CP/M software and had VT100+ terminal emulation. They were expensive, and their idiosyncrasies probably forced people into a PC environment. We also had the PDP-11/23 based DEC Professional for engineers/scientists and DECmates for clerical support workers. There was some similarity in software between the models - they could all run varieties of the WPS word processor and linked to the functional but basic ALL-IN-1 suite.
I thought at the time that a better approach for DEC to sell to their mini customers might have been to avoid the PC, which was still in its infancy, and sell MicroVAX servers with standardized software. Clerical staff could be given terminals - small office workgroup users were generally doing low-level clerical tasks. Networking was easy with DECnet, and the relatively few "high level" engineers/scientists/data crunchers could be given their own networked MicroVAX.
In the end we went with NetWare and 286/386 PCs which could run terminal emulation software into our MicroVAX/PDP/DataGeneral minis, and later tried to replace proprietary mini OSs with Unix, but by that time a lot of our specialist software could run on PCs.
There were very practical reasons why DEC did not do a 486 port of VMS, most of them architectural. VMS made good use of a number of VAX-specific instructions, including, IIRC, some arbitrary-length string and number instructions, and others with implied loops in the instruction itself. As I understand it, the re-write that had to happen to allow VMS to transition to the Alpha, even though this had some instructions to ease this work, was significant, as was the following one to Itanium (under HP's stewardship).
Now that there is an Intel port, my guess is that the x86_64 port will be much easier.
In the '80s, one of DEC's aims was to try to produce lower-priced systems that could run VMS, starting with the MicroVAX II (the first MicroVAX had significant restrictions that made it difficult to do anything with), and continuing with a number of small MicroVAX systems including desktop VAXstations (not to be confused with the MIPS-based DECstations, which ran BSD/Ultrix/Digital UNIX).
These were actually quite good value, but were priced in the same sort of bands as equivalent Sun, or Apollo workstations and servers.
What DEC did, which was unforgivable in marketing terms, was to announce the Alpha-based systems a long time before they were ready. This killed about three quarters' worth of VAX sales, as customers decided to wait to buy new systems until the Alpha-based ones were available. Unsurprisingly, this gave DEC cash-flow problems which, IMHO, they never recovered from, leaving them vulnerable to takeover offers at a later time.
I never really understood the rationale behind Compaq buying DEC, but I suppose Windows NT on Alpha was probably one of the reasons.
I find your comment about the PDP11 strange. The PDP11 never ran VMS (VAXes were called things like VAX 11/780). The closest thing to VMS that PDP11s ran was RSX-11M, which is widely regarded as the direct ancestor of VMS, and was managed by one Dave Cutler, later of VMS and Windows NT fame.
The PDP11, although a classic architecture IMHO, was a system of its time. It was a purely 16-bit ISA, although to make it more useful there were some addressing extensions bolted on to larger and later systems. No PDP11 was able to address more than 4MB of memory, and the process address space was strictly 16-bit, with an instruction and data separation feature on larger and later systems that extended this to 112KB or maybe 120KB, as the top 8KB was reserved for memory-mapped I/O devices (I can't remember if the I/O page was in both the I&D spaces, or just the data space).
Even when the PDP11 was a common architecture, the 56KB process limit on the non-separate-I&D systems was a severe limitation, which led to large applications having to use memory-resident overlays and also split themselves into multiple processes using IPC to communicate to do anything serious. I ran Ingres on a PDP11/34e with 22-bit addressing ('34s did not normally have 22-bit addressing - it was a SYSTIME kludge) under UNIX edition 7 for some time, and the data manager had to be split into something like 7 different processes to allow it to work.
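The address-space arithmetic above can be sketched quickly (a rough illustration; whether the I/O page sat in both spaces is hedged above, so this assumes it occupied only one):

```python
# Rough PDP11 process address-space arithmetic, per the figures above.
KB = 1024
space = 2 ** 16                  # 16-bit virtual addressing: 64 KB per space
io_page = 8 * KB                 # top 8 KB reserved for memory-mapped I/O

no_split = space - io_page       # no separate I&D: 56 KB for code + data
id_split = 2 * space - io_page   # separate I&D, I/O page in one space: 120 KB

print(no_split // KB, id_split // KB)  # 56 120
```

If the I/O page appeared in both spaces, subtract a second 8 KB, giving the 112 KB figure also mentioned above.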
There were micro PDP11 implementations, some of which made it into desktop systems (the F11 and J11 micro PDP11s), but these were really just offered for continuity for customers who would not, or could not, transition to VAX. The main reasons for people staying with the PDP11 were its I/O system, which made it exceptionally suitable for lab instrumentation, process control and real-time implementations, and operating systems not similar to VMS, like RSTS/E.
I would still be interested in buying a desktop 11/83 at the right price, even though I would probably use a PC more powerful than it as its console.
"It will certainly have better security, it likely has better I/O response, but that is unknown without testing."
We'd like to think so, but then there's "that layer" again. I suspect it's been tweaked to prevent Windows servers from looking bad when compared to Linux.
Simply "not optimized correctly" would do the job, actually... basically all of the reasons why I/O on Linux and BSD operating systems is _SO_ superior to what it's like on comparable windows machines, in every way I've tested, from disk I/O to networking, it's obvious to me that Linux and BSD perform more efficiently.
And so I'd want SQL Server to get a boost from ZFS or EXT4 or that uber-efficient use of disk cache by the OS, something Windows generally fails at.
Call me cynical, but that choice of name concerns me. Right now we may be in the Linux "embrace" phase, but if "extinguish" does indeed follow then that would be the act of "raising the drawbridge". Suck people into running MSSQL in the cloud then leave them no option but to upgrade to Win10...
But for now it does scratch the itch at my SMB - only Linux servers here, and someone is requesting an application that only supports MSSQL. That might work for us...
All the talk from Microsoft and comments here at TheRegister do not address - in real-world, factual technical terms - the performance of running SQL Server on Windows versus running the product on Red Hat Enterprise Linux or SUSE Enterprise Linux, as examples.
In the end, the proof of the pudding is in the eating, and Microsoft products have shown a "fail" result in that regard.
I saw a presentation several months ago that said that performance on Red Hat was slightly below Windows in most of their automated tests, although it beat Windows in at least a couple. This was prior to optimisation, so the speaker thought that it may well be faster in general on Red Hat by the time of release.
I'd wait for some independent benchmarking before making any decisions however.
"I saw a presentation several months ago that said that performance on Red Hat was slightly below Windows in most or their automated tests, although it beat Windows in at least a couple. "
Recent tests of KVM vs Hyper-V under CloudStack show Windows to be faster too.
So, you've not heard about the Samba project then? It's only been out for 25 years. :-)
SMB client support - Check
File server support - Check
Print Server support - Check
Domain member support - Check
Domain controller support - Check
If you check your repo, you will find it already in there.
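As a minimal illustration of the file-server role from the checklist above (share name and path here are hypothetical, not from any real deployment), an smb.conf might look something like:

```ini
[global]
   workgroup = WORKGROUP
   server role = standalone server

[shared]
   ; hypothetical share exported to SMB clients
   path = /srv/samba/shared
   read only = no
   browseable = yes
```

Samba's `testparm` utility will validate a configuration like this before the daemons are restarted.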
"Now please port Active Directory to Linux "
even if it's a seat-based licensed application, I think IT departments would welcome abandoning the REST of the nightmare associated with maintaining/patching/disinfecting windows servers.
"Oh, CRAP, the junior accountant got another e-mail with a spreadsheet in it!"
"Scan the network for viruses"
[every windows server is infected, some demanding ransoms]
"CRAP CRAP CRAP CRAP CRAP!"
If that's the case you've got some really shitty security and permissions going on. Having good backups and minimum required access mitigates this problem majorly. Restoring a few encrypted files from backup is inconvenient, but not as bad as being totally hosed cos you couldn't use security properly.
This has always been their long-term plan, ever since they funded SCO to try and destroy "free Linux".
Clearly the plan was to get hold of the valuable kernel and limit the "free" usage.
That plan failed with the redirection of how computers are used: cloud, services, etc.
They are now ready to start moving their products over to a system where the majority of maintenance and work is done for them; they no longer make most of their money from the OS.