What will be left?
It seems like everyone at EMC is moving toward the door. This announcement will probably only accelerate the exodus. I wonder what will be left of EMC when/if Dell completes this merger in six months.
I would say IBM is becoming a software company, but a lot of their systems are actually primarily software products. Services, as you mention, was the story 10 years ago... now all services, with the exception of high-end consulting, are commoditized. No one makes any money managing infrastructure, help desks, call centers, etc.
I agree with the x64 commodity comment. Getting into server SAN would be a lot like getting into PCs. No margin, and low cost wins. The major server SAN shops, Google, et al., are white box users. The average shop would probably prefer shared SAN because it is easier to manage than clustering a bunch of x86 servers together. If they are going server SAN, they will buy the cloud service from Google, Amazon, etc. and let them figure it out.
If server SAN is really the way the storage industry is going, which it might be, then storage is a great business not to be in... unless you are selling software.
SSD is not the same as next-gen, NAND-based Flash. SSD has all of the same drive form factor latency as HDD... it is a modern medium forced into a legacy form factor. Gartner should have only counted NAND Flash directly connected to the bus as an all-Flash array. The problem would have been that basically only IBM uses real memory-style Flash. Everyone else uses the same SSDs that have been around for years. They can't have a magic quad with only IBM on it, though; EMC would have lost it on them.... although IBM is the all-Flash market share leader. It does make it a non-category, though, because if SSD equals all-Flash then everyone has an all-Flash array and has for years.
"half of April found 55 per cent of 377 respondents had yet to buy “some type” of HANA product."
which means 45% have bought HANA... which is really good for a three year old database.
"Georgens was very pleased with NetApp's flash portfolio; some 18PB of flash were shipped in the last quarter. He said: "I'll just state it flat out. I would not trade the flash portfolio of NetApp with the flash portfolio of any other company."
Likely because no other company would want to make that trade with them... but, still, he's not trading.
Virtualization can be a good thing if you are after functionality. If you are after raw performance (the primary reason people run block and FC), it is not a good thing. NetApp does block on file, a write-anywhere architecture, software RAID, etc... all for good reasons, I'm sure, but raw performance was not one of those reasons. It's a trade-off. NetApp must be aware of the ONTAP overhead, otherwise why go with Engenio's OS for the EF series?
People have concerns about Windows 8?
Intel benefited from VMware on x86 because it allowed them to go upmarket into the Unix space, Sun's space in particular. Yes, some of the sprawl was consolidated, which resulted in fewer low-value x86 servers, but they broke into the more lucrative top half of the server space. Cisco is not Intel; Cisco is Sun. They are selling expensive proprietary gear which will be commoditized by Nicira. Intel will start to partner with bottom-of-the-barrel switch makers to produce white box switches, aka dumb bandwidth, and all of the high-value management software will move to a vendor-agnostic software layer.... There is no way Cisco is falling for this assurance from VMware. EMC/VMware is now Cisco's largest competitor. VMware knew they would be going to war with Cisco when they bought Nicira, but the upside is worth the downside of losing Cisco as a partner, and they needed to counter Hyper-V, which was getting close to ESX at a lower cost.
People put too fine a point on Buffett "not investing in technology." He didn't invest in dot-com companies in particular because he did not see the business model and thought the valuations were crazy... which ended up being true for all but Google. People pegged him as "not investing in tech" because everyone had everything in tech companies in those days. He doesn't have an issue with technology; he has an issue with valuations which don't mirror the cash the business generates.
That is not what is holding back ultrabooks. It is that you can buy a Windows laptop for $400 with a nearly identical user experience, so why pay more for more or less the same thing in a form factor a few mm thinner? SSD is not a major issue. Most of the data people access is on a server via the internet anyway. Unless you are doing video editing or something of the sort (in which case capacity on the ultrabooks would be an issue anyway), most people don't mind waiting an extra second for their Word file to appear.
with their acquisition of Nicira (Cisco's nightmare). What is Lenovo using for interconnect switching? Are they planning to roll this out in the West, or purely a China and surrounding area play?
First, Oracle did not say that HP said Itanium was dead. They said that Intel executives told them that Itanic was nearing end of life, to no one's surprise.
Second, there is not a "contract" so much as an enforceable promise, which is what promissory estoppel means. Even though there was no contract, because HP had foolishly relied on Oracle for so many years and Oracle had foolishly written in the Hurd PR and other statements that they would support HP, the judge decided that Oracle needed to support Itanic. In other words, if you told someone you would sell them your house, and that person went out and sold their own house, but then you decided that you did not want to sell after all, a court might enforce the promise to sell because that person relied on your promise to their detriment. That is promissory estoppel. It doesn't mean you signed a contract.... As I mentioned, we are in round one of at least three rounds, so this single judge's ruling will not be the end of the story.
If you promise to bring them hundreds of millions or billions in return through your software's availability on their servers, I think they would do it in a heartbeat. HP has received a return on investment many, many fold the cost of any test servers they provided to Oracle. The HP-Oracle partnership, at this point in time, benefits HP much more than it benefits Oracle... obviously, otherwise there wouldn't be a lawsuit.
"OK, since you bring it up, now you can also admit that you were also wrong in that Oracle very obviously has a contract with hp that is different to the relationship between hp and M$, RH, etc, etc. Hey, I don't care how many times you want to admit you're wrong, but every time you post some more bumph you'd best check it first, otherwise I'll have to point out your incorrectness again, and again, and again"
Very gracious, as usual. Oracle and HP did not have a formal contract. This judge has ruled that they did have a contract through promissory estoppel. Courtesy of Wikipedia: "The doctrine of promissory estoppel prevents one party from withdrawing a promise made to a second party if the latter has reasonably relied on that promise." MS and RH also did not have formal contracts with HP, but, as was the case with Oracle, HP could make the argument that they reasonably relied on Microsoft and RH's statements regarding Itanium support and were harmed when those companies withdrew support.
"I find it ironic that Oracle of all organizations should criticise another vendor over 'partnerships' given that Oracle's track record in that space is particularly inglorious."
True, Oracle is a miserable partner, but that is all the more reason why HP should not have been inclined to believe, and why it would have been unreasonable to think, that they had any sort of long-term, any-circumstance partnership with Oracle. No one has an ironclad partnership with Oracle; they eventually get around to hosing everyone.
"Quite impressive really that a seemingly only verbal contract can be worth so much, I guess HP's next battle will be INtel and forcing them to keep developing the processors!"
HP will be able to force Intel to push out half-baked new chips for some time. They already have been. What about HP's key Itanium reference and development-input customers? They issued press releases stating how committed they were to Itanium, probably took free gear as betas, presumably wrote in PRs how they planned to use it in the future, and HP took their suggestions and made changes to the OS... to HP's detriment now that those customers are moving away from Itanium. Taken to its illogical extreme, HP may have some verbal implied contracts with those former customers too. Itanium is great. If you don't agree, speak with our attorneys.
Linux will definitely continue to cut into Unix, but the x86 stack still needs to fix the vendor integration problem. Anyone who has worked on, or talked to someone who has worked on, a Linux on VMware (or other hypervisor, but generally VMware) environment will tell you that they just tend to inexplicably go down from time to time. It isn't a problem with any hardware, firmware, OS, hypervisor, management utility or application component in particular, just some strange one-off incompatibility issue (e.g. someone upgraded the firmware on the fibre switch, now VMware can't see the disk... all of the vendors say their component is working fine, should be working, isn't working). Maybe the x86 server vendors will get around this by pre-integrating "cloud" appliances, but it is going to be difficult for anyone to get VMware, Red Hat, Microsoft, Oracle, various hardware manufacturers, etc., who all hate each other passionately, to agree to Unix-like pre-integration.
"The next logical step for Oracle is to drop support for RH, SLES and other distributions and to offer support only for Oracle Linux users. Just you wait..."
I agree, Oracle has Linux in their sights. They have been playing this cat-and-mouse game with Red Hat (Oracle takes their maintenance patches; Red Hat responds by baking patches into major upgrades so it is more difficult for Oracle to find and duplicate them). If RHEL is no longer binary compatible with OEL as a result of this, Oracle could decide to just drop RHEL support instead of supporting two Linux forks. Not many people care about SLES, but RHEL would be a problem. VMware is also on their list of things to do.
"Have you ever actually had to work with Oracles support or integration services? I suspect you haven't, or you'd understand how that statement can NEVER apply to Oracle. The support is a lot of things, but excellent is certainly not one of them"
Yes, I agree, Oracle support is not great. I have worked on several very large Oracle implementations, so I am familiar with Oracle support.... There is a difference, though, between Oracle actually trying and being incompetent vs. Oracle trying to be incompetent.
I admit that I was wrong in thinking that this lawsuit would be shot down out of the gate, but this is by no means a final judgement. Oracle will first ask the judge to reconsider this ruling. If that fails, they go to the appeals court. If that fails, they go to the CA Supreme Court. This isn't a done deal.
I have no great admiration for Oracle and wouldn't care if they were hammered, but getting slammed for stating the obvious about Itanium and following the same course as MS, RH, and many other ISVs is ridiculous.
"And far from causing vendors to worry about making commitments to platforms and products etc, I think this could bring some much needed clarity to the whole area of heterogeneous support - i.e. instead of vendors saying "we support X with Y", they are now much more likely to say "we support X with Y for a minimum of Z years, and will extend that support if we think appropriate at some point". As a customer planning my investements in technology, I'd be much happier with that."
I agree that promissory estoppel and implied contracts can be enforced if damage has been caused based upon reasonable reliance, but people seem to have this impression that there was a "cold, hard" contract signed with "Multi-Year Oracle-HP Itanium Support" at the top, which is not the case.
I also agree that theoretically this could bring clarity to the support process as companies will be forced to spell out in the future, in cold, hard contracts, exactly what the relationship is, what the obligations are, and for how long. The danger, IMO, is that the whole IT industry works on informal collaboration, partnership and integration. If there needs to be a six month legal review prior to any joint testing or integration planning, everything is going to slow way down. The lawyers will take over everything, the cost of collaboration/integration will be much greater, and it will be more likely that companies will stop working together as often. It could result in people either being IBM shops, Microsoft shops or Oracle shops and no combination thereof in the same stack because the costs of creating the constantly changing integration agreements is just too high to justify.
"I think you misunderstand a core part of this case: Oracle and HP had a contract. This case wasn't based on press releases, but cold, hard, legal contracts the two companies had signed."
You hear this a lot, even though it is manifestly untrue even from HP's lawyers point of view. Is this something that HP reps have been telling people?
This is HP's complaint. Read for yourselves. The implied "contract" was made, according to HP, based upon Oracle's longstanding support for Itanium and the Hurd press release. If you read the trial documents, everything centers around the Hurd press release and what it meant. If they had a formal contract with Oracle, this would have never even gone to trial.
True, there is no practical way of enforcing this judgement, if it sticks, with any meaning. There is a big difference between creating a court ordered port and providing excellent support and integration work. Does anyone think Oracle is going to put their all stars on the Itanium support team (or the interns)? The court might be able to force Oracle to support Itanium, but they can't make them do it well.
Not that Oracle doesn't deserve their comeuppance for a variety of other reasons, but this sets a bad precedent (if it is not overruled) for the IT industry. Now, anytime any vendor issues any statement of support for a company or their products, they have to assume that they are committing themselves to supporting that product forever, under any circumstances. Therefore, companies will be much more reluctant to agree to any press releases, integration work, or anything else which might be construed as a "contract." Interoperability and collaboration will suffer, as no one will want to do anything which might later be brought back in a lawsuit as an informal contract. Why doesn't HP sue Microsoft while they are at it? Surely they can find some press release in which Microsoft wrote that they were committed to Windows on Itanium or to HP in general, which is apparently now a contract for support until HP says they can stop.
"Mainframe software is costly, the z/OS, CICS, etc licenses."
PS - You do not need to use any of this software with System z running Linux. You would use z/VM, the mainframe hypervisor, which is markedly better than VMware (it takes basically no management overhead), and then SLES or RHEL on top of the z IFL (Integrated Facility for Linux, a specialty processor optimized for Linux). No need to buy and spend time setting up clustering software; it has Sysplex baked into the system. Not only is the mainframe not more costly, I would be amazed if it is not considerably less costly at scale after all of the x86 software has been added.
"With public cloud venders like Amazon, you can’t do this, so you’re stuck with that one public cloud without the ability to move workloads from private to public and visa-versa along with separate management of public and private workloads."
I guess the comment makes more sense given that definition. Everyone has these self-serving definitions of the cloud. For VMware, the cloud = virtualization, specifically buying more VMware licenses.... If VMware wants to create the ability to vMotion workloads from private to public, they are going to have to build their own public cloud. None of the major cloud providers are going to want to pay VMware licensing rates when they can get KVM or Xen for free and have the skills to maintain them. I can't imagine any public cloud, or large-scale service provider (Google, Yahoo, etc.), would use VMware. They would be uncompetitive, from a pricing perspective, with those that use open source.
Not the case. Mainframe software is costly, the z/OS, CICS, etc. licenses. A mainframe, the physical hardware, is right in the range of a comparably sized x86 cluster, if not less costly. You can buy the base z114 for about $100,000. You can run RHEL on that hardware for less than the cost of x86 (as a z processor is a massive throughput monster).... The mainframe is the only platform that is EAL5 secure, literally in a class of its own from a security perspective. The best feature, though, is that it just works. About a 30-50 year mean time between outages. You are not going to achieve that level of uptime by assembling a bunch of non-integrated components made by companies that hate each other, e.g. RHEL and VMware, MS and VMware, Oracle and everyone.
What do you mean by hybrid cloud? The definition I have heard of hybrid cloud is: a company that has some of its workload on a "private cloud" (compute, network and storage virtualization plus deployment automation) and some of its workload on a public cloud, such as Amazon EC2, IBM SmartCloud, Force.com, etc. I don't understand how EMC/VMware would have much to do with hybrid cloud, as they only offer the virtualization component of the private side and generally partner with BMC for the automation. How is hybrid cloud a game changer?
Why wouldn't you want to buy a mainframe to run the "cloud"? What is the mainframe? The mainframe is a giant, centrally controlled (by software) virtualized server which can rapidly deploy a number of different workload profiles very securely and reliably. That is the definition of a "private cloud"... IBM has just been building "private clouds" for the last 40 years. Apparently that flies in the face of the "cloud" marketing that says "cloud" is all new. No, "cloud" is a reversion to the mainframe paradigm. Client/server widely distributed computing; "cloud" is bringing it all back into one large cluster. The "cloud" is a validation of the IBM mainframe model: centralized computing distributed to terminals (or thin clients/tablets these days).
Agree, I just checked it out. Gmail works fine on Opera.
I don't think these pure SSD arrays, or tiers of SSD, are practical for most people. Most people don't have workloads that require 500,000 IOPS, but they have a bunch of workloads that require 40-50,000 IOPS. If you spread all of that SSD out in a modular format, everything is fast without any tier being super high performance. A dab will do you with SSD. All of the studies show diminishing returns as SSD is added. If you spread SSD and cache in a modular architecture, you also don't need to worry about the tiering software and the management of pushing hot volumes into SSD and then pulling them back onto disk when they are less utilized. That saves a bunch of software costs and makes everything much simpler. It will also cost a lot less.
As long as a client OS can open a web browser and the few remaining thick client applications, there is no need for a replacement. There are no functional or performance reasons to upgrade. No one wants to upgrade to get a client OS that is 10% shinier than XP.
"You are confusing data corruption with high availability, they are not the same thing."
Yes, I know. I included the superior HA features in Oracle/DB2 because of your spider example (i.e. you want to use MSSQL over MySQL for workloads where you can't recapture the data). This assumes that MSSQL is superior in HA, as the DB going down would be the reason for recapture. If you have a workload where you cannot recapture data (live OLTP for banking transactions or the like), you would want to use a DB with a proper HA architecture like Oracle or DB2. Also, HA and data quality go hand in hand. The most common reason for data corruption or a consistency issue in the DB is that the DB goes offline. Data quality isn't an issue in the normal course of operations. It is when something goes wrong that data quality becomes an issue. HA and data quality are inherently linked.
"My point was that you don't want to do things on the cheap, I was not suggesting that they did run any particular database."
Yes, and my point is that MSSQL and MySQL are both doing things on the cheap. Neither is enterprise grade at the RBS level.... MSSQL forces you to run on the MS NT platform, the opposite of mission critical. MSSQL cannot do contiguous paging, still uses standard 8 KB blocks, has few data transfer options, is missing a whole range of index types... I could go on. I am sure RBS has never considered either MSSQL or MySQL for the workloads you are talking about, so the advantage of MSSQL over MySQL is a false comparison. Neither has that level of enterprise functionality. Using the enterprise-grade functionality of "RBS applications" as a counterpoint to MySQL's scale, flexibility and cost advantages is false.
"This is hardly a drop-in storage engine of the type of innodb or myisam, is it? It's an entire database back end."
More of a drop in than the non-existent MSSQL options.
"*Point being*, that you didn't know that MSSQL provided most of what you are talking about and you're trying to cover that up."
No, I acknowledged that all of the features were not specific to read performance and that I was throwing out possible advantages the engines could provide for different types of workloads. MSSQL does provide many of them, but with MySQL you can tailor your engine to your workload, which is not possible with MSSQL.
"You also said that: "SQL replicates everything and sucks a bunch of storage and system performance" which was just plain bollocks, which you've not acknowledged."
Acknowledged, I was mistaken. MSSQL can do transactional log ships. I know about the types of binary logging.
"I never said that the underlying table structure had no effect, in fact I'd be very surprised if they didn't, but you didn't seem to understand what the query optimiser was and how important it was -- orders of magnitude instead of small constant factors or multiples to the point where the underlying storage mechanism can become almost irrelevant. The query optimiser is never to my knowledge used with reference to the underlying table structure, only logical rewriting of the query. I believe this is the standard terminology."
Actually, you wrote "outstandingly stupid comment. The back end does not matter, only the result set (or whatever) output." The table structure (back end) does matter, obviously. Yes, query optimizers matter as well, and are by no means only available on MSSQL. Different engines are an advantage for MySQL which MSSQL does not have, so, like I wrote originally, score one for MySQL.
"Their businesses to deal with very large amounts of low quality data (low quality can mean either just that (tweets), or it can be lost without much consequence (tweets again), or can be recaptured (Google spider spiders again). If you want to run a business where any data failure can be expensive then you may not wish to go with the cheapest options (RBS, NatWest). I think you are trying to compare things which shouldn't be compared. Do you think they run their payroll on hadoop & mysql?"
I wrote in my original post that MS SQL had an advantage over MySQL in data quality, and used the example of corruption in the case of a power loss with MyISAM. As I mentioned, however, MS SQL is certainly not best of breed in this category either. Oracle and DB2 have far more advanced corruption and HA protections (e.g. Data Guard and RAC and the IBM equivalents) than MS SQL. To my knowledge, RBS and NatWest run their payroll and all other critical data on DB2, not on MS SQL. If you need those advanced features, you are likely not using MySQL or MS SQL. The comment wasn't directed toward data quality. It addressed performance.
"Absolutely wrong. The optimise I am referring to optimises at the logical level of the data and the data distribution, with some reference to the physical extras such as the presence of indexes. An example. You have a million row table (mrt) and a thousand row table (trt), both are indexed and you want to join them. Do you join the mrt to trt, meaning that you go through a million rows...."
The engine will absolutely affect read performance for various queries, as will optimizers that determine the most efficient way to run the query (e.g. transforming a subquery into a semi-join operation and then treating the semi-join like another join operation throughout the optimizer). MySQL has query optimizers and different data-handling engines. Many different engines, with many different data handler profiles. The link below provides various benchmarks for different MySQL engines using the same read queries on the same data set with the same "normal" OLTP physical structure. If the engines made no read-performance difference, there would be no difference on these benchmarks, as the only changed variable is the engine.
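The join-order point from the quoted mrt/trt example is easy to demonstrate. Below is a toy sketch in Python (all names invented, a hash dict standing in for a b-tree index, tables scaled down; this is not MySQL's actual optimizer): joining the same two tables on an indexed key, driving from the small table costs ~1,000 index probes, driving from the big one costs ~100,000, for the identical result set.

```python
# Toy illustration of why join order matters: join a big table (mrt)
# to a thousand-row table (trt) on an indexed key. Driving the join
# from the small table means ~1,000 index probes; driving it from the
# big table means ~100,000 probes for the same result set.

def build_index(rows, key):
    """Hash index: key value -> list of rows (stands in for a b-tree)."""
    idx = {}
    for row in rows:
        idx.setdefault(row[key], []).append(row)
    return idx

def join(driving, index_on_other, key, probes):
    out = []
    for row in driving:
        probes[0] += 1                      # one index probe per driving row
        for match in index_on_other.get(row[key], []):
            out.append({**row, **match})
    return out

mrt = [{"id": i, "a": i % 1000} for i in range(100_000)]   # "million"-row table, scaled down
trt = [{"a": i, "b": i * 2} for i in range(1000)]          # thousand-row table

probes_small, probes_big = [0], [0]
r1 = join(trt, build_index(mrt, "a"), "a", probes_small)   # drive from the small table
r2 = join(mrt, build_index(trt, "a"), "a", probes_big)     # drive from the big table

# Same logical result either way; wildly different amounts of work.
assert len(r1) == len(r2)
print(probes_small[0], probes_big[0])   # 1000 vs 100000 probes
```

A real optimizer makes this choice from table and index statistics; the sketch just shows why the choice dwarfs most constant-factor differences.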
Here is a benchmark on the various engines for JOIN queries, based on your example. Different results for different engines.
On the rundown of features, I was just throwing out examples of various differences at the database engine/server level that can affect read or write performance, or both. Point being, engines affect performance, and having a wide variety of engines, or being able to develop/donate your own (if you are Google or fb), is a benefit for the performance of a particular workload, as opposed to being forced to use the standard MS (Sybase) engine.
"you seem to be comparing relational with non relational. Does mysql have columnar data layout? If not, why do you ask it of mssql?"
Yes, the Calpont and KFDB engines are columnar.
"Replication based on logs was always available, see<http://nirajrules.wordpress.com/2008/12/08/snapshot-vs-logshipping-vs-mirroring-vs-replication/> for a whole list of options including log shipping. What you seem to be confusing it with is a snapshot replication, which it has as well (you pick whichever suits your problem best). I'm not an expert on the subject and what I do know is rather stale so I'm not going to say any more on it."
Yes, MS SQL can log ship, but does it do it at the binary level as opposed to the SQL statement level, where shipped statements then need to be re-executed (binary vs. transactional logs)? It may be possible, but it was my understanding that MS SQL ships the transactions. Binary is obviously higher performance.
"If you don't distinguish between read and write performance, that's a major problem. For read performance (which makes up the majority of db work), one b*tree is going to be much like another b*tree. Also you fail to understand the relevance of the optimiser, which has a vastly greater role in read peformance than you seem to realise."
I can come up with some SPEC benchmarks, but they are generally pretty worthless, as the hardware is never apples to apples and it's more of a tuning test than anything you are likely to see in the real world. MySQL does have the SPECj record, but I assume Oracle used some crazy config. The best way to judge performance and scale is to look at what has been done. Most of Google, Yahoo and facebook runs on MySQL. I am not familiar with any MS SQL applications with that read or write performance or scale.
One b*tree, or columnar comparison tool, is not like all others. You write that "one b*tree is going to be much like another b*tree", but in the next sentence you mention optimizers... meaning that one b*tree, or lookup, is not going to be like the next. For instance, MySQL uses a special algorithm in LIKE string lookups, Boyer-Moore, to initialize the pattern for a string and perform the search quickly while limiting the index range for the search. MySQL also has a pretty slick hash index optimizer. There are all sorts of details but, point being, one b*tree is not like every other, and there are various optimizers for improved reads in MySQL.
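For anyone curious what that Boyer-Moore trick actually looks like, here is a minimal sketch of the simplified Boyer-Moore-Horspool variant in Python (illustrative only, not MySQL's implementation): compare the pattern right-to-left and, on a mismatch, use a precomputed bad-character table to skip ahead, so most text characters are never examined at all.

```python
def horspool_find(text, pattern):
    """Boyer-Moore-Horspool substring search: precompute a bad-character
    shift table from the pattern, then compare right-to-left, skipping
    ahead by the table amount on a mismatch. Average case is sublinear."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    # For each pattern char (except the last), how far we can safely
    # shift when that char lies under the last pattern position; any
    # character not in the table allows a shift of the full length m.
    shift = {pattern[i]: m - 1 - i for i in range(m - 1)}
    pos = 0
    while pos <= n - m:
        j = m - 1
        while j >= 0 and text[pos + j] == pattern[j]:
            j -= 1                          # match right-to-left
        if j < 0:
            return pos                      # full match found
        pos += shift.get(text[pos + m - 1], m)
    return -1                               # not found

print(horspool_find("select * from users where name like '%smith%'", "smith"))  # → 38
```

The payoff is the skip: searching for a 5-character pattern, a mismatch on an unknown character jumps 5 positions at once instead of 1, which is why it beats the naive scan for LIKE-style searches.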
"outstandingly stupid comment. The back end does not matter, only the result set (or whatever) output. I suppose that if you want fast write performance than unlogged MyISAM then occasionally I'd agree with you. In the main, for real work, I would not."
It certainly does matter. The use of different engines allows you to have, for instance, tables which are transactional or not, in-memory or not, compressed for a particular application/workload, columnar vs. relational, row-level locking (InnoDB) vs. table-level locking (MyISAM), foreign keys or not, various relationship constraints or not, etc. The engine acts as an optimizer for a particular workload, as opposed to one general table style. If the back end did not matter at all, why are all these columnar DB companies wasting their time?
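The transactional-or-not distinction above can be sketched in a few lines of Python (a toy model with invented class names, not InnoDB/MyISAM code): both "engines" accept the same batch of writes, but only the transactional one can roll back when a write fails partway through.

```python
# Toy sketch of one back-end difference: a transactional engine can roll
# back a failed multi-row write; a non-transactional engine leaves the
# table half-updated. Illustrative only.

class NonTransactionalEngine:
    def __init__(self):
        self.table = {}
    def write_batch(self, batch):
        for key, val in batch:
            if val is None:                 # simulate a mid-batch failure
                raise ValueError("bad row")
            self.table[key] = val

class TransactionalEngine(NonTransactionalEngine):
    def write_batch(self, batch):
        snapshot = dict(self.table)         # stand-in for an undo log
        try:
            super().write_batch(batch)
        except ValueError:
            self.table = snapshot           # roll back to pre-batch state
            raise

batch = [("a", 1), ("b", 2), ("c", None), ("d", 4)]   # third row is poison

plain, txn = NonTransactionalEngine(), TransactionalEngine()
for eng in (plain, txn):
    try:
        eng.write_batch(batch)
    except ValueError:
        pass

print(plain.table)   # {'a': 1, 'b': 2} -- half-applied batch
print(txn.table)     # {} -- batch rolled back cleanly
```

Real engines implement this with write-ahead/undo logging rather than a full snapshot, and the point of having both kinds is exactly the trade-off above: the rollback safety costs logging overhead that a bulk-load or logging workload may not want to pay.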
"Like, the transaction log in mssql is not binary? WTF? perhaps it's handwritten XML by the gnomes that live in the server."
You, I assume intentionally, did not respond to the full sentence. MySQL uses a log based replication method as opposed to a data based replication method. Instead of replicating all of the data to the slave copy, MySQL only replicates the binary changes (differential at the block level) to the slave. MS SQL, unless it has changed recently, uses a publish and subscribe paradigm. MySQL has way less data to replicate, way faster.
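The statement-vs-row distinction being argued here can be shown in a toy Python sketch (invented names, nothing vendor-specific): a statement-based replica re-executes the shipped operation and redoes the work, while a row-based ("binary") replica just writes the shipped values; both converge to the same state.

```python
# Toy sketch: statement-based replication ships the operation for the
# replica to re-execute; row-based replication ships only the changed
# rows. Both converge; what goes over the wire differs. Illustrative only.

def apply_update(table, stmt_log, row_log):
    """Master-side update: double every value, logging it both ways."""
    for key in table:
        new = table[key] * 2
        row_log.append((key, new))             # row-based: ship the new value
        table[key] = new
    stmt_log.append("UPDATE t SET v = v * 2")  # statement-based: ship the op

master = {"a": 1, "b": 2, "c": 3}
stmt_log, row_log = [], []
apply_update(master, stmt_log, row_log)

# Statement-based replica: re-executes the statement, redoing the work.
replica_stmt = {"a": 1, "b": 2, "c": 3}
for stmt in stmt_log:
    for key in replica_stmt:                   # re-runs the whole update
        replica_stmt[key] *= 2

# Row-based replica: just applies the shipped values, no recomputation.
replica_row = {"a": 1, "b": 2, "c": 3}
for key, val in row_log:
    replica_row[key] = val

assert master == replica_stmt == replica_row
```

Which format ships less data actually depends on the workload: one statement touching a million rows is tiny as a statement and huge as row changes, while a single-row update is the reverse, which is why MySQL ended up offering both (plus a mixed mode).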
MySQL just set the world record on the SPECJ benchmark.
Benchmarks, especially Oracle benchmarks, are not great predictors of real-world performance, as they are almost always highly modified versions of the DB and rarely have an apples-to-apples hardware configuration. The best predictor of scale and performance is real-world results. facebook, Google, Yahoo, eBay and many of the other highest-IO applications in the world, both read and write, run on MySQL with the InnoDB engine. It is more than capable of handling the highest of the high-end workloads from a performance perspective. MySQL lacks enterprise security/access, extension and some RAS functions, but performance and scale are definitely not an issue.
I think calling someone 13, as opposed to providing evidence which would contradict any of the above, is about the most adolescent response possible.
That is information from Microsoft. As Microsoft only has 17% of the overall DB market, per Gartner, it seems unlikely that they would have a 2-1 market share on SAP as compared to the rest of the workload market. .NET applications are where MS SQL is most common. The vast majority, like 90%, of large SAP users, say 1,000-plus users, run on either Oracle or DB2 for core SAP.
I believe Oracle is still the largest DB install for SAP (Oracle claims that over 2/3s of SAP users run Oracle DB, but that is probably overstated). IBM DB2 is undoubtedly growing the fastest for SAP workloads, primarily because DB2 is SAP's preferred DB. Many of SAP's largest installs, e.g. Coca-Cola, Pepsico, Siemens, 3M, SAP (internal use SAP environment), Welch's, Pfizer, Cardinal Health, Medtronic and many others, have migrated from Oracle to DB2 in the last few years. SAP and IBM have built a deep compression algorithm for SAP data which reduces the storage requirements by 60%, DB2 is integrated into the SAP cockpit and operates as part of the SAP environment.
The apparent contradiction is because Microsoft includes any customer running any application or tier at all associated with SAP which uses MS Server or SQL as being an "SAP on Microsoft" install. There is no way on earth that 57% of SAP customers run their core DB tier for SAP on Windows Server, but they might run an application or presentation server on MS Server.... or they might run some reporting module on SQL with the core FICO and other core modules running on AIX - DB2 or Oracle.... Microsoft would include that customer as a "Microsoft install" even though the environment is predominantly running on something else with one small component running on SQL or MS Server.... Oracle does the same thing with their 2/3s number.
I am thinking of MySQL.
facebook case study:
It would be pretty hilarious if MS's response to Oracle is that they agree. MySQL is terrific and they would encourage Oracle DB users to consider the cost advantages of MySQL for the majority of their workloads which do not require RAC or Oracle's high end features... which would be a good portion of the Oracle install base.
MS SQL is an ANSI SQL DB (based on Sybase), so it will work with Java, PHP, etc. I am not saying it won't work. I am saying it is not designed to work with open stack and third-party technologies to the degree it is designed to work with .NET and MS technologies. You can only run it on the MS platform; that is only true of MS SQL. Oracle, DB2, MySQL, Postgres, etc. all run on every major platform, including enterprise platforms (Unix). MySQL will support all of the open stack engines: InnoDB, Merge, MyISAM, Memory, Cluster. MSSQL supports MSSQL's Sybase engine, that's it. MySQL and MS SQL are about at parity with Java; they both have a decent JDBC driver. PHP can natively create a MySQL DB and perform basically every function in the DB without ever needing to use a MySQL editor. That is nice and can't be done with MSSQL. Oracle and DB2 can both store Java procedures in the database; I don't think there is any comparable level of integration with MS SQL. DB2 and Oracle will both handle Java functions such as garbage collection and Java multithread support. MS SQL will work with PHP, Java, etc. through a connector, but MS is not going out of their way to help you with Java or open stack technologies. They want you to use .NET and the MS platform, IIS, etc.
Or have public clouds been going down at an inordinate rate over the past month or so? Salesforce.com, Amazon EC2, now Azure?