Standards and interoperability: Are you backing the right horse?

IT can sometimes seem like a long, drawn-out process of making things work with each other. Whether it’s getting back-end systems to exchange information, or trying to open a file that has been sent in an unexpected format, most who work with technology will be familiar with the challenge. But surely standards are supposed to …



Over a third of a century of un*x ...

... and so far, I see no interoperability issues, from the IBM 3151 dumb terminal to the small cluster of vaxen to the three-year-old Sun to the laptop with Slackware 13.1 Beta 1. Interoperability issues are always caused by (mis)management of resources, usually driven by marketing forces.


Only part of the story.

"Due diligence is key when buying and deploying IT systems and services, in terms of both what you need a system to do now, and what you might need it to do in the future. A few questions asked early on around interoperability can go a long way; otherwise, by the time you find you have backed the wrong horse, it may be too late to do much about it."

This is really only part of the story. There are many facets to ensuring compatibility between products. The first part is indeed “due diligence,” but what exactly constitutes due diligence? Vendor promises? Here’s a shocker for you: salesmen LIE.

No, despite what many people in “the industry” would tell you, and certainly despite what Intel will tell you (over and over and over again), one of the worst things in the world you can do is buy new technology. Now, I don’t mean “buying new computers with a warranty, etc.” is a bad thing. I mean buying version 1 of anything is completely ****ing foolish. You don’t buy Vista, you wait for Windows 7. You don’t build a smartphone on Moorestown, you wait for Medfield. Etc.

Let’s stick with the smartphone analogy for a second, because it gives us a good opportunity to look at an up-and-coming technology trying to break into an extant market. Right now, if a dozen phones came out with Moorestown/MeeGo, I wouldn’t even bother testing a single one of them. I would stick with RIM, or consider Android on ARM because it’s proven. One refresh cycle later (three years on), MeeGo 2.0 on Medfield would be out, both the hardware and the software having had a generation of early adopters walk face first into the current landmine of lies, damned lies and statistics for me. I would have a reasonable idea of how Android on ARM stacked up against RIM, against MeeGo on x86, and what sort of patent catfights or standards lock-in **** swinging was on the horizon. (El Reg and Ars serve their purpose by keeping me informed of such things.)

Moving that over to the latest and greatest server jiggery-pokery: let us say that Intel comes out with the super-deluxe 16-core HAHAHA processor with added IOMMU pi.5 and some awkward decision to do something strange like migrate vPro directly into the processor. Fantastic; it’s a new processor requiring a whole new type of motherboard with eleventeen squillion pins, and it is fundamentally incompatible with AMD’s approach to the exact same thing. This means that in order to even begin to compare one to the other, I have to wait for VMware to get samples, write code for both manufacturers, and run a couple of generations (to deal with patching bugs, etc.) before I would have a real idea of what benefits (if any) this “new hotness” can bring. Not only that, there are questions that might arise: would the inevitable incompatibilities and attempted lock-ins prevent me from migrating my VMs across architectures? Would it be fully backwards compatible? So many questions.

The truth is I don’t trust vendors. Not a single one of them. They all play their games, mouthing “openness” out one side of their mouths while telling you how their lock-in is the greatest thing since the LED out the other. Maybe in the world of high-performance computing the need to squeeze in a few more gflops/sq. ft matters so much that interoperability, reliability and avoiding technological dead ends are just not relevant concerns. Maybe some places can replace all their gear all at once every four years. The rest of the world, though, deals with the realities of aging systems that absolutely must talk to each other and can’t easily be replaced. (For example, I maintain several very large and expensive digital photo printers, somewhere in the quarter-million-each range, each of which runs on Windows 2000, will only ever run on Windows 2000, and won’t even use the newest service pack at that. They have a service life that will extend for another five years at least.)

“Due diligence” is only part of it. Experience and a very healthy dose of cynicism are absolutely required for cutting through the FUD and the layers of “but NEWER IS BETTER” that you will receive not only from vendors, but from rabid geeks and management types as well.

Newer may sometimes be better. Newer is, however, always a set of lies, damned lies, statistics, bugs, patches, incompatibilities, yet more lies, and regression and /progression/ testing nightmares waiting to happen.



Biting the hand that feeds IT © 1998–2017