We’re not even out of January yet and already the UK's tech channel has possibly seen its biggest story of 2014 after IBM sold its low-end server business to Lenovo. The two companies have history, of course, with IBM offloading its profitable-but-not-for-long PC business to the Chinese firm back in 2005. Lenovo did rather …
It seems to me that IBM is selling itself to Lenovo bit by bit (pun intended).
How long before IBM simply disappears? 20 years? 30 years?
Predictions of that kind of death were already being made in the mid-80s, remember? But if you add the wording "today's IBM" to your sentence, in the reinventing sense, then the answer will probably be: within the next few years.
> Predictions of that kind of death were already being made in the mid-80s, remember?
...and IBM was basically bailed out due to direct intervention from the President of the United States. It wasn't quite as flashy as the Fed throwing billions of dollars at them, but it was an intervention at the highest levels all the same.
OK JED, I will bite.
Do you have a citation that will enlighten me on this news?
I want to know the who, what, when and where.
>How long before IBM simply disappears?
IBM (or its constituent companies) once made bacon slicers, coffee grinders and scales. One of the reasons they're still around is that this is no longer their core business. Although you will find echoes of the "computing scale" in a self-checkout system.
Please be a little specific, or more specific, or highly specific!!!! Not spewing, like a half-assed troll.
Two schools of thought here:
1. IBM has long term strategic thinkers capable of planning and considering ten years out.
2. They have a desperate need to hit quarterly targets set by Wall St, so will flog off whatever parts of their divisions, such as x86 servers and GPS, to meet them.
I wonder which is the most likely....
> Does IBM know something we don’t about the future of low-end x86 servers
In the early 90's I worked for IBM. Even then, the view was that they weren't a hardware company, but knew that the "future" was in services. They also knew that the profitable work was at the leading edge, not in the box-shifting, mass-market.
What IBM is good at (and their longevity, albeit with many ups and downs, does support their view) is innovating, productising and doing stuff that other companies can't, won't or aren't big enough to. So it's no surprise that businesses they nurtured and grew into successes get sold off: it's their pattern.
As for datacentres and energy: the solution is simple. Once it becomes too expensive for cloud operators to power and cool their datacentres, they'll simply stop doing it. Whether they close down gracefully or just switch off the lights and walk away will, one day, be interesting to see. However, companies that use these services need to remember that nothing in cloud cuckoo land is under their control, and that this will be their biggest vulnerability.
However, with datacentres using as much power as an aluminium smelter, once electricity becomes too expensive for cloud computing, it will also be too expensive for other essentials. That will have a more far-reaching effect on individuals' lives than where the popups get served from.
I agree on the 90s view from IBM: it was consulting and some services, but aimed at the biggest budgets and at just under the top end of the market.
I was surprised they held onto making silicon based devices for so long.
IBM will be around for a long time, way after the manufacturers of computing objects as we know them now have gone. The ability to handle people, data and knowledge, and to make decisions, will always be in demand, just as it has been since the Greeks.
There is another alternative to building bigger and bigger data centres.
Rather than look at the power footprint of the hardware, why not start looking at the power footprint of the software?
Looking at what people are doing on systems nowadays, how much more productive are people with, say, office productivity suites today versus what they had 15 or 20 years ago, when systems had a fraction of the computing power and consumed a fraction of the electricity? (You only have to look at a 15-year-old PC and spot that the power supply could only deliver around 100W; look at a modern PC and you will find that 300-500W power supplies are the norm.)
I know that there are new applications that people use that do need high footprint software (anything to do with high quality media is a prime example), but for many tasks, both on a commercial and a personal basis, modern software is big, bloated, and power hungry.
The power economies available from ARM and Intel's Haswell show that considerable savings are possible, but these have largely been soaked up by software with higher requirements. Reducing the memory footprint and CPU cycles required to run the systems means that each system will be able to run more work in the same power budget.
I'm not saying that all workloads can have their power significantly reduced (Big Data and HPC workloads will always be memory and/or processor intensive), but much of VDI, simple data-processing work and web serving is hugely inefficient because of the way it has evolved and the tools used to write it.
So my view is: dump the RAD tools and languages that require tens of megabytes to run "Hello, World.", and move back to the development of light-weight applications on stripped-down OSs, coded by skilled coders who are tasked with writing efficient code, and then run more work on systems within the existing power footprint.
The cost balance will move from quick to develop but expensive to run, to expensive to develop but cheaper to run, but that equation will shift as power gets more expensive. It will have to happen eventually once computers reach the limit of what can be achieved in the available power budget, but why not start now before the crisis hits us?
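The efficiency point is easy to illustrate with a toy sketch (illustrative only, not a benchmark, and not any particular workload): two functions that produce the same output, where one does quadratic work through repeated string copying and the other does a single pass. The same output for a fraction of the CPU cycles is exactly the kind of saving being argued for.

```python
# Toy illustration of how coding style alone changes the work the CPU
# does: both functions produce the same comma-separated string, but the
# first copies the growing string on every iteration (worst-case O(n^2)
# work), while the second builds the result in a single pass.

def build_slow(n):
    s = ""
    for i in range(n):
        s += str(i) + ","   # each += may copy everything built so far
    return s

def build_fast(n):
    # one pass, one final allocation
    return "".join(str(i) + "," for i in range(n))
```

Identical results, very different power footprints at scale; multiply that difference across a datacentre and it stops being academic.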
"Once it becomes too expensive for cloud operators to power & cool their datacentres, they'll simply stop doing it."
There's cloud and there's datacentres. If you set out to provide cloud to lots of independent low-volume customers, you don't need a lot of machines in a single space. Even if they are all on the one site, they can be spread out to whatever extent your cooling needs require. Your service is merely to provide a redundant array of virtual PCs. That's embarrassingly parallel and will never be hard to cool.
Proper datacentres are the ones running loads on Big Data where bringing the necessary processor grunt and the vast amounts of data together is actually a problem. These were the problems that people used datacentres for before everyone jumped into the cloud. They don't scale and might well prove hard to cool in the future.
Right now, people talk as though they are using the hardware of the latter to provide the service of the former. I suppose that makes sense, since it is what the datacentre people had lying about when cloud took off, but cloud doesn't need to be done that way, so in the long term there is no problem with cloud scaling. You just build new clouds differently.
The 90s were a different world. IBM were providing systems across the board, from Mainframe to PCs, with fingers in the printing and storage business. They were a broad spectrum company, and for the first half of the decade, did not do much in the way of services. This was typical, and HP and other manufacturers were doing the same.
When they did move into services, it was mainly to allow them to provide an end-to-end solution to many customers: doing design, supply, installation and support for customers who wanted a one-stop shop. It took them the best part of a decade, and several services-organization purchases (and cherry-picking the best of the inherited staff from outsourcing deals), before they actually made it work properly.
The danger here is that their biggest services customers have been the biggest organizations, including government and the financial industries, and as we know, these are areas that are being severely squeezed. I cannot see the services market growing again in the short-term, so to take a radical shift into cloud and/or cognitive computing while divesting a large part of the nuts and bolts business is a gamble with little fallback. It may be inspired, or it may lead to a weaker IBM if the bubbles don't materialize or burst before the Next Big Thing appears.
And whilst they can still take part in the services business, the fact that they are no longer an end-to-end supplier may allow other services companies to get a foot in the door and exclude IBM. After all, concentrating on services has done HP a world of good!
"I'm not saying that all workloads can have their power significantly reduced, ...... running simple data processing workloads, and running web sites are hugely inefficient because of the way they have evolved and the tools used to write them."
Very true: why, for example, did a system specification I recently reviewed (it had come from a reasonably sized consultancy) have things like dual-socket, quad-core boxes just to run caching in front of a web server farm? Surely that's the sort of job crying out for lower-power boxes with lots of RAM, not big (OK, relatively big-ish) tin with RAID5 for the OS (!) and SAN access for the cache data (again, "!"). Surely a small ARM server (maybe 32-bit, maybe 64-bit; does it matter for this task?) can do something as simple as a static file cache, with the correct software and configuration?
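For what it's worth, the core of a static file cache really is tiny. A minimal Python sketch (the function name and cache size are illustrative, not any particular product): each file is read from disk at most once and later hits are served straight from RAM, which is exactly the sort of job a low-power box with plenty of memory can do.

```python
# Minimal sketch of an in-memory static-file cache: a file's bytes are
# read from disk on the first request and served from RAM thereafter.
# maxsize bounds how many distinct files are kept resident.
from functools import lru_cache
from pathlib import Path

@lru_cache(maxsize=1024)
def cached_read(path: str) -> bytes:
    """Return the file's contents; repeat calls hit the in-memory cache."""
    return Path(path).read_bytes()
```

A real cache would add invalidation and size limits in bytes rather than file count, but nothing in it needs two sockets of x86 grunt.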
@Getriebe: And what has IBM got to do with the attributes you claim have been in demand since the Greeks?
IBM provides consultancy and designs systems.
In fact this has been going on since civilisation started: Sumerian clay tablets are the medium for a database system that recorded family members, possessions, and possibly how much grain and oil they were entitled to from the central stores during droughts or famines. Somebody designed that system; somebody operated it. Whether we call them "priests" and "scribes" or "consultants" and "data entry clerks" is pretty immaterial. Whether we call it a "temple" or a "data warehouse" is, in practical terms, probably also immaterial.
So I agree with Getriebe, except that I don't think he pushes it back far enough. There is a line from clay tablets with triangle marks through Hollerith machines to temperature controlled rooms full of discs and servers, and companies that recognise that the software is more important than the hardware are better primed for long term survival.
" After all, concentrating on services has done HP a world of good!"
LOL! Yeah right... just look what happened to EDS.
The fact of the matter is that 'single x86' is no longer an appropriate technological solution except on the desktop or small office server, and IBM ain't in those markets.
Machine-room servers are giving way to massively more economical virtual-machine technology based on high-end rack and blade type solutions.
And as more and more server-side applications move from being operating-system dependent to network dependent, serving their users via network sockets only, and because high-end applications are massive on design effort, it makes sense to go the extra half yard and provide them on whatever platform is the most cost effective. If that happens to be ARM, or some other processor, so be it. It is no big deal.
Microsoft is tied to Intel, and Intel owns the medium computer space because Microsoft owned that space. The medium computer space is increasingly irrelevant, and so is Microsoft, and so is Intel.
The so-called cloud model (essentially smart clients running standardised internet-based applications on cheap end-user devices, coupled to massive centralised storage and application power) is the current cost-effective way to deliver IT to end users.
The personal computer is no longer a personal computer: it's no more than a smart graphics terminal hooked into someone else's mainframe running time-shared applications.
The wheel turns full circle, and what IBM envisaged as the PC, is now what every user level device is. A terminal.
So IBM simply acknowledges that and gets out of a market that it simply isn't interested in. IBM has already been replacing servers with virtual machines for its client base; it's not interested in delaying a change it probably feels is not only inevitable but actually welcome.
There will always be a niche market for small single-OS servers, for privacy and geographical-security reasons; not everyone will want to use public clouds, but a huge number of people will. There will always be a need for proper desktop workstations with local processing power, but that again is a niche market for the very few people who actually use computers to generate real content, rather than merely enter data or consume content.
But those are not mainstream.
"However companies that use these services need to always remember that nothing in cloud cuckoo land is under their control and that this will be their biggest vulnerability."
Nope, you had it right (cloud cuckoo land) the first time!!!
> The fact of the matter is that 'single x86' is no longer an appropriate technological solution except on the desktop or small office server,
You couldn't have come up with a statement that is less true if you tried.
Furthermore, that statement is even less true now than it used to be.
If anything, it's IBM's more "high end" server business that's in danger of becoming obsolete.
Maybe you should try doing word-processing on a 15-20 year old machine, just to remind yourself how slow and frustrating it was back then compared to now.
20 years ago, we had Windows 3.11 and Office 6. 15 years ago we had Windows 98 and Office 97. The computers would crash several times per day. Changing fonts meant waiting a few seconds for the dialog box to come up. Printing was the same, and you had to wait until the job was sent to the printer before you could start working again.
> the fact that they are no longer an end-to-end supplier
They only sold off the x86 server business - where they had to buy stuff from Intel and AMD. They still have many different server offerings using POWER. They may even be starting on ARM servers.
> 20 years ago, we had Windows 3.11 and Office 6. 15 years ago we had Windows 98 and Office 97. The computers would crash several times per day.
20 and 15 years ago I _didn't_ have Windows*. I had DRI's Multiuser-DOS and I didn't have crashes, nor to wait for font dialogs or printing. Nor were they 'slow and frustrating' even on 80386. Word processing, software development, and everything else were fine.
* I sometimes had 3.11 running as one session of the Multiuser-DOS.
"15 years ago we had Windows 98 and Office 97"
Lucky you. I'd already had Windows NT for over five years at that stage (1993?).
"The computers would crash several times per day."
Yours might. My IT-supported colleagues might. My self-installed Windows NT laptop didn't crash, and it never routinely ran out of resources either, unlike the DOS-derived systems, so when a complex document wouldn't load or print on W98, folks came to me to get it sorted.
Then in due course Gates got his hands on the internals of Windows NT, and stuff that had no right to run in kernel mode could suddenly blue screen the system any time it chose. And frequently did.
Not so happy days.
Your problem is this; "coded by skilled coders".
Have you seen the drooling monkeys being churned through universities these days?
Companies that develop applications want to hire the cheapest people they can. As long as the code runs/compiles they don't give a rat's arse about how efficient the code is. Nor does anybody give any thought to producing maintainable code.
Some of the code I see, and sometimes attempt to fix, is truly horrific. Epic monstrosities of incomprehensible, multiply nested if statements, repeating code blocks and byzantine layers of unnecessary complexity.
Recently I found a function in our code base produced by a low-rent Indian coder (a "Team Leader!" to boot) that was over 600 lines long, with vast amounts of redundant crap and so many nested if statements that it made your eyes water. I managed to distill that crap down to a readable 40 lines or so. It wasn't an easy job, I can assure you.
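A toy sketch of the kind of distillation being described (entirely hypothetical functions, not the actual code): the first version buries three simple rules inside nested ifs; the second says exactly the same thing with guard clauses and early returns.

```python
# Hypothetical before/after of the refactoring described above: both
# compute the same discounted total, but the nested version hides the
# three simple rules inside four levels of indentation.

def discount_nested(customer, total):
    if customer is not None:
        if customer.get("active"):
            if total > 100:
                return total * 0.9
            else:
                return total
        else:
            return total
    else:
        return total

def discount_flat(customer, total):
    """Same behaviour, written with guard clauses and early returns."""
    if customer is None or not customer.get("active"):
        return total            # no customer, or inactive: no discount
    if total <= 100:
        return total            # below the discount threshold
    return total * 0.9          # 10% off for active customers over 100
```

Scale that from 20 lines to 600 and you have the eye-watering function above.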
It's much harder to fix crap code than to write clean code in the first place yet we continue to write crap code. Why is that?
Don't get me wrong, I agree with everything you said, but the sad reality is that the people who do the hiring & firing don't give a shit. As long as it works without crashing long enough to be demonstrated to a potential customer, it's good enough.
>> 20 years ago, we had Windows 3.11 and Office 6.
20 years ago, where I worked, we used WordPerfect.
No version of MS Office in all the years since has been even half as powerful for consistently getting documents to look exactly like you want them to.
Funny you should mention "word-processing on a 15-20 year old machine." Recently, someone gave me an old computer as part of a housecleaning. It is a DEC HiNote laptop with a small screen, 75MHz 486, 20MB of memory (that's MEGAbytes!), a 504MB hard drive (another mega), Windows 95 and Office 7. It is in running order and I tried it out. Slow it is not. Office 7 runs just fine. But there is no eye-candy. I did not try to print with it, and yeah, printing might be slow. And maybe it is not so good at integrating graphics and photos into documents. Compared to the computer I am using right now, it has 1/30 the processor speed, 1/200 of the memory, and 1/100 of the hard drive space. Just goes to show how much bloat Microsoft has dumped on us all.
Poor ole EDS. HPQ swallowed them and never even burped. $13 billion, in one gulp. ;-)
In many companies, the people who do the hiring and firing just don't know enough about programming to know what they are doing or to tell good from bad. They also have little idea of how much the different levels of qualification and experience are worth.
In my time I've seen programming recruitment carried out by someone whose previous experience was recruiting staff for a call centre, and people with academic experience recruited for customer-centric business jobs. This is just as bad as recruiting incompetents.
Civil, mechanical and electrical engineering, not to mention banks, tend to have proper management structures which can recruit appropriately, but a lot of software companies just haven't been around long enough to have passed-down expertise.
I had WordPerfect 6 running on Windows 3.11 on a 486SX with 4MB RAM. It was bearable if you had about a page of text, but if you had much more than that, it got very slow. I was a student at the time, which meant lots of long documents with charts and diagrams pasted in.
> I had WordPerfect 6 running on Windows 3.11 ...
There's your problem, right there. WP 5.1 would have been fast and reliable on that machine.
I work in an office full of them: push-button monkeys without two brain cells to rub together on a good day, in aggregate.
What they produce is truly appalling, because it has never occurred to even one of them that someone else may have to read their code. Neither has it crossed their mind to carefully analyse the issue at hand and reach an appropriate model for solving it BEFORE they start coding. The monstrosities I have seen beggar belief, but hey, 30+ years and counting, I have seen it all more or less.
It's deja vu all over again. The industry has not "learned" and the mistakes we knew not to make 30 years ago have been constantly repeated ever since :(
:) "There is a line from clay tablets with triangle marks through Hollerith machines to temperature controlled rooms full of discs and servers, and companies that recognise that the software is more important than the hardware are better primed for long term survival."
When I wrote that I was thinking of zero, which could have come from the Hindu people using a pebble to mark 1 and its absence (i.e. a depression in the sand) to mark nothing.
But I thought that would be too difficult to explain.
While I agree with what you said, I think you missed what I was suggesting. I was suggesting that industry should skill up their coders so that they were capable of writing the efficient code. This would be of benefit to many of us older people, as we came from such an environment.
Education does as industry wants. If there was a serious need to have people trained in writing assembler, within 5 years, the education system would be falling over itself with suitable courses (it takes that long to develop a syllabus and get it accepted). Vocational training could be even quicker as long as there were the trainers able to teach (although this is debatable).
The only reason that Java, C#, .Net and Python are the programming languages of choice in education is that they think this is what industry needs!
Computing anything requires computing hardware. If your business isn't in the hardware business then you are ALWAYS dependent upon those who are in the hardware business.
When your customers have problems which relate to the hardware, and configuration, they're not going to be looking at you.
When they have very specific hardware needs, or unconventional requirements, you're automatically out of the game, because a hardware manufacturer can always do better than you.
Being a hardware manufacturer who provides end to end solutions gives you an advantage in the eyes of the customer, that single point of contact thingy customers want. If you can't provide that, then you're just another third party supplier, competing in a field full of third party suppliers.
I think the point is that as cloud computing develops, and as even commodity hardware starts to be "good enough", the need for specialised hardware (except in some niche areas) is largely going to go away. In fact, people will want it to go away because vendor lock-in is to be avoided.
The software needs to become more efficient in all phases of the development cycle, but ideally should not be dependent on custom hardware. And someone is always going to make that hardware.
"The question may arise as to whether government clients may push back on having Chinese Lenovo kits in their data centres due to "national security" concerns."
That thought crossed my mind, before purchasing some Lenovo PCs and ThinkPads, but I dismissed it with the thought that if any back doors were discovered in their hardware they would be toast, so are unlikely to risk it.
What's more, the Chinese equivalent of the NSA/GCHQ isn't likely to be trawling through your personal data in the hope that they can engage in a little blackmail on the side. An opaque system encourages abuse by the people who operate it, something that doesn't seem to occur to our politicians (unless of course they are being blackmailed to keep quiet by those same security agencies).
I'm still trying to understand why Edward Snowden was employed by Dell while he was working for the NSA, according to the Guardian ( http://www.theguardian.com/world/2014/feb/01/edward-snowden-intelligence-leak-nsa-contractor-extract ). This raises questions as to how closely Dell is working with the NSA.
Whilst a reasoned argument may lead you to this conclusion, that is not what happens inside many government organizations.
A lot of organizations with security concerns have effectively got a black list of countries that they do not deal with, and this normally extends to companies from those countries. China is on this list, and thus it requires special sign-off to deal with those companies.
What I find difficult to reconcile is that buying a server made in and shipped from China with a Lenovo badge may be prohibited, while buying one made in China with an IBM badge is not. It seems that all it takes for a purchase to be acceptable is the location of the registered head office of the company.
I do wonder why Lenovo don't make more of the fact that they have joint head offices in Beijing, China and Morrisville, North Carolina, US. As far as I can ascertain, they were incorporated in Hong Kong before China took back control of the region, and they count as a Global Fortune 500 company. Everybody talks about it being Chinese, but it looks to me like there would be some mileage in claiming that it is an international company.
Why would you go to IBM for a low end server? you'd immediately think "over-priced" or "expensive".
Of course. Most businesses don't need, and aren't able to pay for, well-constructed servers with lots of metal and unique advanced functions, when you can buy the same processing power in servers built with cheap plastic. These small servers won't last long, so why pay more?
It's a lightbulb moment in a lot of senses. The box is open on cloud: it won't go away. We'll probably end up with just one Colossus eventually. Reduce the power-hungry software? Great idea, but we won't need coders for it; we'll have a machine to run the cheap code through to reduce it. It was mentioned that sysadmins will see their day soon. I believe it. They'll be employed to keep the lights on (Fifteen Million Merits, anyone?). It's Peak IT. We're reaching it (or we've gone over). The UK hasn't built significant numbers of boats or dug coal for over 30 years; let's see where IT will be in 10. I for one have started learning Chinese and Hindi and am looking forward to working for our benevolent new overlords.
Watch out for HP's low power claims, they always said that for their blade chassis too but using their own tools it turns out the blades use more power than similar rack systems.
By selling its System x channel to Lenovo, IBM has immediately created greater pricing flexibility for the rest of its server range. If IBM believes the x86 architecture has peaked, other than as a commodity, that conflicts with IBM's known specialisation in high-priced quality. Does IBM have future plans in the hardware sector? The author of this post doesn't know, and neither do we.
Start by making something for consumers that is affordable and isn't crap. I love HP servers, but their consumer-grade products are absolute crap, and Dell fell into that hole too. Lenovo & Asus are the only brands I recommend for cheap machines these days, and it has paid off.
My thinking is similar. I recommend Lenovo and Dell (believe it or not) BUSINESS-class laptops. In other words, Lenovo Thinkpads and Dell Latitudes. Every company in the computer box business, including Dell and Lenovo, treats consumers like crap, and sells poorly made and unmaintainable crap to consumers. But sell a business 5000 pieces of crap, and Michael Dell, Meg Whitman, Ginny Rometty or any other computer CEO will get a call from the CEO of the business, saying that 5000 pieces of crap are sitting on a loading dock to be returned to the manufacturer. This is only a long-winded and eloquent way of saying that computer mfrs cannot afford to sell poorly designed equipment to large corporations, govts and NGOs. So you buy a Lenovo Thinkpad or Dell Latitude and rest easy. I still like black Lenovo Thinkpads best, because black goes well with anything and it is easy to accessorize.
One business and technology tactic that could help propel increased sales of Lenovo over HP in the midrange x86 server space, while also allaying some fears of Western governments about Chinese cyber interloping, is for Lenovo to "fully embrace" Linux and the Free/Open Source Software (FOSS) platforms as an equal or even higher-level partner than Microsoft Windows.
This step is prudent and logical, since the growth explosion in enterprise (and even smaller business and organizational) virtualization and cloud computing services is predominantly Linux-based. With Linux and FOSS, Western governments would have much less concern about operating system (OS) "back doors" or networking botnets (if using Linux networking gear) when protecting against (Chinese) cyber attacks, compared to the well-known and documented serious Microsoft cyber vulnerabilities and possible OS back doors.
This move is very unlikely to take place, though, as Lenovo has consistently shown itself to be a Microsoft-dependent and obedient OEM just like HP and Dell (albeit, up to now, in the PC desktop and very small x86 server arena), and so could not adequately differentiate itself from its Windows siblings, or be as attractive to the many large European Union and South American technology purchasers who are now established as, and increasingly are, "Open Standards and Open Source"-only adopters.