Is the server layer just a commodity?

It’s been a while since Nicholas Carr wrote his polemic ‘Does IT Matter?’, which documented how IT was commoditising, turning into a utility with little to differentiate itself – a theme he continued in the book The Big Switch. He was clearly demonstrating an economist’s grasp of technology – falling into the trap of …

COMMENTS

This topic is closed for new posts.
  1. a walker

    Redundant Array of Cheap Servers

    While the concept of a redundant array of cheap servers (RACS) might be fine for the bean counters, supporting them would be a complete nightmare. Having purchased cheap 1U servers, I found that provided you did not upgrade or modify them, they did actually work. However, when I needed to upgrade the BIOS in one, the server was left unusable; the BIOS had to be downgraded to the original and the applications moved to a new server. Replacing every server in a cluster could be expensive, unless you plan to buy new servers every three years.

  2. Anonymous Coward
    FAIL

    The fallacy of RAID to PHBs

    RAID is dangerous when PHBs think their data is safe simply because it is in a RAID array.

    Yep, I have come across a case where a company CFO (not in IT) believed that they had made their data safe because it was in a RAID 0 array.
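
    A rough back-of-the-envelope sketch shows why that belief is a fallacy: RAID 0 stripes data across disks with no redundancy, so losing any one disk loses the lot. The figures below are assumptions for illustration (a 3% annual per-disk failure probability and a four-disk stripe), not vendor statistics.

        # Why RAID 0 is *less* safe than a single disk, not more.
        # Assumes disks fail independently; p is an illustrative figure, not a measured one.
        p = 0.03   # assumed annual failure probability of one disk
        n = 4      # disks in the striped array

        # RAID 0 (striping, no redundancy): the array is lost if ANY disk fails.
        raid0_loss = 1 - (1 - p) ** n

        # RAID 1 (a mirrored pair): data is lost only if BOTH disks fail.
        raid1_loss = p ** 2

        print(f"Single disk loss probability : {p:.4f}")            # 0.0300
        print(f"RAID 0 ({n} disks) loss prob. : {raid0_loss:.4f}")  # ~0.1147, worse than one disk
        print(f"RAID 1 (2 disks) loss prob.  : {raid1_loss:.6f}")   # 0.000900, far better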

  3. Maurice Verheesen
    IT Angle

    Requirements

    Yes, BIOS updates will fail, servers can catch fire, backups could be destroyed and managers will make stupid decisions; so have an organizational process that deals with such situations!

    But first, "the server layer" is quite vague IMHO. Does it mean hardware? Or hardware + OS? Or hardware + OS + apps (i.e. a web server)? It's hard to draw a line these days where the commodity stops and the specific application begins.

    The trouble, however, is not the hardware or the OS, or even web servers and software frameworks. Problems start much earlier, namely when a business specifies the required functionality of an IT system/solution.

    Contrary to popular belief, in some cases requirements can't be captured in a static list of demands. They are continuously changing. Customer requirements captured at any given time are therefore seldom correct and have an expiration date. This is at the heart of the problem, and why IT never "just works": the measurement of IT's success is a moving target.

    Software products are never complete, since requirements and expectations keep changing. Unless your products and/or your customers' needs (and therefore IT requirements) never change, the solution seems to be (at least to me at the moment) rapid, iterative development of functionality. The company or department that has an organizational process that can deal with these changes has a better chance of satisfying requirements.

    I can imagine that decoupling the functionality (i.e. software) from the hardware is a relief. So maybe virtualization is not that bad a marketing fad, iff it makes sure software can change without having to deal with hardware restrictions and hardware can change without a need to change the software. The two opposing forces just might establish an equilibrium.

    In the meantime, I think it's wise to try to apply the same decoupling principle. Have play yards where application developers can play on their own development servers, and when it's time for a release, plan a meeting with the two opposing forces and design the production environment together. But it doesn't hurt to keep in mind what the capabilities of your "server layer" (and its maintenance) are when playing in the play yard ;)
