We're going for optimised workload delivery...

Who wouldn’t want IT to be delivered in a more dynamic, flexible, agile, [insert your least detested buzzword here] way? Optimal configurations for server and desktop workload delivery have been discussed, and indeed, attempted, for many years. So how close are we to really nailing this? The answer to this question may depend …

COMMENTS

  1. AndyZos_AIX

    We have never had it so good...

    Along with many others, we are working on retrofitting all the good bits of the mainframe back into our AIX and Windows estate, and we now seem to have a much better chance of delivering "Optimised Workload Delivery".

    Virtualisation of systems from a CPU and memory perspective has allowed us to drastically reduce the time it takes to commission a system (no extra cabling for SAN and network, for instance). Virtualisation has also made it possible to increase hardware utilisation, as there are now no constraints from a hardware card perspective, i.e. running out of network cards or internal SCSI disks for root disks. This has enabled us to push up CPU utilisation and drop the cost of each system from both a manpower perspective and a hardware spend perspective.

    Advances in the Power6 hardware have also allowed us to segregate partitions from a licensing perspective and to over-commit CPU within a sub-pool, meaning that although the frame may have 64 CPUs, we can run Oracle partitions within a sub-pool of 10 CPUs, cram as many LPARs into it as we can, and only pay for 10 Oracle licenses. Again, this brings flexibility and cost savings which can be passed on to the customer (see the rough arithmetic below).
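
    As a rough back-of-envelope sketch of why the sub-pool cap matters (all names and numbers below are made up for illustration, not our real figures): if the licensable cores are counted against the pool cap rather than against the sum of the LPARs' virtual processors, you pay for the cap, however many partitions you cram in.

        # Back-of-envelope: licences counted against a capped shared sub-pool
        # versus counted per LPAR. Numbers are illustrative only.

        frame_cpus = 64         # physical cores in the frame (CEC)
        subpool_cap = 10        # most physical cores the Oracle sub-pool may ever use

        # virtual processors assigned to each Oracle LPAR in the sub-pool (made up)
        oracle_lpars = {"ora_prod1": 4, "ora_prod2": 4, "ora_test": 2, "ora_dev": 2}

        per_lpar_count = sum(oracle_lpars.values())      # 12 if counted per LPAR
        sub_pool_count = min(subpool_cap, frame_cpus)    # 10 if counted against the cap

        print(f"Counted per LPAR:         {per_lpar_count} licences")
        print(f"Counted against the pool: {sub_pool_count} licences")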

    The dynamic operations side of things allows us to respond to workload peaks without requiring an outage. This helps us meet SLAs and keeps the customer happy, and as long as the software licenses are flexible enough we can save money and remain legal! (A toy sketch of the sort of policy I mean is below.)
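
    Something like this toy loop is roughly how we think about it (purely illustrative Python, not real HMC/DLPAR tooling; every name and threshold here is invented):

        # Toy policy: if an LPAR runs hot and the pool has headroom, grow its
        # entitlement; shrink it again when it goes quiet. Illustrative only -
        # real changes would be made through the HMC/DLPAR tooling.

        def rebalance(lpars, pool_cap):
            """lpars: {name: {"entitled": cores, "busy": fraction 0..1}}"""
            used = sum(l["entitled"] for l in lpars.values())
            # look at the busiest partitions first
            for l in sorted(lpars.values(), key=lambda x: x["busy"], reverse=True):
                if l["busy"] > 0.85 and used + 0.5 <= pool_cap:
                    l["entitled"] += 0.5      # grow in half-core steps, no outage needed
                    used += 0.5
                elif l["busy"] < 0.30 and l["entitled"] > 0.5:
                    l["entitled"] -= 0.5      # hand spare capacity back to the pool
                    used -= 0.5
            return lpars

        lpars = {"ora_prod1": {"entitled": 2.0, "busy": 0.92},
                 "ora_dev":   {"entitled": 1.0, "busy": 0.10}}
        print(rebalance(lpars, pool_cap=10))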

    However, to do this properly the customer must be charged for a Service and not (as seems to have been the case for many years) for x number of CPUs and x amount of memory. To fully exploit new features for optimised workload delivery, the whole enterprise and way of working, both from an IT perspective and from a customer/business perspective, has to change. To achieve flexibility the customer has to let go of the comfort blanket of knowing that he has bought 4 CPUs and 4 he will get (even if for 90% of the time they only run 30% busy).

    So, as with mainframes, one of the headaches around this flexibility of shared resources is who pays for what: how do you measure usage, and how is it charged back and accounted for at the end of the day? A trivial sketch of what I mean is below.
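
    The measurement itself is not the hard part, the agreement around it is. As an illustration only (invented figures; the usage data would come from something like lparstat or the accounting records), charging a shared pool back by what was actually consumed rather than by what was configured:

        # Charge a shared pool's cost back by what each customer actually
        # consumed, not by what they had configured. Figures are invented.

        pool_monthly_cost = 50_000.0   # hardware + licences + support for the sub-pool

        # CPU-seconds consumed per customer over the month (illustrative)
        consumed = {"customer_a": 1_800_000, "customer_b": 900_000, "customer_c": 300_000}

        total = sum(consumed.values())
        for customer, cpu_seconds in consumed.items():
            share = cpu_seconds / total
            print(f"{customer}: {share:6.1%} of usage -> {pool_monthly_cost * share:9.2f}")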

    System management also needs to broaden its scope and focus on CEC performance and utilisation rather than concentrating on the individual LPAR. Keeping an eye on this enables you to confidently over-commit CPU without flattening the box. In practice that means rolling the per-LPAR figures up to the frame, along the lines of the sketch below.
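
    Something like this (thresholds and names are illustrative; the per-LPAR physical-CPU figures would come from the usual lparstat-style stats):

        # Roll per-LPAR physical-CPU consumption up to the CEC and warn when
        # over-commit headroom is getting thin. Names and thresholds invented.

        cec_physical_cpus = 64

        # physical cores each LPAR consumed in the last interval (illustrative)
        lpar_physc = {"ora_prod1": 7.8, "ora_prod2": 6.1, "web1": 3.2, "batch1": 9.5}

        cec_used = sum(lpar_physc.values())
        cec_util = cec_used / cec_physical_cpus

        print(f"CEC: {cec_used:.1f} of {cec_physical_cpus} cores in use ({cec_util:.0%})")
        if cec_util > 0.85:
            print("WARNING: headroom is thin - over-committed LPARs will start to contend")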

    The other major handbrake on fully exploiting these "new" features is software licensing. Although the big players seem to be more on the ball regarding how customers wish to use the new hardware, there are still many smaller vendors whose rigid contractual terms stand in the way of making full use of this flexibility. Getting these smaller vendors onside can be a time-consuming process; however, I believe that over time nearly all will have to go with the tide.

    Anyway, I definitely believe that as these features are taken up more readily, IT certainly has its best chance so far of optimising workload and service provision... if the bean counters, change management and contract management allow!

This topic is closed for new posts.
