LzLabs kills Swisscom’s mainframes – but it's not the work of a vicious BOFH: All the apps are now living on cloud nine

Swiss software upstart LzLabs says its first customer has successfully kicked the mainframe habit and moved all of its big iron applications into the cloud – without having to rewrite or recompile any code. Swisscom, the country’s largest telco, has replaced 2,500 MIPS (Millions of Instructions Per Second) worth of IBM …

  1. MiguelC Silver badge

    The mainframe is dead, long live the mainframe!

  2. Caff

    mainframe switch

    There is no power switch on a mainframe...

    Also, how does the LzLabs mainframe migration/modernization offering compare to similar offerings from Micro Focus or DXC?

    1. aaaaaaaaaaaaaaaa

      Re: mainframe switch

      "..............offering compare to similar from microfocus or dxc?"

      Knowing both MF and DXC, LzLabs is probably light years ahead and a tenth of the cost..

      1. Caff

        Re: mainframe switch

        My only experience is with the Enterprise Server from MF; it ran well and could run JCL, batch and CICS regions. COBOL was just exported and wrapped up as a DLL.

        Would be interested to know what the pricing/features comparison is like, but trying to get that out publicly from those companies is nigh on impossible.

  3. schafdog

    No link to a picture of the International Beverage Machine?

    1. x-IBMer

      International Beverages Machine

      The new IBM:

      https://www.linkedin.com/feed/update/urn:li:activity:6535785170232975360

  4. Keith Oborn

    Yes but--

    Seems to me that the fundamental problem is still all that legacy COBOL and this just kicks that can down the road, or do I misread?

    1. Caff

      Re: Yes but--

      Generally a re-write is too expensive or risky, so the legacy COBOL is wrapped up for each function/feature. New applications are then written for the new system while you wait for the legacy parts to die off naturally. How quickly they die off depends on the timescale of the business: bank/pension products have such long lifetimes that the code lasts decades.
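Caff's wrap-and-wait approach is essentially the strangler-fig migration pattern. A minimal sketch, with all feature and function names invented for illustration (these are not LzLabs or Micro Focus APIs):

```python
# Strangler-fig sketch: route each feature to the new implementation once it
# has been migrated; everything else falls through to the wrapped legacy code.

def legacy_quote(policy_id):
    # stands in for a wrapped COBOL routine (e.g. exported as a DLL/service)
    return {"policy": policy_id, "engine": "legacy"}

def new_quote(policy_id):
    # the rewritten version running on the new system
    return {"policy": policy_id, "engine": "new"}

# features that have already been rewritten on the new system
MIGRATED = {"quote"}

HANDLERS = {
    "quote": (new_quote, legacy_quote),
    "renewal": (None, legacy_quote),   # not yet rewritten
}

def dispatch(feature, policy_id):
    new, old = HANDLERS[feature]
    if feature in MIGRATED and new is not None:
        return new(policy_id)
    return old(policy_id)
```

As long-lived products finally lapse, features move into the migrated set and the wrapped COBOL shrinks away, which is exactly why pension-scale lifetimes make this a decades-long process.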

    2. Bronek Kozicki Silver badge

      Re: Yes but--

      I do not think that COBOL itself is that much of a problem. The age of the hardware platform is, and the fragility of the code is, but the choice of language necessitates neither. Lots of people apparently use PHP for serious applications, despite the fact that it is much worse than COBOL.

      1. DougS Silver badge

        Re: Yes but--

        This "mainframe in a cloud" solves the problem of fragility of hardware, but does nothing for fragility of code. And of course makes your business dependent on your internet link and cloud provider, which will have uptimes in no way comparable to a mainframe's.

      2. Anonymous Coward

        Re: Yes but--

        Based on the picture, their mainframe is a z14, so it can't be any more than 2 years old. So the "age" of the mainframe is not the issue. IBM's zSystems are designed with darn near redundant everything: if one component fails, there is another that is either active or in hot standby. Code is fragile on any and all platforms. We started a migration from our mainframe to x86 under Linux using Java 10 years ago because the code on the mainframe was too fragile to change. If you changed one program it could cause 5-20 other programs to break. The programs on the mainframe were 25+ years old.

        Three years after going into production with the new Java-based system, the director in charge of the migration said we had to come up with a better way, because after just 3 years the Java code was too "brittle": if we changed one thing in one program, it would cause the whole system to just die.

    3. e^iπ+1=0

      Re: Yes but--

      Wtf!

      "just kicks that can down the road"

      Shurely you mean "CICS that can"?

    4. alanplayford

      Re: Yes but--

      You've not really missed the point, Keith, but consider this .....

      Usually migration has involved recompiling the source, but can you trust that the source is totally up-to-date?

      More to the point, it buys valuable time to consider modernization paths, using cheaper resources, to enable legacy stuff to be re-written in modern languages which CAN be supported now and in the future.

      1. A.P. Veening Silver badge

        Re: Yes but--

        Usually migration has involved recompiling the source, but can you trust that the source is totally up-to-date?

        I was part of a team that solved a similar problem a couple of years ago. We solved it by running everything in tandem for a while and comparing the results (automated). The night jobs were easy: we just restored a backup made immediately preceding the night run and kicked it off when we felt like it, setting the appropriate system date and time on the new system. And yes, we caught a couple of differences; the problem was usually that somebody had modified the sources on production without backporting those changes. As a result of our efforts, IBM learned how to do something it considered impossible: a one-step OS/400 migration from version 5.3 to 7.1.
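The automated tandem-run comparison described above can be sketched roughly like this (the record layout and the "account" key field are assumptions, not details from the comment):

```python
# Sketch of an automated tandem-run check: the same batch is run on the old
# and new systems from the same restored backup, then the outputs are diffed
# by business key to surface any divergence between the two systems.

def compare_runs(old_records, new_records, key="account"):
    old_by_key = {r[key]: r for r in old_records}
    new_by_key = {r[key]: r for r in new_records}
    diffs = []
    for k in sorted(old_by_key.keys() | new_by_key.keys()):
        if old_by_key.get(k) != new_by_key.get(k):
            # record present on only one side, or same key with different values
            diffs.append((k, old_by_key.get(k), new_by_key.get(k)))
    return diffs
```

A non-empty diff list is exactly the "somebody modified production without backporting" signal the commenter describes.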

  6. Quentin North

    Tools

    The key thing I remember from my mainframe days is that it is not just about the CICS COBOL apps, which Micro Focus can support: what about all the JCL and batch scheduler loads etc.? Also, key tools of IBM mainframes like RACF really do make the environment robust, so I do wonder if this platform will be nearly as reliable or secure.

    That all said, I remember when Baby/36 came along and allowed System/36 RPG II applications and OCL to run on networked PCs; it practically killed IBM's then-ageing midrange platform. Still, the successor AS/400 trundles on.

    1. Michael Wojcik Silver badge

      Re: Tools

      Micro Focus Enterprise Server has JCL support (JES2, JES3, and VSE variants), and has for years. Batch support includes REXX and TSO, and scheduler integration.

      ES has a security mechanism which provides functionality similar to RACF, though since it's not tightly integrated into the OS it assumes your migrated mainframe applications aren't hostile.

      LzLabs isn't our only competitor playing in the "migrate mainframe applications to the cloud" space. I don't know anything about their offerings beyond what's in the article, though. (I don't spend a lot of time looking at our competitors; I'm focused on improvements that customers actually ask us for, or that we identify internally. Other people research the competition.)

  6. Doctor Syntax Silver badge

    Software from one hardware platform is running on a totally different one without recompilation? Are they running mainframe binaries on an emulator or do they ditch the binaries and just interpret the source?

    1. Anonymous Coward

      Depending on exactly how they're doing it, IBM's lawyers may be having a chat with them soon.

    2. peeterj

      The mainframe binaries are being run.

  7. cschneid

    Interesting. One of the advantages of CICS is its resource management, where an application can update a DB2 table, a VSAM file, an IMS segment, and then send an MQ message only to encounter a problem, abend, and all those updates never happened. LzLabs claim to be able to do the same.

    There is much talk of load modules, but no mention of program objects, which is the format of any COBOL application recompiled with IBM Enterprise COBOL v5+. That may not matter, as LzLabs seemingly has an emulation layer. I say seemingly because their product data sheets are not available to the hoi polloi.

    Customers are, however, still stuck with one vendor, just as they were with their IBM Z. Also, I didn't see a mention of cost comparisons. I presume LzLabs is cheaper, at least for the honeymoon period, taking into account TCO and not just TCOWICAFE (Total Cost Of What I Can Account For Easily).

    I wonder about SMF, which is useful for post-event analysis.

    It seems like an awful lot of effort is being put into mitigation of a perceived problem: lack of mainframe skills. I think it's probably cheaper to just train the new staff, but that would make them skilled labor instead of fungible resources.

    1. alanplayford

      SMF records are cut as previously and available for analysis as before.

    2. Michael Wojcik Silver badge

      One of the advantages of CICS is its resource management, where an application can update a DB2 table, a VSAM file, an IMS segment, and then send an MQ message only to encounter a problem, abend, and all those updates never happened.

      Coordinating multiple resource managers in a transaction is not unique to CICS. It's pretty common, in fact.
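The all-or-nothing behaviour both commenters describe is two-phase commit: every resource manager votes in a prepare phase before any of them commits. A toy coordinator, where the resource names (DB2, VSAM, MQ) are just labels and this is in no way CICS's actual internals:

```python
# Toy two-phase commit: every resource manager must vote yes in the prepare
# phase, or all of them roll back and none of the updates ever happened.

class ResourceManager:
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "idle"

    def prepare(self):
        # vote: promise to commit, or refuse
        self.state = "prepared" if self.can_commit else "refused"
        return self.can_commit

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled_back"

def run_transaction(resources):
    if all(rm.prepare() for rm in resources):   # phase 1: everyone votes
        for rm in resources:                    # phase 2: all commit
            rm.commit()
        return "committed"
    for rm in resources:                        # any refusal: all roll back
        rm.rollback()
    return "rolled_back"
```

Any transaction monitor or XA-capable coordinator offers the same guarantee, which is Michael Wojcik's point: the pattern is common, not CICS-specific.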

  8. James Anderson Silver badge

    Hercules

    For decades now you have been able to run mainframe software on x86 hardware using the open source Hercules emulator.

    The main problem is that IBM won't license the software to run on that kit -- or if they do, they charge eye-watering mainframe prices.

    So it looks like they interpret the machine code but trap the CICS and DB2 type calls and emulate them with Postgres.
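The call-trapping idea James speculates about can be sketched as a dispatch shim that routes mainframe-style service calls to a substitute backend. Here a plain dict stands in for Postgres, and the service names and the whole interface are invented for illustration:

```python
# Sketch of call trapping: emulated binaries issue mainframe-style service
# requests, and a shim dispatches each one to a local emulation routine
# instead of the real DB2/CICS services.

class EmulationShim:
    def __init__(self):
        self.tables = {}   # stand-in for tables that would live in Postgres

    def handle(self, service, *args):
        # trap table: mainframe service name -> local emulation routine
        routines = {
            "DB2_INSERT": self._insert,
            "DB2_SELECT": self._select,
        }
        return routines[service](*args)

    def _insert(self, table, key, row):
        self.tables.setdefault(table, {})[key] = row

    def _select(self, table, key):
        return self.tables.get(table, {}).get(key)
```

The application binary never knows the backend changed; only the trap table and the emulation routines do.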

  9. Anonymous Coward

    Given the circumstance, one wonders if we should maybe start a countdown timer to see how long it takes for these folks to get hit with a ransomware attack and/or an x86 side-channel attack.

  10. Brett Weaver

    Reliability?

    The mainframe would provide 5 9's uptime. This configuration?

    1. DougS Silver badge

      Re: Reliability?

      Since it depends on both your internet connection and your cloud provider, you can subtract at least two 9s from that figure.
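DougS's arithmetic follows from the fact that availabilities of serial dependencies multiply. A quick illustration, with SLA figures that are assumptions for illustration only:

```python
# To reach the rehosted apps, both the internet link and the cloud provider
# must be up at the same time, so their availabilities multiply.

link = 0.999        # assumed "three nines" internet upstream
cloud = 0.9995      # assumed cloud provider SLA

serial = link * cloud                     # combined availability
downtime_hours = (1 - serial) * 8760      # expected hours down per year
```

With these numbers the chain lands around 0.9985, i.e. roughly 13 hours of expected downtime a year, versus about 5 minutes a year for a genuine five-nines system.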

      1. x-IBMer

        Re: Reliability?

        To be truthful, you have the exact same issues with your legacy system: you still need an Internet upstream to actually have customers access the system, and you still need a properly managed datacentre. Swisscom manage datacentres and they are a major telco, so you’d hope they know something about Internet connections.

  11. Anonymous Coward

    So many questions, so few answers

    For those not up to their armpits day to day in matters mainframe (as some of us are)... some perspective.

    2,500 MIPS* is puny. In the current generation (z14), a one-engine box is over 1,800 MIPS, and a two-engine box is 3,400 and change. The largest z14, a 170 engine monster, is over 140,000 MIPS.

    So, to begin with, this is a very very small mainframe environment. One wonders how complex the workload is. Things you can do (easily or not) with small and simple workloads may be impossible with large and complex workloads.

    Curious as to how they are emulating SVCs, CICS APIs, MQ APIs, IMS, DB2, and the plethora of crucial 3rd party software (e.g. job schedulers, console automation, print management) most mainframe shops rely on to process the workload.

    Also curious as to how they manage the resource management/conflict controls built into the mainframe environment, within a single z/OS instance and across instances through SYSPLEX technologies.

    Also curious as to how they intend to manage running critical workloads inside hardware platforms with less internal redundancy than a zXX box.

    Also curious how they intend to support the same level of I/O throughput without the physical and logical capabilities of the mainframe platform.

    For context: I toil in operations in a shop with four z13s with a total (active) rating north of 30,000 MIPS.

    * IBM doesn't use the term "MIPS" anymore; their 'equivalent' is PCI, as seen in their published LSPR tables.

    https://www-01.ibm.com/servers/resourcelink/lib03060.nsf/pages/lsprITRzOSv2r2?OpenDocument#z14

    1. YetAnotherJoeBlow

      Re: So many questions, so few answers

      When I last used CICS, I had a lot of respect for its abilities, in a truly love-hate relationship. So this company services the CICS calls AND all of the other APIs as well? Yes, I have a lot of questions too! Had I read this article anywhere else, I would have called BS.

    2. x-IBMer

      Re: So many questions, so few answers

      And none of the points you make prevent larger workloads also being migrated in the same way. A particular focus many of us mainframers have is on the traditional Reliability, Availability and Scalability (RAS) strengths used to market the mainframe and justify its enormous costs. However it’s long been the case that properly configured x86 based servers can also meet these RAS needs. The same applies to the, again traditionally touted, mainframe I/O rates.

  12. Anonymous Coward

    Not sure what they mean by "... find a way to drag the ancient mainframe architecture, kicking and screaming, into the 21st century." The underlying hardware uses the same memory modules as x86 servers and the same PCIe bus, just more of them, designed in a way that no failure of a single component will bring the whole system down. In fact the I/O modules on a modern mainframe are small computers running their own little OS, probably a stripped-down Linux kernel.

    Last time I checked, and I still work on IBM mainframes, under all operating systems that can run on a mainframe you can write in programming languages other than COBOL. You can run different OSes on IBM's mainframes: z/OS, z/TPF, z/VSE, z/VM, and Linux. I'm not sure about z/TPF, but the other four all support Java, and all of them support C/C++.

    The modern mainframe hardware is designed for resiliency. It has a redundant array of independent memory (RAIM, think RAID but for memory), spare CPUs, and multiple I/O paths to the same device. In fact, other than RAIM, mainframes have had spare CPUs and multiple I/O paths for decades. As far as I am aware, there is no SPOF on a zSystem (you can't say that for a pizza-box x86). It's not the mainframe of the 60's, but it can still run code that was written in the 60's.

    There are just so many hardware features that zSystems have that x86 systems don't.

    The hardware platform does not change the fragility of a program or application system. If it is fragile on a mainframe, the same code is fragile on any other hardware platform. Any application that is written and maintained for decades will become more and more complex and thus more and more fragile. It does not matter if it is running on a mainframe, x86, or SPARC based hardware.

    It's interesting that it took multiple x86 servers to replace a small mainframe when a single z14 can run thousands of virtual Linux systems.

    1. x-IBMer

      The problem has never been about how great the mainframe hardware and integrated software is - it’s always been about the cost. We had to teach our new manager at IBM to answer the question he constantly got from our customers about why it was so expensive with the answer “expensive compared to what?” - which is the correct position to take when analyzing which hardware/software combination solved your business problems at the most cost effective point. While no-one denies the mainframe is a great computing platform, the question is whether an alternative platform is ‘good enough’, especially if the costs are significantly lower.

      If you dig a little into the PR, I think you’ll find that either 4 or 8 virtualised x86 cores were enough to rehost the 2,500 MIPS - that’s a pretty good ratio considering how cheap Intel is compared to SystemZ cores.
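Taking the comment's figures at face value, the implied MIPS-per-core ratio is easy to check:

```python
# Back-of-the-envelope from the figures above: 2,500 MIPS rehosted on
# 4 or 8 virtualised x86 cores implies this many MIPS-equivalents per core.

mips = 2500
per_core = {cores: mips / cores for cores in (4, 8)}
```

That is 625 or 312.5 MIPS-equivalents per x86 core, which is the ratio being called pretty good.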

      1. Anonymous Coward

        Interesting point of view on cost, considering there are companies migrating their Linux workloads to run on zSystems using IFLs and z/VM and saving millions of dollars a year in software, hardware, and environmental costs.
