External vs internal: Why hybrid cloud is the way to go

A never-ending stream of cloud providers tells us that they can do a better job than our internal IT departments. And occasionally we come across surveys claiming the same thing. Is it all marketing puff, or is there some substance in claims that external is better than internal? They look after the servers for you … but …

  1. Nate Amsden Silver badge

    giant difference

    between something like SaaS and IaaS. The article seems to use the two cases interchangeably, which is misleading. It also seems very vague on provider capabilities. With IaaS, for example, the provider may very well have their "code" up to date, but it's still up to the customer to configure (or misconfigure) the "firewall".
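The shared-responsibility point can be made concrete with a small check: the provider patches the platform, but an open ingress rule is the customer's own doing. A minimal sketch, where the rule format and field names are invented for illustration and not any real provider's API:

```python
# Flag firewall-style ingress rules that are open to the whole internet.
# The rule schema here is hypothetical, not a real cloud provider's format.

def overly_permissive(rules):
    """Return rules that allow any source ("0.0.0.0/0") on non-HTTPS ports."""
    flagged = []
    for rule in rules:
        if rule.get("source") == "0.0.0.0/0" and rule.get("port") != 443:
            flagged.append(rule)
    return flagged

rules = [
    {"port": 443, "source": "0.0.0.0/0"},    # public HTTPS: deliberate
    {"port": 22, "source": "0.0.0.0/0"},     # SSH open to the world: customer misconfiguration
    {"port": 5432, "source": "10.0.0.0/8"},  # database restricted to the internal network
]

for r in overly_permissive(rules):
    print("open to the world:", r)
```

The provider keeping its side patched does nothing about the second rule; that mistake, and the audit for it, belongs entirely to the customer.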

  2. PJF

    Sooo...

    What happens when your ONE (or combo of) "service(s)" goes TITSUP? An excavator installing new lines (gas, water, electric, sewer, etc.) cuts your fibre, and your (AWS, cloud du jour) decides to take a break (s/w, h/w, F'ed update, "operator error", etc.) at 14:00 on a Friday? You let the staff go for a (long) weekend? F. the (sales, updates/patches, support, etc.) - Sorry, XYZ, can't help (support, diagnose) ya, ALL our info is on the "Puffy Blue" cloud(s), and we let the staff go to the pub/bar....

    A combo, "local" and cloud, is the best taste in my mouth. Age-old adage - don't keep all your eggs in ONE basket(case).

    1. Jack of Shadows Silver badge

      Re: Sooo...

      That's already stated in the article. Ditto the freaking interconnects (which you must make sure share nothing). In all cases, unless you have platinum-grade, redundant data centers, the major cloud players can beat any generic firm's uptime. That's why "cloud" anything is a desirable feature to consider. [I was playing this game with mainframes in the '80s.]

      My only problem with "cloud" to date has been the speed of my pipes. I'd hit the wall at 1 TB+ on my datasets. Now the pipe is running at about 46 Mbps, versus 6 Mbps before. Security of the network, of transport and of data at rest is settled, yada, yada. And that's improving with the new feature sets from Intel.

      I fer sure ain't perfect and that'll be reflected in my uptime. Running my stats vs theirs? They win. Not by much, but they do.
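"Running my stats vs theirs" comes down to converting annual downtime into an availability percentage and comparing the nines. A quick sketch; the downtime figures below are purely illustrative, not measurements from either side:

```python
# Convert annual downtime (hours) into an availability percentage.
HOURS_PER_YEAR = 365 * 24  # 8760

def availability(downtime_hours):
    """Percentage of the year a system was up, given its total downtime."""
    return 100.0 * (1 - downtime_hours / HOURS_PER_YEAR)

# Illustrative figures only: a generic in-house setup vs a large provider.
for name, down in [("in-house", 20.0), ("cloud provider", 4.5)]:
    print(f"{name}: {availability(down):.3f}% available")
```

For reference, "three nines" (99.9%) allows about 8.8 hours of downtime per year; the gap the commenter describes ("not by much, but they do") is typically one such fraction of a nine.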

      1. Phil O'Sophical Silver badge

        Re: Sooo...

        "freaking interconnects (which you must make sure share nothing)"

        Easier said than done, especially over time.

        We had our main site configured with power feeds from separate substations and network fibres from two different telcos entering opposite sides of the building; no commonality this side of the Atlantic.

        Some years later we had a total network outage, which wasn't supposed to be possible. The post mortem revealed that telco A had been bought by telco B, and as part of the post-acquisition consolidation they had merged parts of their infrastructure. One of those merged parts was the fibre that a construction crew sliced through, many tens of km from our office. They were able to reconfigure around it, but we were offline for a few hours; fortunately not a critical problem for an R&D site with local source and build systems.

    2. big_D Silver badge

      Re: Sooo...

      Our customers have hot stand-by locally. They cannot afford to "take a break" if there is no internet connection - and being rural food-processing facilities, they tend not to have the best internet connections in the first place. Funny: people want to eat meat, but they don't like having slaughterhouses and food-processing plants slap-bang in the middle of their industrial estate or town, where they would have good internet.

      The farmers are worse off: they have to upload registration data electronically for each animal before it can go to slaughter. One of them took nearly two weeks to download the 64 MB installer for the registration program.
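For scale, two weeks for a 64 MB file works out to an effective throughput well under 1 kbit/s. A back-of-envelope sketch (assuming exactly 14 days of continuous downloading):

```python
# Effective throughput for a 64 MB download that took ~2 weeks.
size_bits = 64 * 1024 * 1024 * 8   # 64 MB expressed in bits
seconds = 14 * 24 * 3600           # two weeks of wall-clock time
rate_kbps = size_bits / seconds / 1000

print(f"{rate_kbps:.2f} kbit/s effective")  # far below even a dial-up modem
```

At that rate, electronic registration deadlines stop being an administrative question and become a bandwidth question.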

      In the processing plants, the software controls the production lines, and without a working server they cannot do anything. If the conveyor or the industry-specific hardware (Fat-o-Meter, AutoFOM, scales, etc.) breaks down or stops communicating with the server, or the servers go tits-up, then they have 15 to 30 minutes to get it working again; otherwise they have to start throwing carcasses away - and if it is a software problem, the software provider gets the bill for lost production.

      At such facilities, you just can't rely on out-of-house services. You might use them for backup or for production analysis, but for the important work you need reliable, local systems. If you can't guarantee that the internet connection will be back up in under 15 minutes (Telekom usually say two to three working days), then it just isn't an option - and as for AWS, they probably don't have an SLA that guarantees a maximum of 15 minutes' downtime during production hours (usually 01:00 through to 16:00).

      In addition, controlling the PLCs on the production line or in the cool house needs response times measured in milliseconds - again, something a cloud service can't offer, let alone guarantee.
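The latency argument can be made concrete with a simple budget check: if a control loop must close within, say, 10 ms, a WAN round trip alone blows the budget. All the figures below are illustrative assumptions, not measurements from any real plant:

```python
# Hypothetical latency budget for a PLC control loop.
# All numbers are assumed for illustration only.

def loop_feasible(deadline_ms, network_rtt_ms, processing_ms):
    """True if one network round trip plus processing fits the control deadline."""
    return network_rtt_ms + processing_ms <= deadline_ms

LOCAL_LAN_RTT = 0.5   # ms, server in the same building (assumed)
CLOUD_RTT = 40.0      # ms, rural link to a distant cloud region (assumed)
PROCESSING = 2.0      # ms, server-side decision time (assumed)

print("local server:", loop_feasible(10, LOCAL_LAN_RTT, PROCESSING))
print("cloud server:", loop_feasible(10, CLOUD_RTT, PROCESSING))
```

Even a generous cloud round trip fails a 10 ms deadline before any processing happens, which is why the control loop has to stay on-site regardless of how good the SLA looks.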

      1. Roland6 Silver badge

        Re: Sooo...

        "Our customers have hot stand-by locally. They cannot afford to "take a break" if there is no internet connection - and being rural food-processing facilities, they tend not to have the best internet connections in the first place."

        In this scenario I wouldn't go with a local hot stand-by; I would have gone for local 'satellite' processing, with data fed back to the cloud, for the key manufacturing systems. This almost totally decouples the continuity of local processing from the vagaries of the internet connection and the cloud systems. The trouble is that few applications and architectures directly support the concept of satellite processing.

        What is interesting is that, with the growth of VM and cloud technology, it shouldn't really matter where stuff is physically located; but it does, and "cloud" still means, in most cases, wholly within a single physical datacentre.
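The "satellite processing" idea amounts to store-and-forward: process locally, queue the results, and sync to the cloud whenever the link happens to be up. A minimal sketch of the buffering side; the class and method names are invented for illustration:

```python
from collections import deque

class SatelliteBuffer:
    """Buffer records locally; flush to the cloud only when the link is up."""

    def __init__(self):
        self.pending = deque()
        self.uploaded = []

    def record(self, item):
        # Local processing never waits on the internet connection.
        self.pending.append(item)

    def sync(self, link_up):
        # Opportunistically drain the backlog when connectivity returns.
        while link_up and self.pending:
            self.uploaded.append(self.pending.popleft())

buf = SatelliteBuffer()
buf.record("carcass #1 weights")
buf.record("carcass #2 weights")
buf.sync(link_up=False)   # offline: nothing lost, nothing sent
buf.sync(link_up=True)    # back online: the backlog drains
print(len(buf.uploaded))
```

The production line only ever touches `record`, so a dead internet connection degrades the analytics feed, not the plant.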

  3. razorfishsl

    Obviously you have never used 365 in Asia...

    It requires a LOT of maintenance, is usually impaired for some reason or other, and the explanation is always "our engineers have identified a problem with a recent software upgrade".

  4. gerdesj Silver badge

    no half-baked amateur cowboys, thanks very much

    "no half-baked amateur cowboys, thanks very much" My in-house setup has better uptime than Azure this year. But that's not the real point. The real problem is that internet connectivity still sucks. It's getting better but even when your cloudy systems *are* available, that's a cold crumb of comfort when you can't get to them.

    I've always said to my more excitable staff that when I see two straight years of uptime from cloudy operators then I might consider it.

    At the moment all I see is sweaty mares galloping across the land, with riders who are still trying to invent a functional lariat.

  5. Anonymous Coward
    Anonymous Coward

    proof of concept issue

    We have a project coming up where the initial trial will be cloud-based: prove the thing works before getting approval for any actual on-premises equipment. My concern is that if the cloudy trial works, it will just be left in place and scaled up: no approval given for a hybrid, as that involves a business case, budgeting, etc. that will take time to get approved, with users in the background saying "we don't need all that, it works just fine now" - and then the connectivity dies, and the same users start screaming that it doesn't work.

    Other than waiting for the first failure - or arranging some proactive civil engineering works - any ideas welcome ;-) More seriously, we can't be the only organisation that will find itself in this position; it would be interesting to know how other people manage it ...

  6. JimBob01

    VPN to cloud = security?

    I really don’t understand how this works. Isn’t the main point of a VPN that you are connecting two points of your own infrastructure over an untrusted network? It appears to be claimed that cloud providers are implicitly trusted partners.

    Alternatively, you could tunnel to each OS instance, but that could get very busy when you at least double the number of tunnels for redundancy … and then you remember that you have multiple sites that need connectivity...
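The tunnel-count blow-up is easy to quantify: a full mesh of n endpoints needs n(n-1)/2 links, each doubled for redundancy. A quick sketch:

```python
# Tunnel count for a fully meshed VPN with redundant links per pair.

def mesh_tunnels(endpoints, redundancy=2):
    """Tunnels for a full mesh, each pairwise link duplicated `redundancy` times."""
    return endpoints * (endpoints - 1) // 2 * redundancy

for n in (2, 5, 10):
    print(f"{n} endpoints -> {mesh_tunnels(n)} tunnels")
```

Going from 2 endpoints to 10 takes you from 2 tunnels to 90, which is the "very busy" the comment warns about once every site and every instance becomes an endpoint.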

