Serverless is awesome (if you overlook inflated costs, dislike distributed computing, love vendor lock-in), say boffins

Serverless computing, which does actually involve servers, has been touted as a way to reduce computing costs through its pay-per-use model and to free developers from operational concerns. But – and there's always a but – researchers from the University of California at Berkeley contend that it's an expensive disappointment for …

  1. Andrew Commons

    Measurements?

    ...just imagine how much innovation will be done on these platforms...

    How is 'innovation' quantified? Innovations per second? A metric fuck-ton of innovation? And is there a one-to-one correspondence between 'innovation' and 'disruption'? We need to measure the inevitable 'disruption' as well.

    Can the El Reg Units desk help out here?

    1. Phil O'Sophical Silver badge

      Re: Measurements?

      How is 'innovation' quantified?

      Perhaps the unit should be the "Sinclair" ? Possibly on a log scale, though. The bigger the project, the less the innovation ?

      1. Andrew Commons

        Re: Measurements?

        Good choice, it will need to be calibrated of course. So maybe a Drone can have an innovation of 1 Sinclair and disruption, measured in Gatwicks, of 36?

    2. W@ldo

      Re: Measurements?

      "metric fuck-ton"

      Now, that's a very useful measurement! Love it, I wish I could use it at work when someone comes forward with a half-baked idea....

  2. Doctor Syntax Silver badge

    "Serverless computing, which does actually involve servers"

    That's called "Getting rid of the difficult bit in the title". YM 101

    "it's an expensive disappointment for all but a few simple applications."

    That ignores its true basis. It's a lucrative business for vendors.

  3. Anonymous Coward

    Let's make some shit up

    "We need to be infrastructure free. We'll mislead the purse holders into agreeing to our new ideas. We'll then all keep quiet when it's revealed it actually costs much, much more, way over a million more than we said, because otherwise the taxpayers will find out, we'll get the chop and the purse holders won't be re-elected.

    But infrastructure free is the way of the future"

    Cock.

  4. Destroy All Monsters Silver badge

    "Serverless Lambda" is like "Unmoored RPC"

    It sounds dangerous and foggy.

  5. Andy Mac
    Facepalm

    So according to this paper, serverless is good for "this" but not for "that". Isn't that true of everything and the rest is just hype?

    And if you're believing the hype... well, there probably wasn't much hope for you in the first place.

    1. Tessier-Ashpool

      Indeed. Having dipped my toes into some of Azure's offerings, I'm quite looking forward to making use of Durable Functions. I got out of the mindset of relying on code running on a particular server quite some time ago. As a developer, it's a good thing if I can focus on writing efficient async stateful code without all that tedious nonsense of worrying about server instances, the OS that's running the code, platform upgrades etc.

      1. Doctor Syntax Silver badge

        "got out of the mindset of relying on code running on a particular server quite some time ago."

        I'm genuinely curious about this sort of thing.

        Is your data purely ephemeral?

        If not, how do you manage connections between the not-a-particular-server and the server holding the persistent data? Each time you invoke the service you'll need to set up a connection between wherever it's running and the data server, and that would include authentication - hopefully two-way, because the application server needs to know it's connected to the real data just as much as the data server needs to know the connection comes from a genuine application. This takes time and resources. In fact, if I understood the account of the TSB debacle correctly, it was this sort of issue that was the underlying problem.

        Another aspect is that if you don't have control over where the application runs you can't be sure of the speed of the link to where the data sits.

        I'd expect issues like this to be a serious hit on performance when it gets into production.

        1. Tessier-Ashpool

          Is my data purely ephemeral? Sometimes, yes: anonymous connections yield a computed result that is returned to the client. Where data needs to be preserved, that’s where a database or cloud storage comes in. We’ve moved away from hosting dedicated SQL servers or storage devices for the same reasons as outlined above. These are services that are not pinned to particular physical instances. My data may be in one of a dozen physical locations - I really don’t care where, so long as each and every one has a high-bandwidth connection. The biggest resistance we’ve met is from infrastructure guys, who get moody when they find out they’re no longer needed.

  6. Sammy Smalls

    Caveat emptor

    As always.

  7. Anonymous Coward

    Having worked on many Serverless projects, including system-critical applications in regulated industries, I feel like I can have some input here.

    Inflated costs: just not true IF DONE RIGHT, and for the RIGHT projects. On average, 65% cheaper than an equivalent EC2-deployed solution for the projects I've been involved in (compared to both known previous versions of projects and to estimates).

    Vendor lock-in: true, but not really a secret. Not sure about other people, but I don't make a habit of switching cloud providers every few months. Also, the recently released AWS Firecracker could provide some interesting options.

    Efficient data processing: this is a case of picking the right tool for the job. I generally don't use Lambda for anything that will take more than a minute to run. In those cases I use Lambda to spin up an EC2 spot instance (billed per second) to get the job done instead.

    Same for hardware APIs. I don't go around complaining that DynamoDB doesn't work well with relational data, pick the right tool for the job folks!

    Bandwidth is a fair point, and they are aware of it, with improvements coming, but more can be done here. It's also a well-known issue, so it can be accounted for in your architecture and design patterns. Depending on the project, going asynchronous might help: using queue patterns and things like sockets to feed back to the user so they don't need to wait.

    Overall it's a pretty bad review in my opinion, and it seems a bit anti-AWS, making me question who paid for/commissioned it. They also missed some known downsides that are a lot more relevant and apply to all vendors - usually more so to the AWS competition.
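The asynchronous queue pattern mentioned above can be sketched in-process like this: the front end accepts the work and returns immediately, and a background consumer does the slow part. Here `queue.Queue` stands in for something like SQS, and the `results` dict for a socket push back to the user; all names are illustrative.

```python
import queue
import threading

jobs = queue.Queue()   # stand-in for a managed queue service (e.g. SQS)
results = {}           # stand-in for pushing the result back over a socket

def submit(job_id, payload):
    """Front end: enqueue the work and return immediately (HTTP 202 style),
    so the caller never waits on the slow processing."""
    jobs.put((job_id, payload))
    return {"status": "accepted", "job_id": job_id}

def worker():
    """Background consumer: the slow work happens off the request path."""
    while True:
        job_id, payload = jobs.get()
        results[job_id] = payload.upper()   # stand-in for the real processing
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()
```

The caller polls or receives a push notification when `results` is populated; the request path itself stays fast regardless of how long the processing takes.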

    1. Pascal Monett Silver badge

      Re: not true IF DONE RIGHT, and for the RIGHT projects

      Care to expand on that a bit ? I'd really like an idea of what the "right" projects are.

      1. sed gawk

        Re: not true IF DONE RIGHT, and for the RIGHT projects

        FWIW the article pointed out a good use case.

        A - trigger small script based on an AWS event.

        So some webby front end is given write access to a named S3 Bucket (Input Bucket).

        A JS client side read only front end is served from another S3 bucket (Output Bucket).

        When an object is uploaded to (Input Bucket), use Lambda to run a script taking the newly uploaded Object as argument, and process it in some way, writing the output to the (Output Bucket).

        Now, you could do that with a single VM polling S3, but it's a workable use of Lambda, and the cost of having a small VM running constantly is likely to exceed the Lambda compute costs.

        Yes - this is trivial and contrived, but it's simplistic async RPC.

        That's useful if you want to scale RPC on a per request basis.

        The (sometimes) useful side-effect being that every RPC has its own security policy, and can fail independently.
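The Input-Bucket/Output-Bucket flow described above might look roughly like this as a Lambda handler. The bucket name and the transform are assumptions for illustration; the pure processing step is kept separate from the boto3 plumbing so it can be tested without AWS.

```python
def transform(body: str) -> str:
    """The 'process it in some way' step -- here, just uppercasing the text
    as a stand-in for real processing."""
    return body.upper()

def handler(event, context=None):
    """Triggered by an S3 put notification on the Input Bucket; writes the
    processed object to the Output Bucket. boto3 is imported lazily so the
    pure logic above stays testable without AWS credentials."""
    import boto3  # AWS SDK; only needed when actually running in Lambda
    s3 = boto3.client("s3")
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode()
    s3.put_object(Bucket="output-bucket",   # assumed Output Bucket name
                  Key=key,
                  Body=transform(body).encode())
    return {"processed": key}
```

The Lambda's IAM role only needs read on the Input Bucket and write on the Output Bucket, which is the per-RPC security policy mentioned above.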

      2. Martin M

        Re: not true IF DONE RIGHT, and for the RIGHT projects

        Implementing a low-mid usage REST/GraphQL API by exposing Lambda functions via API Gateway is an incredibly common one. In most cases you're going to be using some form of backend database and your mid-tier should be stateless in any case. It can save a lot of money - think about not just your production environment but all of your test environments that are often incredibly underutilised even on the smallest EC2 instances. I've seen a project collapse their mid-tier hosting costs from many thousands a month to about 100 quid by doing this. Production scales seamlessly with no need for cluster management, autoscaling configuration etc.

        One gotcha to this: your application must be able to handle relatively long call latencies related to cold starts during load spikes, as containers and app runtimes are dynamically spun up. Latency will depend on language; statically compiled Go will be very much faster than Java's much heavier runtime and JIT compilation. There's a clear tradeoff there for not paying for always-on infrastructure. Under steady state load, things are fine.

        Lock-in is a fair point - people need to think about that and go in with their eyes open. But if it actually became an issue, I strongly suspect someone would extend something like Kubeless to create an open source AWS Lambda compatible runtime (assuming that isn't already the case).

        As usage increases you might get to the point where it makes sense economically to run your own clusters over EC2 with a dedicated team to manage them. But if your API is relatively well written and doesn't needlessly piss away cycles (OK, I admit that's a minority), you'll almost certainly never get there. If you do, it's a good problem to have. Even lockin is likely not a problem - you'll probably a/ have already rewritten your API several times over anyway and b/ have the money to do so because your service is a wild success.

        As others have said, benchmarking ML use cases is simply ridiculous and suggests a bias rather than neutral academic work. No-one with an ounce of sense would do that on Lambda. Also all the points about I/O limitations - the types of use cases for which Lambda is well suited are usually CPU bound.
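A Lambda behind API Gateway of the kind described above is, in the proxy-integration style, little more than a stateless function that maps a request dict to a response dict. The route, parameter name and payload here are illustrative, not from the original post.

```python
import json

def handler(event, context=None):
    """Minimal stateless mid-tier function: because it holds no state,
    any warm container can serve any request, which is what lets the
    platform scale it without cluster or autoscaling configuration."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

The response shape (statusCode/headers/body with a JSON-encoded string body) is what API Gateway's proxy integration expects back from the function.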

  8. herman Silver badge
    FAIL

    It all sounds like the age of the dinosaurs (mainframes) has been replaced with a new age of bigger dinosaurs. What will come next, the age of Personal Computing? Maybe I should go and build a small desktop machine and call it a PC.

    1. Pascal Monett Silver badge
      Trollface

      No, no, you're not with the program.

      After Serverless Computing, we'll be ushered into a new era that will be called Power-Assisted Computing. In order to properly adopt this paradigm, you should build a Computer Assistant permeated with AI and 7G connections.

      Oh, and a holographic screen. It has to be holographic.

    2. phuzz Silver badge

      You're missing a state.

      There's PC, when the computer is on your desk; server, when the machine is somewhere in your office; and mainframe, when the machine is in someone else's office and they rent it to you (we call this 'cloud', or 'stuff-as-a-service', this time around).

      IT as an industry tends to move between these states, I think we're somewhere near mainframe at the moment.

      1. RancidRodent

        "I think we're somewhere near mainframe at the moment."

        Minus the efficiency and reliability, of course - that's "progress" - apparently...

  9. Anonymous Coward

    why would anyone use serverless for data processing? as a paradigm it doesn't work. data processing requires quick access to... data. that use case requires traditional understanding and experience of building out server and database infrastructure. you can't just put that 'on the cloud' and expect someone else to solve all the complex (and often unique) problems you'll see in each data application.

    serverless is for computational logic. the stuff where there isn't really that much data coming in or going out - just a whole bunch of logic that needs calculating. the clue is in the name - functions, lambdas. if you need to handle persistent state, then functional processing doesn't fit your needs. functional programming is stateless - the only state that's kept is what's needed to execute the function.

    perhaps articles like this are good in reminding everyone not to jump on every bandwagon that comes along. personally, i think serverless is great, as finally we have the ability to develop an application without getting bogged down with building out metal boxes (even virtual ones). it's one step closer to the idealised computational paradigm - a world where computation can be non-local, stateless and ubiquitous (i.e. it goes where it's needed rather than the users going to where the computation takes place).

    anyway, every tool has its uses and each use-case has its appropriate tool.

    1. Doctor Syntax Silver badge

      "that use case requires traditional understanding and experience of building out server and database infrastructure"

      That's legacy computing. It's just not cool. All it's good for is running a business.

  10. Geebee Zeebee
    Boffin

    Costs are not apples-to-apples

    Folks who point out that Serverless is expensive are often failing to adequately consider the alternatives.

    Sure, processing a workload on Serverless might cost more than processing the equivalent workload on dedicated hardware... but if you wish to make an effective comparison you must consider what indirect costs are eliminated by Serverless...

    In order to build, code, deploy, and run a typical server-side commercial workload on dedicated hardware, involving files and data, you will need the following people (or skills):

    - Linux system administrator

    - networking engineer

    - security specialist

    - DevOps

    - database administrator

    - software developer

    Most systems of any size need these people around continuously. If you provide a mission-critical service, then that's a minimum of three of each, operating across timezones to cover the clock, 24/7/365. And that's the bare minimum! An expensive exercise.

    On top of those people costs, you also need additional hardware (hot-swap, cold-swap, etc), boxes of brand new spare parts for every piece that could fail, firewalls, network redundancy, etc.

    If you want to then SCALE this workload, you need a similar list of people, but usually with decades more experience, and you generally have to re-write the system from scratch to be stateless anyway. If you don't, you can't scale past one big box. To scale you'd also need to add load balancers, external storage devices, external databases, etc, along with all the specialised skills needed to keep them up to date and running. Also 24/7/365.

    The traditional model starts with all those massive costs from day one, and you need to maintain them regardless of the popularity of the product. Your costs in the first year for all those people and resources are probably going to be about the same for 10 users as they are for 100,000. You can also expect a fair bit of downtime: failures are more common in systems grown over time, increasingly dependent on a fragile combination of settings specific to one particular hardware and OS configuration.

    By contrast, a Serverless commercial back-end with a similar workload needs only a software developer, and perhaps a database administrator. And you don't need them 24/7 - you only need them when you want to add new features. A well-architected Serverless project already scales as part of its basic nature. You pay per execution of your code, so you know that costs will scale together with the popularity of your product. Management generally like this predictable model. Your workload runs completely statelessly, so you know that when it runs tomorrow it will work the same as it does today. If something suddenly behaves differently, there is no need to send engineers to troubleshoot the problem, because it is probably something your software developer needs to solve.

    The Serverless model starts with minimal infrastructure and people costs. When you have 10 users for your first 6 months pre-launch, infrastructure will probably be free, or perhaps a couple of dollars per month.

    Additionally, sensible Serverless architecture lends itself to good design patterns from the outset. Individually scalable microservices, for example, can be a huge reducer of running costs, but can take a lot of time, people, and energy to get right in a traditional environment.

    And as for vendor lock-in: a sensible Serverless system should be built with a wrapper that makes it portable, so if you think AWS's latest price drop didn't quite go far enough, just flick a switch and run your workload on Azure.

    Bottom line is that Serverless done right can save millions.
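The portability wrapper suggested above might be sketched as a provider-neutral core plus thin per-vendor adapters. The event shapes below are simplified assumptions for illustration, not the real AWS or Azure SDK types; in practice a framework would supply the adapters.

```python
def business_logic(payload: dict) -> dict:
    """Provider-agnostic core: knows nothing about AWS or Azure, so it can
    be re-pointed at another cloud by swapping the adapter, not the logic."""
    return {"echo": payload.get("msg", ""), "ok": True}

def aws_lambda_handler(event, context=None):
    """Adapter: AWS Lambda-style event in, Lambda-style response out."""
    return {"statusCode": 200, "body": business_logic(event)}

def azure_function_handler(req: dict) -> dict:
    """Adapter: Azure Functions-style request in (shape assumed here)."""
    return {"status": 200, "body": business_logic(req.get("params", {}))}
```

The "flick a switch" claim above is optimistic (managed services like DynamoDB don't port this cleanly), but keeping vendor-specific glue at the edges like this does shrink the cost of a move.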

    1. marcfielding

      Re: Costs are not apples-to-apples

      Finally someone with a brain that isn't just ranting in the comments.

      I've personally taken multinational organisations through PoCs using services like Lambda, and it almost always works out a lot cheaper - key examples:

      PoC looking at migrating a WordPress fleet (over 2,000 sites on servers) to Lambda - saved a ton of cash, improved site speed for virtually every user across the globe, and reduced complexity and therefore the number of problems.

      Multimillion company launch with A LOT of users all coming online after an announcement at a huge conference, no scaling issues, worked like a dream.

      I could go on for quite a while here, but I'll stop with those examples.

      As Geebee rightly points out Serverless also reduces personnel requirements massively for businesses.

      Also, from a Node perspective, the services I've designed are really easy to work with: no crazy config, no Docker containers, just Node 8.10, Gulp, Serverless, Jest, CircleCI.

      Note I don't dislike Docker, but when people try to replicate a server locally it almost never works, so they then go to Docker, which is great but more complex - I once had to spend six days trying to configure my env at a well-known gambling org, simply because the Docker setup was complex and staff churn meant nobody had documented it.

      This "research", and I use the term loosely, seems to confuse the purpose of Lambda: they seem to have deliberately picked things that nobody in their right mind would use it for and then said "Hey, Lambda is crap" - it's not, and it's changing the way we develop applications for the better.

  11. RLWatkins

    No, it really is "serverless".

    Yes, really. At least according to terminology which has been in use for sixty years.

    Remember when those guys invented the "client/server" model of distributed computing? Computers called servers ran services, to wit non-application-specific functionality such as file storage or printing.

    The computers which ran both application and service code were at the time called "hosts". They still are, since about 1960, except at companies which are dominated by marketroids. (We all know who.)

    So yeah, most of the computers in those datacenters are hosts, not servers, and yes, if it's running my application code it is indeed "serverless". See? They got it right by accident. Stopped clock, etc....

  12. Hans 1 Silver badge
    Boffin

    Cloud, serverless ... just "somebody else's computer!"

    What happens if they fail to update a component and your app gets 0wned ?

    How do you know what that black box is doing ?

    This is crazy and has to stop!


Biting the hand that feeds IT © 1998–2019