Amazon reveals its Google-killing 'R3' server instances

Amazon has released a new class of virtualized, rentable servers that make more RAM available than systems from rivals like Google and Microsoft. The "R3" memory-optimized servers became available on Thursday following their announcement at the Amazon Web Services summit in late March. "R3 instances are recommended for …

COMMENTS

This topic is closed for new posts.
  1. Michael H.F. Wilkinson Silver badge

    Nice improvement, but ...

    Still not enough RAM!!

    I know, I know, we compute guys are never satisfied. You give us more than 640 kB? We want more than 4 MB. You give us 16 MB? We want 256 MB! You up your game to 16 GB, and some knucklehead wants double that, and so on, and so forth.

    Having frequently run out of memory on a 64-core, 512GB compute server (with a 384GB Fusion-io card), I find the R3 is still not for me. Still looking for a distributed-memory solution for our problem. Not trivial.

    1. jswinterburn

      Re: Nice improvement, but ...

      I agree - Dell sell a machine with 6TB of RAM, why can you only rent one with a 24th of that?

      1. Lusty

        Re: Nice improvement, but ...

        "Dell sell a machine with 6TB of RAM, why can you only rent one with a 24th of that?"

        Simple economics. Almost nobody needs a machine that size, and those who genuinely do probably don't want it in the cloud. The same people who need that memory footprint will also be spending most of their time trying to distribute the workload to avoid a bottleneck.

        It's cheaper for a cloud provider to buy commodity parts, so probably two-socket boxes, and even then they probably only fill them to 512GB of memory because that's about as high as the vCPU/pCPU ratio will let them push for normal workloads.
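
        To put rough numbers on that (purely illustrative figures, not anything the providers publish): a two-socket commodity box at a typical oversubscription ratio lands you right around that 512GB mark, e.g.:

            # Rough capacity-planning arithmetic for a commodity two-socket host.
            # All figures below are illustrative assumptions, not published specs.
            sockets = 2
            cores_per_socket = 12        # assumption: mid-range commodity CPU
            vcpu_per_pcpu = 4            # assumption: typical oversubscription
            ram_per_vcpu_gb = 5          # assumption: average RAM per vCPU

            pcpus = sockets * cores_per_socket       # 24 physical cores
            vcpus = pcpus * vcpu_per_pcpu            # 96 sellable vCPUs
            ram_needed_gb = vcpus * ram_per_vcpu_gb  # ~480GB, so a 512GB host fits
            print(f"{vcpus} vCPUs need roughly {ram_needed_gb}GB of RAM")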

        Cloud isn't about what's possible, it's about mass production. Ford don't make a car that can beat a McLaren, but McLarens are desired by a few and affordable by fewer. Ford could probably make something to compete, but it's not their business model.

        1. jswinterburn

          Re: Nice improvement, but ...

          I don't disagree that the majority of instances aren't going to be that kind of spec; I'm just surprised that it's not available at all - there are quite a few workloads which need a machine of that size. If you want to de novo assemble a reasonably sized genome, for instance, it would be quite difficult with only 244GB of RAM.

          1. Lusty

            Re: Nice improvement, but ...

            I agree it's a nice-to-have, but adding one or two larger boxes increases costs well beyond just buying the hardware. The people swapping out components will, for example, have instruction sheets for the standard models on their floors of the data centre (or in their "appliances" in the Azure cloud). New spares would need adding to the stores, possibly then requiring a new store. New drivers, firmware and images would also be needed. I'd have thought each model introduced globally would cost many millions in total, making it hard to justify on the off chance someone requires a large VM or two.

  2. bigtimehustler

    I assume the "244GB 'r3.8xlarge' server works out at $2,016 per month of full-time use" figure is the flat, non-reserved instance price? If you were running flat out like that for months on end you would be crazy to pay for it that way; you would get yourself a reserved instance along with its discounts. I assume you can do that with this new instance type... if not, I guess you will be able to soon.
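
    For anyone curious how quickly a reservation pays off at that flat rate, here's a rough back-of-the-envelope sketch in Python. The hourly rate is back-derived from the article's $2,016/month figure assuming a 720-hour month, and the upfront fee and discounted rate are placeholder numbers, not actual AWS reserved-instance pricing:

        # Back-of-the-envelope: on-demand vs. reserved cost for an r3.8xlarge.
        HOURS_PER_MONTH = 720                      # assumption: 30-day month
        on_demand_hourly = 2016 / HOURS_PER_MONTH  # ~$2.80/hr, back-derived

        # Hypothetical 1-year reservation: upfront fee plus a lower hourly rate.
        reserved_upfront = 10000.0                 # placeholder upfront fee ($)
        reserved_hourly = 1.40                     # placeholder rate ($/hour)

        def on_demand_cost(months):
            return on_demand_hourly * HOURS_PER_MONTH * months

        def reserved_cost(months):
            return reserved_upfront + reserved_hourly * HOURS_PER_MONTH * months

        # Find the month at which the reservation starts paying for itself.
        for m in range(1, 13):
            if reserved_cost(m) < on_demand_cost(m):
                print(f"Reserved wins from month {m}: "
                      f"${reserved_cost(m):,.0f} vs ${on_demand_cost(m):,.0f}")
                break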

    1. Blane Bramble

      If you are using reserved instances, though, you are committing to a 12- or 36-month agreement - if you need that much power for that length of time, why aren't you building your own cloud/server farm?
