Brit server maker Avantek puts its back into ARM servers

Tony Lees, managing director of Avantek Computer Limited, wants to sell you your first and then your next hundred ARM servers. "This is our current plan: to take over the world with ARM," Lees tells El Reg with a laugh. At the same time, however, Avantek is dead serious about joining the ARM army and fighting against the …

COMMENTS

This topic is closed for new posts.
  1. Phil Endecott

    Can I please have a 1U version?

    (Seriously. I have a "homemade" 1U ARM server board based on a 2-core Samsung Exynos 5 chip, which is great, but it would be better to have something a bit more "professional". What I don't need is dozens of cores or disks.)

  2. Infernoz Bronze badge

    Only 32-bit, WTF?

    I won't consider ARM for server use until it has 64-bit memory support, because any decent application needs a lot more RAM than 4GB, especially for database, storage or VM servers!

    Even Android tablets could use 64-bit ARM, then the memory would become really useful!

    1. Anonymous Coward
      Anonymous Coward

      Re: Only 32-bit, WTF?

      Do you need >4GB of memory in the server, or >4GB of memory immediately and simply accessible to each application? They are different things.

      If 4GB per application is sufficient, all you need do is find an ARM design partner and motivate them to make you a 32bit ARM chip with a >32bit total address space and document how to use it.

      Not rocket science; it's been going on for decades with other chips: the PDP-11 had a 16-bit core with a 22-bit physical address space in some models, accessed via an MMU (e.g. the PLAS directives in the OS); x86 had PAE, a 32-bit core with a >32-bit memory address space accessed via the MMU; yea, even unto the Z80, with some vendors' bank-switching magick; and so on.
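The windowing idea above can be sketched in a few lines. This is a rough illustration only, not ARM-specific: the "physical memory" here is played by a sparse 8 GiB temporary file, and the MMU's bank switching is played by mmap moving a small window around, so a process never has the whole space addressable at once.

```python
# Sketch of "bank switching": keep a small window mapped into a much larger
# space and move the window, rather than addressing the whole thing at once.
import mmap
import tempfile

PAGE = mmap.ALLOCATIONGRANULARITY   # mmap offsets must be multiples of this
TOTAL = 8 << 30                     # pretend physical space: 8 GiB, beyond 32-bit

def write_via_window(fd, offset, data):
    """Map a one-page window covering `offset`, write `data`, unmap."""
    base = (offset // PAGE) * PAGE
    with mmap.mmap(fd, PAGE, offset=base) as window:
        window[offset - base : offset - base + len(data)] = data

def read_via_window(fd, offset, length):
    """Map a one-page window covering `offset`, read `length` bytes, unmap."""
    base = (offset // PAGE) * PAGE
    with mmap.mmap(fd, PAGE, offset=base) as window:
        return bytes(window[offset - base : offset - base + length])

with tempfile.TemporaryFile() as f:
    f.truncate(TOTAL)               # sparse file: no real 8 GiB consumed
    write_via_window(f.fileno(), 6 << 30, b"hello")   # write past the 4 GiB line
    assert read_via_window(f.fileno(), 6 << 30, 5) == b"hello"
```

(For simplicity the example keeps writes within one page; a real banked scheme has to handle accesses that straddle window boundaries.)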

      Or wait a little while till the true 64-bit-address ARM stuff comes along, but bear in mind that in a design focused on low power consumption and low cost, making addresses twice as wide will likely come with a bit of a penalty in power or performance or both (and hence also cost). Double the width of addresses and the same amount of cache only holds half as many items. Double the width of addresses and there's a fair chance that any given application gets bigger. Decisions, decisions.

      It's 20 years or so since the first mainstream (albeit non-x86) 64-bit chips arrived, with 64-bit OSes to boot. But there's still plenty of 32-bit-compatible work around. And there's genuine 64-bit stuff too. And a grey area in between where "bank switching" may work.

      Have a lot of fun, whichever way works best for you.

      1. Anonymous Coward
        Anonymous Coward

        Re: Only 32-bit, WTF?

        No finding required: boards with Cortex-A15 or Cortex-A7 CPUs already have 40-bit physical addressing, called LPAE, despite the cores still being 32-bit.

        http://www.arm.com/products/processors/technologies/virtualization-extensions.php

        ARM talks about using the virtualization features in combination with LPAE to give multiple VMs their own 4GB of physical RAM. I presume this is expected to run on Cortex-A15, but I don't know what software supports it; I'd expect at least Linux KVM to work.
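For what it's worth, on an ARM Linux box the kernel advertises LPAE support as an "lpae" flag in the Features line of /proc/cpuinfo, so it's easy to check from userspace. A sketch — the sample cpuinfo text below is made up for illustration, not captured from real hardware:

```python
# Check for the ARM "lpae" hwcap flag in /proc/cpuinfo-style text.
def has_lpae(cpuinfo_text):
    """Return True if any Features line lists the lpae flag."""
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("features"):
            _, _, flags = line.partition(":")
            if "lpae" in flags.split():
                return True
    return False

# Illustrative sample, in the shape of a 32-bit ARM /proc/cpuinfo:
sample = (
    "Processor\t: ARMv7 Processor rev 2 (v7l)\n"
    "Features\t: swp half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 lpae\n"
)
print(has_lpae(sample))  # True for this sample

# On a real box you would read the live file instead:
# with open("/proc/cpuinfo") as f:
#     print(has_lpae(f.read()))
```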

    2. P. Lee

      Re: Only 32-bit, WTF?

      ARM is a low-power CPU. It isn't really in the market for running large databases on a single chip; x86 does that (better).

      ARM is cheap, low-power and scales horizontally. If your problem space doesn't fit that, it's probably not the right solution for you.

  3. Frumious Bandersnatch

    keep the performance per watt [...] lower than Intel

    Eh, that's the one they want to keep higher. The bit in the ellipsis (cost/performance/watt) is, as you say, something they want to keep lower. So minimise Watts/GFLOP and €$£/GFLOP/Watt.
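To make the direction of each metric concrete, here's a toy comparison with entirely hypothetical numbers (chip_a and chip_b are invented, not real parts): lower Watts/GFLOP and cost/GFLOP are better, higher GFLOPS/Watt is better.

```python
# Hypothetical chips: raw throughput, power draw and unit price.
chips = {
    "chip_a": {"gflops": 50.0, "watts": 5.0, "price": 100.0},
    "chip_b": {"gflops": 200.0, "watts": 80.0, "price": 400.0},
}

for name, c in chips.items():
    watts_per_gflop = c["watts"] / c["gflops"]   # minimise this
    gflops_per_watt = c["gflops"] / c["watts"]   # maximise this
    price_per_gflop = c["price"] / c["gflops"]   # minimise this
    print(f"{name}: {watts_per_gflop:.2f} W/GFLOP, "
          f"{gflops_per_watt:.1f} GFLOPS/W, {price_per_gflop:.2f} per GFLOP")
```

With these made-up figures chip_a is four times better on efficiency (10 vs 2.5 GFLOPS/W) even though chip_b has four times the raw throughput — which is exactly the trade the ARM server pitch is making.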

    Personally, I'd love to see some of this stuff making its way out of the data centre and becoming something that someone could buy as a desktop/workstation replacement. The low-end ARM-based systems (Pi, ODROID and so on) are all severely lacking when it comes to I/O bandwidth and interconnection options.

    I'd love to see more of this on-die high-speed networking stuff make it into consumer products, preferably with similar buses/interconnects for accessing GPUs using something standard like OpenCL. I know that's unrealistic given that the desktop market is tanking and nobody wants to risk ARM in that kind of system right now, but if such personal mini-clusters of ARM machines were available, I'd jump ship from x86-based systems in a heartbeat.

    I guess there's always Parallella, but it remains to be seen when that will become readily available, how easy it will be to program for, how much software support there will be for it, and so on.

    As I said, it's all a bit of a dream scenario, but at least it's good to see the ARM platform developing into something that you can do some serious processing on. Give it a few years and I reckon I might just get my wish.

    1. Anonymous Coward
      Anonymous Coward

      Re: keep the performance per watt [...] lower than Intel

      A network-switch interconnect to access an OpenCL GPU? That's absurd. Latency would be a killer; even PCI-E latency is borderline acceptable.

      These servers are flawed from the get-go.

      So what if you can fit hundreds of nodes? Even when they have a 64-bit chip, it still means you need to run hundreds of physical copies of the OS, drivers and base apps, which makes it terribly inefficient as a system. This is 'software overhead' you have to live with, and something ravers of ARM systems fail to take into account.

      Also, the MIPS of ARM chips are much lower than Intel's.

      I cannot see much use out of these, if anyone can, do please enlighten us.

      1. catalinuxro
        Thumb Up

        Re: keep the performance per watt [...] lower than Intel

        Well, think about tens of LAMP stacks in 4U, every node dedicated to a customer, in a power envelope of 450W. Seems to be a good fit for dedicated hosting for a provider.

      2. Charlie Clark Silver badge

        Re: keep the performance per watt [...] lower than Intel

        @AC

        I cannot see much use out of these, if anyone can, do please enlighten us.

        While I disagree with most of what you say I, too, would like to know what the server that Oxford University is buying is going to do. Even if it's just for research purposes - software development on such a system as a cheap way to develop and test HPC code - it makes sense. It may well be that academia is indeed going to be the target market for these systems for the time being, but don't underestimate how that may work: this year Oxford University, next year CERN, the year after that IBM has them in its portfolio.

        1. Bronek Kozicki
          Thumb Up

          Re: keep the performance per watt [...] lower than Intel

          I could use it. Certain application architectures are not very difficult to adapt to distributed processing, which would fit this architecture well. I'm only a bit worried about the performance of these cores; it would probably not fit all of my use cases well.

  4. Anonymous Coward
    Anonymous Coward

    "It would be better to have one disk per core, of course"

    Why, of course?

    For as long as I can remember, the SAN vendors have been telling us the direct opposite - one big SAN storage farm per datacentre, local disk storage is so last century.

    Can they both be right, have times changed, or what?

    1. TelcoWelder

      Re: "It would be better to have one disk per core, of course"

      I thought the same thing... I'm assuming it's because there is a SATA 2.0 controller on each ECX-1000... so if there are more SoCs than disks, you'll have to NFS/iSCSI... not that that would be a real issue, of course.

      1. avantekcomputer

        Re: "It would be better to have one disk per core, of course"

        There are five SATA2 controllers on each SoC (four directly available, one via the edge connector), so 20 SATA ports across a four-SoC card. This makes one disk per core achievable in the right chassis.

  5. Suricou Raven

    I see a use.

    So it's only good for loads that need lots of computing at very low cost-per-watt, but a minimum of communication between processes? Cryptanalysis comes to mind.

    I'd say to ask the NSA, but they already know.

  6. Anonymous Coward
    Anonymous Coward

    Power is the key...

    and apparently the UK will run out of power by 2015, so maybe ARM is the solution?

    http://preview.tinyurl.com/ofgem-nopower

  7. Matt Bryant Silver badge
    Facepalm

    Why 3.5in disks?

    Surely performance isn't the driver here, so couldn't they save power and increase the number of disks per socket by using laptop drives? If they already have a flash drive per socket, surely it makes sense to have two slow laptop drives per socket instead of one 3.5incher?

