The ARM collective doesn't just want to get into the data center. It wants to utterly transform it and help companies "manage down the legacy" of existing systems, as Ubuntu founder Mark Shuttleworth put it during a live chat hosted by ARM Holdings to close out the Mobile World Congress extravaganza in Barcelona on Thursday. "I …
...you can do the same stuff on Atom processors without recompiling.... Is the ARM platform compelling enough to make you switch, especially given their lack of a track record in scaling up?
I believe Atoms use more power for less performance. They also cost a lot more than ARM SoCs.
Most Linux apps require just a simple recompile to run on ARM....I don't think that is a bottleneck.
It's going to be an interesting couple of years. I can see, in the near future, a home server containing a rack of tiny ARM based boards (Raspberry Pi style with more I/O performance), much as described in the article, but serving up home entertainment and all sorts. Got another TV/child/security camera? Add another board. Maybe it's not necessary/required, but it's certainly feasible.
More power for more performance I think for Atom.
And a whole lot more price.
Intel is worried about ARM moving up or replacing Intel kit, but probably doesn't want the kind of margins ARM gets today.
More power for more performance I think for Atom.
I think this very much depends on the application.
I would suspect that for many applications you are right, but for many you are wrong. Added to which, most server applications (which this article is talking about) scale well across threads. Therefore, if we took an Atom dual core processor (the most Intel do at the moment, I think) compared to a multi-core ARM SOC, I think you would be able to get similar performance for much lower power and cost.
This is only my own gut feeling, I have no numbers to back it up. Where I work we had enough trouble upgrading our MES from a couple of old (>10 years, although they are still running perfectly) Alphas to Itaniums, and are now being blocked by the bean counters from consolidating most of our boxes into a new virtualised environment. We move slowly here; if ARM servers take off we may get to them 10 years or so later...
All I know is Medfield Atoms are not even on the latest process and they are really fast. Battery lasts longer than any ARM Android device I have seen.
Re: Arm takes over
Yes. Wait ... no. Or yes.
Until the financial boys get involved...
A monopoly will only crumble if something utterly compelling comes along to take its place and the monopoly doesn't change enough to compensate. By compelling I don't mean that some fanboys get hard-ons over the new tech, but that it is financially compelling once you take into account migration costs, retraining costs, hardware changes, software rewrites or maintaining yet another software/hardware stack (that legacy stuff really isn't going anywhere), new toolsets, etc.
Once you get to scale, Linux/Windows cost about the same to actually manage (and in many cases the toolsets are better for Windows, so it's actually cheaper).
That's a lot of £££ that any new platform will have to save you before it's worth considering for the majority of people (excluding the handful of super sized data centres on the planet).
I notice that there still isn't any major backup package supporting Linux on ARM. Until this happens, there is no chance that you're going to see any meaningful use of ARM Linux in any competently run datacentre.
Depends on your use case. If your backup strategy is "rebuild from build server on fail", you don't care about Networker, Netbackup or whatever. The Hyperscale model is to have 100s of small servers, backing them all up individually would be a pain.
Anything which is changeable in any way needs to be backed up. You may have a large back end filesystem mounted up on NFS, then backed up by another machine for which there is a backup client. The problem is that many bare metal restore systems actually sit on top of backup software, unless you're going to roll your own.
But if this is a "big data" model you're not changing anything in that way, and you really don't need backups. Servers might be deployed with kickstart (or equivalent), changes made with cfengine, puppet or suchlike. If actual data is in hadoop or a similar system, you'd just rebuild the node and let the data re-replicate. This doesn't work for all use cases, but I can see an awful lot where it does.
> I notice that there still isn't any major backup package supporting Linux on ARM. Until this happens, there is no chance that you're going to see any meaningful use of ARM Linux in any competently run datacentre.
It doesn't need to. The ARM server nodes are optimised to provide particular services. The backup program does not need to run on each ARM chip, or on any ARM chip, it can be on some other type of device as it only needs to access the same file or data store as the servers do.
In fact the data store need not be ARM either.
The advantage of ARM is that 1) it uses less power, i.e. a lower electricity bill, which is a significant cost, and 2) the cores not in use can shut down completely, thus providing even more saving during times of lower workload. Having the odd machine with another CPU type for different tasks is a choice made on cost/benefit, not on some agenda.
Debian has supported Arm since 2.2 'potato' released 2000-08-15. Debian is a lot more 'major' than you might think.
I just don't get where you get the idea that you can't backup ARM servers, or even why it's a stumbling block to deployment. If you want them, there are plenty of backup solutions you can compile from source, or you can use the venerable rsync if you don't have any special requirements like snapshotting a filesystem so that it's in a consistent state during the backup (though I understand that LVM can do this).
The second point is to consider whether you really need backups in the first place. I think you may be misunderstanding the use case of (most?) ARM server deployments. You're probably more used to thinking of having a variety of servers each doing different things, or running a number of VMs, perhaps? I see the use case of ARM servers more in terms of grid or cluster computing. Looked at in that way, there's probably nothing on any of the nodes that you'll actually want to back up explicitly. The system image (or a large chunk of it, anyway) will probably reside on an NFS server and will be shared among several nodes. If you're using them for "OLTP" type applications, then your database is definitely going to be distributed, with replication of data across several nodes. The upshot of both of these points is that if something goes wrong with one of the nodes, it's not important: you just replace it or reimage it. If your database is already distributed and replicated across nodes, it can survive some number of failures like this, so again, there should be no need to backup individual nodes. You will want to make sure that you've got some way of backing up your entire database, but that's a whole different kettle of fish, and nothing to do with what you say is the problem here.
I wonder if anyone gets the Acorn reference in the title. It's an awful long time ago that ARM stood for Acorn RISC Machines...
I think you will find a lot do.
Especially since a lot of people have returned to the platform since RISC OS became available for the Pi.
Missing the point, I think...
Where I think everyone's missing the point, even ARM perhaps, is that everyone's still thinking in terms of the current paradigm. When the maturity and process differences are taken into consideration, ARM chips offer the potential of 10^2 times as many processing cores as the x86 architecture does, for the same practical considerations, and what this means is that 10^9 core MPP systems are now viable. At that scale we can drop the Turing Machine model, where processing is separated from data, and progress to an Object Machine, where every element of data is an active processing element - data that finds and synthesises its own associations with other data elements.
At this point, software evolution can occur, in the literal sense, where you can start by generating random sequences of logic and see if they do anything useful and then randomly modify those sequences to see if any of them still work, and if they do, do they work better, or do they do anything new? So the issues about MS vs. Linux (to symbolically represent all such arguments) are going to become moot in the not too distant future, because all software will be written by software.
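For what it's worth, the "generate random logic, keep what works, mutate it" idea can be sketched as a toy hill-climber in a few lines of Python. Everything here is made up for illustration: the "programs" are just sequences of arithmetic ops, and fitness is hard-coded as distance to an arbitrary target number, which conveniently dodges the question of what "useful" means in general.

```python
import random

# Toy "software evolution": an individual is a random sequence of simple
# operations; fitness measures how close applying the sequence to 1 gets
# to an arbitrary target value.
OPS = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2, "dec": lambda x: x - 1}
TARGET = 42

def run(program, x=1):
    for op in program:
        x = OPS[op](x)
    return x

def fitness(program):
    return -abs(run(program) - TARGET)  # higher (closer to 0) is better

def mutate(program):
    p = list(program)
    p[random.randrange(len(p))] = random.choice(list(OPS))
    return p

random.seed(0)
# Start from a random population, then repeatedly keep the best half and
# replace the rest with mutated copies of the survivors.
population = [[random.choice(list(OPS)) for _ in range(8)] for _ in range(50)]
for _ in range(500):
    population.sort(key=fitness, reverse=True)
    population = population[:25] + [mutate(p) for p in population[:25]]

best = max(population, key=fitness)
print(run(best), best)
```

With a search space this tiny it converges almost instantly; the hard part the comment glosses over is defining fitness for real software.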
Around the same time, if not before, extremely high-precision analogue electronics will be combined and integrated with digital electronics so that we can do not just binary digital processing but higher-base hardware processing, which means that you could, for example, feed a base-3 digital processor a compatible pair of base-2 & base-3 instructions at the same time and get two completely different and correct answers to two completely different tests. This is still the same old data processing paradigm; the next paradigm will be a different way of dealing with data.
Anyways though, the ARM architecture will get us to those 10^9+ MPP systems we really need for a start.
Re: Missing the point, I think...
10^9 core MPP systems are now viable. At that scale we can drop the Turing Machine model,
Not really. We're still stuck with the Turing model in an abstract sense and the Von Neumann model in more practical terms. We just have to adapt them to be more aware of multi-core and multi-processor systems. And in fact, we pretty much did that years ago, and there hasn't been any great paradigm shift.
and progress to an Object Machine, where every element of data is an active processing element
It sounds like you're talking about agent-based programming. Again, it hasn't caught on, except in writing botnets and perhaps back-ends for massively-multiplayer online games.
generating random sequences of logic and see if they do anything useful
And just how do you decide what's "useful"? Or as Robert Pirsig put it in Zen and The Art of Motorcycle Maintenance, "And what is good, Phaedrus, And what is not good—Need we ask anyone to tell us these things?" You'd probably enjoy reading that since it's really about philosophy, not hard computer science.
because all software will be written by software(*)
Of course. And the Singularity will arrive and bathing in Unicorn Milk will keep us young forever.
do not just binary digital processing but higher-base hardware processing ... feed a base-3 digital processor a compatible pair of base-2 & base-3 instructions
Hmm... Are you really amanfrommars in disguise? If so I claim my £5.
But seriously, do you even know what Turing-complete means? In particular, a Turing machine can be re-expressed in terms of Gödel numbers, which in turn can be mapped onto the set of natural numbers. Crucially, all practical number bases are isomorphic to each other, so binary, ternary or base 10 (or balanced ternary or whatever) all have the same expressive power, so there's no theoretical reason to favour one over the other. It only comes down to issues of practicality. For most purposes binary is good enough, and it's only if you want to represent certain numbers with a finite number of digits that you might want to consider other bases (the string to represent 1/10 is infinitely long in binary, for example, while it's just "0.1" in decimal or binary-coded decimal). And in case you're wondering, going from the natural numbers to the reals doesn't magically grant your computer new powers either: the naturals are perfectly sufficient for "universal" computation, so, eg, a phinary-based computer can't do anything more than a binary one can, except be a pain to build and program. Another book recommendation for you: you might like Gödel, Escher, Bach: An Eternal Golden Braid...
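To make the point about bases concrete, here's a quick Python illustration (the helper `to_base` is just made up for this example, not a standard function): 1/10 has no finite binary expansion, which is why floats can't store it exactly, yet integers round-trip between bases with no loss at all.

```python
from fractions import Fraction

# 1/10 terminates in decimal ("0.1") but repeats forever in binary,
# because binary can only terminate on denominators that are powers of 2.
x = Fraction(1, 10)
print(float(x))            # the nearest binary float, not exactly 1/10
print(Fraction(float(x)))  # the exact rational the hardware actually stores

def to_base(n, b):
    """Render a non-negative integer n as a digit string in base b (b <= 10)."""
    digits = ""
    while n:
        digits = str(n % b) + digits
        n //= b
    return digits or "0"

# Integers are the same set however you write them down: conversion to
# ternary or binary and back loses nothing.
assert int(to_base(42, 3), 3) == 42
assert int(to_base(42, 2), 2) == 42
```

Which is the whole point: the choice of base changes which values are convenient to write, never what the machine can compute.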
(*) Actually, there is one kind of "program that writes programs" that can benefit from having a massive number of cores to work with, though I mean "program" in the kind of mathematical sense that Turing did, rather than the way you think of it (eg, a word-processing package). I'm thinking of something like Turbo Codes, which are effectively bit-level programs that tell a receiving computer how to reconstruct some embedded data even if some of the bits are dropped or corrupted in transit.
Another, similar type of application is data compression, since you can treat the compressed data as a "program" that tells the decoder how to unpack the message. I think that that's the most interesting possible application in this realm: given enough computing power, we should be able to try out many different ways of compressing some given data and output a compressed string and a decompressor. Obviously, this still isn't going to be able to magically compress incompressible data and it's quite impractical as a replacement for general-purpose compression schemes like gzip, bzip and so on (since there is an infinite--or worse, transfinite--number of "languages" to consider, and the best compression ratio possible is sensitive to the choice of language) but it still could be quite useful for discovering good compression schemes for certain types of data. See Kolmogorov Complexity for background details.
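A minimal sketch of the "compressed data as a program for the decoder" idea, using run-length encoding as the toy instruction set (the function names are my own, not from any real codec). Each (count, symbol) pair is an instruction meaning "emit symbol count times"; Kolmogorov complexity generalises this to arbitrary programs.

```python
def encode(data):
    """Compress a string into a list of (count, symbol) instructions."""
    out = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        out.append((j - i, data[i]))
        i = j
    return out

def decode(program):
    """The decoder simply 'executes' the instruction stream."""
    return "".join(sym * count for count, sym in program)

msg = "aaaabbbcca"
packed = encode(msg)
assert decode(packed) == msg
print(packed)
```

As the comment notes, no scheme like this compresses everything; RLE only wins on runs of repeated symbols, and the general search over all possible "decoder languages" is exactly the uncomputable part.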
party political broadcast on behalf of ARM
The fact that the announcement was hosted by ARM Holdings undermines the message somewhat. You're going to see ARM in the DC; to what extent is unknown, since, as previously stated, there has to be a compelling business case to deploy in large numbers. The toolchains for Linux on x86 are more developed for DC workloads. ARM is going to push Intel in the DC space, but Atom is going to start eating ARM's market share in mobile. When we talk about ARM, which ARM chips are we talking about, and how compatible are they with each other? ARM's memory management is still poor, and anyone who runs a large virtualised workload will tell you it's memory you run out of long before CPU. Having said all of that, Shuttleworth does have one thing right: the way apps are written for the cloud is very different from traditional x86 and ARM architecture apps. Ironically, Intel is now one of the biggest contributors to open source software, and this will help maintain its dominance. Having two strong competitors in the DC space can only be good for the consumer.