Meet ARM1, grandfather of today's mobe, tablet CPUs – watch it crunch code live in a browser

Chip geeks have produced an interactive blueprint of the ARM1 – the granddaddy of the processor cores powering billions of gadgets today, from Apple iPhones to Raspberry Pis, cameras, routers and Android tablets. The peeps behind the fascinating blog visual6502.org normally reverse-engineer chips by pulling the silicon out of …

  1. Voland's right hand Silver badge

    Variable record format

    Hehe... Blast from the past - full OS-level record management in file I/O. The app developer had no clue what was going on behind the scenes; VMS managed it all for them, including, by default, creating a new version of the file on each open for write. So if it decided to do the actual writes as variable-size records, the app developer would have had no clue - it would have looked like ordinary record retrieval to the application.

    The end result was the most insane open() syntax known to man. My recollection is that it took 5+ lines of optional arguments to open a file in VMS Pascal.

    1. Duncan Macdonald

      Re: Variable record format

      However, as VMS had the record management, almost any program could read almost any file - the record attributes were stored in the file header, and an open() call without optional parameters used those attributes to read the file correctly. (None of the mess in Windows, where some text files open correctly in Notepad while others need Wordpad.) From (very old) memory: ordinary variable-length-record text files needed no optional parameters; fixed-length-record files needed two parameters (type = fixed length, plus the length of each record). It could get messy if you were creating indexed files, though a sequential read of an indexed file could still be performed by almost any program.

      The really bad case was reading a foreign (not created by VMS) binary file where everything had to be specified as the OS did not have valid data in the file header.
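
      As a concrete illustration of both points, here is a sketch - it uses the C RTL on VMS rather than VMS Pascal, and the RMS keyword strings are quoted from (equally old) memory, so treat the details as assumptions to check against the manual:

      #include <fcntl.h>   /* open() and the O_* flags */

      int demo(void)
      {
          /* Native VMS file: no extra arguments needed. RMS fetches the
             record format and size from the file header, which is why
             almost any program could read almost any VMS-created file. */
          int in = open("data.dat", O_RDONLY, 0);

          /* Fixed-length records: the creator supplies the record format
             and the record length, and RMS stores both in the file header
             for later readers. (Keyword strings from memory.) */
          int out = open("fixed.dat", O_WRONLY | O_CREAT, 0644,
                         "rfm=fix",    /* record format: fixed length  */
                         "mrs=512");   /* maximum record size in bytes */

          return in >= 0 && out >= 0;
      }

      VMS Pascal exposed the same RMS machinery, hence the legendary pile of optional arguments mentioned above.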

    2. Primus Secundus Tertius

      Re: Variable record format

      Yes, VMS files came in the proverbial 57 varieties. This was all well documented, but few people ever consulted the manuals. Many programmers got confused and made mistakes.

      It was as confusing as the old George 3 file varieties: graphic mode (for all-capitals text), normal mode (quite rare, upper and lower case), and allchars (normal plus control characters).

      1. TonyWilk

        Re: Variable record format

        George 3... sheesh, you just reminded me how old I'm getting. As a lowly student my files were mostly stored in 'on the shelf' format - as piles of punch cards or tape.

  2. skswales

    25,000 *transistors* - not gates!

  3. PleebSmasher

    >Eventually, about 18 months later, they produced the ARM1 – a tiny, efficient CPU fabricated by VLSI Technology with fewer than 25,000 gates using a 3,000nm (3μm) process. Today, a quad-core Intel Skylake Core i7 processor, with builtin GPU, has 1,350,000,000 gates using a 14nm process.
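
    (As a sanity check on those figures - treating both counts as transistors, per the correction above: 1,350,000,000 / 25,000 ≈ 54,000 times as many devices, while the linear shrink from 3,000nm to 14nm is about 214x, i.e. roughly 214² ≈ 46,000 times the density per unit area. The two scale factors are at least in the same ballpark.)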

    Why not compare it to the Exynos 8890, Snapdragon 820, Kirin 950, or Mediatek Helio X20 instead of an x86 flagship chip?

    1. diodesign (Written by Reg staff) Silver badge

      Re: PleebSmasher

      Don't let me stop you -- off you go, then.

      C.

      1. Anonymous Coward
        Windows

        Re: PleebSmasher

        I was thinking the latest Tegra or an X-Gene.

        How do we edit the article then, C?

        1. diodesign (Written by Reg staff) Silver badge

          Re: anonymous

          Just post here.

          C.

    2. Anonymous Coward
      Anonymous Coward

      Run out of cache

      And while we're at it, if we're comparing [just] processors, why not deduct the huge number of simple (and simply interconnected) transistors that make up the on-chip caches?

    3. Dan 55 Silver badge
      Coat

      Personally I think it should have been compared to other desktop CPUs of the day, followed by a bit of data about ARMv8 today, comparing that to the surviving rival CPU (x86) in both its i7 desktop and Atom mobile versions.

      Making El Reg a wiki is definitely the future.

  4. Tom 7

    If you wanted to learn about computing from the ground up

    Then, if there is a gate/block-level version of this available, you have everything you need to cover everything from simple logic gates on silicon all the way up to virtualised machines.

    I have spent some time trying to gather Z80 material to do this, but Zilog no longer has the original circuit diagrams etc. With this, GCC and so on, though, you have it all.

    1. Paratrooping Parrot

      Re: If you wanted to learn about computing from the ground up

      Awww, it would have been great to see the Z80, as I was using Z80 machines.

  5. Anonymous Coward
    Anonymous Coward

    Layout Vs Schematic

    "Close up ... the semiconductor gate schematics for the ARM1"

    That looks like a layout (physical), not schematics (logical netlist).

    Probably from a CIF file, rather than GDSII.

    1. diodesign (Written by Reg staff) Silver badge

      Re: Layout Vs Schematic

      ~~ My chip articles bring the pedants to the yard. And they're like, our knowledge is better than yours. We can teach you but we have to charge. ~~

      It's fixed, ta. Once upon a time I used VLSI design software to lay out gates and doping regions and cells and metallization layers and, arrgh, I thought I'd erased all that from my mind.

      C.

      1. Anonymous Coward
        Anonymous Coward

        Re: Layout Vs Schematic

        ASIC backend implementation is still an interesting, noble (and well paid) profession. ;-)

        1. The entire Radio 1 playlist commitee
          Happy

          Re: Layout Vs Schematic

          Yeh you just have to tell them that every now and then ... and keep the blinds closed

  6. Steve Graham

    memory corruption

    My first ever job -- in 1981 -- was writing VLSI design software on VAX/VMS. CIF files ring a distant bell, but I can remember nothing more.

  7. davidp231

    As I recall they thought it was broken because it produced such a small amount of heat. Something along those lines.

    1. Anonymous Coward
      Anonymous Coward

      I have a vague and hazy recollection that it produced such a small amount of heat because it was broken... but a tiny current leaking from somewhere unexpected allowed it to function correctly anyway...

      ?

      1. Will Godfrey Silver badge

        There was a break in the supply, but as long as at least one IO line was high it was powered by the reverse diode on that line. Took them a while to discover why it would occasionally crash :)

  8. billse10

    That rang a bell too - found it right here:

    "Deeply puzzling, though, was the reading on the multimeter connected in series with the power supply. The needle was at zero: the processor seemed to be consuming no power whatsoever.

    As Wilson tells it: “The development board we plugged the chip into had a fault: there was no current being sent down the power supply lines at all. The processor was actually running on leakage from the logic circuits.”

  9. Chika
    Coat

    "...[Acorn] imploded..."

    That's rich! Mind you, I suppose it's immaterial how Acorn was asset-raped this far down the line. Boland and his cronies took their pound of flesh, and ARM managed a success that has annoyed certain competitors ever since!

    1. Anonymous Coward
      Anonymous Coward

      "...[Acorn] imploded..."

      The history of Acorn and ICL presents two wonderful examples of politicians being utterly clueless at IT policy. Gordon Brown flogging off the gold reserves was trivial in comparison. ARM today is barely a medium-sized company, but its potential was to be the next Intel.

      1. Anonymous Coward
        Anonymous Coward

        Re: "...[Acorn] imploded..."

        ARM today is barely a medium-sized company, but its potential was to be Intel.

        FTFY

        O:-)

        1. asdf

          Re: "...[Acorn] imploded..."

          If one looked purely at each instruction set and where computing has recently gone, nobody in their right mind would pick Intel, with their POS x86, to be the 800lb gorilla. ARM has kept the core of their (imho superior) instruction set intact and grown a successful business with it for a few decades. Intel has tried repeatedly to kill their abomination, but the market won't let them (a market that has rewarded them very handsomely until lately). Guess it's been a good thing Intel has been a generation ahead of everyone else in fab technology (the real reason for Intel's success), which is how they made it work in most spaces historically. Sadly, now that chips are fast enough and becoming a commodity, that overhead is killing Intel and making ARM look real pretty indeed. ARM lets the companies that know how to do high-volume, low-margin manufacturing do the heavy lifting, and then takes their (rather fair) cut.

          1. asdf

            Re: "...[Acorn] imploded..."

            Also, I am aware that the x86 instruction set has been emulated in hardware since the mid 1990s, but even with emulation, Intel with their state-of-the-art fabbing has been unable until very recently to compete with ARM on mobile with x86. Also, beware: it looks like one of El Reg's adverts (the only page I had open at the time) decided to try to serve up malware to me, thanks to my not remembering to run through Privoxy (with NoScript on full blast) like I regularly do. Of course, with me running Tails in a VM straight off an ISO file with no persistent storage, and it flashing up obviously fake "Firefox out of date" warnings while trying to run MalwarePretendingToUpdateFirefox.exe, it wasn't going to get far. I just reset the VM rather than trying to debug it, but I wasn't happy to see it.

            1. Anonymous Coward
              Anonymous Coward

              Re: "...[Acorn] imploded..."

              I'm not actually sure why people care so much about the instruction set.

              When everyone had to code in assembly it mattered... now that decent-quality C compilers are available for free, the ten or so lines of assembly most programmers (even the low-level embedded guys) have to come up with every other year mean the underlying assembly language matters very little.

              Since humans don't care about the prettiness of the assembly language anymore, surely code density etc. should matter much more.

              You say the only reason Intel are "winning" is that they have the latest fab technology. Well the only reason ARM cores ship as many units as they do is that they can be produced on old production lines.

              Don't get me wrong, I like ARM - but not because I'm in love with their instruction set. They are one of the few companies that make information available on things like how debugging works over JTAG, so there is a decent ecosystem of free tools for working with ARM cores. On the other hand, I'm not going to poo-poo Intel based on some childish dislike of their instruction set. If there weren't significantly faster Intel machines out there, developing for ARM machines would be many, many times less productive.

              1. Anonymous Coward
                Anonymous Coward

                Re: "...[Acorn] imploded..."

                Because, AC, the instruction set is what your ultra high level wysiwyg code gets compiled into, so:

                1) Understanding the IS helps a competent programmer write efficient code

                2) No matter how good your code and compiler are, if the target is a heinous kludge like x86, your binaries will flop out bigger, slower and kludgier than if they'd been made for a more elegant architecture

                1. Anonymous Coward
                  Anonymous Coward

                  Re: "...[Acorn] imploded..."

                    x86 has much better code density than ARM. ARM had to license patents from Hitachi to come up with Thumb.

                  1. Anonymous Coward
                    Anonymous Coward

                    Re: "...[Acorn] imploded..."

                    'x86 has much better code density than ARM.'

                    No

                    1. Anonymous Coward
                      Anonymous Coward

                      Re: "...[Acorn] imploded..."

                      >No

                      My own real-world tests show that ARM binaries are 10-15% bigger than x86 ones.

                      You have to mix ARM and Thumb in the same binary to get decent code density and performance... so you move the nasty kludges from the instruction decoding in the CPU into the binaries.
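
                      If anyone wants to reproduce that kind of measurement, here is a minimal sketch (the cross-compiler name and flags are assumptions about your toolchain - any GCC built for ARM will do): compile one translation unit per target and compare the .text figures reported by size(1).

                      /* density.c - one translation unit, built for several targets.
                       * Hypothetical build/measure commands (toolchain names assumed):
                       *   gcc -Os -c density.c && size density.o                  # x86-64
                       *   arm-linux-gnueabi-gcc -Os -marm   -c density.c && size density.o
                       *   arm-linux-gnueabi-gcc -Os -mthumb -c density.c && size density.o
                       * The .text sizes give a crude code-density figure; -marm vs
                       * -mthumb shows the ARM/Thumb trade-off mentioned above. */
                      unsigned crc32(const unsigned char *p, unsigned long n)
                      {
                          unsigned c = 0xffffffffu;           /* running CRC register  */
                          while (n--) {
                              c ^= *p++;                      /* fold in the next byte */
                              for (int k = 0; k < 8; k++)     /* one bit at a time     */
                                  c = (c >> 1) ^ (0xedb88320u & (0u - (c & 1)));
                          }
                          return ~c;
                      }

                      Expect the numbers to swing with compiler version and flags, which is part of why such comparisons land 10-15% either way.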

                      And here I was thinking the ARM instruction set is some gift from $deity that is perfect in every way.

                      1. Munchausen's proxy
                        Pint

                        Re: "...[Acorn] imploded..."

                        And here I was thinking the ARM instruction set is some gift from $deity that is perfect in every way.

                        Nah, that would be PDP-11.

                        1. Roo

                          Re: "...[Acorn] imploded..."

                          Have an upvote for the PDP-11, but the Alpha ISA was more perfect. :)

                          Much as I love the -11, the Alpha's big collection of 64-bit registers made it pretty easy to write *fast* assembler - in practice the compilers were pretty good too. I debugged a few bits of rabid C code I'd never seen before in no time flat by looking at the disassembled output - the ISA made the compiler's output trivial to map back to the source. I do miss that sometimes. :)

                    2. Anonymous Coward
                      Anonymous Coward

                      Re: "...[Acorn] imploded..."

                      "'x86 has much better code density than ARM.'

                      No"

                      Thank you for not clarifying that at all - not even linking to other people's work that might have clarified it.

                    3. Roo

                      Re: "...[Acorn] imploded..."

                      As of 1992, on a particular set of benchmarks we ran, the smallest code generated was for the INMOS T800; the x86 binaries were 30% bigger. The INMOS chips used a cute instruction encoding that made a lot of common ops single-byte instructions - handy when you were trying to cram everything into the 1-4KB of on-board single-cycle memory. ;)

                    4. asdf

                      Re: "...[Acorn] imploded..."

                      >'x86 has much better code density than ARM.'

                      Whether that is true or not is only somewhat relevant. Even if the code density is higher, if it takes a lot more chip real estate to implement the instruction set, and it can only be implemented somewhat efficiently, it's still going to use more energy and run hotter - which is exactly what you want to avoid for mobile (and in the datacenter as well). From what I understand, x86 is such a dog for mobile that even emulating puts Intel at such a disadvantage it took them a herculean effort to finally compete with ARM (and still not in ultra-low-power, last I heard). What they are finding, though, is that competing with ARM is not like competing with AMD. The payoffs are not the type of margins Intel is used to.

                      1. Anonymous Coward
                        Anonymous Coward

                        Re: "...[Acorn] imploded..."

                        >Whether that is true or not is only somewhat relevant.

                        >Even if the code density is higher

                        Code density is a good benchmark of the "goodness" of an ISA - one that doesn't basically boil down to "it's good because I like it, that makes it good". Code density is such a big problem for ARM that they keep an alternative instruction set in their chips to make up for the main one.

                        >if it takes a lot more chip real estate

                        >to implement the instruction set

                        And that matters to end users because? The number of transistors Intel have to squeeze onto a chip does not keep me awake at night. There are lots and lots of products out in the real world that use ARM Cortex-M parts to implement stuff that could have been done with discreet logic or a 555 timer instead. Baby Jesus doesn't weep when a transistor is wasted.

                        >and it can only be implemented somewhat efficiently it's still going to use more energy

                        >and run hotter which is exactly what you want to avoid for mobile

                        But not every machine in the world is mobile. Imagine developing for mobile/embedded platforms if you didn't have a hideous x86 box doing the grunt work of compiling all the tools and code for the target? The only reason mobile works is that there are smelly x86 boxes on the desk and in the cloud doing the grunt work.

                        >From what I understand x86 is such a dog for mobile even emulating puts Intel

                        So you don't actually know. You read this "fact" somewhere and use it in your little rants against x86 without really knowing what you are talking about.

                        Intel's desktop x86 chips kick even ARM's latest stuff in the balls. Decent performance is not a disadvantage for mobile. If Intel could get an i7 class chip into the energy budget for a phone they would have a winner on their hands.

                        The problem for Intel apparently is that they can't retain the performance without breaking the energy budget (they seem to be making some progress though..). It's like performance increases complexity which in turn increases the required energy! Who would have thunk it!

                        The emulation point brings nothing to the table. Intel need an ARM emulator because of ARM's hold on the market. Emulation is processor intensive. ARM's ISA is no better at it.

                        >The payoffs are not the type of margins Intel is used to.

                        ARM is a fabless semiconductor company that licenses designs with good energy performance and acceptable execution performance at low, low prices, and chips based on their designs can be produced in fabs that are a lot cheaper than what Intel is using for its top-of-the-range lines. I'm not sure how Intel thought they'd have a chance at breaking into ARM's core business and making any money in the process. I'm sure they saw shrinking shipments of their high-performance lines and thought they needed to make a move. Intel's attempt to get back into the microcontroller market with their Quark stuff is equally laughable.

                        But either way, Intel's bad business decisions don't make x86 "bad".

                        1. Anonymous Coward
                          Anonymous Coward

                          Re: "...[Acorn] imploded..."

                          " Imagine developing for mobile/embedded platforms if you didn't have a hideous x86 box doing the grunt work of compiling all the tools and code for the target?"

                          If there was any doubt where you were coming from, it's clear now. And it's not a good place.

                          Lots of people don't have to *imagine* not using "hideous x86 box grunt work of compiling all the tools and code for the target". Lots of people have done it, yea, even unto the days of PDP-11s. There are still people using stuff other than x86 too, but the typical IT department's dependence on x86 means there aren't as many cross-tool setups on Unix, VMS, etc. as there used to be.

                          If x86 is so brilliant in general, why is it near invisible outside the IT department?

                          1. Anonymous Coward
                            Anonymous Coward

                            Re: "...[Acorn] imploded..."

                            >If there was any doubt where you were coming from, it's clear now. And it's not a good place.

                            Please don't forget to mention where that place actually is.

                            >Lots of people don't have to *imagine* not using "hideous x86 box grunt

                            >work of compiling all the tools and code for the target".

                            Because they don't work in a field that requires them to do lots of compiling, data processing, etc.

                            But they'll consume content that has been processed by machines many times more powerful than their "mobile" device multiple times a day.

                            >There are still people using stuff other than x86 too, but the

                            On the desktop? Are there any desktop machines shipping in volume that aren't x86? The only ones I can think of are Chromebooks, and they aren't exactly winning any ass-kicking competitions.

                            >typical IT department's dependence on

                            IT departments - the be-all and end-all of people who think their job fixing printers is "working in high technology".

                            >Unix, VMS, etc, as there used to be.

                            Unix doesn't run on x86? You'd better tell that to all the Unix vendors that ported their breed of Unix to x86 as soon as they realised fast, commodity-priced x86 hardware was going to ruin their RISC party.

                            >If x86 is so brilliant in general, why is it near invisible outside the IT department?

                            Who said it's so brilliant? All I'm saying is it's not the ISIS of instruction sets, and it's not like ARM is some amazing super-technology sent from heaven to save us all. It's horses for courses.

                            If you want your desktop machine to be limited to performance levels of five years ago and only able to access a quarter of the RAM that my Core i7 setup does, knock yourself out... but I'll be keeping my commodity machine with 32GB of RAM, kthnxbye.

                            And it's not exactly invisible outside of the IT department, unless your job fixing printers involves printers attached to machines that consume multiple rooms:

                            https://en.wikipedia.org/wiki/Supercomputer#/media/File:Processor_families_in_TOP500_supercomputers.svg

                        2. asdf

                          Re: "...[Acorn] imploded..."

                          Leaving a lot out to keep this short.

                          >The number of transistors Intel have to squeeze onto a chip does not keep me awake at night

                          No, but it very much affects the performance/energy trade-off you allude to later.

                          >But not every machine in the world is mobile.

                          > I'm not sure how Intel thought they'd have a chance at breaking into ARM's core business and making any money in the process.

                          Mobile is the only segment still showing decent growth, which is why Intel is panicking. They are having their Kodak moment.

                          >But either way, Intel's bad business decisions don't make x86 "bad".

                          Like I said, nobody hates x86 more than Intel does, which is why they have tried repeatedly to kill it. It really has held them back in many ways, even if it buttered their bread for decades. x86 is a prime example of how it's not always the best product that wins the market (Motorola's ISAs were so much better in the early days) but the one in the right place at the right time and, most importantly, at the right price.

                          1. asdf

                            Re: "...[Acorn] imploded..."

                            Just to add.

                            >x86 is a prime example of how it's not always the best product that wins the market

                            Few product lines have ever had a stronger network effect, which is why it won the day and carried Intel to be one of the 30 biggest companies in the world - but ironically it may end up dragging Intel down to its doom as well.

                          2. Anonymous Coward
                            Anonymous Coward

                            Re: "...[Acorn] imploded..."

                            "nobody hates x86 more than Intel does"

                            Citation welcome. The story isn't quite as simple as that.

                            In the mid/late 1990s patent wars between Intel and DEC, Intel could have ended up with ownership of the Alpha architecture if they'd wanted to, or at least as an Alpha licensee (like Samsung was). As owners of Alpha they could also have had one implementation that was almost an SoC before most industry folk knew SoCs existed (the 21066). In addition, Intel could also have ended up with ownership of DEC's StrongARM designs and designers (which they did) and carried on with them (which they didn't, not in any serious way).

                            Intel HQ chose to carry on their own sweet way with IA64 ("because 64bit x86 is impossible and IA64 is the answer"). Sadly high end DEC systems (by then Compaq systems) were among those drinking the IA64 KoolAid, and the Alpha fell by the wayside, despite very prescient stuff like this slightly-techy 1999 whitepaper from DEC's Alpha people explaining why IA64 would fail:

                            http://www.cs.trinity.edu/~mlewis/CSCI3294-F01/Papers/alpha_ia64.pdf

                            Then when AMD showed that x86-64 was not only possible but practical and popular, Intel HQ finally realised that "industry standard 64-bit" meant AMD64 not IA64 (and not Alpha or MIPS or Power or SPARC). But the IA64 lived on alongside x86-64 for a while, even though everyone with a clue knew IA64 was going nowhere.

                            Alongside all that, Intel HQ chose not to retain and enhance the StrongARM designs (and people) they did end up with in 1997, they chose to sell them off to Marvell and carry on down the x86 road.

                            If those are signs of hating x86, you could have fooled me.

                            Btw, much of this "x86 vs the rest" stuff could be, and was, written ten years or so ago. Ten years on, Intel still haven't got with the SoC programme (there's more to this than "mobile", as in mobile phones/tablets/etc).

                            E.g.

                            http://www.theregister.co.uk/2006/06/27/intel_sells_xscale/

                            "Intel is to flog off its XScale [nee StrongARM] processor operation, the chip giant said today. The move paves the way for it to push low-power x86 CPUs at mobile phone and PDA makers. The buyer is comms chip company Marvell Technology Group, which is paying $600m cash for the product line and taking on "certain liabilities"."

                            and (the following day)

                            http://www.theregister.co.uk/2006/06/28/intel_mobile_failure/

                            "Intel's name looks forever to be associated with the PC, now that it's ended a nine year dalliance with the phone business. The firesale of its 1,400 strong XScale processor division, and the write down of its cellular investments, means that Intel has passed up the chance to play in the largest volume chip market of them all. There are 2bn mobile phones in the world, and in many emerging markets the phone is the only computing device likely to achieve ubiquity."

                            Intel: the x86 company, now and always.

                            1. asdf

                              Re: "...[Acorn] imploded..."

                              All good points, AC, about Intel shitting the bed on moving away from x86. Intel of course suffers from a legendary case of not-invented-here, like many companies, so yes, they have recognised the weaknesses in x86 from time to time, but they were damned if they were going to use somebody else's solution. The exception was AMD64, but that was an existential crisis - which Intel might be looking at again.

                              1. Anonymous Coward
                                Anonymous Coward

                                Re: "...[Acorn] imploded..."

                                "an existential crisis which Intel might be looking at again."

                                Intel need to get with the programme with respect to "System on Chip" designs, implementations, and business practices - there's more to "mobile" than the instruction set, even with $100/chip sold "contra revenue".

                                Without that change in direction, Intel are just continuing on the way to eventually being just another Power- or SPARC-class vendor, watching the desktop/laptop client sector become more and more a niche market, leaving "just" Xeon-class stuff supporting Intel and paying for their next two generations of chips and chip factories, which they currently fund in part from volume desktop/laptop chip sales.

                                Intel won't have the embedded capability of Power or the IBM mainframe business, or the open architectureness of SPARC. (Apologies to Alpha and MIPS but for various reasons I think their time has probably passed).

                                ICBW, but I don't see where else Intel are headed (or have been headed for a few years). They've a few dollars in the bank so they'll be OK for a little while.

                                1. Anonymous Coward
                                  Anonymous Coward

                                  Re: "...[Acorn] imploded..."

                                  "Intel need to get with the programme with respect to "System on Chip" designs, implementations, and business practices"

                                  Apparently Intel HQ agree:

                                  http://www.theregister.co.uk/2015/11/23/intel_hires_qualcomms_compute_leader_to_lead_new_mobile_push/

                                  " Intel has hired Doctor Venkata “Murthy” Renduchintala.

                                  The new hire was until recently executive veep and president of Qualcomm's mobile and computing division, in other words Intel's nemesis as Chipzilla tried and failed to make a dent in mobile devices. For having pulled off that feat, Renduchintala gets to head Intel's new “Client and Internet of Things (IoT) Businesses and Systems Architecture Group.”"

                                2. fajensen
                                  Coat

                                  Re: "...[Acorn] imploded..."

                                  Without that change in direction, Intel are just continuing on the way to eventually being just another Power- or SPARC-class vendor,

                                  The problem is this: should Intel come up with a really revolutionary design, a true "x86-killer", the net present value of their existing x86 IP and x86 product portfolio would rapidly drop towards zero.

                                  The designers of the "x86-zombie-killer" would be "Destroying Shareholder Value" - and most of that IP is probably mortgaged, and those bonds leveraged 50:1, so they might even be flirting with bankruptcy. By doing better!!

                                  That kind of change is not the kind of initiative that comes "from the top"; it can only happen when some rogue tech team manages to design the new technology "in stealth mode" and pushes it out of the door *before* the accountants and the board can sabotage it!

                                  "The top" likes to talk a lot about such new revolutions, to placate investors and look good and "up with the trend" reality is: They want the existing product range, but slightly better, anything *competing* with the viability of the existing products will be sabotaged or killed directly.

                        3. cortland

                          Re: "...[Acorn] imploded..."

                          "Discrete" logic, not "discreet" " -- but I won't tell. Heh!

                        4. Torben Mogensen

                          Re: "...[Acorn] imploded..."

                          AC wrote: 'Code density is a good benchmark of the "goodness" of an ISA that doesn't basically boil down to "it's good because I like it, that makes it good".'

                          Code density is only one dimension of "goodness", and it is one of the hardest to measure. If you measure compiled code, the density depends as much on the compiler (and optimisation flags) as it does on the processor; if you measure hand-written code, it depends a lot on whether the code was written for compactness or speed, and on how much effort the programmer put in. So you should expect 10-20% error on such benchmarks. Also, for very large programs, the difference in code density is provably negligible: you can write an emulator for the more compact code in constant space, and the larger the code is, the smaller the proportion of the total size taken up by the emulator. This is basically what byte code formats (such as the JVM's) are for.
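
                          The emulator argument in miniature (a toy sketch, not any real byte code format - opcodes and encoding are invented for illustration): the interpreter below stays the same few dozen lines however long the program array grows, so for large programs the density of the byte code dominates and the interpreter's own size amortises away.

                          #include <stdio.h>

                          /* A toy stack machine: 1-byte opcodes, PUSH takes a
                             1-byte operand. The interpreter is constant-sized
                             no matter how big the program gets. */
                          enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

                          static void run(const unsigned char *code)
                          {
                              long stack[64];
                              int sp = 0;
                              for (;;) {
                                  switch (*code++) {
                                  case OP_PUSH:  stack[sp++] = *code++;          break;
                                  case OP_ADD:   sp--; stack[sp-1] += stack[sp]; break;
                                  case OP_MUL:   sp--; stack[sp-1] *= stack[sp]; break;
                                  case OP_PRINT: printf("%ld\n", stack[sp-1]);   break;
                                  case OP_HALT:  return;
                                  }
                              }
                          }

                          int main(void)
                          {
                              /* (2 + 3) * 7 in ten bytes of compact code. */
                              const unsigned char prog[] = {
                                  OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                                  OP_PUSH, 7, OP_MUL, OP_PRINT, OP_HALT
                              };
                              run(prog);   /* prints 35 */
                              return 0;
                          }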

                          I agree that the original ARM ISA is not "optimal" when it comes to code density, but it was in the same ballpark as 80386 (using 32-bit code). The main reason ARM made an effort to further reduce code size and Intel did not was because ARM targeted small embedded systems and Intel targeted PCs and servers, where code density is not so important. Also, Thumb was designed for use on systems where the data bus was 8 or 16-bits wide, so having to read only 16 bits per instruction sped up code execution. The original ARM was not designed for code density, but for simplicity and speed.

                  2. Anonymous Coward
                    Anonymous Coward

                    Re: "...[Acorn] imploded..."

                    "ARM had to license patents from Hitachi to come up with thumb."

                    Citation welcome.
