Meet ARM1, grandfather of today's mobe, tablet CPUs – watch it crunch code live in a browser

Chip geeks have produced an interactive blueprint of the ARM1 – the granddaddy of the processor cores powering billions of gadgets today, from Apple iPhones to Raspberry Pis, cameras, routers and Android tablets. The peeps behind the fascinating blog visual6502.org normally reverse-engineer chips by pulling the silicon out of …


Variable record format

Hehe... Blast from the past - full OS-level record management in file I/O. The app developer had no clue what was going on behind the scenes; VMS managed it all for them, including, by default, revisioning the file on each open for write. So if it decided to do the actual writes as variable size, the app developer would have had no clue of that - it would have looked like ordinary record retrieval to the application.

The end result was the most insane open() syntax known to man. My recollection is that it took 5+ lines of optional args to open a file in VMS Pascal.


Re: Variable record format

However - as VMS had the record management - almost any program could read almost any file: the record attributes were stored in the file header, and an open() call without optional parameters used the attributes in the file header to read the file correctly. (None of the mess that is in Windows, where some text files open correctly in Notepad and others need Wordpad.) From (very old) memory: ordinary variable-length record text files needed no optional parameters; fixed-length record files needed two parameters (type = fixed length, and the length of each record). It could however get messy if you were creating indexed files (but a sequential read of an indexed file could be performed by almost any program).

The really bad case was reading a foreign (not created by VMS) binary file, where everything had to be specified, as the OS did not have valid data in the file header.
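For the record, a minimal sketch of what this looked like from C rather than Pascal, using DEC C's RMS keyword arguments to fopen() (a VMS-only extension; the filenames here are made up and the exact keywords are from memory, so treat them as an assumption):

    #include <stdio.h>

    int main(void)
    {
        /* Reading: no record attributes needed - RMS picks up the
           record format from the file header, as described above. */
        FILE *in = fopen("notes.txt", "r");

        /* Creating a fixed-length record file: extra RMS keyword
           arguments give the record format (rfm), maximum record
           size (mrs) and record attributes (rat). */
        FILE *out = fopen("payroll.dat", "w", "rfm=fix", "mrs=80", "rat=cr");

        if (in)  fclose(in);
        if (out) fclose(out);
        return 0;
    }

The VMS Pascal OPEN spread those same attributes over the infamous five-plus lines of optional arguments.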


Re: Variable record format

Yes, VMS files came in the proverbial 57 varieties. This was all well documented, but few people ever consulted the manuals; many programmers got confused and made mistakes.

It was as confusing as the old George 3 file varieties: graphic mode (for all-capitals text), normal mode (quite rare, upper and lower case), and allchars (normal plus control characters).


Re: Variable record format

George 3... sheesh, you just reminded me how old I'm getting. As a lowly student my files were mostly stored in 'on the shelf' format - as piles of punch cards or tape.


25,000 *transistors* - not gates!


>Eventually, about 18 months later, they produced the ARM1 – a tiny, efficient CPU fabricated by VLSI Technology with fewer than 25,000 gates using a 3,000nm (3μm) process. Today, a quad-core Intel Skylake Core i7 processor, with builtin GPU, has 1,350,000,000 gates using a 14nm process.

Why not compare it to the Exynos 8890, Snapdragon 820, Kirin 950, or Mediatek Helio X20 instead of an x86 flagship chip?

(Written by Reg staff)

Re: PleebSmasher

Don't let me stop you -- off you go, then.

C.


Re: PleebSmasher

I was thinking the latest Tegra or an X-Gene.

How do we edit the article then, C?

(Written by Reg staff)

Re: anonymous

Just post here.

C.

Anonymous Coward

Run out of cache

And while we're at it, if we're comparing [just] processors, why not deduct the huge number of simple (and simply interconnected) transistors that make up the on-chip caches?


Personally I think it should have been compared to other desktop CPUs of the day, then a bit of data about ARM8 today, comparing that to the surviving rival CPU (x86) in both i7 desktop and Atom mobile versions.

Making El Reg a wiki is definitely the future.


If you wanted to learn about computing from the ground up

Then if there is a gate/block-level version of this available, you have everything you need to cover everything from simple logic gates on silicon all the way up to virtualised machines.

I have spent some time trying to gather Z80 material to do this, but Zilog no longer have the original circuits etc. But with this and GCC etc, you have it all.


Re: If you wanted to learn about computing from the ground up

Awww, it would have been great to see the Z80, as I was using Z80 machines.

Anonymous Coward

Layout Vs Schematic

"Close up ... the semiconductor gate schematics for the ARM1"

That looks like a layout (physical), not schematics (logical netlist).

Probably from a CIF file, rather than GDSII.

(Written by Reg staff)

Re: Layout Vs Schematic

~~ My chip articles bring the pedants to the yard. And they're like, our knowledge is better than yours. We can teach you but we have to charge. ~~

It's fixed, ta. Once upon a time I used VLSI design software to lay out gates and doping regions and cells and metallization layers and, arrgh, I thought I'd erased all that from my mind.

C.

Anonymous Coward

Re: Layout Vs Schematic

ASIC backend implementation is still an interesting, noble (and well paid) profession. ;-)


Re: Layout Vs Schematic

Yeh you just have to tell them that every now and then ... and keep the blinds closed


memory corruption

My first ever job -- in 1981 -- was writing VLSI design software on VAX/VMS. CIF files ring a distant bell, but I can remember nothing more.


As I recall they thought it was broken because it produced such a small amount of heat. Something along those lines.

Anonymous Coward

I have a vague and hazy recollection that it produced such a small amount of heat because it was broken... but a tiny current leaking from somewhere unexpected allowed it to function correctly anyway...

?


There was a break in the supply, but as long as at least one IO line was high it was powered by the reverse diode on that line. Took them a while to discover why it would occasionally crash :)


That rang a bell too - found it right here:

"Deeply puzzling, though, was the reading on the multimeter connected in series with the power supply. The needle was at zero: the processor seemed to be consuming no power whatsoever.

As Wilson tells it: 'The development board plugged the chip into had a fault: there was no current being sent down the power supply lines at all. The processor was actually running on leakage from the logic circuits.'"


"...[Acorn] imploded..."

That's rich! Mind you, I suppose it's immaterial how Acorn was asset-raped this far down the line. Boland and his cronies took their pounds of flesh, and ARM managed a success that has annoyed certain competitors ever since!


"...[Acorn] imploded..."

The history of Acorn and ICL presents two wonderful examples of politicians being utterly clueless at IT policy. Gordon Brown flogging off the gold reserves was trivial in comparison. ARM today is barely a medium-size company, but its potential was to be the next Intel.

Anonymous Coward

Re: "...[Acorn] imploded..."

ARM today is barely a medium-size company, but its potential was to be Intel.

FTFY

O:-)


Re: "...[Acorn] imploded..."

If one looked purely at each instruction set and where computing has recently gone, nobody in their right mind would pick Intel, with their POS x86, to be the 800lb gorilla. ARM has kept the core of their (imho superior) instruction set intact and grown a successful business with it for a few decades. Intel has tried repeatedly to kill their abomination but the market won't let them (a market that has rewarded them very handsomely until lately). Guess it's been a good thing Intel has been a generation ahead of everyone else in fab technology (the real reason for Intel's success), which is how they made it work in most spaces historically. Sadly, now that chips are fast enough and becoming a commodity, that overhead is killing Intel and making ARM look real pretty indeed. ARM lets the companies that know how to do high-volume, low-margin manufacturing do the heavy lifting, and then they get their (rather fair) cut.


Re: "...[Acorn] imploded..."

Also, I am aware that the x86 IS has been emulated in hardware since the mid 1990s, but even with emulation Intel, with their state-of-the-art fabbing, has been unable until very recently to compete with ARM on mobile with x86. Also beware: it looks like one of El Reg's adverts (the only page I had open at the time) decided to try and serve up malware to me, due to my not remembering to run through Privoxy (and NoScript on full blast) like I regularly do. Of course, with me running Tails in a VM straight off an ISO file with no persistent storage, and it flashing up obviously fake Firefox out-of-date warnings trying to run MalwarePretendingToUpdateFirefox.exe, it wasn't going to get far. I just reset the VM as opposed to trying to debug it, but I wasn't real happy to see it.

Anonymous Coward

Re: "...[Acorn] imploded..."

I'm not actually sure why people care so much about the instruction set.

When everyone had to code in assembly it mattered.. now that decent quality C compilers are available for free, the 10 or so lines of assembly most programmers (even the low-level embedded guys) have to come up with every other year mean the underlying assembly language matters very little.

Since humans don't care about the prettiness of the assembly language anymore, surely code density etc should matter much more..

You say the only reason Intel are "winning" is that they have the latest fab technology. Well, the only reason ARM cores ship as many units as they do is that they can be produced on old production lines.

Don't get me wrong. I like ARM, but not because I'm in love with their instruction set. They are one of the few companies that make information on things like how debugging works over JTAG available, so there is a decent ecosystem of free tools to work with ARM cores. On the other hand, I'm not going to poo-poo Intel based on some childish dislike of their instruction set. If there weren't significantly faster Intel machines out there, developing for ARM machines would be many, many times less productive.

Anonymous Coward

Re: "...[Acorn] imploded..."

Because, AC, the instruction set is what your ultra-high-level WYSIWYG code gets compiled into, so:

1) Understanding the IS helps a competent programmer write efficient code

2) No matter how good your code and compiler are, if the target is a heinous kludge like x86, your binaries will flop out bigger, slower and kludgier than if they'd been made for a more elegant architecture
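A concrete (hypothetical, compiler-dependent) illustration of point 1: on ARM the barrel shifter lets the compiler fold a shift into the ALU operation, so index arithmetic written this way typically costs a single instruction:

    #include <stdint.h>

    /* An ARM compiler can usually emit this as one instruction,
       "add r0, r0, r1, lsl #2", because any data-processing op can
       shift its second operand for free via the barrel shifter;
       x86 typically uses an LEA or a shift followed by an add. */
    uint32_t scaled_add(uint32_t base, uint32_t index)
    {
        return base + (index << 2);
    }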

Anonymous Coward

Re: "...[Acorn] imploded..."

x86 has much better code density than ARM. ARM had to license patents from Hitachi to come up with Thumb.

Anonymous Coward

Re: "...[Acorn] imploded..."

'x86 has much better code density than ARM.'

No

Anonymous Coward

Re: "...[Acorn] imploded..."

>No

My own real-world tests show that ARM binaries are 10-15% bigger than x86 ones.

You have to mix ARM and Thumb in the same binary to get decent code density and performance.. so you move the nasty kludges from the instruction decoding in the CPU into the binaries.

And here I was thinking the ARM instruction set is some gift from $deity that is perfect in every way.
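Code density is at least easy to measure for yourself. A minimal sketch, assuming arm-none-eabi-gcc and a native gcc are installed (the checksum function is just an arbitrary workload, not anything from this thread):

    /* density.c - compile for each target and compare .text sizes:
     *
     *   gcc -Os -c density.c -o x86.o
     *   arm-none-eabi-gcc -Os -marm   -c density.c -o arm.o
     *   arm-none-eabi-gcc -Os -mthumb -c density.c -o thumb.o
     *   size x86.o arm.o thumb.o
     *
     * The "text" column gives the code size for each instruction set. */
    #include <stddef.h>
    #include <stdint.h>

    uint32_t checksum(const uint8_t *buf, size_t len)
    {
        uint32_t sum = 0;
        while (len--)
            sum = (sum << 1) + *buf++;
        return sum;
    }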

Anonymous Coward

Re: "...[Acorn] imploded..."

>ARM has kept the core of their (imho superior) instruction set intact

and this is simply not true. There are differences between the different ARM instruction sets, e.g. some versions can handle unaligned accesses for some instructions that others can't.

Then you have the fact that what people call "ARM" is a combination of the base instruction set, optional extensions and different FPU configurations. Until the Cortex-A stuff happened and the FPU situation became a little bit saner, you basically had to avoid using the FPU if you wanted your binaries to work on more than one ARM machine.

Anonymous Coward

Re: "...[Acorn] imploded..."

"'x86 has much better code density than ARM.'

No"

Thank you for not clarifying that at all. Not even linking to other people's works that might have clarified that.

Anonymous Coward

Re: "...[Acorn] imploded..."

"ARM had to license patents from Hitachi to come up with thumb."

Citation welcome.

Roo

Re: "...[Acorn] imploded..."

As of 1992, on a particular set of benchmarks we ran, the smallest code generated was for the INMOS T800; the x86 binaries were 30% bigger. The INMOS chips used a cute instruction encoding that made a lot of common ops single-byte instructions - it was handy when you were trying to cram everything into the 1-4KB of on-board single-cycle memory. ;)

Anonymous Coward

Re: "...[Acorn] imploded..."

>Citation welcome

https://lwn.net/Articles/647636/

"The SuperH architecture is so dense that a 2009 research paper [PDF] plotted it ahead of every architecture other than x86, x86_64, and CRIS v32. ARM even licensed the SuperH patent portfolio to create its Thumb instruction set in the mid-1990s."

The claim apparently comes from one of the guys that designed the SH2 and holds/held the patents, which relate to instruction length and code density.


Re: "...[Acorn] imploded..."

>'x86 has much better code density than ARM.'

Whether that is true or not is only somewhat relevant. Even if the code density is higher, if it takes a lot more chip real estate to implement the instruction set, and it can only be implemented somewhat efficiently, it's still going to use more energy and run hotter, which is exactly what you want to avoid for mobile (and in the datacenter as well). From what I understand, x86 is such a dog for mobile that even emulating puts Intel at such a disadvantage it took them a herculean effort to finally even compete with ARM (and still not in ultra-low power, last I heard). What they are finding, though, is that competing with ARM is not like competing with AMD. The payoffs are not the type of margins Intel is used to.

Anonymous Coward

Re: "...[Acorn] imploded..."

>Whether that is true or not is only somewhat relevant.

>Even if the code density is higher

Code density is a good benchmark of the "goodness" of an ISA that doesn't basically boil down to "it's good because I like it, that makes it good". Code density is such a big problem ARM have an alternative instruction set in their chips to make up for the main one.

>if it takes a lot more chip real estate

>to implement the instruction set

And that matters to end users because? The number of transistors Intel have to squeeze onto a chip does not keep me awake at night. There are lots and lots of products out in the real world that use ARM Cortex M? parts to implement stuff that could have been done with discreet logic or a 555 timer instead. Baby Jesus doesn't weep when a transistor is wasted.

>and it can only be implemented somewhat efficiently its still going to use more energy

>and run hotter which is exactly what you want to avoid for mobile

But not every machine in the world is mobile. Imagine developing for mobile/embedded platforms if you didn't have a hideous x86 box doing the grunt work of compiling all the tools and code for the target. The only reason mobile works is because there are smelly x86 boxes on the desk and in the cloud doing the grunt work.

>From what I understand x86 is such a dog for mobile even emulating puts Intel

So you don't actually know. You read this "fact" somewhere and use it in your little rants against x86 without really knowing what you are talking about.

Intel's desktop x86 chips kick even ARM's latest stuff in the balls. Decent performance is not a disadvantage for mobile. If Intel could get an i7 class chip into the energy budget for a phone they would have a winner on their hands.

The problem for Intel apparently is that they can't retain the performance without breaking the energy budget (they seem to be making some progress though..). It's like performance increases complexity which in turn increases the required energy! Who would have thunk it!

The emulation point brings nothing to the table. Intel need an ARM emulator because of ARM's hold on the market. Emulation is processor intensive. ARM's ISA is no better at it.

>The payoff are not the type of margins Intel is used to.

ARM is a fabless semiconductor company that licenses designs with good energy performance and acceptable execution performance at low, low prices, and chips based on their designs can be produced on fabs that are a lot cheaper than what Intel is using for their top-of-the-range lines. I'm not sure how Intel thought they'd have a chance at breaking into ARM's core business and make any money in the process. I'm sure they have seen shrinking shipments of their high-performance lines and thought they needed to make a move. Intel's attempt to get back into the microcontroller market with their Quark stuff is equally laughable.

But either way Intel's bad business decisions doesn't make x86 "bad".

Anonymous Coward

Re: "...[Acorn] imploded..."

" Imagine developing for mobile/embedded platforms if you didn't have a hideous x86 box doing the grunt work of compiling all the tools and code for the target?"

If there was any doubt where you were coming from, it's clear now. And it's not a good place.

Lots of people don't have to *imagine* not using "hideous x86 box grunt work of compiling all the tools and code for the target". Lots of people have done it, yea even unto the days of PDP11s. There are still people using stuff other than x86 too, but the typical IT department's dependence on x86 means there aren't as many cross-tool setups on Unix, VMS, etc, as there used to be.

If x86 is so brilliant in general, why is it near invisible outside the IT department?

Anonymous Coward

Re: "...[Acorn] imploded..."

>If there was any doubt where you were coming from, it's clear now. And it's not a good place.

Please don't forget to mention where that place actually is.

>Lots of people don't have to *imagine* not using "hideous x86 box grunt

>work of compiling all the tools and code for the target".

Because they don't do work in a field that requires them to do lots of compiling, data processing etc.

But they'll consume content that has been processed by machines many times more powerful than their "mobile" device multiple times a day.

>There are still people using stuff other than x86 too, but the

On the desktop? Are there any desktop machines shipping in volume that aren't x86? The only ones I can think of are chromebooks and they aren't exactly winning any ass kicking competitions.

>typical IT department's dependence on

IT departments - the be-all and end-all of people who think that their job fixing printers is "working in high technology"

>Unix, VMS, etc, as there used to be.

Unix doesn't run on x86? You better tell that to all the Unix vendors that ported their breed of Unix to x86 as soon as they realised fast commodity priced x86 hardware was going to ruin their RISC party.

>If x86 is so brilliant in general, why is it near invisible outside the IT department?

Who said it's so brilliant? All I'm saying is it's not the ISIS of instruction sets and it's not like ARM is some amazing super technology sent from heaven to save us all. It's horses for courses.

If you want your desktop machine to be limited to performance levels of 5 years ago and only able to access a quarter of the RAM that my Core i7 setup does, knock yourself out.. but I'll be keeping my commodity machine with 32GB of RAM, kthnxbye.

And not exactly invisible outside of the IT department unless your job fixing printers involves printers attached to machines that consume multiple rooms:

https://en.wikipedia.org/wiki/Supercomputer#/media/File:Processor_families_in_TOP500_supercomputers.svg

Anonymous Coward

Re: "...[Acorn] imploded..."

"The SuperH architecture is so dense that a 2009 research paper [PDF] plotted it ahead of every architecture other than x86, x86_64, and CRIS v32"

Thanks for the lwn link.

According to the graphs in the referenced PDF, there's not much density difference between SH3 and its close competitors. The generic features which contribute to code density are covered in reasonable depth but reasons for *SH3 specifically* being a winner are barely touched on, which is a shame. The expiration of patents seems to be a major reason for looking again at SH3 to create an open source processor design. All that being said, a comment on the LWN article says:

"There has been further code density work since that 2009 paper, and SH3 is now beaten by a few others including THUMB and THUMB2.

http://www.deater.net/weave/vmwprod/asm/ll/ll.html

That's partly because I've spent more time on ARM platforms lately; I haven't had a reason to go back and re-optimize SH3."

ARM has sufficient advantages that for many (most?) purposes it wins volume applications without much debate. Not all of them, but lots. Code density is just one of many factors to be looked at when choosing a chip.

Anonymous Coward

Re: "...[Acorn] imploded..."

>ARM has sufficient advantages that for many (most?) purposes it

>wins volume applications without much debate.

If you need millions of chips on the cheap, you need something you can produce with high yields for the lowest cost possible, and ARM fits that market. Some of the other posters' arguments are like comparing the top-of-the-range Intel product (an F1 car) with the top-of-the-range ARM design (a relatively fast commodity car), but concluding the ARM design is the F1 car because of some preconceived notions about ARM/RISC designs being "betterer".

>Code density is just one of many factors to be

>looked at when choosing a chip.

Exactly. If I have a power budget of a few nanoamps in standby and a processing requirement of blinking an LED once a second or so fully active, then having something more complex than an 8-bit microcontroller would be insane. But does the fact that the 8-bit controller does that job with less power than a full computer could make the full computer the ISIS of the instruction stream processing world? Nope. Code density, performance per watt etc are things we can actually look at and compare, and probably notice that increasing one metric causes another desirable metric to suffer. "I liked that it was basic enough that even I could code for it" isn't something we can work with to objectively decide.


Re: "...[Acorn] imploded..."

Leaving a lot out to keep this short.

>The number of transistors Intel have to squeeze onto a chip does not keep me awake a night

No but it does very much affect the performance/energy trade off you allude to later.

>But not every machine in the world is mobile.

> I'm not sure how Intel thought they'd have a chance at breaking into ARM's core business and make any money in the process.

Mobile is the only segment still with decent growth which is why Intel is panicking. They are having their Kodak moment.

>But either way Intel's bad business decisions doesn't make x86 "bad".

Like I said, nobody hates x86 more than Intel does, which is why they have tried repeatedly to kill it. It really has held them back in many ways even if it buttered their bread for decades. x86 is a prime example of how it's not always the best product that wins the market (Motorola's ISA was so much better in the early days) but the one in the right place at the right time and, most importantly, at the right price.


Re: "...[Acorn] imploded..."

Just to add.

>x86 is a prime example of how its not always the best product winning the market

Few product lines have ever had a stronger network effect, which is why it won the day and carried Intel to be one of the 30 biggest companies in the world, but ironically it may end up dragging Intel down to its doom as well.


Re: "...[Acorn] imploded..."

"Discrete" logic, not "discreet" " -- but I won't tell. Heh!


Re: "...[Acorn] imploded..."

Asdf wrote: " Intel has tried repeatedly to kill their abomination but the market won't let them"

That is mainly because the processors that Intel designed to replace the x86 were utter crap. Most people vaguely remember the Itanium failure, but few these days recall the iAPX 432, which was supposed to replace the 8080. Due to delays, Intel decided to make a "stop-gap" solution called 8086 for use until the 432 was ready. While Intel managed to make functional 432 processors, they ran extremely slow, partly because of an object-oriented data model and partly due to bit-level alignment of data access. Parts of the memory-protection hardware made it into the 80286 and later x86 designs, but the rest was scrapped. Itanium did not do much better, so Intel had to copy AMD's 64-bit x86 design, which must have been a blow to their pride.

If Intel had designed a simple 32-bit processor back in the early 1980s, ARM probably would never have existed. Acorn designed the ARM not because they wanted to compete with x86 and other processors, but because they were not satisfied with the then-current commercial selection of 16/32-bit microprocessors (mainly Intel 8086, Motorola 68000, Zilog Z8000 and National 32016). If there had been a good and cheap 16/32-bit design commercially available, Acorn would have picked that.


Re: "...[Acorn] imploded..."

AC wrote: 'Code density is a good benchmark of the "goodness" of an ISA that doesn't basically boil down to "it's good because I like it, that makes it good".'

Code density is only one dimension of "goodness", and it is one of the hardest to measure. If you measure compiled code, the density depends as much on the compiler (and optimisation flags) as it does on the processor, and if you measure hand-written code, it depends a lot on whether the code was written for compactness or speed and how much effort the programmer put into this. So you should expect 10-20% error on such benchmarks. Also, for very large programs, the difference in code density is provably negligible: You can write an emulator for the more compact code in constant space, and the larger the code is, the smaller a proportion of the code size is taken by the emulator. This is basically what byte code formats (such as JVM) are for.
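In symbols (my formalisation of that argument, not anything from the thread): if the denser encoding expresses a program in $S$ bytes, and a constant-size interpreter of $E$ bytes lets the other architecture execute that same encoding, then the less dense architecture needs at most

    S + E, \qquad \frac{S + E}{S} = 1 + \frac{E}{S} \longrightarrow 1 \quad (S \to \infty)

so the relative density penalty vanishes as the program grows.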

I agree that the original ARM ISA is not "optimal" when it comes to code density, but it was in the same ballpark as the 80386 (using 32-bit code). The main reason ARM made an effort to further reduce code size and Intel did not was that ARM targeted small embedded systems and Intel targeted PCs and servers, where code density is not so important. Also, Thumb was designed for use on systems where the data bus was 8 or 16 bits wide, so having to read only 16 bits per instruction sped up code execution. The original ARM was not designed for code density, but for simplicity and speed.
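To make the 16-bit point concrete, here is a sketch comparing the machine encodings of the same register add in each state (encodings hand-assembled from the architecture manual, so double-check them before relying on them):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* The same operation, r0 = r0 + r1, in each instruction set: */
        uint32_t arm_add   = 0xE0800001; /* ADD  r0, r0, r1 - 32-bit ARM encoding */
        uint16_t thumb_add = 0x1840;     /* ADDS r0, r0, r1 - 16-bit Thumb encoding */

        printf("ARM: %zu bytes, Thumb: %zu bytes\n",
               sizeof arm_add, sizeof thumb_add);
        return 0;
    }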

Anonymous Coward

Re: "...[Acorn] imploded..."

"nobody hates x86 more than Intel does"

Citation welcome. The story isn't quite as simple as that.

In the mid/late 1990s patent wars between Intel and DEC, Intel could have ended up with ownership of the Alpha architecture if they'd wanted to, or at least as an Alpha licencee (like Samsung were). As owners of Alpha they could also have had one implementation that was almost an SoC before most industry folk knew SoCs existed (the 21066). In addition Intel could also have ended up with ownership of DEC's StrongARM designs and designers (which they did) and carried on with them (which they didn't, not in any serious way).

Intel HQ chose to carry on their own sweet way with IA64 ("because 64bit x86 is impossible and IA64 is the answer"). Sadly high end DEC systems (by then Compaq systems) were among those drinking the IA64 KoolAid, and the Alpha fell by the wayside, despite very prescient stuff like this slightly-techy 1999 whitepaper from DEC's Alpha people explaining why IA64 would fail:

http://www.cs.trinity.edu/~mlewis/CSCI3294-F01/Papers/alpha_ia64.pdf

Then when AMD showed that x86-64 was not only possible but practical and popular, Intel HQ finally realised that "industry standard 64-bit" meant AMD64 not IA64 (and not Alpha or MIPS or Power or SPARC). But the IA64 lived on alongside x86-64 for a while, even though everyone with a clue knew IA64 was going nowhere.

Alongside all that, Intel HQ chose not to retain and enhance the StrongARM designs (and people) they did end up with in 1997; they chose to sell them off to Marvell and carry on down the x86 road.

If those are signs of hating x86, you could have fooled me.

Btw, much of this "x86 vs the rest" stuff could be, and was, written ten years or so ago. Ten years after, Intel still haven't got with the SoC programme (there's more to this than "mobile", as in mobile phones/tablets/etc).

E.g.

http://www.theregister.co.uk/2006/06/27/intel_sells_xscale/

"Intel is to flog off its XScale [nee StrongARM] processor operation, the chip giant said today. The move paves the way for it to push low-power x86 CPUs at mobile phone and PDA makers. The buyer is comms chip company Marvell Technology Group, which is paying $600m cash for the product line and taking on "certain liabilities"."

and (the following day)

http://www.theregister.co.uk/2006/06/28/intel_mobile_failure/

"Intel's name looks forever to be associated with the PC, now that it's ended a nine year dalliance with the phone business. The firesale of its 1,400 strong XScale processor division, and the write down of its cellular investments, means that Intel has passed up the chance to play in the largest volume chip market of them all. There are 2bn mobile phones in the world, and in many emerging markets the phone is the only computing device likely to achieve ubiquity."

Intel: the x86 company, now and always.


Re: "...[Acorn] imploded..."

When everyone had to code in assembly it mattered.. now that decent quality C compilers are available for free ....

1) Never done a Board Support Package before, have we, laddie?

2) The people doing the GCC versions (some may argue about quality here) are most certainly writing the assembly that the compiler toolchain will splurge from the C source; simple instruction sets make their job easier, which makes your C code work better.

