Something will roll...
I'd LOVE to say, "Heads will roll!!!", but the reality is, "Eyes will roll."
Arm has taken offline its website attacking rival processor architecture RISC-V within days of it going live – after its own staff objected to the underhand tactic. The site – riscv-basics.com – was created at the end of June, and attempted to smear open-source RISC-V, listing five reasons why Arm cores are a better choice …
Fat Freddy's Cat says: "Remember the Wayback Machine before you publicly shit on somebody."
That's unnecessary :-)
RISC-V and Arm both run software written in C, C++, Go, Rust, Python.
That's excellent. For the languages coming after the postincrement operator.
Get that 20th-century shit out of the design space ... or suffer the pain of forcing yourself to try to build a processor that pretends it's from the 70s, only faster.
(you can keep it if you run it in a language VM, but seriously why bother?)
ARM was originally designed with a view to fast interpreted code execution (like the 6502) and assembler for the OS.
There used to be lively debate at Acorn about coding an OS in anything other than assembler (I say lively, but in reality they would have just laughed at you), but since these days it is okay to lob crap out the door and pretend it is finished, C is a fitting language for the times.
Certainly whilst people continue to buy it
Both you and the website you have linked display a distinct lack of knowledge:
Processors do not run C code. Processors do not even run assembly. They run CPU instructions. When you compile a C program, it is turned into assembly, which is translated into instructions. When you run your "hello world" Java program, it is turned into corresponding assembly just in time to become instructions to be run. When you compile a Rust program, it is turned into assembly, which is translated into instructions.
The way CPU instructions (and therefore assembly) works is long and complicated, but it's essentially short computation instructions. Move address A to C. Divide address A by B, sending results to A. Grab a new free address, so we're not limited to addresses that were compiled in. Read from an output of another program that controls the keyboard, sending this number that represents a letter to that dynamically decided address. Simple stuff.
If you've ever read both, you'll notice C is quite close to the assembly it becomes. In fact, it suffers from many of the same problems, precisely because it doesn't try to abstract too much from the assembly it will ultimately be. When another language works around this, it does so automatically at compile time. This is abstraction, because you don't control it. Ultimately, it makes no difference, and the resulting assembly has no distinguishing features that point to one language or another.
Processor makers do not design for C, because they have no way to figure out whether the instructions they get came from C code; they could be from literally anything else. Because that's not how compiling works. All those processor features claimed as workarounds to improve running C? They improved running code of literally every language on that processor as well. Because everything is instructions. The problem here is how instructions for a programmable calculator work, which ultimately decides how assembly is designed. Not C.
Additionally, I absolutely dare you to make a CPU incompatible with C. If you do, you will also find everything else is.
Sadly you're not exactly correct there.
In the real world people who buy processors look at their performance on standard benchmarks such as SPEC or (horrors!) Dhrystone or CoreMark. These benchmarks are written in C. People who design processors therefore design them to run C as quickly as possible.
Certainly, the processor has no idea whether the instructions it is running originally came from C or Haskell, but if you were designing a processor mainly to run Haskell or Lisp or Prolog programs then the instructions might well look a bit different.
Companies tried building specialised processors for those languages (and others) in the 1980s, but the rate of improvement in conventional processors designed for C (or Pascal ... they're essentially identical at this level) was so fast that any improvement in efficiency was lost in lack of pure MHz.
Now that MHz has been stalled for a while, we may well start to see specialised processors -- or at least specialised instructions added to general purpose processors -- making a comeback.
First issue -- Any C compatible CPU needs to have Byte addressing. This doubles the size of pointers unnecessarily.
Java can actually access 32 Gig of RAM with just 32 bit pointers. Because objects are not byte aligned, and it knows that. Huge saving in memory. And C's pointers into anything kills garbage collection anyway.
A 32 bit Non-C CPU would contain enough address space for virtually any application today, or for the next several decades. Notice how memory requirements have stabilized at about 4 gig for a basic PC? The doubling of the transistor count is purely for C.
If you are going to have huge pointers, then adding tag bits can hugely optimize dynamic type checking. But C does not do that, so they are not available, even though 48 bit address spaces are larger than anyone will use.
The second issue is that it is impossible to implement modern, efficient garbage collection in C. A third is that C does not detect integer overflow, which should be standard.
"Real programmers with Real Work to do use FORTRAN"
Some real engineers* still use FORTRAN too. I have to use a "FORTRAN-like" language for some thermal modelling every now and then, but I'd far rather be using Python.
*This does not include people who clean things and have a misleading job title. For some reason, we had an e-mail the other day around our (Aerospace) company telling us that an engineer would be coming in to clean the water coolers...
Saying that, I suppose I can handle a bottle of disinfectant easily as well as the next sentient being. Maybe that's my next big project? I helped make a crater on the surface of Mars. How hard can cleaning the water cooler be? It's not as if I'm designing the "B" ark,
"Some real engineers* still use FORTRAN too. I have to use a "FORTRAN-like" language for some thermal modelling every now and then, but I'd far rather be using Python."
Everyone uses Python - but I hoped El Reg readers would recognise the reference.
Aha - Ada ... part of the cunning plan from the US government to subvert the European computer industry! The US says everything must use Ada or VHDL unless there's a genuine reason to use something else. All of Europe falls for this and adopts Ada and VHDL. Meanwhile, back in the US, everyone sticks with C and Verilog, with the "genuine reason" of "we want to get things done".
"A lot of real programmers were taught on PASCAL though."
Also a lot of real programmers of a certain age still remember having to code when memory was measured in kilobytes.
As for Pascal, I once managed to take the then *massive* RM TXED program (an almost WYSIWYG text editor weighing in at a seemingly incomprehensible 6-8kB) and strip it down to a usable subset to fit in the 2kB slot allocated for the program editor in, I think, TRANSAM Pascal ... try telling the youth of today you can fit an editor into 2kB and they won't believe you.
Well, yes, but isn't having been taught in Pascal only marginally better than having been taught in C++? Surely being taught in decent languages would produce more competent programmers. Perhaps starting from Dijkstra's book "A Discipline of Programming" and then learning to use his Guarded Command Language along with Hoare Logic (and maybe some Haskell too) would enable certain universities to turn out decent programmers, instead of awarding first class honours to people who specialise in C++ or Pascal but couldn't even write a working "Hello World" in any language.
"Some real engineers* still use FORTRAN too. I have to use a "FORTRAN-like" language for some thermal modelling every now and then, but I'd far rather be using Python."
Congrats, A/C, on replying to dismiss the assertion that "Real programmers with Real Work to do use FORTRAN" - it really wasn't necessary, it's an old joke.
Clearly being a Rocket (or at least Aerospace) Engineer isn't necessary for some tasks after all, that actually makes me feel better about my job.
C does not require pointers to 8-bit values. It requires sizeof(char) == 1 and that char has at least 8 bits. TMS320C40 has pointers to 16-bit values. I programmed it just fine in C. You could have pointers to 32-bit values and C would still work just fine. Programs written in C and many other languages assume pointers point at 8 bits. That causes those programs to crash on exotic architectures where the assumption is not true. It is not a fault of the language. It is either a decision taken by the programmer to support only the most common hardware or (far more likely) the programmer had no idea that pointers could point at anything other than bytes.
If you create a new architecture where pointers point at things bigger than bytes, large amounts of software will not work on it without some programmer going through the source code and fixing every part that assumes pointers point at bytes. This will not just hurt programs written in C/C++. The software I wrote for TMS320C40 had a small quantity of ASCII strings. These wasted one byte per character because the extra code required to implement pointers to bytes would have been bigger than the potential saving. Build a bigger general purpose CPU with 32-bit chars and you may save on pointer size but now byte streams cost four times as much memory or a huge performance hit from emulating pointer to bytes in software (while bringing back either 64-bit pointers or 4GB address space.)
No, I did not notice memory requirements stabilizing at about 4GB. I typically work on the small side. My largest machine is 2GB, with most having considerably less. On this site you will find an unusually high proportion of people who would have problems being limited to a 32GB address space. Quadrupling the size of all byte streams would increase memory requirements for many users, not just the extremes who are over-represented here.
Garbage collection is a serious problem for me as it causes programs not to run in a deterministic amount of time. One of the great benefits of C is it does not inflict garbage collection on me unless I choose to use a library that provides garbage collected objects.
The OS kernel (written in C) could map blocks of memory to the same address to support dynamic type tags inside pointers. It would thrash the memory translation caches, but those could be increased in size at considerable expense of transistor count. C would have no serious problem extracting and comparing a type encoded into pointers. Your pointer type fields inside pointers could be implemented right now in software with existing hardware. Go off and implement it and we will see if your plan provides real benefits over storing dynamic type in the object.
It is entirely possible to implement modern, efficient garbage collection in C or any Turing-complete language. (Python's garbage collector is implemented in C.)
Modular arithmetic is an option selected by programmers (or selected for them by default). If you want overflow detection the option for gcc is -ftrapv.
Nothing you said makes any sense... Allow me to try and clear up some misunderstandings.
Byte addressing has absolutely nothing to do with the language you program in. There is no reason at all that a "C compatible CPU" (what is that anyway?) needs to have byte addressing in hardware, or that other languages don't need it. If a CPU has a 64bit data path, and retrieves 64 bits of data from RAM at a time, you can still easily address a single byte. You just need to select the right byte in the 64bits of data. Which can be done inside the CPU, or inside the compiler. x86 assembly for instance lets you access registers as bytes (eg. AH register) or larger (AX for 16 bits, EAX for 32 bits etc) even if it's a 64bit CPU.
Why would byte addressing double the size of pointers? It increases the size of a pointer by 1 bit... If you made a CPU that can only access 16 bits at a time instead of 8 bits, all you would need to do is remove one address line (A0) from the CPU, but that still has nothing to do with the size of a pointer. There are lots of reasons to access individual bytes, regardless of the programming language, so pointers would still need byte access. Even if the programming language does not have C-style pointers, you still need to be able to access information at byte level (stupid examples: ASCII text, serial ports, MPEG video, graphics, ...).
Java has no "pointers", so that statement does not make sense. The amount of memory Java can access is limited by the CPU it runs on, it has nothing to do with the language.
Alignment of objects has nothing to do with pointer size. At all. Totally unrelated. On a 64bit processor, objects are likely aligned on 64bit boundaries. That does not mean pointers only point to 64bit based addresses. If anything the pointer size would be related to the smallest variable size, not alignment. In C and most other languages, a byte is the smallest variable size, so pointers still need to point to individual bytes. Java for instance has a byte primitive, which (surprise surprise) is 1 byte big. So, Java works with byte addressing regardless of what you claim. Any system that can run Java needs to be able to address individual bytes. That can be done in software (in the jvm) or in hardware, that is irrelevant.
The (standard) Java JVM is written in C and C++, by the way. So if your processor can't run compiled C code, it can't run Java. Unless it has a baked-in JVM, but then it would only be able to run JVM-based languages and nothing else. Those do exist, but they are understandably not very popular.
Garbage collection is not part of the C standard, that is true. But lots of areas where C is (still) used don't actually want garbage collection, because there are lots of problems with it (slow, not deterministic, not safe, ...). In anything (hard) real time, garbage collection is unwanted. You will find that none of the so called low level languages (the ones operating systems and the like are written in, like C, C++, rust) have a garbage collector.
Claiming that supporting C doubles the transistor count is completely ludicrous. You jump from doubling pointer sizes (which is already wrong) to doubling transistor count, but there is absolutely no linear relation between them. Even if you mean doubling the amount of supported RAM that would only mean adding one address line which is only a tiny fraction of the total amount of transistors used to make a CPU.
Your statement that "48-bit address spaces are larger than anyone will use" is proven wrong by current Intel processors, which already support 52-bit physical addresses (but I bet you'll claim that's because they support C, right? Wrong! It's determined by the amount of RAM they can address).
Claiming that implementing garbage collection in C is impossible is again baseless. It is possible to write anything in C. Since it is a very low-level language that is close to the hardware, there is nothing you can't write in C. It might not be the most efficient way to do it in terms of effort, but it most certainly is possible. The Boehm-Demers-Weiser collector, which retrofits garbage collection onto C and C++ programs, is itself written in C.
I'm not sure where your thinking comes from, but it seems to me you have a lot to learn about CPU design. It's a very interesting field but I'm afraid you'll have to let go of a lot of misconceptions if you want to study it further... I hope I cleared some of them up for you.
Huge saving in memory.
Hah, that would be the day. Java is well known for first consuming all memory available to it and then consuming all disk space available to 'java.util.logging.Logger' while whining about it - in many places, because to the common idiot Java developer, one can never just log to the standard facility*.
Guess that must be because it is written in C and not because nobody ever really managed to write something decent and nice in Java!
*) I made about 7 kEUR in one year from on-call duty alone because of this feature as a mini-BOFH. So, it's not all bad, and I see how and why someone might like Java.
"Java is well known for first consuming all memory available to it and then consuming all disk space available to 'java.util.logging.Logger' while whining about it - in many places, because to the common idiot Java developer, one can never just log to the standard facility*."
Oh God, brings back memories. I hope the younger programmers who have sneered at me for "not using Logger" have by now found out why. But I doubt it. Even though one of them did manage to fill up an Azure VM over a weekend and wondered why it wasn't working on Monday.
> Any C compatible CPU needs to have Byte addressing.
That is true but a byte is not what you think it is. The C language defines a "byte" as the smallest addressable unit of storage but that does not equate to it being 8-bits. To prove this is true you can have a fully compliant C compiler that works on some strange architectures, for example a DSP where the C-language defined "byte" is a 32-bit unit. Oh my.
Whilst in general I agree with your post, it is possible to optimise for certain high-level languages.
C for example, along with many other languages uses (a/the) stack as a pointer to temporary variable space.
This avoids the overheads associated with memory allocation and garbage collection, but opens a security hole in that the stack is not just for data, but for instruction pointers as well.
So a chip designed for C MIGHT have a second data stack pointer. Or a register that represents the limits of the stack space where access is allowed.
It's not easy to cover all the bases, but it could be possible.
What I am saying is that chips were originally designed to run assembler, and the compiler was a faster way to write it.
Nowadays we know that they will probably be executing C code most of the time, and it makes sense to adjust the hardware to match that.
C does produce tight assembler that looks remarkably like C, but what it does not do is produce optimal assembler using constructs that do not suit the language.
I never did manage to get an early C compiler to construct a call table - a list of addresses of subroutines to call depending on the index value in some register. Mostly because the syntax of indirection was so ugly and it hadn't been written to parse it.
On the other hand a CASE statement or a set of if-then-elses was fine, if bulkier.
That is C.
Other languages that suffer/benefit from lots of dynamic memory allocation might in fact have chips with parallel cores handling the mapping of real memory to a virtual memory space, much as an SSD controller does ... thereby freeing the main cores from garbage collection and memory allocation.
Or take FORTH. That is a language that benefits from certain hardware features too, but absolutely doesn't need loads of registers.
Hardware drove language development in the 70s. But the reverse is true today.
A CPU "runs FORTRAN fast" if it is fast at running the kind of machine code a FORTRAN compiler will typically emit, and in this sense it is perfectly legitimate to say that a CPU is designed to run C fast.
Since these days hardly anyone thinks the design of the Burroughs B6500 was a good idea, we don't have too many computers that run Algol well but C not nearly as well.
I find it amusing that my perfectly factual post ... and from someone helping design RISC-V CPUs and working on RISC-V compilers to run C fast ... got 30 downvotes here. Apparently a lot of people are half-educated. Oh well, lol etc.
"Apparently a lot of people are half-educated"
Apparently you didn't proof-read your post before posting, did you? Maybe you would not have so many downvotes if you had mentioned the word compiler instead of insisting that the CPU runs C, which, unless you are building a C interpreter/VM into your RISC-V CPU, is completely ridiculous.
When you eat food, does your body digest it before burning the energy? Or will you insist to your doctor that your circulatory system pumps the chewed bits of sandwich directly to your muscles and then complain to him/her about the idiots who down vote you for insisting that is how the human body works and anyone who believes in stomach acid and bowel movements is as un-educated as a flat-earther.
I'm not sure what happened to the Burroughs operating system written in Algol 68, but the ICL operating system written in that language (for hardware designed to support that language as well as others) is still going strong - Fujitsu (who built hardware designed for that language for ICL and much later bought ICL) are still selling it and are still trying to recruit people who understand the operating system, databases, middleware, and language because they want to keep it going as it's very much wanted by enough of their customers to matter. Now I'm in my mid-70s and it's decades since I was part of that development and I'm more interested in other things these days, but I'm still proud of what I helped to achieve way back when.
>Additionally, I absolutely dare you to make a CPU incompatible with C. If you do, you will also find everything else is.
I cannot see how to do that. On the other hand, I know there are cases where C is hard: the 6502 processor. C relies on variables on the stack. The 6502 stack has 256 single-byte entries, and stack-pointer-relative addressing takes a lot of work. In fact, making a virtual CPU that abstracts this away is easier to program against.
And back in the day, nearly 30 years ago, it was said that ARM was inspired by the 6502, though few if any reliable sources exist today; ARM, however, is far more suited to stacks and C software than the 6502 ever was.
"Though few if any reliable sources exist today"
The person who designed the original ARM instruction set - Sophie Wilson - is still alive and well, working in Cambridge for Broadcom. If you put "Sophie Wilson" into Google you will find various YouTube videos where Sophie explains the origins of the ARM instruction set design and the link to the 6502.
There are also easy-to-find videos of Steve Furber, who led the original ARM CPU implementation, describing the inspirations for the project.
The original ARM CPU project was done at Acorn Computers in Cambridge, starting in 1983. The name ARM was originally an acronym for Acorn RISC Machine.
Acorn's hugely successful (in the UK) 8-bit personal computer, called The BBC Micro, used the 6502, which is how Acorn became a big company, allowing them the resources to develop their own CPU.
The ARM CPU was used in Acorn desktop computers called Archimedes released in 1987. Both Wilson and Furber acknowledge the Berkeley RISC work as a key inspiration for the original ARM design.
Another key inspiration for the project that they mention is visiting the design centre of another CPU company (I can't remember which one) and seeing that it was a very small team working in a house converted to an office, and thinking "if they can do it with a team this small, then we can do it ourselves".
Usually only politics provides the opportunity to justifiably use words like "tommyrot" and "moonshine", since technologists, unlike politicos, normally make some effort to hew to evidence-based facts ... but, wow. What a lot of half-baked, poorly informed, badly reasoned tosh. You should be damned glad you posted anonymously.
I think I understand that you, AC, badly failed a C/C++ module somewhere in the past and may still be feeling the sting of a U-- grade. And I'm glad if you subsequently found one of the many modern languages that provide hand-holding and wet wipes and now believe that you are a true hairy-assed coding bro. You may yet do some good work, especially if you work on that humility thing and allow yourself to pin your ears back and learn properly about the nuts, bolts and grubby bits.
But leave it another couple of years before posting, eh?
Downvoted after reading the linked article, and in particular finding that others had already made the comments there that needed to be made. That should work well.
Actually, what might properly be considered harmful is building a computing system whose behaviors are not readily predictable and/or readily understandable at design time. That is a quick recipe for trouble in later life, as has become increasingly public in recent months courtesy of design flaws with trendy marketable names getting coverage here and elsewhere.
The potential for this kind of unwelcome behavior has actually been understood (but not widely acknowledged) for at least a decade in sectors which care about predictable behavior of systems, e.g. the safety critical sector. For example, the timing behavior of cache-dependent systems, especially those with branch prediction and speculative execution, was documented as an area of concern for safe (secure) systems.
One such document set, which is freely available, is the Aerospace Vehicle Systems Institute's series of reports on the criteria to use when selecting a processor for use in safe/secure systems. It was funded by various safety-critical end-user companies (such as Boeing, Lockheed, UTC, etc.) and is freely downloadable courtesy of the Federal Aviation Administration.
Go have a read. It might be interesting and informative.
I'm amazed at how many people don't seem to understand what a compiler is, or indeed what assembly language is, and the relationship between the two.
It does start to explain a few things about modern programming, that apparently so many programmers, sorry, I think they like to be called 'developers' now, seem to think that the CPU is executing the exact code they just wrote.
Wait... sequential processing considered harmful? That's what the article is saying. Everything must be parallel. OK, but given two parallel machines with 100 streams running, the one whose sequential processing speed is twice that of the other will run all 100 streams at twice the speed. I fail to see the dichotomy between serial and parallel performance. Processors today have multiple cores, and rarely would a single program use all the hardware available. Running multiple co-operating sequential tasks (à la Hoare's CSP) is a model that works. The 100 serial tasks could be written in any language.
Getting the sequential logic right using Python, and then looking at C if performance becomes an issue, is a reasonably useful heuristic for developing individual processes. An application should almost never be a single monolithic sequential process, but a composition of many. The lack of widely adopted practices for elegant IPC is hardly unique to C; Python is the same. Composition is an interesting problem but, aside from heavy computation, it is not at all clear that language constructs are more helpful (or closer to the actual hardware) than just writing explicit CSP.
I mostly agree, but would much rather use one of Milner's languages (SCCS, ACCS, or CCS) than Hoare's CSP. Maybe that's because I had more contact with Milner than with Hoare, or it may just be the horrible vending machine in Hoare's first CSP paper.
Back in the 70s one of the goals of processor designs would be to build instruction sets that would translate directly from high level language constructs. If you want to see what an advanced 1970s machine architecture looks like then try the ICL 2900 series processors. The problem with these architectures was that the real world mix of instructions tended to be mostly loads and stores with a handful of more specialized instructions so architectures were trending towards the RISC pattern anyway.
As for 'C' being a low-level language -- it is. It's what used to be known as a Systems Programming Language, a glorified assembler. Its intended use is to write system components and languages, and it should never have been used for applications. It ended up as a general-purpose language because of the way that mass computing evolved in the 1980s. (... and yes, since you're asking, people do have to write assembler-type stuff; you need it to start and run the processor(s) and other hardware subsystems.)
You show complete lack of understanding on the subject of the CPU.
It does not run C. CPU designers do not adjust the CPU to run C any better just because the programmer wrote in C.
The CPU ONLY runs machine code. The performance of a C program vs a program written in Haskell, for example, depends entirely on the compiler (assuming you compile the Haskell).
It is the compiler, for any language, that will take into account any performance features implemented in a CPU. This is not restricted or exclusive to C, but C compilers are likely the first to support a new CPU feature because a heck of a lot of stuff is written in C.
The instructions in a CPU do not differ based on the language used by the programmer. In the past some CPUs were created to run LISP directly, being essentially a hardware VM interpreting LISP byte code. But today your CPU generally implements the x86, AMD64 or ARM instruction set, and that is completely ignorant of your Haskell or C coding preference.
When you write in Haskell and compile, you get an executable blob of machine code. When you write the same program in C, you get a blob of machine code. Both blobs use exactly the same machine code, the only difference between them being which instructions the compiler chose to use in creating that code. Thus the compiler is the cause of performance differences, not the hardware. Code from the Haskell compiler may always be slower than code from the C compiler because it's not taking advantage of advanced features of the CPU, because the programmer who wrote the compiler has not added the code to do so. The CPU dictates nothing that affects this.
It's the same as saying that your electricity supplier makes electricity that is designed to work in LED light bulbs but not in CFL light bulbs or a certain brand of toaster that they don't like. Trust me, I'm getting solar panels when my supplier denies power to my kitchen socket when I plug in my toaster that was made by a company that insulted them.
electricity = electricity
machine code = machine code
assembly -> assembler = machine code
C -> compiler = machine code
Haskell -> compiler or interpreter = machine code
Java -> compiler = bytecode -> JVM = machine code
C++ -> compiler = machine code
Perl -> interpreter = machine code
See any pattern there?
I suggest you read the datasheet for a CPU you "think" provides different instructions based on high level language choice and then write machine code to disprove yourself.
Hey! I was just trying to “inform a lively industry debate" and to "cultivate a healthy discussion around architectural choices". As you all know, no one has a greater passion for open source than me. In fact, only an ugly bed-wetting pansy would get his knickers in a knot about the whole tempest in a teacup which we've all, ALL, already forgotten about.
How about a follow-up article? It would be interesting to know if RISC-V compilers are as well optimised yet.
How will RISC-V avoid fragmentation of the instruction set, i.e. the same op-codes having different implementations?
Can they adopt a standard like UEFI so an OS can be compiled once to boot on all capable RISC-V devices?
How soon before RaspVian + RaspVpie?
RISC-V compilers are certainly newer and less optimised than ARM ones.
Despite this, SiFive's new E20 and E21 cores outperform ARM's Cortex-M0+ and Cortex-M4 on a Dhrystone MIPS/MHz and CoreMarks/MHz basis, when both are compiled with gcc.
The ARM chips benchmark higher than the SiFive ones when using the IAR compiler. IAR has promised a beta of a RISC-V compiler for around the end of the year.
The RISC-V standard suggests that all but the very smallest RISC-V systems should include a "device tree" description of themselves in an onboard ROM.
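For those unfamiliar with the format, a devicetree description is a small structured declaration of the hardware. A hypothetical minimal fragment of the kind of self-description a RISC-V board might ship in ROM (node and property names follow the devicetree conventions; `riscv,isa` is the standard binding for the supported ISA string) looks like:

```dts
/dts-v1/;
/ {
    cpus {
        cpu@0 {
            device_type = "cpu";
            compatible = "riscv";
            /* base ISA plus standard extensions this hart implements */
            riscv,isa = "rv64imafdc";
        };
    };
};
```

An OS boot loader can read this instead of hard-coding per-board knowledge, which is what makes a compile-once-boot-anywhere story plausible.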
The Raspberry Pi and other similar boards use obsolete SoCs that have already shipped in the millions in phones or other devices. For example the Odroid XU4 uses the same Samsung Exynos 5422 SoC as was in the Galaxy S5 phone. The Odroid C2 uses an Amlogic S905 SoC that was designed for set-top boxes. (I highly recommend both these boards over Raspberry Pi btw if you do want a high performance ARM board at a good price)
As RISC-V is new, it will be a while before there are obsolete SoCs that have already had their costs amortised in consumer products. What you have now in the SiFive HiFive1 ($59 320 MHz Arduino-compatible) and HiFive Unleashed ($999 quad core 1.5 GHz Linux board) are development boards aimed at professional engineers to evaluate the technology and prototype their software and products before they get their own hardware made. While $999 is expensive compared to a Pi or Odroid, it's a drop in the bucket if you're paying an engineer $100k+ to work with it. Not to mention that the HiFive Unleashed has a lot of expensive stuff on it ... the 8 GB of DDR4 costs a lot more all by itself than the Pi or Odroid (which have 1 or 2 GB of DDR3) retail for.
The key point of RISC-V is that it has a core ISA plus a set of standard extensions which are guaranteed not to change and are sufficient for standard programming, OSes etc. Thus, assuming implementations include the standard base configuration, things like OSes will run and standard compilers will generate code for the core. If people want to extend the ISA, they can add their own instructions for their application domain and run code using those on their cores.
So code (including the OS) can definitely be compiled to run on any RISC-V core, but "efficient" applications using extensions to the ISA for a particular application may be limited to certain cores.
It's obviously not beyond the resources of an Apple or Samsung to design their own RISC cpu.
The real clever stuff comes in the process design which, at least on the higher end parts, you need to do yourself.
With open-source compilers, the toolchain, and hence the experienced developers, are already there for you.
I always assumed it was that, if you were an Apple or Samsung, the royalties to ARM dropped to the point where they were just less than the cost of creating your own architecture.
Both Samsung and Apple depend on the ARM instruction set architecture. So whatever they do, they need to be able to run ARM code.
So they can't make it ARM-compatible, as ARM would sue; and they couldn't make an emulator, since ARM would sue.
The hope is that RISC-V might become important enough that application developers see it as worth compiling their software for. Yes, Android can use Java, but none but the most trivial apps can work without additional native blobs.
I think, with no data to back it up other than hearsay, that the ARM business model was broadly to say, quite reasonably: 'you design the silicon, we design the architecture, and then your hardware is compatible with third-party software.'
I.e. it was Wintel emulation with an open-source-style business model. Unless CPU architecture is your thing, licensing it lets you get on with spending money on better silicon and not have to worry about software support. Just as banging in an Intel chip meant it could run Windows, and running Windows meant access to a huge base of software.
And ARM were not greedy.
So I am surprised not that it has taken so long, but that it has happened at all.
Do we actually need ANOTHER chip architecture?
Obviously these people think we do.
>> the consumer still makes the ultimate decision.
This is not about the bigger/fatter ARM cores (at least not yet). Those are where the larger ecosystems matter.
This is about the smaller Arm cores, which mostly run proprietary code that consumers never see or touch: your DECT phone, wifi or USB dongle, and so on. Clocked at, say, no more than 300 MHz, if that.
As the code is proprietary and likely mostly written in a higher-level language, and therefore reasonably portable, a switch of compiler is far more feasible here. This is the market where Arm made their name, well before smartphones and SBCs came along.
So if RISC-V offers a compiler and enough performance, the switch is controlled and internal to the development teams involved.
Hence why ARM is panicking. RISC-V does not have to be the best; it just needs to do a reasonable job that justifies the cost difference and lowers the per-device overheads of the ARM licence model. It's like capex vs opex.
I suspect ARM will be revising their license model for the cortex M range if (when?) RISC-V takes off for this market.
Biting the hand that feeds IT © 1998–2019