'Neural network' spotted deep inside Samsung's Galaxy S7 silicon brain

bazza Silver badge

Most Surprised

Hang on a minute, what's going on here?

Instruction decode? Branch prediction? It's as if someone has decided that ARM is a CISC instruction set all of a sudden and needs to be re-implemented. But ARM is already RISC (very RISCy, in fact), and even the 64-bit version needs only 48,000-ish transistors to implement.

How can it be better to add all that rename, decode and microcode nonsense on top? That's surely going to be a good demonstration of the law of diminishing returns. Wouldn't it be better simply to spend all those extra transistors on extra cache (which is always useful), or on a whole extra core, instead?

3W at 2+ GHz, and no quicker than a competing design on single-core performance? Well, I think that about answers it. I don't know what Apple have done, but I'd not heard that they (or anyone else) had gone down the same microcode route.

Neural nets for branch prediction? Well, why not, I suppose, but from a pure CPU-design point of view isn't it a kind of surrender? It's a bit like saying "we don't know how to do this properly", building something that cannot be mathematically analysed instead, and hoping it's better. That's fine if the result is good...
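
For the curious: the "neural network" here is presumably a perceptron predictor along the lines of Jimenez & Lin's "Dynamic Branch Prediction with Perceptrons" (HPCA 2001), not anything deep. A minimal sketch in C follows; the table size, history length and PC hash are my illustrative guesses, not Samsung's actual design:

/* Perceptron branch predictor sketch, after Jimenez & Lin.
 * Real hardware uses small saturating counters for the weights;
 * plain ints keep the sketch short. */
#include <stdint.h>
#include <stdbool.h>
#include <stdlib.h>

#define HIST_LEN  16                             /* global-history bits */
#define NUM_PERC  1024                           /* perceptron table entries (assumed) */
#define THRESHOLD ((int)(1.93 * HIST_LEN + 14))  /* training threshold from the paper */

static int weights[NUM_PERC][HIST_LEN + 1];      /* [0] is the bias weight */
static int history[HIST_LEN];                    /* +1 = taken, -1 = not taken */

/* Predict: dot product of the weights with recent branch outcomes. */
static int predict(uint32_t pc, int *idx_out)
{
    int idx = (pc >> 2) % NUM_PERC;              /* hash the branch address (assumed) */
    int y = weights[idx][0];                     /* start from the bias term */
    for (int i = 0; i < HIST_LEN; i++)
        y += weights[idx][i + 1] * history[i];
    *idx_out = idx;
    return y;                                    /* y >= 0 means "predict taken" */
}

/* Train on the actual outcome, then shift it into the history. */
static void update(int idx, int y, bool taken)
{
    int t = taken ? 1 : -1;
    /* Train only on a mispredict, or while the output is still weak. */
    if ((y >= 0) != taken || abs(y) <= THRESHOLD) {
        weights[idx][0] += t;
        for (int i = 0; i < HIST_LEN; i++)
            weights[idx][i + 1] += t * history[i];
    }
    for (int i = HIST_LEN - 1; i > 0; i--)       /* shift in the new outcome */
        history[i] = history[i - 1];
    history[0] = t;
}

Note that training only fires on a mispredict or while the output magnitude is under the threshold, which is what stops the weights growing without bound. It's also exactly why there's no closed-form statement of how, or when, it converges, which is my analysability gripe in a nutshell.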

It does mean that this is useless for hard real-time applications, though: the time a branch takes is now impossible to predict, so worst-case execution time can't be bounded.
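
To put a rough number on the real-time pain: a WCET analysis that can't model the predictor has to charge the full misprediction penalty to every branch on the critical path. A back-of-envelope sketch in the same vein; the penalty, hit rate and branch count are assumed figures, not measurements of the M1:

/* Illustrative WCET pessimism when the predictor is an opaque box. */
#include <stdio.h>

int main(void)
{
    const long   branches   = 1000000;  /* branches on the critical path (assumed) */
    const int    mispredict = 14;       /* cycles per miss, typical big core (assumed) */
    const double hit_rate   = 0.97;     /* what the predictor manages in practice (assumed) */

    long typical = (long)(branches * (1.0 - hit_rate)) * mispredict;
    long bound   = branches * mispredict; /* only provable bound with an unanalysable predictor */

    printf("typical penalty: %ld cycles\n", typical);
    printf("provable bound : %ld cycles (%.0fx worse)\n",
           bound, (double)bound / typical);
    return 0;
}

Three percent of branches actually missing versus having to assume all of them do: that's the gap between what the chip does and what you can prove it does.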
