Re: 3,500 LEDs
The memory appears to be an array of bistable flip-flops with a density of approximately 1 bit per square inch.
A bloke in Cambridge, UK, is building a computer processor using 14,000 individual transistors and 3,500 LEDs – all by hand, piece by piece. James Newman said his Mega Processor relies almost entirely on the hand-soldered components, and will ultimately demonstrate how data travels through and is processed in a simple CPU core …
A very cool project.
Reminds me a bit of "The Elements of Computing Systems", although the book uses software emulators so you don't need so much physical space. Enough to get started on the principles of CPU design, though.
(not associated in any way with this, just an interested reader that bought the book)
"Reminds me at Uni ...
I programmed a bit-slice CPU ...
And hand soldered my final year project."
Mine were much less fun.
Had to design and build a bar code reader. It did the utterly pointless thing of reading the bar code, and recreating it on a plotter - also built by me.
Had to build a memory expansion for the "D5E evaluation kit" to hold the necessary lookup tables and "driver" for the plotter.
I learned far more about bar codes than anyone would ever really wish to know. :)
Totally agree with the commenters who suggest housing this in a public institution as an educational exhibit. NMoC was my first thought but Cambridge makes more sense geographically (damn you both - put it in my local science museum so that I can go and ogle it!).
El Reg is being a tad lazy by comparing its performance to integrated circuits of yore. It would be nice to see how it stacks up on the whole continuum of computing, at least as far back as the first valve-based systems.
All hail the man in a shed (or spare bedroom) with too much time on his hands!
(And before anyone points it out I know it's von Neumann)
because I think it's a ridiculous project :-)
Just because you can, doesn't mean you should. This is the opposite of progress - deliberately doing thousands of small repetitive tasks that a machine can do much better (for almost every definition of better - smaller, faster, cheaper, more reliably, using fewer resources)...
"This is the opposite of progress [...] a machine can do much better "
Yes, it is, and a step back in time. It goes back to the roots and *wonderfully* demonstrates how computers work. I can see that being a fantastic educational tool for those who want to learn about it, before they go off and build machines which produce the next generation of Raspberry Pi.
You didn't think that this 14x2m project was going to go into mass production for you to buy and use, did you?
I shudder to think he'd be running it at 5V… that's a 100A busbar that'd be needed to power the thing!
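The arithmetic behind that busbar figure, assuming (as the 100A estimate implies) a total draw of around 500 W at the 5V rail - both numbers are back-of-envelope assumptions, not published specs:

```python
# Rough supply-current estimate for the machine.
# ASSUMPTION: a total dissipation of about 500 W at a 5 V logic rail;
# neither figure is an official spec for the project.
power_w = 500.0    # assumed total power draw
voltage_v = 5.0    # assumed supply rail

current_a = power_w / voltage_v
print(f"Supply current: {current_a:.0f} A")  # Supply current: 100 A
```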
I've seen what power leads carrying 400A look like, feeding a 20kW 3-phase BLDC motor. We'd lock the rotor to measure torque; in doing so, we'd watch two wires get attracted to each other, and two repel each other, according to magnetism (F = (µ₀I₁I₂l)/(2πr)).
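Plugging numbers into that formula: for two parallel leads each carrying 400A, the force per metre comes out surprisingly large even at a modest separation (the 2 cm spacing here is an assumption for illustration):

```python
import math

# Force between two parallel current-carrying wires:
#   F = (mu0 * I1 * I2 * l) / (2 * pi * r)
mu0 = 4 * math.pi * 1e-7   # permeability of free space, T*m/A
i1 = i2 = 400.0            # current in each lead, A (from the anecdote)
r = 0.02                   # ASSUMED separation of 2 cm, m
length = 1.0               # consider 1 m of wire

force_n = mu0 * i1 * i2 * length / (2 * math.pi * r)
print(f"Force per metre: {force_n:.2f} N")  # Force per metre: 1.60 N
```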
Doing an 8-bit design would have been less than half the work.
That is something of a mixed bag. It reduces complexity in terms of components and wiring but significantly increases design effort. Large parts of a 16-bitter are simply the equivalent 8-bit circuit replicated, but going in the other direction introduces some significant extra issues. Presumably you would want to address more than 256 bytes of memory, so that makes an address (at least) two words long. Similarly, it's somewhere between difficult and impossible to encode a complete, useful instruction set in eight bits, so you have multi-word instructions too. That gives you a large amount of hassle co-ordinating those half-quantities, and you need multiple cycles to send those values around. That in turn adds complications as you co-ordinate timing in multiple-cycle instructions.
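The co-ordination overhead described above can be sketched as multiple bus cycles assembling one 16-bit address from 8-bit halves (a minimal illustration of the idea, not any particular CPU's actual sequence):

```python
# Minimal sketch: on an 8-bit data bus a 16-bit address takes two
# fetch cycles, and the half-quantities must be latched and recombined.
memory = [0x4C, 0x34, 0x12]  # opcode, address low byte, address high byte

pc = 0
opcode = memory[pc]; pc += 1        # cycle 1: fetch opcode
addr_lo = memory[pc]; pc += 1       # cycle 2: fetch low half
addr_hi = memory[pc]; pc += 1       # cycle 3: fetch high half
address = (addr_hi << 8) | addr_lo  # recombine the halves

print(hex(address))  # 0x1234
```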
I mentioned somewhere above that 12-bitter I set about designing a few years ago. 12 bits was chosen very deliberately as the simplest option - it's the narrowest width where you can sensibly have arithmetic, addresses and instructions all the same length. All instructions were single cycle so keeping everything in sync was also made a lot simpler, even if multiple cycles would have allowed you to crank up the clock rate a little. It did limit you to a 4096 word address space but I considered that adequate for a demonstration system.
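For comparison, the classic 12-bit PDP-8 packed opcode and address into a single word using a 3-bit opcode, indirect and page bits, and a 7-bit offset; a sketch of decoding in that style (the addressing details are simplified here):

```python
# Sketch of a PDP-8-style 12-bit instruction word:
#   [3-bit opcode][indirect bit][page bit][7-bit offset]
# Page-zero vs current-page addressing is glossed over in this sketch.
def decode(word):
    assert 0 <= word < 4096          # must fit in a 12-bit word
    opcode   = (word >> 9) & 0b111   # top 3 bits
    indirect = (word >> 8) & 1
    page     = (word >> 7) & 1
    offset   = word & 0x7F           # 7-bit page offset
    return opcode, indirect, page, offset

# e.g. opcode 1 (TAD on the PDP-8), direct, page zero, offset 5
print(decode(0o1005))  # (1, 0, 0, 5)
```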
The 6502 does everything in 8-bit widths, except addresses. An opcode byte is optionally followed by a 1-byte immediate value, or a 1- or 2-byte address. I'm guessing that it has a 2-bit field in the instruction byte that causes it to load 0, 1 or 2 following bytes into a register determined by the rest of the opcode (and increment the PC). If performance is no big concern, shouldn't this operand-loading sequence be implementable with a simple state machine? Though I must admit my knowledge of CPU design comes from one mostly-forgotten university course a quarter-century ago... (the final exercise was creating a paper design, which was not even simulated, never mind built.)
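The real 6502 derives operand lengths from a decode PLA rather than a tidy 2-bit field, but the state-machine idea can be sketched like this (the three opcodes shown are genuine 6502 examples; the lookup table is a simplification of the real decode logic):

```python
# Toy state machine for operand fetching: the opcode selects an
# operand length (0, 1 or 2 bytes), which drives extra fetch states.
# NOTE: mapping opcode -> length via a table is a simplification; the
# real 6502 uses a decode PLA, not a simple bit field.
OPERAND_BYTES = {0xEA: 0,   # NOP - implied, no operand
                 0xA9: 1,   # LDA #imm - 1-byte immediate
                 0xAD: 2}   # LDA abs - 2-byte address

def fetch_instruction(memory, pc):
    opcode = memory[pc]; pc += 1            # FETCH_OPCODE state
    operand = 0
    for i in range(OPERAND_BYTES[opcode]):  # FETCH_OPERAND states
        operand |= memory[pc] << (8 * i)    # little-endian assembly
        pc += 1
    return opcode, operand, pc

mem = [0xAD, 0x00, 0x80]                    # "load from address 0x8000"
print(fetch_instruction(mem, 0))            # (173, 32768, 3)
```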
We managed to make one single register, but we used all of the post-grad air-piston kit in our pneumatics lab. Just for the heck of it. Ours could retain the RAM contents even without power, obviously. It would have become even bigger if we had tried to do anything larger.
But hey, kudos, that was mighty impressive. I guess that's what would have happened if we didn't have single-die processors, or in a post-apocalyptic steampunk future.
That's nothing, when he can do it in Minecraft, I'll be impressed.
(And before I get spammed, yes, I know there are Minecraft computers, but the limitations of the 'platform' quickly make large-scale computers exponentially more difficult than actual computer construction, so they tend to be rubbish.)
Biting the hand that feeds IT © 1998–2019