What's that in brontosauri?
On November 15, 1971, 40 years ago this Tuesday, an advertisement appeared in Electronic News for a new kind of chip – one that could perform different operations by obeying instructions given to it. That first microprocessor was the Intel 4004, a 4-bit chip developed in 1970 by Intel engineers Federico Faggin, Ted Hoff, and …
What's that in brontosauri?
But that's the Americans for you, never using the same standard sauropod-derived measurements as the rest of us.
Re: "The 8080 wasn't alone, though – there was plenty of competition in the earlier days, such as the Zilog Z80, Motorola 6800, and MOS Technology 6501, which Pawlowski told us were all essentially equal competitors at the time."
The Z80 took the 8080 architecture and expanded it with more 16-bit registers, such as the IX and IY index registers. It came after those other processors, so it wasn't really a competitor "at the time", and it soon took Intel's market for general-purpose microprocessors. For us Brits, its most obvious manifestation was in the Sinclair ZX80 and ZX81, but it was also used in many embedded systems. I loved programming those things.
The first microprocessor I worked with was the 6800, which I thought had a better architecture than the 8080, but CP/M ran on the 8080 (and Z80) and was too dominant by the time the 6800 came along.
I never wrote software in assembler for the 6502, as I didn't like the architecture at all - but that didn't stop me loving my BBC micro.
"It ran at 740KHz"
It's kHz. Little 'k'. Grrr.
The 6502 (originally the 6501) was designed by the same people who did the 6800 and has a lot in common with it. In fact the original version (the 6501) was pin-compatible with the 6800, until Motorola sued and made them redesign it.
So I find it hard to think why you would hate two very similar CPUs.
You've just got to think of it as a load/store architecture, with the zero page acting like the register bank in other machines and accesses everywhere else being expensive. You end up doing most of your business logic with the two-or-three cycle instructions acting on the zero page and occasionally wander into the elaborate four-upward cycle instructions to fetch tabular data. Oh, and I guess you have to get used to the slightly weird one's complement subtraction but it ends up just being a carry inversion since all arithmetic is with carry.
I prefer the Z80, but I think that's just because I know them only through the home computers, and the popular 6502 machines always confused the issue with their video circuitry – the 6502's relatively poor random memory access speeds seemingly made people want to back away from just giving it a framebuffer in a sensible order.
I didn't realise it ran _THAT_ slow! Jeez.
The 6502 uses two's complement arithmetic. You have to set the carry with SEC before you begin a subtraction, is all, because there is no SCS (set carry and subtract) instruction. Afterwards, the carry will be set unless we had to borrow one from the next byte.
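For anyone rusty on this, the carry-as-inverted-borrow behaviour can be sketched in a few lines of Python – a one-byte simulation of how SBC behaves, not actual 6502 code, and the function name is my own invention:

```python
# Sketch of how the 6502's SBC treats the carry as an inverted borrow.
# SBC effectively computes A + (~B & 0xFF) + C, so with the carry set
# first (the SEC instruction) the result is A - B in two's complement.
def sbc(a, b, carry):
    total = a + (~b & 0xFF) + carry
    result = total & 0xFF
    carry_out = 1 if total > 0xFF else 0  # clear only if we had to borrow
    return result, carry_out

print(sbc(0x40, 0x10, 1))  # (0x30, 1): no borrow, carry stays set
print(sbc(0x10, 0x40, 1))  # (0xD0, 0): borrowed, carry comes out clear
```

The carry coming out of one byte feeds straight into the next, which is why multi-byte subtraction only needs the one SEC up front.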
Also, the 6502 writes multiple-byte numbers units-first; but the 6800 is units-last.
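In modern terms that's little-endian (6502) versus big-endian (6800). Python's struct module shows the difference for a 16-bit value – just an illustration, obviously not period code:

```python
# Byte order of a 16-bit value such as 0x1234 in memory:
import struct

print(struct.pack('<H', 0x1234))  # b'\x34\x12' - units byte first, 6502 style
print(struct.pack('>H', 0x1234))  # b'\x12\x34' - units byte last, 6800 style
```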
As to the "out of order framebuffer" thing, I always thought this was just a consequence of using the display generation process (which obviously must read every byte of the framebuffer memory) to perform DRAM refreshing. (The Z80 can do its own refreshing, but only up to 16K bytes as the R register is only 7 bits.)
It was much cleaner than the macro-ised multiple-cycle 8080. I used to write in it for the BBC Micro, Oric, Atmos and Dragon, assembling by hand (writing assembler and then converting to machine code by looking up the instructions in the reference manuals).
Aye, ya tell the yoong kids today...and they woont believe ya ;)
Was supposed to be twice as fast. It would then have been comparable to an IBM 1620, which Ted Hoff was VERY familiar with. It turns out that the yield on the chips, had they been specified at the higher speed, would have been very poor. Yes, the 4004 was compared to an IBM 1620 – Business Week back in the day ran a picture of the one it was compared against. I owned that one (and still do, if I can find it).
Interesting days of microprocessors in the "beginning". Much has changed in 40 years. For example, before they had quartz windows on EPROMs, you needed to have them X-rayed to erase them. Now you just ask and they (flash) forget.
That was before the 2508, which needed (IIRC) +12, -5 and +5 to function.
I reckon I should stick core-rope on my machine. Slow me down a bit.
Oak veneered processors.....
They don't make 'em like they used to, do they?
I had forgotten how primitive we were in the 1970s and that picture reminds me of the vital role of stripped pine in the fabrications of chips.
everything was wood veneer - why should processors be any different.
These days we call it "steampunk". Back then, it was normal.
Ahhh... a deep varnished wood-finish processor with delicate brass corners. That would be something worth showing off.
The hard part was shoveling the coal into that little tiny oven.
I like the way the DDR3 drivers take up almost as much area as one of the cores in the latest part. They are practically an FPGA in themselves.
While I don't claim to be an expert, I think the uniform vastness you refer to is cache, and has not much to do with "driving the DDR3" memory. It's true, though, that these days cache is the most transistor-intensive area of a CPU and tends to be a major determinant of final price.
(If I'm wrong can someone point out the correct interpretation?)
"...the chip itself wasn't all that impressive. It ran at 740KHz, had around 2,300 transistors that communicated with their surroundings through a grand total of 16 pins..."
Sounds not entirely unlike some of the embedded processors some of us still use these days...
Thanks El reg for what could either be a trip down memory lane for us old timers, or an educational article for the younger generation!
A lot of history covered in this article. I was fortunate [or unfortunate] enough to have worked at an Intel distributor in the '90s. We had massive posters of the 286, 386 and 486[DX] cores in our tech department, and it was amazing how much interest these would attract.
Once again, thanks for an informative, interesting and somewhat nostalgic article.. Two Thumbs Up!
Rik, been really enjoying your recent articles, another good one, cheers.
A lot of Intel bias going on there. For starters, the Pentium was not the first "superscalar" processor, as the technique had been implemented as far back as the 1960s by Seymour Cray. As for the poxy 8088 and its offspring, they're still hindering advances in programming by making pretty much everyone cater for the brain-damaged x86 instruction set. If only IBM had chosen a chip with a decent instruction set (the Motorola 68k, for instance), then we may have seen advances in instruction set design going hand in hand with advances in manufacturing processes. Even Intel have acknowledged the problems of the x86 architecture – by creating alternative processors like the i960, and then by putting a RISC core behind a complex decoder for more recent x86 implementations (we'll just forget about Itanium, as that just proves Intel can still fuck things up on a major scale).
Now thank God that that was not at all biased.
If you actually read the article before you post your rant, you'll find that when Pawlowski describes the Pentium as "the first superscalar machine", he clearly means it was the first superscalar x86. On the very next page of the story he says they were "playing catch-up" and knew that they could have started on a superscalar version earlier.
I expect Pawlowski is well aware that the Pentium wasn't even Intel's first superscalar chip – that was the i960, from the late 1980s.
Nope, it was the first commercially available microprocessor...
The very first one was the MP944, but it was a bit... restricted :)
Brontosauri? Surely everyone knows full well that the SI unit of comparative size is double-decker buses.
Or "football pitch" and "Wales" for area, depending on scale.
Heard a Rhod Gilbert rant that basically went along the lines of:
"I know what news readers really mean when they state the scale of disasters in terms of 'Wales'. They wish it was Wales, and each time are muttering under their breath 'but NOT Wales'. I know your intentions. Ah, but the joke's on you! Now that Wales is used as a scale it cannot be obliterated, or you wouldn't have anything to measure disasters by!"
The story of the success of Intel microprocessors is that commercial, and not technical, factors dominate.
The 8086 was very much inferior to the 68K and the 16032; it was probably on a par with the Z8000. I remember Intel trying to sell to me at that time, and they always emphasised price, the agreement with AMD that gave a guarantee of supply and assurance on pricing, and support. They never tried to sell on performance or technical aspects, because it was well behind Motorola.
The PC then came out and things changed very rapidly. Intel broke the AMD arrangement, and the price of the first non-agreement part, the 80287, sky rocketed. Technically Intel parts were still very much second best but they sold fantastic numbers o fparts. The 80286 retained the awkward segmented architecture, extended withprotected mode; performance was still very poor. The 386 finally had a sensible memory architecture but still had the nasty special-purpose registers and complicate dinstruction set, and performance was still very poor compared to other micros. It was probably not until the Pentium that Intel gained parity with other microprocessors.
None of these technical things mattered; one design decision by IBM made Intel the dominant microprocessor company with massive resources, despite rather than because of their technical design.
You really don't know when to press space, do you?
Remembering the iAPX432.
I have very vague memories from uni where we were told that the design of the 4004 was actually carried out by an intern / gap year student, and design flaws were carried through a number of iterations of follow-up chipsets to retain backwards compatibility. This story was probably apocryphal - it's nearly 20 years since I heard it so I cannot remember any more details! Anyone else heard anything about this?
I miss the simple days of the 8080/Z80/6502/6800/6809 where the layouts weren't critical and the instruction sets were easy to use at the machine level. Every clock cycle and every byte counted back then, with memory limited by cost and the address bus.
Seeing a Z80 emulator run on a 686-class machine or better and claiming to be the equivalent of a Z80 with some fantastic clock speed does bring home the huge performance gains.
Meanwhile, I'll stick to programming my embedded 8051, another processor that dates back to simpler times.
...when bytes were real bytes, motherboards could be fixed with a soldering iron, "intellectual property" meant you'd paid off your Encyclopaedia Britannica, and 'programming' meant hand-coding raw MC. Maybe assembler if hung over.
And yes, counting every damn clock cycle.
God, I feel old.... <sniff.>
<insert Four Yorkshiremen sketch here>
In page 8, I read: "Nehalem was a 45nm part, and a follow-on to the first 45nm parts – code-named Penryn – which introduced the second of Bohr's process improvements"
But in fact I believe that Penryn preceded Nehalem, since Penryn was based on the Core architecture, the one immediately before Nehalem.
Nehalem was a follow-on to Penryn. That is what it said. That means Penryn came first. Where is the problem?
All it says is that Penryn was the first 45nm part and Nehalem was a follow-up 45nm part.
So your objection seems to be that you agree with what it said.
Of course – I misread the original text even after copying it. Thanks for the correction.
And now the crippled x86 design looks doomed by those sitting up and taking notice of ARM, especially in the mobile/laptop/low-power-server markets.
They are like MS: they got their foot in the door because of a lazy decision Big Blue made one day, and have been laughing all the way to the bank ever since.
Really didn't rate them at all until the 80386. Fond memories, I must admit; I felt that was the milestone when PCs became truly useful (or 'fun' in terms of gaming!).
Correct me if I'm wrong, Sir Wiggum (or anyone else for that matter), but x86 is also known as CISC, or complex instruction set computer/ing, whereas ARM, PowerPC etc. are RISC, or reduced instruction set computer/ing. And Apple not so long ago jumped from PPC to x86 for their processors... Why do you describe x86 as crippled?
We really need a ? icon!
The RISC/CISC debate is essentially irrelevant for general-purpose CPUs these days. The major CISC architectures long ago went to decoding CISC instructions into micro-instructions that are processed by (superscalar) RISCy cores. Meanwhile, supposedly RISC architectures got steadily more complex, starting with IBM's RIOS (the first POWER implementation).
And CISC/RISC was never a matter of being "capable" or "crippled". The CISC/RISC distinction was invented when "RISC" was coined to describe CPUs that deliberately restricted their instruction sets to those instructions that could be done in a single clock cycle. That followed from the observation that compilers rarely used fancy multi-cycle instructions in their generated code anyway. CPUs like VAX provided all sorts of nifty operations for the benefit of assembly programmers, but when most software was being written in HLLs anyway, it made more sense to optimize the simple opcodes. And having the same one-cycle timing for all instructions makes that easier.
Some people feel the x86 architecture is "crippled" because its early members had various failings (segmented memory architecture, few general-purpose registers), and later members have carried some of those along for the sake of compatibility, while maintaining an arguably excessive instruction set. Of course, modern x86 CPUs are rarely used in anything other than flat-memory mode and have more registers than their ancestors; but I don't know that you'd find many folks who'd describe the x86 architecture as elegant.
Good article. As someone who cut his teeth on Z80s and 6502s, I love stuff like this. Anyone got any suggestions for further reading on microprocessor history? Books, PDFs, URLs, whatever ....
A friend of mine's father worked for Courtaulds in Coventry. They had some of the first 4004s in the UK, and he kindly gave me a copy of the 4004 handbook.
I was a teenager at the time, trying to diligently figure out (analogue) electronics.
When I started to read it, I was transported to a totally different world.
It took me ages to figure out timing diagrams, truth tables, concept of registers, the instruction set...not being versed in digital electronics at that time. The concept was obviously completely foreign to a spotty schoolboy.
However, I persevered, and finally understood it. (No Google to help you in those days).
That book - I don't know where it is now - was the rocket under my ass to the path I was to take.
If I read this right the 4004 was purpose-built to be embedded in a calculator.
Reading the calculator's capabilities, I get the feeling that the 4004 is a bit of overkill for something that does the four basic operations plus percentages, with storage for one number. I understand that back in the day even this would have been avant-garde, but was such a processor really the minimum necessary to deliver the performance?
Paris, cause I'm just as clueless
It was. But if you are building a calculator and Intel can design and build you one chip for $5 or so to do the job, or you can use twelve existing chips costing way more than $5 (and making the calculator bigger), which would you go with?
Who cares if the 4004 is overkill for a desk calculator. It's cheap and small.
1. IT was not built on the Intel 4004 or its successors. The information technology industry started in the 1950s with pioneering data processing applications leveraging emerging computing technology. Remember LEO, and the IBM 1401? They were certainly information technology systems. You'd have to use a pretty discrete and tortured definition of IT to claim the 4004 was its first building brick.
2. You use the phrase 'first processor' to describe the 4004. Here comes more pedantry... This is not true either. It was the first commodity, commercially available microprocessor – which is to say, an IC with all the traditional components of a CPU. Computer processors in the modern sense date back to at least 1949 and EDSAC. The Digital PDP-11, a direct contemporary of the 4004, certainly has a processor, as did all its ancestors. What it didn't have was a single-chip 'microprocessor'.
That's debatable. For one, the 8008 was worked on in parallel. But it wasn't Intel's own idea; it was a commission to a specified instruction set. Intel didn't really like it; the 8008 team got poached for the 4004. So to consistently push the 4004 as the first uP is, well, a bit of a case of NIH on Intel's part, starting back then. And then there's the Four-Phase Systems AL1, which was available earlier, but only as part of a product, not as parts. Of course Intel will claim high and low it was their 4004; nobody's around to contest the claim, and hey, marketeering makes for great history rewriting.
Ok, now I really feel old...
"After the 8086/8 came the 80286..."
No, it didn't. After the 8086/8 came the 80186/8, which was then followed by the 80286.
I remember coding in 80186 assembly on my dad's Tandy 2000...
My Beeb Master has a 186 board, AKA the 512 board. I also used 186-based RM Nimbus machines at school.