Re: Nothing wrong with the chips.
"No, it's not that simple. Code is a product. It is paid for with money."
So after all that...it's still the code that's the problem!
After being pronounced dead this past February - in Nature, no less - Moore’s Law seems to be having a very weird afterlife. Within the space of the last thirty days we've seen: Intel announce some next-generation CPUs that aren’t very much faster than the last generation of CPUs; Intel delay, again, the release of some of …
"That's why the largest current FPGAs and VLSI chips have billions of transistors, you "connect them up"."
And at the end of it they generally solve ONE problem. Now imagine a hardwired chip that had EVERY modern graphics algorithm built into it. Seeing my point?
"And at the end of it they generally solve ONE problem. Now imagine a hardwired chip that had EVERY modern graphics algorithm built into it. Seeing my point?"
Jeez calm down, if you are a developer your output isn't going to be replaced by an FPGA anytime soon. The story is saying that hardware is more efficient in a lot of cases and these cases will increase. Nobody is saying that hardware can replace software, any more than they are saying that software is possible without hardware.
"Jeez calm down, if you are a developer your output isn't going to be replaced by an FPGA anytime soon."
Calm down? WTF, I was just making a point. Why are some people so wet they see any disagreement as some kind of confrontation? And my original point was that to reproduce the functions of a modern graphics card using hardwired logic (no microcode) would require a massive die.
The 386 used microcode, like a System/360 Model 65. The 486, though, was hardwired, like a System/360 Model 75.
It is true, though, that even the latest x86 processors use microcode to handle a few complex instructions, as do the latest System z processors. Most RISC chips, though, eschew instructions so complex as to make microcode a necessity, even though they still do things like floating-point arithmetic that take multiple cycles.
Any complex modern computer processor requires microcode to operate. Microcode is software.
It kind of is, and it kind of isn't.
Microcode is a series of bit patterns that enable, disable, and connect the various chunks of logic blocks in the CPU "fabric" (although fabric is probably not the right word).
So while it's updateable, microcode is not really what you would consider a program or software.
Caveat: I'm not a CPU designer but I play one on the internet :)
I don't buy the premise of this story.
Yes, doing stuff in specialized hardware gives you a 200x or even 1000x boost over doing it in software on a general-purpose chip.
But that's a one-time boost. At the end of the day, the performance of that hardware is still going to be limited by its process density.
So all you're really doing is delaying the point at which you can no longer improve performance, even in hardware, at the cost of adding extra chippery for various functions.
No, process density is only one factor; there are others, such as the process itself (lithography etc.) and the device type being implemented. Currently the full density can't be exploited due to these other limitations; however, there are ways round this. For example, the FinFET devices now being used have advantages over previous devices, so the technology continues to move forward. All the way from bipolar to CMOS, SOI, etc., the devices have been improved, and the same applies to the process and the process density. Engineering is problem solving, after all.
[coined in the paper by T.H. Myer and I.E. Sutherland, "On the Design of Display Processors", Comm. ACM, Vol. 11, No. 6, June 1968] Term used to refer to a well-known effect whereby function in a computing system family is migrated out to special-purpose peripheral hardware for speed, then the peripheral evolves toward more computing power as it does its job, then somebody notices that it is inefficient to support two asymmetrical processors in the architecture and folds the function back into the main CPU, at which point the cycle begins again.
Several iterations of this cycle have been observed in graphics-processor design, and at least one or two in communications and floating-point processors. Also known as the Wheel of Life, the Wheel of Samsara, and other variations of the basic Hindu/Buddhist theological idea. See also blitter.
", wringing every last bit of capacity out of the transistor."
WTF are you talking about?
A transistor in a circuit dedicated to, say, video decompression sits doing nothing when you are not decompressing video. A transistor sitting idle most of the time is hardly squeezing every last bit of anything out of it.
Dedicated circuits can be faster and use less energy, but they cost more to manufacture and their development is expensive. High-volume applications (to amortise development costs) with low energy requirements, like smartphones, are an obvious candidate, especially high-end smartphones where production cost is less of an obstacle.
"A transistor in a circuit dedicated to video decompression for example sits doing nothing when you are not decompressing video."
But if the times when it's NOT decompressing video (or compositing a UI or whatever task it is dedicated to perform) are few and far between, then odds are you get a net benefit from it. That's part of what's happening now. They're taking a look at what things CPUs have to do all the time and offloading them so that the CPU has more time for more generalized workloads, much like having a specialist for handling particular jobs that happen to come up quite frequently.
We didn't always have a math co-processor in the CPU. Intel and AMD both added extensions into their processors, including ones to increase multimedia performance. Outside of computers, we have multiple ASICs in everything from DVD players and receivers to cars.
One could even argue that software is driving a need for more logic in hardware.
The argument being that you're starting to see similar kinds of software being used all the time. If you have a particular job being done again and again, it becomes practical to push this function into an ASIC to (a) speed up the turnaround on that process, and (b) offload work so that the CPU can concentrate on more generalized tasks. That's one reason SIMD/vector computing instructions were introduced: to better deal with common math functions that were used in programs of the day. It's also why recent Intel CPUs include AES-NI: an increased need for security has pushed the use of AES so much that we end up using it all over the place.
They've discovered Chuck Moore's Law
Less is More
Software has layers of abstraction, so it doesn't matter what hardware you run an application on. Programmers want to focus on the high level task and not on the nuts and bolts.
How wrong that paradigm is, and I hope it goes away.
AT&T's old single Power PC chip had communication channels internal to the chip to allow anything the processor did to be IO'd externally.. IBM's current Power 8 chip has 8 processors inside, wrapped around a non-blocking on-chip switching network connected to a wideband IO MXR..
Haswell and newer Intel chips have the processors too, alas they have a blocking communications system forcing space/time/space MXing for the IO stuff.. like limited-instruction-set boxes, they might appear to be faster, but their data-crunching throughput is really not much faster..
Hypervisor software and divided data streams allow these Intel chips to scream.. however the IBM Power 8 architecture CPUs run simply as fast w/o special sauce software to make them work faster, or at all (sort of what this article implies: Hardware + Special sauce gives Moore's Law traction).. Happy C-64?? (still faster than my Haswell laptop)..
IMHO, a non-blocking network is needed on chip to take advantage of the many cores on a chip.. RS.
When the PC turned up it was indeed neat to have a computer that was small and cheap enough to own. The price we paid for it was a return to hardware architecture that was twenty years out of date. Increases in hardware performance masked this giant step backwards; we got used to code bloat being managed by plummeting memory prices and blazing fast processors. (You'd be amazed at just how fast a generic Intel processor is when it's not encumbered by the software we normally run on it.)
We're finally moving into the world we wanted to be in back in the 1980s, and it's possible because -- finally -- hardware and tools don't require multi-million dollar investments to build anything: the blocks are cheap, the tools are cheap and the techniques are well understood.
It's unfortunate that our software technology is still pretty crude -- in fact modern applications programming looks awfully like "chuck a load of mud at the wall and see what sticks". This might be a practical solution to getting the job done with the available resources, but the size of modern programs relative to their functionality is embarrassing. (...and no, memory isn't cheap -- the parts are, but the time taken to load and unload the stuff mounts up.)
Biting the hand that feeds IT © 1998–2019