Sounds very interesting.
Anything that allows more flexible access to all this compute power would help extend the range of algorithms you could run on them.
AMD is to manufacture microprocessors that connect their on-board CPU and GPU components more intelligently than ever before. The upcoming chips will utilise a technique AMD calls Heterogeneous Queuing (hQ). This new approach puts the GPU on an equal footing with the CPU: no longer will the graphics engine have to wait for the …
"Anyone knowing PCI-Express specs should be able to understand that AMD is adding nothing really here."
I'm guessing that you are referring to all that DMA & address hackery stuff that PCI-E provides, and I agree that wouldn't be a new thing for AMD to crow about. I think the "new" bit is attempting to standardise how work is specified and dispatched to the GPU; if this reduces the number of crappy binary-only GFX drivers, that would be a good thing for developers & customers too. :)
I disagree: while it is possible to implement a similar solution using PCIe trickery, it is complex, may not work as efficiently as you'd expect, and will almost certainly break if you change your hardware setup. AMD are introducing a software API which will presumably be both efficient and forward-compatible.
Doesn't this create a whole new means for virus-writers to infect people's computers?
Instead of infecting things through the CPU, a new kind of virus would be able to run on the GPU and not be bothered by things like Address Space Layout Randomisation or even Data Execution Prevention.
No matter how hard you try to dress it up and shout 'New Technology', 'New Architecture', the fact is that main memory is already inadequate for the CPU alone, let alone sharing it with the GPU.
If that wasn't the case, we wouldn't have THREE damned levels of cache between main memory and the CPU, now would we?
Given that a second accessor of memory generates effectively random addresses from the CPU's point of view, those cache lines are absolutely essential. How many levels of cache is the GPU going to get as the next step in trying to make a crap, penny-pinching idea finally work, after all these years?
Actually, you are wrong. The caches exist to hide latency; adding another consumer increases bandwidth utilisation, but that doesn't necessarily increase latency (as long as total demand stays below the maximum, which is almost always the case).
Obviously, you still need RAM dedicated to the GPU for the frame buffer, because there the bandwidth demand is very high, but sharing memory between the CPU and GPU means you no longer need to copy the textures: the CPU can just put them somewhere in RAM, and the GPU will use them in place. You should be able to do the same for shaders.
Faster RAM is being worked on, but for the last 25 years it's always been "just around the corner", whilst dynamic RAM somehow kept moving the goalposts by getting faster and faster.
If/when a new RAM technology shows up, there's a good chance it'll change a lot of things: it'll likely be not only fast but also non-volatile, and that's something even static RAM never really managed to achieve.
I'd say that would result in lower power consumption, but DRAM power consumption is down in the noise compared to the average display's power draw.