Perhaps I misunderstood, then
The statement "By adding the two processors, the sequential code can run on the CPU, the parallel code can run on the GPU, and as a result you can get the benefit of the both. We call it heterogeneous computing" seemed to indicate that the collection of processors was still seen as separate processors; perhaps in the same chip package, but still separate. It still refers to the CPU and GPU as different parts of a whole.
My thought was not of a collection of independent GPU and CPU processors, but rather a single processor chip with a few fast cores and many parallel cores, all in one package. Not a large collection of mostly-fast cores, but a few very fast cores alongside a lot of slow, massively parallel cores. Instead of choosing from the beginning to write a program for a CPU or a GPU, a programmer would write however he or she pleased. Then either the compiler, or the processor itself in real time, would choose which instructions get executed on which cores.
Further, if the fast cores were overtaxed, a few of the single-core instructions could run on the massively parallel cores. If the parallel cores were saturated, a few parallel instructions could go through the fast cores; the processor could balance itself.
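To make the balancing idea concrete, here is a very rough software sketch of that dispatch policy. Everything here (the queue capacities, the `dispatch` function, the "serial"/"parallel" task tags) is invented purely for illustration; a real chip would do this in hardware or in the scheduler, not in Python:

```python
from collections import deque

# Invented capacities: a few very fast cores, many slow parallel cores.
FAST_CAPACITY = 4
WIDE_CAPACITY = 64

fast_queue: deque = deque()
wide_queue: deque = deque()

def dispatch(task_kind: str) -> str:
    """Route a task to its preferred core type, spilling over when that
    side is full -- the self-balancing behavior described above."""
    if task_kind == "serial":
        preferred = (fast_queue, FAST_CAPACITY)
        fallback = (wide_queue, WIDE_CAPACITY)
    else:  # "parallel"
        preferred = (wide_queue, WIDE_CAPACITY)
        fallback = (fast_queue, FAST_CAPACITY)

    queue, cap = preferred
    if len(queue) < cap:
        queue.append(task_kind)
        return "preferred"

    # Preferred side overtaxed: spill to the other core type.
    queue, cap = fallback
    if len(queue) < cap:
        queue.append(task_kind)
        return "spilled"

    return "stalled"  # both sides saturated
```

The point of the sketch is only the routing rule: work normally goes to the matching core type, but neither side sits idle while the other is overloaded.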
This is unlike, for example, the MIC processor, which is basically a GPU that runs x86 instructions, or the possible ARM SoC, which is merely a CPU and a GPU put in the same package, no more tied together than if they were separate chips.
Maybe I'm seeing a difference where there is none, but it seemed clear at the time :-)