To add on to what Scott mentioned: it's important to know that "speed" has a lot of facets in this kind of conversation. Clock speed only tells you how fast the clock generator runs and how often the chip can accept and process instructions. Things like bit width and efficiency (how much useful work actually gets done per clock cycle) also play a big part.
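To make that concrete, here's a toy calculation (the numbers and the little helper function are made up purely for illustration, not a real benchmark) showing how a chip with a lower clock can still get more done once you account for work per cycle and data width:

```python
# Rough illustration (made-up numbers): effective work per second depends on
# more than the clock. A "slower" chip can outrun a "faster" one if it does
# more per cycle or moves wider data per operation.

def effective_bits_per_sec(clock_hz, work_per_cycle, bits_per_op):
    """Very rough figure of merit: useful bits processed per second."""
    return clock_hz * work_per_cycle * bits_per_op

chip_a = effective_bits_per_sec(4_000_000_000, 1, 32)   # 4 GHz, 1 op/cycle, 32-bit
chip_b = effective_bits_per_sec(2_500_000_000, 3, 64)   # 2.5 GHz, 3 ops/cycle, 64-bit

print(f"Chip A: {chip_a:.2e} bits/s")
print(f"Chip B: {chip_b:.2e} bits/s")   # Chip B wins despite the lower clock
```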
We often use the word "speed" interchangeably when it's not really appropriate. For example, a network link's speed is usually quoted as how many bits per second it can transfer, but that isn't the only measurement that matters: latency can limit your ability to actually realize that potential.
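Here's a quick back-of-the-envelope sketch of that point (assuming a classic 64 KB window with no window scaling and a simple one-window-per-round-trip model, which is a simplification of real TCP; the numbers are just for illustration):

```python
# With a fixed amount of data allowed "in flight" per round trip, achievable
# throughput is capped at window / RTT no matter how fast the link itself is.

def max_throughput_bps(window_bytes, rtt_seconds):
    return (window_bytes * 8) / rtt_seconds

link_rate_bps = 1_000_000_000          # a "1 Gbit/s" link
window_bytes  = 65_535                 # classic 64 KB window, no scaling
rtt_seconds   = 0.080                  # 80 ms round trip

cap = max_throughput_bps(window_bytes, rtt_seconds)
print(f"Advertised link rate: {link_rate_bps / 1e6:.0f} Mbit/s")
print(f"Latency-limited cap:  {cap / 1e6:.2f} Mbit/s")   # ~6.5 Mbit/s
```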
If my memory serves me (and adding to what Scott mentioned), the divergence in the x86 world occurred when the clock generator in the CPU (circa the Intel 486) tried to go beyond 33 MHz. The memory available at the time couldn't operate at frequencies higher than that, and the system boards' circuit designs had problems with EM interference because of right-angle traces and electrons "flying off the track".