Re: "whoever successfully builds a reliable, mass-producible qubit"
"Wow, Trevor. Usually I find your comments fairly reasonable, even if I don't agree with them; but you've really lost the plot on this one."
Only because you seem to believe that QC is only good for two different algorithms. I'm far less convinced.
"QC offers very little for the vast majority of mainframe workloads."
"They're rarely CPU-bound in the first place."
Again, agree. That said, however, the few things that are CPU-bound are typically great big database workloads. A huge chunk of that is I/O-bound, but even when you can get enough of the DB into fast-enough memory, you run into CPU issues. This is not only where I think QC can help, it's also one of the things x86 can't really do well. (Power, Itanic et al having largely evolved to deal with these problems while x86 kept on the general-compute path.)
"And the major barriers to replacing mainframes with an 'ultra-resilient x86 cluster' are perceived risk, decades of strange proprietary add-on software and obscure APIs, and customers' lack of knowledge about what they're actually running."
Again, agree. That said, a lot of customers are looking to rewrite and move off onto ultra-resilient x86 clusters. While some of that is possible, a major barrier is the ability to move the great big databases off while still retaining the performance.
"Very few businesses are using mainframes for big-data processing. They may have terabyte databases, but they're not dealing with big-data loads."
An interesting assertion, and not my understanding at all. I am led to believe that many businesses using mainframes are working with ginormous databases that they have to run a large number of searches against; datasets so large that the searches become a problem for x86. I'd be quite happy to be proven wrong on that.
"And QC doesn't help with many big-data problems anyway. Grover's algorithm is optimal, and it runs in O(N^1/2) time and O(lg N) space. So if a search would have taken an hour on a classical computer, it'd take a little under 8 minutes on a QC, all else being equal - and that's only if you have enough qubits. For large N, even lg N starts to become a problem if you're running a lot of simultaneous queries - and if you're not, why is QC useful for your application if the resource is scarce?"
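For what it's worth, the "little under 8 minutes" arithmetic checks out as back-of-the-envelope math. A quick sketch (my assumptions: classical search time scales linearly with N, the quadratic speedup applies directly to wall-clock time, and constant factors, qubit counts and error correction are all ignored):

```python
import math

# Back-of-the-envelope check of the Grover speedup figure quoted above.
# Assumption: classical search time grows linearly with N, so a quadratic
# (sqrt-N) speedup turns T units of search time into sqrt(T) units.
# Constant factors, qubit overhead and error correction are all ignored.

def grover_time(classical_time: float) -> float:
    """Idealised Grover-search time for a job that takes `classical_time`
    units classically: quadratic speedup, nothing else."""
    return math.sqrt(classical_time)

print(grover_time(60.0))  # 60 classical minutes -> ~7.75 minutes
```

sqrt(60) ≈ 7.75, i.e. "a little under 8 minutes" - so the disagreement here isn't over the arithmetic, it's over whether that speedup matters.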
Where QC helps - and for that matter, mainframes too - is searching a large dataset quickly. Traffic simulation and logistics are both reported to me as examples of workloads where, apparently, multi-squillion-dollar mainframes are required and x86 clusters just don't do what is needed.
"As for what QC is supposed to do for 'custom interconnects' I cannot guess."
I don't think QC will replace custom interconnects. I think A3Cube and like setups will commoditise high-speed, low-latency interconnects to the point that there's no longer a need for the custom stuff. Thus the margin will evaporate.
That means that the real money will shift to quantum interconnects as the demand for secure transmission grows. Will that be in-datacenter? Probably not. But in the networking world, I think the margins are going to move away from lashing together servers and towards quantum-secure comms. (Which, apparently, we can now do using mostly regular equipment? I need to investigate that more...)
"Many of the potential customers in our market can't even start to disentangle the thousands of undocumented programs they have on their mainframes, in order to find a subset suitable for a trial migration. Even with the help of source-code application-suite analysis tools. And that's when they have source."
And yet they are trying. They are migrating. A trickle here, a trickle there...and this business is evaporating. What happens when the heavy lifting of the DBs (and their associated gobs of RAM) is no longer needed? When your "mainframe" can be stuffed into 2U + a 4U QC to run all that legacy stuff? I doubt you'll be getting the kind of money for it that you were getting when you could sell two whole racks to do the same job...and that's my point.
QCs on their own are not going to kill the mainframe. They're just one additional wound. Mainframes are dying the death of a thousand papercuts as technology in general makes them no longer relevant.
I just think that QC's ability to deal with big databases, fast factoring and - if my sources are correct - natural language will take away some of the remaining "you need a mainframe for this" workloads...hence stealing the margin.