The More You Tighten Your Grip...
"GeForce and TITAN GPUs were never designed for data center deployments with the complex hardware, software, and thermal requirements for 24x7 operation, where there are often multi-stack racks."
That's strange. It's almost as if Nvidia has forgotten that numerous enthusiasts have built complex 4-way SLI systems using far more exotic cooling methods (water, liquid nitrogen) to handle extreme thermal requirements than you'd find in any datacenter.
I can think of multiple ways to deploy servers with consumer GPUs outside a traditional datacenter while minimizing the added physical and environmental risks of doing so. For Nvidia to hang all of this on a nebulous "datacenter" concept is farcical.
Isn't 2018 supposed to be the year when organizations begin moving 100% of their workloads to the cloud? And also the year when machine learning explodes into widespread use? With Nvidia seeking to raise the cost of datacenter use of its products by 4,000%, can both be true?