The wish list for the data centre of the future, according to Register Readers, is heavily focussed on service delivery, with tightly organised pools of resources and massive amounts of virtualisation – and of course, fault-tolerance to the Nth degree. Cloud will play a part in this, naturally. But not in the way some vendors …
The will to create the Data Center of the Future.
The Data Center of the future should be like current utilities. You should access it wherever and whenever you wish at a cost that fades into the background, and you should not care in the least what is supplying it at the back end. Drinking fountains and washrooms are generally free most places you go in our neck of the woods. You don't have to put a quarter in a lamp post to get light. Roadways are expected to be there and expected not to drop you into a sinkhole. It may happen, but the responsibility for dealing with it does not reside with the driver. You can expect to pay for gasoline as you drive the streets, but you are not expected to arrange for a gasoline station to be there. Infrastructure people and the marketplace remove that responsibility from the end user. Traffic lights do not require you to negotiate to get a true answer as to whether they are red, green or yellow.
We need to open all avenues of bandwidth and distribute function truly across the cloud with both social and technological constraints to prevent intrusion, inspection, tampering, etc. For practical purposes, the physical location of data should be entirely transparent and irrelevant for someone accessing data. As long as they have the address and the key it should appear in real-time as needed regardless of where it is permanently stored.
To get where we need to go, we need to overhaul legislation to free up EM bandwidth and physical 'rights of way', make ill-gotten data the 'fruit of the poisonous tree', educate our technical people on the crucial nature of open-ended bandwidth and enlighten them as to genuinely cloud-distributed processing and data. The cloud proper does not have a physical location any more than the number 7 does; that is why it is called 'the cloud'.
Deduplication is something of a failed enterprise, but only because it is done in islands rather than globally. There are easily millions of copies of the same blocks of data. We only need enough redundant copies to secure the data appropriate to its value, and otherwise retrieve it using proper hashes*.
*'Proper hashes' are a research issue. It is very difficult to produce true collision-avoiding hashes across very large vector spaces, and IMO it will likely require hashes much larger than we currently use to be reasonably safe. In fact, I would design an open-ended hash definition so that bit width could scale upward as necessary.
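The open-ended hash idea can be sketched with an extendable-output function. SHAKE256 (in Python's standard `hashlib`) lets the digest width grow without changing the primitive; the function name and the widths chosen here are purely illustrative, not a proposal for actual parameters:

```python
import hashlib

def scalable_hash(block: bytes, width_bits: int = 256) -> str:
    """Content hash whose width can scale upward as collision-resistance
    requirements grow. SHAKE256 is an extendable-output function, so wider
    digests come from the same primitive, and a shorter digest is always a
    prefix of a longer one over the same input."""
    if width_bits % 8:
        raise ValueError("width must be a whole number of bytes")
    return hashlib.shake_256(block).hexdigest(width_bits // 8)

data = b"the same block stored in a million islands"
narrow = scalable_hash(data, 256)  # today's width
wide = scalable_hash(data, 512)    # widened later: extends, never breaks
```

Because shorter SHAKE digests are prefixes of longer ones, existing identifiers stay valid when the namespace is widened, which is the point of an open-ended definition.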
The vast majority of the world's storage, network bandwidth and CPU cycles are evaporating unused and have been doing so for decades. What is used is used wastefully in the most extreme ways. What we need is a body of protocols that allow resources to be rented such that the 'max-min' equation for maximum utility is reached. As we move forward, requirements for CPU/Storage/Bandwidth will continue to outstrip supply and unless we re-architect the cloud, the disparity between what is available and what is usable will likely increase.
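The 'max-min' goal mentioned above is usually formalised as max-min fair allocation, computed by progressive filling: grant the smallest demands in full first, then split whatever remains equally among the rest. A minimal sketch (the function name and the units are hypothetical):

```python
def max_min_fair(capacity: float, demands: list[float]) -> list[float]:
    """Progressive filling: satisfy the smallest demands first, then
    divide what remains equally among the still-unsatisfied renters."""
    alloc = [0.0] * len(demands)
    remaining = capacity
    left = len(demands)
    for i in sorted(range(len(demands)), key=lambda i: demands[i]):
        alloc[i] = min(demands[i], remaining / left)  # capped fair share
        remaining -= alloc[i]
        left -= 1
    return alloc

# Three renters bidding for 10 units of spare CPU: the small demand is
# met in full, the two large demands split the rest evenly.
print(max_min_fair(10, [2, 8, 5]))  # [2.0, 4.0, 4.0]
```

No renter can be given more without taking from someone who already has less, which is the sense in which this recovers maximum utility from otherwise-evaporating resources.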
Let's at least recover the resources we are wasting.
It is a research issue, but my gut tells me that so-called 'IP' is a 'drag' on the system that far outweighs any value it has to the body politic. Rent seekers are a powerful lobby and have already gained control over vast reaches of bandwidth. Rent seeking per se should be outlawed. The existing scientific arts are largely a communal birthright and no group has any moral right to deny the use of them to the rest of us.
As it currently stands, rent-seeking against copyrights and patents destroys incredible amounts of wealth. One need only refer to the RIAA's own figures as to the purported value of their copyrights to see that the RIAA is a punishing and unreasonable drag on the distribution of the value vested in cultural artifacts such as music.
Copyrights, Patents and Trademarks were never intended to benefit alleged 'rights holders' over the body politic. They were proposed as mechanisms to provide an incentive for creators to advance culture. Their intent was to increase aggregate wealth to the benefit of all at the expense of providing limited control to creators so they could realize a profit from their creative work.
To the extent that rent-seeking behavior increases costs of the data center (whether localized or distributed), it acts against the public interest and it diminishes the benefit/cost ratio of the data center.
Removing the various 'rent seeking' 'taxes' for data center hardware patents, software copyrights, data copyrights and network access is an important consideration. A future data center unencumbered by such things would be much more cost effective and function much better than one hobbled by these entirely unnecessary costs.
The 'data center' of the future should not really exist as a localized physical entity. The world's CPU/bandwidth and storage resources should be distributed in hierarchies of 'data furnaces' across the globe as local home/small office/subnets linked by high-bandwidth buses, community way stations and higher collective facilities dictated by demand/cost/benefit.
For the foreseeable future, bandwidth and latency will likely dictate the structure of the network. The need for physical proximity of CPU and storage will result in much greater quantities of both at the edges of the network, and the leaves at the edges can therefore be expected to contribute their less latency-sensitive resources back to the global pool.
It is technically possible to have a significantly higher level of security than we have now. However, we can never entirely secure anything, in my opinion, because the fundamental need for feasible 'reachability' necessarily entails a security weakness. Therefore, we need the sternest of legislation making improperly obtained data of any kind unusable by unauthorized parties.
We have, in my opinion, already a dangerous situation with respect to global governance of the Internet. A major priority should be to design protocols that are exceedingly difficult for hostile parties to interfere with. Furthermore, we need a more sophisticated protocol suite that allows the Internet to route around bad players, particularly the very strongest ones such as 'super power' states like China and the U.S.
My vision is one of an internet where packet source and destination are exceedingly difficult to determine by inspection, known only to the sender and receiver, with a 'sender pays' model that allows redundant routing commensurate with the privacy required.
With respect to hardware nuts and bolts, we need to design standardized reusable components similar to the standardization of things like fasteners (screws, bolts, etc). This is entirely doable, but our current economic incentives militate against this.
TCP/IP lifts data transmission conceptually off the hardware. In theory, a TCP/IP packet could be routed with pen and paper or even by word of mouth. With the exception of QOS deficiencies and the disastrous 32-bit addressing scheme, it has done a wonderful job of forcing a usable 'lingua franca' amongst the world's devices. It has also proven to be remarkably self-healing for such a crude protocol. I believe a similar upgrade to communications protocols could lift this higher still to provide future-proof addressing, packet opacity and QOS routing based solely on a user-pay model.
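One hedged illustration of 'future-proof addressing': rather than a fixed 32-bit field, a length-prefixed address lets widths grow from IPv4's 4 bytes to IPv6's 16 and beyond without changing the framing. The one-byte length prefix below is an assumption for the sketch, not any existing wire format:

```python
def pack_addr(addr: bytes) -> bytes:
    """Frame an address as <1-byte length><address bytes>, so the width
    can grow (4-byte IPv4, 16-byte IPv6, wider still) without a new format."""
    if not 0 < len(addr) <= 255:
        raise ValueError("address must be 1..255 bytes")
    return bytes([len(addr)]) + addr

def unpack_addr(buf: bytes):
    """Return (address, rest_of_buffer)."""
    n = buf[0]
    if len(buf) < 1 + n:
        raise ValueError("truncated address")
    return buf[1:1 + n], buf[1 + n:]

# An IPv4-width and an IPv6-width address share one framing.
v4 = pack_addr(bytes([192, 168, 0, 1]))
v6 = pack_addr(bytes(16))
addr, rest = unpack_addr(v4 + b"payload")
```

A production design would more likely use a variable-length integer for the length (as QUIC does), but the point is the same: address width becomes data rather than a fixed field baked into the protocol.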
Nothing is perfect, but even a shaky protocol like agreed upon keys and reasonable pseudo random pads with random salts and encrypted time-stamps from time servers would allow packets to be significantly more SPAM-Proof. That, coupled with a non-trivial sender-pay charge to defray network costs and 'buy' the user's attention would be likely to kill off what we currently know as SPAM. IIRC, eliminating SPAM would return a significant percentage of the world's bandwidth back to its owners.
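As a rough sketch of the stamped-packet idea, assuming only a pre-shared key (the 'agreed upon keys' above): a salted, keyed MAC over a timestamp lets a receiver cheaply discard unsolicited or stale traffic. `SHARED_KEY`, `MAX_AGE` and the `|`-delimited framing are all hypothetical, and a real design would add the pads and encrypted time-server stamps the comment describes rather than relying on plain HMAC:

```python
import hashlib
import hmac
import os
import time

SHARED_KEY = b"pre-agreed key between sender and receiver"  # hypothetical
MAX_AGE = 300  # seconds a stamp stays fresh (assumed value)

def stamp(message, now=None):
    """Attach a salted, keyed timestamp so senders without the key
    cannot forge packets the receiver will accept."""
    ts = str(int(now if now is not None else time.time())).encode()
    salt = os.urandom(16).hex().encode()  # hex keeps the framing safe
    tag = hmac.new(SHARED_KEY, salt + ts + message,
                   hashlib.sha256).hexdigest().encode()
    return salt + b"|" + ts + b"|" + tag + b"|" + message

def verify(packet, now=None):
    """Return the message if the stamp is genuine and fresh, else None."""
    try:
        salt, ts, tag, message = packet.split(b"|", 3)
        age = (now if now is not None else time.time()) - int(ts)
    except ValueError:
        return None
    expect = hmac.new(SHARED_KEY, salt + ts + message,
                      hashlib.sha256).hexdigest().encode()
    if hmac.compare_digest(tag, expect) and 0 <= age <= MAX_AGE:
        return message
    return None
```

Forged or tampered packets fail the MAC check, and replayed ones go stale after `MAX_AGE`; combined with a sender-pays charge, bulk unsolicited mail stops being free to send.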
Everything described above is doable with current technology and is consonant with at least the spirit of our common law legal heritage and the culture that supports it. What is required to implement such a thing is simply the collective will to do so.
I see it another way - in future the smart people will build their infrastructure to be "abandonable" i.e. if it goes down it doesn't matter because it's all replicated in 4 other continents and they can pick up the traffic instantly. You know - what the "cloud" is supposed to be, and how companies like Google amongst many others do it.
Simply a rebranding of the mainframe?