A research project has produced an algorithm for making the best use of cloud-scale data centre server flash caches and for identifying which workloads are best placed entirely in flash. The work was presented yesterday at the USENIX conference under the title "Janus: Optimal Flash Provisioning for Cloud Storage Workloads". In Roman …
Don't suppose anyone here has read the paper?
This looks like decent work, but I'm curious how, and to what extent, it differs from the past couple of decades' work on unified hierarchical storage management, particularly the work using self-tuning algorithms.
In 1991, AIX 3.0 had a unified VMM that handled physical memory, virtual memory, and filesystem caching (with virtual filesystems spanning multiple volumes) and treated it all uniformly - filesystem writes were handled just like physical-memory pages being paged out to disk. Around the same time, hierarchical storage management (HSM) was a popular research topic too, with people working on algorithms for optimal allocation of large data sets across RAM, disk, and slower media such as tape and optical.
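The common shape of this kind of tiered-allocation problem, both in the old HSM work and (going by its abstract) in Janus, is: each workload has a diminishing-returns curve of benefit versus fast-tier capacity, and you divide a fixed budget to maximize total benefit. A toy sketch, not taken from the paper - the curve shape, the scale numbers, and the `provision` function are all my own illustration:

```python
# Toy sketch (not from the Janus paper): greedily divide a fixed
# fast-tier budget across workloads, always giving the next chunk to
# whichever workload gains the most extra read hit rate from it.
# With concave (diminishing-returns) benefit curves, this greedy
# choice yields an optimal split.

import math

def hit_rate(alloc_gb, scale):
    """Hypothetical concave cacheability curve: fraction of reads
    served from the fast tier as a function of capacity allocated
    to this workload (smaller scale = hotter working set)."""
    return 1.0 - math.exp(-alloc_gb / scale)

def provision(budget_gb, scales, chunk_gb=1.0):
    alloc = [0.0] * len(scales)
    remaining = budget_gb
    while remaining >= chunk_gb:
        # marginal benefit of one more chunk for each workload
        gains = [hit_rate(a + chunk_gb, s) - hit_rate(a, s)
                 for a, s in zip(alloc, scales)]
        best = max(range(len(scales)), key=lambda i: gains[i])
        alloc[best] += chunk_gb
        remaining -= chunk_gb
    return alloc

# e.g. 100 GB of flash split across three workloads whose benefit
# curves saturate at different rates
print(provision(100.0, scales=[10.0, 40.0, 200.0]))
```

The hottest working sets don't simply take the whole budget: once their curves flatten, the marginal chunk is worth more to a cooler workload, which is the same intuition behind deciding which workloads deserve to live entirely in flash.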