Re: Obvious bull
So one scenario is: a file is created on an x86 box and then updated on an Alpha box, so the Alpha system has to be able to compress it in the same way while still meeting the performance criteria.
Why would that matter? If the OS is doing the compression, then it presents the compressed file to any external program as uncompressed. Another box accessing the file over the network would see it as uncompressed too (network compression is a separate technology). And this still doesn't really back up the fairly flimsy assertion that Alphas weren't up to the job. I certainly don't remember this being an issue when the architectures were being compared back in the day. It's certainly not a RISC/CISC issue, at least not if the algorithm is correctly implemented.

But I seem to remember that many of the performance "improvements" (display, networking, printing) in NT 4 were done especially for x86 chips, which suck at context switching. Coincidentally, NT 4 was seen as MS abandoning any pretence of writing a single OS for many different architectures.
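For anyone who hasn't poked at this: a minimal Win32 sketch of why applications never see the compressed bytes (the path is made up, and it assumes an NTFS volume). The file gets marked compressed via the standard FSCTL_SET_COMPRESSION control code, but ReadFile still hands back plain data, because the filesystem decompresses underneath the call.

    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Hypothetical path; must live on an NTFS volume. */
        HANDLE h = CreateFileA("C:\\temp\\demo.txt",
                               GENERIC_READ | GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        DWORD written, got;
        const char *text = "same bytes in, same bytes out";
        WriteFile(h, text, (DWORD)strlen(text), &written, NULL);

        /* Ask NTFS to compress the file in place. */
        USHORT fmt = COMPRESSION_FORMAT_DEFAULT;
        DeviceIoControl(h, FSCTL_SET_COMPRESSION, &fmt, sizeof(fmt),
                        NULL, 0, &got, NULL);

        /* Read it back: the filesystem decompresses transparently,
           so the application sees exactly what it wrote. */
        SetFilePointer(h, 0, NULL, FILE_BEGIN);
        char buf[64] = {0};
        ReadFile(h, buf, sizeof(buf) - 1, &got, NULL);
        printf("%s\n", buf);

        CloseHandle(h);
        return 0;
    }

The point being: whether the on-disk compressor ran on an x86 or an Alpha is invisible at this layer, as long as both implement the same format correctly.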
I'm not a whizz at compression, but I think the common approach now is to use a container format, as opposed to trying to manage compression of individual files in place.
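To illustrate the distinction (just a sketch using zlib's gzip container; the file name is made up): with a container, both the writer and the reader have to go through the container library explicitly, whereas filesystem compression like the NTFS example above happens underneath the ordinary read/write calls.

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void)
    {
        const char *text = "hello hello hello";

        /* Container approach: the data lives inside a compressed
           archive (here a gzip stream). Anything writing it must
           know about the container format. */
        gzFile out = gzopen("data.gz", "wb");
        if (!out) return 1;
        gzwrite(out, text, (unsigned)strlen(text));
        gzclose(out);

        /* Reading back also goes through the container library;
           a naive fopen/fread would see compressed bytes. */
        char buf[64] = {0};
        gzFile in = gzopen("data.gz", "rb");
        if (!in) return 1;
        gzread(in, buf, sizeof(buf) - 1);
        gzclose(in);
        printf("%s\n", buf);
        return 0;
    }

(Build with -lz.) The trade-off is roughly: containers are portable across filesystems and OSes, while in-place filesystem compression is transparent but ties you to that filesystem's format.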