Doesn't sound lossless to me - more like a complete transcode to a different format that just happens to keep the same file name.
Ocarina can now compress Flash and MPEG2 videos with no visible loss of quality. The company can now reduce the amount of data - it doesn't call it deduplication - in 900 file types, some of which simply cannot be reduced at all with deduplication or standard compression techniques. Its addition of Adobe Flash video formats …
Hope they tested against today's encoders, which have already achieved that 40% improvement over the last 5 years or so. They've nearly hit that level of improvement on Freeview over the last 2 years.
Surely they wouldn't rig the samples with old, badly compressed mpeg2 to make their transcoder/recoder look good... would they?
[I work for Ocarina]
Thanks for everyone's comments.
Just to clarify what we're doing: for web-distributed file types (GIF, JPG, FLV {H.264}) we optionally use lossy techniques. Some lossy opportunities we use are reduction of non-visual info (e.g. Huffman table optimization), spatial optimization (aligning DCT quantization with the human visual system), better macroblock management, motion compensation, and more intelligent VBR. Our intent is to never introduce visual artifacts, and we even have some portrait studios (making big before/after prints) who have validated the algorithms. With this 'visually lossless' approach, we keep the files on disk in their native format so customers can capture benefits not just in storage savings, but also bandwidth reduction and page-load-time improvements.
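To make the "non-visual info" point concrete: Huffman table optimization shrinks a file without touching the decoded content, because an entropy table tuned to the actual symbol frequencies produces fewer bits but decodes to exactly the same symbols. This is a toy sketch of that idea in plain Python (not Ocarina's implementation); the `huffman_code` helper and the `abracadabra` data are illustrative assumptions.

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a prefix-free Huffman code (symbol -> bitstring) from frequencies."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)  # unique tiebreaker so tuples never compare dicts
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

data = "abracadabra" * 10          # stand-in for a stream of DCT symbols
freqs = Counter(data)
code = huffman_code(freqs)
inverse = {b: s for s, b in code.items()}

# Encode with the frequency-optimized table, then decode bit by bit.
encoded = "".join(code[s] for s in data)
decoded, buf = [], ""
for bit in encoded:
    buf += bit
    if buf in inverse:
        decoded.append(inverse[buf])
        buf = ""
assert "".join(decoded) == data   # identical content: the optimization is lossless

# A generic fixed-length table needs 3 bits/symbol for this 5-symbol alphabet.
fixed_bits = len(data) * 3
print(f"fixed table: {fixed_bits} bits, optimized table: {len(encoded)} bits")
```

Real JPEG/FLV streams work on DCT coefficients rather than letters, but the principle is the same: the table changes, the decoded output doesn't.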
For production workflows where loss generally isn't desired, we apply a fully bit-for-bit lossless workflow and use all the proprietary compression we can for maximum reduction. For ingest formats like DV we can get 50% or more. For MPEG2 we're seeing around 20-30% at beta customers, enough to be meaningful for, say, a broadcaster's archival system.
And we definitely don't rig the tests ;-) The results are based on customer data-sets only. We work across a thousand file types (so far), and no one here has time to craft a bunch of application-specific data-sets from scratch. Results will vary from customer to customer, and someone who is a real codec expert can almost certainly approximate our results on a specific file type. But we find in practice people don't do that, and that still doesn't provide a scalable dedupe & compression platform that also works on the hundred other file types in a given customer's workflow, and integrates well with their existing storage system.
I wrote the white paper on Native Format Optimization that talks about the visually lossless approach. I think you have to fill out a form to get it, but you can check it out at www.ocarinanetworks.com
The man (or woman) who works for Ocarina says "we keep the files on disk in their native format", which means they're looking for "peephole optimizations" in the encoding of the MPEG format, while fully complying with the MPEG specs. The MPEG specs have some features that most encoders don't bother to make use of because they're too much work for too little benefit. That's especially true of MPEG4's H.264.
When you reduce the filesize by 20% using commonplace free software for MPEG and JPEG today, the visible degradation is usually negligible. You'll be able to see differences between the two images compared side by side, all right, but you'll have to scrutinize with great concentration to convince yourself that the smaller of the two looks inferior. The differences are very subtle.
In my opinion, the stuff he (or she) is talking about is not worth paying money for because it's too minor.
Agreed that this feels like an advert, but disagree that it's useless - when you add up all the time and effort my company has spent on storage management over the past few years, manually re-encoding images and videos or compressing files, a system like this that does it automatically for our millions of files would be worth it.