HPC storage supplier Panasas says its DataRunner transfer software can move files between systems and sites up to 16 times faster than standard network transfer protocols, such as FTP and rsync. Panasas' CMO Barbara Murphy said in a canned quote: “DataRunner assures high-speed, secure data movement while maintaining up to 100 …
"The supported operating systems are RHEL, SUSE Linux Enterprise Server, Centos and Mac OS"
If data transfer software is going to be successful, it needs to support every OS under the sun, from z/OS and OS2200, through all the minicomputer offerings and all the commercial and non-commercial UNIXes, to Windows, Linux, Mac OS and preferably all the obscure OSes you've never heard of. It needs to support every possible permission system, exotic encryption methods and external scheduling. Linux and Mac OS are not good enough, even if it can go 16 times faster than FTP.
Must be out for VC cash
When I see:
"secure data movement while maintaining up to 100 per cent bandwidth utilisation" I know they are playing buzzword bingo.
Sorry I don't want 100% use, I want less. It's easy to flood a link to 100%, but if 99% is useless crap, what does it matter?
Hook, line and sinker
Sent to me anonymously:-
Congratulations on taking the press release bait hook, line and sinker. This really is a complete troll release.
Do I have to spell it out to you? They're claiming they can be "maintaining up to 100 per cent bandwidth utilisation", which means simply stuffing the line as full as it will go. This is emphatically not a design goal of rsync, quite the opposite really.
If this is their design goal then they ought to try and compare with bittorrent, a protocol designed to exploit weaknesses in TCP's ideas of "fairness" (confused? ask Andrew for the Briscoe paper redux) to stuff the line as full as it will go.
By contrast, rsync tries to reduce the need to transfer anything to the absolute minimum, preferring to leave the line idle while it works out what can be safely skipped. So this comparison is a little dishonest, to be quite unduly charitable about it.
Yet you bought it and wrote a nice little piece regurgitating their lies. How nice.
My setup (for uploading sites) is based on scripted rsync, and it's (realistically) over 50 times faster than FTP (which is just painful for 10,000 files etc). Considering it easily saturates the line (apart from a few seconds of working out what to send) I don't see how you can get a 16x improvement. Unless rsync normally only uses 1/16th of the bandwidth, which I really don't think it does.
Being faster than FTP is also hardly an achievement.
Who sane would FTP 10,000 files without zipping them up first anyway?
Perhaps El Reg should clearly mark any articles that are essentially advertising, much as they are reporting that the US is requiring search engines to.. http://www.theregister.co.uk/2013/06/26/ftc_tells_search_engines_to_distinguish_ads_from_results/
Mark articles as advertising ..
> Perhaps El Reg should clearly mark any articles that are essentially advertising
You read my mind ...
up to 16 times faster
But the graph shows Sydney improving from 43 minutes to 2 minutes, which is a speedup of 21.5x
So is the graph a lie then?
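The arithmetic behind that objection, for anyone checking at home (assuming the chart really does show 43 minutes dropping to 2 for Sydney):

```python
# Speedup implied by the article's chart for the Sydney transfer
ftp_minutes = 43        # time over plain FTP, per the chart
datarunner_minutes = 2  # same transfer with DataRunner, per the chart

speedup = ftp_minutes / datarunner_minutes
print(speedup)  # 21.5 -- rather more than the "up to 16x" headline
```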
> 100 per cent bandwidth utilisation regardless of network conditions, file size, distance or latency
Now that's an obvious pork pie!
Are we supposed to be impressed that the vendor's dedup software has dedup'd the vendor's own test data so well?
here's what happened
marketing droid: "You... engineer. What's the performance figures? I need to write some marketing blurb"
engineer: "well, with dedup, it really depends on the input"
marketing droid: <confused silence>
engineer: "ok, well if we put highly compressible data in, it gets highly compressed. If there's millions of copies of the same file, we'll get an improvement of millions of times over. But that's not a likely scenario."
marketing droid: "so you're saying I can put anything I like?"
engineer: "yes but it would be misleading"
marketing droid: "Good. What's the biggest number that won't be totally obviously a lie?"
engineer: "sigh. 16 is a computery number, use that"
Something wrong with those figures
Roughly 17 minutes to transfer 10GB, allowing for latency, on a 1Gb/s connection.
Assuming you are actually "utilising 100% of the bandwidth" (as they claim) ...
1Gb/s = raw throughput of roughly (give or take) 1GB every 10 seconds (8 bits per byte, plus any overheads).
Tested locally on my 1Gb/s LAN connection, using roughly 80% (give or take some variation) of my bandwidth, I can push 1GB from an SSD on my local machine to a 5-disk NAS in under 20 seconds (I do this all the time).
Given that the internet inherently comes with "excessive latency" (let's just call it that for argument's sake) ... I'll assume an additional 50% on my time over a clean direct line.
So that's 30 seconds per GB x 10GB = 300 seconds (about 5 minutes).
So actually, at their fastest speed on 1Gb/s, they are only using about a third of the available bandwidth.
Their slowest being a lot worse.
That said ....
I have a 130Mb/s internet connection and using BitTorrent I can stream 10GB down in 40 mins easy (again, something I do all the time)!
So are they even using 1/10th of the bandwidth?
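The sums above, spelled out (my assumptions: a 10GB payload, the commenter's 30 s/GB LAN-plus-latency estimate against the claimed 17 minutes, and decimal gigabytes throughout):

```python
# Back-of-envelope check of the figures in the comment above.
MB_PER_GB = 8_000  # megabits per gigabyte (decimal, ignoring protocol overhead)

# Claimed: ~17 minutes for 10GB on a 1Gb/s link
claimed_seconds = 17 * 60            # 1020 s
estimated_seconds = 30 * 10          # 30 s/GB (LAN time + 50% latency fudge) x 10GB
print(estimated_seconds / claimed_seconds)  # ~0.29 -- roughly a third of the line

# The BitTorrent comparison: 10GB in 40 minutes on a 130Mb/s connection
bt_rate_mbps = 10 * MB_PER_GB / (40 * 60)   # ~33 Mb/s average
print(bt_rate_mbps / 130)                   # ~0.26 of that link's capacity
```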
Re: Something wrong with those figures
Yeah, but that's because LAN latency is 100-1000 times better than internet latency; a 50% slowdown is pretty conservative.
These are a dime a dozen now and seem to possibly license Aspera's original FASP technology, which utilises some kind of UDP flood to send globs of data from one boring place to another far-off boring place. When you have latency in the 500ms+ range and loss of more than a few per cent, TCP transfers like FTP just can't ever get anywhere near their theoretical speed, no matter what the bandwidth between your sites. Pretty sure movie studios have been relying on this sort of tech for ages. I tested tech like this vs FTP between Australia and Bangkok and the improvements were pretty massive.
(ignoring marketing droid wang, the tech seems to work pretty well in general)
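For the curious, the collapse of TCP over long, lossy links is captured by the well-known Mathis et al. approximation, throughput ≈ MSS / (RTT × √p). A rough sketch (the path numbers here are illustrative, not Panasas's test conditions):

```python
from math import sqrt

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Mathis approximation: achievable steady-state TCP throughput in bits/s."""
    return (mss_bytes * 8) / (rtt_s * sqrt(loss_rate))

# A Sydney-to-LA sort of path: 1460-byte segments, 300ms RTT, 1% loss
print(tcp_throughput_bps(1460, 0.3, 0.01) / 1e6)  # ~0.39 Mb/s -- on a 1Gb/s link!
```

Which is exactly why the UDP-flood crowd can post huge improvements over FTP on intercontinental paths without any dedup trickery at all.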
New trans-oceanic cables in use?
The chart implies they are running Gigabit Ethernet from LA to New York, Rome, Sydney and Tokyo.
I thought GbE cables had a limit of 100m(?). How many repeaters did they use? And how did they get them to work with all those oceans to cross?
Re: New trans-oceanic cables in use?
Gigabit OVER CAT5/6 may have distance limitations of about that (I stand to be corrected).
For 10Gbps (never mind 1Gbps) over FIBRE it is circa 300 metres for short range, and a lot, lot longer for long range. And that's 10Gbps I'm talking about.
(Off the top of my head 10Gbps/Infiniband over CX4 copper is limited to 25 metres)
Guess who had his head in wiring cabinets this morning?
You can now buy cables to extend InfiniBand across campus distances using fibre.
Take a look at SuperJanet, Internet II
or if you are in the movie industry Sohonet http://www.sohonet.com/
SIXTEEN times faster than FTP
"DataRunner assures high-speed, secure data movement while maintaining up to 100 percent bandwidth utilization regardless of network conditions, file size, distance or latency."
If it does what it says on the tin, then I'm impressed ...