IBM's secret weapon to battle Google, Amazon, Microsoft clouds: A HOLLYWOOD STAR

IBM has bought Emmy-winning bulk data transfer biz Aspera to ease the shipping of large files from on-premises boxes to remote data centers including those operated by cloud providers. The acquisition was announced on Thursday, and will see Big Blue put Aspera's UDP-based "fasp" transfer protocol and associated software-only …

COMMENTS

This topic is closed for new posts.
  1. Nate Amsden

    sounds pretty cool

    and useful tech to have. Wonder what the cost is? On a kinda-sorta-semi-related-but-maybe-not note, I wrote a blog post last year about astonishing WAN performance with scp and rsync-over-ssh across a Dell SonicWall VPN. On highly compressed files, no less, I was able to sustain greater than 10 megabytes a second between Atlanta and Amsterdam (~95ms of latency) on a single connection. Outside of the VPN, throughput was closer to what one might expect: around 600-800 kilobytes a second. No WAN optimization functionality was enabled on the SonicWalls (and support confirmed that even if it were, there are no protocol optimizations for SSH, which is of course encrypted in itself).

    Both sides have a gigabit link. I've never been able to get a good answer out of Dell as to how this is possible, but I've repeated it again and again and again over many months. I can transfer ~250GB in a matter of hours between the sites, which is literally ~3x faster than the Atlanta facility can transfer to the Amazon cloud on the east coast (throughput there is of course limited by latency). And it's far simpler: I just scp <filename> or rsync -ave ssh <filename>.

    I'm sure if I were able to dive into the TCP packets I would discover the answer, but I'm not that tech-oriented when it comes to networking. I confirmed with multiple network gurus who know a lot more than me that this performance is unexpected.
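
    That said, one sanity check that doesn't need packet captures: the non-VPN rate is almost exactly what a single TCP stream with a 64KB window predicts at that latency (the window size is my assumption, not something I measured):

        # Single-stream TCP throughput is capped at roughly window / RTT.
        rtt = 0.095                         # ~95ms Atlanta <-> Amsterdam
        window = 64 * 1024                  # assumed 64KB effective window
        print(window / rtt / 1024)          # ~675 KB/s -- right in the 600-800 KB/s band

        # To sustain 10 MB/s on one connection at this RTT, you would need
        # roughly a megabyte of unacknowledged data in flight:
        print(10 * 1024 * 1024 * rtt / 1024)    # ~970 KB window required

    So whatever the SonicWalls are doing, the effect is the same as if the window got a lot bigger inside the tunnel.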

    If you're interested in reading more, I won't post the link directly, but you can find it with a Google search for "freakish performance sonicwall". It attracted the attention of Dell themselves at one point, but they were never able to get me in touch with someone senior enough to explain the situation (they promised they would on several occasions).

    On another slightly-related-but-not-really note: a few years ago I wrote a distributed file transfer system for a company that, among other things, leveraged HPN-SSH, which is a WAN-optimized SSH. Combined with the (at the time unique, I think) ability to disable encryption for data transfers (while maintaining encryption for authentication), it made for a very scalable and incredibly reliable (much more so than I was expecting) system. The files being transferred were basically compressed Apache-style access logs with tons of cookie info for an advertising company, and it moved probably 10TB a day (~3TB after compression) from multiple sites. The files were split up based on customer ID, so the file distribution system was built to automatically send files in parallel, making for even better throughput; the fan-out logic is sketched below. Load-balanced SSH/rsync servers with a shared common key at the central storage area received the files. It was a pretty fun project.
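
    Roughly the shape of it in Python (this is just a sketch, not the original system -- hostnames, paths and worker count are all made up):

        #!/usr/bin/env python3
        # Push each customer's spool directory to one of the load-balanced
        # receivers, several transfers in parallel, rsync over ssh.
        import subprocess
        from concurrent.futures import ThreadPoolExecutor
        from itertools import cycle
        from pathlib import Path

        RECEIVERS = ["storage-a.example.com", "storage-b.example.com"]  # illustrative
        SPOOL = Path("/var/spool/transfers")   # one subdirectory per customer ID

        def push(job):
            customer_dir, host = job
            # -a preserves attributes; --partial lets an interrupted
            # transfer resume instead of starting over.
            result = subprocess.run(
                ["rsync", "-a", "--partial", "-e", "ssh",
                 f"{customer_dir}/", f"{host}:/incoming/{customer_dir.name}/"])
            return result.returncode

        def main():
            jobs = zip(sorted(d for d in SPOOL.iterdir() if d.is_dir()),
                       cycle(RECEIVERS))
            with ThreadPoolExecutor(max_workers=8) as pool:
                failed = sum(1 for rc in pool.map(push, jobs) if rc != 0)
            print(f"{failed} transfer(s) failed")

        if __name__ == "__main__":
            main()

    The real system ran the data channel unencrypted via HPN-SSH, which a stock rsync -e ssh obviously doesn't; the sketch just shows the parallel fan-out to load-balanced receivers.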

    So if you're in the market for doing large data transfers over SSH across WAN connections, consider Dell SonicWall and/or HPN-SSH ...

  2. Anonymous Coward
    Go

    Bad news is that I was hoping that this article somehow involved Scarlett Johansson...

    The good news is that apparently you can win an Emmy for IT.

    Start practicing those acceptance speeches, people!!

    1. Hud Dunlap

      @Marketing Hack (TI has one)

      http://www.ti.com/corp/docs/company/history/timeline/semicon/1990/docs/98dlp_hornbeck_emmy.htm

      It is in a display case at one of their Dallas fabs.

  3. dan1980

    Game unchanged. Unsurprisingly.

    The last paragraph is the kicker.

    Whenever I speak to a client about migrating an application to an online provider, I always start with an ideal transfer rate - assuming that the data will transmit at the full line rate - and work from there. There's no point even testing if the theoretical maximum is too slow!

    Last time, the estimate was 500 hours - 3 weeks. Even then, it would actually have taken closer to 4 weeks because, after that initial transfer finished, we would have had to send the changes that accumulated in the meantime. That would have taken a few days, during which more changes would pile up to be sent, and so on, until the time decreased to the point where we were sending just one day of changes and could therefore cut over with some measure of control.
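
    To make that concrete (all numbers here are illustrative, not the client's actual figures): with a dataset D, an ideal line rate R, and changes accruing at c per day, each catch-up pass shrinks by a factor of roughly c divided by what the line can move in a day:

        day = 86400
        D = 5 * 1024**4            # assumed: 5TB to migrate
        R = 25 * 1024**2 / 8       # assumed: 25Mbit/s line, in bytes/sec
        c = 50 * 1024**3           # assumed: 50GB of changes per day

        t = D / R / day            # initial sync, in days (~19 days, i.e. ~466 hours)
        total = t
        while t > 1:               # repeat until one pass covers <= a day of changes
            t = (c * t) / R / day  # time to resend what accumulated during the last pass
            total += t
        print(f"cut-over possible after ~{total:.1f} days")   # ~23.8 days, call it 4 weeks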

    Of course, none of that is to say that this technology is not impressive, but there is a big difference between speeding up data that already has to be transferred over the wire and being a 'game changer', which I would equate with enabling a network transfer where, previously, couriered HDDs were the only practical option.
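
    For scale, the courier option sets a high bar (drive size and shipping time assumed here):

        bits = 4 * 1024**4 * 8                       # one 4TB drive
        print(f"{bits / 86400 / 1e6:.0f} Mbit/s")    # ~407 Mbit/s if it arrives in 24 hours

    A protocol tweak has to beat that, sustained, before the couriers are out of a job.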

  4. MartinBZM
    Black Helicopters

    Never ...

    underestimate the bandwidth of a swallow with a USB stick tied to its leg!

    1. Anonymous Coward

      Re: Never ...

      We use RAID arrays tied to rocket-powered ostriches.

