Don't Fedex your tapes, people! We're so fast it's SANdulous – WANrockIT

Bridgeworks says it can make communicating with offsite tape libraries vastly faster than physically shipping tapes, and vastly cheaper if you are using a remote Fibre Channel SAN or site-to-site replication before transmitting data to the tape library. The basic pitch is: "Don't do this. Send your data direct to a SCSI tape …

  1. Phil O'Sophical Silver badge

    When the round trip latency was increased to 360ms, simulating a longer WAN distance, the raw transmission time for 1GB was 42.6 mins while with WANrockIT it decreased to 12 seconds.

    So they're using a bulk-transfer protocol with large window sizes, and possibly separate streams within the physical link? Is this really all that novel?

    How does it compare with FTPing a tape image? Or even the classic "747 full of DVDs (or Blu-rays)"?

    1. John H Woods Silver badge

      "How does it compare with FTPing a tape image? Or even the classic "747 full of DVDs (or Bluerays)"

      The bandwidth of a 747 full of media is well in excess of 10TB/s [1], so the raw transmission time for 1GB is less than a millisecond. A motorcycle courier can manage 1GB/s (i.e. 8Gb/s) London to Edinburgh.

      Ping time is several hours though!

      I cannot remember a time in the past (nor envisage one in the future) when any network had a higher bandwidth than the movement of contemporary physical media.

      [1] A 747 can carry 100 tonnes of cargo (I think), and a 2TB SSD weighs less than 100g including appropriate packaging, meaning 2 Exabytes per Jumbo. Say 8 hours for a LON->NYC flight plus 2 more hours handling time, and that's around 50TB/s, unless my maths is letting me down.
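
      For anyone who wants to check the arithmetic, here's a minimal sketch in Python (all the figures are just the assumptions stated above):

        # Sneakernet bandwidth, back-of-envelope.
        payload_kg = 100_000           # 100 tonnes of cargo
        ssd_tb, ssd_kg = 2, 0.100      # 2TB SSD at ~100g, packaging included
        hours = 8 + 2                  # LON->NYC flight plus handling

        capacity_tb = payload_kg / ssd_kg * ssd_tb   # 2,000,000 TB = 2 EB
        rate_tb_per_s = capacity_tb / (hours * 3600)
        print(f"{capacity_tb / 1e6:.1f} EB per Jumbo, {rate_tb_per_s:.0f} TB/s")
        # -> 2.0 EB per Jumbo, 56 TB/s (the 'around 50TB/s' above)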

      1. SolidSquid

        Quick check of Amazon: a WD Red 6TB weighs 753g, so the weight estimate isn't far off, but you can up the number of exabytes per plane to 6, making it 150TB/s.

        1. John H Woods Silver badge

          @SolidSquid

          In that case, rust is only about 8TB/kg, compared to about 30TB/kg for SSDs.

          100 tonnes of Samsung 850EVO 2TB SSDs at 66g is 3EB which I rounded [1] to 2EB; 100 tonnes of WD Red 6TBs at 753g is only 0.8 EB

          [1] We probably need packing overhead but, in any case, when I'm guesstimating I like to go for what I call 'currency logs': in other words, choose a 1, 2 or 5 followed by a number of zeros. I find this a good compromise between the intuitiveness of 'order of magnitude' and the difficulty, in situations like this, of getting enough precision for even 1 significant figure (although I've never really been sure whether the choice of first digit should include '8').
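
          A minimal sketch of that 'currency log' rounding in Python (the function name is my own invention):

            # Snap a value to the nearest of 1, 2 or 5 times a power of ten,
            # comparing candidates in log space.
            import math

            def currency_round(x):
                exp = math.floor(math.log10(x))
                candidates = [d * 10 ** e for e in (exp, exp + 1) for d in (1, 2, 5)]
                return min(candidates, key=lambda c: abs(math.log10(c / x)))

            print(currency_round(3.03))   # 2   (3.03 EB rounds to '2 EB', as above)
            print(currency_round(55.6))   # 50  (and ~55.6 TB/s to '50 TB/s')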

      2. Trevor Gale

        "I cannot remember a time ... when any networks had a higher bandwidth than the movement of contemporary physical media."

        One has to add in the time taken to transfer all of this data onto the physical media concerned, which might not be insignificant, before there's anything to move or transport.

    2. ClaireB

      No, it doesn't necessarily run with large window sizes; it can run with any window size and still get the performance. Note we do not set the window size; we let TCP/IP do that. This is a single stream to a single tape drive.

      PS We can make FTP run faster but we can't do anything about making a 747 go any faster.

      1. allthecoolshortnamesweretaken

        "We can make FTP run faster but we can't do anything about making a 747 go any faster."

        You and I probably can't, but I'd say Boeing and Rolls-Royce would be able to, if the need arose. Fuel economy would probably suck, though.

      2. Phil O'Sophical Silver badge

        @ClaireB

        The article says "We're told 40ms of latency is approximately half way across the USA. Transmitting 1GB through an unaccelerated link with that latency took 5 mins 20 secs. WANrockIT accelerated it to 11.8 secs."

        Since latency has no actual impact on one-way data transfer speed, any improvement is surely down to delaying acknowledgements, either by using large packets or large windows? Using large packets has a problem on noisy links, since even a small number of retries can kill performance. You're still subject to the laws of physics, and I suspect that a well-tuned TCP/IP stack with judicious packet and window sizes could match these figures, at least for a bulk transfer copy protocol.
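
        A back-of-envelope check (a sketch; the ~128KB effective window is my assumption, not a published figure): single-stream TCP throughput is capped at window/RTT, and that alone reproduces the article's unaccelerated numbers fairly well.

          # Throughput ceiling for one TCP stream: window / RTT.
          GB = 1024 ** 3
          window = 128 * 1024                # bytes; assumed effective window

          for rtt in (0.040, 0.360):         # the article's two RTTs
              throughput = window / rtt      # bytes per second
              minutes = GB / throughput / 60 # time to move 1GB
              print(f"RTT {rtt * 1000:.0f}ms: {throughput / 1e6:.2f} MB/s, "
                    f"1GB in {minutes:.1f} min")
          # RTT 40ms:  3.28 MB/s, 1GB in 5.5 min   (article: 5 min 20 s)
          # RTT 360ms: 0.36 MB/s, 1GB in 49.2 min  (article: 42.6 min)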

        1. Anonymous Coward
          Anonymous Coward

          Re: @ClaireB

          Why would you use a 747? You could use one of those massive container ships and your data rate would be far higher due to the volume even though it is literally "the slow boat to China".

          1. ClaireB

            Re: @ClaireB

            What happens if you have a disaster while your boat is on the high seas?

            1. Anonymous Coward
              Anonymous Coward

              Re: @ClaireB

              What happens if you have a disaster while your boat is on the high seas?

              Use 4 ships in RAID config.

        2. ClaireB

          Re: @ClaireB

          In answer to your question: only if your large packet size and receive window size (RWS) can fill the link completely, i.e. no quiet time. But when you do iSCSI transfers over a network you cannot ignore the SCSI status phase.
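
          To illustrate why that status phase hurts (a sketch with assumed numbers, not Bridgeworks' figures): if each SCSI WRITE waits for its status before the next command goes out, every command pays a full round trip.

            # Naive iSCSI over a WAN: one command in flight at a time.
            block = 256 * 1024    # bytes per SCSI WRITE (assumption)
            rtt = 0.040           # 40ms round trip
            link = 125e6          # 1Gb/s link, in bytes per second

            wire_time = block / link        # time to clock data onto the link
            per_command = wire_time + rtt   # plus the status round trip
            throughput = block / per_command
            print(f"{throughput / 1e6:.1f} MB/s of a {link / 1e6:.0f} MB/s link")
            # -> 6.2 MB/s: the link sits idle most of the time -- the 'quiet
            # time' mentioned above, which is what the box tries to hide.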

  2. Warm Braw

    This is weird

    The previous advertarticle about PORTrockIT has a graph that shows no real benefit from acceleration in the absence of packet loss, regardless of round trip time, but significant benefits in the presence of packet loss.

    As far as I can gather (the company's website doesn't work with Adblock Plus), WANrockIT is basically the same technology in a box, so it's surprising to see these claims relating to improved performance in the face of latency alone. If they can do it for one, why not for the other?

    1. ClaireB

      Re: This is weird

      There are a number of whitepapers on the website which explain the results in detail and why they differ. The previous article referred to SnapMirror and SnapVault, whereas this is native iSCSI.

      1. Warm Braw

        Re: This is weird

        Sorry, so the PORTrockIT stuff is application-specific and the WANrockIT stuff is lower layer?

        While I admit to being easily confused, the white papers refer only to PORTrockIT, and the WANrockIT manuals all point to products that actually appear to be called SANslide, so the differentiation is not immediately obvious...

  3. Martin
    FAIL

    Is this an article or an advert?

    Bridgeworks says this "performance makes off-site tape replication into a realistic and attractive proposition for organisations of all sizes," and that would certainly seem to be true.

    Where on earth is your usual cynicism, El Reg?

    Replace "and that would certainly seem to be true" with "but I for one would like to see these results repeated by someone else before I was in any way convinced."

  4. CAPS LOCK

    LITERALLY incredible...

    ... extraordinary claims need extraordinary evidence to back them up...

    1. Fatman
      Joke

      Re: LITERALLY incredible...

      <quote>... extraordinary claims need extraordinary evidence to back them up...</quote>

      Then PLEASE tell THAT to SCO (aka Caldera)'s lawyers. I had thought that zombie was dead, but it keeps on rising from the grave.

    2. ClaireB

      Re: LITERALLY incredible...

      You are welcome to our labs any time to collect the evidence. When we prove the point, lunch is on you.

  5. Anonymous Coward
    Anonymous Coward

    Ok, cut the cables...

    Sever a submarine cable between the two locations... Then ship the tapes home by conventional means... What do the tests show?

    What fault tolerance is there to pick up and continue a backup after widespread disconnection...? I honestly don't see any rigorous testing here...

    Corporate outsourcing will mean no one notices until 'the IT infrastructure' falls over ... Confidence is not high!

    1. ClaireB

      Re: Ok, cut the cables...

      Yes! That is why you can set primary and failover connections. As for a failure in the middle, we will act exactly the same way as though you had lost an FC connection or an iSCSI connection within a DC. A tape drive failure will be reported exactly as a local tape drive failure would be. We have accounts out there that have subjected our technology to severe fault injection, and we have passed. Oh, and by the way, typically you can read the tape at the same speed you write to it.

  6. drtune

    Graphs show that a shitty solution to a long-solved problem is still shitty

    Right, so you run a protocol that has poor performance on high latency links due to ACKing, and say "look! It's poor!".

    Oh - the 1970s called and they want their due credit for adaptive TCP window scaling...
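
    For reference, a minimal sketch of what 'just use big windows' looks like at the socket level (Python; window scaling per RFC 1323 is negotiated automatically once buffers exceed 64KB):

      # Ask for buffers sized to the bandwidth-delay product:
      # e.g. 1Gb/s at 64ms RTT needs 125MB/s * 0.064s = 8MB.
      import socket

      sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 8 * 1024 * 1024)
      sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8 * 1024 * 1024)
      # The OS negotiates the window scale option at connect time;
      # no application-level 'acceleration' is involved.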
