Toshiba to demo vid streaming without any work by the CPU

Toshiba's NPEngine hardware directly streams video from SSDs to IP networks without using host server CPU cycles or memory. Tosh claims the dedicated hardware delivers up to 64,000 x 40Gbit/sec video streams – way more than the 20,000 or so an average 2U server is said to be able to stream. The Toshiba hardware, a server card …

COMMENTS

This topic is closed for new posts.
  1. Sarev
    Headmaster

    I think you mean...

    > delivers up to 64,000 x 40Gbit/sec video streams

    Jesus - a 40Gbit/sec video stream - is that UltraHD or something?! I think you mean it can deliver up to 64,000 streams over its (up to) four 10Gbit/sec network interfaces. The streams themselves are likely to be of somewhat lower bandwidth.

    1. Andrew Garrard

      Re: I think you mean...

      I'd assumed it was a typo for 40Mbit/sec video streams, 40Mbit/sec being the upper limit for Blu-Ray content. But in retrospect, at least one other version of this report says "64,000 streams at 40Gbit/sec" meaning *total*. So I buy your version.
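
      A quick back-of-the-envelope check of both readings (my own sketch; the 40Mbit/sec figure is just the Blu-Ray assumption above):

        # Sanity-check the two readings of "64,000 x 40Gbit/sec".
        streams = 64_000

        # Reading 1: 40Gbit/sec is the card's TOTAL output (4 x 10GbE ports).
        total_bps = 4 * 10e9                    # four 10Gbit/sec interfaces
        per_stream = total_bps / streams
        print(f"{per_stream / 1e6:.3f} Mbit/sec per stream")   # ~0.625 Mbit/sec

        # Reading 2: every stream is 40Mbit/sec (Blu-Ray's upper limit).
        aggregate = streams * 40e6
        print(f"{aggregate / 1e12:.2f} Tbit/sec aggregate")    # ~2.56 Tbit/sec

      0.625 Mbit/sec per stream is entirely plausible for web video, so the "total" reading holds up; 2.56Tbit/sec aggregate plainly doesn't fit through 40G of ports.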

  2. James Cooke
    WTF?

    Personal use?

    Really, I don't see where you get personal use as part of this. Regular PCs can offload things like h256 to their graphics card for decoding, and similarly mobile devices have specialised decoders.

    Maybe pro video editors will want their CPU/GPU doing something while they're watching video, but the rest of us will be quite happy without that extra cost.

    1. Anonymous Coward
      Meh

      Re: Personal use?

      I think the personal use angle is just that the end user gets a stream from the server; but yes, you're right, the end user with a PC etc. is not going to get excited by this at all.

      Now what it does do is offer (assuming the claims hold) smaller video streaming servers using less power. Which is funny in many respects, as in the really early days video streaming was in effect down to dedicated hardware decoding chips; that all started to change to software around the introduction of the Pentium, and now we're back again, flip-flopping between software and dedicated hardware. One of the main reasons for the shift to software was the introduction of the MMX instructions, which enabled CPUs to take on some of the processing being done by dedicated chips. Now, with the cost of electricity not getting any cheaper, the appeal of lower-wattage chips keeps growing.

      Another aspect to all this is that over the past 10-15 years a lot of standards have been improving (as they always do) and are now mature enough that you can implement them in dedicated chipsets knowing they won't be utterly superseded before the year is out. Or in other words, the balance is there.

      So for end-user consumer kit this won't do anything, as the savings won't pan out for something it can already do. But for dedicated servers that are there just to stream to end users, this does allow for smaller kit and lower power usage. That comes at the cost of the flexibility a normal CPU solution offers, but for the production life of the product I don't see that being an issue at all.

      Questions I have that would affect how a lot of us read this:

      1) How many streams per watt can this chip handle, in contrast to a CPU solution? (A rough sketch of what I mean follows this list.)

      2) Why didn't they go for a two-chip solution: one chip that streams from memory, and another that loads the contents of the SSD into memory? That would make the whole I/O contention a little more manageable, and would also give them a chip usable in other markets for loading storage into memory (memory controllers are another area, but you see the drift).

      3) Will more porn streams become cheaper?

      4) How does this compare to a GPU solution on performance and on cost per stream in watts?

      5) How long before a new annex of the standard is brought out, making this look less appealing?
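
      To make question 1 concrete, here's roughly the comparison I mean, with made-up numbers (neither the wattages nor the stream counts are Toshiba's figures):

        # Hypothetical streams-per-watt comparison. All numbers are guesses
        # for illustration, not vendor specs.
        def streams_per_watt(streams: int, watts: float) -> float:
            return streams / watts

        cpu_server = streams_per_watt(streams=20_000, watts=400)   # typical 2U box (guess)
        npengine   = streams_per_watt(streams=64_000, watts=60)    # dedicated card (guess)

        print(f"CPU server: {cpu_server:.0f} streams/W")   # ~50
        print(f"NPEngine:   {npengine:.0f} streams/W")     # ~1067

      On those guesses the dedicated card would be 20-odd times better per watt, which is the sort of margin that would justify the kit.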

      1. Charlie Clark Silver badge

        GPU shouldn't come into it.

        The description, which appears largely to be quoting the press release, implies that the card is effectively a beefed up network card for distributing encoded streams across the network. Ideal for production houses, content owners and presumably content delivery networks if it means that media servers with largely redundant CPUs can be replaced by dedicated, low power but hopefully still cheap boxes. GPUs should only be involved in the initial encoding and decoding of those streams. But, as it's from Toshiba I do wonder whether there isn't a Cell doing the grunt work.

    2. Jamie Kitson

      Re: Personal use?

      Just the point I was going to make: since when do "notebooks, tablets and Ultrabooks" stream video to the network? *Possibly* NAS boxes, but as you say, not exactly a personal use scenario.

    3. Anonymous Coward
      Anonymous Coward

      Re: Personal use?

      There's no real personal use here... this is a solution aimed at broadcasters so they can get video to the masses in a more efficient manner.

      The fact it supports adaptive streaming out of the box is good, but which flavour it supports is going to be interesting (Smooth Streaming, HLS, Adobe Adaptive, MPEG-DASH?), and of course the studios' first question will be "what about DRM?"
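
      For anyone unfamiliar, all the adaptive flavours boil down to the server publishing the same content at several bitrates and letting the client switch between them. A rough sketch of building an HLS-style master playlist (illustrative values, nothing to do with Toshiba's card):

        # Build an HLS master playlist: one entry per bitrate variant; the
        # client picks whichever rendition its connection can keep up with.
        variants = [  # (bandwidth in bits/sec, resolution) -- illustrative only
            (800_000,   "640x360"),
            (2_500_000, "1280x720"),
            (6_000_000, "1920x1080"),
        ]

        lines = ["#EXTM3U"]
        for bandwidth, resolution in variants:
            lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={resolution}")
            lines.append(f"video_{resolution}.m3u8")   # hypothetical per-variant playlist

        print("\n".join(lines))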

    4. Anonymous Coward
      Anonymous Coward

      Re: Personal use?

      No, some aspects of h.264 decode can be, not the whole thing. There are still downsides.

      However, maybe "H256" is completely offloadable, and you're ahead of the curve 8P

  3. Anonymous Coward
    Anonymous Coward

    Amiga - DMA

    Why is this sounding like the way the Amiga computer used to do things with DMA?

    http://en.wikipedia.org/wiki/Amiga_Chip_RAM#Direct_memory_access

    1. Anonymous Coward
      Anonymous Coward

      DMA was old before the Amiga was even dreamed of, guys.

      Much though I love the Amiga, DMA has been a standard design technique since mainframe days.

  4. eJ2095
    Happy

    Beat me to it

    Was going to say Amiga also

  5. Matt 29
    WTF?

    Different kettle of fish

    Useful card, but the personal use ideas don't even come into it.

    The process for sending video over IP from a video file on a disk/SSD is nothing like the process for rendering a video on a display at the client side... Yes, the article was a little on the small side, but don't beef it out with useless comparisons to other, only-slightly-video-related parts of computing. :) Concise is good!

    1. Dave 126 Silver badge

      Re: Different kettle of fish

      Agreed. The article confused some commenters. Modern PCs and NAS boxes aren't troubled by streaming video to your TV, and over a LAN there is no need for 'adaptive bit-rate' cleverness and the other tricks required to get video to play smoothly across the interwebs.

      I can't see a home application for this. DMA already works.

  6. Steve Knox

    IT Rule

    "This use of dedicated hardware flies in the face of general IT assumptions that commodity hardware wins out over specialised processing hardware such as FPGAs and ASICs. "

    As I understand it, the general IT rule is that dedicated hardware is faster, commodity systems are cheaper. Often this means that you can get more bang for your buck from commodity hardware (because you can buy more). But I've never heard it said that commodity systems are faster (or even as fast as) dedicated hardware on a 1:1 basis. Since price isn't even mentioned in the article, I don't think we have a basis for comparison.
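
    To illustrate with made-up prices (the article quotes none, so these are pure placeholders):

      # Why "faster per box" and "more bang per buck" can point opposite ways.
      # Prices and stream counts below are invented for illustration.
      commodity = {"streams": 20_000, "price_usd": 5_000}    # 2U server (guess)
      dedicated = {"streams": 64_000, "price_usd": 25_000}   # NPEngine card (guess)

      for name, kit in (("commodity", commodity), ("dedicated", dedicated)):
          print(f"{name}: {kit['streams'] / kit['price_usd']:.1f} streams per dollar")
      # commodity: 4.0, dedicated: 2.6 -- the dedicated card is faster per
      # unit, yet commodity still wins per dollar with these numbers.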

    1. Morg
      Boffin

      Re: IT Rule

        Exactly. WTF kind of comments are these? Do you see x86 Nexus core switches anywhere? No? Well, that's it: networking kit is ASIC-only, sometimes FPGA/x86-assisted, but that's it. And what's streaming? Networking kit.

        Fail comments, really.

  7. Christian Berger

    Uhm, so what does it do?

    Does it shift 40Gbit/sec of data from SSDs to the NIC? That's something DMA can already do. The Linux kernel has had zero-copy for years: you tell the SSD to DMA the sector into RAM, then you tell the NIC to fetch it and send it out. There's not much to it.
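
    Something like this minimal sketch (Python's socket.sendfile() wraps the kernel's sendfile(2), so the payload goes disk-to-NIC without a userspace copy):

      # Zero-copy streaming sketch: the kernel DMAs file pages straight to
      # the NIC; the CPU never touches the video payload itself.
      import socket

      def stream_file(path: str, host: str, port: int) -> None:
          with socket.create_connection((host, port)) as sock, open(path, "rb") as f:
              sock.sendfile(f)   # sendfile(2) zero-copy path on Linux

      # stream_file("/mnt/ssd/video.ts", "10.0.0.2", 9000)   # hypothetical endpoint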

    And for dedicated hardware it's still lame. Digital VTRs could already handle a gigabit by the late 1980s, and they actually had to process the video, shuffling the bits around and applying all sorts of error protection. Now 40 gigs is certainly harder, but a 40-fold increase in 20 years is not really impressive; Moore's Law-style doubling every couple of years over the same period would suggest something closer to a thousand-fold.

  8. wowfood

    I can't see this being much use for the general consumer at all. I do, however, see Google probably trying to buy a metric ton of these new cards for their YouTube servers. They've been trying to expand the network to keep up with growing video demand for a while now. If one of these cards can do more work than one of their servers, it seems like the cheaper, faster upgrade alternative.
