BBC programmers discover raw sockets.
Back in September, The Register's networking desk chatted to a company called Teclo about the limitations of TCP performance in the Linux stack. That work, described here, included moving TCP/IP processing off to user-space to avoid the complex processing that the kernel has accumulated over the years. It's no surprise, then …
"BBC programmers discover raw sockets."
NO! This is precisely not what is being described.
The performance problem is due to the buffer allocation and copying that go on in the kernel when receiving and transmitting packets via the LAN interface - this applies just as much to "raw" datagrams as to TCP/IP.
There have been a few iterations around a solution, including PF_RING, but the real problem is that the generic Linux approach to device drivers doesn't really work with high-speed network devices. Worth reading up on Netmap for some more concrete details.
The Linux networking stack is, well, sub-optimal, to put it kindly. Getting it all out of kernel space isn't really the answer in the long term, but it does show the kernel developers a way forward.
EDIT: There is, incidentally, the question of whether TCP/IP is even the right protocol suite for this type of application (too small a window size, poor packet loss recovery), but the driver issues are independent of the high layer protocols...
"The Linux networking stack is, well, sub-optimal, to put it kindly."
A general purpose OS - or general purpose H/W for that matter - isn't optimised for anything in particular. It has to be a jack of all trades balancing performance against security, ease of use, multi-tasking & whatever. If you want optimisation according to some specific criterion you use something special purpose. You want real time response you use a real time OS. You want to mine Bitcoins you use ASICs.
True in general. However, it's not true that because you can't achieve optimal performance for a certain set of criteria that any improvement at all is impossible. There are a lot of ways in which a general purpose OS can reduce network processing overhead - virtual addresses and caches notwithstanding.
These include better use of scatter/gather features of the NIC or DMA controllers, careful control of allocation and copying and not shunting data to and from user space when the ultimate source or destination is another driver (eg streaming from/to a network to/from a file). None of these would render any other part of the operating system less usable. Nor do they require any form of "real time" response.
"These include better use of scatter/gather features of the NIC or DMA "
Which NIC or DMA?
If you tie the S/W to specific H/W, e.g. a particular model of NIC then you lose the ability to plug in different H/W. If you provide for alternative H/W you end up with a modular structure which has its own overhead. I'm not saying that something which has grown at the rate the Linux kernel has is going to be the result of a whole series of ideal decisions (I can think of a few I disagree with) but if you try to do everything there are going to be trade-offs.
Nothing really new here!
But then I have worked through the late-70s and early-80s developments in computing. It was the rise of cheap computers, on the back of rapid advances in CPU performance, that saw many functions previously handled by dedicated (and hence expensive) network adaptors moved into the then lightly loaded workstation CPU (this is the real reason why the TLI library in Unix exists: it enabled an application to transfer data and hand off processing to the network adapter). Similar design decisions led to disc controller intelligence, graphics processing, modem processing ("soft modem") and other intelligent peripheral logic being moved into the CPU.
Subsequently we have seen the resurrection of dedicated graphics processors and 'intelligent' disk I/O controllers, but not the resurrection of dedicated and intelligent network processors; it seems the BBC and friends have discovered a need for one.
Interestingly, even with intelligent network protocol processors, performance was a big issue, and any vendor looking seriously at high-speed networking had to tweak the protocols so that they could be implemented in silicon. In fact, protocols such as XTP were developed that combined the functions of the network and transport layers; but these really upset the TCP/IP and protocol-layering purists and so didn't garner much support... [So just another reason for not really bothering with IPv6.]
Why don't they just fix the Linux network stack so it has a proper modular architecture like Windows does (NDIS) ? TOE and similar hardware acceleration is way behind under Linux as it's a bolt on after-thought. This has been a long standing and widely known Linux weakness - and one that becomes more apparent as we move to ever faster network connections.
Broadcast hardware to do the functions the BBC are playing with already exists on the market today, implemented in dedicated silicon from the likes of Motorola and others, in rack-mount formats capable of handling any bitrate encoding and distribution you choose. Rack-mount encoders, streamers, switching solutions, watermark insertion servers etc. are all done in dedicated silicon, with just a little OS to manage the ASIC config itself on some management plane and the heavy packet flow happening on dedicated fibre links.
I'd be shocked if Sky or BT are shunting their streams about before distribution using IP running on a Linux-based computer enough for this to be an issue.
But.. That would mean BBC contributing to open source, and the common good. Surely, we can't have that?
"I'd be shocked if Sky or BT are shunting their streams about before distribution using IP running on a Linux-based computer enough for this to be an issue."
You're likely to be surprised, because they do. In fact, many encoders and the like are really Linux computers, with dedicated encoder cores only if you are lucky, and many still use off-the-shelf NICs etc. The really expensive ones don't, but they are really expensive...
By the time you get away from content preparation to the real business of content distribution, where throughput is really large, you are very likely to be using COTS hardware, where tricks like this could make a significant difference.
>Which NIC or DMA?
If your pick the right driver abstraction you can make use of hardware capabilities when they are available and fall back to something less performant if they aren't: this is well-trodden ground for operating system design. It can even simplify the overall design - it's a slightly different topic, but, for example, the driver model in NetBSD eliminates a lot of redundant architecture-specific code just by better abstracting the individual operations.
If you pick the wrong abstraction, you can never take advantage of hardware acceleration. Some of the choices that have been made in Linux to date have their origins in the mists of Unix past and are not necessarily the right choice for the future. Software evolves over time and I simply don't accept any argument of the type it's too difficult/pointless/not invented here/everything is perfect.
Oh come on, the Linux Network Stack could use some work but it's still a heck of a lot faster and much more sane than the pile of crud that is the Windows Network Stack.
BBC contributing to open source
If that surprised you, then Dirac will probably completely shock you.
Is that still true? I know it was certainly true in the past, but presumably Microsoft has made some improvements to something in Windows while they were ruining the GUI. Anyone seen any benchmarks comparing TCP/IP performance on the latest Linux to Windows Server 2016?
"the Linux Network Stack could use some work but it's still a heck of a lot faster and much more sane than the pile of crud that is the Windows Network Stack."
Clearly you haven't tried using real-world 10Gb, 40Gb and Mellanox-type low-latency connectivity. Windows is significantly faster than unmodified Linux - and more efficient - with significantly lower CPU use.
I call bullshit. Mellanox cards are specifically designed to *not* use the kernel
"I call bullshit. Mellanox cards are specifically designed to *not* use the kernel"
You call wrongly. Mellanox cards can only support hardware offload by hacking the Linux kernel with a hardware specific modification to support this. On Windows they can just inject a filter driver at the right layer in the NDIS stack.
Now I know people have opinions on the licence fee etc
But this is the BBC at its finest. Writing kernel bypasses to get better throughput.
This is why I pay my licence fee for the few incredible moments in my lifetime where I can be proud of a public utility that I fund indirectly showing off the skills of their talented staff
Watching rugby on ITVPlayer at the weekend made me realise just how good the BBC's iPlayer is. Just need to privatise the BBC to spin off the technology arm as a pure internet broadcaster.
I too love the BBC but I am surprised at their hypocrisy.
They obviously have the devs and in-house knowledge to hack Linux so isn't it a pity that as far as their customers are concerned they don't recognise the fact that some of us actually use Linux on the desktop?
The BBC has the facility in the iPlayer to allow users to download programmes for later consumption - all, that is, except users of Linux.
Currently they are running a beta programme using HTML5, again for everyone except Linux.
I have written to them on these subjects and have received a polite but firm reply to the effect that "We do not support Linux" Presumably not because it's too difficult but that they see no future in supporting the OS, yet they use the thing themselves.
As I said, hypocrites.
Not hypocrites, just large and with various bits that don't talk to other bits
Well, the way it works is that when there is a vacuum somebody will write a perl script to fill it.
Presumably this was sarcasm?
I'm not 100% sure.
I just wish BBC could spend less time on 4K worries, and get rid of Flash NOW.
just large and with various bits that don't talk to other bits
Net result, the organisation says one thing and does another.
a pretense of having a virtuous character, moral or religious beliefs or principles, etc., that one does not really possess.
I daresay that attribute in a human is derived from the same root cause - bits of the brain that ought to talk to each other but don't.
Knowledge of the means by which a behaviour occurs doesn't alter the behaviour itself or its effect on others.
Also known as "an explanation isn't an excuse"
Sport at 30fps sourced from 50i? Possibly converted from 30fps again to display at 50Hz on your TV.
Looks like shit on any service.
I suspect there is a whole generation now that doesn't actually know what real TV is supposed to look like.
"I just wish BBC could spend less time on 4K worries, and get rid of Flash NOW."
They already are. There is a HTML5 beta trial going on right now.
I'm using the HTML5 beta on Linux... If I recall correctly, the HTML5 beta is restricted only by browser. In fact, Chrome on "Linux" (presumably Ubuntu) is listed as one of their test platforms...
They have had a "trial" for years and years now. Perhaps it's a new trial..
Really moving quickly there.
It's a newish trial AFAIK. Annoyingly I still can't get it to work in Firefox on Xubuntu. However, at least I can now watch the iPlayer directly on Chrome on Android without having to install the BBC's horrendous app.
Not real time of course, but just use get-iplayer. Download in HD overnight and watch at your leisure.
I think HTML5 support is dependent on whether the browser has a DRM module. Does Firefox for Linux have it? If it doesn't then maybe Chrome for Linux does. If it does but still doesn't work, try spoofing the user agent to Windows.
From the BBC blog:
"We’re currently testing the HTML5 player with:
• Firefox 41
• Opera 32
• Safari on iOS 5 and above
• BlackBerry OS 10.3.1 and above
• Internet Explorer 11 and Microsoft Edge on Windows 10
• Google Chrome on all platforms"
It's down to browser support for the features they want to use rather than your choice of OS. Have you tried it in Chrome on Linux yet?
They're rolling out 50p on the iPlayer. It's been available for some 'channels' of content for quite a while now. Go check out the Russian F1 Grand Prix highlights programme in HD - 2908 kbps of 50p.
Try get_iplayer - it has the PVR GUI and is a far better interface (far quicker) than any official iPlayer app.
Added bonus is you get to keep any tv/radio programme forever.
I think it's very wrong of BBC to support a virus-like program like Chrome above open source browsers. If Firefox is lacking something, they should have provided that something and CONTRIBUTED to open source. Sure, Chrome is nice, but it's also a platform for Google to put its tentacles in your computer forever.
Why isn't the BBC more idealistic? It's funded by a kind of tax after all (pay, or else..).
I'll have a look, but I would be extremely surprised if any of my iPlayer-capable devices supported outputting 50p to my TV or to any other display. So I guess it would be 50p displayed on 60p then.
I don't think my Now TV puck or the Chrome Cast stick does 50 Hz. USA is all that matter, innit?
Perhaps if I launch iPlayer on the HTPC (which I have forced to 50Hz).. Hurray, that's gonna be a lovely end user experience!
You do realize, right, that Chrome is just the Chromium open-source project? Google adds their own stuff onto certain cuts of the Chromium project's work. There is nothing stopping you downloading Chromium directly if you don't like the Google additions. FF has become dog slow over the years; even with no plugins activated it starts up slower than Chrome with plugins.
When most (all?) TV panels and their image processing run natively at 60 Hz, and all these widgets also output at 60 Hz... Just let one device do the fps resampling.
My 60 Hz-native Samsung TV (admittedly old, but a good quality panel) does horrible things if you feed it 25 or 50 Hz, so it only gets fed by a PC through HDMI at 60 Hz. The NVIDIA GPU does an excellent job of interpolating iPlayer content.
Resampling 50 -> 60 is easier than 60 -> 50 (+1 frame every 100 ms will be almost imperceptible...) I don't imagine pixel rise/fall lag can even really keep up with that unless you have a VERY expensive panel.
I don't even think graphics cards do it this way anyway, a decent GPU should utilise some form of frame interpolation. TBH I'd rather have my GPU do this than my TV.
I'm no big Firefox fan, but initial startup time isn't a strong criterion for me when picking a browser.
Chrome is not a lot like Chromium, but yes, Chromium is the open source part of Chrome.
Firefox is a memory hog on an epic scale, and the programmers involved can't see anything wrong with that. FAIL!
But all Google products are massive resource hogs too. It takes a lot of resources to spy continuously on the users. For example, I doubled my Android phone's battery life by hunting down and disabling all the Google spying-related stuff I could find.
What, you have a TV that can't do 50Hz? And you are in a PAL country?
I don't believe this to be true.
You don't want any conversion from 50 to 60 Hz, period. No matter how fancy, or how "only once".
I know not why, but here (mid-Devon), iplayer is shite (on various boxen/bits of kit), whilst Netflix and amazon prime are just fine...
"But this is the BBC at its finest. Writing kernel bypasses to get better throughput."
Network adaptor drivers have had to do this for years under Linux to be able to support the hardware offload features of modern NICs. This isn't a BBC issue, it's a Linux issue.
"Sport at 30fps sourced from 50i? Possibly converted from 30fps again to display at 50Hz on your TV."
No one sane is going to capture or broadcast at a non-standard rate like 30Hz, as it introduces judder.
1080p/50 is recommended by the EBU for HDTV.
"I would be extremely surprised if any of my IPlayer capable devices supported outputting 50p to my TV or to any other display"
I would be surprised if they didn't. Even if you are unfortunate enough to be in a region landed with inferior NTSC (Never Twice the Same Colour) broadcasts, supporting 1080p/50Hz is standard elsewhere - and technically simpler than supporting 60Hz.
"So I guess it would be 50p displayed on 60p then."
To do that, they have to slow it down from 50Hz to 48Hz, and then do 3:2 pulldown, giving you a familiarly crap NTSC movie-like experience....
It won't work with Chromium unless you install the DRM plugin.
"When most (all?) TV panels and their image processing run natively at 60 Hz, and all these widgets also output at 60 Hz... Just let one device do the fps resampling."
No they don't. Many run at 100Hz, 200Hz, 300Hz, and 400Hz, as a quick hunt round panel specs tells me.
If your panel is native 60Hz, it's likely a fairly crap one.
Don't forget the majority of the world uses PAL or SECAM based solutions and that most of the world also uses 50Hz AC mains power.
Downloading programs works on my android phone, which is based on Linux right?
Can't you use your superpower Linux skills to decompile the Android app and make it work for your desktop?
Also "iPlayer" is a brilliant name, once you realise it's not only a video playback tool, it's one that relies on the IP layer.
To be fair, using Linux on the back end does not imply Linux on the front end.
The two levels of usage and use cases are substantially different.
That plus the blowback they'd get if they chose the wrong version of desktop.
And from the comments on the register any version is always the wrong one to someone.
get_iplayer is indeed in perl