Netflix has created a blueprint for how companies might use neural networks to analyze information in – you guessed it – the cloud. The video-streaming company and long-time Amazon Web Services customer announced on Monday that it had figured out how to apply a technique pioneered by Google AI chief and Stanford Professor Andrew …
Not in the UK so couldn't watch the video. I'm assuming it's the same as this:
The most worrying part of the interview is the assumption (from both Paxman and Dexter) that "picking up" coding is simple. You can pick it up in a day? Paxman is shocked that Dexter has been learning for a year. The whole exchange treats coding as something you just absorb quickly. I also hope the real lessons cover more than HTML and jQuery.
Not much new here
As usual, a run-of-the-mill machine-learning story is blown out of all proportion.
Netflix has probably done some novel work here, but based on the article and blog post, it's straightforward development, not revolutionary. The blog calls deep learning[1] "a new algorithmic technique", but then goes on to admit it's been around "for some time" - yeah, since 1980, with major advances in the early 1990s and again from the mid-2000s onward. And training neural networks on GPUs had already been done by Ng and his team.
So basically Netflix's innovation is to host the thing on AWS. They exploit the fact that their data set splits cleanly into small partitions, which lets them use a different topology than Ng did, but that should be fairly obvious to any skilled practitioner in this area.
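For what it's worth, the data-parallel idea being described is not exotic. Here's a rough sketch of it in plain Python; every name here is made up for illustration, and the "training" step is a trivial stand-in for whatever Netflix actually runs on its GPU instances:

```python
# Sketch of data-parallel training over a partitioned data set.
# All function names are hypothetical; the real work (per the blog
# post) happens on GPU instances in AWS, not in a toy loop like this.

def partition(dataset, n_parts):
    """Split a dataset into n_parts roughly equal partitions."""
    size = len(dataset) // n_parts
    parts = [dataset[i * size:(i + 1) * size] for i in range(n_parts)]
    parts[-1].extend(dataset[n_parts * size:])  # tack leftovers onto the last partition
    return parts

def train_replica(part):
    """Stand-in for training one model replica on one partition;
    here it just computes the partition's mean."""
    return sum(part) / len(part)

def combine(replica_params, part_sizes):
    """Merge per-replica results with a size-weighted average."""
    total = sum(part_sizes)
    return sum(p * s for p, s in zip(replica_params, part_sizes)) / total

data = list(range(100))                       # toy "dataset"
parts = partition(data, 4)                    # one partition per worker
params = [train_replica(p) for p in parts]    # replicas train independently
print(combine(params, [len(p) for p in parts]))  # prints 49.5, the mean of the full dataset
```

The point is just that once your data set partitions cleanly, fanning the partitions out to independent workers and merging the results is standard practice, not a research contribution.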
There are a few other good bits in the blog post - for example their discussion of working around CUDA bugs - but these are technical niceties.
[1] Ugh. DL is just a hierarchy of (artificial) neural networks. Calling it "Deep Learning", particularly with the capitals, is bombast.
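To make the footnote concrete: "a hierarchy of neural networks" just means layers stacked so that each one's output feeds the next. A minimal forward pass, with made-up layer sizes and random weights purely for illustration:

```python
import math
import random

random.seed(0)  # deterministic weights for the demo

def layer(n_in, n_out):
    """One fully connected layer: a random weight matrix plus zero biases."""
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [0.0] * n_out
    return weights, biases

def forward(x, layers):
    """Push an input vector through the stacked layers: each layer's
    output becomes the next layer's input."""
    for weights, biases in layers:
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

net = [layer(4, 8), layer(8, 8), layer(8, 2)]  # three stacked layers = "deep"
print(forward([0.5, -0.1, 0.3, 0.9], net))     # a 2-element output vector
```

That stacking is the whole "deep" part; the interesting work is in how you train the weights, not in the vocabulary.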