Not much new here
As usual, a run-of-the-mill machine-learning story is blown out of all proportion.
Netflix has probably done some novel work here, but based on the article and blog post, it's straightforward development, not revolutionary. The blog calls deep learning¹ "a new algorithmic technique", but then goes on to admit it's been around "for some time" - yeah, since 1980 (with major advances in the early 1990s and from the mid 2000s through now). And using networks of GPUs for this had already been done by Ng and his team.
So basically Netflix's innovation is to host the thing in AWS. They take advantage of their ability to split their data set into small partitions, which lets them use a different topology than Ng did, but that should be pretty obvious to any skilled practitioner in this area.
There are a few other good bits in the blog post - for example, their discussion of working around CUDA bugs - but these are technical niceties.
¹ Ugh. DL is just a hierarchy of (artificial) neural networks. Calling it "Deep Learning", particularly with the capitals, is bombast.
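For what it's worth, that "hierarchy" point fits in a few lines. A minimal sketch in plain NumPy (all names here are mine, not Netflix's or anyone's API): a "deep" network is just neural-network layers stacked so each feeds the next.

```python
import numpy as np

def relu(x):
    # Standard nonlinearity applied between layers
    return np.maximum(0.0, x)

def forward(x, layers):
    """Push input x through a stack of (weight, bias) layers.

    The "depth" is nothing more than the length of this list.
    """
    for W, b in layers:
        x = relu(W @ x + b)
    return x

rng = np.random.default_rng(0)
# Three stacked layers: input 4 -> hidden 8 -> hidden 8 -> output 2
sizes = [(8, 4), (8, 8), (2, 8)]
layers = [(rng.standard_normal(s), np.zeros(s[0])) for s in sizes]
out = forward(rng.standard_normal(4), layers)
print(out.shape)  # a length-2 output vector
```

Nothing conceptually new over the multilayer perceptrons of the 1980s; what changed is the scale of the stacks and the hardware for training them.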