do androids dream of electric sheep?
I haven't read the paper, so I'm not sure why the authors decided to investigate this, or even how it's implemented. While reading the article, though, I was thinking about a couple of things. The first is how they reckon that sleep is necessary for most (if not all) things with a brain: something to do with assimilating memories and inputs, most likely, and shifting experiences around between different layers of memory. The other is research on combining neural nets with expert systems of some kind, particularly of the fuzzy-logic variety, along with some of the stuff Douglas Hofstadter was researching on "creative analogies" and kinds of symbolic intelligence.
Like I said, I have no idea how these guys are implementing their nets. It seems to me, though, that something that mimics the way the human brain dreams, complete with multiple levels of memory (with associated reinforcement and deliberate forgetting) and some sort of symbolic reinterpretation of neural network states (equivalent to codifying an expert system), would give you a system capable of the same kind of trick as outlined in the article: namely, integrating new "experiences" and "skills" without nuking what's there already.
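For what it's worth, here's a rough sketch of the replay idea in Python (all the names here are mine, not anything from the article): keep a buffer of old experiences and mix a few of them into every batch of new ones, so that training on the new task also rehearses the old.

```python
import random

class ReplayBuffer:
    """Fixed-size memory of past (input, target) pairs; once full, old
    entries are evicted at random (a crude form of deliberate forgetting)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def add(self, example):
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            # Deliberate forgetting: overwrite a random old memory.
            self.items[random.randrange(self.capacity)] = example

    def sample(self, k):
        return random.sample(self.items, min(k, len(self.items)))

def mixed_batch(new_examples, buffer, replay_fraction=0.5):
    """Interleave fresh experiences with replayed old ones, so an update
    on the new task also rehearses the old one (the 'dreaming' step)."""
    n_replay = int(len(new_examples) * replay_fraction)
    return list(new_examples) + buffer.sample(n_replay)

buf = ReplayBuffer(capacity=100)
for x in range(50):
    buf.add((x, x * 2))          # pretend these are task-A experiences
batch = mixed_batch([(i, -i) for i in range(10)], buf)
# batch holds the 10 new task-B examples plus 5 replayed task-A ones
```

The gradient updates themselves would then run over `batch` instead of just the new examples, which is roughly how rehearsal-based schemes dodge catastrophic forgetting.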
The biggest problem with neural nets is that they are opaque. You can observe a net's "thinking" only by reference to its outputs, but explaining the reasons (and hence getting a usable expert system that isn't just a non-symbolic rehash of the neural weights) isn't easy. Still, if you could combine a kind of symbolic (associative) memory with something designed to play around with stored memories (i.e., dream), for example by building trial fuzzy cognitive maps, you could perhaps compress the large neural network state matrices into more manageable expert-system-like rules.
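As a toy illustration of that compression step (purely hypothetical, and real rule extraction is far harder than this), you could probe the trained net as a black box over a grid of inputs and summarise its answers as interval rules:

```python
def extract_rules(predict, feature_grid):
    """Probe a black-box model over a sorted 1-D grid of inputs and
    summarise its behaviour as contiguous interval rules of the form
    'if lo <= x < hi then label'. The grid resolution bounds how
    faithful the extracted rules are to the underlying model."""
    rules = []
    start = feature_grid[0]
    current = predict(start)
    for x in feature_grid[1:]:
        label = predict(x)
        if label != current:
            # The model's answer changed: close off the previous rule.
            rules.append((start, x, current))
            start, current = x, label
    rules.append((start, feature_grid[-1], current))
    return rules

# Stand-in for a trained net's decision function:
net = lambda x: "high" if x > 0.6 else "low"
rules = extract_rules(net, [i / 10 for i in range(11)])
# rules: [(0.0, 0.7, 'low'), (0.7, 1.0, 'high')]
```

The interesting part is that the output is symbolic and inspectable, which is exactly what the raw weight matrices are not.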
I'm sure the learning algorithms would have to be adapted for this to work; you can't just compress a neural network state into a fixed expert system without lossage. So as stuff is shifted around between different types of memory, the system would have to self-check to make sure the new model still works with the training set. This would probably involve replaying and reformulating the steps the net took as it learned (or "experienced") as a result of being corrected (with back-propagation or whatever). I imagine a kind of blockchain structure could work very well here, albeit one that provides a very subjective and revisionist version of events, since it would need to be rewritten as the underlying representation of stored knowledge shifts around across the different memories and procedural parts.
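That self-check might look something like this sketch (again, the names and thresholds are just made up): replay the training inputs through both the original net and the compressed rule-based stand-in, and only accept the compression if the two agree often enough.

```python
def self_check(original, compressed, training_set, tolerance=0.95):
    """Replay the training set through both the original model and its
    compressed stand-in; accept the compression only if they agree on
    at least `tolerance` of the inputs. Both models are plain callables,
    and the targets in the training set are ignored here because the
    check is about fidelity to the original, not ground truth."""
    agree = sum(1 for x, _ in training_set
                if original(x) == compressed(x))
    return agree / len(training_set) >= tolerance

net = lambda x: x > 0.6        # stand-in for the original network
rules = lambda x: x >= 0.7     # lossy compressed version of it
training = [(i / 100, None) for i in range(100)]
# The two disagree on the 0.61..0.69 band, i.e. 91% agreement:
# good enough at tolerance=0.9, rejected at the default 0.95.
```

If the check fails, the system would presumably refine the rules (or keep the raw weights around for that region) and try again, which is where the replayed learning history would come in.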