I'll attempt to explain this
"coding schemes propose that the transmitter buffer several packets, encode them, and send them as a single transmission"
No they don't.
Network coding schemes are more like a RAID approach: instead of sending (say) three independent packets you send (say) four, such that the three original packets can be reconstructed from any three of the four.

This uses linear algebra. Each original packet is a vector, and a random linear combination of them is also a vector. If you send aX + bY + cZ *plus* the coefficients a, b, c, then three such transmissions give you a 3x3 matrix of coefficients. You can tell whether that matrix has a unique solution - if it does, you recover X, Y and Z; if it doesn't, you wait for another row of the matrix to arrive. The coefficients a, b, c are just bytes, so you only add a few bytes to the packet size.
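Here's a toy sketch of that idea in Python/numpy. Real schemes do the arithmetic in a finite field like GF(2^8) and pick coefficients at random; I'm using fixed small integers and ordinary floats just to show the shape of it:

```python
import numpy as np

# Three original "packets", each a short vector of byte values.
X = np.array([10, 20, 30], dtype=float)
Y = np.array([40, 50, 60], dtype=float)
Z = np.array([70, 80, 90], dtype=float)
packets = np.vstack([X, Y, Z])

# Four coded packets: each payload is a*X + b*Y + c*Z, with the
# coefficients (a, b, c) sent alongside it in the header.
# (Fixed here for reproducibility; a real sender picks them at random.)
coeffs = np.array([
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 0],
    [1, 2, 3],
], dtype=float)
coded = coeffs @ packets  # each row is one transmitted payload

# Suppose the second coded packet is lost; any three survivors suffice.
survivors = [0, 2, 3]
A = coeffs[survivors]  # 3x3 matrix of received (a, b, c) rows
B = coded[survivors]   # the corresponding payloads

assert np.linalg.matrix_rank(A) == 3  # unique solution exists
recovered = np.linalg.solve(A, B)     # rows are X, Y, Z again
```

Losing any one of the four still leaves an invertible 3x3 coefficient matrix, which is exactly the "reconstruct from any three" property.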
The MIT innovation applies to TCP/IP in particular. TCP/IP will throttle a link if packets are lost, assuming the link is congested (i.e. packet loss => the router is too busy). But on WiFi, packets can be lost at random owing to interference. So if you have a less-than-perfect WiFi signal, TCP/IP will throttle your link. To avoid this, the MIT guys want to give TCP/IP some indication that progress is still being made, even though packets are being lost from time to time.
What the MIT guys are doing is saying: when we receive a packet, let's try to solve the matrix and see how close we are to a solution. The matrix may leave 0 degrees of freedom in the solution (i.e. it's solved), 1 degree of freedom (we need one more linearly-independent combination of the packets), 2 degrees of freedom, etc. Each time the number of degrees of freedom drops, the receiver sends an acknowledgement that says "I am closer to receiving some information than I was before".
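You can sketch that degrees-of-freedom bookkeeping with a rank computation (again a toy over the reals rather than a finite field; the ACK-on-progress rule is my paraphrase of the idea, not the actual protocol):

```python
import numpy as np

n = 3          # number of original packets in this block
received = []  # coefficient rows seen so far
dof = n        # degrees of freedom still missing

# Incoming coefficient rows. The third equals row1 + row2, so it is
# linearly dependent: it carries no new information and earns no ACK.
arrivals = [
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 2],  # dependent: no progress
    [1, 2, 4],
]

for row in arrivals:
    received.append(row)
    rank = np.linalg.matrix_rank(np.array(received, dtype=float))
    if n - rank < dof:
        dof = n - rank  # progress: acknowledge it
        print(f"ACK: {dof} degrees of freedom left")
    else:
        print("dependent row, no progress")
```

When dof hits 0 the matrix is solvable and the whole block can be decoded; every drop along the way is an ACK-worthy event even if some individual packets never arrived.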
During congestion, a bunch of packets will be dropped and the receiver will NACK as before (maybe a run of packets is lost in a row, or fully half don't arrive => the link is congested). But if packets are lost at random, and not so heavily that the RAID-style recovery breaks, the receiver can still happily send ACKs indicating that progress is being made, even though packets are being lost.
And this causes these amazing 10x throughput effects.
I'm no TCP/IP expert so I don't know exactly how this integrates with the stack, but that's the basic maths behind it anyway.