So, they licensed/re-implemented S3.
> To get a block, the system issues a get request to all three regions, waits for the fastest two responses, and cancels the remaining request.
This is just run-of-the-mill erasure coding. Amazon S3 is (or was) 9-of-18: any 9 fragments out of a total of 18 are enough to reassemble the data, with 6 fragments stored in a single 'datacenter'. The web tier issues 18 requests, and the first 9 responses are run through the reconstruction algorithm; the rest are discarded.
I very much doubt Dropbox did 2-of-3; more likely something like 10-of-15 (or 8-of-12, 12-of-18, etc.), with 5 fragments going to a single 'datacenter'. So they aren't actually grabbing fragments from 2 sites, just "2 sites' worth" of fragments. When marketing writes the slides and journalists who don't know any better parrot them, this is the kind of inaccurate technical representation you get.
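To make the k-of-n idea concrete, here's a toy sketch (my own illustration, not Dropbox's or S3's actual scheme; production systems use Reed-Solomon codes over GF(2^8) with optimized libraries): treat k data symbols as the coefficients of a degree-(k-1) polynomial over a prime field, evaluate it at n distinct points to get n fragments, and reconstruct from any k fragments via Lagrange interpolation.

```python
# Toy k-of-n erasure code over a small prime field (illustrative only).
P = 257  # prime > 255, so each byte value fits in one field element

def encode(data, n):
    """Produce n fragments (x, y) from k = len(data) data symbols."""
    def poly(x):
        # Horner evaluation of the polynomial whose coefficients are `data`
        acc = 0
        for c in reversed(data):
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def decode(fragments, k):
    """Reconstruct the k data symbols from any k distinct fragments."""
    pts = fragments[:k]
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(pts):
        # Build the Lagrange basis polynomial l_i(x) = prod (x - xj)/(xi - xj)
        basis = [1]   # coefficients, lowest degree first
        denom = 1
        for j, (xj, _) in enumerate(pts):
            if i == j:
                continue
            # multiply `basis` by (x - xj)
            new = [0] * (len(basis) + 1)
            for d, c in enumerate(basis):
                new[d] = (new[d] - c * xj) % P
                new[d + 1] = (new[d + 1] + c) % P
            basis = new
            denom = (denom * (xi - xj)) % P
        scale = yi * pow(denom, -1, P) % P   # modular inverse (Python 3.8+)
        for d, c in enumerate(basis):
            coeffs[d] = (coeffs[d] + c * scale) % P
    return coeffs

data = [104, 105, 33]          # k = 3 data symbols
frags = encode(data, 6)        # n = 6 fragments: a 3-of-6 code
assert decode(frags[::2], 3) == data   # any 3 of the 6 reconstruct the data
```

A 9-of-18 code is the same construction with k=9 and n=18: issue all 18 reads, keep the first 9 that come back, discard the rest.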