For specialised processing requirements maybe.
"The results would be returned across the internet to the phone, speeding up tasks like graphics processing and supporting high end video or gaming. Intel even says CloneCloud would be able to decide dynamically whether a task would be better processed by the device itself or in the cloud, depending on its processing burden and the quality of the network connection."
At present only fairly specialised workloads benefit from this approach: those where the latency requirement is relaxed and the job's input/output is small, but its compute and memory demands are high. I do this currently for spam content analysis: the MTA virtual server in a datacentre that examines an email and accepts or rejects it offloads the content analysis to a faster CPU at home, which has more memory but a slower network link.

But I don't see either network bandwidth or latency improving fast enough any time soon for this to work for graphics processing or high-end video gaming, where a lot of CPU and memory has to sit very close to the display right in front of the end user's eyeballs. Getting a faster computed response in a deep strategy game such as go might work, using a Monte-Carlo simulated-annealing approach perhaps, assuming a very large parallel supercomputer is available over the network link rather than in front of the player. But again, that is a more specialised requirement.
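The trade-off above boils down to a break-even test: offloading only wins when the remote speed-up exceeds the cost of shipping the job's input and output across the link. A minimal sketch of that rule (all names and numbers are illustrative, not measurements from my setup):

```python
def worth_offloading(local_secs, remote_secs, bytes_in, bytes_out,
                     bandwidth_bps, rtt_secs):
    """Crude break-even test: offload only if remote compute time plus
    transfer time beats running the job locally."""
    transfer_secs = rtt_secs + 8 * (bytes_in + bytes_out) / bandwidth_bps
    return remote_secs + transfer_secs < local_secs

# Spam analysis: small email, heavy analysis -> offloading wins.
worth_offloading(local_secs=5.0, remote_secs=0.5,
                 bytes_in=50_000, bytes_out=200,
                 bandwidth_bps=2_000_000, rtt_secs=0.05)   # True

# A rendered game frame: tiny compute budget, lots of pixels to ship,
# hard latency bound -> stays local.
worth_offloading(local_secs=0.016, remote_secs=0.001,
                 bytes_in=500_000, bytes_out=2_000_000,
                 bandwidth_bps=20_000_000, rtt_secs=0.05)  # False
```

The second call is why gaming fails the test even with a fast remote CPU: the transfer term alone already exceeds the per-frame latency budget.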
Automating this would usefully require an extra layer at the application's exec() stage which collects statistics on input/output volume and CPU/memory load. The system administrator would need to opt selected candidate jobs into the overhead of this suitability check; otherwise the cost of checking every job would outweigh the benefit for the few jobs that actually gain from offloading.
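A rough sketch of what that statistics-collecting layer might look like, assuming the administrator wraps only selected candidate jobs (the class, method names, and thresholds here are all hypothetical):

```python
import time

class OffloadProfiler:
    """Sketch of an exec()-stage statistics layer: wrap selected
    candidate jobs, record their CPU time and I/O volume, and flag the
    ones whose profile (heavy compute, light I/O) suits remote
    execution. Thresholds are illustrative defaults."""

    def __init__(self, min_cpu_secs=1.0, max_io_bytes=100_000):
        self.min_cpu_secs = min_cpu_secs
        self.max_io_bytes = max_io_bytes
        self.stats = {}

    def run(self, name, job, payload: bytes) -> bytes:
        # Run the job locally while measuring its cost profile.
        start = time.process_time()
        result = job(payload)
        cpu = time.process_time() - start
        self.stats[name] = (cpu, len(payload) + len(result))
        return result

    def candidates(self):
        # Jobs worth offloading: lots of CPU, little data to ship.
        return [name for name, (cpu, io) in self.stats.items()
                if cpu >= self.min_cpu_secs and io <= self.max_io_bytes]
```

The point of opting jobs in explicitly is that run() itself adds timing and bookkeeping overhead to every wrapped call, which is only worth paying for plausible offload candidates.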