Couldn't somebody technically competent have been asked to analyse this, rather than just writing a scare story? Don't you think those working on the design of what are called reverse power systems have actually thought of some of these issues?
No RP system will be dependent on any single line. It will scavenge power from several lines, and no single line (or even several lines) dropping out will cause loss of service. Here's a posting I made on another forum, based on calculations from the properties of phone lines.
To put all this in perspective, it’s worth doing some calculations to see what level of power can be delivered this way and over what distance. The first thing to note is that the telephone system is (relatively) low voltage, working at a nominal 50V DC. The second thing to note is the resistance of the relatively light gauge of typical phone wire (approx 0.5mm diameter). That works out at about 9.8 ohms per 100 metres, although this has to be doubled as there are two wires, so that’s 19.6 ohms per 100 metres “round trip”. The maximum power that can be delivered at the load (which occurs when the load resistance matches the line resistance) would be about 64W for a 50m line, dropping to 32W at 100m and just 11W at 300m. However, this is highly inefficient, as it means drawing double that power at the source: half the power is wasted in transmission losses.
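For anyone who wants to check the arithmetic, here's a quick sketch using the 9.8 ohm per 100 metre figure above. The matched-load maximum V²/4R is the theoretical best case, not a practical operating point:

```python
# Reproducing the line-resistance and maximum-power figures above.
# Uses the text's 9.8 ohm per 100 m per conductor and 50 V DC source.

R_PER_100M = 9.8        # ohms per 100 m, single conductor (figure from the text)
V_LINE = 50.0           # nominal line voltage, volts

def loop_resistance(line_m):
    """Round-trip ("loop") resistance of a two-wire line of the given length."""
    return 2 * R_PER_100M * line_m / 100.0

def max_load_power(line_m):
    """Maximum deliverable power at the load, with a matched load: V^2 / 4R."""
    return V_LINE ** 2 / (4 * loop_resistance(line_m))

for d in (50, 100, 300):
    print(f"{d:>4} m: loop {loop_resistance(d):5.1f} ohm, "
          f"max load power {max_load_power(d):5.1f} W")
```

At the matched-load point the source supplies twice the load power, which is the "half wasted in the line" situation described above.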
A more practical solution is to draw the same current off each line. If we set that at (say) 100mA, it would provide 4.9W over a 50m line and still provide 4.4W over a 300m line. The power required at each customer’s site would be 5W. At current UK retail power rates that would be of the order of £5 a year, or about 40p per month. (The node mentioned in the article consumes rather less than 3W per line.) Note that the power conversion required at both the customer’s premises and the node will incur some losses, but modern conversion circuitry is now quite efficient at doing this.
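The constant-current figures can be checked the same way. The ~11p/kWh retail rate below is my assumption, picked to match the "about £5 a year" figure in the text:

```python
# Power delivered at a fixed 100 mA draw per line, and the rough annual
# cost of supplying it continuously from the customer end.

R_PER_100M = 9.8   # ohms per 100 m, single conductor (figure from the text)
V_LINE = 50.0      # volts
I_DRAW = 0.100     # amps drawn per line

def load_power(line_m, i=I_DRAW):
    """Power arriving at the node end after the resistive drop in the pair."""
    loop_r = 2 * R_PER_100M * line_m / 100.0
    return (V_LINE - i * loop_r) * i

def annual_cost_gbp(source_w, pence_per_kwh=11.0):
    """Yearly electricity cost of a continuous draw (rate is an assumption)."""
    kwh_per_year = source_w * 24 * 365 / 1000.0
    return kwh_per_year * pence_per_kwh / 100.0

print(f"50 m:  {load_power(50):.2f} W at the node")    # ~4.90 W
print(f"300 m: {load_power(300):.2f} W at the node")   # ~4.41 W
print(f"Cost of 5 W all year: about £{annual_cost_gbp(5.0):.2f}")
```

Note how gently the delivered power falls with distance at 100mA: the resistive drop is only a few volts even at 300m.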
The biggest problem is possibly that the node will only become practical once a certain minimum number of lines are active. While much of the power required will be proportional to the number of lines actually in use (largely to power the line amps), there will also be a moderately large fixed element to power the optical circuitry, network switches and so on. It’s that minimum power requirement that might be an issue. Clearly my nominal 100mA could be boosted to (say) 300mA, which would provide 13.2W at the load over a 100m line (but only 9.7W over a 300m line). If the node’s minimum power requirement were, say, 50W, then it might be possible to power it with just 5 active lines. Of course, those subscribers would then be faced with another £15 or so per year on their electricity bills.
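The same model gives a rough feel for how many active lines a node's fixed load needs. Depending on loop lengths this lands at four to six lines for a 50W base load, consistent with the "about 5" suggested above:

```python
# Estimating how many active lines at 300 mA cover a node's fixed
# ("base") power requirement, ignoring the per-line amplifier load.

import math

R_PER_100M = 9.8   # ohms per 100 m, single conductor (figure from the text)
V_LINE = 50.0      # volts

def load_power(line_m, i):
    """Power arriving at the node end for a given draw current."""
    loop_r = 2 * R_PER_100M * line_m / 100.0
    return (V_LINE - i * loop_r) * i

def lines_needed(node_base_w, line_m, i):
    """Active lines needed for the node's base load alone."""
    return math.ceil(node_base_w / load_power(line_m, i))

print(f"300 mA over 100 m: {load_power(100, 0.3):.1f} W")   # ~13.2 W
print(f"300 mA over 300 m: {load_power(300, 0.3):.1f} W")   # ~9.7 W
print(f"Lines for a 50 W node: {lines_needed(50, 100, 0.3)} (100 m loops) "
      f"to {lines_needed(50, 300, 0.3)} (300 m loops)")
```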
It may well be this “minimum subscriber” issue that proves the biggest obstacle. Clearly, designing nodes with the lowest possible base power requirement will be essential. Ideally you want a node capable of operating at less than 10W with a single line. That’s just about a practical amount of power to deliver over a single line of up to about 250m (though that customer would find about £20 on their annual electricity bill).
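To see why 10W over 250m is "just about practical", solve (V − IR)·I = P for the draw current. The quadratic has a real root only while the load is below the V²/4R ceiling, and the source ends up supplying roughly 14W to land 10W at the node (the £20 figure above presumably allows headroom for conversion losses on top of that):

```python
# Current and source power needed to land 10 W at the far end of a
# single 250 m line: solve (V - I*R)*I = P for I, taking the smaller
# (efficient) root of the quadratic R*I^2 - V*I + P = 0.

import math

R_PER_100M = 9.8   # ohms per 100 m, single conductor (figure from the text)
V_LINE = 50.0      # volts

def source_current_for(load_w, line_m):
    """Smallest current that delivers load_w at the far end of the line."""
    r = 2 * R_PER_100M * line_m / 100.0          # loop resistance
    disc = V_LINE ** 2 - 4 * r * load_w
    if disc < 0:
        raise ValueError("line too resistive to deliver that load power")
    return (V_LINE - math.sqrt(disc)) / (2 * r)

i = source_current_for(10.0, 250)
print(f"current:      {i * 1000:.0f} mA")    # ~273 mA
print(f"source power: {V_LINE * i:.1f} W")   # ~13.7 W
```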
So there’s the challenge: design a node that can work off a 10W power budget when supporting a single line, and which scales up its power consumption at about 2.5W per line added.
NB: battery backup could be provided at the customer premises or at the remote node; it would be more efficient at the remote node. Of course, if there is a power cut then most likely the customer nodes would go down as well, so backup may only be required for powering phones. In that case I see little need for a very substantial battery.