I am disappointed that almost all the commentards here have missed the irony in the article. It is actually really quite thought-provoking.
Of course we are told that the devices are just listening for their wake-up keywords. And some of them probably are. But we have no way of knowing what undocumented wake-up keywords are built in, or whether there are any other circumstances in which they will start to record, send and process audio.
There have been various rumours of Google, Amazon and Smart TVs listening in for shopping-related terms in order to target advertising. And if they aren't doing that today, they certainly will be just as soon as they can get good enough local processing (which won't be hard in mains-powered devices).
The article raises the question: if they are going to do that for their own commercial ends, why wouldn't we require them to do similar things for reasons of social good? Good question.
It also highlights the fact that if that question is asked, the manufacturers will push back very hard, because the last thing they want is for us to be reminded that they are listening all the time and could be processing anything we say. Either they will want to make a virtue of not being advertising-driven (Apple), or they need us to forget all about them being there and to be unguarded in what we say (everyone else).
And, of course, that is without even getting into the surveillance issues.
Good, thought-provoking article. Pity that we don't teach irony any more and that people have started discussing how a device would decide automatically whether to call the police (particularly as the answer is obvious: do what a human would do, and ask "are you all right?").