Dubious value proposition?
Has anyone asked Mr. Glass why "one or many viewers" would want to "experience [your] life AT THIS VERY MOMENT" [emphasis mine - TFMR] rather than living their own?
I can see that a hands-free camera aligned with your eyes may help you shoot video when you are in Venice. I see absolutely no value in pinging your mates and streaming it to them in real time. I am sure they would appreciate a little editing (head-mounted footage will be jerkier than a handheld camera's, since you move your head far more freely than a camera you have to hold and point) and the comfort of watching at their convenience, even (horror!) at the cost of a slight delay. So the camera may be useful, but not for the stated reason.
It is occasionally useful to share images in real time ("Is this the exact thing you wanted, darling?" sent to your attractive half, who is not in the shop, or even in the same country), but our phones already do this well enough, and "occasionally" means you don't need to keep the stupid thing on your nose.
I can think of two use cases for live, unedited video streaming from your glasses: (a) a SAS commando streaming what he sees to HQ; (b) a bunch of kids around town scouting pubs with the largest number of pretty girls / handsome men and sharing video info to vote on the venue for the night. I don't even know if case (b) will be popular or what the "subjects" will think of the geeks with Glasses snooping around.
I assume concert venues and the like will cavity-search for Glasses to stop 50 people "attending" the concert virtually on the price of one ticket, so we can discount that. Besides, the nearest mast would surely be overloaded by 10K simultaneous streams (out of 50K attendees).
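The overload claim is easy to sanity-check with rough numbers. The per-stream bitrate and cell-site capacity below are my own assumptions for illustration, not figures from this post:

```python
# Back-of-envelope check (assumed numbers): a passable live video stream
# uplinks at roughly 2 Mbit/s, and a single cell site's aggregate uplink
# capacity is on the order of 1 Gbit/s.
streams = 10_000             # streamers out of 50K attendees
mbps_per_stream = 2          # assumed per-stream uplink bitrate
cell_uplink_gbps = 1         # assumed total uplink capacity of one mast

demand_gbps = streams * mbps_per_stream / 1000
print(f"demand: {demand_gbps:.0f} Gbit/s vs capacity: {cell_uplink_gbps} Gbit/s")
print(f"oversubscription: {demand_gbps / cell_uplink_gbps:.0f}x")
```

Even if my assumed numbers are off by a factor of a few, the mast loses by an order of magnitude or more.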
Moving away from live communication: how often does one actually need to look something up or obtain additional info? Certainly the "enabling device" need not be perched on your nose all the time for that; pull it out of your pocket or purse when needed. Getting the answer a few seconds faster is not compelling (and how would the Glass eliminate the need to check multiple sources?). Combine that with an interface inevitably more limited than even your phone's, and likely more intrusive and less private to boot (how do you search for something, by saying it out loud? [*]), and what's the win?
[*] I have never observed anyone searching for stuff by voice interface, on Android or iOS, beyond a first demo to mates (a few years ago) accompanied by chuckles.