I can see it now...
"Can you fly that thing?"
"Not yet" <Looks at control panel and activates Voice Search with chosen keyword> "Tank; I need a pilot program for a V-212 helicopter..."
Google's Project Glass computing specs could solve one of technology's most enduring problems – finding where you put the remote control. A patent filing from the Chocolate Factory shows that the firm wants to build control of everyday objects into its head-mounted hardware so that the wearer can use voice commands to order …
There was also the scene in the original "Terminator" where Arnold quickly consults the blueprint of a manual transmission for a truck he has commandeered.
And of course countless others I certainly don't know about. Definitely in Vernor Vinge's "A Deepness in the Sky".
And actually, I do think that the engineers at Boeing or Airbus have already been doing that to facilitate inspection of planes. You could just mark the rivets to inspect with a red glow ...
Totally already invented vague idea. Clearly the USPTO will allow that.
I don't remember that particular scene, but several times we see text information, as if through the Terminator's eyes. Like in T2, in the bikers' cafe, when he is choosing clothes to take from someone.
However, it always amused me that the Terminator is designed to get data from its database, then turn it into text, then superimpose the text onto the vision channel, then (presumably) use Optical Character Recognition to turn it back into data again!
More seriously, I always considered the text to be simply displaying Terminator's internal monologue.
Much better cinematic method than an audio monologue as it emphasised the difference between a human monologue (traditionally done with a voice-over) and the high-speed robot monologue (multilogue?) of the Terminator.
What I'm not sure about is where the idea of a robot's monologue being shown as text and diagrams came from. It has become very common though!
Is my film critique showing? Damn.
Not as bad as I would feel when my 'imaginary friends' started spouting adverts at me. They'd stop being friends, imaginary or not, real fast.
Not that I have any intention of ever using this technology, given the way things have gone. It's funny; as a kid I would have eagerly embraced computerised glasses, but then when I was a kid future technology wasn't about tracking your every act and exploiting every possible human psychological vulnerability in order to sell you something.
If Google are awarded this patent, will it mean that Apple can't do a similar thing with their 'wearable' personal iWatch device thing?
You have a portable, personal display device, with wireless network comms - you have other comms compatible devices nearby - they can communicate with each other -> This 'invention' is obvious. My Dell laser printer already tells me when it's low on paper or toner or whatever. It does it by an alert notification popup on my personal, portable laptop screen.
All I need to do is wear the laptop on a tray strapped from my shoulders and I'd be in violation of Google's patent.
A watch-type appliance requires one or two arm/hand combinations - even looking at it takes one combo.
That means the ability to do an independent task is compromised from the start.
Google Glasses, on the other hand, permit the complete, independent use of both arm/hand combinations, with eyes and voice acting as the control interface.
With image capture and matching, Glasses could find an appropriate drawing/reference document and display it - all the while keeping the hands free.
It helps to have used Voice Commands or video enhancement previously; it takes a bit of training.
The watch arena seems to be getting a little crowded these days and the old protagonists - Samsung and Apple - are there, too.
All these cries about privacy are red herrings - everyone is selling everything - including Apple.
Both are pointless. But glasses are a lot less acceptable to wear for someone who doesn't usually wear them. A watch is at least securely strapped to your arm.
Just wait until someone gets their eyesight seriously damaged because they were punched while wearing Google glass.
So some neanderthal gets to break their knuckles on polycarbonate/ABS/whatever and metal, and at the same time provide accurate video of exactly who was guilty of the assault?
I'm as creeped out by the idea of Google cameras everywhere as anybody, but I fail to see a downside here. I'll happily take a smack in the mush in exchange for regular payments extracted from said neanderthal by force of law.
(And yes, I know neanderthals were more civilised than the stereotype lets on, but hey, this is the Reg. Who needs accuracy?)
Hi! The car that you are driving is running low on fuel. There's a suitable garage a couple of miles ahead on the right.
And your sales for this month are a bit low according to this spreadsheet.
Oops! And while you were reading these pop-up notifications you ran through a red light and squashed a couple of pedestrians who were too busy reading their google glasses to notice you coming.
Strange that voice control is fashionable again.
It has so many well-known, well-documented limitations and is useful in so few scenarios.
There has been perfectly working voice control on computers and phones for many years, mostly never used because it is generally neither practical nor productive.
Shouldn't glasses be yet another 'dumb screen' controlled by smartphones, ultrabooks and car computers?
Voice control could work if only the devs were allowed to do it properly.
First, it needs an "expert" mode, i.e. one where the device doesn't keep asking you things after each command.
Next, it needs to handle all commands you can normally access from the system menus, not just a subset.
I'm looking at YOU Garmin (3950 sat nav) and YOU Samsung (Galaxy S2)
And no, saying "Hello Galaxy" to start the voice recognition is not something I'd be happy with doing in public. I'm a little over 5 years old now. At least the satnav lets me change the activation word(s).
Actually, in terms of the Galaxy, it's a huge step back from the old Nokia or even the HTC Win6.3 phone I had. In those cases, a single button press followed by saying the contact name would dial the number. Quick, simple, and 99% of the reason for having voice recognition on a phone (at least in my case).
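The complaints above boil down to a couple of concrete design fixes. A minimal sketch (all class and method names hypothetical, not from any real product's API) of what an "expert" mode and a user-changeable activation word might look like:

```python
# Hypothetical sketch of the two fixes argued for above: an "expert" mode
# that skips confirmation prompts, and a user-configurable activation word
# instead of a hard-coded "Hello Galaxy".

class VoiceAssistant:
    def __init__(self, wake_word="hello galaxy", expert_mode=False):
        self.wake_word = wake_word.lower()   # user-changeable, like the Garmin's
        self.expert_mode = expert_mode
        self.commands = {}                   # should mirror *every* menu action

    def register(self, phrase, action):
        self.commands[phrase.lower()] = action

    def handle(self, utterance):
        text = utterance.lower().strip()
        if not text.startswith(self.wake_word):
            return None                      # ignore ambient speech
        command = text[len(self.wake_word):].strip()
        action = self.commands.get(command)
        if action is None:
            return "unknown command"
        if not self.expert_mode:
            # novice mode: the nagging confirmation the poster objects to
            return f"did you mean '{command}'? say yes to confirm"
        return action()                      # expert mode: just do it

assistant = VoiceAssistant(wake_word="computer", expert_mode=True)
assistant.register("call home", lambda: "dialling home...")
print(assistant.handle("Computer call home"))
```

The old Nokia behaviour mentioned above maps onto this as expert mode with a single registered "call <name>" command: one trigger, one phrase, one action, no follow-up questions.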
My journey to work is already plagued by bad passive iPod music, whistling message alerts and people who have set their fondleslabs and smartphones to emit a feedback sound every time ANY kind of button is pressed. I just know these hipster jerks are ulcerating to sit there with future-tech glasses going "Set reminder - Giles and Sorcha, 28th, wheat free...check Facebook...tab...tab...like...like...retweet...tab...tab..."
Maybe the judge will let me off.
Having an offset display at a different focal length only compounds the distraction potential, and the voice recognition will currently require networked computing; so it's cloud-tethered overkill. Better to put all the voice recognition in a local device, say a mobile via a BT headset, with enough processing power to do it locally.
This is just more misplaced cloud BS; in sci-fi this kind of appliance requires 3D displays, completely transparent optical overlays (e.g. in contact lenses), or machine eyes, not fragile, overpriced consumerist trinkets.
I expect the users will get cooked skin from the hot (risky much?) Li-ion battery, hot CPU/GPUs and image projectors, and frontal lobe and eye cancer from the frequent pulsed microwaves from the microwave transceiver in the glasses.