May I be the first to say...
MWHAHAHAH!
Researchers from Radboud University Nijmegen are claiming that with a sufficiently-sensitive MRI and decent mathematical modelling, they can reconstruct images of the brain recognising letters seen by the test subject. Specifically, the researchers say they have “used data from the scanner to determine what a test subject is …
My thoughts too. Can't wait for the day they think to combine it with the latest crackpot "extreme" pornography laws.
Of course it'll be used to screen convicted sex offenders first... then sexual terrorism suspects... then candidate priests & teachers... then the portable version for on the spot checks in accordance with very strict independent oversight...
"Excuse me a moment Sir"
OMG! Think of Camilla. Think of Camilla...
"Right Sir, I'm arresting you under articles 7 and 11 of The Extremely Unthinkable Degenerates Act on suspicion of aggravated picturing of extreme bestiality and necrophilia..."
This post has been deleted by its author
That would be a valid criticism if the researcher were describing the technique as mind-reading which, as far as I can tell, they aren't. I suspect the real usefulness of the technique is likely to be in mind control (by, not of!) since I seem to recall that thinking about something is neurologically similar to looking at it.
Of course, it is a valid criticism of all those "OMG Mind Reading is Coming" headlines. I'll leave it to my fellow readers to decide if El Reg is guilty of that.
... the real usefulness of the technique is likely to be in mind control...
As with a large part of basic scientific research, the real benefit is that it lays the foundations for more research and development. While this is an impressive accomplishment, it has no direct application to anything that I am aware of. It furthers our understanding of how the brain works and how it relates to our environment. Trying to pigeonhole research as being useful or not often misses the point that simply gaining greater understanding of the subject can open doors that we did not know were even there.
One possible side benefit of this research, though: this is the sort of thing that might get kids interested in science and technology.
"it has no direct application to anything that I am aware of"
Tell that to Stephen Hawkins! There may come a point at which the only way he can communicate is by reading his brainwaves.
As to looking up the answer in the back, there is a strong case for using neural networks based on the ART approach, where the ideal is selected as a candidate and fed back into the input data to reinforce and speed up recognition. This top-down vs bottom-up approach to object recognition has been debated for a long time now, and I for one agree with the top-down reinforcement paradigm. In terms of a neurological basis for this, it could be argued that repeated exposure to fuzzy stimuli, such as different types of tree, will eventually give rise to a perfect prototype tree, which then becomes the candidate stimulus for object recognition (or in this case letter recognition).
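A minimal sketch of that top-down feedback idea (this is a toy illustration, not the researchers' model or a full ART implementation; the 3x3 "letter" templates and the `feedback` blending factor are made up for the example): pick the prototype closest to the noisy input, then mix the winner back into the input so the interpretation reinforces itself over a few rounds.

```python
import numpy as np

# Hypothetical prototypes: tiny 3x3 binary templates standing in for letters
PROTOTYPES = {
    "T": np.array([[1, 1, 1],
                   [0, 1, 0],
                   [0, 1, 0]], dtype=float),
    "L": np.array([[1, 0, 0],
                   [1, 0, 0],
                   [1, 1, 1]], dtype=float),
}

def recognise(stimulus, rounds=3, feedback=0.5):
    """Top-down loop: select the nearest prototype (bottom-up),
    then blend it back into the input to reinforce recognition."""
    x = stimulus.astype(float)
    best = None
    for _ in range(rounds):
        # bottom-up: candidate prototype with smallest squared error
        best = min(PROTOTYPES, key=lambda k: np.sum((x - PROTOTYPES[k]) ** 2))
        # top-down: feed the candidate back into the input data
        x = (1 - feedback) * x + feedback * PROTOTYPES[best]
    return best

# A noisy "T" with one pixel missing still settles on the T prototype
noisy_T = np.array([[1, 1, 0],
                    [0, 1, 0],
                    [0, 1, 0]])
print(recognise(noisy_T))  # T
```

The feedback step is the "looking up the answer in the back" part: once a candidate is chosen, the input is pulled towards the prototype, which is roughly the top-down reinforcement being argued for above.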
"it has no direct application to anything that I am aware of"
Tell that to Stephen Hawkins! There may come a point at which the only way he can communicate is by reading his brainwaves.
By "no direct application" I mean "it will take quite a bit more effort to be several generations from anything that can be terribly useful in the real world." In the case of the good Dr Hawking (I assume you mean Stephen Hawking, the physicist paralyzed by ALS, not Stephen Hawkins, the Australian who took gold at the 1992 Summer Olympics), most everything he has is a prototype, so he would most likely be a good candidate for early testing. He is, after all, involved with similar research. His situation is such that interpreting what he is reading is not the issue; it is more finding out what he has to say.
It doesn't matter which direction the control is going, but it would be interesting to see what the unmolested data was to see whether human pattern recognition could spot it. If we could, then maybe there's something interesting here.
I suspect that mind control will not be widely used for a while, because it will require exactly the kind of focussed and concentrated thought that our entire culture is training our brains to be incapable of. I suspect it is far easier to type than to concentrate hard and clearly enough to be machine readable, and consequently it will also be slower for a very long time.
Thinking really clearly is a non-trivial activity.
I don't understand the article's implication that teaching the computer what letters look like is somehow cheating. How the feck else is it going to know?? You wouldn't expect a child who hasn't yet been taught the alphabet to be able to identify the images either.
Most people could be accused of having a grasshopper mind, and this is basically true: unless you see the letter and repeat it over and over again to yourself while in the MRI, there is a good chance the actual thought of the letter would exist for only a tiny fraction of a second. It has arguably already been shown that more advanced technology in development (that actually works) can easily be scuppered simply by not following the rules:
Machine Operator: "Now, Mr Potential Murderer, we want you to lie in this here brain scanner and replay how you murdered your next-door neighbour in your head, over and over again, 50 times."
Potential Murderer: "OK." (In head, humming: "Oh, I do like to be beside the seaside...")
....some time later....
Machine Operator: "No, he couldn't have done it - he was at the seaside, according to the machine."
I also used voxels, and a detection technique that if googled would reveal my identity so I will pass.
Pretty much. I used k-nearest-neighbour clustering algorithms on voxelised data from a "detector" (not to be named), looking through lead shielding containing nuclear material (the distance between voxels was a non-linear equation, though). Obviously, the material we are looking for is an input, and the algorithm was taught from that. We were looking for uranium, and we had sample data for uranium; that is how we "taught" our algorithms (if you see a deflection-angle distribution with a probability similar to f(x), and it clusters properly, it is uranium). If they had published images in which a discrete letter was shown, I would say they were fitting the data with some shit method (select all pixels within the desired result boundaries :-) ). But since there is noise in the images, and significant noise (looks like they used ROOT too, actually, lol), most likely they used simple filtering steps to reduce it to what they got.
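For readers unfamiliar with the technique being described, here is a generic k-nearest-neighbour sketch (not the commenter's actual code; the 2-D "deflection-angle" feature values and material labels are entirely made up): classify a query point by majority vote among its k closest labelled training points.

```python
import numpy as np
from collections import Counter

def knn_classify(train_X, train_y, query, k=3):
    """Label a feature vector by majority vote among its
    k nearest labelled neighbours (Euclidean distance)."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy training set: invented 2-D features per voxel, labelled by material
train_X = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.15],
                    [0.90, 0.80], [0.80, 0.90], [0.85, 0.85]])
train_y = ["lead", "lead", "lead", "uranium", "uranium", "uranium"]

print(knn_classify(train_X, train_y, np.array([0.82, 0.88])))  # uranium
```

This is the "taught from sample data" idea in its simplest form: the labelled uranium samples are the training set, and a new observation is labelled by whichever cluster it falls closest to.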
Anyone wanna pay $40 so I can see the original paper? lol. It is a decent journal; they couldn't have fudged the analysis that badly, but hey, you never know.
This would be a life changing thing for my son. He is completely aware and can move (somewhat) his left side. A bullet in his brain has left him without speech. All other forms of adaptive tech haven't been able to help him. His eyes are not completely focused together, his hand cannot move well. He loves his iPad but only to watch HBOgo, Hulu, Netflix, Facebook and Pandora. His fingers cannot do any typing. He learned morse code easily enough but has a hard time hitting the iPad screen accurately.
We are so hopeful for a future brain imaging device to let him communicate with the world. A video on this page: http://www.learnmorsecode.info/philip-morse-code/ shows his predicament.
So please tech/science community do not think this is a trivial matter.