Apple has been granted a patent for a projection system that can enable multiple viewers to simultaneously view 3D images without the need for those dorky 3D glasses. The patent, succinctly entitled "Three-dimensional display system," is fiendishly complex, but its goal is simple: to provide "highly effective, practical, …
So how does this work with multiple people watching?
You'd need to track each person around the room and project a separate pair of images for each one. That scales well...
Now this looks like a real patent
This is indeed different from all 3D displays I have seen and used.
...Not least because they were displays you could see and use...
"Although many companies are involved in autostereoscopic research and development, Apple's patent confidently picks apart the limitations of three categories of those efforts:"
If I was one of those companies I'd be pointing out that Apple's solution only allows for one viewer to see the 3D image. A second observer on the other side of the room would not see the image correctly.
Sounds like Snow Crash ...
The futuristic computer doohickeys in that book worked in a similar fashion for immersive VR, sending individual beams onto your retina to create the scene (though obviously with one user at a time, you'd negate the 'bottleneck' potential)
(oh and wotcha Ms Bee)
So many problems, where to start.
So I'm prepared to concede that it could work for one person - maybe. The thing is, the further you get from the screen the more accurately the system needs to work, since the difference in angle between the observer's two eyes will shrink. Also, the bigger the screen gets, the greater the variation in the angles you need to project the picture at - while still hitting the bullseye several million times a second. The screen and projector have to be rock solid, as others have mentioned, or track each other and the observer REALLY accurately.
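To put a rough number on that shrinking angle: the angle subtended at the screen by an observer's two eyes falls off quickly with distance. A quick sketch, assuming a typical 65 mm interpupillary distance (my assumption, not a figure from the patent):

```python
import math

def eye_separation_angle(ipd_m: float, distance_m: float) -> float:
    """Angle (in degrees) subtended at the screen by the observer's two eyes."""
    return math.degrees(2 * math.atan(ipd_m / (2 * distance_m)))

# Assumed typical interpupillary distance: 65 mm.
for d in (1.0, 3.0, 6.0):
    print(f"{d:.0f} m away: {eye_separation_angle(0.065, d):.2f} degrees between eyes")
```

At 1 m the eyes are about 3.7 degrees apart as seen from the screen; at 6 m that drops to around 0.6 degrees, so the aiming tolerance tightens accordingly.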
As others have mentioned you also need to be tracking the observers' faces to accurately know where their eyes are.
Is the projector also allowing for people lying on the floor, standing up and all altitudes/attitudes in between?
Now let's add another viewer to the mix and we're really cranking up the bandwidth and the complexity of the calculations. There must also be a point where there are so many observers in the room, or observers so close together, that the different images 'bleed' into your own view, not only spoiling the 3D effect but blurring/tingeing the image.
I'm sure there's more but . . . icon says it.
I agree with all of the other posters who believe there should be some limitation on patenting technological fantasies.
It's too bad Apple didn't call you before wasting their time. I'm sure that the people who worked on this would have been grateful for your expert advice.
... and another thing
What happens when the observer turns slightly sideways?
i bet the performance is much snappier than 2D displays
In the land of the Apple patent
the one eyed man is fucked.
I'm deeply skeptical too. But if it could be made to work for just one viewer and just targeting a face sized area (so no 3D) it would make a pretty neat monitor replacement. The screen could be at a comfortable distance and the projector could be very low power since most of the light would be getting to the viewer - and no-one would be able to see over your shoulder...
Beg to differ
Well, I beg to differ on the effectiveness and innovativeness of this idea. I suspect that it can be made to work, and without too great a leap in either compute power or projection technology.
Years ago I set up a single wall VR system. SGI Onyx, Crystal Eyes shutter glasses, Ascension Flock of Birds motion tracker. We ran a mix of open source and proprietary software on it, and it provided a very nice semi-immersive 3D environment. It suffered from all the defects that are well known, not the least of which is the problem that there is no depth of field. That problem is common to every 3D system in existence, including all the movies.
But, looking at this patent, and looking at what we had to work with and what we could achieve with the then available technology, versus what is needed here, I don't think the jump is nearly as large as people think. Probably the most expensive thing to make will be the screen, and that is simply a matrix of thousands of shiny hemispheres.
The issue of tracking the subjects and locating their eyes is pretty much a solved problem. The old Flock of Birds did it very well, but needed a sensor mounted on the shutter glasses. There are modern multi-camera systems that can identify and track humans to sufficient resolution. The projection side could be addressed with little more than a stack of modern LCD or DLP projectors. As a rough approximation you need one projector's worth of illumination per eye observing the screen. Considering what we used in old VR systems, this is dirt cheap. The compute power needed to convolve the image isn't actually all that much. It is simple geometry - all you need to do is determine where the eye is, cast a ray back to each hemisphere, calculate the bounce, and find which pixel in the reflected screen you pick up. There will be conflicts, and some image degradation, but it may be manageable. Once you know the mapping from hemisphere to projected pixel you just scramble the projected images with the map. A couple of FPGAs would be coasting. Not likely doable with a home gaming PC, but certainly doable with a more high-end processing setup.
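The ray-bounce step described above really is plain vector geometry. A minimal sketch of the single-bounce mirror model (the function name and the treatment of each hemisphere as an ideal mirror are my assumptions, not the patent's):

```python
import numpy as np

def reflected_direction(eye: np.ndarray, dome_centre: np.ndarray,
                        hit_point: np.ndarray) -> np.ndarray:
    """Mirror-reflect the eye->reflector ray about the hemisphere's surface normal.

    The normal at a point on a spherical reflector is simply
    (hit_point - dome_centre), normalised.
    """
    incident = hit_point - eye
    incident = incident / np.linalg.norm(incident)
    normal = hit_point - dome_centre
    normal = normal / np.linalg.norm(normal)
    # Standard mirror reflection: r = d - 2(d.n)n
    return incident - 2 * np.dot(incident, normal) * normal

# A ray hitting the pole of a dome head-on bounces straight back at the eye.
eye = np.array([0.0, 0.0, 0.0])
dome = np.array([0.0, 0.0, 5.0])
hit = np.array([0.0, 0.0, 4.0])   # pole of the dome, facing the eye
print(reflected_direction(eye, dome, hit))   # points back along -z
```

Tracing the reflected ray back to the projector plane then tells you which projector pixel feeds that reflector for that eye - which is the mapping the post says you'd bake into the scramble table.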
@Beg to differ
But your system didn't need to pinpoint the correct microscopic spot on millions of tiny reflective dots in real time, correcting for any relative motion between the projector, screen and observer. Also, you didn't need the light to be reflected from each of those microscopic dots with almost zero dispersion directly to the observer's eye. Remember, if one viewer catches sight of any unintended scatter from another viewer's image then their image will be compromised. As another poster said, you will almost certainly need laser-like beam convergence to have any hope of carrying off that trick.
Still to be convinced.
Plead to quarrel
"cast a ray back to each hemishphere"
Just to expand on that slightly: you mean calculate 2,073,600 rays per eye, per viewer, 60 times a second, and manipulate 2,073,600 sub-millimeter lenses in a tight grid, accurately to thousandths of a millimeter, based on data tracked from targets about 2 centimeters square in 3D space, 10 feet from the screen and about 3 centimeters apart. That's a LOT more complicated than you suggest.
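For scale, running the post's own numbers (a 1080p grid of reflectors at 60 Hz, both assumptions taken from the figures above):

```python
PIXELS = 1920 * 1080   # 2,073,600 reflectors, per the post's 1080p assumption
EYES_PER_VIEWER = 2
FPS = 60

def rays_per_second(viewers: int) -> int:
    """Independent eye-to-reflector rays the system must resolve each second."""
    return PIXELS * EYES_PER_VIEWER * FPS * viewers

print(f"{rays_per_second(1):,} rays/s for a single viewer")
```

That is roughly a quarter of a billion ray solutions per second for one viewer, before any tracking or lens actuation is even considered, and it scales linearly with each extra viewer.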
I'm not sure in which world a "high end processing setup", more powerful than a gaming rig PC, just to display the image, makes sense. Here's my 2 TFLOPS PS3 for running the game, and I'll just connect it to my 10 TFLOPS projector. What? Even if this thing could be created, it would cost a literal fortune, and at best produce an image about the same as current polarised efforts. It's just a crazy idea - it could be made to work, in a lab at least, but it's so much more convoluted than other methods that there's just no point.
Cut out the middleman
If proposing a 3D system based on tracking eye position and accurate aiming of laser beams then why bother with a screen? Just draw directly onto the retina. Oh yeah, 'cos they might have trouble patenting an idea that SF authors thought of ages ago.
It tracks you and then displays the appropriate image
Am I the only one thinking of Microsoft Kinect?
Apparently so, otherwise that would be an MS patent.
Presumably they are now busily kicking themselves to death while trying to work out if any of the Kinect patents can be stretched to get 'em a piece of the licensing action.....
To get rid of glasses...
....but maintain accurate "eye detection" why not have a headband of some kind with sensors for each eye.
When you put on the headband you make sure the sensors are right above your eyes, then allow for approx. 1/2 inch down angle, and as long as the machine can "see" the sensors, it will then be able to accurately display the images for each eye.
The sensors can be extremely small, and cheaply made, so the "headband" should not cost too much.
Granted you would need one for each person, however it should be cheap enough to allow multiple sets per unit.
@To get rid of glasses . . .
But then why not wear glasses and simplify the whole system? The whole point of the fiendish complexity of this thing is that viewers are completely unencumbered. A headband kind of defeats the point.
Apple Invents ?? WTF !!
all they need now is someone to invent it and get it working, then 'hey presto!!' Apple owns the patent!!
It proves one point... Apple don't invent anything... they're just a good marketing company!!
Sounds similar to the Microsoft one.
Microsoft demonstrated something that sounds remarkably similar back in June:
Ooh a shiny!
I can just see it being like one of them Pokemon cards you flip round and it looks like it's moving. I wonder if the TVs will come with a free stick of gum.
Personally I wouldn't mind wearing the glasses if they didn't look so $hit. Why do they have to look like Where's Wally rejects?
I can't see how this would work for people lying down on their side or at an angle. Either the signal for the second eye would have to come on a different line, or the viewer would get a confused image until they sit upright.
Or I may have no idea whatsoever.
Another patent for a future claim...
So basically not much more than an idea on the back of a matchbox, and they have another patent they can pull out of the cupboard in the future if anyone so much as goes near any of their ideas.
Totally ridiculous how they can patent a stick-man drawing without so much as anything to back it up. Going by that, you could effectively patent any future tech you could think of and then just wait for somebody to go anywhere near it.
Daft idea anyway, going to all the trouble of trying to track where multiple users' eyeballs are and then somehow manipulating numerous deflectors to get the image to the correct person and correct eye.
Sync'd glasses/contact lenses are simple and already work very well, certainly as well as this would. Up the resolutions, up the frequency, glasses become more and more clear etc. - job done. Next we wait until we have in eye projection systems or holographic displays and, after that, direct neural projection.
How about forgetting 3D and just film at 300+fps?
How about the industry not focus on 3D technology and instead just start producing film at a very high frame rate? That alone will give the presented picture more apparent depth! I don't care about things "coming at me". Give me the depth and I'll really enjoy the scene greatly!
Seems like they're 130% of the way to implementing a dynamic lenticular system, but that wouldn't work any better than a 100% dynamic lenticular system. Why not just stop at 100%? Then you don't have to worry about tracking the viewers at all.
What's the point of "no glasses"?
Surely better to leverage the rose-tinted pair already worn by all Apple-users?
The flaw's in the drawing
One of the generic boxes in their diagram is a "Digital Signal Processor". These things are specialized processors for running algorithms on streaming data; they're not magic boxes. This suggests a certain cluelessness about how this thing would actually work, which in turn suggests it's yet another valueless patent designed as a land grab. These are crap -- they exist so that when someone (probably a Japanese company) puts in the leg work to make something that actually works, Apple can turn up with their hand out claiming they "invented" the technology.
The patent system is a joke. It fails to protect the real workers against the predations of the carpetbaggers and their far worse pond-scum lawyers.
Come back in 50 years
Theoretically the idea of shooting the wanted light into every eye watching the screen is excellent. It means that heads can move anywhere, and be tilted. As a viewer approaches the screen the image gains extra depth. Each person present could see a different film. This is terrific.
But as everyone above has observed, it is way beyond what is possible - for now, I'd add.
The patent has a life of twenty years in the States, so it will have lapsed by the time technology catches up.
So it's good only for porn, then? Exactly what I would expect from Apple...
(But seriously, what are the limits on the number of people whose heads can be individually tracked and provided with unique stereoscopic images?)
so... a 3d display version of the wii head tracking demos that are all over youtube?
You know, the ones where it looks like you're looking through a window...
Not new, so stay tuned
This is an old, old idea; it's just become possible with current tech. So there will be some interesting patent news down the track once Toshiba, Sony, LG etc get on the case. Stay tuned!
MAYBE the hardcore processing is going to be done in the cloud, via Apple's billion-dollar data center (http://www.theregister.co.uk/2009/05/26/new_apple_data_center/), with a new-generation Apple TV device delivering the results to your home...?
So now we're reading excessively accurate positioning data about any number of viewers' bodies, heads, eyes and pupils, along with data about the projector position and screen position, sending this over the internet, letting the server convert all of it into an insane number of motor instructions, then sending it back down the internet, where it finally moves several million tiny lenses by incredibly precise amounts.
And you reckon that's going to work 60 times a second, with no foreseeable issues? With current technology you'd be doing fantastically well to get the overall latency down to 1 second, let alone the millisecond response times needed.
And that's not even the biggest problem - the biggest problem is the projector. Under current technology it's just not possible. Don't be swayed by talk about sub-pixel fields; that doesn't even begin to help with this type of problem. Individually targeting pixels in this way requires motorised lenses 100x smaller than anything in existence, and not just one - over 2 million, closely packed. And not even one lens - a focal system PER PIXEL is also required. And then there's simple stuff, like the data bus itself: 2 million pixels, each requiring a pitch, a yaw and a focal setting to an incredibly high level of accuracy, probably at least 32 bits per channel. A quick calculation shows you'd need around 12 gigabits (roughly 1.5 gigabytes) per second just to pass the control signals, not even including the image itself.
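Working the control-bus sum through under the stated assumptions (a 1080p lens grid, three 32-bit channels per lens, 60 Hz - all figures from the paragraph above):

```python
LENSES = 1920 * 1080   # one steerable lens assembly per pixel (post's assumption)
CHANNELS = 3           # pitch, yaw and focus per lens
BITS = 32              # bits of precision per control channel
FPS = 60

bits_per_second = LENSES * CHANNELS * BITS * FPS
print(f"{bits_per_second / 1e9:.1f} Gbit/s (~{bits_per_second / 8e9:.2f} GB/s) "
      "of control data, before any image data")
```

And that's the steady-state figure for a single full update of every lens per frame; retargeting for multiple viewers multiplies it again.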
Even if any of this did become possible many, many years down the line, it's just a non-starter. There are so many other ways of generating 3D images which don't require mainframes, laser arrays, millions of tiny focal systems, highly specialised screens, and magic.
Apple Tosh ... not a recipe ...
actually the name of the process whereby some brainless scientist gets together with a guileless patent attorney, dreams up or re-engineers an existing patent, and produces sketches and prose sufficiently obtuse to confuse the American Patent Office.
Sharks with frickin laser beams,
Train them to strobe people's eyes with an image. (This idea is hereby in the public domain. The use of sharks is figurative and can be interchanged with any other laser-based mechanical method for the delivery of an image onto a subject's retina. Use of three or more lasers is also allowed.)
A post about Apple and no mention of the word 'fanbo*' by the el Reg Pavlovians? Surely some mistake...
Not sure this is safe..
I'm pretty sure that firing tight-beam lasers at a perfectly non-lambertian screen has been done before, but we'd usually regard that as disco lighting, and tend to avoid looking down the beam. There's also the minor question of creating a coloured image - the blue source could be a bit of a pain (literally)!
This looks like it was dreamt up on a Friday evening, and some idiot took the beer mats to the patent office.
What's the point?
Seriously, what does anyone see in these things (no pun intended)?
Simulated-3D displays have limited applications in some fields - engineering, medicine, etc - but there, passive polarized glasses work just fine for nearly all users. And that tech's been around for ages - I saw commercial systems at SIGGRAPH in 1988.
For entertainment, the appeal of simulated-3D video utterly escapes me. If I want to see in 3D (which I don't, particularly, except to the extent it's functionally useful), I look at the real world, which is far more interesting than anything in some James Cameron film anyway. TV is interesting when it features strong plots, complex characters, engaging dialogue. Fortunately, even the USPTO is unlikely to let Apple patent those.
And you damn stereoscopic kids get off my lawn.