While the rest of the TV industry goes romping off into the third dimension, Sharp has decided to do its own thing by focusing instead on the "fourth pixel". The "Quattron" technology in its new range of Aquos-brand LCD sets gives them yellow pixels in addition to the standard red, green and blue primary colours. Sharp Aquos …
4 colours from a 3-colour signal?
Can anyone tell us how they are doing this? The Sharp website refuses to say anything except "there are 4 colours".
But thinking mathematically, the incoming signal has only three degrees of freedom and the output (controlling the pixels) must have four, so are they just using heuristics to (for example) use the yellow emitter to supply the fraction where R=G? That probably moves the gamut around a bit, presumably to cover an area more useful to human observers, but probably doesn't increase that area at all.
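The kind of heuristic speculated about above could be sketched like this (a guess at the general approach, not Sharp's actual, undisclosed algorithm):

```python
def rgb_to_rgby(r, g, b):
    """Naive RGB -> RGBY mapping: route the component shared by
    red and green (which reads as yellow) to the yellow emitter.
    Purely illustrative -- not Sharp's algorithm."""
    y = min(r, g)            # the fraction where R and G overlap
    return r - y, g - y, b, y
```

With this mapping a pure yellow (R=G=1, B=0) drives only the yellow sub-pixel, while pure blue passes through untouched — which matches the intuition that the gamut is merely re-partitioned, not enlarged.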
It's an interesting development though because the raw hardware probably DOES have a much wider gamut than the signals it is asked to display, and THAT breaks the log-jam on extending the gamut of TV.
Previously, there were no devices capable of displaying a higher gamut signal, so there was no incentive to create broadcast or disc-based content that had a higher gamut, so there was no incentive to create a device that could display it. Deadlock.
If this device delivers a better picture quality on existing signals AND could display a higher gamut, it breaks that deadlock.
Yes, the colour space is only 3-dimensional. This here is a cut through it:
As you can see, by mixing 3 colours you can only get the colours within the triangle. In a real system, all the colours need to be within the "horseshoe". Adding a fourth primary turns the triangle into a quadrilateral and therefore gives you better coverage of the space.
Now there are systems which map more of the colourspace into something compatible with normal RGB systems. Look at "ProPhoto RGB" for example. It extends beyond the visible colours, and probably beyond what a normal RGB monitor can do. Yet even if uncorrected it should produce acceptable colour on an RGB screen.
By the way, the curvy line of the horseshoe is the pure spectrum.
The visible 'horseshoe' clearly extends outside the triangle on the GB side, not on the GR side, hence there is nothing to gain from adding yellow. It only covers what is already there.
I'm not convinced that the additional cost (in terms of both processing and power) is worth the 'additional' colour gain.
Signal is not "3 colors"
Video is normally sent as YCbCr, representing luminance plus blue-difference and red-difference channels - the point is the signal isn't coming out as RGB, but is being converted to RGB from a wider-gamut colour space. So converting to RGBY instead of RGB is not magicking something out of nothing.
Colour differences for beginners
Cb and Cr are actually (B-Y) and (R-Y) scaled by fixed factors - from the three signals it is possible (using a matrix) to reproduce RGB. The colour difference signals, however, do not have the same resolution as the Y signal. In fact, in NTSC the colour difference signals did not even have the same bandwidth as each other.
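That matrixing step can be sketched like this (using the standard BT.601 full-range coefficients, and ignoring the reduced colour-difference resolution):

```python
def ycbcr_to_rgb(y, cb, cr):
    """BT.601 full-range colour-difference to RGB.
    cb and cr are centred on zero, i.e. scaled (B-Y) and (R-Y)
    as described above."""
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return r, g, b
```

A neutral grey (Cb = Cr = 0) comes out with R = G = B = Y, as it should.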
Broadcast engineers are familiar with the horseshoe of chromaticity, and are also aware that most of the deficiencies occur in the cyan and magenta regions (due solely to the chosen primaries), and then only in very saturated colours, which should not include fleshtones. There have always been linear matrices in cameras to improve their colorimetric accuracy. By changing the agreed colorimetry of display devices, colours MAY be more pleasing, but not necessarily more accurate.
Oh, and in TV terms, Y designates the luminance or brightness, NOT yellow. Given that in printing K represents black, might not the correct designator be "W"?
Yellow in RGB displays
The obvious reason for adding colours, as noted above, is to increase the gamut of colours available. But those changes need not be at the colour extremes; indeed, they can increase the gamut of NEUTRAL colours too.
This seems silly; you get a neutral grey by equal R,G,B signals right? Well, no actually. If you look at a video display that's not been calibrated the "white" will actually be bluish, around 9300K. It is possible to correct for the colour cast by dimming the blue, but that also dims the picture.
The solution is to add yellow light instead. Blue is an additive primary colour and yellow is a subtractive primary colour; chromatically, a reduction in one is equal to an increase in the other. OK, the theory works, but how do you decide how much yellow to add?
My guess is it's based on the ratio of red+green versus blue. The brighter and more neutral-to-yellow tones will receive appropriate boosts, and then the screen can display wonderful paper-white and natural skin tones.
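That guess could be sketched as follows (the function and its behaviour are entirely hypothetical, just making the ratio idea concrete):

```python
def yellow_drive(r, g, b):
    """One guess (mine, not Sharp's) at deriving a yellow sub-pixel
    drive from the ratio of red+green to blue: bright,
    neutral-to-yellow tones get a boost; strongly blue tones get none."""
    warm = (r + g) / 2.0
    return max(0.0, warm - b)   # boost only when warm dominates blue
```

So a paper-white-ish tone gets some yellow, while a saturated blue gets zero.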
This shortcoming of RGB is analogous to the shortcomings of its theoretical subtractive sibling CMY colour. If you overprint solid cyan, magenta and yellow inks you don't get black, you get a nasty brown. So software removes some coverage from all three channels and replaces it with a fourth plate, black ink: and there you get CMYK, 4-colour process.
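That black-generation step can be sketched with the textbook grey-component-replacement formula (real print RIPs are considerably more sophisticated than this):

```python
def cmy_to_cmyk(c, m, y):
    """Simple grey-component replacement: pull the shared (muddy)
    component out of C, M and Y and print it with black ink instead.
    Textbook formula, not any particular vendor's implementation."""
    k = min(c, m, y)
    if k >= 1.0:
        return 0.0, 0.0, 0.0, 1.0
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k
```

Overprinting all three solids (1, 1, 1) becomes pure black ink, and a mid grey (0.5, 0.5, 0.5) is printed with black alone.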
Now, not even 4-colour process is perfect, although it's quite good. There is Pantone Hexachrome, which adds red and green process inks. Some high-quality giclée plotters use CMYK plus pastel C, M, Y inks for best quality in pale tones. And printers can use spot-colour 'bump plates' for super-bright reds, or special finishes like varnish, metallic or fluorescent inks.
What, no component inputs or VESA wall-mounting capability? For something state of the art it's lacking a couple of basic features.
Library photos, methinks
Don't judge the physical aspects of the set by these images - looking through the Aquos section of the Sharp website shows several sets of varying sizes and specifications all illustrated with these same images... However, if you then head for the page in that section relating to this set, the technical blurb there suggests that this particular set DOES feature both component input and VESA mounting points.
"Unlike the Sony, the Sharp lacks 24p playback for Blu-ray Discs."
"However, £2000 is quite steep for a 46in screen these days, so you are paying a big premium for those extra yellow pixels. As a result, the 46LE821E will probably appeal mainly to Sky HD subscribers or Blu-ray buffs who will appreciate the richness that the 46LE821E brings to HD content. ®"
I think the buffs will pass.
Did you test the screen with a colorimeter?
I had been entertaining hopes that this technology would indeed provide richer color, and I am heartened to hear that it has actually succeeded. I hope that their patent will be licensed, or that they will have a horde of imitators when it expires.
Even under a microscope, the pixels on my Sony don't look anything like as good as the RGB (and now yellow) ones in your photo.
Seriously though, Sharp may be onto something here: they've found out how to fit in smaller pixels, and whilst everyone else is racing off to make rather dubious-looking 3D, this lot have done something good about the colour gamut instead.
Maybe we should start thinking about cyan and magenta pixels as well. It worked for the photoprinter market.
I would definitely prefer to watch 2D TV done well than 3D done badly.
How about 2K (2048-pixel resolution; 4K would be even better, but that's getting greedy), 300Hz frame rate and six colours, RGBCMY, for a next-generation standard?
> Maybe we should start thinking about cyan and magenta pixels as well.
> It worked for the photoprinter market.
To complete your idea, you'd want black pixels as well.
Pixels don't look anything like as good!
Ah ok I see what you did there..
I'd happily stare at those 'pixels' all day...
whoa. wait one minute..
>the 46LE821E will probably appeal mainly to Sky HD subscribers or Blu-ray buffs who will appreciate the richness that the 46LE821E brings to HD content.
...but earlier you clearly stated that it cannot play Blu-ray in 24p mode.
I like the innovation... but that doesn't deserve the massive premium.
Sharp knows their audience? ;P
"...produced much subtler and more realistic colours, especially on skin-tones." and "...claims that gold is a particularly difficult colour for conventional RGB sets to reproduce" - with these two choice bits, would you say Sharp is trying to market specifically to the adult movie-watching connoisseur rather than the spectacle-wearing Avatar wannabes?
Not the first time
I remember when Sharp actually tried this early in the last decade with 4-color CRTs as well, with the fourth color being, surprise, surprise, yellow. They shelved it after a while for no reason. I'm surprised that it took them that long to reintroduce the technology to LCD.
As for why they're not interested in the 3D race, well, they already won it, in 2003.
And the technology has been improved since and will make its first consumer appearance outside Japan on the upcoming Nintendo 3DS.
Black helicopter, because 3D displays that don't need glasses are truly ahead of their time.
At least Sharp have a go at innovating.
I remember their PerfectPAL LCD screens from around 5 years ago.
They were 960x540 so fitted a standard PAL signal (you don't get to see 35 or so lines, so 540 works fine) and then did a simple half downscale for 1080 material.
None of the awful upscaling that HD screens gave.
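The 'simple half downscale' described above amounts to averaging sample pairs along each scanline, something like this (my sketch of the idea, not Sharp's actual scaler):

```python
def half_downscale(row):
    """2:1 box downscale of one scanline: average each pair of
    samples, taking e.g. 1080 samples down to 540. Illustrative
    only -- real scalers use better filters."""
    return [(row[i] + row[i + 1]) / 2 for i in range(0, len(row) - 1, 2)]
```

Exact 2:1 ratios like this avoid the interpolation artefacts that fractional-ratio upscaling introduces.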
In fact I still have a 32" version and with Freeview and DVD it looks great. 1080/HD material and the Xbox360 look super smooth on it too.
People often comment on how good the screen quality is and are surprised when I mention it isn't HD.
Why does every review ALWAYS comment on the built-in speakers? No one in their right mind would actually use them. If you're blowing the better part of £2k on a TV, you're going to have, or be getting, a proper sound system too... or so one would hope! Interesting article and tech nonetheless, but I'll stick to my projector and substantially bigger screen, for far less than a weeny 42" TV and that's WITH the sound system.
Couldn't agree more with JWS. If there's one thing that's gone backwards with the move away from CRT to 'thin' sets, it's the quality of the inbuilt sound - not that I ever heard a CRT that sounded particularly good - but the laws of physics are difficult to cheat.
Shouldn't there also be some kind of law to stop speakers that are less than about 8" in diameter being described as "sub-woofers" :-)
I wonder if somebody with the extra (yellow/orange) cone in their eyes could tell the difference?
I suspect that somebody who can actually "see" a different frequency might be able to appreciate the extra colours, unlike those of us who only see in three. It's interesting to upgrade the hardware of our TVs, but the hardware interface to our brains hasn't had the equivalent upgrade (unless you're a tetrachromat).
Actually we do see in yellow
We have three types of cone cells, sensitive to short, medium and long wavelengths, but apart from one of them, these don't actually match RGB.
Short wave peaks at violet, not blue.
Medium wave does actually peak at green.
Long wave peaks at, extra points for anyone who guesses this, yellow :-)
So rather than Red, Green and Blue, we actually see in Violet, Green and Yellow.
RGB monitors will always be an approximation, as they can't actually reproduce what we see, just a subset of it. In order to see true-colour images, you'd need a monitor that peaked at the same wavelengths as our eyes do.
So adding Yellow is probably a good idea.
Re: Actually we do see in yellow.
That was quite informative, btw. Hope you don't mind a tangential wibble about it...
I guess the reason we conventionally use RGB rather than Yellow, Green, Violet is that rather than trying to hit the peaks for each kind of photoreceptor, we're trying to produce colours that stimulate *only* those receptors: therefore more extreme colours are used for monitors.
[Hmm, I wonder if that might explain the mismatch between artists' primary colours and the RGB secondary colours. I always wondered about that...]
So why is adding Yellow a good idea? Still not entirely clear to me. Maybe it's just to improve colour resolution in the red-yellow-green region (skin tones, wood, fruit, etc)?
more black thoughts
Fly in the ointment: "The contrast could have been a little better on deep blacks".
Next time they could add black pixels (although Sony might have patents); it helps to spot the black helicopters.
They missed the basics.
"The set supports DLNA networking, so you can connect it to a home network and stream photos and music from a networked media server – although you can’t stream video for some reason. There’s no internet connectivity or any kind of web-based service either."
(quoted from article)
What use is a TV that can't stream video?
I swear, I will not be buying another TV until someone comes out with one that has a built-in capability to stream DVD subdirectories from a NAS device (i.e. I can ditch my DVD player.) That will be either official support, or a modified firmware. I own a lot of DVDs, and I am beginning to view a DVD player in the same fashion as a floppy disk drive - yet another type of media to swap whenever I want to watch something new.
I basically want the lot on my NAS, my media in the basement, and my shelf space reclaimed.
Green is not a primary color. Red, blue and yellow are.
Light, not pigment
When referring to pigment, red, yellow, and blue are the primary colours.
However, when referring to light, red, green, and blue are the primary colours.
When we're talking about LIGHT its RGB.
and we're in the UK, so it's colours...
sounds like an interesting idea, BUT
Haven't decent monitors and TVs given beautiful and faithful colour representation to photographers for years? What happened to those? Not to mention every colour can be made with those three.
Maybe my limited understanding of this makes me look stupid, but stuff has always been the right colour on my displays for close to forever.
Nearly £2k but it doesn't do DLNA video streaming, "standard" out of the box doesn't look any better than your test Samsung and no 24p for BluRay.
You must be joking. £800 for a Sony Bravia with the missing bits above and 3 year warranty.
For twice the cost of the above Sony I want the full feature set, yellow pixel AND 3D.
So long as extra pixels means more of those type of boffins then it's fine by me. On a related note, as this TV is particularly good at reproducing skin tones, and has nothing special in the way of audio, maybe it should be more targeted at the ardent grumbleflick connoisseur.
They all stayed at 3 pixels..
We said "fuck 3, we're going to have 4 pixels" That's right 4 ! Match that fuckers!
Could be useful
If you look at the eye's receptors (http://en.wikipedia.org/wiki/File:Cone-fundamentals-with-srgb-spectrum.png), you can see that a yellow subpixel would be very useful, covering an area where we have a lot of sensitivity.
That said, it is ultimately pointless since all of our signals are downgraded to RGB at some point (usually at the CCD in a digital camera).
@Adam 38: RGB lights can't reproduce a full RGB recording's gamut...?
"FAIL: That said, it is ultimately pointless since all of our signals are downgraded to RGB at some point (usually at the CCD in a digital camera)."
(Disclaimer: I'm some guy who figured out the following for himself, so it might be bollocks. Take it with a pinch of salt).
Assume "pure" cyan is 495nm. It stimulates the "blue" (420nm peak sensitivity) and "green" (534nm) receptors, but not the "red" (564nm) ones, because they don't have much sensitivity to cyan's wavelength.
So both the human eye (and an idealised camera with the same sensitivity characteristics) see high levels of blue and of green, but very low levels of red, and the camera is capable of recording this response.
But what if we then try to reproduce cyan by shining the measured proportions of red, green and blue alone? Well, while the level of red light we include in the mix is in itself negligible, our red receptors *do* have some sensitivity to green light so- unlike when they were exposed to the genuine cyan- they're going to have some moderate response.
This response of the red receptors dilutes the "purity" of the fake (green + blue) cyan versus the genuine cyan, and it's why one could theoretically improve the colour gamut by including pure cyan rather than relying on mixing green + blue alone.
In short, it should theoretically be possible to *record* more colours using R, G and B sensors alone than it is possible to *reproduce* (in a manner convincing to the eye) using R, G and B lights alone.
I used cyan in this example because there's a bigger gap between blue and the green/red peaks. I assume that Sharp did it with yellow instead because the eye has more sensitivity in that area and it's better for skin tones.
Disclaimer: As I said I figured out the above for myself and it could be bollocks- but it does make sense and would explain the restricted gamut of R, G and B reproduction.
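For what it's worth, the argument can be sketched numerically using crude Gaussian stand-ins for the cone sensitivities (the peak wavelengths are the ones quoted above; the curve width and the primary wavelengths are made up purely for illustration):

```python
import math

def cone(peak, wavelength, width=60.0):
    """Crude Gaussian stand-in for a cone's sensitivity curve."""
    return math.exp(-((wavelength - peak) / width) ** 2)

def response(wavelength, intensity=1.0):
    """(S, M, L) cone response to a monochromatic light,
    using the 420/534/564 nm peaks quoted above."""
    return tuple(intensity * cone(p, wavelength) for p in (420, 534, 564))

# Genuine monochromatic cyan at 495 nm:
real_cyan = response(495)

# Fake cyan: equal mix of a blue (450 nm) and a green (530 nm) primary
fake_cyan = tuple(a + b for a, b in zip(response(450, 0.5),
                                        response(530, 0.5)))
```

Under these (made-up) curves the L ("red") cones respond more strongly to the blue+green mixture than to real cyan, which is exactly the dilution-of-purity effect described above.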
Btw, there are NO primary colours of light.
Light is electromagnetic radiation. Most light sources emit broad spectra which extend way below and way above what we can see, covering the entire narrow band which we consider to be "visible" light. Colour is the result of these EM waves hitting objects which absorb or distort them in particular ways, increasing or decreasing the intensity of certain frequencies.
The fallacy of "primary colours" comes from the fact that we have 3 types of colour receptor in our eyes. (Very, very lucky people actually have 4; search for "tetrachromacy".) The choice of 3 colours was made when we started producing screens, to broadly match them to the three receptors that we have. As someone has already noted above:
Violet -> Blue
Green -> Green
Yellow -> Red