Re: 8-bit
<puts broadcast engineer hat on>
The issue is less about the number of colours that can be differentiated than about what happens when adjacent colours are changing gradually; increasing the bit depth gives smaller quantisation gaps and also, courtesy of noise in the image, a better naturally dithered result - so you don't so easily see steps in the picture.
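To make that concrete, here's a quick back-of-the-envelope sketch in Python (my own toy illustration, nothing to do with the original research): a level that sits between two 8-bit codes, quantised once cleanly and once with roughly a code value of noise added first.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    true_code = 100.4   # the "real" level, expressed in 8-bit code values

    hard = np.round(np.full(n, true_code))                                # no noise: every sample lands on 100
    noisy = np.round(np.full(n, true_code) + rng.uniform(-0.5, 0.5, n))   # ~1 code value of noise first

    print(hard.mean())    # 100.0  - the 0.4 has been thrown away entirely
    print(noisy.mean())   # ~100.4 - on average the fine detail survives quantisation

The noise makes the quantiser land on 101 about 40% of the time, so the eye - which averages over area - still sees something very close to the true level.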
But that said, there are always going to be pathological images which show some unpleasant artefact or other, even in their original uncompressed state.
The broadcast research, back in the day when digital transmission within a broadcast centre was being developed, showed that eight-bit colour depth, with a few levels reserved at either end of the scale, was sufficient for the majority of viewers. Much video equipment worked at ten bits internally, and CCIR 601 digital video allowed ten bits, simply to provide some room: a little extra precision that could either be kept or thrown away at the end.
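As a rough sketch of the "keep it or throw it away" idea (my own illustration, not anything from the standard's text): carry ten bits through the internal processing, then round the two spare bits off when you finally need an eight-bit output.

    def ten_to_eight(code10: int) -> int:
        """Round a 10-bit code value to the nearest 8-bit code."""
        return min(255, (code10 + 2) >> 2)

    # Four adjacent 10-bit codes collapse onto the same 8-bit code -
    # the extra precision only ever existed while it was useful.
    print([ten_to_eight(c) for c in (398, 399, 400, 401, 402)])   # [100, 100, 100, 100, 101]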
But consider a pathological image: a flat field of a single colour, with the luminance changing very slightly across the width of the image... if the difference between the two sides amounts to only a couple of bits' worth of data, you *will* see vertical stripes where the bits flip, unless your quantisation steps are so small that they are finer than the eye can resolve. You simply do not see that with an analogue system, even if the change is down at the level of the signal noise. The noise acts to dither the signal and smear the change across the picture.
If that sounds like an unreal image, it isn't: think of a blue sky shading to the horizon...
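Something like this little Python sketch (again my own toy example, not taken from the original tests) shows exactly that image: a field whose luminance drifts by about two code values across the width, quantised once cleanly and once with a bit of analogue-style noise added first.

    import numpy as np

    rng = np.random.default_rng(1)
    width, height = 1920, 4
    ramp = np.linspace(100.0, 102.0, width)        # ~2 code values edge to edge
    frame = np.tile(ramp, (height, 1))

    clean = np.round(frame)                                      # quantised with no noise
    noisy = np.round(frame + rng.normal(0.0, 0.5, frame.shape))  # noise added before quantising

    print(np.unique(clean[0]))                  # [100. 101. 102.] -> three flat bands
    print(np.count_nonzero(np.diff(clean[0])))  # 2: two hard edges, in the same place on every line
    print(np.count_nonzero(np.diff(noisy[0])))  # many small transitions instead of two big ones
    print(np.allclose(clean[0], clean[1]), np.allclose(noisy[0], noisy[1]))   # True False

In the clean version the two edges line up on every scan line, which is why they read as vertical stripes; with the noise the transitions scatter from line to line and the eye averages them away.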