> This might set off a sh*tstorm, but here goes:
> I'm curious - has anyone here performed formal tests regarding the
> ability to discern pixel resolution with different types of content?
> Obviously realistically simulating a starfield as seen from the
> Earth's surface requires close to an arcminute of resolution, but
> what about other types of static and moving imagery? I ask because
> Tim Horn mentioned at the Immersive Cinema Workshop that they
> conducted tests at the Hamburg Planetarium, and that many people
> couldn't tell the difference between 1K, 2K, and 4K imagery.
> I've seen a considerable amount of time spent on discussing the
> importance of developing increasingly higher resolution displays,
> and this assumption seems to be all but unquestioned. What's not
> discussed is the tradeoff between creative freedom and the needs for
> edge-blending, multi-channel application development,
> projector/laser maintenance, etc. What if in the early days of
> moving images it was determined that humans can actually perceive
> 120fps so everyone refused to make 24 or 30fps films or television
> shows? The lack of film resolution video cameras hasn't stopped
> independent filmmakers from making blockbusters with standard def
> video gear. For an audio equivalent, try to get most people to tell
> you the difference between a 44.1 kHz signal and a 96 kHz signal.
I think it all depends on exactly what you are trying to accomplish
with a high resolution display that approaches human eye resolution.
With much of the full dome show content, it isn't necessary at all.
Based on tests that we've done for our shows, we decided to render the
stars in deep space scenes to be much larger than single-pixel size
(which itself was nowhere near human perceptual limits), just because
they looked better for our system.
Resolving two point sources (1 arcminute) or resolving a grating
pattern of black and white lines (1-2 arcmin) is usually what is
referred to when people talk about human visual resolution. These
resolutions match the Nyquist-sampled spacing of the receptors in the
foveal region of the retina, which is about 20-30 arcseconds. However,
there are several other ways to measure human visual resolution.
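To put rough numbers on that sampling argument, here's a
back-of-the-envelope sketch (in Python; the 25-arcsec receptor spacing
is just an assumed representative value from the range above):

    # Rough sketch: from foveal receptor spacing to display pixels.
    # Assumes a representative spacing of 25 arcsec (within the
    # 20-30 arcsec range quoted above); real values vary.
    receptor_spacing_deg = 25.0 / 3600.0

    # Nyquist: resolving one grating cycle takes two receptors, so
    # the finest resolvable grating is 1 cycle per 2 receptors.
    cycles_per_degree = 1.0 / (2.0 * receptor_spacing_deg)  # ~72

    # A display likewise needs 2 pixels per cycle to show it.
    pixels_per_degree = 2.0 * cycles_per_degree             # ~144

    print(f"{cycles_per_degree:.0f} cpd -> "
          f"{pixels_per_degree:.0f} px/deg")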
The most familiar to almost everyone is letter acuity (about 5
arcmin), the ability to resolve alphabet letters, such as those in the
Snellen eye test. Someone with 20/20 vision is able to correctly
identify a 5-arcmin letter 90% of the time.
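As a quick worked example (the 20-foot distance is just the standard
Snellen test distance), that 5-arcmin criterion corresponds to a
surprisingly small letter:

    import math

    # Physical size of a 5-arcmin Snellen letter at the standard
    # 20-ft (about 6.1 m) viewing distance.
    distance_m = 6.1
    letter_rad = math.radians(5.0 / 60.0)   # 5 arcmin in radians
    letter_mm = 2 * distance_m * math.tan(letter_rad / 2) * 1000
    print(f"{letter_mm:.1f} mm")            # ~8.9 mm tall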
Then there is stereoscopic acuity (10 arcseconds), which is the
ability to resolve the slight difference in the angle of the same
object as seen by the two eyes, which we perceive as depth. Note that
this is 6 times finer than the point source acuity.
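To get a feel for what 10 arcseconds of stereo acuity buys you, here
is a small sketch (the 65 mm interocular distance is an assumed
typical value):

    import math

    # Smallest resolvable depth difference from a 10-arcsec
    # disparity, using the thin-triangle approximation
    # delta_d ~ d^2 * disparity / interocular.
    interocular_m = 0.065    # assumed typical eye separation
    disparity_rad = math.radians(10.0 / 3600.0)

    for d in (0.5, 1.0, 5.0):   # viewing distances in meters
        delta_mm = d**2 * disparity_rad / interocular_m * 1000
        print(f"at {d} m: ~{delta_mm:.1f} mm depth difference")

At arm's length that works out to fractions of a millimeter.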
Finally there is vernier acuity, which allows us to tell that two
line segments are not exactly collinear. As with stereo acuity,
people can perceive much finer differences--in this case, 10
arcseconds again--than can be assumed based on the "normal" resolution
of our eyes. The vernier and stereo acuities are therefore known as
"superacuities," since they resolve detail finer than the
photoreceptor spacing in the eye. For such perception to work, there
must be significant post-retinal processing going on inside the brain
to make discerning such fine detail possible. And this processing
uses input from both eyes, not just one, since studies have shown that
binocular input yields significant improvements in acuity over
monocular input (e.g., Campbell & Green 1965, Nature, 208, 191-192).
This processing
is not just spatial but temporal in nature, since we perceive higher
resolutions when looking at a series of animated images compared to a
still image from that animation sequence.
Whether we want to build a projection system that matches or exceeds
the human eye's "resolution" should be a question determined by what
you want to accomplish. If you want to do Nyquist sampling or
over-sampling, 100-150 pixels per degree would seem to be perfectly
okay. But even then, you would not cover superacuity effects.
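To see what even that implies for a dome, a quick sketch (assuming a
full 180-degree fisheye and the 100-150 pixels/degree range above):

    # Pixels needed across a 180-degree dome to Nyquist-sample or
    # over-sample human grating acuity at 100-150 pixels/degree.
    dome_span_deg = 180.0
    for px_per_deg in (100.0, 150.0):
        print(f"{px_per_deg * dome_span_deg:.0f} pixels across")
    # -> 18000 to 27000 pixels, far beyond today's 4K systems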
You could use other tricks. For instance, Colin Ware describes an
experiment in his book (_Information Visualization_, 2/e, 2004)
showing that antialiasing can improve success rates in vernier acuity
tests. (His book is an excellent introduction to human
perception as it relates to VR applications. Another is Chapter 2 of
R. Stuart's _The Design of Virtual Environments_, 1996.)
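As a toy illustration of why antialiasing helps with vernier-type
judgments (a minimal sketch of mine, not Ware's actual experiment): an
antialiased line can sit at a fractional pixel position by splitting
its intensity between neighboring pixels, so sub-pixel offsets survive
in the image.

    import numpy as np

    # Draw a 1-pixel-wide vertical "line" at fractional column x by
    # distributing its intensity between the two nearest columns.
    def draw_line(width, x):
        row = np.zeros(width)
        i = int(np.floor(x))
        frac = x - i
        row[i] = 1.0 - frac      # left neighbor
        row[i + 1] = frac        # right neighbor
        return row

    print(draw_line(8, 3.0))    # all intensity in one column
    print(draw_line(8, 3.25))   # 0.75/0.25 split: quarter-pixel shift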
If you are creating a large format display for research purposes, then
you may want to pack in as many pixels per angular area as possible.
This is especially so if your tasks involve resolving fine detail
that approaches the limits of human acuities and superacuities.
For public entertainment, you can get away (and everyone in the field
has been getting away) with low resolution displays that come nowhere
close to human eye acuities. Again, having engaging content can help
a lot in that respect. Why the Hamburg Planetarium got the results
they did
may be related to the phenomenon of contrast acuity falling off as we
age. Ware writes about this in his book, but for a reference, see
Owsley et al. (1983, Vision Research, 23, 689-699). Sensitivity to
higher frequencies, as well as to frequencies below 1 cycle/degree,
drops dramatically as people age. One can also map threshold
sensitivities for different spatial and temporal frequencies, and
there are also drop-offs at the high- and low-frequency ends of both
ranges (Kelly 1979, Journal of the Optical Society of America, 69,
1340-1349).
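For the spatial side alone, one commonly used parametric fit (Mannos &
Sakrison 1974; not Kelly's full spatio-temporal surface) shows the
band-pass shape:

    import math

    # Mannos & Sakrison (1974) contrast sensitivity fit; spatial
    # frequency f in cycles/degree. Sensitivity falls off at both
    # the low- and high-frequency ends, peaking near ~8 cpd.
    def csf(f):
        return 2.6 * (0.0192 + 0.114 * f) * \
               math.exp(-(0.114 * f) ** 1.1)

    for f in (0.5, 1, 4, 8, 16, 32):
        print(f"{f:>4} cpd: {csf(f):.2f}")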
At the low end of the scale, there are clear limits to how bad your
spatial and temporal resolution is before it starts causing problems.
The risk of perceptual stress and even epileptic fits for some people
is increased when you couple extremely low spatial and/or temporal
resolution images with an immersive display. A hard lower limit in
spatial resolution can be found at striped patterns modulated at 3
cycles/degree, which cause visual stress in most individuals; the
corresponding temporal limit is around 20 Hz (see A. Wilkins, _Visual
Stress_, 1995). This is the equivalent of stretching a display 600
pixels across to fill a dome using a fisheye, so obviously no one is
plumbing such depths. However, one can make other qualitative and
aesthetic arguments to keep the resolution well above this.
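For what it's worth, here is the arithmetic behind that 600-pixel
figure as I read it (an assumption on my part: the visible pixel grid
itself forms the stripes, at roughly one cycle per pixel):

    # 3 cycles/degree stripes across a 180-degree fisheye, with the
    # pixel grid itself supplying the stripes (~1 cycle per pixel).
    stress_cpd = 3.0
    dome_span_deg = 180.0
    print(stress_cpd * dome_span_deg)   # ~540, i.e. roughly 600 px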
I'm sure this is just a small sampling of the work that has been done,
and someone who studies perceptual psychology will be able to dig up
more. However, I can almost guarantee that little to no research has
been performed on these topics in highly immersive spaces like
digital full-domes. Since much of the past work on perception in
immersive technologies has been done with head-mounted displays and
CAVEs, and domes are very different beasts, there is a wealth of
research still waiting to be done.
> It would be very helpful to have formal studies conducted to
> determine the *actual* importance of resolution versus the assumed
> importance of sheer pixel density. I've had incredibly
> psychologically immersive experiences using gameboys and with 1000
> pixels across the dome with good gameplay or narrative, and equally
> bland experiences in IMAX theaters during films without much to say.
> While I have no doubt the best, brightest, and highest resolution
> systems are necessary for specific applications, it seems that we
> will be able to get more artists and scientists interested in
> producing for this medium if we can provide quantifiable research
> concerning what is an "acceptable" level of resolution, size,
> brightness, etc for producing and showing different materials.
> david mcconville
--kachun

+** Dr. Ka Chun Yu **+
+** Curator of Space Science **+
+** Denver Museum of Nature & Science **+
+** 2001 Colorado, Denver, CO 80205-5798 **+