Building on Phil Groce's suggestions:
>To standardize brightness of full-dome system, one should use incident
>brightness of white light projected through the center of the lens
>system measured in lux (lumen/meter squared) at a standard distance such
>as one meter from the lens front surface. This lux measurement
>eliminates the dome reflectivity factor, and the image size factor
>(full-dome vs. truncated dome or pixel/degree factor) when evaluating
>the brightness of projection systems and allows theater designers to
>calculate the desired dome size and/or dome reflectivity to achieve a
>certain reflected NIT or foot-lambert. It will allow us to compare
>apples to apples.
Light measurements are the most mixed up set of units in all the
sciences, I think. Luminous flux, luminance, illuminance, foot-Lamberts
and nits. Next it will be photons per fortnight (just kidding - that
would actually be radiant flux, not to be confused with luminous flux
which accounts for eye sensitivity).
Below is a summary of units:
  Symbol  Quantity            Unit(s)            Definition
  Q       light quantity      lumen-hour,        radiant energy as corrected
                              lumen-second       for the eye's sensitivity
  F       luminous flux       lumen              radiant energy flux as
                                                 corrected for the eye's
                                                 sensitivity
  I       luminous intensity  candle, candela,   one lumen per steradian
                              candlepower
  E       illumination        foot-candle        lumen/foot^2
  B       luminance           candle/foot^2      (see unit definitions)
                              foot-Lambert       = (1/pi) candle/foot^2
                              Lambert            = (1/pi) candle/cm^2
                              stilb              = 1 candle/cm^2
                              nit                = 1 candle/meter^2
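Since these units come up constantly, a couple of conversion helpers may
save some grief. The constants are standard (1 ft = 0.3048 m, so one
foot-candle is 10.764 lux and one foot-Lambert is 3.426 nits):

```python
import math

SQFT_PER_SQM = (1 / 0.3048) ** 2          # 10.7639 ft^2 in a m^2

def footcandles_to_lux(fc):
    """1 foot-candle = 1 lumen/ft^2 = 10.7639 lumen/m^2 (lux)."""
    return fc * SQFT_PER_SQM

def footlamberts_to_nits(fl):
    """1 foot-Lambert = (1/pi) candle/ft^2 = 3.4263 cd/m^2 (nits)."""
    return fl * SQFT_PER_SQM / math.pi
```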
I personally prefer total lumens actually delivered to the dome screen
as the IPS standard for projector "brightness" (luminous flux). Display
engineers commonly use lumens, which are not dependent on the projected
angle. Not all projection systems are hemispheric, of course.
Using lumens as a measure, a truncated hemispheric projection will
deliver more lumens because more of the video frame is used (better
light utilization), but the field-of-view of the lens
is immaterial (except for optical losses in the different lenses, which
should already be accounted for in the spec). The luminance in
foot-Lamberts can be calculated by dividing the luminous flux in lumens
by the screen area in square feet, then multiplying by the screen
reflectivity. (For nits, divide lumens by the area in square meters and
by pi, then multiply by reflectivity - the foot-Lambert has the
lambertian 1/pi factor built in, the nit does not.) This assumes a
lambertian screen that scatters light equally in all directions, which
most domes are designed to do.
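To make that concrete, here is the arithmetic in Python. The lumen
figure, dome size, and reflectivity below are illustrative numbers, not
anyone's published spec:

```python
import math

def luminance_fL(lumens, area_ft2, reflectivity):
    """Luminance in foot-Lamberts of a lambertian screen."""
    return lumens / area_ft2 * reflectivity

def luminance_nits(lumens, area_m2, reflectivity):
    """Luminance in nits (cd/m^2); here the lambertian 1/pi is explicit."""
    return lumens / area_m2 * reflectivity / math.pi

# Illustrative numbers only: 4130 lumens spread over an 18 m diameter
# hemispheric dome (surface area 2*pi*r^2) with 0.45 reflectivity.
area_m2 = 2 * math.pi * 9.0 ** 2
print(luminance_nits(4130, area_m2, 0.45))   # ~1.2 nits
```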
Regarding peak brightness, with all due respect to D'nardo, CRT
projectors often use this measure and it is useful for them. When doing
a shootout between a 7,000 lumen (ok, 4130 IPS lumen) fisheye LCoS
projector (with one of D'nardo's lenses, in fact) and a 6-projector CRT
system, I was surprised to see the CRT grids and stars appearing
brighter than the LCoS. That's because, although the ANSI brightness of
the CRT projectors was 250 lumens each, they had over 1,200 peak lumens,
and
stars/grids permit peak operation. When projecting the entire earth,
however, the LCoS projector won out, since the CRT went into electron
beam current limiting mode. In another decade or so this will be moot
since all CRT projectors will have died by then and you will need to
visit a museum to see one.
>Contrast ratios can be standardized as well by measuring "black level"
>(at the same projector lamp setting) using this same standard lux
>measurement method and determining the ratio of the black and white
>light lux values. As Ed suggests, measuring incident lux of projected
>black and white checkerboard screens could also be used for an even more
>real-world measurement. While none of these methods will give a true
>representation of the image contrast once a dome gets hold of an image,
>it will allow us to compare oranges to oranges. It has been my
>experience that the contrast differences in data projectors as published
>by manufacturers are not nearly as critical as the number of lens
>elements, the effectiveness of internal lens coatings and the
>reflectivity of the dome.
Measuring contrast is tricky - fortunately it is a simple ratio, so
absolute measurements are not required (one can work in luminance,
illumination, or luminous flux). A projected checkerboard (ANSI
Contrast - ratio of white checkerboard square to black one) actually
measures the theater scattering performance, not the display system
performance. That's because the contrast limit with a checkerboard in a
dome is cross-dome scattered light, which in the best of cases yields a
10:1 contrast ratio for a 0.3-ish reflectivity dome screen. A 0.45
reflectivity screen yields a 7:1 contrast ratio as I recall. This
figure also depends on the brightness of fixtures, furnishings, and
finishes in the theater. White chairs, for instance, scatter more
light and wash out the screen even worse. "Checkerboard contrast" is a
useful measure - but for the theater, not the display system.
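A toy first-bounce model (my own back-of-the-envelope sketch, not from
the posts above) lands in the same ballpark as those 10:1 and 7:1
figures. Assume half the dome is white, a point on the dome sees the
rest of the dome with a view factor of roughly 0.5, and scattered light
inter-reflects geometrically:

```python
def checkerboard_contrast(reflectivity, coupling=0.25):
    """Rough cross-dome scatter limit on checkerboard contrast.

    coupling = (fraction of dome that is white, ~0.5) times
    (view factor from a dome point to the rest of the dome, ~0.5);
    both are assumptions of this toy model. A black square sees only
    inter-reflected light: black = c/(1-c) relative to white, c = coupling*rho.
    """
    c = coupling * reflectivity
    black = c / (1 - c)       # black-square luminance relative to white
    return 1 / black

print(checkerboard_contrast(0.30))   # ~12:1
print(checkerboard_contrast(0.45))   # ~8:1
```

The quoted 10:1 and 7:1 figures are a bit lower, which is plausible once
real fixtures and furnishings add their own scatter.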
One IPS standard for displays should be sequential contrast, the ratio
of an all-white frame to an all-black frame. Problem is, the black
level is often out of range of common light meters... I therefore
measure sequential contrast using a white reflective (paper) target
placed a meter or so from the projector. Make sure that the white level
is not saturating the meter - this recently caused me some grief. I've
also pointed a spot photometer directly into the lens of a CRT and
measured the ratio of black/white on the phosphor. It's fine as long as
you don't change anything between measurements except for the video
level (a calibrated ND filter may be required to extend the range of the
meter). Sequential contrast, more than any other measure, tells you how
well the projection system will look when the image fades to black or
attempts to project a minimal image against black such as stars.
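The ND-filter bookkeeping is easy to get backwards, so here is the
arithmetic spelled out (the meter readings below are invented for
illustration): if the white frame is measured through an ND filter of
optical density D, multiply that reading by 10**D before taking the
ratio.

```python
def sequential_contrast(white_reading, black_reading, nd_density_on_white=0.0):
    """Full-white/full-black contrast ratio from two meter readings.

    If the white frame saturates the meter, remeasure it through a
    calibrated ND filter and pass the filter's optical density here;
    the true white level is the reading times 10**density.
    """
    true_white = white_reading * 10 ** nd_density_on_white
    return true_white / black_reading

# Invented example: white reads 85 through an ND 2.0 filter, black reads 4.2.
print(sequential_contrast(85, 4.2, nd_density_on_white=2.0))   # ~2000:1
```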
Another IPS standard that we need is measuring the contrast limit of the
projector itself due to internal scattering. This cannot be measured
without projecting an image. In my display standards paper, I suggest
that we adopt a test pattern
consisting of a single 12 degree diameter white disc with a 3 degree
diameter black hole cut out in the center of it for this measurement.
The hope is that light scattered into the center of the disc will be
dominated by internal scattering within the projector and not cross-dome
scatter. I don't know what the limits of this measurement are, but it's
sure better than a checkerboard. The best way to measure projector
contrast is, of course, to remove it from the theater and place it into
a controlled space without a reflective dome hanging above.
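That disc-with-hole pattern is easy to generate as a dome master. A
sketch, assuming an equidistant fisheye mapping (zenith angle
proportional to radius from image center) and the pattern centered on
the zenith - the mapping and centering are my assumptions, not part of
the proposal:

```python
import numpy as np

def disc_pattern(n=1024, disc_deg=12.0, hole_deg=3.0):
    """n x n dome master: white annulus between the hole and disc radii.

    Equidistant fisheye: 90 degrees of zenith angle across the image radius.
    """
    c = (n - 1) / 2.0
    y, x = np.mgrid[0:n, 0:n]
    theta = np.hypot(x - c, y - c) / (n / 2.0) * 90.0   # zenith angle, degrees
    white = (theta >= hole_deg / 2.0) & (theta <= disc_deg / 2.0)
    return white.astype(np.uint8) * 255
```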
Regarding resolution, the simulator standard is line-pairs/arcminute,
with eye-limited resolution at one line-pair per arcminute or so (varies
from person-to-person and also varies with brightness!). I've suggested
pixels per degree as an IPS standard measure of resolution. Here's why.
To start with, modern digital projectors have discrete pixels and
typically have a high MTF (a measure of how much one pixel blurs into
adjacent pixels), so pixels per degree, while not a "pure" measure of
resolution, does the trick when comparing most systems. Should a
non-digital system come along (e.g. CRT, scanned laser) we can
force them to an "effective pixels" measurement with a 50% MTF
requirement. This means that two parallel lines define a pixel pitch or
spacing when they are so close together that the dark space between them
is 50% white. In other words, when the two lines are so close that they
bleed onto one another and fill in the space between them to 50% of
their brightness - we arbitrarily call that the effective pixel width.
Of course, this depends on electron beam focus (for CRT) or Gaussian
beam width (for scanned laser). Additional specification would be
required to fully describe these non-digital systems: the total number
of addressable pixels. For instance, 4000x4000 pixels may be
addressable (according to the A/D sample rate and analog bandwidth), but
only 2000x2000 may be resolvable (based on the 50% MTF criterion).
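For digital dome masters, pixels per degree is just the master
resolution over the 180-degree field, and relating it to the
line-pair/arcminute simulator convention needs only a factor of two
pixels per line pair and 60 arcminutes per degree. A quick worked
example (resolutions chosen for illustration):

```python
def px_per_degree(dome_master_px, field_deg=180.0):
    """Pixels per degree across a full-dome master."""
    return dome_master_px / field_deg

def lp_per_arcmin(ppd):
    # two pixels per line pair, 60 arcminutes per degree
    return ppd / 2.0 / 60.0

for n in (1024, 1400, 4096):
    ppd = px_per_degree(n)
    print(n, round(ppd, 1), round(lp_per_arcmin(ppd), 3))
```

Even a 4k x 4k master works out to about 23 pixels per degree, or
roughly 0.19 line-pairs per arcminute - well short of the one
line-pair-per-arcminute eye limit mentioned above.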
>Finally, there are many other image quality factors. When it comes to
>single full-dome lenses, line pair resolution must be considered. As
>chips get smaller with greater pixel density (resolution), line pair
>separation must go up. A single 1400 x 1400 poorly resolved image will
>not be perceived to be as sharp as a well resolved 1024 x 1024 image.
>Then there is the issue of the chromatic aberration of the lens as
>measured in the center of the lens and at the edge of field. All fisheye
>lenses suffer from a certain degree of "coma". All of these factors
>should be weighed in determining image quality and in evaluating systems.
>Give me on any day a seam-free, pixel-sharp, bright, high contrast
>1kx1k image relatively free of chromatic aberration and coma over a dim,
>low-contrast, fuzzy, and aberrated 4k x 4k image with obvious seams.
>Audiences are not dummies. That is why they don't seem to mind lower
>resolution systems when the image is bright, seamless and well
Coma and chromatic aberration are more of an issue with fisheye systems,
while edge-blend artifacts and color balance are the bane of
edge-blended systems. My paper presents a method for measuring
edge-blend uniformity and color balance. Test patterns can help with
coma and chromatic issues, especially if you are looking for a pass-fail
test. But you'll not likely see manufacturers voluntarily quantify
these factors. That is, after all, the reason for these specs. Please
also note that these are fulldome SYSTEM specifications, not individual
PROJECTOR specifications. The idea here is to define what will be
actually seen on a given dome screen so there are no customer surprises.