Color Guidelines

Message 1 of 5, Feb 2, 2006
There have been several messages posted over the last few days about color
imaging. There has been a mix of correct and incorrect information, so I
thought I would take some time to set the record straight.

First, I want to start where discussions about color imaging seldom start -
but where they should always start: noise. More specifically, signal to
noise ratio.

What is true of luminance images - it is all about the noise - is also true
of color images. Everything good and bad you experience about color imaging
ultimately has to do with the signal to noise ratio in the data.

Yep, data. It is certainly convenient that our eyes can take photons as
input and generate an image as output. But the bottom line is that what we
are talking about here is _data_. The rules that apply to data (signal and
noise) apply here. That is good news, and that is bad news.

The bad news is that data is not an intuitive concept. Oh, we like to think
we understand, but the fact is, the combination of data and noise behaves in
some pretty non-intuitive ways. You can't use your experience of chasing a
moving target to understand noise. Noise behaves in ways that have nothing
to do with survival or mating. <G> As a result, it can surprise us. The tool
that we use to understand (and more importantly, validate) these surprises
is... mathematics.

If we approach signal and noise with math, we learn some things that are
surprising (a good euphemism for non-intuitive and "just plain weird").

So let's start at the very beginning, and build to some supportable
conclusions.

Most imagers eventually get a decent feel for signal to noise. The key
concept is one I posted an extended discussion of recently: optimal
sub-exposure time. I turned that discussion into a tutorial on the book web
page. Normally, you need to have a paid-up subscription to the web site to
view tutorials, but I made this one publicly viewable because it's an
important concept. Knowing why, and how, to optimize sub-exposure times
leads to a deeper understanding of how data, signal, and noise co-exist. And
that can make a big difference in the results you get from imaging.

Here is a link to the sub-exposure tutorial:

http://www.newastro.com/newastro/tutorials/noise/noise.asp

I'm going to assume that you have either read the above, or that you
understand the basic concept behind optimal sub-exposure times. Briefly:

Whereas read noise results in the same level of uncertainty in the data
every time;

Whereas shot noise is always the square root of the signal (and thus grows
with exposure time, but at a slower rate than signal grows);

Be it therefore agreed that exposures of sufficient length will allow the
shot noise to swamp the read noise;

And be it further agreed that this allows a large total exposure time to be
subdivided into shorter, more practical individual exposures, without
significant noise penalty from the numerous individual readouts.

Witnesseth, that this therefore yields excellent signal to noise ratio
without having to make sacrifices to the God of Read Noise.
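The whereas-clauses above can be checked with a little arithmetic. Here is a minimal Python sketch; the read noise, sky rate, and total time are invented illustrative numbers, not values from this post. It shows that once subs are long enough for shot noise to swamp read noise, splitting a long total exposure into subs costs almost nothing:

```python
import math

# Illustrative numbers only (not from this post):
read_noise = 10.0    # e- RMS added by each readout
sky_rate = 2.0       # e-/sec of sky background per pixel
total_time = 3600.0  # one hour of total exposure

def stack_noise(sub_time):
    """Background noise when total_time is split into subs of sub_time sec."""
    n_subs = total_time / sub_time
    shot_var = sky_rate * sub_time            # shot-noise variance per sub
    per_sub = math.sqrt(shot_var + read_noise**2)
    return math.sqrt(n_subs) * per_sub        # independent noise adds in quadrature

ideal = math.sqrt(sky_rate * total_time)      # one imaginary readout-free hour
for sub in (60, 300, 600):
    print(f"{sub:4d} s subs: {stack_noise(sub) / ideal:.3f}x the ideal noise")
```

With these made-up numbers, 60 s subs pay roughly a 35% noise penalty versus a single hour-long exposure, while 600 s subs pay only about 4%.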

Once you are on this path of optimal exposure time, your luminance exposures
can be as deep as you are willing to go in total exposure time. (Granted,
the God of Light Pollution still demands his tithe, but you hopefully get my
point.)

If this path is good for luminance, might it also be good for color? Yes!
YES!

Noise controls how rich, deep, and accurate your color can be.

When you take weighted exposure times for color, what you are really doing
is attempting to get the same signal to noise ratio in all three color
channels. HOWEVER: if your color exposure times are too short, then read
noise will be a significant factor, and your color ratios will not work
effectively!!!

So the goal for BALANCED color is equal signal to noise in all three
channels. The primary obstacles to simple achievement of this goal are:

* Variations in the quantum efficiency of your CCD sensor with wavelength
(your chip might be more sensitive in red, for example)

* Variation in the amount of light passed by a given filter (your blue
filter might pass less light, for example).

There are multiple ways to calculate the right answer for any given
filter/chip combination, but the goal is the same: to adjust exposure times
through each filter to account for these variations.
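As a sketch of that adjustment: if you know each channel's relative detection rate (the combined effect of QE and filter transmission), matching signal just means scaling exposure time inversely. The rates below are invented for illustration, not measured values:

```python
# Invented relative detection rates (QE x filter transmission), red = 1.0:
channel_rate = {"R": 1.00, "G": 0.80, "B": 0.55}

base_exposure = 300.0  # seconds through the red filter

# Equal collected signal (and hence roughly equal shot-noise-limited S/N)
# requires exposure time inversely proportional to each channel's rate:
weighted = {c: base_exposure / rate for c, rate in channel_rate.items()}
for c, t in weighted.items():
    print(f"{c}: {t:.0f} s")
```

With these assumed rates, green needs 375 s and blue about 545 s to match 300 s of red.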

Note: there are other ways to achieve balanced color, such as taking more
exposures through a certain filter. However, this results in a different
balance between signal/shot noise and read noise, which complicates the
calculations.

There is a simple way to make all of this simpler: determine the optimal
sub-exposure times for each of your color channels, and use exposure
durations for individual exposures that are at least long enough to swamp
the read noise with shot noise. Then, even if you do not get perfectly
balanced color, you have enough signal to work with to achieve a
satisfactory balance.

Note: if read noise is significant, then the differences in signal to noise
will give the colors different strengths. A noisier color is a weaker color.
The noisiest color in your RGB set controls how much color you can have
overall: boosting saturation will reveal the noise at some point, and the
weakest color will reveal it first. That leads to a color balance problem
that can't be fixed unless you decrease color saturation to hide the noise.
This may or may not sink in immediately, but it's an extremely important
concept to understand clearly.

The last important thing to know is the effect of signal to noise on color
in the image. The previous paragraph says it all, though pretty tersely,
and the inference might not be clear.

So let's back up and look at luminance imaging. What happens when you work
with the optimal sub-exposure time in luminance, and then take more and more
individual images? You increase S/N, and dim objects become clearer, subtle
contrast becomes discernible, etc. All good things in an image come from
reducing uncertainty in the data (that is, having better S/N).
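The square-root-of-N behavior behind this is easy to verify. A minimal sketch, assuming purely shot-noise-limited subs (an idealization, with a made-up signal level):

```python
import math

signal_per_sub = 900.0                     # e- from the object per sub (made up)
noise_per_sub = math.sqrt(signal_per_sub)  # shot noise of a single sub

def snr(n_subs):
    # Signal adds linearly; independent noise adds in quadrature,
    # so S/N grows as the square root of the number of subs.
    return (n_subs * signal_per_sub) / (math.sqrt(n_subs) * noise_per_sub)

print(snr(1), snr(4), snr(16))  # every 4x more subs doubles S/N
```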

With color, similar good things happen. As you improve the S/N of your color
data by taking larger numbers of optimal sub-exposures, you remove
uncertainty from the color information so that dimmer and dimmer objects
take on clean color. And of course the color of bright objects gets richer
and richer.

That's all there is to it.

http://www.newastro.com
Message 2 of 5, Feb 2, 2006
Ron,

Thank you very much for the clear guidance.

One question: let's say you've determined that to swamp read noise,
you need a background ADU of about 900. With my ST8E and good sky
conditions, that means subexposures of about 10 minutes for the
luminance (bin=1). If I take the RGB at bin=2, does that change my
target background ADU?

(I also take 10 min subexposures for the RGB and end up with
background ADUs for the RGB of ~1300, ~1700, ~1000 respectively.)

Jim McMillan

Message 3 of 5, Feb 2, 2006
Jim,
Good question and one I asked of Stan Moore, but never got an
answer... hopefully Ron will chime in.

However, I suspect that the background ADU count levels (per Stan's
formulas) may, indeed, be different for binned images versus the
unbinned requirements, simply because the read noise is decreased
when one is binning. What this means "empirically" is something I
have never gotten an answer to. Again, hopefully Ron will chime in
and help with this.

Regards,

Randy Nulman
http://www.nulman.darkhorizons.org

Message 4 of 5, Feb 2, 2006
Jim,

That's the main reason for binning: the read noise per (bigger) pixel
is about the same as the read noise for an unbinned pixel, but the
signal is quadrupled. The result is that signal swamps read noise 4
times faster when binning 2x2.
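That 4x figure can be sketched directly. Assuming, as above, that the per-readout noise is roughly unchanged by binning (the camera numbers here are invented, not from this thread):

```python
# Invented camera numbers (not from this thread):
read_noise = 15.0  # e- RMS per readout, roughly the same binned or unbinned
sky_rate = 1.0     # e-/sec collected per unbinned pixel

def time_to_swamp(bin_factor, swamp_ratio=10.0):
    """Seconds until shot-noise variance exceeds read-noise variance
    by swamp_ratio, for a bin_factor x bin_factor binned pixel."""
    # An NxN binned pixel collects N^2 times the photons of one pixel,
    # but is read out only once, so read noise is paid only once.
    rate = sky_rate * bin_factor**2
    return swamp_ratio * read_noise**2 / rate

t1, t2 = time_to_swamp(1), time_to_swamp(2)
print(t1, t2, t1 / t2)  # 2x2 binning swamps read noise 4x faster
```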

The ADU level to shoot for in the background depends on the camera's gain
setting: all SBIG cameras, AFAIK, have the same gain setting in binned mode
as they do unbinned, so the ADU level to shoot for is the same. For cameras
that vary the gain setting in binned modes, the ADU level will be different
(SX cameras, for example).

Regards,

-Paul

Message 5 of 5, Feb 2, 2006
Thanks, Paul. That makes sense to me. So, I was doing it right -
even though I didn't know it for sure!

Regards,

Jim

--- In ccd-newastro@yahoogroups.com, "Paul K" <paul@...> wrote:
>
> Jim,
>
> That's the main reason for binning: the read noise per (bigger) pixel
> is about the same as the read noise for an unbinned pixel, but the
> signal is quadrupled. The result is that signal swamps read noise 4
> times faster when binning 2x2.
>
> The ADU level to shoot for in the background depends on the camera's
> gain setting: all SBIG cameras, AFAIK, have the same gain setting in
> binned mode as they do unbinned, so the ADU level to shoot for is the
> same. For cameras that vary the gain setting for binned modes, the
> ADU level will be different (SX cameras, for example).
>
> Regards,
>
> -Paul
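Paul's "4 times faster" figure follows from simple arithmetic. A minimal sketch, using his stated assumption (read noise per binned pixel roughly equal to read noise per unbinned pixel) plus hypothetical sky and noise values and an assumed gain of 1 e-/ADU:

```python
READ_NOISE = 15.0   # ADU per pixel, assumed the same binned or unbinned
SKY_RATE = 1.5      # hypothetical sky background, ADU per pixel per second, unbinned

def time_to_swamp(sky_rate, read_noise, factor=10.0):
    """Seconds of exposure until shot-noise variance (equal to the sky
    signal in ADU, at an assumed gain of 1 e-/ADU) exceeds `factor`
    times the read-noise variance."""
    return factor * read_noise ** 2 / sky_rate

t_unbinned = time_to_swamp(SKY_RATE, READ_NOISE)
# 2x2 binning collects 4x the sky signal per (bigger) pixel,
# while the read noise per pixel stays about the same:
t_binned = time_to_swamp(4 * SKY_RATE, READ_NOISE)

print(t_unbinned, t_binned, t_unbinned / t_binned)  # ratio is 4.0
```

The `factor=10` swamping threshold is an illustrative choice, not a value from this thread; whatever threshold you pick, the binned exposure reaches it 4 times sooner under these assumptions.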
>
> --- In ccd-newastro@yahoogroups.com, "mcmillanjr4221" <valueware@>
> wrote:
> >
> > Ron,
> >
> > Thank you very much for the clear guidance.
> >
> > One question: let's say you've determined that to swamp read noise,
> > you need a background ADU of about 900. With my ST8E and good sky
> > conditions, that means subexposures of about 10 minutes for the
> > luminance (bin=1). If I take the RGB at bin=2, does that change my
> > target background ADU?
> >
> > (I also take 10 min subexposures for the RGB and end up with
> > background ADUs for the RGB of ~1300, ~1700, ~1000 respectively.)
> >
> >
> > Jim McMillan
> >
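Jim's own numbers can be turned into a quick sanity check. Assuming the sky background accumulates roughly linearly with exposure time (and ignoring dark current), the shortest sub that still reaches the 900 ADU target in each filter is just a ratio; the sketch below uses the target and measured values quoted in the thread:

```python
TARGET_ADU = 900.0    # background ADU needed to swamp read noise (from the thread)
SUB_MINUTES = 10.0    # length of the subs that produced the measurements below

# Measured background ADU in Jim's 10-minute RGB subs:
measured = {"R": 1300.0, "G": 1700.0, "B": 1000.0}

for band, adu in measured.items():
    rate = adu / SUB_MINUTES      # ADU per minute, assumed roughly constant
    t_min = TARGET_ADU / rate     # shortest sub that still hits the target
    print(band, round(t_min, 1), "min")
```

Every channel reaches the target in under 10 minutes, which is consistent with the "I was doing it right" conclusion above; the blue channel, at 9 minutes, has the least margin.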