Some of you postulate reduction of image size for clarity...

  • Ken Warner
    Message 1 of 13 , Aug 24, 2011
      Some of you have said that reducing the image to 70% of its original size will increase sharpness for reasons having to do with the Bayer pattern of the sensor. I would like to have a better understanding of that rationale.

      Are there any papers that discuss that technique? Is it valid? Is 70% of original appropriate or is there a better ratio?
    • Sacha Griffin
      Message 2 of 13 , Aug 24, 2011
        Nobody says this. Just that it may not decrease it very much.
        In some cases, for poor lenses and many others, you may even be able to go
        much further, indicating file resolution bloat and little detail. One reason
        gigapixel records are pointless.
        One test you can do is to downsize and then upsize right back. If you can't
        notice a difference, you're better off at that smaller size. You'd be
        surprised.
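
        For example, a quick sketch of that round trip in Python with Pillow
        (the file name and the exact factor here are just placeholders):

            from PIL import Image  # pip install Pillow

            orig = Image.open("pano.jpg")   # any photo you want to test
            w, h = orig.size

            # Downsize to 70% linear size, then upsize right back.
            factor = 0.70
            small = orig.resize((int(w * factor), int(h * factor)), Image.LANCZOS)
            back = small.resize((w, h), Image.LANCZOS)

            # Save both losslessly and flip between them at 100% zoom.
            orig.save("original.png")
            back.save("round_trip.png")

        If you can't tell original.png from round_trip.png, the extra pixels
        weren't carrying real detail.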

        Sacha Griffin
        Southern Digital Solutions LLC - Atlanta, Georgia
        http://www.seeit360.net
        http://twitter.com/SeeIt360
        http://www.facebook.com/panoramas/
        IM: sachagriffin007@...
        Office: 404-551-4275
        GV: 404-665-9990


        On Aug 24, 2011, at 11:33 PM, Ken Warner <kwarner000@...> wrote:



        Some of you have said that reducing the image to 70% of its original size
        will increase sharpness for reasons having to do with the Bayer pattern of
        the sensor. I would like to have a better understanding of that rationale.

        Are there any papers that discuss that technique? Is it valid? Is 70% of
        original appropriate or is there a better ratio?



      • Erik Krause
        Message 3 of 13 , Aug 25, 2011
          On 25.08.2011 06:01, Sacha Griffin wrote:
          > Nobody says this. Just that it may not decrease it very much.

          Ken Turkowski does. His page is apparently gone:
          http://www.worldserver.com/ and I hope he is well.

          However, the rule is mentioned on VRMAG: http://tinyurl.com/c9q23u and
          referenced on Hans' page: http://www.panoramas.dk/cubefaces/

          --
          Erik Krause
          http://www.erik-krause.de
        • Ken Warner
          Message 4 of 13 , Aug 25, 2011
            “Ken explained about optimizing the resolution of panoramas with his 70% rule. There is negligible loss of image quality when shrunk by 70%. This reduced the file size by half! This is because the Bayer pattern found on most digital camera sensors interpolates up to 30% of the image resolution.”

            That's what I don't understand. Why is this true when there is a one-to-one correspondence between the active area of the sensor and the raw image pixels? A pixel in a sensor is usually made up of 4 sensor elements. So when it's said that a camera has a 10 meg sensor, that means 40 meg sensor elements for 10 meg of pixels, right?

            So it's already reducing the image.

            And on Hans' demo page, if you look closely, there is better image quality in the larger cube faces, so I don't think the 70% rule holds -- I would like it to hold -- but I don't think it does.


            Erik Krause wrote:
            > On 25.08.2011 06:01, Sacha Griffin wrote:
            >> Nobody says this. Just that it may not decrease it very much.
            >
            > Ken Turkowski does. His page is apparently gone:
            > http://www.worldserver.com/ and I hope he is well.
            >
            > However, the rule is mentioned on VRMAG: http://tinyurl.com/c9q23u and
            > referenced on Hans' page: http://www.panoramas.dk/cubefaces/
            >
          • Erik Krause
            Message 5 of 13 , Aug 25, 2011
              On 25.08.2011 20:22, Ken Warner wrote:
              > That's what I don't understand. Why is this true when there is a
              > one-to-one correspondence between the active area of the sensor and
              > the raw image pixels? A pixel in a sensor is usually made up of 4
              > sensor elements. So when it's said that a camera has a 10 meg sensor,
              > that means 40 meg sensor elements for 10 meg of pixels, right?

              No. It has 10 meg of sensor pixels. 2.5 meg of blue and red, 5 meg of
              green. So you have 10 meg for brightness information but far less for
              color. It is the duty of the interpolation algorithm to make the best of
              this, and this is what makes the difference between raw converters (among
              other things like highlight restoration, moiré suppression, etc.).

              --
              Erik Krause
              http://www.erik-krause.de
            • Ken Warner
              Message 6 of 13 , Aug 25, 2011
                Oh, so when the camera makers say 10 meg sensor, they really mean 10 meg of sensor elements, not pixels. So a 10 meg sensor has 2.5 meg pixels?

                I thought the camera makers would quote the number of pixels.

                So a 24 meg sensor really has only 6 meg pixels?

                Erik Krause wrote:
                > On 25.08.2011 20:22, Ken Warner wrote:
                >> That's what I don't understand. Why is this true when there is a
                >> one-to-one correspondence between the active area of the sensor and
                >> the raw image pixels? A pixel in a sensor is usually made up of 4
                >> sensor elements. So when it's said that a camera has a 10 meg sensor,
                >> that means 40 meg sensor elements for 10 meg of pixels, right?
                >
                > No. It has 10 meg of sensor pixels. 2.5 meg of blue and red, 5 meg of
                > green. So you have 10 meg for brightness information but far less for
                > color. It is the duty of the interpolation algorithm to make the best of
                > this, and this is what makes the difference between raw converters (among
                > other things like highlight restoration, moiré suppression, etc.).
                >
              • Erik Krause
                Message 7 of 13 , Aug 25, 2011
                  On 25.08.2011 20:56, Ken Warner wrote:
                  > Oh, so when the camera makers say 10 meg sensor, they really mean 10
                  > meg of sensor elements not pixels. So a 10 meg sensor has 2.5 meg
                  > pixels?

                  No, it has 10 meg black/white sensor cells. It might or might not be
                  correct to call them pixels. However, the data read from the sensor does
                  not contain color information. It's mere b/w data, but since a filter is
                  placed in front of the sensor cells, the interpolation algorithm will
                  know which cell represents red, blue or green, use the brightness
                  information from all of them and interpolate the color information
                  between them, such that you get an image with 10 meg of RGB pixels.

                  With dcraw you can output the non-interpolated data if you use the -d
                  switch.
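
                  To put numbers on that, you can load the b/w mosaic that dcraw -d
                  writes and slice out the color planes yourself. A rough numpy
                  sketch, assuming an RGGB layout (the real layout varies by camera)
                  and a made-up file name:

                      import numpy as np
                      from PIL import Image

                      # e.g. "dcraw -d photo.cr2" writes the undemosaiced
                      # b/w mosaic as photo.pgm
                      mosaic = np.asarray(Image.open("photo.pgm"))

                      # RGGB: even rows are R G R G ..., odd rows are G B G B ...
                      red    = mosaic[0::2, 0::2]   # 1/4 of the cells
                      green1 = mosaic[0::2, 1::2]   # green cells on the red rows
                      green2 = mosaic[1::2, 0::2]   # green cells on the blue rows
                      blue   = mosaic[1::2, 1::2]   # 1/4 of the cells

                      total = mosaic.size
                      print(red.size / total)                     # -> 0.25
                      print((green1.size + green2.size) / total)  # -> 0.5
                      print(blue.size / total)                    # -> 0.25

                  So a "10 meg" sensor measures red at only 2.5 meg sites, blue at
                  2.5 meg, and green at 5 meg; everything else is interpolated.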

                  --
                  Erik Krause
                  http://www.erik-krause.de
                • Hans
                  Message 8 of 13 , Aug 25, 2011
                    --- In PanoToolsNG@yahoogroups.com, Ken Warner <kwarner000@...> wrote:
                    >
                    >
                    > . "Ken explained about optimizing the resolution of panoramas with his 70% rule. There is negligible loss of image quality when shrunk by 70%. This reduced the file size by half! This is because the Bayer pattern found on most digital camera sensors interpolates up to 30% of the image resolution."
                    >
                    > That's what I don't understand. Why is this true when there is a one-to-one correspondence between the active area of the sensor and the raw image pixels? A pixel in a sensor is usually made up of 4 sensor elements. So when it's said that a camera has a 10 meg sensor, that means 40 meg sensor elements for 10 meg of pixels, right?
                    >
                    > So it's already reducing the image.
                    >
                    > And on Hans' demo page, if you look closely, there is better image quality in the larger cube faces, so I don't think the 70% rule holds -- I would like it to hold -- but I don't think it does.

                    I agree that the 70% might not be true today with modern sensors.
                    My tests show more like 80%.

                    Here is a page I did a couple of years ago that you can use to see that.
                    http://www.panoramas.dk/panorama/cubeface-sizes/

                    The pano loads with the smallest cubefaces, 1280x1280.
                    Load the largest 3500x3500 and zoom until your pano matches the small image, which shows 100%. This is the maximum from the original 11000x5500 panorama, and it matches the original files from the camera.

                    Now just load the 2750x2750 cubefaces without doing any zoom in/out.
                    Can you see any difference?
                    You should be able to toggle between the 2 sizes easily when they are in cache.

                    2750x2750 is actually around 80% of the original size.
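                    (That roughly checks out: 2750 / 3500 ≈ 0.79, so about 80% of the linear size, which keeps only about 0.79^2 ≈ 62% of the pixels.)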

                    Hans



                    >
                    >
                    > Erik Krause wrote:
                    > > On 25.08.2011 06:01, Sacha Griffin wrote:
                    > >> Nobody says this. Just that it may not decrease it very much.
                    > >
                    > > Ken Turkowski does. His page is apparently gone:
                    > > http://www.worldserver.com/ and I hope he is well.
                    > >
                    > > However, the rule is mentioned on VRMAG: http://tinyurl.com/c9q23u and
                    > > referenced on Hans' page: http://www.panoramas.dk/cubefaces/
                    > >
                    >
                  • Sacha Griffin
                    Message 9 of 13 , Aug 25, 2011
                      There's a big difference between saying you get better quality with a 70%
                      reduced size and not losing quality.

                      Resize a 10 megabyte file to 1x1 pixels. You may say it's sharper, but that's
                      quite missing the point.



                      I'm not exactly sure what Hans' page is claiming, if anything. The larger
                      cubefaces are CLEARLY sharper.

                      In any case, your mileage may vary, so I recommend always doing real world
                      tests.

                      Case in point: there were a few images from a 400mm mirror lens and a normal
                      100mm lens.

                      Resizing the 400mm image to match the 100mm size and back again showed zero
                      loss of quality, which gives you the basic conclusion that using that lens
                      was pointless.



                      From: PanoToolsNG@yahoogroups.com [mailto:PanoToolsNG@yahoogroups.com] On
                      Behalf Of Erik Krause
                      Sent: Thursday, August 25, 2011 2:04 PM
                      To: PanoToolsNG@yahoogroups.com
                      Subject: [PanoToolsNG] Re: Some of you postulate reduction of image size for
                      clarity...





                      On 25.08.2011 06:01, Sacha Griffin wrote:
                      > Nobody says this. Just that it may not decrease it very much.

                      Ken Turkowski does. His page is apparently gone:
                      http://www.worldserver.com/ and I hope he is well.

                      However, the rule is mentioned on VRMAG: http://tinyurl.com/c9q23u and
                      referenced on Hans' page: http://www.panoramas.dk/cubefaces/

                      --
                      Erik Krause
                      http://www.erik-krause.de





                    • Hans
                      Message 10 of 13 , Aug 25, 2011
                        --- In PanoToolsNG@yahoogroups.com, Erik Krause <erik.krause@...> wrote:
                        >
                        > On 25.08.2011 06:01, Sacha Griffin wrote:
                        > > Nobody says this. Just that it may not decrease it very much.
                        >
                        > Ken Turkowski does. His page is apparently gone:
                        > http://www.worldserver.com/ and I hope he is well.
                        >
                        > However, the rule is mentioned on VRMAG: http://tinyurl.com/c9q23u and
                        > referenced on Hans' page: http://www.panoramas.dk/cubefaces/
                        >

                        Found Ken's Google lecture about it.
                        http://www.dicklyon.com/phototech/PhotoTech_27_Resolution_Slides.pdf

                        Hans
                      • Ken Warner
                        Message 11 of 13 , Aug 25, 2011
                          Thanks for the slides -- they are interesting.

                          However, they are almost 5 years old, which means some of the data in them is even older. And resolution is more a function of the power of the image processing chip and the cleverness of the firmware programmers for the chip that is in your camera.

                          As the programmers and chips get better, the image quality gets better. And each camera maker will have different image quality from the same kind and size of sensor, because they will have different programming methods.

                          Turkowski says "negligible loss of quality" -- negligible is a subjective metric.

                          I don't think 70% is a hard and fast rule. Maybe a guideline. Maybe not at all useful. From what I see, bigger is better. Reduce the size of the image and you have to lose some information. That's especially true when you consider that different JPEG compressors can produce different results.

                          We try to balance size of the pano with speed of transmission over the internet. If we didn't have to worry about speed, we wouldn't have to worry about size and would just put the largest image up.

                          For practical purposes for web publishing, one can probably use less than full size images and still have a good looking presentation. But a hard and fast 70% reduction -- I'm not sure about that.

                          Hans wrote:
                          >
                          > --- In PanoToolsNG@yahoogroups.com, Erik Krause <erik.krause@...> wrote:
                          >> On 25.08.2011 06:01, Sacha Griffin wrote:
                          >>> Nobody says this. Just that it may not decrease it very much.
                          >> Ken Turkowski does. His page is apparently gone:
                          >> http://www.worldserver.com/ and I hope he is well.
                          >>
                          >> However, the rule is mentioned on VRMAG: http://tinyurl.com/c9q23u and
                          >> referenced on Hans' page: http://www.panoramas.dk/cubefaces/
                          >>
                          >
                          > Found Ken's Google lecture about it.
                          > http://www.dicklyon.com/phototech/PhotoTech_27_Resolution_Slides.pdf
                          >
                          > Hans
                          >
                          >
                          >
                        • Erik Krause
                          Message 12 of 13 , Aug 25, 2011
                            On 25.08.2011 22:18, Ken Warner wrote:
                            > For practical purposes for web publishing, one can probably use less
                            > than full size images and still have a good looking presentation.
                            > But a hard and fast 70% reduction -- I'm not sure about that.

                            As Sacha wrote: reduce and enlarge again, overlay with the original,
                            and possibly use difference layer mode in Photoshop to actually see
                            what has changed. During one of the gigapixel discussions recently
                            someone even wrote a script to do that automatically and determine
                            the smallest size which contains (almost) the same information.
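
                            A minimal sketch of that idea in Python with Pillow and numpy
                            (not that script; the file name and the error threshold are
                            arbitrary assumptions):

                                import numpy as np
                                from PIL import Image

                                def roundtrip_error(img, factor):
                                    # RMS difference between the original and a
                                    # shrink-then-enlarge copy.
                                    w, h = img.size
                                    small = img.resize((max(1, int(w * factor)),
                                                        max(1, int(h * factor))),
                                                       Image.LANCZOS)
                                    back = small.resize((w, h), Image.LANCZOS)
                                    a = np.asarray(img, dtype=np.float64)
                                    b = np.asarray(back, dtype=np.float64)
                                    return np.sqrt(np.mean((a - b) ** 2))

                                img = Image.open("pano.jpg").convert("RGB")
                                # Walk down from 95% and stop at the first size that
                                # differs noticeably from the original.
                                for factor in np.arange(0.95, 0.45, -0.05):
                                    err = roundtrip_error(img, factor)
                                    print(f"{factor:.2f}: RMS error {err:.2f}")
                                    if err > 3.0:  # arbitrary threshold on 0..255 values
                                        break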

                            However, the full information will never be there, since some is
                            lost. There is no way to get it back, no matter how good the
                            interpolation is. Imagine a fine diagonal line which hits the green
                            pixels only. How should software determine whether it is green or
                            white? That effect is visible as moiré, by the way.

                            But there certainly are differences depending on the image content. I'd
                            guess that a mostly black and white image will suffer more from
                            reduction, since all sensor pixels potentially contribute, while an image
                            that is evenly bright and has only color contrast could possibly be
                            reduced to 50% without loss...
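
                            You can even simulate that case with a few lines of numpy,
                            assuming an RGGB layout: a one-pixel diagonal that lands only on
                            green cells leaves the red and blue planes completely dark along
                            the line, so no converter could know whether it was green or white:

                                import numpy as np

                                n = 8
                                scene = np.zeros((n, n))      # black scene
                                for i in range(n - 1):
                                    scene[i, i + 1] = 1.0     # fine white diagonal line

                                # RGGB Bayer sampling: each cell records one color only.
                                red = np.zeros((n, n))
                                red[0::2, 0::2] = scene[0::2, 0::2]
                                blue = np.zeros((n, n))
                                blue[1::2, 1::2] = scene[1::2, 1::2]
                                green = np.zeros((n, n))
                                green[0::2, 1::2] = scene[0::2, 1::2]
                                green[1::2, 0::2] = scene[1::2, 0::2]

                                # The line sits where row + col is odd, i.e. exactly
                                # on the green cells:
                                print(red.sum(), green.sum(), blue.sum())  # 0.0 7.0 0.0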

                            --
                            Erik Krause
                            http://www.erik-krause.de
                          • Ken Warner
                            Message 13 of 13 , Aug 25, 2011
                              Yup. No easy answer. Image content is a huge determinant.

                              Erik Krause wrote:
                              > On 25.08.2011 22:18, Ken Warner wrote:
                              >> For practical purposes for web publishing, one can probably use less
                              >> than full size images and still have a good looking presentation.
                              >> But a hard and fast 70% reduction -- I'm not sure about that.
                              >
                              > As Sacha wrote: reduce and enlarge again, overlay with the original,
                              > and possibly use difference layer mode in Photoshop to actually see
                              > what has changed. During one of the gigapixel discussions recently
                              > someone even wrote a script to do that automatically and determine
                              > the smallest size which contains (almost) the same information.
                              >
                              > However, the full information will never be there, since some is
                              > lost. There is no way to get it back, no matter how good the
                              > interpolation is. Imagine a fine diagonal line which hits the green
                              > pixels only. How should software determine whether it is green or
                              > white? That effect is visible as moiré, by the way.
                              >
                              > But there certainly are differences depending on the image content. I'd
                              > guess that a mostly black and white image will suffer more from
                              > reduction, since all sensor pixels potentially contribute, while an image
                              > that is evenly bright and has only color contrast could possibly be
                              > reduced to 50% without loss...
                              >