
different pixel size from RAW files

  • Jeffrey Martin | 360Cities.net
    Message 1 of 10 , Feb 1 10:48 AM
      I was using dcraw to develop some files from my Canon 300D.

      Instead of the normal 3072 x 2048 images made by every other raw developing
      program, I got 3088 x 2056 pixels.

      Why is this happening? When I googled "3088 2056 3072 2048 300d" I got lots
      of Chinese pages pointing to various things, and also this one:
      http://www.the-digital-picture.com/Canon-Lenses/Field-of-View-Crop-Factor.aspx
      which shows the Canon 300D (Digital Rebel) to be the same 3088 x 2056.

      So why are all raw developers making only 3072 x 2048 from this sensor?
      Is it the same with all cameras? Are they cheating us out of our hard-earned
      pixels?? ;-)))

      Thanks for any insight anyone can provide!


    • Jeffrey Martin | 360Cities.net
      Message 2 of 10 , Feb 1 10:52 AM
        That link I just provided also shows info about DLA (diffraction limited
        aperture), which I haven't seen anywhere else. The Canon 5D apparently has a
        DLA of f/13.2 - very interesting! (I thought it was more like f/8 or f/11.)

        here is the link again
        http://www.the-digital-picture.com/Canon-Lenses/Field-of-View-Crop-Factor.aspx



        On Mon, Feb 1, 2010 at 7:48 PM, Jeffrey Martin | 360Cities.net <
        360cities@...> wrote:

        > i was using dcraw to develop some files from my canon 300d.
        >
        > instead of the normal 3072 x 2048 images made by every other raw developing
        > program, I got 3088 x 2056 pixels.
        >
        > Why is this happening? when I googled "3088 2056 3072 2048 300d" I got lots
        > of chinese pages pointing to various
        > things, and also this one -
        > http://www.the-digital-picture.com/Canon-Lenses/Field-of-View-Crop-Factor.aspx
        > which shows canon 300d (digital rebel) to be the same 3088 x 2056.
        >
        > So why are all raw developers making only 3072 x 2048 from this sensor?
        > Is it the same with all cameras? Are they cheating us out of our
        > hard-earned pixels?? ;-)))
        >
        > thanks for any insight anyone can provide!
        >
        >
        >
        >
        >


      • Erik Krause
        Message 3 of 10 , Feb 1 12:06 PM
          On 01.02.2010 19:48, Jeffrey Martin | 360Cities.net wrote:

          > instead of the normal 3072 x 2048 images made by every other raw developing
          > program, I got 3088 x 2056 pixels.

          The sensor delivers some more pixels than specified. dcraw always uses
          them; other raw converters stick to the sizes specified in the metadata.

          The actual sensor is even larger. The full size is recorded in the Maker
          Notes fields "SensorWidth" and "SensorHeight". The size all other raw
          converters use is recorded in "CanonImageWidth" and "CanonImageHeight".
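
          A quick way to inspect those fields is ExifTool. Here is a minimal
          sketch in Python, assuming ExifTool is installed and that the tag
          names are exactly the ones mentioned above; the file name is
          hypothetical:

          # Sketch: read the Canon sensor/image dimensions via ExifTool.
          # Assumes exiftool is on the PATH; "IMG_0001.CR2" is a made-up name.
          import json
          import subprocess

          def canon_sizes(path):
              # -j asks ExifTool for JSON; the tag names are those quoted above.
              out = subprocess.run(
                  ["exiftool", "-j", "-SensorWidth", "-SensorHeight",
                   "-CanonImageWidth", "-CanonImageHeight", path],
                  capture_output=True, text=True, check=True)
              return json.loads(out.stdout)[0]

          print(canon_sizes("IMG_0001.CR2"))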

          --
          Erik Krause
          http://www.erik-krause.de
        • Erik Krause
          Message 4 of 10 , Feb 1 12:29 PM
            On 01.02.2010 19:52, Jeffrey Martin | 360Cities.net wrote:
            > that link I just provided also shows info about DLA (diffraction limited
            > aperture) which I haven't seen anywhere else. the canon 5d apparently has a
            > DLA of 13.2 - very interesting! (I thought it was more like f/8 or f/11)
            >
            > here is the link again
            > http://www.the-digital-picture.com/Canon-Lenses/Field-of-View-Crop-Factor.aspx

            The concept of DLA is a bit misleading, or at least not relevant in
            practice. It doesn't say, for example, how blurry your image will get. Since
            you usually stop down in order to get more depth of field, the concept of
            the optimum aperture is far better. It says how far you have to stop
            down to get a desired depth of field range as sharp as possible. The
            principle is described at http://www.kenrockwell.com/tech/focus.htm

            If you want to get an idea of the optimum aperture you can use my depth
            of field calculator which was translated by the Open Photographic
            Society: http://tinyurl.com/DOF-calculator

            If you want pixel-level sharpness you must choose "Digi" format and then
            play with the "Permissable actual circle of confusion" diameter until
            the "Required Megapixels" field shows your actual megapixels.

            --
            Erik Krause
            http://www.erik-krause.de
          • Daniel Reetz
            Message 5 of 10 , Feb 1 12:57 PM
              On Mon, Feb 1, 2010 at 12:48 PM, Jeffrey Martin | 360Cities.net
              <360cities@...> wrote:

              > i was using dcraw to develop some files from my canon 300d.
              >
              > instead of the normal 3072 x 2048 images made by every other raw developing
              > program, I got 3088 x 2056 pixels.
              >
              > Why is this happening?

              As you know, raw images are uninterpolated grayscale data from the
              sensor. Each of the photosites (pixels) on your sensor records the
              luminous intensity of only one color value ("R" "G" or "B") because it
              has a little colored filter over it. In other words, each pixel sees
              only how red that spot is, green that spot is, or blue that spot is.
              So what you want to do is "demosaic" the image, which means to figure
              out what all three colors should have been at that pixel/photosite.

              A raw demosaicing program like DCRAW uses a few different methods to
              determine the RGB value at any photosite/pixel. In almost all cases,
              you have to look at the neighboring pixel(s) and see what color they
              were. Then you make an educated guess about the missing two values at
              each pixel, according to your algorithm (there are many, many
              algorithms).
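
              The simplest such guess is plain bilinear averaging of the known
              neighbours. A minimal sketch, assuming an RGGB pattern and a NumPy
              array of raw values:

              # Sketch: bilinear-style demosaicing of an RGGB Bayer mosaic.
              import numpy as np
              from scipy.ndimage import convolve

              def demosaic_bilinear(raw):
                  # Masks for an assumed RGGB layout: R at (0,0), G at (0,1)/(1,0), B at (1,1).
                  h, w = raw.shape
                  y, x = np.mgrid[0:h, 0:w]
                  masks = {"R": (y % 2 == 0) & (x % 2 == 0),
                           "G": (y % 2) != (x % 2),
                           "B": (y % 2 == 1) & (x % 2 == 1)}
                  kernel = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
                  rgb = np.zeros((h, w, 3))
                  for i, c in enumerate("RGB"):
                      sparse = np.where(masks[c], raw, 0.0)
                      # Average the known neighbours of each missing sample; "mirror"
                      # fakes the neighbours that don't exist at the border -- the gap
                      # the camera's extra edge pixels would otherwise fill with real data.
                      num = convolve(sparse, kernel, mode="mirror")
                      den = convolve(masks[c].astype(float), kernel, mode="mirror")
                      rgb[..., i] = np.where(masks[c], raw, num / den)
                  return rgb

              rgb = demosaic_bilinear(np.random.rand(8, 8))  # toy mosaic, just to show the call
              print(rgb.shape)                               # (8, 8, 3)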

              Because each pixel, even an edge pixel, needs neighboring pixels to
              determine its final "color", the manufacturer "saves" some "extra"
              pixels at the edge of the sensor to fill in the color values at the
              edge of the image they promised you (3072x2048).

              Because DCRAW is extra awesome and doesn't care about the will of
              manufacturers, it will give you these "extra" pixels. That's what you
              are seeing.

              That's not all these "extra" pixels are used for. Sometimes they are
              used to get a true "black" (or zero-photon) value for noise removal.
              Sometimes they are used for other image processing tricks.
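
              For example, here is a sketch of that black-level trick on purely
              synthetic data (the masked-border geometry is made up for
              illustration; real cameras record it in the raw metadata):

              # Sketch: estimate the black level from masked (never-exposed) border
              # pixels and subtract it from the exposed area. Sizes/data are synthetic.
              import numpy as np

              rng = np.random.default_rng(0)
              raw = rng.normal(128.0, 2.0, size=(520, 780))            # pretend sensor read-out
              raw[20:, 20:] += rng.uniform(0, 4000, size=(500, 760))   # pretend exposed photosites

              masked = raw[:20, :]                     # hypothetical optically black rows
              estimate = masked.mean()                 # per-frame black level
              active = np.clip(raw[20:, 20:] - estimate, 0, None)
              print(f"estimated black level: {estimate:.1f}")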


              >
              > So why are all raw developers making only 3072 x 2048 from this sensor?
              > Is it the same with all cameras? Are they cheating us out of our hard-earned
              > pixels?? ;-)))

              Nope.

              Daniel Reetz
            • Michel THOBY
              Message 6 of 10 , Feb 1 1:36 PM
                Hi Jeffrey,

                > Message of 01/02/10 19:56
                > From: "Jeffrey Martin | 360Cities.net" <360cities@...>
                > To: "panotoolsng"

                > Cc:
                > Subject: [PanoToolsNG] different pixel size from RAW files

                > So why are all raw developers making only 3072 x 2048 from this sensor?
                > Is it the same with all cameras? Are they cheating us out of our hard-earned
                > pixels?? ;-)))
                >
                > thanks for any insight anyone can provide!

                Yes the "official" image dimensions are always stealing you from real and useful pixels.
                You should know that additional (unexposed) pixels to those extra pixels -that you are now viewing by converting with dcraw- are also recorded in the raw file. They are used internally by the camera for calibration and for mitigating thermal noise of the JPEG image for example.
                This cropping is willingly done and partially for our good: the extra "hidden pixels" allow simpler Bayer filtering conversion algorithm along the edge. But they are also used by most raw converters for additional purposes.
                As an example: notice that correction of TCA may induce a (minuscule) problem on two edges of the rectangular image: the TCA correction shifts the Red channel and the Blue channel with respect to the fixed Green channel. If the shift is applied on the final "cropped" image, then some rows of pixels are only two-channels stacked along two sides of this image.That's what you get with the TCA correction of Photoshop CS (on a TIF or JPEG image)... Note that this is not crucial with a fisheye circular image with lots of black pixels on the four corner though:-)
                If shift for correction is applied **prior** to the cropping step (i.e. using the whole image with the extra pixels of the raw format), then the final"cropped" image is composed of a three-channels stack all over the whole rectangular area: Adobe Camera Raw for instance, does TCA correction this way.
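
                A toy sketch of the difference, assuming TCA correction is modelled
                as a simple radial rescale of the R channel about the centre (the
                sizes match the 300D discussion; the scale factor and the use of NaN
                to mark missing data are illustrative assumptions):

                # Sketch: why TCA correction benefits from the extra border pixels.
                # The R channel is shrunk slightly (toy factor) to match G; pixels
                # that fall outside the rescaled data are marked NaN.
                import numpy as np
                from scipy.ndimage import zoom

                FULL = (2056, 3088)   # sensor area dcraw delivers
                CROP = (2048, 3072)   # "official" image size
                FACTOR = 0.999        # toy TCA scale factor for the R channel

                def shrink_centred(chan, factor):
                    # Rescale about the centre; area not covered by the result is NaN.
                    h, w = chan.shape
                    z = zoom(chan, factor, order=1)
                    out = np.full((h, w), np.nan, dtype=chan.dtype)
                    top, left = (h - z.shape[0]) // 2, (w - z.shape[1]) // 2
                    out[top:top + z.shape[0], left:left + z.shape[1]] = z
                    return out

                def centre_crop(chan, size):
                    h, w = chan.shape
                    top, left = (h - size[0]) // 2, (w - size[1]) // 2
                    return chan[top:top + size[0], left:left + size[1]]

                red_full = np.random.rand(*FULL).astype(np.float32)

                # Correct after cropping: the border of the final image loses R data.
                after = shrink_centred(centre_crop(red_full, CROP), FACTOR)
                # Correct before cropping: the loss falls in the discarded margin.
                before = centre_crop(shrink_centred(red_full, FACTOR), CROP)

                print("missing border pixels, crop then correct:", int(np.isnan(after).sum()))
                print("missing border pixels, correct then crop:", int(np.isnan(before).sum()))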

                Michel

                PS: Rawnalyze by Gabor Schreiner is a wonderful tool (free, but Windows only) that reveals many little-known aspects of raw image properties:
                http://www.cryptobola.com/PhotoBola/Rawnalyze.htm
              • Fernando Chaves
                  Message 7 of 10 , Feb 1 1:46 PM
                  Hi,

                  Following the link below you will find a program, by Thomas Knoll, which
                  recovers all the hidden pixels in a raw file:
                  http://www.luminous-landscape.com/contents/DNG-Recover-Edges.shtml
                  Best regards,


                  Fernando


                  -----Original Message-----
                  From: PanoToolsNG@yahoogroups.com [mailto:PanoToolsNG@yahoogroups.com]
                  On behalf of Erik Krause
                  Sent: 1 February 2010 15:07
                  To: PanoToolsNG@yahoogroups.com
                  Subject: [PanoToolsNG] Re: different pixel size from RAW files

                  On 01.02.2010 19:48, Jeffrey Martin | 360Cities.net wrote:

                  > instead of the normal 3072 x 2048 images made by every other raw developing
                  > program, I got 3088 x 2056 pixels.

                  The sensor delivers some more pixels than specified. dcraw always uses
                  them, other raw converters stick to the sizes specified in meta data.

                  The actual sensor is even larger. The data is recorded in the Maker
                  Notes fields "SensorWidth" and "SensorHeight". The size all other raw
                  converters use is called "CanonImageWidth" and "CanonImageHeight".

                  --
                  Erik Krause
                  http://www.erik-krause.de
                • Erik Krause
                    Message 8 of 10 , Feb 1 2:16 PM
                    On 01.02.2010 22:46, Fernando Chaves wrote:
                    > Following the link below you will find a program, by Thomas Knoll, which
                    > recover all the hidden pixels in a raw file
                    > http://www.luminous-landscape.com/contents/DNG-Recover-Edges.shtml

                    It seems this program simply modifies the DefaultCropSize EXIF tag in
                    the DNG file. The result has (almost) the same size as the version
                    created by dcraw...

                    The new crop size is apparently calculated from the ActiveArea EXIF tag
                    ("51 158 3804 5792" in my case). It's interesting to see that dcraw uses
                    the exact differences from the ActiveArea tag (5634 x 3753), while
                    DNGRecoverEdges uses values rounded down to even (5634 x 3752).
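
                    The arithmetic, as a tiny sketch (ActiveArea is ordered top, left,
                    bottom, right; the values are the ones quoted above):

                    # Sketch: derive the usable image size from a DNG ActiveArea tag.
                    top, left, bottom, right = 51, 158, 3804, 5792

                    width, height = right - left, bottom - top
                    print(width, height)                    # 5634 3753 -- what dcraw keeps
                    print(width // 2 * 2, height // 2 * 2)  # 5634 3752 -- rounded down to even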

                    --
                    Erik Krause
                    http://www.erik-krause.de
                  • Michel THOBY
                      Message 9 of 10 , Feb 1 10:54 PM
                      > Message of 01/02/10 22:36
                      > From: "Michel THOBY"

                      ....correction of TCA may induce a (minuscule) problem on two edges of the rectangular image: the TCA correction shifts the Red channel and the Blue channel with respect to the fixed Green channel. If the shift is applied on the final "cropped" image, then some rows of pixels are only two-channels stacked along two sides of this image.

                      I must correct my wrong statement: TCA is due to a difference in the respective sizes of the three (R, G and B) channels: the aberration increases along the radius and is nil at the center of the image. Two of the channels (often R and B) are re-sized to fit the third (G). Consequently, the defect that may occur after correction affects ALL four edges of the image.

                      Michel
                    • prague
                        Message 10 of 10 , Feb 2 5:45 AM
                        Thanks Erik, Daniel, and Michel for the great explanations!

                        So, Daniel, in dcraw, is there a way to turn off these "bonus" pixels? I don't want them :-)


                        --- In PanoToolsNG@yahoogroups.com, Michel THOBY <thobymichel@...> wrote:
                        >
                        >
                        > > Message of 01/02/10 22:36
                        > > From: "Michel THOBY"
                        >
                        > ....correction of TCA may induce a (minuscule) problem on two edges of the rectangular image: the TCA correction shifts the Red channel and the Blue channel with respect to the fixed Green channel. If the shift is applied on the final "cropped" image, then some rows of pixels are only two-channels stacked along two sides of this image.
                        >
                        > I must correct my wrong statement: TCA is due to difference in the respective size of the three (R,V and B) channels: the aberration increases along the radius and is nil at the center of the image. Two of the channels (often R and B) are re-sized to fit with the third (G). Consequently, the defect that may occur after correction, affects ALL four edges of the image.
                        >
                        > Michel
                        >