
How to calculate the possible size of a panorama?

  • Tom! Striewisch
    Message 1 of 6, Sep 10, 2006
      Hello.

      I have read in several sources that a fisheye image covers more field
      of view per unit of distance the further one goes from the center of
      the image towards the border of its image circle.

      This leads to the conclusion that one gets fewer "real" pixels per
      degree in the resulting panorama if one uses more of the outer parts
      of the fisheye image.
      (I'm not considering that most lenses, especially fisheye lenses, do
      not show their best performance in the outer part of the image.)
      If, on the other hand, one uses only the inner parts of the images by
      taking and using more images for the final panorama, the resolution
      per degree should be higher.
      So the *same* combination of camera and lens should allow a different
      possible size of the panorama depending on how many images one takes.
      The more pictures one uses, the less of the outer parts is used, and
      the greater the size of the panorama could be.

      So how to calculate that? ;-)

      Does the different software we could use for making panoramas take
      these different possible sizes into consideration?

      Tom!
      Spherical panorama training: http://www.kugelbild.de
    • Erik Krause
      Message 2 of 6, Sep 10, 2006
        On Sunday, September 10, 2006 at 9:36, Tom! Striewisch wrote:

        > I have read in several sources that a fisheye image covers more field
        > of view per unit of distance the further one goes from the center of
        > the image towards the border of its image circle.

        Not exactly. It has a lower focal length at the borders of the image
        circle than in the middle.
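
        For the sine mapping r = 2*f*sin(theta/2), for example, the local
        scale dr/dtheta works out to f*cos(theta/2), so at the border of a
        180° lens the effective focal length is down to roughly 71% of its
        center value. A minimal sketch in Python (8mm is just an example
        value):

        import math

        f = 8.0  # example focal length in mm, sine mapping r = 2*f*sin(theta/2)

        def local_focal_length(theta_deg):
            """Local scale dr/dtheta (mm per radian) at angle theta from the axis."""
            return f * math.cos(math.radians(theta_deg) / 2.0)

        print(round(local_focal_length(0.0), 2))   # 8.0 mm in the center
        print(round(local_focal_length(90.0), 2))  # 5.66 mm at the border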

        > This leads to the conclusion that one gets fewer "real" pixels per
        > degree in the resulting panorama if one uses more of the outer parts
        > of the fisheye image.

        In this case: no. If you consider resolution you must look at the way
        a fisheye maps reality to the sensor. This is described in
        http://wiki.panotools.org/Fisheye_Projection

        Assuming that the second formula is a good approximation of the
        mapping scheme, we can see that the angle theta and the radius are
        proportional. That is: any angle of view is mapped to a proportional
        distance on the film/sensor, and conversely any pixel, whatever its
        position, represents a constant "field of view".
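
        A minimal numeric sketch of that, assuming the ideal linear mapping
        r = f*theta and an 8mm focal length as an example value:

        import math

        f = 8.0  # focal length in mm (example value)

        def theta_linear(r):
            """Angle (degrees) mapped to radius r (mm) for r = f*theta."""
            return math.degrees(r / f)

        # degrees covered per extra millimeter are the same everywhere
        for r in (1.0, 5.0, 10.0):
            step = theta_linear(r + 1.0) - theta_linear(r)
            print(f"around r = {r:4.1f} mm: {step:.2f} deg per mm")
        # prints 7.16 deg per mm three times: constant angle per pixel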

        That's different for a normal (rectilinear) lens, where a given angle
        of view covers more and more of the sensor the further you get from
        the image center (just invert the tangent formula). Hence you can
        make someone thin look very fat if you place him near the border of a
        super wide angle shot ;-)
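
        Compare with a sketch of the rectilinear mapping r = f*tan(theta),
        again with an 8mm focal length as an example: every further
        millimeter on the sensor covers fewer and fewer degrees.

        import math

        f = 8.0  # example focal length in mm

        def theta_rectilinear(r):
            """Angle (degrees) mapped to radius r (mm) for r = f*tan(theta)."""
            return math.degrees(math.atan(r / f))

        for r in (0.0, 5.0, 10.0):
            step = theta_rectilinear(r + 1.0) - theta_rectilinear(r)
            print(f"around r = {r:4.1f} mm: {step:.2f} deg per mm")
        # prints roughly 7.1, 4.9 and 2.6 deg per mm: the border gets
        # stretched, which is the "fat" effect mentioned above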

        There is a nice comparison on the old Dersch page:
        http://www.path.unimelb.edu.au/~dersch/perspective/Wide_Angle_Perspective.html
        (in case of broken link: http://tinyurl.com/r4n7u )

        > (I'm not considering that most lenses, especially fisheye lenses, do
        > not show their best performance in the outer part of the image.)
        > If, on the other hand, one uses only the inner parts of the images by
        > taking and using more images for the final panorama, the resolution
        > per degree should be higher.

        Unfortunately not. You can see this if you remap a 180° circular
        fisheye image to equirectangular: it becomes an exact square (in case
        there are no lens distortions). Now compare the image details along
        the horizon in both images - they should be spaced almost equally.

        You can also try to cut the 180° circle to a 50% sized rectangle and
        stitch it into the first remapped full size image. You end up with
        approximately half the field of view (90°) and half the size (along
        the horizon only) in the equirect.
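
        A small sketch of that crop experiment, comparing the ideal linear
        mapping r = f*theta with the sine mapping r = 2*f*sin(theta/2)
        (pure geometry, no real lens data):

        import math

        def fov_linear(fraction):
            """Total FOV (deg) inside a fraction of a 180° image circle,
            linear mapping r = f*theta."""
            return 180.0 * fraction

        def fov_sine(fraction, f=1.0):
            """Same for the sine mapping r = 2*f*sin(theta/2)."""
            r_max = 2.0 * f * math.sin(math.radians(90.0) / 2.0)   # 180° image circle
            theta = 2.0 * math.asin(fraction * r_max / (2.0 * f))  # angle from the axis
            return 2.0 * math.degrees(theta)

        print(fov_linear(0.5))          # 90.0
        print(round(fov_sine(0.5), 1))  # 82.8 - "approximately half the FOV"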

        > So the *same* combination of camera and lens should allow a different
        > possible size of the panorama depending on how many images one takes.
        > The more pictures one uses, the less of the outer parts is used, and
        > the greater the size of the panorama could be.

        This could be the case if you don't consider sensor resolution as
        limiting but the actual lens resolution (which might well be lower
        nowadays for some 16 MP sensors or bad lenses ;-), but you wrote this
        wasn't your point...

        > So how to calculate that? ;-)

        PTGui does it for you ;-) Or use the formula from the wiki page...
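
        For a rough idea, assuming the linear mapping r = f*theta and a
        made-up image circle diameter in pixels, the optimum equirect size
        works out like this:

        # pixels per degree along the horizon of a 180° circular fisheye
        circle_diameter_px = 2000                # example value only
        px_per_deg = circle_diameter_px / 180.0

        pano_width = round(360 * px_per_deg)     # full 360° equirect width
        pano_height = round(180 * px_per_deg)    # 180° equirect height
        print(pano_width, pano_height)           # 4000 x 2000 in this example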

        best regards
        --
        Erik Krause
        Resources, not only for panorama creation:
        http://www.erik-krause.de/
      • Flemming V. Larsen
        Message 3 of 6, Sep 10, 2006
          Adding to this subject, I'd also like a calculator (JavaScript website) to
          convert fov, pan and tilt (+ max/min) values between the different viewers.
          Now that we have some very good plugin detectors we want to give people more
          options - so this would be a real timesaver.
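
          Something like this little sketch maybe, for the usual rectilinear
          hfov/vfov conversion given the window aspect ratio (the 90° / 4:3
          numbers are just an example):

          import math

          def hfov_to_vfov(hfov_deg, width, height):
              """Convert horizontal to vertical FOV for a rectilinear view window."""
              half_h = math.radians(hfov_deg) / 2.0
              half_v = math.atan(math.tan(half_h) * height / width)
              return math.degrees(2.0 * half_v)

          print(round(hfov_to_vfov(90.0, 4, 3), 1))  # 73.7° for a 4:3 window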

          - Flemming
        • Hans Nyberg
          Message 4 of 6, Sep 10, 2006
            --- In PanoToolsNG@yahoogroups.com, "Erik Krause" <erik.krause@...> wrote:

            > > (I'm not considering that most lenses, especially fisheye lenses, do
            > > not show their best performance in the outer part of the image.)
            > > If, on the other hand, one uses only the inner parts of the images by
            > > taking and using more images for the final panorama, the resolution
            > > per degree should be higher.
            >
            > Unfortunately not. You can see this if you remap a 180° circular
            > fisheye image to equirectangular: it becomes an exact square (in case
            > there are no lens distortions). Now compare the image details along
            > the horizon in both images - they should be spaced almost equally.

            I cannot agree with you, Erik.
            I say yes, it will be higher.
            The best way to use a full-frame fisheye is to take 3 images at 60
            degrees instead of 1 top image.
            This will give you almost the same resolution all over.

            This is very easy to prove.
            If you take an 8 mm fisheye and calculate the FOV you will see that
            the area at the last 5mm of the circle contains 46 degrees.
            However, at the centre 5mm contains 36 degrees.

            Just check it by using Frank's calculator here.
            http://www.frankvanderpol.nl/fov_pan_calc.htm

            This means that even if your fisheye is super sharp your resolution in the areas at Nadir
            and Zenith is only 80% of what you get by doing a multirow or by using the centre of the
            fisheye.

            Hans
            www.panoramas.dk
          • Erik Krause
            Message 5 of 6, Sep 10, 2006
              On Sunday, September 10, 2006 at 11:38, Hans Nyberg wrote:

              > > Unfortunately not. You can see this if you remap a 180° circular
              > > fisheye image to equirectangular: it becomes an exact square (in case
              > > there are no lens distortions). Now compare the image details along
              > > the horizon in both images - they should be spaced almost equally.
              >
              > I can not agree with you Erik.
              > I say Yes it will be higher.

              This depends on the actual fisheye. The formula r = f*theta is an
              approximation of course, but even the sine formula is not exact for a
              real fisheye (see below).

              > This is very easy to prove. If you take an 8 mm fisheye and calculate
              > the FOV you will see that the area at the last 5mm of the circle
              > contains 46 degrees. However, at the centre 5mm contains 36 degrees.

              You have a small error in your calculation. The image circle radius of
              an ideal 8mm lens with sine mapping is 11.3mm. If you subtract 5mm you
              get 6.3mm, and if you use the formula you get a theta of 46° - but this
              is only 44° from the border (90°).
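
              Checked with a few lines of Python (sine mapping
              r = 2*f*sin(theta/2), f = 8mm, angles measured from the
              optical axis):

              import math

              f = 8.0  # mm

              def theta_deg(r):
                  """Angle from the axis (deg) mapped to radius r (mm)."""
                  return 2.0 * math.degrees(math.asin(r / (2.0 * f)))

              r_max = 2.0 * f * math.sin(math.radians(45.0))   # image circle radius
              print(round(r_max, 1))                           # 11.3 mm
              print(round(theta_deg(5.0), 1))                  # 36.4° inside the inner 5 mm
              print(round(theta_deg(r_max - 5.0), 1))          # 46.5° at 6.3 mm from center
              print(round(90.0 - theta_deg(r_max - 5.0), 1))   # 43.5°, i.e. about 44°,
                                                               # left for the outer 5 mm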

              But the image circle radius for the linear mapping would be 12.6mm
              (considering an 8mm focal length) instead of the 11.3mm we take as
              given. Hence we must adjust the focal length in the formula to fit
              those facts - it would come out to 7.2mm.

              Now calculating again we find 40° for the 5mm radius and 50° for the
              6.3mm radius - which is obviously 40° from the border. However, there
              is a difference in the middle radii of about 9% (e.g. the 5.5mm
              radius is mapped from 43.8° with linear mapping and from 40.2° with
              sine mapping). I guess this was what Helmut Dersch meant when he wrote
              "you won't see a big difference between the two"...

              > Just check it by using Frank's calculator here.
              > http://www.frankvanderpol.nl/fov_pan_calc.htm

              Frank uses the more exact sine formula of course. But if you consider
              a real life fisheye there are additional distortions - check your a,
              b and c lens correction parameters...

              Michel Thoby tried to estimate the mapping formulas for the popular
              Sigma 8mm and Nikkor 10.5mm fisheyes and found that they are
              noticeably different from the ideal formulas:
              http://tinyurl.com/b9sgn - especially the graph on
              http://tinyurl.com/eg3vr

              > This means that even if your fisheye is super sharp your resolution in
              > the areas at Nadir and Zenith is only 80% of what you get by doing a
              > multirow or by using the centre of the fisheye.

              If I didn't make a mistake, no, sorry...

              best regards
              --
              Erik Krause
              Resources, not only for panorama creation:
              http://www.erik-krause.de/
            • Erik Krause
              Message 6 of 6, Sep 10, 2006
                On Sunday, September 10, 2006 at 16:48, Erik Krause wrote:

                > But the image circle radius for the linear mapping would be 12.6mm
                > (considering an 8mm focal length) instead of the 11.3mm we take as
                > given. Hence we must adjust the focal length in the formula to fit
                > those facts - it would come out to 7.2mm.
                >
                > Now calculating again we find 40° for the 5mm radius and 50° for the
                > 6.3mm radius - which is obviously 40° from the border. However, there
                > is a difference in the middle radii of about 9% (e.g. the 5.5mm
                > radius is mapped from 43.8° with linear mapping and from 40.2° with
                > sine mapping). I guess this was what Helmut Dersch meant when he wrote
                > "you won't see a big difference between the two"...

                My apologies - this is not right. What I wrote here says that for a
                linear mapping the mapping is linear - this is circular reasoning.

                I was carried away by the fancy diagrams I made to get it sorted out.
                Yes, you get a higher resolution if you take the inner circle only.
                However, you won't get only 80% of that resolution if you take the
                full image circle (this would be the case if you use the outer parts
                of the image *only*). You get about 90%...
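
                A quick sketch (sine mapping, f = 8mm, averaging degrees per
                millimeter over the respective parts of the radius) gives
                roughly those figures:

                import math

                f = 8.0  # mm

                def theta_deg(r):
                    """Angle from the axis (deg) mapped to radius r (mm)."""
                    return 2.0 * math.degrees(math.asin(r / (2.0 * f)))

                r_max = 2.0 * f * math.sin(math.radians(45.0))  # 11.3 mm image circle radius

                inner = theta_deg(5.0) / 5.0                    # deg per mm, inner 5 mm
                outer = (90.0 - theta_deg(r_max - 5.0)) / 5.0   # deg per mm, outer 5 mm
                full = 90.0 / r_max                             # deg per mm, whole radius

                # more degrees per mm means less resolution, relative to the inner part:
                print(round(100 * inner / outer))  # ~84% if the outer part were used alone
                print(round(100 * inner / full))   # ~92% for the full image circle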

                Please accept my apologies...

                best regards
                --
                Erik Krause
                Resources, not only for panorama creation:
                http://www.erik-krause.de/