Re: What is the actual Field of View?

  • engstrom_henrik
    Message 1 of 23 , Jan 7, 2011
      This is a surveillance application where a large area will be monitored by multiple cameras. To get a full overview, ten video cameras will form a 360 degree coverage setup in a circular array. Each individual camera will be connected to a separate monitor in a remote surveillance center. The monitors will be positioned in a circular array with an operator placed in the middle, to mimic the scene viewed by the cameras.

      There was an earlier attempt that involved actual stitching, using data projectors instead of monitors (www.equipe-usa.com). The blending of images worked very well and produced an almost seamless view. But as you know, video cameras have much lower pixel resolution than today's DSLRs. And when you transform an image in a non-linear fashion (e.g. stitching), the apparent resolution gets worse, something one simply cannot afford in this type of surveillance. In this case it was easy to see with the naked eye: the images were far from their original sharpness.

      The outcome of the earlier attempt was that the best image quality (for video surveillance purposes) is achieved when there is a 1:1 correspondence between camera and display device resolution. Therefore no stitching should take place. The system operators will see some camera view overlaps on adjacent monitors, but that is a much less critical factor than the apparent resolution. The operators must be able to detect and identify all moving objects in the videos, including small objects.

      Using higher-resolution cameras and performing a stitching/scaling/sharpening process before presentation on the monitors could be a way around this. Right now, though, that would mean a lower frame rate from the camera outputs, because the data transfer technology used here cannot handle much more bandwidth. And a lower frame rate is not acceptable. This might be something for the future.

      I am sorry for any confusion, but the Cinegon 1.4/8 lens will not be used for this actual project. I just had it lying around and used it to test panotools' capabilities. I believe it to be a lens of quite high quality. Other manufacturers may not have such high quality in their manufacturing process, so the spread of focal lengths between individual samples may be larger. Since the best lens (in terms of appropriate focal length) may not be a Schneider or of similar high quality, I cannot rule out that one has to verify that each lens sample fulfills its specification before mounting or replacing it in the system.

      Please do not take me as completely ignorant or stupid; though I have learned a lot by reading all the posts in this thread, I am still not completely sure whether I have received an answer to my original question. That was whether the hfov panotools reports is the exact same angle the physical camera sees - i.e. leftmost to rightmost pixel at the vertical center line - or whether hfov is some virtual camera angle that needs to be radially compensated to get the physical angle. I am leaning toward the latter, but am not 100% certain. If so, how should one compensate the hfov to get the physical angle?
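      For reference, the physical angle of an ideal, distortion-free rectilinear lens follows directly from the pinhole model. A minimal sketch (the sensor width and focal length below are made-up illustration values, not figures from this project):

```python
import math

def rectilinear_hfov(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Physical horizontal field of view of an ideal rectilinear lens:
    hfov = 2 * atan(w / (2 * f))."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# Hypothetical example: a 6.4 mm wide sensor behind an 8 mm lens
print(round(rectilinear_hfov(6.4, 8.0), 1))  # -> 43.6 degrees
```

      If panotools' hfov is a virtual angle, the question is exactly how it relates to this pinhole value.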

      Regards
      Henrik
    • Jim Watters
      Message 2 of 23 , Jan 7, 2011
        On 2011-01-07 7:03 PM, engstrom_henrik wrote:
        > I am still not completely sure if I received an answer to my original question. That was if the hfov panotools report is the exact same angle as the physical camera sees - i.e. leftmost to rightmost pixel at the vertical center line. Or, if hfov is some virtual camera angle that needs to be radially compensated to get the physical angle. I am leaning to the latter, but am not 100% certain. If so, how shall one compensate the hfov to get the physical angle?
        >
        > Regards
        > Henrik

        The best way to have panotools optimize FoV is to take a 360 pano.

        If you were to optimize HFoV but set the distortion values a, b and c to 0, then the
        HFoV would be correct.

        If the input images are in portrait then you could optimize both HFoV and the abc
        values, and the HFoV would still be correct.

        But if the input images are in landscape mode and there are non-zero values for abc,
        then the HFoV will be changed slightly by the abc values. Panotools maintains the
        FoV along the shortest dimension when applying abc distortion correction. When the
        input images are in portrait, the shortest dimension is the horizontal one, so even
        with optimized distortion the HFoV stays the same.
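        This shortest-dimension behaviour can be sketched with the commonly documented panotools radial polynomial (a sketch, assuming r is normalized to half the shortest image dimension and d = 1 - (a + b + c); the a, b, c values below are illustrative only, not from any real lens):

```python
def radial_scale(r: float, a: float, b: float, c: float) -> float:
    """Multiplier applied to the radius r in the panotools a/b/c
    distortion model; d is chosen so there is no scaling at r = 1,
    i.e. at half the SHORTEST image dimension."""
    d = 1.0 - (a + b + c)
    return a * r**3 + b * r**2 + c * r + d

width, height = 1280, 720             # landscape frame
r_edge = width / height               # horizontal edge, normalized to
                                      # half the short (vertical) side
a, b, c = 0.0, -0.02, 0.0             # illustrative distortion values
print(radial_scale(1.0, a, b, c))     # 1.0 -> short edge unscaled
print(radial_scale(r_edge, a, b, c))  # != 1.0 -> horizontal extent,
                                      # and hence HFoV, shifts
```

        In portrait orientation the horizontal edge is the short edge (r = 1), so the abc values leave the HFoV untouched, matching the behaviour described above.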

        Up till a couple days I never thought of this. Generally the difference in FoV
        created by the distortion values are small and as long as I got a good finished
        pano I did not care.

        --
        Jim Watters
        http://photocreations.ca
      • Erik Krause
        Message 3 of 23 , Jan 8, 2011
          Am 08.01.2011 00:03, schrieb engstrom_henrik:
          > And when transforming an image in a non-linear fashion (e.g.
          > stitching) you will get worse apparent resolution, something one
          > simply cannot afford for this type of surveillance. This was easy to
          > see with the naked eye in this case, the images were far from the
          > original sharpness.

          What stitching engine did you use? If you don't scale up, a single
          transformation step (and panotools combines all necessary warping into
          one step) should not cause any noticeable image degradation. At least
          not if you use one of the better interpolation algorithms.

          --
          Erik Krause
          http://www.erik-krause.de
        • paul womack
          Message 4 of 23 , Jan 10, 2011
            engstrom_henrik wrote:
            > This is a surveillance application where a large area will be monitored by multiple cameras. To get a full overview, ten video cameras will form a 360 degree coverage setup in a circular array. Each individual camera will be connected to a separate monitor in a remote surveillance center. The monitors will be positioned in a circular array with an operator placed in the middle, to mimic the scene viewed by the cameras.

            Doesn't that provide a perfect emulation of bad things happening behind the viewer's back?

            BugBear
          • engstrom_henrik
            Message 5 of 23 , Jan 10, 2011
              --- In PanoToolsNG@yahoogroups.com, Jim Watters <jwatters@...> wrote:
              >
              > On 2011-01-07 7:03 PM, engstrom_henrik wrote:
              > > I am still not completely sure if I received an answer to my original question. That was if the hfov panotools report is the exact same angle as the physical camera sees - i.e. leftmost to rightmost pixel at the vertical center line. Or, if hfov is some virtual camera angle that needs to be radially compensated to get the physical angle. I am leaning to the latter, but am not 100% certain. If so, how shall one compensate the hfov to get the physical angle?
              > >
              > > Regards
              > > Henrik
              >
              > The best way to have panotools optimize FoV is to take a 360 pano.
              >
              > If you were to optimize HFoV but set the distortion values for abc at 0 then the
              > HFoV would be correct.
              >
              > If the input images are in portrait then you could optimize HFoV and the abc
              > values and HFoV would be correct.
              >
              > But if the input images are in landscape mode and there are values for abc then
              > the HFoV would be changed slightly from the abc values. Panotools maintains the
              > FoV on the shortest dimension when using abc distortion correction. When the
              > input images are in portrait the shortest dimension is also the HFoV so even
              > with optimizing distortion the HFoV will be the same.
              >
              > Up till a couple days I never thought of this. Generally the difference in FoV
              > created by the distortion values are small and as long as I got a good finished
              > pano I did not care.
              >
              > --
              > Jim Watters
              > http://photocreations.ca
              >

              Thanks Jim,

              Your description of the portrait/landscape aspects makes perfect sense to me. If I find any more information on this subject I will post it here.

              Regards
              Henrik
            • engstrom_henrik
              Message 6 of 23 , Jan 10, 2011
                --- In PanoToolsNG@yahoogroups.com, Erik Krause <erik.krause@...> wrote:
                >
                > Am 08.01.2011 00:03, schrieb engstrom_henrik:
                > > And when transforming an image in a non-linear fashion (e.g.
                > > stitching) you will get worse apparent resolution, something one
                > > simply cannot afford for this type of surveillance. This was easy to
                > > see with the naked eye in this case, the images were far from the
                > > original sharpness.
                >
                > What stitching engine did you use? If you don't scale up a single
                > transformation step (and panotools combines all necessary warping into
                > one step) should not cause any noticeable image degradation. At least
                > not if you use one of the better interpolation algorithms.
                >
                > --
                > Erik Krause
                > http://www.erik-krause.de
                >

                I do not know exactly how the stitching was performed in that particular case; it is the IPR of another company. I am guessing something like one-step bicubic interpolation.

                Panotools has much more advanced, near-sinc interpolators. But to my knowledge they all remove some high-frequency information. In this particular application it is beneficial to be able to read text on a vehicle, and this is where the transformations may hurt performance, since text contains quite high-frequency edges. Another aspect is the ability to detect small vehicles far away.

                But I really would like to be proven wrong here; it could solve a lot of problems. I have uploaded some test images, which I hope can be viewed correctly:

                http://www.flickr.com/photos/21287457@N05/

                - 01_hf_pattern is an extremely high-frequency pattern (256x256 pixels) used as reference.
                - 02_rotate_1_0 is the reference pattern rotated 1.0 degrees in GIMP (possibly bicubic?).
                - 03_hf_pattern_zoom is the top-left 64x64 pixels of the reference, zoomed in.
                - 04_rotate_1_0_zoom is the top-left 64x64 pixels of the rotated pattern, zoomed in.

                It tries to illustrate the effects of transforming high-frequency image data. It is an extreme case, but it still shows that information may get completely lost in the process. Can a more advanced interpolator perform much better in this case?

                If one uses, for example, 10-15 MP cameras, one may not think of this as a problem. Here, typically ~1 MP cameras are used, and the effect can be very noticeable.
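                The mechanism can be seen without the uploaded images. A minimal pure-Python sketch (assuming bilinear interpolation; whatever GIMP actually used, the argument is the same) shows that any sample falling between a black and a white pixel of a 1-pixel checkerboard averages to grey:

```python
def checker(x: int, y: int) -> float:
    return float((x + y) % 2)        # 1-pixel black/white checkerboard

def bilinear(img, fx: float, fy: float) -> float:
    """Sample img at the fractional position (fx, fy)."""
    x0, y0 = int(fx), int(fy)
    dx, dy = fx - x0, fy - y0
    return ((1 - dx) * (1 - dy) * img(x0, y0) +
            dx * (1 - dy) * img(x0 + 1, y0) +
            (1 - dx) * dy * img(x0, y0 + 1) +
            dx * dy * img(x0 + 1, y0 + 1))

print(bilinear(checker, 4.0, 4.0))   # 0.0 -> on-grid sample keeps contrast
print(bilinear(checker, 4.5, 4.0))   # 0.5 -> half-pixel shift is pure grey
```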

                Regards
                Henrik
              • Erik Krause
                Message 7 of 23 , Jan 10, 2011
                  Am 10.01.2011 21:59, schrieb engstrom_henrik:
                  > But I really would like to be proven wrong here, it could solve a lot
                  > of problems. I have uploaded some test images, I hope they can be
                  > viewed correctly;
                  >
                  > http://www.flickr.com/photos/21287457@N05/
                  >
                  > - 01_hf_pattern is an extremely high-frequency pattern (256x256
                  > pixels) used as reference.

                  This is a really extreme example indeed. However, it is of purely
                  theoretical use, since (I think) the Nyquist theorem prohibits a camera
                  from ever capturing such an image.

                  It is not possible to rotate your 01_hf_pattern example at the same size
                  without losing information. This is because a pixel which ends up in between
                  a black and a white pixel will be grey, no matter what interpolator you
                  choose (except nearest neighbor, where you get an aliasing pattern instead).
                  This is most probably due to the Nyquist theorem again.

                  But if you enlarge and then rotate the image you don't lose much, since now
                  there is more than one pixel to display a black-white boundary. If you
                  do it in GIMP you need to first enlarge, then rotate; panotools does both
                  in one step. Interestingly, the worst and fastest interpolator -
                  nearest neighbor (actually not an interpolator but a pixel picker) -
                  yields the sharpest result in this case (albeit with aliasing steps).

                  A comparison of the "classical" panotools interpolators is on
                  http://photocreations.ca/interpolator

                  The "new" ones feature adaptive kernel sizes which avoid aliasing while
                  downsizing.
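                  The enlarge-first point can be sketched the same way: after a hypothetical 2x nearest-neighbour enlargement, each checker cell covers 2x2 pixels, so most off-grid samples land between two same-coloured pixels and the black/white structure survives resampling:

```python
def checker(x: int, y: int) -> float:
    return float((x + y) % 2)        # 1-pixel black/white checkerboard

def enlarged(x: int, y: int) -> float:
    return checker(x // 2, y // 2)   # 2x nearest-neighbour enlargement

def bilinear(img, fx: float, fy: float) -> float:
    """Sample img at the fractional position (fx, fy)."""
    x0, y0 = int(fx), int(fy)
    dx, dy = fx - x0, fy - y0
    return ((1 - dx) * (1 - dy) * img(x0, y0) +
            dx * (1 - dy) * img(x0 + 1, y0) +
            (1 - dx) * dy * img(x0, y0 + 1) +
            dx * dy * img(x0 + 1, y0 + 1))

print(bilinear(enlarged, 4.5, 4.0))  # 0.0 -> inside a cell: contrast kept
print(bilinear(enlarged, 5.5, 4.0))  # 0.5 -> only the cell boundary blends
```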

                  --
                  Erik Krause
                  Offenburger Str. 33
                  79108 Freiburg