
Stereo still panoramas from Wifi synced Protune Gopro cameras shooting video

  • panovrx
    Message 1 of 3 , Oct 16, 2012
      I have been doing quite a lot of testing with two GoPro Hero 2 cameras using the Wifi BacPacs and the Remote to see what sort of sync is achievable for video shooting. It is mostly as good as can be got with the dedicated 3D kit backs and sync connection cord, but it is much more convenient not having to deal with a sync cord, and you get remote start and stop at the top of poles etc. Also, soon (and already with 3rd-party apps) there will be live streaming monitoring on my Android phone. You do have more batteries to worry about, though.

      With Protune compression and dynamic range selected, 360 still panoramas stitched from Gopro video frame sequences are distinctly better looking than before.

      Stereo still panoramas using video-shooting Wifi synced GoPros with Protune settings look very promising, I think, for rapid semi-automatic stereo capture and stitching of action scenes.

      Here is a draft stereo 360 panorama (anaglyph) of a crowded street scene captured in 15 seconds from Wifi synced Gopros
      (360/180 equi -- use DevalVR etc to view interactively)
      http://www.mediavr.com/pittstgopro.jpg

      The angular coverage of the stitched panoramas is about 160 degrees vertically. (I had the cameras rolled 30 degrees off vertical so I could use the extra diagonal coverage of the lenses).

      This is without any retouching, using Smartblend blending of the crowds.
      The obvious errors are not sync errors but Smartblend differences in stitching the L and R panoramas. Where Smartblend operated similarly on the crowd areas in the L and R panoramas, the sync is good. By outputting individual layers from the source images, the Smartblend difference errors could be retouched out.

      One of the advantages of video for capturing stereo still panoramas is that you have lots of frames, so you can use one camera in an NPP location (the left camera here) and the other offset (about 15 cm interaxial here) and still get good blending for the offset camera. (The alternative is to have the cameras symmetrically disposed, but that complicates a definitive calibration solution for either camera.) I used a rotator that spun the cameras around continuously in about 15 seconds, and I reduced the number of frames for stitching to 25% of the original number available. So it was equivalent to shooting at about 8 fps instead of 30 fps. I still had about 100 frames per 360 for each camera, so the non-NPP character of the right camera was not an issue.
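The frame-thinning step above (keeping 25% of the frames, i.e. roughly 8 fps from 30 fps source) can be sketched as follows. File names and counts are illustrative assumptions, not the actual capture files.

```python
# Sketch: thin a 30 fps frame sequence to roughly 8 fps by keeping
# every 4th frame (25% of the originals).
# Frame names here are hypothetical, for illustration only.

def select_frames(frame_names, keep_ratio=0.25):
    """Keep every Nth frame so about keep_ratio of them survive."""
    step = round(1 / keep_ratio)          # 0.25 -> every 4th frame
    return frame_names[::step]

frames = [f"left_{i:05d}.jpg" for i in range(450)]   # ~15 s at 30 fps
kept = select_frames(frames)
print(len(kept))   # prints 113, close to the ~100 frames per 360 mentioned
```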

      PeterM
    • panovrx
      Message 2 of 3 , Oct 16, 2012
        How to improve equality of action of Smartblend stitching of L and R stereo panoramas ....

        How to smart blend crowd etc. scenes in stereo? This is the main remaining practical problem for action stereo panos, I think. With my current workflow I convert the source L/R videos into frames, reduce the number of frames to 25% say, and level and align the first L and R fisheye frames. Then I convert the L and R video frame sequences to level 30-degree wide, 180-degree high equi strips. Then I calibrate and stitch the narrow L equi strips to a 360/180 equi panorama, apply that solution as a template to the R equi strips, and stitch those to get the R panorama.
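The first step of that workflow (video to frames at a reduced rate) is commonly done with ffmpeg's `fps` filter; here is a minimal sketch that only builds the commands. The file names are assumptions, and any video-to-frames tool would serve equally well.

```python
# Sketch of the frame-extraction step: build an ffmpeg command that pulls
# ~8 fps worth of frames from each camera's video as numbered stills.
# "left.mp4"/"right.mp4" are hypothetical names; the command is built,
# not executed, so the rest of the pipeline stays tool-agnostic.

def extract_cmd(video, out_pattern, fps=8):
    """Return an ffmpeg argv that dumps `fps` frames/second as stills."""
    return ["ffmpeg", "-i", video, "-vf", f"fps={fps}", out_pattern]

for cam in ("left", "right"):
    print(" ".join(extract_cmd(f"{cam}.mp4", f"{cam}_%05d.jpg")))
```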

        In the stitching of the equi strips for the L and R panoramas I use Smartblend to deal with multiple appearances of the crowd in different shots. Now the R equi strips are aligned with the most distant areas of the L equi strips (because that is the simplest way to align them). So the most distant areas line up but closer areas are more or less displaced laterally from each other. This is why the Smartblend action is different for the L and R stitching.

        To get better equivalence in the Smartblend stitching, you can try to align the L and R equi strips at the main foreground distance instead of at infinity. You can do this in PTGui by iteratively trying different yaw values when you generate the R equi strips. Now your L and R equi strips are aligned horizontally at some foreground depth instead of at infinity. Apply the L panorama template as before, and now your Smartblending will be more equal between L and R.
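A reasonable starting point for that iterative yaw search is the angle the interaxial baseline subtends at the chosen foreground distance. The 15 cm interaxial is from the post above; the 3 m foreground distance is an assumed example value.

```python
# Sketch: estimate the yaw offset (in degrees) that aligns the R strips
# with the L strips at a chosen foreground distance rather than at infinity.
# 0.15 m interaxial is from the post; 3 m foreground is an assumed example.
# PTGui yaw values would then be tried around this starting estimate.
import math

def yaw_offset_deg(interaxial_m, distance_m):
    """Angle the interaxial baseline subtends at the given distance."""
    return math.degrees(math.atan2(interaxial_m, distance_m))

print(round(yaw_offset_deg(0.15, 3.0), 2))  # prints 2.86 (degrees)
```

As the alignment distance goes to infinity the offset goes to zero, which recovers the original infinity-aligned case.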

        Ideally there would be a smart blend option tailored to stereo source frame sets. It would detect the lateral alignment before blending.

        PeterM



      • Erik Krause
        Message 3 of 3 , Oct 17, 2012
          On 17.10.2012 05:15, panovrx wrote:
          > Ideally there would be a smart blend option tailored to stereo source
          > frame sets.

          This would probably be more complicated, even if smartblend were in
          active development (which isn't the case, unfortunately). If I
          understand correctly, you would like the program to place the
          blending seam such that a person who is in the left image is in the
          right image, too. But since there is a displacement between the two
          appearances, the program would need to identify them as a person.

          Currently smartblend does some magic based on "Kolmogorov Min Cut" and
          "Psychovisual Error" (according to Mike Norel, the developer of
          smartblend) in order to find the seam line. But I doubt this technique
          is magical enough to accomplish what you want.
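For readers unfamiliar with seam optimization: the idea can be illustrated with a much simpler dynamic-programming seam on a per-pixel mismatch cost, a 1-D analogue of the graph-cut ("min cut") optimization mentioned above. This is not smartblend's actual algorithm, whose cost terms are not public in detail; it only shows how a seam avoids high-mismatch regions.

```python
# Simplified sketch: find the cheapest top-to-bottom seam through a grid of
# per-pixel mismatch costs (a dynamic-programming analogue of the graph-cut
# seam optimization; smartblend's real algorithm is not public in detail).

def cheapest_vertical_seam(cost):
    """cost: rows of per-pixel mismatch values; returns one column per row."""
    rows, cols = len(cost), len(cost[0])
    total = [list(cost[0])]
    for r in range(1, rows):
        prev = total[-1]
        # each cell may continue from the cell above or a diagonal neighbour
        total.append([
            cost[r][c] + min(prev[max(c - 1, 0):min(c + 2, cols)])
            for c in range(cols)
        ])
    # backtrack from the cheapest bottom cell
    seam = [min(range(cols), key=lambda c: total[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        cand = range(max(c - 1, 0), min(c + 2, cols))
        seam.append(min(cand, key=lambda c2: total[r][c2]))
    return seam[::-1]

# Tiny example: mismatch is low in column 1, so the seam stays there.
cost = [[9, 1, 9],
        [9, 1, 9],
        [9, 1, 9]]
print(cheapest_vertical_seam(cost))  # prints [1, 1, 1]
```

A stereo-aware variant, as wished for above, would additionally have to make the L and R seams cut through corresponding (laterally displaced) image content, which is what makes the problem harder.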

          BTW: Did you try using Photoshop's autoblend? According to Hans it
          is even better than smartblend.

          --
          Erik Krause
          http://www.erik-krause.de