360 GoPro 3D crowd
- http://www.mediavr.com/newtown3d/newtown3dgopropanorama.htm (anaglyph)
Here is a very large crowd in 3D, shot from 5 m in the air. Pole views suit hyperstereo action panoramas: with no near objects you have more scope for hyperstereo, and moving objects (people) are more likely to be visually isolated against the ground, rather than standing in serried ranks as seen from ground level, which is more challenging for equal L/R Smartblend 3D stitching.
The GoPro camera (the Hero 2 at least) is also very flare-prone, and tilting the cameras down, as here, reduces flare and evens out the automatic exposure variations. This was shot with the Medium angle setting, with Protune, at 1080p and 25 fps, with a 360° rotation in 10 seconds = 250 frames, using the 3D Kit with an extender cable for hyperstereo. The down tilt here is about 25 degrees.
The depth artefacts that remain are due to differential (L/R) Smartblend action. One way I thought of to handle this in future is to use the crop feature in PTGui and do multiple stitches, varying the offset of the crop positions between the L and R stitches. This corresponds to setting Smartblend to optimise its action for a particular parallax (scene depth) value. Then composite the multiple stitches.
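The compositing step of that idea could be sketched roughly as below. This is only a hypothetical illustration, assuming each candidate stitch pair is already loaded as a numpy array; the per-pixel rule (keep the stitch whose local L/R colour difference is smallest) is my assumed proxy for "blended consistently", not anything PTGui or Smartblend provides.

```python
import numpy as np

def composite_stitches(left_stitches, right_stitches):
    """Composite several L/R panorama stitches, each produced with a
    different crop offset (i.e. blending optimised for a different
    parallax). For every pixel, keep the stitch whose local L/R colour
    difference is smallest. Inputs: lists of float arrays, each of
    shape (H, W, 3) with values in [0, 1]."""
    L = np.stack(left_stitches)        # (N, H, W, 3)
    R = np.stack(right_stitches)       # (N, H, W, 3)
    err = np.abs(L - R).sum(axis=-1)   # per-pixel L/R inconsistency (N, H, W)
    best = err.argmin(axis=0)          # index of best stitch per pixel (H, W)
    rows, cols = np.indices(best.shape)
    return L[best, rows, cols], R[best, rows, cols]
```

In practice you would want to blur `err` a little first so the selection does not flicker pixel-to-pixel, but the selection idea is the same.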
- Here, from the GoPro Users 3D forum, are some answers I gave to questions about GoPro 3D panorama production:
This is stitched (with PTGui) from 10 seconds' worth of video frames,
shot with the 3D Kit (with an extender) on Hero 2s with Protune, 1080p M, 25 fps, with the
cameras on a pole with a rotator spinning them through one revolution in 10 seconds.
What look like sync errors are in fact blending differences between the stitching of the action in the L and R images.
The workflow is convoluted but fast, because the frames are small.
Calibration means finding common points between images and working out how the camera must have
rotated between shots, and what lens characteristics it must have (FOV, distortions, off-centredness).
The common points are found automatically, and the point finding can be constrained by masks, allowing
you to specify that only static, distant points are used in some calibration situations.
First I crop the videos to synced 360° sequences and convert them to frames (L and R sets).
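This first step can be sketched with ffmpeg, driven from Python. The filenames and the per-camera sync offsets below are hypothetical; the point is that each camera's clip gets its own `-ss` start so the two 10-second frame sets come out in sync.

```python
import subprocess

def frame_extract_cmd(video, start_s, dur_s, out_pattern, fps=25):
    """Build an ffmpeg command that cuts one camera's video down to the
    synced 360-degree rotation window and dumps it as numbered frames."""
    return ["ffmpeg", "-ss", str(start_s), "-i", video,
            "-t", str(dur_s), "-vf", f"fps={fps}", out_pattern]

# Hypothetical sync offsets; the 10 s rotation at 25 fps yields 250 frames.
cmd_r = frame_extract_cmd("right.mp4", 3.20, 10, "R/frame_%04d.png")
cmd_l = frame_extract_cmd("left.mp4", 3.28, 10, "L/frame_%04d.png")
# subprocess.run(cmd_r, check=True)  # uncomment to actually extract
# subprocess.run(cmd_l, check=True)
```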
Then I stitch the R frames with PTGui, because my rig rotates around the R camera's lens axis,
and level the resulting panorama in PTGui. Now we know the orientation of frame 1 of the R sequence.
Then I work out the orientation of the first frame of the L sequence using distant
common points shared with frame 1 of the R sequence.
Then I convert all the R fisheye frames into narrow equirectangular images, 150 degrees high by 40 degrees wide, using the first R frame's orientation for every frame.
Likewise, I convert all the L fisheyes into narrow equirectangular images of the same size, using the first L frame's orientation for every frame.
Each pair of R and L equi frames is parallel, and could serve as an aligned stereo pair with no further work.
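The numbers above imply a lot of overlap between neighbouring strips, which is what gives the stitcher room to work. A quick sanity check of the capture geometry from the figures stated earlier (one 360° rotation in 10 s at 25 fps, 40°-wide strips):

```python
# Capture geometry: one 360-degree rotation in 10 seconds at 25 fps.
fps, rotation_s = 25, 10
n_frames = fps * rotation_s              # 250 frames per revolution
step_deg = 360 / n_frames                # 1.44 degrees of yaw per frame

strip_w_deg = 40                         # width of each remapped equi strip
overlap_frames = strip_w_deg / step_deg  # strips covering any one direction
print(n_frames, step_deg, overlap_frames)
```

So roughly 27-28 strips see every direction, assuming the rotation speed is steady.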
Then I stitch the R equi frames into a panorama (the R panorama) using a fresh calibration. I save that calibration as a template and apply it to the L equi frames to get the L panorama.
PTGui will output a webpage with an interactive Flash panorama. More specialised
authoring packages like Pano2VR and Panotour Pro can deliver versions that work on mobile
devices and tablets too, sometimes using the devices' sensors to provide a more
intuitive interface. E.g. turn your iPhone and you see the panorama scroll realistically.
I use Panotour Pro for these panoramas, but Pano2VR has some specific advantages for anaglyph stereo
panoramas (better no-color-compression workflows).
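For reference, the basic red/cyan anaglyph composition those tools perform is simple: the red channel comes from the left panorama and green/blue from the right. This is only the naive scheme, assuming equirectangular images as numpy arrays; the authoring packages mentioned above apply more careful colour handling than this.

```python
import numpy as np

def red_cyan_anaglyph(left, right):
    """Naive red/cyan anaglyph: red channel from the left panorama,
    green and blue channels from the right. Inputs are float arrays
    of shape (H, W, 3) with values in [0, 1]."""
    out = right.copy()
    out[..., 0] = left[..., 0]  # replace red with the left eye's red
    return out
```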
If you look at the street panorama in non-interactive form,
you can see that the vertical alignment of the L and R views is accurate;
but if you look at the festival panorama in interactive form,
you will see a lot of vertical misalignment in some parts of the image.
This is not because the festival panorama had vertical misalignment in the non-interactive form.
It is a more fundamental problem: capturing a stereo panorama this way is not
exactly equivalent to turning around and looking up and down in the real world with two eyes.
But you will notice that the vertical alignment is good in a vertical zone in the centre of the viewer,
and if you don't mind strong disparities in the rest of the image, it still gives a strong 3D impression.
You will notice too that if you tilt down, the bottom corners of the image have the worst disparities.
This is the main problem for stereo panoramas made with two spinning cameras: getting all-over vertical
alignment in interactive versions.
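Why the misalignment is worst away from the central zone and below the horizon can be illustrated numerically. This is a rough sketch, not the full rotating-rig geometry: it just models two eyes on a fixed horizontal baseline (an assumed 0.3 m hyperstereo baseline) and shows that a point off to the side, above or below the horizon, sits at different ranges from the two eyes and so appears at different elevation angles, i.e. with vertical disparity.

```python
import math

def vertical_disparity_deg(azimuth_deg, distance, height, baseline=0.3):
    """Vertical disparity (degrees) between two eyes at x = -b/2 and
    x = +b/2, facing +y, viewing a point at the given azimuth (from
    straight ahead), horizontal distance, and height above eye level."""
    a = math.radians(azimuth_deg)
    px, py = distance * math.sin(a), distance * math.cos(a)

    def elev(eye_x):
        # Elevation angle of the point as seen from one eye position.
        return math.degrees(math.atan2(height, math.hypot(px - eye_x, py)))

    return elev(-baseline / 2) - elev(+baseline / 2)

# Straight ahead the two ranges are equal, so the disparity vanishes;
# off to the side, for a point below the horizon, it does not.
print(vertical_disparity_deg(0, 5, -2))    # 0.0
print(vertical_disparity_deg(80, 5, -2))   # nonzero
```

This matches what the viewer shows: alignment is good in the central vertical zone and degrades toward the sides and corners, especially when tilting down toward nearby ground.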
If you use DSLRs you can get much sharper stereo panoramas done the same way, but the GoPros are
rapidly approaching that sort of quality, and they are much faster and more flexible in the kinds of stereo panoramas
you can make with them.
Either way, the viewer alignment issues remain.