Re: [PanoToolsNG] Re: Quest for effective high-res 360x180 VR pano distribution

  • Ken Warner
    Message 1 of 27, Jun 3, 2011
      Hi Erik,

      Yes, I understand the jpeg compression block thing. I thought it
      was 8x8 but no matter. 512 is a handy size for lots of memory management
      issues as is 1024.

      I also don't have a clue as to what is going on deep inside the rendering
      engine. It would be nice to know more about that and if you really have
      to have overlap for every 512 (or 510) block of memory or just for
      the whole cube face.

      I remember an experiment I once did with OpenGL and WebGL, putting
      together 6 cube faces. I got visible seams on the edges but nowhere
      else. Someone suggested a particular parameter for one of the OpenGL
      functions that blended the seams. I forget the details now.
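
      For what it's worth, the usual suspect for seams that show up exactly on
      cube-face edges is the texture wrap mode. A minimal sketch in
      WebGL/TypeScript terms (standard calls, applied to whichever texture is
      currently bound; whether this was the parameter suggested at the time is
      only a guess):

      function clampFaceEdges(gl: WebGLRenderingContext): void {
        // With the default REPEAT wrap mode, sampling right at a face border
        // bleeds in texels from the opposite edge of the texture, which shows
        // up as a visible seam; CLAMP_TO_EDGE avoids that.
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
      }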

      erik_leeman wrote:
      > Hi Ken,
      >
      > I think you are confusing tiles and cube faces here.
      > In this Pano2VR context tiles are the 'building blocks' of a multi-resolution panorama. They are the .jpeg files that have to be shoveled in and out of the graphics engine in large numbers. Inside that graphics engine there is, as I understand it, one big soup of data in which those cube faces do not really exist as such; they are abstract entities. Therefore I think their dimensions are not relevant, as long as they completely fit into the available amount of memory. But I could be wrong of course.
      > The dimensions of those individual .jpegs, on the other hand, do matter, not only for logistical purposes but also for the efficiency of the JPEG compression algorithm. I do not have a link to solid information about this at hand, but what I remember is that you have to limit bitmap dimensions to multiples of 16x16 pixels because JPEG compression works best that way. 512x512 complies nicely with that requirement.
      >
      > Selecting cubefaces from a range of 510-1020-1530-2040-2550-3060-3570 in the Multiresolution tab of Pano2VR, a tile size of 510, AND an overlap of 1 pixel for each tile seems to me to be the way to 'get it right'. But again, I could be wrong.
      >
      > Cheers!
      >
      > Erik Leeman
      >
      > <http://www.flickr.com/photos/erik-nl/> <http://www.erikleeman.com/>
      >
      > --- In PanoToolsNG@yahoogroups.com, Ken Warner wrote:
      >> Erik,
      >>
      >> I understand the need for sizes of textures to be a power of two.
      >> I won't go into the technical details here because I probably
      >> wouldn't get it right with the current graphics cards being so
      >> advanced.
      >>
      >> But I have a question and need clarification.
      >>
      >> A one-pixel overlap on a 510x510 tile will give you 512x512
      >> tiles. However, a multiple of 510 will give more than 1 pixel of
      >> overlap.
      >>
      >> 512, 1024, 1536, 2048 etc. are the multiples of two. So you want
      >> 1534 to give you a one-pixel overlap, not 1530 --- and the same for
      >> other multiples. The multiples are of (for example) 512 - 2 = 510 to
      >> get the tile size right, or 2048 - 2 = 2046 etc.
      >>
      >> Am I wrong?
      >
      >
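
      To put numbers on the quoted question: if the 1-pixel overlap is added
      on every shared edge of a 510-pixel tile payload, the stored jpegs come
      out at 512x512 while the cube face itself stays a multiple of 510. A
      small TypeScript sketch of that bookkeeping (my reading of the scheme
      under discussion, not Pano2VR's actual code; how edge tiles are padded
      is an assumption):

      function tileGrid(faceSize: number, payload = 510, overlap = 1) {
        if (faceSize % payload !== 0) {
          throw new Error("face size should be a multiple of the tile payload");
        }
        const tilesPerSide = faceSize / payload;      // e.g. 1530 / 510 = 3
        const interiorTile = payload + 2 * overlap;   // 510 + 2 = 512
        const edgeTile = payload + overlap;           // 511 where only one neighbour exists
        return { tilesPerSide, interiorTile, edgeTile };
      }

      // tileGrid(1530) -> { tilesPerSide: 3, interiorTile: 512, edgeTile: 511 }
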
    • Erik Krause
      Message 2 of 27, Jun 7, 2011
        On 03.06.2011 00:09, Trausti Hraunfjord wrote:
        > It means that cubefaces/graphics have to go by the "power of two" concept
        > (this is not a request, but a requirement) . The power of two means that
        > graphics have to have the following dimensions:
        > 1/2/4/8/16/32/64/128/256/512/1024/2048/4096
        >
        [...]
        >
        > This goes for ALL hardware that is based on OpenGL 2 standards, and not some
        > "evil Adobe plan" as some might like to think. All iDevices are already
        > bound by these standards.

        Do you have a source for this claim? According to Nvidia documentation
        (PDF): http://tinyurl.com/5s47pnz "Previously core OpenGL required
        texture images (not including border texels) to be a power-of-two size
        in width, height, and depth. OpenGL 2.0 allows arbitrary sizes for
        width, height, and depth"

        Regarding Flash hardware acceleration Adobe writes in
        http://tinyurl.com/3qssbwz "Your bitmap dimensions do not have to be a
        power of two, but the more iterations over which they can be evenly
        divided, the better."

        However, if I understand
        http://www.khronos.org/webgl/wiki/WebGL_and_OpenGL_Differences
        correctly, non-power-of-two textures can't be mip-mapped in OpenGL and
        WebGL, which indeed would be a drawback. But the page also describes a
        mechanism to work around this limitation...
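
        The workaround that page describes boils down to treating a
        non-power-of-two image as a plain, un-mipmapped texture. A sketch in
        WebGL/TypeScript terms (standard WebGL 1 calls; the helper names are
        mine, and this is an illustration of the rule rather than the wiki's
        exact code):

        const isPot = (n: number) => n > 0 && (n & (n - 1)) === 0;

        function uploadTexture(gl: WebGLRenderingContext,
                               image: HTMLImageElement): WebGLTexture {
          const tex = gl.createTexture()!;
          gl.bindTexture(gl.TEXTURE_2D, tex);
          gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
          if (isPot(image.width) && isPot(image.height)) {
            // power-of-two: mipmapping works as usual
            gl.generateMipmap(gl.TEXTURE_2D);
            gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
          } else {
            // NPOT: no mipmaps and no REPEAT wrapping
            gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
            gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
            gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
          }
          return tex;
        }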

        --
        Erik Krause
        http://www.erik-krause.de
      • Trausti Hraunfjord
        Message 3 of 27, Jun 7, 2011
          I am the source of the claim.

          You can read other sources with different claims, such as the ones you have
          provided. Here is one that deals with the issue based on mipmapping, which
          is the way our F11 based panorama engine handles things.
          http://gamedev.stackexchange.com/questions/7927/why-would-you-use-textures-that-are-not-a-power-of-2
          Of course workarounds exist, and some may be quite OK, but still,
          the best way to do things is the right way.

          Probably I was too general in my claim, but since I am not an expert,
          a rocket scientist, or a math genius, I only claimed what I thought to
          be absolutely true. More knowledge doesn't hurt, and I have read a
          little more about the subject. My previous knowledge comes from my
          programmer:

          I asked: So it will not be possible to use cubefaces that are 2500x2500
          pixels?
          He answered: No. Mipmapping for the GPU requires the power of two sizes.

          There was more, but that is what I based my claim on.
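
          For perspective, a quick back-of-the-envelope check on that 2500x2500
          example (plain arithmetic in TypeScript, nothing engine-specific):

          const below = 2 ** 11;                             // 2048
          const above = 2 ** 12;                             // 4096
          const padWaste = (above * above) / (2500 * 2500);  // ~2.7x the pixels if padded up to 4096

          So a strictly power-of-two, mipmapped pipeline would have to either
          downscale a 2500-pixel face to 2048 or pad it up to 4096, which costs
          roughly 2.7 times the memory.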

          Trausti



          On Tue, Jun 7, 2011 at 2:26 AM, Erik Krause <erik.krause@...> wrote:

          > On 03.06.2011 00:09, Trausti Hraunfjord wrote:
          > > It means that cubefaces/graphics have to go by the "power of two" concept
          > > (this is not a request, but a requirement) . The power of two means that
          > > graphics have to have the following dimensions:
          > > 1/2/4/8/16/32/64/128/256/512/1024/2048/4096
          > >
          > [...]
          > >
          > > This goes for ALL hardware that is based on OpenGL 2 standards, and not
          > some
          > > "evil Adobe plan" as some might like to think. All iDevices are already
          > > bound by these standards.
          >
          > Do you have a source for this claim? According to Nvidia documentation
          > (PDF): http://tinyurl.com/5s47pnz "Previously core OpenGL required
          > texture images (not including border texels) to be a power-of-two size
          > in width, height, and depth. OpenGL 2.0 allows arbitrary sizes for
          > width, height, and depth"
          >
          > Regarding Flash hardware acceleration Adobe writes in
          > http://tinyurl.com/3qssbwz "Your bitmap dimensions do not have to be a
          > power of two, but the more iterations over which they can be evenly
          > divided, the better."
          >
          > However, if I understand
          > http://www.khronos.org/webgl/wiki/WebGL_and_OpenGL_Differences
          > correctly, non-power-of-two textures can't be mip-mapped in OpenGL and
          > WebGL, which indeed would be a drawback. But the page also describes a
          > mechanism how to work around this limitation...
          >
          > --
          > Erik Krause
          > http://www.erik-krause.de
        • Wim Koornneef
          Message 4 of 27, Jun 7, 2011
            erik_leeman wrote:
            > .....
            > Selecting cubefaces from a range of 510-1020-1530-2040-2550-3060-3570 in
            > the Multiresolution tab of Pano2VR, a tile size of 510, AND an overlap of
            > 1 pixel for each tile seems to me to be the way to 'get it right'.......
            >

            I agree, but I think it is good to give an explanation of why.

            With the default base tile size and the default tile steps you often
            get a lot of tiles in each tile set that are not equal in height and
            width; by doing some math yourself you can make sure that each tile
            in a set has the same height and width.
            Doing this reduces the number of tiles in each tile set, which can
            speed up the download and processing time.
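
            A quick sketch of the kind of math meant here (illustrative only,
            not what Pano2VR does internally): if the face size is not an exact
            multiple of the tile size, every level gets an extra ragged row and
            column of smaller tiles.

            // count the tiles for one cube face at a given tile size
            function tileBreakdown(faceSize: number, tileSize: number) {
              const full = Math.floor(faceSize / tileSize);
              const remainder = faceSize % tileSize;   // size of the leftover edge tiles, 0 if none
              const perSide = full + (remainder > 0 ? 1 : 0);
              return { tilesPerFace: perSide * perSide, remainder };
            }

            // tileBreakdown(2600, 512) -> 36 tiles, with 40-pixel slivers along two edges
            // tileBreakdown(2550, 510) -> 25 tiles, all the same size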

            By adding extra tile steps the transitions will be better. With the
            default number of tile steps, when you zoom in and get close to the
            point where new tiles will be displayed, the image will be a bit
            fuzzy; then, when you zoom in further and the tiles are changed, the
            image suddenly becomes sharp. With an extra step you will not see
            the point where the tiles are changed.

            Of course the total pano size will be larger with extra steps, but
            you get extra quality back for it (better transitions and less
            visible shimmering), and when viewing the pano on a normal computer
            with an average-speed internet connection the extra total size is
            not an issue.

            Skipping steps, as Erik did, is possible when you know the size of
            the screens you are making the panos for.
            It is a bit of work, but if you make a pano with many steps, from
            very small to the maximum with extra steps in between, and then use
            Photoshop to give the tiles that contain the same part of the scene
            a "visible" number that corresponds to the step, you can watch the
            numbers in the pano change as you zoom in and out.
            Also zoom in and out with different sizes of the viewer window; by
            doing this you learn which tiles are used for a specific display
            size.
            You can also use this test to see what happens when you change the
            bias and other settings.

            Wim




          • Ken Warner
            Message 5 of 27, Jun 7, 2011
              1) WebGL and iDevices are OpenGL ES (embedded systems). It's not
              clear to me whether what you can say about OpenGL also applies to
              OpenGL ES. I know some parameters to some functions are different.

              2) There is no rule that says the edge of a texture has to be
              mapped to a particular place on a surface. And there's no rule
              that says "cube faces" can't be bigger than the cube they
              surround. It's virtual space, not real space.
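
              Point 2 can be made concrete with texture coordinates: nothing
              forces a quad to sample all the way out to the texel border. A
              small TypeScript sketch (plain UV arithmetic, purely
              illustrative; the 512/510/1-pixel numbers are the ones from
              earlier in this thread):

              // UVs for a quad that samples only the inner payload of a tile,
              // leaving the 1-pixel overlap border unused by this quad.
              function insetUvs(textureSize = 512, overlap = 1): Float32Array {
                const lo = overlap / textureSize;      // 1/512
                const hi = 1 - overlap / textureSize;  // 511/512
                // two triangles covering the quad, as (u, v) pairs
                return new Float32Array([
                  lo, lo,  hi, lo,  hi, hi,
                  lo, lo,  hi, hi,  lo, hi,
                ]);
              }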

              Erik Krause wrote:
              > On 03.06.2011 00:09, Trausti Hraunfjord wrote:
              >> It means that cubefaces/graphics have to go by the "power of two" concept
              >> (this is not a request, but a requirement) . The power of two means that
              >> graphics have to have the following dimensions:
              >> 1/2/4/8/16/32/64/128/256/512/1024/2048/4096
              >>
              > [...]
              >> This goes for ALL hardware that is based on OpenGL 2 standards, and not some
              >> "evil Adobe plan" as some might like to think. All iDevices are already
              >> bound by these standards.
              >
              > Do you have a source for this claim? According to Nvidia documentation
              > (PDF): http://tinyurl.com/5s47pnz "Previously core OpenGL required
              > texture images (not including border texels) to be a power-of-two size
              > in width, height, and depth. OpenGL 2.0 allows arbitrary sizes for
              > width, height, and depth"
              >
              > Regarding Flash hardware acceleration Adobe writes in
              > http://tinyurl.com/3qssbwz "Your bitmap dimensions do not have to be a
              > power of two, but the more iterations over which they can be evenly
              > divided, the better."
              >
              > However, if I understand
              > http://www.khronos.org/webgl/wiki/WebGL_and_OpenGL_Differences
              > correctly, non-power-of-two textures can't be mip-mapped in OpenGL and
              > WebGL, which indeed would be a drawback. But the page also describes a
              > mechanism how to work around this limitation...
              >