
Re: Sony announce 25Mp 35mm sensor

  • Erik Krause
    Message 1 of 15, Feb 1, 2008
      On Friday, February 01, 2008 at 10:41, Keith Martin wrote:

      > But then, a full-frame sensor camera does mean working with different
      > lenses to get the equivalent effect. So a rough equivalent of the
      > 10.5mm would be 16mm, wouldn't it? Something like the old 16mm
      > fisheye that I used briefly on my old Canon A1.

      This is what I would expect people to do with a full frame sensor,
      yes. But apparently most of them use a 10.5mm or even an 8mm lens in
      order to need fewer shots. With a 25MP full frame sensor they would
      get roughly the same output resolution as with the same lens on a
      10MP crop factor camera.
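      The pixel-density claim above is easy to check. A minimal sketch,
      assuming 36x24 mm for full frame and 24x16 mm for a 1.5-crop sensor
      (the function name is illustrative, not from any library):

```python
import math

def linear_pixel_density(megapixels, width_mm, height_mm):
    """Linear pixel density (pixels per mm), assuming square pixels
    spread evenly over the full sensor area."""
    return math.sqrt(megapixels * 1e6 / (width_mm * height_mm))

# 25 MP full-frame sensor (36 x 24 mm)
ff = linear_pixel_density(25, 36, 24)
# 10 MP sensor at crop factor 1.5 (24 x 16 mm)
crop = linear_pixel_density(10, 24, 16)

print(round(ff), round(crop))  # ~170 vs ~161 px/mm -> nearly identical
```

      So the same lens projects onto nearly the same pixel pitch in both
      cases, which is why the output resolution comes out roughly equal.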

      > Assuming the manufacturing and glass quality was similar, that would
      > give approximately the same view but reduced chromatic aberration.

      Yes, of course. But you can always go for even higher quality using
      longer lenses and more shots...

      > (Slightly reduced depth of field too, but that's physics for ya!)

      There is a frequent misunderstanding about DOF and spherical
      panoramas, mostly because people use DOF values intended for single
      printed images as a comparison. For spherical panoramas you have to
      calculate differently: http://wiki.panotools.org/Depth_of_Field
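      For reference, the DOF discussion builds on the classical hyperfocal
      formula; what the linked page suggests changing for spherical output
      is the circle of confusion, which should come from the panorama's
      pixel size rather than a print-viewing standard. The numbers below
      are assumed example values, not recommendations:

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm):
    """Classical hyperfocal distance H = f^2/(N*c) + f.  When focused
    at H, everything from H/2 to infinity is acceptably sharp.  For
    spherical panoramas the circle of confusion (coc_mm) is derived
    from the output panorama's pixel size, not from a print standard."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# Example: 10.5 mm fisheye at f/8 with an assumed 0.02 mm circle of confusion
h = hyperfocal_mm(10.5, 8, 0.02)
print(f"{h / 1000:.2f} m")  # roughly 0.70 m
```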

      > I don't think it is really a matter of being beyond the resolution of
      > a fisheye, as that's just analog-world optics. The Sigma 8mm and
      > Nikon 10.5mm fisheyes are designed to produce acceptable images on a
      > cropped-area sensor, and trying to capture an image using a broader
      > part of the image means going beyond the design intentions.

      The corners of a crop factor 1.5 image from a 10.5mm lens are very
      close to the outer image circle. Hence, when talking about fisheyes,
      you can't simply say "designed for...". Close to the image circle
      the resolution is lower, not only due to lens design flaws but due
      to the fisheye mapping itself.

      > So... isn't the important thing simply using a lens that is actually
      > meant to cover a full-frame sensor?

      The pre-digital Sigma 8mm lenses were meant to cover a full-frame
      sensor. Nevertheless the image quality was bad near the image circle.

      You could use it on a full-frame sensor for a 3-around workflow,
      where each image contributes about 120°, which is well inside the
      image circle and (coincidentally!) equivalent to a crop by 1.5.

      The same applies if you use a 10.5mm lens on a full frame. In a
      3-around workflow you more or less use only the parts visible on a
      1.5 crop sensor anyway. The excess parts are only necessary because
      you need some overlap to find control points.
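      The overlap arithmetic for an N-around row can be sketched as
      follows; the 150° horizontal field of view is an assumed example
      value, not a measured figure for any particular lens:

```python
def overlap_per_seam(shots_around, hfov_deg):
    """Overlap between adjacent shots in a single-row panorama.
    Each shot must cover 360/shots_around degrees of the circle;
    anything beyond that is overlap available for finding control
    points."""
    spacing = 360.0 / shots_around
    return hfov_deg - spacing

# Hypothetical fisheye with ~150 deg horizontal FOV, shot 3-around
print(overlap_per_seam(3, 150))  # 30 degrees of overlap per seam
```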

      best regards


      Erik Krause
      http://www.erik-krause.de
    • Keith Martin
      Message 2 of 15, Feb 1, 2008
        Sometime around 1/2/08 (at 13:35 +0100) Erik Krause said:

        >close to the image circle there is a lower resolution, not
        >only due to lens design flaws but due to the fisheye mapping.

        Got it. Although really it is lower *quality* that we're talking
        about. Resolution, although related in a sense, means something
        slightly different. At least, with digital images it is used to refer
        to the sensors and the final pixels.

        Thanks for the further info and the DoF link! I was thinking in terms
        of individual shots, but that's interesting data on that wiki. Stuff
        for me to ponder. :-)

        k
      • mrjimbo
        Message 3 of 15, Feb 1, 2008
          Keith,
          I'm not a rocket scientist, but I did learn that the delay in the introduction of the new Betterlight 10k scan back was all about the fact that no lenses resolved what it could do properly. Somehow they reached a compromise and released the back. In my conversations with them at that time, they spoke of the issues related to that level of resolving with optics.

          Further, we must realize that today's optics are multi-element, so it's probably not just a matter of saying "make another one that does it." In the smaller sensors they have been packing in more and more pixels, but as noted in these posts, that has come at a price. So it makes sense to make larger sensors, so the captured information isn't shrunk as much. The optics are actually doing a conversion: making a big image fit on a small sensor. It appears that what we are experiencing is degradation once we reach a certain threshold of our current optical technology.

          The present answer seems to be larger cameras. So tomorrow's Nikon may look like my Pentax 6x7 with a face lift and a Nikon logo on it (hopefully a little lighter too), or a new version of a Sinar 8x10 with a fixed sensor in the back and large pixel sizes... Whooo Hooo.

          jimbo


          ----- Original Message -----
          From: Keith Martin
          To: PanoToolsNG@yahoogroups.com
          Sent: Friday, February 01, 2008 3:41 AM
          Subject: RE: [PanoToolsNG] Re: Sony announce 25Mp 35mm sensor


          Sometime around 1/2/08 (at 01:42 -0800) Paul D. DeRocco said:

          > > From: Erik Krause
          >>
          >> ...then 10MP on a crop factor 1.6 sensor is beyond the resolution of
          >> any fisheye, too. 25MP on a full frame sensor has the same absolute
          >> resolution (pixel density) as a 10MP sensor at crop factor 1.6
          >
          >Probably. Even on my 6Mp 10D, which is a 1.6x crop sensor, I can see a lot
          >of CA near the edges, so it's obvious that I wouldn't be getting any more
          >sharpness if I stuck it on my 10Mp 40D. And a 25MP FF sensor would probably
          >be even worse, because it reaches into the worst part of the lens.

          But then, a full-frame sensor camera does mean working with different
          lenses to get the equivalent effect. So a rough equivalent of the
          10.5mm would be 16mm, wouldn't it? Something like the old 16mm
          fisheye that I used briefly on my old Canon A1. Assuming the
          manufacturing and glass quality was similar, that would give
          approximately the same view but reduced chromatic aberration.
          (Slightly reduced depth of field too, but that's physics for ya!)

          I don't think it is really a matter of being beyond the resolution of
          a fisheye, as that's just analog-world optics. The Sigma 8mm and
          Nikon 10.5mm fisheyes are designed to produce acceptable images on a
          cropped-area sensor, and trying to capture an image using a broader
          part of the image means going beyond the design intentions. So...
          isn't the important thing simply using a lens that is actually meant
          to cover a full-frame sensor?

          (I think that's what you meant in your first post, but I wasn't sure...)

          k




        • Fabio Bustamante
          Message 4 of 15, Feb 2, 2008
            Hey Carel,

            So in theory it would be possible to develop a ~4mp image from a 12.8mp
            RAW file, is it right? Have you ever tried this? How different in
            practice would that be from reducing a 12.8mp picture to a 4mp size with
            a good interpolator?

            I can hardly imagine a 4mp image that surpasses its 12.8mp version in
            any way...

            Is this really used in star photography?
            >
            > One could use all these pixels in combination with DCRaw in the Super-pixel
            > mode to circumvent the Bayer matrix induced artifacts. The main disadvantage
            > of the super-pixel method is that you end up with an image that is only
            > 1/4th size of the original, so with this sensor you would end up with a
            > 6Mpixel image, but more detailed image.
            > http://deepskystacker.free.fr/english/technical.htm#rawdecod
            >
            > Carel
            >
          • Carel
            Message 5 of 15, Feb 2, 2008
              Fabio Bustamante-2 wrote:
              >
              > Hey Carel,
              >
              > So in theory it would be possible to develop a ~4mp image from a 12.8mp
              > RAW file, is it right? Have you ever tried this? How different in
              > practice would that be from reducing a 12.8mp picture to a 4mp size with
              > a good interpolator?
              >
              > I can hardly imagine a 4mp image that surpasses its 12.8mp version in
              > any way...
              >
              > Is this really used in star photography?
              >
              >

              No, it would not surpass the quality of the original-size image, but it
              might be an interesting way to use this overkill of 24Mpixels for web
              purposes. I only have 5D images, but a test is on the way: I will
              compare the super-pixel method with DCRaw versus ACR. My reasoning was
              along the lines of Bernhard Vogl's complaints about the shortcomings of
              the Bayer array, and his observation that one retains more detail when
              downsizing an image taken with a longer lens to the size of the same
              image taken with a shorter lens. But maybe that is not a good analogy.
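              For readers unfamiliar with the super-pixel idea, here is a
              minimal sketch of 2x2 Bayer binning in the spirit of DCRaw's
              super-pixel mode (an RGGB layout is assumed; DCRaw's actual
              implementation may differ in detail):

```python
import numpy as np

def superpixel(bayer):
    """Collapse an RGGB Bayer mosaic into one RGB pixel per 2x2 cell:
    no demosaicing interpolation, so no Bayer artifacts, at the cost
    of an image with 1/4 the pixel count."""
    r  = bayer[0::2, 0::2]          # red sites
    g1 = bayer[0::2, 1::2]          # first green site
    g2 = bayer[1::2, 0::2]          # second green site
    b  = bayer[1::2, 1::2]          # blue sites
    return np.dstack([r, (g1 + g2) / 2.0, b])

# Toy 4x4 mosaic -> 2x2 RGB image
mosaic = np.arange(16, dtype=float).reshape(4, 4)
rgb = superpixel(mosaic)
print(rgb.shape)  # (2, 2, 3)
```

              This is why a 24-25 MP Bayer sensor yields roughly a 6 MP
              super-pixel image, as in the quoted DeepSkyStacker link.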

              >Is this really used in star photography?

              The inclusion of this method in DeepSkyStacker would indicate so. When I
              asked about this on my recent visit to the Mt Wilson Observatory, it did
              not seem to ring a bell, while the recently discussed method of getting
              a sharper image from a slightly misaligned stack of images ("drizzling")
              did.

              Carel