
Re: GEOSTATS: Assessing the success of simulation

  • Pierre Goovaerts
    Message 1 of 4, Jan 24, 2000
      Hi Tom,

      You can look at the paper by C. Deutsch
      Deutsch, C.V. 1997. Direct assessment of local accuracy and precision.
      In: Baafi, E.Y., Schofield, N.A. (Eds.), Geostatistics Wollongong '96.
      Kluwer Academic Publishers, Dordrecht, pp. 115--125.

      The corresponding Gslib program, called accplt, can
      be downloaded from:
      http://www.ualberta.ca/~cdeutsch/

      Note that if the objective is simply to estimate a probability
      of contamination on the same support as the data, I don't
      see the benefit of using stochastic simulation over a more
      straightforward (indicator or multiGaussian) kriging approach.
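
      To make the check in Deutsch's paper concrete: from the L realizations
      at each held-out location you build a local distribution, then verify,
      for each probability p, that the symmetric p-probability interval
      contains the true value at roughly a proportion p of the locations.
      The accplt program works from the simulated distributions in this
      spirit; below is only a rough NumPy sketch of the same idea, with an
      illustrative helper name, not the program itself:

          import numpy as np

          def accuracy_points(realizations, true_values, p_values):
              """Fraction of held-out true values falling inside the
              symmetric p-probability interval of the local distribution.
              realizations: (n_locations, n_realizations) array
              true_values:  (n_locations,) array of held-out data
              """
              fractions = []
              for p in p_values:
                  lo = np.quantile(realizations, (1 - p) / 2, axis=1)
                  hi = np.quantile(realizations, (1 + p) / 2, axis=1)
                  fractions.append(((true_values >= lo) &
                                    (true_values <= hi)).mean())
              return np.array(fractions)

          # Plotting the returned fractions against p gives the accuracy
          # plot: points on the 45-degree line indicate a model that is
          # both accurate and precise in Deutsch's sense.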

      Regards,

      Pierre

      --
      Pierre Goovaerts
      Assistant Professor
      Dept of Civil & Environmental Engineering
      The University of Michigan
      EWRE Building, Room 117
      Ann Arbor, Michigan, 48109-2125, U.S.A.
      E-mail: goovaert@...
      Phone: (734) 936-0141
      Fax: (734) 763-2275
      http://www-personal.engin.umich.edu/~goovaert/


      On Mon, 24 Jan 2000, Tom Charnock wrote:

      > Hi
      >
      > I've been looking at simulation to generate probability maps of
      > contamination exceeding a given level. I am looking at how we might apply
      > the technique in the initial stages of an accident, consequently, though I
      > have large datasets to try the techniques on, I only use a small subset when
      > generating the realisations. My question is whether there is any standard
      > method for assessing the validity of the probability map using the remaining
      > data. I.e. I'm looking for some kind of jackknifing procedure but for
      > simulation rather than estimation.
      >
      > cheers
      >
      > Tom
      >
      > PS can anyone point me at any literature about introducing known trends into
      > simulation codes.
      >

    • Tom Charnock
      Message 2 of 4, Jan 26, 2000
        Hi

        Thanks to Pierre for the advice about indicator and multiGaussian
        kriging. I've looked at indicators. However, the problem I have with
        all the techniques I've tried is that we will have to apply them to
        very scarce data. The consequence is that it is very hard to generate
        a reasonable variogram on the basis of the data alone. This means I
        have to use some feeling for the nature of the process, and my
        experience with large datasets, to define (OK, fiddle) a
        semivariogram model which I think is about right. I'm not
        particularly happy about doing this for the untransformed data, and
        I'm less happy doing it for indicators. Comments, anyone?
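
        (A quick way to see why the experimental variogram degrades so fast
        with scarce data is to count the pairs available per lag bin. The
        sketch below uses made-up coordinates and values, 30 points over a
        100 x 100 area, purely for illustration:)

            import numpy as np

            rng = np.random.default_rng(0)              # made-up example data
            coords = rng.uniform(0, 100, size=(30, 2))  # 30 scattered samples
            values = rng.lognormal(sigma=1.0, size=30)

            # Experimental semivariogram: half the mean squared difference
            # of the values, binned by separation distance.
            dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :],
                                  axis=2)
            sq = 0.5 * (values[:, None] - values[None, :]) ** 2
            iu = np.triu_indices(len(values), k=1)      # each pair once
            for lo in range(0, 50, 10):
                m = (dist[iu] >= lo) & (dist[iu] < lo + 10)
                gamma = sq[iu][m].mean() if m.any() else float("nan")
                print(f"lag {lo:2d}-{lo + 10:2d}: {m.sum():3d} pairs, "
                      f"gamma = {gamma:.3f}")

        # With so few pairs per lag, the estimated gamma values bounce
        # around heavily from one random dataset to the next.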

        MultiGaussian kriging I have not tried; can anyone point me at any
        software, etc.? Would disjunctive kriging also do the same things
        for me?
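
        (For reference, a minimal sketch of the multiGaussian recipe Pierre
        mentions, assuming a separate kriging routine supplies the
        simple-kriging estimate and variance of the normal scores at the
        target location; the function names here are illustrative only:)

            import numpy as np
            from scipy.stats import norm

            def nscore(z):
                """Normal-score transform of the data (ties ignored)."""
                ranks = np.argsort(np.argsort(z))       # ranks 0..n-1
                return norm.ppf((ranks + 0.5) / len(z))

            def prob_exceed(y_sk, var_sk, y_threshold):
                """P(Z > z_c) under the multiGaussian model, where
                y_threshold is the normal-score transform of the threshold
                z_c, and (y_sk, var_sk) are the simple-kriging estimate and
                variance of the normal scores at the target location."""
                return 1.0 - norm.cdf((y_threshold - y_sk) / np.sqrt(var_sk))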


        Tom

        PS thanks also to Phaedon who responded directly


      • srahman@lgc.com
        Message 3 of 4, Jan 26, 2000
          Scarce datasets are more the norm than the exception in most
          disciplines. Indicators take into account the connectivity of
          extreme values, which might be applicable to mapping probabilities
          of exceedance of a contaminant. Nevertheless, indicator variograms,
          except at the median where you find more "pairs", tend to have poor
          definition at the extreme ends. Not knowing much about the
          processes behind a particular environmental disaster, except that
          prevailing weather and other conditions might have an impact on the
          distribution of contaminants, I would surmise that it will be
          difficult to find an "analogue" of the kind available in the
          geosciences, where, e.g., a channel sand, whether encountered in
          Tashkent or Muskogee, Oklahoma, is the end result of a similar kind
          of depositional process.
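
          (A small sketch, with made-up values, of why the definition decays
          toward the tails: the indicator variance p(1 - p), which sets the
          sill, and the number of informative mixed (0, 1) pairs both shrink
          as the threshold moves away from the median:)

              import numpy as np

              rng = np.random.default_rng(1)
              values = rng.lognormal(size=200)    # made-up contaminant data

              n = len(values)
              for q in (0.50, 0.90, 0.95):
                  zc = np.quantile(values, q)
                  ind = values > zc               # indicator at threshold zc
                  p = ind.mean()
                  mixed = int(ind.sum() * (n - ind.sum()))
                  print(f"quantile {q:.2f}: sill ~ p(1-p) = "
                        f"{p * (1 - p):.4f}, {mixed} mixed pairs "
                        f"of {n * (n - 1) // 2}")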

          I know of some sites where the problem of uncertainty, at least
          first-order (higher-scale) uncertainty, is tackled in a more
          "deterministic" manner. The approach comes under various guises;
          names such as "scenario modeling", "deterministic modeling", etc.,
          are frequently used. The idea is to subjectively come up with a
          ranked list of higher-level uncertainties that might presumably
          have a bigger impact on the eventual mapping result, before zooming
          in on the details of any particular "scenario" or model. One can
          additionally apply a probabilistic technique to this first phase of
          scenario modeling, e.g. Boolean simulation, but frequently, owing
          to the sheer number of variables of first-order uncertainty,
          scenarios are arrived at deterministically. To take an oil and gas
          example, the following process would be applied:

          OBJECTIVE: Arrive at an oil recovery factor for a carbonate pool

          FIRST-ORDER UNCERTAINTIES:

          * Fault sealing capacity
            - Completely sealed
            - Partly sealed
            - Fully transmissible
          * Fracture distribution
            - Dense
            - Medium
            - Low density
          * Geological model
            - Pinnacle
            - Platform
          and so forth. In such cases, lower-order permeability distributions
          might be a secondary consideration. Presumably in your case, "wind
          direction" might be a factor, coupled with others such as "rainfall
          distribution" or "amount of contaminant leaked to the environment".
          Each of these scenarios will produce a greater variance in the
          results than, say, "wind speed" or "amount of rainfall". It matters
          less how much rain fell than how such rain is actually distributed,
          e.g. is it a residential area? A national park? A desert?
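
          (Even when the scenarios are arrived at deterministically, the list
          can be enumerated and ranked mechanically. A sketch following the
          carbonate-pool example above, with purely illustrative
          recovery-factor numbers:)

              from itertools import product

              # Illustrative numbers only: base recovery factor per fault
              # case, multipliers for the other first-order uncertainties.
              fault_seal = {"completely sealed": 0.15, "partly sealed": 0.22,
                            "fully transmissible": 0.30}
              fractures = {"dense": 1.15, "medium": 1.00, "low density": 0.85}
              geology = {"pinnacle": 1.10, "platform": 0.95}

              # One recovery factor per combination of first-order outcomes.
              scenarios = sorted(
                  (base * m1 * m2, fs, fr, g)
                  for (fs, base), (fr, m1), (g, m2)
                  in product(fault_seal.items(), fractures.items(),
                             geology.items())
              )
              print(f"{len(scenarios)} scenarios, recovery factor from "
                    f"{scenarios[0][0]:.2f} to {scenarios[-1][0]:.2f}")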

          Syed
          Landmark Graphics