Thanks to everybody (P. Kyriakidis, J. Senegas, D. Myers, W. Thayer) who

answered my question about generating

autocorrelated random fields. You helped me very much in structuring the
world of simulation algorithms.

Unfortunately time is short, and therefore I decided to implement a
swapping algorithm, being aware of its disadvantages. An overview of
alternative methods will only be given in the theory part of my thesis.
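For readers unfamiliar with the approach, a swapping algorithm of the kind described below (Goodchild, 1980) can be sketched in a few lines. This is only a minimal illustration, not the thesis implementation (which is in Java): the point configuration, the inverse-distance-squared weights, and the target Moran's I are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

def morans_i(values, weights):
    """Moran's I for a 1-D array of values and a spatial weight matrix."""
    z = values - values.mean()
    return len(z) * (weights * np.outer(z, z)).sum() / (weights.sum() * (z ** 2).sum())

# Hypothetical irregular point configuration with inverse-distance-squared weights.
n = 60
coords = rng.uniform(0, 100, size=(n, 2))
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
weights = np.where(d > 0, 1.0 / d ** 2, 0.0)

values = rng.normal(0.0, 0.4, size=n)   # uncorrelated errors, RMSE of 0.4 m
target = 0.3                            # desired Moran's I (made up for the example)

initial = current = morans_i(values, weights)
for _ in range(20000):
    if abs(current - target) < 1e-3:
        break
    i, j = rng.choice(n, size=2, replace=False)
    values[i], values[j] = values[j], values[i]       # propose a swap
    proposal = morans_i(values, weights)
    if abs(proposal - target) < abs(current - target):
        current = proposal                            # keep the swap
    else:
        values[i], values[j] = values[j], values[i]   # undo it

print(round(current, 3))
```

Because swaps only rearrange the values, the marginal distribution (and hence the 0.4 m RMSE) is preserved exactly; only the spatial arrangement changes.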

You can find the answers I got from list members attached below.

Yours Marcel

----------

On Fri, 24 May 2002, Marcel Frehner wrote:

> Hi everybody

>

> I'm writing a diploma thesis about error propagation in digital terrain

> models and I want to use Monte Carlo methods to simulate elevation errors
> in the data points and their effect on various GIS operations.

>

> My data are irregularly distributed points (not grid data!) which I

> triangulated using java as programming language. I found lots of

> suggestions in literature how to simulate autocorrelated error fields

> (Heuvelink, Ehlschlaeger, Goodchild, Wechsler,

> Haining/Griffith/Bennet) but as far as I was able to understand them the

> only practicable two (for my task) were:

>

> 1) Generating an uncorrelated random field and swapping until a

> predefined

> level of autocorrelation (Moran's I) is reached. (Goodchild, 1980)

>

> 2) Same as 1) but prior to swapping the random numbers have to pass a

> series of statistical tests like a test for

> multivariate-normality. (Haining, Griffith, Bennet, 1983)

>

There are many more methods for generating autocorrelated fields with
pre-specified covariance models. The most recent surveys of such methods
are: Chiles and Delfiner (1999): Geostatistics: Modeling Spatial
Uncertainty, Goovaerts (1997): Geostatistics for Natural Resources
Evaluation, and Deutsch and Journel (1998): GSLIB. From a GIScience
perspective, there is an even more recent (not yet released) reference:
Zhang and Goodchild (2002): Uncertainty in Geographic Information.

Swapping algorithms fall into a broader family of iterative algorithms,
which are all variants of (or closely related to) Markov chain Monte Carlo
methods. The only difference is that in the case of random fields you are
sampling from a multivariate distribution.

Non-iterative algorithms include, among others, LU decomposition,
convolution (or moving averages), spectral methods, and sequential simulation.

These methods yield simulated fields with pre-specified covariance models,
and each has its pros and cons. Some people look at such realizations
via the covariance parameters, while others look at the weights attached
to neighboring values at each simulation location. Weights and
covariances are functionally linked.
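As a concrete illustration of the non-iterative route, an LU (Cholesky) simulation takes only a few lines. The exponential covariance model, its range, and the point locations below are assumptions made for the example; the sill is tied to Marcel's stated RMSE of 0.4 m (sill = 0.4^2 = 0.16 m^2):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical irregular simulation locations (the method does not need a grid).
n = 50
coords = rng.uniform(0, 500, size=(n, 2))

# Assumed exponential covariance model: C(h) = sill * exp(-3h / range).
sill, corr_range = 0.4 ** 2, 150.0                 # sill = RMSE^2 = 0.16 m^2
h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
cov = sill * np.exp(-3.0 * h / corr_range)

# Factor C = L L^T; then L @ z has covariance C for z ~ N(0, I).
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))    # jitter for numerical stability
realization = L @ rng.standard_normal(n)           # one unconditional error field

print(realization.shape)
```

The O(n^3) factorization is the practical limit of this method: it is exact and simple, but only feasible up to a few thousand simulation locations.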

I would choose an algorithm from the latter family for simulating
unconditional realizations with pre-specified covariances; they are
generally faster. I would choose some kind of iterative technique to
condition to non-linear functions or data. The traditional techniques that
involve kriging (or some other interpolation method) do not handle
non-linearities that well...

>

> Am I right saying that I need measured errors at some points if I want to

> apply interpolation techniques to simulate an autocorrelated error

> field? (Ehlschlaeger (1994) used a formula depending on the spatial

> autocorrelative effect) I'm asking that, because the only thing I have is

> the RMSE which is 0.4 m. I haven't got the points from which this error

> was empirically derived. But I could randomly set a few start points and

> derive all other points from them.

>

Since you do not have error data, you cannot model the variogram of the

error, or its (possible) covariance with the true signal (heteroscedastic

case). Furthermore, you cannot say where errors are larger or smaller.

Simulations with a given covariance model, but no data information, are

termed unconditional (with respect to the data). You can still choose some

covariance models, and then simulate with these models. Note that the
patterns that you see when you overlay such error realizations on
the actual DEM depend on the variance of the error (the sill of the
adopted error variogram, which is linked to the DEM's MSE),

the correlation range and type (e.g., spherical or Gaussian) of the error

variogram, AND the underlying patterns of the DEM. The locus of high and

low DEM+error elevations over a large number of realizations is that of

the original DEM: if you have no error data, on average what you see is

the DEM itself...

> So my first question is:

> What technique shall I apply to simulate an autocorrelated random error

> field?

>

See comments above.

>

> My second set of questions is:

> How can I determine suitable parameters for the error field's

> autocorrelation? What is the minimum distance of spatial independence? How

> should I determine a suitable distance decay exponent if I haven't got any

> sample error points to estimate a variogram from?

>

Look at studies that have both original DEMs and GPS surveys, or
higher-accuracy elevation measurements, and see whether they have modeled
any error variograms/covariances.

Or just experiment with different error variogram ranges, types, and

relative nuggets, and then perform some kind of sensitivity analysis.

>

> I hope my questions are not too stupid and thanks for any help!

> Marcel

>

>

Hope this helps,

Phaedon

---------------------------------------------------------------------------

Phaedon C. Kyriakidis

Assistant Professor

Department of Geography tel: +1 (805) 893-2266

University of California Santa Barbara fax: +1 (805) 893-3146

Ellison Hall 5710 e-mail: phaedon@...

Santa Barbara, CA 93106-4060 URL: www.geog.ucsb.edu/~phaedon

---------------------------------------------------------------------------

>

> --

> * To post a message to the list, send it to ai-geostats@...

> * As a general service to the users, please remember to post a summary of any useful responses to your questions.

> * To unsubscribe, send an email to majordomo@... with no subject and "unsubscribe ai-geostats" followed by "end" on the next line in the message body. DO NOT SEND Subscribe/Unsubscribe requests to the list

> * Support to the list is provided at http://www.ai-geostats.org

>

Hi Marcel,

Just for information, there are at least three papers by Fisher that
may be of interest to you:

- Fisher: First experiments in viewshed uncertainty: the accuracy of the

viewshed area. Photogrammetric Engineering and Remote Sensing,

57(10): 1321-1327, 1991

- Lee, Snyder and Fisher: Modeling the effect of data errors on feature

extraction from digital elevation models. Photogrammetric Engineering and
Remote Sensing, 58(10): 1461-1467, 1992.

- Fisher: Improved modeling of elevation errors with geostatistics.

Geoinformatica, 2(3): 215-233, 1998.

Unfortunately, the algorithm used by Fisher for generating a correlated
random field is not very relevant, in my opinion, and neither are the ones
you mentioned. The best thing to do is to use standard geostatistical

simulation algorithms, such as turning bands, LU decomposition (for small

grids) or Fast Fourier Transforms (for regular grids). This requires the

modeling of the spatial structure (-> covariance model) and the choice of
the distribution (e.g., Gaussian). You can find a nice description of these

algorithms in the book by Chiles and Delfiner, Geostatistics: Modeling

Spatial Uncertainty.
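For the FFT option mentioned above, one common realization is circulant embedding. The following 1-D sketch is only an illustration of the idea, not any specific published code; the grid size, sill, and range are invented, and an exponential covariance is assumed:

```python
import numpy as np

rng = np.random.default_rng(1)

n, spacing = 256, 1.0                      # hypothetical regular 1-D grid
sill, corr_range = 1.0, 20.0               # assumed exponential covariance model

# Covariance at lags 0 .. n-1, embedded in a symmetric circulant vector.
lags = np.arange(n) * spacing
c = sill * np.exp(-3.0 * lags / corr_range)
circ = np.concatenate([c, c[-2:0:-1]])     # circulant of length M = 2(n - 1)
M = len(circ)

lam = np.fft.fft(circ).real                # eigenvalues of the circulant matrix
lam = np.clip(lam, 0.0, None)              # guard against tiny negative round-off

# One FFT turns complex white noise into a field with the target covariance
# (the real and imaginary parts are two independent realizations).
eps = rng.standard_normal(M) + 1j * rng.standard_normal(M)
w = np.fft.fft(np.sqrt(lam / M) * eps)
field = w.real[:n]

print(field.shape)
```

The cost is O(M log M) per pair of realizations, which is why this route scales to large regular grids where LU decomposition does not.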

Good luck!

julien

--

*******************

Julien Senegas

senegas@...

http://cg.ensmp.fr/~senegas

*******************

A couple of relevant references:

1. GSLIB (2nd edition), C.V. Deutsch and A.G. Journel, Oxford

University Press

This pertains to codes for geostatistics, including simulation;
the code is available in FORTRAN
(CD-ROM or diskette included with the volume; it can also be downloaded
from the web, see the listing on the AI-GEOSTATS site)

2. GEOSTATISTICAL SIMULATIONS, M. Armstrong and P.A. Dowd (editors),

Kluwer Academic Press

This is a collection of papers, including discussions pertaining to
simulation.

Donald E. Myers

http://www.u.arizona.edu/~donaldm


Marcel,

I used gstat's SGS algorithm to generate an autocorrelated field. I chose

an exponential model and varied the range parameter to obtain different

levels of autocorrelation. I then calculated Moran's I for each of the

fields. By a little trial and error I have 'populations' with known

autocorrelation structure from which I can now collect samples. There may

be more direct ways to generate fields with known autocorrelation structure

(e.g., LU decomposition, p-field, turning bands; Deutsch and Journel,
1998, GSLIB User's Guide), but they are usually limited to generating small

fields. gstat is freeware that allows you to simulate values at specified

locations (i.e., the locations do not have to form a grid) and can be

downloaded from the ai-geostats web page.
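The trial-and-error calibration described above (simulate with a candidate range, then measure Moran's I) can be sketched as follows. This sketch substitutes a Cholesky-based simulator for gstat's SGS, and the locations, weight scheme, and candidate ranges are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical irregular sample locations with inverse-distance-squared weights.
n = 80
coords = rng.uniform(0, 300, size=(n, 2))
h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
w = np.where(h > 0, 1.0 / h ** 2, 0.0)

def morans_i(z, w):
    z = z - z.mean()
    return len(z) * (w * np.outer(z, z)).sum() / (w.sum() * (z ** 2).sum())

results = {}
for corr_range in (10.0, 50.0, 150.0):               # candidate exponential ranges
    cov = np.exp(-3.0 * h / corr_range)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
    sims = L @ rng.standard_normal((n, 200))          # 200 realizations per range
    results[corr_range] = np.mean([morans_i(s, w) for s in sims.T])

for r, i_bar in results.items():
    print(r, round(i_bar, 3))
```

Averaging Moran's I over many realizations per candidate range gives a stable range-to-I mapping, from which the range matching a desired level of autocorrelation can be read off.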

Best regards,

Bill


**************************************************

William C. Thayer, P.E.

Environmental Science Center

Syracuse Research Corporation

301 Plainfield Road, Suite 350

Syracuse, NY 13212

phone: (315) 452-8424

fax: (315) 452-8440

email: thayer@...

web: http://esc.syrres.com/

http://esc.syrres.com/geosem/

**************************************************
