Assumptions in Remote Sensing
Back in 2012, I wrote the following editorial for SENSED, the magazine of the Remote Sensing and Photogrammetry Society. I found it recently while looking through back issues, and thought it deserved a wider audience, as it is still very relevant. I’ve made a few updates to the text, but it is mostly as published.
In this editorial, I’d like to delve a bit deeper into our subject, and talk about the assumptions that we all make when doing our work.
In a paper written more than twenty years ago, Duggin and Robinove produced a list of assumptions which they thought were implicit in most remote sensing analyses. These were:
1. There is a very high degree of correlation between the surface attributes of interest, the optical properties of the surface, and the data in the image.
2. The radiometric calibration of the sensor is known for each pixel.
3. The atmosphere does not affect the correlation (see 1 above), or the atmospheric correction perfectly corrects for this.
4. The sensor spatial response characteristics are accurately known at the time of image acquisition.
5. The sensor spectral response and calibration characteristics are accurately known at the time of image acquisition.
6. Image acquisition conditions were adequate to provide good radiometric contrast between the features of interest and the background.
7. The scale of the image is appropriate to detect and quantify the features of interest.
8. The correlation (see 1 above) is invariant across the image.
9. The analytical methods used are appropriate and adequate to the task.
10. The imagery is analysed at the appropriate scale.
11. There is a method of verifying the accuracy with which ground attributes have been determined, and this method is uniformly sensitive across the image.
All of these come from the following paper, which discusses each of them in far more detail: Duggin and Robinove, 1990, Assumptions implicit in remote sensing data acquisition and analysis, International Journal of Remote Sensing, 11(10), p. 1669.
I firmly believe that now is a very important time to start examining this list more closely. We are in an era when products are being produced routinely from satellites: end-user products such as land-cover maps, but also products designed to be used by the remote sensing community, such as atmospherically-corrected surface reflectance products. Similarly, GUI-based 'one-click' software is being produced which purports to make very complicated processing, such as atmospheric correction or vegetation canopy modelling, very easy.
My question to you, as scientists and practitioners in the field, is: have you stopped to examine the assumptions underlying the products you use? And even if you're not using products such as those above, have you looked at your own analysis to see whether it really stands up to scrutiny of its assumptions?
I suspect the answer is no – it certainly was for me until recently. There is a great temptation to use satellite-derived products without really looking into how they are produced and the assumptions made along the way (seriously, read the Algorithm Theoretical Basis Document!). Ask yourself: are those assumptions valid for your particular use of the data?
Looking at the list of assumptions above, I can see a number which are very problematic. Number 8 is one that I have struggled with myself: how do I know whether the correlation between the ground data of interest and the image data is uniform across the image? I suspect it isn't, but I'd need a lot of ground data to test it, and even then, what could I do about it? Of course, number 11 causes lots of problems for validation studies too. Numbers 4 and 5 relate primarily to the calibration of the sensors, which is normally managed by the satellite operators themselves. We might not be able to do anything about it, but have we considered it, particularly when using older, and therefore less well-calibrated, data?
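As an aside: if you do have some ground data, one rough way to probe assumption 8 is to compute the ground–image correlation separately within spatial tiles of the image and see how much it varies. The sketch below is purely illustrative (the tile-based approach, function name and synthetic data are my own, not from the Duggin and Robinove paper); it just shows the kind of check I have in mind.

```python
import numpy as np
from scipy import stats

def correlation_by_tile(x, y, ground_vals, image_vals, n_tiles=4):
    """Ground-vs-image correlation computed within spatial tiles.

    x, y        : coordinates of the ground sample points
    ground_vals : ground-measured attribute at each point
    image_vals  : image-derived value extracted at each point
    n_tiles     : number of tiles along each axis (n_tiles**2 tiles in total)
    """
    x_edges = np.linspace(x.min(), x.max(), n_tiles + 1)
    y_edges = np.linspace(y.min(), y.max(), n_tiles + 1)
    results = []
    for i in range(n_tiles):
        for j in range(n_tiles):
            in_tile = ((x >= x_edges[i]) & (x <= x_edges[i + 1]) &
                       (y >= y_edges[j]) & (y <= y_edges[j + 1]))
            if in_tile.sum() < 10:  # too few samples to say anything useful
                continue
            r, p = stats.pearsonr(ground_vals[in_tile], image_vals[in_tile])
            results.append((i, j, int(in_tile.sum()), r, p))
    return results

# Synthetic example, just to show the shape of the output
rng = np.random.default_rng(42)
x = rng.uniform(0, 100, 500)
y = rng.uniform(0, 100, 500)
ground = rng.normal(size=500)
image = 0.8 * ground + rng.normal(scale=0.5, size=500)
for i, j, n, r, p in correlation_by_tile(x, y, ground, image):
    print(f"tile ({i},{j}): n={n:3d}  r={r:+.2f}  p={p:.3f}")
```

Of course, with sparse ground data the per-tile correlations will be far too noisy to interpret, which is exactly the 'I'd need a lot of ground data' problem above.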
As a relatively young member of the field, I may seem to be 'teaching my grandparents to suck eggs', and I'm sure much of this is familiar to many of you. Those of you who have been in the field a while have probably read the paper; more recent entrants may not have done so. Regardless of experience, I think we could all do with thinking these assumptions through a bit more. So go on: have a read of the list above, maybe read the paper, and have a think about your last project. Were your assumptions valid?
I’m interested in doing some more detailed work on the Duggin and Robinove paper, possibly leading to a new paper revisiting their assumptions in the modern era of remote sensing. If you’re interested in collaborating with me on this then please get in touch via robin@rtwilson.com.
Robin,
thank you for this post – I really appreciate the discussion.
Vitor
Thanks very much. As a coauthor of the paper, I appreciate your attention to it and the fact that you value it. Unfortunately, Mike Duggin is no longer with us, but I’d be happy to discuss it with you.
Chuck Robinove