This is really just a quick note for me in the future, and for anyone else who might find this useful.
I have been involved in doing some administration of a Linux server recently. I haven't had full control over the server, though: administrators from the company that owns the server have been doing the 'low-level' administration, and we need their permission to do various administration tasks.
Anyway, recently they installed a new hard drive, as we needed more space on the server. Prior to the installation we had one disk mounted on /, and a lot of data stored in /data. When they mounted the new hard disk, they mounted it on /data. We then logged on to the server and found that there was nothing in /data… all of our data had vanished!
Now, although we thought we knew what the cause was, we wanted to tread carefully so we didn't accidentally do anything that would actually lose us any data. So, we did a bit of investigation.
The output of df -h showed that the data that had gone 'missing' was still on the disk: we only had about 50GB free (hence the new hard disk being installed). However, running du -h --max-depth=1 / showed no space taken up by /data, and the total given at the bottom of the du output didn't match the total disk usage according to df.
This all confirmed our suspicion that the data was there, but that it was hidden by the new hard disk being mounted ‘over’ it. We simply ran umount /data, and all of our data appeared again, and df and du now agreed.
So, we resolved this in the long term by:
Mounting the new disk as /newdata
Copying everything from /data to /newdata
Deleting everything inside /data (but not the folder itself)
Remounting /newdata as /data
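For future reference, the steps above can be sketched as a shell function. The device name /dev/sdb1 is an assumption (check yours with lsblk or fdisk -l), this needs root, and you should obviously not run anything like it without a backup:

```shell
# Sketch of the recovery steps above; /dev/sdb1 is a placeholder device name
migrate_data() {
    mount /dev/sdb1 /newdata     # mount the new disk somewhere temporary
    cp -a /data/. /newdata/      # copy everything across, preserving ownership and permissions
    rm -rf /data/*               # delete the contents of /data (but not the folder itself)
    umount /newdata              # unmount the new disk from its temporary home
    mount /dev/sdb1 /data        # remount it over the now-empty /data
}
```

Using rsync -a instead of cp -a would make the copy restartable if it gets interrupted part-way through.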
So, overall it was quite simple, but it was one of those occasions in which we really needed to stop and think, just so we didn’t do anything stupid and lose the valuable data that was on the server.
Summary: Fascinating book covering the whole breadth of high performance Python. It starts with detailed discussion of various profiling methods, continues with chapters on performance in standard Python, then focuses on high performance using arrays, compiling to C and various approaches to parallel programming. I learnt a lot from the book, and have already started improving the performance of the code I wrote for my PhD (rather than writing up my thesis, but oh well…).
Reference: Gorelick, M. and Ozsvald, I., 2014, High Performance Python, O’Reilly, 351pp, Publishers Link
I would consider myself to be a relatively good Python programmer, but I know that I don’t always write my code in a way that would allow it to run fast. This, as Gorelick and Ozsvald point out a number of times in the book, is actually a good thing: it’s far better to focus on programmer time than CPU time – at least in the early stages of a project. This has definitely been the case for the largest programming project that I’ve worked on recently: my PhD algorithm. It’s been difficult enough to get the algorithm to work properly as it is – and any focus on speed improvements during my PhD would definitely have been a premature optimization!
However, I’ve now almost finished my PhD, and one of the improvements listed in the ‘Further Work’ section at the end of my thesis is to improve the computational efficiency of my algorithm. I specifically requested a review copy of this book from O’Reilly as I hoped it would help me to do this: and it did!
I have a background in C and have taken a 'High Performance Computing' class at my university, so I already knew some of the theory, but was keen to see how it applied to Python. I must admit that when I started the book I was disappointed that it didn't jump straight into high-performance programming with numpy and parallel programming libraries – but I soon changed my mind when I learnt about the range of profiling tools (Chapter 2), and the significant performance improvements that can be made in pure Python code (Chapters 3-5). In fact, when I finished the book and started applying it to my PhD algorithm I was surprised just how much optimization could be done on my pure Python code, even though the algorithm is a heavy user of numpy.
When we got to numpy (Chapter 6) I realised there were a lot of things that I didn’t know – particularly the inefficiency of how numpy allocates memory for storing the results of computations. The whole book is very ‘data-driven’: they show you all of the code, and then the results for each version of the code. This chapter was a particularly good example of this, using the Linux perf tool to show how different Python code led to significantly different behaviour at a very low level. As a quick test I implemented numexpr for one of my more complicated numpy expressions and found that it halved the time taken for that function: impressive!
I found the methods for compiling to C (discussed in Chapter 7) to be a lot easier than expected, and I even managed to set up Cython on my Windows machine to play around with it (admittedly by around 1am…but still!). Chapter 8 focused on concurrency, mostly in terms of asynchronous events. This wasn’t particularly relevant to my scientific work, but I can see how it would be very useful for some of the other scripts I’ve written in the past: downloading things from the internet, processing data in a webapp etc.
Chapter 9 was definitely useful from the point of view of my research, and I found the discussion of a wide range of solutions for parallel programming (threads, processes, and then the various methods for sharing flags) very useful. I felt that Chapter 10 was a little limited, and focused more on the production side of a cluster (repeatedly emphasising how you need good system admin support) than how to actually program effectively for a cluster. A larger part of this section devoted to the IPython parallel functionality would have been nice here. Chapter 11 was interesting but also less applicable to me – although I was surprised that nothing was mentioned about using integers rather than floats in large amounts of data where possible (in satellite imaging values are often multiplied by 10,000 or 100,000 to make them integers rather than floats and therefore smaller to store and quicker to process). I found the second example in Chapter 12 (by Radim Rehurek) by far the most useful, and wished that the other examples were a little more practical rather than discussing the production and programming process.
Although I have made a few criticisms above, overall the book was very interesting, very useful and also fun to read (the latter is very important for a subject that could be relatively dry). There were a few niggles: some parts of the writing could have done with a bit more proof-reading, some things were repeated a bit too much both within and between chapters, and I really didn’t like the style of the graphs (that is me being really picky – although I’d still prefer those style graphs over no graphs at all!). If these few niggles were fixed in the 2nd edition then I’d have almost nothing to moan about! In fact, I really hope there is a second edition, as one of the great things about this area of Python is how quickly new tools are developed – this is wonderful, but it does mean that books can become out of date relatively quickly. I’d be fascinated to have an update in a couple of years, by which time I imagine many of the projects mentioned in the book will have moved on significantly.
Overall, I would strongly recommend this book for any Python programmer looking to improve the performance of their code. You will get a lot out of it whether you write in pure Python or use numpy a lot, whether you are an expert in C or a novice, and whether you have a single machine or a large cluster.
I’ve just had my second journal paper published, and so I thought I’d start a series on my blog where I explain some of the background behind my publications, explain the implications/applications that my work has, and also provide a brief layman’s summary for non-experts who may be interested in my work. Hopefully this will a long-running series, with at least one post for each of my published papers – if I forget to do this in the future then please remind me!
So, this first post is about:
Wilson, R. T., Milton, E. J., & Nield, J. M. (2014). Spatial variability of the atmosphere over southern England, and its effect on scene-based atmospheric corrections. International Journal of Remote Sensing, 35(13), 5198-5218.
Satellite images are affected by the atmospheric conditions at the time the image was taken. These atmospheric effects need to be removed from satellite images through a process known as ‘atmospheric correction’. Many atmospheric correction methods assume that the atmospheric conditions are the same across the image, and thus correct the whole image in the same way. This paper investigates how much atmospheric conditions do actually vary across southern England, and tries to understand the effects of ignoring this and performing one of these uniform (or ‘scene-based’) atmospheric corrections. The results show that the key parameter is the Aerosol Optical Thickness (AOT) – a measure of the haziness of the atmosphere caused by particles floating in the air – and that it varies a lot over relatively small distances, even under clear skies. Ignoring the variation in this can lead to significant errors in the resulting satellite image data, which can then be carried through to produce errors in other products produced from the satellite images (such as maps of plant health, land cover and so on). The paper ends with a recommendation that, where possible, spatially-variable atmospheric correction should always be used, and that research effort should be devoted to developing new methods to produce high-resolution AOT datasets, which can then be used to perform these corrections.
Key conclusions
I always like my papers to answer questions in a way that actually affects what people do, and in this case there are a few key ‘actionable’ conclusions:
Wherever possible, use a spatially-variable (per-pixel) atmospheric correction – particularly if your image covers a large area.
Effort should be put into developing methods to retrieve high-resolution AOT from satellite images, as this data is needed to allow per-pixel corrections to be carried out.
Relatively low errors in AOT can cause significant errors in atmospheric correction, and thus errors in resulting products such as NDVI. These errors may result from carrying out a uniform atmospheric correction when the atmosphere was spatially-variable, but they could just be due to errors in the AOT measurements themselves. Many people still seem to think that the NDVI isn’t affected by the atmosphere, but that is wrong: you must perform atmospheric correction before calculating NDVI, and errors in atmospheric correction can cause significant errors in NDVI.
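The NDVI point is easy to see numerically: the index is a ratio of (corrected) red and near-infrared reflectances, so errors left by a poor atmospheric correction feed straight into it. A minimal sketch, with invented reflectance values:

```python
# NDVI from red and near-infrared reflectance; the input values here are
# invented, purely to illustrate how correction errors propagate into the index
def ndvi(red, nir):
    return (nir - red) / (nir + red)

correct = ndvi(0.05, 0.40)   # reflectances after an accurate atmospheric correction
biased = ndvi(0.06, 0.39)    # the same pixel with small residual correction errors
print(correct, biased)
```

Even these small reflectance errors shift the NDVI noticeably, which is exactly why the correction step matters.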
Key results
The range of AOT over southern England was around 0.1-0.5 on both days
The range of PWC over southern England was around 1.5-3.0cm and 2.0-3.5cm on the 16th and 17th June respectively
An AOT error of +/- 0.1 can cause a 3% error in the NDVI value
History & Comments
When I started my PhD I tried to find a paper like this one – and I couldn’t find one. I could find all sorts of comments in the literature – and in informal conversations with academics – that said that per-pixel atmospheric corrections were far better than scene-based corrections, but no-one seemed to have actually investigated the errors involved. So, I decided to investigate this myself as a sort of ‘Pilot Study’ for my PhD. This paper is basically a re-working of this Pilot Study.
Once I got started on this work I realised why no-one had done it! The first thing I needed to do was to use data on Aerosol Optical Thickness (AOT) and Precipitable Water Vapour (PWC) to find out how much spatial variation there is in these parameters. Unfortunately, the data is generally very low resolution, and so it is difficult to get a fair sense of how these parameters vary. In fact, almost half of the paper is taken up with describing the datasets that I’ve used, doing some validation on them, and then explaining how I used these datasets to estimate the range of values found over southern England during the dates in question. The datasets didn’t always agree particularly well, but we managed to establish approximate ranges of the values over the days in question.
Both of the days in question were relatively clear summer days, and I was surprised about the range of AOT and PWC values that we found. They were definitely nothing like uniform!
Once we’d established the range of AOT and PWC values, we performed simulations to establish the difference between a uniform atmospheric correction and a spatially-variable atmospheric correction. These simulations were carried out using Py6S: my Python interface to the 6S radiative transfer model. This made it very easy to perform multiple simulations at a range of wavelengths and with varying AOT and PWC values, and then process the data to produce useful results.
When performing a uniform atmospheric correction, a single AOT (or PWC) value is used across the whole image. We took this value to be the mean of the AOT (or PWC) values measured across the area, and then examined the errors that would result from correcting a pixel with this mean AOT when it actually had a higher or lower AOT. We performed simulations taking this higher AOT to be the 95th percentile of the AOT distribution, and the lower AOT to be the 5th percentile of this distribution. This meant that the errors found from the simulations would be found in at least 10% of the pixels in an image covering the study area.
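The percentile-based approach above can be sketched in a few lines. The AOT values below are invented for illustration (the study used measured data), and this only shows the statistics side, not the 6S simulations themselves:

```python
# Sketch of the percentile-based sensitivity analysis described above
from statistics import mean, quantiles

aot_values = [0.12, 0.18, 0.22, 0.25, 0.28, 0.31, 0.35, 0.40, 0.44, 0.49]

cuts = quantiles(aot_values, n=20)      # 19 cut points: the 5th, 10th, ..., 95th percentiles
aot_mean = mean(aot_values)             # the single value a uniform correction would assume
aot_low, aot_high = cuts[0], cuts[-1]   # the 5th and 95th percentiles

# Simulations correcting with aot_mean, when the pixel's true AOT is aot_low or
# aot_high, give errors that at least ~10% of pixels in the image would exceed
print(aot_mean, aot_low, aot_high)
```

statistics.quantiles needs Python 3.8+; numpy.percentile would do the same job on array data.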
Data, Code & Methods
Unfortunately I did the practical work for this paper before I had really taken on board the idea of ‘reproducible research’, so the paper isn’t easy to reproduce automatically. However, I do have the (rather untidy) code that was used to produce the results of the paper – please contact me if you would like a copy of this for any reason. The data are available from the following links – some of it is freely available, some only for registered academics:
MODIS: Two MODIS products were used, the MOD04 10km aerosol product and the MOD05 1km water vapour product. These were both acquired for tile h17v03 on the 16th and 17th June 2006, and are available to download through LADSWEB.
AERONET: Measurements from the Chilbolton site were used – available here.
The last few months have seen a flurry of activity in Py6S – probably caused by procrastinating from working on my PhD thesis! Anyway, I thought it was about time that I summarised the various updates and new features which have been released, and gave a few more details on how to use them.
These have all been released since January 2014, and so if you’re using version 1.3 or earlier then it’s definitely time to upgrade! The easiest way to upgrade is to simply run
pip install -U Py6S
in a terminal, which should download the latest version and get it all set up properly. So, on with the new features.
A wide range of bugfixes
I try to fix any actual bugs that are found within Py6S as soon as they are reported to me. The bugs fixed since v1.3 include:
More accurate specification of geometries (all angles were originally specified as integers; now they are specified as floating-point values)
Fixed errors when setting custom altitudes in certain situations – for example, when altitudes have been set and then re-set
Fixes for ambiguity in dates when importing AERONET data – previously if you specified a date such as 01/05/2014, which could be interpreted either day-first (1st May) or month-first (5th January), then it assumed month-first, which was the opposite of what the documentation specified. This now assumes day-first, consistent with the documentation
Error handling has been improved for situations when 6S itself crashes with an error – rather than Py6S crashing it now states that 6S itself has encountered an error
Added the extraction of two outputs from the 6S output file that weren’t extracted previously: the integrated filter function and the integrated solar spectrum
Parallel processing support
Now, when you use functions that run 6S for multiple wavelengths or multiple angles (such as the run_landsat_etm or run_vnir functions) they will automatically run in parallel. From the user's point of view everything should work in exactly the same way – it'll just be faster! How much faster depends on your computer: a dual-core processor should be almost (but not quite) twice as fast, a quad-core will probably be around three times faster, and an eight-core machine more like five times as fast. If you want to experiment, there is an extra parameter that you can pass to any of these functions to specify how many 6S runs to perform in parallel – just run something like:
run_landsat_etm(s, 'apparent_radiance', n=3)
to run three 6S simulations in parallel.
I’ve tested the parallel processing functionality extensively, and I’m very confident that it produces exactly the same answers as the non-parallel version. However, if you do run into any problems then please let me know immediately, and I’ll do whatever fixes are necessary.
Python 3 compatibility
Py6S is now fully compatible with Python 3. This has involved a number of changes to the Py6S source code, as well as doing some alterations to some of the dependencies so that they all work on Python 3 too. I don’t use Python 3 much myself, but all of the automated tests for Py6S now run on both Python 2.7 and Python 3.3 – so that should pick up any problems. However, if you do run into any issues, then please contact me.
Added wavelengths for two more sensors
Spectral response functions for Landsat 8 OLI and RapidEye are now included in the PredefinedWavelengths class, making it easy to simulate using these bands with code as simple as:
s.wavelength = Wavelength(PredefinedWavelengths.LANDSAT_OLI_B1)
I’m happy to add the spectral response functions for other sensors – please email me if you’d like another sensor, and provide a link to the spectral response functions, and I’ll do the rest.
The future…
I’ve got lots of plans for the future of Py6S. Currently I’m finishing off my PhD, which is having to take priority over Py6S, but as soon as I’ve finished I should be able to release a number of new features.
Currently I'm thinking about ways to incorporate the building of lookup tables into Py6S – this should make running multiple simulations far quicker, and is essential for using Py6S to perform atmospheric corrections on images. I'm also considering a possible restructuring of the Py6S interface (or possibly a separate 'modern' Pythonic interface) for version 2.0 or 3.0. I'm also planning to apply to the Software Sustainability Institute Open Call next year, with the aim of developing the software, and the community, further.
This post is more a note to myself than anything else – but it might prove useful for someone sometime.
In the dim and distant mists of time, I set up a startup file for ENVI which automatically loaded a specific image every time you opened ENVI. I have no idea why I did that – but it seemed like a good idea at the time. When tidying up my hard drive, I removed that particular file – and ever since then I’ve got a message each time I load ENVI telling me that it couldn’t find the file.
I looked in the ENVI preferences window, and there was nothing listed in the Startup File box (see below) – but somehow a file was still being loaded at startup. Strange.
I couldn’t find anything in the documentation about where else a startup file could be configured, and I searched all of the configuration files in the ENVI program folder just in case there was some sort of command in one of them – and I couldn’t find it anywhere.
Anyway, to cut a long story short, it seems that ENVI will automatically run a startup file called envi.ini located in your home directory (C:\Users\username on Windows, /home/username on Linux/OS X). This file existed on my machine, and contained the contents below – deleting it stopped ENVI trying to open this non-existent file.
; envi startup script
open file = C:\Data\_Datastore\SPOT\SPOT_ROI.bsq
As part of my PhD I’ve developed a number of algorithms which are implemented as a class in Python code. An example would be something like this:
class Algorithm:
    def __init__(self, input_filename, output_basename, thresh, n_iter=10):
        self.input_filename = input_filename
        self.output_basename = output_basename
        self.thresh = thresh
        self.n_iter = n_iter

    def run(self):
        self.preprocess()
        self.do_iterations()
        self.postprocess()

    def preprocess(self):
        # Do something, using the self.xxx parameters
        pass

    def do_iterations(self):
        # Do something, using the self.xxx parameters
        pass

    def postprocess(self):
        # Do something, using the self.xxx parameters
        pass
The way you’d use this algorithm normally would be to instantiate the class with the required parameters, and then call the run method:
alg = Algorithm("test.txt", "output", 0.67, 20)
alg.run()
That's fine for interactive use from a Python console, or for writing scripts to automatically vary parameters (eg. trying all thresholds from 0.1 to 1.0 in steps of 0.1), but sometimes it'd be nice to be able to run the algorithm from a file containing the right parameters. This'd be particularly useful for users who aren't so experienced with Python, but it can also help with reproducibility: having a parameter file stored in the same folder as your outputs allows you to easily rerun the processing.
For a while I've been trying to work out how to support parameter files alongside the standard way of calling the class (as in the example above), without lots of repetition of code – and I think I've found a way to do it that works fairly well. I've added an extra function to the class which writes out a parameter file:
def write_params(self):
    with open(self.output_basename + "_params.txt", 'w') as f:
        for key, value in self.__dict__.iteritems():
            if key not in ['m', 'c', 'filenames']:
                if type(value) == int:
                    valuestr = "%d" % value
                elif type(value) == float:
                    valuestr = "%.2f" % value
                else:
                    valuestr = "%s" % repr(value)
                f.write("%s = %s\n" % (key, valuestr))
This function is generic enough to be used with almost any class: it simply writes out the contents of all variables stored in the class. The only bit that’ll need modifying is the bit that excludes certain variables (in this case filenames, m and c, which are not parameters but internal attributes used in the class – in an updated version of this I’ll change these parameters to start with an _, and then they’ll be really easy to filter out).
The key thing is that – through the use of the repr() function – the parameter file is valid Python code, and if you run it then it will just set a load of variables corresponding to the parameters. In fact, the code to write out the parameters could be even simpler – just using repr() for every parameter, but to make the parameter file a bit nicer to look at, I decided to print out floats and ints separately with sensible formatting (two decimal places is the right accuracy for the parameters in the particular algorithm I was using – yours may differ). One of the other benefits of using configuration files that are valid Python code is that you can use any Python you want in there – string interpolation or even loops – plus you can put in comments. The disadvantage is that it’s not a particularly secure way of dealing with parameter files, but for scientific algorithms this isn’t normally a major problem.
The result of writing the parameter file as valid Python code is that it is very simple to read it in:
params = {}
execfile(filename, params)
This creates an empty dictionary, then executes the file and places all of the variables into the dictionary, giving us exactly what we want: a dictionary of all of our parameters. Because they're written out from the class instance itself, any issues with default values will already have been dealt with, and the values written out will be the exact values used. Now we've got this dictionary, we can simply use ** to expand it into arguments for the init function, giving us a classmethod that reads a parameter file and creates the object for us.
So, if we put all of this together we get code which automatically writes out a parameter file when a class is instantiated, and a class method that can instantiate the class from a parameter file. Here's the final code:
class Algorithm:
    def __init__(self, input_filename, output_basename, thresh, n_iter=10):
        self.input_filename = input_filename
        self.output_basename = output_basename
        self.thresh = thresh
        self.n_iter = n_iter
        self.write_params()

    def write_params(self):
        with open(self.output_basename + "_params.txt", 'w') as f:
            for key, value in self.__dict__.iteritems():
                if key not in ['m', 'c', 'filenames']:
                    if type(value) == int:
                        valuestr = "%d" % value
                    elif type(value) == float:
                        valuestr = "%.2f" % value
                    else:
                        valuestr = "%s" % repr(value)
                    f.write("%s = %s\n" % (key, valuestr))

    def run(self):
        self.preprocess()
        self.do_iterations()
        self.postprocess()

    @classmethod
    def fromparams(cls, filename):
        params = {}
        execfile(filename, params)
        del params['__builtins__']
        return cls(**params)

    def preprocess(self):
        # Do something, using the self.xxx parameters
        pass

    def do_iterations(self):
        # Do something, using the self.xxx parameters
        pass

    def postprocess(self):
        # Do something, using the self.xxx parameters
        pass
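As an aside, execfile and dict.iteritems only exist on Python 2. Here's a simplified, self-contained Python 3 sketch of the same pattern (the per-type formatting and variable exclusion are stripped out for brevity), together with an example of usage:

```python
import os
import tempfile

class Algorithm:
    def __init__(self, input_filename, output_basename, thresh, n_iter=10):
        self.input_filename = input_filename
        self.output_basename = output_basename
        self.thresh = thresh
        self.n_iter = n_iter
        self.write_params()

    def write_params(self):
        # repr() makes the parameter file valid Python code
        with open(self.output_basename + "_params.txt", "w") as f:
            for key, value in self.__dict__.items():
                f.write("%s = %s\n" % (key, repr(value)))

    @classmethod
    def fromparams(cls, filename):
        # exec(open(...).read(), ns) is the Python 3 equivalent of execfile
        params = {}
        with open(filename) as f:
            exec(f.read(), params)
        del params['__builtins__']
        return cls(**params)

# Example of usage, writing the parameter file into a temporary directory
base = os.path.join(tempfile.mkdtemp(), "run1")
alg = Algorithm("input.txt", base, 0.67, n_iter=20)   # writes run1_params.txt
alg2 = Algorithm.fromparams(base + "_params.txt")     # identical parameters restored
```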
I do a lot of my academic programming in Python, and – even though I often write about the importance of reproducible research – I don’t always document my code very well. This sometimes leads to problems where I have some code running fine, but I don’t know which modules it requires. These could be external libraries, or modules I’ve written myself – and it’s very frustrating to have to work out the module requirements by trial and error if I transfer the code to a new machine.
However, today I’ve realised there’s a better way: the modulefinder module. I’ve written a short piece of code which will produce a list of all of the ‘base’ or ‘root’ modules (for example, if you run from LandsatUtils.metadata import parse_metadata, then this code will record LandsatUtils) that your code uses, so you know which you need to install.
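Something along these lines does the job – a sketch using the standard-library modulefinder module (the helper name root_modules is my own, and the details are illustrative rather than exactly what I ran):

```python
from modulefinder import ModuleFinder

def root_modules(script_path):
    """Return the sorted 'root' module names used by a script."""
    finder = ModuleFinder()
    finder.run_script(script_path)      # analyses imports without running the script
    roots = {name.split('.')[0] for name in finder.modules}
    roots.discard('__main__')           # the script itself isn't a dependency
    return sorted(roots)
```

So if the script does from LandsatUtils.metadata import parse_metadata, the list will include LandsatUtils. Note that modules which can't be located on the current machine end up in finder.badmodules instead – which is exactly the list of things you still need to install.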
When running GDAL on my university’s supercomputer yesterday I got the following error:
ERROR 1: Landsat_Soton.tif, band 1: An error occured while writing a dirty block
This post is really just to remind me how to solve this error – I imagine the error may have a multitude of possible causes. In my case though, I knew I’d seen it before – and fixed it – but I couldn’t remember how. It turns out that it’s really simple: GDAL is giving an error saying that it can’t write part of the output file to the hard drive. In this case, it’s because the supercomputer that I’m using has quotas for the amount of storage space each user can use – and I’d gone over the quota ‘hard limit’, and therefore the operating system was refusing to write any of my files.
So, the simple answer is to delete some files, and then everything will work properly!
(If you’re not using a shared computer with quotas, then this may be because your hard drive is actually full!)
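If you hit this error yourself, a couple of quick checks will tell you whether free space or a quota is the culprit (the quota command is only present on systems with quotas enabled):

```shell
df -h .                  # free space on the filesystem you're writing to
quota -s 2>/dev/null || echo "quota command not available here"
```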
10. Can you re-generate any intermediate data set from the original raw data by running a series of scripts?
It depends which of my projects you're talking about. For some of my nicely self-contained projects this is very easy – everything is encapsulated in a script or a series of scripts, and you can go from raw data, through all of the intermediate datasets, to the final results very easily. The methods by which this is done vary, and include a set of Python scripts, or the use of the ProjectTemplate package in R. Since learning more about reproducible research, I try to 'build in' reproducibility from the very beginning of my research projects. However, I've found it very difficult to add to a project retrospectively – if I start a project without considering it then I'm in trouble. Unfortunately, a good proportion of my PhD is in that category, so not everything in the PhD is reproducible. However, the main algorithm that I'm developing is – and that is fully source-controlled, relatively well documented and reproducible. Thank goodness!
11. Can you re-generate all of the figures and tables in your research paper by running a single command?
The answer here is basically the same as above: for some of my projects definitely yes, for others, definitely no. Again, there seems to be a pattern that smaller more self-contained projects are more reproducible – and not all figures/tables of my PhD thesis can be reproduced – but generally you’ve got a relatively good chance. At the moment I don’t use things like Makefiles, and don’t write documents with Sweave, KnitR or equivalents – so to reproduce a figure or table you’ll often have to find a specific Python file and run it (eg. create_boxplot.py, or plot_fig1.py), but it should still produce the right results.
12. If you got hit by a bus, can one of your lab-mates resume your research where you left off with less than a week of delay?
Not really – it would be difficult, even for my supervisor or someone who knew a lot about what I was doing, to take over my work. My "bus factor" is definitely 1 (although I hope that the bus factor for Py6S is fractionally greater than 1). Someone with a good knowledge of Python programming, including numpy, scipy, pandas and GDAL, would have a good chance of taking over one of my better-documented and more-reproducible smaller projects – but I think anyone would struggle to pick up my PhD. In many ways, though, that's kinda the point of a PhD – you're meant to end up being the world expert in your very specific area of research, which makes it very difficult for anyone to pick up anyone else's PhD project.
For one of my other projects, it may take a while to get familiar with it – but it should be perfectly possible to take my code, along with drafts of papers and/or other documentation I’ve written and continue the research. In many ways that is the whole point of reproducible research: aiming to develop research that someone else can easily reproduce and extend. The only difference is that usually the research is reproduced/extended after it’s been completed by you, whereas if you get hit by a bus then it’ll never have been completed in the first place!
Recently I ran into a situation where I needed to select Landsat scenes by various criteria – for example, to find images over a certain location, within a certain date range, with other requirements on cloudiness and so on. Normally I’d do this sort of filtering using a tool like EarthExplorer, but I needed to do this for about 300 different sets of criteria – making an automated approach essential.
So, I found a way to get all of the Landsat metadata and import it into a database so that I could query it at will and get the scene IDs for all the images I’m interested in. This post shows how to go about doing this – partly as a reference for me in case I need to do it again, but hopefully other people will find it useful.
So, to start, you need to get the Landsat metadata from the USGS. On this page, you can download the metadata files for each of the Landsat satellites separately (with Landsat 7 metadata split into SLC-on and SLC-off).
You’ll want the CSV files, so click the link and have a break while it downloads (the CSV files are many hundreds of megabytes!). If you look at the first line of the CSV file once you’ve downloaded it (you may not want to load it in a text editor as it is such a huge file, but something like the head command will work fine), you’ll see the huge number of column headers giving every piece of metadata you could want! Of course, most of the time you won’t want all of the metadata items, so we want to extract just the columns we want.
The problem with this is that lots of the traditional tools used for processing CSV files – including text editors, database import tools and Excel – really don’t cope well with large CSV files. These Landsat metadata files are many hundreds of megabytes in size, so we need to use a different approach. In this case, I found that the best approach was using one of the tools from csvkit, a set of command-line tools for processing CSV files, written in Python. One of the key benefits of these tools is that they process the file one line at a time, in a very memory-efficient way, so they can work on enormous files very easily. To extract columns from a CSV file we want to use csvcut, which we can call with a command line like this (adjusting the column numbers to the fields you want):
csvcut -c 5,6,1,2,3 LANDSAT_ETM.csv > LANDSAT_ETM_Subset.csv
This will extract the 5th, 6th, 1st, 2nd, 3rd etc. columns from LANDSAT_ETM.csv to LANDSAT_ETM_Subset.csv. To get a list of the columns in the file along with their ID numbers, so that you can choose which ones you want to extract, you can run:
csvcut -n LANDSAT_ETM.csv
After doing this you’ll have a far smaller CSV file in LANDSAT_ETM_Subset.csv that just contains the columns you’re interested in. There’s only one problem with this file – it still has the headers at the beginning. This is great for a normal CSV file, but when we import it into the database we’ll find that the header line gets imported too – not what we want! The easiest way to remove it is using the following command:
sed -i "1 d" LANDSAT_ETM_Subset.csv
Note the -i flag, which makes sed edit the file in place. (Piping the output back into the file you’re reading from – cat LANDSAT_ETM_Subset.csv | sed "1 d" > LANDSAT_ETM_Subset.csv – would truncate the file before sed had a chance to read it, losing everything.) Again, this doesn’t load the whole file into memory, so it will work happily with large files.
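For reference, the same streaming approach – pull out a handful of columns and drop the header line in one pass – only takes a few lines of Python with the standard csv module. This is just a sketch (the function name and the 1-based column indices are mine), not a replacement for csvkit:

```python
import csv

def extract_columns(src, dst, columns, skip_header=True):
    """Stream `src` to `dst`, keeping only the 1-based column indices given.

    Rows are processed one at a time, so memory use stays constant
    however large the input file is.
    """
    with open(src, newline="") as fin, open(dst, "w", newline="") as fout:
        reader = csv.reader(fin)
        writer = csv.writer(fout)
        if skip_header:
            next(reader)  # discard the header line
        for line in reader:
            writer.writerow([line[i - 1] for i in columns])
```

So extract_columns('LANDSAT_ETM.csv', 'LANDSAT_ETM_Subset.csv', [5, 6, 1, 2, 3]) would do the csvcut and sed steps in one go.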
We then need to create the database. This can be done with any database system, but to get a simple local database I decided to use SQLite. Once you’ve installed this you can do everything you need from the command-line (you can create the tables using a GUI tool such as SQLite Administrator, but you won’t be able to do the import using that tool – it’ll crash on large CSV files). To create a database simply run:
sqlite3 LandsatMetadata.sqlite
which will create a database file with that name, and then drop you into the SQLite console. From here you can type any SQL commands (including those to create or modify tables, plus queries), as well as SQLite-specific commands, which start with a dot. In this case, we need to create a table for the various columns we’ve chosen from our CSV. It is important here to make sure that the column names are exactly the same as those in the CSV, or the import command won’t work (you can change the names later with ALTER TABLE if needed). You can take the following SQL and modify it to your needs – the columns shown here are just the ones used in the examples below, so adjust the list to match the columns you actually extracted:
CREATE TABLE images (
sceneID TEXT,
path INTEGER,
row INTEGER,
acquisitionDate TEXT,
sceneStartTime TEXT,
startTime TEXT
);
Just type this into the SQLite console and the table will be created. We now need to import the CSV file, and first we have to define what is used as the separator in the file. Obviously, for a CSV file, this is a comma, so we type:
.separator ,
And then to actually import the CSV file we simply type:
.import LANDSAT_ETM_Subset.csv images
That is, .import followed by the name of the CSV file and the name of the table to import into. Once this is finished – it may take a while – you can check that it imported all of the rows of the CSV file by running the following query to get the number of rows in the table:
SELECT COUNT(*) FROM images;
and you can compare that to the output of
wc -l LANDSAT_ETM_Subset.csv
which will count the lines in the CSV file you imported (the two numbers should match, since we removed the header line earlier).
Your data is now in the database and you’re almost done – there’s just one more thing to do. The sceneStartTime field stores the date and time together in an awkward format, so it’s worth extracting the time portion into a separate startTime column that you can query easily. Still in the SQLite console, run:
UPDATE images
SET startTime=time(substr(images.sceneStartTime,10, length(images.sceneStartTime)));
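To see what this is doing: sceneStartTime values in the Landsat metadata look something like 2002:069:10:25:05.0700000 – the year, the day of year, and then the time. The substr call drops the first nine characters (the year and day-of-year prefix) and time() normalises what’s left to HH:MM:SS. Here’s a quick sketch of the same UPDATE run against a made-up example value, using Python’s built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (sceneStartTime TEXT, startTime TEXT)")
# Illustrative value: year, day-of-year, then the acquisition time
conn.execute("INSERT INTO images VALUES ('2002:069:10:25:05.0700000', NULL)")

# The same UPDATE as above: strip the first nine characters and let
# SQLite's time() function normalise the remainder
conn.execute("""UPDATE images
SET startTime=time(substr(sceneStartTime, 10, length(sceneStartTime)))""")

print(conn.execute("SELECT startTime FROM images").fetchone()[0])
```

which leaves a clean time value in startTime that comparisons and sorting will work on.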
And then…you’re all done! You can now select images using queries like:
SELECT * FROM images WHERE path=202 AND row=24
AND acquisitionDate > date("2002-03-17","-1 months")
AND acquisitionDate < date("2002-03-17","+1 months")
Once you’ve got the results from a query you’re interested in, you can simply create a text file with the sceneIDs for those images and use the Landsat Bulk Download Tool to download the images.
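Since the original motivation was running a few hundred of these queries automatically, here’s a minimal sketch of how that might look from Python, using the built-in sqlite3 module (the function names are mine; the table and column names match those used above):

```python
import sqlite3

def scene_ids(conn, path, row, date):
    """Return the sceneIDs within a month either side of `date` for a
    given path/row -- the same query as above, parameterised."""
    query = """SELECT sceneID FROM images
               WHERE path = ? AND row = ?
               AND acquisitionDate > date(?, '-1 months')
               AND acquisitionDate < date(?, '+1 months')"""
    return [r[0] for r in conn.execute(query, (path, row, date, date))]

def write_scene_list(conn, criteria, outfile):
    """Run scene_ids for every (path, row, date) tuple in `criteria`
    and write the matching sceneIDs to a text file, one per line."""
    with open(outfile, "w") as f:
        for path, row, date in criteria:
            for sid in scene_ids(conn, path, row, date):
                f.write(sid + "\n")
```

You could then call write_scene_list with your full list of (path, row, date) tuples and hand the resulting text file of sceneIDs straight to the Bulk Download Tool.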