Robin's Blog

Review: Code – The Hidden Language of Computer Hardware and Software by Charles Petzold

Summary: This book takes you all the way from Morse code to a fully working computer, explaining everything along the way. What’s more, it’s a great read too! If you’ve ever wondered how a computer works then buy this and read it – even if you think you already know (unless you’re, you know, a chip designer at Intel or something!)

Reference: Petzold, C., 2000, Code: The Hidden Language of Computer Hardware and Software, Microsoft Press, 395pp.

As you’ll probably know if you’ve read many articles on this site, I’m a computer programmer and general ‘geek’. So, it won’t surprise you to know that I am quite interested in how computers work – and I picked up this book thinking that I’d already know quite a lot of it. I knew a fair bit – but I learnt a huge amount from reading it, and it helped me gain a full understanding of what is going on when I write computer programs – right down to the level of the electricity inside the processor. By the end of the book I was itching to buy lots of relays or transistors and make a computer on my living room table!

The book starts by looking at the ways you, as a child, might try and communicate with your best friend who lives across the street – after your parents think you’ve gone to bed. The natural solution to this is Morse code using a torch, and Petzold takes this simple code as a good starting point to explain the concepts of a code. He then moves on to Braille, which is significantly more complex than I thought, and which gives the opportunity to look at some of the more complex things you find in codes (eg. shift characters and escape characters – both of which Braille has). You’ll note that nothing about computers has been introduced yet – and that is a key feature of the first part of the book: it doesn’t go straight into “this is how a computer works”, it starts at a very basic (but still interesting) level that becomes useful when thinking about computers later in the book, but isn’t too scary.

Electricity and electrical circuits are introduced when describing how you might communicate with another friend whose window you can’t see from yours. This is introduced almost entirely from scratch – explaining how circuits work, what voltage is, how batteries work and so on – but it went beyond my previous knowledge of electricity fairly quickly, and taught me a lot of interesting things. Whenever circuits are drawn in the book – from here onwards – the wires carrying current are shown in red, making it very easy to see what is going on.

The discussion of electricity for sending messages leads into the history of telegraph networks, and then the concept of relays. I’d never really understood relays before, but Petzold introduces them with a very good analogy as a ‘labour saving device’ at a telegraph station. Around this point a number of other key – but rather unrelated – topics are covered, such as Boolean logic (True/False, AND, OR etc.) and number systems (particularly number bases and binary). There is a very practical emphasis on everything, and the importance of binary as on/off, true/false, open/closed and so on is stressed throughout. After these introductions, the relays discussed earlier are combined to produce logic gates (AND, OR, NOT, NAND, XOR and so on) with the aim of producing a circuit to help you choose a cat (yes, it sounds strange, but it works well as an example!). Here you can start to see how this is moving towards a computer…

I’m not going to go into much more detail about the rest of the book, except to say that you move towards being able to ‘build’ (conceptually if not actually physically) a fully-working computer gradually, one step at a time. From logic gates to adding and subtracting circuits, and from clocks to flip-flops and RAM, you gradually work up to a full, programmable computer – which you have basically built by page 260! Given how much detail everything is explained in – and how little knowledge is assumed – fitting it all into 260 pages is very impressive!

Of course, the book continues past page 260, going on to cover topics including input and output (from keyboards and to the screen), high and low level programming languages, graphics, multimedia and more. Interestingly, transistors aren’t mentioned until after you’ve got almost all of the way to building a computer – but this is almost certainly because relays are far easier to understand, and accomplish the same job. Once they have been introduced, a couple of important processors (the Intel 8080 and the Motorola 6800) are examined in detail – a really interesting opportunity to see how the concepts you’ve learnt about have been applied in real life by chip designers.

I can think of very few issues with this book. The last chapter does read rather strangely, as if the author was trying to fit far too much into far too little space (although trying to cover multimedia, networking, WIMP interfaces and more in one chapter is a bit of a tall order!), but I very much like the book as a whole. It is one of those rare books that is suitable for a very wide range of audiences – from those with almost no knowledge of the subject at all (it starts from the very beginning, so that isn’t a problem) right up to experienced programmers who already know some of it (they will still find a lot they don’t know, and realise a lot of things). Overall: a great read, very interesting and very educational. You won’t be disappointed.


Validating the validation?

So, I’ve been pondering an interesting scientific dilemma recently: how do you validate a validation technique? That is, if you’re using a certain procedure to validate some data (to check how correct/accurate it is), how can you validate the validation procedure itself?

This has come up in my work recently in relation to validating Aerosol Optical Thickness (AOT) data from satellites. Validation is normally performed by comparing satellite-derived data to data from the Aerosol Robotic Network (AERONET). However, this is challenging for a number of reasons: the main one being that AERONET measurements are point measurements of the AOT above the measurement site, whereas the satellite will measure AOT averaged over a large pixel area (10 x 10km for MODIS). A number of validation methods have been proposed to deal with this, and they are all based on spatial and temporal averaging of the data, to bring it into a comparable state.

[Figure: example MODIS AOT data for Europe]

The big question then becomes:

If there are a number of different methods for validating this data, then how do we choose which one is best?

This is a big problem because the results of the validation will be used to infer the accuracy and uncertainty of the dataset – and if we use a validation that gives misleading results then we will have a misleading opinion of the dataset. This gets even more difficult when you need to choose parameter values for use in these validation procedures. For example, if the validation involves temporal or spatial averaging then what period/distance should the averaging be done over? 30 minutes? 1 hour? 5 hours?

So, how should we deal with this?

Unfortunately, I don’t know. I know that I have a problem with some of the current methods (for example, trying a wide range of parameter values and choosing the ones that give you the best validation results – that is, the lowest error) as some of them really do seem to be ‘cheating the system’. This is something that I’m hoping to come back to over time – I already have some preliminary ideas on how to ‘fix’ the AOT validation system – but I think it is a problem which won’t go away.


Updated Snow GIS data

A while back I released a GIS dataset containing Snow’s Cholera analysis data in modern GIS formats, and georeferenced to the British National Grid (see my previous post). Unfortunately, there was an error in some of the attributes of the Cholera Deaths shapefile which caused issues when using the data.

This error has now been fixed, and the updated data are available to download here: SnowGIS_v2.zip. The link on the previous post has also been updated.

Please accept my apologies for this error.


I signed the Science Code Manifesto – and you should too!

I’ve just signed the Science Code Manifesto because I firmly believe in what it says. Ok well, that probably doesn’t tell you much – generally I tend to believe in things that I sign – but I’d like to tell you why I signed it, and why I think it’s really important.

A lot of my PhD life is spent writing code (a lot of my life outside of my PhD is also spent writing code, but that’s another story). When I tell people this quite a few of them are surprised that I’m not doing a computer science PhD – because surely they’re the only ones who spend their time writing code? Well…no! A lot of scientists spend a lot of time writing code for research in almost every subject.

Why do they do that? Well, nearly every research project involves at least one of the following activities:

  • Data processing
  • Plotting graphs
  • Calculating statistics
  • Running models
  • Building new models, simulations and so on

All of these activities can easily be done through code, and in fact it’s often far more efficient to do them through code than by other methods. However, mistakes can be made in code, and people will often want to check the results of other people’s papers (that is, to ensure reproducibility – a key factor in science) – but to do that they need the code. That is what the first tenet of the Science Code Manifesto says: “All source code written specifically to process data for a published paper must be available to the reviewers and readers of the paper”. That means that as a reader (or reviewer) I can read the code (to check it looks like it does what it’s meant to do), and run the code (to check it actually does what it’s meant to do). It also means that if I have the code to do the processing, plus the input data, I can generate the output data that they got, and check it against the results in the paper. I was reading a paper today which examined aerosol optical depth variations across Europe. They had really high resolution data, and I’d love to have seen a map of the distribution across the UK in detail, but it wasn’t included in the paper (they had a lower-resolution map of the whole of Europe instead). If I’d had access to the code (and the data) then I could have generated the data myself, plotted a map over the whole of Europe (to check that it looked the same as their published map) and then zoomed in on the UK to examine it in more detail.

Scientific papers are designed to be built upon. As Newton said, “If I have seen further it is only by standing on the shoulders of giants” – as scientists we all need to stand on the shoulders of those who came before us (giants or not). If you have the code that other scientists have used to produce the results in their paper, it is likely that you might want to modify it (for example, to fix some errors you’ve found), extend it (to make it applicable to your particular study area), and share it or its modifications with your colleagues. You won’t be able to do this unless you know what license the code was originally released under – hence the second tenet: “The copyright ownership and license of any released source code must be clearly stated”.

The next two tenets are very important as they place scientific code at the same level as scientific papers, books and other more ‘traditional’ academic outputs. They state that “Researchers who use or adapt science source code in their research must credit the code’s creators in resulting publications” and “Software contributions must be included in systems of scientific assessment, credit, and recognition”. This is important because if we believe that scientific code is important (which I, and the 846 people who have signed the manifesto so far, believe) then we need to recognise it. This means two things: firstly, citing it, so that we give the proper attribution to the authors and let people see how it is being used; and secondly, giving credit for writing code when we assess how good researchers are. This is something that varies significantly by department and research area – but it is something which I think should be standard across all fields. If you write a good piece of scientific software (not a 10 line Python script in a random file somewhere, but something which is properly released, useful, documented and sustainable) then you should be given credit for it, just as if you had written a paper or a journal article! As a number of people have commented before: a scientific paper which describes a new algorithm is not the scientific work itself – it is just an advert for the work. The real scientific work, and scientific product, is the code that implements the algorithm.

Finally, the manifesto touches on the subject of software sustainability – something that I will (hopefully) be doing a lot more work on in the near future. This refers to the practice of sustaining software so that it can continue to be used (and, ideally, continue to be improved) in the future. Software is a funny thing – it is susceptible to rotting away, just like organic material. This is known as software decay and is primarily caused by the rapid progress made in technology: it may be that the ‘latest, greatest’ technology that you used to write your software in 2012 can’t be run in 2020, or 2025, but the job the software does may still be very important. I think (hope) that all of my code will be able to run for the foreseeable future as I’ve written it in fairly standard programming languages (such as Python and R), but this may not be the case – for example, libraries can easily break as standards evolve, and if the author is no longer maintaining their libraries then they may not get fixed. This can be a big issue, and leads on to the other part of sustaining software: that of generating a community around the software, which will help sustain it in the years to come. The manifesto is actually fairly limited in what it says – “Source code must remain available, linked to related materials, for the useful lifetime of the publication” – but I feel that a lot of the other things I’ve raised in this paragraph are also relevant.

So, there we go. That’s why I signed the manifesto – now have a think about it, and if you agree go and sign it too!


In praise of ProjectTemplate for reproducible research

As you might know from some of my previous posts, I’m a big fan of making my scientific work reproducible. My main reasons for being so keen on this are:

1. Reproducibility is key to science – if a result can’t be reproduced then it cannot be verified (if the experiment can’t be run again to check whether the same result is produced, then no-one can verify your work, and no-one can falsify it if it is incorrect), and therefore (according to various philosophers of science such as Popper) it isn’t science.

2. It’s actually really useful for me as a researcher. Have you ever come back to a project six months after stopping work on it (possibly because you’d submitted a paper on it, and had to wait ages for the reviewers’ comments) and found it almost impossible to work out how to produce a certain graph or table, or which data was used to produce a certain result? Making your work reproducible by other scientists also means it will be reproducible by you when you’ve forgotten all about how it worked!

Basically, reproducibility in scientific research these days means code. You could write a long Word document describing exactly how you processed all of your data (good luck keeping it up to date) and then manually work through all of those steps again, but in most of my work I use code in a variety of languages (Python, R and IDL mostly) to do the processing for me.

The beauty of this (aside from not spending ages clicking around in frustrating dialog boxes) is that doing your research through code gets it a long way to being reproducible without any other work on your part. You created your original outputs through code, so you can reproduce them just by running the code again! Simple isn’t it?

Well, unfortunately, it’s not quite that simple. Do you know exactly which bit of data you used with that code? Did you pre-process the data before using it in your code? Did you do some processing on the data that the code produced before putting it into a table or graph in your paper? Will you remember these things in six months or six years if you need to reproduce that bit of work yourself (or, more scarily, if someone emails you to tell you that they think your results were wrong…)? Unfortunately, I think that’s unlikely.

Anyway, to get to the point of this post: I have recently been using an R package called ProjectTemplate which has really helped me make my research properly reproducible. This package generates a standard folder structure for your R code, and provides a range of useful functionality for automatically loading data and caching the results of computations. I’ve been using this for a report that I wrote recently for my PhD supervisors (something which may turn into a paper sometime – hopefully), and it’s been great.

I’m not going to give a full overview of all of the functionality of ProjectTemplate here, but I’ll show you a bit about how I use it. Firstly, here is my folder structure for this project:

[Figure: folder structure for my ProjectTemplate project]

As you can see there are quite a few folders here with code in:

  • data: scripts for loading in data (or data files themselves, which will be loaded automatically if they are of certain file types)
  • lib: functions that will be used throughout the project
  • munge: scripts to manipulate or ‘munge’ the data before use in the rest of the project (ie. pre-processing)
  • src: scripts used to actually do the processing and analysis of the data

There are also quite a few folders that I use that I haven’t shown expanded above:

  • cache: stores cached R objects (eg. results from time-consuming processing steps)
  • graphs: stores graphical outputs from the analysis

So, what we’ve got here is a place to put re-usable functions (basically functions that could eventually go into a separate R package – eg. for reading data files in specific formats), a place to put pre-processing code, and a place to put actual scientific analysis code. You’ll see there are loads of other folders that I haven’t mentioned which I don’t really use, but I suspect I will probably use them in new projects in the future.

The beauty of this folder structure is that the folder containing everything above can simply be zipped up and given to someone else, and they can then run it themselves. How do they do that? Simple: change the working directory to the folder and run:

library(ProjectTemplate)
load.project()

That will load all of the libraries needed, load the data (from files or cache), pre-process it (or load the results from the cache) and get it to the stage where you can run any of the files in src. Great!

The brilliant thing is that each of my scripts in the src folder will produce one or two outputs for my report. All of my graphs are saved as PDFs into the graphs folder, ready to be included directly into a LaTeX document, and my tables are produced as R data frames and then converted to LaTeX tables using the xtable package.

So, what’s the practical upshot of this? Well, if I come back to this in six months I can re-run any analysis from my report by typing a couple of lines of code and then running the relevant file. It also meant that when I realised half-way through writing up that I had accidentally produced all of my results (about 5 graphs and 4 big tables) from some incorrect input data (basically I had data for half of Europe rather than just the UK, which makes a big difference to the ranges of atmospheric data!), it took me about 30 minutes to generate all of the new results: I simply changed a line of code where the data was imported, ran the pre-processing again (which accounted for about 20 of those 30 minutes!) and then ran each file to regenerate the required graph PDFs and xtable LaTeX code.

Hopefully this will have made sense to you all – stay tuned for some more reproducible research posts in the near future.


Announcing bib2coins – convert BibTeX files to COINS metadata

Recently I was shocked to find that there didn’t seem to be a simple tool which would convert BibTeX files to COINS metadata span tags – so I wrote one!

That sentence probably made no sense to you – so let’s go through it in a bit more depth. I use LaTeX to write all of my reports for my PhD, and therefore I keep all of my references in BibTeX format. I also use BibTeX format to keep a list of all of my publications, which I then use to automatically produce a nice listing of my publications on my website. I’ve recently become a big fan of Zotero, which will import references from webpages with a single click. This works for many sites like Google Scholar, Web of Knowledge, Science Direct etc – and I wanted to get the same for my publications page.

Examining the information given on the Zotero Exposing Metadata page suggests that one of the ways to expose this metadata in a HTML page is to use COINS (ContextObjects in Spans). This involves putting a number of strange-looking <SPAN> elements into your HTML page, which Zotero (and various other tools like Mendeley) will then use to automatically add the bibliographic data to their database.
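To give a rough idea of what these spans contain – and this is purely illustrative, definitely not the actual bib2coins implementation – here is a minimal Python sketch of building a COinS span for a journal article. The helper function and the particular fields included are my own illustrative choices:

# Illustrative sketch only - NOT the actual bib2coins code.
# A COinS span is a <span> with class "Z3988" whose title attribute holds
# a URL-encoded OpenURL ContextObject describing the reference.
from urllib import urlencode  # on Python 3 this lives in urllib.parse

def make_coins_span(title, journal, year, author_last):
    fields = [
        ('ctx_ver', 'Z39.88-2004'),
        ('rft_val_fmt', 'info:ofi/fmt:kev:mtx:journal'),
        ('rft.genre', 'article'),
        ('rft.atitle', title),
        ('rft.jtitle', journal),
        ('rft.date', year),
        ('rft.aulast', author_last),
    ]
    # Strictly speaking the ampersands should also be HTML-escaped (&amp;)
    # before the span is embedded in a page
    return '<span class="Z3988" title="%s"></span>' % urlencode(fields)

span = make_coins_span('An example article', 'Some Journal', '2012', 'Wilson')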

So, how should I create the COINS metadata? Well, you can generate one item at a time using the online generator, or you can export items from Zotero as COINS, but neither of these methods can be automated. I’d really like to have a simple command-line tool that would take a BibTeX file and produce COINS metadata for all of the entries in the file…

So that’s what I created! It’s called bib2coins and it is available on the Python Package Index: to install it, simply run pip install bib2coins and it will automatically be placed on your path. You can then just run it as bib2coins bibtexfile.bib and it will print out a load of COINS spans to standard output – ready for you to pipe into the middle of an HTML file!

The code is fairly simple, and uses a BibTeX parser written by Vassilios Karakoidas combined with my own code to create the COINS spans themselves. It is not finished yet, and currently works well for journals and ‘inproceedings’ items but hasn’t been tested on much else (I haven’t written any books, so I’m not so concerned about creating COINS metadata for them!). However, I will be updating this tool to support more bibliographic item types in the near future.


How to: Make your Sphinx documentation compile with ReadTheDocs when you’re using Numpy and Scipy

Sphinx is a great tool for documenting Python programs (and lots of other things – I recently saw a lecturer who had done all of his lecture notes using Sphinx!) and I’ve used it for my new project (which will be announced on this blog in the next few days). Now that the project is near release, I wanted to get the documentation onto ReadTheDocs to make it nice and easily accessible (and so that it can be easily built every time I commit to GitHub).

The theory is that you just point ReadTheDocs to your GitHub repository, it finds the Sphinx conf.py file, and all is fine. However, if you use any module outside of the standard library, and you’re using the Sphinx autodoc module, then it will fail to compile the documentation. This is because the Python code that you are documenting needs to be able to be imported for autodoc to work, and if you are trying to import a module that doesn’t exist by default on a Python install then an error will be produced.

The ReadTheDocs FAQ says that you can setup a pip_requirements file to install any modules that are needed for your code, but this won’t work for any modules that include C code. This is understandable – as ReadTheDocs don’t want any random C code executing on their server – but it means that trying to build the docs for any code that uses numpy, scipy or matplotlib (or many other modules) will fail.

The FAQ suggests how to solve this – using a so-called ‘mock’. This is an object that pretends to be one of these modules, so that it can be imported, but doesn’t actually do anything. This doesn’t matter, as it is not normally necessary to actually run the code to produce the docs – just to be able to import it. However, the code that is provided by ReadTheDocs doesn’t work for any modules that you import using the * operator – for example, from matplotlib import *. After asking a StackOverflow question, I found out how to fix this for the code that ReadTheDocs provide, but a comment suggested a far easier approach: simply add code like the following to the top of your conf.py file:

import sys

import mock

# Mock out modules with C extensions so that autodoc can still import the code
MOCK_MODULES = ['numpy', 'scipy', 'matplotlib', 'matplotlib.pyplot', 'scipy.interpolate']
for mod_name in MOCK_MODULES:
    sys.modules[mod_name] = mock.Mock()

In the MOCK_MODULES list put the names of all of the modules that you import. It is important to list submodules (such as matplotlib.pyplot) as well as the main modules. After committing your changes and pushing to GitHub, you should find that your docs compile properly.


How to: Set raster values to NoData easily in ArcGIS 10

While processing some data at work today I had an issue: I had a raster dataset in ArcGIS in which all cells with invalid data had been set to 9999. Of course, this caused a lot of issues for the statistics on the dataset – basically they were all nonsense – so I needed to fix it. I couldn’t seem to get reclass to work properly with floating point data, so I posted a question to the GIS StackExchange site. Within a few minutes I had a couple of answers: one suggesting a possible way using the interface, and one suggesting a method using the ArcPy API.

However, I managed to find a way to do this easily using the approach recommended for ArcPy in the response from Aragon (thanks!), but using the GUI interface. To do this, follow the instructions below:

1. Run the Spatial Analyst -> Conditional -> Set Null tool

2. Set the input conditional raster to be the raster dataset which you want to change

3. Set the expression to VALUE = 9999, where 9999 is the value that you want to replace with NoData.

4. Set the Input false raster or constant value to the same raster dataset that you selected in step 2.

5. Choose a sensible output raster.

6. Click OK

It’s done! Your new raster should have all of the values of 9999 replaced with NoData, and your statistics should work fine.
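For completeness, here is a rough ArcPy sketch of the same operation, along the lines of the scripted approach suggested on StackExchange. This is an untested sketch: it assumes the Spatial Analyst extension is licensed and that your input raster is called input_raster.tif:

import arcpy
from arcpy.sa import SetNull

# The Set Null tool lives in the Spatial Analyst extension
arcpy.CheckOutExtension("Spatial")

# Where the condition is true the output is NoData; elsewhere the
# 'false raster' (the original data) is passed through unchanged
out_raster = SetNull("input_raster.tif", "input_raster.tif", "VALUE = 9999")
out_raster.save("raster_with_nodata.tif")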


Announcing DateRangeParser: Parse strings like “27th-29th June 2010”

In a project recently I was struggling to find a way to parse strings that contain a date range, for example:

  • 27th-29th June 2010
  • Tuesday 29 May -> Sat 2 June 2012
  • From 27th to 29th March 1999

None of the Python modules I investigated (including parsedatetime) seemed to be able to cope with the range of strings that I had to deal with. I investigated patching parsedatetime to allow it to do what I wanted, but I found it very hard to get into the code. So, I thought, why not write my own…

So I did, and I’ve released it under the LGPL and you can install it right now by running:

pip install daterangeparser

Or you can visit the DateRangeParser PyPI page to download it manually, read the documentation, or hack on the code.

The current version will parse a wide range of formats (see the examples in the documentation) and will deal with individual dates as well as date ranges. The API is very simple – just import the parse method and run it, giving the date range string as an argument. For example:

from daterangeparser import parse
print parse("14th-19th Feb 2010")

This will produce an output tuple with two datetime objects in it: the start and end date of the range you gave.
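For example, you can unpack the result straight into start and end dates (a quick sketch – the formatting calls are just standard datetime methods):

from daterangeparser import parse

# parse() returns a (start, end) tuple of datetime objects
start, end = parse("14th-19th Feb 2010")
print start.strftime("%d %B %Y")  # 14 February 2010
print end.strftime("%d %B %Y")    # 19 February 2010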

The parser is built using PyParsing – a great Python parsing framework that I have found very easy to get to grips with. It is incredibly powerful, very easy to use, and really shows how limited regular expressions can be! Now that I’ve done this I have an urge to use PyParsing to write parsers for all of the horrible scientific data formats that I have to deal with in my PhD….watch this space!


Producing polar contour plots with matplotlib

In my field I often need to plot polar contour plots, and generally plotting tools don’t make this easy. In fact, I think I could rate every single graphing/plotting package in the world by the ease of producing a polar contour plot – and most would fail entirely! Still, I have managed to find a fairly nice way of doing this using my plotting package of choice: matplotlib.

I must warn you first – a Google search for matplotlib polar contour or a similar search term will produce a lot of completely out-dated answers. The most commonly found answers are those such as this StackOverflow question and this forum post. In fact, the first question was asked by me last year – and got an answer which is now completely out of date. Basically, all of these answers tell you that you can’t do a polar contour plot directly in matplotlib, and you must convert your points from polar co-ordinates to cartesian co-ordinates first. This isn’t difficult, but is a pain to do, and of course you then end up with cartesian axes which doesn’t look great. The great news is that you can now do polar contour plots directly with matplotlib!

So, how do you do them? Simple really, you just create some polar axes and plot a contour plot on them:

from matplotlib.pyplot import subplots

fig, ax = subplots(subplot_kw=dict(projection='polar'))
cax = ax.contourf(theta, r, values, nlevels)

This produces a filled contour plot, as it uses the contourf function; using the contour function would give simple contour lines. The first three parameters given to this function are all two-dimensional arrays containing, in order, the angles (theta), the radii and the actual values to contour. The final parameter is the number of contour levels to plot – you tend to want lower numbers for line contours and higher numbers for filled contour plots (to get a smooth look).
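For example, to get line contours rather than a filled plot you would just swap contourf for contour and drop the number of levels – a minimal sketch, assuming theta, r and values are set up as described below:

# Line contours: same arguments as contourf, but fewer levels usually look better
cax = ax.contour(theta, r, values, 10)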

I never quite understood these two-dimensional arrays, and why they were needed. I normally had my data in the form of three lists that were basically columns of a table, where each row of the table defined a point and value. For example:

Radius   Theta   Value
10       0       0.7
10       90      0.45
10       180     0.9
10       270     0.23
20       0       0.5
20       90      0.13
20       180     0.52
20       270     0.98

Each of these rows defines a point – for example, the first row defines a point with a radius of 10, an angle of 0 degrees and a value of 0.7. I could never understand why the contour function didn’t just take these three lists and plot me a contour plot. In fact, I’ve written a function that will do just that, which I will describe below, but first let me explain how those values are converted to two-dimensional arrays.

First of all, let’s think about the dimensions: we obviously have two dimensions in our data, the radius and the angle. In fact, we could re-shape our values array so that it is two-dimensional fairly easily. We can see from the table above that we run through all the azimuth angles for a radius of 10, then the same azimuth angles for a radius of 20, and so on. Thus, rather than our values being stored in a one-dimensional list, we could put them in a two-dimensional table where the columns are the azimuth angles, and the rows are the radii:

         0      90     180    270
10       0.7    0.45   0.9    0.23
20       0.5    0.13   0.52   0.98

This is exactly the sort of two-dimensional array that we need to give to the contourf function. That’s not too hard to understand – but why on earth do the radii and angle arrays have to be two-dimensional too? Well, basically we just need two arrays like the one above, but with the relevant radii and angles in the cells, rather than the values. So, for the angles, we’d have:

         0      90     180    270
10       0      90     180    270
20       0      90     180    270

And for the radii we’d have:

         0      90     180    270
10       10     10     10     10
20       20     20     20     20

Then, when we take all three arrays together, each cell will define the three bits of information we need. So, the top left cell gives us an angle of 0, a radius of 10 and a value of 0.7. Luckily, you don’t have to make these arrays by hand – a handy NumPy function called meshgrid will do it for you:

>>> radii = np.arange(0, 60, 10)
>>> print radii
[ 0 10 20 30 40 50]
>>> angles = np.arange(0, 360, 90)
>>> print angles
[  0  90 180 270]
>>> np.meshgrid(angles, radii)
(array([[  0,  90, 180, 270],
       [  0,  90, 180, 270],
       [  0,  90, 180, 270],
       [  0,  90, 180, 270],
       [  0,  90, 180, 270],
       [  0,  90, 180, 270]]),
array([[ 0,  0,  0,  0],
       [10, 10, 10, 10],
       [20, 20, 20, 20],
       [30, 30, 30, 30],
       [40, 40, 40, 40],
       [50, 50, 50, 50]]))

One thing to remember is that the plotting function requires the angle (theta) in radians, not degrees, so if your data is in degrees (as it often is) then you’ll need to convert it to radians using the NumPy radians function.
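For example, assuming your angles are held in a list or array called angles, in degrees:

import numpy as np

# contourf expects theta in radians, so convert before plotting
theta = np.radians(angles)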

After doing all of this you can get your data into the contour plotting function correctly, and you can get some polar axes for it to be plotted on. However, if you do this, you’ll find that your axes look something like this:

[Figure: example of the default polar axes, with zero degrees at the ‘3 o’clock’ position]

You can see that zero degrees isn’t at the top, it’s at the ‘East’ or ‘3 o’clock’ position, and the angles go round the wrong way! Apparently that’s how these things are often done in maths – but in my field particularly people want to have a polar plot like a compass, with zero at the top!

If you try to find out how to do this, you’ll find a StackOverflow answer with a clever subclass of PolarAxes which does this for you. It’s brilliant that matplotlib allows you to do this sort of customisation, but if you look below the accepted answer you’ll find a link to the matplotlib documentation for a function called set_theta_zero_location. This function very nicely takes a compass direction (“N” or “E” or “NE” etc.) for where zero should be, and puts it there! Similarly, the function set_theta_direction sets the direction in which the angles will increase. All you need to do to use these is call them from the axes object:

ax.set_theta_zero_location("N")
ax.set_theta_direction(-1)

The example above will set up the plot for a ‘normal’ compass-style plot with zero degrees at the north, and the angles increasing clockwise. If you find that these lines of code give an error you need to update your matplotlib version – these methods were only added in the latest version (v1.1.0).

So, now we’ve covered everything that I’ve gradually learnt about doing this, we can put it all together in a function. I use the function below whenever I want to plot a polar contour plot, and it works fine for me. It is documented through the docstring shown in the code below.

I can’t guarantee the code will work for you, but hopefully this post has been helpful and you’ll now be able to go away and create polar contour plots in matplotlib.

import numpy as np
from matplotlib.pyplot import *

def plot_polar_contour(values, azimuths, zeniths):
    """Plot a polar contour plot, with 0 degrees at the North.

    Arguments:

     * `values` -- A list (or other iterable - eg. a NumPy array) of the values to plot on the
     contour plot (the `z` values)
     * `azimuths` -- A list of azimuths (in degrees)
     * `zeniths` -- A list of zeniths (that is, radii)

    The shapes of these lists are important, and are designed for a particular
    use case (but should be more generally useful). The values list should be `len(azimuths) * len(zeniths)`
    long with data for the first azimuth for all the zeniths, then the second azimuth for all the zeniths etc.

    This is designed to work nicely with data that is produced using a loop as follows:

    values = []
    for azimuth in azimuths:
      for zenith in zeniths:
        # Do something and get a result
        values.append(result)

    After that code the azimuths, zeniths and values lists will be ready to be passed into this function.

    """
    zeniths = np.array(zeniths)
    values = np.array(values)
    values = values.reshape(len(azimuths), len(zeniths))

    # Build 2D grids of radii and angles with the same shape as `values`
    r, theta = np.meshgrid(zeniths, np.radians(azimuths))

    fig, ax = subplots(subplot_kw=dict(projection='polar'))
    # Compass-style orientation: zero degrees at the top, angles increasing clockwise
    ax.set_theta_zero_location("N")
    ax.set_theta_direction(-1)

    autumn()  # use the 'autumn' colour map for the filled contours
    cax = ax.contourf(theta, r, values, 30)
    cb = fig.colorbar(cax)
    cb.set_label("Pixel reflectance")

    return fig, ax, cax
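As a quick usage sketch, here is how you might call the function with some made-up data built using the loop pattern described in the docstring (the dummy result calculation is just for illustration):

import numpy as np

azimuths = np.arange(0, 360, 10)  # degrees
zeniths = np.arange(0, 70, 10)    # the 'radii'

# Build the values list in the order the function expects: all zeniths for
# the first azimuth, then all zeniths for the second azimuth, and so on
values = []
for azimuth in azimuths:
    for zenith in zeniths:
        result = np.cos(np.radians(azimuth)) * zenith  # dummy data
        values.append(result)

fig, ax, cax = plot_polar_contour(values, azimuths, zeniths)
fig.savefig('polar_contour_example.pdf')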