Robin's Blog

Previously Unpublicised Code: RTWIDL

When looking through my profile on Github recently, I realised that I had over fifty repositories – and a number of these aren’t really used much by me anymore, but probably contain useful code that no-one really knows about! So, I’m going to write a series of posts giving brief descriptions of the code and what it does, and point people to the Github repository and any documentation (if available). I’m also going to take this opportunity to ensure that every repository I publicise has a README file and a LICENSE file.

So, let’s get going with the first repository, which is RTWIDL: a set of useful functions for the IDL programming language. It’s slightly incorrect to call this “unpublicised” code as there has been a page on my website for a while, but it isn’t described in much detail there.

These functions were written during my Summer Bursary at the University of Southampton, and are mainly focused on loading data from file formats that aren’t supported natively by IDL. Specifically, I have written functions to load data from the OceanOptics SpectraSuite software (used to record spectra from instruments like the USB2000), Delta-T logger output files, and NEODC Ames format files. This latter format is interesting – it’s a modification of the NASA Ames file format, so that it can store datetime information as the independent variable. Unfortunately, due to this change none of the ‘standard’ functions for reading NASA Ames format data in IDL will work with this data. Quite a lot of data is available in this format, as for a number of years it was the format of choice of the NERC Earth Observation Data Centre (NEODC) in the UK (see their documentation on the format). Each of these three functions has detailed documentation, in PDF format, available here.

As well as these functions, there are also a few utility functions for checking whether ENVI is running, loading files into ENVI without ENVI ‘taking over’ the IDL variable, and displaying images with default min-max scaling. These aren’t documented so well, but should be fairly self-explanatory.

RTWIDL is released under the BSD license, and is available on Github.

Blue Marble: From Apollo 17 to DSCOVR, an EPIC journey

NASA image ID AS17-148-22727 is famous. Although you may not recognise the number, you will almost certainly recognise the image:

This was taken by NASA Apollo astronauts on the 7th December 1972, while the Apollo 17 mission was on its way to the moon. It has become one of the most famous photographs ever taken, and has been widely credited as making an important contribution to the environmental movement, which was rapidly growing in the early 1970s.

It wasn’t, in fact, the first image of the whole Earth to be taken from space – the first images were taken by the ATS-III satellite in 1967, but limitations in satellite imaging at the time meant that their quality was significantly lower:

The Apollo astronauts had the benefit of a high-quality Hasselblad film camera to take their photographs – hence the significantly higher quality.

Part of the reason that this image has become so famous is that there haven’t been any more like it – Apollo 17 was the last manned lunar mission, and we haven’t sent humans that far away from the Earth since. NASA has released a number of mosaics built from satellite imagery (from sensors such as MODIS, and satellites such as Landsat) and called these ‘Blue Marble’ images – but they’re not quite the same thing!

Anyway, fast-forwarding to 2015…and NASA launched DSCOVR, the Deep Space Climate Observatory satellite (after a long political battle with climate-change-denying politicians in the United States). Rather than orbiting relatively close to the Earth (around 700km for polar orbiting satellites, and 35,000km for geostationary satellites), DSCOVR sits at the ‘Earth-Sun L1 Lagrangian point’, around 1.5 million km from Earth!
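As a quick sanity check on that 1.5 million km figure: in the restricted three-body approximation, the distance from Earth to the Sun–Earth L1 point is roughly

```latex
% Approximate Earth-to-L1 distance (R = 1 AU, M_earth and M_sun in kg)
r \approx R \left( \frac{M_\oplus}{3 M_\odot} \right)^{1/3}
  \approx 1.496 \times 10^{8}\,\mathrm{km} \times
    \left( \frac{5.97 \times 10^{24}}{3 \times 1.99 \times 10^{30}} \right)^{1/3}
  \approx 1.5 \times 10^{6}\,\mathrm{km}
```

which matches the quoted distance nicely.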

At this distance, it can do a lot of exciting things – such as monitor the solar wind and Coronal Mass Ejections – as it has a continuous view of both the sun and Earth. From our point of view here, the Earth part of this is the most interesting – as it will constantly view the sunny side of the Earth, allowing it to take ‘Blue Marble’ images multiple times a day.

The instrument that acquires these images is called the Earth Polychromatic Imaging Camera (EPIC, hence my little pun in the title), and it takes many sequential Blue Marble images every day. At least a dozen of these images each day are made available on the DSCOVR:EPIC website, within 36 hours of being taken. As well as providing beautiful images (with multiple images per day meaning that it’s almost certain one of them will cover the part of the Earth where you live!), EPIC can also be used for proper remote-sensing science – allowing scientists to monitor vegetation health, aerosol effects and more at a very broad spatial scale, but with a very high frequency.

So, in forty years we have moved from a single photograph, taken by a human risking his life around 45,000km from Earth, to multiple daily photographs taken by a satellite orbiting at 1.5 million km from Earth – and providing useful scientific data at the same time (other instruments on DSCOVR will provide warnings of ‘solar storms’ to allow systems on Earth to be protected before the ‘storm’ arrives – but that’s not my area of expertise).

So, click this link and go and look at what the Earth looked like yesterday, from 1.5 million km away, and marvel at the beautiful planet we live on.

Two great IPython extensions

I bought a new laptop recently, and just realised that I hadn’t installed two great IPython extensions that I always try to install whenever I set up a new IPython environment – so I thought I’d blog about them to let the world (well, my half-a-dozen readers) know.

They’re both written by MinRK – one of the core IPython developers – and provide some really useful additional functionality. If you’re impatient, then download them here, otherwise read on to find out what they do.

Table of Contents


This extension provides a lovely floating table of contents – perfect for those large notebooks (like tutorial notebooks from conferences). Simply click the button on the toolbar to turn it on and off.


Gist

This provides a simple button on the toolbar that takes your current notebook, uploads it as a gist to Github, and then provides you with a link to view the gist in nbviewer. In practice this means you can be working on a notebook, hit one button, and then copy a link and share it with anyone – regardless of their level of technical experience. Really useful!


These are both great extensions – and I’m sure there are far more that I should be using, so if you know of any then let me know in the comments!

How to: get nice vector graphics in your exported PDF ipython notebooks

(This is really Part 2 of IPython tips, tricks & notes – Part 1, but I thought I’d give it a more self-explanatory title)

IPython (sorry, Jupyter!) notebooks are really great for interactively exploring data, and then turning your analyses into something which can easily be sent to a non-technical colleague (by adding some Markdown and LaTeX cells along with the code and output graphs).

However, I’ve always been frustrated by the quality of the graphics that you get when you export a notebook to PDF. This simple example PDF was exported from a notebook and shows what I mean – and an enlarged screenshot is below:


You can see how blurry the text and lines are – it looks horrible!

Now, normally if I were producing graphs like this in matplotlib, I’d save the outputs as PDF, so that I’d get nice vector graphics which can be enlarged as much as you want without any reduction in quality. However, I didn’t know how to do that in the notebook while still having the plots displayed nicely inline as I edited it.

Luckily, at the NGCM Summer Academy IPython Training Course, I had the chance to ask one of the core IPython developers about this, and he showed me the method below, which works brilliantly.

All you have to do is add the following code to your notebook – but note, it must be after you have imported matplotlib. So, the top of your notebook would look something like this:

from matplotlib.pyplot import *
%matplotlib inline

from IPython.display import set_matplotlib_formats
set_matplotlib_formats('png', 'pdf')

This tells IPython to store the output of matplotlib figures as both PNG and PDF. If you look at the .ipynb file itself (which is just JSON), you’ll see that it has two ‘blobs’ – one for the PNG and one for the PDF. Then, when the notebook is displayed or converted to a file, the most useful format can be chosen – in this case, the PNG is used for interactive display in the notebook editor, and the PDF is used when converting to LaTeX.
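To see what those two blobs look like, here’s a minimal, hand-constructed stand-in for one output cell of a .ipynb file (real notebooks contain far more metadata than this – it’s purely illustrative):

```python
# A toy stand-in for a single code cell of a .ipynb file.
cell = {
    "cell_type": "code",
    "outputs": [{
        "output_type": "display_data",
        "data": {
            "image/png": "iVBORw0KGgo...",        # truncated base64 PNG blob
            "application/pdf": "JVBERi0xLjQ...",  # truncated base64 PDF blob
        },
    }],
}

# After set_matplotlib_formats('png', 'pdf'), each figure output carries
# both MIME types, and the converter picks whichever suits the target.
formats = sorted(cell["outputs"][0]["data"])
print(formats)  # ['application/pdf', 'image/png']
```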

Once we’ve added those lines to the top of our file, the resulting PDF looks far better:


So, a simple solution to a problem that had been annoying me for ages – perfect!

Introducing recipy: effortless provenance tracking with Python

By the time this blog post is published, I will have finished my presentation about recipy at EuroSciPy (see the abstract for my talk), and so I thought it would be a good time to introduce recipy to the wider world. I’ve been looking for something like recipy for ages – and I suggested the idea at the Collaborations Workshop 2015 Hack Day. I got together in a team with Raquel Alegre and Janneke van der Zwaan, and our implementation of recipy won the Hack Day prize! I’m very excited about where it could go next, but first I ought to explain what it is:

So, have you ever run a Python script to produce some outputs and then forgotten exactly how you created them? For example, you created plot.png a few weeks ago and now you want to use it in a publication, but you can’t remember how you created it. By adding a single line of code to your script, recipy will log your inputs, outputs and code each time you run the script, and you can then query the resulting database to find out how exactly plot.png was created.
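The idea can be sketched in a few lines of Python – note this is a toy illustration of the concept, not recipy’s actual implementation, and all of the names (, data.csv, plot.png) are hypothetical:

```python
# A toy sketch of the idea behind recipy: every run of a script appends
# a record of its inputs and outputs, and you can later query which run
# created a given file.
runs = []

def log_run(script, inputs, outputs):
    runs.append({"script": script,
                 "inputs": list(inputs),
                 "outputs": list(outputs)})

def which_run_created(filename):
    # Return the most recent run that lists `filename` as an output
    matches = [r for r in runs if filename in r["outputs"]]
    return matches[-1] if matches else None

log_run("", ["data.csv"], ["plot.png"])
print(which_run_created("plot.png")["script"])  # ->
```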

Does this sound good to you? If so, read on to find out how to use it.

Installation is stupidly simple: pip install recipy

Using it is also very simple – just take a Python script like this:

import pandas as pd
from matplotlib.pyplot import *

data = pd.read_csv('data.csv')

data.plot(x='year', y='temperature')
savefig('graph.png')  # save the plot that we'll ask recipy about later

data.temperature = data.temperature - 273

and add a single extra line of code to the top:

import recipy
import pandas as pd
from matplotlib.pyplot import *
...(code continues as above)...

Now you can just run the script as usual, and you’ll see a little bit of extra output on stdout:

recipy run inserted, with ID 1b40ce05-c587-4f5d-bfae-498e64d71a6c

This just shows that recipy has recorded this particular run of your code.

Once you’ve done this you can query your recipy database using the recipy command-line tool. For example, you can run:

$ recipy search graph.png

Run ID: 1b40ce05-c587-4f5d-bfae-498e64d71a6c
Created by robin on 2015-08-27T20:50:23
Ran /Users/robin/code/euroscipy/recipy/ using /Users/robin/.virtualenvs/recipypres/bin/python
Git: commit 4efa33fc6e0a81e9c16c522377f07f9bf66384e2, in repo /Users/robin/code/euroscipy, with origin None
Environment: Darwin-14.3.0-x86_64-i386-64bit, python 2.7.9 (default, Feb 10 2015, 03:28:08)


** Previous runs creating this output have been found. Run with --all to show. **

You can also view these runs in a GUI by running recipy gui, which will give you a web interface like:



There are more ways to search and find more details about particular runs: see recipy --help. Full documentation is available on Github – which includes information about how this all works under the hood (it’s all to do with the crazy magic of sys.meta_path).
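To give a flavour of that sys.meta_path magic, here’s a toy finder that simply logs every import it sees. recipy’s real hook is far more sophisticated (it goes on to patch I/O functions in the modules it intercepts), so treat this purely as a sketch of the mechanism:

```python
import importlib.abc
import sys

# A finder placed on sys.meta_path is consulted for every import.
# This one just records the module name, then returns None so that
# the normal import machinery carries on as usual.
imported = []

class LoggingFinder(importlib.abc.MetaPathFinder):
    def find_spec(self, fullname, path, target=None):
        imported.append(fullname)
        return None  # defer to the standard finders

sys.meta_path.insert(0, LoggingFinder())

sys.modules.pop("colorsys", None)  # ensure the import actually runs
import colorsys  # noqa: F401 -- our finder sees this import happen

print("colorsys" in imported)  # True
sys.meta_path.pop(0)  # tidy up: remove our finder again
```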

So – please install recipy (pip install recipy), let me know what you think of it (feel free to comment here, or email me at robin AT, and please submit issues on Github for any bugs you run into (pull requests would be even nicer!).

How I became Dr Robin Wilson: Part 2

At the end of the previous post in this series, I was six months into my PhD and worrying that I really needed to come up with an overarching topic/framework/story/something into which all of the various bits of research that I was doing would fit. This part is the story of how I managed to do this, albeit rather slowly!

In fact, I felt that the next year or so of my PhD went very slowly. I can only find a few notes from meetings so I’m not 100% sure what I was spending my time doing, but in general I was trying not to worry too much about the ‘overarching story’ and just get on with doing the research. At this point ‘the research’ was mostly my work on the spatial variability of the atmosphere.

When I first thought about investigating the spatial variability of the atmosphere over southern England I was pretty sure it’d be fairly easy to do: all I had to do was grab some satellite data and do some statistics. I was obviously very naive at that point in my PhD…it was actually far harder than that for a number of reasons. One major problem was that the ‘perfect data’ that I’d imagined didn’t actually exist, and all of the datasets that did exist had limitations. For example, a number of satellite datasets had lots of missing data due to cloud cover, or had poor accuracy, and ground measurements were only taken at a few sparsely distributed points.

I spent a long time writing a very detailed report on the various datasets available, how they were calculated and their accuracy. I then performed independent validations myself (as the accuracy often depended on the situation in which they were used, and I wanted to establish their accuracy over my study area), and finally actually used the datasets to get a rough idea of the spatial variability of these two parameters (AOT – Aerosol Optical Thickness – and PWC – Precipitable Water Content) over southern England. This took a long time, but got me to the stage where I was very familiar with these datasets, and gave me the opportunity to develop my data processing skills.

I then used Py6S – by then a fairly robust tool that was starting to be used by others in the field – to simulate the effects of this spatial variability on satellite images, particularly when atmospheric correction of these images was done by assuming that the atmosphere was spatially uniform. The conclusion of my report was interesting: it basically said that the spatial variability in PWC wasn’t a huge problem for multispectral satellite sensors, but that the spatial variability in AOT could lead to significant errors if it was ignored.

By the time I’d finished writing this report I was probably somewhere between one year and one and a half years into my PhD, and I was wondering where to go next. I’d originally planned that my investigation into the spatial variability of the atmosphere would be one of the ‘three prongs’ of my PhD (yes, I found some notes that I had lost when I wrote the previous article in this series!), and the others would be based around novel sensors (such as LED-based sensors) and BRDF correction of satellite/airborne data. However, I hadn’t really done much on the BRDF side of things, and I wasn’t sure exactly how the LED-based sensors would fit in to my PhD as a lot of the development work was being done by students in the Electronics department, and so I wasn’t sure how much it could be counted as ‘my’ work (I was also concerned that we’d find out that they just didn’t work!).

I spent a lot of time around this point just sitting and thinking, and scribbling vague notes about where I could go next. While doing this I kept coming back to the resolution limitations in methods for estimating AOT from satellite images, for two main reasons.

  1. I really wanted high-resolution data for my investigation into spatial variability, but it wasn’t available so I had to make do with 10km MODIS data instead
  2. My spatial variability work had shown that it was important to take into account the spatial variability in AOT over satellite images, and the only way to do this properly would be to perform a per-pixel atmospheric correction. Of course, a per-pixel atmospheric correction requires an individual estimate of AOT for each pixel in the image: and there weren’t any AOT products that had a high enough resolution to do this for sensors such as Landsat, SPOT or DMC (or upcoming sensors such as Sentinel-2).

The obvious answer to this was to develop a way of estimating AOT at high resolution from satellite data – but I kept well away from this as I was pretty sure it would be impossible (or at least, very difficult, and would require skills that I didn’t have).

I tried to continue with some other work on the novel LED-based instruments, but kept thinking how these instruments would nicely complement a high-resolution AOT product, as they could be used to validate it (after all, if you create a high-resolution product, it is often difficult to find anything to validate it with). Pretty much everything that I did kept leading me back to the desire to develop a high-resolution AOT product…

I eventually gave up trying to resist this, and started brainstorming possible ways to approach creating a high-resolution AOT product. I was pretty sure that none of the ‘standard’ approaches would work (people had tried these before and hadn’t succeeded) so I tried to think ‘outside the box’. I eventually came up with an idea – and you’ll have to wait for the next part to find out what this idea was, how I ‘sold’ it to my supervisors, and what happened next.

Interactive cloud frequency web map, with Google Earth Engine

Summary: I’ve developed an interactive cloud frequency map, available here. It may be particularly useful for satellite imaging researchers working out where they can acquire imagery easily.


One of the major issues with optical satellite imaging is that you can’t see through clouds: so normally when it’s cloudy, you can’t get anything useful from your images. This actually has a big effect on where you can use satellite imaging effectively: for example, a lot of people have used satellite data to monitor changes in the Amazon rainforest, but it’s quite challenging to find cloud-free images due to the climate in that region of the world.

Similarly, I remember a friend of mine struggling throughout his PhD with cloud cover. He was trying to observe vegetation in India, and needed to look at images taken around the monsoon because the vegetation was growing most vigorously at that time of year. The problem, of course, is that it’s very cloudy during the monsoon season – so there were barely any images he could use, and he ended up spending half of his PhD developing a new method to classify cloud from his images, so that he could extract the small fraction of the data that was actually usable. I’ve run into similar problems too – for example, some research in Hyderabad ran into problems caused by the limited availability of data due to cloud cover.

I’ve often found myself wanting to look at cloud frequency in different areas so that when I have a number of options for where to use as a case study for something, I can easily pick the area that is likely to have the most cloud-free data available. I’ve been a ‘Trusted Tester’ of Google Earth Engine for a long time, and had written a short script in Earth Engine to produce a map of cloud frequency.


I found myself using this frequently with colleagues, but I wasn’t able to share the data or interface with anyone easily. So, a few weekends ago I sat down and altered one of the EarthEngine demonstration applications (the ‘trendy-lights’ demo) to work with my cloud frequency code. After a bit of trial and error I got it working: the webapp is available here, and the code is on Github. I’m fairly new to Javascript development, but I think it all works fairly well: you should be able to search to find a location on the map, and you can click anywhere on the map to produce a pop-up box with the cloud cover percentage. The higher this percentage, the more days in a year are considered to be cloudy by the MODIS satellite (more details are available from the Info link in the top right).
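The underlying calculation is simple to sketch. This toy Python/numpy version (not the actual Earth Engine script, and using made-up data) computes the percentage of cloudy days per pixel from a stack of daily binary cloud masks:

```python
import numpy as np

# Synthetic stand-in for a year of daily cloud masks over a tiny
# 4x4-pixel region: True = that pixel was cloudy on that day.
rng = np.random.default_rng(42)
daily_masks = rng.random((365, 4, 4)) < 0.6

# Fraction of cloudy days per pixel, expressed as a percentage --
# the same quantity the webapp reports when you click on the map.
cloud_frequency = daily_masks.mean(axis=0) * 100.0
print(cloud_frequency.round(1))
```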

So, I hope you find this useful (and even if you’re not using it as a satellite imaging researcher, you may find the cloud cover patterns across the world to be fascinating anyway…)

How I became Dr Robin Wilson: Part 1

As many of you probably know, I’ve been working towards a PhD at the University of Southampton. This post is the brief story of my PhD, my graduation and my future plans.

So, back in the dim and distant days of 2010, I started a PhD with the Institute for Complex Systems Simulation (ICSS) at the University of Southampton. This is a Doctoral Training Centre (these are now known as Centres for Doctoral Training, because of an acronym clash!) which offers four-year PhDs: you start off with a year of taught courses (at MSc level, though you don’t actually get awarded an MSc), focused on building skills for your research, and then continue with a fairly standard three years of research.

My first year was very useful, and covered a range of topics including introductions to complexity science and simulation methods, significant skills development in programming (particularly high-performance computing) and statistics, plus domain-specific courses (such as computer vision, remote sensing and machine learning). Many of these courses were coursework-focused, and helped me develop my writing skills (I also used this opportunity to properly learn LaTeX).

At the end of the taught year I did a ‘Summer Project’, which was equivalent to an MSc dissertation project. Mine had the wonderful title of “‘Can a single cloud spoil the view?’: Modelling the effect of an isolated cumulus cloud on calculated surface solar irradiance”. The story of this project is a blog post in itself, but in the meantime you can read my thesis here. I was particularly pleased to find out later that my thesis won the Remote Sensing and Photogrammetry Society (RSPSoc) Masters Thesis Prize – a highly-competitive award.

Paths taken by light in the ray-tracing Radiative Transfer Model I developed for my MSc


One of the benefits of doing a PhD through the ICSS was that I didn’t have to have a detailed plan for my entire PhD when I started: in fact, some of my colleagues didn’t even know what department they wanted to work in, and used the first year as an opportunity to ‘date’ potential supervisors and try out potential topics. I came in knowing I wanted to do a PhD in remote sensing, probably focusing on some sort of quantitative methods development, potentially in the areas of correction and calibration of satellite imagery (areas I’d worked on for my undergraduate dissertation), but I didn’t know much more than that.

By the time I got to the end of my taught year and actually started the ‘research component’ I’d narrowed down a little bit, and realised that I wanted to do something to do with atmospheric aerosols and atmospheric correction. I came up with a plan which involved looking at the spatial variability of atmospheric conditions (principally the aerosol content, as measured by Aerosol Optical Thickness, and water content, as measured by Precipitable Water Vapour). I can’t actually find a copy of this plan at the moment (it’s probably on one of my many external hard disks somewhere), so that vague memory will have to do for now!

What I have managed to find, however, is a copy of a report I produced for my first six-monthly supervisory meeting, summarising roughly what I’d done in that period. Looking back, I’m actually impressed as to what I’d managed to achieve:

  • I’d started to investigate the spatial variability of the atmosphere over southern England, and had really been struggling with the availability and quality of data. This struggle actually led to a lot of interesting work, one strand of which was investigating the relationship between visibility (as measured by meteorological stations and airports) and Aerosol Optical Thickness (AOT, the measure of the ‘clarity of the atmosphere’ that I was interested in). According to my notes, I had submitted a paper about this within the first six months – and although that version of the paper was rejected, a later version was published as Wilson et al., 2015 (PDF).
  • I had finished v1.0 of Py6S, my Python interface to the 6S Radiative Transfer Model. 6S simulates how light passes through the atmosphere under configurable atmospheric conditions, and is widely-used in atmospheric correction of satellite images. Again, I will probably write another article about how the idea for Py6S came about and the way it developed over time, but here I’ll just summarise by saying that developing Py6S was a great idea, it gave me a really useful framework for implementing the rest of my PhD projects, and it saved me a huge amount of time in the long run. Luckily, my supervisors were very supportive of me taking time to create a fully-featured version of Py6S, and I later published a paper on it in Computers & Geosciences as Wilson, 2013 (PDF).
  • I’d investigated various other ideas, some of which came to fruition throughout the rest of my PhD (developing LED-based sun photometers, validation of GPS-based water vapour measurements, working with other radiative transfer models) and some of which didn’t (attempting to use webcams to monitor visibility and therefore AOT, monitoring various other environmental changes from webcams, developing full spectrometers using LEDs).

I’m pretty sure some of those things were done ‘on the side’ during my taught year, but I can’t remember exactly what I did when. Anyway, by the end of the first six months of full-time research I had a number of interesting ideas, some significant frustrations, some potential papers, and some big questions about where my PhD was going.

I was very aware that a PhD had to ‘tell a story’ and have a coherent thread running through it: I was confident that I could do research, but I knew that putting together 4-5 completely unrelated chapters wouldn’t satisfy an examiner. Reading my notes from meetings at the time shows that I was really quite worried about this – there are lots of question marks all over the page, and notes about the importance of finding a good overall structure and aim.

This would come, but it would take some time – and you’ll have to wait until Part 2 for that…

How to: load the Google Maps Places library through Google API Loader

Google have recently introduced a new way of loading their javascript APIs: their Google API Loader. To use it, all you do is add a script tag in your HTML:

<script src=""></script>

You can then load whatever Google APIs you want using code like this:

google.load('visualization', '1.0');
google.load('jquery', '1');
google.load('maps', '3');

google.setOnLoadCallback(function() {
  // your code here
});

and the callback function will run after all of the APIs have loaded.

This is far nicer than including individual URLs to APIs as separate script tags, but the documentation is a bit limited. For example, the list of supported APIs doesn’t include the Google Maps API, but sample code from another team within Google (the Earth Engine team) already uses this method of loading the Maps API.

The problem is that I couldn’t find a way to specify that I wanted to load the Google Maps Places library – so I had to go back to including a script tag:

<script type="text/javascript"

But this seemed to conflict with the Google API Loader way of doing things.

Anyway, to cut a long story short, I’ve worked out how to use the Google API Loader to load the API with the Places library. Just do this:

google.load('maps', '3', {other_params:'libraries=places'});

This also works for any other parameters you want to put in the API URL, and may work for other Google APIs (although I haven’t tried it).

IPython tips, tricks & notes – Part 1

During the last week, I attended the Next Generation Computational Modelling (NGCM) Summer Academy at the University of Southampton. Three days were spent on a detailed IPython course, run by MinRK, one of the core IPython developers, and two days on a Pandas course taught by Skipper Seabold and Chris Fonnesbeck.
The course was very useful, and I’m going to post a series of blog posts covering some of the things I’ve learnt. All of the posts will be written for people like me: people who already use IPython or Pandas but may not know some of the slightly more hidden tips and techniques.

Useful Keyboard Shortcuts

Everyone knows the Shift-Return keyboard shortcut to run the current cell in the IPython Notebook, but there are actually three ‘running’ shortcuts that you should know:

  • Shift-Return: Run the current cell and move to the cell below
  • Ctrl-Return: Run the current cell and stay in that cell
  • Opt-Return: Run the current cell, create a new cell below, and move to it

Once you know these you’ll find all sorts of useful opportunities to use them. I now use Ctrl-Return a lot when writing code, running it, changing it, running it again etc – it really speeds that process up!
Also, everyone knows that TAB does autocompletion in IPython, but did you know that Shift-TAB pops up a handy little tooltip giving information about the currently selected item (for example, the argument list for a function, the type of a variable, etc.)? This popup box can be expanded to its full size by clicking the + button on the top right – or by pressing Shift-TAB again.

Magic commands

Again, a number of IPython magic commands are well known: for example, %run and %debug, but there are loads more that can be really useful. A couple of really useful ones that I wasn’t aware of are:


%%writefile

This writes the contents of the cell to a file. For example:

%%writefile test.txt
This is a test file!
It can contain anything I want...

And more...
Writing test.txt
!cat test.txt
This is a test file!
It can contain anything I want...

And more...


%xmode

This changes the way that exceptions are displayed in IPython. It can take three options: plain, context and verbose. Let’s have a look at these.
First we create a simple module with a couple of functions; this just gives us a way to produce a stack trace with multiple functions that leads to a ZeroDivisionError.


%%writefile

def f(x):
    return 1.0/(x-1)

def g(y):
    return f(y+1)

Now we’ll look at what happens with the default option, context:

import mod
ZeroDivisionError                         Traceback (most recent call last)
<ipython-input-6-a54c5799f57e> in <module>()
      1 import mod
----> 2 mod.g(0)

/Users/robin/code/ngcm/ngcm_ipython_tutorial/Robin'sNotes/ in g(y)
      5 def g(y):
----> 6     return f(y+1)

/Users/robin/code/ngcm/ngcm_ipython_tutorial/Robin'sNotes/ in f(x)
      2 def f(x):
----> 3     return 1.0/(x-1)
      5 def g(y):

ZeroDivisionError: float division by zero
You’re probably fairly used to seeing that: it’s the standard IPython stack trace view. If we want to go back to plain Python output we can set the mode to plain – as you can see below, you then don’t get any context from the lines surrounding the exception. Not so helpful!

%xmode plain
Exception reporting mode: Plain
import mod
Traceback (most recent call last):

  File "<ipython-input-8-a54c5799f57e>", line 2, in <module>

  File "/Users/robin/code/ngcm/ngcm_ipython_tutorial/Robin'sNotes/", line 6, in g
    return f(y+1)

  File "/Users/robin/code/ngcm/ngcm_ipython_tutorial/Robin'sNotes/", line 3, in f
    return 1.0/(x-1)

ZeroDivisionError: float division by zero
The most informative option is verbose, which gives all of the information that is given by context but also gives you the values of local and global variables. In the example below you can see that g was called as g(0) and f was called as f(1).

%xmode verbose
Exception reporting mode: Verbose
import mod
ZeroDivisionError                         Traceback (most recent call last)
<ipython-input-10-a54c5799f57e> in <module>()
      1 import mod
----> 2 mod.g(0)
        global mod.g = <function g at 0x10899aa60>

/Users/robin/code/ngcm/ngcm_ipython_tutorial/Robin'sNotes/ in g(y=0)
      5 def g(y):
----> 6     return f(y+1)
        global f = <function f at 0x10899a9d8>
        y = 0

/Users/robin/code/ngcm/ngcm_ipython_tutorial/Robin'sNotes/ in f(x=1)
      2 def f(x):
----> 3     return 1.0/(x-1)
        x = 1
      5 def g(y):

ZeroDivisionError: float division by zero


%load

The load magic loads a Python file, from a filepath or URL, and replaces the contents of the cell with the contents of the file. One really useful application is grabbing example code from the internet: for example, running %load with the URL of a matplotlib gallery example will create a cell containing that example’s code.

%connect_info & %qtconsole

IPython operates on a client-server basis, and multiple clients (which can be consoles, qtconsoles, or notebooks) can connect to one backend kernel. To get the information required to connect a new front-end to the kernel that the notebook is using, run %connect_info:

{
  "control_port": 49569,
  "signature_scheme": "hmac-sha256",
  "transport": "tcp",
  "stdin_port": 49568,
  "key": "59de1682-ef3e-42ca-b393-487693cfc9a2",
  "ip": "",
  "shell_port": 49566,
  "hb_port": 49570,
  "iopub_port": 49567
}

Paste the above JSON into a file, and connect with:
    $> ipython <app> --existing <file>
or, if you are local, you can connect with just:
    $> ipython <app> --existing kernel-a5c50dd5-12d3-46dc-81a9-09c0c5b2c974.json 
or even just:
    $> ipython <app> --existing 
if this is the most recent IPython session you have started.
There is also a shortcut that will load a qtconsole connected to the same kernel:

%qtconsole


Stopping output being printed

This is a little thing, rather reminiscent of Mathematica, but it can be quite handy: you can suppress the output of any cell by ending it with a semicolon. For example, ending a plotting cell with plot(x, y); will display the figure but suppress the printed list of matplotlib objects.

Right, that’s enough for the first part – tune in next time for tips on figures, interactive widgets and more.