So, hot on the heels of my last post, I’m now applying another concept from a computing blog post to science. For many years I’ve been a fan of Joel Spolsky’s blog, and I’ve learnt a lot from it. One of his most interesting posts was called The Law of Leaky Abstractions. As with my last post, I’ll briefly explain the computing context, and then broaden out to talk about its applications to science.
Joel states that the law of leaky abstractions is that:
All non-trivial abstractions, to some degree, are leaky.
Of course, that doesn’t make much sense without understanding what abstractions are, and what it means to say that they are leaky. Joel defines an abstraction as a simplification of something which hides the complex details that are occurring under the covers. Protocols such as TCP/IP are a good example of an abstraction in computing. They abstract the details of communicating over a computer network, allowing you to send messages across wires and guarantee that they will get there, without having to worry about how it actually works. The problem is that sometimes these abstractions leak – that is, some of the hidden complexity bubbles up to the surface. In the TCP/IP example above this might happen because the network cable has physically broken, or because some of the pins have become crossed, or because of faulty network infrastructure elsewhere.
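A small sketch makes the leak concrete. The code below (function and host names are my own, for illustration) uses Python’s standard socket API, which hides TCP’s retransmission, ordering and flow control behind a couple of simple calls – until something in the hidden layers fails, at which point the complexity leaks out as an exception:

```python
import socket

def send_message(host, port, message):
    """Send a message over TCP; the leaky abstraction surfaces as exceptions."""
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            # sendall() hides retransmission, packet ordering and flow control
            sock.sendall(message.encode("utf-8"))
            return True
    except (TimeoutError, ConnectionError, OSError) as leak:
        # The hidden complexity (broken cables, crossed pins, failed
        # infrastructure) bubbles up here as an error we must handle.
        print(f"Abstraction leaked: {leak}")
        return False

# Connecting to a port where (almost certainly) nothing is listening
# makes the leak visible:
send_message("127.0.0.1", 1, "Hello")
```

The point is that the calling code cannot truly ignore the wires: it still has to decide what to do when the abstraction fails.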
An example from everyday life is how a home heating system abstracts away the details of actually heating your house, and simply replaces it with a dial allowing you to control the temperature. That all works fine until your radiators get air trapped in them (as happens in my Southampton house worryingly frequently), or the heating pump fails, or the water tank overflows.
Now, these kinds of abstractions can be seen in science too. Newtonian mechanics is an abstraction of the underlying complexity of motion which works fine most of the time – until you get to speeds near the speed of light, at which point it stops working. Another example is atomic theory, which states that all matter is composed of individual atoms. In many situations, this theory works very well, but sometimes the underlying complexity of sub-atomic particles (such as neutrons, quarks or gluons) bubbles to the surface.
Abstractions in all fields often come in sets, or pyramids. For example, the abstraction of writing
print "Hello, world!"
to display some text on the screen has a whole pyramid of abstractions below it, a few examples of which are:
- The abstraction that characters and strings can be stored as integers (you only have to look at the difficulty in supporting foreign characters in many programming languages to realise that this abstraction is very leaky)
- The abstraction that programs have an unlimited memory address space to use (performance issues caused by page-faults, and error messages about Windows virtual memory show how this leaks)
- The abstraction of a filesystem containing files and folders (replacing the complexity of tiny magnetic areas on a hard disk)
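The first of these leaks is easy to demonstrate. In the sketch below, a string with an accented character looks like a simple sequence of five characters, but underneath each character is just a number – and the moment two layers disagree about which numbers mean which characters, the garbled text leaks through:

```python
# The "strings are just numbers" abstraction: each character is
# stored as one or more integers underneath.
text = "naïve"
code_points = [ord(c) for c in text]   # [110, 97, 239, 118, 101]

# The leak: the same five-character string occupies a different number
# of bytes depending on the encoding...
as_utf8 = text.encode("utf-8")       # 6 bytes ('ï' needs two)
as_latin1 = text.encode("latin-1")   # 5 bytes

# ...and decoding with the wrong encoding garbles the text.
garbled = as_utf8.decode("latin-1")
print(code_points, len(as_utf8), len(as_latin1), garbled)
```

This is exactly the foreign-character difficulty mentioned above: the complexity of how characters map to integers bubbles up as soon as anything non-ASCII is involved.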
This type of pyramid is so common that there is even an official standard for it within the field of computer networking: the Open Systems Interconnection model. This has 7 layers, using abstractions from the simple (transmitting one bit over an electrical link) to the complex (sessions and application protocols).
Similarly, there is a pyramid of abstractions in many areas of science. For example, if you abstract away the behaviour of individual electrons you can produce ‘laws’ of electrical circuits. Combining these laws produces the abstractions of electronics (involving complex circuits and logic gates), and these can then be abstracted away again with the creation of integrated circuits, and so on, until a complex electronic device is created. However, every so often the complexity will bubble up again. For example, a fundamental limitation on the speed of modern computer processors is the physical scale of electrons and atoms (circuits cannot be shrunk indefinitely before quantum effects take over) – which is a big leak from the bottom abstraction (electrons abstracted into the laws of electrical circuits) to the top abstraction (computer processors which run high-level code).
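A couple of rungs of that electronics pyramid can even be sketched in code. In the illustration below (entirely my own construction), a NAND gate stands in for the transistor level; every other gate is built from NAND, and an adder is built from the gates – each layer hiding the one beneath it:

```python
# Bottom rung: a NAND gate, standing in for the transistor level.
def nand(a, b):
    return 0 if (a and b) else 1

# Next rung: other gates built purely from NAND.
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    return and_(or_(a, b), nand(a, b))

# Top rung: a half-adder built from the gates. Code using it can add
# bits without knowing any of the gate-level details below.
def half_adder(a, b):
    return xor(a, b), and_(a, b)   # (sum, carry)

print(half_adder(1, 1))  # 1 + 1 = binary 10, i.e. sum 0, carry 1
```

Each function only sees the layer directly beneath it – which is precisely what makes the pyramid so productive, and what makes a leak from the bottom to the top so disruptive.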
Now, this all raises some interesting questions – some of which I’m hoping to address in future posts:
- Are these pyramids of abstractions present in all areas of science? If not, then why not? I have a suspicion that mathematics will prove to be different…
- Is the process of creating these abstractions part of what defines a field as a science? How does the gradual building of the abstraction pyramid link to theories about the progression of science (such as Kuhn’s paradigms)?
- How does the pyramid of abstractions look in a particular field (for example, my field of remote sensing)? Does thinking about the pyramid of abstractions for your field lead to good ideas for research?
- Does ‘good’ (whatever that may mean) research seek to build abstractions, or to demolish abstractions?
Hopefully I’ll get the chance to revisit some of these issues in future posts – but in the meantime please leave your comments.
I listened to a very interesting episode of The Pod Delusion podcast today. In fact, all of the bits of the programme were interesting: from a discussion of faith schools from an atheist perspective, to a description of Tesco’s planning policies from a local council planning officer. However, the item on the programme that I want to talk about was called Default Deny and was written by Sean Ellis. Below I’ll give a brief description of his argument before talking about how I think this applies to academic life.
Sean took his inspiration from an article called The Six Dumbest Ideas in Computer Security, by Marcus Ranum, which states that the dumbest idea is Default Accept. Until recently, many computer security systems were designed around the idea of letting everything through apart from things they were specifically told to stop (thus the name: Default Accept). This is easily seen in Windows anti-virus software. Windows (at least before Windows XP) used the Default Accept principle, so anti-virus software was created with huge lists of pieces of software which must be denied. In his article, Marcus Ranum argues that it is far more sensible to base the system on a Default Deny principle, thus only allowing specific software to run.
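The difference between the two policies is stark when written down. In this sketch (the program names and function names are my own, purely illustrative), Default Accept checks a blocklist while Default Deny checks an allowlist – and a brand-new, never-seen-before virus slips straight past the first but not the second:

```python
def default_accept(program, blocklist):
    """Run everything except what is explicitly blocked."""
    return program not in blocklist

def default_deny(program, allowlist):
    """Block everything except what is explicitly allowed."""
    return program in allowlist

blocklist = {"known_virus.exe"}           # must list every bad program
allowlist = {"word.exe", "excel.exe"}     # only lists the known-good ones

# A brand-new virus, not yet on anyone's blocklist:
print(default_accept("new_virus.exe", blocklist))  # allowed to run!
print(default_deny("new_virus.exe", allowlist))    # stopped
```

The blocklist has to anticipate every possible bad program; the allowlist only has to know the handful of good ones – which is the heart of Ranum’s argument.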
So far, so good – but what does this have to do with academia? Well, Sean applies this to ideas by asking whether people have Default Deny brains or Default Accept brains, that is: whether they accept any new ideas by default, or whether they immediately reject them until they have studied the evidence and been convinced. He comes to the conclusion that many people have Default Accept brains, but that Default Deny brains would probably be a more healthy situation.
That struck a chord with me, in terms of scientific studies and academia in general. It can easily be said that science must be done with a Default Deny brain (or at least with a Default Deny hat on), but I think there is more to it than this. As a student in secondary school we were expected to work within a Default Accept paradigm. We weren’t expected to question what we were being taught – we were meant to just accept it at face value. That started to change at A-level when we were first introduced to some of the controversy in our subjects. I distinctly remember one of my A-level geography lessons when we were told that scientists still weren’t sure about how drumlins (a type of glacial landform) were formed. I suppose that was the day that I first became really interested in research: I’d found that we didn’t know everything, and I wanted to be one of the people that started to find things out!
However, even though we had been introduced to controversy at A-level we weren’t expected to really question the fundamentals we were taught. When writing an exam essay about the Burgess model I was expected to list a selection of its flaws, but not to question it at a fundamental level, nor to talk about the inherent problems with models.
At the beginning of my undergraduate degree I started to read journal papers: the original places where new scientific ideas are circulated. But, as a new student, I was far too in awe of the authors to criticise their work. After all, how could I, a lowly first year student, find problems in the work of professors and lecturers – especially work which had been accepted by equally experienced peer-reviewers? Gradually, however, I began to realise that it didn’t actually take a huge amount of knowledge to be able to intelligently critique someone’s work. Ignoring the name and affiliation of the author often helped with this. I distinctly remember finding a paper about the Empirical Line Method in a relatively unknown journal (something like the Chinese Journal of Soil Science – a strange place for a remote sensing paper), reading it, and realising that I could have written something better in my sleep! After that the realisations followed thick and fast: I realised that papers in Remote Sensing of Environment are not without their flaws, and that my unquestioned acceptance of the Ruddiman hypothesis had completely ignored the body of work that presented alternative theories.
The gradual change from a Default Accept paradigm to Default Deny is an important change which occurs during education. Sadly, this doesn’t seem to occur for many people (even students who go on to get good degrees at university), and it should really start far lower down in school (there is no reason why able students should not be questioning the accepted point of view at GCSE level). Science is entirely based around the questioning of facts, and the denying of knowledge that cannot be backed up with evidence. All of my favourite academic papers (and, indeed, most of the important papers ever written in science) came about from the author denying what they had read and heard, and considering alternative points of view. When the change from Default Accept to Default Deny takes place it marks the start of the transition of that individual to a true scientist – and is a mark of increasing academic maturity.