Convolution in Python – which function to use?
Slightly boringly, this is very similar to my last post – but it’s also something useful that you may want to know, and that I’ll probably forget if I don’t write it down somewhere.
Basically, scipy.ndimage.filters.convolve (also available as scipy.ndimage.convolve) is about twice as fast as scipy.signal.convolve2d.
I run convolutions a lot on satellite images, and Landsat images are around 8000 x 8000 pixels. Using a random 8000 x 8000 pixel image with a 3 x 3 kernel (a size I often use), I find that convolve2d takes 2.9 seconds, whereas convolve takes 1.1 seconds – a useful speed-up, particularly if you’re running a loop of various convolutions. The same sort of speed-up can be seen with larger kernels: for example, with a 9 x 9 kernel, convolve2d takes 14.7 seconds and convolve takes 6.8 seconds – an even more useful speed-up.
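If you want to reproduce this, a minimal sketch of the benchmark is below. The random image is a stand-in for a real Landsat scene, and the 3 x 3 mean-filter kernel is just an example of mine – exact timings will obviously vary by machine.

    import numpy as np
    from timeit import timeit
    from scipy.ndimage import convolve
    from scipy.signal import convolve2d

    # Random stand-in for an 8000 x 8000 pixel Landsat image
    img = np.random.rand(8000, 8000)
    # Example kernel: a simple 3 x 3 mean filter
    kernel = np.ones((3, 3)) / 9.0

    print(timeit(lambda: convolve2d(img, kernel, mode='same'), number=1))
    print(timeit(lambda: convolve(img, kernel, mode='constant', cval=0.0), number=1))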
When I switched my code from convolve2d to convolve, I had to do a bit of playing around to get the various options set so that I got the same results. For reference, the following two calls are equivalent:
    from scipy.ndimage import convolve
    from scipy.signal import convolve2d

    res1 = convolve2d(img, kernel, mode='same')
    res2 = convolve(img, kernel, mode='constant', cval=0.0)
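As a quick sanity check of the equivalence – a sketch using a random image and a random kernel of my own, rather than real data:

    import numpy as np
    from scipy.ndimage import convolve
    from scipy.signal import convolve2d

    img = np.random.rand(100, 100)    # any random test image
    kernel = np.random.rand(3, 3)     # any 3 x 3 kernel

    res1 = convolve2d(img, kernel, mode='same')
    res2 = convolve(img, kernel, mode='constant', cval=0.0)

    # Both calls zero-pad beyond the image edges and return an output
    # the same size as the input, so the results should agree
    print(np.allclose(res1, res2))  # True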
Interestingly, these functions are both within SciPy, so there must be a reason for both of them to exist. Can anyone enlighten me? My naive view is that they could be merged – or, if not, that a note could be added to the documentation saying that convolve is far faster!
If you found this post useful, please consider buying me a coffee.
Quickly looking at the code, ndimage's convolve only works with real-valued data, whereas convolve2d is more generic but not as optimized for image data. That would be my guess for the factor of two, and for why they keep both functions.
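To illustrate the distinction the comment is drawing, here is a small sketch (the complex-valued image is my own example; whether scipy.ndimage.convolve accepts complex input depends on your SciPy version, so only the convolve2d half is shown):

    import numpy as np
    from scipy.signal import convolve2d

    # convolve2d handles complex-valued data directly
    img_c = np.random.rand(100, 100) + 1j * np.random.rand(100, 100)
    kernel = np.ones((3, 3)) / 9.0

    res = convolve2d(img_c, kernel, mode='same')
    print(res.dtype)  # complex128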