It's all about strength reduction, baby
There's an image processing algorithm that I've been playing around with. It does what it does fairly well, except that it's slower than we'd like.
If you look at most image processing algorithms, there's an invariant that contributes to the overall time taken by the algorithm: the area of the image in pixels. In all but a few cases, you have no choice but to be a slave to that invariant. Worse, since the area grows as a function of width × height, if you double an image's dimensions, your routine will take four times as long. Quadruple the dimensions, and your routine takes 16 times as long.
Given that, what we really want is an algorithm that takes O(kA), where k is a constant and A is the area. Where possible, we want k to be as small as it can be. Sometimes we get O(nA) where n grows linearly. We can live with that.
The algorithm I have been kicking around is O(n²A). So for every pixel, there are n² separate operations. That's bad. Since A already grows as a function of a square, that means that we're looking at an overall O(m⁴) algorithm (treating n as if it scaled with the image dimension m, where A = m²). Well, not really, since n is realistically a value between 9 and 50, but as n grows, the pain still grows as the square.
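To make the shape of that cost concrete, here's a minimal sketch in C of an O(n²A) pass. It assumes, purely for illustration, that the per-pixel work is a sum over an n×n window of an 8-bit grayscale image; that's not necessarily what my algorithm does, but the loop structure is the same: two loops over the image, two more over the window.

```c
#include <stdint.h>

/* Naive O(n^2 * A) pass: for every pixel, visit an n x n window.
   Illustrative only -- a plain windowed sum, clamped at the edges. */
void window_sum_naive(const uint8_t *src, uint32_t *dst,
                      int width, int height, int n)
{
    int r = n / 2;  /* window radius */
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            uint32_t sum = 0;
            for (int dy = -r; dy <= r; dy++) {
                for (int dx = -r; dx <= r; dx++) {
                    int sx = x + dx, sy = y + dy;
                    if (sx < 0) sx = 0; else if (sx >= width)  sx = width - 1;
                    if (sy < 0) sy = 0; else if (sy >= height) sy = height - 1;
                    sum += src[sy * width + sx];
                }
            }
            dst[y * width + x] = sum;  /* n * n operations went into this pixel */
        }
    }
}
```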
This morning I figured out a way to do the identical calculation and reduce this to O(2nA). Nice.
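I won't spell out the actual trick here, but one classic way that kind of strength reduction falls out is separability: if the n×n window operation can be split into a horizontal pass and a vertical pass, each pixel costs n operations twice instead of n² once. The sketch below shows that shape for the same hypothetical windowed sum as above, and is an assumption about the technique rather than a description of my algorithm.

```c
#include <stdint.h>

/* Strength-reduced O(2n * A) version of the windowed sum, assuming the
   operation is separable: a horizontal pass into a temporary buffer,
   then a vertical pass over that buffer. Each pass is n operations per
   pixel, for 2n total. */
void window_sum_separable(const uint8_t *src, uint32_t *dst,
                          uint32_t *tmp, int width, int height, int n)
{
    int r = n / 2;  /* window radius */

    /* Horizontal pass: n additions per pixel. */
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            uint32_t sum = 0;
            for (int dx = -r; dx <= r; dx++) {
                int sx = x + dx;
                if (sx < 0) sx = 0; else if (sx >= width) sx = width - 1;
                sum += src[y * width + sx];
            }
            tmp[y * width + x] = sum;
        }
    }

    /* Vertical pass over the intermediate image: another n additions per pixel. */
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            uint32_t sum = 0;
            for (int dy = -r; dy <= r; dy++) {
                int sy = y + dy;
                if (sy < 0) sy = 0; else if (sy >= height) sy = height - 1;
                sum += tmp[sy * width + x];
            }
            dst[y * width + x] = sum;
        }
    }
}
```

The price is an intermediate buffer the size of the image, which is usually a cheap trade against dropping a factor of n from the inner loop.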