An easy way to visualize this is in calculating Fibonacci numbers. The standard recursive solution can be memoized to significantly improve time complexity at the cost of using more space to store sub-problem results. This is top-down dynamic programming—recursion and memoization. Top-down dynamic programming can be easier to program and anecdotally it is more commonly used.
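As a sketch of what memoizing the recursion looks like (the function name `fib` and the use of `functools.lru_cache` are my choices, not prescribed by anything above):

```python
from functools import lru_cache

# Top-down DP: the naive recursion is unchanged, but a cache ensures
# each fib(i) is computed only once, cutting time from exponential to
# O(n) at the cost of O(n) space for the cached results.
@lru_cache(maxsize=None)
def fib(n: int) -> int:
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```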
Bottom-up dynamic programming is instead iterative: we solve the smallest sub-problems first and build them into bigger ones. Without the overhead of recursive calls, we don't necessarily increase space complexity, but we still get the decrease in time complexity. For calculating the nth Fibonacci number this looks like:
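One way to write the bottom-up version (a minimal sketch; only the last two values are ever needed, so two variables replace any table):

```python
def fib(n: int) -> int:
    # Build up from the base cases, keeping only the two most
    # recent Fibonacci numbers: O(n) time, O(1) space.
    prev, curr = 0, 1
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev
```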
This is time complexity O(n) and space complexity O(1), a huge improvement on the naive recursive solution that's O(2^n) time and O(n) space. For details on big-O of recursive algorithms read this.
Some other interesting applications of dynamic programming are:
- Dijkstra's algorithm (shortest paths)
- Tower of Hanoi (teaching game)
- Knapsack problem (combinatorial optimization)
The knapsack problem, for example, has a top-down solution, but I think the bottom-up solution is especially appealing. Using a matrix to store sub-problem solutions we can make the O(2^n) time recursive algorithm O(nW) time and space:
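A sketch of the matrix version for the 0/1 knapsack (the names `knapsack`, `weights`, `values`, and `W` for the capacity are my assumptions):

```python
def knapsack(weights: list, values: list, W: int) -> int:
    # dp[i][w] = best value achievable using the first i items
    # with remaining capacity w. The matrix is (n+1) x (W+1),
    # giving O(nW) time and O(nW) space.
    n = len(weights)
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            dp[i][w] = dp[i - 1][w]  # option 1: skip item i-1
            if weights[i - 1] <= w:  # option 2: take item i-1 if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][W]
```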
But wait, there's more. Each row of the matrix depends only on the previous row, which means we don't need the full matrix at all. It can be further optimized to use only O(W) space:
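The space-optimized version might look like this (same assumed names as before; the key trick is iterating the capacity downward so each item is counted at most once):

```python
def knapsack(weights: list, values: list, W: int) -> int:
    # A single row of length W+1 stands in for the whole matrix.
    # Iterating w from high to low means dp[w - wt] still holds the
    # previous item's row when we read it: O(nW) time, O(W) space.
    dp = [0] * (W + 1)
    for wt, val in zip(weights, values):
        for w in range(W, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[W]
```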
Not, in my opinion, an obvious solution by any means, but a very elegant one.