What's the difference between recursion, memoization & dynamic programming?


Solution 1

Consider calculating the fibonacci sequence:

Pure recursion:

int fib(int x)
{
    if (x < 2)
        return 1;

    return fib(x-1) + fib(x-2);
}

results in an exponential number of calls.

Recursion with memoization/DP:

int fib(int x)
{
    static vector<int> cache(N, -1); // N: compile-time upper bound on x, defined elsewhere

    int& result = cache[x];

    if (result == -1)
    {
        if (x < 2)
            result = 1;
        else
            result = fib(x-1) + fib(x-2);
    }

    return result;
}

Now we have a linear number of calls the first time, and a constant number thereafter.

The above method is called "lazy". We calculate the earlier terms the first time they are asked for.

The following would also be considered DP, but without recursion:

int fibresult[N];

void setup_fib()
{
    fibresult[0] = 1;
    fibresult[1] = 1;
    for (int i = 2; i < N; i++)
       fibresult[i] = fibresult[i-1] + fibresult[i-2];
}

int fib(int x) { return fibresult[x]; }

This way may be described as "eager", "precaching" or "iterative". It's faster overall, but we have to manually figure out the order in which the subproblems need to be calculated. This is easy for Fibonacci, but for more complex DP problems it gets harder, so we fall back to the lazy recursive method when it is fast enough.

Also the following is neither recursion nor DP:

int fib(int x)
{
    int a = 1;
    int b = 1;
    for (int i = 2; i <= x; i++) // note: <=, so fib(2) == 2 matches the versions above
    {
        a = a + b;
        swap(a,b);
    }
    return b;
}

It uses constant space and linear time.

Also, for the sake of completeness, I will mention that there is a closed form for Fibonacci that uses neither recursion nor DP, and which allows us to calculate the Fibonacci term in constant time using a mathematical formula based on the golden ratio:

http://www.dreamincode.net/forums/topic/115550-fibonacci-closed-form/
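A sketch of that closed form (Binet's formula), adjusted to the fib(0) == fib(1) == 1 convention used in the snippets above; it is only exact while the rounding error of double-precision arithmetic stays below 0.5 (roughly x < 70):

```cpp
#include <cmath>

// Binet's formula: fib here equals the standard F(x+1) because this
// thread uses the fib(0) == fib(1) == 1 convention.
long long fib(int x)
{
    const double sqrt5 = std::sqrt(5.0);
    const double phi = (1.0 + sqrt5) / 2.0; // golden ratio
    // The psi^n correction term is smaller than 0.5, so rounding suffices.
    return std::llround(std::pow(phi, x + 1) / sqrt5);
}
```

As a commenter notes below, whether this is truly constant time depends on how you cost the floating-point `pow` call.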

Solution 2

Well, recursion + memoization is precisely a specific "flavor" of dynamic programming: dynamic programming using the top-down approach.

More precisely, there's no requirement to use recursion specifically. Any divide & conquer solution combined with memoization is top-down dynamic programming. (Recursion is the LIFO flavor of divide & conquer, while you can also use FIFO divide & conquer or any other kind of divide & conquer.)
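To illustrate the LIFO point, the same top-down memoized computation can be driven by an explicit stack instead of the call stack. This is a sketch under the thread's fib(0) == fib(1) == 1 convention, not code from the original answer:

```cpp
#include <stack>
#include <vector>

// Top-down DP without language-level recursion: subproblems are
// expanded onto an explicit LIFO stack, and each result is filled in
// once both of its dependencies have been memoized.
long long fib(int x)
{
    std::vector<long long> memo(x + 1, -1);
    std::stack<int> work;
    work.push(x);
    while (!work.empty())
    {
        int n = work.top();
        if (memo[n] != -1)      { work.pop(); }             // already solved
        else if (n < 2)         { memo[n] = 1; work.pop(); } // base case
        else if (memo[n - 1] != -1 && memo[n - 2] != -1)
        {
            memo[n] = memo[n - 1] + memo[n - 2];             // combine step
            work.pop();
        }
        else
        {
            work.push(n - 1);                                // expand step
            work.push(n - 2);
        }
    }
    return memo[x];
}
```

Swapping the `std::stack` for a queue would give the FIFO flavor mentioned above, at the cost of a slightly more involved combine step.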

So it is more correct to say that

divide & conquer + memoization == top-down dynamic programming

Also, from a very formal point of view, if you implement a divide & conquer solution for a problem that does not generate repetitive partial solutions (meaning that there's no benefit in memoization), then you can claim that this divide & conquer solution is a degenerate example of "dynamic programming".

However, dynamic programming is a more general concept. Dynamic programming can use bottom-up approach, which is different from divide & conquer+memoization.

Solution 3

I'm sure you can find detailed definitions on the internet. Here is my attempt to simplify things.

Recursion is a function calling itself.

Dynamic programming is a way to solve problems which exhibit a specific structure (optimal substructure), where a problem can be broken down into subproblems which are similar to the original problem. Clearly one can invoke recursion to solve a DP, but it is not necessary: one can solve a DP without recursion.

Memoization is a way to optimize DP algorithms that rely on recursion. The point is not to solve again a subproblem that has already been solved. You can view it as a cache of solutions to subproblems.

Solution 4

They are different concepts. They overlap pretty often, but they are different.

Recursion happens whenever a function calls itself, directly or indirectly. This is all.

Example:

a -> call a
a -> call b, b -> call c, c -> call a
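In code, the indirect case might look like this pair of hypothetical mutually recursive helpers (not from the original answer):

```cpp
// Mutual (indirect) recursion: is_even calls is_odd, which calls
// is_even again, mirroring "a -> call b, b -> call a" above.
bool is_odd(unsigned n);

bool is_even(unsigned n)
{
    return n == 0 ? true : is_odd(n - 1);
}

bool is_odd(unsigned n)
{
    return n == 0 ? false : is_even(n - 1);
}
```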

Dynamic programming is when you use solutions to smaller subproblems in order to solve a larger problem. This is easiest to implement recursively because you usually think of such solutions in terms of a recursive function. An iterative implementation is usually preferred though, because it takes less time and memory.

Memoization is used to prevent a recursive DP implementation from taking far more time than needed. Most of the time, a DP algorithm will use the same subproblem in solving multiple larger problems. In a recursive implementation, this means we would recompute the same thing multiple times. Memoization means saving the results of these subproblems in a table. When entering a recursive call, we check whether its result exists in the table: if yes, we return it; if not, we compute it, save it in the table, and then return it.
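The check-compute-store pattern described here can be sketched with a hash map, so the cache needs no predeclared size; this is a variation for illustration, not the code from the earlier answers:

```cpp
#include <unordered_map>

// Memoized fib with a dynamically sized table: check the table,
// compute on a miss, store the result, then return it.
long long fib(int x)
{
    static std::unordered_map<int, long long> table;
    auto it = table.find(x);
    if (it != table.end())
        return it->second;                     // hit: reuse the result
    long long result = (x < 2) ? 1 : fib(x - 1) + fib(x - 2);
    table[x] = result;                         // miss: store before returning
    return result;
}
```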

Author: Sakibul Alam

Updated on July 05, 2022

Comments

  • Sakibul Alam
    Sakibul Alam almost 2 years

    Possible Duplicate:
    Dynamic programming and memoization: top-down vs bottom-up approaches

    I have gone through a lot of articles on this but can't seem to make sense of it. At times recursion and dynamic programming look the same, and at other times memoization & dynamic programming look alike. Can someone explain to me what the difference is?

    P.S. It will also be helpful if you could point me to some code using the three approaches on the same problem. (e.g. the Fibonacci series problem, I think every article I read used recursion but referred to it as dynamic programming)

  • BlueRaja - Danny Pflughoeft
    BlueRaja - Danny Pflughoeft over 11 years
    DP (almost?) always involves recursion and memoization, so saying they have nothing to do with each other is incorrect.
  • ninjagecko
    ninjagecko over 11 years
    @BlueRaja-DannyPflughoeft: You're misinterpreting what I say: this is why I said "..they are completely separate concepts." as clarification. You can always turn a recursive algorithm into an iterative algorithm. DP and memoization take advantage of optimal substructure; that doesn't make them per se recursive; recursion is just one way to view the exploitation of optimal substructure. The fact that pigeons tend to perch on buildings does not make pigeons a related concept to buildings, unless you're arguing that, in which case that's fine.
  • IVlad
    IVlad over 11 years
    Your last example is DP, you just reduce the memory. The algorithm is the same as in your previous two examples.
  • Sakibul Alam
    Sakibul Alam over 11 years
    So, what I understand is: recursion and memoization are used to solve DP problems, but they are totally separate things. And divide and conquer problems differ from DP in that the sub-problems do not overlap.
  • Sakibul Alam
    Sakibul Alam over 11 years
    Thanks for the examples. Is pure recursion not DP until we use memoization? And the last example is the memoization approach, right?
  • Ankush
    Ankush over 11 years
    @Shuvo Yes. DP is a type of Divide and Conquer. You are right that in DP we end up having overlapping sub problems. We exploit this fact and save time by storing sub solutions in tables.
  • Andrew Tomazos
    Andrew Tomazos over 11 years
    Most people would not consider the last coded answer DP, they would call it a simple iterative solution. Notably it doesn't keep track of the mapping between the term number and the answer for that term. In the end there isn't a definitive test that can say yes or no something is DP or not. A pure recursion that doesn't cache the answers (which is all that memoization means) is generally not considered DP.
  • user13107
    user13107 about 10 years
    @AndrewTomazos could you please explain why the second example is DP? I get that it is recursion, but I'm not getting why it's DP.
  • Andrew Tomazos
    Andrew Tomazos about 10 years
    @user13107: It memoizes the answers in a cache so if two calls are made f(3) and then later f(3) again, only the first does the computation, the second call gets the cached result from the first. This is generally considered a form of DP.
  • Vlad the Impala
    Vlad the Impala about 9 years
    Just because there's a closed form solution does not mean it takes constant time.
  • Sebastian Graf
    Sebastian Graf over 6 years
    The last example is still DP, but only stores the current 'wavefront' (which coincidentally is just a single memory cell). Which is just a fancy way of detecting that former cells will not be referenced anymore and can thus be 'garbage-collected' or reused.
  • Sebastian Graf
    Sebastian Graf over 6 years
    That's equivalent to Hirschberg's Algorithm computing the Levenshtein Distance in linear space, although the full DP matrix has O(nm) entries.
  • Sebastian Graf
    Sebastian Graf over 6 years
    The bottom-up approach computes the whole matrix, whether the results are actually needed or not, whereas the top-down approach is more like lazy evaluation: Results are computed only when demanded, but most of the time the bookkeeping associated with this kind of caching is outperformed by the access patterns of and the ability to properly parallelize array-based code.
  • Seshadri R
    Seshadri R over 5 years
    Nit-picking. Your code under Recursion with memoization/DP should be int fib() instead of void fib()
  • Adil Saju
    Adil Saju over 5 years
    great example. Nice
  • Nathan B
    Nathan B about 2 years
    It's not constant time, because computing the power is not really constant time.
  • Andrew Tomazos
    Andrew Tomazos about 2 years
    @NathanB: It's complicated; precise definitions of asymptotic behaviour don't work well with the non-ideal fixed-precision numeric types that appear in real finite computers. The closed-form algorithm for Fibonacci is generally casually referred to as constant time. It's certainly the fastest known solution in any case.