The Curious Case of Dynamic Programming

Abhinandan Sharma
5 min read · Mar 24, 2021


In this article, we are going to go through what exactly dynamic programming is and how it makes our lives easier. But first, let's discuss algorithms a bit.

What is an algorithm?

An algorithm is like a recipe for Maggi: when making it, we go through a certain set of instructions.

  1. Boil water
  2. Pour the Maggi noodle
  3. Add the spices for flavor
  4. Heat until boiled
  5. Eat and enjoy

Here we see that we went through a defined set of steps to get to our desired result. An algorithm is the same thing; a more formal definition is:

“An algorithm is a finite, well-defined set of instructions for solving a problem.”

Now, there are many algorithms used in different places for different purposes, such as:

  • Divide and conquer
  • Brute force
  • Randomized algorithms
  • Recursive algorithm
  • Greedy algorithm
  • Dynamic programming

What we are going to discuss here are dynamic programming algorithms.

What are Dynamic programming algorithms? Where are they used?

Consider this: when you were a child, your maths teacher gave you the problem of adding the first ten natural numbers. You could go on adding the numbers one by one, or you could split the task into subproblems by dividing the ten (even) numbers into pairs of two.

So the solution would be:

(1+2)+(3+4)+(5+6)+(7+8)+(9+10)
= 3+7+11+15+19
= 10+26+19
= 36+19
= 55

Now, if the teacher asks you to add the first eleven natural numbers, you would just recall your previous result and add eleven to it:

(result for n-1) = 55
55 + 11 = 66

So dynamic programming is a fancy term for doing this.

Dynamic programming is formally defined as a mathematical optimization technique in which a bigger problem is broken into smaller subproblems. The point to note here is that the subproblems are not independent. In dynamic programming, we store the solutions of the subproblems so as not to compute them again; this is what's called memoization.

It's basically like the divide-and-conquer approach, but unlike in divide and conquer, the subproblems are not independent.

Let’s look into an example to understand this definition more.

In the Fibonacci sequence, each number is the sum of the previous two numbers in the sequence, mathematically defined as follows:

Fibonacci(n) = 1 if n = 0
Fibonacci(n) = 1 if n = 1
Fibonacci(n) = Fibonacci(n-1) + Fibonacci(n-2) otherwise

Recursive code:
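The original snippet isn't reproduced here, but a minimal sketch in Python, assuming the article's convention that Fibonacci(0) = Fibonacci(1) = 1, might look like this:

```python
def fib(n):
    # Base cases: the article defines Fibonacci(0) = Fibonacci(1) = 1
    if n == 0 or n == 1:
        return 1
    # Each call spawns two more calls, so the work grows roughly as O(2^n)
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 89
```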

Dynamic programming approach:
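Again as a sketch, a bottom-up Python version using the "database array" the article describes might look like this:

```python
def fib_dp(n):
    # "Database array" seeded with the base cases fib(0) = fib(1) = 1;
    # each subresult is stored (memoized) exactly once
    table = [1] * (n + 1)
    for i in range(2, n + 1):
        # Each entry is built from two already-stored entries: O(n) total work
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_dp(10))  # 89
```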

Now you may think these solutions look similar, and indeed they compute the same thing, but their execution is logically different. In plain recursion we call the function again and again, recomputing the same values, while in dynamic programming we store the result for the i-th Fibonacci number in an array (memoization), which can then be accessed at any moment.

Space and Time Complexity

Recursion is not time-efficient for larger values of n: we can clearly see that the recursive solution wastes time recomputing subproblems, whereas in the dynamic programming approach we trade memory (the storage array) for time efficiency.

Time complexity= number of instructions * time taken per instruction

The time complexity of recursion is O(2^n)

The time complexity of Dynamic programming is O(n).

As we are using a memory/database array to store n results, therefore space complexity is O(n).


Dynamic programming is mostly categorized into two forms of problems:

  1. Optimization problems
  2. Combinatorial problems

From the name itself, we can understand optimization problems require an optimized solution, whereas Combinatorial problems expect us to find the total number of solutions/ways possible for a problem.

When to use Dynamic Programming?

We should only use the dynamic programming approach when it's efficient, and its efficiency is a function of two things:

  1. The number of subproblems
  2. How much time it takes to solve each subproblem

These factors determine when it’s best to use the Dynamic Programming approach.

How can dynamic programming be implemented?

Dynamic programming uses two different techniques:

  1. Memoization
  2. Tabulation

Memoization — This is a laissez-faire, lazy approach: we don't pre-decide the order of computation. We start with the root problem and recurse down into the subproblems, making sure we never recalculate a subproblem; since each result is stored in the cache the moment it is computed, duplicate subtrees are never re-evaluated.

  • example: to calculate the Fibonacci of the 10th number, fib(10), we would use fib(10) = fib(9) + fib(8), which calls fib(9) = fib(8) + fib(7), etc., resolving finally to fib(3) = fib(2) + fib(1); but fib(2) (and every other repeated subproblem) is never recomputed, because we cached it.
  • Starting at the top of the tree and computing subproblems on demand leads us from the subtrees back up to the optimal solution (the root).
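The lazy, cached evaluation described above can be sketched in Python like this (using a plain dict as the cache; the names are illustrative):

```python
cache = {0: 1, 1: 1}  # cache seeded with the base cases fib(0) = fib(1) = 1

def fib_memo(n):
    # Top-down: recurse from the root problem, but consult the cache first,
    # so no subtree is ever evaluated twice
    if n not in cache:
        cache[n] = fib_memo(n - 1) + fib_memo(n - 2)
    return cache[n]

print(fib_memo(10))  # 89
```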

Tabulation — In tabulation, the order of calculation is pre-decided. It is a bit like memoization but more eager, and we must know the correct order in which the computations are carried out. We start from the base of the tree (the zero state) and work up to the destination, reading the values of already-computed states from the table/database.

  • example: for the Fibonacci sequence, we would calculate fib(2), fib(3), fib(4), …, caching every computed state, so each following value can be computed easily from the previous ones.

Implementing a DP approach can be done in two ways:

  1. Bottom-Up
  2. Top-Down

In layman's terms:

  • Bottom-Up (Tabulation) — I'm going to be healthy. I'll start by eating healthy food. Then I'll exercise regularly. After putting in hours of training and working out, I'll be a fit person.
  • Top-Down (Memoization) — I will be a fit person. How? By working out regularly. What do workout sessions require? Energy from healthy food. So I'll start by eating healthy.

In the top-down approach, we start from the big solution and work out how the smaller solutions are computed, while in the bottom-up approach we build the solution starting from the subproblems.

Examples of tabulation and memoization computing a factorial:

TABULATION -
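The original snippet isn't shown here; a minimal tabulated sketch in Python might be:

```python
def factorial_tab(n):
    # Tabulation: fill the table in a pre-decided order, from 0! up to n!
    table = [1] * (n + 1)
    for i in range(1, n + 1):
        table[i] = i * table[i - 1]
    return table[n]

print(factorial_tab(5))  # 120
```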

MEMOIZATION -
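And a memoized sketch of the same computation, again with a plain dict as the cache:

```python
fact_cache = {0: 1}  # base case: 0! = 1

def factorial_memo(n):
    # Memoization: compute n! lazily from the top, caching each result on the way
    if n not in fact_cache:
        fact_cache[n] = n * factorial_memo(n - 1)
    return fact_cache[n]

print(factorial_memo(5))  # 120
```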

Where is Dynamic programming used?

Dynamic programming finds its application in various real-life activities such as:

  1. Google Maps: when we are trying to get from a source to a destination, it can cache a series of intermediate destinations (memoization) while finding multiple routes.
  2. In networks using various routers, to find multiple paths for routing.
  3. Document distance problem which is used by various search engines.
  4. String matching algorithms, with applications in genomics for finding similar regions in different genome sequences.
  5. Git Merge.
  6. In TeX (the typesetting engine on which LaTeX is built), e.g. for breaking paragraphs into lines.
  7. In databases for caching queries.

Summary:

Dynamic programming is an optimization technique that trades space for time. It finds numerous applications in real life: search engines, maps, and more.

References:

  1. https://www.geeksforgeeks.org/dynamic-programming/
  2. http://web.mit.edu/15.053/www/AMP-Chapter-11.pdf
  3. https://developer.ibm.com/articles/j-seqalign/
  4. http://6.006.scripts.mit.edu/~6.006/spring08/wiki/index.php?title=Document_Distance_Problem_Definition#:~:text=We%20define%20the%20distance%20between,pair%20of%20input%20text%20documents.
