Dynamic Programming for Coding Interviews | Skilled.dev

Dynamic Programming

Course launching November 17, 2020


Dynamic programming is one of the most misunderstood and feared topics for programming interviews. Most explanations use complex terminology and jargon instead of defining the process simply. Here, we'll start from the most basic ideas stated in simple English and build up from that.

Dynamic programming takes time and practice, so work through this lesson, answer the interview questions, and it will eventually click.

As we work through this article, we'll integrate some of the common terminology like subproblem, memoization, and top-down vs bottom-up, so make sure you have completed those sections first so this lesson is clearer. You may notice this article looks like those previous lessons, and that's because it is. We've already learned all the core concepts for dynamic programming, and this article ties them together to give the complete picture.

Dynamic programming is an optimization. After we calculate a piece of a solution, we store it in a data structure so we can use it later instead of performing the calculation again.

Let's re-define dynamic programming so it matches how it's used in interview questions.

"Dynamic programming": breaking a problem into small increments, solving each increment as its own step, and storing the result of each step in a hash table (or some other data structure). Once you have solved all the increments, you combine them to find the answer.

Dynamic programming often means calculating ALL the values for all steps to reach the desired solution. We store the result of each step so it only needs to be calculated once, which is the key to dynamic programming. While it might sound inefficient to calculate every step, often it is the only way. The optimization is that we store the results of each step so we perform each calculation only once when we have overlapping subproblems.
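
The "store each step's result" idea can be sketched as a small generic helper. The `memoize` name and structure here are our own illustration, not part of the lesson's code:

```javascript
// A tiny memoization helper: wraps a function so each distinct
// argument is computed only once and cached in a plain object.
const memoize = (fn) => {
  const cache = {};
  return (n) => {
    if (n in cache) {
      return cache[n]; // reuse the stored result
    }
    const result = fn(n);
    cache[n] = result; // store the result for later calls
    return result;
  };
};

// Example: a step we only want to compute once per input.
let calls = 0;
const square = memoize((n) => {
  calls += 1;
  return n * n;
});

square(4); // computed
square(4); // read from the cache, so calls is still 1
```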

The keys to dynamic programming:

  1. There are subproblems: we can solve the same problem repeatedly with different input values and combine them to calculate the solution
  2. The subproblems are overlapping: the algorithm requires the solution from a subproblem using the same input multiple times
  3. We can memoize the results of overlapping subproblems to remove duplicate work: after calculating the solution to a subproblem with a given input, we store its result in a hash table

With dynamic programming, each subproblem is a decision. At each point we consider the current value and the solutions to the subproblems we previously calculated. Then we make the best choice to solve the current subproblem given the data available, producing the local solution at each point. Once we have solved the problem at each step, we will have calculated our overall solution.

The key to dynamic programming is learning to think in subproblems, and this comes with practice. Don't memorize tables or recursive trees. Learn to think about solving the problem in steps using the same logic at each increment but with different input. Then just store the answer in a data structure so you don't have to repeat the calculations again.

Dynamic Programming Example

Let's take calculating a Fibonacci number as an example for dynamic programming. Any given Fibonacci number is defined as the sum of the two numbers that come before it in the sequence. So the 100th Fibonacci number is the 99th plus the 98th: fib(100) = fib(99) + fib(98). But to find the 99th and 98th, we must find the two numbers before each of those.

So to find the value of the 100th Fibonacci number, we find the value of all 99 Fibonacci numbers before it. Each calculation introduces overlapping subproblems because each number is used multiple times to calculate any given Fibonacci number. Finding the value of these 99 numbers step-by-step and using memoization to remove the duplicate calculations is exactly how dynamic programming works.

The key is storing the result calculated in each step. fib(100) = fib(99) + fib(98), but fib(99) = fib(98) + fib(97). Both fib(100) and fib(99) need fib(98), so we only calculate fib(98) once and then reuse its value when needed. We do this for all values that we calculate in the Fibonacci sequence so we solve each step only once.

Dynamic programming can be used in either top-down or bottom-up algorithms. In either case, we use a hash table (or some data structure) to store the results as we build our solution.

With top-down, we store the result of a recursive function call for a given input. The key in the hash table is the function input, and the value in the hash table is the result returned from the recursive function for that input.

With bottom-up, we start with the smallest value and build up our hash table as we continue to calculate each step. The bottom-up approach uses previously calculated values stored in the hash table to determine the current value. This repeats until you build up to the solution.

Dynamic programming is only an optimization when you have overlapping subproblems and storing the value for each step removes duplicate work. If there are no overlapping subproblems, then storing the result for each step is likely unnecessary (and could incur unneeded space complexity) because you are only required to calculate each step once to find the solution. You can use a standard recursive or iterative solution without dynamic programming to solve the question.

Let's consider Fibonacci with both top-down and bottom-up dynamic programming solutions. It's important to note that the approach and the way we think about it don't change: we break the question into subproblems and solve each with different input values. All that changes is the implementation, either starting at the smallest value and working up, or starting at the largest value and working down.

Top-Down Fibonacci Dynamic Programming

Many developers find dynamic programming easier to grasp by using recursion. We go top-down by starting with the desired result and calculating subproblems recursively by changing the input until we reach the base case. Once we have the base case, we start building the solution back up until we reach our initial call. For example, with Fibonacci, we calculate recursively as follows.

const fib = (n) => {
  if (n < 2) {
    return n;
  }

  return fib(n - 1) + fib(n - 2);
};

So if we're specifically talking about dynamic programming implemented recursively, we could define it as: solving a problem recursively and storing the results of recursive calls in a data structure (typically a hash table) so you don't call a function with the same parameters more than once. With a top-down approach, we can simply think of dynamic programming as an optimization on top of recursion using memoization.

const memo = {};

const fib = (n) => {
  if (n < 2) {
    return n;
  }

  // If we have the value in our hash table,
  // return it instead of continuing with the recursive case
  if (memo[n]) {
    return memo[n];
  }

  const nthFib = fib(n - 1) + fib(n - 2);

  memo[n] = nthFib;

  return nthFib;
};

If we have overlapping subproblems, all we're really saying is that different recursive calls need a result from the same recursive function. For example, think of calculating the 7th Fibonacci number.

fib(7) =     fib(6)      +      fib(5)
            /      \           /      \
      fib(5) + fib(4)    fib(4) + fib(3)

So to calculate fib(7) we need fib(6) and fib(5), but to calculate fib(6) we also need fib(5). In addition, both fib(6) and fib(5) need the result from fib(4) to calculate their values. These are overlapping subproblems, and we use memoization to store the result of function calls so they don't have to be recursively recalculated each time.

Fibonacci without memoization requires ~O(2^n) time complexity. With n = 7, we have over 40 operations.

[Image: recursion tree for the naive Fibonacci calculation. Credit: VisuAlgo]
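
We can sanity-check that operation count with an instrumented version of the naive function (the call counter is our own addition for illustration):

```javascript
// Naive recursive Fibonacci with a call counter, to show how the
// number of function calls explodes without memoization.
let callCount = 0;

const naiveFib = (n) => {
  callCount += 1;
  if (n < 2) {
    return n;
  }
  return naiveFib(n - 1) + naiveFib(n - 2);
};

naiveFib(7); // returns 13
// callCount is now 41 — over 40 calls just for n = 7
```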

With dynamic programming, we memoize the results and build up a table that we can read from to get previously calculated values instead of recalculating them and introducing duplicate work. So as we calculate a value in the Fibonacci sequence, we store its result in a hash table or array:

// Hash table that stores results for different inputs
const fibMemoHashTable = {
  0: 0, // fib(0)
  1: 1, // fib(1)
  2: 1, // fib(2)
  3: 2, // fib(3)
  4: 3, // fib(4)
  5: 5, // fib(5)
  6: 8, // fib(6)
  7: 13, // fib(7)
  // ... and so on for all the Fibonacci numbers
};

When we read the memoized results from the hash table, we reduce the Fibonacci calculation to O(n) time by adding O(n) space. We remove the duplicate work and only perform the calculations shown in the image below. The blue nodes are where we read a previously calculated value from the table.

[Image: recursion tree for the memoized Fibonacci calculation, with blue nodes marking table reads. Credit: VisuAlgo]
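
To see the reduction concretely, here is the same memoized function with a call counter added (the counter is our own addition for illustration):

```javascript
// Memoized top-down Fibonacci with a call counter. The counter is
// incremented on every invocation, including ones served from the memo.
const memo = {};
let recursiveCalls = 0;

const fib = (n) => {
  recursiveCalls += 1;
  if (n < 2) {
    return n;
  }
  if (memo[n]) {
    return memo[n]; // read from the table instead of recursing again
  }
  const nthFib = fib(n - 1) + fib(n - 2);
  memo[n] = nthFib;
  return nthFib;
};

fib(7); // returns 13
// recursiveCalls is now 13 (linear in n), compared to 41 without memoization
```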

That's really all there is to it. If you feel comfortable with recursive thinking and understand how to store data in another data structure, you can tackle almost any dynamic programming question. Not only that, these questions can even become very easy for you.

Bottom-Up Fibonacci Dynamic Programming

Any question that can be solved recursively can also be solved iteratively. Recursion is just a simple way to think about how to break a problem into smaller increments. Solving a problem recursively is called a top-down approach and solving a problem iteratively is called a bottom-up approach.

With Fibonacci, we start at the smallest values we know, build up our list, and calculate the current value using the previous two values until we reach our solution. Again, we're using memoization and solving subproblems to get our answer.

const memo = { 0: 0, 1: 1 };

function fibonacci(n) {
  if (memo[n] !== undefined) {
    return memo[n];
  }

  for (let i = 2; i <= n; i++) {
    const currentFib = memo[i - 1] + memo[i - 2];
    memo[i] = currentFib;
  }

  return memo[n];
}

While a hash table is not strictly required to calculate a bottom-up solution for Fibonacci, in many bottom-up dynamic programming solutions you will need to construct a full table like this to solve your subproblems efficiently.
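
For Fibonacci specifically, only the previous two values are ever needed, so the table can collapse to two variables. A sketch of that common space optimization (the function name is ours):

```javascript
// Bottom-up Fibonacci keeping only the last two values,
// reducing space from O(n) to O(1) while staying O(n) time.
const fibConstantSpace = (n) => {
  if (n < 2) {
    return n;
  }
  let prev = 0; // fib(0)
  let curr = 1; // fib(1)
  for (let i = 2; i <= n; i++) {
    const next = prev + curr; // fib(i) = fib(i - 1) + fib(i - 2)
    prev = curr;
    curr = next;
  }
  return curr;
};

fibConstantSpace(7); // 13
```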

Dynamic Programming in Interviews

Enough with the Fibonacci, am I right? Let's take a look at some common examples of dynamic programming and classic DP questions.

Knapsack Problem

The knapsack problem is a classic dynamic programming interview question. We have a list of items, and each item has a weight and value. You want to find the highest value possible while keeping the weight below a certain threshold.

You iterate through all the items. Then, for each item, go from 0 to the max weight and determine which item is best to keep at each point. So we have a subproblem for each item at each weight. We memoize the solution to each subproblem, and as we try new item combinations, we use the previously calculated optimal solutions, test them against the current item, and keep the combination that provides the maximum value while staying under the allowed weight. This allows us to build up our solution, and once we have tried all the item combinations at all the weights, we will have solved the problem.
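
That walk over items and weights can be sketched bottom-up as follows. The identifiers and the one-dimensional-array formulation are our own illustration of the standard 0/1 knapsack approach:

```javascript
// 0/1 knapsack: best[w] holds the max value achievable with capacity w.
// Iterating weights downward ensures each item is used at most once.
const knapsack = (items, maxWeight) => {
  const best = new Array(maxWeight + 1).fill(0);
  for (const { weight, value } of items) {
    for (let w = maxWeight; w >= weight; w--) {
      // Either skip this item, or take it and add its value
      // to the best solution for the remaining capacity.
      best[w] = Math.max(best[w], best[w - weight] + value);
    }
  }
  return best[maxWeight];
};

const items = [
  { weight: 1, value: 6 },
  { weight: 2, value: 10 },
  { weight: 3, value: 12 },
];
knapsack(items, 5); // 22 (take the items weighing 2 and 3)
```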

Coin Change

There are a few ways to ask this question, but they all revolve around combining various change denominations. In this course, we have a similar question where you find the minimum number of coins required to calculate a target value given denominations.

It is solved by starting at zero and finding the minimum coins to reach every value up to the desired target value. We test every denomination at each value. At each point, we keep the minimum number of coins required to reach the current value, and the current value's minimum coins is determined by the minimum coin combination to reach the previous values. By finding the minimum coins to reach each value from 0 to the target locally in each subproblem, we will have solved the problem globally once we reach the end.
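
A bottom-up sketch of that minimum-coins approach (identifiers are our own):

```javascript
// dp[v] = fewest coins needed to make value v, or Infinity if impossible.
const minCoins = (coins, target) => {
  const dp = new Array(target + 1).fill(Infinity);
  dp[0] = 0; // zero coins make value zero
  for (let v = 1; v <= target; v++) {
    for (const coin of coins) {
      // Reaching v with this coin costs one more than reaching v - coin.
      if (coin <= v && dp[v - coin] + 1 < dp[v]) {
        dp[v] = dp[v - coin] + 1;
      }
    }
  }
  return dp[target];
};

minCoins([1, 5, 10, 25], 30); // 2 (25 + 5)
minCoins([2], 3); // Infinity: 3 can't be made from 2s
```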

What Do They Have in Common?

Both of these examples follow the same pattern we see in dynamic programming. They solve subproblems, which means at each point we solve the problem locally for the current values. We use memoization to build up the answers to the subproblems. Once we reach the end, we have solved the problem globally and will have our answer.


Dynamic programming is an optimization, and we use it when we have overlapping subproblems. We use a data structure (often a hash table) to memoize the result of subproblems so we perform the calculation for each step only once.

Each subproblem is just its own problem. We find the best solution for this point based on the information we have. Each subproblem is a decision, and we make the choice to solve it at this point. We store this value which will be used later to find the answer of future subproblems. Once we reach the end, we have solved the problem globally and will have our answer.

If there are subproblems but they aren't overlapping (i.e. you need to calculate the solution to a subproblem only once), then dynamic programming/memoization is not an improvement.

To restate the introduction, the keys to dynamic programming:

  1. There are subproblems: we can solve the same problem repeatedly with different input values and combine them to calculate the solution
  2. The subproblems are overlapping: the algorithm requires the solution from a subproblem using the same input multiple times
  3. We can memoize the results of overlapping subproblems to remove duplicate work: after calculating the solution to a subproblem with a given input, we store its result in a hash table