Dynamic Programming for Coding Interviews | Skilled.dev

Dynamic Programming

Course launching August 2020




Dynamic programming is one of the most misunderstood and feared topics for programming interviews. Most explanations use complex terminology and jargon instead of defining the process simply. Here, we'll start from the most basic ideas stated in simple English and build up from that.

Let's re-define dynamic programming so it matches how it's used in problems.

"Dynamic programming": Breaking a problem into small increments, solving each increment as its own problem, and storing the result. Once all the increments are solved, you can combine them to find the answer.

Before moving on, I want to make one point very clear. Dynamic programming often means calculating ALL the values for all steps to reach the desired solution. This may sound inefficient, but sometimes it is the only way. We store the result of each step so it only needs to be calculated once, which is a key idea of dynamic programming.

So let's take calculating a Fibonacci number as an example. Any given Fibonacci number is defined as the sum of the two numbers that come before it in the sequence. So the 100th Fibonacci number is the 99th plus the 98th: fib(100) = fib(99) + fib(98). But to find the 99th and 98th, we must find the two numbers before each of those.

So to find the value of the 100th Fibonacci number, we find the value of all 99 Fibonacci numbers before it. Finding the value of these 99 numbers is exactly how dynamic programming works, and each number is its own "increment" as we defined above.

The other important aspect is storing the result calculated in each increment. So fib(100) = fib(99) + fib(98), but fib(99) = fib(98) + fib(97). Both fib(100) and fib(99) need fib(98), so we only need to calculate fib(98) once and can then reuse its value whenever it's needed. We do this for all values as we solve each increment.


Dynamic programming is often easiest to think about using recursion. We start with the desired result and recurse by shrinking the input until we reach the base case. Once we have the base case, we build the solution back up until we reach our initial call.
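As a sketch of that recursive structure, here is a plain recursive Fibonacci in Python (the course itself may use a different language):

```python
def fib(n):
    """Return the nth Fibonacci number using plain recursion."""
    # Base cases: fib(0) = 0 and fib(1) = 1
    if n < 2:
        return n
    # Recursive case: the sum of the two previous Fibonacci numbers
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```

Notice that nothing is stored yet, so the same values get recalculated over and over. That's the problem memoization fixes.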


So if we're specifically talking about dynamic programming implemented recursively, we could define it as: solving a problem recursively and storing the results of recursive calls in a data structure (typically a hash table) so you never call a function with the same parameters more than once.

Dynamic programming is just an optimization on top of recursion. So if you're having difficulty understanding dynamic programming, the real issue may be that you need more practice with recursion. That's totally fine because recursive thinking can be difficult until you grasp it, but you can't get better at dynamic programming until you understand what the underlying problem actually is.

Memoization is the term for this dynamic programming optimization: it just means we store the results of calculations and function calls so we don't perform them more than once. So a dynamic programming problem is just recursion with memoization.
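Adding memoization to the recursive Fibonacci only takes a hash table (a Python dict in this sketch) that caches each result the first time it's computed:

```python
def fib(n, memo=None):
    """Return the nth Fibonacci number, storing each result in a
    dict so every value is computed at most once."""
    if memo is None:
        memo = {}
    # If we've already computed fib(n), reuse the stored result
    if n in memo:
        return memo[n]
    # Base cases: fib(0) = 0 and fib(1) = 1
    if n < 2:
        return n
    # Compute once, store, then return
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(100))  # 354224848179261915075
```

With the cache, fib(100) runs in a fraction of a second, whereas the plain recursive version would make an astronomical number of calls.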

Another confusing term is subproblem, but all a subproblem really means is a smaller instance of the same problem that can be solved recursively. If we have the results of previous steps, we can calculate the desired value. For example, any Fibonacci number is just the sum of the two previous Fibonacci numbers: fib(n) = fib(n - 1) + fib(n - 2). So to calculate the nth Fibonacci number we calculate the two before it, and to get those we get the two before them, and so on until we reach our base case. Each calculation is its own subproblem.

If we have overlapping subproblems, all we're really saying is that different recursive calls need a result from the same recursive function. For example, think of calculating the 5th Fibonacci number.

fib(5) = fib(4)   +   fib(3)
        /      \     /      \
       f(3) + f(2)  f(2) + f(1)

So to calculate fib(5) we need fib(4) and fib(3), but to calculate fib(4) we also need fib(3). In addition, both fib(4) and fib(3) need the result from fib(2) to calculate their values. These are overlapping subproblems, and we use memoization to store the result of function calls so they don't have to be recursively recalculated each time.
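You can see the overlap directly by counting how many times the plain recursive version is called with each input (a small instrumentation sketch, not part of any real solution):

```python
from collections import Counter

calls = Counter()  # tracks how many times fib is called with each n

def fib(n):
    calls[n] += 1  # record every call so we can see the overlap
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(5)
print(calls[3], calls[2], calls[1])  # 2 3 5
```

Even for n = 5, fib(3) is computed twice and fib(2) three times. The duplication grows exponentially with n, which is exactly why memoization matters.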

That's really all there is to it. If you're comfortable with recursive thinking and know how to store data in another data structure, you can tackle almost any dynamic programming question. Not only that, these questions can even become easy for you.

Any question that can be solved recursively can also be solved iteratively. Recursion is just a simple way to think about how to break a problem into smaller increments. Solving a problem recursively is called a top-down approach and solving a problem iteratively is called a bottom-up approach. You can learn more about each approach in the top-down vs bottom-up article.
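For comparison, here is a bottom-up sketch of the same Fibonacci problem: instead of recursing down from n, we fill a table upward from the base cases.

```python
def fib(n):
    """Return the nth Fibonacci number bottom-up, filling a table
    from the base cases toward n."""
    if n < 2:
        return n
    table = [0] * (n + 1)  # table[i] will hold fib(i)
    table[1] = 1
    # Each entry is built from the two already-computed entries before it
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib(100))  # 354224848179261915075
```

Same subproblems, same stored results; the only difference is the direction we solve them in.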
