Solving the Egg Dropping problem the most efficient way

Shubhayan S

Dynamic Programming

Dynamic Programming is mainly an optimization over plain recursion. Wherever we see a recursive solution that has repeated calls for the same inputs, we can optimize it using Dynamic Programming. The idea is simply to store the results of subproblems so that we do not have to re-compute them when needed later. This simple optimization reduces time complexities from exponential to polynomial. For example, a simple recursive solution for Fibonacci Numbers has exponential time complexity, and if we optimize it by storing the solutions of subproblems, the time complexity reduces to linear.
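As a quick illustration (a minimal Python sketch, not code from the original article; the function names are mine), here is the naive recursive Fibonacci next to a memoized version that stores each subproblem's result:

from functools import lru_cache

# Illustrative sketch, not from the original article.
def fib_naive(n):
    # Plain recursion: the same subproblems are recomputed, exponential time.
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Memoized recursion: each value is computed once, a linear number of calls.
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(40))  # 102334155, returned almost instantly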

Let us say that we have a machine, and to determine its state at time t, we have certain quantities called state variables. There will be certain times when we have to make a decision which affects the state of the system, which may or may not be known to us in advance. These decisions or changes are equivalent to transformations of state variables. The results of the previous decisions help us in choosing future ones.

What do we conclude from this? We need to break up a problem into a series of overlapping sub-problems and build up solutions to larger and larger sub-problems. If you are given a problem that can be broken down into smaller sub-problems, and these smaller sub-problems can still be broken into smaller ones, and you manage to find out that there are some overlapping sub-problems, then you have encountered a DP problem.

Some famous Dynamic Programming algorithms are:

  • Unix diff for comparing two files
  • Bellman-Ford for shortest path routing in networks
  • TeX, the ancestor of LaTeX (its line-breaking algorithm uses dynamic programming)
  • WASP — Winning and Score Predictor

The core idea of Dynamic Programming is to avoid repeated work by remembering partial results, and this concept finds its application in a lot of real-life situations.

In programming, Dynamic Programming is a powerful technique that allows one to solve different types of problems in time O(n²) or O(n³), for which a naive approach would take exponential time.

Dynamic Programming and Recursion

Dynamic programming is basically recursion plus common sense. Recursion allows you to express the value of a function in terms of other values of that function, and common sense tells you that if you implement your function in a way that the recursive results are computed in advance and stored for easy access, it will make your program faster. This is what we call Memoization: memorizing the results of specific states, which can then be accessed later to solve other sub-problems.

The intuition behind dynamic programming is that we trade space for time: instead of recalculating states repeatedly, which takes a lot of time but no space, we take up space to store the results of the sub-problems and save time later.

The majority of Dynamic Programming problems can be categorized into two types:

1. Optimization problems.
2. Combinatorial problems.

The optimization problems expect you to select a feasible solution, so that the value of the required function is minimized or maximized. Combinatorial problems expect you to figure out the number of ways to do something, or the probability of some event happening.

Every Dynamic Programming problem has a schema to be followed:

  • Show that the problem can be broken down into optimal sub-problems.
  • Recursively define the value of the solution by expressing it in terms of optimal solutions for smaller sub-problems.
  • Compute the value of the optimal solution in a bottom-up fashion.
  • Construct an optimal solution from the computed information.

Advantages of Dynamic Programming:

  • For various problems in areas such as inventory, chemical engineering design, and control theory, Dynamic Programming may be the only technique that can solve the problem.

  • It is well suited for multi-stage, multi-point, or sequential decision processes.

  • It is suitable for linear or non-linear problems, discrete or continuous variables, and deterministic problems.

Disadvantages of Dynamic Programming:

  • No general formulation of a Dynamic Program is available; every problem has to be solved in its own way.

  • Dividing the problem into subproblems and storing intermediate results consumes memory.

  • It is more difficult to develop code using dynamic programming than with a greedy technique. Also, different DP problems require different methods of solution.

Egg Dropping

The following is a description of an instance of this famous puzzle, involving n = 2 eggs and a building with k = 36 floors.

Suppose that we wish to know which stories in a 36-story building are safe to drop eggs from, and which will cause the eggs to break on landing. We make a few assumptions:

  • An egg that survives a fall can be used again.

  • A broken egg must be discarded.

  • The effect of a fall is the same for all eggs.

  • If an egg breaks when dropped, then it would break if dropped from a higher floor.

  • If an egg survives a fall, then it would survive a shorter fall.

  • It is not ruled out that the first-floor windows break eggs, nor is it ruled out that eggs can survive a drop from the 36th floor.

If only one egg is available and we wish to be sure of obtaining the right result, the experiment can be carried out in only one way. Drop the egg from the first-floor window; if it survives, drop it from the second-floor window. Continue upward until it breaks. In the worst case, this method may require 36 droppings. Suppose 2 eggs are available. What is the least number of egg-droppings that is guaranteed to work in all cases?
The problem is not actually to find the critical floor, but merely to decide the floors from which eggs should be dropped so that the total number of trials is minimized.
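Although the dynamic programming solution below derives this systematically, the answer for this particular instance can also be reasoned out directly (a standard argument, not from the original text): if the first drop is made from floor t and the egg breaks, the floors below must be scanned one by one with the last egg, so a strategy that uses at most t trials can cover at most t + (t − 1) + … + 1 = t(t + 1)/2 floors. The smallest t with t(t + 1)/2 ≥ 36 is t = 8, which agrees with the output of the DP solution later in this post.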

To derive a dynamic programming functional equation for this puzzle, let the state of the dynamic programming model be a pair s = (n,k), where

n = number of test eggs available, n = 0, 1, 2, 3, …, N.

k = number of (consecutive) floors yet to be tested, k = 0, 1, 2, …, H.

For instance, s = (2,6) indicates that two test eggs are available and 6 (consecutive) floors are yet to be tested. The initial state of the process is s = (N, H) where N denotes the number of test eggs available at the commencement of the experiment. The process terminates either when there are no more test eggs (n = 0) or when k = 0, whichever occurs first. If termination occurs at state s = (0, k) and k > 0, then the test failed.

Now, let

W (n, k) = minimum number of trials required to identify the value of the critical floor under the worst-case scenario given that the process is in state s = (n, k).

Then it can be shown that

W(n, k) = 1 + min{ max(W(n − 1, x − 1), W(n, k − x)) : x = 1, 2, …, k }

with W(n,0) = 0 for all n > 0 and W (1, k) = k for all k. It is easy to solve this equation iteratively by systematically increasing the values of n and k.
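For example, for two eggs and a handful of floors the values build up as follows:

W(2, 1) = 1 + max(W(1, 0), W(2, 0)) = 1
W(2, 2) = 1 + min{ max(W(1, 0), W(2, 1)), max(W(1, 1), W(2, 0)) } = 1 + min{1, 1} = 2
W(2, 3) = 1 + min{ max(W(1, 0), W(2, 2)), max(W(1, 1), W(2, 1)), max(W(1, 2), W(2, 0)) } = 1 + min{2, 1, 2} = 2
W(2, 4) = 1 + min{ max(W(1, 0), W(2, 3)), max(W(1, 1), W(2, 2)), max(W(1, 2), W(2, 1)), max(W(1, 3), W(2, 0)) } = 1 + min{2, 2, 2, 3} = 3

which already matches the answer of 3 trials for 2 eggs and 4 floors worked out in the numerical example below.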

In this post, a generic solution for n eggs and k floors is discussed. The idea is to try dropping an egg from every floor (1 to k) and recursively compute the minimum number of drops needed in the worst case; the floor that yields the minimum worst-case value is part of the solution.
In the solutions below we report the minimum number of trials in the worst case; they can easily be modified to also print out the floor number used for each trial.

1) Optimal Substructure: When an egg is dropped from floor x, two scenarios can occur: (1) the egg breaks, or (2) the egg does not break.

If the egg breaks after dropping from the xth floor, then we only need to check floors lower than x with the remaining eggs; so the problem reduces to x − 1 floors and n − 1 eggs.

If the egg does not break after dropping from the xth floor, then we only need to check floors higher than x; so the problem reduces to k − x floors and n eggs.

Since we need to minimize the number of trials in the worst case, we take the maximum of these two scenarios for every floor x and choose the floor that yields the minimum number of trials.

2) Overlapping Subproblems

The next step is a recursive implementation that simply follows the recursive structure described above.

Pseudo Code:
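The original post embedded the code as an image; below is a minimal Python sketch of the recursive solution just described (function and variable names are my own, not from the original):

import sys

# Illustrative sketch of the recursive solution; names are not from the original post.
def egg_drop_recursive(eggs, floors):
    # Base cases: 0 floors need 0 trials, 1 floor needs 1 trial.
    if floors == 0 or floors == 1:
        return floors
    # With a single egg we must try every floor from the bottom up.
    if eggs == 1:
        return floors
    best = sys.maxsize
    # Try dropping from every floor x and take the worse of the two outcomes.
    for x in range(1, floors + 1):
        worst = 1 + max(egg_drop_recursive(eggs - 1, x - 1),   # egg breaks
                        egg_drop_recursive(eggs, floors - x))  # egg survives
        best = min(best, worst)
    return best

print("In the worst scenario with 2 eggs and 10 floors,",
      "the minimum number of tries is", egg_drop_recursive(2, 10))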

Output:

In the worst scenario with 2 eggs and 10 floors, the minimum number of tries is 4

Note that the above code calculates the same subproblems again and again. For example, E(2, 2) is evaluated twice in the partial recursion tree shown later in this post. If you draw the complete recursion tree even for small values of N and K, you will see many subproblems recurring.

Since the same sub-problems are computed repeatedly, this problem has the Overlapping Subproblems property. So the Egg Dropping Puzzle has both properties of a dynamic programming problem. As with other typical dynamic programming problems, recomputation of the same subproblems can be avoided by constructing a temporary table eggFloor[][].

Dynamic Programming Solution

Following is an implementation of the Egg Dropping problem using Dynamic Programming.
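The original implementation was likewise embedded as an image; the following is a bottom-up Python sketch of the same idea, using the eggFloor[][] table mentioned above (the other names are mine):

import sys

# Illustrative bottom-up sketch; only the table name eggFloor comes from the text above.
def egg_drop_dp(eggs, floors):
    # eggFloor[i][j] = minimum number of trials needed with i eggs and j floors.
    eggFloor = [[0] * (floors + 1) for _ in range(eggs + 1)]
    # One egg: j floors always need j trials (drop from floor 1 upward).
    for j in range(1, floors + 1):
        eggFloor[1][j] = j
    # Fill the rest of the table using the recurrence described earlier.
    for i in range(2, eggs + 1):
        for j in range(1, floors + 1):
            eggFloor[i][j] = sys.maxsize
            for x in range(1, j + 1):
                trials = 1 + max(eggFloor[i - 1][x - 1],  # egg breaks at floor x
                                 eggFloor[i][j - x])      # egg survives floor x
                eggFloor[i][j] = min(eggFloor[i][j], trials)
    return eggFloor[eggs][floors]

print("Minimum number of trials in worst case with 2 eggs and 36 floors is",
      egg_drop_dp(2, 36))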

Output:

Minimum number of trials in worst case with 2 eggs and 36 floors is 8

Time Complexity: O(nk²)

Auxiliary Space: O(nk)

As an exercise, you may try modifying the above DP solution to print all intermediate floors (the floors used in the minimum-trial solution).

Numerical Example

Input: 2 eggs, 4 floors

Output: Number of trials when number of eggs is 2 and number of floors is 4: 3

Explanation:

For the input above,

Case 1: Drop at first floor:

a. Egg does not break:
If the egg does not break, then we have three floors left and two eggs. We can choose either the 2nd or the 3rd floor next and proceed similarly, but it is easy to see that a total of 3 trials is needed in the worst case.

b. Egg breaks:
If the egg breaks, then we have found the critical floor in only 1 trial.

In case 1, the worst possibility is that the egg does not break, so case 1 requires 3 trials.

Case 2: Drop at second floor:

a. Egg breaks:
We are left with one egg and only 1 floor to check, so the total number of trials is 2.

b. Egg does not break:
We still have two eggs and two floors to check; we have to check these floors one by one, so 3 trials are needed in the worst case.

So, for case 2, the worst possibility is 3 trials.

In the end, we have to find the minimum over case 1, case 2, case 3, and case 4.

Note: Dropping from the 3rd and 4th floors is the same as dropping from the 2nd and 1st floors respectively; the only difference is that sub-cases (a) and (b) get swapped.

Working:

Partial recursion tree for 2 eggs and 4 floors: E(2,4) branches on the choice of the first floor x = 1, 2, 3, 4. For x = 1 its children are E(1,0) and E(2,3); for x = 2 they are E(1,1) and E(2,2); and expanding E(2,3) with x = 1 again yields E(1,0) and E(2,2), so E(2,2), among others, is evaluated more than once.

Numerical Analysis

  • One trial means dropping an egg once from a particular floor.

  • If the egg does not break after dropping, it can be used again.

  • If the egg breaks when dropped from some floor, then it will break if dropped from any higher floor.

  • If the egg does not break when dropped from some floor, then it will not break if dropped from any lower floor.

Explanation:

2 eggs, 4 floors

Recursion: try dropping an egg from each floor from 1 to 4 and calculate the minimum number of drops needed in the worst case.

  • Base cases:

  • Eggs = 1, floors = x: play safe and drop from floor 1; if the egg does not break, drop from floor 2, and so on. So, in the worst case, the egg needs to be dropped x times to find the solution.

  • Floors = 0: no trials are required.

  • Floors = 1: 1 trial is required.

  • For the remaining cases, if an egg is dropped from the xth floor, only two outcomes are possible: either the egg breaks or it does not break.

  • If the egg breaks: check the floors lower than x. So the problem is reduced to 2 − 1 = 1 egg and x − 1 floors.

  • If the egg does not break: check the floors higher than x with all the eggs still remaining. So the problem is reduced to 2 eggs and 4 − x floors.
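As a quick check, evaluating the DP sketch from earlier (assuming the illustrative egg_drop_dp function defined above) reproduces the answer from this example:

print("Number of trials when number of eggs is 2 and number of floors is 4:",
      egg_drop_dp(2, 4))  # prints 3, matching the case analysis above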

Conclusion

Hence, it can be seen that Dynamic Programming is a powerful technique that can be used to solve many complex problems efficiently. It amounts to breaking down an optimization problem into simpler sub-problems and storing the solution to each sub-problem so that each one is solved only once.

The main objective of this assignment was to shed light on the dynamic programming paradigm and its uses in our environment. It can be seen that Dynamic Programming gives an efficient algorithm for the famous egg dropping problem.

The problem is solved using the pseudocode provided above and illustrated with a numerical example and the necessary explanation.

The problem asks us to find the minimum number of trials required to determine the critical floor, i.e., the highest floor from which an egg can be dropped without breaking.

The analysis helps us understand dynamic programming and its uses in real life.
