Amazon Coding Question – Solved

The developers at Amazon want to perform a reliability drill on some servers. There are n servers, where the i-th server can serve request[i] requests and has an initial health of health[i] units. Each second, the developers send the maximum number of requests that the currently available servers can serve. Along with the requests, they can also send a virus to one server of their choice, decreasing that server's health by k units. A server goes down when its health drops to 0 or below. After all the servers are down, the developers must send one more request to conclude the failure of the application.

Find the minimum total number of requests the developers must send to bring all the servers down.

Example

Consider n = 2, request = [3, 4], health = [4, 6], k = 3. The minimum number of requests required is 21.

| No. of Requests | Virus Server | Server's New Health | Total Requests |
|-----------------|--------------|---------------------------|----------------|
| 3 + 4 = 7 | 1 | 6 - 3 = 3 | 7 |
| 3 + 4 = 7 | 1 | 3 - 3 = 0 (server 1 dies) | 14 |
| 3 | 0 | 4 - 3 = 1 | 17 |
| 3 | 0 | 1 - 3 = -2 (server 0 dies) | 20 |
| 1 | - | - (conclude the failure) | 21 |

Asked in: Amazon


All Testcases Passed ✔




Solution


import math

def getMinRequests(request, health, k):
    # Rank servers by request rate per virus hit needed to bring them down:
    # the server that frees up the most requests per second of attack goes first.
    arr = [(req / math.ceil(heal / k), req, heal) for req, heal in zip(request, health)]
    arr.sort(reverse=True)
    total, alive = 0, sum(request)
    for _, req, heal in arr:
        total += alive * math.ceil(heal / k)  # requests sent while this server is attacked
        alive -= req
    return total + 1  # one final request to conclude the failure


Explanation


To solve this problem, start by fully understanding the simulation process described. You are working with a set of servers, each capable of handling a certain number of requests per second and each with a specific health value. The health value represents how long the server can last, and each second you send a virus to one server, reducing its health by a fixed value k. Along with the virus, you also send as many requests as all currently alive servers can handle collectively. Your objective is to determine the minimum total number of requests required to bring all servers down, including a final request to conclude the failure after all servers are offline.

The key challenge here is deciding, at each second, which server should receive the virus to ensure that all servers go down using the least number of total requests. This introduces an optimization problem: minimize total requests while strategically choosing which server’s health to target.

You should start by thinking about the dynamics of how request load is accumulated. Every second that passes contributes to the total request count based on the sum of request capacities of all still-active servers. If more servers remain alive for longer, you are sending more requests over time. So, a strategy that prolongs the survival of high-capacity servers can actually increase the total number of requests needed. Conversely, bringing down high-capacity servers early can reduce the number of requests sent in subsequent seconds.
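To make that concrete, here is a small sketch using the example from the problem statement; `total_for_order` is a helper name introduced here, not part of the posted solution. It totals the requests accumulated for each possible order in which the servers are brought down:

```python
import math

def total_for_order(order, request, health, k):
    """Total requests when servers are killed in the given order of indices."""
    alive = sum(request)  # requests served per second while all listed servers live
    total = 0
    for i in order:
        hits = math.ceil(health[i] / k)  # seconds spent attacking this server
        total += alive * hits            # every alive server keeps serving meanwhile
        alive -= request[i]              # this server is now down
    return total + 1                     # final request to conclude the failure

request, health, k = [3, 4], [4, 6], 3
print(total_for_order([1, 0], request, health, k))  # kill server 1 first -> 21
print(total_for_order([0, 1], request, health, k))  # kill server 0 first -> 23
```

Taking down the high-capacity server first saves two requests here, exactly because server 1 otherwise keeps contributing 4 requests per second while server 0 is being attacked.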

Given that, one logical direction to explore is how to prioritize server shutdowns. Since the total request count per second is determined by the collective capacities of all active servers, servers with higher request capacities contribute more to the total per second. Therefore, keeping high-capacity servers alive for longer can inflate the overall request count. This suggests that servers with high request values should potentially be targeted earlier.

However, this isn't the only factor to consider. The amount of time (or virus hits) required to bring down a server depends on its health and the virus damage k. A server with low health can be taken down quickly, possibly in one or two virus hits. If you focus only on high-request servers early, you might waste time attacking tough servers while the smaller ones continue contributing to the request total. Thus, a balanced strategy is needed—one that considers both a server’s request capacity and its vulnerability (health).

Now consider the concept of simulation with greedy decision-making. At each second, you might simulate all possible virus targets and evaluate the resulting impact on the total requests accumulated. For example, if attacking server A reduces the collective request rate more significantly in future seconds than attacking server B, then server A might be the better target, even if it takes longer to bring down.

This simulation-based approach suggests you could develop a process where you:
1. Maintain a record of which servers are still alive.
2. At each second, compute the total request load based on active servers.
3. Choose the server to send the virus to, potentially the one that will most reduce future request totals if taken down quickly.
4. Apply the virus to that server, reducing its health.
5. Remove any servers whose health drops to zero or below.
6. Repeat until all servers are down.
7. Finally, send one more request to "conclude the failure."
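A literal second-by-second sketch of those steps might look like the following; the target choice in step 3 (highest request rate per virus hit still needed) is one plausible heuristic, not the only option:

```python
import math

def simulate(request, health, k):
    health = list(health)            # step 1: track remaining health
    alive = set(range(len(request)))
    total = 0
    while alive:
        # Step 2: requests served this second by all alive servers.
        total += sum(request[i] for i in alive)
        # Step 3: target the server with the highest request rate per hit needed.
        target = max(alive, key=lambda i: request[i] / math.ceil(health[i] / k))
        # Step 4: apply the virus.
        health[target] -= k
        # Step 5: remove the server if it is down.
        if health[target] <= 0:
            alive.remove(target)
        # Step 6: loop until all servers are down.
    # Step 7: one more request to conclude the failure.
    return total + 1

print(simulate([3, 4], [4, 6], 3))  # -> 21
```

On the example this reproduces the optimal answer of 21, matching the table above.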

One direction to think more deeply about is how to evaluate which server to target next. You could consider a heuristic that combines request capacity and the number of hits needed to bring a server down. For instance, a score that weighs each server's request rate against its remaining health could help determine the most effective target per second.
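One concrete score along those lines divides a server's request rate by the number of virus hits needed to bring it down; this is a sketch of one such heuristic, and the exact weighting is a design choice:

```python
import math

def score(req, heal, k):
    # Requests per second "freed up" per second of attack spent on this server.
    return req / math.ceil(heal / k)

print(score(4, 6, 3))  # server 1 from the example: 4 requests / 2 hits -> 2.0
print(score(3, 4, 3))  # server 0 from the example: 3 requests / 2 hits -> 1.5
```

Attacking the server with the higher score first is exactly the ordering used in the example walkthrough.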

Also, reflect on the consequences of not targeting a server early. If you let a high-capacity server run for too many seconds, its request contributions can stack up quickly. Therefore, minimizing the lifespan of such servers could be beneficial even if it takes more virus hits to bring them down.

Lastly, because this is an optimization problem that involves dynamic state changes (servers going down), greedy choices might not always lead to the best global outcome. You might need to consider exploring different sequences of virus targets through some form of decision tree, pruning clearly suboptimal paths. However, for larger numbers of servers, a full decision tree may become computationally expensive, so incorporating intelligent heuristics or priority-based approaches becomes crucial.
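For small n, one way to validate a greedy choice is exhaustive search over kill orders. Since a server keeps serving at full rate until it dies, interleaving virus hits between servers never helps, so it suffices to try every order in which servers are finished off. This is a sketch; `best_by_brute_force` is a name introduced here:

```python
import math
from itertools import permutations

def best_by_brute_force(request, health, k):
    # For a fixed kill order, the total is: for each server, (requests/sec of
    # servers still alive) * (hits needed to kill it), plus the final request.
    best = math.inf
    for order in permutations(range(len(request))):
        alive = sum(request)
        total = 0
        for i in order:
            total += alive * math.ceil(health[i] / k)
            alive -= request[i]
        best = min(best, total + 1)
    return best

print(best_by_brute_force([3, 4], [4, 6], 3))  # -> 21
```

This runs in O(n! * n) time, so it is only a correctness check for small inputs, not a replacement for the sorted greedy.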

In summary, you need to simulate the process over time, carefully choosing which server to weaken based on how that choice affects the total number of requests in the future. Consider both the request capacity and the health of each server, and prioritize actions that lead to a smaller cumulative request total, even if that means spending more virus hits on a tough server early. Balancing short-term efficiency and long-term gain is key to finding the optimal strategy.


Related Questions