ShareChat Coding Question – Solved

Question 2

Implement a Least Frequently Used (LFU) cache data structure of size cacheSize that handles two types of queries: GET and PUT. A GET query attempts to retrieve the value of a given key: if the key is present in the cache, its value is returned; otherwise, -1 is returned. A PUT query updates or inserts a key-value pair into the cache. When the cache is full, the least frequently used key is removed to accommodate the new key-value pair. If there is a tie in the frequency of keys, the least recently used key is removed. Return an array of integers where the i-th element is the answer for the i-th GET query.

Example

cacheSize = 1
q = 5
queries = ["PUT 1 1", "PUT 2 2", "GET 1"]

Only cacheSize = 1 element is stored in the cache.

Asked in: ShareChat


All Testcases Passed ✔




Solution


from collections import defaultdict, OrderedDict

class ListNode:
    def __init__(self, key, value):
# ... rest of solution available after purchase


Explanation


To approach the problem of implementing a Least Frequently Used (LFU) cache with a fixed capacity, you need to carefully understand the requirements and constraints, and then think about the data structures and algorithms that can effectively support the needed operations.

Step 1: Understand the requirements
- The cache has a fixed size (cacheSize).
- Two types of queries:
  - GET key: Return the value of the key if present in the cache; otherwise, return -1.
  - PUT key value: Insert or update the key with the given value.
- When the cache is full and a new key needs to be inserted, remove the least frequently used key.
- If multiple keys have the same frequency (a tie), remove the least recently used among them.
- For each GET query, return the value found (or -1); the final output is the array of these GET results.
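
To make the eviction and tie-break rules concrete, here is a short trace worked out by hand (my own example with cacheSize = 2, not the one from the problem statement):

```python
# Hand-worked trace with cacheSize = 2 (my own example):
#
# PUT 1 10  -> cache {1: 10}                      frequencies {1: 1}
# PUT 2 20  -> cache {1: 10, 2: 20}               frequencies {1: 1, 2: 1}
# PUT 3 30  -> cache is full; keys 1 and 2 tie at frequency 1;
#              key 1 is the least recently used, so it is evicted
#           -> cache {2: 20, 3: 30}               frequencies {2: 1, 3: 1}
# GET 1     -> -1 (evicted)
# GET 2     -> 20, frequency of key 2 becomes 2
# GET 3     -> 30, frequency of key 3 becomes 2
#
# Expected GET results: [-1, 20, 30]
```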

Step 2: Breakdown of LFU cache behavior
- The key operation is to track the frequency of usage for each key.
- Frequency increases whenever a key is accessed (via GET, or via PUT on an existing key); a newly inserted key starts at frequency 1.
- The cache eviction policy requires:
  - Identifying the keys with the minimum frequency.
  - Among those keys, removing the one that was used least recently.
- So the data structure must support the following operations quickly:
  - Lookup by key to get the value and current frequency.
  - Updating the frequency and maintaining recency ordering among keys with the same frequency.
  - Identifying and removing the key with the minimum frequency and least recent usage.
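
In code terms, everything below boils down to implementing two operations with the right bookkeeping behind them. A bare interface sketch (class and method names are my own) looks like this:

```python
class LFUCacheInterface:
    # Bare interface sketch; the later steps fill in the bodies.
    def get(self, key: int) -> int:
        """Return the value stored for key, or -1 if absent; counts as one use."""
        raise NotImplementedError

    def put(self, key: int, value: int) -> None:
        """Insert or update key; may evict the least frequently used key."""
        raise NotImplementedError
```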

Step 3: Choose the right data structures
- Use a dictionary (hash map), keyed by the cache keys, to store:
  - The associated value.
  - The current frequency count.
  - A pointer/reference to the node or position in a frequency list for quick updates.
- Maintain frequency lists:
  - For each frequency, keep an ordered list (e.g., a doubly linked list) of keys that have this frequency.
  - This order represents recency within that frequency bucket: most recently used at one end, least recently used at the other.
- Maintain a global variable tracking the minimum frequency currently in the cache (minFreq).
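
As a concrete starting point, these structures could be laid out as below. This is my own sketch (all names are my choices), and Python's OrderedDict stands in for the explicit doubly linked list because it offers the same O(1) insert, delete, and move-to-end operations:

```python
from collections import defaultdict, OrderedDict

class LFUCache:
    def __init__(self, cache_size):
        self.cap = cache_size                    # fixed capacity (cacheSize)
        self.values = {}                         # key -> value
        self.freq = {}                           # key -> current access count
        self.buckets = defaultdict(OrderedDict)  # count -> keys, oldest (LRU) first
        self.min_freq = 0                        # smallest count currently present
```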

Step 4: Handling GET query
- When GET(key) is called:
  - Check whether the key exists in the cache dictionary.
  - If not, return -1.
  - If yes:
    - Retrieve its current frequency and value.
    - Update the frequency of the key:
      - Remove it from its current frequency list.
      - Increment its frequency by 1.
      - Add it to the list corresponding to the new frequency.
      - If the old frequency list becomes empty and was the minimum frequency list, increment minFreq.
    - Return the value.
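
The bookkeeping in this step is mostly dictionary surgery. The standalone snippet below (my own illustration, not part of the paywalled solution) walks one key through a frequency bump using the structures sketched in Step 3:

```python
from collections import defaultdict, OrderedDict

buckets = defaultdict(OrderedDict)   # frequency -> keys in recency order (oldest first)
key_freq = {}                        # key -> current frequency
min_freq = 1

# Pretend key 7 was PUT once: it sits in the frequency-1 bucket.
key_freq[7] = 1
buckets[1][7] = None

# A GET(7) bumps its frequency from 1 to 2.
old = key_freq[7]
del buckets[old][7]                  # unlink from the old bucket
if not buckets[old]:
    del buckets[old]
    if min_freq == old:              # the emptied bucket was the minimum one
        min_freq = old + 1
key_freq[7] = old + 1
buckets[old + 1][7] = None           # now the most recently used key at frequency 2

print(key_freq[7], min_freq)         # 2 2
```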

Step 5: Handling PUT query
- When PUT(key, value) is called:
  - If the cache size is zero, do nothing (special edge case).
  - If the key already exists:
    - Update its value.
    - Increase its frequency as in the GET operation.
  - If the key does not exist:
    - If the cache is at capacity, evict one key:
      - The key to evict is the least recently used key among those with frequency equal to minFreq.
      - Remove that key from the frequency list and the cache dictionary.
    - Insert the new key with frequency 1:
      - Add the key and value to the cache dictionary.
      - Add the key to the frequency list for frequency = 1.
      - Set minFreq to 1.
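
Putting Steps 3 through 5 together, one possible end-to-end sketch is shown below. This is my own illustrative code, not the paywalled solution above (which appears to build explicit linked-list nodes via ListNode); here OrderedDict buckets play the role of those lists, and names such as LFUCache, values, buckets, and min_freq are my own:

```python
from collections import defaultdict, OrderedDict

class LFUCache:
    def __init__(self, cache_size):
        self.cap = cache_size
        self.values = {}                         # key -> value
        self.freq = {}                           # key -> access count
        self.buckets = defaultdict(OrderedDict)  # count -> keys, oldest (LRU) first
        self.min_freq = 0

    def _bump(self, key):
        # Move key into the next-higher frequency bucket (Step 4 bookkeeping).
        f = self.freq[key]
        del self.buckets[f][key]
        if not self.buckets[f]:
            del self.buckets[f]
            if self.min_freq == f:
                self.min_freq = f + 1
        self.freq[key] = f + 1
        self.buckets[f + 1][key] = None          # most recently used at new frequency

    def get(self, key):
        if key not in self.values:
            return -1
        self._bump(key)
        return self.values[key]

    def put(self, key, value):
        if self.cap == 0:                        # edge case: nothing can be stored
            return
        if key in self.values:                   # existing key: update value, bump count
            self.values[key] = value
            self._bump(key)
            return
        if len(self.values) >= self.cap:         # full: evict LRU key of min-freq bucket
            victim, _ = self.buckets[self.min_freq].popitem(last=False)
            if not self.buckets[self.min_freq]:
                del self.buckets[self.min_freq]
            del self.values[victim]
            del self.freq[victim]
        self.values[key] = value                 # new key enters with frequency 1
        self.freq[key] = 1
        self.buckets[1][key] = None
        self.min_freq = 1
```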

Step 6: Eviction logic details
- Eviction always targets the key with the minimum frequency.
- Among keys with the minimum frequency, evict the least recently used (typically the tail of the doubly linked list).
- Removing from the frequency list and the dictionary must be efficient (O(1)).
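
A small aside on the recency convention used in these sketches: with an OrderedDict bucket the least recently used key is the first entry, so popitem(last=False) yields the eviction victim (with an explicit doubly linked list it would typically be the tail node instead):

```python
from collections import OrderedDict

bucket = OrderedDict()                   # keys that share the minimum frequency
bucket[1] = None                         # key 1 inserted first
bucket[2] = None                         # key 2 inserted second
bucket.move_to_end(1)                    # key 1 was just used again (now most recent)

victim, _ = bucket.popitem(last=False)   # least recently used key in this bucket
print(victim)                            # 2
```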

Step 7: Maintaining performance
- Aim for O(1) time complexity for both GET and PUT operations.
- Achieving this requires careful use of hash maps and doubly linked lists.
- The dictionary offers O(1) access by key.
- Doubly linked lists allow O(1) insertions and deletions of nodes representing keys in frequency lists.
- Tracking minFreq efficiently avoids searching for the minimum frequency during evictions.
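
If you want an informal sanity check of the O(1) claim, a rough timing loop (assuming the LFUCache sketch above is in scope; this is not a rigorous benchmark) should show per-operation time staying roughly flat as the cache grows:

```python
import random
import time

# Assumes the LFUCache sketch above is already defined in this module.
for size in (1_000, 10_000, 100_000):
    cache = LFUCache(size)
    start = time.perf_counter()
    for i in range(size * 2):
        cache.put(random.randrange(size * 2), i)
        cache.get(random.randrange(size * 2))
    elapsed = time.perf_counter() - start
    print(size, round(elapsed / (size * 2) * 1e6, 2), "microseconds per put+get pair")
```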

Step 8: Handling edge cases
- Cache size zero: All GET operations return -1 and PUT operations do nothing.
- Single element cache: Every new PUT may cause eviction.
- Repeated PUT operations with the same key should update the value and frequency accordingly.
- GET on non-existent keys returns -1.
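
These edge cases are easy to exercise directly against the LFUCache sketch above (assumed to be in scope):

```python
# Cache size zero: GETs return -1 and PUTs are no-ops.
zero = LFUCache(0)
zero.put(1, 1)
assert zero.get(1) == -1

# Single-element cache: every PUT of a new key evicts the previous one.
one = LFUCache(1)
one.put(1, 1)
one.put(2, 2)
assert one.get(1) == -1 and one.get(2) == 2

# Repeated PUTs on the same key update the value, not the size.
one.put(2, 20)
assert one.get(2) == 20
```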

Step 9: Output assembly
- For each GET query, capture the returned value (or -1).
- After processing all queries, return the array of GET results.
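
Finally, the per-query driver can be a thin wrapper that parses each query string and records only the GET answers. The sketch below assumes the LFUCache class from the earlier step; the function name solve is my own:

```python
def solve(cache_size, queries):
    # Hypothetical driver; assumes the LFUCache sketch above.
    cache = LFUCache(cache_size)
    answers = []
    for q in queries:
        parts = q.split()
        if parts[0] == "GET":
            answers.append(cache.get(int(parts[1])))
        else:                                   # "PUT key value"
            cache.put(int(parts[1]), int(parts[2]))
    return answers

print(solve(1, ["PUT 1 1", "PUT 2 2", "GET 1"]))   # [-1]
```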

Summary:
- This problem is a classic LFU cache design challenge.
- Core insight: Maintain a hash map for key-to-(value, frequency) mapping and frequency buckets (ordered lists) for keys of the same frequency.
- Efficiently update frequencies on access and insertions.
- On capacity overflow, evict the least frequently used key, breaking ties by least recent use.
- Use minFreq to quickly identify the eviction target frequency.
- Properly handle updating minFreq as keys are accessed or evicted.
- Ensure all operations run in O(1) time for scalability.


Related Questions