LRU Cache
Design a least recently used (LRU) cache that implements the following interface:
class LRUCache {
public:
    LRUCache(int capacity) {
    }

    int get(int key) {
    }

    void put(int key, int value) {
    }
};
/**
* Your LRUCache object will be instantiated and called as such:
* LRUCache* obj = new LRUCache(capacity);
* int param_1 = obj->get(key);
* obj->put(key,value);
*/
C++ solution
#include <iterator>
#include <list>
#include <unordered_map>

class LRUCache {
private:
    struct Node {
        int k;
        int v;
    };
    std::list<Node> cache;                                   // front = most recently used
    std::unordered_map<int, std::list<Node>::iterator> map;  // key -> node in the list
    size_t cap;

    // Move an existing node to the front in O(1). splice never invalidates
    // list iterators, so the iterators stored in the map stay valid.
    void move_top(std::list<Node>::iterator it) {
        cache.splice(cache.begin(), cache, it);
    }

public:
    LRUCache(int capacity) : cap(capacity) {}

    int get(int key) {
        auto it = map.find(key);
        if (it == map.end()) return -1;
        auto cache_it = it->second;
        move_top(cache_it);
        return cache_it->v;
    }

    void put(int key, int value) {
        auto it = map.find(key);
        if (it == map.end()) {
            if (cache.size() >= cap) {
                // Evict the least recently used node at the back.
                auto ev_it = std::prev(cache.end());
                map.erase(ev_it->k);  // read the key BEFORE destroying the node
                cache.erase(ev_it);
            }
            cache.push_front({key, value});
            map.insert({key, cache.begin()});
        } else {
            auto cache_it = it->second;
            move_top(cache_it);
            cache_it->v = value;
        }
    }
};
Question 1 – Medium difficulty / Understanding & edge cases
What happens in your implementation when:
1. We call put(key, value) with a brand-new key and the cache is not full yet?
2. We call put(key, value) with a key that already exists in the cache?
3. We call get(key) on a key that does not exist?
Please walk through the code path for each case (you can point to lines or describe the flow). (They want to see whether you really understand both the happy path and the update path.)
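The three cases can also be checked at runtime. The sketch below repeats the class from above verbatim so the snippet compiles on its own, then traces each code path with comments:

```cpp
#include <cassert>
#include <iterator>
#include <list>
#include <unordered_map>

class LRUCache {
    struct Node { int k; int v; };
    std::list<Node> cache;                                   // front = most recently used
    std::unordered_map<int, std::list<Node>::iterator> map;  // key -> node in the list
    size_t cap;
    void move_top(std::list<Node>::iterator it) {
        cache.splice(cache.begin(), cache, it);              // O(1), iterators stay valid
    }
public:
    LRUCache(int capacity) : cap(capacity) {}
    int get(int key) {
        auto it = map.find(key);
        if (it == map.end()) return -1;
        move_top(it->second);
        return it->second->v;
    }
    void put(int key, int value) {
        auto it = map.find(key);
        if (it == map.end()) {
            if (cache.size() >= cap) {
                auto ev_it = std::prev(cache.end());
                map.erase(ev_it->k);
                cache.erase(ev_it);
            }
            cache.push_front({key, value});
            map.insert({key, cache.begin()});
        } else {
            move_top(it->second);
            it->second->v = value;
        }
    }
};

// Trace for the three cases in Question 1, on a capacity-2 cache.
inline void walkthrough() {
    LRUCache c(2);
    // Case 1: brand-new key, cache not full.
    // map.find misses -> no eviction -> push_front + map.insert.
    c.put(1, 10);
    c.put(2, 20);
    // Case 3: get() on a missing key. map.find misses -> return -1.
    assert(c.get(5) == -1);
    // Case 2: put() with an existing key. map.find hits -> move_top +
    // overwrite v; size is unchanged and nothing is evicted.
    c.put(1, 11);
    assert(c.get(1) == 11);
    // New key while full: key 2 is now least recent and gets evicted.
    c.put(3, 30);
    assert(c.get(2) == -1);
    assert(c.get(3) == 30);
}
```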
Question 2 – Slightly harder / Implementation detail + C++ knowledge
In your put() function, when the cache is full, you wrote:

    map.erase(ev_it->k);
    cache.erase(ev_it);

Why do we need to erase from the map before we erase from the list? What would happen if we changed the order, that is, if we first did cache.erase(ev_it) and only then map.erase(ev_it->k)? (Tests understanding of iterator invalidation, and of why we need the key from the node before destroying the node.)
No, you cannot reverse the order, because:
When you call cache.erase(ev_it), the Node that ev_it points to (*ev_it) is destroyed at that moment.
After destruction, the memory that held ev_it->k (and ev_it->v) no longer contains a valid int.
Reading ev_it->k afterwards dereferences an invalidated iterator, which is undefined behavior.
(It very often crashes with a segfault, sometimes yields a garbage value, and sometimes appears to "work" by luck, which makes it a very nasty bug.)
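If you do want to erase from the list first, the fix is to copy the key out of the node before it is destroyed. A minimal self-contained sketch of that safe ordering:

```cpp
#include <cassert>
#include <iterator>
#include <list>
#include <unordered_map>

struct Node { int k; int v; };

// Demonstrates erasing from the list FIRST, made safe by copying the key
// out of the node before cache.erase() destroys it.
inline void evict_list_first() {
    std::list<Node> cache;
    std::unordered_map<int, std::list<Node>::iterator> map;

    cache.push_front({1, 10});
    map.insert({1, cache.begin()});

    auto ev_it = std::prev(cache.end());
    int evicted_key = ev_it->k;  // read the key while *ev_it is still alive
    cache.erase(ev_it);          // the Node is destroyed; ev_it is now invalid
    map.erase(evicted_key);      // safe: uses the saved copy, never touches ev_it

    assert(cache.empty() && map.empty());
}
```

Either ordering is O(1); the original map-before-list order just avoids the extra copy of the key.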
Question 3 – Design / Trade-off / Improvement question
Your current implementation uses std::list<Node>, where each node stores both the key and the value, and the unordered_map stores only an iterator into the list.
Some people implement the LRU Cache so that the unordered_map also stores a copy of the value next to the iterator, meaning the value lives in both containers.
What are the advantages and disadvantages of your current approach (key+value only in the list) compared to the version that keeps the value in both containers?
In which situations would you prefer one over the other?
Bonus follow-up that sometimes comes after Q3: Would you consider changing your current implementation after hearing the pros and cons? Why or why not?
Value type                 Recommended approach
─────────────────────────  ─────────────────────────────────────────────
int, pointer, small POD    Either is fine (the alternative is slightly
                           more popular in interviews)
string (>16–24 bytes)      Prefer your approach (no duplication)
vector, map, big struct    Strongly prefer your approach
shared_ptr                 Either is fine (copying only bumps the refcount)