# algorithm Offline Caching

## Example

The caching problem arises from the limitation of finite space. Let's assume our cache `C` has `k` pages. Now we want to process a sequence of `m` item requests, each of which must be placed in the cache before it is processed. Of course, if `m<=k` then we just put all elements in the cache and it will work, but usually `m>>k`.

We say a request is a cache hit when the item is already in the cache; otherwise it's called a cache miss. In that case we must bring the requested item into the cache and evict another, assuming the cache is full. The goal is an eviction schedule that minimizes the number of cache misses.

There are numerous greedy strategies for this problem; let's look at some:

1. First in, first out (FIFO): The oldest page gets evicted
2. Last in, first out (LIFO): The newest page gets evicted
3. Least recently used (LRU): Evict the page whose most recent access was earliest
4. Least frequently used (LFU): Evict the page that was least frequently requested
5. Longest forward distance (LFD): Evict the page in the cache that is not requested until farthest in the future

Attention: For the following examples, we evict the page with the smallest index if more than one page could be evicted.

### Example (FIFO)

Let the cache size be `k=3`, the initial cache `a,b,c`, and the request sequence `a,a,d,e,b,b,a,c,f,d,e,a,f,b,e,c`:

| Request | a | a | d | e | b | b | a | c | f | d | e | a | f | b | e | c |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| `cache 1` | a | a | d | d | d | d | a | a | a | d | d | d | f | f | f | c |
| `cache 2` | b | b | b | e | e | e | e | c | c | c | e | e | e | b | b | b |
| `cache 3` | c | c | c | c | b | b | b | b | f | f | f | a | a | a | e | e |
| cache miss |  |  | x | x | x |  | x | x | x | x | x | x | x | x | x | x |

Thirteen cache misses for sixteen requests does not sound very optimal; let's try the same example with another strategy:

### Example (LFD)

Let the cache size be `k=3`, the initial cache `a,b,c`, and the request sequence `a,a,d,e,b,b,a,c,f,d,e,a,f,b,e,c`:

| Request | a | a | d | e | b | b | a | c | f | d | e | a | f | b | e | c |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| `cache 1` | a | a | a | a | a | a | a | a | a | a | a | a | f | b | b | c |
| `cache 2` | b | b | b | b | b | b | b | c | f | d | d | d | d | d | d | d |
| `cache 3` | c | c | d | e | e | e | e | e | e | e | e | e | e | e | e | e |
| cache miss |  |  | x | x |  |  |  | x | x | x |  |  | x | x |  | x |

Eight cache misses is a lot better.

Self-test: Do the example for LIFO, LRU, and LFU and see what happens.

The following example program (written in C++) consists of two parts:

The skeleton is an application that solves the problem using the chosen greedy strategy (to compile everything as a single file, the strategy classes shown below must be defined before `main`):

```cpp
#include <iostream>
#include <string>

using namespace std;

const int cacheSize     = 3;
const int requestLength = 16;

const char request[]    = {'a','a','d','e','b','b','a','c','f','d','e','a','f','b','e','c'};
char cache[]            = {'a','b','c'};

// for reset
char originalCache[]    = {'a','b','c'};

class Strategy {

public:
Strategy(std::string name) : strategyName(name) {}
virtual ~Strategy() = default;

// calculate which cache place should be used
virtual int apply(int requestIndex)                                      = 0;

// updates information the strategy needs
virtual void update(int cachePlace, int requestIndex, bool cacheMiss)    = 0;

const std::string strategyName;
};

bool updateCache(int requestIndex, Strategy* strategy)
{
// calculate where to put request
int cachePlace = strategy->apply(requestIndex);

// check whether it's a cache hit or a cache miss
bool isMiss = request[requestIndex] != cache[cachePlace];

// update strategy (for example recount distances)
strategy->update(cachePlace, requestIndex, isMiss);

// write to cache
cache[cachePlace] = request[requestIndex];

return isMiss;
}

int main()
{
Strategy* selectedStrategy[] = { new FIFO, new LIFO, new LRU, new LFU, new LFD };

for (int strat=0; strat < 5; ++strat)
{
// reset cache
for (int i=0; i < cacheSize; ++i) cache[i] = originalCache[i];

cout <<"\nStrategy: " << selectedStrategy[strat]->strategyName << endl;

cout << "\nCache initial: (";
for (int i=0; i < cacheSize-1; ++i) cout << cache[i] << ",";
cout << cache[cacheSize-1] << ")\n\n";

cout << "Request\t";
for (int i=0; i < cacheSize; ++i) cout << "cache " << i << "\t";
cout << "cache miss" << endl;

int cntMisses = 0;

for(int i=0; i<requestLength; ++i)
{
bool isMiss = updateCache(i, selectedStrategy[strat]);
if (isMiss) ++cntMisses;

cout << "  " << request[i] << "\t";
for (int l=0; l < cacheSize; ++l) cout << "  " << cache[l] << "\t";
cout << (isMiss ? "x" : "") << endl;
}

cout<< "\nTotal cache misses: " << cntMisses << endl;
}

for(int i=0; i<5; ++i) delete selectedStrategy[i];
}
``````

The basic idea is simple: for every request there are two calls to the strategy:

1. apply: The strategy tells the caller which cache slot to use
2. update: After the caller uses the slot, it tells the strategy whether it was a miss or not. Then the strategy may update its internal data. The LFU strategy, for example, has to update the hit frequencies of the cache pages, while the LFD strategy has to recalculate the distances for the cache pages.

Now let's look at example implementations for our five strategies:

### FIFO

```cpp
class FIFO : public Strategy {
public:
FIFO() : Strategy("FIFO")
{
for (int i=0; i<cacheSize; ++i) age[i] = 0;
}

int apply(int requestIndex) override
{
int oldest = 0;

for(int i=0; i<cacheSize; ++i)
{
if(cache[i] == request[requestIndex])
return i;

else if(age[i] > age[oldest])
oldest = i;
}

return oldest;
}

void update(int cachePos, int requestIndex, bool cacheMiss) override
{
// nothing changed, we don't need to update the ages
if(!cacheMiss)
return;

// all old pages get older, the new one gets age 0
for(int i=0; i<cacheSize; ++i)
{
if(i != cachePos)
age[i]++;

else
age[i] = 0;
}
}

private:
int age[cacheSize];
};
``````

FIFO just needs to know how long each page has been in the cache (and of course only relative to the other pages). So the only thing to do is wait for a miss and then age the pages that were not evicted. For our example above, the program's solution is:

```
Strategy: FIFO

Cache initial: (a,b,c)

Request    cache 0    cache 1    cache 2    cache miss
a          a          b          c
a          a          b          c
d          d          b          c          x
e          d          e          c          x
b          d          e          b          x
b          d          e          b
a          a          e          b          x
c          a          c          b          x
f          a          c          f          x
d          d          c          f          x
e          d          e          f          x
a          d          e          a          x
f          f          e          a          x
b          f          b          a          x
e          f          b          e          x
c          c          b          e          x

Total cache misses: 13
``````

That's exactly the solution from above.

### LIFO

```cpp
class LIFO : public Strategy {
public:
LIFO() : Strategy("LIFO")
{
for (int i=0; i<cacheSize; ++i) age[i] = 0;
}

int apply(int requestIndex) override
{
int newest = 0;

for(int i=0; i<cacheSize; ++i)
{
if(cache[i] == request[requestIndex])
return i;

else if(age[i] < age[newest])
newest = i;
}

return newest;
}

void update(int cachePos, int requestIndex, bool cacheMiss) override
{
// nothing changed, we don't need to update the ages
if(!cacheMiss)
return;

// all old pages get older, the new one gets age 0
for(int i=0; i<cacheSize; ++i)
{
if(i != cachePos)
age[i]++;

else
age[i] = 0;
}
}

private:
int age[cacheSize];
};
``````

The implementation of LIFO is more or less the same as for FIFO, but we evict the youngest page instead of the oldest one. The program results are:

```
Strategy: LIFO

Cache initial: (a,b,c)

Request    cache 0    cache 1    cache 2    cache miss
a          a          b          c
a          a          b          c
d          d          b          c          x
e          e          b          c          x
b          e          b          c
b          e          b          c
a          a          b          c          x
c          a          b          c
f          f          b          c          x
d          d          b          c          x
e          e          b          c          x
a          a          b          c          x
f          f          b          c          x
b          f          b          c
e          e          b          c          x
c          e          b          c

Total cache misses: 9
``````

### LRU

```cpp
class LRU : public Strategy {
public:
LRU() : Strategy("LRU")
{
for (int i=0; i<cacheSize; ++i) age[i] = 0;
}

// here "oldest" means not used for the longest time
int apply(int requestIndex) override
{
int oldest = 0;

for(int i=0; i<cacheSize; ++i)
{
if(cache[i] == request[requestIndex])
return i;

else if(age[i] > age[oldest])
oldest = i;
}

return oldest;
}

void update(int cachePos, int requestIndex, bool cacheMiss) override
{
// all old pages get older, the used one gets age 0
for(int i=0; i<cacheSize; ++i)
{
if(i != cachePos)
age[i]++;

else
age[i] = 0;
}
}

private:
int age[cacheSize];
};
``````

The LRU update is independent of whether the access was a hit or a miss; its only interest is the last usage of each page. The program results are:

```
Strategy: LRU

Cache initial: (a,b,c)

Request    cache 0    cache 1    cache 2    cache miss
a          a          b          c
a          a          b          c
d          a          d          c          x
e          a          d          e          x
b          b          d          e          x
b          b          d          e
a          b          a          e          x
c          b          a          c          x
f          f          a          c          x
d          f          d          c          x
e          f          d          e          x
a          a          d          e          x
f          a          f          e          x
b          a          f          b          x
e          e          f          b          x
c          e          c          b          x

Total cache misses: 13
``````

### LFU

```cpp
class LFU : public Strategy {
public:
LFU() : Strategy("LFU")
{
for (int i=0; i<cacheSize; ++i) requestFrequency[i] = 0;
}

int apply(int requestIndex) override
{
int least = 0;

for(int i=0; i<cacheSize; ++i)
{
if(cache[i] == request[requestIndex])
return i;

else if(requestFrequency[i] < requestFrequency[least])
least = i;
}

return least;
}

void update(int cachePos, int requestIndex, bool cacheMiss) override
{
if(cacheMiss)
requestFrequency[cachePos] = 1;

else
++requestFrequency[cachePos];
}

private:

// how frequently was the page used
int requestFrequency[cacheSize];
};
``````

LFU evicts the page that is used least often, so the update strategy just counts every access. Of course, after a miss the count resets. The program results are:

```
Strategy: LFU

Cache initial: (a,b,c)

Request    cache 0    cache 1    cache 2    cache miss
a          a          b          c
a          a          b          c
d          a          d          c          x
e          a          d          e          x
b          a          b          e          x
b          a          b          e
a          a          b          e
c          a          b          c          x
f          a          b          f          x
d          a          b          d          x
e          a          b          e          x
a          a          b          e
f          a          b          f          x
b          a          b          f
e          a          b          e          x
c          a          b          c          x

Total cache misses: 10
``````

### LFD

```cpp
class LFD : public Strategy {
public:
LFD() : Strategy("LFD")
{
// precalculate the next use of each page before processing requests
for (int i=0; i<cacheSize; ++i) nextUse[i] = calcNextUse(-1, cache[i]);
}

int apply(int requestIndex) override
{
int latest = 0;

for(int i=0; i<cacheSize; ++i)
{
if(cache[i] == request[requestIndex])
return i;

else if(nextUse[i] > nextUse[latest])
latest = i;
}

return latest;
}

void update(int cachePos, int requestIndex, bool cacheMiss) override
{
nextUse[cachePos] = calcNextUse(requestIndex, cache[cachePos]);
}

private:

int calcNextUse(int requestPosition, char pageItem)
{
for(int i = requestPosition+1; i < requestLength; ++i)
{
if (request[i] == pageItem)
return i;
}

return requestLength + 1;
}

// next usage of page
int nextUse[cacheSize];
};
``````

The LFD strategy is different from all the strategies before: it is the only one that uses knowledge of the future requests to decide which page to evict. The implementation uses the function `calcNextUse` to find the page whose next use is farthest in the future. The program solution is equal to the solution by hand from above:

```
Strategy: LFD

Cache initial: (a,b,c)

Request    cache 0    cache 1    cache 2    cache miss
a          a          b          c
a          a          b          c
d          a          b          d          x
e          a          b          e          x
b          a          b          e
b          a          b          e
a          a          b          e
c          a          c          e          x
f          a          f          e          x
d          a          d          e          x
e          a          d          e
a          a          d          e
f          f          d          e          x
b          b          d          e          x
e          b          d          e
c          c          d          e          x

Total cache misses: 8
``````

The greedy strategy LFD is indeed the only optimal strategy of the five presented. The proof is rather long and can be found in the book by Jon Kleinberg and Eva Tardos (see sources in the remarks down below).

### Algorithm vs Reality

The LFD strategy is optimal, but there is a big problem: it is an optimal *offline* solution. In practice, caching is usually an *online* problem, meaning the strategy is useless because we cannot know the next time we will need a particular item. The other four strategies are also online strategies. For online problems we need a fundamentally different approach.