The @lru_cache
decorator can be used to wrap an expensive, computationally intensive function with a Least Recently Used (LRU) cache. This memoizes function calls, so that future calls with the same arguments can return instantly instead of being recomputed.
from functools import lru_cache

@lru_cache(maxsize=None)  # Boundless cache
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n-1) + fibonacci(n-2)

>>> fibonacci(15)
610
In the example above, the value of fibonacci(3)
is only calculated once, whereas if fibonacci
didn't have an LRU cache, fibonacci(3)
would have been computed 233 separate times. Hence, @lru_cache
is especially useful for recursive functions and dynamic programming, where an expensive function may be called repeatedly with the exact same arguments.
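As a further sketch of the dynamic-programming use case, consider counting lattice paths through a grid (grid_paths below is a hypothetical example, not from the docs). Each subproblem is shared by many paths, so without caching the same calls would be repeated exponentially many times:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def grid_paths(rows, cols):
    """Count paths from (0, 0) to (rows, cols) moving only right or down."""
    if rows == 0 or cols == 0:
        return 1  # only one straight-line path remains
    return grid_paths(rows - 1, cols) + grid_paths(rows, cols - 1)
```

Because overlapping subproblems like grid_paths(1, 1) hit the cache instead of recursing again, the runtime drops from exponential to O(rows * cols).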
@lru_cache
has two parameters:

maxsize: Number of calls to cache. When the number of unique calls exceeds maxsize, the LRU cache evicts the least recently used entries.

typed (added in 3.3): Flag determining whether equal arguments of different types belong to different cache records (i.e. whether 3.0 and 3 count as different arguments).

We can inspect cache stats too:
>>> fibonacci.cache_info()
CacheInfo(hits=13, misses=16, maxsize=None, currsize=16)
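To illustrate eviction with a bounded maxsize (a small sketch; square is a made-up function), note how currsize never exceeds maxsize and an evicted call must be recomputed:

```python
from functools import lru_cache

@lru_cache(maxsize=2)
def square(n):
    return n * n

square(1)
square(2)
square(3)  # evicts the entry for square(1), the least recently used
square(1)  # cache miss: recomputed after eviction

# square.cache_info() -> CacheInfo(hits=0, misses=4, maxsize=2, currsize=2)
```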
NOTE: Since @lru_cache
uses a dictionary to cache results, all arguments to the function must be hashable for the cache to work.
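For example (a quick sketch; total is a made-up function), passing an unhashable argument such as a list raises a TypeError, while an equivalent tuple works fine:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def total(values):
    return sum(values)

total((1, 2, 3))     # tuples are hashable -> 6

try:
    total([1, 2, 3])  # lists are unhashable
except TypeError:
    pass              # TypeError: unhashable type: 'list'
```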
See the official Python docs for @lru_cache
for more details. @lru_cache
was added in 3.2.