Expensive computations or database calls can quickly slow down a system, and independent caches scattered across libraries or modules become unmanageable when data changes require flushing them. The cache library addresses this with a tagging system that clears related caches on demand.
The cache library provides three caching systems. The first two are used when the lifespan of a cache entry doesn't matter but memory is constrained; they purge items using two different eviction policies:
- Least Frequently Used (LFU) - Counts how often each item is accessed and discards the least frequently used items first. This works much like LRU, except that instead of recording how recently an item was accessed, it records how many times it was accessed; the item with the lowest access count is evicted first. E.g., if A was accessed 5 times, B was accessed 3 times, and C and D were accessed 10 times each, B would be evicted.
- Least Recently Used (LRU) - Discards the least recently used items first. This algorithm requires keeping track of when each item was last used. When the cache is full and requires more room, the system purges the item that was accessed longest ago.
The time-based cache:
- Time To Live (TTL) - Cache entries expire after a set time. However, if new entries are added beyond the capacity of the cache, the oldest entries are discarded first.
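To illustrate the LRU policy described above, here is a minimal, self-contained sketch (not the library's actual implementation) built on `collections.OrderedDict`, which keeps keys in access order so the least recently used entry is always at the front:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU sketch: an OrderedDict tracks access order,
    so the least recently used entry is always at the front."""
    def __init__(self, maxsize=50):
        self.maxsize = maxsize
        self._data = OrderedDict()

    def __setitem__(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)    # refresh recency on overwrite
        self._data[key] = value
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # evict the least recently used item

    def __getitem__(self, key):
        self._data.move_to_end(key)         # accessing an item marks it recent
        return self._data[key]

    def __contains__(self, key):
        return key in self._data

cache = LRUCache(maxsize=2)
cache['a'] = 1
cache['b'] = 2
_ = cache['a']    # 'a' is now the most recently used entry
cache['c'] = 3    # evicts 'b', the least recently used entry
```

An LFU variant would store an access counter per key instead of reordering, and evict the key with the smallest count.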
There are two ways to create caches:
- Caching dictionaries - Assign a cache to a variable and it will act like a dictionary. Only the top-level dictionary keys are used for cache timeouts and flushing.
- Function decorators - Decorate functions and the results of the function will be cached.
To create the three different caches:
- session_validation_cache = self._Cache.lru(maxsize=50, tags=('sessions', 'users'))
- Creates an LRU cache with a max size of 50 entries, with flush tags of 'sessions' and 'users'.
- session_validation_cache = self._Cache.lfu(maxsize=50, tags=('sessions', 'users'))
- Creates an LFU cache with a max size of 50 entries and the same flush tags.
- session_validation_cache = self._Cache.ttl(ttl=120, maxsize=50, tags=('sessions', 'users'))
- Creates a TTL cache with entries timing out after 120 seconds.
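The tag-based flushing mentioned earlier can be sketched in plain Python as follows. The class and method names here are illustrative, not the Yombo API: the idea is a registry mapping each tag to every cache carrying it, so flushing one tag clears all of them at once.

```python
class TaggedCaches:
    """Conceptual sketch of tag-based flushing (names are illustrative,
    not the library's API): a registry maps tags to the caches that
    carry them, so flushing one tag clears every matching cache."""
    def __init__(self):
        self._registry = {}   # tag -> list of cache dicts

    def new(self, tags=()):
        cache = {}
        for tag in tags:
            self._registry.setdefault(tag, []).append(cache)
        return cache

    def flush(self, tag):
        for cache in self._registry.get(tag, []):
            cache.clear()

caches = TaggedCaches()
sessions = caches.new(tags=('sessions', 'users'))
profiles = caches.new(tags=('users',))
sessions['sid1'] = 'alice'
profiles['alice'] = {'age': 30}
caches.flush('users')   # clears both caches, since both carry the 'users' tag
```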
The cache decorator allows the function's results to be cached. Any subsequent calls to the function can return cached results. The arguments to the function will be considered when checking for cached entries.
Limitation: The caching decorator can only track and return cached items if the arguments are standard Python types such as ints, floats, strings, tuples, lists, and dictionaries. Other objects passed in as arguments may cause the cache to be bypassed.
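The reason for this limitation is that cached results must be looked up by a hashable key derived from the call arguments. Here is an illustrative sketch (not the library's actual implementation) of how such a key might be built: lists and dicts are converted into hashable tuples, while an argument that cannot be frozen this way would force the decorator to skip the cache.

```python
def make_cache_key(args, kwargs):
    """Illustrative sketch of deriving a hashable cache key from call
    arguments. Lists and dicts become tuples; any other unhashable
    object would cause a lookup failure and bypass the cache."""
    def freeze(value):
        if isinstance(value, dict):
            return tuple(sorted((k, freeze(v)) for k, v in value.items()))
        if isinstance(value, list):
            return tuple(freeze(v) for v in value)
        return value
    return (tuple(freeze(a) for a in args),
            tuple(sorted((k, freeze(v)) for k, v in kwargs.items())))

# Identical calls produce identical, hashable keys:
key1 = make_cache_key(([1, 2], {'a': 1}), {'name': 'joe'})
key2 = make_cache_key(([1, 2], {'a': 1}), {'name': 'joe'})
```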
To use the decorator, first import it with `from yombo.utils.decorators import cached`. Once it's imported, simply decorate the function:
```python
@cached()
def some_costly_function(name=None):
    return find_name_in_database(name)
```
The cached decorator accepts the following arguments:
- ttl - When using a TTL cache, the time in seconds before items expire. Default is 120.
- maxsize - Max number of entries. Default is 512.
- cachename - Specify a cache name. If none is provided, the cache name will be generated from the function's name and the module it's in.
- tags - A list (or tuple) of tags for flushing. When a flush is requested for a specific tag, any matching tags will also be flushed.
- cache_type - Default is ttl. Specify one of: lru, lfu, or ttl.
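The following sketch shows the decorator used with the arguments documented above. So the snippet runs standalone, a minimal TTL-memoizing stand-in replaces `yombo.utils.decorators.cached`; it honors only `ttl` and ignores the other arguments, and the `get_user` function is hypothetical:

```python
import time
import functools

def cached(ttl=120, maxsize=512, cachename=None, tags=(), cache_type='ttl'):
    """Minimal stand-in for yombo.utils.decorators.cached so this
    snippet runs without Yombo; only ttl is honored here."""
    def decorator(func):
        store = {}
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            key = (args, tuple(sorted(kwargs.items())))
            hit = store.get(key)
            if hit is not None and time.monotonic() - hit[1] < ttl:
                return hit[0]                      # fresh cached result
            result = func(*args, **kwargs)
            store[key] = (result, time.monotonic())
            return result
        return wrapper
    return decorator

lookups = []

@cached(ttl=300, maxsize=100, cachename='user_lookup',
        tags=('users',), cache_type='lru')
def get_user(user_id):
    lookups.append(user_id)        # track how often the real lookup runs
    return {'id': user_id}

get_user(1)
get_user(1)    # served from cache; the real lookup runs only once
```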
As a dictionary:

```python
self.data = self._Cache.ttl()  # defaults: 120 second timeout, 512 items max.
self.data['hello'] = "Joe said hello."
```
As a decorator:
```python
from yombo.utils.decorators import cached

@cached(ttl=5)  # memoize results for 5 seconds
def fib(num):
    if num < 2:
        return num
    return fib(num - 1) + fib(num - 2)

print(fib(35))  # Fast even on the first call: the recursive calls are memoized.
print(fib(35))  # Nearly instant: the result is served straight from the cache.
```