tango.util.container.more.CacheMap

License:

BSD style: see license.txt

Version:

Initial release: April 2008

Author:

Kris

Since:

0.99.7
class CacheMap(K, V, alias Hash = Container.hash, alias Reap = Container.reap, alias Heap = Container.Collect) #
CacheMap extends the basic hashmap type by adding a limit to the number of items contained at any given time. In addition, CacheMap orders the cache entries such that those most recently accessed are at the head of the queue, and those least recently accessed are at the tail. When the queue becomes full, old entries are dropped from the tail and are reused to house new cache entries. In other words, the cache retains most-recently-used (MRU) items and drops the least-recently-used (LRU) items once capacity is reached.

This is useful for keeping commonly accessed items around while limiting the amount of memory used. Typically, the queue size would be set in the thousands.
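A minimal usage sketch (assuming Tango's D1 syntax; the char[] key type, int value type, and capacity of 1000 are chosen purely for illustration):

    import tango.util.container.more.CacheMap;

    void main ()
    {
        // hold at most 1000 entries, keyed by string
        auto cache = new CacheMap!(char[], int)(1000);

        // place entries into the cache
        cache.add ("alpha", 1);
        cache.add ("beta", 2);

        // look up an entry; get() returns false when the key is absent
        int value;
        if (cache.get ("alpha", value))
            assert (value == 1);

        // remove an entry by key
        cache.take ("beta");
    }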

this(uint capacity) #
Construct a cache with the specified maximum number of entries. Additions to the cache beyond this number will reuse the slot of the least-recently-referenced cache entry.
~this() #
Clean up when done
void reaper(K, R)(K k, R r) [static] #
Reaping callback for the hashmap, acting as a trampoline
uint size() [final] #
Return the number of entries currently contained in the cache
int opApply(int delegate(ref K key, ref V value) dg) [final] #
Iterate from MRU to LRU entries
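Since iteration follows the queue order, a foreach over the cache visits the most recently referenced entries first. A brief sketch, reusing the hypothetical cache from the example above:

    import tango.io.Stdout;

    // entries are visited from most- to least-recently referenced
    foreach (key, value; cache)
        Stdout.formatln ("{} => {}", key, value);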
bool get(K key, ref V value) #
Get the cache entry identified by the given key, placing it in the provided value. Returns false if there is no such entry. A successful lookup also marks the entry as the most recently referenced.
bool add(K key, V value) [final] #
Place an entry into the cache and associate it with the provided key. Note that there can be only one entry for any particular key. If two entries are added with the same key, the second effectively overwrites the first.
Returns true if we added a new entry; false if we just replaced an existing one
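For example, continuing the hypothetical sketch above where the cache maps char[] to int:

    // a new key creates a fresh entry
    assert (cache.add ("gamma", 3));

    // the same key again replaces the value and yields false
    assert (! cache.add ("gamma", 4));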
bool take(K key) [final] #
Remove the cache entry associated with the provided key. Returns false if there is no such entry.
bool take(K key, ref V value) [final] #
Remove (and return) the cache entry associated with the provided key. Returns false if there is no such entry.
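Continuing the same hypothetical sketch, the two overloads might be used like so:

    // remove an entry and recover its stored value
    int removed;
    if (cache.take ("gamma", removed))
        assert (removed == 4);

    // the entry is gone now, so a second take fails
    assert (! cache.take ("gamma"));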
Ref deReference(Ref entry) [private] #
Place a cache entry at the tail of the queue. This makes it the least-recently referenced.
Ref reReference(Ref entry) [private] #
Move a cache entry to the head of the queue. This makes it the most-recently referenced.
Ref addEntry(K key, V value) [private] #
Add an entry into the queue. If the queue is full, the least-recently-referenced entry is reused for the new addition.
struct QueuedEntry [private] #
A doubly-linked list entry, used as a wrapper for queued cache entries. Note that this struct itself is a valid cache entry.
Ref set(K key, V value) #
Set this linked-list entry with the given arguments.
Ref prepend(Ref before) #
Insert this entry into the linked-list just in front of the given entry.
Ref append(Ref after) #
Add this entry into the linked-list just after the given entry.
Ref extract() #
Remove this entry from the linked-list. The previous and next entries are patched together appropriately.
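For illustration only, a generic doubly-linked splice of this kind (not the library's actual code) might be written as:

    // hypothetical node type demonstrating the splice pattern
    struct Node
    {
        Node* prev,
              next;

        // unhook this node, patching prev and next together
        void extract ()
        {
            if (prev)
                prev.next = next;
            if (next)
                next.prev = prev;
            prev = next = null;
        }
    }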