Efficient cache for gigabytes of data written in Go
BigCache provides sharding and eviction, and it keeps cache entries out of GC's reach. As a result it is a very fast cache even for a large number of entries. Freecache is claimed to be the only other available in-memory cache in Go that provides this kind of functionality (is it really the only one?).
Our service receives many requests concurrently, so we needed to provide concurrent access to the cache. The easy way to achieve that would be to put a sync.RWMutex in front of the cache access functions to ensure that only one goroutine could modify it at a time. However, other goroutines that also want to modify it would be blocked, making the lock a bottleneck. To eliminate this problem, sharding can be applied. The idea behind sharding is straightforward: an array of N shards is created, and each shard contains its own instance of the cache with its own lock. When an item with a unique key needs to be cached, a shard is first chosen for it by the function hash(key) % N. Then that shard's lock is acquired and the write to the cache takes place; reads are analogous. When the number of shards is relatively high and the hash function distributes unique keys evenly, lock contention can be minimized almost to zero. This is why we decided to use shards in the cache.
The shard is selected with hash(key) % N. (Question 1: does this mean that once N is specified, it can never change??)
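The sharding scheme above can be sketched as follows. The `ShardedCache` type and the FNV hash are illustrative choices, not BigCache's actual API. Note that N is fixed at construction time, which answers Question 1: changing N would change hash(key) % N for existing keys and remap everything, so the shard count cannot change after the cache is built.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

// shardCount is N: fixed at construction time, since changing it
// would remap every key to a different shard.
const shardCount = 64

type shard struct {
	mu    sync.RWMutex
	items map[string][]byte
}

type ShardedCache struct {
	shards [shardCount]*shard
}

func NewShardedCache() *ShardedCache {
	c := &ShardedCache{}
	for i := range c.shards {
		c.shards[i] = &shard{items: make(map[string][]byte)}
	}
	return c
}

// getShard picks a shard with hash(key) % N, so goroutines touching
// different shards never contend on the same lock.
func (c *ShardedCache) getShard(key string) *shard {
	h := fnv.New64a()
	h.Write([]byte(key))
	return c.shards[h.Sum64()%shardCount]
}

func (c *ShardedCache) Set(key string, value []byte) {
	s := c.getShard(key)
	s.mu.Lock()
	s.items[key] = value
	s.mu.Unlock()
}

func (c *ShardedCache) Get(key string) ([]byte, bool) {
	s := c.getShard(key)
	s.mu.RLock()
	v, ok := s.items[key]
	s.mu.RUnlock()
	return v, ok
}

func main() {
	c := NewShardedCache()
	c.Set("user:42", []byte("alice"))
	v, ok := c.Get("user:42")
	fmt.Println(string(v), ok) // alice true
}
```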
The simplest way to evict elements from the cache is to use it together with a FIFO queue. When an entry is added to the cache, two additional operations take place:
- An entry containing the key and a creation timestamp is added to the end of the queue.
- The oldest element is read from the queue and its creation timestamp is compared with the current time. If the entry has outlived the eviction time, it is removed from the queue together with its corresponding entry in the cache.

Eviction is performed during writes to the cache, since the lock is already acquired at that point.
Metrics indicated that there were over 40 million objects in the heap and the GC mark-and-scan phase took over four seconds. So what if GC were turned off for the cache entries?
- GC works on the heap, so one option is to bypass the heap entirely: there is a library that provides its own built-in methods for allocating and freeing memory off-heap. (Question 2: need to look at how this is implemented.)
- Avoid triggering GC on the map contents: Go 1.5 introduced an optimization, with a condition attached (this optimization states that if a map without pointers in keys and values is used, then GC will omit its content).
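A simplified sketch of how BigCache exploits this optimization: the only map is a pointer-free map[uint64]uint32 (hash of key to offset), and the actual entries live in one large byte slice, so the GC never has to scan millions of map entries. The names and the length-prefixed layout here are illustrative, not BigCache's real internals:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/fnv"
)

// shard stores entries in a single []byte and keeps only a pointer-free
// index map (uint64 hash -> uint32 offset). Since Go 1.5 the GC skips
// the contents of maps whose keys and values contain no pointers, so
// millions of entries add almost nothing to mark-and-scan time.
type shard struct {
	index   map[uint64]uint32
	entries []byte
}

func newShard() *shard {
	return &shard{index: make(map[uint64]uint32)}
}

func hashKey(key string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(key))
	return h.Sum64()
}

// set appends a length-prefixed blob to the entries buffer and records
// its offset in the index. A colliding hash simply overwrites the old
// offset, mirroring BigCache's "new item wins" collision policy.
func (s *shard) set(key string, value []byte) {
	offset := uint32(len(s.entries))
	var lenBuf [4]byte
	binary.LittleEndian.PutUint32(lenBuf[:], uint32(len(value)))
	s.entries = append(s.entries, lenBuf[:]...)
	s.entries = append(s.entries, value...)
	s.index[hashKey(key)] = offset
}

func (s *shard) get(key string) ([]byte, bool) {
	offset, ok := s.index[hashKey(key)]
	if !ok {
		return nil, false
	}
	n := binary.LittleEndian.Uint32(s.entries[offset : offset+4])
	return s.entries[offset+4 : offset+4+n], true
}

func main() {
	s := newShard()
	s.set("answer", []byte("42"))
	v, ok := s.get("answer")
	fmt.Println(string(v), ok) // 42 true
}
```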
BigCache does not handle collisions. When a new item is inserted and its hash collides with a previously stored item, the new item overwrites the previously stored value.
BigCache divides the data into shards based on the hash of the key. Each shard contains a map and what is described as a ring buffer; looking at the source code, it is actually a byte queue rather than a ring.
When the number of shards is relatively high and the hash function returns properly distributed numbers for unique keys, lock contention can be minimized almost to zero; and in BigCache, N really is quite large (the default configuration uses 1024 shards).
Writing our own Malloc() and Free() implementation (see malloc.go) which requests memory directly from the OS. The keys, the values, and the entire hash table itself are kept in this off-heap storage.
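On Unix-like systems, requesting memory directly from the OS can be sketched with an anonymous mmap. This is only an illustration of the off-heap idea, not the library's actual malloc.go (which Question 2 above says to study); the `osAlloc`/`osFree` names are made up here:

```go
package main

import (
	"fmt"
	"syscall"
)

// osAlloc requests size bytes directly from the OS with an anonymous
// private mmap (Unix-only). The returned slice is backed by off-heap
// memory that the Go garbage collector never scans or moves.
func osAlloc(size int) ([]byte, error) {
	return syscall.Mmap(-1, 0, size,
		syscall.PROT_READ|syscall.PROT_WRITE,
		syscall.MAP_ANON|syscall.MAP_PRIVATE)
}

// osFree returns the memory to the OS. Nothing is freed automatically:
// off-heap storage means manual memory management.
func osFree(b []byte) error {
	return syscall.Munmap(b)
}

func main() {
	buf, err := osAlloc(1 << 20) // 1 MiB off-heap
	if err != nil {
		panic(err)
	}
	copy(buf, []byte("hello"))
	fmt.Println(string(buf[:5])) // hello
	if err := osFree(buf); err != nil {
		panic(err)
	}
}
```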
The data set is sharded into 256 segments by the hash value of the key. Each segment holds only two pointers: one to the ring buffer that stores keys and values, the other to the index slice used to look up an entry. Each segment has its own lock, so it supports highly concurrent access.
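A toy segment layout in this spirit: the 256-way split, one lock per segment, and exactly two pointer-bearing fields per segment (a flat byte buffer standing in for the ring buffer, and an offset slice as the index). The linear reverse scan here is a stand-in for freecache's real lookup structure, and all names are illustrative:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"hash/fnv"
	"sync"
)

const segCount = 256

// segment keeps GC pressure low by holding only two pointer-bearing
// fields: the byte buffer with key/value data and the index slice used
// to locate entries.
type segment struct {
	mu    sync.Mutex
	buf   []byte   // entries appended as [klen][vlen][key][value]
	index []uint32 // offsets of entries in buf
}

type Cache struct {
	segments [segCount]segment
}

// segmentFor picks one of the 256 segments by the hash of the key, so
// each segment can be locked independently for concurrent access.
func (c *Cache) segmentFor(key []byte) *segment {
	h := fnv.New64a()
	h.Write(key)
	return &c.segments[h.Sum64()%segCount]
}

func (c *Cache) Set(key, value []byte) {
	s := c.segmentFor(key)
	s.mu.Lock()
	defer s.mu.Unlock()
	s.index = append(s.index, uint32(len(s.buf)))
	var hdr [8]byte
	binary.LittleEndian.PutUint32(hdr[:4], uint32(len(key)))
	binary.LittleEndian.PutUint32(hdr[4:], uint32(len(value)))
	s.buf = append(s.buf, hdr[:]...)
	s.buf = append(s.buf, key...)
	s.buf = append(s.buf, value...)
}

func (c *Cache) Get(key []byte) ([]byte, bool) {
	s := c.segmentFor(key)
	s.mu.Lock()
	defer s.mu.Unlock()
	// Scan newest-first so the most recent write for a key wins.
	for i := len(s.index) - 1; i >= 0; i-- {
		off := s.index[i]
		klen := binary.LittleEndian.Uint32(s.buf[off : off+4])
		vlen := binary.LittleEndian.Uint32(s.buf[off+4 : off+8])
		k := s.buf[off+8 : off+8+klen]
		if bytes.Equal(k, key) {
			return s.buf[off+8+klen : off+8+klen+vlen], true
		}
	}
	return nil, false
}

func main() {
	var c Cache
	c.Set([]byte("k"), []byte("v"))
	v, ok := c.Get([]byte("k"))
	fmt.Println(string(v), ok) // v true
}
```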