
php – Inconsistent cache values using Zend Cache with AWS ElastiCache across multiple servers

Posted by: admin April 23, 2020


We are using Zend Cache with a memcached backend pointing to an AWS ElastiCache cluster with 2 cache nodes. Our cache setup looks like this:

$frontend = array(
    'lifetime' => (60 * 60 * 48),
    'automatic_serialization' => true,
    'cache_id_prefix' => $prefix
);
$backend = array(
    'servers' => array(
        array('host' => $node1),
        array('host' => $node2)
    )
);
$cache = Zend_Cache::factory('Output', 'Memcached', $frontend, $backend);

We have not noticed any problems with the cache in the past when using a single EC2 server to write and read from the cache.

However, we have recently introduced a second EC2 server and suddenly we’re seeing issues when writing to the cache from one server and reading from another. Both servers are managed by the same AWS account, and neither server has issues writing to or reading from the cache individually. The same cache configuration is used for both.

Server A executes $cache->save('hello', 'message');

Subsequent calls to $cache->load('message'); from Server A return the expected result of hello.

However, when Server B executes $cache->load('message');, we get false.

As far as my understanding of ElastiCache goes, the server making the read request should have no bearing on the cache value returned. Can anyone shed some light on this?

Answers:

Can you tell what hash_strategy you are using for memcache? I’ve had problems in the past using the default (standard), but everything has been fine since changing to consistent:
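For the pecl `memcache` extension (which Zend_Cache’s Memcached backend uses under the hood), the hash strategy is set in php.ini. A minimal sketch of that setting:

```ini
; php.ini -- switch the memcache extension from the default
; modulo-based key distribution to consistent (ketama-style) hashing
memcache.hash_strategy = consistent
```

This must match on every server that talks to the cluster; if Server A distributes keys with `standard` and Server B with `consistent`, the same key can hash to different nodes on each server, producing exactly the cross-server misses described in the question.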



With a normal (modulo-based) hashing algorithm, changing the number of servers can cause many keys to be remapped to different servers, resulting in a large burst of cache misses.

Imagine you have 5 ElastiCache nodes in your cache cluster; adding a sixth server may cause 40%+ of your keys to suddenly map to different servers than before. This is undesirable: it causes cache misses and can eventually swamp your backend DB with requests. To minimize this remapping, it is recommended to use a consistent hashing model in your cache clients.
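The remapping effect is easy to demonstrate. The sketch below (a hypothetical illustration, not the Zend_Cache code path) uses naive `crc32(key) % n` distribution and counts how many of 1000 keys land on a different node after growing a cluster from 5 to 6 nodes:

```php
<?php
// Naive modulo-based key distribution: hash the key, take it modulo
// the number of servers to pick a node index.
function serverFor(string $key, int $numServers): int
{
    return crc32($key) % $numServers;
}

$moved = 0;
$total = 1000;
for ($i = 0; $i < $total; $i++) {
    // Compare the node chosen with 5 servers vs. 6 servers.
    if (serverFor("key$i", 5) !== serverFor("key$i", 6)) {
        $moved++;
    }
}

// In expectation only 1/6 of keys keep their node (x % 5 == x % 6 only
// when x mod 30 < 5), so roughly 80%+ of keys move.
printf("%d%% of keys remapped\n", (int) round(100 * $moved / $total));
```

Under consistent hashing, the same 5-to-6 change would move only about 1/6 of the keys, since only the keys falling into the new node’s slice of the hash ring are reassigned.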

Consistent hashing is a model that allows for a more stable distribution of keys when servers are added or removed. It describes methods for mapping keys onto a list of servers such that adding or removing a server shifts only a small fraction of the keys. Using this approach, adding an eleventh server to a ten-node cluster should cause less than 10% of your keys to be reassigned. The exact percentage varies in production, but consistent hashing is far more efficient in such elastic scenarios than normal hash algorithms.

It is also advised to keep the memcached server ordering and the number of servers the same in all client configurations when using consistent hashing. Java applications can integrate this algorithm via spymemcached, which supports the ketama library. More information on consistent hashing can be found at http://www.last.fm/user/RJ/journal/2007/04/10/rz_libketama_-_a_consistent_hashing_algo_for_memcache_clients
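For PHP clients on the newer `memcached` (libmemcached-based) extension, consistent ketama hashing is enabled per connection rather than in php.ini. A sketch, where the node hostnames are placeholders for your ElastiCache endpoints:

```php
<?php
// Hypothetical node endpoints -- substitute your ElastiCache node hostnames.
$mc = new Memcached();

// Distribute keys on a consistent hash ring instead of by modulo.
$mc->setOption(Memcached::OPT_DISTRIBUTION, Memcached::DISTRIBUTION_CONSISTENT);
// Use ketama-compatible weighting so all clients agree on the ring layout.
$mc->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true);

$mc->addServers(array(
    array('node1.example.cache.amazonaws.com', 11211),
    array('node2.example.cache.amazonaws.com', 11211),
));
```

As with the php.ini setting, every application server must use identical options and an identical server list, or the clients will disagree about which node owns which key.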