NGINX TCP/UDP Load Balancer with Memcached

Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) produced by database calls, API calls, or page rendering. It is the soul of the web, capable of turning almost any application into a speed demon.

But what happens when you cannot afford 20-30 GB RAM servers to cache your data? It is also not a smart idea to depend on a single node to cache everything (a single point of failure).

In this article, we will deploy a highly available Memcached pool built from smaller instances, using NGINX as a TCP load balancer.

Nginx TCP/UDP

Since release 1.9.0 (NGINX Plus Release 5), NGINX can proxy and load balance TCP traffic, provided it is built with the --with-stream flag.

How to build NGINX from source with the stream module?
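A minimal sketch of a source build with the stream module enabled; the version number and paths below are examples only, adjust them to your environment:

wget http://nginx.org/download/nginx-1.9.9.tar.gz
tar -xzf nginx-1.9.9.tar.gz
cd nginx-1.9.9
./configure --with-stream
make
sudo make install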

Nginx LB with Memcached

Figure: NGINX load balancers in front of the web servers and the Memcached pool

As shown in the figure above, suppose a user requests a certain page on your website, say pageA, and you cache the rendered page view in Memcached to save time on the next request and improve performance.

The pageA URL is routed by the first load balancer to WebServer3. WebServer3, in turn, requests the cached version of pageA through the second load balancer, which forwards the request to a specific Memcached server using an IP hashing algorithm.

If this is the first time pageA is requested, WebServer3 gets a miss and the page is then cached (by the web application) on that same Memcached server.

What if a new user requests pageA now?

Thanks to URL hashing, the new request is again routed to WebServer3, and its cache lookup, routed by IP hashing, lands on the same Memcached instance that cached the data for the first pageA view, so it gets a hit.
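A minimal sketch of what the first (web) load balancer could look like, hashing on the request URI so that the same page always lands on the same web server; the backend names here are placeholders, not taken from our setup:

http {
   upstream webservers {
      # Route each URL to the same web server on every request
      hash $request_uri consistent;

      server web1_backend:80;
      server web2_backend:80;
      server web3_backend:80;
   }

   server {
      listen 80;
      location / {
         proxy_pass http://webservers;
      }
   }
}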

In this scenario, if you have 3 instances with 8 GB of RAM each, you end up with a 24 GB Memcached pool.

What if we use a round-robin algorithm for the web servers?

Don't even think about it. In our example, pageA would be served by all the web servers; each web server would request it from a different Memcached instance and, on a miss, the web application would cache it there. As a result, you would end up with pageA cached in every one of your Memcached instances.

What if I add or remove one of the web servers?

When a new server is added, the URL hashing method starts routing some URLs to it; let's assume one of them is pageC. The IP hashing in the cache LB maps the new server's IP address to some Memcached instance, which may or may not already hold pageC. In the worst case, all URLs coming from the new server will be misses until they are cached again.

When an old server is removed, its pages are requested from the cache tier using a different source IP address, which may be routed to a Memcached instance that does not have the page.

What about hot keys?

If you are afraid of the hot-key issue on Memcached because certain pages get too many views, why are you using URL hashing to balance the load between your web servers in the first place?

Nginx Configuration

stream {
   upstream backend {
      # Map each web server's IP to the same Memcached instance,
      # using consistent (ketama-style) hashing
      hash $remote_addr consistent;

      server mem1_backend:11211 max_fails=2;
      server mem2_backend:11211 max_fails=2;
      server mem3_backend:11211 max_fails=2;
   }

   server {
      # Listen on the standard Memcached port and forward the TCP traffic
      listen 11211;
      proxy_pass backend;
   }
}
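Note that the consistent parameter on the hash directive enables ketama-style consistent hashing, so if a Memcached instance is added or removed, only a small portion of the keys are remapped to other instances instead of most of the pool being invalidated.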

Finally, I have to mention that this experiment did not work out for us at OpenSooq, since we have a different architecture.

But it may help someone, so we are sharing it.