AWS, Drupal and Caching: pt.3 Adding some sexy caching

Oct. 15, 2014

Note: Sadly, due to #drupalgeddon I had to revert this site to an old backup. This specific article, though, was corrupted in the backup, so it has some old notes and broken links; sorry about that.


Okay then folks, shall we get us some caching? Can I get a HELL YEAH!?! Let's start by installing Varnish using the nice simple steps laid out on the official Varnish installation page.

Next step is the VCL configuration. This is hard to just throw at you, as it depends on the actual app setup you have. Your best bet is to search online for "varnish" plus the app framework you are using; for Drupal there are several sites offering base VCL files to start from.

Also, here is a copy of my VCL file:

# This is a basic VCL configuration file for varnish. See the vcl(7)
# man page for details on VCL syntax and semantics.
#
# TODO: Update internal subnet ACL and security.

# Define the internal network subnet.
# These are used below to allow internal access to certain files while not
# allowing access from the public internet.
# acl internal {
#   ""/24;
# }

# Default backend definition. Set this to point to your content server.
backend default {
  .host = "";
  .port = "8080";
}

acl purge {
  "localhost";
  "";
}

# Respond to incoming requests.
sub vcl_recv {
  # Check the incoming request type is "PURGE", not "GET" or "POST".
  if (req.request == "PURGE") {
    # Check if the client IP corresponds with the purge ACL.
    if (!client.ip ~ purge) {
      # Return error 405 (Method Not Allowed) when it does not.
      error 405 "Not allowed.";
    }
    return (lookup);
  }

  # Use anonymous, cached pages if all backends are down.
  if (!req.backend.healthy) {
    unset req.http.Cookie;
  }

  # Allow the backend to serve up stale content if it is responding slowly.
  set req.grace = 6h;

  # Pipe these paths directly to Apache for streaming.
  #if (req.url ~ "^/admin/content/backup_migrate/export") {
  #  return (pipe);
  #}

  if (req.restarts == 0) {
    if (req.http.x-forwarded-for) {
      set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
    }
    else {
      set req.http.X-Forwarded-For = client.ip;
    }
  }

  # Do not cache these paths.
  if (req.url ~ "^/status\.php$" ||
      req.url ~ "^/update\.php$" ||
      req.url ~ "^/admin$" ||
      req.url ~ "^/admin/.*$" ||
      req.url ~ "^/flag/.*$" ||
      req.url ~ "^.*/ajax/.*$" ||
      req.url ~ "^.*/ahah/.*$") {
    return (pass);
  }

  # Do not allow outside access to cron.php or install.php.
  #if (req.url ~ "^/(cron|install)\.php$" && !client.ip ~ internal) {
  #  # Have Varnish throw the error directly.
  #  error 404 "Page not found.";
  #  # Or use a custom error page that you've defined in Drupal at the path "404".
  #  # set req.url = "/404";
  #}

  # Always cache the following file types for all users. This list of extensions
  # appears twice, once here and again in vcl_fetch, so make sure you edit both
  # and keep them equal.
  if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
    unset req.http.Cookie;
  }

  # Remove all cookies that Drupal doesn't need to know about. We explicitly
  # list the ones that Drupal does need, the SESS and NO_CACHE cookies. If, after
  # running this code, we find that either of these two cookies remains, we
  # will pass as the page cannot be cached.
  if (req.http.Cookie) {
    # 1. Append a semi-colon to the front of the cookie string.
    # 2. Remove all spaces that appear after semi-colons.
    # 3. Match the cookies we want to keep, adding the space we removed
    #    previously back. (\1) is the first matching group in the regsuball.
    # 4. Remove all other cookies, identifying them by the fact that they have
    #    no space after the preceding semi-colon.
    # 5. Remove all spaces and semi-colons from the beginning and end of the
    #    cookie string.
    set req.http.Cookie = ";" + req.http.Cookie;
    set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
    set req.http.Cookie = regsuball(req.http.Cookie, ";(SESS[a-z0-9]+|SSESS[a-z0-9]+|NO_CACHE)=", "; \1=");
    set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
    set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");

    if (req.http.Cookie == "") {
      # If there are no remaining cookies, remove the cookie header. If there
      # aren't any cookie headers, Varnish's default behavior will be to cache
      # the page.
      unset req.http.Cookie;
    }
    else {
      # If there are any cookies left (a session or NO_CACHE cookie), do not
      # cache the page. Pass it on to Apache directly.
      return (pass);
    }
  }
}

# Set a header to track a cache HIT/MISS.
sub vcl_deliver {
  if (obj.hits > 0) {
    set resp.http.X-Varnish-Cache = "HIT";
  }
  else {
    set resp.http.X-Varnish-Cache = "MISS";
  }
}

# Code determining what to do when serving items from the Apache servers.
# beresp == Back-end response from the web server.
sub vcl_fetch {
  # We need this to cache 404s, 301s and 500s. Otherwise, depending on the
  # backend, but definitely in Drupal's case, these responses are not
  # cacheable by default.
  if (beresp.status == 404 || beresp.status == 301 || beresp.status == 500) {
    set beresp.ttl = 10m;
  }

  # Don't allow static files to set cookies.
  # (?i) denotes case insensitive in PCRE (perl compatible regular expressions).
  # This list of extensions appears twice, once here and again in vcl_recv, so
  # make sure you edit both and keep them equal.
  if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
    unset beresp.http.set-cookie;
  }

  # Allow items to be stale if needed.
  set beresp.grace = 6h;
}

# In the event of an error, show friendlier messages.
sub vcl_error {
  # Redirect to the homepage, which will likely be in the cache.
  set obj.http.Content-Type = "text/html; charset=utf-8";
  synthetic {"
<html>
<head>
  <title>Page Unavailable</title>
  <style>
    body { background: #303030; text-align: center; color: white; }
    #page { border: 1px solid #CCC; width: 500px; margin: 100px auto 0; padding: 30px; background: #323232; }
    a, a:link, a:visited { color: #CCC; }
    .error { color: #222; }
  </style>
</head>
<body onload="setTimeout(function() { window.location = '/' }, 5000)">
  <div id="page">
    <h1 class="title">Page Unavailable</h1>
    <p>The page you requested is temporarily unavailable.</p>
    <p>We're redirecting you to the <a href="/">homepage</a> in 5 seconds.</p>
    <div class="error">(Error "} + obj.status + " " + obj.response + {")</div>
  </div>
</body>
</html>
"};
  return (deliver);
}

sub vcl_hit {
  if (req.request == "PURGE") {
    purge;
    error 200 "Purged.";
  }
}

sub vcl_miss {
  if (req.request == "PURGE") {
    purge;
    error 200 "Purged.";
  }
}

# A second vcl_fetch block; Varnish concatenates same-named subroutines
# in the order they appear.
sub vcl_fetch {
  if (beresp.ttl <= 0s ||
      beresp.http.Set-Cookie ||
      beresp.http.Vary == "*") {
    /* Mark as "Hit-For-Pass" for the next 2 minutes. */
    set beresp.ttl = 120s;
    return (hit_for_pass);
  }
  return (deliver);
}
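The cookie-stripping regsuball steps are the part of this file you will tweak most often, so it helps to be able to check the regexes outside Varnish. Here is a small shell sketch that replays the same four substitutions with sed on a sample Cookie header (the cookie names are made up for illustration):

```shell
# Replay the cookie-stripping substitutions from vcl_recv with sed,
# so the regexes can be sanity-checked without a running Varnish.
strip_cookies() {
  printf ';%s' "$1" \
    | sed -E 's/; +/;/g' \
    | sed -E 's/;(SESS[a-z0-9]+|SSESS[a-z0-9]+|NO_CACHE)=/; \1=/g' \
    | sed -E 's/;[^ ][^;]*//g' \
    | sed -E 's/^[; ]+|[; ]+$//g'
}

# Only the session and NO_CACHE cookies should survive.
strip_cookies 'has_js=1; SESSabc123=token; _ga=GA1.2; NO_CACHE=1'
```

If the output keeps a cookie you expected to be stripped, the page carrying it will never be cached, which is exactly the sort of thing you want to find out on the command line rather than in production.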

So Varnish is running, but no one is going to be using it until we tell all incoming traffic on port 80 to go through Varnish instead of Apache, tell Apache to use a different port itself (8080), and tell Varnish to use this new Apache port to populate its caches.

sudo vim /etc/default/varnish

Uncomment 'Alternative 2', and modify it to look more like this

DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"

Ensure your default.vcl file is using the new Apache port in its backend:

backend default {
  .host = "";
  .port = "8080";
}

sudo vim /etc/apache2/ports.conf

Change the port from 80 to 8080 on apache
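After the edit, ports.conf should look something like this (a sketch assuming Apache 2.2 on Ubuntu; the NameVirtualHost line does not exist on Apache 2.4, where only Listen is needed):

```apache
# /etc/apache2/ports.conf: Apache listens on 8080, leaving 80 free for Varnish
NameVirtualHost *:8080
Listen 8080
```

If any of your virtual hosts in sites-enabled declare `<VirtualHost *:80>`, change those to `*:8080` as well, otherwise they will never match the new port.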

Stop Apache and Varnish, then start Apache and Varnish again.

Fix any errors you see from the startup

Next, we have to tell our Drupal site to use Varnish, so add the below to settings.php and flush the cache

// Add Varnish as the page cache handler.
$conf['cache_backends'] = array('sites/all/modules/contributed/varnish/');
$conf['page_cache_invoke_hooks'] = FALSE;
$conf['cache_class_cache_page'] = 'VarnishCache';

If you haven't used Varnish before, several things can stop it working. To run a quick test of a page, run curl -I against its URL; this returns the headers, one of which should say X-Varnish-Cache: HIT (or maybe MISS). If it says MISS, run the command a couple of times to see if it changes to a HIT (HIT means the page was pulled from the cache rather than from Apache/MySQL). If you still get a MISS, the most likely cause is your application. I had to disable a few Drupal modules (syntax-highlighter and selectivizr) as they were stopping pages from caching and were not vital to the application.

Once you start getting a HIT using curl, switch to the browser dev tools in Chrome/Firefox. Load the page, look at the response headers in the network panel, and confirm the same HIT header is set there. If it isn't, it is likely due to a cookie being set by the website. This can be a right pain to fix; the easiest way to diagnose it is to use the dev tools to list all the cookies being set, then alter your curl request to send one cookie at a time to see which one causes repeated MISSes (curl --cookie "COOKIENAME=anyvalue" -I followed by your URL). Once we know the cookie, we can modify the VCL to strip it out IF the cookie does not change the content being rendered; if it does, then more complex config changes may be required.
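When you are repeating that curl dance for half a dozen cookies, a tiny helper saves squinting at raw headers. This sketch (the function name is my own invention) reads a saved `curl -I` dump and prints the value of the X-Varnish-Cache header that the VCL's vcl_deliver sets:

```shell
# Extract the X-Varnish-Cache header (HIT or MISS) from a file
# containing the output of `curl -I <url>`.
cache_status() {
  grep -i '^x-varnish-cache:' "$1" | awk '{print $2}' | tr -d '\r'
}

# Usage: curl -sI "$URL" > headers.txt && cache_status headers.txt
```

Run it once per cookie you are testing; the first cookie that flips the output to MISS on every request is your culprit.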


First, let us install APC onto the server. APC needs very little config once it is there, but there is one tool which may be useful to set up to ensure everything is working as expected. Look on your system for the APC documentation; inside that folder there should be a file called apc.php. Once you have found it, note down the path, go into the root of your web application, and symlink the apc.php file into there, possibly giving it a random name so it is harder to find (ln -s /usr/share/doc/php-apc/apc.php apc.php). Once this is done, simply load the new file in your browser; you should see the pie chart filling up as pages are visited. The main things are to ensure fragmentation stays low and that you are mostly getting HITs.


First, let us install memcache onto the server, then tell our website to use it. In Drupal, just enable the memcache module and add these lines to settings.php (change the key_prefix to something of your own; this prevents conflicts between multiple sites on the same server).

// MemCache
$conf['cache_backends'][] = 'sites/all/modules/contributed/memcache/';
$conf['cache_default_class'] = 'MemCacheDrupal';
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';
$conf['memcache_key_prefix'] = 'rbprod';

To test memcache is working, just install the memcache admin module as well, and look at its output to see if you get some nice HITs in the table. Refresh a few times if not.

Recovery plan, Cron, slowlogs, and Mandrill

I have lumped all of these together into their own post because they strangely tie together nicely. The task at hand is to have cron run a script every day which grabs the current MySQL slow logs and then emails them to the user via Mandrill. The other crons run a backup script (which uses the AWS CLI) and the usual Drupal cron.

First step is to take a look in your crontab file; do this by typing <pre>crontab -e</pre>. There won't be much there other than some notes on how to use crontab, and a blank canvas for all your exciting scripts. Let's start by quickly adding our Drupal cron: first log in to your site and get your cron URL from /admin/reports/status, then add the below to crontab and modify it. This will run the Drupal cron.php file every 30 minutes (the first field in a crontab line being the minutes).

# Drupal crons
*/30 * * * * wget -O - -q -t 1

Now let's set up the EC2 backups. This sounds complex, and by all rights it should be, but I'm afraid it isn't. First, exit crontab and install the AWS CLI using the official instructions; note that whilst the page may appear long, you will likely get to skip swathes of text. Once you have the CLI installed, it is time to configure it with your AWS keys. Now we should be ready to test it out, so go into your EC2 admin page, find the volume ID of the volume you would like to back up, and note it down. Then open up a terminal and paste a command like the one below, replacing the volume ID with your own.

aws ec2 create-snapshot --volume-id vol-000000c00 --description "$(date +\%Y-\%m-\%d) [Backup of testsite]"
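One detail worth calling out: the `\%` escapes in the date format are there for crontab's benefit, since crontab treats an unescaped `%` as a newline; in an interactive shell the backslashes are harmlessly dropped. A quick sketch of the description string being built (the volume name is a placeholder):

```shell
# Build the snapshot description the same way the command above does.
# Outside crontab, % needs no escaping in the date format.
desc="$(date +%Y-%m-%d) [Backup of testsite]"
echo "$desc"
```

Dating the description like this makes it easy to spot (and prune) old snapshots in the EC2 console later.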

You should now see the site backing up in AWS under snapshots. You may have credentials issues though; I had to manually edit the credentials file rather than using the nice 'aws configure' functionality described in the documentation. If things are working, let's tell our crontab to run this once a day at about 2am (a low-traffic time).

# EC2 snapshots
0 2 * * * aws ec2 create-snapshot --volume-id vol-83770c84 --description "$(date +\%Y-\%m-\%d) [Backup of rbprod2]" --debug

The last of the three cron commands is another simple one: just email us the MySQL slow query logs once a day. Paste this into the crontab file.

# Logging
0 1 * * * mysqldumpslow /var/log/mysql-slow.log | mail -s "slow query log"

Now, this may or may not work for you. In my case, I wanted better control, and to stop all my emails going to spam inboxes (AWS has a history, in the olden days, of sending spam, so I wouldn't rely on an instance to send email without a lot of TLC). So what to do? Well, Mandrill is part of MailChimp, and is also completely free to use. What does it do? We can tell Postfix (the application on your server which sends out emails) to relay mail through Mandrill, which will send it for us and also help us track and see some cool graphs, and who doesn't like a cool graph. Just follow Mandrill's SMTP instructions for Postfix. If it doesn't work and you start getting authentication mechanism errors, you may need to install the SASL modules (apt-get install libsasl2-modules).

Is this it? Weeeell, I had a few slight issues. One was that after the first email went out, I realised the clock was an hour out, as the server was on GMT rather than BST, so I needed to make a quick change: <pre>ln -sf /usr/share/zoneinfo/Europe/London /etc/localtime</pre>.
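To check the change took (and to see the exact hour difference that was throwing the cron emails off), compare the clock under the two zones; the TZ overrides are just for illustration:

```shell
# The system clock vs explicit zone overrides; after the symlink change,
# plain `date` should agree with the Europe/London line.
date
TZ=Europe/London date
TZ=UTC date
```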