Remote Redis: Spiped vs Stunnel

Redis is fast, there’s no doubt about that. Unfortunately for us, connecting to Redis has overhead, and the method you use to connect can have a huge impact.

Connecting locally

Our options for connecting locally are Unix sockets or TCP sockets, so let’s start by comparing them directly:

Socket vs TCP:

As we can see, the higher overhead of TCP connections limits throughput. By pipelining multiple requests over a single connection, we reduce the per-request round-trip overhead and get performance approaching that of sockets:

Socket vs TCP with a pipeline of 1000:
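These numbers come from redis-benchmark; a rough sketch of the kind of invocations involved (the socket path, host, and request count are illustrative assumptions, not the exact test setup):

```
# Unix socket vs local TCP
redis-benchmark -s /var/run/redis/redis.sock -n 100000 -q
redis-benchmark -h -p 6379 -n 100000 -q

# The same comparison with 1000 commands pipelined per round trip (-P)
redis-benchmark -s /var/run/redis/redis.sock -n 100000 -P 1000 -q
redis-benchmark -h -p 6379 -n 100000 -P 1000 -q
```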

Connecting over the network:

When we connect over the network, we have no choice but to use TCP sockets, and since Redis provides no encryption or network security of its own, we need to secure our connections ourselves.

Our options for secure connections are stunnel and spiped, so let’s test them both.

Spiped vs stunnel:

As we can see, spiped seems to be hitting some kind of bottleneck that caps throughput regardless of the test performed. The problem appears to be that spiped pads messages:

[spiped] can significantly increase bandwidth usage for interactive sessions: It sends data in packets of 1024 bytes, and pads smaller messages up to this length, so a 1 byte write could be expanded to 1024 bytes if it cannot be coalesced with adjacent bytes.

So when we’re doing a large number of small requests with redis-benchmark, each small request is padded out to make it much larger, maxing out our bandwidth:
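A back-of-envelope check of that ceiling, assuming every request occupies a full 1024-byte packet on the 100 Mbit link used in these tests (ignoring TCP/IP framing overhead):

```python
# Rough ceiling on request rate when every small request is padded to a
# 1024-byte packet (as spiped does), on a 100 Mbit link.
LINK_BITS_PER_SEC = 100_000_000   # 100 Mbit/s
PACKET_BITS = 1024 * 8            # spiped pads small writes to 1024 bytes

max_requests_per_sec = LINK_BITS_PER_SEC / PACKET_BITS
print(round(max_requests_per_sec))  # ~12207 requests/sec
```

That lines up with spiped flattening the results regardless of workload: the link, not Redis, is the limit.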

As with Unix sockets vs TCP, this improves when we use pipelining, as less bandwidth is wasted on padding:

Spiped vs stunnel, pipeline 1000:

There’s still a gap, but it’s much narrower now.


So what’s the solution? If you can, run your application on the same server as Redis, so that you can use Unix sockets for performance.
If you have to run over the network, bear in mind the overhead of spiped when sending large numbers of small requests.
Pipelining can have a huge impact, performing better over the network than non-pipelined connections do locally. The catch is that not every application can neatly bundle its requests into pipelined chunks, so your results will vary by use case.

All tests were performed between two Kimsufi KS-5 dedicated servers with a 100 Mbit link.

Redis notes

I decided to tidy up the Redis docs and wrote some notes for myself along the way:

Redis as pure cache:

Setting maxmemory and maxmemory-policy to ‘allkeys-lru’ makes Redis automatically evict keys when memory fills up, least recently used first, without any need to set EXPIRE manually. Perfect when Redis is used purely as a cache.
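A minimal redis.conf sketch for this (the size is an example; pick one for your box):

```
maxmemory 256mb
maxmemory-policy allkeys-lru
```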

Lexicographical sorted sets:

Elements stored with the same score in a sorted set can be retrieved lexicographically (ZRANGEBYLEX), which is powerful for string searching. If you need to normalise a string while retaining the original, you can store them together, e.g. ‘banana:Banana’ to ignore case while searching but preserve the case of the result.
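The normalise-but-keep-original trick can be sketched with a couple of helpers (plain Python, no Redis required; this assumes the original strings contain no ‘:’ themselves):

```python
def encode_member(original):
    """'Banana' -> 'banana:Banana': a lowercased prefix for case-insensitive
    lexicographic range queries (ZRANGEBYLEX), with the original kept after it."""
    return original.lower() + ":" + original

def decode_member(member):
    """Recover the original casing from a stored member."""
    return member.split(":", 1)[1]

print(encode_member("Banana"))         # banana:Banana
print(decode_member("banana:Banana"))  # Banana
```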

Distributed Locks

Getting distributed locks right is more complicated than it first appears, with a few edge cases that can cause locks to never be released. Redlock was designed as a general solution and has implementations in many languages.
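The core safety step, releasing only a lock you still hold, can be sketched with a dict standing in for Redis (in real Redis this is SET with NX and an expiry, plus a small Lua script to make the check-and-delete atomic; this is not the full multi-node Redlock algorithm):

```python
import uuid

def acquire(store, name):
    """Stand-in for SET name token NX: succeed only if the lock is free."""
    token = str(uuid.uuid4())
    if name not in store:
        store[name] = token
        return token
    return None

def release(store, name, token):
    """Delete the lock only if we still hold it (GET == our token), so we
    never release a lock that expired and was re-acquired by someone else."""
    if store.get(name) == token:
        del store[name]
        return True
    return False

store = {}
mine = acquire(store, "lock:job")
print(mine is not None)  # True: first acquirer gets the lock
```

With a plain DEL, a client whose lock had already expired could delete a lock now held by someone else; the token comparison is what prevents that.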


redis-cli

  • Only prints extra info when attached to a tty; raw output otherwise
  • Can be set to repeat commands using -r <count> -i <delay>
  • ‘--stat’ produces continuous stats
  • Can scan for big keys with --bigkeys (safe to use in production)
  • Supports pub/sub directly
  • Can echo all Redis commands using MONITOR
  • Can show Redis latency and intrinsic latency
  • Can grab an RDB file from the server
  • Can simulate LRU load with an 80/20 access pattern


Replication

  • Slaves can chain (slave -> slave replication; local slave writes are not replicated)
  • A master can use diskless replication, sending the RDB directly to slaves from memory
  • A master can be set to reject writes unless a certain number of slaves are available


Sentinel

  • Clients can subscribe to Sentinel pub/sub for events
  • Sentinels never forget sentinels they have seen
  • Slaves can be given a promotion priority to make them more or less likely to become master


Transactions

  • DISCARD cancels the current queue
  • WATCH will cancel EXEC if the watched key has changed since the WATCH command was issued
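A minimal illustration of the WATCH flow (the key and values are made up); EXEC returns nil, rather than an error, when the watched key was modified:

```
WATCH balance        # arm the check
GET balance          # read, compute the new value client-side
MULTI
SET balance 90
EXEC                 # nil here means another client touched 'balance'; retry
```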


Misc

  • SUNION can take a long time for large/many sets
  • The Lua debugger can be used to step through Lua scripts line by line
  • Total memory used can briefly exceed maxmemory, possibly by a large amount, but only if setting a large key
  • If you are storing a lot of small values, split each key apart and use the first part as a hash key instead -> more memory efficient (‘test1234’ -> ‘test1’ ‘234’ <value>)
  • Publishing is global: it ignores database numbers
  • Subscribing supports pattern matching
  • Clients may receive duplicated messages if they have multiple matching subscriptions
  • Keyspace notifications can report e.g. all commands affecting a key, all keys receiving an LPUSH, or all keys expiring in db 0
  • Expired-key events only fire when the key is actually expired by Redis, not at the exact time it should have expired
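The key-splitting trick in the list above can be sketched as a helper (the prefix length is a tuning choice; the savings come from Redis storing many small hash fields more compactly than many top-level keys):

```python
def split_key(key, prefix_len=5):
    """'test1234' -> ('test1', '234'): the first part becomes the hash key,
    the remainder the field, so many small values share one small hash."""
    return key[:prefix_len], key[prefix_len:]

bucket, field = split_key("test1234")
print(bucket, field)  # test1 234
# Instead of: SET test1234 <value>   use: HSET test1 234 <value>
```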

Redis Cluster vs Redis Replication

While researching Redis Cluster I found a large number of tutorials on the subject that confused Replication and Cluster, with people setting up ‘replication’ using cluster but no slaves, or building a ‘cluster’ only consisting of master-slave databases with no cluster config.

So to clear things up:


Replication involves a master server, which serves reads and writes and duplicates all data to one or more slave servers (which serve reads but not writes). Slaves can be used to replace a master in case of failure, to spread read load, or to perform backups of the database without loading the master.


Clusters are used when you have more data than RAM on a single machine: the data is automatically split (based on the key) across multiple nodes, increasing the amount of data you can store. Clients requesting a key from any cluster node are redirected to the node holding the key, and are expected to learn the locations of keys to reduce the number of redirects.
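The split is done by hashing the key into one of 16384 slots with CRC16. A sketch of that mapping, including the {hash tag} rule that lets related keys land on the same node (per the Redis Cluster spec; the CRC variant is XMODEM):

```python
def crc16(data):
    """CRC16-CCITT (XMODEM variant), as used by Redis Cluster key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key):
    """Map a key to one of 16384 slots; if the key has a non-empty {tag},
    only the tag is hashed, so related keys can share a slot."""
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16(key) % 16384

print(hash_slot(b"123456789"))  # 12739 (CRC16 = 0x31C3, the spec's reference value)
```

For example, `{user1000}.following` and `{user1000}.followers` land in the same slot, because only `user1000` is hashed.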

Replication + Cluster

Redis Cluster supports replication by adding slaves to existing nodes; if a master becomes unreachable, its slave will be promoted to master.


Last but not least, Redis Sentinel can be used to manage replicated servers (not clustered; see below). Clients connect to a Sentinel and request a master or slave to communicate with; the Sentinels handle health checks of the masters/slaves and will automatically promote a slave if a master is unreachable. You need at least 3 Sentinels running so that they can agree on the reachability of nodes, and so that the Sentinels aren’t themselves a single point of failure.

Cluster handles its own promotion and does not need Sentinel in front of it.

Docker 404 during apt-get update


During builds of Ubuntu images, apt-get can’t reach any archives (404s or ‘Cannot initiate the connection’).


The issue is caused by the Docker bridge hanging, so try the following:
sudo apt-get install bridge-utils
sudo pkill docker
sudo iptables -t nat -F
sudo ifconfig docker0 down
sudo brctl delbr docker0
sudo service docker restart

NGINX DNS in Docker


Docker runs its own container DNS, which means you can link containers together just by specifying their names, instead of having to work out their dynamically assigned IPs somehow. Unfortunately, NGINX doesn’t read the /etc/hosts file that Docker edits, and so can’t resolve the hostnames of other containers.


An earlier hacky solution was to bundle dnsmasq in with NGINX and have NGINX use it as a resolver, violating the one-process-per-container best practice.

As of Docker 1.10 you can use the new embedded DNS feature by setting as the resolver in nginx.conf; just be aware that there are issues with IPv6.
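A sketch of the relevant nginx.conf pieces (the container name ‘api’ and port are hypothetical; using a variable in proxy_pass makes nginx resolve per request instead of caching the IP forever):

```
resolver valid=10s;   # Docker's embedded DNS

server {
    listen 80;
    location / {
        set $backend http://api:8080;   # hypothetical container name
        proxy_pass $backend;
    }
}
```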

Zabbix Aggregate Checks and “Unsupported Item Key”


Trying to create an aggregate check in Zabbix to measure total bandwidth across a group of servers using ‘grpsum’ gives status ‘Not supported’ and the error message ‘Unsupported item key.’


It’s not obvious anywhere in the docs or from googling the error, but the item type needs to be changed from ‘Zabbix agent’ to ‘Zabbix aggregate’.

Interface lag on Linux Mint with some themes


Interface lag in Linux Mint when moving or resizing windows.


The problem is caused by the theme needing the equinox engine without it being installed. MATE highlights this issue but Cinnamon doesn’t. It can be solved by either installing equinox or changing to a theme that doesn’t need it.

Converting InnoDB to MyISAM


Trying to convert from InnoDB to MyISAM to shrink MySQL’s RAM footprint on a tiny VPS.


Adding skip-innodb on its own didn’t work; I also had to add:

default-storage-engine = MYISAM


And then run:

mysql -u... -p... -AN -e"SELECT CONCAT('ALTER TABLE ',table_schema,'.',table_name,' ENGINE=MyISAM;') FROM information_schema.tables WHERE engine ='InnoDB';" > ${CONVERT_SCRIPT}
mysql -u... -p... -A < ${CONVERT_SCRIPT}

Installing luasec with luarocks on ubuntu

Came across this error today while trying to install luasec:

Error: Could not find expected file libssl.a, or, or* for OPENSSL — you may have to install OPENSSL in your system and/or pass OPENSSL_DIR or OPENSSL_LIBDIR to the luarocks command. Example: luarocks install luasec OPENSSL_DIR=/usr/local

The solution is to find the libssl files:

find / -name 'libssl.*'

and then add the path using OPENSSL_LIBDIR at the end of the command, e.g.:

sudo luarocks install luasec OPENSSL_LIBDIR=/usr/lib/x86_64-linux-gnu/

MC DDOS protection part 3: Monitoring

So now our tunnels are set up and our round-robin SRV records are distributing our players across our VPSs. Next it would be nice to check the traffic passing across them, and to keep an eye on the system stats for each machine.

In this section we’re going to look at the basic structure of Zabbix, the terminology, and adding custom graphs.


Zabbix is made up of 3 parts:

Server Core
  • Receives and processes stats from each server
  • Stores the data
  • Can send alerts based on certain criteria (“triggers”)
Server Web Interface
  • Displays graphs and information about the servers
  • Can be on a separate machine from the core
Zabbix Agent
  • Runs on each of the servers you want to monitor
  • Reports the server’s status to the Zabbix server
  • Can be customised with extra scripts

Installing zabbix core and web interface

To install Zabbix you will need a webserver and a MySQL database. Your webserver can be on a different machine from the one you’re installing the Zabbix server on; you just need to tell it where during configuration.

Zabbix has some good documentation so I won’t rewrite it; the version I used is here, but feel free to use a later release. The only important thing is that you don’t use the default from the apt repository: it will most likely be older than v2.0 and won’t support interface auto-discovery properly.

You will also need to allow incoming connections on port 10051 to your Zabbix server, either through your firewall or iptables.

Installing zabbix agent on your servers

As above: add the repository, update, and “apt-get install zabbix-agent”.

A few extra notes:

  • Make sure you’re on a version later than 2.0 with “zabbix-agent --version”
  • Change “Server=” in /etc/zabbix/zabbix_agentd.conf to your Zabbix server’s IP
  • Restart the agent with “/etc/init.d/zabbix_agentd restart”
  • Allow port 10050 through iptables with “iptables -A INPUT -p tcp --dport 10050 -j ACCEPT”
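The agent config is also where custom checks live; a sketch of the relevant lines (the IP, item key, and script path are made-up examples):

```
# /etc/zabbix/zabbix_agentd.conf
Server=
# UserParameter adds a custom item the server can query by key:
UserParameter=mc.players,/usr/local/bin/count_players.sh
```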

Host Configuration

Assuming you’ve now installed Zabbix and aren’t screaming at me for not including more detail, you’re probably now staring at the web GUI wondering where to start. Here are the basic terms you need to know:


Items

  • Measure a specific value on the server, e.g. CPU load


Applications

  • Group Items into categories, e.g. “OS Memory” = all memory items.

Discovery Rules

  • Auto-generate new items by scanning the system for extra hard drives, interfaces, etc.


Triggers

  • Perform an event when an item matches certain criteria, e.g. send an email when CPU load exceeds 100%


Graphs

  • Self-explanatory: graph an item or set of items on a host.


Screens

  • Can be used to create a page with multiple graphs.


Templates

  • Group all of the above into a package that can easily be added to a new host with minimal extra configuration.

Add new host

New hosts can be added via configuration > Hosts > Create host

Give the host a descriptive name, add it to the “linux servers” group, and enter its IP address. You then need to add a template to tell Zabbix what to monitor. “Template OS Linux” covers everything we need for our VPSs, including an interface discovery script for our GRE tunnels.

Save your new host, and go back to Configuration > Hosts. After a few minutes you should see Status: Monitored and a green Z icon indicating that everything is fine.

If you have a grey icon, you may not have added any templates (hence nothing to monitor).

If there is a red icon, there is most likely a communication issue with the host: hover over the icon to see the details, then check that your firewall rules allow the correct ports and that the IPs are correct.


If you navigate to Monitoring > Graphs and select your new host, there should be a graph available with a name like “Network traffic on gre1”, which, if you have any players connected, will look something like this:


Using this page we can now switch between each host we’ve added and view the bandwidth usage on each interface. A great start, but it would be nice to combine the information into one place, which we’ll do with a custom graph.

Creating a custom graph

Since our MC server host has an interface for each tunnel, we can add each item for each tunnel onto a new custom graph:

Go to Configuration > Hosts and select “Graphs” on the entry for your MC server.

Select “Create Graph” in the top right corner.

You can leave everything default except the name and the Items.

Select “add items” and then select each incoming and outgoing item for your GRE interfaces. You can change any colours you want, and preview your changes in the preview tab.

Once you’re happy with the result, save it and your new graph should appear under Monitoring > Graphs.



Depending on how many VPSs you have and which interfaces you’ve selected, your graph may look a bit messy, even with custom colours assigned to interfaces. Another option is to use a screen to display the data, with each interface as a separate graph, but all on the same page.

To do this go to Configuration > Screens and “Create Screen.”

Give it a name and decide on an initial size; you can change it later.

Save it and it should appear in the list of screens. Select your screen and you will be presented with a grid, with an option to set the content of each cell.

Select the data you want and you’re done, you can now view your new screen under Monitoring > Screens:


And we’re done for now. Thanks for reading. Next up: traffic shaping on our VPSs.