Code coverage percentage is a controversial subject: some will tell you that you get diminishing returns after 70-80% and it’s not worth bothering, others will say that 100% is not enough. The underlying problem is often the approach taken to testing:
100% coverage as a goal is counterproductive
- Code coverage tools tell you how much of your code has been tested, not how well that code has been tested.
- Code coverage only applies to the code you have written, and doesn’t highlight the features you’ve forgotten to implement.
- Achieving high code coverage with low quality is relatively simple, but only results in the code being run, not being tested. It’s possible to hit 100% code coverage with 0 assertions!
If features drive your testing, you are more likely to find missing code paths and to write assertions for the features. If coverage drives your testing, you are likely to write low-quality, brittle tests that satisfy the coverage tool while lacking assertions and overlooking omissions in your logic. This is a problem because:
- Your customer doesn’t care about your coverage, hitting 100% doesn’t help them if your code is still full of bugs. Tests should focus on surfacing bugs, not satisfying tool output.
- You lose insight into the areas that need to be more thoroughly tested: when 100% of your code is covered by low quality tests, your code coverage tool becomes useless.
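The zero-assertion trap is easy to demonstrate. A minimal sketch (the function and its bug are made up for illustration):

```javascript
// A hypothetical function with an obvious bug: the discount is never applied.
function applyDiscount(price, percent) {
  return price; // bug: should be price - (price * percent) / 100
}

// This "test" executes every line of applyDiscount, so a coverage tool
// reports 100% -- but with zero assertions, the bug sails through.
function coverageOnlyTest() {
  applyDiscount(100, 10);
}

// A feature-driven test asserts on the behaviour the customer cares
// about, and catches the bug immediately.
function featureDrivenTest() {
  const result = applyDiscount(100, 10);
  if (result !== 90) throw new Error(`expected 90, got ${result}`);
}
```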
100% unit test coverage != 100% test coverage
When looking at the test pyramid, it’s easy to assume that aiming for 100% test coverage means hitting 100% coverage with your unit tests, ~60% with component tests, ~30% with integration tests, and so on. This misses the point of the test pyramid: each layer performs a different function to the layer below, and should fill in the gaps that the layer below wasn’t able to cover.
Unit testing glue code with tonnes of mocks is possible, but the effort-to-value ratio is poor: you’ll end up duplicating the same tests at a higher level anyway, and then have more tests to fix when you change that code later.
Not all code is equal
Some code is much more important than other code. Your focus should be on testing the important code thoroughly, not on wasting effort setting up mocks for the less important code.
Some projects have more risk than others. Are you writing banking software for millions of paying users, or a side project for fun? Scale your testing time appropriately.
Use your product requirements to drive testing. Use the risk factors of your product to determine how thorough your tests need to be. Use your code coverage tool as a tool, to tell you where best to spend your limited amount of time, not as a goal to give you a false sense of security.
Controllers safely separate your API from the outside world; they parse and sanitise data that comes in, and they filter the data you return.
For this reason, you shouldn’t do any input validation past your controllers: if invalid data makes it past that point then something has gone wrong, and you should fix the code rather than adding hundreds of validations.
If your endpoint requires an account ID with your request and the user doesn’t pass it in, that’s a user error and you should handle it by validating the input and rejecting the request.
If your internal code requires an account ID and gets called without it, however, that’s an error in your code, and your code should fail, rather than specifically checking for and handling the missing value in every single function where it may occur.
For the same reason, controllers should abstract the request information from the rest of the code, and should be the last point in your code that ever sees the request object. Internal code beyond the controllers should have no concept of a request, only the values parsed from it.
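A minimal sketch of that split, with hypothetical names (no real framework API is assumed):

```javascript
// Internal code: no concept of a request, just plain values.
// A missing accountId here is a programming error, so we throw.
function getAccountBalance(accountId) {
  if (!accountId) throw new Error('getAccountBalance called without accountId');
  return { accountId, balance: 42 }; // stand-in for a real lookup
}

// Controller: the last place the request object is ever seen.
// A missing accountId here is a user error, so we reject the request.
function balanceController(req, res) {
  const accountId = req.query.accountId;
  if (!accountId) {
    return res.status(400).json({ error: 'accountId is required' });
  }
  return res.json(getAccountBalance(accountId));
}
```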
Handy little feature I didn’t know about in Express:
Using app.param([name], callback) you can bind callbacks directly to route parameters, allowing you to move common preprocessing/validation out of each function that uses the parameter and into a single function (without having to call it explicitly each time).
You can pass in an array of names, using next() to jump to the next parameter, and the callback is only called once per request-response cycle, regardless of how many times the parameter appears in matched route handlers.
The callbacks are local to the router they are defined on, so you can handle things (or not) differently based on the context.
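For example, a param callback might look like this (the user store and names are hypothetical; the callback itself is plain middleware-style, so it can be unit-tested without a server):

```javascript
// Hypothetical lookup table standing in for a real data store.
const users = { '42': { id: '42', name: 'Ada' } };

// Param callback: runs once per request whenever :userId appears in a
// matched route, before any of the route handlers.
function loadUser(req, res, next, id) {
  const user = users[id];
  if (!user) return next(new Error('No such user: ' + id));
  req.user = user; // every downstream handler sees the loaded user
  next();
}

// Registration inside an Express app (sketch):
//   app.param('userId', loadUser);
//   app.get('/users/:userId', (req, res) => res.json(req.user));
//   app.get('/users/:userId/posts', (req, res) => res.json(req.user));
```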
While researching Redis Cluster I found a large number of tutorials on the subject that conflated Replication and Cluster, with people setting up ‘replication’ using cluster but no slaves, or building a ‘cluster’ consisting only of master-slave databases with no cluster config.
So to clear things up:
Replication involves a master server, which serves reads and writes and duplicates all of its data to one or more slave servers (which serve reads but not writes). Slaves can be used to replace the master in case of failure, to spread read request load, or to perform backups of the database without adding load on the master.
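Setting up basic replication is a one-line affair; a sketch of the slave’s redis.conf, with an assumed master address:

```
# redis.conf on the slave -- point it at the master (address assumed)
slaveof 192.168.1.10 6379
```

(The `slaveof` directive was renamed `replicaof` in Redis 5, but both still work.)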
Clusters are used when you have more data than RAM in a single machine: the data is automatically split (based on the key) across multiple databases, increasing the amount of data you can store. Clients requesting a key from any cluster node will be redirected to the node holding the key, and are expected to learn the locations of keys to reduce the number of redirects.
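A sketch of the extra redis.conf settings each cluster node needs (port and filename are assumptions):

```
# redis.conf on each cluster node
port 7000
cluster-enabled yes
cluster-config-file nodes-7000.conf
cluster-node-timeout 5000
```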
Replication + Cluster
Redis Cluster supports replication by adding slaves to existing nodes; if a master becomes unreachable, its slave will be promoted to master.
Last but not least, Redis Sentinel can be used to manage replicated servers (not clustered, see below.) Clients connect to a Sentinel and request a master or slave to communicate with; the sentinels handle health checks of the masters/slaves, and will automatically promote a slave if a master is unreachable. You need at least 3 sentinels running so that they can agree on the reachability of nodes, and so that the sentinels themselves aren’t a single point of failure.
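A sketch of the sentinel.conf for each of the three sentinels (master name and address are assumptions); the trailing 2 is the quorum, meaning two sentinels must agree the master is down before a failover starts:

```
# sentinel.conf -- run one of these on three separate machines
sentinel monitor mymaster 192.168.1.10 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
```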
Cluster handles its own promotion and does not need Sentinel in front of it.
Tried to install LuaSec on a new machine recently and got the following error:
luarocks install luasec
Warning: Failed searching manifest: Failed loading manifest: Failed fetching manifest for http://luarocks.org/repositories/rocks - Error fetching file: Failed downloading http://luarocks.org/repositories/rocks/manifest - URL redirected to unsupported protocol - install luasec to get HTTPS support.
So all I need to do to install LuaSec is install LuaSec first, brilliant!
One solution, buried in https://github.com/luarocks/luarocks-site/issues/6, is to specify the server directly:
luarocks install --only-server=http://rocks.moonscript.org luasec
To run systemtap you need debuginfo, but it fails when installing the linux image with:
apt-get source linux-image-4.4.0-53-generic-dbgsym
Reading package lists… Done
Picking 'linux' as source package instead of 'linux-image-4.4.0-53-generic-dbgsym'
It then fails to find 'linux'. The solution is to uncomment the 'deb-src' lines in /etc/apt/sources.list, run apt-get update again, and then:
sudo apt-get build-dep --no-install-recommends linux-image-$(uname -r)
Running systemtap then gives the error: error: ‘struct module’ has no member named ‘symtab’
This is caused by a bug in systemtap 2.9, which doesn’t handle the kernel’s module layout on Ubuntu 16.04+, and can be solved by upgrading to systemtap 3.0+ by compiling from source.
Docker supports passing in environment variables to your containers as a handy way to easily switch environments when using multiple docker-compose files. For example you may have a base docker-compose.yml with a docker-compose.dev.yml and docker-compose.prod.yml file that specify environment variables for database hostnames/credentials.
The issue is that accessing these environment variables directly isn’t possible in OpenResty (e.g. using os.getenv()).
This is caused by NGINX removing all environment variables by default. To make them accessible again you need to declare them in your nginx.conf, but with no values, e.g.:
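Assuming hypothetical DB_HOST and DB_PASSWORD variables, the declarations look like this:

```
# Top of nginx.conf, outside any block -- no values needed,
# NGINX inherits them from the environment
env DB_HOST;
env DB_PASSWORD;
```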
You should then be able to access them as normal.
Trying to create an aggregate check in Zabbix to measure total bandwidth across a group of servers using ‘grpsum’, I get the status ‘Not supported’ and the error message ‘Unsupported item key.’
It’s not obvious anywhere in the docs or from googling the error, but the item type needs to be changed from ‘Zabbix agent’ to ‘Zabbix aggregate’.
Installed Windows 10, installed Steam, and now the Xbox button on the controller doesn’t open Big Picture any more.
You need to disable the DVR setting in the Xbox application that comes with Windows 10.
Either log into the Xbox app and disable the DVR option under settings, or, if you can’t log in for some reason, press the Windows key and G in any program, select “this is a game”, and then turn off the DVR setting there.