E2E testing in a controlled and isolated environment

In the previous article, you learned about E2E testing with Cypress as well as some general ideas behind it and how we in the Smart Alarm team use it. This article explains a few key aspects of isolation and determinism in testing that let you avoid side effects on production systems while still testing the entire software stack automatically. Some Docker experience is welcome but not strictly required. Our backend is based on Ruby, so there will be some Ruby commands in the examples and code snippets – but don't worry, the concept is the same no matter which language you use.

How it used to be

Traditionally, many teams used test environments that ran on dedicated machines. This was usually easy to set up by changing a few config parameters. Running automated tests against such an environment is certainly better than not testing against an integrated system at all. Unfortunately, this approach has many drawbacks, including:

  • Lack of isolation: Test runs interfere with each other if they use the same system
  • Lack of determinism: Guaranteeing that all tests start with the exact same preconditions is quite a challenge
  • Lack of repeatability: Setting up the software under test and the test environment itself was usually complex and hard to automate

Such test environments often diverged from the production setup significantly over time, making them increasingly less useful.

Containers to the rescue

Back in 2013, the term “container” was coined mainly in relation to Docker and its runtime, which enables developers to package complex software in a defined and deterministic manner. The idea hasn't changed much since: you put software into an image and execute it. This is done by defining a set of instructions in a so-called Dockerfile. Instead of affecting the host system by reading and changing files, the execution takes place in an isolated context – a container. A Docker image consists of layers, each of which represents a Dockerfile instruction. The layers are stacked, and each one is a delta of the changes from the previous layer. In the Dockerfile below, for example, our backend image is based on the ruby:2.6.6 image and adds seven instructions on top of it.

# Inherit from a defined Ruby image
FROM ruby:2.6.6

# Specify the default folder from which the process launches
WORKDIR /opt/app

# Copy dependency information from host to image
ADD Gemfile /opt/app/
ADD Gemfile.lock /opt/app/

# Install the dependencies declared in the Gemfile
RUN bundle install --without development --jobs 4 --deployment

# Copy the application from host to image
ADD . /opt/app/

# Make the entrypoint script executable
RUN chmod 777 /opt/app/docker-entrypoint.sh

# Specify the command that is used by default
CMD ["/opt/app/docker-entrypoint.sh"]
The Dockerfile used to power the Smart Alarm backend

Quick side note for non-Rubyists: A Gemfile describes the dependencies required to execute the associated Ruby code, while the bundle install command installs the exact dependencies and versions that are needed. For more information, see https://bundler.io/
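
For reference, a minimal Gemfile looks roughly like this – the gems below are purely illustrative, not our actual dependency list:

# Gemfile – declares the dependencies that `bundle install` resolves together with Gemfile.lock
source 'https://rubygems.org'

gem 'sinatra', '~> 2.0'   # web framework (example)
gem 'pg', '~> 1.2'        # PostgreSQL driver (example)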

How does this help?

Installing software on target machines seems simple, but in truth it can cause quite a few issues, especially from an infrastructure perspective. Most companies don't work in a homogeneous landscape. In other words, many different software solutions might be running on older and newer versions of a programming language, runtime or library, each with its own complex dependency tree. Keeping all of that working with manual installations is possible, but it is time-consuming and expensive. It's much easier to put those dependencies into an image as described above and execute it in a container.

At SMS digital, many different teams work with a variety of software stacks. One team uses Python, another prefers Ruby, and others use Node.js. If we installed the software individually, we would need to keep detailed wiki pages on how to install these products. With containers, we just need the name of the image and we are ready to go. Ensuring the dependency installation is a task for the developer – the person who knows best.
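
For example, running our backend on any machine that has Docker installed only takes the image name – the image and port below are the ones from the compose file shown later – and no Ruby setup on the host at all:

# Pull the backend image by name and run it – the host only needs Docker
docker pull smsdigital/backend:latest
docker run --rm -p 9393:9393 smsdigital/backend:latest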

Orchestration

Using the power of containers is convenient and helpful, and it prevents the headaches that come with setting up software manually. However, we still need to manage a combination of containers and allow them to communicate with each other. This is called orchestration. We orchestrate our images in a defined way so that we can launch them everywhere automatically.

Orchestration toolkits

There are different toolkits available for orchestrating images. Our team uses Kubernetes for deployments and Docker Compose for automated tests: setting up a service with Docker Compose is extremely easy and effective for simple setups, and it works with most CI tools such as Jenkins or CircleCI.

All orchestration tools aim to accomplish the same goal: isolate the containers as much as possible. With Docker Compose, this is defined in a docker-compose.yml file, which specifies the different aspects of one or more services.
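
One part of that isolation, which we also rely on later in our CI script, is networking: Compose puts all services of a project on their own network, named after the project directory (for example build_default when started from ~/build), and the containers resolve each other by service name. A quick sketch of how to inspect and use that network:

# List the networks Docker knows about – Compose adds one per project
docker network ls

# Attach an ad-hoc container to the project network and reach a service by its name
docker run --rm --network build_default appropriate/curl http://api:9393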

Putting services together

Combining different services is straightforward – just list the services and their required attributes. To define a service dependency, simply add a “depends_on” field to the service definition. The following file uses our software stack as an example, with comments explaining the important parts.

version: '3.2'
services:
  # A database can be used as a service, too!
  database:
    image: postgres:11.4
    command: postgres -c logging_collector=on -c log_destination=stderr -c log_directory=/logs -c log_filename=database.log -c log_statement=all -c log_duration=on -c log_min_duration_statement=0
    environment:
      # Here you can put env variables like the postgres username and password, e.g. USER: johndoe
    ports:
      - '54321:5432'
    volumes:
      - ./logs:/logs
  frontend:
    image: smsdigital/frontend:latest
    working_dir: /usr/share/nginx/html
    build: .
    ports:
      - '3000:80'
    volumes:
      - ./dist:/usr/share/nginx/html
    environment:
      # Here you can put env variables like the api url, etc.
  api:
    # Specify the image to be pulled
    image: smsdigital/backend:latest
    # Ports that should be forwarded to the host system
    ports:
      - '9393:9393'
    links:
      - database
    depends_on:
      - database
    environment:
      # Here you can put env variables like the database uri, etc.
The simple app stack in docker-compose.yml

Starting the stack is as easy as this line:

docker-compose -f docker-compose.yml up -d
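
If a service does not come up as expected, the usual Compose commands help with debugging, for example:

# Show the state of all services in the stack
docker-compose -f docker-compose.yml ps

# Follow the logs of a single service, e.g. the API
docker-compose -f docker-compose.yml logs -f api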

Automated test execution

To automate the test execution, we use a build pipeline based on CircleCI (but Jenkins or Travis will work just as well). CircleCI supports the execution of Docker and Docker Compose commands, which enables us to take our entire app stack and execute it easily.

Seeds – Start from square one

We use seed data to populate the database via a Ruby Rake task. The data is partly random and partly static (username, password), and the script that creates it is part of the backend. We therefore execute the database seeding task manually after the API is launched:

docker-compose exec api bin/rake db:seed

Using seeds is essential for quick testing. Without them, accessing and working with the application would be far more laborious and require a workaround just to get at the data in the first place. Having said that, do not execute these seeds against the production database.
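
To give an idea of what such a seed task does, here is a hypothetical excerpt – the model names and the Faker gem are illustrative assumptions, not our actual code:

# db/seeds.rb – executed via `bin/rake db:seed`

# Static data the E2E tests rely on, e.g. a known login
User.find_or_create_by!(username: 'e2e-user') do |user|
  user.password = 'known-test-password'
end

# Partly random data so the UI has realistic content to work with
10.times do
  Alarm.create!(name: Faker::Lorem.word, triggered_at: Time.now - rand(1..72) * 3600)
end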

Cypress

We already wrote about Cypress and how to write tests with it in a previous article, but how do we execute those tests in CircleCI?

Cypress provides Docker images for this purpose that include Chromium or Firefox and give us easy access to common functionality. We just need to bundle our tests into such an image, build it, and run it:

# Use the Cypress-provided browser image
FROM cypress/browsers:chrome69

WORKDIR /usr/src/app

# Copy the tests and their configuration into the image
COPY ./cypress ./cypress
ADD ./package.json ./package.json
ADD ./tsconfig.json ./tsconfig.json
ADD ./cypress.json ./cypress.json

# Prepare the output folders for screenshots, videos and test results
RUN mkdir cypress/screenshots cypress/videos cypress/test_results

VOLUME ./cypress/screenshots
VOLUME ./cypress/videos
VOLUME ./cypress/test_results
VOLUME ./test-results

# Install dependencies and define the default test command
RUN yarn
CMD ["yarn", "cypress:run:ci"]
The test image that runs the E2E tests
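
The CMD refers to a cypress:run:ci script in our package.json. Such a script is typically just a thin wrapper around cypress run; an illustrative guess at what it could look like (not our exact script):

{
  "scripts": {
    "cypress:run:ci": "cypress run --browser chrome"
  }
}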

The complete picture

Now that we have all Docker files at our disposal, it's time to write the CircleCI config:

e2e:
    working_directory: ~/build
    machine: true
    environment:
      TZ: 'Europe/Berlin'
    steps:
      - restore_cache:
          key: repo-{{ .Environment.CIRCLE_SHA1 }}
      - restore_cache:
          key: build-{{ .Environment.CIRCLE_SHA1 }}
      - restore_cache:
          key: packages-{{ checksum "yarn.lock" }}-{{ .Environment.CIRCLE_SHA1 }}
      - run: docker info
      - run: docker login -u "${DOCKER_USERNAME}" -p "${DOCKER_PASSWORD}"
      - run: sudo chmod -R 777 /home/circleci/build/logs .
      - run:
          name: run ci-runner
          # This file contains the commands to launch Smart Alarm and the E2E image
          command: |
            ./docker/e2e/ci-runner.sh ${CIRCLE_BRANCH}
      - store_artifacts:
          path: cypress_output/videos
      - store_artifacts:
          # If you configure Cypress accordingly, it will create screenshots automatically, which you can store in CircleCI
          path: cypress_output/screenshots
      - store_test_results:
          path: cypress_output/results
      - store_test_results:
          path: test-results
Excerpt of the CircleCI config that shows the E2E job – you also need steps to build, package, etc.

Note that the file at ./docker/e2e/ci-runner.sh contains the commands for the complete test run, including the seeding and test-execution commands mentioned above. The file looks like this:

#!/bin/bash

# boot up database
# set up api
# start frontend
docker-compose -f docker-compose.yml up -d

# run network checks
docker run --network build_default appropriate/curl --retry 5 --retry-delay 5 --retry-connrefused http://api:9393
docker run --network build_default appropriate/curl --retry 5 --retry-delay 5 --retry-connrefused http://frontend

# grant access to the logs directory
sudo chown ${USER}:${GROUP} -R /home/circleci/build/logs

# run the seeds
docker-compose exec api bin/rake db:seed

# build the E2E image
docker build -t e2e -f Dockerfile .

# run Cypress
# provide volumes to capture screenshots and videos on test failure
docker run --network build_default -it \
  -v "$PWD"/cypress_output/test_results:/usr/src/app/cypress/test_results \
  -v "$PWD"/cypress_output/videos:/usr/src/app/cypress/videos \
  -v "$PWD"/cypress_output/screenshots:/usr/src/app/cypress/screenshots \
  --rm e2e yarn cypress:run:ci --reporter junit --reporter-options mochaFile=./cypress_output/test_results/result.xml
The ci-runner.sh script – a failed test results in a screenshot of the Cypress window if configured accordingly

As a result, you end up with a pipeline that fails with meaningful errors and screenshots for the E2E tests, so you no longer have to search for the needle in the haystack when figuring out why a test failed.

That's all, folks!

With these tools in place, testing your stack automatically is quick and easy. Cypress is a great test runner, and the docker-compose setup is simple enough to work with.

Lekealem Asong
Full-Stack Developer
SMS digital GmbH