At Dock we rely heavily on automated UI testing as a primary way of ensuring quality and preventing regressions in our web apps and satellite projects. We have found it an excellent way to verify, by simulation, that our web applications behave the way our end users expect, and we have seen quality improve steadily as we have introduced more tests.
We rely on a couple of technologies for running UI tests. The first is Puppeteer, a Node.js library that provides a high-level API for controlling headless Chromium. We have used Puppeteer since its very early 0.x versions and have watched it grow more powerful and mature with each release. Our biggest issue early on was getting it to run properly inside a Docker container, but the project now has good documentation on troubleshooting such problems.
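For reference, a minimal sketch of launch options that commonly resolve the Docker problems mentioned above. The flags are standard Chromium switches; whether you need them depends on your base image, so treat this as a starting point rather than our exact configuration:

```javascript
// Launch options that commonly make headless Chromium work inside Docker.
const launchOptions = {
  headless: true,
  args: [
    '--no-sandbox',            // Docker's default seccomp profile blocks Chromium's sandbox
    '--disable-setuid-sandbox',
    '--disable-dev-shm-usage', // /dev/shm is only 64 MB in Docker by default, which crashes tabs
  ],
};

async function launchBrowser() {
  // Loaded lazily; requires the `puppeteer` package to be installed.
  const puppeteer = require('puppeteer');
  return puppeteer.launch(launchOptions);
}
```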
Here is an example of a test scenario:
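A minimal scenario of this shape, written in Cucumber's Gherkin syntax, might look like the following (the route, form fields, and expected text are hypothetical placeholders, not our actual test code):

```gherkin
Feature: User login

  Scenario: Successful login
    Given I navigate to "/login"
    When I fill "email" with "user@example.com"
    And I submit form
    Then I see "Welcome back"
```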
One powerful feature of Cucumber is that once you define a step, you can use it in multiple scenarios. In the example above you can see the steps `I navigate to ""` and `I submit form`. These steps are generic and are reused in many different scenarios.
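To illustrate why this reuse pays off, here is a deliberately simplified sketch of the mechanism (a hand-rolled step registry, not Cucumber's real API): each step is defined once, matched by pattern, and any number of scenarios can invoke it.

```javascript
// Minimal illustration of step reuse: a registry mapping patterns to handlers.
const steps = new Map();

function defineStep(pattern, fn) {
  steps.set(pattern, fn);
}

function runStep(text, world) {
  for (const [pattern, fn] of steps) {
    const match = text.match(pattern);
    if (match) return fn(world, ...match.slice(1)); // pass captured groups as arguments
  }
  throw new Error(`Undefined step: ${text}`);
}

// Generic steps, defined once:
defineStep(/^I navigate to "(.+)"$/, (world, path) => { world.path = path; });
defineStep(/^I submit form$/, (world) => { world.submitted = true; });

// Two different scenarios reuse the same step definitions:
const login = {};
runStep('I navigate to "/login"', login);
runStep('I submit form', login);

const signup = {};
runStep('I navigate to "/signup"', signup);
runStep('I submit form', signup);
```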
A web app loaded into Puppeteer does not normally run in isolation: it performs requests to our backend and reacts to the responses. Initially we tried running tests against our real staging API, but we ran into multiple issues with that approach.
We quickly found that testing against the real backend was not viable and decided that tests should run against a mock backend instead. This significantly simplifies the process, especially if you think of your web app as a classic program that takes input and produces output. Backend responses are essentially the input to our app, and there is no reason to maintain a complex test setup just to provide that input.
When we realized this, we had two options: run a standalone mock server for the app to talk to, or mock requests in the browser itself with a library such as Pretender.
We initially tried the first approach, but it proved inadequate: we could not distinguish identical requests made from different scenarios. We needed to define mock responses in isolation, per scenario, so we settled on Pretender.
This worked well, but it required injecting Pretender into our web page during tests, which was not a trivial task. Among other things, it forced us to keep two dev servers running locally: one serving the "normal" app, which could be opened in a browser, and a second "test" one with Pretender injected, so that we could develop and test in parallel.
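For readers unfamiliar with Pretender: it stubs `XMLHttpRequest`/`fetch` inside the browser, and route handlers return a `[status, headers, body]` triple. The sketch below shows the general shape of such a setup; the `/api/users` route and its payload are hypothetical examples, not our real endpoints.

```javascript
// A route handler in Pretender's format: [status, headers, body].
const usersHandler = () => [
  200,
  { 'Content-Type': 'application/json' },
  JSON.stringify([{ id: 1, name: 'Alice' }]), // hypothetical payload
];

// Builds the in-browser mock server. Pretender only runs in a browser
// environment, which is why this code had to be injected into a "test"
// build of the app.
function startMockServer(Pretender) {
  return new Pretender(function () {
    this.get('/api/users', usersHandler); // hypothetical route
  });
}
```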
Our process improved dramatically with the release of Puppeteer v0.13, which introduced the `request.respond()` method. With request interception handled at the browser level, we managed to completely decouple our app from the testing environment. We now keep a local instance of the app open in a browser, which is convenient for development, and at the same time we can load the very same app into Puppeteer. Thanks to request interception, requests can be mocked without any changes to the app itself or to the server running it. It is also blazing fast.
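A minimal sketch of this interception pattern, assuming a simple exact-path mock table (the endpoint and payload are hypothetical): every request the page makes is inspected, mocked requests are answered inside the browser via `request.respond()`, and everything else, such as the app bundle itself, is let through with `request.continue()`.

```javascript
// Hypothetical mock table keyed by URL path.
const mocks = {
  '/api/users': {
    status: 200,
    contentType: 'application/json',
    body: JSON.stringify([{ id: 1, name: 'Alice' }]),
  },
};

// Pure helper: look up a mock response for a request URL (exact-path match).
function findMock(url, table) {
  const { pathname } = new URL(url);
  return table[pathname] || null;
}

async function openAppWithMocks(appUrl) {
  // Loaded lazily; requires the `puppeteer` package to be installed.
  const puppeteer = require('puppeteer');
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.setRequestInterception(true);
  page.on('request', (request) => {
    const mock = findMock(request.url(), mocks);
    if (mock) {
      request.respond(mock);   // answered in the browser; no backend involved
    } else {
      request.continue();      // e.g. the app's own JS/CSS assets
    }
  });

  await page.goto(appUrl);
  return { browser, page };
}
```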
As I mentioned before, we use UI testing in several projects, so it was natural to extract some of the common logic into a separate library and share it between them. We created and open-sourced a tiny library, pptr-mock-server. It currently lacks API documentation and unit tests, but it at least gives an idea of how to implement backendless testing with Puppeteer.
Even though automated UI testing is a great way to maintain a high quality bar for our apps, the path to getting our tests up and running, and delivering real value, was a thorny one.
We hope that you will find this article helpful when implementing UI tests for your own app or upgrading your existing setup.