Create a testing environment that lets users interact with multiple form variations.

Next, we needed to test our hypothesis. That meant building a front end and back end that provided:

  1. A back end that integrates easily with external services and handles authentication
  2. A front-end system that helps us templatize elements, making them clean and easy to reuse and alter across many tests
  3. An organization method that isolates tests from each other while preserving them for future reference
  4. A reliable front end that limits the bugs and usability issues that could skew our data

Testing workflow

In response to these needs, we chose the following set of components as the core of our testing/building workflow.


All of our tests are hosted on a Node.js server running Express. We chose this stack for its ease of integration with services like Firebase, and for how easily we can extend functionality with additional Node modules in the future. One module we're currently using is EJS, which helps us templatize test elements across pages.


Most of our tests require users to input some basic demographic information about themselves (date of birth, country, tech-savviness, etc.). We connect to a Firebase Realtime Database to store that data when the test form is submitted, which allows us to view our analytics data in the context of audience demographics. We also use the unique key Firebase generates for each submission as a validation code for our MTurk users, so they can confirm they completed the assignment by identifying themselves on the MTurk project page with their unique user ID.
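One way to sketch this flow uses the Realtime Database REST API, where a POST to a `.json` path returns the generated push key. The database URL and field names below are placeholders, and the fetch parameter is injectable purely so the function is easy to exercise without a live database:

```javascript
// Sketch: store a submission in a Firebase Realtime Database via its REST API
// and return the generated push key as the MTurk validation code.
const DB_URL = 'https://example-project.firebaseio.com'; // hypothetical project

async function submitResponse(formData, fetchImpl = fetch) {
  const res = await fetchImpl(`${DB_URL}/responses.json`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ ...formData, submittedAt: Date.now() }),
  });
  if (!res.ok) throw new Error(`Firebase write failed: ${res.status}`);
  const { name } = await res.json(); // "name" holds the generated push key
  return name; // shown to the worker as their validation code
}
```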

Firebase example

We chose Bootstrap as the starting point of our front-end framework because it includes basic styling and functionality for the form elements that we're testing. Vanilla Bootstrap provided a good foundation for us to start experimenting with the look and feel of the components we were testing, and its ubiquity and reliability keep us from worrying about interface bugs that could throw off our dataset.


We're using GitHub in a fairly nontraditional way. We need to keep the code from each of our tests separate, but we also need an archive of all of our old tests for future reference. To achieve this, we create a new branch in our repository for each test. This lets us go back in time and check out any of our previous tests to capture screenshots or borrow code for a new test. We also keep a master branch that includes any improvements we make to the backend.
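The branch-per-test cycle looks roughly like this (the branch name is made up for illustration):

```shell
# Start a new test from the shared backend on master:
git checkout master
git checkout -b test-placeholder-labels
# ...build and run the test on this branch...

# Backend improvements go on master so every future test inherits them:
git checkout master

# Later, revisit an old test for screenshots or to borrow code:
git checkout test-placeholder-labels
```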

GitHub example

We've experimented with a few different analytics solutions for these tests. We break them down in more detail in the Analyze & Conclude section, but our goal here is pretty simple: we need a way to track user behavior such as time spent in fields, the number of corrections made, page abandonment rates, etc. We're also using services that generate heatmaps of user activity, and even screen recordings of all activity on the page.
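The "time spent in fields" metric above can be sketched as a small pure function over focus/blur events; the DOM wiring (shown in comments) is an assumption about how it would hook into our form pages, not our actual instrumentation code:

```javascript
// Sketch: total time spent in each form field from a log of focus/blur events.
// events: [{ field, type: 'focus' | 'blur', t }] with t in milliseconds.
function fieldTime(events) {
  const open = {};   // field -> timestamp of its unmatched focus
  const totals = {}; // field -> accumulated milliseconds
  for (const { field, type, t } of events) {
    if (type === 'focus') {
      open[field] = t;
    } else if (type === 'blur' && open[field] !== undefined) {
      totals[field] = (totals[field] || 0) + (t - open[field]);
      delete open[field];
    }
  }
  return totals;
}

// In the browser, the event log would be fed by listeners like:
// document.querySelectorAll('input, select').forEach((el) => {
//   el.addEventListener('focus', () => log(el.name, 'focus'));
//   el.addEventListener('blur', () => log(el.name, 'blur'));
// });
```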