Understand how basic web form functionality performs in a baseline evaluation.
Discover a method for cheap and effective user testing.
Discover elements and form fields we want to refine through data-driven testing.
Discover what information from testing is valuable and worth documenting.
| Gender | 38% Male, 62% Female |
| Tech Savvy | 1% Strongly Disagree, 3% Disagree, 5% Neither, 58% Agree, 33% Strongly Agree |
What we tested
This was our first real attempt at collecting data from an online form. We wanted to see what the analytics looked like when people use basic form fields with no additional modification.
Enter day, month, and year in three separate free-text fields.
We know this kind of dropdown isn’t a best practice, but we wanted to find out just how bad a practice it actually is.
We’re using the information people enter to discover who is taking the test. Figuring out how they get online is a big part of that effort.
Another part of getting demographic information is gender. We’ll continue to experiment with balancing speed and a variety of options with this form field in the future.
Country and Profession dropdowns had the longest duration.
We noticed that people spent about five seconds on the country dropdown. We think this time could probably be shorter, and we’ll look at ways to improve that.

The profession field took 12 seconds to fill out, the longest by eight seconds. This may be a bug, which we’re working to fix.
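Per-field timings like these can be derived from focus and blur events in an analytics log. The sketch below shows one way to do that; the event shape (`{ field, type, t }`) is a hypothetical schema for illustration, not our actual instrumentation.

```javascript
// Accumulate time spent in each field from a focus/blur event log.
// Timestamps (t) are milliseconds; the schema here is an assumption.
function fieldDurations(events) {
  const open = {};   // field -> timestamp of the most recent focus
  const totals = {}; // field -> accumulated milliseconds
  for (const e of events) {
    if (e.type === "focus") {
      open[e.field] = e.t;
    } else if (e.type === "blur" && open[e.field] !== undefined) {
      totals[e.field] = (totals[e.field] || 0) + (e.t - open[e.field]);
      delete open[e.field];
    }
  }
  return totals;
}

// Example: one participant spends five seconds in the country dropdown.
const log = [
  { field: "country", type: "focus", t: 1000 },
  { field: "country", type: "blur", t: 6000 },
];
// fieldDurations(log) -> { country: 5000 }
```

Summing focus-to-blur intervals, rather than first-focus to last-blur, means a participant who leaves a field and returns to it is still counted correctly.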
Add a visual affordance to click on a dropdown.
We also observed that in dropdowns, people tend to click near the beginning of the bar, or on the arrows at the end of the bar. It’s possible that left-aligning the arrows on a dropdown, or making it narrower, could speed up completion times for dropdowns.
66% use the tab key to move from field to field.
95% of people click into the first date field, but only 33% of them click into the following two. This means about 66% of users employ the tab key to move between the date fields.
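This inference can be reproduced from click logs: anyone who clicked the first date field but not the later ones presumably tabbed through. A minimal sketch, assuming a hypothetical session record that lists which fields a participant clicked:

```javascript
// Estimate the share of participants who used the tab key by comparing
// clicks on the first date field against clicks on the later two.
// Field names ("day", "month", "year") and the session shape are
// illustrative assumptions.
function tabKeyShare(sessions) {
  const clickedFirst = sessions.filter((s) => s.clicked.includes("day"));
  if (clickedFirst.length === 0) return 0;
  // Clicked the first field but neither of the later ones: likely tabbed.
  const tabbed = clickedFirst.filter(
    (s) => !s.clicked.includes("month") && !s.clicked.includes("year")
  );
  return tabbed.length / clickedFirst.length;
}
```

With two of three participants skipping clicks on the later fields, `tabKeyShare` returns roughly 0.66, matching the figure above.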
Increase the quality of data.
These data collection methodologies aren't perfect. We're still seeing random aberrations in the data, and there's no way to guarantee clean inputs from every participant. Larger sample sizes will help counteract this in the future. Something to consider moving forward is how we can increase the quality of data, either by processing the results better to remove aberrations or by improving our recruiting methodologies.
We saw “bad data” rates of about 3–5% with a sample of 100.
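One simple way to remove aberrations in post-processing is an interquartile-range cut on completion times, so a handful of implausible values don't skew the averages. The thresholds and quantile method below are illustrative assumptions, not a description of our actual pipeline:

```javascript
// Drop completion times outside 1.5×IQR of the middle 50% — a common
// rough cut for outliers. Quantiles use nearest-rank indexing for
// simplicity; a production pipeline might interpolate instead.
function trimOutliers(times) {
  const sorted = [...times].sort((a, b) => a - b);
  const q = (p) => sorted[Math.floor(p * (sorted.length - 1))];
  const q1 = q(0.25);
  const q3 = q(0.75);
  const iqr = q3 - q1;
  const lo = q1 - 1.5 * iqr;
  const hi = q3 + 1.5 * iqr;
  return times.filter((t) => t >= lo && t <= hi);
}

// Example: a 120-second entry among 5–7-second entries is discarded.
// trimOutliers([5, 6, 5, 7, 6, 120]) keeps only the 5–7s values.
```

Filtering like this treats aberrations as noise rather than signal, which is reasonable for timing data but should be applied cautiously to answers themselves.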