One of the key responsibilities of information architects (IAs) at Beaconfire during a site redesign is creating navigation labels that make sense for a site’s priority audiences. Recently, we’ve started using tree testing to check our work and confirm that those labels actually do make sense to those audiences.
Through the IA process, we focus on two parallel inputs – our client’s needs and the needs of their priority audiences. We identify the key information audiences want from the site and frequently get their input on how they’d categorize that content via open card sorts. The card sort gives us an opportunity to learn how priority audiences think about and group a site’s information, and what language they use to describe it. In an open sort, participants are presented with a list of content items and asked to group those items into whatever categories make sense to them. Last summer, Rebecca wrote a great post that explains card sorting in more detail.
After the card sort, we bring that input back to the client in the form of a draft sitemap and continue to refine the site structure to ensure all the content has a home. Before we sign off on the sitemap, we like to present the “new & improved” navigation to users and see whether it is going to work for them. In the past, we have tested the new navigation with a ‘closed card sort.’ As in an open sort, users are presented with a list of content items, but in a closed sort they are asked to place each item into the pre-determined category (navigation element) that makes the most sense. Closed sorting has worked well for our navigation validation – we’ve seen good results from the testing and solid performance from the navigation.
However, we recently discovered OptimalWorkshop’s TreeJack, a “tree testing” tool that lets us test the navigation from a much more authentic starting point. In a tree test, participants are asked to click through the site tree (navigation structure) to complete a series of tasks. This process closely mirrors a website user’s mental model – “Where will I find X?” – when they come to a site.
In tree testing, we develop tasks that align with the priority content and features of the site or, if a particular navigation element is in question, we can create a task to test it. The testing results give us a quick overview of how many participants succeeded (reached the page we identified as the answer for the task); whether they got there directly or wandered through the navigation; and how long it took. In addition, the results show the path each participant took for each task, which gives us good insight into the ‘whys’ behind unsuccessful tasks.
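If you like to poke at raw numbers yourself, the metrics described above are easy to compute by hand from participants’ click paths. Here is a minimal sketch in Python using made-up data – the structures and field names are illustrative assumptions, not TreeJack’s actual export format:

```python
# Hypothetical tree-test results for one task: each participant's clicked
# path through the tree, plus time taken in seconds. (Illustrative data
# only; this is not TreeJack's export format.)

CORRECT_PATH = ["Home", "Programs", "Youth Services"]

results = [
    {"path": ["Home", "Programs", "Youth Services"], "seconds": 12},                    # direct success
    {"path": ["Home", "About", "Home", "Programs", "Youth Services"], "seconds": 34},   # wandered, then succeeded
    {"path": ["Home", "Get Involved", "Volunteer"], "seconds": 21},                     # failed
]

def summarize(results, correct_path):
    # A participant "succeeded" if their final click landed on the answer page.
    succeeded = [r for r in results if r["path"][-1] == correct_path[-1]]
    # A success is "direct" if they took the correct path with no detours.
    direct = [r for r in succeeded if r["path"] == correct_path]
    times = sorted(r["seconds"] for r in results)
    return {
        "success_rate": len(succeeded) / len(results),
        "directness": len(direct) / len(succeeded) if succeeded else 0.0,
        "median_seconds": times[len(times) // 2],
    }

summary = summarize(results, CORRECT_PATH)
print(summary)
```

With the sample data above, two of three participants succeed, and only one of those two got there without backtracking.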
We’ve been impressed with the TreeJack tool. Tests are easy to set up, and the results are easy to understand and act on. It’s also nice that the tool is online – it’s ready whenever your participants are, and no special software is needed.
Sounds fun, doesn’t it? Try it out here and let us know what you think: https://beaconfire.optimalworkshop.com/treejack/survey/BeaconfireWire