Experience Design: Asavie Moda: I.A.


Good information architecture (I.A.) helps users to understand their environment and organises content so they can find what they're searching for. How can this be applied to a software system and what is the process?


Not being able to find what you're looking for, or understand what you're seeing, is a common problem, and it happens to all systems over time. One cause is adding to or expanding the system in an ad hoc manner: the system loses cohesion, each part starts to evolve separately and there is no overall narrative. Entering a new building for the first time and navigating your way to the correct office or floor can be difficult if the building hasn't been planned as one complete unit.

“App Control” was our latest feature. It allowed administrators to see which apps users had installed on their phones. But where was the most appropriate place to add this feature to the overall system? We “temporarily” put it under the Account label in the U.I. while development was taking place, with the best of intentions of finding a more suitable home for it later. But pressure to make the release date meant that we never found a better home for it.

Unfortunately when it came to user testing nobody knew where to find it. We had a serious problem, but how to fix it?

Diagnosing the Problem

We knew there was a problem: nobody could find our latest and greatest feature, “App Control”. We knew this because we ran user tests, directly observed users failing to complete tasks and measured the results.

Users' score card
Example of a user test score card. 8 tasks had to be completed in order for the user to achieve their goal: “Only allow compliant apps to be installed on employees' devices”

Above is an example of one test result, highlighting the time taken to find where App Control was in the U.I. It's worth noting that the user was being observed, and spent much longer searching than they would have in real life. We observed the user “pogo sticking” (bouncing repeatedly in and out of sections), an indicator that they couldn't find what they were looking for.

Users were asked to think aloud while completing tasks, and the most frequently heard, distressed utterance was “I can't find it.” Our problem was twofold: 1. content was poorly organised, so it was taking too long for users to find what they were searching for, and 2. the labels of our categories were not meaningful or distinct. They did not create enough context for users to understand the environment.

Solving the Problem

We had a clearly defined problem: users couldn't find what they were looking for and the labels of our categories didn't make sense to them. One solution is to organise the system in a way that makes sense to the people who will use it. Systems need structure, and information architecture is one way concepts can be organised into meaningful categories so that content is easy for users to find (http://www.iainstitute.org/what-is-ia). We adopted three methods to reorganise and restructure the I.A. of our system: 1. Card Sorting, 2. Tree Testing and 3. Competitor Analysis. Once the restructure was completed we remeasured the results to discover if the new information architecture was superior to the old version.

How the I.A. looked before the restructure

1. Card Sorting

If users can't find what they're looking for quickly, they will just go somewhere else. To overcome this you need to understand why people use your product. What is the job to be done? Structure the system in a way that makes sense to the people who will use it. Card sorting is a method that helps determine how users organise content and label categories and concepts in their own heads. It can offer insight into their mental model of the problem you are trying to solve.

We wrote down on cards all the concepts and functionality contained within our system, in plain English (roughly 25 cards). We asked users and key stakeholders to organise the cards into groups that made sense to them, then asked them to label each of the groups they had created. Where possible we did this in a room together to hear “the why”: why individuals thought certain concepts belonged together, and the rationale for the labels they had chosen.
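To make sense of the sorts afterwards, the groupings can be aggregated into pair counts: how often did two cards end up in the same group? A minimal sketch in Python, using made-up cards and groupings rather than our actual study data:

```python
from itertools import combinations
from collections import Counter

# Illustrative card sorts: each participant's groups of cards.
# (Card names and groupings are examples, not the real study data.)
sorts = [
    [{"App Control", "Device Policy"}, {"Billing", "Invoices"}],
    [{"App Control", "Device Policy", "Billing"}, {"Invoices"}],
    [{"App Control", "Device Policy"}, {"Billing", "Invoices"}],
]

# Count how often each pair of cards lands in the same group.
pair_counts = Counter()
for participant in sorts:
    for group in participant:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Agreement = how many participants grouped the pair together.
for (a, b), n in sorted(pair_counts.items(), key=lambda kv: -kv[1]):
    print(f"{a} + {b}: {n}/{len(sorts)} participants")
```

Pairs that most participants grouped together are strong candidates to live under the same category in the new structure.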

Using a label (plain language) that is already familiar to the user to explain a concept will reduce the amount of thinking they have to do. Using labels for categories in the U.I. that relate to the task at hand will result in users having a better understanding of their environment. “Software concepts (objects and actions) should be based on the task rather than the implementation”

2. Tree Testing

Once card sorting was completed we had the basic structure of our system. We had content organised into logical categories and meaningful labels assigned to these categories.

Then we needed to test whether the structure actually made sense to users: can they find what they're looking for? Do the labels of categories and how the content is organised really align with the key tasks they have to do? Tree testing is a technique that can assist with this. It involves writing very simple, realistic tasks, presenting the tasks to users and measuring whether they find what they're looking for. Users complete tasks using a text-based version (information architecture tree) of your system.
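The "tree" in tree testing is just nested labels with no visual design attached. As a rough illustration (the labels below are examples loosely based on this case, not the real tree), it can be modelled as a nested structure and searched for the branch that answers a task:

```python
# A minimal text-only IA tree of the kind used in tree testing.
# Labels are illustrative, loosely based on the restructured design.
tree = {
    "Devices": {
        "Device Management": ["See apps installed on a specific device"],
        "Reports": [],
    },
    "Security": {
        "App Control": ["Only allow compliant apps"],
    },
    "Account": {
        "Billing": [],
    },
}

def find_path(node, target, path=()):
    """Depth-first search for the branch whose leaf answers the task."""
    if isinstance(node, list):
        return path + (target,) if target in node else None
    for label, child in node.items():
        found = find_path(child, target, path + (label,))
        if found:
            return found
    return None

print(" > ".join(find_path(tree, "See apps installed on a specific device")))
```

In a real tree test the participant does this search by hand, and the tool records every branch they open along the way.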

tree as users completing tasks saw it
Text-based version (information architecture tree). This is what users saw when completing tasks. Users followed any path they wished when answering questions, then selected which branch of the tree they believed would best answer the task question. (Post card sorting activity)

Results from Tree Testing

Metrics from the tree testing
Metrics from the tree testing. The task users were asked to complete can be seen at the top in the light blue box. Users followed any path they wished when answering the task question. The correct path (Devices > Device Management > See apps installed on a specific device) is displayed just below the task

Many different companies offer this service. We selected Optimalworkshop because they offer many different metrics to measure how successfully users completed each task. Above you can see an example of how all users performed on this task. One of the metrics we were most interested in was "First Click" or "Visited First".

First click is important because it's a good indicator of the suitability of the category label. Ideally we wanted all users (100%) to first click on the Devices label when completing this task.
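Scoring this metric from raw results is straightforward. A small sketch, with invented click paths standing in for real participant data:

```python
# Illustrative tree-test results for one task: each entry is the
# sequence of labels a participant clicked through (not real data).
paths = [
    ["Devices", "Device Management", "See apps installed on a specific device"],
    ["Account", "Devices", "Device Management", "See apps installed on a specific device"],
    ["Devices", "Device Management", "See apps installed on a specific device"],
    ["Devices", "Reports"],
]

correct_first = "Devices"
correct_destination = "See apps installed on a specific device"

# First click: did the participant start in the right category?
first_click = sum(p[0] == correct_first for p in paths) / len(paths)
# Success: did they end up at the correct branch, however they wandered?
success = sum(p[-1] == correct_destination for p in paths) / len(paths)

print(f"First click on '{correct_first}': {first_click:.0%}")
print(f"Task success: {success:.0%}")
```

Note that first click and success diverge: the second participant succeeded despite a wrong first click, which is exactly the kind of signal that points at a weak category label.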

Based on the results from the tree testing we iterated on the category labels, reorganised the grouping of content within the tree and retested the tasks that scored lowest. This process was repeated until we were satisfied that all the key tasks were quickly and accurately found by users.

3. Competitor Analysis

We looked at our competitors' U.I.s to see if there was anything we could learn from them. The goal of the analysis was to see the types of words they used to label categories in their systems and how they organised and grouped their content.

Overall the analysis didn't prove very fruitful. Many of the labels used to describe categories were based on the implementation (they made sense to the designers of the system) rather than on the users' task (what makes sense to the users of the system). There did not appear to be an overall theme to how content was organised.

Competitors Information Architecture
Examples of competitor's information architecture trees

Visual Design

Once the information architecture tree was developed to meet users' needs, it was time to start the visual design. We created a prototype that matched as closely as possible the final look and feel of the product. If users can score highly using just a pared-down version of the information architecture tree, theoretically they should score even higher once they see a fully completed visual design and gain more context from the types of information they can see on individual screens.


Using the same tasks, we tested the prototype and remeasured using the same metrics to establish whether there had been an increase or decrease in usability.

The restructured prototype incorporating the new information architecture (top two images) alongside the original design
Score card for the restructured prototype. Time taken to complete the task was massively reduced, from 5:40 to 10 seconds, a 97% improvement
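That figure follows directly from the before and after timings:

```python
# Before/after time-on-task, from the score cards above.
before = 5 * 60 + 40  # 5:40 → 340 seconds
after = 10            # 10 seconds

improvement = (before - after) / before
print(f"{improvement:.0%} reduction in time on task")  # → 97% reduction in time on task
```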

There were big improvements in findability for all of the key tasks; our security functionality saw the biggest gains. We moved all of the security features from under the "Account" label to a new top-level label called "Security". The task was completed much more quickly and accurately, as reflected in the score card result above.


Reorganising and restructuring the system was a big undertaking. All page names and links had to be refactored, all references to old labels had to be completely removed from the U.I. and existing content had to be reorganised. The help website also had to be restructured, and new screen grabs had to be taken to replace all the old images.

The product was small when it started and had a very specific and limited number of features. But over time more features were added without an overall plan; this led to features being placed in unsuitable areas of the system, where they were difficult for users to find.

Restructuring our system and improving the information architecture solved the problems we had: 1. content was poorly organised and 2. the labels of our categories were not meaningful or distinct. Many features were buried too deep in the old information architecture and were not being used because they were too difficult to find. Reorganising the system based on the tasks users were trying to complete, and using labels for categories that were already familiar to them, produced significant improvements in usability.