This week’s reading: Morville and Rosenfeld, Chapters 5 and 6
Our reading this week focused on two interconnected topics: how to conceptualize and group the organizational items of a system, and how to choose words or labels to represent them. It was an exciting pair of chapters, because the effectiveness of an information system like a website lends itself to empirical testing. That is, broadly speaking, it is actually possible to decide which of two possible schemes is better, in the sense of helping more of the users more of the time. In this comment I’ll focus on how we might apply empiricism to the ideas described in these chapters.
The text outlines organizational schemes appropriate for both exact and ambiguous searches. To use the language of my last entry, “fully realized” questions can be answered with straightforward schemes like alphabetical or chronological arrangement of data, but “fuzzy” questions – or items of information that fall into “fuzzy” categories – require more creativity in their organization. Most of the textbook’s examples of good interfaces give the user several access points in these ambiguous cases. Dell’s website (p. 65), for example, allows its customers to browse by topic (notebooks, desktops, support) or by audience (home, small business, government). A multiplicity of access points is likely to help some users and confuse others. Site analytics might help Dell empirically determine whether the former outnumber the latter.
A relevant metric of effectiveness might be the number of visitors who click on Dell’s topic links versus its audience links. A priori, I would expect that few visitors click “Home & Home Office” from the audience menu; these users are likely to have a more clearly defined need, and thus are more likely to use the topic links. If analytics bear out this intuition, Dell should consider eliminating this hyperlink from its audience menu. Conflicting with this impulse, however, is the principle of comprehensiveness (p. 100): if we have special links for business and government audiences, shouldn’t we have a special link for home audiences? Retaining the “Home & Home Office” link might improve the menu’s consistency and thus help the user build a mental model of the Dell website, even if the link is rarely used.
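To make the metric concrete, here is a minimal sketch of how such an analytics comparison might be tallied. The link labels are taken from the 2006 Dell menus described in the text, but the click counts are invented purely for illustration; real analytics data would come from Dell’s own logs.

```python
# Hypothetical click counts for each menu link (illustrative numbers only,
# not real Dell analytics). Labels follow the 2006 site described in the text.
audience_clicks = {
    "Home & Home Office": 120,
    "Small Business": 940,
    "Medium & Large Business": 860,
    "Government, Education, & Healthcare": 510,
}
topic_clicks = {"Notebooks": 4200, "Desktops": 3900, "Support": 5100}

# Share of all menu clicks captured by each audience link.
total = sum(audience_clicks.values()) + sum(topic_clicks.values())
shares = {label: n / total for label, n in audience_clicks.items()}

for label, share in shares.items():
    print(f"{label}: {share:.1%} of menu clicks")
```

A link whose share falls below some threshold (say, one percent of all menu clicks) would be a candidate for the kind of removal experiment discussed next.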
The challenge, then, is to design an empirical test to settle the question of whether our little-used link contributes more than it detracts. A first approach might be to recruit a panel of diverse users, each with a genuine need. Through an automated survey, the website could prompt users to articulate their need. Half of these users could be directed to Dell’s usual site, while the other half are directed to a version of the site with the questionable link omitted. Their progress through the respective designs could be tracked and their success quantified through an exit survey. If one version of the site connects users to content at a significantly higher rate, and no external factors such as internal politics intervene, Dell should adopt the more successful architecture. This approach could be applied to micro aspects of design, such as whether to include a particular hyperlink, or to macro aspects, such as an entire top-down site redesign.
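The split-panel experiment above is essentially an A/B test, and the phrase “significantly higher” can be given statistical teeth. Below is a hedged sketch of a standard two-proportion z-test comparing exit-survey success rates between the two site variants; the function name and all counts are my own illustrative inventions, not anything from the textbook or from Dell.

```python
from math import erf, sqrt

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two success proportions.

    Returns the z statistic and an approximate two-sided p-value
    computed from the standard normal CDF.
    """
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled proportion under the null hypothesis of no difference.
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative (made-up) exit-survey results:
# variant A keeps the "Home & Home Office" link, variant B omits it.
z, p = two_proportion_z_test(success_a=180, n_a=250,
                             success_b=205, n_b=250)
```

If the p-value falls below a pre-chosen significance level (0.05, say), the difference between the two architectures is unlikely to be chance, and the more successful variant can be adopted.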
The Dell homepage reprinted in the book is dated 2006. I note with interest that in the intervening four years, Dell has given its site a complete revamp – consistent with the textbook authors’ emphasis on ongoing improvement. In 2006, the topical menu had pride of place on Dell’s site, while audience was relegated to a small-type menu of hyperlinks. By contrast, in 2010, the audience menu is splashed prominently across the top of the site; mousing over one of the labels (“For Home,” “For Small and Medium Business,” and so on) drops down a topic menu pertaining to the audience. This integration combines the advantages of both menus in a seamless way that is intuitive to Net-savvy audiences, though empirical testing could be useful to determine whether this two-layer sorting of content might be confusing to Internet novices.
Dell’s changes to its labels also merit attention. In 2006, the audience menu was headed “Solutions for:”, and its items were “Home & Home Office,” “Small Business,” “Medium & Large Business,” and “Government, Education, & Healthcare.” In 2010, the audience menu has no heading. The mouseover targets are labeled “For Home,” “For Small & Medium Business,” “For Public Sector,” and “For Large Enterprise.” Three changes are interesting here. First is the change to the format of the list’s items. “Solutions for” rings of corporate jargon, which the text’s authors warn against (pp. 85–86); Dell’s new formulation sounds much more natural. Second, we see that “Public Sector” has replaced the unwieldy “Government, Education, & Healthcare.” The latter choice is more descriptive, but the three indicated subcategories are heterogeneous; clicking this link is likely to lead us to a narrow, deep architecture where we’ll have to further specify that we work in education, then that we work in K-12 education, and so on. “Public Sector,” by contrast, denotes the same services more transparently – the label is effectively invisible to users who don’t need it – and the drop-down menu allows much of the disambiguation to take place in one click. Finally, we see that medium businesses have been reclassified with small businesses, while the term “large business” has been replaced with “large enterprise.” This could be an organizational change, but it’s more likely to be a labeling change; Dell has likely determined that its services for large businesses differ substantially from the needs of medium-sized businesses, and has relabeled its categories to guide medium-sized business owners to the content most likely to be relevant to their need. The choice of the word “enterprise” in particular is clearly a labeling decision. “Enterprise” is an uncommon word whose connotation of scale may further help medium-sized business owners decide which menu category to pursue.
All three of these changes have implications for the site’s overall architecture which could be user-tested by an experiment something like the one I described earlier.
My point in this entry is that IA guidelines are often useful, as when they suggest that we avoid jargon, but that site analytics and empirical testing are the ultimate tests of whether a site serves its users as envisioned. A building can be beautiful but uncomfortable, and a textbook-compliant website might still fail its users. In my first entry in this blog I discussed my view that interaction design is the parent discipline of information architecture. If so, then empiricism is the means by which we can determine whether IA is a properly dutiful child!