Monday, September 27, 2010

Week 6.2: Intuitive architecture

In this post I’d like to briefly engage with Jesse James Garrett’s ideas about empiricism in his 2002 essay “ia/recon.”

Garrett’s attitude towards IA research, especially in Parts 3, 4, and 6 of his essay, can fairly be characterized as dismissive. Garrett emphasizes that information architecture is an art, and that its practice is heavily informed by intuition and “hunches” rather than user research of the kind described by Morville and Rosenfeld. He bemoans the necessity for information architects to justify their decisions to their superiors by means of usability studies, which he believes inhibits the discretion of the architect. Research, he says, should not be used “to tell us what to think.”

At first glance, Garrett’s attitude seemed very much at odds with the user-centric approach to IA that Morville and Rosenfeld advocate and that I’ve invoked repeatedly in this blog. He also seemed to be waving off empiricism in general, which is a special interest of mine in IA and in information science more broadly. On closer examination, however, this reading of Garrett misunderstands the core of his argument. He does not disown empiricism. In fact, he says that research “can be extremely useful in cases where user goals can be clearly identified and measured,” giving e-commerce and information retrieval as two examples. But almost all of the examples of IA we’ve studied in this course fall into those categories! No wonder the thrust of Morville and Rosenfeld’s work, with its special focus on e-business, is so different from that of Garrett, who seems to make the “user experience” – a subjective and difficult-to-measure criterion – central to his work.

Based especially on Garrett’s article in the DMI Review, I’m convinced Garrett is just as user-centric as I am. The difference is that I’m interested in metrics – did the user accomplish what he came to the site for? how long did it take? what menus were useful and useless? – while Garrett plumbs the strange and equally interesting depths of how to create emotional and sensory reactions to a website. Science is no more useful in Garrett’s pursuit than it is to an artist seeking to find a formula for how to paint pathos. His skepticism about empiricism is thus unsurprising and appropriate, and our points of view are compatible.

What’s perhaps most surprising about Garrett’s ideas is the notion that creating an emotional experience is the purview of an information architect, as opposed to a graphic designer or another engineer closer to the end user. I’ll keep an eye out for the aspects of IA that fall outside the proper domain of empiricism as the course continues!

Week 6.1: IA drafting

This week’s reading: Morville & Rosenfeld, Chapters 12 and 13; jjg.net; semanticstudios.com

I’ll be dividing this week’s response into two blog posts. This first post represents my first stab at practicing the design skills described by Morville and Rosenfeld; I’ve drawn a partial blueprint for the website Wiktionary.org. Please click to view the large version.


This blueprint is necessarily far from comprehensive, but it effectively shows how a user can navigate Wiktionary from either of two access points: the front page or the page for an entry. I’ve used the same visual vocabulary as the examples in the textbook: gray boxes represent pages, white boxes show components of a page, stacked boxes show collections of pages, and dashed rectangles show groups of interrelated pages. It’s imperfect, but in the process of making it I had to think about what components of a page actually need to be represented in a high-level blueprint, and in what cases it might be acceptable to show representative examples rather than every content area and hyperlink.

Monday, September 20, 2010

Week 5: My first car

This week’s reading: Morville and Rosenfeld, Chapters 10 and 11

This week’s reading was more of a challenge than that of preceding weeks. Morville and Rosenfeld’s writing is perfectly clear; the problem is that I lack a frame of reference in which to instantiate the general ideas they discuss. When our authors talked about issues of organization and navigation, I understood using my experience as a website user, but I’ve never architected a large website or intranet, nor participated in an architecture project run by someone else. As we move from product to process, therefore, I feel like an Amish teenager reading a Chevy owner’s manual: I just don’t have a way to situate the information and transmute it into knowledge.

The best way for the Amish teenager to address his confusion is to get inside a Chevy, and the best way for me to address mine would be to design a website, perhaps collaboratively. A first-person account of the decisions that went into building a hypothetical website might make a compelling term paper. The objective would be to produce a design that could be confidently handed off to the coders, along with some exploration of how I would review and administer the coders’ work. To elaborate on the research phase, I could identify relevant sites to benchmark my own project against, and I could even conduct some simulated user testing with the help of volunteers. The strategy phase would include some interesting visuals as I constructed metaphors, wireframes, and other tools for conceptualizing and rendering the information I wanted to present. This idea doesn’t match the usual notion of a “research paper,” but in a broader sense, a hands-on project like this would foster the clear understanding that is the purpose of research. I’ll be emailing Dr. Simon about the acceptability of this topic.

Consistent with this blog’s focus on empirical user-centrism, the most interesting part of the reading to me was the section on users and how they can be deployed in the research and strategy phases of site building. I was unfamiliar with the technique of card sorting, a thought-provoking means of recruiting users to help build taxonomies. I imagine that an information architect up to his or her neck in data, content, and bureaucracy might easily lose perspective on what categories belong together – or might simply have a different perspective than the future users of the project, who may be professionals in an industry that the architect is only now exploring. Card sorting could remedy that lack of perspective. However, I tend to agree with the authors that such studies should not be taken too far. Elaborate “affinity models” based on a few data points hide significant statistical uncertainty; what is more, users do not always know what they want, and their responses may exhibit systematic bias of various kinds. In the authors’ Weather.com example, I would expect that card sorters would be moderately unlikely to group “gardening” with “stargazing” (p. 281), even though both tasks fall into the category of “reasons people care about the weather.” In the final wireframe in Figure 11-10 (p. 285), a number of disparate ideas are unified under the “How Will the Weather Affect Your...?” banner. It’s a solid model, but it seems likely to me that this was a top-down decision springing from the architects’ creativity and content research, not a bottom-up decision based on card groupings.
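To make the worry about statistical uncertainty concrete, card-sort data are often summarized as a co-occurrence count: for each pair of cards, how many participants placed them in the same pile. Here is a minimal sketch in Python; the participants and card labels are invented for illustration, not taken from the Weather.com study:

```python
from itertools import combinations
from collections import Counter

# Hypothetical card-sort results: each participant sorts weather-related
# tasks into whatever piles make sense to him or her.
sorts = [
    [{"gardening", "lawn care"}, {"stargazing", "flight delays"}],
    [{"gardening", "stargazing"}, {"lawn care"}, {"flight delays"}],
    [{"gardening", "lawn care", "stargazing"}, {"flight delays"}],
]

# Count how often each pair of cards lands in the same pile.
pair_counts = Counter()
for piles in sorts:
    for pile in piles:
        for pair in combinations(sorted(pile), 2):
            pair_counts[pair] += 1

# Report each pair's agreement as a fraction of all participants.
for pair, n in pair_counts.most_common():
    print(f"{pair}: {n}/{len(sorts)} participants")
```

With only three participants, even strong agreement on a pair is thin evidence – which is exactly why elaborate affinity models built on a few data points deserve skepticism.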

Also gratifying was the reading’s discussion of before-and-after benchmarking, which bears some resemblance to my Week 3 idea of setting up two different web designs and testing user efficiency separately in each. This method is deeply empirical, making use of the scientific method to generate information that is independent of the intuition of either users or architects. The importance of intuition and creativity should not be underestimated, but in the end, we would like to have a way to know whether we did our jobs right!

Sunday, September 12, 2010

Week 4: Ups, downs, and in-betweens

This week's reading: Morville and Rosenfeld, Chapters 7, 8, and 9

While last week’s reading discussed how the interface should be built and named, this week we focused on how the user will interact with the interface. How will he find his way from one organizational chunk of our site to another? Will it be through browsing or through searching? How will we enable him to browse and search effectively without requiring him to become an expert on our website or on information science in general? There’s a lot of meat here, and the answer to each question is almost always “it depends,” but there’s a pattern to best practices that seems to illuminate a fundamental issue in designing IA for the user.

To illustrate this pattern, let me take examples from the chapter on navigation and the chapter on controlled vocabularies; I will draw parallels between them. In designing a classic thesaurus, the designer first identifies a preferred term for a topic (Cougar) and explains the content of the topic using scope notes (Cougar SN Felis concolor, a large predatory cat native to the Western Hemisphere). The thesaurus maps the term’s variants to the preferred term in the manner of an authority file (Mountain Lion Use Cougar); it also links the preferred term to the next broader term in its hierarchy (Cougar BT Cat), as well as to any narrower terms (Cougar NT Florida Panther). The thesaurus may also note terms that are qualitatively associated with the main entry, as determined by the compiler or by software (Cougar RT Jaguar). In sum, the thesaurus traverses several kinds of relationships (equivalence, hierarchy, and association) in trying to make sense of a search input or otherwise guiding the user to his endpoint.
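The relationships above are easy to model in code. The following Python sketch is an illustrative toy, not any real thesaurus format: the lookup first resolves a variant term through the equivalence mapping, then returns the preferred term’s entry with its BT, NT, and RT links.

```python
# Toy thesaurus: one entry, using the relationship codes from the text
# (SN = scope note, BT/NT = hierarchy, RT = association).
thesaurus = {
    "Cougar": {
        "SN": "Felis concolor, a large predatory cat native to the Western Hemisphere",
        "BT": ["Cat"],
        "NT": ["Florida Panther"],
        "RT": ["Jaguar"],
    },
}

# Variant terms map to preferred terms, like an authority file (USE).
use_for = {"Mountain Lion": "Cougar"}

def lookup(term):
    """Resolve a variant to its preferred term, then fetch its entry."""
    preferred = use_for.get(term, term)
    return preferred, thesaurus.get(preferred)

preferred, entry = lookup("Mountain Lion")
print(preferred)     # Cougar
print(entry["BT"])   # ['Cat']
```

A search system built on such a structure can answer “Mountain Lion” queries with the Cougar entry, and can offer the BT, NT, and RT links as suggestions for broadening, narrowing, or redirecting the search.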

Compare this straightforward if technical design of a thesaurus to the design of a website. When the user sets out to navigate a large website, the website is usually well served to provide him with global, local, and contextual navigation tools. The global tools move the user up in the site hierarchy, not unlike the “broader term” relationship of a thesaurus. Global navigation differs from BT in that it lets the user jump quickly to a different area of the site entirely, while BT by design connects the user only to the next broader classification of the term he is already viewing; still, both tools share the purpose of giving the user a way to see something more general.

Local navigation tools move the user within the subsite or site section he is already viewing; for example, within Amazon.com’s section on video games, a local menu allows the user to view games for the Nintendo Wii or Xbox 360. If the user clicks on Wii, another local menu prompts him to select a genre of video game. The process is analogous to the classic thesaurus’s “narrower term” relationship: both the local navigation system and NT serve to help the user find something more specific.

Contextual navigation tools most often take the form of hypertext links in the content. Users expect that the text of such links describes the content of the linked webpage. Designers insert hyperlinks when they need to escape website hierarchy and move laterally to pages that are related to the current page’s content. This type of navigation is analogous to the classic thesaurus’s “related term,” in that contextual navigation and RT each serve as a catch-all; they take the user to a place that is related nonhierarchically.

(It’s worth noting as an aside that navigation systems do not generally have a precise equivalent to the thesaurus’s equivalence relationship, because organization is unique and language is not. Some advanced navigation designs do admit parallels to equivalence relationships, but this discussion is tangential to the main point of this entry.)

Clearly, website navigation and thesaurus design have something in common. The commonality lies in how information can be related – broader, narrower, associated – and the likelihood that the user will want to move along one or more of these lines from one idea to another. This mostly-hierarchical relationship makes the organization of a website or a thesaurus transparent, allowing the designer to apply labeling skills to express its contents in a way that the user can readily navigate. I am left, however, with the question of whether hierarchy is uniquely suited to these tasks, or whether information can be organized in a way that is nonhierarchical and yet coherent. Morville and Rosenfeld gesture in this direction with their discussion of tag clouds, which seem to be a rich resource that semantic web software could use to guess what terms are related. Though creating clear and easily navigable hierarchies is obviously central to information architecture, I’ll remain alert to situational alternatives to the hierarchy that may present themselves!

Wednesday, September 8, 2010

Week 3.5: The information architecture of lib.usf.edu

The previous post is my "official" entry for the week, but I thought I'd cross-post the following from a discussion board entry of mine in LIS 6260, which I'm taking concurrently with this course. The question concerned how libraries can help users get the most out of electronic resources. I applied this week's readings to the question:

IA exhorts us to think about how we present information. For example, on lib.usf.edu, we've made a number of good layout decisions. It's easy to find crucial information like hours and contact information, and we have a mostly well-organized set of hyperlinks in the body. But we've also made some questionable decisions. Why are links to Articles and E-Journals, which are information sources, in the same menu bar with links to ILL and Help, which are services? Why does the link labeled Books take us to the library catalog, which manifestly contains more than just books? Why do we redundantly link to the same pages under the heading Research Tools that we do in the menu bar, and why are the pages labeled differently in one place than in the other? These inconsistencies make it harder for users to build a mental model of the site. Other parts of the page seem to be designed for librarians rather than our colleagues in other fields whom we serve: What is the difference between a database and an e-journal? What is PRONTO? What is RefWorks? (For that matter, what is ILL?) Where will I go if I click on the Karst Information Portal? You won't find the answers to these questions without more clicking.

Anyway, my point is that our website's front page is not bad, but it could be better. The site doesn't do much to point a novice user in the right direction. Its flaws are invisible to veterans like us, but there's a lot an experienced information architect could do to streamline and clarify it. We should *not* cop out by saying that instructors just don't give us the opportunity to teach students how to use the library. If our users can't figure out how to use our interface, the answer is not to ask our users to be more perfect, but to design our interface to be more humane.

Monday, September 6, 2010

Week 3: It looks nice, but does it work?

This week’s reading: Morville and Rosenfeld, Chapters 5 and 6

Our reading this week focused on two interconnected topics: how to conceptualize and group the organizational items of a system, and how to choose words or labels to represent them. It was an exciting pair of chapters, because the effectiveness of an information system like a website submits to empirical testing. That is, broadly speaking, it is actually possible to decide which of two possible schemes is better, in the sense of helping more of the users more of the time. In this comment I’ll focus on how we might apply empiricism to the ideas described in these chapters.

The text outlines organizational schemes appropriate for both exact and ambiguous searches. To use the language of my last entry, “fully realized” questions can be answered with straightforward schemes like alphabetical or chronological arrangement of data, but “fuzzy” questions – or items of information that fall into “fuzzy” categories – require more creativity in their organization. Most of the textbook’s examples of good interfaces give the user several access points in these ambiguous cases. Dell’s website (p. 65), for example, allows its customers to browse by topic (notebooks, desktops, support) or by audience (home, small business, government). A multiplicity of access points is likely to help some users and confuse others. Site analytics might help Dell empirically determine whether the former outnumber the latter.

A relevant metric of effectiveness might be the number of visitors who click on Dell’s topic links versus its audience links. A priori, I would expect that few visitors click “Home & Home Office” from the audience menu; these users are likely to have a more clearly defined need, and thus are more likely to use the topic links. If analytics bear out this intuition, Dell should consider eliminating this hyperlink from its audience menu. Conflicting with this impulse, however, is the principle of comprehensiveness (p. 100): if we have special links for business and government audiences, shouldn’t we have a special link for home audiences? Retaining the “Home & Home Office” link might improve the menu’s consistency and thus help the user build a mental model of the Dell website, even if the link is rarely used.

The challenge, then, is to design an empirical test to settle the question of whether our little-used link contributes more than it detracts. A first approach might be to recruit a panel of diverse users, each with a genuine need. Through an automated survey, the website could prompt users to articulate their need. Half of these users could be directed to Dell’s usual site, while the other half are directed to a version of the site with the questionable link omitted. Their progress through the respective designs could be tracked and their success quantified through an exit survey. If one version of the site connects users with content with significantly higher consistency, and no confounding factors such as internal politics intervene, Dell should adopt the more successful architecture. This approach could be applied to micro aspects of design, such as whether to include a particular hyperlink, or to macro aspects, such as an entire top-down site redesign.
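To suggest what “significantly higher consistency” could mean in practice, here is a minimal two-proportion z-test in Python. All of the counts are invented for illustration; a real study would also have to consider sample size, survey wording, and panel composition:

```python
import math

# Hypothetical exit-survey results: a "success" is a user who reports
# finding what he came for. Version A keeps the questionable link;
# version B omits it.
a_success, a_total = 180, 400
b_success, b_total = 210, 400

p_a = a_success / a_total
p_b = b_success / b_total

# Pooled standard error under the null hypothesis of equal success rates.
p_pool = (a_success + b_success) / (a_total + b_total)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / a_total + 1 / b_total))
z = (p_b - p_a) / se

print(f"success rates: A={p_a:.1%}, B={p_b:.1%}, z={z:.2f}")
# With these counts z is about 2.1, above the 1.96 threshold for
# significance at the 5% level (two-tailed).
```

If the observed difference clears the significance threshold and survives scrutiny for confounds, that is the kind of evidence that could settle the question of the link’s fate.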

The Dell homepage reprinted in the book is dated 2006. I note with interest that in the intervening four years, Dell has given its site a complete revamp – consistent with the textbook authors’ emphasis on ongoing improvement. In 2006, the topical menu had pride of place on Dell’s site, while audience was relegated to a small-type menu of hyperlinks. By contrast, in 2010, the audience menu is splashed prominently across the top of the site; mousing over one of the labels (“For Home,” “For Small and Medium Business,” and so on) drops down a topic menu pertaining to the audience. This integration combines the advantages of both menus in a seamless way that is intuitive to Net-savvy audiences, though empirical testing could be useful to determine whether this two-layer sorting of content might confuse Internet novices.

Dell’s changes to its labels also merit attention. In 2006, the audience menu was headed “Solutions for:”, and its items were “Home & Home Office,” “Small Business,” “Medium & Large Business,” and “Government, Education, & Healthcare.” In 2010, the audience menu has no heading. The mouseover points are labeled “For Home,” “For Small & Medium Business,” “For Public Sector,” and “For Large Enterprise.” Three changes are interesting here. First is the change to the format of the list’s items. “Solutions for” rings of corporate jargon, which the text’s authors warn against (p. 85-86); Dell’s new formulation sounds much more natural. Second, we see that “Public Sector” has replaced the unwieldy “Government, Education, & Healthcare.” The latter choice is more descriptive, but the three indicated subcategories are heterogeneous; clicking this link is likely to lead us to a narrow, deep architecture where we’ll have to further specify that we work in education, then that we work in K-12 education, and so on. “Public Sector,” by contrast, denotes the same services more transparently – the label is effectively invisible to users who don’t need it – and the drop-down menu allows much of the disambiguation to take place in one click. Finally, we see that medium businesses have been reclassed with small businesses, while the term “large business” has been replaced with “large enterprise.” This could be an organizational change, but it’s more likely a labeling change; Dell has probably determined that its services for large businesses are distinct from the needs of medium-sized businesses, and has relabeled its categories to guide medium-sized business owners to the content most likely to be relevant to their needs. The choice of the word “enterprise” in particular is clearly a labeling decision. “Enterprise” is an uncommon word whose connotation of scale may further help medium-sized business owners decide which menu category to pursue.
All three of these changes have implications for the site’s overall architecture, which could be user-tested by an experiment something like the one I described earlier.

My point in this entry is that IA guidelines are often useful, as when they suggest that we avoid jargon, but that site analytics and empirical testing are the ultimate tests of whether a site serves its users as envisioned. A building can be beautiful but uncomfortable, and a textbook-compliant website might still fail its users. In my first entry in this blog I discussed my view that interaction design is the parent discipline of information architecture. If so, then empiricism is the means by which we can determine whether IA is a properly dutiful child!