Monday, November 22, 2010

Week 14: Extraordinary Claims...

This week’s reading: Borgmann, Part 3

I had planned to use this final post to address Kellner’s criticism of Borgmann, but I can’t bring myself to. I agree with Kellner on many points, but – now that I’ve read Borgmann’s final section – it feels much more important to engage with Borgmann directly. Briefly, I think Kellner errs in finding Borgmann’s argument theological, but Kellner is correct insofar as he critiques Borgmann for arguing from bias rather than fact.

Borgmann errs frequently and badly in describing the vistas of virtual reality. Where Borgmann is factually wrong – as in his critique of computer photorealism on page 198 – his attitudes reflect a pervasive and unjustified pessimism about the power of technology, and where he is factually correct he draws judgments about technological information that are not grounded in those facts. His treatment of virtual information strikes a strange balance between encyclopedic surface knowledge and deeper ignorance. Part 3 of “Holding On to Reality” reads like one of those nuanced and valuable discussions of the social context and aesthetic forms of late-twentieth-century popular music that conclude with a cantankerous dismissal of kids these days and their rock and roll. Consider, for example, Borgmann’s striking statement that the virtual ambiguity of MUDs “renders virtual reality trivial, and, when pressed for its promise of engagement, evaporates” (p. 190). This implicitly moral judgment is preceded by a careful objective examination of the information environment of MUDs, but not by any grounds for condemnation, and the same is true of Borgmann’s subsequent writing off of online relationships as mere “virtual vacuity” (p. 190). The same argumentative structure echoes throughout Part 3. Borgmann relies on intuition rather than evidence to make his broadest points, and while such bald assertions may be effective when preaching to the choir, they are less so when trying to convince an undecided audience such as myself. His proclamations about the loss of intelligence, thing, and context in the virtual environment, even if true, repeatedly made me wonder aloud, “So what?”

Borgmann does not get around to answering that big question – “so what?” – until his conclusion, where he starkly warns, “The preternaturally bright and controllable quality of cyberspace makes real things look poor and recalcitrant by comparison” (p. 216). This is precisely my own worry about information technology – the reason I have felt sympathetic to Borgmann throughout the reading. But his final bases for this assertion, like the dreariness of science fiction novels and the basic seducibility of human beings, are themselves not obvious and are certainly not grounded in Borgmann’s study. Indeed, in some areas, from art to war, virtual information sometimes seems to have given reality a heft that previous generations did not always have the chance to acknowledge. Reading the final pages of the book gave me the same sinking feeling one experiences when one’s favorite sports team commits basic errors of play, or when a normally eloquent advocate of one’s own political view stumbles badly in an important debate. If the central question of “Holding On to Reality” is whether it is worse to experience information virtually than to experience it through nature or culture, and the book represents the best argument that can be made in favor of the answer “Yes,” I can hardly blame society if it answers “No.”

With all that said, Borgmann succeeds in making me want to engage more with reality, even if he fails in convincing me to promote this engagement as a cultural rule. And some of his closing ideas, like his worry that the “sheer disorganized and imposing mass” (p. 230) of hyperinformation will guarantee its future loss, have profound resonance for information architects. In a world where people’s preferred way of engaging with information seems likely to remain irremediably virtual, information architects have the momentous task of preserving that imposing mass of virtual information in a usable form. If, because of the nature of the information or its environment, we aren’t able to situate that information in a reality beyond the virtual, I don’t think that makes us contemptible; our mission is merely to organize, present, and expedite. But if we are able – if we can encourage our users to relate deeply to the information they’re gathering – then we will certainly have done our society and our users a subtle service.

Monday, November 15, 2010

Week 13: Information Condensation, Or, It's Only a Model!

This week’s reading: Borgmann, Part 2

“Holding On to Reality” is simultaneously the most general and the most personal treatment of information science I’ve yet read as part of the library science curriculum, and reading it is exhilarating. I’m not ready yet to address Ess’s analysis of Borgmann cited in the lecture notes, since Ess focuses primarily on Part 3 of Borgmann. Instead, I’ll return this week to the needs of users – a focal component of IA – and discuss what user needs have to do with Borgmann’s treatment of information.

Tying physical architecture to his discussion of the distinction between signs and things, Borgmann opines, “No design can specify its realization fully. To convey exactly as much information as the thing realized, a design would have to exhibit just as many features as the thing. But then it would be a duplicate of . . . the thing” (p. 113). That is, a fully realized (or fully imagined) thing must necessarily lose fidelity when it is condensed into a sign. Borgmann’s insight here is the very principle that makes indexers, abstracters, and information architects necessary. Many library users, and information consumers in general, need to know what they’re accessing before they access it. But to know exactly what a journal article says without reading the article is impossible by Borgmann’s principle. Users, then, need a general idea of what an article says – a low-fidelity version of the article. The job of an abstracter is to reduce a cataloged item (a thing) to an abstract of a more digestible length (a sign) with minimal loss of fidelity. A good abstracter makes it possible for users to make reasonable guesses about where a sign points without having to walk down the indicated road and see for themselves.

Labeling components of an information architecture is precisely analogous to abstracting media; it requires the same faculty of condensation, and has much the same end in mind in terms of how the user is served. Yet one difference between information architecture and physical architecture, which is Borgmann’s subject in Chapter 10, is that an edifice, once built, is stripped of the cultural signs used in its creation. The low-fidelity artifact of the blueprint outlives its usefulness, and the building’s users rarely need a blueprint to navigate the building. By contrast, abstracts and labels within an information system are useful precisely because they are condensed, and are indispensable for users long after the system goes live. There seems to be a fundamental disanalogy here: we can apprehend a building with our senses, and hence we navigate a building with the aid of natural signs (like a luggage carousel in an airport or a blackboard in a school) as well as cultural ones, while an information architecture is invisible to the senses and we can navigate it only through cultural signs and the guidance of the architect. Exploring a building is inherently an interactive experience; exploring an information architecture is not.

I think that this idea – the idea that an information architect must also be an information tour guide, providing signs that are naturally deficient in an online environment – is a key to overcoming user frustration with website interfaces and layouts. Since we cannot be physically present to help our users with their needs, our indexing and labeling functions are crucial to this aim. Just as John Harrison’s robust mechanical clock effectively condensed the vast grid of the world map into a single longitude reading (p. 78), our navigation tools need to help the user locate herself clearly within the architecture; and just as that map’s rigorous grid makes the sign revisable to match the thing, we should be ready to relabel our websites in a way that better matches the thing, or even revise the thing to match user expectations. This last possibility – the ability of the information architect to revise online reality for the convenience of the user – is probably the most exciting aspect of information architecture, and it might well be the subject of Borgmann’s Part 3, subtitled “Information As Reality.”

Monday, November 8, 2010

Week 12: The IA That Quashed a War

This week’s reading: Burnett & Marshall, Chapters 8 and 9 and Conclusion; Borgmann, Introduction and Part 1

Borgmann’s reading this week promises to grow into a rigorous information-science foundation in which to root information architecture, and next week I’ll begin examining the conjunction of his work with IA. For this week, though, I can’t resist the provocations of Burnett and Marshall as they describe the distinctive characteristics of the online music industry. Since they wrote too early to see the evolution of the iTunes music store, I’ll apply some of their ideas to iTunes and see if any characteristics of online music consumers can be extrapolated.

Burnett and Marshall sum up their critique of the music industry’s response to Napster in three points (p. 193). The first and second are closely related: the industry’s underestimation of the Internet’s impact on its business, and the studios’ choice to see MP3 and Napster as threats rather than opportunities. Both points exemplify a mistaken and unprofitable philosophy of change: “How can we continue to do business as usual as everything changes around us?” Studios, like other businesses (and like libraries), ought instead to have asked, “How can we stand at the vanguard of this change and be ready when mainstream consumers demand digital services?”

The studios never did develop their own business model to deal with MP3s; other entrepreneurs did it for them. KaZaA, YouTube, and the Pirate Bay have each tolerated or encouraged widespread piracy in their attempts to develop commercial models that would flout the studios entirely. Pandora, Rhapsody, iTunes, and others have taken a different tack, negotiating with studios for the right to play or out-license their songs legally. Stores like iTunes have effectively stepped into the digital niche of the physical retailer, doing the work of salesmanship while artists and studios produce new work to license.

Why iTunes took so long to appear on the music scene is a question that would consume a much longer paper than this weekly response. In large part, however, the answer must be that studios hoped that the Internet and its challenges would go away and leave them to their profits of the 1990s. Even today, studios’ willingness to license music online is only grudging, and came about from desperation to steal market share back from pirate sites rather than from their own innovative proclivities.

But come about it did: piracy is almost as easy today as it was in the days of Napster and KaZaA, yet iTunes has crafted a successful business out of selling songs for $0.69 to $1.29. iTunes does well even though its songs are licensed and not bought. Whether studios and artists do proportionally well is, predictably, a matter of some dispute, but both Metallica and its label Elektra clearly make more money when I buy “Enter Sandman” from iTunes than when I download it from the Pirate Bay.

Why do music fans pay money for licensed music from iTunes rather than downloading music free of license from the Pirate Bay? I suspect that part of the answer – but only a small part – is that fans want to support the artists they listen to. Another part – but again, only a small part – is that fans fear retaliation for piracy, though anti-piracy lawsuits against consumers are still uncommon. The most important part of the answer, by far, must be that iTunes is a nice piece of software. The architecture of iTunes and its store encourages easy navigation, search, and downloading in a way that pirate sites do not. iTunes’ built-in ability to organize and play music likewise advantages it over the Pirate Bay, which operates strictly on a bring-your-own-software basis. In the end, the copyright wars between studios and fans have calmed not because of legal settlements – still ongoing – but because of great information architecture that makes fans able and willing to pay for music.

Perhaps the conclusion of this week’s reading, then, is that when an Apple information architect is asked what he does for a living, he might answer “I end copyright wars!”

Tuesday, November 2, 2010

Week 11: Election Day

This week’s reading: Burnett & Marshall, Chapters 5-7

In contrast to the polar bear book, a reading for which I often felt I lacked important context, Web Theory puts me in mind of a thousand different responses: enthusiastic agreements, vigorous ripostes, and occasional eyebrow-raising realizations about how much the Internet has changed since 2003. This week I’ll take advantage of today's date – Election Day – to discuss the interface between politics and the issues of Chapter 7.

In their treatment of copyright issues on the Internet, Burnett and Marshall successfully make the crucial distinction between creators and copyright holders. Plaintiffs in key intellectual property suits are typically publishers and other firms, not the individuals who created the property at issue. Especially egregious examples of such firms have abounded in the news recently. One is Righthaven LLC, a group that has systematically purchased the copyright to stories in the Las Vegas Review-Journal in order to sue weblogs that have quoted from these stories. Another is the U.S. Copyright Group, which hires itself out to movie producers to sue thousands of unnamed defendants accused of illegally downloading movies via BitTorrent. Watchdog groups such as the Electronic Frontier Foundation, along with news outlets such as Ars Technica, have found such suits socially and legally problematic; for current purposes, the most salient point is that the original creators of the material at issue are nowhere to be found in the legal proceedings.

When politicians make copyright law, then, the most vigorous lobbying rarely comes from authors or creators, nor from the disorganized masses who benefit from copyright liberalization; rather, the loudest voices are those of copyright holders, whose financial interest in their material gives them ample incentive to seek stringency in copyright law. Congress’s most recent extension of the copyright term – to the life of the author plus seventy years – was passed just as Mickey Mouse approached the public domain, and the Walt Disney Company lobbied for the change with corresponding intensity. Yet a broad-based consensus exists across the arts and sciences that a robust public domain from which to draw information and inspiration, as well as an expansive view of “fair use” of copyrighted material, contributes crucially to the “progress of Science and useful Arts” prioritized by the Copyright Clause of the Constitution. And so the public interest is often at loggerheads with the interests of intellectual property owners.

The Internet enters this conflict partly with its promulgation of Web 2.0 tools like YouTube and Flickr, which permit everyday citizens with no acquaintance with IP law to violate copyright on a daily basis. A photo of a friend posing in front of an iPod ad, if posted to a public Flickr account, might conceivably draw a DMCA takedown notice from Apple. A cottage industry of YouTube videos that redub a few minutes of popular cartoon shows for satirical purposes has frequently received similar notices. The individuals committing the alleged infringements may well be engaging in fair use, but they lack the resources to defend themselves against the much better funded corporations that own the intellectual property that Web users are adapting. Researchers, too, often skirt IP law when they seek to develop new technologies; DMCA lawsuits have been threatened or actually prosecuted in cases concerning security research, DVD-ripping software, and even the manufacture of universal garage door openers.

The Internet, of course, also makes actual piracy – with no pretensions to fair use – as easy as downloading the right file from The Pirate Bay. But while IP holders invest a lot of money in chasing down pirates, too many artists, scientists, and other genuine creators get caught up in the dragnet. There’s very little money protecting these individuals in comparison with what IP holders spend on lobbying and lawyering. As long as intellectual property issues continue to exist under the radar of the general public, and as long as no large organizations find it’s in their interest to step up for content creators, neither political party will find the will to challenge the status quo as copyright grows more draconian. Victories for copyright liberalizers will come in the courts, not the legislatures – and for those interested in these issues, Election Day doesn’t represent a meaningful choice between alternatives.

Tuesday, October 26, 2010

Week 10: Civilizational architecture

This week’s reading: Burnett & Marshall, Chapters 2-4

This week’s reading discussed some of the identity and civilizational issues surrounding the Web. Though the discussion bore only tangential relevance to information architecture, IA is an important cog in the massive machine of the Internet, and since we architects are members of this “cybernetic” system, it behooves us to understand how our society uses this machine – and how the machine is changing society.

The great strength of Web Theory thus far is its ability to recognize and examine facets of the Internet that are so obvious to its users that we’ve long since stopped noticing them. One can’t think critically about a social force one takes for granted. I was struck especially by the discussion of the “network society” – a succinct and precise description of a system where “geographical connections that are no longer grounded in physical communities but are connected through the flows of information weaken the patterns of the formerly spatially constructed communities and societies” (p. 41). I grew up in Florida, but I also grew up on the Internet – and you can see which stomping ground shaped my social life more when you know that my best friends live in New York, San Francisco, Charlotte, and Edmonton, not in Fort Myers.

The authors of Web Theory, moreover, are right to predict that the many-to-many communication facilitated by the Internet means that I have “weak tie” social links to a great diversity of acquaintances whom I might never know in real life. My Web acquaintances span races and classes; they include homosexuals, bisexuals, and transgendered people; Muslims, Mormons, and Wiccans; world citizens from Austria to Australia; vegetarians; furries; and at least one person who knows vastly more than I do about any topic you can name. Correspondingly, I don’t feel the exclusive loyalty to my home community, alma mater, or local sports teams that my parents did (though I’ll cop to being a St. Petersburg Times fanboy). I’ve largely replaced identification based on where I live or where I grew up with identification based on my interests and identity.

And as for identity, I found Burnett and Marshall’s treatment of negative and positive effects of the Internet on the lives of its users to be amusing and full of truths. The “opposing” viewpoints they presented reminded me of nothing so much as the parable of the blind men and the elephant from a previous reading. It’s quite true, as Kraut in particular suggests, that some people use the Internet in a way that interferes with local social circles – and also true, as he speculates, that this use can cause feelings of alienation and anonymity. But it’s also true, as Pew found, that the Internet can strengthen our connections with friends and family. If Nie and Erbring find that Internet use results in “spending less time with or on the phone with family and friends” (p. 66), this could be because, as Pew says, Net users “have used e-mail to enrich their important relationships” (p. 67).

It’s tempting for Net businesses to seek ways to capitalize on the ability of the new generation of users to form communities that exist outside of physical geography. Indeed, many have done so with varying success; Facebook’s valuation as of July appeared to stand somewhere between $12 billion and $24 billion. Certainly information architects trying to make their case to skeptical executives should be able, in some contexts, to argue in terms of Internet users’ propensity for constructing and broadcasting their identities using Web tools, as well as some users’ desire to be citizens of an Internet community. I don’t think there’s anything intrinsically wrong with information architects arguing in those terms, or that a company errs morally when it encourages brand loyalty and community-building among its customers. Like many immersive media, however, the Internet certainly can have an addictive and anti-social effect on those who use it uncritically, and online communities like those of World of Warcraft and 4chan play a contributory part in these cases. A solution to this real problem is outside the immediate scope of the reading, but the more we can understand about the nature of the online medium, the better equipped we will be to understand our ethical responsibilities as producers and consumers of Internet content.

Monday, October 18, 2010

Week 9: The Architect's Garden

This week’s reading: Morville & Rosenfeld, Chapters 20 and 21; Burnett & Marshall, Chapter 1

Three diverse readings this week! The Burnett & Marshall chapter seemed to pivot away from information architecture and into the role of information technology in society. This is a legitimately fascinating topic, but the first chapter read like a fifty-page master’s thesis condensed by force into fifteen pages; interesting models and taxonomies are introduced only to be immediately abandoned without real exploration. This week I won’t worry about Web Theory, but instead will indulge myself in a case study. Riffing on Morville & Rosenfeld’s Chapter 21, I’ll talk about a social problem encountered by the users of an Internet forum I help administer, and explain how we used information architecture to solve it.

The forum in question, In the Rose Garden, has about 700 members, of whom several dozen are active contributors. Users are bound by our common interest in the Japanese anime “Revolutionary Girl Utena,” whose immense literary merits – though outside the scope of this blog – have proven multifaceted enough to sustain analysis and discussion throughout the three years of the forum’s existence. Three volunteers, including myself, administer the forum; most commonly, administration involves some routine content maintenance (dealing with multiple threads on the same topic, for example) and keeping an eye out for interpersonal conflicts on the boards.

Though IRG members are brought together by Utena, the bulk of activity on the forum does not directly pertain to the anime. Sampling a few popular threads would reveal political and social discussions, sharing of other anime, airing of college angst, and conversations about shame, anger, and joy. The most frequently trafficked threads, however, are “forum games.” Forum games are threads in which posts follow a simple set of rules – one thread might ask posters to add two words to a developing story, while another is dedicated to the results of a personality quiz. These games, as played on IRG, are usually more reflexive than thoughtful, but they’re easy to join or to post to, which accounts for their disproportionate popularity.

In 2009, the proliferation of forum games grew to the point where many users on IRG perceived them as an unwelcome distraction. Because of their popularity, forum games were usually ranked highly on the chronologically-sorted thread directories, burying more serious or intimate threads in the same category. After experiencing the problem firsthand for months and receiving a few user complaints, I concluded that forum games were inconveniencing many users and stifling other threads. Banning such games, however, was not an acceptable solution; forum games are good social looseners, serve as an access point to IRG for many new users, and – most of all – make many of our users happy, even the ones who also want to be able to find and post to more serious threads.

The solution – obvious in hindsight – was a change to IRG’s organization. In consultation with the other administrators, I created a new subforum that would be devoted to forum games. The subforum was accessible from the front page of the forum. Migrating all the forum games to a single, dedicated area of the site addressed the problem in several ways, but they all boil down to usability. Site users after the change were able to easily identify what section of the site would contain the kind of thread they were looking for. Those who wanted to quickly join a forum game knew where to do that; those who wanted to have a thoughtful conversation weren’t distracted by the game-driven irrelevance of top results in other subfora. The number of clicks needed to access any given thread was constant before and after the change.

As might be expected, the investment of time needed to implement this change paid off in a big way. Forum games continued to thrive in “captivity,” while threads elsewhere enjoyed renewed popularity. I didn’t know it at the time, but I was doing information architecture: designing a website to meet the needs and expectations of its users in an efficient and organized way.

One footnote, apropos of Morville and Rosenfeld’s allusions to the unique aspects of the evolt community in Chapter 21: Many IRG users have a strong preference for either forum games or discussion, and rarely participate in the unpreferred category. From an IA perspective this strengthens the case for the change we made, but at the time the administrators worried that segregating forum games might be tantamount to segregating users. Our small community is tight-knit, unlike the communities of many large Internet forums, and we were concerned about the social impact of “marking” forum games (and, implicitly, their players) in such a visible way. Though the change certainly did not rend the social fabric, I’ve informally noticed that crossover between forum games and other threads has seemed less frequent in the ensuing year. A few game players whose activity previously spanned the forum have settled into their new subforum and rarely emerge from it. Fortunately, there are several others who still bridge the gap, and IRG has not diverged into two unconnected forums.

So much for my belief that IA is a totally new subject for me. It turns out that I am, in fact, an experienced and successful information architect!

Tuesday, October 12, 2010

Week 8: At Last, Librarianship

This week’s reading: Morville & Rosenfeld, Chapters 17 and 18

Our topics this week – first, marketing our services to skeptical managers, and second, comparing IA to business strategy – were far enough outside my experience that I’m not sure how to react to them in a way that goes beyond recapitulation. I think I can best elaborate by comparing the challenges information architects face in justifying their existence to the challenges librarians face in doing the same.

Let’s begin with the obvious. Though the specifics of their duties differ, information architects and librarians are both broadly in the business of making information accessible. Both design systems of organization and labeling to make their information systems more transparent to the user, both create and use metadata extensively to expedite searches, and both are concerned with economizing user effort. As a result of their shared central mission, both librarians and information architects sometimes face questions from decision makers who do not perceive disorganized information as a serious problem.

Google Search, in particular, has contributed to the false impression that all the information in the world is now organized and accessible. Public librarians tear out their hair when their acquisitions budget is cut because Google is free; information architects gnash their teeth when the client wants to install a Google Custom Search bar and dispense with the messy process of web architecture. Sure, Google can’t answer our reference queries or design our browsing hierarchies, but do users really need that stuff anyway? It falls to us to make the case that, yes, users do need that stuff – and lots more besides that Google can’t do.

Not all of the text’s suggestions on how information architects can make this case are equally applicable to librarians. For instance, public and academic librarians are unlikely to impress policymakers with a return-on-investment analysis, which would contain even more unknowns than a similar IA analysis and would operate outside the myopic timeframe with which their funding authorities concern themselves. But librarians can make good use of the “pain is your best friend” principle (p. 375). We can use stories and presentations to illustrate the often humorously painful consequences of replacing human expertise with search software. We can challenge the policymaker to find a particular commonly sought piece of information using Google, forcing him or her to confront the imperfections of Google directly. We can even use comparative analysis to point up the exact stages where human reference librarians add value to a search process, as well as the types of patrons (such as the young, elderly, and uneducated) who have special trouble conducting information searches by machine.

One important difference between IA’s image problems and those of librarianship concerns the future. The mood in the IA community, as on page 377 of the reading, seems to be that broader recognition of the role of information architects is inevitable. By contrast, the mood in the library community is that future technologies will pose even more stringent challenges to our necessity than current tools already have. I conclude that it might be wise for librarians to restyle their role in civic life as including social information architecture. Library websites should evolve past being electronic card catalogs and instead seek to architect a broad information system, encompassing both physical and Internet resources, that is responsive to the most common needs of its users. This goal is particularly ambitious – most websites undertake to organize a much more limited set of resources – but some libraries, including USF’s, have already begun such an undertaking. I can think of a number of ways to make such a project feasible, and perhaps I’ll study something like this as part of my term paper!

Monday, October 4, 2010

Week 7: Case Studies In Why We Need Information Architects

This week’s reading: Morville & Rosenfeld, Chapters 14-16

Our reading this week covered a number of “small” topics within information architecture. The authors’ writing was as engaging as always, but I don’t have much to add with respect to their subject treatments, so I’ll focus in this entry on the information architecture of Google Sets and TextMap.com.

I’ll begin with Google Sets. I went in expecting a very classy information architecture; Google, after all, has reams of experience designing IAs and first rose to prominence in part because of its clean, easy-to-use interface. I had mixed luck with Google Sets as a search tool – it managed to complete a list of Greek moon goddesses, but not a list of recent U.S. presidents – but the effectiveness of the search algorithms that power Google Sets is mostly outside the scope of IA. I noted, however, that Google Sets also failed to return any results given the names Sleepy, Dpoey, Bashfull, and Dock [all sic]. Basic spell-checking is part of the domain of IA, a relative of controlled vocabulary, and Google Sets ought to have been able to reconcile these misspellings.

On a related note, on those occasions when my searches returned no results, Google Sets gave some tips for more effective searching. This is good design – but I was surprised when the tips included “use the full name” and “try being consistent.” There’s no reason a search company with the resources, artificial intelligence, and processing power of Google shouldn’t be able to algorithmically guess that “Harvard” means the same thing as “Harvard University” even without a formal authority file. Given such limitations, I’d have to say that Google Sets doesn’t live up to its potential as a tool for finding related keywords for tagging or for building a controlled vocabulary.
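To be fair, basic reconciliation of this kind is cheap to prototype. Here’s a minimal sketch – my own toy illustration using Python’s standard library, not a claim about how Google Sets actually works – of how fuzzy matching against a candidate vocabulary could reconcile the dwarf misspellings above:

```python
from difflib import get_close_matches

# Toy candidate vocabulary a tool might reconcile user input against.
vocabulary = ["Sleepy", "Dopey", "Bashful", "Doc", "Sneezy", "Grumpy", "Happy"]

def reconcile(term, vocab, cutoff=0.6):
    """Map a possibly misspelled input to its closest vocabulary entry."""
    matches = get_close_matches(term, vocab, n=1, cutoff=cutoff)
    return matches[0] if matches else None

for raw in ["Sleepy", "Dpoey", "Bashfull", "Dock"]:
    print(raw, "->", reconcile(raw, vocabulary))
# Sleepy -> Sleepy, Dpoey -> Dopey, Bashfull -> Bashful, Dock -> Doc
```

Mapping “Harvard” to “Harvard University” is a slightly different problem – containment rather than misspelling – but it, too, yields to simple heuristics once a candidate list exists.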

From a navigation perspective, Google Sets is also faulty: to my surprise, I discovered that there is no way to revise one’s search from the results page – a mortal sin in a search engine! To end on a positive note, though, I enjoyed the metaphor of the Google Sets front page, which precedes each search field with a bullet point. This visual shorthand for a list effectively conveys what the user can do with this tool.

From Google Sets we turn to TextMap. TextMap’s IA is frankly baffling. Its “entity pages” are full of fascinating-looking metrics presented without explanation. Better labeling needs to be brought to bear on this site. At the very least, the user needs tooltips; the entity pages offer no explanation, for example, of what a “polarity rank” or “negative raw count” is. The former is defined in the site’s “Frequenty [sic] Asked Questions” – it involves whether the subject is regarded well or poorly – but there’s no indication of how TextMap makes that determination. Similarly, each entity page contains a “relational map” that links the central entity to related ideas, but there’s no way to know what prompts the relationships. The page for “cat,” for example, links the word to “Yusuf Islam.” I had to Google to figure out this relationship: the famous singer-songwriter Cat Stevens converted to Islam and took the name Yusuf Islam. Yet the “cat” page is clearly defined by TextMap’s terse scope note as “animal,” not “person,” so why do links pertaining to Cat Stevens appear here? The “cat” map also links to the name “Sparky,” and I still don’t know why. Each box in the relational map takes one of several shapes – rectangle, oval, or hexagon – but no key is provided.

To draw lessons from the above, it seems that what we have in TextMap is information without architecture. No organizational skeleton puts content elements in a coherent order; no labeling scheme elucidates meaning; navigation is mostly unassisted by common conventions such as hyperlinking or a side menu; and even the search function is poor, failing to deliver the user directly to the desired page even when an exact match is found. Without architecture, the structure falls to the ground. I can’t think of a purpose that I’m confident TextMap would reliably serve.

One bright spot in TextMap is its fairly conscientious vocabulary control in the form of synonym rings. Sony’s entity page, for example, contains some forty synonyms with various permutations of capitalization and punctuation, including “SONY CORP.,” “Sony LLC,” and “Sony Electronics, Inc.” Not every permutation is covered, but the range is quite broad for a home-brewed project, and there are enough variations that a searcher could readily find the page by entering even an inexact synonym.
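Such rings also needn’t be curated entirely by hand. As a rough sketch – my own illustration, assuming nothing about TextMap’s actual implementation – a normalization function can collapse many capitalization and punctuation permutations into a single ring automatically:

```python
import re

def normalize(name):
    """Collapse case, punctuation, and legal suffixes so that variant
    corporate names fall into the same synonym ring."""
    name = name.lower()
    name = re.sub(r"\b(corp|corporation|inc|llc|ltd|electronics)\b\.?", " ", name)
    name = re.sub(r"[^a-z0-9 ]", " ", name)
    return " ".join(name.split())

rings = {}
for variant in ["SONY CORP.", "Sony LLC", "Sony Electronics, Inc.", "Sony"]:
    rings.setdefault(normalize(variant), []).append(variant)

print(rings)
# {'sony': ['SONY CORP.', 'Sony LLC', 'Sony Electronics, Inc.', 'Sony']}
```

A handful of rules like these would group most of the forty variants on Sony’s page automatically, with human review reserved for the genuinely ambiguous cases.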

These two websites, then, each offer lessons in what not to do in building an information architecture. Google Sets reminds us of the importance of vocabulary control to help software make logical inferences about the user’s meaning; TextMap reminds us that data needs to be illuminated by architecture before it can properly be called information.

Monday, September 27, 2010

Week 6.2: Intuitive architecture

In this post I’d like to briefly engage with Jesse James Garrett’s ideas about empiricism in his 2002 essay “ia/recon.”

Garrett’s attitude towards IA research, especially in Parts 3, 4, and 6 of his essay, can fairly be characterized as dismissive. Garrett emphasizes that information architecture is an art, and that its practice is heavily informed by intuition and “hunches” rather than user research of the kind described by Morville and Rosenfeld. He bemoans the necessity for information architects to justify their decisions to their superiors by means of usability studies, which he believes inhibits the discretion of the architect. Research, he says, should not be used “to tell us what to think.”

At first glance, Garrett’s attitude seemed very much at odds with the user-centric approach to IA advocated by Morville and Rosenfeld, which I’ve invoked repeatedly in this blog. He also seemed to be waving off empiricism in general, which is a special interest of mine in IA and information science more broadly. On closer examination, however, this reading of Garrett misunderstands the core of his argument. He does not disown empiricism. In fact, he says that research “can be extremely useful in cases where user goals can be clearly identified and measured,” giving e-commerce and information retrieval as two examples. But almost all of the examples of IA we’ve studied in this course fall into these categories! No wonder the thrust of Morville and Rosenfeld’s work, with its special focus on e-business, is so different from that of Garrett, who seems to make the “user experience” – a subjective and difficult-to-measure criterion – central to his work.

Based especially on Garrett’s article in the DMI Review, I’m convinced Garrett is just as user-centric as I am. The difference is that I’m interested in metrics – did the user accomplish what he came to the site for? how long did it take? what menus were useful and useless? – while Garrett plumbs the strange and equally interesting depths of how to create emotional and sensory reactions to a website. Science is no more useful in Garrett’s pursuit than it is to an artist seeking a formula for painting pathos. His skepticism about empiricism is thus unsurprising and appropriate, and our points of view are compatible.

What’s perhaps most surprising about Garrett’s ideas is the notion that creating an emotional experience is the purview of an information architect, as opposed to a graphic designer or another engineer closer to the end user. I’ll keep an eye out for the aspects of IA that fall outside the proper domain of empiricism as the course continues!

Week 6.1: IA drafting

This week’s reading: Morville & Rosenfeld, Chapters 12 and 13; jjg.net; semanticstudios.com

I’ll be dividing this week’s response into two blog posts. This first post represents my first stab at practicing the design skills described by Morville and Rosenfeld; I’ve drawn a partial blueprint for the website Wiktionary.org.


This blueprint is necessarily far from comprehensive, but it effectively shows how a user can navigate Wiktionary from either of two access points: the front page or the page for an entry. I’ve used the same visual vocabulary as the examples in the textbook: gray boxes represent pages, white boxes show components of a page, stacked boxes show collections of pages, and dashed rectangles show groups of interrelated pages. It’s imperfect, but in the process of making it I had to think about what components of a page actually need to be represented in a high-level blueprint, and in what cases it might be acceptable to show representative examples rather than every content area and hyperlink.

Monday, September 20, 2010

Week 5: My first car

This week’s reading: Morville and Rosenfeld, Chapters 10 and 11

This week’s reading was more of a challenge than that of preceding weeks. Morville and Rosenfeld’s writing is perfectly clear; the problem is that I lack a frame of reference in which to instantiate the general ideas they discuss. When our authors talked about issues of organization and navigation, I understood them through my experience as a website user, but I’ve never architected a large website or intranet, nor participated in an architecture project run by someone else. As we move from product to process, therefore, I feel like an Amish teenager reading a Chevy owner’s manual: I just don’t have a way to situate the information and transmute it into knowledge.

The best way for the Amish teenager to address his confusion is to get inside a Chevy, and the best way for me to address mine would be to design a website, perhaps collaboratively. A first-person account of the decisions that went into building a hypothetical website might make a compelling term paper. The objective would be to produce a design that could be confidently handed off to the coders, along with some exploration of how I would review and administer the coders’ work. To elaborate on the research phase, I could identify relevant sites to benchmark my own project against, and I could even conduct some simulated user testing with the help of volunteers. The strategy phase would include some interesting visuals as I constructed metaphors, wireframes, and other tools for conceptualizing and rendering the information I wanted to present. This idea doesn’t match the usual notion of a “research paper,” but in a broader sense, a hands-on project like this would foster the clear understanding that is the purpose of research. I’ll be emailing Dr. Simon about the acceptability of this topic.

Consistent with this blog’s focus on empirical user-centrism, the most interesting part of the reading to me was the section on users and how they can be deployed in the research and strategy phases of site building. I was unfamiliar with the technique of card sorting, a thought-provoking means of recruiting users to help build taxonomies. I imagine that an information architect up to his or her neck in data, content, and bureaucracy might easily lose perspective on what categories belong together – or might simply have a different perspective than the future users of the project, who may be professionals in an industry that the architect is only now exploring. Card sorting could remedy that lack of perspective. However, I tend to agree with the authors that such studies should not be taken too far. Elaborate “affinity models” based on a few data points hide significant statistical uncertainty; what is more, users do not always know what they want, and their responses may exhibit systematic bias of various kinds. In the authors’ Weather.com example, I would expect that card sorters would be moderately unlikely to group “gardening” with “stargazing” (p. 281), even though both tasks fall into the category of “reasons people care about the weather.” In the final wireframe in Figure 11-10 (p. 285), a number of disparate ideas are unified under the “How Will the Weather Affect Your...?” banner. It’s a solid model, but it seems likely to me that this was a top-down decision springing from the architects’ creativity and content research, not a bottom-up decision based on card groupings.
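To make the statistical worry concrete, here is a small sketch in Python – with invented card-sort data, not Morville and Rosenfeld’s – of the co-occurrence tally that underlies an affinity model. With only a handful of participants, the difference between an “obvious” pairing and an unlikely one rests on very few observations:

```python
from collections import Counter
from itertools import combinations

# Invented data: three participants each sort weather-site topics into piles.
sessions = [
    [{"radar", "forecast"}, {"gardening", "stargazing"}, {"ski report"}],
    [{"radar", "forecast", "ski report"}, {"gardening"}, {"stargazing"}],
    [{"radar", "forecast"}, {"gardening", "ski report"}, {"stargazing"}],
]

together = Counter()
for piles in sessions:
    for pile in piles:
        together.update(frozenset(pair) for pair in combinations(sorted(pile), 2))

for pair, count in together.most_common():
    print(sorted(pair), f"grouped together in {count}/{len(sessions)} sessions")
# radar/forecast co-occur in 3/3 sessions; gardening/stargazing in only 1/3 --
# far too few observations to treat either proportion as settled.
```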

The discussion of before-and-after benchmarking was gratifying; it bears some resemblance to my Week 3 idea of setting up two different web designs and testing user efficiency separately in each. This method is deeply empirical, making use of the scientific method to generate information that is independent of the intuition of either users or architects. The importance of intuition and creativity should not be understated, but in the end, we would like to have a way to know whether we did our jobs right!

Sunday, September 12, 2010

Week 4: Ups, downs, and in-betweens

This week's reading: Morville and Rosenfeld, Chapters 7, 8, and 9

While last week’s reading discussed how the interface should be built and named, this week we focused on how the user will interact with the interface. How will he find his way from one organizational chunk of our site to another? Will it be through browsing or through searching? How will we enable him to browse and search effectively without requiring him to become an expert on our website or on information science in general? There’s a lot of meat here, and the answer to each question is almost always “it depends,” but there’s a pattern to best practices that seems to illuminate a fundamental issue in designing IA for the user.

To illustrate this pattern, let me take examples from the chapter on navigation and the chapter on controlled vocabularies; I will draw parallels between them. In designing a classic thesaurus, the designer first identifies a preferred term for a topic (Cougar) and explains the content of the topic using scope notes (Cougar SN Felis concolor, a large predatory cat native to the Western Hemisphere). The thesaurus maps the term’s variants to the preferred term in the manner of an authority file (Mountain Lion Use Cougar); it also links the preferred term to the next broader term in its hierarchy (Cougar BT Cat), as well as to any narrower terms (Cougar NT Florida Panther). The thesaurus may also note terms that are qualitatively associated with the main entry, as determined by the compiler or by software (Cougar RT Jaguar). In sum, the thesaurus traverses several kinds of relationships (equivalence, hierarchy, and association) in trying to make sense of a search input or otherwise guiding the user to his endpoint.
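To make the structure concrete, here is a minimal sketch of such a thesaurus record in Python – the terms are just the examples above, not any real controlled vocabulary:

```python
# SN = scope note, UF = "used for" variants, BT/NT = broader/narrower terms,
# RT = related terms.
thesaurus = {
    "Cougar": {
        "SN": "Felis concolor, a large predatory cat native to the Western Hemisphere",
        "UF": ["Mountain Lion"],
        "BT": ["Cat"],
        "NT": ["Florida Panther"],
        "RT": ["Jaguar"],
    },
}

# Equivalence: map each variant to its preferred term, like an authority file.
use = {variant: preferred
       for preferred, record in thesaurus.items()
       for variant in record["UF"]}

def lookup(term):
    """Resolve a search input to its preferred term and its relationships."""
    preferred = use.get(term, term)  # e.g., Mountain Lion -> Cougar
    record = thesaurus.get(preferred)
    return None if record is None else {"term": preferred, **record}

entry = lookup("Mountain Lion")
print(entry["term"], "| broader:", entry["BT"], "| narrower:", entry["NT"])
# Cougar | broader: ['Cat'] | narrower: ['Florida Panther']
```

Keep the BT, NT, and RT fields in mind; the navigation tools below map onto them one by one.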

Compare this straightforward if technical design of a thesaurus to the design of a website. When the user sets out to navigate a large website, the website is usually well served to provide him with global, local, and contextual navigation tools. The global tools move the user up in the site hierarchy, not unlike the “broader term” relationship of a thesaurus. Global navigation is unlike BT in that the user of global navigation can quickly navigate to a different area of the site entirely, while BT by design only connects the user to the next broader classification of the term he is already viewing, but both tools share the purpose of giving the user a way to see something more general.

Local navigation tools move the user within the subsite or site section he is already viewing; for example, within Amazon.com’s section on video games, a local menu allows the user to view games for the Nintendo Wii or Xbox 360. If the user clicks on Wii, another local menu prompts him to select a genre of video game. The process is analogous to the classic thesaurus’s “narrower term” relationship: both the local navigation system and NT serve to help the user find something more specific.

Contextual navigation tools most often take the form of hypertext links in the content. Users expect that the text of such links describes the content of the linked webpage. Designers insert hyperlinks when they need to escape website hierarchy and move laterally to pages that are related to the current page’s content. This type of navigation is analogous to the classic thesaurus’s “related term,” in that contextual navigation and RT each serve as a catch-all; they take the user to a place that is related nonhierarchically.

(It’s worth noting as an aside that navigation systems do not generally have a precise equivalent to the thesaurus’s equivalence relationship, because organization is unique and language is not: a site’s organization gives each page one canonical place, while language offers many terms for a single concept. Some advanced navigation designs do admit parallels to equivalence relationships, but this discussion is tangential to the main point of this entry.)

Clearly, website navigation and thesaurus design have something in common. The commonality lies in how information can be related – broader, narrower, associated – and the likelihood that the user will want to move along one or more of these lines from one idea to another. This mostly-hierarchical set of relationships makes the organization of a website or a thesaurus transparent, allowing the designer to apply labeling skills to express its contents in a way that the user can readily navigate. I am left, however, with the question of whether hierarchy is uniquely suited to these tasks, or whether information can be organized in a way that is nonhierarchical and yet coherent. Morville and Rosenfeld gesture in this direction with their discussion of tag clouds, which seem to be a rich resource that semantic web software could use to guess which terms are related. Though creating clear and easily navigable hierarchies is obviously central to information architecture, I’ll remain alert to situational alternatives to the hierarchy that may present themselves!

Wednesday, September 8, 2010

Week 3.5: The information architecture of lib.usf.edu

The previous post is my "official" entry for the week, but I thought I'd cross-post the following from a discussion board entry of mine in LIS 6260, which I'm taking concurrently with this course. The question concerned how libraries can help users get the most out of electronic resources. I applied this week's readings to the question:

IA exhorts us to think about how we present information. For example, on lib.usf.edu, we've made a number of good layout decisions. It's easy to find crucial information like hours and contact information, and we have a mostly well-organized set of hyperlinks in the body. But we've also made some questionable decisions. Why are links to Articles and E-Journals, which are information sources, in the same menu bar with links to ILL and Help, which are services? Why does the link labeled Books take us to the library catalog, which manifestly contains more than just books? Why do we redundantly link to the same pages under the heading Research Tools that we do in the menu bar, and why are the pages labeled differently in one place than in the other? These inconsistencies make it harder for users to build a mental model of the site. Other parts of the page seem to be designed for librarians rather than our colleagues in other fields whom we serve: What is the difference between a database and an e-journal? What is PRONTO? What is RefWorks? (For that matter, what is ILL?) Where will I go if I click on the Karst Information Portal? You won't find the answers to these questions without more clicking.

Anyway, my point is that our website's front page is not bad, but it could be better. The site doesn't do much to point a novice user in the right direction. Its flaws become invisible to veterans like us, but there's a lot an experienced information architect could do to streamline and clarify it. We should *not* cop out by saying that instructors just don't give us the opportunity to teach students how to use the library. If our users can't figure out how to use our interface, the answer is not to ask our users to be more perfect, but to design our interface to be more humane.

Monday, September 6, 2010

Week 3: It looks nice, but does it work?

This week’s reading: Morville and Rosenfeld, Chapters 5 and 6

Our reading this week focused on two interconnected topics: how to conceptualize and group the organizational items of a system, and how to choose words or labels to represent them. It was an exciting pair of chapters, because the effectiveness of an information system like a website submits to empirical testing. That is, broadly speaking, it is actually possible to decide which of two possible schemes is better, in the sense of helping more of the users more of the time. In this comment I’ll focus on how we might apply empiricism to the ideas described in these chapters.

The text outlines organizational schemes appropriate for both exact and ambiguous searches. To use the language of my last entry, “fully realized” questions can be answered with straightforward schemes like alphabetical or chronological arrangement of data, but “fuzzy” questions – or items of information that fall into “fuzzy” categories – require more creativity in their organization. Most of the textbook’s examples of good interfaces give the user several access points in these ambiguous cases. Dell’s website (p. 65), for example, allows its customers to browse by topic (notebooks, desktops, support) or by audience (home, small business, government). A multiplicity of access points is likely to help some users and confuse others. Site analytics might help Dell empirically determine whether the former outnumber the latter.

A relevant metric of effectiveness might be the number of visitors who click on Dell’s topic links versus its audience links. A priori, I would expect that few visitors click “Home & Home Office” from the audience menu; these users are likely to have a more clearly defined need, and thus are more likely to use the topic links. If analytics bear out this intuition, Dell should consider eliminating this hyperlink from its audience menu. Conflicting with this impulse, however, is the principle of comprehensiveness (p. 100): if we have special links for business and government audiences, shouldn’t we have a special link for home audiences? Retaining the “Home & Home Office” link might improve the menu’s consistency and thus help the user build a mental model of the Dell website, even if the link is rarely used.

The challenge, then, is to design an empirical test to settle the question of whether our little-used link contributes more than it detracts. A first approach might be to recruit a panel of diverse users, each with a genuine need. Through an automated survey, the website could prompt users to articulate their need. Half of these users could be directed to Dell’s usual site, while the other half are directed to a version of the site with the questionable link omitted. Their progress through the respective designs could be tracked and their success quantified through an exit survey. If one version of the site connects users with content at a significantly higher rate, and no confounding factors such as internal politics intervene, Dell should adopt the more successful architecture. This approach could be applied to micro aspects of design, such as whether to include a particular hyperlink, or to macro aspects, such as an entire top-down site redesign.
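For completeness, here is a sketch – invented numbers, standard two-proportion z-test – of how those exit-survey results might be judged significant. This is one conventional way to operationalize “significantly higher rate,” not a prescription:

```python
from math import erf, sqrt

def ab_significance(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-test comparing success rates on two site versions.
    Returns the z statistic and a two-sided p-value."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p_value

# Hypothetical exit-survey results: 140/200 users succeeded on the usual site,
# 162/200 on the version with the questionable link omitted.
z, p = ab_significance(140, 200, 162, 200)
print(f"z = {z:.2f}, p = {p:.4f}")  # here p < 0.05, favoring the revised design
```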

The Dell homepage reprinted in the book is dated 2006. I note with interest that in the intervening four years, Dell has given its site a complete revamp – consistent with the textbook authors’ emphasis on ongoing improvement. In 2006, the topical menu had pride of place on Dell’s site, while audience was relegated to a small-type menu of hyperlinks. By contrast, in 2010, the audience menu is splashed prominently across the top of the site; mousing over one of the labels (“For Home,” “For Small and Medium Business,” and so on) drops down a topic menu pertaining to the audience. This integration combines the advantages of both menus in a seamless way that is intuitive to Net-savvy audiences, though empirical testing could be useful to determine whether this two-layer sorting of content might be confusing to Internet novices.

Dell’s changes to its labels also merit attention. In 2006, the audience menu was headed “Solutions for:”, and its items were “Home & Home Office,” “Small Business,” “Medium & Large Business,” and “Government, Education, & Healthcare.” In 2010, the audience menu has no heading. The mouseover points are labeled “For Home,” “For Small & Medium Business,” “For Public Sector,” and “For Large Enterprise.” Three changes are interesting here. First is the change to the format of the list’s items. “Solutions for” rings of corporate jargon, which the text’s authors warn against (pp. 85-86); Dell’s new formulation sounds much more natural. Second, we see that “Public Sector” has replaced the unwieldy “Government, Education, & Healthcare.” The latter choice is more descriptive, but the three indicated subcategories are heterogeneous; clicking this link is likely to lead us to a narrow, deep architecture where we’ll have to further specify that we work in education, then that we work in K-12 education, and so on. Public Sector, by contrast, denotes the same services more transparently – the label is effectively invisible to users who don’t need it – and the drop-down menu allows much of the disambiguation to take place in one click. Finally, we see that medium businesses have been reclassed with small businesses, while the term “large business” has been replaced with “large enterprise.” This could be an organizational change, but it’s more likely to be a labeling change; Dell has likely determined that its services for large businesses differ from the needs of medium-sized businesses, and has relabeled its categories to guide medium-sized business owners to the content most likely to be relevant to their need. The choice of the word “enterprise” in particular is clearly a labeling decision. “Enterprise” is an uncommon word whose connotation of scale may further help medium-sized business owners decide which menu category to pursue. All three of these changes have implications for the site’s overall architecture which could be user-tested by an experiment something like the one I described earlier.

My point in this entry is that IA guidelines are often useful, as when they suggest that we avoid jargon, but that site analytics and empirical testing are the ultimate tests of whether a site serves its users as envisioned. A building can be beautiful but uncomfortable, and a textbook-compliant website might still fail its users. In my first entry in this blog I discussed my view that interaction design is the parent discipline of information architecture. If so, then empiricism is the means by which we can determine whether IA is a properly dutiful child!

Monday, August 30, 2010

Week 2: What do spam filters and information seekers have in common?

This week’s reading: Morville & Rosenfeld, Chapters 3 and 4

In Reference 101 – known to USF’s SLIS students as “Introduction to Information Sources and Services” – I was taught that a patron’s inquiry could be broken down into elements called givens, wanteds, and modifiers. Givens were delimiters of the domain of interest: for example, a query might concern one-armed baseball players. Wanteds were what the patron sought: the name of such a baseball player. And modifiers were restrictions on the form of the output: it must be a webpage in English.

This week’s reading convinced me that the given-wanted-modifier model cannot cope with much real-world information seeking, at least not in a single application. The idea that all or even most patrons come to the reference desk with a fully realized question that can be answered completely and concisely – “his name was Pete Gray” – is a chimera. A great deal of information seeking cannot be phrased in the form of a question, and the information need is not “answered” so much as iteratively fed and refined until the seeker achieves a subjective satisfaction with the outcome.

There is something deeply intuitive and human about this vision of a subjective, iterated, fuzzy search for information. I’m instantly put in mind of Bayesian probability, which finds its most widely recognized use in email spam filters. These filters read incoming email and assign a probability that each message is spam based on whether its characteristics – its words, formatting, origin, and so on – resemble those of known spam messages. Critically, a Bayesian filter’s output feeds back into its training: users and administrators flag the filter’s blown calls, and the filter adjusts its notion of “what spam looks like” based on its mistakes. After enough iterations, the filter arrives at heuristics of satisfactory accuracy. (My own email filter has correctly classified each of the last five hundred messages I’ve received as of this writing.)
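
For the curious, here is a toy sketch in Python of the feedback loop I have in mind. It is a caricature – real filters use far richer features and subtler statistics – and every message below is invented.

```python
import math
from collections import Counter

class ToySpamFilter:
    """A caricature of a Bayesian spam filter: per-word counts in
    spam and ham messages yield a spam probability for new mail."""

    def __init__(self):
        self.spam_words = Counter()  # word -> spam messages containing it
        self.ham_words = Counter()   # word -> ham messages containing it
        self.spam_total = 0
        self.ham_total = 0

    def train(self, text, is_spam):
        words = set(text.lower().split())
        if is_spam:
            self.spam_words.update(words)
            self.spam_total += 1
        else:
            self.ham_words.update(words)
            self.ham_total += 1

    def spam_probability(self, text):
        # Naive Bayes over word presence, add-one smoothing, equal priors.
        log_odds = 0.0
        for word in set(text.lower().split()):
            p_spam = (self.spam_words[word] + 1) / (self.spam_total + 2)
            p_ham = (self.ham_words[word] + 1) / (self.ham_total + 2)
            log_odds += math.log(p_spam / p_ham)
        return 1 / (1 + math.exp(-log_odds))

spam_filter = ToySpamFilter()
spam_filter.train("cheap pills buy now", is_spam=True)
spam_filter.train("lunch meeting agenda", is_spam=False)
print(spam_filter.spam_probability("buy cheap pills"))  # well above 0.5

# A blown call is corrected by retraining on the mistake, nudging
# the filter's notion of "what spam looks like":
spam_filter.train("cheap flights for the lunch meeting", is_spam=False)
```

The essential property is the last line: the filter’s definition of spam is never final, but is revised each time a blown call is corrected.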

Why this digression? Because one way of viewing an information seeker – a way I believe the text supports – is as a well-developed Bayesian machine for identifying relevance. A middle school student who needs to write a two-page biography of George Washington may not be able to articulate exactly what information she’s looking for, but given a choice among webpages entitled The George Washington University, George Washington’s Mount Vernon Estate, and The Life of George Washington, she will immediately identify the third as the most likely to be relevant. A more advanced searcher might gravitate toward websites with names like EnchantedLearning.com, which are likely to present highly relevant information in a format designed for students’ ready comprehension, or AmesLab.gov, whose .gov domain connotes cognitive authority. A good traditional search engine should support searching and browsing of results based on these characteristics. Analogously, the information architecture of a website should play to the Bayesian heuristics of the human mind; whether presented as one-word taxonomic labels or paragraph-long synopses, metadata needs to help the user take a quick glance and accurately judge whether the underlying content will be relevant.
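
To caricature this in code: imagine a hypothetical “glance score” that weighs the cues our student might notice in a results list. The cue words, weights, and domain bonuses below are entirely invented; the point is only that scraps of metadata can be aggregated into a fast relevance judgment.

```python
# Invented relevance cues a young searcher might weigh.
CUES = {
    "biography": 2.0, "life": 2.0, "history": 1.0,
    "university": -2.0, "estate": -1.0,
}

def glance_score(title, domain):
    """Score a search result the way a seeker skims metadata:
    sum the weights of any recognizable cues."""
    score = sum(w for cue, w in CUES.items() if cue in title.lower())
    if domain.endswith(".gov"):
        score += 1.0  # cognitive authority of the domain
    if domain.endswith(".edu"):
        score += 0.5
    return score

results = [
    ("The George Washington University", "gwu.edu"),
    ("George Washington's Mount Vernon Estate", "mountvernon.org"),
    ("The Life of George Washington", "enchantedlearning.com"),
]
# "The Life of George Washington" sorts to the top, as our student would.
for title, domain in sorted(results, key=lambda r: -glance_score(*r)):
    print(f"{glance_score(title, domain):+.1f}  {title}")
```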

What is more, information architecture should support the iterated refinement of a user’s understanding of her own objective. When our middle schooler finds information about George Washington’s service in the French and Indian War, she will need to contextualize this new knowledge to determine its significance and its likely role in her paper. Her ideal history website might, for example, offer a tooltip that defines the French and Indian War in a single sentence – enough to assure her that the conflict was significant – as well as a hyperlink to more information, which would help her place the event chronologically in Washington’s life and outline his involvement in more detail. The tooltip pushes the information over the Bayesian significance threshold, and the hyperlink provides a natural avenue for continuing the newly reframed search. If one or the other is missing, the student may discard the information as irrelevant (in the first case) or obscure (in the second). Good architecture not only helps the user find “what she’s looking for,” but also helps her recognize it when she sees it.

Tuesday, August 24, 2010

Week 1: Definitions, and notes on taxonomy

This week’s reading: Morville & Rosenfeld, foreword through Chapter 2

I’m coming to the study of information architecture without formal experience in computer science – but I do have the important qualification that I’m a child of the Information Age. At various times in my life I’ve been a hobbyist computer programmer, an avid websurfer, a small-scale webmaster, an online forum administrator, and a student of library science. Each role has exposed me, from a different perspective, to what I now understand to be IA, and now that I’ve read a couple of chapters on the topic, the meaning of the phrase “information architecture” has begun to fall into place.

While attaining my bachelor’s degree at the University of Chicago, I had the good fortune to become acquainted with a software engineer with a special interest in interface design. Through his website and blog, as well as my conversations with him, I learned some of the basic issues and philosophies surrounding software usability; most importantly, I became firmly convinced that most end-user frustrations should be blamed on poor design and not on user incompetence. As a result, I understand information architecture primarily as a branch of interaction design. A good design is by definition a design that is usable and humane – respectful of the needs, frailties, and limited patience of our users. Though outside constraints such as budget and institutional culture may sometimes trouble our pursuit of such a design, good information architecture should place the user first as often as possible. If a design is not usable, it is nothing.

Our book would interject here to point out that interaction design is far from the only discipline related to IA. Pages 10 and 11 usefully summarize the points of tangency between IA and a number of other fields of study. At this phase of my nascent understanding of IA, however, I can’t help but think of these as allied fields to IA, while interaction design is its parent field.

Consider, for example, a now-commonplace but once-striking implementation of information architecture: Gmail’s conversation-based system for organizing and displaying email. In the days of yore, way back in 2003, email services invariably displayed emails one message at a time. Messages were prefixed by a potential infinity of iterations of RE: and FWD:, and the contents of a protracted email exchange might be spread across several pages of a user’s inbox. Gmail changed that by sorting all emails with the same subject line into one conversation, the whole of which can be viewed at a click. This design choice is clearly an act of information architecture, since it changes the nature of Gmail’s information environment by reworking how email is organized and navigated.
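
Gmail’s real threading logic is surely more sophisticated than this, but the core idea can be sketched in a few lines of Python: normalize away the RE:/FWD: prefixes and group what remains. The inbox below is invented.

```python
import re
from collections import defaultdict

# Strip any run of "RE:"/"FWD:" prefixes, case-insensitively.
PREFIX = re.compile(r"^(?:(?:re|fwd?)\s*:\s*)+", re.IGNORECASE)

def normalize(subject):
    return PREFIX.sub("", subject).strip().lower()

def group_conversations(messages):
    """Group (subject, body) pairs into conversations keyed on
    the normalized subject line."""
    threads = defaultdict(list)
    for subject, body in messages:
        threads[normalize(subject)].append(body)
    return threads

inbox = [
    ("Lunch plans?", "Sushi on Friday?"),
    ("RE: Lunch plans?", "Works for me."),
    ("FWD: RE: Lunch plans?", "Forwarding to the group."),
]
for subject, bodies in group_conversations(inbox).items():
    print(f"{subject!r}: {len(bodies)} messages")  # one thread of three
```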

Clearly, conversation-based email grouping would have been impossible without the software developers who coded it, the graphic designers who concretized it, and the usability engineers who optimized it – but it wasn’t for any of their sakes that such grouping was invented. Conversation-based grouping was invented for the sake of interaction design. Gmail’s information architects implemented the new system because it improved user interactions with email software. Users could find, digest, and act on information more easily under the new system than under the old one. This is the exact goal of information architecture. Viewed this way, IA’s status as a subfield of interaction design could not be more apparent.

Going forward, I’ll continue to focus on how good information architecture addresses user needs. At the same time, I’ll stay alert to other issues raised by the readings, such as the question of how architecture shapes the people who dwell within it. Morville and Rosenfeld raise this question through Winston Churchill at the beginning of Chapter 1, and it’s not an unfamiliar one. Educators wonder whether the instant availability of certain information via Google and Wikipedia has changed how we learn; business owners struggle to understand and accommodate the expectations of a generation brought up with social media; and legal professionals are in the midst of a seismic organizational realignment brought about by the Internet’s liberation of legal materials from the monopoly of West Publishing. We will find the evolution of information architecture at the origin of all of these changes, and it is IA – rooted in the needs of its residents – that we will use to shape the future!