Monday, November 22, 2010

Week 14: Extraordinary Claims...

This week’s reading: Borgmann, Part 3

I had planned to use this final post to address Kellner’s criticism of Borgmann, but I can’t bring myself to. I agree with Kellner on many points, but – now that I’ve read Borgmann’s final section – it feels much more important to engage with Borgmann directly. Briefly, I think Kellner errs in finding Borgmann’s argument theological, but Kellner is correct insofar as he critiques Borgmann for arguing from bias rather than fact.

Borgmann errs frequently and badly in describing the vistas of virtual reality. Where Borgmann is factually wrong – as in his critique of computer photorealism on page 198 – his attitudes reflect a pervasive and unjustified pessimism about the power of technology, and where he is factually correct he draws judgments about technological information that are not grounded in those facts. His treatment of virtual information strikes a strange balance between encyclopedic surface knowledge and deeper ignorance. Part 3 of “Holding On to Reality” reads like a nuanced and valuable discussion of the social context and aesthetic forms of the popular music of the latter half of the twentieth century, only to conclude with a cantankerous dismissal of kids these days and their rock and roll music. Consider, for example, Borgmann’s striking statement that the virtual ambiguity of MUDs “renders virtual reality trivial, and, when pressed for its promise of engagement, evaporates” (p. 190). This implicitly moral judgment is preceded by a careful objective examination of the information environment of MUDs, but not by any grounds for condemnation, and the same is true of Borgmann’s subsequent writing off of online relationships as mere “virtual vacuity” (p. 190). The same argumentative structure echoes throughout Part 3. Borgmann relies on intuition rather than evidence to make his broadest points, and while such bald assertions may be effective when preaching to the choir, they are less so when trying to convince an undecided audience such as myself. His proclamations about the loss of intelligence, thing, and context in the virtual environment, even if true, repeatedly made me wonder aloud, “So what?”

Borgmann does not get around to answering that big question – “so what?” – until his conclusion, where he starkly warns, “The preternaturally bright and controllable quality of cyberspace makes real things look poor and recalcitrant by comparison” (p. 216). This is precisely my own worry about information technology – the reason I have felt sympathetic to Borgmann throughout the reading. But his final bases for this assertion, like the dreariness of science fiction novels and the basic seducibility of human beings, are themselves not obvious and are certainly not grounded in Borgmann’s study. Indeed, in some areas, from art to war, virtual information sometimes seems to have given reality a heft that previous generations did not always have the chance to acknowledge. Reading the final pages of the book gave me the same sinking feeling one experiences when one’s favorite sports team commits basic errors of play, or when a normally eloquent advocate of one’s own political view stumbles badly in an important debate. If the central question of “Holding On to Reality” is whether it is worse to experience information virtually than to experience it through nature or culture, and the book represents the best argument that can be made in favor of the answer “Yes,” I can hardly blame society if it answers “No.”

With all that said, Borgmann succeeds in making me want to engage more with reality, even if he fails in convincing me to promote this engagement as a cultural rule. And some of his closing ideas, like his worry that the “sheer disorganized and imposing mass” (p. 230) of hyperinformation will guarantee its future loss, have profound resonance for information architects. In a world where people’s preferred way of engaging with information seems likely to remain irremediably virtual, information architects have the momentous task of preserving that imposing mass of virtual information in a usable form. If, because of the nature of the information or its environment, we aren’t able to situate that information in a reality beyond the virtual, I don’t think that makes us contemptible; our mission is merely to organize, present, and expedite. But if we are able – if we can encourage our users to relate deeply to the information they’re gathering – then we will certainly have done our society and our users a subtle service.

Monday, November 15, 2010

Week 13: Information Condensation, Or, It's Only a Model!

This week’s reading: Borgmann, Part 2

“Holding On to Reality” is simultaneously the most general and the most personal treatment of information science I’ve yet read as part of the library science curriculum, and reading it is exhilarating. I’m not ready yet to address Ess’s analysis of Borgmann cited in the lecture notes, since Ess focuses primarily on Part 3 of Borgmann. Instead, I’ll return this week to the needs of users – a focal component of IA – and discuss what user needs have to do with Borgmann’s treatment of information.

Tying physical architecture to his discussion of the distinction between signs and things, Borgmann opines, “No design can specify its realization fully. To convey exactly as much information as the thing realized, a design would have to exhibit just as many features as the thing. But then it would be a duplicate of . . . the thing” (p. 113). That is, a fully realized (or fully imagined) thing must necessarily lose fidelity when it is condensed into a sign. Borgmann’s insight here is the very principle that makes indexers, abstracters, and information architects necessary. Many library users, and information consumers in general, need to know what they’re accessing before they access it. But to know exactly what a journal article says without reading the article is impossible by Borgmann’s principle. Users, then, need a general idea of what an article says – a low-fidelity version of the article. The job of an abstracter is to reduce a cataloged item (a thing) to an abstract of a more digestible length (a sign) with minimal loss of fidelity. A good abstracter makes it possible for users to make reasonable guesses about where a sign points without having to walk down the indicated road and see for themselves.

Labeling components of an information architecture is precisely analogous to abstracting media; it requires the same faculty of condensation and serves much the same end for the user. Yet one difference between information architecture and physical architecture, which is Borgmann’s subject in Chapter 10, is that an edifice, once built, is stripped of the cultural signs used in its creation. The low-fidelity artifice of the blueprint outlives its usefulness, and the building’s users rarely need a blueprint to navigate the building. By contrast, abstracts and labels within an information system are useful precisely because they are condensed, and are indispensable for users long after the system goes live. There seems to be a fundamental disanalogy here: we can apprehend a building with our senses, and hence we navigate a building with the aid of natural signs (like a luggage carousel in an airport or a blackboard in a school) as well as cultural ones, while an information architecture is invisible to the senses and we can navigate it only through cultural signs and the guidance of the architect. Exploring a building is inherently an interactive experience; exploring an information architecture is not.

I think that this idea – the idea that an information architect must also be an information tour guide, providing signs that are naturally deficient in an online environment – is a key to overcoming user frustration with website interfaces and layouts. Since we cannot be physically present to help our users with their needs, our indexing and labeling functions are crucial to this aim. Just as John Harrison’s robust mechanical clock effectively condensed the vast grid of the world map into a longitude (p. 78), our navigation tools need to help the user locate herself clearly within the architecture; and just as the map’s rigorous grid makes the sign revisable to match the thing, we should be ready to relabel our websites to match the thing better, or even revise the thing to match user expectations. This last possibility – the ability of the information architect to revise online reality for the convenience of the user – is probably the most exciting aspect of information architecture, and it might well be the subject of Borgmann’s Part 3, subtitled “Information As Reality.”

Monday, November 8, 2010

Week 12: The IA That Quashed a War

This week’s reading: Burnett & Marshall, Chapters 8 and 9 and Conclusion; Borgmann, Introduction and Part 1

Borgmann’s reading this week promises to grow into a rigorous information-science foundation in which to root information architecture, and next week I’ll begin examining the conjunction of his work with IA. For this week, though, I can’t resist the provocations of Burnett and Marshall as they describe the distinctive characteristics of the online music industry. Since they wrote too early to see the evolution of the iTunes music store, I’ll apply some of their ideas to iTunes and see if any characteristics of online music consumers can be extrapolated.

Burnett and Marshall sum up their critique of the music industry’s response to Napster in three points (p. 193). The first and second are closely related: the industry’s underestimation of the Internet’s impact on its business, and the studios’ choice to see MP3 and Napster as threats rather than opportunities. Both points exemplify a mistaken and unprofitable philosophy of change: “How can we continue to do business as usual as everything changes around us?” Studios, like other businesses (and like libraries), ought instead to have asked, “How can we stand at the vanguard of this change and be ready when mainstream consumers demand digital services?”

The studios never did develop their own business model to deal with MP3s; other entrepreneurs did it for them. KaZaA, YouTube, and the Pirate Bay have each tolerated or encouraged widespread piracy in their attempts to develop commercial models that would flout the studios entirely. Pandora, Rhapsody, iTunes, and others have taken a different tack, negotiating with studios for the right to play or out-license their songs legally. Stores like iTunes have effectively stepped into the digital niche of the physical retailer, doing the work of salesmanship while artists and studios produce new work to license.

Why iTunes took so long to appear on the music scene is a question that would consume a much longer paper than this weekly response. In large part, however, the answer must be that studios hoped that the Internet and its challenges would go away and leave them to their profits of the 1990s. Even today, studios’ willingness to license music online is only grudging, and came about from desperation to steal market share back from pirate sites rather than from their own innovative proclivities.

But come about it did: piracy is almost as easy today as it was in the days of Napster and KaZaA, yet iTunes has crafted a successful business out of selling songs for $0.69 to $1.29. iTunes does well even though its songs are licensed and not bought. Whether studios and artists do proportionally well is, predictably, a matter of some dispute, but both Metallica and its label Elektra clearly make more money when I buy “Enter Sandman” from iTunes than when I download it from the Pirate Bay.

Why do music fans pay money for licensed music from iTunes rather than downloading music free of license from the Pirate Bay? I suspect that part of the answer – but only a small part – is that fans want to support the artists they listen to. Another part – but again, only a small part – is that fans fear retaliation for piracy, though anti-piracy lawsuits against consumers are still uncommon. The most important part of the answer, by far, must be that iTunes is a nice piece of software. The architecture of iTunes and its store encourages easy navigation, search, and downloading in a way that pirate sites do not. iTunes’ built-in ability to organize and play music likewise advantages it over the Pirate Bay, which operates strictly on a bring-your-own-software basis. In the end, the copyright wars between studios and fans have calmed not because of legal settlements – still ongoing – but because of great information architecture that makes fans able and willing to pay for music.

Perhaps the conclusion of this week’s reading, then, is that when an Apple information architect is asked what he does for a living, he might answer “I end copyright wars!”

Tuesday, November 2, 2010

Week 11: Election Day

This week’s reading: Burnett & Marshall, Chapters 5-7

In contrast to the polar bear book, a reading in which I often felt I lacked important context, Web Theory puts me in mind of a thousand different responses: enthusiastic agreements, vigorous ripostes, and occasional eyebrow-raising realizations about how much the Internet has changed since 2003. This week I’ll take advantage of today's date – Election Day – to discuss the interface between politics and the issues of Chapter 7.

In their treatment of copyright issues on the Internet, Burnett and Marshall successfully make the crucial distinction between creators and copyright holders. Plaintiffs in key intellectual property suits are typically publishers and other firms, not the individuals who created the property at issue. Especially egregious examples of such firms have abounded in the news recently. One is Righthaven LLC, a group that has systematically purchased the copyright to stories in the Las Vegas Review-Journal in order to sue weblogs that have quoted from these stories. Another is the U.S. Copyright Group, which hires itself out to movie producers to sue thousands of unnamed defendants accused of illegally downloading movies via BitTorrent. Watchdogs like the Electronic Frontier Foundation and journalists at Ars Technica have found such suits socially and legally problematic; for current purposes, the most salient point is that the original creators of the material at issue are nowhere to be found in the legal proceedings.

When politicians make copyright law, then, the most vigorous lobbying rarely comes from authors or creators, nor from the disorganized masses who benefit from copyright liberalization; rather, the loudest voices are those of copyright holders, whose financial interest in their material gives them ample incentive to seek stringency in copyright law. Congress’s most recent extension of the copyright term – to the life of the author plus seventy years – coincided with the year when Mickey Mouse would otherwise have entered the public domain, and the Walt Disney Company lobbied for the change with corresponding intensity. Yet a broad-based consensus exists across the arts and sciences that a robust public domain from which to draw information and inspiration, as well as an expansive view of “fair use” of copyrighted material, contributes crucially to the “progress of Science and useful Arts” prioritized by the Copyright Clause of the Constitution. And so the public interest is often at loggerheads with the interests of intellectual property owners.

The Internet enters this conflict partly with its promulgation of Web 2.0 tools like YouTube and Flickr, which permit everyday citizens with no acquaintance with IP law to violate copyright on a daily basis. A photo of a friend posing in front of an iPod ad, if posted to a public Flickr account, might conceivably draw a DMCA takedown notice from Apple. A cottage industry of YouTube videos that redub a few minutes of popular cartoon shows for satirical purposes has frequently received similar notices. The individuals committing the alleged infringements may well be engaging in fair use, but they lack the resources to defend themselves against the much better funded corporations who own the intellectual property that Web users are adapting. Researchers, too, often skirt IP law when they seek to develop new technologies; DMCA lawsuits have been threatened or actually prosecuted in cases concerning security research, DVD-ripping software, and even the manufacture of universal garage door openers.

The Internet, of course, also makes actual piracy – with no pretensions to fair use – as easy as downloading the right file from The Pirate Bay. But while IP holders invest a lot of money in chasing down pirates, too many artists, scientists, and other genuine creators get caught up in the dragnet. There’s very little money protecting these individuals in comparison with what IP holders spend on lobbying and lawyering. As long as intellectual property issues remain under the general public’s radar, and as long as no large organizations find it in their interest to step up for content creators, neither political party will find the will to challenge the status quo as copyright grows more draconian. Victories for copyright liberalizers will come in the courts, not the legislatures – and for those interested in these issues, Election Day doesn’t represent a meaningful choice between alternatives.

Tuesday, October 26, 2010

Week 10: Civilizational Architecture

This week’s reading: Burnett & Marshall, Chapters 2-4

This week’s reading discussed some of the identity and civilizational issues surrounding the Web. Though the discussion bore only tangential relevance to information architecture, IA is an important cog in the massive machine of the Internet, and as members of this “cybernetic” system, we architects would do well to understand how our society uses this machine – and how the machine is changing society.

The great strength of Web Theory thus far is its ability to recognize and examine facets of the Internet that are so obvious to its users that we’ve long since stopped noticing them. One can’t critically think about a social force one takes for granted. I was struck especially by the discussion of the “network society” – a succinct and precise description of a system where “geographical connections that are no longer grounded in physical communities but are connected through the flows of information weaken the patterns of the formerly spatially constructed communities and societies” (p. 41). I grew up in Florida, but I also grew up on the Internet – and you can see which stomping ground shaped my social life more when you know that my best friends live in New York, San Francisco, Charlotte, and Edmonton, not in Fort Myers.

The authors of Web Theory, moreover, are right to predict that the many-to-many communication facilitated by the Internet means that I have “weak tie” social links to a great diversity of acquaintances whom I might never meet in real life. My Web acquaintances span races and classes, and include homosexuals, bisexuals, and transgendered people, Muslims, Mormons, and Wiccans, world citizens from Austria to Australia, vegetarians, furries, and at least one person who knows vastly more than I do about any topic you can name. Correspondingly, I don’t feel the exclusive loyalty to my home community, alma mater, or local sports teams that my parents did (though I’ll cop to being a St. Petersburg Times fanboy). I’ve largely replaced identification based on where I live or where I grew up with identification based on my interests and identity.

And as for identity, I found Burnett and Marshall’s treatment of negative and positive effects of the Internet on the lives of its users to be amusing and full of truths. The “opposing” viewpoints they presented reminded me of nothing so much as the parable of the blind men and the elephant from a previous reading. It’s quite true, as Kraut in particular suggests, that some people use the Internet in a way that interferes with local social circles – and also true, as he speculates, that this use can cause feelings of alienation and anonymity. But it’s also true, as Pew found, that the Internet can strengthen our connections with friends and family. If Nie and Erbring find that Internet use results in “spending less time with or on the phone with family and friends” (p. 66), this could be because, as Pew says, Net users “have used e-mail to enrich their important relationships” (p. 67).

It’s tempting for Net businesses to seek ways to capitalize on the ability of the new generation of users to form communities that exist outside of physical geography. Indeed, many have done so with varying success; Facebook’s valuation as of July appeared to stand somewhere between $12 billion and $24 billion. Certainly information architects trying to make their case to skeptical executives should be able, in some contexts, to argue in terms of Internet users’ propensity for constructing and broadcasting their identities using Web tools, as well as some users’ desire to be citizens of an Internet community. I don’t think there’s anything intrinsically wrong with information architects arguing in those terms, or that a company errs morally when it encourages brand loyalty and community-building among its customers. Like many immersive media, however, the Internet certainly can have an addictive and anti-social effect on those who use it uncritically, and online communities like those of World of Warcraft and 4chan play a contributory part in these cases. A solution to this real problem is outside the immediate scope of the reading, but the more we can understand about the nature of the online medium, the better equipped we will be to understand our ethical responsibilities as producers and consumers of Internet content.

Monday, October 18, 2010

Week 9: The Architect's Garden

This week’s reading: Morville & Rosenfeld, Chapters 20 and 21; Burnett & Marshall, Chapter 1

Three diverse readings this week! The Burnett & Marshall chapter seemed to pivot away from information architecture and into the role of information technology in society. This is a legitimately fascinating topic, but the first chapter read like a fifty-page master’s thesis condensed by force into fifteen pages; interesting models and taxonomies are introduced only to be immediately abandoned without real exploration. This week I won’t worry about Web Theory, but instead will indulge myself in a case study. Riffing on Morville & Rosenfeld’s Chapter 21, I’ll talk about a social problem encountered by the users of an Internet forum I help administer, and explain how we used information architecture to solve it.

The forum in question, In the Rose Garden, has about 700 members, of whom several dozen are active contributors. Users are bound by our common interest in the Japanese anime “Revolutionary Girl Utena,” whose immense literary merits – though outside the scope of this blog – have proven multifaceted enough to sustain analysis and discussion throughout the three years of the forum’s existence. Three volunteers, including myself, administer the forum; most commonly, administration involves some routine content maintenance (dealing with multiple threads on the same topic, for example) and keeping an eye out for interpersonal conflicts on the boards.

Though IRG members are brought together by Utena, the bulk of activity on the forum does not directly pertain to the anime. Sampling a few popular threads would reveal political and social discussions, sharing of other anime, airing of college angst, and conversations about shame, anger, and joy. The most frequently trafficked threads, however, are “forum games.” Forum games are threads in which posts follow a simple set of rules – one thread might ask posters to add two words to a developing story, while another is dedicated to the results of a personality quiz. These games, as played on IRG, are usually more reflexive than thoughtful, but they’re easy to join or to post to, which accounts for their disproportionate popularity.

In 2009, the proliferation of forum games grew to the point where many users on IRG perceived them as an unwelcome distraction. Because of their popularity, forum games were usually ranked highly on the chronologically-sorted thread directories, burying more serious or intimate threads in the same category. After experiencing the problem firsthand for months and receiving a few user complaints, I concluded that forum games were inconveniencing many users and stifling other threads. Banning such games, however, was not an acceptable solution; forum games are good social looseners, serve as an access point to IRG for many new users, and – most of all – make many of our users happy, even the ones who also want to be able to find and post to more serious threads.

The solution – obvious in hindsight – was a change to IRG’s organization. In consultation with the other administrators, I created a new subforum devoted to forum games, accessible from the forum’s front page. Migrating all the forum games to a single, dedicated area of the site addressed the problem in several ways, but they all boil down to usability. Site users after the change were able to easily identify what section of the site would contain the kind of thread they were looking for. Those who wanted to quickly join a forum game knew where to do that; those who wanted to have a thoughtful conversation weren’t distracted by the game-driven irrelevance of top results in other subfora. The number of clicks needed to access any given thread was constant before and after the change.

As might be expected, the investment of time needed to implement this change paid off in a big way. Forum games continued to thrive in “captivity,” while threads elsewhere enjoyed renewed popularity. I didn’t know it at the time, but I was doing information architecture: designing a website to meet the needs and expectations of its users in an efficient and organized way.

One footnote, apropos of Morville and Rosenfeld’s allusions to the unique aspects of the evolt community in Chapter 21: Many IRG users have a strong preference for either forum games or discussion, and rarely participate in the unpreferred category. From an IA perspective this strengthens the case for the change we made, but at the time the administrators worried that segregating forum games might be tantamount to segregating users. Our small community is tight-knit, unlike the communities of many large Internet forums, and we were concerned about the social impact of “marking” forum games (and, implicitly, their players) in such a visible way. Though the change certainly did not rend the social fabric, I’ve informally noticed that crossover between forum games and other threads has seemed less frequent in the ensuing year. A few game players whose activity previously spanned the forum have settled into their new subforum and rarely emerge from it. Fortunately, there are several others who still bridge the gap, and IRG has not diverged into two unconnected forums.

So much for my belief that IA is a totally new subject for me. It turns out that I am, in fact, an experienced and successful information architect!

Tuesday, October 12, 2010

Week 8: At Last, Librarianship

This week’s reading: Morville & Rosenfeld, Chapters 17 and 18

Our topics this week – first, marketing our services to skeptical managers, and second, comparing IA to business strategy – were far enough outside my experience that I’m not sure how to react to them in a way that goes beyond recapitulation. I think I can best elaborate by comparing the challenges information architects face in justifying their existence to the challenges librarians face in doing the same.

Let’s begin with the obvious. Though the specifics of their duties differ, information architects and librarians are both broadly in the business of making information accessible. Both design systems of organization and labeling to make their information systems more transparent to the user, both create and use metadata extensively to expedite searches, and both are concerned with economizing user effort. As a result of their shared central mission, both librarians and information architects sometimes face questions from decision makers who do not perceive disorganized information as a serious problem.

Google Search, in particular, has contributed to the false impression that all the information in the world is now organized and accessible. Public librarians tear out their hair when their acquisitions budget is cut because Google is free; information architects gnash their teeth when the client wants to install a Google Custom Search bar and dispense with the messy process of web architecture. Sure, Google can’t design our reference queries or our browsing hierarchies, but do users really need that stuff anyway? It falls to us to make the case that, yes, users do need that stuff – and lots more besides that Google can’t do.

Not all of the text’s suggestions on how information architects can make this case are equally applicable to librarians. For instance, public and academic librarians are unlikely to impress policymakers with a return-on-investment analysis, which would contain even more unknowns than a similar IA analysis and would operate outside the myopic timeframe with which their funding authorities concern themselves. But librarians can make good use of the “pain is your best friend” principle (p. 375). We can use stories and presentations to illustrate the often humorously painful consequences of replacing human expertise with search software. We can challenge the policymaker to find a particular commonly sought piece of information using Google, forcing him or her to confront the imperfections of Google directly. We can even use comparative analysis to point up the exact stages where human reference librarians add value to a search process, as well as the types of patrons (such as the young, elderly, and uneducated) who have special trouble conducting information searches by machine.

One important difference between IA’s image problems and those of librarianship looks to the future. The mood in the IA community, as on page 377 of the reading, seems to be that broader recognition of the role of information architects is inevitable. By contrast, the mood in the library community is that future technologies will pose even more stringent challenges to our necessity than current tools already have. I conclude that it might be wise for librarians to restyle their role in civic life as including social information architecture. Library websites should evolve past being electronic card catalogs and instead seek to architect a broad information system, encompassing both physical and Internet resources, that is responsive to the most common needs of its users. This goal is particularly ambitious – most websites undertake to organize a much more limited set of resources – but some libraries, including USF’s, have already begun such an undertaking. I can think of a number of ways to make such a project feasible, and perhaps I’ll study something like this as part of my term paper!