Monday, November 22, 2010

Week 14: Extraordinary Claims...

This week’s reading: Borgmann, Part 3

I had planned to use this final post to address Kellner’s criticism of Borgmann, but I can’t bring myself to. I agree with Kellner on many points, but – now that I’ve read Borgmann’s final section – it feels much more important to engage with Borgmann directly. Briefly, I think Kellner errs in finding Borgmann’s argument theological, but Kellner is correct insofar as he critiques Borgmann for arguing from bias rather than fact.

Borgmann errs frequently and badly in describing the vistas of virtual reality. Where Borgmann is factually wrong – as in his critique of computer photorealism on page 198 – his attitudes reflect a pervasive and unjustified pessimism about the power of technology, and where he is factually correct he draws judgments about technological information that are not grounded in those facts. His treatment of virtual information strikes a strange balance between encyclopedic surface knowledge and deeper ignorance. Part 3 of “Holding On to Reality” reads like a nuanced and valuable discussion of the social context and aesthetic forms of the popular music of the latter half of the twentieth century that nonetheless concludes with a cantankerous dismissal of kids these days and their rock and roll music. Consider, for example, Borgmann’s striking claim that the virtual ambiguity of MUDs “renders virtual reality trivial, and, when pressed for its promise of engagement, evaporates” (p. 190). This implicitly moral judgment is preceded by a careful, objective examination of the information environment of MUDs, but not by any grounds for condemnation; the same is true of Borgmann’s subsequent dismissal of online relationships as mere “virtual vacuity” (p. 190). The same argumentative structure echoes throughout Part 3. Borgmann relies on intuition rather than evidence to make his broadest points, and while such bald assertions may be effective when preaching to the choir, they are less so when trying to convince an undecided reader such as myself. His proclamations about the loss of intelligence, thing, and context in the virtual environment, even if true, repeatedly made me wonder aloud, “So what?”

Borgmann does not get around to answering that big question – “so what?” – until his conclusion, where he starkly warns, “The preternaturally bright and controllable quality of cyberspace makes real things look poor and recalcitrant by comparison” (p. 216). This is precisely my own worry about information technology – the reason I have felt sympathetic to Borgmann throughout the reading. But his final bases for this assertion, like the dreariness of science fiction novels and the basic seducibility of human beings, are themselves not obvious and are certainly not grounded in Borgmann’s study. Indeed, in some areas, from art to war, virtual information sometimes seems to have given reality a heft that previous generations did not always have the chance to acknowledge. Reading the final pages of the book gave me the same sinking feeling one experiences when one’s favorite sports team commits basic errors of play, or when a normally eloquent advocate of one’s own political view stumbles badly in an important debate. If the central question of “Holding On to Reality” is whether it is worse to experience information virtually than to experience it through nature or culture, and the book represents the best argument that can be made in favor of the answer “Yes,” I can hardly blame society if it answers “No.”

With all that said, Borgmann succeeds in making me want to engage more with reality, even if he fails in convincing me to promote this engagement as a cultural rule. And some of his closing ideas, like his worry that the “sheer disorganized and imposing mass” (p. 230) of hyperinformation will guarantee its future loss, have profound resonance for information architects. In a world where people’s preferred way of engaging with information seems likely to remain irremediably virtual, information architects have the momentous task of preserving that imposing mass of virtual information in a usable form. If, because of the nature of the information or its environment, we aren’t able to situate that information in a reality beyond the virtual, I don’t think that makes us contemptible; our mission is merely to organize, present, and expedite. But if we are able – if we can encourage our users to relate deeply to the information they’re gathering – then we will certainly have done our society and our users a subtle service.

Monday, November 15, 2010

Week 13: Information Condensation, Or, It's Only a Model!

This week’s reading: Borgmann, Part 2

“Holding On to Reality” is simultaneously the most general and the most personal treatment of information science I’ve yet read as part of the library science curriculum, and reading it is exhilarating. I’m not ready yet to address Ess’s analysis of Borgmann cited in the lecture notes, since Ess focuses primarily on Part 3 of Borgmann. Instead, I’ll return this week to the needs of users – a focal component of IA – and discuss what user needs have to do with Borgmann’s treatment of information.

Tying physical architecture to his discussion of the distinction between signs and things, Borgmann opines, “No design can specify its realization fully. To convey exactly as much information as the thing realized, a design would have to exhibit just as many features as the thing. But then it would be a duplicate of . . . the thing” (p. 113). That is, a fully realized (or fully imagined) thing must necessarily lose fidelity when it is condensed into a sign. Borgmann’s insight here is the very principle that makes indexers, abstracters, and information architects necessary. Many library users, and information consumers in general, need to know what they’re accessing before they access it. But to know exactly what a journal article says without reading the article is impossible by Borgmann’s principle. Users, then, need a general idea of what an article says – a low-fidelity version of the article. The job of an abstracter is to reduce a cataloged item (a thing) to an abstract of a more digestible length (a sign) with minimal loss of fidelity. A good abstracter makes it possible for users to make reasonable guesses about where a sign points without having to walk down the indicated road and see for themselves.

Labeling components of an information architecture is precisely analogous to abstracting media; it requires the same faculty of condensation, and serves much the same end for the user. Yet one difference between information architecture and physical architecture, which is Borgmann’s subject in Chapter 10, is that an edifice, once built, is stripped of the cultural signs used in its creation. The low-fidelity artifice of the blueprint outlives its usefulness, and the building’s users rarely need a blueprint to navigate the building. By contrast, abstracts and labels within an information system are useful precisely because they are condensed, and they remain indispensable for users long after the system goes live. There seems to be a fundamental disanalogy here: we can apprehend a building with our senses, and hence we navigate a building with the aid of natural signs (like a luggage carousel in an airport or a blackboard in a school) as well as cultural ones, while an information architecture is invisible to the senses and can be navigated only through cultural signs and the guidance of the architect. Exploring a building is inherently an interactive experience; exploring an information architecture is not.

I think that this idea – the idea that an information architect must also be an information tour guide, supplying the cultural signs that must substitute for the natural signs an online environment lacks – is a key to overcoming user frustration with website interfaces and layouts. Since we cannot be physically present to help our users with their needs, our indexing and labeling functions are crucial to this aim. Just as John Harrison’s robust mechanical clock effectively condensed the vast grid of the world map into a longitude (p. 78), our navigation tools need to clearly help the user locate herself within the architecture; and just as the map’s rigorous grid makes the sign revisable to match the thing, we should be ready to relabel our websites in a way that better matches the thing, or even revise the thing to match user expectations. This last possibility – the ability of the information architect to revise online reality for the convenience of the user – is probably the most exciting aspect of information architecture, and it might well be the subject of Borgmann’s Part 3, subtitled “Information As Reality.”

Monday, November 8, 2010

Week 12: The IA That Quashed a War

This week’s reading: Burnett & Marshall, Chapters 8 and 9 and Conclusion; Borgmann, Introduction and Part 1

Borgmann’s reading this week promises to grow into a rigorous information-science foundation in which to root information architecture, and next week I’ll begin examining the conjunction of his work with IA. For this week, though, I can’t resist the provocations of Burnett and Marshall as they describe the distinctive characteristics of the online music industry. Since they wrote too early to see the evolution of the iTunes music store, I’ll apply some of their ideas to iTunes and see if any characteristics of online music consumers can be extrapolated.

Burnett and Marshall sum up their critique of the music industry’s response to Napster in three points (p. 193). The first and second are closely related: the industry underestimated the impact of the Internet on its business, and the studios chose to see MP3 and Napster as threats rather than opportunities. Both points exemplify a mistaken and unprofitable philosophy of change: “How can we continue to do business as usual as everything changes around us?” Studios, like other businesses (and like libraries), ought instead to have asked, “How can we stand at the vanguard of this change and be ready when mainstream consumers demand digital services?”

The studios never did develop their own business model to deal with MP3s; other entrepreneurs did it for them. KaZaA, YouTube, and the Pirate Bay have each tolerated or encouraged widespread piracy in their attempts to develop commercial models that would bypass the studios entirely. Pandora, Rhapsody, iTunes, and others have taken a different tack, negotiating with studios for the right to play or out-license their songs legally. Stores like iTunes have effectively stepped into the digital niche of the physical retailer, doing the work of salesmanship while artists and studios produce new work to license.

Why iTunes took so long to appear on the music scene is a question that would consume a much longer paper than this weekly response. In large part, however, the answer must be that studios hoped that the Internet and its challenges would go away and leave them to their profits of the 1990s. Even today, studios’ willingness to license music online is only grudging, and came about from desperation to steal market share back from pirate sites rather than from their own innovative proclivities.

But come about it did: piracy is almost as easy today as it was in the days of Napster and KaZaA, yet iTunes has crafted a successful business out of selling songs for $0.69 to $1.29. iTunes does well even though its songs are licensed and not bought. Whether studios and artists do proportionally well is, predictably, a matter of some dispute, but both Metallica and its label Elektra clearly make more money when I buy “Enter Sandman” from iTunes than when I download it from the Pirate Bay.

Why do music fans pay money for licensed music from iTunes rather than downloading music free of license from the Pirate Bay? I suspect that part of the answer – but only a small part – is that fans want to support the artists they listen to. Another part – but again, only a small part – is that fans fear retaliation for piracy, though anti-piracy lawsuits against consumers are still uncommon. The most important part of the answer, by far, must be that iTunes is a nice piece of software. The architecture of iTunes and its store encourages easy navigation, search, and downloading in a way that pirate sites do not. iTunes’ built-in ability to organize and play music likewise advantages it over the Pirate Bay, which operates strictly on a bring-your-own-software basis. In the end, the copyright wars between studios and fans have calmed not because of legal settlements – still ongoing – but because of great information architecture that makes fans able and willing to pay for music.

Perhaps the conclusion of this week’s reading, then, is that when an Apple information architect is asked what he does for a living, he might answer “I end copyright wars!”

Tuesday, November 2, 2010

Week 11: Election Day

This week’s reading: Burnett & Marshall, Chapters 5-7

In contrast to the polar bear book, which often left me feeling I lacked important context, Web Theory puts me in mind of a thousand different responses: enthusiastic agreements, vigorous ripostes, and occasional eyebrow-raising realizations about how much the Internet has changed since 2003. This week I’ll take advantage of today’s date – Election Day – to discuss the interface between politics and the issues of Chapter 7.

In their treatment of copyright issues on the Internet, Burnett and Marshall successfully make the crucial distinction between creators and copyright holders. Plaintiffs in key intellectual property suits are typically publishers and other firms, not the individuals who created the property at issue. Especially egregious examples of such firms have abounded in the news recently. One is Righthaven LLC, a group that has systematically purchased the copyrights to stories in the Las Vegas Review-Journal in order to sue weblogs that have quoted from those stories. Another is the U.S. Copyright Group, which hires itself out to movie producers to sue thousands of unnamed defendants accused of illegally downloading movies via BitTorrent. Watchdog groups such as the Electronic Frontier Foundation, along with outlets like Ars Technica, have found such suits socially and legally problematic; for current purposes, the most salient point is that the original creators of the material at issue are nowhere to be found in the legal proceedings.

When politicians make copyright law, then, the most vigorous lobbying rarely comes from authors or creators, nor from the disorganized masses who benefit from copyright liberalization; rather, the loudest voices are those of copyright holders, whose financial interest in their material gives them ample incentive to seek stringency in copyright law. Congress’s most recent extension of the copyright term – to the life of the author plus seventy years – coincided with the year when Mickey Mouse would otherwise have entered the public domain, and the Walt Disney Company lobbied for the change with corresponding intensity. Yet a broad-based consensus exists across the arts and sciences that a robust public domain from which to draw information and inspiration, as well as an expansive view of “fair use” of copyrighted material, contributes crucially to the “progress of Science and useful Arts” prioritized by the Copyright Clause of the Constitution. And so the public interest is often at loggerheads with the interests of intellectual property owners.

The Internet enters this conflict partly through its promulgation of Web 2.0 tools like YouTube and Flickr, which permit everyday citizens with no acquaintance with IP law to violate copyright on a daily basis. A photo of a friend posing in front of an iPod ad, if posted to a public Flickr account, might conceivably draw a DMCA takedown notice from Apple. A cottage industry of YouTube videos that redub a few minutes of popular cartoon shows for satirical purposes has frequently received similar notices. The individuals committing the alleged infringements may well be engaging in fair use, but they lack the resources to defend themselves against the much better funded corporations who own the intellectual property that Web users are adapting. Researchers, too, often skirt IP law when they seek to develop new technologies; DMCA lawsuits have been threatened or actually prosecuted in cases concerning security research, DVD-ripping software, and even the manufacture of universal garage door openers.

The Internet, of course, also makes actual piracy – with no pretensions to fair use – as easy as downloading the right file from the Pirate Bay. But while IP holders invest a lot of money in chasing down pirates, too many artists, scientists, and other genuine creators get caught up in the dragnet. There is very little money protecting these individuals in comparison with what IP holders spend on lobbying and lawyering. As long as intellectual property issues remain under the radar of the general public, and as long as no large organizations find it in their interest to step up for content creators, neither political party will find the will to challenge the status quo as copyright grows more draconian. Victories for copyright liberalizers will come in the courts, not the legislatures – and for those interested in these issues, Election Day doesn’t represent a meaningful choice between alternatives.