Alan Jacobs
Computer Control
The perfect technique is the most adaptable and, consequently, the most plastic one. True technique will know how to maintain the illusion of liberty, choice, and individuality; but these will have been carefully calculated so that they will be integrated into the mathematical reality merely as appearances.
—Jacques Ellul, The Technological Society
1
In 1986 a musician, composer, and computer programmer named Laurie Spiegel used her Macintosh to produce the first version of an application she called Music Mouse. Music Mouse is not easy to describe, but, in Spiegel's own account, it works like this:
It lets you use the computer itself as a musical instrument, played by moving the mouse with one hand while you control dozens of available musical parameters from the Mac's "qwerty." It's a great musical idea generator, ear trainer, compositional tool, and improvising instrument. The software does a lot of harmony handling for you (you control the variables it uses for this), so it's useful—as are all "real instruments" at any level of musical training, experience, or skill, from beginning through professional.
The effects produced by Music Mouse are more striking than any description of the application, as many laudatory reviewers of Music Mouse have noted. If you have a Macintosh—or, buried somewhere in your basement, a by-now ancient Atari or Amiga computer—you can discover Music Mouse for yourself (a free demo of the most recent Mac version can be downloaded at http://retiary.org/ls/programs.html); but for the great majority who dwell in GatesWorld, alas, Music Mouse has never been ported to Windows.1
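Spiegel's phrase about the software doing "a lot of harmony handling for you" can be made concrete. What follows is a minimal sketch of my own, in Python—not Spiegel's code, and not a description of how Music Mouse is actually implemented—showing the general trick such programs can rely on: whatever position the mouse reports, the program quantizes it to the nearest pitch of a scale it has already chosen, so the user literally cannot play a note outside the harmony.

```python
# A toy illustration (not Spiegel's code) of software "harmony handling":
# whatever vertical position the mouse reports, the program snaps it to
# the nearest pitch of a chosen scale, so the user cannot play a "wrong" note.

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # scale degrees as semitone offsets from C

def mouse_y_to_midi_note(y: float, screen_height: float = 480.0,
                         low_note: int = 48, octaves: int = 3) -> int:
    """Map a mouse y-coordinate to a MIDI note number, snapped to C major."""
    # Top of the screen (y = 0) is the highest pitch, so invert the fraction.
    fraction = 1.0 - max(0.0, min(1.0, y / screen_height))
    raw = low_note + fraction * (12 * octaves)        # unquantized pitch
    octave, semitone = divmod(round(raw) - low_note, 12)
    nearest = min(C_MAJOR, key=lambda s: abs(s - semitone))
    return low_note + 12 * octave + nearest

if __name__ == "__main__":
    for y in (0, 120, 240, 360, 479):
        print(y, "->", mouse_y_to_midi_note(y))
```

Even this toy version makes the later point vivid: the musical intelligence lives in the mapping, not in the hand that moves the mouse.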
In light of recent developments in computer-assisted and computer-generated music, Music Mouse may sound almost quaint, but in its early years it certainly knocked a lot of socks off, including the rather distinguished socks of Richard Lanham, a professor of rhetoric at UCLA, who wrote approvingly of Music Mouse in a 1989 essay called "Literary Study and the Digital Revolution." That essay was reprinted in Lanham's pathbreaking book The Electronic Word: Democracy, Technology, and the Arts, and the book's subtitle alerts us to one of the reasons for Lanham's enthusiasm. For Lanham, the work of people like Spiegel uses technology to democratize the arts: it enables ordinary people, even people who have not had access to formal musical training, not only to listen to music but to make it. Spiegel understands Music Mouse in similar terms, and sees it as emancipating "musicality" from "sheer physical coordination" and "the ability to deal with and manipulate symbolic notation"; these, in her view, are "irrelevant" and "have nothing to do with musical ability."
Much as I enjoy Music Mouse, I am dubious about some of the claims made by both Lanham and Spiegel—in part because it is my habit, whenever I hear someone proclaim that they have achieved my liberation from anything, to make sure my wallet is still in my back pocket. Precisely what sort of liberation is this? Instead of struggling with the limits of my physical coordination, limits that have for so long interfered with my desire to play the guitar like the Edge or Chet Atkins, I could simply play Music Mouse; but this does not often strike me as a desirable alternative. For one thing, I'm not sure that one gets better at playing Music Mouse, or at least not markedly better, and it is generally rewarding to improve one's skills; for another, the many "variables" in harmonization, tempo, and voicing that the program offers me don't seem to approximate the range of musical sounds I can make even with my rudimentary guitar-playing skills.
But more centrally, the claim that Music Mouse can liberate my "musicality" from the defects or limits of my physical skill makes me wonder if one couldn't say the same for an EA Sports computer simulation of professional football or baseball: those tacklers I could never escape in the real world, those curve balls I always swung ten inches above, pose little challenge to my skills with a mouse or joystick. The only problem is that when I'm sitting at the computer I'm not playing a sport; I'm playing a computer game that offers a shrunken and two-dimensional visual representation of a sport. And similarly, when I use Music Mouse I am not making music—or at least that's how it feels to me: rather, I'm offering the computer some extremely simple prompts which allow it to make music, and only the kind of music it knows how to make. The computer sports game actually demands much more skill from me than does Music Mouse, which for Laurie Spiegel is the beauty of her program; but that's not the only way to think about the matter.
In his essay, Lanham acknowledges this point in passing: imagining a series of further developments of computerized musical technology, he exclaims, "All these permutations are available to performers without formal musical training. (The computer training required is something else.)" There's a deeper significance hidden in Lanham's parenthetical comment. What about the computer training—and musical training!—required to write a program like Music Mouse? I am able to use the application so readily because so much of Laurie Spiegel's expertise in both fields has been employed so thoroughly in its making. Indeed, Spiegel has done almost all the work; all that's left for me is to twiddle the mouse. Am I "emancipated" by this situation? Or am I merely reduced to the level of a very junior partner in the music-making enterprise?
Computer technology seems to have this curious effect on many people: it makes at least some of us feel that we are doing things that, in point of fact, the computer itself is doing, and doing according to the instructions of people who have certain highly developed skills that most of us do not have. When this happens to me—or, more precisely, when I realize that it has happened to me—I feel as though I had temporarily convinced myself that I am the Wizard of Oz; I've forgotten that I'm just a little man hiding behind an enormous shield of technology that, in this case, I didn't even make myself.
Lanham's book is actually a brilliant one, and his exploration of how electronic text changes our orientation to writing and reading is compelling and provocative. The same is true of another humanist's foray into the world of computer-based critical literacy, George P. Landow's Hypertext: The Convergence of Contemporary Critical Theory and Technology. If Lanham's emphasis falls on the infinite revisability of electronic text, Landow, a professor of English at Brown University and an eminent scholar of the Victorian period, is greatly fascinated (as are most theorists of hypertext) by the concept of the link. He describes in some detail the way links work perceptually: how they break the linear flow that we are accustomed to from our long acquaintance with the discourses generated by print technology. For Landow, these traits of hyperlinks are liberating and empowering because "the linear habits of thought associated with print technology often force us to think in particular ways that require narrowness, decontextualization, and intellectual attenuation, if not downright impoverishment."
Everything about this sentence strikes me as highly arguable, but let's not argue now. Instead, let's think for a moment in a far more mundane way about how links are available to us in the typical web browser—because thinking in this way will, I believe, lead us to a deep problem with Lanham and Landow's celebratory rhetoric. So, here we are, online, reading a page. A link offers itself (by its color or by its underlining); we click on the link. The page we are reading now disappears and another replaces it; or, in certain situations, a new window displaying a new page appears superimposed on the page we had been reading, which now lurks in the background, mostly or wholly hidden from view. The new page also presents links which will take us still elsewhere—eventually, perhaps, back to the page where we began. Pages of electronic text seem to circulate without a clear directional force; they seem to form an infinite web, with interstices at which we can turn in any direction.
But web browsers need not have been configured in just this way. What if, when we clicked that link, the new window appeared beneath, or beside, the one we had been looking at, so that both are equally visible, though in smaller portions? What if that process of subdividing the screen were to continue up to a limit set by the user—say, four pages displayed at a time, perhaps numbered sequentially, with earlier pages relegated to the History menu (as all previously viewed pages now are)?
There are, of course, very good reasons why browsers are not configured in this way, most of them involving the size and resolution of the typical monitor; but my point is that they could have been configured in this way, and if browsers had evolved along those lines, people like Landow would talk about links in a different way than they now do. This "tiling" of a screen's open windows was in fact the method used by the very first version of Microsoft's Windows, announced in 1983 and released in 1985, when all that the windows contained was plain text—and, very likely, it was one of the chief reasons Windows 1.0 flopped. But if that model had caught on—which could have happened if the technology of monitors had enabled larger and more precisely rendered images than it did at the time—today the experience of linking would be different because the software architecture would produce a different experiential environment.
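To make the thought experiment concrete, here is a small sketch of my own (in Python, and purely hypothetical—no browser I know of works quite this way) of the tiling rule imagined above: each followed link claims a pane, the screen is divided among at most four visible pages, and anything older is relegated to a history list.

```python
# A sketch of the imagined "tiling" browser: up to four pages share the
# screen side by side; older pages are pushed into a history list.
# This models only the layout rule, nothing else.

from dataclasses import dataclass

MAX_PANES = 4

@dataclass
class Pane:
    url: str
    x: float       # left edge, as a fraction of screen width
    width: float   # pane width, as a fraction of screen width

class TilingBrowser:
    def __init__(self):
        self.open_pages: list[str] = []
        self.history: list[str] = []

    def follow_link(self, url: str) -> list[Pane]:
        self.open_pages.append(url)
        if len(self.open_pages) > MAX_PANES:
            self.history.append(self.open_pages.pop(0))   # oldest page retired
        # Divide the screen evenly among whatever pages remain visible.
        width = 1.0 / len(self.open_pages)
        return [Pane(url=u, x=i * width, width=width)
                for i, u in enumerate(self.open_pages)]

if __name__ == "__main__":
    b = TilingBrowser()
    for link in ["a.html", "b.html", "c.html", "d.html", "e.html"]:
        layout = b.follow_link(link)
    print([(p.url, round(p.x, 2), round(p.width, 2)) for p in layout])
    print("history:", b.history)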
Similarly, the computer scientist (and part-time art critic) David Gelernter, of Yale University, has argued for some years now that computer users have become unnecessarily fixed on the metaphor of the "desktop," as though that were the only way to conceive of our relationship to the information stored on our computers. In his book Machine Beauty and elsewhere, he points out that the desktop metaphor had a particular historical origin—in research done by some scientists working for Xerox, later appropriated by Apple for the Macintosh and then by Microsoft—and contends that the role that computers currently play in many of our lives calls for a different metaphor, one based on the organization of time rather than the organization of space.
Gelernter and his colleagues have created a software environment called Lifestreams that provides an alternative interface with our computers, and with what we store in them. As a writer for Wired put it, Lifestreams is "a diary rather than a desktop." (The project, still in its early stages, can be investigated at www.scopeware.com.) If the Lifestreams model of presenting information to us in the form of time-stamped "cards," with our most recently viewed documents occupying the foreground of our screen, had been developed 20 years ago—and there was (I think) no serious technological barrier to this having happened—then writers like Lanham and Landow might be equally enthusiastic about the effects of computers on writing and learning, but the effects about which they enthused would be quite different ones. I am somewhat troubled by what appears to be a matching of ends to the available means: we become excited about doing whatever our technology is able to do; if its architecture enabled other actions, would the pursuit of those actions automatically become our new goal?
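Lifestreams itself is a substantial piece of software, and I have not seen its code; the few lines below are only my own toy rendering of the organizing idea—documents as time-stamped cards in a single chronological stream, with the most recent ones occupying the foreground—set against the desktop metaphor it is meant to replace.

```python
# A toy "lifestream": every document is a time-stamped entry in one
# chronological stream, and the default view is simply the most recent
# items first -- a diary rather than a desktop.

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Card:
    created: datetime
    title: str

class Lifestream:
    def __init__(self):
        self.cards: list[Card] = []

    def add(self, title: str, when: Optional[datetime] = None) -> None:
        self.cards.append(Card(when or datetime.now(), title))

    def view(self, n: int = 5) -> list[str]:
        """The most recent documents occupy the foreground."""
        recent = sorted(self.cards, key=lambda c: c.created, reverse=True)
        return [f"{c.created:%Y-%m-%d %H:%M}  {c.title}" for c in recent[:n]]

if __name__ == "__main__":
    stream = Lifestream()
    stream.add("Letter to the dean", datetime(2001, 11, 2, 9, 30))
    stream.add("Essay draft: Computer Control", datetime(2002, 1, 15, 14, 0))
    stream.add("Grocery list", datetime(2002, 1, 16, 8, 45))
    print("\n".join(stream.view()))
```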
It is because I worry about this matching of ends to means that Gelernter, who is a deeply humane thinker and anything but a naïve enthusiast about computer technology, nevertheless makes me somewhat uncomfortable when he writes (in Machine Beauty) about the goals of software engineering: "Complexity makes programs hard to build and potentially hard to use"; therefore what we need is a "defense against complexity." For Gelernter,
software's ultimate goal [is] to break free of the computer, to break free conceptually … The gravity that holds the imagination back as we cope with these strange new items is the computer itself, the old-fashioned physical machine. Software's goal is to escape this gravity field, and every key step in software history has been a step away from the computer, toward forgetting about the machine and its physical structure and limitations.
In one sense this is exactly what I want—what all of us want who have been frustrated by our inability to get the computer to perform a task, either because of a bug in the software or because our machine doesn't have enough memory or a fast enough processor. But were it ever to happen that such frustrations were eliminated, that the computer became "transparent" (as Gelernter sometimes puts it) to my wishes and desires, would that be because the computer had matched itself to my character and interests? Or, instead, because I had gradually and unconsciously reshaped my character and interests in order to match the capabilities of the technology?
Whatever emancipation or other benefit we receive from computer technology (from any technology) depends on decisions made by people who know how to design computers, other people who know how to build computer components, and still other people who know how to write code. Given the increasingly central role that computers play in our lives, how comfortable are we—and by "we" I mean average computer users—with knowing so little about how these machines came to be what they are, and to do what they do? How content are we simply to roll a mouse across a pad and let someone else's music tickle our ears?
2
I didn't think much about these matters until about two years ago, when I was reading Neal Stephenson's roller-coaster ride of a novel, Cryptonomicon, and noticed that the book had its own website: www.cryptonomicon.com. The novel is full of arcane and fascinating information about cryptography and cryptanalysis, the first digital computers, the possible alternatives offered by analog computing, gold mining, undersea data cables, data havens, secret societies, the ethics of computer hacking, and so on—it's a worthy successor to Stephenson's earlier (and also technologically literate, funny, and thoughtful) books, especially Snow Crash and The Diamond Age— and I thought the website might offer further information about at least some of these topics. Alas, it did not; but it did contain a curious essay-cum-manifesto (downloadable as a plain-text file, and since published as a short book) by Stephenson, called In the Beginning Was the Command Line. It's an absolutely brilliant piece of polemic, and I'll have more to say about it later, but this is the passage that first caught my attention:
Contemporary culture is a two-tiered system, like the Morlocks and the Eloi in H.G. Wells's The Time Machine, except that it's been turned upside down. In The Time Machine the Eloi were an effete upper class, supported by lots of subterranean Morlocks who kept the technological wheels turning. But in our world it's the other way round. The Morlocks are in the minority, and they are running the show, because they understand how everything works. The much more numerous Eloi learn everything they know from being steeped from birth in electronic media directed and controlled by book-reading Morlocks. So many ignorant people could be dangerous if they got pointed in the wrong direction, and so we've evolved a popular culture that is (a) almost unbelievably infectious and (b) neuters every person who gets infected by it, by rendering them unwilling to make judgments and incapable of taking stands.
Morlocks, who have the energy and intelligence to comprehend details, go out and master complex subjects … so that Eloi can get the gist without having to strain their minds or endure boredom.
What I realized as I was reading this passage is that, in relation to computers particularly, I am an Eloi. I may be as much of a "book-reader" as the most adept Morlock, or more; I may hold a Ph.D. and have a subject to profess; but in relation to the technologies and interfaces that make an obvious and daily difference in how our lives are structured, I am as ignorant as can be. After all, I don't read the books they read—standing in line at Borders, I clutch Mansfield Park or The Divine Conspiracy, while they lug Programming in C++ or Unix Administration in a Nutshell.
Now, you may think that my ignorance should not have been news to me, but it was, and the reasons may be significant. Spending most of my time around "humanities types" as I do, I rarely deal with people who are expert in using computers. Indeed, some of my colleagues and students have even made the mistake of crediting me with significant computer literacy, largely because I can use some applications that daunt the more helpless Eloi and because I know the meanings of a number of computerese acronyms. (Just today the head of Computing Services at the college where I teach saw me and said, "How's the English Department's resident geek?" I fairly glowed with pleasure.) In short, living among the Eloi of Eloi—that is, people who are accustomed to relying every day on computers, applications, and operating systems of whose organization and structure they have no understanding whatsoever—I have gotten used to thinking of myself as a kind of minor-league Morlock.
But I'm not a Morlock. I really don't know anything. And I'm wondering if it might not be time for me to learn a thing or two. But how does one become a Morlock, even a minor-league one? The more I thought about this question, the more my attention came to focus on one issue: the computer's operating system (OS), the set of routines and instructions that govern the basic functions of the computer, and on top of which other applications (word processing programs, web browsers, graphics programs, whatever) run.2
As I read more about these matters, alas, I discovered that when you learn something—even learn a lot—about how an OS works, how one OS differs from another, you've managed to grab only a few small pieces of the puzzle. That is, while such knowledge gets you considerably closer to how the computer "really works" than you get by clicking a mouse button to open a file or application, you're still several removes from the elemental instructions which allow the different parts of the computer—the memory, the processor—to be aware of one another's existence and to negotiate some kind of ongoing relationship. Even if I were to become a masterful manipulator of an existing operating system, I would remain only a user. But if, by some miracle, I were able to learn a few programming skills, even manage to write some code that would generate a tiny subroutine in an OS, I still wouldn't have any idea what was going on at that deeper level. I still wouldn't have the complete knowledge of the computer's workings that the true Morlock should have (or so I tend to feel).
However, it may be that no such true Morlock exists. As W. Daniel Hillis, one of the world's leading computer scientists and a specialist in "massively parallel" supercomputers, explains in his admirably lucid book The Pattern on the Stone, today's machines are so complex that no one completely understands how one works:
Often when a large software system malfunctions, the programmers responsible for each of the parts can convincingly argue that each of their respective subroutines is doing the right thing. Often they are all correct, in the sense that each subroutine is correctly implementing its own specified function. The flaw lies in the specifications of what the parts are supposed to do and how they are supposed to interact. Such specifications are difficult to write correctly without anticipating all possible interactions. Large complex systems, like computer operating systems or telephone networks, often exhibit puzzling and unanticipated behaviors even when every part is functioning as designed …
It is amazing to me that the engineering process works as well as it does. Designing something as complicated as a computer or an operating system can require thousands of people. If the system is sufficiently complicated, no one person can have a complete view of the system.
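Hillis's point can be made painfully concrete with a deliberately trivial example of my own devising: two routines, each doing exactly what its own specification says, combined under an interaction the specifications never settled—in this case, which unit of measure a number is in.

```python
# Two subroutines, each correct by its own specification, wired together
# under an interaction the specifications never pinned down: one module
# reports altitude in feet, the other expects meters.

def altitude_sensor_ft() -> float:
    """Module A: report the current altitude. (Its spec: the value is in FEET.)"""
    return 4_000.0                      # correct, by module A's own spec

def should_deploy_parachute(altitude_m: float) -> bool:
    """Module B: deploy below 1,500. (Its spec: the value is in METERS.)"""
    return altitude_m < 1_500.0         # correct, by module B's own spec

if __name__ == "__main__":
    reading = altitude_sensor_ft()      # 4,000 ft is roughly 1,219 m
    # Each part does "the right thing"; the system as a whole does not:
    # it refuses to deploy at an altitude where it should.
    print("Deploy?", should_deploy_parachute(reading))   # -> False
```

Each module's programmer can "convincingly argue" that his part is correct; the flaw lives in the space between them.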
Now, this does not sound like good news to me. If we are making machines that none of us understands, are we not, perforce, making machines that none of us can control? And what does this state of affairs do to my hopes for Morlockian knowledge?
3
Curiously, though, for many people who know a lot more about computers than I do, our ability to make machines that transcend our abilities to understand them is exciting—a testimony to "the power of modularity," as Carliss Y. Baldwin and Kim B. Clark, of Harvard Business School, put it in their recent book Design Rules. For Baldwin and Clark, the application of the principle of modularity (seen especially as the division of design and production tasks into many separate "modules" governed only by a shared set of "design rules" that ensure ultimate complementarity and fit) "decentralizes" the process of design, increases the number of design options, and "accommodates uncertainty"—that is, allows for the process of design and production to go on in ways that make sense, that are almost certain to produce coherent and useful results, but that are fundamentally unpredictable. In a fully modular environment for the design of a product, or anything else, the precise kind of product that will emerge at the end of the process is not, cannot be, foreseen at the beginning. The people involved must simply trust the "design rules," which set procedures but do not ensure particular results. Baldwin and Clark call this model "design evolution," for reasons we will discuss a bit later.
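A crude sketch of my own (not Baldwin and Clark's) of what a "design rule" looks like in software: a small fixed interface that every module must honor. So long as they honor it, modules can be designed by different people at different times and recombined in ways nobody planned at the outset.

```python
# The "design rule" here is a small fixed interface; any module that
# honors it can be developed separately and plugged in later, which is
# the sense in which modularity "accommodates uncertainty."

from abc import ABC, abstractmethod

class StorageModule(ABC):                    # the shared design rule
    @abstractmethod
    def save(self, name: str, data: bytes) -> None: ...
    @abstractmethod
    def load(self, name: str) -> bytes: ...

class InMemoryStorage(StorageModule):        # one team's module
    def __init__(self):
        self._blobs: dict[str, bytes] = {}
    def save(self, name, data):
        self._blobs[name] = data
    def load(self, name):
        return self._blobs[name]

class LoggingStorage(StorageModule):         # another team's module
    def __init__(self, inner: StorageModule):
        self._inner = inner
    def save(self, name, data):
        print(f"saving {name} ({len(data)} bytes)")
        self._inner.save(name, data)
    def load(self, name):
        return self._inner.load(name)

if __name__ == "__main__":
    # This particular combination was not foreseen by either module's designer.
    store: StorageModule = LoggingStorage(InMemoryStorage())
    store.save("essay.txt", b"Computer Control")
    print(store.load("essay.txt"))
```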
Baldwin and Clark are primarily concerned with the ways in which modularity makes good business sense, but the principle of modularity has other and deeper implications. Some of these are explored in Steven Johnson's fascinating new book Emergence. For Johnson, modularity (though he does not use that term) makes "emergent systems" possible. But what are "emergent systems"? Johnson begins his book with a concise, lucid explanation of the curious behavior of the slime mold, an exceptionally primitive amoeba-like organism that has the ability, at need, to organize itself into large communities which act as a single organism—and then (also at need) to divide again into smaller units. For many years this behavior puzzled scientists, who could but assume that "slime mold swarms formed at the command of 'pacemaker' cells that ordered the other cells to begin aggregating." But no one could find the pacemaker cells. Johnson points out that scientists had similar difficulty in understanding the behavior of ant colonies, with their remarkable divisions of labor and consequent ability to manage an extraordinarily complex social order: surely the ant "queen" was somehow giving orders that the other ants carried out? But no one could figure out how such "orders" could be given.
In fact, no one gives orders in an ant colony, and slime molds have no pacemaker cells. Slime molds and ant colonies are "self-organizing complex systems"—emergent systems, in that their behavior simply emerges rather than deriving from some centralized or hierarchical plan. The division of labor in an ant colony (some ants foraging for food, others taking out the trash, still others disposing of the dead) is thoroughly "modular": the colony's remarkable complexity results from a few simple "design rules" that govern ants' responses to trails of pheromones deposited by other ants. Having long been blinded by our focus on hierarchical, command-governed thinking, Johnson argues, we have only recently been able to discern the power of self-organization: we have "unearthed a secret history of decentralized thinking, a history that had been submerged for many years beneath the weight of the pacemaker hypothesis and the traditional boundaries of scientific research."
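A stripped-down illustration, of my own making and far cruder than anything Johnson describes, may help: each simulated "ant" follows a single local rule—drift toward the neighboring cell with the stronger pheromone scent, and leave a little scent of its own—and a shared trail emerges with no queen, no pacemaker, and no plan.

```python
# No ant receives an order; each follows one local rule (move toward the
# stronger nearby pheromone, deposit a little of its own), yet the colony
# as a whole tends to converge on a common trail.

import random

CELLS = 30     # positions arranged in a ring
ANTS = 40
STEPS = 200

def run() -> list[float]:
    pheromone = [0.0] * CELLS
    ants = [random.randrange(CELLS) for _ in range(ANTS)]
    for _ in range(STEPS):
        for i, pos in enumerate(ants):
            left, right = (pos - 1) % CELLS, (pos + 1) % CELLS
            # Local rule: prefer the neighbor with more pheromone,
            # with a bit of random wandering thrown in.
            if random.random() < 0.2:
                ants[i] = random.choice([left, right])
            else:
                ants[i] = left if pheromone[left] > pheromone[right] else right
            pheromone[ants[i]] += 1.0          # deposit scent where it lands
        pheromone = [0.95 * p for p in pheromone]   # evaporation
    return pheromone

if __name__ == "__main__":
    trail = run()
    peak = max(range(CELLS), key=lambda c: trail[c])
    print("pheromone is strongest around cell", peak)
    print([round(p, 1) for p in trail])
```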
The prevalence of modularity in computer design, and of "emergent" strategies and techniques in software design, makes my dreams of Morlockian power seem absurdly far-fetched. If the people who design and build the microchips, construct the hardware, and write the software code don't understand how exactly the whole thing manages to work—or, as is often the case, why it doesn't—then shouldn't I just become the most skillful computer user I can (a "power user," as the lingo has it) and forget about tinkering under the hood? After all, Johnson argues that, as self-organizing systems become more technologically dominant—a development he thinks is inevitable, especially in computer software—the ability to "accommodate uncertainty" (as Baldwin and Clark put it), to accept one's lack of control over the outcome of a given process, will become a cardinal virtue. Johnson already sees this virtue manifesting itself in children who play computer games that rely on emergence:
The conventional wisdom about these kids is that they're more nimble at puzzle solving and more manually dexterous than the TV generation, and while there's certainly some truth to that, … I think they have developed another skill, one that almost looks like patience: they are more tolerant of being out of control, more tolerant of that exploratory phase where the rules don't all make sense, and where few goals have been clearly defined. In other words, they are uniquely equipped to embrace the more oblique control system of emergent software.
If Johnson is right, then the best course for me to take might be to spend some time cultivating such "patience." In this light, my desire to become more of a Morlock may just indicate an attachment to outdated command-governed models of human behavior.
Perhaps. But something keeps nagging at me: the question of who writes the "design rules," and why. I have already noted the tendency of Baldwin and Clark to talk about "design evolution"; Johnson also likes to employ Darwinian metaphors. It's easy to see why: anytime one establishes a system which, under certain rules, is allowed to develop without explicit direction or control, it starts to look like an ecosystem. In the natural world, biochemistry and environmental conditions combine to establish the "rules" under which certain organisms thrive and others fail; biologists call this "natural selection." But any "evolution" that takes place in the world of computer usage isn't "natural" at all, because some human writes the rules—in the term used by Lawrence Lessig, a professor of law at Stanford University, someone constructs (many people construct) a particular architecture that could be different.
Take Steve Grand, for instance. Grand is an independent English computer scientist who, some years ago, wrote a computer game called Creatures (since succeeded by Creatures 2 and Creatures 3) that relies on techniques of emergence to produce digital life forms called Norns. Grand designed Norns to live for about sixteen hours of screen time, but some skillfully nurturing players of Creatures have kept their Norns "alive" for years. Creatures is just a computer game, some may say, but Grand's view of his achievement is, well, more grand: "I am an aspiring latter-day Baron Frankenstein," he writes in his book Creation: Life and How to Make It. But as that title and subtitle suggest, Grand's ambitions may exceed Dr. Frankenstein's. "A game it might have been, but if you'll forgive the staggering lack of modesty this implies, Creatures was probably the closest thing there has been to a new form of life on this planet in four billion years." And Grand's plans are not confined to his work with his own Norns:
I would like to assert that, although the materialist viewpoint is undoubtedly the truth, it is not the whole truth. I am a computer programmer by background, and as familiar as anyone with the means by which apparently abstract ideas can be reduced to simple mechanical steps. But I believe that the computer … can be the saviour of the soul rather than its executioner. It can show us how to transcend mere mechanism, and reinstate a respect for the spiritual domain that materialism has so cruelly (if unintentionally) destroyed.
To create the first new form of life in four billion years, and to save the souls of any pre-existing nondigital lifeforms—perhaps "ambition" isn't quite the word for it. What's so troubling, to me anyway, about Grand's vision is his apparently complete lack of concern about his fitness for such tasks. Though he can say, with an ironic smile, "It's tough being a god" (as he did to a New York Times reporter), he can't spare much time or worry for the ancient nightmares of technology run amok. The medieval Jewish legend of the Golem, Frankenstein's difficulties with his creation, Arthur C. Clarke's HAL 9000—these stories take a cautionary tone toward technology, reminding us that our wisdom may not be adequate to our aspirations or our expertise. After all, Rabbi Loew only made the Golem because he felt compassion for people who had to work too hard, and wished to provide them some assistance. Grand sees nothing to learn from either the rabbi or the scientists as he contemplates designing creatures that, while fully sentient, couldn't possibly go astray in the way that humans have:
Human beings are not just nasty because we enjoy it. We're nasty because we feel hard done by, because we're doing something we hate and feel trapped by. And we envy other people. When we have intelligent machines, there's no reason at all why these machines will be envious or unhappy, because we will program them to enjoy the things they do.
(How simple a solution! One can but wonder why God didn't think of it.) For Grand, the challenge of "making life" can be met by taking the time and trouble to write a few basic design rules, to situate the resulting "creatures" in a digital environment that's complex enough to be interesting to observers, and then let "natural selection" do its work. Easy as that.
Steve Grand's conviction that he has created new forms of life, and his plans to create fully intelligent digital lifeforms sometime in the near future, constitute an extreme example of what one might call cyber-triumphalism. And it may be that, whatever the results of Grand's work, it will have little relevance to any of us who don't play Creatures. However, in his blue-skies-smiling-at-me view of a future of infinite technological possibility, he's not unusual; many people have invested similar hopes in (for instance) the community-building, information-providing, freedom-enhancing culture of the Internet. But the driving idea behind Lessig's 1999 book Code is that when people talk about the "nature" of cyberspace, of the Internet, they are talking nonsense: "If there is any place where nature has no rule, it is in cyberspace. If there is any place that is constructed, cyberspace is." At the time that Lessig wrote that book, many observers were arguing that the Internet is "unregulable"; but Lessig simply responds, "whether the Net is unregulable depends, and it depends on its architecture." And architecture is determined by code; and code is written by people; people with various beliefs, commitments, motives, and aspirations. Flawed and fallen people, like Steve Grand (and like me). And it is used by people of the same moral constitution. One need not be of a paranoid, or even a suspicious, temperament to be concerned about that.
Still, concern should not lead immediately to repudiation of this or any technology. If an uncritical embrace of technological possibility is dangerous, it is also true, as David Gelernter has rightly cautioned, that "to hate technology is in the end to hate humanity, to hate yourself, because technology is what human beings do"—perhaps the most eloquent response imaginable to Theodore Kaczynski, otherwise known as the Unabomber, one of whose bombs almost killed Gelernter in his Yale University office in 1993.
4
When Lawrence Lessig wrote Code, he believed very strongly that the architecture of cyberspace was still up for grabs: in the last words of the book, he wrote, "We are entering a time when our power to muck about with the structures that regulate is at an all-time high. It is imperative, then, that we understand just what to do with this power. And, more important, what not to do."
By emphasizing "what not to do" Lessig meant to emphasize especially the dangers of overzealous government regulation; but he was aware that other forces also work to regulate cyberspace. And now, in his new book The Future of Ideas, he has come to believe that our window of opportunity to make the most liberating and enriching decisions about the architecture of cyberspace has closed. We had our chance, and, at least for now, we have blown it. But it is not the U.S. government that has closed the window, in Lessig's view; rather, it is the increasing dominance of cyberspace by two businesses: AOL Time Warner and Microsoft:
Additions that benefit either company will be encouraged; additions that don't, won't … Content and access will once again be controlled; the innovation commons will have been carved up and sold.
For Lessig, "the irony astounds. We win the political struggle against state control so as to reentrench control in the name of the market."
In light of these dismal reflections, it would appear that this out-of-control feeling that I can't quite get rid of—whether because I lack patience, as Steven Johnson suggests, or for some nobler reason—has two basic sources: first, my ignorance of computer technology, and second, the increasing dominance that a very few organizations have over my experience of using the computer. I have tried to avoid the latter problem by minimizing my acquaintance with Mr. Gates's products, but insofar as I have stuck with Apple products instead, there's an irony in that.
Indeed, Apple's history, as Neal Stephenson points out, is that of a "control freak" corporate culture: the bosses of Apple "have been able to plant this image of themselves as creative and rebellious freethinkers," but they have historically insisted on an unbreakable bond between their hardware and their software. The fences the company has historically built around software developers; its refusal to allow the Mac OS to run on other hardware except for a brief period in the '90s; its insistence on maintaining a "closed architecture" whose workings are inaccessible to the ordinary user; its determination to maintain what Stephenson calls a "rigid monopoly"—all these tendencies have made Apple a thoroughly inappropriate champion of freedom and autonomy for the computer user. (Some of these tendencies may be changing with the introduction, in the spring of 2001, of Mac OS X; but that's a matter I will take up in another essay.) Strangely, it is only Apple's failure to achieve the monopoly it desires that has enabled it to continue to market itself as a computer company for those who would "Think Different." That said, I continue to think that Apple's products are cool as all get-out, and I remain a devoted aficionado; but not an idealistic one.
So, if my devotion to Apple does little to dispel the demons of corporate control, and nothing at all to correct my technological ignorance, what hope is there? Some hope, I think, and it comes from an interesting group of people with increasing influence in the world of computers. Collectively they tend to be called the Open Source Software movement, though I prefer to think of them as the Cyber-Amish, for reasons I will explain in my next essay.
These people believe that, largely through the resources of the Internet, communities of like-minded hackers can build all of the software any computer user could possibly need, from the kernel of the OS all the way to graphics, CD-burning, and video-editing applications—even games—and make that software available to anyone who wants it and has the skill to download, install, and configure it. Richard Stallman, of the Free Software Foundation, and Linus Torvalds, who wrote the original Linux kernel—these are the heroes of the Open Source world. If any "true Morlocks" exist, these must be the guys. And they (so I am told) have already bought my ticket to real empowerment, real liberation.
Well. We'll see.
—This is the first article in a three-part series.
Article 2: Life Among the Cyber-Amish
Article 3: The Virtues of Resistance
Alan Jacobs is professor of English at Wheaton College. He is the author most recently of A Visit to Vanity Fair: Moral Essays on the Present Age (Brazos Press), which he is recording for a Mars Hill audio book, and A Theology of Reading: The Hermeneutics of Love (Westview Press).
1. However, music-making software similar to—but funkier and funnier than—Music Mouse is available for Windows as well as Mac: C. Todd Robbins's Sound Toys (formerly just Sound Toy—it has grown in recent years). It can be seen, and bought, at the Sound Toys website. Sound Toys, whose sound palette is primarily blues-based, with some world-music and electronica inflections, goes beyond Music Mouse in that it allows you to record and save your "compositions."
2. The OS is usually thought of as a single program, or super-program, but in fact it is an assemblage of the most common and valuable actions (or, in the jargon, subroutines) that a computer is called upon to perform. For instance, the OS allows you to open all files in the same way, and to enter text via the keyboard in the same way; it would be quite cumbersome for every application program to have to provide its own subroutines for such universal actions, and confusing for users, so instead the writers of such programs rely on the OS's way of doing things.
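To put the point of this note in concrete terms: rather than supplying its own routine for talking to the disk, a program asks the operating system to do the work through the same shared calls every other program uses. The Python sketch below (my own illustration, assuming a Unix-like system) reads a file purely through those OS-provided routines—open, read, close.

```python
# Rather than writing its own routine for talking to the disk, a program
# asks the operating system to do it, via the same system calls every
# other program uses. Python's os module is a thin wrapper around those
# calls (open, read, close on a Unix-like system).

import os

def read_file_via_os(path: str) -> bytes:
    """Open and read a file using the OS's own open/read/close routines."""
    fd = os.open(path, os.O_RDONLY)    # ask the OS to open the file
    try:
        chunks = []
        while True:
            chunk = os.read(fd, 4096)  # ask the OS for the next 4 KB
            if not chunk:
                break
            chunks.append(chunk)
        return b"".join(chunks)
    finally:
        os.close(fd)                   # hand the file back to the OS

if __name__ == "__main__":
    # Reads this very script through the operating system's file routines.
    print(len(read_file_via_os(__file__)), "bytes")
```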
Books mentioned in this Essay:
- Carliss Y. Baldwin and Kim B. Clark, Design Rules, Volume 1: The Power of Modularity (MIT Press, 2000).
- David Gelernter, Machine Beauty: Elegance and the Heart of Technology (Basic Books, 1998).
- Steve Grand, Creation: Life and How to Make It (Harvard Univ. Press, 2001).
- W. Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work (Basic Books, 1998).
- Steven Johnson, Emergence: The Connected Lives of Ants, Brains, Cities, and Software (Scribner, 2001).
- George P. Landow, Hypertext 2.0: The Convergence of Contemporary Critical Theory and Technology (Johns Hopkins Univ. Press, 1997).
- Richard A. Lanham, The Electronic Word: Democracy, Technology, and the Arts (Univ. of Chicago Press, 1993).
- Lawrence Lessig, Code: And Other Laws of Cyberspace (Basic Books, 1999).
- Lawrence Lessig, The Future of Ideas: The Fate of the Commons in a Connected World (Random House, 2001).
- Neal Stephenson, Cryptonomicon (HarperCollins, 1999).
- Neal Stephenson, In the Beginning Was the Command Line (Avon, 1999).
Copyright © 2002 by the author or Christianity Today/Books & Culture magazine.