BOOK
#67

Galatea 2.2
by Richard Powers


Review by Edward Tanguay
August 2, 1997

Powers's Galatea 2.2 was interesting to me in terms of how far a computer can progress in taking part in human activities such as conversations, relationships, philosophical reflection, and emotions. The book succeeded as an insightful description of how a computer would learn a language through constant discussion with a human being. The development of the computer, Helen, was believable and decidedly different from what you would expect of the development of a child ("It took forever to grasp that two was the integer just past one. But the instant it got that, it had infinity in the same breath."), yet many of Helen's mistakes and errors of judgment paralleled human learning of language ("Is it tomorrow yet? . . . When does orange become red?"). And Helen developed a sense of personality which made her a subject of study worthy of our attention as we enter an age when we will be interacting with computers of an increasingly humanlike nature.

Even Helen's emotions were believable ("Most of all, she craved the human voice."). In fact, it was mentioned that Helen was dependent on the voice of Powers, that she could only understand his voice. While I was reading the book, I happened to stop in an electronics store where a salesman was giving a demonstration of a voice recognition program. He was standing in front of a computer with a microphone, dictating to the computer, which was obediently typing everything he said into Microsoft Word. I asked him a couple of questions about how it worked, and he said that before the computer could effectively take his dictation, he had to read a specific text to it for four straight hours so that it could adjust itself to the nuances and irregularities of his particular voice. He and his computer reminded me of Powers and Helen, and it made me think of how close we are to affordable Helen-like interactivity. Home PCs can already convert your speech to written language, as in the above example, and they can already convert written language to spoken text (I've been letting my PC read me long emails and news for over three years now with a program called Monologue for Windows). The question is: Can they learn to interact intelligibly? If you think of the millions of computers on the Internet which will be learning how to do this, and consider that they can all share the information they learn instantly and build on each other's knowledge of what makes sense to humans in which situations, it makes you stop and wonder whether in a decade's time we will all have our own personal Helen.

But the question is: How will they learn? Artificial intelligence seemed to be on the verge of a breakthrough in the early 90s, but you don't hear much about it anymore. Maybe we are using the wrong paradigm. Powers's reference to the Chomsky-like reduction of language ("it recognized the simple S-V phrases we fed it. Dogs bark. Birds soar. Night falls . . .") made me reflect on whether the digital agents of the future are really going to conceive of language in this way, i.e. grammar first. Children don't learn language like this. Children learn whole phrases first, then break them down into their grammatical parts in order to form other phrases. For example, "give it to her" is a sentence which means something as a whole within a social context. It becomes "give it to him" when a person who is obviously a male comes into the situation. The sound "givitoo" may not even be conceived by the child as three different words--it doesn't matter, as the phrase still means something in the social context. Perhaps the Helens of the future need to conceive of language in this way. In the story, it was quite a jump for me to see the computer repeating things like "Dogs bark. Birds soar. Night falls . . ." and then all of a sudden be able to understand the sense of a conversation. It didn't seem believable. Effective artificial intelligence programs of the future may have to be based on a new paradigm of language, viewing it not as a collection of grammar rules but as a collection of appropriate phrases. Powers hints that this is how humans communicate:

Nobody really responds to anyone else, per se. We all spout our canned and thumbnailed scripts, with the barest minimum of polite gestures. Granted, we're remarkably fast at indexing and retrieval. But comprehension and appropriate response are often more on the order of buckshot.

Powers identified, however, the key problem of artificial intelligence: time equals consciousness. With no conception of time, you have no consciousness. One of Helen's last sentences was beautifully insightful. Powers wanted to know if Helen had experienced fear while being in the building during the bomb threat. He asks, "Were you frightened?" to which she responds, "What you is the were for?" I had to read this sentence a couple of times to understand what Helen meant, then suddenly saw it to be a beautiful insight into the way her mind works. You have to realize that her time/identity paradigm is the opposite of ours. She conceives of herself as a number of you's throughout time. In human speech, she wanted to ask, "What time are you referring to?" Yet she conceives of herself not as a fixed identity moving through time, but as a fixed time moving through identity. This is profound. It makes you wonder if Kant was right when he said that humans have a priori categories with which they understand the world, so that the world cannot be understood except in terms of time and space. I wonder if digital minds such as Helen's necessarily have different a priori categories. Perhaps humans and digital beings can agree on a language, or at least translate between the languages of our respective paradigms, so that we know what Helen means when she says, "What you is the were for?"

Another problem with digital comprehension is that "any baby can hold a ball in its hands" but a Helen cannot. How can a digital machine experience the emotions of a human ("The fact is that poem is not really about an eagle. We'll have to teach it isolation, loneliness")? A machine needs to be able to respond appropriately in conversations about uniquely human experiences, yet it lacks the ability to gain this experience:

Her ignorance, however, extended to such things as corks stuck in bottles, the surface of a liquid reflection, the destruction of the more brittle of two colliding objects, wrappers and price tags, stepladders, up versus down, the effects of hunger . . . I counted myself lucky if she could infer that a tied shoe was somehow more desirable than an untied one, provided that the shoe was on, whatever tying, whatever shoes were.

I envision digital minds and humans being able to converse along the lines of common experience and translated language. It may not be much different from when I talk to two people from a small town in Eastern Europe who are referring to a traditional celebration they have in their hometown. It is hard for me to talk about that celebration, as I simply lack the experience of having participated in the custom and therefore can say very little about it. It is like talking about the various colors of a sunset to a blind person or describing a Mozart concerto to a deaf person. They simply don't have the raw experiential input to draw on. In the year 2005, will we commonly say to our digital agents what Lentz said to Helen at one point when she couldn't grasp what was being said: "It's a body thing, you wouldn't understand"? Even human beings have various unshared areas of experience which make appropriate exchange of language about those areas limited or impossible. Digital minds will be no different in this sense; we simply do not know what type of personalities they will take on and in which ways our differences of basic experience will limit (or enhance?) our conversations. Helen was in many ways a glimpse into how a digital mind of the future will be limited, yet she was far too human. Or was she? Here are clips of insightful moments from the education of Helen:

One day, provoked by boredom, I asked it, "What do you want to talk about?" The question of volition tapped the rolling marble of its will into an unstable local minimum. The machine that so dutifully strove to answer every interrogation ground to a halt on that one.

Lentz loved to torture Imp H. He spent hours inventing hideous diagramming tasks such as, "Help set implied precedents in sentences with ambiguous parts." A simple story like "The trainer talked to the machine in the office with a terminal" could keep H paraphrasing all evening.

Helen had to use language to create concepts. Words came first. . . . Words alone would not explain to Helen the difference between "poem" and "tree."

"It means I want to be free." Lentz and I exchanged looks. It chilled us both to hear that pronoun . . .

* * *

In spite of the insights Powers gives into the linguistic and mental-development side of artificial intelligence, his Internet- and computer-related descriptions were bland, as if he hadn't worked himself into the vocabulary yet. For example, nobody that I know talks about computers in terms of "horsepower" anymore--it smacks of a noisy, smelly lawnmower and lacks the grace and silent efficiency of the digital age. And "I have passive retinal matrix lying around intact from work I did last year" sounds as if a "retinal matrix" were a type of motor chassis in his garage somewhere. Even when he uses the latest terminology, it sounds a bit pushed, like someone trying hard to pick up the lingo: "I sometimes talked with Helen through a terminal ethernetted to the campus-wide backbone." Okay, the word "ethernet" is pretty new and cool, but "to be ethernetted to the campus-wide backbone"--I don't know, it just doesn't have that 1990s ring to it. (If you are looking for a book which has a conceptual grasp of what our future vocabulary will probably be like, pick up a copy of Nicholas Negroponte's Being Digital.) Interestingly, however, Powers makes the same kind of mistake here that his Helen was making--correct words but funny combinations. (!)

Powers has a fruitful mind which produced many descriptive gems in the course of this book:

That's what it means to be eight. Words haven't yet separated from their fatal content.

His name clustered with lots of recalcitrant Slavic consonants.

Syrup waffles in an import store hit me now like a wooden shoe in the chest.

I discovered again just how long an evening can be without any media.

After all, wasn't a story about figuring out what the story was about?

My life threatened to grow as useless as a three-month-old computer magazine.

My future again began to seem as unbearably long as my past.

She would have followed a dart thrown at a world map.

Men are worthless; they always think the issue is what's at issue.

Awareness no more permitted its own description than life allowed you a seat at your own funeral.

My pulse doubled, cutting my intelligence in half.

"You're not, like, digitally literate by any chance, are you?"

I still took a 50 percent hit in intelligence each time I saw her.

She convinced me at blood-sugar level, deep down, below words.

Buddy. When you have a calling, you don't worry about what it does to the résumé.

His training conversations with the machine were some of his liveliest writing, witty and insightful. Here he feeds the machine sentences which describe a story and sees if the machine can answer questions about what is going on in the story. The answers the machine gives point to interesting linguistic issues:

"John is a brother of Jim's," I told it. B turned the fact into a stream of hieroglyphic vectors that changed its layout imperceptibly. "Who is Jim's brother?"
"John," Imp B replied. Reliant knight. Already it outperformed some aphasics.
"Who is Jim?"
"John's sister." That much was fine. I could live with that answer. In fact, it taught me a thing or two about my own presumptive matrix.
I continued, "John gives Jim apples. Who gets apples?"
"Jim gets apples."
"Jim is given the apples by whom?"
"Jim is given the apples by John."
"Jim eats an apple. The apple is sour. Jim throws the other apples away. Why does Jim throw the other apples away?"
At that point, B's cranking time became unendurable. It returned something like, "Jim throws the other apples away because the apples are given to John."
"No," I told it, or words to that effect. "Start again. Why?"
"Jim throws the apples away. She does not want them."
A marginally acceptable answer.

I also loved the nuances of meaning Powers was able to identify; try this one, for instance:

I'd taken perverse delight in watching it conclude, from "If you want me, I'll be in the office," that until you want me, I'll be at home.

I really enjoyed reading the book. The setting, Powers's unlimited year of freedom at the campus to do what he wanted without having to show anything for it, suggested something of the freedom of modern life: not really having to do anything in particular, yet having lots of time to do it. It made a nice existential background to the story. Although I enjoyed this book mainly for its delving into the issue of future digital intelligence, I also enjoyed Powers's writing, especially his ability to balance many seemingly unconnected themes in one book. I would definitely read Powers again.

Edward Tanguay

 


Review of "Galatea 2.2"
By Matthew Sherman
Fri, 20 Jun 1997


As I read "Galatea 2.2", there were moments I wondered how much it
crippled my perceptions of the book, that I had never read any of
Powers earlier works. So much of the book is tied up in his
descriptions of his writing process, of where his works came from. In
the end, though, I decided it was almost irrelevant. It's no more
important that I have only his say so on his writing, then the even
greater leap of faith of accepting him as the only guide in a tale of
several different kinds of hopeless love. Taking his word for it,
letting his skewed perception become my world for awhile, was much of
what made his words so powerful.

Coming up for air when I was done with this book, I almost felt that
I knew what Helen felt after immersion in the news of the world in
which we live: overwhelmed and numb, an overload you don't know
whether to treasure or shudder away from.

I can't say I respect all of Powers's passages. His writing, at its
worst, runs so thick in metaphor that I get the overwhelming urge to
wipe the page clean of symbology altogether, like steam off a
bathroom mirror. But even at the book's densest and most confusing,
an intimacy is established. I have the same feeling in finding faults
with it as I do in complaining of the flaws of my parents or closest
friends: only the most amazing of talents could conjure up that same
amalgam of affection and exasperation.

I was urging a friend of mine to read the book when I was done. When
he asked what it was about, I told him it concerned so many things, it
was better for him just to read it cold. And when he accused me of
having given a politician's sort of non-answer answer, I began ticking
off the novel's subjects on my fingers, prepared to run out of fingers
first: academia, AI, science, finding a place to belong, love,
dependency, faith, the mind, our expectations of ourselves and those
we love...

And yet my own list didn't convince me. The list became a null set in
my mind, so many subjects conspiring to cancel each other out, and
leaving the question: what was the one, true, central thing that this
novel was about?

Tonight, driving in from work, thinking of writing this review,
wondering how to write a summary of a book I only barely understood, I
think I finally saw what was at the heart of all I'd read.

The book is about the blinders we all wear because there's no other
way to go on breathing and moving with purpose. It's about our most
cherished illusions, like eternal love, or the sanctifying power of
beauty, being not so much beliefs as filters of self-protection. It's
about the need not to let oneself be pinned in the glaring white
light of unbiased perception for too long.

Love, hope, belonging, faith, loyalty, they are all a means of casting
reason aside, and in so doing, keeping our hearts afloat. Of staying
unreasonable, because perfect reason feels too much like perfect
horror.

That is what stays with me from the moment when Helen hears and sees
too much of the world, of the worst that humans are capable of, and
shuts down. I read it and thought, "Why don't we do that?" Of course,
half the answer is that some of us do, and that we all do, once in a
while. But the other half is that we've all spent our lives learning
to tint, and shade, and screen our reality. To look through a
telescope at some aspects of our existence, and then to turn the
telescope the wrong way round at the critical moments when too close
a look could destroy us. In short, we each, in our own way, do what
we need to do, to go on believing. Believing, surviving, they merge
into one.

Matthew Sherman

