The Real Transformers






By ROBIN MARANTZ HENIG

I was introduced to my first sociable robot on a sunny afternoon in June. The robot, developed by graduate students at the Massachusetts Institute of Technology, was named Mertz. It had camera sensors behind its eyes, which were programmed to detect faces; when it found mine, the robot was supposed to gaze at me directly to initiate a kind of conversation. But Mertz was on the fritz that day, and one of its designers, a dark-haired young woman named Lijin Aryananda, was trying to figure out what was wrong with it. Mertz was getting fidgety, Aryananda was getting frustrated and I was starting to feel as if I were peeking behind the curtain of the Wizard of Oz.

Mertz consists of a metal head on a flexible neck. It has a childish computer-generated voice and expressive brows above its Ping-Pong-ball eyes — features designed to make a human feel kindly toward the robot and enjoy talking to it. But when something is off in the computer code, Mertz starts to babble like Chatty Cathy on speed, and it becomes clear that behind those big black eyes there’s truly nobody home.

In a video of Aryananda and Mertz in happier times, Aryananda can be seen leaning in, trying to get the robot’s attention by saying, “I’m your mother.” She didn’t seem particularly maternal on that June day, and Mertz didn’t seem too happy, either. It directed a stream of sentences at me in apparently random order: “You are too far away.” “Please teach me some colors.” “You are too far away.”

Maybe something was wrong with its camera sensor, Aryananda said. Maybe that was why it kept looking up at the ceiling and complaining. As she fiddled with the computer that runs the robot, I smiled politely — almost as much for the robot’s sake, I realized, as for the robot maker’s — and thought: Well, maybe it is the camera sensor, but if this thing wails “You are too far away” one more time, I’m going to throttle it.

At the Humanoid Robotics Group at M.I.T., a robot’s “humanoid” qualities can include fallibility and whininess as much as physical traits like head, arms and torso. This is where our cultural images of robots as superhumans run headlong into the reality of motors, actuators and cold computer code. Today’s humanoids are not the sophisticated machines we might have expected by now, which just shows how complicated a task it was that scientists embarked on 15 years ago when they began working on a robot that could think. They are not the docile companions of our collective dreams, robots designed to flawlessly serve our dinners, fold our clothes and do the dull or dangerous jobs that we don’t want to do. Nor are they the villains of our collective nightmares, poised for robotic rebellion against humans whose machine creations have become smarter than the humans themselves. They are, instead, hunks of metal tethered to computers, which need their human designers to get them going and to smooth the hiccups along the way.

But these early incarnations of sociable robots are also much more than meets the eye. Bill Gates has said that personal robotics today is at the stage that personal computers were in the mid-1970s. Thirty years ago, few people guessed that the bulky, slow computers being used by a handful of businesses would by 2007 insinuate themselves into our lives via applications like Google, e-mail, YouTube, Skype and MySpace. In much the same way, the robots being built today, still unwieldy and temperamental even in the most capable hands, probably offer only hints of the way we might be using robots in another 30 years.

Mertz and its brethren — at the Humanoid Robotics lab, at the Personal Robotics Lab across the street in another M.I.T. building and at similar laboratories in other parts of the United States, in Europe and in Japan — are still less like thinking, autonomous creatures than they are like fancy puppets that frequently break down. But what the M.I.T. robots may lack in looks or finesse, they make up for in originality: they are programmed to learn the way humans learn, through their bodies, their senses and the feedback generated by their own behavior. It is a more organic style of learning — though organic is, of course, a curious word to reach for to describe creatures that are so clearly manufactured.

Sociable robots come equipped with the very abilities that humans have evolved to ease our interactions with one another: eye contact, gaze direction, turn-taking, shared attention. They are programmed to learn the way humans learn, by starting with a core of basic drives and abilities and adding to them as their physical and social experiences accrue. People respond to the robots’ social cues almost without thinking, and as a result the robots give the impression of being somehow, improbably, alive.

At the moment, no single robot can do very much. The competencies have been cobbled together: one robot is able to grab a soup can when you tell it to put it on a shelf; another will look you in the eye and make babbling noises in keeping with the inflection of your voice. One robot might be able to learn some new words; another can take the perspective of a human collaborator; still another can recognize itself in a mirror. Taken together, though, these small accomplishments bring the field closer to a time when a robot with true intelligence — and perhaps with other human qualities, too, like emotions and autonomy — is at least a theoretical possibility. If that possibility comes to pass, what then? Will these new robots be capable of what we recognize as learning? Of what we recognize as consciousness? Will such a robot know that it is a robot and that you are not?

The word “robot” was popularized in 1920, in the play “Rossum’s Universal Robots,” commonly called “R.U.R.,” by the Czech writer Karel Capek. The word comes from the Czech “robota,” meaning forced labor or drudgery. In the world of R.U.R., Robots (always with a capital R) are built to be factory workers, meaning they are designed as simply as possible, with no extraneous frills. “Robots are not people,” says the man who manufactures them. “They are mechanically more perfect than we are, they have an astounding intellectual capacity, but they have no soul.” Capek’s Robots are biological, not mechanical. The thing that separates them from humans is not the material they are made of — their skin is real skin; their blood, real blood — but the fact that they are built rather than born.

What separates the current crop of humanoid robots from humans is something harder to name. Because if roboticists succeed in programming their machines with a convincing version of social intelligence, with feelings that look like real feelings and thoughts that look like real thoughts, then all our fancy notions about our place in the universe start to get a little wobbly.

Eliminating the Cognition Box

We already live with many objects that are, in one sense, robots: the voice in a car’s Global Positioning System, for instance, which senses shifts in its own location and can change its behavior accordingly. But scientists working in the field mean something else when they talk about sociable robots. To qualify as that kind of robot, they say, a machine must have at least two characteristics. It must be situated, and it must be embodied. Being situated means being able to sense its environment and be responsive to it; being embodied means having a physical body through which to experience the world. A G.P.S. robot is situated but not embodied, while an assembly-line robot that repeats the same action over and over again is embodied but not situated. Sociable robots must be both, and they must also exhibit an understanding of social beings.

The push for sociable robots comes from two directions. One is pragmatic: if Bill Gates is right and the robots are coming, they should be designed in a way that makes them fit most naturally into the lives of ordinary people. The other is more theoretical: if a robot can be designed to learn the same way natural creatures do, this could be a significant boost for the field of artificial intelligence.

Both pragmatism and theory drive Rodney Brooks, author of “Flesh and Machines,” who until the end of last month was director of M.I.T.’s Computer Science and Artificial Intelligence Laboratory, home to the Humanoid Robotics lab that houses Mertz. Brooks is an electric, exaggerated personality, an Australian native with rubbery features and bulgy blue eyes. That mobile face and Aussie accent helped turn him into a cult figure after the 1997 theatrical release of “Fast, Cheap & Out of Control,” a documentary by Errol Morris that featured Brooks — along with a wild animal trainer, a topiary gardener and an expert in naked mole rats — as a man whose obsessions made him something of a misfit, a visionary with a restless, uncategorizable genius.

As Brooks sat with me in his office and reflected on his career from the vantage point of a 52-year-old about to return to full-time research — a man going through what he called “a scientific midlife crisis” — a theme emerged. Each time he faced a problem in artificial intelligence, he said, he looked for the implicit assumption that everyone else took for granted, and then he tried to negate it. In the 1980s, the implicit assumption was that abstract reasoning was the highest form of intelligence, the one that programmers should strive to imitate. This led to a focus on symbolic processing, on tough tasks like playing chess or solving problems in algebra or calculus. Tasks that, as Brooks slyly put it in “Flesh and Machines,” “highly educated male scientists found challenging.”

But Brooks wanted to build an artificial intelligence system that did the supposedly simple things, not mental acrobatics like chess but things that come naturally to any 4-year-old and that were eluding the symbolic processing capabilities of the computers. These cognitive tasks — visually distinguishing a cup from a chair, walking on two legs, making your way from bedroom to bathroom — were difficult to write into computer code because they did not require an explicit chain of reasoning; they just happened. And the way they happened was grounded in the fact that the 4-year-old had a body and that each action the child took provided more sensory information and, ultimately, more learning. This approach has come to be known as embodied intelligence.

That’s where the robots came in. Robots had bodies, and they could be programmed to use those bodies as part of their data gathering. Instead of starting out with everything they needed to know already programmed in, these robots would learn about the world the way babies do, starting with some simple competencies and adding to them through sensory input. For babies, that sensory input included seeing, touching and balancing. For robots, it would mean input from mechanical sensors like video cameras and gyroscopes.

In 1993, Brooks started to develop a new robot, a humanoid equipped with artificial intelligence, according to this new logic. His motivation was more theoretical than practical: to offer a new way of thinking about intelligence itself. Most artificial-intelligence programs at the time were designed from the top down, connecting all relevant processes of a robot — raw sensory input, perception, motor activity, behavior — in what was called a cognition box, a sort of centralized zone for all high-level computation. A walking robot, for instance, was programmed to go through an elaborate planning process before it took a step. It had to scan its location, obtain a three-dimensional model of the terrain, plan a path between any obstacles it had detected, plan where to put its right foot along that path, plan the pressures on each joint to get its foot to that spot, plan how to twist the rest of its body to make its right foot move and plan the same set of behaviors for placing its left foot at the next spot along the path, and then finally it would move its feet.

Brooks turned the top-down approach on its head; he did away with the cognition box altogether. “No cognition,” he wrote in “Flesh and Machines.” “Just sensing and action.” In effect, he wrote, he was leaving out what was thought to be the “intelligence” part of “artificial intelligence.” The way Brooks’s robot was designed to start walking, he wrote, was “by moving its feet.”
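
To make the difference concrete, here is a minimal sketch, in Python, of the sensing-and-action idea as opposed to the planning pipeline described above. It is only an illustration under invented assumptions: the class, the rule thresholds and the sensor names are all hypothetical, not the lab's actual code. But it shows how stepping behavior can fall out of a few local reflexes with no central planner at all.

# A toy illustration (not M.I.T. code): no central "cognition box,"
# just simple reflex rules tied directly to sensor readings.
# Every name below (LegController, loads, thresholds) is invented.

class LegController:
    """One leg, driven only by local rules."""
    def __init__(self, name):
        self.name = name
        self.on_ground = True

    def act(self, load_on_leg):
        # Rule 1: a grounded leg that carries almost no load swings forward.
        if self.on_ground and load_on_leg < 0.2:
            self.on_ground = False
            return f"{self.name}: swing forward"
        # Rule 2: a leg in the air gets planted again.
        if not self.on_ground:
            self.on_ground = True
            return f"{self.name}: plant foot"
        # Otherwise do nothing; the other legs will shift the load.
        return f"{self.name}: hold"

# No terrain model, no planned path: each controller reacts to its own sensor.
legs = [LegController("left"), LegController("right")]
loads = [0.1, 0.9]  # pretend sensor readings: the left leg is nearly unloaded
for leg, load in zip(legs, loads):
    print(leg.act(load))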

This was the approach that Brooks and his team used to design their humanoid robot. This one couldn’t walk. The robot, named Cog, was stationary, a big man-size metal torso with big man-size arms that spanned six and a half feet when extended. But it was designed to think. Perched on a pedestal almost three feet high, it seemed to hulk over its human creators, dominating the Humanoid Robotics lab from 1993 until it was retired 11 years later and put on permanent display at the M.I.T. Museum. (It has been lent out as part of a traveling exhibit, “Robots + Us,” currently at the Notebaert Nature Museum in Chicago.) Its presence was disarming, mostly because it was programmed to look at anything that moved. As one visitor to the lab put it: “Cog ‘noticed’ me soon after I entered its room. Its head turned to follow me, and I was embarrassed to note that this made me happy.”

Cog was designed to learn like a child, and that’s how people tended to treat it, like a child. Videos of graduate students show them presenting Cog with a red ball to track, a waggling hand to look at, a bright pink Slinky to manipulate — the toys children are given to explore the world, to learn some basic truths about anatomy and physics and social interactions. As the robot moved in response to the students’ instructions, it exhibited qualities that signaled “creature.” The human brain has evolved to interpret certain traits as indicators of autonomous life: when something moves on its own and with apparent purpose, directs its gaze toward the person with whom it interacts, follows people with its eyes and backs away if someone gets too close. Cog did all these things, which made people who came in contact with it think of it as something alive. Even without a face, even without skin, even without arms that looked like arms or any legs at all, there was something creaturelike about Cog. It took very little, just the barest suggestion of a human form and a pair of eyes, for people to react to the robot as a social being.

In addition, Cog was programmed to learn new things based on its sensory and motor inputs, much as babies learn new things by seeing how their bodies react to and affect their surroundings. Cog’s arm motors, for instance, were calibrated to respond to the weight of a held object. When a student handed a Slinky to Cog, the oscillators in its elbowlike joints gave feedback about the toy’s weight and position. After a few hours of practice, the robot could make the Slinky slither by raising and lowering its arms. If it was given a heavier Slinky or a drumstick, it would be able to adjust its motions accordingly. The learning was minimal, but it was a start — and it was, significantly, learning derived from the input of motors, gears and oscillators, the robot equivalent of muscles.
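
A rough sense of what that kind of motor-feedback learning involves can be given in a few lines of Python. This is a deliberately simplified sketch, not Cog's control code: the pretend physics, the gain and the target are all invented, but the shape of the loop (act, sense the result, adjust, repeat) is the point.

# A minimal sketch of learning from motor feedback (hypothetical numbers,
# not Cog's real oscillator code): the arm keeps adjusting its swing until
# the sensed motion of the held object looks right, whatever the object weighs.

def sensed_swing(amplitude, object_weight):
    # Pretend physics: a heavier object moves less for the same arm motion.
    return amplitude / object_weight

def learn_to_swing(object_weight, target_swing=1.0, gain=0.3, steps=20):
    amplitude = 0.5                  # start with a guess
    for _ in range(steps):
        error = target_swing - sensed_swing(amplitude, object_weight)
        amplitude += gain * error    # adjust from feedback, not from a model
    return amplitude

print(learn_to_swing(object_weight=1.0))   # a light Slinky
print(learn_to_swing(object_weight=2.5))   # a heavier toy needs a bigger swing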

Cog was able to learn other things too, including finding and naming objects it had never seen before. (The robot had microphones for ears and was equipped with some basic speech recognition software and an artificial voice.) But while Brooks showed a kind of paternal delight in what the robot could do, he was hesitant to give it the label of “learning” per se. “I am so careful about saying that any of our robots ‘can learn,’ ” he wrote in an e-mail message. “They can only learn certain things, just like a rat can learn only certain things and a chimpanzee can only learn certain things and even [you] can only learn certain things.” Even now, 14 years after the Cog project began, each of today’s humanoid robots can still only learn a very small number of things.

Cynthia Breazeal came to Brooks’s lab as a graduate student in 1990 and did much of the basic computational work on Cog. In 1996, when it was time for Breazeal to choose a doctoral project, she decided to develop a sociable robot of her own. Her goals were as much pragmatic as theoretical; she said she hoped her robot would be a model for how to design the domestic robots of the future. The one she built had an animated head with big blue eyes, flirty lashes, red lips that curved upward or downward depending on its mood and pink ears that did the same. She called the robot Kismet, after the Turkish word for fate.

How Smart Can a Robot Be?

Kismet was the most expressive sociable robot built to that point, even though it consisted of only a hinged metal head on a heavy base, with wires and motors visible and eyes and lips stuck on almost like an afterthought. Breazeal is now 39 years old, an associate professor at M.I.T. and director of the Personal Robotics Group. She retains a polished, youthful prettiness, amplified these days by a late pregnancy with her third child. When she talks about Kismet, she is careful to call it “it” instead of a more animate pronoun like “he” or “she.” But her voice softens, her rapid-fire speech slows a little and it can be difficult to tell from her tone of voice whether she’s describing her robot or one of her two preschool-age sons.

The robot expressed a few basic emotions through changes in its facial expression — that is, through the positioning of its eyes, lips, eyebrows and pink paper ears. The emotions were easy for an observer to recognize: anger, fear, disgust, joy, surprise, sorrow. According to psychologists, these expressions are automatic, unconscious and universally understood. So when the drivers on Kismet’s motors were set to make surprise look like raised eyebrows, wide-open eyes and a rounded mouth, the human observer knew exactly what was going on.

Kismet’s responses to stimulation were so socially appropriate that some people found themselves thinking that the robot was actually feeling the emotions it was displaying. Breazeal realized how complicated it was to try to figure out what, or even whether, Kismet was feeling. “Robots are not human, but humans aren’t the only things that have emotions,” she said. “The question for robots is not, Will they ever have human emotions? Dogs don’t have human emotions, either, but we all agree they have genuine emotions. The question is, What are the emotions that are genuine for the robot?”

Unlike Cog’s, Kismet’s learning was more social than cognitive. What made the robot so lifelike was its ability to have what Breazeal called “proto-conversations” with a variety of human interlocutors. Run by 15 parallel computers operating simultaneously, Kismet was programmed to have the same basic motivations as a 6-month-old child: the drive for novelty, the drive for social interaction and the drive for periodic rest. The behaviors to achieve these goals, like the ability to look for brightly colored objects or to recognize the human face, were also part of Kismet’s innate program. So were the facial behaviors that reflected Kismet’s mood states — aroused, bored or neutral — which changed according to whether the robot’s basic drives were being satisfied.

The robot was a model for how these desires and emotions are reflected in facial expression and how those expressions in turn affect social interaction. Take the drive for novelty. With no stimulus nearby, Kismet’s eyes would droop in apparent boredom. Then a lovely thing happened. If there was a person nearby, she would see Kismet’s boredom and wave a toy in front of the robot’s eyes. This activated Kismet’s program to look for brightly colored objects, which in turn moved the robot into its “aroused” affective state, with a facial expression bearing the hallmarks of happiness. The happy face, in turn, led the human to feel good about the interaction and to wave the toy some more — a socially gratifying feedback loop akin to playing with a baby.
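
The loop Breazeal describes can be caricatured in a few lines of Python. The drives, decay rates and thresholds below are invented for illustration (Kismet's real architecture ran across 15 parallel computers), but the sketch captures the cycle: drives sag, the sagging shows on the face, a person responds, and the response satisfies a drive.

# A toy sketch, not Kismet's actual code: drives decay until a stimulus
# satisfies them, and the displayed mood simply follows the drives.

drives = {"novelty": 1.0, "social": 1.0, "rest": 1.0}

def update(drives, saw_toy=False, saw_face=False):
    for k in drives:
        drives[k] = max(0.0, drives[k] - 0.1)   # drives sag over time
    if saw_toy:
        drives["novelty"] = 1.0                 # a waved toy satisfies novelty
    if saw_face:
        drives["social"] = 1.0                  # a nearby face satisfies sociability

def mood(drives):
    if max(drives.values()) >= 1.0:
        return "aroused"   # a drive was just satisfied: the happy face
    if min(drives.values()) < 0.3:
        return "bored"     # droopy eyes that invite a person to intervene
    return "neutral"

for t in range(10):
    update(drives, saw_toy=(t == 8))   # a person waves a toy at step 8
    print(t, mood(drives))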

Kismet is now retired and on permanent display, inert as a bronze statue, at the M.I.T. Museum. The most famous robot now in Breazeal’s lab, the one that the graduate students compete for time with, looks nothing like Kismet. It is a three-foot-tall, head-to-toe creature, sort of a badger, sort of a Yoda, with big eyes, enormous pointy ears, a mouth with soft lips and tiny teeth, a furry belly, furry legs and pliable hands with real-looking fingernails. The reason the robot, called Leonardo (Leo for short), is so lifelike is that it was made by Hollywood animatronics experts at the Stan Winston Studio. (Breazeal consulted with the studio on the construction of the robotic teddy bear in the 2001 Steven Spielberg film “A.I.”) As soon as Leo arrived in the lab, Breazeal said, her students started dismantling it, stripping out all the remote-control wiring and configuring it instead with a brain and body that operated not by remote control but by computer-based artificial intelligence.

I had studied the videos posted on the M.I.T. Media Lab Web site, and I was fond of Leo even before I got to Cambridge. I couldn’t wait to see it close up. I loved the steadiness of its gaze, the slow way it nodded its head and blinked when it understood something, the little Jack Benny shrug it gave when it didn’t. I loved how smart it seemed. In one video, two graduate students, Jesse Gray and Matt Berlin, engaged it in an exercise known in psychology as the false-belief test. Leo performed remarkably. Some psychologists contend that very young children think all minds are permeable and that everyone knows exactly what they themselves know. Older children, after the age of about 4 or 5, have learned that different people have different minds and that it is possible for someone else to hold beliefs that the children themselves know to be false. Leo performed in the video like a sophisticated 5-year-old, one who had developed what psychologists call a theory of mind.

In the video, Leo watches Jesse Gray, who is wearing a red T-shirt, put a bag of chips into Box 1 and a bag of cookies into Box 2, while Matt Berlin, in a brown T-shirt, also watches. After Berlin leaves the room, Gray switches the items, so that now the cookies are in Box 1 and the chips are in Box 2. Gray locks the two boxes and leaves the room, and Leo now knows what Gray knows: the new location for the chips and cookies. But it also knows that Berlin doesn’t know about the switch. Berlin still thinks there are chips in Box 1.

The amazing part comes next. Berlin, in the brown T-shirt, comes back into the room and tries to open the lock on the first box. Leo sees Berlin struggling, and it decides to help by pressing a lever that will deliver to Berlin the item he’s looking for. Leo presses the lever for the chips. It knows that there are cookies in the box that Berlin is trying to open, but it also knows — and this is the part that struck me as so amazing — that Berlin is trying to open the box because he wants chips. It knows that Berlin has a false belief about what is in the first box, and it also knows what Berlin wants. If Leo had indeed passed this important developmental milestone, I wondered, could it also be capable of all sorts of other emotional tasks: empathy, collaboration, social bonding, deception?

Unfortunately, Leo was turned off the day I arrived, inertly presiding over one corner of the lab like a fuzzy Buddha. Berlin and Gray and their colleague, Andrea Thomaz, a postdoctoral researcher, said that they would be happy to turn on the robot for me but that the process would take time and that I would have to come back the next morning. They also wanted to know what it was in particular that I wanted to see Leo do because, it turned out, the robot could go through its paces only when the right computer program was geared up. This was my first clue that Leo maybe wasn’t going to turn out to be quite as clever as I had thought.

When I came back the next day, Berlin and Gray were ready to go through the false-belief routine with Leo. But it wasn’t what I expected. I could now see what I had seen on the video. But in person, I could also peek behind the metaphoric curtain and see something that the video camera hadn’t revealed: the computer monitor that showed what Leo’s cameras were actually seeing and another monitor that showed the architecture of Leo’s brain. I could see that this wasn’t a literal demonstration of a human “theory of mind” at all. Yes, there was some robotic learning going on, but it was mostly a feat of brilliant computer programming, combined with some dazzling Hollywood special effects.

It turned out Leo wasn’t seeing the young men’s faces or bodies; it was seeing something else. Gray and Berlin were each wearing a headband and a glove, which I hadn’t noticed in the video, and the robot’s optical motion tracking system could see nothing but the unique arrangements of reflective tape on their accessories. What the robot saw were bunches of dots. Dots in one geometric arrangement meant Person A; in a different arrangement, they meant Person B. There was a different arrangement of tape on the two different snacks, too, and also on the two different locks for the boxes. On a big monitor alongside Leo was an image of what was going on inside its “brain”: one set of dots represented Leo’s brain; another set of dots represented Berlin’s brain; a third set of dots represented Gray’s. The robot brain was programmed to keep track of it all.

Leo did not learn about false beliefs in the same way a child did. Robot learning, I realized, can be defined as making new versions of a robot’s original instructions, collecting and sorting data in a creative way. So the learning taking place here was not Leo’s ability to keep track of which student believed what, since that skill had been programmed into the robot. The learning taking place was Leo’s ability to make inferences about Gray’s and Berlin’s actions and intentions. Seeing that Berlin’s hand was near the lock on Box 1, Leo had to search through its internal set of task models, which had been written into its computer program, and figure out what it meant for a hand to be moving near a lock and not near, say, a glass of water. Then it had to go back to that set of task models to decide why Berlin might have been trying to open the box — that is, what his ultimate goal was. Finally, it had to convert its drive to be helpful, another bit of information written into its computer program, into behavior. Leo had to learn that by pressing a particular lever, it could give Berlin the chips he was looking for. Leo’s robot learning consisted of integrating the group of simultaneous computer programs with which it had begun.
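
Stripped of the Hollywood animatronics, the bookkeeping behind the demonstration can be sketched in a few lines of Python. The structure below is hypothetical, with invented names and none of the lab's real code, but it shows the essential trick: one belief model per person, updated only when that person can see what happens, plus a rule that reads the other person's goal off his possibly false belief rather than off the world itself.

# A hypothetical sketch of per-agent belief tracking for the false-belief demo.

world = {"box1": "chips", "box2": "cookies"}
beliefs = {"leo": dict(world), "gray": dict(world), "berlin": dict(world)}
present = {"leo", "gray", "berlin"}

def observe(box, new_item):
    """Apply an event to the world and to every agent currently watching."""
    world[box] = new_item
    for agent in present:
        beliefs[agent][box] = new_item

# Berlin leaves; Gray swaps the snacks. Only Leo and Gray see the switch.
present.discard("berlin")
observe("box1", "cookies")
observe("box2", "chips")

# Berlin returns and struggles with Box 1. Leo reads Berlin's goal from
# Berlin's now-false belief, then finds where that item really is.
goal_item = beliefs["berlin"]["box1"]            # "chips"
lever_for = {item: box for box, item in world.items()}
print(f"Berlin wants {goal_item}; press the lever for {lever_for[goal_item]}")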

Leo’s behavior might not have been an act of real curiosity or empathy, but it was an impressive feat nonetheless. Still, I felt a little twinge of disappointment, and for that I blame Hollywood. I’ve been exposed to robot hype for years, from the TV of my childhood — Rosie the robot maid on “The Jetsons,” that weird talking garbage-can robot on “Lost in Space” — to the more contemporary robots-gone-wild of films like “Blade Runner” and “I, Robot.” Despite my basic cold, hard rationalism, I was prepared to be bowled over by a robot that was adorable, autonomous and smart. What I saw in Leo was no small accomplishment in terms of artificial intelligence and the modeling of human cognition, but it was just not quite the accomplishment I had been expecting. I had been expecting something closer to “real.”

Why We Might Want to Hug a Desk Lamp

I had been seduced by Leo’s big brown eyes, just like almost everyone else who encounters the robot, right down to the students who work on its innards. “There we all are, soldering Leonardo’s motors, aware of how it looks from behind, aware that its brain is just a bunch of wires,” Guy Hoffman, a graduate student, told me. Yet as soon as they get in front of it, he said, the students see its eyes move, see its head turn, see the programmed chest motion that looks so much like breathing, and they start talking about Leo as a living thing.

People do the same thing with a robotic desk lamp that Hoffman has designed to move in relation to a user’s motions, casting light wherever it senses the user might need it. It’s just a lamp with a bulky motor-driven neck; it looks nothing like a living creature. But, he said, “as soon as it moves on its own and faces you, you say: ‘Look, it’s trying to help me.’ ‘Why is it doing that?’ ‘What does it want from me?’ ”

When something is self-propelled and seems to engage in goal-directed behavior, we are compelled to interpret those actions in social terms, according to Breazeal. That social tendency won’t turn off when we interact with robots. But instead of fighting it, she said, “we should embrace it so we can design robots in a way that makes sense, so we can integrate robots into our lives.”

The brain activity of people who interacted with Cog and Kismet, and with their successors like Mertz, is probably much the same as the brain activity of someone interacting with a real person. Neuroscientists recently found a collection of brain cells called mirror neurons, which become activated in two different contexts: when someone performs an activity and when someone watches another person perform the same activity. Mirror-neuron activation is thought to be the root of such basic human drives as imitation, learning and empathy. Now it seems that mirror neurons fire not only when watching a person but also when watching a humanoid robot. Scientists at the University of California, San Diego, reported last year that brain scans of people looking at videos of a robotic hand grasping things showed activity in the mirror neurons. The work is preliminary, but it suggests something that people in the M.I.T. robotics labs have already seen: when these machines move, when they direct their gaze at you or lean in your direction, they feel like real creatures.

Would a Robot Make a Better Boyfriend?

Cog, Kismet and Mertz might feel real, but they look specifically and emphatically robotic. Their gears and motors show; they have an appealing retro-techno look, evoking old-fashioned images of the future, not too far from the Elektro robot of the 1939 World’s Fair, which looked a little like the Tin Man of “The Wizard of Oz.” This design was in part a reflection of a certain kind of aesthetic sensibility and in part a deliberate decision to avoid making robots that look too much like us.

Another robot-looking robot is Domo, whose stylized shape somehow evokes the Chrysler Building almost as much as it does a human. It can respond to some verbal commands, like “Here, Domo,” and can close its hand around whatever is placed in its palm, the way a baby does. Shaking hands with Domo feels almost like shaking hands with something alive. The robot’s designer, Aaron Edsinger, has programmed it to do some domestic tricks. It can grab a box of crackers placed in its hand and put it on a shelf and then grab a bag of coffee beans — with a different grip, based on sensors in its mechanical hand — and put it, too, on a shelf. Edsinger calls this “helping with chores.” Domo tracks objects with its big blue eyes and responds to verbal instructions in a high-pitched artificial voice, repeating the words it hears and occasionally adding an obliging “O.K.”

Domo’s looks are just barely humanoid, but that probably works to its advantage. Scientists believe that the more a robot looks like a person, the more favorably we tend to view it, but only up to a point. After that, our response slips into what the Japanese roboticist Masahiro Mori has called the “uncanny valley.” We start expecting too much of the robots because they so closely resemble real people, and when they fail to deliver, we recoil in something like disgust.

If a robot had features that made it seem, say, 50 percent human, 50 percent machine, according to this view, we would be willing to fill in the blanks and presume a certain kind of nearly human status. That is why robots like Domo and Mertz are interpreted by our brains as creaturelike. But if a robot has features that make it appear 99 percent human, the uncanny-valley theory holds that our brains get stuck on that missing 1 percent: the eyes that gaze but have no spark, the arms that move with just a little too much stiffness. This response might be akin to an adaptive revulsion at the sight of corpses. A too-human robot looks distressingly like a corpse that moves.

This zombie effect is one aspect of a new discipline that Breazeal is trying to create called human-robot interaction. Last March, Breazeal and Alan Schultz of the Naval Research Laboratory convened the field’s second annual conference in Arlington, Va., with presentations ranging from a description of how people react to instructions to “kill” a humanoid robot to a film festival featuring videos of human-robot interaction bloopers.

To some observers, the real challenge is not how to make human-robot interaction smoother and more natural but how to keep it from overshadowing, and eventually seeming superior to, a different, messier, more complicated, more flawed kind of interaction — the one between one human and another. Sherry Turkle, a professor in the Program in Science, Technology and Society at M.I.T., worries that sociable robots might be easier to deal with than people are and that one day we might actually prefer our relationships with our machines. A female graduate student once approached her after a lecture, Turkle said, and announced that she would gladly trade in her boyfriend for a sophisticated humanoid robot as long as the robot could produce what the student called “caring behavior.” “I need the feeling of civility in the house,” she told Turkle. “If the robot could provide a civil environment, I would be happy to help produce the illusion that there is somebody really with me.” What she was looking for, the student said, was a “no-risk relationship” that would stave off loneliness; a responsive robot, even if it was just exhibiting scripted behavior, seemed better to her than an unresponsive boyfriend.

The encounter horrified Turkle, who thought it revealed how dangerous, and how seductive, sociable robots could be. “They push our Darwinian buttons,” she told me. Sociable robots are programmed to exhibit the kind of behavior we have come to associate with sentience and empathy, she said, which leads us to think of them as creatures with intentions, emotions and autonomy: “You see a robot like that as a creature; you feel a desire to nurture it. And with this desire comes the fantasy of reciprocation. You begin to care for these creatures and to want the creatures to care about you.”

If Lijin Aryananda, Brooks’s former student, had ever wanted Mertz to “care” about her, she certainly doesn’t anymore. On the day she introduced me to Mertz, Aryananda was heading back to a postdoctoral research position at the University of Zurich. Her new job is in the Artificial Intelligence Lab, and she will still be working with robots, but Aryananda said she wants to get as far away as possible from humanoids and from the study of how humans and robots interact.

“Anyone who tells you that in human-robot interactions the robot is doing anything — well, he is just kidding himself,” she told me, grumpy because Mertz was misbehaving. “Whatever there is in human-robot interaction is there because the human puts it there.”

Nagging, a Killer App

The building and testing of sociable robots remains a research-based enterprise, and when the robots do make their way out of the laboratory, it is usually as part of somebody’s experiment. Breazeal is now overseeing two such projects. One is the work of Cory Kidd, a graduate student who designed and built 17 humanoid robots to serve as weight-loss coaches. The robot coach, a child-size head and torso holding a small touch screen, is called Autom. It is able, using basic artificial-voice software, to speak approximately 1,000 phrases, things like “It’s great that you’re doing well with your exercise” or “You should congratulate yourself on meeting your calorie goals today.” It is programmed to get a little more informal as time goes on: “Hello, I hope that we can work together” will eventually shift to “Hi, it’s good to see you again.” It is also programmed to refer to things that happened on other days, with statements like “It looks like you’ve had a little more to eat than usual recently.”
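
The two behaviors described, greetings that loosen up over the weeks and comments that refer back to earlier days, could be sketched in a few lines of Python. The phrases come from the article; the thresholds and the session counter are invented, and nothing here is Kidd's actual software.

import statistics

# A hypothetical sketch, not Autom's real code.
GREETINGS = {
    "early": "Hello, I hope that we can work together.",
    "later": "Hi, it's good to see you again.",
}

def greeting(session_count, informal_after=5):   # the threshold is invented
    key = "early" if session_count < informal_after else "later"
    return GREETINGS[key]

def daily_comment(calorie_log):
    today, history = calorie_log[-1], calorie_log[:-1]
    if history and today > statistics.mean(history) * 1.1:
        return "It looks like you've had a little more to eat than usual recently."
    return "You should congratulate yourself on meeting your calorie goals today."

print(greeting(session_count=2))
print(daily_comment([1800, 1750, 1900, 2300]))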

Kidd is recruiting 15 volunteers from around Boston to take Autom into their homes for six weeks. They will be told to interact with the robot at least once a day, recording food intake and exercise on its touch screen. The plan is to compare their experiences with those of two other groups of 15 dieters each. One group will interact with the same weight-loss coaching software through a touch screen only; the other will record daily food intake and exercise the old-fashioned way, with paper and pen. Kidd said that the study is too short-term to use weight loss as a measure of whether the robot is a useful dieting aid. But at this point, his research questions are more subjective anyway: Do participants feel more connected to the robot than they do to the touch screen? And do they think of that robot on the kitchen counter as an ally or a pest?

Breazeal’s second project is more ambitious. In collaboration with Rod Grupen, a roboticist at the University of Massachusetts in Amherst, she is designing and building four toddler-size robots. Then she will put them into action at the Boston Science Museum for two weeks in June 2009. The robots, which will cost several hundred thousand dollars each, will roll around in what she calls “a kind of robot Romper Room” and interact with a stream of museum visitors. The goal is to see whether the social competencies programmed into these robots are enough to make humans comfortable interacting with them and whether people will be able to help the robots learn to do simple tasks like stacking blocks.

The bare bones of the toddler robots already exist, in the form of a robot designed in Grupen’s lab called uBot-5. A few of these uBots are now being developed for use in assisted-living centers in research designed to see how the robots interact with the frail elderly. Each uBot-5 is about three feet tall, with a big head, very long arms (long enough to touch the ground, should the arms be needed for balance) and two oversize wheels. It has big eyes, rubber balls at the ends of its arms and a video screen for a face. (Breazeal’s version will have sleek torsos, expressive faces and realistic hands.) In one slide that Grupen uses in his PowerPoint presentations, the uBot-5 robot is holding a stethoscope to the chest of a woman lying on the ground after a simulated fall. The uBot is designed to connect by video hookup to a health care practitioner, but still, the image of a robot providing even this level of emergency medical care is, to say the least, disconcerting.

Does It Know It’s a Robot?

More disconcerting still is the image of a robot looking at itself in the mirror and waving hello — a robot with a primitive version of self-awareness. A first step in this direction occurred in September 2004 with reports from Yale about Nico, a humanoid robot. Nico, its designers announced, was able to recognize itself in a mirror. One of its creators, Brian Scassellati, earned his doctorate in 2001 at M.I.T., where he worked on Cog and Kismet — to which Nico bears a family resemblance. Nico has visible workings, a head, arms and torso made of steel and a graceful tilt to its shoulders and neck. Like the M.I.T. robots, Nico has no legs, because Scassellati, now an associate professor of computer science at Yale, wanted to concentrate on what it could do with its upper body and, in particular, the cameras in its eyes.

Here is how Nico learned to recognize itself. The robot had a camera behind its eye, which was pointed toward a mirror. When a reflection came back, Nico was programmed to assign the image a score based on whether it was most likely to be “self,” “another” or “neither.” Nico was also programmed to move its arm, which sent back information to the computer about whether the arm was moving. If the arm was moving and the reflection in the mirror was also moving, the program assigned the image a high probability of being “self.” If the reflection moved but Nico’s arm was not moving, the image was assigned a high probability of being “another.” If the image did not move at all, it was given a high probability of being “neither.”

Nico spent some time moving its arm in front of the mirror, so it could learn when its motor sensors were detecting arm movement and what that looked like through its camera. It learned to give that combination a high score for “self.” Then Nico and Kevin Gold, a graduate student, stood near each other, looking into the mirror, as the robot and the human took turns moving their arms. In 20 runs of the experiment, Nico correctly identified its own moving arm as “self” and Gold’s purposeful flailing as “another.”
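
The decision rule itself is simple enough to write out. The sketch below, in Python, hard-codes the three outcomes for clarity; the real system, as described above, assigned probabilities that it tuned by practicing in front of the mirror.

# A simplified sketch of the classification rule (illustrative only; Nico
# learned scores rather than following hard-coded branches like these).

def classify_reflection(arm_is_moving, reflection_is_moving):
    if arm_is_moving and reflection_is_moving:
        return "self"      # my own motor command lines up with what I see
    if reflection_is_moving:
        return "another"   # something is moving, but it isn't me moving it
    return "neither"       # a static reflection: the wall, the furniture

# The experiment: Nico and the student take turns moving their arms.
print(classify_reflection(arm_is_moving=True, reflection_is_moving=True))     # self
print(classify_reflection(arm_is_moving=False, reflection_is_moving=True))    # another
print(classify_reflection(arm_is_moving=False, reflection_is_moving=False))   # neither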

One way to interpret this might be to conclude that Nico has a kind of self-awareness, at least when in motion. But that would be quite a leap. Robot consciousness is a tricky thing, according to Daniel Dennett, a Tufts philosopher and author of “Consciousness Explained,” who was part of a team of experts that Rodney Brooks assembled in the early 1990s to consult on the Cog project. In a 1994 article in The Philosophical Transactions of the Royal Society of London, Dennett posed questions about whether it would ever be possible to build a conscious robot. His conclusion: “Unlikely,” at least as long as we are talking about a robot that is “conscious in just the way we human beings are.” But Dennett was willing to credit Cog with one piece of consciousness: the ability to be aware of its own internal states. Indeed, Dennett believed that it was theoretically possible for Cog, or some other intelligent humanoid robot in the future, to be a better judge of its own internal states than the humans who built it. The robot, not the designer, might some day be “a source of knowledge about what it is doing and feeling and why.”

But maybe higher-order consciousness is not even the point for a robot, according to Sidney Perkowitz, a physicist at Emory. “For many applications,” he wrote in his 2004 book, “Digital People: From Bionic Humans to Androids,” “it is enough that the being seems alive or seems human, and irrelevant whether it feels so.”

In humans, Perkowitz wrote, an emotional event triggers the autonomic nervous system, which sparks involuntary physiological reactions like faster heartbeat, increased blood flow to the brain and the release of certain hormones. “Kismet’s complex programming includes something roughly equivalent,” he wrote, “a quantity that specifies its level of arousal, depending on the stimulus it has been receiving. If Kismet itself reads this arousal tag, the robot not only is aroused, it knows it is aroused, and it can use this information to plan its future behavior.” In this way, according to Perkowitz, a robot might exhibit the first glimmers of consciousness, “namely, the reflexive ability of a mind to examine itself over its own shoulder.”

Robot consciousness, it would seem, is related to two areas: robot learning (the ability to think, to reason, to create, to generalize, to improvise) and robot emotion (the ability to feel). Robot learning has already occurred, with baby steps, in robots like Cog and Leonardo, able to learn new skills that go beyond their initial capabilities. But what of emotion? Emotion is something we are inclined to think of as quintessentially human, something we only grudgingly admit might be taking place in nonhuman animals like dogs and dolphins. Some believe that emotion is at least theoretically possible for robots too. Rodney Brooks goes so far as to say that robot emotions may already have occurred — that Cog and Kismet not only displayed emotions but, in one way of looking at it, actually experienced them.

“We’re all machines,” he told me when we talked in his office at M.I.T. “Robots are made of different sorts of components than we are — we are made of biomaterials; they are silicon and steel — but in principle, even human emotions are mechanistic.” A robot’s level of a feeling like sadness could be set as a number in computer code, he said. But isn’t a human’s level of sadness basically a number, too, a reflection of the amounts of various neurochemicals circulating in the brain? Why should a robot’s numbers be any less authentic than a human’s?

“If the mechanistic explanation is right, then one can in principle make a machine which is living,” he said with a grin. That explains one of his longtime ultimate goals: to create a robot that you feel bad about switching off.

The permeable boundary between humanoid robots and humans has especially captivated Kathleen Richardson, a graduate student in anthropology at Cambridge University in England. “I wanted to study what it means to be human, and robots are a great way to do that,” she said, explaining the 18 months she spent in Brooks’s Humanoid Robotics lab in 2003 and 2004, doing fieldwork for her doctorate. “Robots are kind of ambiguous, aren’t they? They’re kind of like us but not like us, and we’re always a bit uncertain about why.”

To her surprise, Richardson found herself just as fascinated by the roboticists at M.I.T. as she was by the robots. She observed a kinship between human and humanoid, an odd synchronization of abilities and disabilities. She tried not to make too much of it. “I kept thinking it was merely anecdotal,” she said, but the connection kept recurring. Just as a portrait might inadvertently give away the painter’s own weaknesses or preoccupations, humanoid robots seemed to reflect something unintended about their designers. A shy designer might make a robot that’s particularly bashful; a designer with physical ailments might focus on the function — touch, vision, speech, ambulation — that gives the robot builder the greatest trouble.

“A lot of the inspiration for the robots seems to come from some kind of deficiency in being human,” Richardson, back in England and finishing her dissertation, told me by telephone. “If we just looked at a machine and said we want the machine to help us understand about being human, I think this shows that the model of being human we carry with us is embedded in aspects of our own deficiencies and limitations.” It’s almost as if the scientists are building their robots as a way of completing themselves.

“I want to understand what it is that makes living things living,” Rodney Brooks told me. At their core, robots are not so very different from living things. “It’s all mechanistic,” Brooks said. “Humans are made up of biomolecules that interact according to the laws of physics and chemistry. We like to think we’re in control, but we’re not.” We are all, human and humanoid alike, whether made of flesh or of metal, basically just sociable machines.

Robin Marantz Henig is a contributing writer. Her last article for the magazine was about evolutionary theories of religion.

 





