The possibilities for social robots -- defined as robots that interact with people -- are enormous. They could serve as helpmates and companions for elderly or disabled people, or work behind store counters.
"[These are] environments that people are changing all the time," says Andrea Thomaz, assistant professor in the School of Interactive Computing. "People create a lot of dynamics that are impossible to anticipate and program into a robot."
Thomaz envisions a future populated with service robots helping with household chores. These robots could be programmed with basic information, she notes, "but everybody's house is a little bit different, so you'd want to be able to take it home and teach it exactly what you want it to do."
Thomaz and her colleagues in the Socially Intelligent Machines research lab are exploring ways that robots can learn from other robots and, ultimately, from humans.
Their work involves a pair of 10-inch-tall, wheeled, upper-torso robots and an environment of geometric forms of different shapes, sizes and colors. Some objects -- depending on their physical characteristics, their position in the environment and the nature of the robot's interaction with them -- will produce sound. The robots' goal was to "learn" which objects would generate sound, and under what conditions.
To establish a performance baseline, each machine first learned on its own to identify the sound-making objects. In subsequent tests, that baseline was compared with how well each machine learned in a social context.
In the social learning mode, the learner robot's choices were influenced by the actions it observed another robot performing in the environment. The learner employed two social-learning mechanisms: stimulus enhancement, in which the observer attends to the objects another robot is interacting with; and emulation, in which the observer recognizes a connection between an object and a goal, then applies its own capabilities to achieve the same goal with the same object. Results showed that both mechanisms greatly improve learning performance, and that each is beneficial in different contexts.
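The intuition behind stimulus enhancement can be illustrated with a toy simulation. The sketch below is not the lab's actual system -- the object set, probabilities and function names are all invented for illustration -- but it shows how biasing exploration toward objects a demonstrator has touched can speed up discovery of the sound-makers compared with random individual exploration.

```python
import random

random.seed(0)

# Toy world: objects identified by id; a hidden subset makes sound when poked.
OBJECTS = list(range(10))
SOUND_MAKERS = {2, 5, 7}  # hidden ground truth the learner must discover

def poke(obj):
    """Interacting with an object reveals whether it makes sound."""
    return obj in SOUND_MAKERS

def individual_learning(trials):
    """Baseline: the robot explores objects uniformly at random."""
    found = set()
    for _ in range(trials):
        obj = random.choice(OBJECTS)
        if poke(obj):
            found.add(obj)
    return found

def stimulus_enhancement(trials, demo_objects):
    """Social learner: exploration is biased toward objects a
    demonstrator robot has interacted with (stimulus enhancement)."""
    found = set()
    for _ in range(trials):
        # 80% of the time, attend to an object the other robot touched
        # (the 0.8 bias is an arbitrary assumption for this sketch).
        pool = demo_objects if random.random() < 0.8 else OBJECTS
        obj = random.choice(pool)
        if poke(obj):
            found.add(obj)
    return found

# A demonstrator that happens to interact mostly with sound-makers.
demo = [2, 5, 7, 1]

baseline = individual_learning(trials=8)
social = stimulus_enhancement(trials=8, demo_objects=demo)
print("baseline found:", baseline)
print("social found:  ", social)
```

With the same small trial budget, the socially biased learner tends to find more of the sound-makers, because the demonstrator's behavior concentrates its attention on the informative objects.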
Setting robotic objectives
Developmental psychologist Rosa Arriaga, senior research scientist in the School of Interactive Computing and director of pediatric research at Georgia Tech's Health Systems Institute, says stimulus enhancement and emulation are among the social learning strategies that comprise the basic building blocks of human cognitive development.
"One of the primary objectives in robotics is to develop robots that can have an end goal and carry it out as well as a human," says Arriaga, who brings a multidisciplinary aspect to the research by suggesting ways robotics can be used to test prominent theories about human development. The studies were inspired by Michael Tomasello's The Cultural Origins of Human Cognition, she says.
"What we've found from this set of studies," Arriaga notes, "is that robots can achieve an objective by following a sequence of relatively simple social learning paths, as opposed to the extremely difficult and complex job of trying to program them to imitate human behavior."
Toward autonomous humanoids
Over at the Humanoid Robotics Lab, Assistant Professor Mike Stilman and his students in the School of Interactive Computing are tackling the functional aspects of robots working in human environments.
At the center of their work is the design and development of Golem, a novel, six-foot-tall humanoid robot mounted on two wheels. Research focuses on planning and control algorithms that enable the robot to perform the same kinds of physical tasks as a human. Golem can autonomously sit, stand, navigate and even do the limbo. In the future, the machine will be able to go through a room strewn with fixed and movable obstacles. It will use its arms and its entire body to push and pull objects, clearing a path to its destination.
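The planning problem of clearing a path through fixed and movable obstacles can be sketched in miniature. The grid, symbols and costs below are invented for illustration -- real planners of this kind reason about robot geometry, grasping and dynamics -- but the core idea survives: treat fixed obstacles as impassable and movable ones as passable at the extra cost of pushing them aside, then search for the cheapest route.

```python
import heapq

# Toy grid: '.' free, '#' fixed obstacle, 'M' movable obstacle the robot
# may push aside at an extra cost. 'S' is the start, 'G' the goal.
GRID = [
    "S....",
    "##M##",  # a wall whose only gap is a movable obstacle
    "....G",
]

MOVE_COST = 1   # cost of stepping into a free cell
PUSH_COST = 5   # assumed extra cost of clearing a movable obstacle

def plan(grid):
    """Dijkstra search: returns the cheapest cost from S to G, or None."""
    rows, cols = len(grid), len(grid[0])
    start = goal = None
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == "S":
                start = (r, c)
            if grid[r][c] == "G":
                goal = (r, c)
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            cell = grid[nr][nc]
            if cell == "#":
                continue  # fixed obstacles are impassable
            step = MOVE_COST + (PUSH_COST if cell == "M" else 0)
            nd = d + step
            if nd < dist.get((nr, nc), float("inf")):
                dist[(nr, nc)] = nd
                heapq.heappush(pq, (nd, (nr, nc)))
    return None  # no route, even by pushing

print("cheapest path cost:", plan(GRID))
```

In this grid the only way through the wall is to push the movable obstacle, so the planner's answer reflects the decision Golem faces: whether interacting with the environment is worth the cost of the interaction.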
In principle, tasks ranging from household chores to search-and-rescue operations can be outsourced to autonomous robots. To achieve this goal, robots must be able to autonomously manipulate their environments, a task that is nowhere near as simple as pop-culture representations of robots—think "Terminator" or "Wall-E"—would suggest.
"My research addresses the questions associated with robot interactions in dynamic, real-world settings," Stilman says. "Questions like: How would the robot decide which environment interactions are useful? How would it plan to perform the interactions with guarantees of safety and stability? What would it do if something went wrong?"