School of Interactive Computing (IC) robotics Ph.D. student Siddhartha Banerjee remembers the moment he fell in love with LEGOs. He had a neighbor in India, where he grew up, who bought his son an early-generation LEGO tech kit. Flip a switch, and a motor would move the LEGO structures around.
This particular model was a spaceship. Flip a few more switches and everything from the payload door to a satellite or flashing LED lights would activate.
“I thought it was the coolest thing,” Banerjee recalls fondly.
He begged his parents for the same spaceship kit, but it was expensive, and he had never really owned any of LEGO’s robotic kits. So, instead, his parents built up to it. Each birthday, Banerjee received a new LEGO set. He would build the model, starting with plain, unmoving sets, and then wait for his next birthday for another.
Finally, around his 13th birthday, he got the spaceship that he had wanted all that time ago.
“I still have that kit lying around,” he said, some 14 years later.
By the time he got the spaceship, LEGO had begun producing its Mindstorms kits, which feature programmable robotics. Banerjee was fascinated by the opportunity to program his creations.
“As a kid, it was amazing to be able to type something on a screen and watch as your LEGO robot did what you told it to do,” he said. “I think that was the first time I really encountered robotics, and I was sold.”
He joined an extracurricular group in high school that competed in the Indian Robot Olympiad. Admittedly, his team didn’t do very well, but he was part of it, and that was what mattered most.
“We were abysmal, but it was fun,” he said. “I think that’s where the interest came from for me.”
A career interrupted
That interest has led him on a path to Georgia Tech where he began studying as a Ph.D. student in 2015. He came by way of Yale University and a two-year stint in industry. At Yale, his main focus was on electrical engineering. At Redfin, a real estate brokerage, it was on web development.
But he never lost that affinity for robotics, and he feared his career being pigeonholed into one area. So, in 2015, he came to Georgia Tech to study with Associate Professor Sonia Chernova.
With Chernova, Banerjee has been exploring interruptibility, an interesting and largely untapped area of robotics research. Interruptibility research examines a robot’s ability to distinguish between someone who is free and approachable and someone who is distracted or focused on another task.
The field has important applications in environments like hospitals or, as Banerjee and Chernova pitched to NASA, in space exploration.
Consider the scenario:
On a space or extraterrestrial deployment, robots work alongside human astronauts. Perhaps one of the robots needs assistance to complete a programmed task, but the accompanying astronauts are busy. The astronauts’ time is obviously very valuable, and while mission control is likely always available, communicating with it involves high latency because of the distance between the two parties.
The more effective approach would be to request assistance from the astronaut, but it is preferable that the robot interrupt only at a convenient moment.
“That was the impetus for this project,” Banerjee said.
To study the subject, Banerjee said he and his collaborators began with some specific clues in human behavior. When individuals are busy, their body pose, such as hunching over a task, shows engagement with something else. An unoccupied individual’s gaze, by contrast, is likely wandering, looking around and not focused on one thing in particular.
“There are body cues,” Banerjee said. “There are also contextual cues such as where you are. Are you in the kitchen, or are you in the office? If you’re in the kitchen, are you working on a laptop, or are you drinking from a coffee mug? Those things signal whether you might be interruptible or not.”
Interruptibility has been studied extensively in human-computer interaction research to gauge when people are available for smartphone messages. On robots, contextual cues used in HCI can be augmented with context from understanding a scene through the robot’s cameras, such as with the use of object detection. Coupled with social cues, these signals can yield robust interruptibility estimates.
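The idea of fusing body cues and contextual cues can be sketched in a few lines of code. This is a minimal, illustrative rule-based scorer, not the researchers’ actual model: the cue names, weights, and scoring rule are all assumptions made up for the example.

```python
# Hypothetical sketch of fusing cues into an interruptibility estimate.
# The features, weights, and thresholds here are illustrative assumptions,
# not the actual model used in the research described above.

def interruptibility_score(gaze_wandering: bool,
                           hunched_over: bool,
                           detected_objects: list[str]) -> float:
    """Return a score in [0, 1]; higher means more interruptible."""
    score = 0.5  # start from a neutral prior
    # Body cues: a wandering gaze suggests availability,
    # while a hunched pose suggests engagement with another task.
    if gaze_wandering:
        score += 0.25
    if hunched_over:
        score -= 0.25
    # Contextual cues from object detection: a laptop signals focused work,
    # a coffee mug signals a break.
    if "laptop" in detected_objects:
        score -= 0.2
    if "coffee mug" in detected_objects:
        score += 0.2
    return round(max(0.0, min(1.0, score)), 3)

# Someone drinking coffee with a wandering gaze scores high ...
print(interruptibility_score(True, False, ["coffee mug"]))   # 0.95
# ... while someone hunched over a laptop scores low.
print(interruptibility_score(False, True, ["laptop"]))       # 0.05
```

In practice, the cue values themselves would come from perception modules (pose estimation for the body cues, an object detector for the context), and the hand-set weights would be replaced by a learned classifier.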
The research is far from complete, but early findings from associated studies include:
- The time it takes a human to complete a task while being interrupted by a robot is largely unaffected. Why? Humans tend to speed up their work to make up for lost time. When they speed up, however, the accuracy and effectiveness of the overall task begin to decrease.
- A larger effect was seen on the robot’s task performance. If a robot can interrupt an individual quickly, rather than waiting until someone becomes available to assist, its efficiency increases.
The latest research in this area is currently under review for publication.
Planning for the future
In the future, Banerjee sees the research taking a slightly different direction. More specifically, he and his collaborators would like to step back and figure out what creates the need for an interruption in the first place.
“There’s a need for a robot to approach a human,” Banerjee said. “What’s that need? Is it a new environment configuration that the robot hasn’t seen before, and is this a trigger for interrupting a human to ask for a demonstration?”
In his two-plus years at the school, he has served as social chair and now president of RoboGrads.
He said he’s looking forward to seeing what the next few years have in store for his research, collaboration, and community at Georgia Tech.
“I’m very thankful that Georgia Tech gave me a chance to work with its amazing researchers,” he said.