The Robot Made Me Do It

How Robots Are Changing the Lives of Children with Disabilities


The behavioral therapy study group was amazed. They had been trying to engage a child who wouldn’t share. The very concept of turn-taking was not in this child’s frame of reference. Then a miniature humanoid robot was added to the mix. Almost immediately, the child was interacting with it, saying, “It’s your turn, it’s your turn.”

Roboticist Dr. Ayanna Howard’s voice softens as she recalls this moment: “One of the clinicians who was observing turned to me, and her eyes were like, ‘Oh my gosh! Did you just see that?’ These are the kind of stories that make you say, ‘Yeah, I’m going to keep doing this. I’m going to keep doing this.’”

What Dr. Howard, Chair of the School for Interactive Computing at Georgia Institute of Technology, is continuing to do is find ways to use robots to help children with behavioral or motor disabilities.

Dr. Ayanna Howard and graduate student Jin Xu setting up the NAO robot.


“You’re working with a target population that’s been typically underserved, and therefore the things you do really make a difference. It’s not like the next big gadget that maybe 50 billion people will use, but if they didn’t have it, it wouldn’t change their lives. With this, there may be only 1,000 kids who’ll use it—but it significantly changes their lives.”

Dr. Howard’s fascination with robots, which goes back to her own childhood, has led her to an area called human-robot interaction, in which robots are an integral part of a human’s life. “Humans and robots are in this world together,” she says. “This is a thread that has run through all my research.”

Earlier in her career, while at NASA, Dr. Howard was interested in building intelligent space rovers that could mimic the thinking patterns of scientists. Her research has since evolved into healthcare robotics, and she is now focusing on ways to incorporate concepts like trust and transparency into algorithms. To Dr. Howard, these are key concepts in designing what she refers to as “service robots,” whether they’re being used in self-driving cars or as therapy coaches for children with disabilities.

Dr. Howard holds the NAO robot while graduate student Jin Xu initializes the program on the laptop.

Dr. Howard’s research utilizes NAO robots from SoftBank Robotics to keep the children in the program engaged during the physical therapy sessions.

Robots to Engage and Encourage

Physical therapy sessions are hard; they are designed to push a patient’s ability. And the sessions can be repetitive, requiring the patient to make the same movement time and time again. Children with developmental disabilities such as cerebral palsy must endure hours upon hours of therapy. Keeping them engaged in the therapy regimen is of utmost importance.

This is where the cool factor of robots comes into “play”. Almost every child has played a video game, but not everyone gets to play with a robot. Currently, Dr. Howard’s team is working with small humanoid NAOs from SoftBank Robotics. These robots are about two feet tall and can be very expressive. The team has found these robots keep the child interested and engaged.

“Studies have shown that if kids with disabilities like cerebral palsy are to improve, they need home exercise programs,” says Dr. Yu-Ping Chen, professor of physical therapy at Georgia State University and Dr. Howard’s clinical associate. 


Graduate student Jin Xu sets up the NAO robot in the Georgia Tech lab.

“Parents understand the importance of this, but they typically don’t have the time and knowledge to oversee it. The robot can provide guidance, motivation, and feedback, while creating a bond with the child who is using it.”

Dr. Howard concurs. “Children are used to technology. They like it, it’s intuitive to them, so a robot fits into that category of technology. It’s different enough that it’s novel, but still intuitive, so it’s not an uncomfortable experience, like, ‘What is this? What am I supposed to do with this?’ It’s like ‘Oh, it’s a robot, I’m comfortable with it, I know how to use it and interact with it.’”

In addition to the robot, the system now being tested includes a virtual reality game played on a TV screen, which encourages the child to make arm movements that pop a bubble. A Kinect camera lets the children see themselves playing the game.

As the child plays the game, the NAO engages with the child, providing feedback about whether the actions have been performed correctly or not. The fully embodied robot can also show the child the correct way to make a movement, since the 3D representation is more meaningful to a child than a 2D avatar.
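The feedback step described above can be sketched in a few lines. This is a minimal illustration only: it assumes normalized 2D screen coordinates from a skeleton tracker such as the Kinect, and the function names, radius, and phrases are hypothetical, not taken from the team’s actual system.

```python
import math

def hand_reached_bubble(hand_xy, bubble_xy, radius=0.05):
    """Return True if the tracked hand position is within the bubble's radius.

    hand_xy and bubble_xy are (x, y) positions in normalized screen
    coordinates, as a skeleton tracker might report them.
    """
    dx = hand_xy[0] - bubble_xy[0]
    dy = hand_xy[1] - bubble_xy[1]
    return math.hypot(dx, dy) <= radius

def robot_feedback(reached):
    """Choose an encouraging phrase for the robot to speak."""
    if reached:
        return "Great job, you popped it!"
    return "Almost! Try reaching a little higher."

# The child's hand at (0.48, 0.52) is inside the bubble at (0.5, 0.5),
# so the robot praises the child.
print(robot_feedback(hand_reached_bubble((0.48, 0.52), (0.5, 0.5))))
```

In the real system, a correct reach triggers spoken praise, while an incorrect one can trigger the robot’s physical demonstration of the movement.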

Machine Learning to Teach the Robot


One of the key aspects of the robot-assisted therapy is the need for the robot to motivate the child to continue the exercises. To accomplish this, the robot must learn how therapists motivate and coach a child during therapy sessions. To this end, Dr. Howard’s team collects visual data (images and video) from therapy sessions.

Computer vision is used to extract what the child is doing during these sessions: which body movements correspond to which tasks in the therapy? The visual data set also shows how a clinician interacts with a child when the child completes an exercise correctly or incorrectly.

The data set is used to teach the robot the appropriate behavior for interacting with children. Machine learning is used on the visual data set to classify the interactions between child and therapist. From this, the robot learns how to respond to various therapy scenarios and is programmed with the needed reaction to motivate the child to try again. 
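The classification step can be illustrated with a toy example. Everything below is a stand-in: the features, labels, and training examples are invented, and a simple nearest-neighbor rule substitutes for whatever model the team actually trains on its annotated session videos.

```python
import math

# Hypothetical training examples extracted from therapy sessions:
# (movement_accuracy, seconds_taken) -> the therapist's response.
EXAMPLES = [
    ((0.95, 3.0), "praise"),
    ((0.90, 4.0), "praise"),
    ((0.70, 6.0), "encourage_retry"),
    ((0.65, 6.5), "encourage_retry"),
    ((0.40, 8.5), "demonstrate_movement"),
    ((0.35, 9.0), "demonstrate_movement"),
]

def respond(features):
    """Return the response whose training example is nearest (1-NN)."""
    nearest = min(EXAMPLES, key=lambda ex: math.dist(features, ex[0]))
    return nearest[1]

# An accurate, quick attempt is classified like the "praise" examples.
print(respond((0.92, 3.5)))
```

The idea is the same at full scale: map what the camera sees of the child’s attempt onto the response a human therapist would have given.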

The team is currently working to train the NAO in emotional recognition during real-time play. This will enable it to analyze facial gestures in real time, determine if the child has become bored, tired, or distracted, and adjust the game accordingly. This brings together an understanding of psychology and a deeper level of machine learning.
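The adaptation loop might look something like this sketch. The per-frame engagement scores would come from a facial-expression model; here they are made-up numbers, and the smoothing factor and threshold are illustrative assumptions, not values from the team’s work.

```python
def smooth(scores, alpha=0.3):
    """Exponentially smooth noisy per-frame engagement scores (0..1)."""
    avg = scores[0]
    out = [avg]
    for s in scores[1:]:
        avg = alpha * s + (1 - alpha) * avg
        out.append(avg)
    return out

def adjust_game(engagement, low=0.4):
    """Pick a game adjustment when the child seems bored or distracted."""
    return "switch_activity" if engagement < low else "continue"

frames = [0.8, 0.7, 0.5, 0.3, 0.2, 0.2]  # a child gradually disengaging
print(adjust_game(smooth(frames)[-1]))
```

Smoothing matters because a single frame of a glance away should not trigger a change; only a sustained drop in engagement should.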

Hardware is Hard

“Hardware is hard. We use software simulations to test algorithms before deployment to the hardware platform, the robot. Depending on the algorithm and the stage of development, the simulation is completed with a combination of MATLAB, Gazebo, and the NAO Interface.”

Simulation is used to answer questions such as: What are the needed joint kinematics that allow the robot to make a happy gesture? Dr. Howard’s team creates highly efficient prototypes in software that incorporate aspects of gravity, friction, and the environment. The simulations enable the team to quickly iterate if something isn’t working properly.
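To give a flavor of the kind of question a simulation sweep answers, here is a toy two-link version: which shoulder angle lifts the robot’s hand highest for a wave? The link lengths and angles are invented; a real run would use the NAO’s actual kinematics in MATLAB or Gazebo rather than this planar sketch.

```python
import math

UPPER_ARM, FOREARM = 0.10, 0.11  # link lengths in meters (illustrative)

def hand_height(shoulder_deg, elbow_deg):
    """Forward kinematics for a planar two-link arm: hand height above the shoulder."""
    s = math.radians(shoulder_deg)
    e = math.radians(shoulder_deg + elbow_deg)
    return UPPER_ARM * math.sin(s) + FOREARM * math.sin(e)

# Sweep the shoulder angle with a slightly bent elbow and keep the best.
best = max(range(0, 181, 5), key=lambda a: hand_height(a, -20))
print(best, round(hand_height(best, -20), 3))
```

Because a sweep like this runs in milliseconds in software, a bad parameter choice costs nothing, whereas on hardware it could mean a fall or a broken servo.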

“That’s really important, because it costs a lot of time and development in terms of hardware—and you can break stuff, so you don’t want to make too many mistakes.”

A laptop shows the motion tracking results from a physical therapy session.

Computer vision is used to analyze the child’s movement.

“We can look at simulation baselines and make improvements,” Dr. Howard says, “and because we can do it quicker, there’s no fear. If it doesn’t work, it doesn’t matter. We just change a parameter and run the simulation again overnight.”

Simulations enable the team to quickly iterate on improvements to the algorithms. Dr. Howard says, “The advantages of this are it allows us to make things better, quicker, because we can’t necessarily build robots that work correctly the first time, at a price point that makes sense.”


Dr. Howard’s team relies on simulations to test the algorithms before deployment to the robot hardware. 

A Robot for Every Child

Right now, the robots are used in home settings with a therapist present. The goal is to develop the system so it can be used for independent in-home therapy sessions.

“We don’t have it perfect yet,” Dr. Howard says, “because every single child is unique and special, everyone interacts differently, has a slightly different response. Their movement profile is slightly different. So, the big challenge is, how do we ensure that our robot is adaptive enough so that when we bring it in, it can change based on the abilities of the child we are working with at that moment in time? How do you create our system so that it can be used by any parent or clinician, given that there is no norm, when we’re talking about a child with a special need—and make it simple and low-cost enough, in terms of the interface, to use in the home?”

Dr. Chen, for one, has no doubt these challenges can be met. “Whatever I’m asking, Dr. Howard can make it happen,” she says. “As a therapist, I have a lot of crazy ideas. I’m hoping for some of the kids who live far away, they can play a game and we can observe what they are doing from a distance. Whatever I need, I tell her, and she can make this happen. That’s the most exciting part. That’s my dream come true.”

