Imagine you're an elderly retiree in the not-so-distant future. You've chosen to stay in your own home as you get older, a concept called "aging in place." Technological advances have made robot assistants widely available, and you have one. You've given it a name, say, "Kato," after The Green Hornet's competent and capable sidekick. Kato can remind you when it's time to take your medications, fetch you a drink, vacuum the floor, let the dog out, and make your life easier in a thousand other ways.
More than simply a mechanical servant, Kato is a friend and companion. Kato can monitor your health and mental state, ask how you're doing, listen and react appropriately, and even give you a hug when you need one. Kato can call for help in an emergency, or pick you up when you fall down. Sounds great, right?
But what if, instead of gently lifting you back into a standing or sitting position, Kato grabs your wrist, wrenches you up, and tosses you into whatever piece of furniture happens to be nearby? What if Kato tears your clothes or, worse, hurts you in the process? What if Kato isn't designed to take your feelings or physical abilities into account, and simply acts in the most efficient way possible, as robots are usually programmed to do?
Suddenly, Kato isn't a friendly helper, but a threat. In your own home.
In another example, you're the "driver" in an autonomous car. (Think of a self-driving car as a robot – it performs complex, repetitive tasks that were once performed by humans.) You're on your way to the office. Comfortable in the knowledge that your car's in control, you're replying to emails, booking meetings, scrolling through your Instagram feed, when suddenly you feel a jolt. In a panic, you stop the car and get out. Sure enough, to your horror, you've struck a pedestrian, who now lies injured on the curb.
Suddenly, your self-driving car isn't a convenient ride, but a dangerous loose cannon.
So what makes the difference? Where is the line between useful companion and brute automaton? Between efficient transportation and deadly menace? The answer lies in the study of human-robot interaction, or how humans and robots behave toward each other.
Robots becoming more common
In the past, robots didn't interact much with humans, other than with the people who programmed them, used them, or worked in their immediate vicinity. But as automation moves out of the factory and into the world at large, the chances of robots interacting with people in the community are much higher.
Personal care robots and autonomous cars are perfect examples, as both would interact with people they don't serve and who don't control them. Imagine a personal care robot answering the door or directing first responders to where you've fallen down. As for self-driving cars, they'll have to interact with everyone and everything around them, other vehicles and pedestrians alike.
As such technologies get closer to feasibility and widespread adoption, the design decisions made now will have implications far into the future.
"Robots are going to be in our houses and on our streets," says Karthik Mahadevan, a master's student in computer science, studying in UCalgary's uTouch lab in the Faculty of Science. "They're going to interact with us whether we want them to or not. We can choose to design those interactions to be more human, or we can choose to ignore that aspect."
What a car has to tell us
Mahadevan's research centres on autonomous cars, and how they communicate with pedestrians.
"A car has to communicate two things at a crosswalk," says Mahadevan. "Awareness and intent, meaning the car (or driver) has seen a pedestrian, and what it's going to do next."
When people drive cars, the rest of us around them rely on a series of visual and social cues to understand what they intend to do. Turn signals, eye contact, turning wheels, slowing down – these are all ways in which a driver signals their intentions. In turn, we decide whether to cross the street or wait for the car to pass, for example.
But if a car can drive itself, and the driver isn't required to pay attention, how do people around the car know what it's going to do? While autonomous cars currently still have drivers in them at all times, Mahadevan says the day when cars are literally driving themselves isn't that far off.
"Imagine a ridesharing service with a fleet of autonomous cars," says Mahadevan. "These vehicles won't always have a person on board because every few stops someone is going to get out, someone is going to get in, and so on. And these people will be focused on other things. But the car still has to communicate effectively at crosswalks so pedestrians can cross safely."
Recent fatal crashes involving self-driving cars highlight the risks when vehicles don't know how to respond to their surroundings. Most of these crashes have been due to the car's software not reacting properly to something that happened outside the vehicle. So far, all but one of the fatalities have been the drivers of the autonomous vehicles themselves. But in one instance, a self-driving car struck and killed a pedestrian when it failed to stop or swerve while she was crossing the road.
In his research, Mahadevan explores different ways for self-driving cars to communicate what they intend to do. In one study, he looked at different types of signals that autonomous vehicles could be equipped with to communicate with pedestrians. "We augmented vehicles with all sorts of interfaces," he says. "We tried visual cues like a screen or LED strips that communicated with visual information like colour changes and state changes. We tried audio signals, human voices, even haptic feedback delivered to a pedestrian's cellphone when they're standing on the sidewalk." Mahadevan's group even devised an artificial hand, mounted to the roof of a vehicle, which would wave to pedestrians to signal if it was safe for them to cross.
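The awareness-and-intent framing Mahadevan describes can be sketched as a simple mapping from what the car perceives and plans to what it displays. The signal names below are invented for illustration – they are not the actual cues tested in Mahadevan's study:

```python
# Hypothetical sketch of an awareness-and-intent signal for a crosswalk.
# State names and colour suggestions are illustrative assumptions only,
# not the interfaces from Mahadevan's research.

def crosswalk_signal(pedestrian_detected: bool, will_yield: bool) -> str:
    """Return the external display state a self-driving car might show."""
    if not pedestrian_detected:
        return "idle"           # no awareness to communicate
    if will_yield:
        return "yielding"       # e.g. green LED strip: awareness + safe intent
    return "not_yielding"       # e.g. red LED strip: awareness, but keep clear
```

The point of such a mapping is that it always communicates both halves of the message: a pedestrian can distinguish "the car hasn't seen me" from "the car has seen me but won't stop."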
Fully automated roads?
So far, Mahadevan says he hasn't determined which signal is the most effective. "We started by imagining all the possibilities," he says. "Next, I'm trying to examine whether audio or visual cues are more successful."
Aside from crosswalks, Mahadevan says vehicle communication is important in every situation where the vehicle has to interact with others. "Traffic lights, unmanned crossings, four-way stops, highways – the car has to communicate this information at all times," he says. And that's assuming everyone follows the rules at all times – never mind unpredictable circumstances like jaywalking or kids playing in the street.
While autonomous or semi-autonomous cars are still mostly in the testing and development phases, scientists predict the day isn't far off when they dominate streets and highways. "There's a very high probability that autonomous cars will change the way we live within a decade, if not sooner," says Dr. Ehud Sharlin, PhD, an associate professor in UCalgary's Department of Computer Science.
Sharlin, who supervises Mahadevan's work, says the most likely scenario is a transition period where more and more autonomous and semi-autonomous vehicles share the roads with traditional cars driven by people, until the majority of vehicles are driving themselves. This intermingling of human- and machine-driven cars highlights the need to get interaction design right. The more autonomous vehicles are on the roads at the same time as human drivers, the more opportunities there will be for unfortunate incidents.
"In the long-term future, we'll only have autonomous vehicles and they'll be so smart we won't have to worry about them," says Sharlin. "But the short term looks like we'll have a mixture of completely manual cars, semi-autonomous cars with drivers that pay attention, with drivers that don't pay attention, autonomous vehicles with people inside them, with no people inside them, all of them on the road together. The question of how autonomous cars communicate awareness and intent will be fundamental for pedestrians."
Learning curve for people
In order to study which signals may be most effective for cars to communicate with pedestrians, Mahadevan is building a virtual reality simulator that immerses users in an urban environment. Wearing a headset and using a joystick, users walk around a city. As cars approach, users decide whether or not to cross streets based on whatever signals they get from the car about its intent.
While the simulator is still in its early stages, Mahadevan hopes to be able to use it to experiment with different signals and indicators to help pedestrians understand what cars are going to do. Conversely, the studies will also show the best way to teach people what each signal means. Once a universal standard is chosen by car manufacturers, it'll be just as important to make sure the public knows how to read the signals that an autonomous vehicle is giving off.
"If we want these interfaces to be successful, we need to be able to train people to understand what they mean," says Mahadevan.
The social side of robotics
Designing the ways that humans and robots interact with each other isn't just about personal safety. It's also about making robots behave in ways that leave people feeling in control and engaged in something familiar. After all, few people are going to let robots into their homes and their lives if interacting with them feels cold and impersonal, or foreign and threatening.
"Most people getting into robotics are hardcore technologists," says Tim Au Yeung, a PhD student also under Sharlin's supervision. "Most of the thinking goes into getting robots to do the same thing over and over, and not into how they impact or displace the human beings around them. I try to provide a voice that says, 'let's not forget about the human part of human-robot interaction.'"
Au Yeung's current research interests are in social robotics. In particular, his work involves the use of social robots to address mental health issues and to help care for the elderly. (In one of his projects, he studies how a life-size, humanoid robot named Baxter can give hugs.)
As populations age and birth rates decline, Au Yeung sees a future where elderly people will need to be more self-reliant. "We're in a situation where there are fewer and fewer children for every adult out there," he says. "At some point, we're going to have a lot of people who have to get by on their own, even when they're dealing with issues like dementia or physical frailty or loneliness."
For Au Yeung, robotics can provide the elderly with tools to be self-sufficient, and can also help detect and intervene with mental health issues before they become more serious. However, because such issues are so sensitive, explorations of what it means to be human must happen alongside explorations of the technical capabilities of such robots.
Au Yeung, who helps teach an undergrad course in human-robot interaction, says he often guides students to examine the human and social implications of putting personal care robots in an elder's home. "One group this past semester looked at how to create a robot that can take care of your grandmother," he says. "They asked what kinds of things that robot should be able to do. And, more importantly, how should that robot interact with your grandmother that still preserves her dignity, so that she's still a person with agency? What can you do to make her feel like she's still in charge?"
Great robots, great responsibility
Dignity and agency come up often in Au Yeung's discussions of robots. For robots to be effective in mental health and elder care, the people they're caring for need to trust them. And to create and maintain trust, humans need to feel respected and to have a say in what happens to them.
Because human interactions are so complex and nuanced, it's hard to design robots that get social behaviours right. Not only do they need to communicate, but they also need to interpret responses and decide how to react. Also, people with mental health or aging issues may feel more vulnerable and less willing to trust than others. So it's easy to imagine how a robot asking personal questions about your mental state or demanding input about your health history could seem intrusive and unwelcome.
But to be effective, such a robot would have to ask those types of questions – and get honest answers. If the idea is that a robot could check your mental state, your mood, your health, your habits – and call for help if needed – then it needs to have accurate information.
"If we want to help people before they get into disruptive cycles of thinking and mood," says Au Yeung, "well, the challenge is that we need pervasive information about someone to be able to do that."
According to Au Yeung, such technology would be cheap and easy to produce. "You get a $10 microcomputer, you attach it to a $15 camera," he says. "You get it to talk to Google's machine learning services. And for $25 plus a little baling wire, you have something that can identify the faces of people in the room and see if they're happy or not, whether they're smiling or not."
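The monitoring loop Au Yeung describes can be sketched in a few lines. The camera capture and cloud call are stubbed out here; a real build would send each frame to a vision service that returns per-face emotion likelihoods (the `joy`/`sorrow` fields and thresholds below are illustrative assumptions, not a specific API):

```python
# Minimal sketch of a mood-monitoring loop, assuming a vision service
# that returns per-face emotion likelihoods. Field names, thresholds and
# the "check_in" action are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class FaceReading:
    joy: float      # likelihood the face looks happy, 0.0-1.0
    sorrow: float   # likelihood the face looks sad, 0.0-1.0

def assess_mood(readings, low_joy_threshold=0.3, streak_needed=3):
    """Flag for follow-up only after several consecutive low-joy readings,
    so one neutral frame doesn't trigger an intrusive check-in."""
    streak = 0
    for r in readings:
        if r.joy < low_joy_threshold and r.sorrow > r.joy:
            streak += 1
            if streak >= streak_needed:
                return "check_in"   # e.g. the robot asks how you're doing
        else:
            streak = 0              # mood recovered; reset the counter
    return "ok"
```

Requiring a streak of low readings, rather than reacting to a single frame, is one small way a design can respect the agency concerns Au Yeung raises: the robot intervenes on a pattern, not a moment.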
Of course, once any technology comes into being, questions about responsible use always follow. Au Yeung says those discussions should happen sooner, either before or alongside discussions about how to actually build such robots.
"We can gather all kinds of information but once it requires human moderation, it becomes very invasive," says Au Yeung. "But if we think about integrating this technology with machine learning, if we can build in the appropriate privacy measures, if we can protect the person's agency – then we can start addressing issues of equipping the elderly to function without additional support."
Setting the course for the future
Because human-robot interaction is a relatively new field, and social robotics is even newer, Au Yeung says now is the time for students to be able to influence the direction of the discipline and lay the groundwork for the future. "HRI is such a fledgling field," he says. "It's not one of those disciplines where there's 500 years of ordained theories, and we pass them down to you. It's really about exploring a brand-new, cutting-edge area where we're just starting to understand how humans and robots should interact with each other."
Understanding human-robot interaction is also a way for humans to understand themselves, says Sharlin. "Robots are really a reflection on humanity," he says. "This is a mirror – a very deep one that goes beyond the visual. It goes deeply into what makes us human. This is part of what we try to teach in everything we do in our research."
As major social media platforms are now learning the hard way, rushing to develop new technology without thinking about how it could go wrong can be dangerous. "That's a design decision," says Sharlin. "You can abuse the rights to be in people's lives or you can do it correctly."
– – – – –
ABOUT OUR EXPERTS
Dr. Ehud Sharlin, PhD, is an associate professor in UCalgary's Department of Computer Science. His research interests lie in human-robot interaction, tangible user interfaces, virtual reality, mixed reality, mobile interfaces, computer games interfaces and human-computer interaction.