Robot regret: New research helps robots make safer decisions around humans
Imagine for a moment that you’re in an auto factory. A robot and a human are working next to each other on the production line. The robot is busy rapidly assembling car doors while the human runs quality control, inspecting the doors for damage and making sure they come together as they should.
Robots and humans can make formidable teams in manufacturing, health care and numerous other industries. While the robot might be quicker and more effective at monotonous, repetitive tasks like assembling large auto parts, the person can excel at certain tasks that are more complex or require more dexterity.
But there can be a dark side to these robot-human interactions. People are prone to making mistakes and acting unpredictably, which can create unexpected situations that robots aren’t prepared to handle.
New and emerging research could change the way robots handle the uncertainty that comes hand-in-hand with human interactions. Morteza Lahijanian, an associate professor in CU Boulder’s Ann and H.J. Smead Department of Aerospace Engineering Sciences, develops processes that let robots make safer decisions around humans while still trying to complete their tasks efficiently.
From left, engineering professor Morteza Lahijanian and graduate student Karan Muvvala watch as a robotic arm completes a task using wooden blocks. (Credit: Casey Cass)
In a new study presented at the International Joint Conference on Artificial Intelligence in August 2025, Lahijanian and graduate students Karan Muvvala and Qi Heng Ho devised new algorithms that help robots create the best possible outcomes from their actions in situations that carry some uncertainty and risk.
“How do we go from very structured environments where there is no human, where the robots are doing everything by themselves, to unstructured environments where there are a lot of uncertainties and other agents?” Lahijanian asked.
“If you’re a robot, you have to be able to interact with others. You have to put yourself out there and take a risk and see what happens. But how do you make that decision, and how much risk do you want to tolerate?”
Similar to humans, robots have mental models that they use to make decisions. When working with a human, a robot will try to predict the person’s actions and respond accordingly. The robot is optimized for completing a task—assembling an auto part, for example—but ideally, it will also take other factors into consideration.
In the new study, the research team drew upon game theory, a mathematical concept that originated in economics, to develop the new algorithms for robots. Game theory analyzes how companies, governments and individuals make decisions in a system where other “players” are also making choices that affect the ultimate outcome.
In robotics, game theory conceptualizes a robot as being one of numerous players in a game that it’s trying to win. For a robot, “winning” is completing a task successfully—but winning is never guaranteed when there’s a human in the mix, and keeping the human safe is also a top priority.
So instead of trying to guarantee a robot will always win, the researchers proposed the concept of a robot finding an “admissible strategy.” Using such a strategy, a robot will accomplish as much of its task as possible while also minimizing any harm, including to a human.
“In choosing a strategy, you don't want the robot to seem very adversarial,” said Lahijanian. “In order to give that softness to the robot, we look at the notion of regret. Is the robot going to regret its action in the future? And in optimizing for the best action at the moment, you try to take an action that you won't regret.”
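To make the regret idea concrete, here is a minimal sketch of minimax-regret action selection in a one-shot game. This is an illustration of the general concept, not the algorithms from the study; the action names, human-behavior labels, and payoff numbers are all hypothetical.

```python
# Illustrative minimax-regret decision rule (hypothetical scenario).
# Rows are robot actions; columns are possible human behaviors;
# entries are the task payoff the robot would receive.
PAYOFF = {
    "continue_assembly": {"cooperative": 10, "distracted": 2},
    "fix_human_error":   {"cooperative": 6,  "distracted": 7},
    "relocate_task":     {"cooperative": 5,  "distracted": 6},
}

def regret_table(payoff):
    """Regret of an action under a human behavior = best achievable
    payoff for that behavior minus the action's payoff."""
    behaviors = next(iter(payoff.values())).keys()
    best = {b: max(row[b] for row in payoff.values()) for b in behaviors}
    return {action: {b: best[b] - row[b] for b in behaviors}
            for action, row in payoff.items()}

def min_max_regret_action(payoff):
    """Pick the action whose worst-case regret is smallest."""
    regrets = regret_table(payoff)
    return min(regrets, key=lambda a: max(regrets[a].values()))

print(min_max_regret_action(PAYOFF))  # → fix_human_error
```

In this toy example, continuing assembly is best if the human cooperates but looks bad in hindsight if they are distracted; the regret-minimizing robot instead chooses the action it is least likely to look back on and wish it had avoided, regardless of how the human behaves.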
Let’s go back to the auto factory where the robot and human are working side by side. If the person makes mistakes or is uncooperative, a robot using the researchers’ algorithms could take matters into its own hands. It will first try to fix the errors without endangering the person. If that doesn’t work, the robot could, for example, pick up what it’s working on and take it to a safer area to finish its task.
Karan Muvvala watches the robotic arm pick up a blue block. (Credit: Casey Cass)
Much like a chess champion who thinks several turns ahead about an opponent’s possible moves, a robot will try to anticipate what a person will do and stay several steps ahead of them, Lahijanian said.
But the goal is not to attempt the impossible and perfectly predict a person’s actions. Instead, the goal is to create robots that put people’s safety first.
“If you want to have collaboration between a human and a robot, the robot has to adjust itself to the human. We don't want humans to adjust themselves to the robot,” he said. “You can have a human who is a novice and doesn't know what they're doing, or you can have a human who is an expert. But as a robot, you don't know which kind of human you're going to get. So you need to have a strategy for all possible cases.”
And when robots can work safely alongside humans, they can enhance people's lives and provide real and tangible benefits to society.
As more industries embrace robots and artificial intelligence, there are many lingering questions about what AI will ultimately be capable of doing, whether it will be able to take over the jobs that people have historically done, and what that could mean for humanity. But there are upsides to robots being able to take on certain types of jobs. They could work in fields with labor shortages, such as health care for older populations, and physically challenging jobs that may take a toll on workers’ health.
Lahijanian also believes that, when they're used correctly, robots and AI can enhance human talents and expand what we're capable of doing.
"Human-robot collaboration is about combining complementary strengths: humans contribute intelligence, judgment, and flexibility, while robots offer precision, strength, and reliability," he said.
"Together, they can achieve more than either could alone, safely and efficiently."
Beyond the Story
Our research impact by the numbers:
- $742 million in research funding earned in 2023–24
- No. 5 U.S. university for startup creation
- $1.4 billion impact of CU Boulder's research activities on the Colorado economy in 2023–24