Robot Learns From Great Surgeons to Perform Own Surgical Tasks

Key Takeaways

  • An AI program has learned to drive a multi-armed surgical robot to perform surgical tasks

  • The AI learned by watching hundreds of videos of robot operations guided by human doctors

  • The AI is now as skillful as human surgeons at performing a handful of key surgical tasks

TUESDAY, Nov. 12, 2024 (HealthDay News) -- A multi-armed surgical robot has become better at treating people by watching videos of human surgeons at work, a new study says.

The robot is powered by the same machine-learning architecture that underpins ChatGPT, but instead of words and text, the program works in kinematics – a language that breaks robotic motion down into math.
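
The study itself does not publish code, but as a rough, hypothetical illustration of what "breaking robotic motion down into math" can look like, the short Python sketch below represents a single step of a surgical arm's movement as a small list of numbers. The field names, units and gripper convention are assumptions made for this example, not details from the study.

```python
# A minimal, hypothetical sketch of a "kinematic" action: robot motion expressed
# as numbers instead of words. Field names and units are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class KinematicAction:
    """One step of end-effector motion for a single robot arm."""
    dx: float       # translation along x, in millimeters (assumed unit)
    dy: float       # translation along y, in millimeters
    dz: float       # translation along z, in millimeters
    droll: float    # rotation about x, in radians
    dpitch: float   # rotation about y, in radians
    dyaw: float     # rotation about z, in radians
    gripper: float  # gripper opening, 0.0 (closed) to 1.0 (open)

    def as_vector(self) -> list[float]:
        """Flatten the motion into the kind of numeric vector a model can predict."""
        return [self.dx, self.dy, self.dz,
                self.droll, self.dpitch, self.dyaw, self.gripper]


# Example: a tiny move toward the tissue while partially closing the gripper.
step = KinematicAction(dx=0.5, dy=0.0, dz=-1.2,
                       droll=0.0, dpitch=0.02, dyaw=0.0, gripper=0.3)
print(step.as_vector())
```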

After analyzing hundreds of surgical videos, the AI became as skillful as human doctors at performing a handful of key tasks required during robot-assisted surgery, researchers are to report this week at the Conference on Robot Learning in Munich, Germany.

“It’s really magical to have this model and all we do is feed it camera input and it can predict the robotic movements needed for surgery,” senior researcher Axel Krieger, an associate professor of mechanical engineering with Johns Hopkins University, said in a news release. “We believe this marks a significant step forward toward a new frontier in medical robotics.”

The study focused on the da Vinci Surgical System, which uses three to four different overhead surgical arms to perform minimally invasive surgeries.

Nearly 7,000 da Vinci robots are used in surgical suites worldwide, and more than 50,000 surgeons have been trained on the system.

The robots are typically guided by human surgeons, but researchers wanted to see if an AI could learn how to perform some tasks using the arms of the da Vinci robot.

The team fed the AI videos recorded from wrist cameras placed on the arms of da Vinci robots during surgical procedures.

“All we need is image input and then this AI system finds the right action,” lead investigator Ji Woong “Brian” Kim, a postdoctoral engineering researcher with Johns Hopkins University, said in a news release. “We find that even with a few hundred demos, the model is able to learn the procedure and generalize [to] new environments it hasn’t encountered.”
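
To make the "image in, action out" idea concrete, here is a minimal behavior-cloning sketch in Python using PyTorch, assuming the general imitation-learning recipe described above: a model looks at camera frames and learns to predict the actions a human surgeon took at that moment. The tiny network, image size, action dimension and training settings are placeholders invented for this example, not the researchers' actual model.

```python
# A minimal behavior-cloning sketch: camera frames in, kinematic actions out.
# Shapes, network and hyperparameters are illustrative assumptions only.
import torch
import torch.nn as nn


class ImageToAction(nn.Module):
    """Map a wrist-camera frame to a 7-number kinematic action."""
    def __init__(self, action_dim: int = 7):
        super().__init__()
        self.encoder = nn.Sequential(  # tiny CNN stand-in for a real vision backbone
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, action_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(frames))


# Toy "demonstration" data: batches of camera frames paired with the actions
# a human surgeon took at that moment (random placeholders here).
frames = torch.randn(8, 3, 96, 96)   # 8 RGB frames, 96x96 pixels (assumed size)
expert_actions = torch.randn(8, 7)   # the surgeon's recorded kinematic actions

model = ImageToAction()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for _ in range(10):                  # a few gradient steps on the toy batch
    predicted = model(frames)
    loss = nn.functional.mse_loss(predicted, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice, the researchers describe training on hundreds of recorded demonstrations rather than the random toy data used here.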

The team trained the robot to perform three tasks:

  • Manipulate a needle.

  • Lift body tissue during a surgical procedure.

  • Suture.

The AI was programmed to interpret videos in terms of relative movements rather than absolute positions, which made it capable of picking up behaviors the researchers never explicitly taught it, results show.
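
As an illustration of why relative movements can help, the hypothetical sketch below records a demonstrated motion as per-step offsets from the tool's current position rather than as fixed coordinates, so the same recording can be replayed sensibly from a different starting point. The numbers and function names are invented for this example.

```python
# Relative (delta) motions replay correctly from any starting position,
# unlike a recording of fixed, absolute coordinates. Values are made up.
import numpy as np

# A demonstrated needle approach, recorded as per-step offsets (dx, dy, dz) in mm.
relative_demo = np.array([
    [0.5, 0.0, -1.0],
    [0.5, 0.2, -1.0],
    [0.2, 0.1, -0.5],
])


def replay(start_position: np.ndarray, deltas: np.ndarray) -> np.ndarray:
    """Apply the recorded offsets step by step from any starting position."""
    return start_position + np.cumsum(deltas, axis=0)


# The same relative trajectory replays sensibly from two different start points.
print(replay(np.array([10.0, 5.0, 20.0]), relative_demo))
print(replay(np.array([42.0, -3.0, 15.0]), relative_demo))
```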

“Here the model is so good [at] learning things we haven’t taught it,” Krieger said. “Like if it drops the needle, it will automatically pick it up and continue. This isn’t something I taught it [to] do.”

The team is now teaching the AI to perform a full surgery, rather than small surgical tasks.

Prior to this advance, programming a robot to perform even simple surgical tasks required hand-coding each step. Someone might spend a decade teaching the robot to suture just for one type of surgery, Krieger said.

“It’s very limiting,” he said. “What is new here is we only have to collect imitation learning of different procedures, and we can train a robot to learn it in a couple days. It allows us to accelerate to the goal of autonomy while reducing medical errors and achieving more accurate surgery.”

Because these findings were presented at a scientific meeting, they should be considered preliminary until published in a peer-reviewed journal.

More information

The Cleveland Clinic has more on the da Vinci Surgical System.

SOURCE: Johns Hopkins University, news release, Nov. 11, 2024

What This Means For You

In the future, AI programs might be able to perform full robotic surgeries after learning by watching the actions of human surgeons.
