The future of MedTech is robotic: AI-driven surgical systems, robotic assistants, and autonomous patient care are poised to transform hospitals and healthcare as we know it.
Building AI-driven robotic systems poses several key challenges. Integrating data collection with expert insights is one. Creating detailed biomechanical simulations for realistic anatomy, sensors, and robots is another. These simulations are crucial for generating synthetic data and training robots. Ensuring a seamless transition from virtual to real-world deployment is vital, as is managing high-bandwidth, multi-modal sensor AI with ultra-low latency during operation.
These challenges underscore the need for a holistic framework consisting of AI computing for training advanced models, simulation computing for developing and validating robotic behaviors in high-fidelity virtual environments, and runtime computing for real-time execution in clinical settings.
NVIDIA Isaac for Healthcare is a domain-specific developer framework for AI healthcare robotics that helps developers solve these challenges. It leverages NVIDIA's three-computer architecture for enabling physical AI.
First, it includes MONAI for pretrained models and agentic AI frameworks, where models like MAISI and VISTA-3D can generate the anatomical synthetic data needed for simulation workflows. Second, it includes NVIDIA Omniverse (NVIDIA Isaac Sim and NVIDIA Isaac Lab) for simulation, enabling developers to bring in medical devices/robots, sensors, and anatomies to create domain-specific, physically accurate virtual environments where robotic systems can safely learn skills. Third, it includes NVIDIA Holoscan for on-robot deployment and real-time sensor processing.
The framework offers capabilities such as digital prototyping, hardware-in-the-loop (HIL) product development and testing, synthetic data generation for AI training, policy training, and real-time deployment for medical robotics across:
- Surgical and interventional robotics.
- Imaging and diagnostic robotics.
- Rehabilitation, assistive, and service robotics.

Isaac for Healthcare: Powering the Next Wave of AI Robotics in Healthcare
Isaac for Healthcare combines the power of digital twins and physical AI for:
- Digital prototyping of next-gen healthcare robotic systems, sensors, and instruments.
- Training AI models with real and synthetic data generated by high-fidelity simulation environments.
- Evaluating AI models in a digital twin environment with hardware-in-the-loop (HIL).
- Collecting data for training robotic policies through imitation learning, by supporting XR and haptics-enabled teleoperation of robotic systems in digital twins.
- Training robotic policies for augmented dexterity (e.g., in robot-assisted surgery), using GPU parallelization to train reinforcement and imitation learning algorithms.
- Continuous testing (CT) of robotic systems through HIL digital twins.
- Creating deployment applications that bridge simulation to the real world and run on physical surgical robots.
The latest release features two end-to-end reference workflows, surgical subtask automation and autonomous robotic ultrasound, covering use cases across surgical and imaging robotics and designed to fast-track the development of autonomous robotic capabilities for your own applications.
The following sections review these workflows.
Robotic surgery subtask automation workflow
This workflow serves as a template for developers aiming to build and deploy surgical subtask automation solutions. By combining digital twins, reinforcement and imitation learning, high-fidelity synthetic data generation, and real-time robotic evaluation, it provides a scalable approach to AI-driven surgical automation.
The workflow builds on ORBIT-Surgical, a collaboration between NVIDIA, PAIR Lab (University of Toronto and Georgia Tech), and AUTOLAB (UC Berkeley), with research collaborations from ETH Zurich.
ORBIT-Surgical is transitioning to Isaac for Healthcare and evolving into the robotic surgery subtask automation workflow, where further development will take place with both existing and new collaborators across academia, industry, and clinical settings.

Collaborators at Johns Hopkins and Stanford Universities integrated a vision-language model (VLM), trained on hours of surgical videos, with the da Vinci Research Kit (dVRK). The system autonomously performs three critical surgical tasks: carefully lifting body tissue, manipulating a surgical needle, and suturing a wound (shown in the figure above).
Using this workflow, developers can bring their own surgical robots, sensors, instruments, and patient models into NVIDIA Omniverse to create high-fidelity surgical digital twins. This enables them to simulate complex procedures like suturing, cutting, and tissue manipulation—without touching a patient—while generating vast amounts of photorealistic physics-based synthetic data at scale to train robot policies.
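As an illustration of what scripted synthetic data generation can look like in such a digital twin, here is a minimal sketch using Omniverse Replicator, run inside Isaac Sim's Python environment. The "organ" semantic class, pose ranges, and output directory are illustrative assumptions, not part of the workflow's actual recipes:

```python
# Minimal Replicator SDG sketch (run inside Isaac Sim's Script Editor or a
# headless Isaac Sim Python session). The "organ" semantic class, pose
# ranges, and output directory are illustrative.
import omni.replicator.core as rep

with rep.new_layer():
    # Virtual endoscope-style camera and a render product to capture from it.
    camera = rep.create.camera(position=(0.0, 0.0, 0.5))
    render_product = rep.create.render_product(camera, (1024, 1024))

    # Randomize the pose of prims tagged with a semantic class on each frame.
    organs = rep.get.prims(semantics=[("class", "organ")])
    with rep.trigger.on_frame(num_frames=1000):
        with organs:
            rep.modify.pose(
                position=rep.distribution.uniform((-0.05, -0.05, 0.0), (0.05, 0.05, 0.02)),
                rotation=rep.distribution.uniform((-10, -10, -10), (10, 10, 10)),
            )

    # Write RGB frames and segmentation annotations to disk for policy training.
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="_out_surgical_sdg", rgb=True, semantic_segmentation=True)
    writer.attach([render_product])
```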
The synthetically generated dataset is then used in Isaac Lab to train reinforcement and imitation learning pipelines, or to fine-tune existing generalist vision-language-action (VLA) models (e.g., π0), capturing the skill and dexterity of human surgeons for surgical robots.
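As a rough sketch of what standing up an Isaac Lab training environment looks like: module paths below follow Isaac Lab 1.x conventions and move between releases, and the task name is a placeholder, since ORBIT-Surgical registers its own surgical tasks.

```python
# Sketch: create a GPU-parallel Isaac Lab environment for RL/IL training.
from omni.isaac.lab.app import AppLauncher

simulation_app = AppLauncher(headless=True).app  # Isaac Sim must boot first

import gymnasium as gym
import torch
import omni.isaac.lab_tasks  # noqa: F401  (importing registers tasks with gym)
from omni.isaac.lab_tasks.utils import parse_env_cfg

TASK = "Isaac-Lift-Cube-Franka-v0"              # placeholder task name
env_cfg = parse_env_cfg(TASK, num_envs=1024)    # thousands of parallel envs
env = gym.make(TASK, cfg=env_cfg)

obs, _ = env.reset()
for _ in range(100):                            # random-action rollout
    actions = 2.0 * torch.rand(env.action_space.shape, device=env.unwrapped.device) - 1.0
    obs, reward, terminated, truncated, info = env.step(actions)

env.close()
simulation_app.close()
```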
Finally, the policies, fully trained in the digital twin, bridge simulation to the real world and are deployed on a physical surgical robot (in this case, the dVRK).
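On the deployment side, a Holoscan application typically wires sensor I/O, inference, and visualization into a real-time pipeline. A minimal skeleton, assuming the Holoscan SDK's Python API with its sample video-replayer and Holoviz operators (the data directory and stream name are placeholders):

```python
# Skeleton of a Holoscan deployment app: stream frames into a visualizer.
# A real on-robot app would insert inference and control operators in between.
from holoscan.core import Application
from holoscan.operators import HolovizOp, VideoStreamReplayerOp

class SurgicalViewerApp(Application):
    def compose(self):
        # Replays a recorded video stream; on-robot, a camera capture
        # operator would take its place.
        source = VideoStreamReplayerOp(
            self, name="source", directory="data/endoscopy", basename="surgical"
        )
        visualizer = HolovizOp(self, name="visualizer")
        # Connect the source's output port to the visualizer's input.
        self.add_flow(source, visualizer, {("output", "receivers")})

if __name__ == "__main__":
    SurgicalViewerApp().run()
```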
Key capabilities of Isaac for Healthcare in the surgical subtask automation workflow:
- Bring your own (BYO) components: Use custom robots, instruments, supplies, and anatomies.
- Simulation-ready environments: Photo-realistic, physics-enabled digital twins.
- Data generation and collection: Synthetic data and expert demonstrations.
- Policy training: Reinforcement and imitation learning for skill acquisition.
- Evaluation and testing: Benchmark in digital twins with HIL testing.
- Sim2Real transfer: Deploy AI from simulation to real-world surgery.

BYO anatomy
This pipeline for creating photorealistic anatomical models starts with AI-assisted synthetic CT generation (using NVIDIA MAISI) and segmentation (using NVIDIA VISTA-3D or Auto3DSeg), followed by mesh conversion, mesh cleaning and refinement, and photorealistic texturing, culminating in the assembly of all textured organs into a unified OpenUSD file.
The workflow enables the creation of patient-specific models for simulating rare or complex cases. This is particularly important because real patient data for such cases is often scarce, making simulation an invaluable tool for training and preparation.
The photorealistic human organ models are available on GitHub.
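To make the mesh-conversion step concrete, here is a minimal sketch of turning a label-map segmentation (such as a VISTA-3D output) into a surface mesh with marching cubes, using nibabel, scikit-image, and trimesh. The file names and the label ID are illustrative:

```python
# Hedged sketch: convert one organ's label-map segmentation into a surface
# mesh, one step of the BYO-anatomy pipeline described above.
import nibabel as nib
import numpy as np
import trimesh
from skimage import measure

seg = nib.load("segmentation.nii.gz")            # per-voxel integer labels
labels = np.asarray(seg.dataobj)
spacing = seg.header.get_zooms()[:3]             # voxel size in mm

LIVER_LABEL = 1                                  # illustrative label ID
mask = (labels == LIVER_LABEL).astype(np.float32)

# Marching cubes extracts the organ's isosurface in physical units.
verts, faces, normals, _ = measure.marching_cubes(mask, level=0.5, spacing=spacing)
mesh = trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals)

# Light cleanup before refinement, texturing, and OpenUSD assembly.
trimesh.smoothing.filter_laplacian(mesh, iterations=5)
mesh.remove_unreferenced_vertices()
mesh.export("liver.obj")                         # later textured and USD-assembled
```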
BYO robot/instrument
The workflow is demonstrated on the da Vinci Research Kit (dVRK), but the provided template generalizes to other robotic platforms. The process for importing your surgical robot follows the general Isaac Sim import guidance; for details, refer to the Isaac Sim URDF Import Tutorial.
Isaac Sim 4.5 provides a streamlined workflow for preparing robot models for simulation by enabling you to convert your robot CAD models into USD format. After the conversion to USD, you can proceed with the essential steps of articulation rigging, which involves adding joint physics and defining the kinematic properties of your robot. Once these crucial preparations are complete, your robot model becomes simulation-ready for integration into the simulation scene (digital twin), where it can interact with organs or other objects in a physically accurate manner.
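A minimal sketch of the URDF import step via Isaac Sim's command API follows; the command names come from the URDF Import tutorial, but module and extension paths vary across Isaac Sim releases, so treat this as schematic:

```python
# Sketch: import a URDF robot description into the current Isaac Sim stage.
# Run inside Isaac Sim's Python environment; the URDF path is a placeholder.
import omni.kit.commands

# Create a default import configuration and tune it for a table-mounted arm.
status, import_config = omni.kit.commands.execute("URDFCreateImportConfig")
import_config.merge_fixed_joints = False   # keep instrument mounts as links
import_config.fix_base = True              # anchor the base to the table
import_config.import_inertia_tensor = True # use inertia values from the URDF

# Parse the URDF and create the USD articulation in the open stage.
status, prim_path = omni.kit.commands.execute(
    "URDFParseAndImportFile",
    urdf_path="/path/to/your_robot.urdf",
    import_config=import_config,
)
print(f"Imported robot articulation at: {prim_path}")
```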
BYO sensor
The workflow offers multiple perception modalities for AI policy learning. Developers can integrate different imaging sensors (e.g., stereo cameras, endoscopic cameras, depth sensors) to tailor the AI perception pipeline.
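For example, attaching an RGB camera with Isaac Sim's Camera helper might look like the following; the module path is as of Isaac Sim 4.x (it has moved between releases), and the prim path and resolution are illustrative:

```python
# Sketch: add an endoscope-style RGB camera to the digital twin.
from omni.isaac.sensor import Camera

camera = Camera(
    prim_path="/World/EndoscopeCamera",  # e.g., parented to the endoscope tip
    resolution=(1280, 720),
)
camera.initialize()

# After stepping the simulation, frames feed the AI perception pipeline.
rgba = camera.get_rgba()                 # numpy array of shape (H, W, 4)
```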
Expert demonstrations collection (through teleoperation)
This workflow also provides recipes and examples for generating high-quality demonstration data via teleoperation across various surgical tasks; such demonstrations are crucial for training and evaluating AI models in surgical robotics.
Various peripheral devices, including a keyboard, SpaceMouse, gamepad, VR controller, and the dVRK Master Tool Manipulator (MTM), can communicate with the digital twin and provide input commands to control the robots in Cartesian space.
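A minimal sketch of reading teleoperation commands with Isaac Lab's device helpers (keyboard shown; the module path is as of Isaac Lab 1.x, and the dVRK MTM requires the workflow's own device integration):

```python
# Sketch: drive a simulated robot from keyboard teleoperation commands.
from omni.isaac.lab.app import AppLauncher

simulation_app = AppLauncher(headless=False).app  # teleop needs a window

from omni.isaac.lab.devices import Se3Keyboard

teleop = Se3Keyboard(pos_sensitivity=0.05, rot_sensitivity=0.05)
teleop.reset()

while simulation_app.is_running():
    # advance() returns a 6-DoF Cartesian delta-pose command plus a gripper
    # open/close command; these map onto the robot's Cartesian controller.
    delta_pose, gripper_cmd = teleop.advance()
    # ... apply the commands to the robot and step the simulation here ...
```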
Policy learning
For task automation, the workflow supports state-of-the-art reinforcement and imitation learning algorithms, such as Action Chunking Transformer (ACT) and Diffusion Policy (https://arxiv.org/abs/2303.04137), for efficient surgical skill acquisition.
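To illustrate the action-chunking idea behind ACT (a schematic, not the workflow's actual implementation): the policy predicts a chunk of future actions per query, and the controller replays the chunk before querying again. The tiny MLP below stands in for ACT's transformer, and all dimensions are illustrative:

```python
# Schematic PyTorch sketch of action chunking, the core idea behind ACT.
import torch
import torch.nn as nn

CHUNK, OBS_DIM, ACT_DIM = 20, 64, 7     # ACT uses chunks of ~100 at high rates

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(), nn.Linear(256, CHUNK * ACT_DIM)
)

def predict_chunk(obs: torch.Tensor) -> torch.Tensor:
    """Predict CHUNK future actions from a single observation."""
    return policy(obs).reshape(CHUNK, ACT_DIM)

obs = torch.randn(OBS_DIM)              # placeholder observation
for _ in range(5):                      # control loop: one query per chunk
    with torch.no_grad():
        actions = predict_chunk(obs)
    for action in actions:              # execute the whole chunk open-loop
        pass                            # send `action` to the robot controller
    obs = torch.randn(OBS_DIM)          # placeholder for a fresh observation
```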
Autonomous robotic ultrasound workflow
Ultrasound imaging is non-invasive, portable, and safe. However, capturing quality ultrasound images requires a skilled ultrasonography technician. With the growing shortage of trained staff, ultrasound imaging clearly illustrates the potential of task automation to scale access to care and support timely, accurate diagnostics.
This reference workflow provides a reproducible, customizable, and modular framework for building robotic ultrasound automation using AI, digital twins, and the broader three-computer framework. Many of its capabilities overlap with those of the robotic surgery subtask automation workflow, so here we review only the capabilities unique to the ultrasound workflow.

Using this workflow, developers can bring their own robotic arm, camera sensor(s), ultrasound probes, and patient models into NVIDIA Omniverse to create high-fidelity ultrasound examination digital twins. You can build realistic anatomical models and virtual probes simulating how ultrasound waves interact with tissues of varying densities, giving a rich dataset for training.
This approach enables you to explore different scanning angles, pressure levels, and anatomical variations without the limitations of a physical lab. Developers can leverage Isaac Lab to ingest data from simulations and expert demonstrations to employ reinforcement learning or imitation learning and train robotic systems on optimal positioning and orienting an ultrasound probe to capture high-quality images.
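To sketch what that exploration can look like, here is an illustrative sampler of randomized probe placements (surface offsets, tilts, and contact forces); the ranges are assumptions, not workflow defaults:

```python
# Illustrative sampler for randomized ultrasound probe placements used in
# synthetic scan generation; all ranges are assumptions, not workflow defaults.
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_probe_pose() -> dict:
    """Sample one probe placement: surface offset (m), tilt (deg), force (N)."""
    return {
        "offset_xy_m": rng.uniform(-0.02, 0.02, size=2),  # slide along the skin
        "tilt_deg": rng.uniform(-15.0, 15.0, size=2),     # in-/out-of-plane tilt
        "contact_force_n": rng.uniform(2.0, 10.0),        # probe pressure
    }

batch = [sample_probe_pose() for _ in range(1000)]        # one batch of scans
```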

Early adopters and ecosystem partners of Isaac for Healthcare
Isaac for Healthcare is accelerating the future of AI-driven medical robotics through collaboration with industry leaders across surgical, interventional, and imaging robotics, as well as robotic arm providers.
In surgical robotics, Virtual Incision is evaluating Isaac for Healthcare for surgical synthetic data generation (SDG) to develop robotic task autonomy for their future robotic surgery systems, and harness realistic simulation environments for improving surgical precision.
Moon Surgical is prototyping autonomous robot setups, enabling dynamic adaptation to surgeons’ techniques and procedural workflows for enhanced precision and efficiency.
In interventional robotics, Neptune Medical is using NVIDIA Omniverse and Isaac Sim to design and simulate robotic endoscopy, enhancing diagnostic capabilities.
XCath is using Isaac for Healthcare to create comprehensive digital twins of its endovascular robot, treatment devices and human vasculature, enabling motion planning and control for autonomous navigation in its catheter-based robotic system.
Leading robotic arm providers such as Kinova and Franka are enabling the developer ecosystem by delivering simulation-ready, pre-built robotic arms within Isaac for Healthcare.
Paired with comprehensive reference workflows, these solutions provide a strong technical foundation for developers to rapidly prototype and deploy autonomous functionalities into medical devices, driving innovation in healthcare robotics.
Get started
Explore Isaac for Healthcare with our comprehensive suite of resources designed to accelerate your journey into AI-driven healthcare robotics. If interested, begin by visiting our early access page.
Select from the Surgical Subtask Automation Workflow or the Autonomous Robotic Ultrasound Workflow to kickstart your project.