Background: Augmented reality robot-assisted partial nephrectomy (AR-RAPN) is limited by the need for constant manual overlapping of the hyper-accuracy 3D (HA3D™) virtual models onto the real anatomy. This paper reports our preliminary experience with automatic 3D virtual model overlapping during AR-RAPN.

Materials: To achieve fully automated HA3D™ model overlapping, we pursued computer vision strategies based on the identification of landmarks to which the virtual model is linked. Due to the limited field of view during RAPN, we used the whole kidney as the marker. Moreover, to overcome the color similarity between the kidney and its neighboring structures, we super-enhanced the organ using the NIRF Firefly® fluorescence imaging technology. Purpose-built software named “IGNITE” (Indocyanine GreeN automatIc augmenTed rEality) automatically anchored the HA3D™ model to the real organ, leveraging the enhanced view offered by NIRF technology.
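The paper does not describe IGNITE's internals. As a rough illustration of the general approach it describes (using the fluorescence-enhanced kidney itself as the anchoring marker), the Python/OpenCV sketch below thresholds a green Firefly®-style fluorescence signal, takes the largest connected region as the kidney, and uses its pose to place a pre-rendered 2D projection of the model. All function names, color thresholds, and the 2D-only registration are illustrative assumptions, not the authors' method.

```python
# Minimal sketch (not the IGNITE implementation): isolate the NIRF-enhanced
# kidney in an endoscopic frame and derive an anchor pose for a 2D overlay.
# Function names, thresholds, and parameters are illustrative assumptions.
import cv2
import numpy as np

def find_kidney_anchor(frame_bgr, min_area=5000):
    """Return (centroid, rotated bounding box) of the largest green region,
    assumed to be the ICG-enhanced kidney, or None if no region is found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Fluorescence is rendered as saturated green; threshold on hue/saturation.
    mask = cv2.inRange(hsv, (35, 80, 80), (85, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((7, 7), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    kidney = max(contours, key=cv2.contourArea)
    if cv2.contourArea(kidney) < min_area:
        return None
    m = cv2.moments(kidney)
    centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    box = cv2.minAreaRect(kidney)  # (center, (width, height), angle)
    return centroid, box

def overlay_model(frame_bgr, model_bgra, box):
    """Warp a pre-rendered 2D projection of the 3D model onto the detected
    kidney region (translation, rotation, and scale only)."""
    (cx, cy), (w, h), angle = box
    mh, mw = model_bgra.shape[:2]
    scale = max(w, h) / max(mw, mh)
    M = cv2.getRotationMatrix2D((mw / 2, mh / 2), -angle, scale)
    M[0, 2] += cx - mw / 2
    M[1, 2] += cy - mh / 2
    warped = cv2.warpAffine(model_bgra, M, frame_bgr.shape[1::-1])
    alpha = warped[:, :, 3:4] / 255.0
    return (frame_bgr * (1 - alpha) + warped[:, :, :3] * alpha).astype(np.uint8)
```

A production system would need full 3D registration and continuous tracking; the 2D affine placement above only mirrors the core idea of treating the fluorescence-enhanced kidney as the anchoring marker.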

Results: Ten automatic AR-RAPN procedures were performed. For all patients, an HA3D™ model was produced and visualized as an AR image inside the robotic console. During all the surgical procedures, the automatic ICG-guided AR technology successfully anchored the virtual model to the real organ without manual assistance (mean anchorage time: 7 seconds), even while the camera was moved throughout the operative field, zoomed, or the organ was translated. In 7 patients with totally endophytic or posterior lesions, the renal masses were correctly identified with the automatic AR technology, allowing a successful enucleoresection. No intraoperative or postoperative Clavien >2 complications or positive surgical margins were recorded.

Conclusions: Our pilot study provides the first demonstration of computer vision technology applied to AR procedures, with software automatically achieving visual concordance between the 3D model and the in vivo anatomy during overlapping.

https://pubmed.ncbi.nlm.nih.gov/35063460/