Conference Report  |  Open Access  |  5 Feb 2024

The 1st Orsi Innotech Surgical AI Day congress report

Art Int Surg 2024;4:28-36.
10.20517/ais.2024.06 |  © The Author(s) 2024.

The first Orsi Innotech Surgical AI Day conference took place on the 14th and 15th of December 2023 at Orsi Academy in Melle, Belgium. Orsi Academy is an open and inclusive ecosystem for the safe and effective implementation of new technologies in medical practice, with a specific focus on training in minimally invasive surgery. Orsi Academy is an independent institution that is endorsed by multiple universities and collaborates with numerous scientific societies and industrial partners. Its infrastructure is designed to allow these stakeholders to work together when desired, but in full privacy when needed. Using Artificial Intelligence (AI) and computer vision, Orsi Academy’s technical department, Orsi Innotech [Figure 1A], aims to improve surgical training, with the ultimate goal of achieving better patient outcomes through safer and more patient-centered surgery. The Surgical AI Day conference was organized by Orsi Academy’s CEO, Prof. Dr. Alexandre Mottrie, together with Dr. Ir. Pieter De Backer, Medical Doctor (MD) and Head of the Innotech department.


Figure 1. Conference photos.

The event spanned two days, offering both a comprehensive overview and an in-depth exploration of AI’s impact on surgery. It convened renowned national and international surgical AI experts from both clinical and engineering backgrounds, providing attendees with insights into the current and future implications of this technology.

The congress program encompassed keynote presentations, live demonstrations of real-time surgical AI applications in Orsi Academy’s surgical clusters, as well as a dedicated abstract session.

Seven main sessions were organized, spread over the two-day congress. All sessions, with the exception of the live demos, were held in the on-site auditorium at Orsi Academy and streamed through Surgquest (https://www.surgquest.com/).

The afternoon session of the first day focused on Surgical Data Science, Dataset Building, Collaboration between institutes and Knowledge Sharing strategies.

The morning sessions of the second day encompassed topics such as Surgical Skill Assessment, Trustworthiness and Explainability in AI, Surgical Navigation, and 3D Modeling. Additionally, industry talks were delivered by prominent entities including Medtronic, Intuitive, Encord, and Materialise. The afternoon sessions of the second day focused on the Operating Room of the Future, featuring a presentation session showcasing six abstracts, culminating in an award ceremony honoring the best submission.

KEY HIGHLIGHTS FROM THE FIRST DAY

The conference started with an inaugural address delivered by Andrew A. Gumbs [Figure 1B], MD, Director of the Advanced and Minimally Invasive Surgery Excellence Center at the American Hospital of Tbilisi (Georgia), Professor of Surgery at Grigol Robakidze University Tbilisi (Georgia) and Otto von Guericke University Magdeburg (Germany). The talk explored the potential future direction of surgery by delving into the automation of surgical operations using cutting-edge AI technologies. Moreover, Gumbs introduced the AIRGOS project, a groundbreaking initiative employing deep learning to tailor personalized chemotherapy and immunotherapy regimens to individual patients.

Following the congress opening, the first session focused on Surgical Data Science, moderated by Joel Lavanchy, MD, Attending Surgeon at the Clarunis University Digestive Health Care Center of Basel (Switzerland), and Niki Rashidian, MD, Ph.D., Surgeon at the Department of Hepatobiliary Surgery and Liver Transplantation at University Hospital Ghent (Belgium) and Head of the Training and Research Institute for Surgical AI (TRISAI) at Ghent University (Belgium).

The first part of the session was curated by Martin Wagner, MD, Attending Surgeon in visceral surgery and Professor of AI-based Assistance Systems in Surgery at the University Hospital Carl Gustav Carus in Dresden (Germany). Wagner delved into Surgomics for the Tactile Internet, explaining how in-depth surgical skill analysis can help save human lives. The surgeon’s experience, as well as the quality of the performed operation, affects postoperative mortality rates. In addition, Wagner noted that a proficient mentor can help reduce the learning curve of less proficient surgeons by 66%.

Sebastian Bodenstedt, Ph.D., Group Leader of Data-driven Surgical Analytics and Robotics at the Department of Translational Surgical Oncology (TSO) of the National Center for Tumor Diseases in Dresden (Germany), elaborated on how deep learning and active learning techniques can facilitate the generation of soft-labeled data, thereby contributing to the training of AI networks while fortifying the feedback loop between clinicians and engineers. The discussion also covered the utility of computer vision for automated video-based skill assessment and the integration of new technologies for enhanced surgical training in operating rooms equipped with data acquisition sensors.
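
To illustrate the general principle behind such active learning loops, the sketch below ranks unlabeled video frames by predictive uncertainty and selects the most informative ones for clinician annotation. It is a minimal, illustrative example with hypothetical names, not code from the talk.

```python
# Minimal sketch of uncertainty-based active learning for surgical video frames.
# The model and data loader are hypothetical placeholders, not part of any named toolkit.
import torch
import torch.nn.functional as F

def select_frames_for_annotation(model, unlabeled_loader, budget=100):
    """Rank unlabeled frames by predictive entropy; return the most uncertain frame IDs."""
    model.eval()
    scores = []
    with torch.no_grad():
        for frame_ids, frames in unlabeled_loader:        # loader yields (IDs, image batch)
            probs = F.softmax(model(frames), dim=1)        # (batch, n_classes)
            entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
            scores.extend(zip(frame_ids.tolist(), entropy.tolist()))
    scores.sort(key=lambda pair: pair[1], reverse=True)    # most uncertain first
    return [frame_id for frame_id, _ in scores[:budget]]   # sent to clinicians for (soft) labeling
```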

Jennifer Eckhoff, MD, Research Fellow at Surgical AI and Innovation Laboratory (SAIL) of the Massachusetts General Hospital (USA), continued the session with a discourse on surgical phase recognition. Eckhoff identified the major limitations within the medical field for this task, citing the scarcity of inspected procedures and inadequate intraoperative cues. The live implementation of AI algorithms will assist surgeons in preventing complications and optimizing clinical workflows.

In the concluding segment of the session, Chinedu Nwoye, Ph.D., Senior Data Scientist in Strasbourg (France), provided a technical discussion on classifying surgical phases with deep learning-based algorithms. Nwoye stressed the critical importance of representative, high-quality data for the development of reliable predictive models, noting the weaknesses of deep learning when the data does not correctly represent the actual surgical workflow. Kinematic data visualizations revealed the potential for improvement in surgical workflow segmentation. In conclusion, data diversity can enhance both human and AI knowledge in the operating room across the pre-, intra-, and postoperative stages.
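
For readers less familiar with this class of models, the following is a minimal sketch of a frame-encoder plus temporal-convolution phase classifier. It illustrates the general deep learning approach to surgical phase recognition discussed in these talks; the architecture, phase count, and dimensions are illustrative assumptions, not Nwoye’s published method.

```python
# Minimal sketch of per-frame surgical phase classification with temporal context.
import torch
import torch.nn as nn

class PhaseClassifier(nn.Module):
    def __init__(self, n_phases=7, feat_dim=512):
        super().__init__()
        # Per-frame visual encoder (here a tiny CNN; in practice a pretrained backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        # Temporal convolution aggregates context across neighboring frames.
        self.temporal = nn.Conv1d(feat_dim, feat_dim, kernel_size=5, padding=2)
        self.head = nn.Linear(feat_dim, n_phases)

    def forward(self, clip):                                 # clip: (batch, time, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.encoder(clip.flatten(0, 1)).view(b, t, -1)
        feats = self.temporal(feats.transpose(1, 2)).transpose(1, 2)
        return self.head(feats)                              # per-frame phase logits (b, t, n_phases)
```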

The second conference session centered around Dataset Building, Collaboration between Institutes and Knowledge Sharing strategies in the surgical AI landscape, moderated by Hans Fuchs, MD, Professor of Surgery at the University of Cologne (Germany), and Pietro Mascagni, MD, Ph.D., Clinical Resident Advisor on computer science and AI at the Institute of Image-Guided Surgery in Strasbourg (France).

Daniel Hashimoto, MD, Director of the Penn Computer Assisted Surgery and Outcomes Laboratory (USA), kicked off the session by delving into clinical considerations for the creation of usable datasets [Figure 1C]. The data collection process should be closely tied to the clinical problem at hand. Further, well-designed online platforms can serve as valuable resources for training novices to think and act like skilled surgeons.

Shifting to technical considerations regarding data acquisition, Julian Abbing, Technical Physician and doctoral candidate at Meander Medical Center in Amersfoort (Netherlands), addressed the multifaceted questions surrounding data quality and procedural efficiency in the surgical domain. Abbing explained strategies for managing spatial and temporal data to extract clinically relevant conclusions, proposing a comparative analysis of surgical phase durations between residents and proficient surgeons.

Pietro Mascagni contributed to the discussion by exploring the role of heterogeneous data and the need for collaboration across surgical centers to build trusted AI algorithms. The talk delved into how data from various organizations significantly impacts model performance. Mascagni emphasized the need for mutual education between clinicians and computer scientists in their respective scientific areas, addressing needs, tools, and methodologies.

Zhijin Li, Ph.D., Senior Solutions Architect at NVIDIA (USA), addressed data sharing across surgical centers. Li described Federated Learning strategies as frameworks towards generalization, acknowledging the increased usage of AI in medicine and the associated ethical and clinical implications. Li proposed a paradigm shift in which AI models are deployed at each center: keeping the data on-premise and sharing only the model weights can address public concerns about data privacy and anonymization.
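
A minimal sketch of federated averaging, the core idea behind the strategy Li described, is shown below: each center trains on its own on-premise data and only model weights are exchanged and merged. The code is illustrative; production frameworks (such as NVIDIA FLARE) add secure aggregation, client management, and auditing.

```python
# Minimal sketch of one federated averaging round: data never leaves the center,
# only locally trained weights are averaged into the global model.
import copy

def federated_round(global_model, centers, local_train_fn):
    """Broadcast weights, train locally at each center, then average the updates."""
    local_states, sizes = [], []
    for center in centers:                                 # each center's data stays on-premise
        local_model = copy.deepcopy(global_model)
        n_samples = local_train_fn(local_model, center)    # trains in place, returns dataset size
        local_states.append(local_model.state_dict())
        sizes.append(n_samples)
    total = sum(sizes)
    averaged = {                                           # size-weighted average of the weights only
        key: sum(state[key].float() * (n / total) for state, n in zip(local_states, sizes))
        for key in local_states[0]
    }
    global_model.load_state_dict(averaged)
    return global_model
```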

The session concluded with Ozanan Meireles, MD, Director of the Surgical AI and Innovation Laboratory (SAIL) at the Massachusetts General Hospital (USA), Assistant Professor of Surgery at Harvard Medical School (USA). Meireles talked about the cultural transformation required for the successful integration of AI in operating rooms. Despite the tangible impact of innovation, healthcare has yet to fully realize technology’s promise in the operating room. An open-minded approach to the ethical use of innovative technologies is one of the priorities as described by Meireles.

Following the talks, the first day concluded with a company tour and live demonstrations by NVIDIA, Medtronic and the Orsi Innotech department. The live demos showcased real-time AI applications, including the segmentation of surgical instruments and the use of manually assisted augmented reality to overlay a 3D model onto a porcine kidney, as well as clinical demos on the Medtronic Hugo™ robot-assisted surgery system.

KEY HIGHLIGHTS FROM THE SECOND DAY

Day two started off with a session on Surgical Skill Assessment, moderated by Anthony Gallagher, Director of Research and Skill Development at Orsi Academy (Belgium) and Founder of the Proficiency-Based Progression methodology[1], and Dwight Meglan, Ph.D., Engineer, Chief Technology Officer and Founder of HeartLander Surgical (USA).

The first segment of the session was inaugurated by Pieter De Backer, MD, Ph.D., Head of Orsi Innotech at Orsi Academy (Belgium). De Backer gave a talk on Proficiency-Based Progression (PBP) in surgical training, describing how current surgeon training programs fall short due to subjective performance scoring metrics. PBP, however, has been shown to reduce procedural errors by approximately 60%[1]. While PBP is a time-intensive scoring process, De Backer further explained how to reduce this workload by automating PBP scoring using AI. He subsequently elaborated on the next steps towards automatic error detection and skill assessment.

The subsequent talk was delivered by Andrew Hung, MD, Vice Chair for Academic Development at the Departments of Urology and Computational Biomedicine (USA) [Figure 1D]. Hung addressed surgical skill and gesture assessment in robotic surgery, explaining the process of defining valuable automated performance metrics to drive clinical outcomes. The discussion touched on how to train a machine learning algorithm to predict a patient’s length of stay after robot-assisted radical prostatectomy. Furthermore, Hung delved into the use of temporal-aware deep learning models to assess nerve-sparing during robotic prostatectomy. Starting with dissection gesture classification, it is possible to encode a robotic surgery as a sequence of surgical actions. In particular, when examining the left neurovascular bundle, AI can help predict cases in which an Erection Sufficient for Intercourse (ESI) is not achieved, based on the sequence of dissection gestures observed during the procedure.
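
As an illustration of encoding a procedure as a gesture sequence and predicting a patient outcome from it, the sketch below applies a simple recurrent model to gesture tokens. The gesture vocabulary, model, and outcome definition are illustrative assumptions, not the published pipeline.

```python
# Minimal sketch: a surgery encoded as a sequence of gesture IDs, mapped to an outcome probability.
import torch
import torch.nn as nn

class GestureOutcomeModel(nn.Module):
    """Encodes a dissection-gesture sequence and predicts a binary outcome
    (e.g., whether an erection sufficient for intercourse is recovered)."""
    def __init__(self, n_gestures=10, emb_dim=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_gestures, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, gesture_ids):                      # (batch, sequence_length) of gesture indices
        _, last_hidden = self.rnn(self.embed(gesture_ids))
        return torch.sigmoid(self.head(last_hidden[-1])) # probability of the outcome per case

# Usage example: model(torch.randint(0, 10, (4, 120))) returns one probability per surgery.
```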

Concluding the first part of the session, Giovanni Cacciamani, MD, Associate Professor of Urology and Radiology at the University of Southern California (USA), presented on complication assessment and the role of AI. Cacciamani focused attention on the importance of a complication reporting framework to link surgical errors with their consequences in an unbiased manner. He advocated truthful reporting of intraoperative complications and adverse events, facilitating the application of AI in surgery on a global scale through a standardized labeling framework.

The second part of the session shifted focus to Trustworthiness and Explainable AI, with a presentation by Barbara Seeliger, MD, Ph.D., surgeon specialized in visceral surgery and Professor of Surgery at the University of Strasbourg (France). Seeliger pointed out the ethical, reliable, and certifiable goals of AI-assisted surgery, highlighting the crucial role of European Union regulations in ensuring patient safety and managing high-risk AI systems. The speaker provided insights into the regulatory future of products containing explainable and non-explainable AI components, illustrating a practical case of AI supporting pre- and intraoperative aspects of adrenal surgery.

Following this, Annika Reinke, Ph.D., Deputy Head at the German Cancer Research Center (DKFZ, Germany), explored the role of metrics as enablers of trustworthiness. Reinke stressed the importance of understanding the clinical problem in order to choose well-aligned metrics. Small prediction errors can have substantial impacts on patient health and risk prevention, so the choice of metrics critically influences the reliability of AI.
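
As one concrete example of how even a seemingly simple metric embeds design choices that affect reliability, the sketch below computes a per-case Dice score for binary segmentation masks; the handling of empty masks is an explicit convention (assumed here) that changes aggregate results and should be reported.

```python
# Minimal sketch of a per-case Dice score for binary segmentation masks.
import numpy as np

def dice_score(pred_mask: np.ndarray, gt_mask: np.ndarray, empty_value: float = 1.0) -> float:
    """Dice overlap between two binary masks.

    Returns `empty_value` when both masks are empty: this convention is itself a
    metric design choice that alters aggregated results and must be reported."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return empty_value
    return float(2.0 * np.logical_and(pred, gt).sum() / denom)
```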

The second session of the day covered Surgical Navigation and the use of 3D models to improve the real-time intraoperative experience. The moderators of the session were Karel Decaestecker, MD, Ph.D., Urologist at AZ Maria Middelares Hospital (Belgium) and Professor of Urology at Ghent University (Belgium), and Daniele Amparore, MD, Assistant Professor of Urology at the Department of Oncology of the San Luigi Gonzaga Hospital in Turin (Italy).

Daniele Amparore, in addition to moderating the session, initiated it with a discussion on augmented reality in robotic surgery. The transformative capabilities of new 3D digital tools are enabling tailored surgery for individual patients. Detailed visualizations of internal cavities, arteries, and blood vessels impact the quality of surgical procedures. Amparore concluded by envisioning a future characterized by pervasive augmented reality, facilitating virtual surgeon training and real-time remote operations.

Further exploring the clinical evidence on the impact of personalized 3D organ models, Adrien Bartoli, Ph.D., Professor of Computer Science at University Clermont Auvergne (France), Research Scientist at the University Hospital of Clermont-Ferrand (France), provided a comprehensive overview of technical development and intra-operative utilization. Bartoli presented a practical case involving the superimposition and registration of a 3D liver model with active deformations and exposure of tumor areas for removal. The demonstration showcased the automatic deep learning-based registration of a deformable organ, tracking it in real-time with respect to camera movements.

Continuing the exploration of 3D computational models and digital twins for biomedical engineering applications, Charlotte Debbaut, Ph.D., Associate Professor in Computational Biomechanics at Ghent University (Belgium), introduced three distinct clinical applications. The first focused on transarterial embolization for liver cancer, utilizing upstream truncation to study the delivery of the correct dose to the tumor. The second application delved into tumor biophysics and drug delivery through fluid dynamics studies based on blood flows within the target organ. The final application centered on partial nephrectomy planning, incorporating the definition and 3D visualization of blood perfusion zones.

Federica Ferraguti, Ph.D., Assistant Professor at the Department of Sciences and Methods for Engineering in Modena (Italy), shared research findings through practical applications of 3D augmented reality. She offered a case study on how 3D modeling can be employed to predict optimal arterial clamping points, minimizing the risk of ischemia during kidney surgery. Furthermore, in a simplified organ registration scenario, real-time assistance can be provided to less experienced surgeons practicing percutaneous nephrolithotomy.

Juan M. Verde, MD, Digestive surgeon at the Institute of Image-Guided Surgery in Strasbourg (France), concluded the session with an overview of AI systems integrated into minimally invasive robotic percutaneous surgeries. Verde discussed the inherent challenges in percutaneous surgery, encompassing training, planning, guidance, assistance, and control. He provided insights into current hardware solutions enriched with intelligent capabilities to mitigate the learning complexity of such physical infrastructures. Some practical applications were described, including trajectory correction through passive steering.

The third session of the day featured company talks presenting cutting-edge AI hardware and software technologies. Medtronic kicked off the session with Karen Kerr, Ph.D., AI Product Development Director, and Sandra Van Arenbergh, Commercial Development Manager. The discussion centered on Medtronic’s Touch Surgery system, designed to process surgical videos and extract pertinent statistical cues such as surgical phases, instrument detection and tracking, scene segmentation, and critical structure identification. Medtronic stressed its dedication to optimizing workflow within the operating room.

Intuitive, represented by Antony Jarc, Ph.D., Senior Director and Data Analytics Expert, showcased the AI software capabilities of its da Vinci robotic platforms. Jarc outlined the company’s research direction, accentuating its commitment to understanding the various factors across patient and surgical episodes that impact patient outcomes.

Encord, represented by its CEO and Co-Founder Eric Landau, introduced its software solutions aimed at advancing the surgical landscape towards large foundation models, starting from spatial and temporal data annotation. Encord also presented an active learning framework designed to guide an AI product from prototyping to production.

Concluding the company session, Materialise, represented by Innovation Manager Antoon Dierckx, Biomedical Engineer, expressed its commitment to developing customized solutions through AI technologies. The focus includes advanced 3D modeling, intra-operative 3D visualizations, and the creation of 3D printed surgical guides and implants. Materialise aims to leverage AI to enhance the precision and customization of medical solutions.

The fourth session of the day focused on future developments in the operating room. The moderators of the session were Mahendra Bhandari, MD, Ph.D., Director of Robotic Surgery Research and Education at the Vattikuti Urology Institute in Detroit (USA), and Isabelle Van Herzeele, MD, Ph.D., Consultant in Vascular Surgery at Ghent University Hospital (Belgium) and Professor of Surgery at Ghent University (Belgium).

Teodor Grantcharov, MD, Ph.D., Professor of Surgery at Stanford University (USA) and Founder of Surgical Safety Technologies (Canada), discussed the usage of ambient intelligence to enhance surgical safety. Starting from the observation that, despite technological advancements, the operating room remains one of the least understood environments in healthcare, Grantcharov provided an overview of devices and sensors that hold the potential to improve surgical workflow and elevate patient care standards. Notably, eye-tracking emerged as a reliable method to measure the impact of distractions on attention.

Following this, Tom Vercauteren, Ph.D., Professor of Interventional Image Computing at King’s College in London (United Kingdom), Co-founder and Chief Scientific Officer at Hypervision Surgical (United Kingdom), delved into hyperspectral imaging techniques combined with AI. Vercauteren explained how visual technologies based on the acquisition of different wavelengths can capture unique signals reflecting intrinsic properties of organic tissues. The combination of image processing and AI methods is expected to enhance the predictive capabilities of learning models. Vercauteren supported this assertion with two cases involving hyperspectral imaging during colorectal surgeries and neurosurgery.
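
To make the idea concrete, the sketch below shows per-pixel tissue classification from a hyperspectral cube using a generic classifier over spectral signatures; the band count, label encoding, and model choice are illustrative assumptions rather than Hypervision Surgical’s pipeline.

```python
# Minimal sketch: classify each pixel of a hyperspectral cube from its spectral signature.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_pixel_classifier(cube: np.ndarray, labels: np.ndarray) -> RandomForestClassifier:
    """cube: (H, W, n_bands) reflectance values; labels: (H, W) integer tissue classes, -1 = unlabeled."""
    X = cube.reshape(-1, cube.shape[-1])        # one spectrum per pixel
    y = labels.reshape(-1)
    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(X[y >= 0], y[y >= 0])               # ignore unlabeled pixels
    return clf

def classify_frame(clf: RandomForestClassifier, cube: np.ndarray) -> np.ndarray:
    """Return a (H, W) map of predicted tissue classes for a new hyperspectral frame."""
    H, W, n_bands = cube.shape
    return clf.predict(cube.reshape(-1, n_bands)).reshape(H, W)
```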

The third talk of the session, presented by Evangelos Mazomenos, Ph.D., Assistant Professor in Surgical Data Science at University College London (United Kingdom), delved into stereo reconstruction in surgery. Mazomenos brought attention to the importance of 3D reconstruction as an intra- and postoperative support tool. The reconstruction and navigation of endoscopic cavities based on Simultaneous Localization and Mapping (SLAM) and Neural Radiance Field (NeRF) methods showcased the potential of computer vision and deep learning in the surgical domain.
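
As background for such pipelines, a basic geometric building block is recovering depth from a rectified stereo endoscope pair. The sketch below uses standard block matching with placeholder camera parameters; it illustrates the principle only, not the SLAM or NeRF methods presented.

```python
# Minimal sketch of metric depth from a rectified stereo pair: Z = f * B / d.
import numpy as np
import cv2

def stereo_depth(left_gray: np.ndarray, right_gray: np.ndarray,
                 focal_px: float, baseline_mm: float) -> np.ndarray:
    """Compute a dense depth map (in mm) from rectified grayscale stereo images."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0  # fixed-point to pixels
    depth_mm = np.where(disparity > 0, focal_px * baseline_mm / disparity, 0.0)   # 0 where disparity invalid
    return depth_mm
```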

Nassir Navab, Ph.D., Professor of Computer Science at the Technical University of Munich (Germany), delivered the final speech of the session, focusing on the future that the scientific surgical community is shaping for the operating room and emphasizing the power of holistic domain modeling. Navab underscored the distinction in analyzing interventional images compared to diagnostic images. Drawing from a history of medical applications developed since 2006 to tackle complex tasks and expand the possibilities of data analysis within operating rooms, he highlighted the critical role of AI in advancing the understanding of intraoperative events. Concrete technological framework examples such as dynamic scene graph construction and operating room segmentation (SegmentOR) supported his talk.

The final session of the day featured presentations of the best abstract submissions, reviewed in advance of the conference. Six presentations followed, each lasting five minutes.

Best Abstract Winners: Samaneh Azargoshasb was honored with the Best Abstract Award [Figure 1E] for her work on the usage of a drop-in gamma probe during robotic radio-guided surgery for prostate cancer. Beerend Gerats [Figure 1F] received the Audience Award, decided by a vote of all faculty members. His work focused on the offline tracking of pixels belonging to surgical tools using pseudo-depth maps from monocular laparoscopic video clips. A monetary prize of €1,500, sponsored by Orsi Academy and AIS, was awarded to both winners.

For future updates and announcements regarding the conference, we recommend visiting the Orsi Academy website (https://www.orsi-online.com/) or the Surgical AI Day website (https://www.surgicalaiday.com/). The next conference edition is scheduled for the 16th and 17th of December 2024. The session recordings can be consulted at Surgquest (https://www.surgquest.com/).

DECLARATIONS

Acknowledgments

We express our sincere gratitude to the event organizers, speakers, and participants for their diverse and valuable contributions, each playing a key role in the success of this annual conference. We also want to thank our sponsors Medtronic, Intuitive, Encord, Distalmotion, and Materialise for their financial support and active contributions.

Authors’ contributions

Writing: Mezzina M, Hofman J, Simoens J, De Backer P

Manuscript reviewing: Mottrie A

Availability of data and materials

Not applicable.

Financial support and sponsorship

This event was made possible through the financial support of our conference sponsors Medtronic, Intuitive, Encord, Distalmotion, and Materialise.

The authors’ research is funded by the following grants: Baekeland grant of Flanders Innovation & Entrepreneurship (VLAIO), grant number HBC.2020.2252 (De Backer P) and HBC.2023.0156 (Mezzina M).

Conflicts of interest

Authors’ role: Mezzina M: Orsi Academy employee; Hofman J: Orsi Academy employee; Simoens J: Orsi Academy employee; Mottrie A: Orsi Academy’s CEO; De Backer P: Medical Doctor (MD) and Head of the Orsi Innotech Department.

Personal financial interests/Non-financial conflicts of interest: all authors declared that there are no conflicts of interest.

Ethical approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Copyright

© The Author(s) 2024.

REFERENCES

1. Mazzone E, Puliatti S, Amato M, et al. A systematic review and meta-analysis on the impact of proficiency-based progression simulation training on performance outcomes. Ann Surg 2021;274:281-9.


About This Article

© The Author(s) 2024. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, sharing, adaptation, distribution and reproduction in any medium or format, for any purpose, even commercially, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
