Review  |  Open Access  |  13 May 2026

Smart hands: robotic systems as surgical collaborators in plastic and reconstructive surgery

Plast Aesthet Res. 2026;13:15.
10.20517/2347-9264.2025.121 |  © The Author(s) 2026.

Abstract

Robotic systems, artificial intelligence (AI), and augmented reality (AR) are increasingly reshaping plastic and reconstructive surgery by enhancing precision, reproducibility, and surgical decision-making. While conventional robotic platforms such as the Da Vinci system have demonstrated clear benefits in minimally invasive surgery, their application in microsurgery has remained limited due to instrument size and insufficient submillimetric control. Recently developed microsurgical platforms, including the Symani® Surgical System (Medical Microinstruments, Pisa, Italy) and the MUSA® system (MicroSure B.V., Eindhoven, The Netherlands), address these limitations through motion scaling, tremor suppression, and compatibility with open-field reconstruction. Early clinical studies report successful lymphatic and microvascular anastomoses, improved ergonomics, and reduced surgeon fatigue, marking a significant step toward routine robot-assisted microsurgery. In parallel, advances in AI and machine learning enable data-driven surgical planning, perforator mapping, flap selection, complication prediction, and automated documentation. Large language models further support clinical workflows through structured documentation and patient communication, while AR and virtual reality enhance anatomical orientation, intraoperative navigation, and surgical training. Despite these advances, challenges remain, including heterogeneous data quality, algorithmic bias, limited interoperability, and evolving regulatory frameworks. Addressing these issues is essential to ensure safe and equitable implementation. Future developments are expected to converge into an integrated, AI-augmented surgical ecosystem combining preoperative planning, robotic execution, and outcome-based learning. Rather than replacing surgical expertise, these technologies aim to augment human skill and support a more personalized and efficient reconstructive practice.

Keywords

Robotic-assisted microsurgery, artificial intelligence, augmented reality, surgical planning, microsurgical robotics, large language models, precision surgery, reconstructive surgery

INTRODUCTION

Background

Plastic and reconstructive surgery has historically been shaped by a blend of manual precision, anatomical expertise, and creative problem-solving. From the foundational work of Gillies and Millard to modern advances in microsurgical free flap reconstruction and lymphatic surgery, the specialty has relied on the surgeon’s technical skill and tactile feedback to manipulate tissues at the finest level[1]. As surgical procedures become increasingly complex and patient expectations rise, the demand for enhanced precision, reduction of surgeon fatigue, and reproducibility has driven the field to explore new technological horizons.

One of the most transformative developments in modern surgery has been the introduction of robotic surgical systems. The Da Vinci Surgical System, first introduced more than two decades ago, advanced minimally invasive surgery by offering high-definition three-dimensional (3D) visualization, tremor reduction, and improved instrument articulation. It is now widely established in urologic, gynecologic, and general surgical procedures[2]. However, despite these successes, the Da Vinci platform has achieved only limited integration into plastic and reconstructive surgery. Its large footprint, high costs, and relatively coarse instrument scale restrict its applicability for microsurgical procedures, which require delicate tissue handling and fine motor precision[3]. Nevertheless, the system has demonstrated the potential benefits of robotic assistance - improved control, decreased surgeon fatigue, and novel opportunities for visualization and access - and thereby highlights the need to explore next-generation robotic technologies tailored to the unique demands of reconstructive surgery.

Following these early developments, newer robotic systems - such as the Symani® Surgical System (Medical Microinstruments, Italy) and MicroSurgical Assistant (MUSA, MicroSure, Netherlands) - have been specifically designed for microsurgical applications. These externally mounted systems provide submillimetric motion scaling, real-time tremor suppression, and compatibility with traditional microsurgical workflows. They enable high-precision tasks, such as vascular and lymphatic anastomoses, that are beyond the resolution of conventional robotic platforms[4,5]. Unlike other surgical robotic devices, which are optimized for cavity-based surgeries, these next-generation systems are tailored for superficial, open-field reconstructions, making them ideal for plastic and reconstructive surgery applications[6].

Concurrently, advances in artificial intelligence (AI) have introduced new tools for data-driven surgical planning, risk stratification, and intraoperative decision support. Machine learning (ML) algorithms trained on large clinical datasets can assist with tasks such as flap design, perforator identification, and outcome prediction[7]. Large Language Models (LLMs) such as Generative Pre-trained Transformer (GPT)-4 bring natural language understanding to the surgical setting, supporting intraoperative consultation, automated documentation, and patient-specific communication[8]. These AI tools, when integrated with robotic platforms, promise the creation of intelligent surgical ecosystems - environments where human expertise is enhanced by real-time computational insight[9].

Plastic surgery, with its emphasis on individualization, fine motor control, and multidisciplinary collaboration, stands at a pivotal stage in this technological evolution. As robotic systems progress from concept to clinical implementation and AI becomes more accessible, the specialty must decide how to responsibly integrate these tools into education, clinical care, and research. While platforms such as Da Vinci have proven the viability of robotics in high-volume surgeries, microsurgical platforms (e.g., Symani® and MUSA) demonstrate that plastic surgery can develop its own robotic identity, tailored to its unique needs[4-6].

This article is part of the Plastic and Aesthetic Research Special Issue “A New Frontier in Plastic Surgery - From Robotics to LLMs: An Expanding AI Landscape.” It explores the historical trajectory, current status, and future direction of robotic and AI-based technologies in plastic and reconstructive surgery, emphasizing their potential to reshape operative precision, surgical training, and patient engagement in the years ahead.

Methods

This article draws on a narrative literature review supplemented by an expert perspective on clinical reports, feasibility studies, and technological developments in surgical robotics and AI. Publications indexed in PubMed and Scopus between 2015 and 2025 were examined, with a focus on robot-assisted microsurgery, AI-guided surgical navigation, LLMs for clinical documentation, and augmented reality (AR) as an adjunct in operative visualization. In addition, case studies from pioneering institutions using the Symani® Surgical System, MUSA, and early AI-integrated platforms were assessed.

AI-DRIVEN SURGICAL PLANNING AND OUTCOME PREDICTION

ML in preoperative decision-making

The preoperative phase of plastic and reconstructive surgery is critical in determining treatment success. ML algorithms can support this stage by analyzing complex, patient-specific risk profiles, guiding the selection of suitable reconstructive techniques, and anticipating intraoperative complications.

Hassan et al. (2023) developed a predictive model to assess skin flap necrosis after mastectomy using clinical variables. The model attained an Area Under the Curve (AUC) of 0.70, indicating moderate predictive performance and highlighting the importance of risk factors such as smoking, body mass index (BMI), and diabetes[10]. This is particularly relevant in the context of breast reconstruction, where flap vitality is of pivotal significance.
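To make the reported metric concrete, the discriminative performance of such a risk model can be sketched on synthetic data. The predictors, weights, and outcome model below are illustrative assumptions, not the published model; the AUC is computed directly from its Mann-Whitney definition.

```python
import numpy as np

# Illustrative sketch on synthetic data - NOT the published model.
rng = np.random.default_rng(42)
n = 1000
smoking = rng.integers(0, 2, n)            # hypothetical predictor (0/1)
bmi = rng.normal(27, 5, n)                 # hypothetical predictor
diabetes = rng.integers(0, 2, n)           # hypothetical predictor (0/1)

# Risk score: weighted combination of risk factors (weights are assumed)
score = 1.0 * smoking + 0.08 * (bmi - 27) + 0.8 * diabetes
prob = 1 / (1 + np.exp(-(-3.0 + score)))   # logistic link
necrosis = rng.random(n) < prob            # synthetic necrosis outcomes

def auc(scores, labels):
    """AUC via the Mann-Whitney statistic: the probability that a random
    positive case receives a higher score than a random negative case."""
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

print(f"AUC: {auc(score, necrosis):.2f}")
```

An AUC of 0.5 corresponds to chance, 1.0 to perfect discrimination; values around 0.7, as in the cited study, indicate moderate discrimination.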

In the domain of microvascular reconstruction of the head and neck region, ML methods have demonstrated high accuracy in predicting postoperative complications. Tighe et al. (2022) applied ML-based risk adjustment to enhance the classification of free flap loss, thereby demonstrating a clinically significant enhancement in decision accuracy when compared to conventional risk models[11].

In their systematic review, Kapila et al. (2024) identified “preoperative planning” as one of the six main domains for the use of AI in microsurgery. This encompasses the selection of appropriate flaps, perforator analysis based on imaging, and the automatic segmentation of anatomical structures[12].

Park et al. (2024) describe the use of deep learning-based image analysis from computed tomography (CT) and magnetic resonance imaging (MRI) data to model individual surgical access routes for complex facial reconstructions, thereby enabling patient-centered planning[13].

Predictive modeling for surgical outcomes

Predictive modeling is a method of quantifying the potential outcomes of surgical procedures, including complications, revision rates and healing progression. This approach constitutes a fundamental element of personalized medicine.

The concept of “human vs. machine” was investigated by Duran et al. (2025) in a comparative study. GPT-4 and Gemini were provided with plastic surgery domain knowledge, and their recommendations were compared with physicians’ decisions in realistic case vignettes. The models demonstrated comparable accuracy for standardized questions and showed considerable potential for facilitating decision-making[14].

Huang et al. (2024) developed an ML model to predict donor site complications after deep inferior epigastric perforator (DIEP) flap harvest. The model demonstrated an accuracy of 82%, facilitating the early identification of individual risk factors and the planning of preventive measures[15].

In their narrative review, Mansoor and Ibrahim (2025) underline the capacity of AI to incorporate postoperative outcome data into future recommendations through continuous learning. This enables therapy algorithms to adapt dynamically to actual clinical courses and supports the principle of the “learning healthcare system”[9].

Kapila et al. (2024) provided an extensive classification of diverse outcome-related models, emphasizing the utility of AI applications in assessing flap vitality, wound healing disorders, and hospitalization duration[12].

Integrating patient data for personalized treatment plans

The future of reconstructive surgery depends on the integration of multiple data sources: medical history, diagnostic imaging, intraoperative sensor data, and postoperative courses are increasingly combined in intelligent, predictive systems.

In their systematic review, Kiwan et al. (2024) analyzed over 30 current studies on the integration of electronic health records (EHRs), imaging procedures and intraoperative data into AI-supported systems. This combination enables individualized, evidence-based treatment planning with greater precision and reduced variability[16].

The authors also emphasize the importance of the use of structured data formats [e.g., Fast Healthcare Interoperability Resources (FHIR)] in order to facilitate interoperability of information from different sources, a key aspect for the future development of fully digital surgical workflows[16].
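As a concrete illustration of such a structured format, a minimal FHIR Observation resource might look like the following. The patient reference and values are invented for illustration; real deployments validate resources against full FHIR profiles.

```python
import json

# Minimal illustrative FHIR R4 Observation (BMI): structured resources like
# this make data from different systems machine-readable and exchangeable.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "39156-5",          # LOINC code for body mass index
            "display": "Body mass index (BMI) [Ratio]",
        }]
    },
    "subject": {"reference": "Patient/example"},  # hypothetical patient id
    "valueQuantity": {
        "value": 27.4,
        "unit": "kg/m2",
        "system": "http://unitsofmeasure.org",
        "code": "kg/m2",
    },
}

payload = json.dumps(observation)   # what one system would send
received = json.loads(payload)      # what another system would parse
print(received["valueQuantity"]["value"], received["valueQuantity"]["unit"])
```

Because both sender and receiver agree on the resource schema and coding systems (LOINC, UCUM), downstream AI pipelines can consume such data without site-specific parsing logic.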

Mansoor and Ibrahim (2025) delineated concrete application scenarios of multimodal data integration. For example, they proposed a platform capable of automatically segmenting CT scans, combining them with clinical parameters, and recommending a personalized flap strategy based on historical cases[9].

Park et al. (2024) argue that integrating computer vision and natural language processing (NLP) with AI enables a deeper understanding of complex patient data, representing a pivotal step toward context-sensitive, automated assistance[17].

ROBOTIC SYSTEMS IN PLASTIC AND RECONSTRUCTIVE SURGERY

The integration of robotic assistance systems signifies a paradigm shift in plastic and reconstructive surgery. While robotic systems have long been used effectively in abdominal and urological surgery, their adoption in reconstructive microsurgery has been delayed, primarily because of constraints such as instrument size and the absence of submillimeter precision. Recent advancements in specialized microrobots, exemplified by the Symani® Surgical System and MUSA, have opened novel avenues for use, particularly for vascular and lymphatic anastomoses within the submillimeter range[18]. The following sections review the state of the art, the ergonomic advantages, and the initial clinical experience.

Advances in robotic-assisted microsurgery

The advent of dedicated microsurgery platforms, such as the Symani® (Medical Microinstruments) and MUSA (MicroSure) systems, has given plastic surgeons access to robotic systems specifically optimized for microanatomical procedures.

Symani® employs magnetically controlled, scalable micromanipulators to reduce movements by a factor of up to 20 and completely eliminate tremor[5]. The platform facilitates precise anastomoses on vessels with a diameter of less than 0.8 mm, a task previously exclusive to experienced supermicrosurgeons [Figure 1].


Figure 1. Symani® Surgical System (Medical Microinstruments S.p.A., Pisa, Italy).
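The interplay of motion scaling and tremor suppression can be sketched conceptually as follows. This is a simplified signal-processing illustration with assumed sample rates, amplitudes, and a basic moving-average filter, not the platform's actual control algorithm.

```python
import numpy as np

SCALE = 20.0   # hand-to-instrument motion scaling (upper end cited in text)
FS = 500       # assumed hand-tracking sample rate, Hz

def moving_average(x, window):
    """Crude low-pass filter: attenuates fast physiological tremor (~8-12 Hz)
    while preserving slow, deliberate motion."""
    return np.convolve(x, np.ones(window) / window, mode="same")

t = np.arange(0, 2, 1 / FS)
intent = 5.0 * t                           # deliberate 5 mm/s hand movement
tremor = 0.5 * np.sin(2 * np.pi * 10 * t)  # assumed 10 Hz, 0.5 mm tremor
hand = intent + tremor

filtered = moving_average(hand, window=FS // 10)  # ~100 ms averaging window
tool = filtered / SCALE                           # scaled instrument trajectory

# Residual tremor at the instrument tip (interior samples, away from edges)
interior = slice(100, -100)
residual = tool[interior] - intent[interior] / SCALE
print(f"input tremor amplitude: {tremor.std():.3f} mm")
print(f"residual at instrument: {residual.std():.5f} mm")
```

Scaling divides both intentional motion and residual tremor by the same factor, while the low-pass step removes the fast tremor component before scaling; together they let a 10 mm hand movement command a sub-millimeter, tremor-free instrument movement.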

Innocenti et al. (2023) reported successful anastomoses in lymphatic vessel reconstructions and free flap reconstruction surgery without complications in the first clinical study on the use of Symani®[4]. In their “first-in-human” study with MUSA®, Van Mulken et al. (2020) described eight lymphovenous anastomoses that were performed with clinically satisfactory results, thus representing a breakthrough for robotic supermicrosurgery[5,19] [Figure 2].


Figure 2. MUSA® 3 microsurgical robotic systems for robot-assisted micro- and supermicrosurgery (MicroSure B.V., Eindhoven, The Netherlands).

Enhancing dexterity, precision, and ergonomics

A significant benefit of robotic systems is their precision, which contributes to a reduction in intraoperative stress. Microsurgical procedures are lengthy and physically demanding, frequently resulting in fatigue and musculoskeletal complaints among surgeons. Robot-based systems compensate for these limitations through motion stabilization, tremor elimination, and ergonomic console technology[16].

In a series of 100 robotically assisted anastomoses with Symani®, Kapila et al. (2024) reported that the surgical burden on assistants and specialists was significantly reduced. The learning curve demonstrated a rapid improvement in operating times, concomitant with consistently high anastomosis quality[12].

In the 2024 study by Malzone et al., the ergonomic advantages of robotic systems in microsurgery are summarized as a “significant step towards extending the surgical career and improving surgical performance”[6].

Clinical applications and case studies

The clinical applications documented to date relate primarily to lymphovenous anastomoses for lymphoedema, free perforator flaps [e.g., anterolateral thigh flap (ALT), DIEP] and reconstructive procedures following tumor resection. The Symani® platform has been employed with a high degree of success in a variety of surgical contexts, including breast reconstruction, facial reconstruction and limb reconstruction. Notably, the anastomosis times for vessel diameters of less than 1 mm have been reported to range from 20 to 40 min[4,12].

In a European multicenter case series (2023), favorable functional and aesthetic outcomes were consistently achieved. The complication rate was comparable to that of the conventional technique, but with superior reproducibility and reduced variability between surgeons[5,19].

In the future, the potential applications of robotic microsystems may extend to areas such as plexus reconstruction, peripheral nerve transplantation, and reconstructive lymph node transfers. Initial preclinical studies have confirmed the feasibility of these procedures[6,20].

3D SIMULATION AND AR IN SURGICAL PRACTICE

The integration of digital technologies, including 3D simulation, virtual reality (VR) and AR, is expanding the surgical possibilities in plastic surgery. This expansion encompasses preoperative planning, intraoperative orientation and the training of surgeons. These technologies facilitate detailed visualization of patient-specific anatomy, enhance surgical precision and promote sustainable training[21,22].

Virtual surgical planning and simulation

3D VR and AR technologies play a pivotal role in preoperative planning, primarily by enhancing spatial reasoning and anatomical orientation. In a systematic review of six studies, Vles et al. (2020) demonstrated that AR significantly enhances accuracy, for instance in mandibular osteotomies and the identification of perforators in the DIEP flap. The operating time in DIEP procedures decreased (P < 0.01) and the precision of the osteotomy increased significantly when AR navigation was used[21].

As Sayadi et al. (2019) emphasize, AR-based visual models make complex anatomy more tangible and significantly reduce the error rate during 3D-based planning[23].

AR in intraoperative navigation

AR is also being used with increasing frequency in operating theatres. McGraw et al. (2023) presented a preclinical investigation in which AR glasses such as the HoloLens projected 3D holograms of vessels and bones over the surgical field intraoperatively. Anatomical landmarks were localized with submillimeter accuracy, a significant advance for reality-based surgical assistance[24].

Cai et al. (2021) reported on mixed reality navigation in craniofacial surgery, demonstrating a significant reduction in operating time with an identical level of results[25].

As demonstrated by Vles et al. (2020), clinical evidence indicates that AR improves intraoperative perforator identification in DIEP flap procedures compared with traditional Doppler[21].

Educational and training applications

In addition to its clinical application, AR/VR is also of great importance in the training of surgeons. Shafarenko et al. (2022) conducted a randomized study of surgeons in training who underwent holo-surgery simulations. Users showed a marked improvement in anatomical orientation and surgical accuracy after only a limited number of sessions[26].

Touch Surgery, a VR application designed for surgical scenarios, has been validated specifically for plastic and reconstructive surgery. According to Kowalewski et al. (2017), this application demonstrated significantly higher cognitive performance in residents than traditional text-based learning methods[27].

A recent meta-analysis of VR-based surgical training confirmed the positive effect on the learning curve, precision, and error reduction, particularly in laparoscopic training; the potential for adaptation to plastic and reconstructive surgery has also been demonstrated[28].

LLMs AND CONVERSATIONAL AI IN CLINICAL PRACTICE

The advent of powerful LLMs, such as GPT-4, has precipitated a paradigm shift in the manner in which clinical information is processed, documented and communicated. In the domain of plastic and reconstructive surgery, there exists potential for application in the domains of automated documentation, patient-centered communication and digital assistance. Such applications are accompanied by critical discussion of the ethical and regulatory aspects.

Automating documentation and workflow support

LLMs have the capacity to expedite and standardize clinical documentation, thereby freeing human resources. Patel et al. (2024) compared educational documents generated by ChatGPT with those created by surgeons. The AI-generated texts were shorter (averaging 1,023 words compared to 2,901 words), more structured, and linguistically more accessible, while maintaining comparable medical completeness[29].

The utilization of AI in clinical protocols and surgical reports has also been demonstrated to be a valuable asset. In a field report from Plastic and Reconstructive Surgery - Global Open (PRS Global Open), LLM-based scribe systems were described that extract surgical documentation from audio transcripts and automatically convert it into structured reports[30]. Whilst these systems hold great promise, they do raise significant concerns with regard to data protection.

Another example is the SurgeryLLM model, which utilizes Retrieval Augmented Generation (RAG). The generation of responses is informed by external sources, including guidelines and literature. In a preliminary study, SurgeryLLM was able to provide consistent, evidence-based responses to queries pertaining to surgical decision-making. The potential of this tool to enhance clinical efficiency is currently under discussion[31].
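The RAG pattern described here can be sketched at a high level: retrieve the most relevant knowledge-base entry for a query, then assemble a prompt that grounds the model's answer in that source. The snippet below is a deliberately simplified illustration; the guideline texts, similarity measure, and prompt format are assumptions, not SurgeryLLM's implementation.

```python
# Simplified RAG sketch - hypothetical guideline snippets, not SurgeryLLM.
GUIDELINES = [
    "Perioperative antibiotic prophylaxis should be given within 60 minutes before incision.",
    "In DIEP flap reconstruction, perforator selection is guided by preoperative CT angiography.",
    "Lymphovenous anastomosis is indicated for early-stage extremity lymphedema.",
]

def tokenize(text):
    return set(text.lower().replace(".", "").replace(",", "").replace("?", "").split())

def retrieve(query, docs):
    """Rank documents by Jaccard word overlap with the query; a real system
    would use dense embeddings, but the retrieval step has the same shape."""
    q = tokenize(query)
    return max(docs, key=lambda doc: len(q & tokenize(doc)) / len(q | tokenize(doc)))

def build_prompt(query, docs):
    """Ground the language model's answer in the retrieved evidence."""
    return (f"Context: {retrieve(query, docs)}\n"
            f"Question: {query}\n"
            f"Answer using only the context above.")

query = "Which imaging guides perforator selection for a DIEP flap?"
print(build_prompt(query, GUIDELINES))
```

Because the generated answer is constrained to retrieved source material, this pattern reduces (though does not eliminate) the risk of hallucinated recommendations.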

Patient communication and preoperative counseling

LLMs have the capacity to formulate medical information in a patient-friendly manner and to support preoperative counselling. In a study on rhinoplasty consultations, GPT-4 demonstrated an answer accuracy of over 90% for common patient queries, including operation duration, risks and aftercare[32]. The responses thus obtained were evaluated as accurate; however, in certain instances, they were still considered to be overly technical.

LLMs have the capacity to facilitate the generation of bespoke information sheets. A randomized study found that patients perceived AI-generated forms as “clearer and more structured” in terms of content, although the language complexity was sometimes too high[29].

In practical applications, such systems could function as “virtual assistants”, for instance chatbots that provide standardized information around the clock. Preliminary feasibility studies have yielded favorable user feedback[32,33].

Ethical and regulatory considerations

Notwithstanding the considerable potential of LLMs, there are substantial challenges in the areas of ethics and regulation. A salient issue is the phenomenon of “hallucination”: the propensity of LLMs to produce statements that sound convincing but are factually incorrect. Consequently, Byrd et al. (2024) caution against the unrefined integration of AI-generated texts within clinical contexts and advocate for mandatory human oversight[34].

It is also important to consider the potential risks associated with algorithmic bias. A systematic review found that only around 60% of the LLM-based studies analyzed had integrated countermeasures to reduce bias, such as dataset stratification by gender, ethnicity, or socioeconomic status[35].

Data protection and regulatory compliance represent additional key challenges. Systems that process sensitive patient data, including AI-assisted transcription or documentation platforms (e.g., medical scribe applications), are subject to stringent regulations such as the General Data Protection Regulation (GDPR) in the European Union and the Health Insurance Portability and Accountability Act (HIPAA) in the United States. These frameworks specify requirements for data minimization, encryption, consent, and traceability[36,37]. Despite the guarantees provided by several providers that no identifiable data are stored, independent analyses have demonstrated that some commercial AI tools may inadvertently retain or reproduce sensitive content, thus posing potential compliance and privacy risks[30].

A regulatory debate is underway concerning the categorization of LLMs that facilitate or assist with medically relevant decisions, specifically whether they should be designated as medical devices. The US Food and Drug Administration (FDA) and European regulators under the Medical Device Regulation (MDR) are currently developing criteria that will cover these technologies and subject them to quality, safety, and transparency requirements[38].

CHALLENGES AND LIMITATIONS

Notwithstanding the considerable progress achieved in the development of AI-based systems and robotic assistance technologies, there are still significant challenges to their implementation in plastic and reconstructive surgery. These issues pertain specifically to the quality and representativeness of the underlying data, the risk of algorithmic bias, and the integration of these systems into routine clinical practice. It is imperative that these aspects are given due consideration to ensure patient safety and promote acceptance by healthcare professionals.

Data quality, bias, and generalizability

A significant challenge in the development and application of AI systems is the quality of the underlying data. Many AI models are trained on datasets originating from specific geographical regions, specific clinical centres, or homogeneous patient groups. This can result in limited generalizability and inadequate model performance when applied to a broader or different population[39]. As Johnson et al. (2016) note, the validity of such models may be questionable, particularly when implemented in contexts different from their original training environment, which can result in clinically significant misclassifications[40].

Another central problem is algorithmic bias. This phenomenon may emerge at various levels, including imbalanced training data, systematic distortions in clinical documentation, or insufficient consideration of social determinants. In a systematic review, Kyaw et al. (2019) analyzed over 40 surgical AI studies and found that only around 60% explicitly implemented bias compensation measures[28,35]. Mehrabi et al. (2022) also emphasize that bias can occur at any stage of the AI development cycle, from data collection to annotation to clinical implementation, and that this can have a particularly deleterious effect on marginalized groups[41].

Federated learning is increasingly discussed as a means of enhancing fairness and generalizability. This approach enables AI models to be trained on distributed datasets without centralized data storage. Concurrently, open science initiatives - such as TRIPOD-AI (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis-Artificial Intelligence) and CONSORT-AI (Consolidated Standards of Reporting Trials-Artificial Intelligence) - are being promoted to enable standardized reporting and transparent model evaluation[42].
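The federated averaging (FedAvg) idea behind this approach can be sketched as follows: each site improves the shared model on its private data, and only the resulting parameters are pooled, weighted by sample count. The linear model, site sizes, and synthetic data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -0.5, 0.3])   # ground-truth relationship (synthetic)

def make_site(n):
    """One hospital's private dataset (never leaves the site)."""
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

sites = [make_site(n) for n in (200, 120, 80)]   # three hospitals

def local_update(w, X, y, lr=0.1, steps=20):
    """Local gradient descent on linear regression; only w is shared."""
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

w_global = np.zeros(3)
for _ in range(5):                                # communication rounds
    local = [local_update(w_global, X, y) for X, y in sites]
    total = sum(len(y) for _, y in sites)
    # FedAvg: average client models weighted by their sample counts
    w_global = sum(w * len(y) for w, (_, y) in zip(local, sites)) / total

print("federated estimate:", np.round(w_global, 2))
print("ground truth:      ", true_w)
```

Only model weights cross institutional boundaries, so the approach sidesteps centralized pooling of patient records while still benefiting from every site's data.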

Integration into clinical workflow

In addition to methodological quality, the practical integration of AI systems into existing clinical processes is a significant challenge. Research has shown that many AI applications succeed in pilot phases yet fail to reach routine clinical practice, primarily because of deficiencies in interoperability, standardization, and user acceptance. In particular, the integration of technical components into EHRs, picture archiving and communication systems (PACS), and surgical robotics platforms is impeded by incompatible data formats and proprietary interfaces. Gu et al. (2022) emphasize that a lack of compatibility with standards such as DICOM (Digital Imaging and Communications in Medicine) and HL7 (Health Level Seven) is a key obstacle to implementing imaging-based AI solutions in pathology, an obstacle that applies equally to plastic surgery[43].
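To make the interoperability gap concrete, the sketch below parses a minimal, synthetic HL7 v2 message into structured fields, the kind of glue code AI pipelines currently need because such legacy pipe-delimited formats are rarely supported natively. The message content is invented; production systems should use a dedicated HL7 library.

```python
# Minimal illustrative parser for a synthetic HL7 v2 ADT message.
# HL7 v2 is pipe-delimited text with one segment per line (CR-separated).
hl7_message = "\r".join([
    "MSH|^~\\&|HIS|HOSPITAL|AI_PLATFORM|LAB|202501011200||ADT^A01|MSG001|P|2.5",
    "PID|1||12345^^^HOSPITAL||DOE^JANE||19800101|F",
    "PV1|1|I|WARD1^101^A",
])

def parse_hl7(message):
    """Split an HL7 v2 message into {segment_id: [fields]}."""
    segments = {}
    for line in message.split("\r"):
        fields = line.split("|")
        segments[fields[0]] = fields
    return segments

seg = parse_hl7(hl7_message)
patient_id = seg["PID"][3].split("^")[0]      # PID-3: patient identifier
family, given = seg["PID"][5].split("^")[:2]  # PID-5: patient name
print(patient_id, family, given)
```

Every deviation in field ordering or delimiter conventions between vendors multiplies this kind of brittle parsing code, which is why standardized interfaces matter for AI deployment.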

Another salient problem is the so-called “black box problem”: many AI systems are based on deep learning, which makes their decisions difficult for users to understand. In a study on the automated assessment of surgical skills, Liu et al. (2020) demonstrated that models lacking an explainable decision logic frequently yielded inaccurate assessments, which directly impacted trust and acceptance by medical staff[44].

Furthermore, the training of clinical staff is imperative. The integration of AI-driven systems necessitates not only a technical comprehension, but also the reallocation of roles within the team and the delineation of explicit responsibilities. The Sepsis Watch early warning system is an example of this. It demonstrates that even well-developed AI tools are ignored or mistrusted if their recommendations are not transparent or are made without a clinical context[45]. In such circumstances, the implementation of comprehensive change management strategies and interdisciplinary training programs is imperative.

It is also imperative to consider regulatory adaptation. While the FDA and the European MDR have classified AI systems as “Software as a Medical Device” (SaMD), experts such as Ong et al. (2025) are calling for the establishment of a flexible, lifecycle-oriented approval approach for adaptive AI systems that continue to develop with new data[31].

FUTURE DIRECTIONS AND INNOVATIONS

Plastic and reconstructive surgery is moving toward a new era in which AI is no longer a mere instrument, but an integral component of a surgical ecosystem characterized by learning, adaptation, and personalization. Future developments are expected to concentrate on the personalization of AI models, the integration of multimodal data, and the establishment of a networked surgical environment in which robotic systems, AR, and ML function in a coordinated manner. In addition, the emerging concept of digital twins in surgery describes dynamic, patient-specific virtual replicas that integrate imaging, clinical parameters, and procedural data to simulate surgical strategies, predict outcomes, and continuously adapt treatment planning through real-time data feedback - a key step toward truly personalized and predictive surgery[46].

Personalized AI models in plastic surgery

A significant development in the coming years will be the personalization of AI systems. Rather than relying solely on extensive, generic training datasets, future models will be customized to individual patient characteristics, encompassing anatomical variations, genetic markers, and prior surgical history. Mansoor and Ibrahim (2025) describe in their review article the vision of a dynamic learning system that integrates intra- and postoperative data in real time to improve surgical decisions at a personalized level and thus minimize complications[9].

Preoperative simulation illustrates such personalized applications: Huang et al. (2024) demonstrated that text-to-image AI models such as DALL-E 2 can generate preoperative visualizations based on individual anatomical parameters. In their study, patients were shown realistic, AI-generated simulations of a possible postoperative outcome (e.g., a lip lift) within minutes, which significantly improved patient satisfaction[15].

Multimodal AI: integrating imaging, genomics, and patient-reported outcomes

A promising trend is the development of multimodal AI architectures that combine imaging, clinical data, genetic profiles, and patient-reported outcome measures (PROMs). While many earlier models were confined to a single data source, multimodal integration enables more comprehensive and contextualized decision-making. Parvin et al. (2025) emphasize that such systems already demonstrate substantial performance gains, notably in oncological diagnostics, exemplified by the integrated evaluation of radiomic image features, mutation status, and laboratory parameters[47].

In plastic surgery, this approach involves combining CT or MRI imaging for flap selection with genomic risk profiles (e.g., wound healing disorders, fibrosis tendency) and subjective patient assessments, providing a comprehensive, multifaceted framework for surgical decision-making. Radiomics-based models have already demonstrated the capacity to automatically analyze texture, shape, and vessel courses, making them applicable to flap planning, as shown by Jarvis et al. (2020)[48].
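The multimodal integration described above is often implemented as late fusion, in which each modality is scored separately and the scores are then combined. The following minimal sketch is purely illustrative; every feature name, threshold, and weight is a hypothetical placeholder, not a validated clinical rule.

```python
def late_fusion_score(radiomic: dict, genomic: dict, proms: dict) -> float:
    """Toy late-fusion risk score: each modality is scored separately,
    then combined with fixed (hypothetical, not validated) weights."""
    # Imaging: a small vessel calibre is treated as higher risk here
    imaging_risk = 1.0 if radiomic.get("vessel_diameter_mm", 2.0) < 1.0 else 0.2
    # Genomics: binary flags for impaired wound healing / fibrosis tendency
    genomic_risk = 0.5 * genomic.get("wound_healing_variant", 0) \
                 + 0.5 * genomic.get("fibrosis_variant", 0)
    # PROMs: normalized patient-reported baseline burden (0 = best, 1 = worst)
    prom_risk = proms.get("baseline_function", 0.0)
    w_imaging, w_genomic, w_prom = 0.5, 0.3, 0.2
    return w_imaging * imaging_risk + w_genomic * genomic_risk + w_prom * prom_risk

score = late_fusion_score(
    radiomic={"vessel_diameter_mm": 0.8},
    genomic={"wound_healing_variant": 1, "fibrosis_variant": 0},
    proms={"baseline_function": 0.4},
)
# score = 0.5*1.0 + 0.3*0.5 + 0.2*0.4 ≈ 0.73
```

Production systems would learn both the per-modality scores and the fusion weights from data rather than fixing them by hand; the sketch only conveys the architectural idea.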

Vision for a fully AI-augmented surgical ecosystem

The long-term vision is an intelligent, AI-supported surgical ecosystem connecting all phases of care, from indication, planning, and execution to aftercare. This encompasses several technical and organizational components. Firstly, the integration of AR interfaces and robotic platforms that combine intraoperative navigation, image overlay, and fine motor precision is of paramount importance. Secondly, real-time feedback systems, such as postoperative image analysis or PROMs, enable continuous refinement of algorithms through a process described as “crescendo learning”[9,47].
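The incremental refinement underlying such feedback systems can be sketched with a generic online-learning update. This is not the cited authors' method; it is a standard stochastic-gradient step for logistic regression, shown only to make the loop concrete, and the feature names are hypothetical.

```python
import math

def sgd_step(weights, features, outcome, lr=0.1):
    """One online logistic-regression update: refine the model with a single
    postoperative observation (outcome: 1 = complication, 0 = none)."""
    z = sum(w * x for w, x in zip(weights, features))
    pred = 1.0 / (1.0 + math.exp(-z))   # current predicted complication risk
    grad = pred - outcome               # logistic-loss gradient factor
    return [w - lr * grad * x for w, x in zip(weights, features)]

# Hypothetical features per case: [bias, smoker, age_over_65]
weights = [0.0, 0.0, 0.0]
feedback = [([1, 1, 1], 1), ([1, 0, 0], 0), ([1, 1, 0], 1)]  # streamed cases
for features, outcome in feedback:
    weights = sgd_step(weights, features, outcome)  # model updated per case
```

Each new outcome nudges the model rather than requiring a full retraining run, which is the essential property that allows an algorithm to keep improving as postoperative data accumulate.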

Thirdly, generative AI has the potential to open new dimensions of information and communication. LLMs and text-to-image systems will enable interactive consultations tailored to patients’ anatomical and genetic characteristics. Cultural and linguistic preferences could also be taken into account, as demonstrated by Genovese et al. (2025) in the context of GPT-supported rhinoplasty consultations[32].

In regenerative medicine, efforts have recently begun to integrate AI-controlled systems with bioprinting technologies to create patient-specific tissue structures derived from individual 3D data and gene expression profiles. Integrating such innovations, however, requires explicit ethical and regulatory guidelines, as emphasized in recent declarations by international professional societies[49].

In conclusion, AI in plastic surgery is evolving beyond a mere assistive technology and may come to function as the central coordinating element of a networked, adaptive, and learning surgical system. The challenge now is to transform this vision into a reality that is ethically acceptable, compliant with regulation, and clinically feasible.

Expert opinion

From an expert perspective, current robotic and AI-driven technologies in plastic and reconstructive surgery represent a highly promising yet transitional stage of development. Although dedicated microsurgical robotic platforms already demonstrate clear benefits in precision, tremor elimination, and ergonomics, important limitations persist, particularly regarding workflow integration, interoperability with imaging and planning systems, and the absence of intuitive haptic feedback. To fully realize their clinical and educational potential, future developments should focus on tighter integration between robotic execution, AI-based surgical planning, and adaptive outcome analysis, enabling patient-specific optimization and continuous learning. Such convergence may ultimately support the evolution toward intelligent, digitally augmented surgical ecosystems that enhance reproducibility, training, and long-term surgical performance.

CONCLUSION

The integration of robotic systems, AI, and AR represents a fundamental shift in plastic and reconstructive surgery. Dedicated microsurgical platforms such as Symani® and MUSA® enable robotic precision in the submillimetre range, improving dexterity, reproducibility, and ergonomics. Concurrently, AI-driven tools support preoperative planning, outcome prediction, and intraoperative guidance by integrating multimodal clinical data.

To translate these innovations into routine practice, robust clinical validation, standardized data frameworks, and clear regulatory guidance are essential. Future research should focus on multicentre studies, interoperability with clinical information systems, and structured training programs. As personalized and adaptive AI models evolve, they will increasingly support patient-specific, predictive surgical strategies.

The future of plastic surgery will not be defined by the replacement of human skill, but by a symbiotic partnership between surgeon and machine: an intelligent surgical ecosystem.

DECLARATIONS

Authors’ contributions

Conceptualization and design of the work: Radtke C, Fast A, Novotny MJ

Investigation and literature review: Novotny MJ, Fast A

Drafting of the manuscript: Novotny MJ, Fast A

Critical revision of the manuscript for important intellectual content: Fast A, Radtke C

Supervision: Radtke C

All authors have read and approved the final version of the manuscript.

Availability of data and materials

Not applicable.

AI and AI-assisted tools statement

During the preparation of this manuscript, the AI tools Elicit (Ought, version 2.0, released 2023-11-15) and ChatGPT (OpenAI, GPT-5.3, released 2025-12-15) were used solely for language editing. The tools did not influence the study design, data collection, analysis, interpretation, or the scientific content of the work. All authors take full responsibility for the accuracy, integrity, and final content of the manuscript.

Financial support and sponsorship

None.

Conflicts of interest

All authors declared that there are no conflicts of interest.

Ethical approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Copyright

© The Author(s) 2026.

REFERENCES

1. Gillies SHD, Millard DR. Principles and art of plastic surgery. Available from: https://archive.org/details/principlesartofp0000gill/page/n7/mode/2up. [Last accessed on 28 Feb 2026].

2. Ruccia F, Mavilakandy A, Imtiaz H, et al. The application of robotics in plastic and reconstructive surgery: a systematic review. Int J Med Robot. 2024;20:e2661.

3. Tan YPA, Liverneaux P, Wong JKF. Current limitations of surgical robotics in reconstructive plastic microsurgery. Front Surg. 2018;5:22.

4. Innocenti M, Malzone G, Menichini G. First-in-human free flap tissue reconstruction using a dedicated microsurgical robotic platform. Plast Reconstr Surg. 2023;151:1078-82.

5. van Mulken TJM, Schols RM, Scharmga AMJ, et al; MicroSurgical Robot Research Group. First-in-human robotic supermicrosurgery using a dedicated microsurgical robot for treating breast cancer-related lymphedema: a randomized pilot trial. Nat Commun. 2020;11:757.

6. Malzone G, Menichini G, Innocenti M, Ballestín A. Microsurgical robotic system enables the performance of microvascular anastomoses: a randomized in vivo preclinical trial. Sci Rep. 2023;13:14003.

7. Yu E, Chu X, Zhang W, et al. Large language models in medicine: applications, challenges, and future directions. Int J Med Sci. 2025;22:2792-801.

8. Farid Y, Fernando Botero Gutierrez L, Ortiz S, et al. Artificial intelligence in plastic surgery: insights from plastic surgeons, education integration, chatgpt’s survey predictions, and the path forward. Plast Reconstr Surg Glob Open. 2024;12:e5515.

9. Mansoor M, Ibrahim AF. The transformative role of artificial intelligence in plastic and reconstructive surgery: challenges and opportunities. J Clin Med. 2025;14:2698.

10. Hassan AM, Biaggi AP, Asaad M, et al. Development and assessment of machine learning models for individualized risk assessment of mastectomy skin flap necrosis. Ann Surg. 2023;278:e123-30.

11. Tighe D, McMahon J, Schilling C, Ho M, Provost S, Freitas A. Machine learning methods applied to risk adjustment of cumulative sum chart methodology to audit free flap outcomes after head and neck surgery. Br J Oral Maxillofac Surg. 2022;60:1353-61.

12. Kapila AK, Georgiou L, Hamdi M. Decoding the impact of AI on microsurgery: systematic review and classification of six subdomains for future development. Plast Reconstr Surg Glob Open. 2024;12:e6323.

13. Park BJ, Hunt SJ, Martin C 3rd, Nadolski GJ, Wood BJ, Gade TP. Augmented and mixed reality: technologies for enhancing the future of IR. J Vasc Interv Radiol. 2020;31:1074-82.

14. Duran A, Demiröz A, Çörtük O, Ok B, Özten M, Eroğlu S. Human vs machine: the future of decision-making in plastic and reconstructive surgery. Aesthet Surg J. 2025;45:434-40.

15. Huang H, Lu Wang M, Chen Y, Chadab TM, Vernice NA, Otterburn DM. A machine learning approach to predicting donor site complications following DIEP flap harvest. J Reconstr Microsurg. 2024;40:70-7.

16. Kiwan O, Al-Kalbani M, Rafie A, Hijazi Y. Artificial intelligence in plastic surgery, where do we stand? JPRAS Open. 2024;42:234-43.

17. Park KW, Diop M, Willens SH, Pepper JP. Artificial intelligence in facial plastics and reconstructive surgery. Otolaryngol Clin North Am. 2024;57:843-52.

18. von Reibnitz D, Weinzierl A, Barbon C, et al. 100 anastomoses: a two-year single-center experience with robotic-assisted micro- and supermicrosurgery for lymphatic reconstruction. J Robot Surg. 2024;18:164.

19. Kueckelhaus M, Nistor A, van Mulken T, et al. Clinical experience in open robotic-assisted microsurgery: user consensus of the European Federation of Societies for Microsurgery. J Robot Surg. 2025;19:171.

20. Rusch M, Hoffmann G, Wieker H, et al. Evaluation of the MMI Symani® robotic microsurgical system for coronary-bypass anastomoses in a cadaveric porcine model. J Robot Surg. 2024;18:168.

21. Vles MD, Terng NCO, Zijlstra K, Mureau MAM, Corten EML. Virtual and augmented reality for preoperative planning in plastic surgical procedures: a systematic review. J Plast Reconstr Aesthet Surg. 2020;73:1951-9.

22. Kim Y, Kim H, Kim YO. Virtual reality and augmented reality in plastic surgery: a review. Arch Plast Surg. 2017;44:179-87.

23. Sayadi LR, Naides A, Eng M, et al. The new frontier: a review of augmented reality and virtual reality in plastic surgery. Aesthet Surg J. 2019;39:1007-16.

24. McGraw JR, Wakim JJ, Gallagher RS, Kovach SJ 3rd. Intraoperative navigation in plastic surgery with augmented reality: a preclinical validation study. Plast Reconstr Surg. 2023;151:170e-1.

25. Cai EZ, Gao Y, Ngiam KY, Lim TC. Mixed reality intraoperative navigation in craniomaxillofacial surgery. Plast Reconstr Surg. 2021;148:686e-8.

26. Shafarenko MS, Catapano J, Hofer SOP, Murphy BD. The role of augmented reality in the next phase of surgical education. Plast Reconstr Surg Glob Open. 2022;10:e4656.

27. Kowalewski KF, Hendrie JD, Schmidt MW, et al. Validation of the mobile serious game application Touch SurgeryTM for cognitive training and assessment of laparoscopic cholecystectomy. Surg Endosc. 2017;31:4058-66.

28. Kyaw BM, Saxena N, Posadzki P, et al. Virtual reality for health professions education: systematic review and meta-analysis by the digital health education collaboration. J Med Internet Res. 2019;21:e12959.

29. Patel I, Om A, Cuzzone D, Garcia Nores G. Comparing ChatGPT vs surgeon-generated informed consent documentation for plastic surgery procedures. Aesthet Surg J Open Forum. 2024;6:ojae092.

30. Mess SA, Mackey AJ, Yarowsky DE. Artificial intelligence scribe and large language model technology in healthcare documentation: advantages, limitations, and recommendations. Plast Reconstr Surg Glob Open. 2025;13:e6450.

31. Ong CS, Obey NT, Zheng Y, Cohan A, Schneider EB. SurgeryLLM: a retrieval-augmented generation large language model framework for surgical decision support and workflow enhancement. NPJ Digit Med. 2024;7:364.

32. Genovese A, Prabha S, Borna S, et al. Artificial intelligence for patient support: assessing retrieval-augmented generation for answering postoperative rhinoplasty questions. Aesthet Surg J. 2025;45:735-44.

33. Song T, Pabst F, Eck U, Navab N. Enhancing patient acceptance of robotic ultrasound through conversational virtual agent and immersive visualizations. arXiv 2025; arXiv:2502.10088.

34. Byrd TF 4th, Tignanelli CJ. Artificial intelligence in surgery - a narrative review. J Med Artif Intell. 2024;7:29.

35. Pressman SM, Borna S, Gomez-Cabello CA, Haider SA, Haider C, Forte AJ. AI and ethics: a systematic review of the ethical considerations of large language model use in surgery research. Healthcare. 2024;12:825.

36. Gilbert S. The EU passes the AI Act and its implications for digital medicine are unclear. NPJ Digit Med. 2024;7:135.

37. Vokinger KN, Gasser U. Regulating AI in medicine in the United States and Europe. Nat Mach Intell. 2021;3:738-9.

38. Warraich HJ, Tazbaz T, Califf RM. FDA perspective on the regulation of artificial intelligence in health care and biomedicine. JAMA. 2025;333:241-7.

39. Oakden-Rayner L, Dunnmon J, Carneiro G, Ré C. Hidden stratification causes clinically meaningful failures in machine learning for medical imaging. Proc ACM Conf Health Inference Learn. 2020;2020:151-9.

40. Johnson AE, Ghassemi MM, Nemati S, Niehaus KE, Clifton DA, Clifford GD. Machine learning and decision support in critical care. Proc IEEE Inst Electr Electron Eng. 2016;104:444-66.

41. Mehrabi N, Morstatter F, Saxena N, Lerman K, Galstyan A. A survey on bias and fairness in machine learning. ACM Comput Surv. 2021;54:1-35.

42. Collins GS, Moons KGM. Reporting of artificial intelligence prediction models. Lancet. 2019;393:1577-9.

43. Gu H, Liang Y, Xu Y, et al. Improving workflow integration with xPath: design and evaluation of a human-AI diagnosis system in pathology. ACM Trans Comput Hum Interact. 2023;30:1-37.

44. Liu X, Cruz Rivera S, Moher D, Calvert MJ, Denniston AK; SPIRIT-AI and CONSORT-AI Working Group. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Nat Med. 2020;26:1364-74.

45. WIRED. AI can help patients - but only if doctors understand it. Available from: https://www.wired.com/story/ai-help-patients-doctors-understand/. [Last accessed on 28 Feb 2026].

46. Bruynseels K, Santoni de Sio F, van den Hoven J. Digital twins in health care: ethical implications of an emerging engineering paradigm. Front Genet. 2018;9:31.

47. Parvin N, Joo SW, Jung JH, Mandal TK. Multimodal AI in biomedicine: pioneering the future of biomaterials, diagnostics, and personalized healthcare. Nanomaterials. 2025;15:895.

48. Jarvis T, Thornburg D, Rebecca AM, Teven CM. Artificial intelligence in plastic surgery: current applications, future directions, and ethical implications. Plast Reconstr Surg Glob Open. 2020;8:e3200.

49. Dhawan R, Brooks KD, Shauly O, Shay D, Losken A. Ethical considerations for generative artificial intelligence in plastic surgery. Plast Reconstr Surg Glob Open. 2025;13:e6825.

Plastic and Aesthetic Research
ISSN 2349-6150 (Online)   2347-9264 (Print)

Portico

All published articles are preserved here permanently:

https://www.portico.org/publishers/oae/
