A prospective study of non-contact artificial intelligence-assisted intraoperative 3D navigation technology in lung cancer surgery
Highlight box
Key findings
• Non-contact hand-controlled navigation can reduce navigation time while enhancing navigation efficiency and the surgical experience.
What is known and what is new?
• Non-contact intraoperative three-dimensional (3D) navigation systems are recognized for combining medical imaging, computer graphics, and high-precision measurement to improve surgical safety. Gesture recognition interfaces reduce infection risks but face challenges like environmental interference.
• This study presents a novel application of an error back-propagation algorithm (EBPA) with adaptive learning rates and support vector machine (SVM)-based risk minimization to enhance gesture recognition robustness and reduce training errors. It demonstrates quantifiable benefits of non-contact navigation, including reduced anesthesia time, lower blood loss, and improved surgeon-system interaction, and highlights the need for hybrid solutions (e.g., combining gesture and gaze tracking) to reduce false alarms.
What is the implication, and what should change now?
• Non-contact navigation systems can revolutionize surgical workflows by restoring full control to surgeons, minimizing sterility breaches, and improving procedural efficiency.
• It is suggested to refine real-time feedback mechanisms and fine-movement control to address gesture recognition delays and inaccuracies, and to develop hybrid systems integrating gaze tracking, speech recognition, or torso-direction sensing to reduce misinterpretation risks.
Introduction
Surgical procedures are evolving in parallel with advancing digitalization and intelligent technologies. Computed tomography (CT) screening has increased the early detection rate of lung cancer (1,2), and minimally invasive thoracoscopic segmentectomy has emerged as a critical therapeutic option for early-stage lung cancer (3-6). Three-dimensional (3D) reconstruction navigation is an essential auxiliary tool for segmentectomy (7-11). Intraoperative navigation is typically delivered via 3D printing, virtual reality (VR)/augmented reality (AR), and similar technologies, or depends largely on bedside assistance to guide the procedure (12-15). The strict sterility standards of the operating room limit the use of interactive navigation devices such as the mouse and tracker, and personal manipulation of the images increases the likelihood of contamination, raising costs in both time and therapeutic risk (16,17). Previously, surgeons typically relied on other members of the surgical team to manipulate images. This may be adequate for relatively discrete and simple image interactions, but such indirect operation deprives the surgeon of direct interactivity, hindering direct acquisition of the image data and the use of medical images for more analytical and critical assessment tasks (18). Consequently, a primary goal of navigation is to enable surgeons to perform direct and precise operations, engage with information in person, and maintain the sterility of the operating room.
To achieve this goal, we developed a non-contact intraoperative 3D navigation system as a novel intraoperative navigation technology, which enables direct dynamic image control without physical touch, thus offering an alternative to existing intraoperative navigation methods. By using gestures instead of a mouse, the target image can be zoomed, rotated, and positioned, expanding the freedom of the 3D data interface (19). The system utilizes computer-aided technology and high-precision sensors to provide real-time 3D images and navigation information during surgery, and can also capture the surgeon's voice commands, helping doctors locate lesions and anatomical sites more accurately and perform surgical procedures autonomously and in a non-contact manner. This article examines the prospective value of the non-contact intraoperative 3D navigation system in surgical procedures, evaluates its effectiveness in the clinical practice of pulmonary segmentectomy, and assesses its potential impacts. We present this article in accordance with the TRIPOD reporting checklist (available at https://jtd.amegroups.com/article/view/10.21037/jtd-2025-1136/rc).
Methods
Surgery and data acquisition
Patient
From March 2022 to March 2025, a total of 62 patients with early-stage lung cancer who were eligible for segmentectomy were randomly divided into two groups at a 1:1 ratio. The randomization sequence was generated by an independent statistician using SPSS software (version 25.0) with a block randomization design (varying block sizes of 4 and 6). Sequentially numbered, opaque sealed envelopes containing the group assignments were prepared and opened only after patient enrollment. Blood loss, surgical success rate, operation time, navigation time, and operator satisfaction were compared between the two groups. Navigation time, the primary endpoint of this study, was operationally defined as the continuous duration in seconds from activation of the stereoscopic navigation system until target localization accuracy was achieved. Three key secondary endpoints were evaluated: (I) surgical success rate, defined as accomplishment of navigation-assisted resection only upon visual identification, individual dissection, and selective transection of all targeted anatomical structures (artery, vein, and bronchus) within the planned segment/subsegment; (II) operation time, quantified as the continuous skin incision-to-wound closure duration in minutes; (III) blood loss, calculated through combined volumetric assessment of suction canister content and gravimetric gauze analysis (mL).
Inclusion criteria
Diagnosis of early-stage non-small cell lung cancer with lesions limited to a specific lung segment: according to the indications recommended in the National Comprehensive Cancer Network (NCCN) guidelines, CT examination shows a peripheral tumor ≤2 cm in diameter and at least one of the following conditions is met: (I) histological type confirmed as adenocarcinoma; (II) artificial intelligence-assisted diagnosis with strong suspicion of malignancy and ground glass opacity (GGO) ≥50%; (III) follow-up imaging confirming that the tumor doubled in size over more than 400 days; (IV) AI model analysis indicating a probability of malignancy over 80% for high-risk nodules. In addition, patients had to be (V) aged 18 to 75 years, regardless of gender; (VI) free of serious cardiovascular, hepatic, or renal dysfunction; and (VII) willing to sign the informed consent form and participate in the study.
Exclusion criteria
(I) Lesion diameter >2 cm with solid content >50%, requiring lobectomy; or a ground glass nodule less than 2 cm from the lung surface, for which wedge resection can be performed; (II) three or more lesions involving multiple lung segments, requiring lobectomy; (III) contraindications to surgery, such as severe cardiopulmonary insufficiency or coagulation disorders; (IV) refusal to participate in the trial or to sign the informed consent. All patients were informed and consented.
Building an effective and intelligent diagnostic system model based on radiomics for pulmonary nodules
We used a generative adversarial network to normalize CT images obtained with various imaging techniques. The generator learns, in a cycle, both a forward mapping from the source domain to the target domain and a backward mapping from the target domain to the source domain, while the discriminator is trained to detect "fake" images produced by the generator. The generator and discriminator therefore compete with each other, driving the optimization until the generator learns to transform the properties of one domain into images of the other domain. Regions of interest (ROIs) were manually delineated by medical professionals with extensive clinical experience, with far higher precision than the automated delineations of the artificial intelligence models more commonly used today. To reduce error, the intraclass correlation coefficient (ICC) was used to measure inter-observer agreement between the two physicians' delineations. Radiomics features were extracted from the original lung CT image data and the volume of interest (VOI), including quantified nodule size, shape, and intensity as well as texture matrices, wavelet transform features, etc. The t-test, variance test, and other methods were used to retain features with significant differences, and the least absolute shrinkage and selection operator (LASSO) algorithm was used to delete redundant features. Finally, the selected features were used to create a nomogram model for identifying ground-glass nodules in the lung. The model performed well in terms of sensitivity (95.3%) and accuracy (93.4%). The workflow for pulmonary nodule screening is shown in Figure 1.
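As a concrete illustration of this feature-selection step, the following is a minimal Python sketch, assuming radiomics features have already been extracted (e.g., with a tool such as pyradiomics) into a feature matrix with benign/malignant labels; the placeholder data, thresholds, and the use of LassoCV are illustrative assumptions, not the exact pipeline used in this study.

```python
# Minimal sketch of univariate screening followed by LASSO feature selection.
# X and y are random placeholders standing in for extracted radiomics features.
import numpy as np
from scipy import stats
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))          # placeholder radiomics features
y = rng.integers(0, 2, size=120)         # placeholder benign/malignant labels

# Step 1: univariate screening -- keep features that differ significantly
# between the two classes (two-sample t-test, as in the text).
keep = [j for j in range(X.shape[1])
        if stats.ttest_ind(X[y == 0, j], X[y == 1, j]).pvalue < 0.05]
X_sel = X[:, keep]

# Step 2: LASSO with a cross-validated penalty drops redundant features;
# nonzero coefficients identify the features kept for the nomogram.
scaler = StandardScaler()
lasso = LassoCV(cv=5, random_state=0).fit(scaler.fit_transform(X_sel), y)
selected = [keep[j] for j, c in enumerate(lasso.coef_) if c != 0]
print(f"{len(selected)} features retained for the nomogram model")
```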
Key technologies of the non-contact intraoperative 3D navigation system
The non-contact intraoperative 3D navigation system is based on a gesture-controlled image visualization system. It uses binocular active infrared sensors for precise hand tracking and relies on deep learning algorithms to track and recognize hand movements and complex gestures. Figure 2 shows the flowchart of network establishment for the non-contact intraoperative 3D navigation system, together with the meanings and explanations of the various hand movements.
Hand tracking
We designed a total of 21 key nodes for hand feature detection (Figure 3A). When an object point is imaged, the imaging beam cannot converge to a single point because of aberration, creating a diffuse circular projection called a circle of confusion. Circles of confusion exist both in front of and behind the plane of focus; when their diameter is small enough that the blur is invisible to the naked eye, they are called permissible circles of confusion, and the distance between the two permissible circles of confusion defines the depth of field (DOF). The binocular vision-based DOF extraction algorithm is the most commonly used technique for this purpose. This method captures the same scene with multiple cameras from different angles, producing two parallax images that are pre-processed into grayscale images and then stereoscopically matched to extract the disparity between them (Figure 3B,3C). Finally, the disparity information is combined with the camera calibration data to produce true depth data. By appropriately processing the hand data captured by the infrared binocular camera, the DOF data can be obtained. In addition, track-before-detect (TBD) technology is used to track the operator's joint information. TBD technology is mainly applied to track the coordinates of an object in three dimensions, such as the tips of pens and fingers (Figure 3D-3F). Algorithms used in TBD technology include recursive Bayesian filtering, dynamic programming, and space-time domain matched filters.
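The disparity-to-depth step described above can be sketched as follows, assuming a rectified image pair and placeholder calibration values; the focal length, baseline, and matcher settings are illustrative, not those of the actual infrared sensor.

```python
# Minimal sketch of binocular depth extraction under a pinhole-camera model:
# rectified left/right grayscale images are stereo-matched to a disparity map,
# which is converted to metric depth via triangulation.
import numpy as np
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # pre-processed
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # grayscale pair

# Block-matching stereo correspondence yields a per-pixel disparity map.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Triangulation: depth Z = f * B / d, valid only where disparity d > 0.
f_px, baseline_m = 700.0, 0.04
depth = np.where(disparity > 0, f_px * baseline_m / disparity, 0.0)
valid = depth[depth > 0]
print("depth range (m):", valid.min(), valid.max())
```

The key relationship is the triangulation formula Z = f·B/d: larger disparities correspond to points closer to the camera.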
Gesture recognition
Figure 3G depicts the entire implementation process of gesture recognition, including data preparation, preprocessing, convolutional layers, subsampling (pooling) layers, nonlinear layers, the main network, the expert network, pose estimation, hand model fitting, etc. The technology is based on the convolutional neural network (CNN), an artificial neural network consisting of multiple layers whose convolution filters extract information from the image at any position.
After receiving the binocular image, the neural network uses multiple convolutional and pooling layers to extract and represent the visual information. Both the main network and the expert network are fully connected networks that generate hand pose predictions from the captured hand image features. The difference is that the main network is trained on all available hand gesture data, while the expert network is trained only on specific categories of hand gestures, such as open hands, fists, reaching, and V-shaped gestures. This makes the network's recognition of specific gestures both targeted and generalizable, as illustrated in the sketch below.
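The following is a minimal PyTorch sketch of this main/expert arrangement, assuming a shared convolutional backbone and a simple weighted blend of the two heads; the layer sizes, input resolution, and fusion rule are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch: shared CNN backbone feeding a "main" pose head (trained on all
# gestures) and an "expert" head (fine-tuned on one gesture category).
import torch
import torch.nn as nn

class GestureNet(nn.Module):
    def __init__(self, n_keypoints=21):
        super().__init__()
        # Convolution + pooling layers extract features from the depth image.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        feat = 32 * 24 * 24                     # for a 96x96 input image
        # Main head: trained on all gesture data (x, y, z per keypoint).
        self.main_head = nn.Sequential(nn.Linear(feat, 256), nn.ReLU(),
                                       nn.Linear(256, n_keypoints * 3))
        # Expert head: trained only on one category (e.g., fists).
        self.expert_head = nn.Sequential(nn.Linear(feat, 256), nn.ReLU(),
                                         nn.Linear(256, n_keypoints * 3))

    def forward(self, x, expert_weight=0.5):
        h = self.backbone(x)
        # Blend the generic and specialised pose predictions.
        return (1 - expert_weight) * self.main_head(h) + expert_weight * self.expert_head(h)

pose = GestureNet()(torch.randn(1, 1, 96, 96))
print(pose.shape)  # torch.Size([1, 63]) -> 21 keypoints x 3 coordinates
```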
Intraoperative navigation workflow in the control group
The surgical navigation workflow integrated continuously displayed preoperative 3D reconstructions from CT scans with a strict non-interventional interaction protocol where surgeons issued verbal image manipulation commands (e.g., rotate 30 degrees, occlude pulmonary artery branches) to a dedicated non-sterile assistant who exclusively handled visualizations while remaining completely isolated from physical surgical actions. After receiving commands, the assistant adjusted the virtual models. The operating surgeon then visually confirmed anatomical correspondence before proceeding with instrument navigation guided by the optimized views.
Surgery process
The study was conducted in accordance with the Declaration of Helsinki and its subsequent amendments. It was conducted in the Department of Thoracic Surgery at the First Medical Center of PLA General Hospital, was approved by the Hospital Ethics Review Board (Lunshen: S2019-222-01), and informed consent was obtained from all individual participants. The surgeons were all senior doctors who were familiar with navigation systems and had similar skills. The operation was performed via minimally invasive single-port thoracoscopy; if extensive adhesions or unexpected bleeding occurred during the operation, the procedure could be converted to open chest surgery. The patient was intubated with a double-lumen cannula and placed in the lateral decubitus position under general anesthesia. Before the operation, each patient underwent thin-slice CT, and the anatomical structure of the surgical area was reconstructed in 3D from the Digital Imaging and Communications in Medicine (DICOM) data. The self-developed non-contact intraoperative 3D navigation system then imported the 3D data in .STL format. The experimental group employed this navigation system when performing lung segment resection during surgery (Video 1), while the control group used the display, operated by an assistant, for surgical navigation after completing the 3D reconstruction. The application scenarios are shown in Figure 4.
Operator quantitative assessment
We used quantitative tables to evaluate the effectiveness and application impact of the non-contact intraoperative 3D navigation system and standard navigation technology in segmentectomy. The evaluation covered the realism of surgical navigation, real-time interactivity, navigation speed, and intuitiveness, with the aim of quantifying the support the model provides to the surgeon during the operation. Using comprehensive assessment questions and operator evaluation, we determined the benefits of the new experimental navigation technology over the existing technology.
Statistical analysis
IBM SPSS 25.0 was used for data analysis. Quantitative data, such as navigation time, surgical operation time, blood loss, and surgeons' satisfaction ratings, were first tested for normal distribution and homogeneity of variance. When both assumptions held, t-tests were used to compare the two groups, with data presented as mean ± standard deviation (SD); otherwise, a nonparametric test was used, with results reported as median and interquartile range. Categorical data, such as the frequency of complications, were compared using the Chi-squared test or Fisher's exact test. A P value <0.05 indicates a statistically significant difference.
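For readers who prefer code to prose, the decision rule above can be sketched in Python with SciPy instead of SPSS; the helper name and example data are illustrative assumptions.

```python
# Minimal sketch: test normality and homogeneity of variance first, then
# fall back from the t-test to the Mann-Whitney U test. Thresholds follow
# the paper (0.1 for assumption tests, 0.05 for the group comparison).
from scipy import stats

def compare_groups(a, b, assumption_alpha=0.1):
    normal = (stats.shapiro(a).pvalue > assumption_alpha and
              stats.shapiro(b).pvalue > assumption_alpha)
    equal_var = stats.levene(a, b).pvalue > assumption_alpha
    if normal and equal_var:
        return "t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

# e.g., navigation times (s) for a few patients in each group (illustrative)
test, p = compare_groups([80, 95, 50, 107, 60], [173, 120, 234, 150, 200])
print(test, p)
```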
Results
Table 1 and Figure 5 show the statistical indicators comparing the non-contact intraoperative 3D navigation system with traditional navigation. The navigation time [median (IQR), 80 (50–107) s] and operation time (84.23±13.18 min) of the experimental group were significantly shorter than those of the control group [173 (120–234) s and 101.84±11.01 min, respectively]. Surgeons preferred the non-contact intraoperative 3D navigation system, rating it higher than the control approach (96.9±0.34 vs. 89.7±0.54). There was minimal difference in blood loss between the two techniques (24.74±6.88 vs. 24.68±6.02 mL), and surgical success rates were similar in the experimental (96.77%) and control (93.55%) groups. Despite the lack of a significant difference, the success rate of the experimental group was still slightly higher.
Table 1
| Variables | Experimental group (n=31) | Control group (n=31) | U/t/χ2 | P |
|---|---|---|---|---|
| Statistical indicators | | | | |
| Navigation time (s) | 80 (50–107; 32–125) | 173 (120–234; 100–306) | 65.50 (U) | <0.001 |
| Surgical success rate (%) | 96.77 | 93.55 | 0.000 (χ2) | >0.99 |
| Operation time (min) | 84.23±13.18 [66–103] | 101.84±11.01 [80–125] | 165.50 (U) | <0.001 |
| Blood loss (mL) | 24.74±6.88 [15–41] | 24.68±6.02 [16–40] | 480.00 (U) | 0.99 |
| Surgeons' score | 96.9±0.34 | 89.7±0.54 | 21.000 (U) | <0.001 |
| Baseline characteristics | | | | |
| Gender | | | 0.065 (χ2) | 0.80 |
| Male | 14 | 15 | | |
| Female | 17 | 16 | | |
| Age (years) | 50±2.6 [26–76] | 53.4±2.9 [24–79] | 0.745 (t) | 0.46 |
| Segment† | | | 0.067 (χ2) | 0.80 |
| Pulmonary segment | 18 | 19 | | |
| Pulmonary subsegment | 13 | 12 | | |
Data are presented as n, median (interquartile range; range), or mean ± standard deviation [range] unless otherwise indicated. P value thresholds for tests of normality and homogeneity of variance were set at 0.1. †, No. of patients who successfully underwent anatomic segmentectomy or subsegmental resection. Cases requiring en bloc stapling of undissected pulmonary parenchyma were deemed failures. Pulmonary segment: No. of patients scheduled for anatomic segmentectomy; pulmonary subsegment: No. of patients scheduled for anatomic subsegmental or combined subsegmental resections.
Discussion
The experimental results demonstrate that non-contact hand-controlled navigation can significantly reduce navigation time while enhancing navigation efficiency and the surgical experience. However, during gesture capture, the design of the error back-propagation algorithm (EBPA) and the selection of the neural network are critical for accurate gesture recognition and interpretation. The purpose of the EBPA is to enhance the robustness of the system, automate data transformation, and support continuous data streaming. The EBPA transmits signals in two directions: in the forward pass, input data travel through the neural network to produce an output; in the backward pass, the error between the output and the expected outcome is propagated back through the network to adjust its weights.
In each iteration of the algorithm, the learning rate determines the step length; typically, 0.1 is chosen. An excessively high learning rate may make the algorithm converge quickly but can skip over the optimal solution, while an excessively low learning rate makes convergence extremely slow. To separate and process the data effectively, different learning rates are sometimes specified for different data segments. The iteration stops when the error on the target training set falls below a specified minimum. Generally speaking, the EBPA processes only one sample at a time; because the parameters are updated so frequently, the influence of other samples can occasionally be neutralized, so more iterations are usually performed to obtain more precise results. By adjusting the learning rate and the number of iterations, the algorithm can converge more efficiently to an exact solution on large data sets through sample-by-sample updates (20), as sketched below.
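A minimal NumPy sketch of such sample-by-sample back-propagation with a shrinking learning rate and an error-floor stopping criterion follows; the network size, toy data, and decay schedule are illustrative assumptions, not the deployed system.

```python
# Sketch: online (one-sample-at-a-time) back-propagation for a tiny
# one-hidden-layer network, with the learning rate decayed each epoch
# and iteration stopped once the training error reaches a floor.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                       # toy gesture features
y = (X[:, 0] + X[:, 1] > 0).astype(float)           # toy binary target

W1, b1 = rng.normal(scale=0.5, size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr, target_error = 0.1, 1e-2
for epoch in range(100):
    sse = 0.0
    for x, t in zip(X, y):                          # one sample at a time
        h = sigmoid(x @ W1 + b1)                    # forward pass
        o = sigmoid(h @ W2 + b2)
        err = o - t
        sse += float(err**2)
        # Backward pass: propagate the output error to both layers.
        d_o = err * o * (1 - o)
        d_h = (d_o @ W2.T) * h * (1 - h)
        W2 -= lr * np.outer(h, d_o); b2 -= lr * d_o
        W1 -= lr * np.outer(x, d_h); b1 -= lr * d_h
    lr *= 0.99                                      # shrink the step length
    if sse / len(X) < target_error:                 # stop at the error floor
        break
print(f"stopped at epoch {epoch}, MSE={sse/len(X):.4f}")
```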
Then, we used support vector machine (SVM) theory as the classifier. The main concept is to find a maximum-margin separating hyperplane in a feature space determined by a kernel function. SVM enhances learning capacity by applying the risk minimization principle to small-sample problems. The linear discriminant function reduces the complexity of the algorithm and is effective for quadratic optimization problems; theoretically, this can yield the globally optimal solution (21-25).
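As a hedged illustration of this classification stage, the following scikit-learn sketch applies a kernel SVM to hand-pose feature vectors; the RBF kernel, feature layout, and toy data are assumptions for demonstration only.

```python
# Sketch: kernel SVM as the gesture classifier. The kernel implicitly maps
# features into a space where a maximum-margin separating hyperplane is
# sought (risk minimization on small samples).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 63))            # 21 keypoints x 3 coords per frame
y = rng.integers(0, 4, size=300)          # e.g., fist / open / reach / V-sign

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[:250], y[:250])                 # train on the first 250 frames
print("held-out accuracy:", clf.score(X[250:], y[250:]))
```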
In contemporary surgery, non-contact intraoperative 3D navigation systems have developed into crucial auxiliary instruments. They combine advanced technologies such as medical imaging, computer graphics, high-precision measurement, and stereo positioning, and rely on deep learning algorithms and CNNs to accurately track and recognize hand movements and complex gestures (26). They offer surgeons an unparalleled surgical navigation experience while greatly enhancing surgical precision and safety. This research demonstrates that the non-contact intraoperative 3D navigation system has notable advantages over conventional surgical techniques in several important parameters. First, there was a noticeable reduction in operating time for surgical navigation. By providing more precise and immediate surgical route guidance, navigation technology can reduce the time required for intraoperative placement and confirmation; because the procedure is shorter, anesthesia-related problems are less likely. This advantage reduces complications and risks during surgery and recovery and improves surgical efficiency. Second, although no statistically significant differences were observed in bleeding volume or procedural success rates between groups, the non-contact navigation system demonstrated inherent technical advantages: by expanding intraoperative field-of-view accuracy, minimizing anatomical misinterpretation risks, and enhancing visualization of delicate structures, this approach potentially reduces iatrogenic tissue damage, providing a streamlined methodology for technically demanding operations. Surgeons also expressed strong recognition of and satisfaction with the non-contact 3D navigation system. The system adapts to the operating habits of different surgeons and enhances the surgical team's collaboration and overall surgical quality. While the observed improvements in operative control may contribute to enhanced surgical precision, their translation to patient-centered outcomes such as postoperative recovery requires further validation. Future studies with larger cohorts should specifically evaluate the system's impact on complication rates, hospital length of stay, and patient-reported recovery metrics to establish clinical utility.
When analyzing images, surgeons are limited by traditional interactive mechanisms for acquiring, displaying, and processing images. The fundamental limitation of input devices such as mice and keyboards is the requirement to maintain a strict demarcation between sterile and non-sterile areas in the operating room. A touch-based interface, which requires the surgeon to physically manipulate the image, significantly raises the risk of surgical infection. During long operations that require multiple interactions with images, the operation time may be extended, resulting in higher financial costs and clinical risks. To avoid sterility violations, such systems may need to be operated by non-sterile team members. However, team members are not always present, and information discrepancies can arise during data transfer because of differences in professional level and cognitive ability. One surgeon reportedly spent up to seven minutes instructing an assistant to perform the clicks needed to configure a navigation system (27,28). This example clearly demonstrates the potential communication difficulties associated with third-party handling of image information. By providing direct control over the image data and returning full operational control to the surgeon, the non-contact intraoperative 3D navigation system enables the surgeon to mentally model the surgical process and implement surgical ideas consistently and comprehensively throughout the entire procedure. The system carries out quantitative analysis and surgical planning with comprehensive information on the size, characteristics, and location of lesions via real-time acquisition, detection, and 3D display of the relative positions of surgical instruments, lesion sites, etc. Gesture-based interaction, in particular, expands the degrees of freedom of the user interface and improves how medical personnel work with images during procedures, including interface flexibility and the accuracy of spatial operations such as volume-based rotation and target positioning (29). By helping physicians rapidly recognize and correct abnormalities during the procedure, it improves surgical precision and raises the success rate of operations. This is particularly crucial when treating smaller lesions and performing complicated surgeries.
However, insufficient real-time feedback and inaccurate measurements during gesture procedures are occasionally observed with the non-contact intraoperative 3D navigation system. For instance, operating systems based on Microsoft Kinect and Leap Motion sensors require doctors to hold their hands still for two seconds before starting or completing a measurement, and the cursor position can shift relative to the anatomical position during surgery (30). Hand tremor, the relative position of the sensing device, and ambient light can all affect the precision and accuracy of gesture recognition. Further algorithmic improvement is required for better intraoperative image feedback and control of fine movements.
Additionally, gesture recognition is a crucial aspect of the system's usability. Gesture spotting is the task of distinguishing significant gestures from unintended movements and determining the start and end points of gestures in an input sequence (31,32). Gesture-based interfaces outperform mice in completing intraoperative tasks. Display features such as windowing, zooming, rotating, highlighting, and annotating intraoperative images can help increase system usability. However, when developing gesture control applications, the limited space in the operating room and the proper distinction between gestures intended for the system and interpretive gestures must be taken into account (30). Gestures vary widely in spatiotemporal space, and successive gestures blend into one another (31). Besides gestures that support "dialogue" with the system, other actions performed while moving in front of the screen, such as transitions between gestures, may be misinterpreted as intraoperative commands, resulting in extra or incorrect actions and impairing intraoperative accuracy. Jacob et al. offered a good solution: by determining the user's intent when interacting with the device (e.g., from gaze, head direction, or torso direction), the system's false alarm rate was successfully reduced without compromising detection performance, and the average recognition rate for isolated gestures was 97.23% (33). Another way to enhance gesture interface systems is to add speech recognition, which can adjust parameters related to discrete command execution. Nevertheless, speech recognition has limitations, because background noise in the operating room and surgeons' accents vary significantly (34,35). Therefore, it is necessary to develop a more comprehensive, multi-parameter non-contact recognition solution that leverages diverse theoretical research, uses local trajectories to segment gestures, and tracks each regional module independently to adapt to increasingly complex intraoperative situations.
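To make the intent-gating idea concrete, the following Python sketch (loosely inspired by the contextual-cue approach of Jacob et al., not their implementation) accepts a recognized gesture as a command only when a hypothetical gaze-on-screen cue and a dwell-time criterion are both satisfied.

```python
# Sketch: suppress transitional motions by requiring a confident gesture to
# be held for several frames while the operator is looking at the display.
# The Frame fields and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Frame:
    gesture: str          # output of the gesture classifier
    confidence: float     # classifier confidence in [0, 1]
    gaze_on_screen: bool  # from a (hypothetical) gaze tracker

def spot_commands(frames, min_conf=0.9, dwell_frames=15):
    """Emit a command only after a confident gesture is held while the
    operator is looking at the display."""
    held, last = 0, None
    for f in frames:
        if f.gaze_on_screen and f.confidence >= min_conf and f.gesture == last:
            held += 1
            if held == dwell_frames:
                yield f.gesture            # accepted as an intentional command
        else:
            held = 1 if f.gaze_on_screen and f.confidence >= min_conf else 0
            last = f.gesture

# e.g., 20 frames of a held "rotate" gesture with gaze on the display
stream = [Frame("rotate", 0.95, True)] * 20
print(list(spot_commands(stream)))        # -> ['rotate']
```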
Conclusions
The non-contact intraoperative 3D navigation system represents a novel surgical platform with potential applicability across numerous procedures. Our findings suggest it may contribute to enhanced surgical precision and safety, potentially influencing operative efficiency and postoperative recovery trajectories. To promote its use and development, further research and technical innovation in relevant areas are vital. Understanding the unique requirements of each surgical practice, establishing a comprehensive non-contact detection system with better controllability, providing stereoscopic visualization, and improving interaction with images will encourage its deployment and advancement in surgical procedures.
Acknowledgments
None.
Footnote
Reporting Checklist: The authors have completed the TRIPOD reporting checklist. Available at https://jtd.amegroups.com/article/view/10.21037/jtd-2025-1136/rc
Data Sharing Statement: Available at https://jtd.amegroups.com/article/view/10.21037/jtd-2025-1136/dss
Peer Review File: Available at https://jtd.amegroups.com/article/view/10.21037/jtd-2025-1136/prf
Funding: This research was funded by
Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://jtd.amegroups.com/article/view/10.21037/jtd-2025-1136/coif). The authors have no conflicts of interest to declare.
Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. The study was conducted in accordance with the Declaration of Helsinki and its subsequent amendments. The study was conducted in the Department of Thoracic Surgery at the First Medical Center of PLA General Hospital and was approved by the Hospital Ethics Review Board (Lunshen: S2019-222-01) and informed consent was obtained from all individual participants.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
References
- Sung H, Ferlay J, Siegel RL, et al. Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA Cancer J Clin 2021;71:209-49. [Crossref] [PubMed]
- Jiang Q, Sun H, Chen Q, et al. High-resolution computed tomography with 1,024-matrix for artificial intelligence-based computer-aided diagnosis in the evaluation of pulmonary nodules. J Thorac Dis 2025;17:289-98. [Crossref] [PubMed]
- Brunelli A, Decaluwe H, Gonzalez M, et al. European Society of Thoracic Surgeons expert consensus recommendations on technical standards of segmentectomy for primary lung cancer. Eur J Cardiothorac Surg 2023;63:ezad224. [Crossref] [PubMed]
- Schuchert MJ, Abbas G, Awais O, et al. Anatomic segmentectomy for the solitary pulmonary nodule and early-stage lung cancer. Ann Thorac Surg 2012;93:1780-5; discussion 1786-7. [Crossref] [PubMed]
- Saji H, Okada M, Tsuboi M, et al. Segmentectomy versus lobectomy in small-sized peripheral non-small-cell lung cancer (JCOG0802/WJOG4607L): a multicentre, open-label, phase 3, randomised, controlled, non-inferiority trial. Lancet 2022;399:1607-17. [Crossref] [PubMed]
- Lu G, Xiang Z, Zhou Y, et al. Comparison of lobectomy and sublobar resection for stage I non-small cell lung cancer: a meta-analysis based on randomized controlled trials. Front Oncol 2023;13:1261263. [Crossref] [PubMed]
- Kato H, Oizumi H, Suzuki J, et al. Thoracoscopic anatomical lung segmentectomy using 3D computed tomography simulation without tumour markings for non-palpable and non-visualized small lung nodules. Interact Cardiovasc Thorac Surg 2017;25:434-41. [Crossref] [PubMed]
- Saji H, Inoue T, Kato Y, et al. Virtual segmentectomy based on high-quality three-dimensional lung modelling from computed tomography images. Interact Cardiovasc Thorac Surg 2013;17:227-32. [Crossref] [PubMed]
- Chen K, Niu Z, Jin R, et al. Three-dimensional reconstruction computed tomography in thoracoscopic segmentectomy: a randomized controlled trial. Eur J Cardiothorac Surg 2024;66:ezae250. [Crossref] [PubMed]
- Nakamura S, Hayashi Y, Kawaguchi K, et al. Clinical application of a surgical navigation system based on virtual thoracoscopy for lung cancer patients: real time visualization of area of lung cancer before induction therapy and optimal resection line for obtaining a safe surgical margin during surgery. J Thorac Dis 2020;12:672-9. [Crossref] [PubMed]
- Xu W, Li Z, Cao X, et al. Oncologic outcomes of three-dimensional navigation-guided segmentectomy for early-stage non-small cell lung cancer >2-3 cm. Asian J Surg 2024; Epub ahead of print. [Crossref]
- Qiu B, Ji Y, He H, et al. Three-dimensional reconstruction/personalized three-dimensional printed model for thoracoscopic anatomical partial-lobectomy in stage I lung cancer: a retrospective study. Transl Lung Cancer Res 2020;9:1235-46. [Crossref] [PubMed]
- Li C, Zheng B, Yu Q, et al. Augmented Reality and 3-Dimensional Printing Technologies for Guiding Complex Thoracoscopic Surgery. Ann Thorac Surg 2021;112:1624-31. [Crossref] [PubMed]
- Doornbos MJ, Peek JJ, Maat APWM, et al. Augmented Reality Implementation in Minimally Invasive Surgery for Future Application in Pulmonary Surgery: A Systematic Review. Surg Innov 2024;31:646-58. [Crossref] [PubMed]
- Peek JJ, Zhang X, Hildebrandt K, et al. A novel 3D image registration technique for augmented reality vision in minimally invasive thoracoscopic pulmonary segmentectomy. Int J Comput Assist Radiol Surg 2025;20:787-95. [Crossref] [PubMed]
- Hartmann B, Benson M, Junger A, et al. Computer keyboard and mouse as a reservoir of pathogens in an intensive care unit. J Clin Monit Comput 2004;18:7-12. [Crossref] [PubMed]
- Ledwoch K, Dancer SJ, Otter JA, et al. How dirty is your QWERTY? The risk of healthcare pathogen transmission from computer keyboards. J Hosp Infect 2021;112:31-6. [Crossref] [PubMed]
- Johnson R, O'Hara K, Sellen A, et al. Exploring the potential for touchless interaction in image-guided interventional radiology. Proceedings of the 2011 Annual Conference: Human Factors in Computing Systems 2011:3323-32. doi: 10.1145/1978942.1979436.
- Gallo L. A study on the degrees of freedom in touchless interaction. SIGGRAPH Asia 2013 Technical Briefs 2013:1-4. doi: 10.1145/2542355.2542390.
- Ikram A, Liu Y, editors. Real time hand gesture recognition using Leap Motion controller based on CNN-SVM architecture. 2021 IEEE 7th International Conference on Virtual Reality (ICVR); Foshan, China, 2021:5-9. doi: 10.1109/ICVR51878.2021.9483844.
- Ding W, Li G, Sun Y, et al. DS evidential theory on sEMG signal recognition. International Journal of Computing Science and Mathematics 2017;8:138-45.
- Pan MS, Tang JT, Yang XL. An adaptive median filter algorithm based on B-spline function. Int J Autom Comput 2011;8:92-9.
- Liu W, Zhang D, Cui M, et al. An enhanced depth map based rendering method with directional depth filter and image inpainting. Vis Comput 2016;32:579-89.
- Miao W, Li G, Sun Y, et al. Gesture recognition based on sparse representation. International Journal of Wireless and Mobile Computing 2016;11:348-56.
- Wegner F, Both M, Fink R. Automated detection of elementary calcium release events using the à trous wavelet transform. Biophysical Journal 2006;90:2151-63. [Crossref] [PubMed]
- Salvador RA, Naval P. Towards a feasible hand gesture recognition system as sterile non-contact interface in the operating room with 3d convolutional neural network. Informatica 2022;46: [Crossref]
- Grätzel C, Fong T, Grange S, et al. A non-contact mouse for surgeon-computer interaction. Technol Health Care 2004;12:245-57.
- Uba J, Jurewicz KA. A review on development approaches for 3D gestural embodied human-computer interaction systems. Appl Ergon 2024;121:104359. [Crossref] [PubMed]
- Kirmizibayrak C, Radeva N, Wakid M, et al. Evaluation of gesture based interfaces for medical volume visualization tasks. Proceedings of the 10th International Conference on Virtual Reality Continuum and Its Applications in Industry; 2011. doi: 10.1145/2087756.2087764.
- Nestorov N, Hughes P, Healy N, et al. Application of natural user interface devices for touch-free control of radiological images during surgery. 2016 IEEE 29th International Symposium on Computer-Based Medical Systems (CBMS), Belfast and Dublin, Ireland, 2016:229-34. doi: 10.1109/CBMS.2016.20.
- Kang H, Lee CW, Jung K. Recognition-based gesture spotting in video games. Pattern Recognition Letters 2004;25:1701-14.
- Kostic Z, Dumas C, Pratt S, et al. Exploring Mid-Air Hand Interaction in Data Visualization. IEEE Trans Vis Comput Graph 2024;30:6347-64. [Crossref] [PubMed]
- Jacob M, Cange C, Packer R, et al. Intention, context and gesture recognition for sterile MRI navigation in the operating room. In: Alvarez L, Mejail M, Gomez L, et al. editors. Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications. CIARP 2012. Lecture Notes in Computer Science, vol 7441. Springer, Berlin, Heidelberg. doi: 10.1007/978-3-642-33275-3_27.
- Jacob MG, Wachs JP, Packer RA. Hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images. J Am Med Inform Assoc 2013;20:e183-6. [Crossref] [PubMed]
- Ebert LC, Hatch G, Ampanozi G, et al. You can't touch this: touch-free navigation through radiological images. Surg Innov 2012;19:301-7. [Crossref] [PubMed]


