Original Article

Detection of malignant lung tumors using stimulated Raman histology and convolutional neural networks

Karl-Moritz Schröder1#, Andreas Weber1,2#, Marlene Schmid1, Mohamed Hassan3, Uyen-Thao Le3, Severin Schmid3, Bernward Passlick3, Martin Werner1,4, Birte Ohm3*, Peter Bronsert1,5*

1Institute for Surgical Pathology, Medical Center, University of Freiburg, Freiburg, Germany; 2Faculty of Biology, University of Freiburg, Freiburg, Germany; 3Department of Thoracic Surgery, Medical Center, University of Freiburg, Freiburg, Germany; 4Tumorbank Comprehensive Cancer Center Freiburg, Medical Center, University of Freiburg, Freiburg, Germany; 5Core Facility for Histopathology and Digital Pathology, Medical Center, University of Freiburg, Freiburg, Germany

Contributions: (I) Conception and design: P Bronsert, B Ohm; (II) Administrative support: P Bronsert, M Werner, B Passlick; (III) Provision of study materials or patients: M Schmid, M Hassan, UT Le, S Schmid, B Passlick; (IV) Collection and assembly of data: KM Schröder, A Weber, M Hassan, UT Le, S Schmid, B Passlick; (V) Data analysis and interpretation: KM Schröder, A Weber; (VI) Manuscript writing: All authors; (VII) Final approval of manuscript: All authors.

#These authors contributed equally to this work as co-first authors.

*These authors contributed equally to this work as co-senior authors.

Correspondence to: Peter Bronsert, MD. Institute for Surgical Pathology, Medical Center University Freiburg, Breisacher Straße 115A, 79106 Freiburg, Germany; Core Facility for Histopathology and Digital Pathology, Medical Center, University of Freiburg, Freiburg, Germany. Email: peter.bronsert@uniklinik-freiburg.de.

Background: Patients with lung tumors often receive their histopathological diagnosis intraoperatively, based on hematoxylin and eosin-stained frozen sections. However, this approach is time- and labour-intensive. Intraoperative stimulated Raman histology (SRH) may bypass traditional histopathologic processing, as it leverages stimulated Raman scattering (SRS) of photons at molecules in fresh tissue samples to generate histologic images. Automated image analysis using convolutional neural networks (CNNs) could further accelerate intraoperative histopathological diagnosis. This study aimed to investigate CNN-based detection of lung cancer and the ability to distinguish between histologic subtypes, primary lung tumors, and pulmonary metastases.

Methods: A total of 459 fresh frozen tissue samples were obtained from 133 patients undergoing lung resection for intrapulmonary malignancies. SRS and SRH images were acquired, images were annotated, and three CNNs were trained and evaluated on both SRS and SRH images.

Results: When distinguishing between intrapulmonary malignancy and normal lung tissue, VGG19 achieved a balanced accuracy of 0.89 on SRS images (0.95 on SRH images), ResNet50 achieved 0.87 (0.89), and Inception-ResNet-v2 achieved 0.91 (0.94). On SRS images, Inception-ResNet-v2 (0.91) showed the best results, followed by VGG19 (0.89). All networks achieved a higher balanced accuracy on SRH images than on SRS images. Neither a distinction between primary lung cancer and metastases nor between squamous cell carcinoma and adenocarcinoma was achieved.

Conclusions: The results of this study demonstrate the ability of CNNs to identify malignant tumors of the lung on SRS and SRH images. A distinction between different World Health Organization (WHO) subtypes of primary lung cancer and between primary lung cancer and metastases was inaccurate in our dataset.

Keywords: Lung cancer; deep learning; convolutional neural networks (CNNs); stimulated Raman histology (SRH)


Submitted Nov 16, 2024. Accepted for publication Apr 24, 2025. Published online Sep 26, 2025.

doi: 10.21037/jtd-2024-1928


Highlight box

Key findings

• VGG19 achieved the highest balanced accuracy on stimulated Raman histology (SRH) images (0.95) when detecting malignant lung tumors vs. normal tissue.

• Differentiation between malignant tumor subtypes (squamous cell carcinoma vs. adenocarcinoma vs. metastasis) was not possible.

What is known and what is new?

• Several studies emphasize the value of SRH as a diagnostic tool.

• The application of convolutional neural networks and SRH allows for the detection of lung carcinoma.

What is the implication, and what should change now?

• The potential of SRH combined with deep learning as a promising alternative for intraoperative diagnosis of the presence of malignant lung tumors is underscored by our findings.

• A broader range of Raman shifts and a larger dataset (especially for metastases) may be necessary to distinguish primary from secondary lung tumors.


Introduction

Lung cancer is the leading cause of cancer-related deaths worldwide (1). Advances in screening with computed tomography have led to an increased rate of detection of intrapulmonary malignant lesions (2). These lesions can either be primary lung cancers or pulmonary metastases. According to the World Health Organization (WHO) classification, primary lung cancers are categorized into adenocarcinoma (ADC), squamous cell carcinoma (SCC), small cell lung carcinoma (SCLC), and large cell carcinoma (3). Pulmonary metastasis can originate from a variety of extrapulmonary tumors, such as ADC from the gastrointestinal tract, pancreatico-biliary system, mammary gland, thyroid, or testes. SCC metastasis often originates from primary head and neck tumors or from the esophagus. Furthermore, clear-cell carcinoma metastasis from renal primary tumors, as well as metastasis of non-epithelial tumors like sarcoma and malignant melanoma are frequently encountered in the lung (4).

Often, patients receive their histopathologic diagnosis intraoperatively. In this setting, intraoperative frozen section represents the state-of-the-art diagnostic tool to guide surgical decision-making (5). Frozen section is a complex process, requiring a highly equipped laboratory space and trained, experienced technicians and pathologists.

Distinguishing between benign lesions, primary lung cancer subtypes, and pulmonary metastasis has therapeutic consequences and may result in different surgical treatment regimens (4). Rapid intraoperative determination of the histopathological origin of pulmonary lesions can thus help guide surgical decision-making, may reduce operating times, and save resources in the histopathology department.

Intraoperative stimulated Raman histology (SRH) represents an alternative to traditional frozen sections. It rapidly provides a label-free high-resolution image without requiring prior time-consuming tissue processing. Furthermore, the technique may provide further molecular insight that may even surpass the information offered by frozen sections.

SRH leverages stimulated Raman scattering (SRS) of photons with an energy shift of certain wavenumbers at biomolecules such as DNA, proteins or lipids to depict cellular structures of a tissue sample. Using the NIO Laser Imaging System (Invenio Imaging Inc., Santa Clara, CA, USA), the obtained SRS images are converted with a special look-up table into images resembling hematoxylin and eosin (H&E)-stained tissue sections but without the need for sectioning and staining. These SRH images can then be analyzed and assessed by pathologists.
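The conversion from two-channel SRS data to an H&E-like image can be illustrated with a minimal sketch. The vendor's actual look-up table is proprietary, so the color vectors and blending scheme below are purely illustrative assumptions: the CH3−CH2 "nuclear" contrast is mapped toward hematoxylin blue-purple and the CH2 "cytoplasm/lipid" contrast toward eosin pink.

```python
import numpy as np

# Hypothetical stain colors (RGB in [0, 1]); the real system uses a
# vendor-specific look-up table with different values.
HEMATOXYLIN = np.array([0.30, 0.20, 0.55])
EOSIN = np.array([0.95, 0.50, 0.60])

def virtual_he(ch2, ch3):
    """Map two normalized SRS channels to an H&E-like RGB image.

    ch2, ch3: 2-D arrays in [0, 1] for the 2,845 and 2,930 cm^-1 shifts.
    """
    nuclear = np.clip(ch3 - ch2, 0, 1)[..., None]     # DNA/protein-rich areas
    cytoplasm = np.clip(ch2, 0, 1)[..., None]         # lipid/cytoplasm signal
    rgb = np.ones(ch2.shape + (3,))                   # white background
    rgb = rgb * (1 - nuclear) + nuclear * HEMATOXYLIN # blend toward hematoxylin
    rgb = rgb * (1 - 0.6 * cytoplasm) + 0.6 * cytoplasm * EOSIN  # toward eosin
    return np.clip(rgb, 0, 1)
```

Pixels with no signal stay white, strong CH3−CH2 differences appear hematoxylin-like, and strong CH2 signal appears eosin-like, mimicking the visual logic, though not the exact mapping, of the NIO system.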

Deep learning models are widely used for the classification of images of lung cancer samples. For whole-slide images (WSI), Coudray et al. trained a convolutional neural network (CNN) to distinguish between ADC, SCC, and normal lung tissue (6). Even series of CT scans acquired at different points in time have been used to predict lung cancer treatment response (7). In the broader context of Raman spectroscopy, Weng et al. used coherent anti-Stokes Raman spectroscopy (CARS) to classify images of normal lung, small cell carcinoma, ADC, and SCC with a CNN; the generated CARS signals are stronger than conventional Raman scattering, enabling faster and more sensitive imaging (8). In contrast to these studies, we included not only lung carcinoma but also metastases to the lung.

Nevertheless, SRH had not yet been used on lung tissue to distinguish between normal tissue and cancer, nor to subclassify the WHO cancer type. In this study, we evaluate the utility of different CNNs for the identification of malignant lesions within the lung on SRS and SRH images of lung tissue. We present this article in accordance with the TRIPOD reporting checklist (available at https://jtd.amegroups.com/article/view/10.21037/jtd-2024-1928/rc).


Methods

Study protocol and sample acquisition

The study was conducted in accordance with the Declaration of Helsinki and its subsequent amendments. This study received ethical approval from the Ethics Committee of Medical Center, University of Freiburg (No. 22-1322_2-S1). Patient consent was obtained before study inclusion. The inclusion criteria comprised individuals of legal age (>18 years) with confirmed primary lung cancers or pulmonary metastases, without prior neoadjuvant therapy, and with an indication for surgical resection. All surgical procedures were performed at the Department for Thoracic Surgery, Medical Center University Freiburg. Subsequent histopathological confirmation was performed at the Department for Surgical Pathology, Medical Center University Freiburg.

Cohort characteristics

In total, 133 patients were included. Relevant diagnoses were non-small cell lung cancer (NSCLC) and metastasis. Additionally, NSCLC was subclassified into ADC and SCC. All diagnoses were verified by immunohistochemistry.

Primary lung cancer

In total, fresh frozen tissue specimens (tumoral and non-tumoral) derived from 86 patients suffering from NSCLC were included in the study. Out of these, 45 patients were diagnosed with ADC and 41 with SCC.

Lung metastases

In addition, fresh frozen tissue specimens (tumor and, if present, non-tumor) derived from 30 patients were included in the study. Out of these, 12 patients (40.0%) were diagnosed with colorectal lung metastases, 2 patients (6.7%) with breast cancer, 3 patients (10.0%) with malignant melanoma, 3 patients (10.0%) with lung metastases originating from clear cell renal cell carcinoma, and 10 patients (33.3%) with metastases from other primary sites.

SRH image acquisition

SRH images were acquired following published methods (9). Tissue specimens (0.4 cm × 0.4 cm × 0.2 cm) were obtained from tumor-suspected and non-neoplastic areas, identified by pathologists. Specimens were placed on custom slides, and multiple line scans were performed using the NIO Laser Imaging System, with a pixel size of 472 nm and a depth of 10 µm. The system measures Raman shifts at 2,845 and 2,930 cm−1 to generate two-channel SRS images, which are processed into SRH images resembling H&E-stained slides using vendor-specific software (NIO Laser Imaging System software version 1.6.0).

Histopathological evaluation of SRH images

For annotations, all SRH images were transferred to QuPath (Version 0.4.3) (10).

Tissue specimens with histopathologically confirmed malignancy (NSCLC or metastases) were selected, and the tumors were annotated for each image (Figure 1). The tissue was differentiated between tumor and non-tumor (fat, stroma, necrosis, normal lung parenchyma). All annotations were transcribed into the GeoJSON format to be applied to the CNN.

Figure 1 Different tumor types (primary adenocarcinoma, squamous cell carcinoma, and metastasis) in comparison. SRH without annotation (first row), SRH with annotation (red marked area displays tumor) (second row), and conventional frozen-section H&E (third row). H&E, hematoxylin and eosin; SRH, stimulated Raman histology.

Data set generation

In total, we analyzed 459 images from 133 patients. Thereof, 240 images were classified as primary NSCLC and 83 images were classified as metastasis. SRS images consist of pixel arrays comprising two channels, each designated for storing the scattering values corresponding to the two above-mentioned energy shifts. The first two channels of an empty array were filled with scattering values representing the CH2 and CH3 bonds, while the third channel was populated with the spectral difference CH3−CH2 for each pixel (11). SRS and SRH pixel values were scaled to the range [0, 1]. Subsequently, the SRS and SRH images were subdivided into tiles measuring 250×250 pixels. A tile was labeled if at least 99% of its area overlapped with an annotated region; this threshold ensures that tiles consist predominantly of labeled pixels. The final data set comprised 26,460 tiles from 459 images, with 11,495 tiles labeled as tumor and 14,965 tiles labeled as no-tumor. For the subclassification between NSCLC and metastasis, as well as SCC and ADC, the data set comprised 6,251 tiles labeled as NSCLC (comprising 3,331 tiles labeled as SCC and 2,920 tiles labeled as ADC) and 2,450 tiles labeled as metastasis. Figure 2 provides an example of an SRH image and the corresponding SRS image together with annotations and tiles (the contrast of the SRS image was enhanced using adaptive histogram equalization from scikit-image (12) with a clipping limit of 0.03; contrast enhancement was not part of the preprocessing of the SRS images and was performed only for better visualization).

Figure 2 SRH image with corresponding SRS image with annotations and locations of labeled tiles. (A) SRH image as created by the NIO system using a vendor-specific look-up table. (B) SRH image with annotations “Tumor” and “no Tumor”. (C) Location of labeled tiles overlapping at least to 99% with an annotation on SRH image. (D) SRS image corresponding to SRH image depicting CH2 channel with enhanced contrast using adaptive histogram equalization and a viridis look-up table. (E) SRS image with annotations “Tumor” and “no Tumor”. (F) Location of labeled tiles overlapping at least to 99% with an annotation on SRS image. Using the NIO Laser Imaging System (Invenio Imaging Inc., Santa Clara, CA, USA), the obtained SRS images are converted with a special look-up table into images resembling H&E-stained tissue sections but without the need for sectioning and staining. H&E, hematoxylin and eosin; SRH, stimulated Raman histology; SRS, stimulated Raman scattering.
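The channel stacking and tile-labeling steps described above can be sketched as follows. The function names and the mask encoding (0 = unlabeled, 1 = no-tumor, 2 = tumor) are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def srs_to_three_channels(ch2, ch3):
    """Stack the CH2 and CH3 scattering values and their spectral
    difference CH3-CH2, then scale the result to the range [0, 1]."""
    img = np.stack([ch2, ch3, ch3 - ch2], axis=-1).astype(np.float32)
    img -= img.min()
    peak = img.max()
    return img / peak if peak > 0 else img

def extract_labeled_tiles(image, mask, tile=250, overlap=0.99):
    """Cut non-overlapping tile x tile patches; keep a patch only if at
    least `overlap` of its area lies inside an annotated region.
    Mask encoding (an assumption): 0 = unlabeled, 1 = no-tumor, 2 = tumor."""
    h, w = mask.shape
    tiles = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            m = mask[y:y + tile, x:x + tile]
            annotated = m > 0
            if annotated.mean() >= overlap:
                # assign the dominant annotation label within the patch
                label = int(np.bincount(m[annotated]).argmax())
                tiles.append((image[y:y + tile, x:x + tile], label))
    return tiles
```

With the 99% threshold, a patch straddling the border between an annotation and unlabeled background is discarded rather than given a noisy label.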

Data split and class distributions

The dataset was split, with 80% allocated to the training set and 20% to the test set, based on class proportions rather than image count. Within the training set, 10% was used as a validation set for hyperparameter tuning. The class distributions for the training, validation, test sets, and the entire dataset are detailed in Table 1.

Table 1

No. of images and tiles for each class in the training, validation and test set

Category Total Training set Validation set Test set
Tumoral vs. non-tumoral
   Number of images 459 346 38 75
   Number of non-tumoral tiles (%) 14,965 (56.6) 10,780 (56.6) 1,188 (56.6) 2,997 (56.4)
   Number of tumoral tiles (%) 11,495 (43.4) 8,270 (43.4) 912 (43.4) 2,313 (43.6)
Squamous cell carcinoma vs. adenocarcinoma
   Number of images 323 267 23 33
   Number of squamous cell carcinoma tiles (%) 3,331 (53.3) 2,449 (54.2) 253 (51.4) 629 (50.6)
   Number of adenocarcinoma tiles (%) 2,920 (46.7) 2,066 (45.8) 239 (48.6) 615 (49.4)
Primary NSCLC vs. metastasis
   Number of images 317 124 135 58
   Number of primary NSCLC tiles (%) 6,251 (71.8) 4,425 (71.8) 581 (72.2) 1,245 (71.7)
   Number of metastasis tiles (%) 2,450 (28.2) 1,735 (28.2) 224 (27.8) 491 (28.3)

The percentages behind the number of tiles denote the relative proportion of the class within the respective subset. NSCLC, non-small cell lung cancer.
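A split that preserves class proportions, as used above, can be sketched as follows; the function name and seed handling are assumptions for illustration.

```python
import random

def stratified_split(items, frac=0.8, seed=0):
    """Split (item, label) pairs so that each class contributes ~frac of
    its members to the first subset, preserving class proportions."""
    rng = random.Random(seed)
    by_class = {}
    for item, label in items:
        by_class.setdefault(label, []).append(item)
    first, second = [], []
    for label, members in by_class.items():
        rng.shuffle(members)
        cut = round(len(members) * frac)
        first += [(m, label) for m in members[:cut]]
        second += [(m, label) for m in members[cut:]]
    return first, second
```

Applying the same function twice (80/20 for train/test, then 90/10 within the training set for validation) reproduces the three subsets with near-identical class distributions, as seen in Table 1.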

Deep learning-based evaluation of images

Three different architectures of CNNs were employed: VGG19, ResNet50 (13) and Inception-ResNet-v2. All weights were randomly initialized, and a data augmentation layer was added on top of each CNN for random horizontal and vertical flipping. Hyperparameters such as learning rate and batch size were optimized using the validation set. A grid search with learning rates of 0.001 and 0.0001 and batch sizes of 30 and 100 yielded an optimal performance with a learning rate of 0.0001 and a batch size of 30. The slight class imbalance between no-tumor and tumor tiles was addressed by weighting the loss function according to the class distribution of the training set. The CNNs were trained for 100 epochs, and all computations were performed using Python 3.9.16 and TensorFlow 2.6.0 (14) on an NVIDIA GeForce RTX 4090.
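The loss weighting described above can be illustrated with a small sketch. The inverse-frequency scheme shown here is one common choice and an assumption, since the exact formula is not stated; the tile counts come from Table 1.

```python
import numpy as np

def class_weights(counts):
    """Inverse-frequency weights: a perfectly balanced data set yields
    weight 1.0 for every class; the minority class gets a weight > 1."""
    total = sum(counts.values())
    k = len(counts)
    return {c: total / (k * n) for c, n in counts.items()}

def weighted_cross_entropy(probs, labels, weights):
    """Mean cross-entropy with each sample scaled by its class weight."""
    w = np.array([weights[y] for y in labels])
    p = probs[np.arange(len(labels)), labels]
    return float(np.mean(-w * np.log(np.clip(p, 1e-12, None))))

# Training-set distribution from Table 1: 10,780 no-tumor vs. 8,270 tumor tiles.
weights = class_weights({0: 10_780, 1: 8_270})
```

In Keras, such a dictionary can be passed directly as `class_weight` to `model.fit`, which scales each sample's loss contribution so that errors on the minority (tumor) class count slightly more.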

Statistical analysis

The neural networks’ performance on unseen data was evaluated using precision, recall, F1-score, and balanced accuracy on the test set. The F1-score is calculated as: F1 = 2 × (precision × recall)/(precision + recall).

The confusion matrix, which illustrates the performance of an algorithm, shows the counts of predicted versus true classes and complements the class-specific insight expressed by the F1-score. Additionally, receiver operating characteristic (ROC) curves were used to display the relationship between the model's true positive rate and false positive rate. The area under the curve (AUC) served as a quality metric to assess class separability after training. Both ROC and AUC were used to assess discrimination and compare models.
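For reference, the reported metrics can all be derived from a confusion matrix as follows (a minimal sketch; balanced accuracy is the mean of the per-class recalls):

```python
import numpy as np

def metrics_from_confusion(cm):
    """Per-class precision, recall, F1, and balanced accuracy from a
    confusion matrix cm, where cm[i, j] counts true class i predicted as j."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)        # column sums: predicted counts
    recall = tp / cm.sum(axis=1)           # row sums: true counts
    f1 = 2 * precision * recall / (precision + recall)
    balanced_accuracy = recall.mean()      # mean per-class recall
    return precision, recall, f1, balanced_accuracy
```

Because balanced accuracy averages the per-class recalls, it is insensitive to the moderate class imbalance in the data set, unlike plain accuracy.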


Results

Classification performance of tumoral and non-tumoral tissue

On SRS images, Inception-ResNet-v2 showed the best results with a balanced accuracy of 0.91, followed by VGG19 with a balanced accuracy of 0.89. On SRH images, VGG19 showed the best results with a balanced accuracy of 0.95, followed by Inception-ResNet-v2 with a balanced accuracy of 0.94. Regarding the aggregated performance metrics F1-score and balanced accuracy, all networks performed better on SRH images than on SRS images. All performance metrics on SRS (and SRH) images can be found in Table 2.

Table 2

Performance metrics of the CNNs on the test set on SRS images (SRH images)

Convolutional neural network Precision Recall F1-score Balanced accuracy
Tumoral vs. non-tumoral
VGG19 0.89 (0.95)
   Non-tumoral 0.90 (0.97) 0.91 (0.93) 0.91 (0.95)
   Tumoral 0.88 (0.92) 0.87 (0.97) 0.88 (0.94)
ResNet50 0.87 (0.89)
   Non-tumoral 0.88 (0.94) 0.90 (0.85) 0.89 (0.90)
   Tumoral 0.87 (0.83) 0.84 (0.93) 0.85 (0.88)
Inception-ResNet-v2 0.91 (0.94)
   Non-tumoral 0.95 (0.95) 0.89 (0.95) 0.92 (0.95)
   Tumoral 0.86 (0.94) 0.94 (0.93) 0.90 (0.93)
Squamous cell carcinoma vs. adenocarcinoma
VGG19 0.50 (0.50)
   Squamous cell carcinoma 0.00 (0.00) 0.00 (0.00) 0.00 (0.00)
   Adenocarcinoma 0.49 (0.49) 1.00 (1.00) 0.66 (0.66)
ResNet50 0.52 (0.53)
   Squamous cell carcinoma 0.52 (0.92) 0.68 (0.06) 0.59 (0.10)
   Adenocarcinoma 0.53 (0.51) 0.36 (1.00) 0.43 (0.67)
Inception-ResNet-v2 0.58 (0.62)
   Squamous cell carcinoma 0.55 (0.64) 0.91 (0.57) 0.69 (0.61)
   Adenocarcinoma 0.74 (0.61) 0.25 (0.68) 0.37 (0.64)
Primary NSCLC vs. metastasis
VGG19 0.50 (0.50)
   Primary NSCLC 0.00 (0.00) 0.00 (0.00) 0.00 (0.00)
   Metastasis 0.28 (0.28) 1.00 (1.00) 0.44 (0.44)
ResNet50 0.37 (0.52)
   Primary NSCLC 0.64 (0.73) 0.55 (0.70) 0.59 (0.72)
   Metastasis 0.14 (0.30) 0.19 (0.33) 0.16 (0.31)
Inception-ResNet-v2 0.54 (0.54)
   Primary NSCLC 0.77 (0.75) 0.38 (0.74) 0.51 (0.74)
   Metastasis 0.30 (0.34) 0.70 (0.35) 0.42 (0.34)

CNN, convolutional neural network; NSCLC, non-small cell lung cancer; SRH, stimulated Raman histology; SRS, stimulated Raman scattering.

The confusion matrices for VGG19 (Figure 3, top) show that the number of false positives (upper right corner) is approximately 23% lower for SRH images compared to SRS images while the number of false negatives (lower left corner) is approximately 75% lower. The confusion matrices for ResNet50 (Figure 3, middle) show that the number of false positives is approximately 47% higher for SRH images compared to SRS images while the number of false negatives is approximately 60% lower. The confusion matrices for Inception-ResNet-v2 (Figure 3, bottom) show that the number of false positives is approximately 57% lower for SRH images compared to SRS images while the number of false negatives is approximately 20% higher.

Figure 3 Confusion matrices of VGG19, ResNet50 and Inception-ResNet-v2 on SRS images (left) and SRH images (right). SRH, stimulated Raman histology; SRS, stimulated Raman scattering.

ROC curves and the respective AUC values (Figure 4) show a high separability of the two classes, tumoral and non-tumoral, for all networks, with a consistent, slightly better separability for SRH images compared to SRS images.

Figure 4 ROC curves for VGG19, ResNet50 and Inception-ResNet-v2 for SRS images (red) and SRH images (blue) with AUC values. AUC, area under the curve; ROC, receiver operating characteristic; SRH, stimulated Raman histology; SRS, stimulated Raman scattering.

To summarize, the number of incorrect predictions is lower on SRH images than on SRS images in almost every case. The exceptions are ResNet50, with notably more false positives on SRH images, and Inception-ResNet-v2, with slightly more false negatives on SRH images.

Classification performance of squamous cell carcinoma and adenocarcinoma

Subclassification of tumor tiles into squamous cell carcinoma and adenocarcinoma could not be achieved with the neural networks. VGG19, with a balanced accuracy of 0.50 on both SRS and SRH images, performed equal to a classifier that randomly predicts the subtypes. ResNet50, with a balanced accuracy of 0.52 on SRS and 0.53 on SRH images, did not perform notably better. Inception-ResNet-v2 showed the best overall performance with a balanced accuracy of 0.58 on SRS and 0.62 on SRH images, which is slightly better than random. Detailed performance metrics such as precision, recall, and F1-score can be found in Table 2.

Classification performance of primary tumor and metastasis

Classification of tumor tiles into primary tumor and metastasis could not be effectively achieved with the neural networks. VGG19 showed a balanced accuracy of 0.50 on both SRS and SRH images. ResNet50 achieved a balanced accuracy of 0.37 on SRS images, below chance level, and 0.52 on SRH images. Inception-ResNet-v2 showed a balanced accuracy of 0.54 on both SRS and SRH images. Detailed performance metrics such as precision, recall, and F1-score can be found in Table 2.


Discussion

This study compared the performance of three CNNs (VGG19, ResNet50, and Inception-ResNet-v2) on SRS and SRH images of intrapulmonary malignant lesions to distinguish between tumoral and non-tumoral tissue. All three networks performed better on SRH images than on SRS images; the highest balanced accuracy was achieved by VGG19 on SRH images (0.95) and the lowest by ResNet50 on SRS images (0.87).

In a diagnostic meta-analysis, Ke et al. summarized the efficacy of Raman spectroscopy in lung cancer across diagnostic studies published before 1 June 2020. A total pooled sensitivity of 0.92 and specificity of 0.94 indicate that Raman spectroscopy is a valid diagnostic tool for detecting lung cancer (15).

Another previous study combining SRH and deep learning used an Inception-ResNet-v2 architecture for predicting tumors of the central nervous system (11). Hollon et al. trained the network on 2.5 million tiles from 415 patients in order to predict 13 diagnostic classes and achieved an overall accuracy of 94.6% on SRH images (11). Although trained on a data set two orders of magnitude smaller than that used by Hollon et al., our VGG19 achieved a balanced accuracy of 0.95 on SRH images and our Inception-ResNet-v2 a balanced accuracy of 0.94, both comparable to the performance of the Inception-ResNet-v2 used by Hollon et al.

For detecting laryngeal squamous cell carcinoma, Zhang et al. (16) employed a ResNet34 model and trained it on 18,750 tiles from SRS images labeled as “normal” or “neoplasia”. This network achieved an accuracy of 100% on 33 independent specimens, while its diagnostic capacity is stated as an accuracy of 90% on the 80 SRS images included in the study. The size of this training set is comparable to that of our study, and the balanced accuracies of the neural networks on SRS images presented here, between 0.87 and 0.91, are comparable to the diagnostic accuracy reported by Zhang et al.

The slight outperformance of a VGG19-based network trained on SRS images of oral squamous cell carcinoma over the same network trained on SRH images, as reported previously, could not be observed in our study. F1-scores for each class, as well as balanced accuracy scores, were higher when the CNNs were trained on SRH images compared to SRS images.

While the distinction between tumor and no-tumor yielded promising results, a further subclassification of tumorous tiles regarding histological subtype could not be achieved. A reason for this could be the small size of the data set. Our data set comprised 3,331 tiles with squamous cell carcinoma, 2,920 tiles with adenocarcinoma, 6,251 tiles with primary tumor and 2,450 tiles with metastasis. Even under the assumption that the relevant information for predicting these subtypes is present in the SRS and SRH images, the number of examples is likely too low for the neural networks to approximate the function between input and output. Additionally, no full Raman spectra were analyzed but only two discrete Raman shifts. In addition, our access to (fresh frozen) tissue specimens was limited, as patients with lung metastases are generally less likely to undergo surgical treatment than those with primary lung cancer. The performance of Inception-ResNet-v2 on SRH images for distinguishing between squamous cell carcinoma and adenocarcinoma (balanced accuracy: 0.62) can be cautiously seen as an indication that neural networks could, in principle, approximate this function.

One notable benefit of SRH is its stable contrast and intensities compared to traditional staining methods like H&E, where variations induced by lab procedures can alter the resulting images. With SRH, both the technique and the resulting images remain consistent, facilitating automated processing and analysis.


Conclusions

The potential of SRH combined with deep learning as a promising alternative for intraoperative diagnosis of the presence of malignant lung tumors is underscored by our findings. The significant improvement in performance on SRH images compared to SRS images highlights the advantages of SRH in maintaining consistent image quality. Despite the promising results, a subclassification between primary lung cancer and metastases or between squamous cell carcinoma and adenocarcinoma was not achieved in our study. Nevertheless, our results underscore the potential clinical utility of the technique for rapid intraoperative decision-making when resecting pulmonary nodules.


Acknowledgments

The authors would like to acknowledge Florian Khalid for his valuable technical assistance.


Footnote

Reporting Checklist: The authors have completed the TRIPOD reporting checklist. Available at https://jtd.amegroups.com/article/view/10.21037/jtd-2024-1928/rc

Data Sharing Statement: Available at https://jtd.amegroups.com/article/view/10.21037/jtd-2024-1928/dss

Peer Review File: Available at https://jtd.amegroups.com/article/view/10.21037/jtd-2024-1928/prf

Funding: This work was supported by the German Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF) (No. 13GW0571D, to A.W.).

Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://jtd.amegroups.com/article/view/10.21037/jtd-2024-1928/coif). A.W. reports that this work was supported by the German Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF) (No. 13GW0571D). The other authors have no conflicts of interest to declare.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. The study was conducted in accordance with the Declaration of Helsinki and its subsequent amendments. This study was approved by the Ethics Committee of Medical Center, University of Freiburg (No. 22-1322_2-S1). Patient consent was obtained before study inclusion.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. IARC. Absolute numbers and mortality, both sexes, world, 2022. Available online: https://gco.iarc.fr/today/en/dataviz/bars?mode=cancer&cancers=15&populations=900&group_populations=1&key=total&multiple_populations=1&types=0_1&cancers_h=15&sort_by=value1 (accessed 24 April 2024).
  2. Hoffman RM, Atallah RP, Struble RD, et al. Lung Cancer Screening with Low-Dose CT: a Meta-Analysis. J Gen Intern Med 2020;35:3015-25. [Crossref] [PubMed]
  3. WHO Classification of Tumours Editorial Board. Thoracic tumours. Lyon (France): International Agency for Research on Cancer; 2021 [cited 24.04.2024].
  4. Krämer S, Bläker H, Denecke T, et al. Lungenmetastasen – Onkologische Bedeutung und Therapie. Onkologie 2023;29:202-12.
  5. Marchevsky AM, Changsri C, Gupta I, et al. Frozen section diagnoses of small pulmonary nodules: accuracy and clinical implications. Ann Thorac Surg 2004;78:1755-9. [Crossref] [PubMed]
  6. Coudray N, Ocampo PS, Sakellaropoulos T, et al. Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning. Nat Med 2018;24:1559-67. [Crossref] [PubMed]
  7. Xu Y, Hosny A, Zeleznik R, et al. Deep Learning Predicts Lung Cancer Treatment Response from Serial Medical Imaging. Clin Cancer Res 2019;25:3266-75. [Crossref] [PubMed]
  8. Weng S, Xu X, Li J, et al. Combining deep learning and coherent anti-Stokes Raman scattering imaging for automated differential diagnosis of lung cancer. J Biomed Opt 2017;22:1-10. [Crossref] [PubMed]
  9. Steybe D, Poxleitner P, Metzger MC, et al. Stimulated Raman histology for histological evaluation of oral squamous cell carcinoma. Clin Oral Investig 2023;27:4705-13. [Crossref] [PubMed]
  10. Bankhead P, Loughrey MB, Fernández JA, et al. QuPath: Open source software for digital pathology image analysis. Sci Rep 2017;7:16878. [Crossref] [PubMed]
  11. Hollon TC, Pandian B, Adapa AR, et al. Near real-time intraoperative brain tumor diagnosis using stimulated Raman histology and deep neural networks. Nat Med 2020;26:52-8. [Crossref] [PubMed]
  12. van der Walt S, Schönberger JL, Nunez-Iglesias J, et al. scikit-image: image processing in Python. PeerJ 2014;2:e453. [Crossref] [PubMed]
  13. He K, Zhang X, Ren S, et al. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Trans Pattern Anal Mach Intell 2015;37:1904-16. [Crossref] [PubMed]
  14. Abadi M, Agarwal A, Barham P, et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems (Preliminary White Paper, November 9, 2015), Google Research. Available online: http://download.tensorflow.org/paper/whitepaper2015.pdf
  15. Ke ZY, Ning YJ, Jiang ZF, et al. The efficacy of Raman spectroscopy in lung cancer diagnosis: the first diagnostic meta-analysis. Lasers Med Sci 2022;37:425-34. [Crossref] [PubMed]
  16. Zhang L, Wu Y, Zheng B, et al. Rapid histology of laryngeal squamous cell carcinoma with deep-learning based stimulated Raman scattering microscopy. Theranostics 2019;9:2541-54. [Crossref] [PubMed]
Cite this article as: Schröder KM, Weber A, Schmid M, Hassan M, Le UT, Schmid S, Passlick B, Werner M, Ohm B, Bronsert P. Detection of malignant lung tumors using stimulated Raman histology and convolutional neural networks. J Thorac Dis 2025;17(9):6815-6825. doi: 10.21037/jtd-2024-1928
