
S<sup>3</sup>TU-Net: Structured convolution and superpixel transformer for lung nodule segmentation.

Wu Y, Liu X, Shi Y, Chen X, Wang Z, Xu Y, Wang S

pubmed · Aug 20 2025
Accurate segmentation of lung adenocarcinoma nodules in computed tomography (CT) images is critical for clinical staging and diagnosis. However, irregular nodule shapes and ambiguous boundaries pose significant challenges for existing methods. This study introduces S<sup>3</sup>TU-Net, a hybrid CNN-Transformer architecture designed to enhance feature extraction, fusion, and global context modeling. The model integrates three key innovations: (1) structured convolution blocks (DWF-Conv/D<sup>2</sup>BR-Conv) for multi-scale feature extraction and overfitting mitigation; (2) S<sup>2</sup>-MLP Link, a spatial-shift-enhanced skip-connection module to improve multi-level feature fusion; and (3) a residual-based superpixel vision transformer (RM-SViT) to capture long-range dependencies efficiently. Evaluated on the LIDC-IDRI dataset, S<sup>3</sup>TU-Net achieves a Dice score of 89.04%, precision of 90.73%, and IoU of 90.70%, outperforming recent methods by 4.52% in Dice. Validation on the EPDB dataset further confirms its generalizability (Dice, 86.40%). This work bridges the gap between local feature sensitivity and global context awareness by integrating structured convolutions and superpixel-based transformers, offering a robust tool for clinical decision support.
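As an illustration of the spatial-shift idea behind the S<sup>2</sup>-MLP Link, the sketch below implements the parameter-free shift operation from the original S<sup>2</sup>-MLP work; the exact shift pattern and how the link wraps it around skip connections in S<sup>3</sup>TU-Net are assumptions here.

```python
import torch

def spatial_shift(x: torch.Tensor) -> torch.Tensor:
    """Parameter-free spatial shift: four channel groups each move one pixel
    in a different direction, so a following 1x1 MLP can mix neighbouring
    pixels. x: (B, C, H, W) feature map; borders are left zero-padded."""
    out = torch.zeros_like(x)
    g = x.size(1) // 4
    out[:, 0*g:1*g, :, 1:]  = x[:, 0*g:1*g, :, :-1]   # shift right
    out[:, 1*g:2*g, :, :-1] = x[:, 1*g:2*g, :, 1:]    # shift left
    out[:, 2*g:3*g, 1:, :]  = x[:, 2*g:3*g, :-1, :]   # shift down
    out[:, 3*g:, :-1, :]    = x[:, 3*g:, 1:, :]       # shift up
    return out

# Example: a skip-connection feature map from the encoder.
feats = torch.randn(2, 64, 32, 32)
shifted = spatial_shift(feats)   # same shape, spatially mixed
```

Because the shift is parameter-free, it adds spatial mixing to skip connections at negligible computational cost.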

[Preoperative discrimination of colorectal mucinous adenocarcinoma using enhanced CT-based radiomics and deep learning fusion model].

Wang BZ, Zhang X, Wang YL, Wang XY, Wang QG, Luo Z, Xu SL, Huang C

pubmed · Aug 20 2025
<b>Objective:</b> To develop a preoperative model that differentiates colorectal mucinous adenocarcinoma from non-mucinous adenocarcinoma by combining contrast-enhanced CT radiomics and deep learning methods. <b>Methods:</b> This is a retrospective case series study. Clinical data of colorectal cancer patients confirmed by postoperative pathological examination were collected from January 2016 to December 2023 at Shanghai General Hospital Affiliated to Shanghai Jiao Tong University School of Medicine (Center 1, <i>n</i>=220) and the First Affiliated Hospital of Bengbu Medical University (Center 2, <i>n</i>=51). Among them, 108 patients were diagnosed with mucinous adenocarcinoma (55 males and 53 females; age (68.4±12.2) years, range: 38 to 96 years) and 163 with non-mucinous adenocarcinoma (96 males and 67 females; age (67.9±11.0) years, range: 43 to 94 years). The cases from Center 1 were divided into a training set (<i>n</i>=156) and an internal validation set (<i>n</i>=64) by stratified random sampling in a 7:3 ratio, and the cases from Center 2 were used as an independent external validation set (<i>n</i>=51). The three-dimensional tumor volume of interest was manually segmented on venous-phase contrast-enhanced CT images. Radiomics features were extracted using PyRadiomics, and deep learning features were extracted using the ResNet-18 network; the two sets of features were then combined to form a joint feature set. The consistency of manual segmentation was assessed using the intraclass correlation coefficient. Feature dimensionality reduction was performed using the Mann-Whitney <i>U</i> test and least absolute shrinkage and selection operator (LASSO) regression. Six machine learning algorithms were used to construct models based on radiomics features, deep learning features, and combined features: support vector machine, logistic regression, random forest, extreme gradient boosting, k-nearest neighbors, and decision tree. The discriminative performance of each model was evaluated using receiver operating characteristic curves, the area under the curve (AUC), the DeLong test, and decision curve analysis. <b>Results:</b> After feature selection, 22 features with the most discriminative value were retained, of which 12 were traditional radiomics features and 10 were deep learning features. In the internal validation set, the random forest model built on the combined features achieved the best performance (AUC=0.938, 95% <i>CI</i>: 0.875 to 0.984), superior to the single-modality radiomics model (AUC=0.817, 95% <i>CI</i>: 0.702 to 0.913, <i>P</i>=0.048) and the deep learning model (AUC=0.832, 95% <i>CI</i>: 0.727 to 0.926, <i>P</i>=0.087). In the independent external validation set, the random forest model with the combined features maintained the highest discriminative performance (AUC=0.891, 95% <i>CI</i>: 0.791 to 0.969), again superior to the radiomics-only model (AUC=0.770, 95% <i>CI</i>: 0.636 to 0.890, <i>P</i>=0.045) and the deep learning model (AUC=0.799, 95% <i>CI</i>: 0.652 to 0.911, <i>P</i>=0.169). <b>Conclusion:</b> The combined model based on radiomics and deep learning features from venous-phase enhanced CT performs well in the preoperative differentiation of colorectal mucinous from non-mucinous adenocarcinoma.
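A minimal sketch of the kind of fusion pipeline described here — PyRadiomics and ResNet-18 features concatenated, screened with the Mann-Whitney U test, reduced by LASSO, then classified with a random forest — using SciPy and scikit-learn. The feature arrays are assumed precomputed, and all thresholds and hyperparameters are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def build_combined_model(X_rad, X_dl, y, X_rad_val, X_dl_val, y_val):
    """X_rad: radiomics features, X_dl: deep features, both (n_samples, n_feat);
    y: 1 = mucinous adenocarcinoma. Returns the fitted model and validation AUC."""
    X = np.hstack([X_rad, X_dl])
    X_val = np.hstack([X_rad_val, X_dl_val])

    # Step 1: univariate screening with the Mann-Whitney U test (p < 0.05 assumed).
    keep = np.array([mannwhitneyu(X[y == 0, j], X[y == 1, j]).pvalue < 0.05
                     for j in range(X.shape[1])])
    X, X_val = X[:, keep], X_val[:, keep]

    # Step 2: LASSO keeps features with non-zero coefficients.
    lasso = LassoCV(cv=5).fit(X, y)
    nz = lasso.coef_ != 0
    X, X_val = X[:, nz], X_val[:, nz]

    # Step 3: random forest on the combined set (the best-performing model here).
    clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
    auc = roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1])
    return clf, auc
```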

Validation of an artificial intelligence-based automated PRAGMA and mucus plugging algorithm in pediatric cystic fibrosis.

Raut P, Chen Y, Taleb A, Bonte M, Andrinopoulou ER, Ciet P, Charbonnier JP, Wainwright CE, Tiddens H, Caudri D

pubmed · Aug 20 2025
PRAGMA-CF is a clinically validated visual chest CT scoring method that quantifies relevant components of structural airway damage in CF. We aimed to validate a newly developed AI-based automated PRAGMA-AI and mucus plugging algorithm, using visual PRAGMA-CF as the reference. The study included 363 retrospective chest CTs of 178 CF patients (100 New Zealand and Australian, 78 Dutch) with at least one inspiratory CT matching the image selection criteria. Eligible CT scans were analyzed using visual PRAGMA-CF and the automated PRAGMA-AI and mucus plugging algorithm. Outcomes were compared using descriptive statistics, correlation, intra- and interclass correlation, and Bland-Altman plots. Sensitivity analyses evaluated the impact of disease severity, study cohort, number of slices, and convolution kernel (soft vs. hard). The algorithm successfully analyzed 353 (97%) CT scans. A strong correlation between the methods was found for %bronchiectasis (%BE) and %disease (%DIS), but only a weak correlation for %airway wall thickening (%AWT). The automated mucus plugging outcomes showed a strong correlation with visual %mucus plugging (%MP). ICCs between visual and automated sub-scores showed moderate agreement for %BE and %DIS, but weak agreement for %AWT. Sensitivity analyses revealed that the convolution kernel did not affect the correlation between visual and automated outcomes, but harder kernels yielded lower disease scores, especially for %BE and %AWT. Our results show that AI-derived outcomes are not identical to visual PRAGMA-CF scores in magnitude, but are strongly correlated on measures of bronchiectasis, bronchial disease, and mucus plugging. They could therefore be a promising alternative to time-consuming visual scoring, especially in larger studies.
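The agreement statistics reported here — correlation plus Bland-Altman bias and limits of agreement — can be computed for any visual/automated sub-score pair as in this sketch. Spearman correlation and the 1.96-SD limits are standard conventions assumed here, not details taken from the paper.

```python
import numpy as np
from scipy.stats import spearmanr

def agreement(visual, automated):
    """Bland-Altman bias, 95% limits of agreement, and rank correlation
    between visual PRAGMA-CF and automated scores for one sub-score (e.g. %BE)."""
    visual, automated = np.asarray(visual, float), np.asarray(automated, float)
    diff = automated - visual
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)          # half-width of the limits of agreement
    rho, p = spearmanr(visual, automated)  # rank correlation for bounded percentages
    return {"bias": bias, "loa_low": bias - loa, "loa_high": bias + loa,
            "rho": rho, "p": p}
```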

Interpreting convolutional neural network explainability for head-and-neck cancer radiotherapy organ-at-risk segmentation.

Strijbis VIJ, Gurney-Champion OJ, Grama DI, Slotman BJ, Verbakel WFAR

pubmed · Aug 19 2025
Convolutional neural networks (CNNs) have emerged to reduce clinical resources and standardize auto-contouring of organs-at-risk (OARs). Although CNNs perform adequately for most patients, understanding when a CNN might fail is critical for effective and safe clinical deployment. However, the limitations of CNNs are poorly understood because of their black-box nature. Explainable artificial intelligence (XAI) can expose CNNs' inner mechanisms for classification. Here, we investigate the inner mechanisms of CNNs for segmentation and explore a novel computational approach to flag potentially insufficient parotid gland (PG) contours a priori. First, 3D UNets were trained in three PG segmentation situations using (1) synthetic cases; (2) 1925 clinical computed tomography (CT) scans with typical contours; and (3) more consistent contours curated through a previously validated auto-curation step. Then, we generated attribution maps for seven XAI methods and qualitatively assessed them for congruency between simulated and clinical contours, and for how much the XAI agreed with expert reasoning. To objectify these observations, we explored persistent homology intensity filtrations to capture essential topological characteristics of the XAI attributions. Principal component (PC) eigenvalues of Euler characteristic profiles were correlated with spatial agreement (Dice-Sørensen similarity coefficient; DSC). Evaluation was done using sensitivity, specificity, and the area under the receiver operating characteristic (AUROC) curve on an external AAPM dataset, where, as proof of principle, we regard the lowest 15% DSC as insufficient. PatternNet attributions (PNet-A) focused on soft-tissue structures, whereas guided backpropagation (GBP) highlighted both soft-tissue and high-density structures (e.g. mandible bone), which was congruent with the synthetic situations. Both methods typically had higher/denser activations in better auto-contoured medial and anterior lobes. Curated models produced "cleaner" gradient class-activation mapping (GCAM) attributions. Quantitative analysis showed that PC λ<sub>1</sub> of guided GCAM's (GGCAM) Euler characteristic (EC) profile had good predictive value (sensitivity>0.85, specificity>0.90) for DSC on AAPM cases, with AUROC = 0.66, 0.74, 0.94 and 0.83 for GBP, GCAM, GGCAM and PNet-A, respectively. For λ<sub>1</sub> < -1.8e3 of GGCAM's EC-profile, 87% of cases were insufficient. GBP and PNet-A qualitatively agreed most with expert reasoning on directly (structure borders) and indirectly (proxies used for identifying structure borders) important features for PG segmentation. Additionally, this work investigated, as proof of principle, how topological data analysis could be used for quantitative XAI signal analysis to mark potentially inadequate CNN segmentations a priori, using only features from inside the predicted PG. This work used the PG as a well-understood segmentation paradigm and may extend to target volumes and other organs-at-risk.
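A hedged sketch of how an Euler characteristic profile can be computed from an attribution map under an intensity (superlevel-set) filtration, using scikit-image; projecting the profiles onto the first principal component is an assumption about how PC λ<sub>1</sub> enters the analysis, as the paper's exact construction is not given in this abstract.

```python
import numpy as np
from skimage.measure import euler_number
from sklearn.decomposition import PCA

def ec_profile(attribution: np.ndarray, n_levels: int = 64) -> np.ndarray:
    """Euler characteristic (EC) profile of an XAI attribution map:
    EC of the superlevel set {attribution >= t} as t sweeps the normalized
    intensity range. Works for 2D or 3D maps."""
    a = attribution.astype(float)
    a = (a - a.min()) / (np.ptp(a) + 1e-8)
    thresholds = np.linspace(0.0, 1.0, n_levels)
    return np.array([euler_number(a >= t) for t in thresholds])

# One EC curve per case; correlating a principal-component score of these
# curves with Dice is the assumed reading of "PC eigenvalues ... correlated
# with spatial agreement".
profiles = np.stack([ec_profile(np.random.rand(64, 64)) for _ in range(20)])
pc1_scores = PCA(n_components=1).fit_transform(profiles)
```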

A Cardiac-specific CT Foundation Model for Heart Transplantation

Xu, H., Woicik, A., Asadian, S., Shen, J., Zhang, Z., Nabipoor, A., Musi, J. P., Keenan, J., Khorsandi, M., Al-Alao, B., Dimarakis, I., Chalian, H., Lin, Y., Fishbein, D., Pal, J., Wang, S., Lin, S.

medrxiv preprint · Aug 19 2025
Heart failure is a major cause of morbidity and mortality, with the severest forms requiring heart transplantation. Heart size matching between donor and recipient is a critical step in ensuring a successful transplantation. Currently, a set of equations based on population measures of height, weight, sex and age, viz. predicted heart mass (PHM), is used, but this can be improved upon with personalized information from recipient and donor chest CT images. Here, we developed GigaHeart, the first heart-specific foundation model, pretrained on 180,897 chest CT volumes from 56,607 patients. The key idea of GigaHeart is to direct the foundation model's attention towards the heart by contrasting the heart region and the entire chest, thereby encouraging the model to capture fine-grained cardiac features. GigaHeart achieves the best performance on 8 cardiac-specific classification tasks and, further, exhibits superior performance on cross-modal tasks by jointly modeling CT images and reports. We similarly developed a thorax-specific foundation model and observed promising performance on 9 thorax-specific tasks, indicating the potential to extend GigaHeart to other organ-specific foundation models. More importantly, GigaHeart addresses the heart sizing problem. It avoids oversizing by correctly segmenting the hearts of donors and recipients. In regressions against actual heart masses, our AI-segmented total cardiac volumes (TCVs) show a 33.3% improvement in R<sup>2</sup> compared to PHM. Meanwhile, GigaHeart also addresses the undersizing problem by adding a regression layer to the model; specifically, GigaHeart reduces the mean squared error by 57% against PHM. In total, we show that GigaHeart increases the acceptable range of donor heart sizes and matches more accurately than the widely used PHM equations. In all, GigaHeart is a state-of-the-art, cardiac-specific foundation model whose key innovation is directing the model's attention to the heart. GigaHeart can be fine-tuned to accomplish a number of tasks accurately, of which AI-assisted heart sizing is a novel example.
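The "contrast the heart region against the entire chest" idea could, for example, take the form of a CLIP-style InfoNCE objective over paired embeddings, as in the hedged sketch below; GigaHeart's actual pretraining loss is not specified in this abstract, so every detail here is an assumption.

```python
import torch
import torch.nn.functional as F

def region_contrastive_loss(heart_emb: torch.Tensor,
                            chest_emb: torch.Tensor,
                            temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss pairing each heart-crop embedding with the embedding
    of its own full-chest volume (positive) against the other volumes in the
    batch (negatives). heart_emb, chest_emb: (B, D) from two encoder branches."""
    h = F.normalize(heart_emb, dim=1)
    c = F.normalize(chest_emb, dim=1)
    logits = h @ c.t() / temperature                 # (B, B) similarity matrix
    labels = torch.arange(h.size(0), device=h.device)
    # Symmetric cross-entropy over rows and columns, as in CLIP-style training.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```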

CT-based auto-segmentation of multiple target volumes for all-in-one radiotherapy in rectal cancer patients.

Li X, Wang L, Yang M, Li X, Zhao T, Wang M, Lu S, Ji Y, Zhang W, Jia L, Peng R, Wang J, Wang H

pubmed · Aug 19 2025
This study aimed to evaluate the clinical feasibility and performance of CT-based auto-segmentation models integrated into an All-in-One (AIO) radiotherapy workflow for rectal cancer. The study included 312 rectal cancer patients: 272 were used to train three nnU-Net models for CTV45, CTV50, and GTV segmentation, and 40 for evaluation across one internal (<i>n</i> = 10), one clinical AIO (<i>n</i> = 10), and two external cohorts (<i>n</i> = 10 each). Segmentation accuracy (DSC, HD, HD95, ASSD, ASD) and time efficiency were assessed. In the internal testing set, mean DSC of CTV45, CTV50, and GTV were 0.90, 0.86, and 0.71; HD were 17.08, 25.48, and 79.59 mm; HD95 were 4.89, 7.33, and 56.49 mm; ASSD were 1.23, 1.90, and 6.69 mm; and ASD were 1.24, 1.58, and 11.61 mm. Auto-segmentation reduced manual delineation time by 63.3–88.3% (<i>p</i> < 0.0001). In clinical practice, average DSC of CTV45, CTV50, and GTV were 0.93, 0.88, and 0.78; HD were 13.56, 23.84, and 35.38 mm; HD95 were 3.33, 6.46, and 21.34 mm; ASSD were 0.78, 1.49, and 3.30 mm; and ASD were 0.74, 1.18, and 2.13 mm. The multi-center testing also showed the applicability of these models, with average DSC of CTV45 and GTV of 0.84 and 0.80, respectively. The models demonstrated high accuracy and clinical utility, effectively streamlining target volume delineation and reducing manual workload in routine practice. The study protocol was approved by the Institutional Review Board of Peking University Third Hospital (Approval No. (2024) Medical Ethics Review No. 182-01).
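For reference, the two headline metrics here, DSC and HD95, can be computed from binary masks as in this NumPy/SciPy sketch (voxel spacing in mm and non-empty masks are assumed):

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice-Sorensen coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * (pred & gt).sum() / (pred.sum() + gt.sum())

def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance (mm) between binary masks."""
    def surface(m):
        m = m.astype(bool)
        # Boundary voxels = mask minus its erosion, scaled to physical units.
        return np.argwhere(m & ~binary_erosion(m)) * np.asarray(spacing)
    sp, sg = surface(pred), surface(gt)
    d_pg = cKDTree(sg).query(sp)[0]   # pred surface -> nearest gt surface
    d_gp = cKDTree(sp).query(sg)[0]   # gt surface -> nearest pred surface
    return float(np.percentile(np.hstack([d_pg, d_gp]), 95))
```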

Deep Learning-Enhanced Opportunistic Osteoporosis Screening in 100 kV Low-Voltage Chest CT: A Novel Way Toward Bone Mineral Density Measurement and Radiation Dose Reduction.

Li Y, Ye K, Liu S, Zhang Y, Jin D, Jiang C, Ni M, Zhang M, Qian Z, Wu W, Pan X, Yuan H

pubmed · Aug 19 2025
To explore the feasibility and accuracy of a deep learning (DL) method for fully automated vertebral body (VB) segmentation, region of interest (ROI) extraction, and bone mineral density (BMD) calculation using 100 kV low-voltage chest CT performed for lung cancer screening, across various scanners from different manufacturers and hospitals. This study included 1167 patients who underwent 100 kV low-voltage chest and 120 kV lumbar CT from October 2022 to August 2024. Patients were divided into a training set (495 patients), a validation set (169 patients), and three test sets (245, 128, and 130 patients). The DL framework comprised four convolutional neural networks (CNNs): 3D VB-Net and SCN for automated VB segmentation and ROI extraction, and DenseNet and ResNet for BMD calculation of the target VBs (T12-L2). The BMD values from 120 kV QCT served as the reference. Linear regression and Bland-Altman analyses were used to compare BMD values between 120 kV QCT and the 100 kV CNNs, and between 120 kV and 100 kV QCT. Receiver operating characteristic curve analysis was used to evaluate the diagnostic performance of the 100 kV CNNs and 100 kV QCT for distinguishing osteoporosis and low BMD from normal BMD. For the three test sets, linear regression and Bland-Altman analyses revealed a stronger correlation (R<sup>2</sup> = 0.970-0.994 vs. 0.968-0.986, P < .001) and better agreement (mean error, -2.24 to 1.52 vs. 2.72 to 3.06 mg/cm<sup>3</sup>) for BMD between 120 kV QCT and the 100 kV CNNs than between 120 kV and 100 kV QCT. The areas under the curve of the 100 kV CNNs and 100 kV QCT were 1.000 and 0.999-1.000 for detecting osteoporosis, and 1.000 and 1.000 for distinguishing low BMD from normal BMD, respectively. The DL method achieved high accuracy for fully automated osteoporosis screening on 100 kV low-voltage chest CT scans obtained for lung cancer screening and performed well on various scanners from different manufacturers and hospitals.
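As a hedged illustration of the ROC analysis, the sketch below scores predicted 100 kV BMD against reference labels derived from 120 kV QCT. The cut-offs used (<80 mg/cm<sup>3</sup> osteoporosis, 80-120 mg/cm<sup>3</sup> low BMD) are commonly cited QCT criteria and may differ from the paper's exact definitions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def screening_auc(bmd_ref_120kv, bmd_pred_100kv, cutoff=80.0):
    """AUC for detecting osteoporosis from CNN-predicted BMD, with labels
    defined by the 120 kV QCT reference. cutoff=80.0 is an assumed threshold;
    pass cutoff=120.0 for the low-BMD-vs-normal task."""
    y_true = (np.asarray(bmd_ref_120kv) < cutoff).astype(int)
    # Lower predicted BMD should rank higher for disease, hence the negation.
    return roc_auc_score(y_true, -np.asarray(bmd_pred_100kv))
```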

Lung adenocarcinoma subtype classification based on contrastive learning model with multimodal integration.

Wang C, Liu L, Fan C, Zhang Y, Mai Z, Li L, Liu Z, Tian Y, Hu J, Elazab A

pubmed · Aug 19 2025
Accurately identifying the stages of lung adenocarcinoma is essential for selecting the most appropriate treatment plans. Nonetheless, this task is complicated by challenges such as integrating diverse data, similarities among subtypes, and the need to capture contextual features, all of which make precise differentiation difficult. We address these challenges with a multimodal deep neural network that integrates computed tomography (CT) images, annotated lesion bounding boxes, and electronic health records. Our model first combines the bounding boxes, which provide precise lesion locations, with the CT scans, generating a richer semantic representation by extracting features from the regions of interest with a vision transformer module to enhance localization accuracy. Beyond imaging data, the model also incorporates clinical information encoded by a fully connected encoder. Features extracted from the CT and clinical data are optimized for cosine similarity using a contrastive language-image pre-training (CLIP) module, ensuring they are cohesively integrated. In addition, we introduce an attention-based feature fusion module that further harmonizes these features into a unified representation. This integrated feature set is then fed into a classifier that distinguishes among the three adenocarcinoma subtypes. Finally, we employ focal loss to mitigate the effects of unbalanced classes and a contrastive learning loss to enhance feature representation and improve the model's performance. Our experiments on public and proprietary datasets demonstrate the efficiency of our model, which achieves a superior validation accuracy of 81.42% and an area under the curve of 0.9120, significantly outperforming recent multimodal classification approaches. The code is available at https://github.com/fancccc/LungCancerDC.
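Focal loss, used here for class imbalance, down-weights well-classified examples so minority subtypes keep contributing to the gradient. A standard multi-class implementation (Lin et al., 2017) is sketched below; the gamma value and optional per-class weights are assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               gamma: float = 2.0, alpha: torch.Tensor = None) -> torch.Tensor:
    """Multi-class focal loss: cross-entropy scaled by (1 - p_t)^gamma.
    logits: (B, n_classes); targets: (B,) class indices."""
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p of true class
    pt = log_pt.exp()
    loss = -((1.0 - pt) ** gamma) * log_pt   # easy examples (pt -> 1) vanish
    if alpha is not None:                    # optional per-class weights
        loss = alpha.to(logits.device)[targets] * loss
    return loss.mean()
```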

Machine Learning in Venous Thromboembolism - Why and What Next?

Gurumurthy G, Kisiel F, Reynolds L, Thomas W, Othman M, Arachchillage DJ, Thachil J

pubmed · Aug 19 2025
Venous thromboembolism (VTE) remains a leading cause of cardiovascular morbidity and mortality, despite advances in imaging and anticoagulation. VTE arises from diverse and overlapping risk factors, such as inherited thrombophilia, immobility, malignancy, surgery or trauma, pregnancy, hormonal therapy, obesity, chronic medical conditions (e.g., heart failure, inflammatory disease), and advancing age. Clinicians therefore face challenges in balancing the benefits of thromboprophylaxis against the bleeding risk. Existing clinical risk scores often exhibit only modest discrimination and calibration across heterogeneous patient populations. Machine learning (ML) has emerged as a promising tool to address these limitations. In imaging, convolutional neural networks and hybrid algorithms can detect VTE on CT pulmonary angiography with areas under the curve (AUCs) of 0.85 to 0.96. In surgical cohorts, gradient-boosting models outperform traditional risk scores, achieving AUCs between 0.70 and 0.80 in predicting postoperative VTE. In cancer-associated venous thrombosis, advanced ML models demonstrate AUCs between 0.68 and 0.82, although concerns about bias and external validation persist. Bleeding risk prediction remains challenging in extended anticoagulation settings, where ML models often only match conventional ones. Neural networks for predicting recurrent VTE showed AUCs of 0.93 to 0.99 in initial studies, but these lack transparency and prospective validation. Most ML models suffer from limited external validation, "black box" algorithms, and integration hurdles within clinical workflows. Future efforts should focus on standardized reporting (e.g., Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis [TRIPOD]-ML), transparent model interpretation, prospective impact assessments, and seamless incorporation into electronic health records to realize the full potential of ML in VTE.

State of Abdominal CT Datasets: A Critical Review of Bias, Clinical Relevance, and Real-world Applicability

Saeide Danaei, Zahra Dehghanian, Elahe Meftah, Nariman Naderi, Seyed Amir Ahmad Safavi-Naini, Faeze Khorasanizade, Hamid R. Rabiee

arxiv preprint · Aug 19 2025
This systematic review critically evaluates publicly available abdominal CT datasets and their suitability for artificial intelligence (AI) applications in clinical settings. We examined 46 publicly available abdominal CT datasets (50,256 studies). Across all 46 datasets, we found substantial redundancy (59.1% case reuse) and a Western/geographic skew (75.3% from North America and Europe). A bias assessment was performed on the 19 datasets with ≥100 cases; within this subset, the most prevalent high-risk categories were domain shift (63%) and selection bias (57%), both of which may undermine model generalizability across diverse healthcare environments, particularly in resource-limited settings. To address these challenges, we propose targeted strategies for dataset improvement, including multi-institutional collaboration, adoption of standardized protocols, and deliberate inclusion of diverse patient populations and imaging technologies. These efforts are crucial in supporting the development of more equitable and clinically robust AI models for abdominal imaging.