
Machine-learning model for differentiating round pneumonia and primary lung cancer using CT-based radiomic analysis.

Genç H, Yildirim M

pubmed · Sep 12 2025
Round pneumonia is a benign lung condition that can radiologically mimic primary lung cancer, making diagnosis challenging. Accurately distinguishing between these diseases is critical to avoid unnecessary invasive procedures. This study aims to distinguish round pneumonia from primary lung cancer by developing machine-learning models based on radiomic features extracted from computed tomography (CT) images. This retrospective observational study included 24 patients diagnosed with round pneumonia and 24 with histopathologically confirmed primary lung cancer. The lesions were manually segmented on the CT images by two radiologists. In total, 107 radiomic features were extracted from each case. Feature selection was performed using an information-gain algorithm to identify the five most relevant features. Seven machine-learning classifiers (naïve Bayes, support vector machine, random forest, decision tree, neural network, logistic regression, and k-nearest neighbors) were trained and validated. Model performance was evaluated using the area under the curve (AUC), classification accuracy, sensitivity, and specificity. The naïve Bayes, support vector machine, and random forest models achieved perfect classification performance on the entire dataset (AUC = 1.000). After feature selection, the naïve Bayes model maintained high performance, with an AUC of 1.000, accuracy of 0.979, sensitivity of 0.958, and specificity of 1.000. Machine-learning models using CT-based radiomic features can effectively differentiate round pneumonia from primary lung cancer and offer a promising noninvasive tool to aid radiological diagnosis and reduce diagnostic uncertainty.
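A minimal sketch of the described pipeline, assuming Python with scikit-learn (the abstract does not name its software): mutual information stands in for the information-gain criterion, and the arrays below are random placeholders rather than the study's radiomic features.

```python
# Hypothetical sketch: information-gain feature selection (approximated by
# mutual information) followed by a naive Bayes classifier, mirroring the
# pipeline described in the abstract. Data are random stand-ins.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(48, 107))   # placeholder: 48 cases x 107 radiomic features
y = np.repeat([0, 1], 24)        # 24 round pneumonia, 24 primary lung cancer

pipeline = make_pipeline(
    SelectKBest(mutual_info_classif, k=5),  # keep the 5 most informative features
    GaussianNB(),
)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(pipeline, X, y, cv=cv, scoring="roc_auc")
print(f"cross-validated AUC: {aucs.mean():.3f} +/- {aucs.std():.3f}")
```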

Risk prediction for lung cancer screening: a systematic review and meta-regression

Rezaeianzadeh, R., Leung, C., Kim, S. J., Choy, K., Johnson, K. M., Kirby, M., Lam, S., Smith, B. M., Sadatsafavi, M.

medrxiv preprint · Sep 12 2025
Background: Lung cancer (LC) is the leading cause of cancer mortality, often diagnosed at advanced stages. Screening reduces mortality in high-risk individuals, but its efficiency can improve with pre- and post-screening risk stratification. With recent LC screening guideline updates in Europe and the US, numerous novel risk prediction models have emerged since the last systematic review of such models. We reviewed risk-based models for selecting candidates for CT screening and for post-CT stratification. Methods: We systematically reviewed Embase and MEDLINE (2020-2024), identifying studies proposing new LC risk models for screening selection or nodule classification. Data extraction included study design, population, model type, risk horizon, and internal/external validation metrics. In addition, we performed an exploratory meta-regression of AUCs to assess whether sample size, model class, validation type, and biomarker use were associated with discrimination. Results: Of 1987 records, 68 were included: 41 models were for screening selection (20 without biomarkers, 21 with) and 27 for nodule classification. Regression-based models predominated, though machine learning and deep learning approaches were increasingly common. Discrimination ranged from moderate (AUC ≈ 0.70) to excellent (> 0.90), with biomarker- and imaging-enhanced models often outperforming traditional ones. Model calibration was inconsistently reported, and fewer than half of the models underwent external validation. Meta-regression suggested that, among pre-screening models, larger sample sizes were modestly associated with higher AUC. Conclusion: 75 models had been identified prior to 2020; we found 68 more since, reflecting growing interest in personalized LC screening. While many demonstrate strong discrimination, inconsistent calibration and limited external validation hinder clinical adoption. Future efforts should prioritize improving existing models rather than developing new ones, transparent evaluation, cost-effectiveness analysis, and real-world implementation.
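The abstract names the meta-regression covariates but not its exact form; the sketch below shows one plausible minimal version, assuming statsmodels and entirely synthetic study-level data: AUC regressed on log sample size and a biomarker indicator, with sample size as a crude precision weight.

```python
# Illustrative study-level meta-regression of AUC on log sample size and
# biomarker use. All values are synthetic; the review's extracted data are
# not reproduced here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_studies = 40
df = pd.DataFrame({
    "auc": np.clip(0.72 + 0.03 * rng.normal(size=n_studies), 0.60, 0.95),
    "log_n": np.log10(rng.integers(500, 100_000, size=n_studies)),
    "biomarker": rng.integers(0, 2, size=n_studies),
})
fit = smf.wls("auc ~ log_n + biomarker", data=df,
              weights=10 ** df["log_n"]).fit()   # weight by sample size
print(fit.params)
```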

The best diagnostic approach for classifying ischemic stroke onset time: A systematic review and meta-analysis.

Zakariaee SS, Kadir DH, Molazadeh M, Abdi S

pubmed · Sep 12 2025
The success of intravenous thrombolysis with tPA (IV-tPA), the fastest and most accessible treatment for stroke patients, is closely tied to the time since stroke onset (TSS). Administering IV-tPA beyond the recommended time window (< 4.5 h) increases the risk of cerebral hemorrhage. Although advances in diagnostic approaches have been made, determining TSS remains a clinical challenge. In this study, the performances of different diagnostic approaches for classifying TSS were investigated. A systematic literature search was conducted in the Web of Science, PubMed, Scopus, Embase, and Cochrane databases up to July 2025. The pooled AUC, sensitivity, and specificity with their 95% CIs were determined for each diagnostic approach to evaluate its classification performance. This systematic review covered a total of 9030 stroke patients. The results showed that human reading of the DWI-FLAIR mismatch, the current gold-standard method, has moderate performance in identifying TSS, with an AUC of 0.71 (95% CI: 0.66-0.76), sensitivity of 0.62 (95% CI: 0.54-0.71), and specificity of 0.78 (95% CI: 0.72-0.84). An ML model fed with radiomic features from CT data had the best performance among the models reviewed, with an AUC of 0.89 (95% CI: 0.80-0.98), sensitivity of 0.85 (95% CI: 0.75-0.96), and specificity of 0.86 (95% CI: 0.73-1.00). ML models fed with radiomic features classify TSS better than human reading of the DWI-FLAIR mismatch. An efficient AI model fed with CT radiomic data could yield the best classification performance for determining patients' eligibility for IV-tPA treatment and improving treatment outcomes.
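Pooled estimates like those quoted above are typically produced by random-effects synthesis; as background, the sketch below applies DerSimonian-Laird pooling to logit-transformed sensitivities with invented per-study counts. The review's actual synthesis model may differ (bivariate models are common in diagnostic meta-analysis).

```python
# DerSimonian-Laird random-effects pooling of logit sensitivities.
# Per-study true-positive/false-negative counts are invented.
import numpy as np

tp = np.array([40, 55, 30, 62])            # hypothetical true positives
fn = np.array([18, 25, 20, 30])            # hypothetical false negatives

sens = tp / (tp + fn)
logit = np.log(sens / (1 - sens))
var = 1 / tp + 1 / fn                      # delta-method variance of the logit
w = 1 / var                                # fixed-effect weights
mu_fe = np.sum(w * logit) / np.sum(w)
q = np.sum(w * (logit - mu_fe) ** 2)       # Cochran's Q
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(tp) - 1)) / c)   # between-study variance
w_re = 1 / (var + tau2)                    # random-effects weights
mu_re = np.sum(w_re * logit) / np.sum(w_re)
pooled = 1 / (1 + np.exp(-mu_re))          # back-transform to a sensitivity
print(f"pooled sensitivity: {pooled:.3f} (tau^2 = {tau2:.3f})")
```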

Enhanced U-Net with Attention Mechanisms for Improved Feature Representation in Lung Nodule Segmentation.

Aung TMM, Khan AA

pubmed · Sep 11 2025
Accurate segmentation of small and irregular pulmonary nodules remains a significant challenge in lung cancer diagnosis, particularly in complex imaging backgrounds. Traditional U-Net models often struggle to capture long-range dependencies and integrate multi-scale features, limiting their effectiveness in addressing these challenges. To overcome these limitations, this study proposes an enhanced U-Net hybrid model that integrates multiple attention mechanisms to enrich feature representation and improve segmentation precision. The proposed model was assessed on the LUNA16 dataset, which contains annotated CT scans of pulmonary nodules. Multiple attention mechanisms, including Spatial Attention (SA), Dilated Efficient Channel Attention (Dilated ECA), the Convolutional Block Attention Module (CBAM), and the Squeeze-and-Excitation (SE) block, were integrated into a U-Net backbone. These modules were strategically combined to enhance both local and global feature representations, and the model's architecture and training procedures were designed to address the challenges of segmenting small and irregular pulmonary nodules. The proposed model achieved a Dice similarity coefficient of 84.30%, significantly outperforming the baseline U-Net model and demonstrating improved accuracy in segmenting small and irregular pulmonary nodules. The integration of multiple attention mechanisms significantly enhances the model's ability to capture both local and global features, addressing key limitations of traditional U-Net architectures: SA preserves spatial features for small nodules, Dilated ECA captures long-range dependencies, and CBAM and SE further refine feature representations. Together, these modules improve segmentation performance in complex imaging backgrounds. A potential limitation is that performance may still be constrained in cases with extreme anatomical variability or low-contrast lesions, suggesting directions for future research. The enhanced U-Net hybrid model outperforms the traditional U-Net, effectively addressing the challenges of segmenting small and irregular pulmonary nodules within complex imaging backgrounds.
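As a flavor of the modules involved, here is a minimal Squeeze-and-Excitation block in PyTorch; the paper's reduction ratio, placement, and exact configuration are not given here, so this is a generic sketch rather than the authors' implementation.

```python
# Generic Squeeze-and-Excitation (SE) block: global average pooling
# ("squeeze") followed by a two-layer gating MLP ("excitation") that
# reweights feature channels.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        gates = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * gates                     # channel-wise reweighting

feats = torch.randn(2, 64, 32, 32)           # dummy U-Net encoder features
print(SEBlock(64)(feats).shape)              # torch.Size([2, 64, 32, 32])
```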

Mapping of discrete range modulated proton radiograph to water-equivalent path length using machine learning

Atiq Ur Rahman, Chun-Chieh Wang, Shu-Wei Wu, Tsi-Chian Chao, I-Chun Cho

arxiv preprint · Sep 11 2025
Objective. Proton beams enable localized dose delivery. Accurate range estimation is essential, but planning still relies on X-ray CT, which introduces uncertainty in stopping power and range. Proton CT measures water-equivalent thickness directly but suffers resolution loss from multiple Coulomb scattering. We develop a data-driven method that reconstructs water-equivalent path length (WEPL) maps from energy-resolved proton radiographs, bypassing intermediate reconstructions. Approach. We present a machine-learning pipeline for predicting WEPL from high-dimensional radiographs. Data were generated with the TOPAS Monte Carlo toolkit, modeling a clinical nozzle and a patient CT. Proton energies spanned 70-230 MeV across 72 projection angles. Principal component analysis (PCA) reduced input dimensionality while preserving signal. A conditional GAN with gradient penalty was trained for WEPL prediction using a composite loss (adversarial, MSE, SSIM, perceptual) to balance sharpness, accuracy, and stability. Main results. The model reached a mean relative WEPL deviation of 2.5 percent, an SSIM of 0.97, and a proton radiography gamma-index passing rate of 97.1 percent (2 percent delta WEPL, 3 mm distance-to-agreement) on a simulated head phantom. Results indicate high spatial fidelity and strong structural agreement. Significance. WEPL can be mapped directly from proton radiographs with deep learning while avoiding intermediate steps. The method mitigates the limits of analytic techniques and may improve treatment planning. Future work will tune the number of PCA components, include detector response, explore low-dose settings, and extend multi-angle data toward full proton CT reconstruction; the approach is compatible with clinical workflows.
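The PCA step can be pictured with a short sketch, assuming scikit-learn; the shapes and component count below are placeholders, as the abstract does not state them.

```python
# Placeholder sketch of the dimensionality-reduction step: flatten
# energy-resolved radiographs per projection, then project onto principal
# components before feeding the conditional GAN.
import numpy as np
from sklearn.decomposition import PCA

n_projections, n_energy_bins, h, w = 72, 16, 64, 64    # assumed shapes
radiographs = np.random.rand(n_projections, n_energy_bins * h * w)

pca = PCA(n_components=64)                 # hypothetical component count
latent = pca.fit_transform(radiographs)    # compact inputs for the generator
print(latent.shape, f"variance kept: {pca.explained_variance_ratio_.sum():.2f}")
```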

Virtual staining for 3D X-ray histology of bone implants

Sarah C. Irvine, Christian Lucas, Diana Krüger, Bianca Guedert, Julian Moosmann, Berit Zeller-Plumhoff

arxiv preprint · Sep 11 2025
Three-dimensional X-ray histology techniques offer a non-invasive alternative to conventional 2D histology, enabling volumetric imaging of biological tissues without the need for physical sectioning or chemical staining. However, the inherently greyscale image contrast of X-ray tomography limits its biochemical specificity compared to traditional histological stains. Within digital pathology, deep learning-based virtual staining has demonstrated utility in simulating stained appearances from label-free optical images. In this study, we extend virtual staining to the X-ray domain by applying cross-modality image translation to generate artificially stained slices from synchrotron-radiation-based micro-CT scans. Using over 50 co-registered image pairs of micro-CT and toluidine blue-stained histology from bone-implant samples, we trained a modified CycleGAN network tailored for limited paired data. Whole-slide histology images were downsampled to match the voxel size of the CT data, with on-the-fly data augmentation for patch-based training. The model incorporates pixelwise supervision and greyscale consistency terms, producing histologically realistic colour outputs while preserving high-resolution structural detail. Our method outperformed Pix2Pix and standard CycleGAN baselines across SSIM, PSNR, and LPIPS metrics. Once trained, the model can be applied to full CT volumes to generate virtually stained 3D datasets, enhancing interpretability without additional sample preparation. While features such as new bone formation could be reproduced, some variability in the depiction of implant degradation layers highlights the need for further training data and refinement. This work introduces virtual staining to 3D X-ray imaging and offers a scalable route to chemically informative, label-free tissue characterisation in biomedical research.
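The two added supervision terms could plausibly be composed as below, assuming PyTorch; the luminance projection, loss weights, and function names are illustrative assumptions rather than the paper's values.

```python
# Sketch of pixelwise supervision plus a greyscale-consistency term layered
# onto a CycleGAN objective. Weights and the luma projection are assumptions.
import torch
import torch.nn.functional as F

def luminance(rgb: torch.Tensor) -> torch.Tensor:
    # Rec. 601 luma as one plausible greyscale projection of the stain.
    r, g, b = rgb[:, 0:1], rgb[:, 1:2], rgb[:, 2:3]
    return 0.299 * r + 0.587 * g + 0.114 * b

def supervision_losses(fake_stain, real_stain, ct_grey, w_pix=10.0, w_grey=1.0):
    pixel = F.l1_loss(fake_stain, real_stain)          # paired (pixelwise) term
    grey = F.l1_loss(luminance(fake_stain), ct_grey)   # greyscale consistency
    return w_pix * pixel + w_grey * grey

fake = torch.rand(1, 3, 64, 64)    # generator output (virtual stain)
real = torch.rand(1, 3, 64, 64)    # co-registered histology patch
ct = torch.rand(1, 1, 64, 64)      # micro-CT input patch
print(supervision_losses(fake, real, ct).item())
```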

Application of Deep Learning for Predicting Hematoma Expansion in Intracerebral Hemorrhage Using Computed Tomography Scans: A Systematic Review and Meta-Analysis of Diagnostic Accuracy.

Ahmadzadeh AM, Ashoobi MA, Broomand Lomer N, Elyassirad D, Gheiji B, Vatanparast M, Bathla G, Tu L

pubmed · Sep 11 2025
We aimed to systematically review studies that utilized deep learning (DL)-based networks to predict hematoma expansion (HE) in patients with intracerebral hemorrhage (ICH) using computed tomography (CT) images. We carried out a comprehensive literature search across four major databases to identify relevant studies. To evaluate the quality of the included studies, we used both the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) and the METhodological RadiomICs Score (METRICS) checklists. We then calculated pooled diagnostic estimates and assessed heterogeneity using the I² statistic. To assess the sources of heterogeneity, the effects of individual studies, and publication bias, we performed subgroup analysis, sensitivity analysis, and Deeks' asymmetry test. Twenty-two studies were included in the qualitative synthesis, of which 11 and 6 were utilized for the exclusive-DL and combined-DL meta-analyses, respectively. We found pooled sensitivity of 0.81 and 0.84, specificity of 0.79 and 0.91, positive diagnostic likelihood ratio (DLR) of 3.96 and 9.40, negative DLR of 0.23 and 0.18, diagnostic odds ratio of 16.97 and 53.51, and area under the curve of 0.87 and 0.89 for exclusive DL-based and combined DL-based models, respectively. Subgroup analysis revealed significant inter-group differences according to segmentation technique and study quality. DL-based networks showed strong potential for accurately identifying HE in ICH patients. These models may guide earlier targeted interventions, such as intensive blood pressure control or administration of hemostatic drugs, potentially leading to improved patient outcomes.
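The likelihood ratios and odds ratio follow by definition from sensitivity and specificity; the check below plugs in the pooled exclusive-DL values. Small mismatches with the quoted figures are expected, since the paper pools each metric jointly rather than deriving one from another.

```python
# Definitional check of the reported metrics at the pooled exclusive-DL
# sensitivity/specificity. Bivariate pooling in the paper means these derived
# values need not match its estimates exactly.
sens, spec = 0.81, 0.79
dlr_pos = sens / (1 - spec)        # ~3.86 (reported: 3.96)
dlr_neg = (1 - sens) / spec        # ~0.24 (reported: 0.23)
dor = dlr_pos / dlr_neg            # ~16.0 (reported: 16.97)
print(f"DLR+ = {dlr_pos:.2f}, DLR- = {dlr_neg:.2f}, DOR = {dor:.2f}")
```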

Artificial intelligence in gastric cancer: a systematic review of machine learning and deep learning applications.

Alsallal M, Habeeb MS, Vaghela K, Malathi H, Vashisht A, Sahu PK, Singh D, Al-Hussainy AF, Aljanaby IA, Sameer HN, Athab ZH, Adil M, Yaseen A, Farhood B

pubmed · Sep 11 2025
Gastric cancer (GC) remains a major global health concern, ranking as the fifth most prevalent malignancy and the fourth leading cause of cancer-related mortality worldwide. Although early detection can increase the 5-year survival rate of early gastric cancer (EGC) to over 90%, more than 80% of cases are diagnosed at advanced stages due to subtle clinical symptoms and diagnostic challenges. Artificial intelligence (AI), particularly machine learning (ML) and deep learning (DL), has shown great promise in addressing these limitations. This systematic review aims to evaluate the performance, applications, and limitations of ML and DL models in GC management, with a focus on their use in detection, diagnosis, treatment planning, and prognosis prediction across diverse clinical imaging and data modalities. Following the PRISMA 2020 guidelines, a comprehensive literature search was conducted in MEDLINE, Web of Science, and Scopus for studies published between 2004 and May 2025. Eligible studies applied ML or DL algorithms to diagnostic or prognostic tasks in GC using data from endoscopy, computed tomography (CT), pathology, or multi-modal sources. Two reviewers independently performed study selection, data extraction, and risk-of-bias assessment. A total of 59 studies met the inclusion criteria. DL models, particularly convolutional neural networks (CNNs), demonstrated strong performance in EGC detection, with reported sensitivities up to 95.3% and areas under the curve (AUCs) as high as 0.981, often exceeding expert endoscopists. CT-based radiomics and DL models achieved AUCs ranging from 0.825 to 0.972 for tumor staging and metastasis prediction. Pathology-based models reported accuracies up to 100% for EGC detection and AUCs up to 0.92 for predicting treatment response. Cross-modality approaches combining radiomics and pathomics achieved AUCs up to 0.951. Key challenges included algorithmic bias, limited dataset diversity, interpretability issues, and barriers to clinical integration. ML and DL models have demonstrated substantial potential to improve early detection, diagnostic accuracy, and individualized treatment in GC. To advance clinical adoption, future research should prioritize the development of large, diverse datasets, implement explainable AI frameworks, and conduct prospective clinical trials. These efforts will be essential for integrating AI into precision oncology and addressing the increasing global burden of gastric cancer.

A full-scale attention-augmented CNN-transformer model for segmentation of oropharyngeal mucosa organs-at-risk in radiotherapy.

He L, Sun J, Lu S, Li J, Wang X, Yan Z, Guan J

pubmed · Sep 11 2025
Radiation-induced oropharyngeal mucositis (ROM) is a common and severe side effect of radiotherapy in nasopharyngeal cancer patients, leading to significant clinical complications such as malnutrition, infections, and treatment interruptions. Accurate delineation of the oropharyngeal mucosa (OPM) as an organ-at-risk (OAR) is crucial to minimizing radiation exposure and preventing ROM. This study aims to develop and validate an advanced automatic segmentation model, the attention-augmented Swin U-Net transformer (AA-Swin UNETR), for accurate delineation of the OPM to improve radiotherapy planning and reduce the incidence of ROM. We proposed a hybrid CNN-transformer model, AA-Swin UNETR, based on the Swin UNETR framework, which integrates hierarchical feature extraction with full-scale attention mechanisms. The model includes a Swin Transformer-based encoder and a CNN-based decoder with residual blocks, connected via a full-scale feature connection scheme. The full-scale attention mechanism enables the model to capture long-range dependencies and multi-level features effectively, enhancing segmentation accuracy. The model was trained on a dataset of 202 CT scans from Nanfang Hospital, using expert manual delineations as the gold standard. We evaluated the performance of AA-Swin UNETR against state-of-the-art (SOTA) segmentation models, including Swin UNETR, nnUNet, and 3D UX-Net, using geometric and dosimetric evaluation parameters. The geometric metrics include the Dice similarity coefficient (DSC), surface DSC (sDSC), volume similarity (VS), Hausdorff distance (HD), precision, and recall. The dosimetric metrics include the changes in D0.1cc and Dmean (ΔD0.1cc and ΔDmean) between results derived from the manually delineated OPM and the auto-segmentation models. The AA-Swin UNETR model achieved the highest mean DSC of 87.72 ± 1.98%, significantly outperforming Swin UNETR (83.53 ± 2.59%), nnUNet (85.48 ± 2.68%), and 3D UX-Net (80.04 ± 3.76%). The model also showed superior mean sDSC (98.44 ± 1.08%), mean VS (97.86 ± 1.43%), mean precision (87.60 ± 3.06%), and mean recall (89.22 ± 2.70%), with a competitive mean HD of 9.03 ± 2.79 mm. For the dosimetric evaluation, the proposed model generated the smallest mean ΔD0.1cc (0.46 ± 4.92 cGy) and mean ΔDmean (6.26 ± 24.90 cGy) relative to manual delineation compared with the other auto-segmentation results (mean ΔD0.1cc: Swin UNETR = -0.56 ± 7.28 cGy, nnUNet = 0.99 ± 4.73 cGy, 3D UX-Net = -0.65 ± 8.05 cGy; mean ΔDmean: Swin UNETR = 7.46 ± 43.37 cGy, nnUNet = 21.76 ± 37.86 cGy, 3D UX-Net = 44.61 ± 62.33 cGy). In this paper, we proposed a hybrid CNN-transformer deep-learning model, AA-Swin UNETR, for automatic segmentation of the OPM as an OAR structure in radiotherapy planning. Evaluations with geometric and dosimetric parameters demonstrated that AA-Swin UNETR can generate delineations close to a manual reference, in terms of both geometry and dose-volume metrics. The proposed model outperformed existing SOTA models on both sets of evaluation metrics and demonstrated its capability of accurately segmenting the complex anatomical structures of the OPM, providing a reliable tool for enhancing radiotherapy planning.
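The headline geometric metric, the Dice similarity coefficient, is easy to state in code; below is a minimal NumPy version for binary masks (the paper's own evaluation code is not reproduced here).

```python
# Dice similarity coefficient for binary segmentation masks:
# DSC = 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum() + eps)

a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True   # 4x4 reference square
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True   # shifted prediction
print(f"DSC = {dice(a, b):.3f}")                       # 0.562 for this overlap
```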

An Interpretable Deep Learning Framework for Preoperative Classification of Lung Adenocarcinoma on CT Scans: Advancing Surgical Decision Support.

Shi Q, Liao Y, Li J, Huang H

pubmed · Sep 10 2025
Lung adenocarcinoma remains a leading cause of cancer-related mortality, and the diagnostic performance of computed tomography (CT) is limited when dependent solely on human interpretation. This study aimed to develop and evaluate an interpretable deep learning framework using an attention-enhanced Squeeze-and-Excitation Residual Network (SE-ResNet) to improve automated classification of lung adenocarcinoma from thoracic CT images. Furthermore, Gradient-weighted Class Activation Mapping (Grad-CAM) was applied to enhance model interpretability and assist in the visual localization of tumor regions. A total of 3800 chest CT axial slices were collected from 380 subjects (190 patients with lung adenocarcinoma and 190 controls, with 10 slices extracted from each case). This dataset was used to train and evaluate the baseline ResNet50 model as well as the proposed SE-ResNet50 model. Performance was compared using accuracy, area under the curve (AUC), precision, recall, and F1-score. Grad-CAM visualizations were generated to assess the alignment between the model's attention and radiologically confirmed tumor locations. The SE-ResNet model achieved a classification accuracy of 94% and an AUC of 0.941, significantly outperforming the baseline ResNet50, which achieved 85% accuracy and an AUC of 0.854. Grad-CAM heatmaps produced from the SE-ResNet demonstrated superior localization of tumor-relevant regions, confirming the enhanced focus provided by the attention mechanism. The proposed SE-ResNet framework delivers high accuracy and interpretability in classifying lung adenocarcinoma from CT images. It shows considerable potential as a decision-support tool to assist radiologists in diagnosis and may serve as a valuable clinical tool with further validation.
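A hedged sketch of Grad-CAM via forward and backward hooks in PyTorch is shown below; a plain ResNet-50 stands in for the trained SE-ResNet, and the target layer and input are placeholders.

```python
# Generic Grad-CAM: gradient-weighted average of the last conv stage's
# activations, upsampled to input resolution. Stand-in model and input.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights=None).eval()        # placeholder for the SE-ResNet
acts, grads = {}, {}
layer = model.layer4                          # last convolutional stage

layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224)               # dummy CT slice as RGB input
model(x)[0].max().backward()                  # backprop the top-class logit

weights = grads["v"].mean(dim=(2, 3), keepdim=True)   # GAP over gradients
cam = F.relu((weights * acts["v"]).sum(dim=1))        # weighted activations
cam = F.interpolate(cam[None], size=x.shape[2:], mode="bilinear")[0]
print(cam.shape)                              # heatmap at input resolution
```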
