Predicting the molecular subtypes of 2021 WHO grade 4 glioma by a multiparametric MRI-based machine learning model.

Xu W, Li Y, Zhang J, Zhang Z, Shen P, Wang X, Yang G, Du J, Zhang H, Tan Y

pubmed · Jul 14 2025
Accurately distinguishing the different molecular subtypes of 2021 World Health Organization (WHO) grade 4 Central Nervous System (CNS) gliomas is highly relevant for prognostic stratification and personalized treatment. We aimed to develop and validate a machine learning (ML) model using multiparametric MRI for the preoperative differentiation of astrocytoma, CNS WHO grade 4, from glioblastoma (GBM), isocitrate dehydrogenase-wild-type (IDH-wt) (WHO 2021) (Task 1: grade 4 astrocytoma vs. GBM); to stratify astrocytoma, CNS WHO grade 4, by distinguishing astrocytoma, IDH-mutant (IDH-mut), CNS WHO grade 4 from astrocytoma, IDH-wt, CNS WHO grade 4 (Task 2: IDH-mut grade 4 vs. IDH-wt grade 4); and to evaluate the model's prognostic value. We retrospectively analyzed 320 glioma patients from three hospitals (training/testing split, 7:3 ratio) and 99 patients from The Cancer Genome Atlas (TCGA) database for external validation. Radiomic features were extracted from tumor and edema regions on contrast-enhanced T1-weighted imaging (CE-T1WI) and T2 fluid-attenuated inversion recovery (T2-FLAIR). Extreme gradient boosting (XGBoost) was used to construct the ML, clinical, and combined models. Model performance was evaluated with receiver operating characteristic (ROC) curves, decision curves, and calibration curves, and stability was assessed with six additional classifiers. Kaplan-Meier (KM) survival analysis and the log-rank test assessed the model's prognostic value. In both tasks, the combined model (AUC = 0.907, 0.852, and 0.830 for Task 1; AUC = 0.899, 0.895, and 0.792 for Task 2) and the optimal ML model (AUC = 0.902, 0.854, and 0.832 for Task 1; AUC = 0.904, 0.899, and 0.783 for Task 2) significantly outperformed the clinical model (AUC = 0.671, 0.656, and 0.543 for Task 1; AUC = 0.619, 0.605, and 0.400 for Task 2) across the training, testing, and external validation sets. Survival analysis showed that the combined model stratified prognosis similarly to the molecular subtypes in both tasks (p = 0.964 and p = 0.746). The multiparametric MRI ML model effectively distinguished astrocytoma, CNS WHO grade 4 from GBM, IDH-wt (WHO 2021) and differentiated astrocytoma, IDH-mut from astrocytoma, IDH-wt, CNS WHO grade 4. Additionally, the model provided reliable survival stratification for glioma patients across different molecular subtypes.
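As a rough illustration of the kind of pipeline described above (pre-extracted radiomic features classified with XGBoost and evaluated by ROC-AUC), the sketch below uses placeholder feature matrices and labels rather than the authors' data or code; feature extraction itself (e.g., with pyradiomics from CE-T1WI/T2-FLAIR masks) is assumed to be done separately.

```python
# Minimal sketch (not the authors' code): XGBoost on radiomic features with ROC-AUC.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(320, 100))        # placeholder radiomic feature matrix
y = rng.integers(0, 2, size=320)       # placeholder labels (0 = grade 4 astrocytoma, 1 = GBM)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

clf = XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.05)
clf.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```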

ESE and Transfer Learning for Breast Tumor Classification.

He Y, Batumalay M, Thinakaran R

pubmed · Jul 14 2025
In this study, we proposed a lightweight neural network architecture based on an inverted residual network, an efficient squeeze-excitation (ESE) module, and double transfer learning, called TLese-ResNet, for breast cancer molecular subtype recognition. The inverted ResNet reduces the number of network parameters while enhancing cross-layer gradient propagation and feature expression capabilities. The ESE module reduces network complexity while preserving channel-wise information. The dataset comes from mammography images of patients diagnosed with invasive breast cancer at a hospital in Jiangxi and comprises preoperative mammography images in CC and MLO views. Because the dataset is relatively small, double transfer learning is used in addition to commonly used data augmentation methods. Double transfer learning comprises a first transfer, in which the source domain is ImageNet and the target domain is a COVID-19 chest X-ray image dataset, and a second transfer, in which the source domain is the target domain of the first transfer and the target domain is the mammography dataset we collected. Using five-fold cross-validation, the mean accuracy and area under the receiver operating characteristic curve (AUC) on mammographic images of CC and MLO views were 0.818 and 0.883, respectively, outperforming other state-of-the-art deep learning-based models such as ResNet-50 and DenseNet-121. Therefore, the proposed model can provide clinicians with an effective and non-invasive auxiliary tool for molecular subtype identification of breast cancer.
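For readers unfamiliar with the ESE block, here is a minimal PyTorch sketch of an efficient squeeze-excitation module as commonly described in the literature (a single 1x1 convolution with no channel reduction, followed by hard-sigmoid gating); the paper's TLese-ResNet implementation may differ in detail.

```python
# Minimal ESE block sketch (illustrative, not the paper's code).
import torch
import torch.nn as nn

class ESE(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.fc = nn.Conv2d(channels, channels, kernel_size=1)  # no reduction ratio
        self.gate = nn.Hardsigmoid()

    def forward(self, x):
        w = x.mean(dim=(2, 3), keepdim=True)   # squeeze: global average pooling
        w = self.gate(self.fc(w))              # excite: per-channel weights in [0, 1]
        return x * w                           # recalibrate the input feature map

x = torch.randn(2, 64, 56, 56)
print(ESE(64)(x).shape)  # torch.Size([2, 64, 56, 56])
```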

X-ray2CTPA: leveraging diffusion models to enhance pulmonary embolism classification.

Cahan N, Klang E, Aviram G, Barash Y, Konen E, Giryes R, Greenspan H

pubmed · Jul 14 2025
Chest X-rays, or chest radiographs (CXR), commonly used for medical diagnostics, typically provide limited imaging detail compared to computed tomography (CT) scans, which offer more detailed and accurate three-dimensional data, particularly contrast-enhanced scans such as CT pulmonary angiography (CTPA). However, CT scans entail higher costs, greater radiation exposure, and are less accessible than CXRs. In this work, we explore cross-modal translation from a 2D, low contrast-resolution X-ray input to a 3D, high contrast- and spatial-resolution CTPA scan. Driven by recent advances in generative AI, we introduce a novel diffusion-based approach to this task. We employ the synthesized 3D images in a classification framework and show improved AUC in a pulmonary embolism (PE) categorization task using the initial CXR input. Furthermore, we evaluate the model's performance using quantitative metrics, ensuring the diagnostic relevance of the generated images. The proposed method is generalizable and capable of performing additional cross-modality translations in medical imaging, and may pave the way for more accessible and cost-effective advanced diagnostic tools. The code for this project is available at https://github.com/NoaCahan/X-ray2CTPA.

Generative AI enables medical image segmentation in ultra low-data regimes.

Zhang L, Jindal B, Alaa A, Weinreb R, Wilson D, Segal E, Zou J, Xie P

pubmed · Jul 14 2025
Semantic segmentation of medical images is pivotal in applications such as disease diagnosis and treatment planning. While deep learning automates this task effectively, it struggles in ultra low-data regimes owing to the scarcity of annotated segmentation masks. To address this, we propose a generative deep learning framework that produces high-quality image-mask pairs as auxiliary training data. Unlike traditional generative models that separate data generation from model training, ours uses multi-level optimization for end-to-end data generation, allowing segmentation performance to guide the generation process and produce data tailored to improve segmentation outcomes. Our method demonstrates strong generalization across 11 medical image segmentation tasks and 19 datasets, covering various diseases, organs, and modalities. It improves performance by 10-20% (absolute) in both same-domain and out-of-domain settings and requires 8-20 times less training data than existing approaches. This greatly enhances the feasibility and cost-effectiveness of deep learning in data-limited medical imaging scenarios.
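A heavily simplified toy sketch of the "segmentation-guided generation" idea follows: one unrolled inner update of a segmenter on synthetic image-mask pairs, with the resulting validation loss driving the generator. This is my illustration of the general bilevel pattern, not the authors' multi-level optimization code; all module sizes and learning rates are arbitrary.

```python
# Toy first-order unrolled sketch of segmentation-guided data generation (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

gen = nn.Sequential(nn.Linear(16, 2 * 32 * 32))      # toy generator -> (image, mask) pair
seg = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1))   # toy one-layer segmenter
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

x_val = torch.rand(4, 1, 32, 32)                     # stand-in real validation images
y_val = (x_val > 0.5).float()                        # stand-in real validation masks

for step in range(3):
    z = torch.randn(4, 16)
    out = gen(z).view(4, 2, 32, 32)
    x_syn, y_syn = out[:, :1], torch.sigmoid(out[:, 1:])

    # inner step: virtual segmenter update on synthetic pairs, kept differentiable w.r.t. the generator
    inner_loss = F.binary_cross_entropy_with_logits(seg(x_syn), y_syn)
    grads = torch.autograd.grad(inner_loss, list(seg.parameters()), create_graph=True)
    fast_w = [w - 0.1 * g for w, g in zip(seg.parameters(), grads)]

    # outer step: validation loss of the virtually updated segmenter drives the generator
    logits_val = F.conv2d(x_val, fast_w[0], fast_w[1], padding=1)
    outer_loss = F.binary_cross_entropy_with_logits(logits_val, y_val)
    g_opt.zero_grad()
    outer_loss.backward()
    g_opt.step()
    print(step, outer_loss.item())
```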

Deep Learning Applications in Lymphoma Imaging.

Sorin V, Cohen I, Lekach R, Partovi S, Raskin D

pubmed · Jul 14 2025
Lymphomas are a diverse group of disorders characterized by the clonal proliferation of lymphocytes. While definitive diagnosis of lymphoma relies on histopathology, immune-phenotyping and additional molecular analyses, imaging modalities such as PET/CT, CT, and MRI play a central role in the diagnostic process and management, from assessing disease extent, to evaluation of response to therapy and detecting recurrence. Artificial intelligence (AI), particularly deep learning models like convolutional neural networks (CNNs), is transforming lymphoma imaging by enabling automated detection, segmentation, and classification. This review elaborates on recent advancements in deep learning for lymphoma imaging and its integration into clinical practice. Challenges include obtaining high-quality, annotated datasets, addressing biases in training data, and ensuring consistent model performance. Ongoing efforts are focused on enhancing model interpretability, incorporating diverse patient populations to improve generalizability, and ensuring safe and effective integration of AI into clinical workflows, with the goal of improving patient outcomes.

Early breast cancer detection via infrared thermography using a CNN enhanced with particle swarm optimization.

Alzahrani RM, Sikkandar MY, Begum SS, Babetat AFS, Alhashim M, Alduraywish A, Prakash NB, Ng EYK

pubmed · Jul 13 2025
Breast cancer remains the most prevalent cause of cancer-related mortality among women worldwide, with an estimated incidence exceeding 500,000 new cases annually. Timely diagnosis is vital for enhancing therapeutic outcomes and increasing survival probabilities. Although conventional diagnostic tools such as mammography are widely used and generally effective, they are often invasive, costly, and exhibit reduced efficacy in patients with dense breast tissue. Infrared thermography, by contrast, offers a non-invasive and economical alternative; however, its clinical adoption has been limited, largely due to difficulties in accurate thermal image interpretation and the suboptimal tuning of machine learning algorithms. To overcome these limitations, this study proposes an automated classification framework that employs convolutional neural networks (CNNs) for distinguishing between malignant and benign thermographic breast images. An Enhanced Particle Swarm Optimization (EPSO) algorithm is integrated to automatically fine-tune CNN hyperparameters, thereby minimizing manual effort and enhancing computational efficiency. The methodology also incorporates advanced image preprocessing techniques, including Mamdani fuzzy logic-based edge detection, Contrast-Limited Adaptive Histogram Equalization (CLAHE) for contrast enhancement, and median filtering for noise suppression, to bolster classification performance. The proposed model achieves a superior classification accuracy of 98.8%, significantly outperforming conventional CNN implementations in terms of both computational speed and predictive accuracy. These findings suggest that the developed system holds substantial potential for early, reliable, and cost-effective breast cancer screening in real-world clinical environments.
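Two of the preprocessing steps mentioned, CLAHE and median filtering, can be sketched with OpenCV as below; the input file name is hypothetical, and the Mamdani fuzzy edge detection and EPSO-tuned CNN are not shown.

```python
# Minimal sketch (assumed pipeline, not the authors' code): CLAHE + median filtering on a thermogram.
import cv2

img = cv2.imread("thermogram.png", cv2.IMREAD_GRAYSCALE)      # hypothetical grayscale thermogram
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))    # contrast-limited adaptive hist. eq.
enhanced = clahe.apply(img)
denoised = cv2.medianBlur(enhanced, 5)                         # 5x5 median filter for noise suppression
cv2.imwrite("thermogram_preprocessed.png", denoised)
```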

Impact of three-dimensional prostate models during robot-assisted radical prostatectomy on surgical margins and functional outcomes.

Khan N, Prezzi D, Raison N, Shepherd A, Antonelli M, Byrne N, Heath M, Bunton C, Seneci C, Hyde E, Diaz-Pinto A, Macaskill F, Challacombe B, Noel J, Brown C, Jaffer A, Cathcart P, Ciabattini M, Stabile A, Briganti A, Gandaglia G, Montorsi F, Ourselin S, Dasgupta P, Granados A

pubmed · Jul 13 2025
Robot-assisted radical prostatectomy (RARP) is the standard surgical procedure for the treatment of prostate cancer. RARP requires a trade-off between performing a wider resection to reduce the risk of positive surgical margins (PSMs) and performing minimal resection of the nerve bundles that determine functional outcomes, such as continence and potency, which affect patients' quality of life. Achieving favourable outcomes therefore requires a precise understanding of the three-dimensional (3D) anatomy of the prostate, nerve bundles and tumour lesion. This is the protocol for a single-centre feasibility study comprising a prospective two-arm interventional group (a 3D virtual and a 3D printed prostate model), a prospective control group and a retrospective control group. The primary endpoint will be PSM status and the secondary endpoint will be functional outcomes, including incontinence and sexual function. The study will include a total of 270 patients: 54 in each of the interventional groups (3D virtual and 3D printed models), 54 in the retrospective control group and 108 in the prospective control group. Automated segmentation of the prostate gland and lesions will be conducted on multiparametric magnetic resonance imaging (mpMRI) using the 'AutoProstate' and 'AutoLesion' deep learning approaches, while manual annotation of the neurovascular bundles, urethra and external sphincter will be conducted on mpMRI by a radiologist. The resulting masks will be post-processed to generate 3D printed/virtual models. Patients will be allocated to one of the interventional arms, and the surgeon will be given either a 3D printed or a 3D virtual model at the start of the RARP procedure. At the 6-week follow-up, the surgeon will meet with the patient to present PSM status and capture functional outcomes via questionnaires; these measures will be captured as endpoints for analysis. The questionnaires will be re-administered at 3, 6 and 12 months postoperatively.
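As an illustration of the mask-to-model post-processing step (not the study's actual pipeline), a binary segmentation mask can be turned into a printable surface roughly as follows; the file names, voxel spacing and library choices (scikit-image, trimesh) are assumptions.

```python
# Hedged sketch: binary mask -> marching-cubes surface -> STL export for 3D printing.
import numpy as np
import trimesh
from skimage import measure

mask = np.load("prostate_mask.npy")   # hypothetical binary mask, e.g. from automated segmentation
verts, faces, _, _ = measure.marching_cubes(
    mask.astype(float), level=0.5, spacing=(0.5, 0.5, 0.5)  # assumed voxel spacing in mm
)
trimesh.Trimesh(vertices=verts, faces=faces).export("prostate_model.stl")
```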

An improved U-NET3+ with transformer and adaptive attention map for lung segmentation.

Joseph Raj V, Christopher P

pubmed · Jul 13 2025
Accurate segmentation of lung regions from CT scan images is critical for diagnosing and monitoring respiratory diseases. This study introduces a novel hybrid architecture, Adaptive Attention U-Net (U-NetAA), which combines the strengths of U-Net3+ and Transformer-based attention mechanisms for high-precision lung segmentation. The U-Net3+ module effectively segments the lung region by leveraging a deep convolutional network with nested skip connections, ensuring rich multi-scale feature extraction. A key innovation is the adaptive attention mechanism within the Transformer module, which dynamically adjusts the focus on critical regions of the image based on local and global contextual relationships. This adaptive attention mechanism addresses variations in lung morphology, image artifacts, and low-contrast regions, leading to improved segmentation accuracy, while the combined convolutional and attention-based architecture enhances robustness and precision. Experimental results on benchmark CT datasets demonstrate that the proposed model achieves an IoU of 0.984, a Dice coefficient of 0.989, an MIoU of 0.972, and an HD95 of 1.22 mm, surpassing state-of-the-art methods. These results establish U-NetAA as a superior tool for clinical lung segmentation, with enhanced accuracy, sensitivity, and generalization capability.
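The overlap metrics reported above (IoU and Dice) can be computed for binary masks as in the short sketch below; this is not the authors' evaluation code, and HD95 (a surface-distance metric, available e.g. in MedPy or MONAI) is omitted.

```python
# Quick sketch of IoU and Dice for binary segmentation masks (illustrative only).
import numpy as np

def iou_dice(pred: np.ndarray, gt: np.ndarray):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / (pred.sum() + gt.sum()) if (pred.sum() + gt.sum()) else 1.0
    return iou, dice

pred = np.zeros((64, 64), int); pred[16:48, 16:48] = 1   # toy prediction
gt = np.zeros((64, 64), int); gt[20:52, 16:48] = 1       # toy ground truth
print(iou_dice(pred, gt))
```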

Disentanglement and Assessment of Shortcuts in Ophthalmological Retinal Imaging Exams

Leonor Fernandes, Tiago Gonçalves, João Matos, Luis Filipe Nakayama, Jaime S. Cardoso

arxiv preprint · Jul 13 2025
Diabetic retinopathy (DR) is a leading cause of vision loss in working-age adults. While screening reduces the risk of blindness, traditional imaging is often costly and inaccessible. Artificial intelligence (AI) algorithms present a scalable diagnostic solution, but concerns regarding fairness and generalization persist. This work evaluates the fairness and performance of image-trained models in DR prediction, as well as the impact of disentanglement as a bias mitigation technique, using the diverse mBRSET fundus dataset. Three models, ConvNeXt V2, DINOv2, and Swin V2, were trained on macula images to predict DR and sensitive attributes (SAs) (e.g., age and gender/sex). Fairness was assessed between subgroups of SAs, and disentanglement was applied to reduce bias. All models achieved high DR prediction performance (up to 94% AUROC) and could reasonably predict age and gender/sex (91% and 77% AUROC, respectively). Fairness assessment suggests disparities, such as a 10% AUROC gap between age groups in DINOv2. Disentangling SAs from DR prediction had varying results depending on the model selected: disentanglement improved DINOv2 performance (2% AUROC gain) but led to performance drops in ConvNeXt V2 and Swin V2 (7% and 3%, respectively). These findings highlight the complexity of disentangling fine-grained features in fundus imaging and emphasize the importance of fairness in medical imaging AI to ensure equitable and reliable healthcare solutions.
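The subgroup AUROC gap referred to above can be computed along these lines; the labels, scores and sensitive-attribute groups here are placeholders, not mBRSET results.

```python
# Minimal sketch (assumed evaluation, not the authors' code): per-subgroup AUROC and the gap between groups.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)                                 # placeholder DR labels
y_score = np.clip(y_true * 0.3 + rng.random(1000) * 0.7, 0, 1)    # placeholder model scores
group = rng.integers(0, 2, 1000)                                  # placeholder sensitive attribute (e.g. age bin)

aucs = {g: roc_auc_score(y_true[group == g], y_score[group == g]) for g in (0, 1)}
print(aucs, "gap:", abs(aucs[0] - aucs[1]))
```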

AI-Enhanced Pediatric Pneumonia Detection: A CNN-Based Approach Using Data Augmentation and Generative Adversarial Networks (GANs)

Abdul Manaf, Nimra Mughal

arxiv preprint · Jul 13 2025
Pneumonia is a leading cause of mortality in children under five, requiring accurate chest X-ray diagnosis. This study presents a machine learning-based Pediatric Chest Pneumonia Classification System to assist healthcare professionals in diagnosing pneumonia from chest X-ray images. The CNN-based model was trained on 5,863 labeled chest X-ray images from children aged 0-5 years from the Guangzhou Women and Children's Medical Center. To address limited data, we applied augmentation techniques (rotation, zooming, shear, horizontal flipping) and employed GANs to generate synthetic images, addressing class imbalance. The system achieved optimal performance using combined original, augmented, and GAN-generated data, evaluated through accuracy and F1 score metrics. The final model was deployed via a Flask web application, enabling real-time classification with probability estimates. Results demonstrate the potential of deep learning and GANs in improving diagnostic accuracy and efficiency for pediatric pneumonia classification, particularly valuable in resource-limited clinical settings. Code is available at https://github.com/AbdulManaf12/Pediatric-Chest-Pneumonia-Classification
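A minimal sketch of the listed augmentations (rotation, zoom, shear, horizontal flip) with Keras' ImageDataGenerator is shown below; the directory layout and parameter values are assumptions, and the GAN-based synthesis step is not included.

```python
# Assumed augmentation setup (not the authors' code) for chest X-ray training images.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=15,      # random rotation
    zoom_range=0.1,         # random zoom
    shear_range=0.1,        # random shear
    horizontal_flip=True,   # random horizontal flip
)
train_flow = datagen.flow_from_directory(
    "chest_xray/train",     # hypothetical directory: one subfolder per class (NORMAL / PNEUMONIA)
    target_size=(224, 224),
    color_mode="grayscale",
    class_mode="binary",
    batch_size=32,
)
```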