
A generative model uses healthy and diseased image pairs for pixel-level chest X-ray pathology localization.

Dong K, Cheng Y, He K, Suo J

PubMed · Jul 14, 2025
Medical artificial intelligence (AI) offers potential for automatic pathological interpretation, but a practicable AI model demands both pixel-level accuracy and high explainability for diagnosis. The construction of such models relies on substantial training data with fine-grained labelling, which is impractical in real applications. To circumvent this barrier, we propose a prompt-driven constrained generative model that produces anatomically aligned healthy and diseased image pairs, from which a pathology localization model is learned in a supervised manner. This paradigm provides high-fidelity labelled data and addresses the lack of chest X-ray images labelled at fine scales. Benefiting from the emerging text-driven generative model and the incorporated constraint, our model achieves promising localization accuracy for subtle pathologies, high explainability for clinical decisions, and good transferability to many unseen pathological categories, such as new prompts and mixed pathologies. These advantageous features establish our model as a promising solution for assisting chest X-ray analysis. The proposed approach also holds promise for other tasks that lack massive training data and require time-consuming manual labelling.
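
As a loose illustration of this paradigm, the sketch below pairs a stand-in generator (the paper's prompt-driven constrained model is not reproduced here) with a toy localizer: because the healthy and diseased images are anatomically aligned, the pixel-level label falls out of their difference. All module names and shapes are hypothetical.

```python
# Minimal sketch: aligned healthy/diseased pairs yield free pixel labels
# for supervised localization. The generator is a random stand-in.
import torch
import torch.nn as nn

def generate_pair(prompt_embedding):
    """Stand-in for the constrained generator: returns an anatomically
    aligned (healthy, diseased) pair. Here: random tensors plus a lesion."""
    healthy = torch.rand(1, 1, 256, 256)
    lesion = torch.zeros_like(healthy)
    lesion[..., 100:140, 100:140] = 0.5 * torch.rand(1, 1, 40, 40)
    return healthy, healthy + lesion

class Localizer(nn.Module):
    """Tiny convolutional localizer trained on synthetic pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )
    def forward(self, x):
        return self.net(x)

model = Localizer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(10):  # toy training loop
    healthy, diseased = generate_pair(prompt_embedding=None)
    # The pixel-level label comes "for free" from the aligned pair.
    mask = (diseased - healthy > 0.05).float()
    loss = loss_fn(model(diseased), mask)
    opt.zero_grad(); loss.backward(); opt.step()
```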

Predicting the molecular subtypes of 2021 WHO grade 4 glioma by a multiparametric MRI-based machine learning model.

Xu W, Li Y, Zhang J, Zhang Z, Shen P, Wang X, Yang G, Du J, Zhang H, Tan Y

PubMed · Jul 14, 2025
Accurately distinguishing the different molecular subtypes of 2021 World Health Organization (WHO) grade 4 Central Nervous System (CNS) gliomas is highly relevant for prognostic stratification and personalized treatment. We aimed to develop and validate a machine learning (ML) model using multiparametric MRI for the preoperative differentiation of astrocytoma, CNS WHO grade 4, from glioblastoma (GBM), isocitrate dehydrogenase-wild-type (IDH-wt) (WHO 2021) (Task 1: grade 4 vs. GBM); to stratify astrocytoma, CNS WHO grade 4, by distinguishing astrocytoma, IDH-mutant (IDH-mut), CNS WHO grade 4 from astrocytoma, IDH-wild-type (IDH-wt), CNS WHO grade 4 (Task 2: IDH-mut grade 4 vs. IDH-wt grade 4); and to evaluate the model's prognostic value. We retrospectively analyzed 320 glioma patients from three hospitals (training/testing split, 7:3 ratio) and 99 patients from The Cancer Genome Atlas (TCGA) database for external validation. Radiomic features were extracted from tumor and edema regions on contrast-enhanced T1-weighted imaging (CE-T1WI) and T2 fluid-attenuated inversion recovery (T2-FLAIR) images. Extreme gradient boosting (XGBoost) was used to construct the ML, clinical, and combined models. Model performance was evaluated with receiver operating characteristic (ROC) curves, decision curves, and calibration curves. Stability was evaluated using six additional classifiers. Kaplan-Meier (KM) survival analysis and the log-rank test assessed the models' prognostic value. In both tasks, the combined model (AUC = 0.907, 0.852, and 0.830 for Task 1; AUC = 0.899, 0.895, and 0.792 for Task 2) and the optimal ML model (AUC = 0.902, 0.854, and 0.832 for Task 1; AUC = 0.904, 0.899, and 0.783 for Task 2) significantly outperformed the clinical model (AUC = 0.671, 0.656, and 0.543 for Task 1; AUC = 0.619, 0.605, and 0.400 for Task 2) across the training, testing, and external validation sets. Survival analysis showed that the combined model stratified prognosis similarly to the molecular subtype itself in both tasks (p = 0.964 and p = 0.746). The multiparametric MRI ML model effectively distinguished astrocytoma, CNS WHO grade 4 from GBM, IDH-wt (WHO 2021) and differentiated astrocytoma, IDH-mut from astrocytoma, IDH-wt, CNS WHO grade 4. Additionally, the model provided reliable survival stratification for glioma patients across molecular subtypes.
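
The radiomics-plus-XGBoost pipeline described here follows a standard pattern; the sketch below shows its skeleton with simulated features standing in for those extracted from the CE-T1WI/T2-FLAIR tumor and edema masks (which in practice would come from a tool such as pyradiomics). Feature counts and hyperparameters are illustrative.

```python
# Hedged sketch of a radiomics + XGBoost classification pipeline.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(320, 100))    # 320 patients x 100 simulated radiomic features
y = rng.integers(0, 2, size=320)   # Task 1 label: grade-4 astrocytoma vs. GBM

# 7:3 training/testing split, as in the study.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = XGBClassifier(n_estimators=200, max_depth=3,
                    learning_rate=0.05, eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```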

ESE and Transfer Learning for Breast Tumor Classification.

He Y, Batumalay M, Thinakaran R

PubMed · Jul 14, 2025
In this study, we propose a lightweight neural network architecture based on an inverted residual network, an efficient squeeze-excitation (ESE) module, and double transfer learning, called TLese-ResNet, for breast cancer molecular subtype recognition. The inverted ResNet reduces the number of network parameters while enhancing cross-layer gradient propagation and feature expression capability. The ESE module reduces network complexity while retaining channel-relationship information. The dataset for this study comprises preoperative mammography images, in CC and MLO views, of patients diagnosed with invasive breast cancer at a hospital in Jiangxi. Because the dataset is relatively small, double transfer learning is used in addition to the commonly used data augmentation methods. Double transfer learning consists of a first transfer, in which the source domain is ImageNet and the target domain is a COVID-19 chest X-ray image dataset, and a second transfer, in which the source domain is the target domain of the first transfer and the target domain is the mammography dataset we collected. Using five-fold cross-validation, the mean accuracy and area under the receiver operating characteristic curve (AUC) on mammographic images of CC and MLO views were 0.818 and 0.883, respectively, outperforming other state-of-the-art deep learning models such as ResNet-50 and DenseNet-121. The proposed model can therefore provide clinicians with an effective and non-invasive auxiliary tool for molecular subtype identification in breast cancer.
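
The ESE idea, replacing the two reduction FC layers of a standard squeeze-excitation block with a single layer to cut parameters while keeping per-channel attention, can be sketched in a few lines of PyTorch; the inverted residuals and double transfer learning of TLese-ResNet are omitted, and the plain sigmoid gate here is one common choice among several.

```python
# Minimal sketch of an efficient squeeze-excitation (ESE) block.
import torch
import torch.nn as nn

class ESE(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # A single 1x1 conv replaces the usual two-layer reduce/expand pair.
        self.fc = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        w = x.mean(dim=(2, 3), keepdim=True)   # squeeze: global average pool
        w = torch.sigmoid(self.fc(w))          # excite: per-channel gate
        return x * w                           # reweight feature maps

x = torch.randn(2, 64, 56, 56)
print(ESE(64)(x).shape)  # torch.Size([2, 64, 56, 56])
```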

X-ray2CTPA: leveraging diffusion models to enhance pulmonary embolism classification.

Cahan N, Klang E, Aviram G, Barash Y, Konen E, Giryes R, Greenspan H

PubMed · Jul 14, 2025
Chest X-rays, or chest radiography (CXR), commonly used for medical diagnostics, typically provide limited imaging information compared to computed tomography (CT) scans, which offer more detailed and accurate three-dimensional data, particularly contrast-enhanced scans such as CT Pulmonary Angiography (CTPA). However, CT scans entail higher costs and greater radiation exposure, and are less accessible than CXRs. In this work, we explore cross-modal translation from a 2D X-ray input with low contrast resolution to a 3D CTPA scan with high contrast and spatial resolution. Driven by recent advances in generative AI, we introduce a novel diffusion-based approach to this task. We employ the synthesized 3D images in a classification framework and show improved AUC on a Pulmonary Embolism (PE) categorization task using the initial CXR input. Furthermore, we evaluate the model's performance using quantitative metrics, ensuring the diagnostic relevance of the generated images. The proposed method is generalizable and capable of performing additional cross-modality translations in medical imaging, and may pave the way for more accessible and cost-effective advanced diagnostic tools. The code for this project is available at: https://github.com/NoaCahan/X-ray2CTPA
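
The authors' code is linked above; the sketch below is only a schematic of the core conditioning idea, a denoiser that predicts the noise added to a 3D CTPA volume given a paired 2D CXR, with toy modules, crude scalar conditioning, and a single fixed noise level in place of a real diffusion schedule.

```python
# Schematic of CXR-conditioned denoising of a 3D volume (not the authors' code).
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Stand-in for the 3D denoising network, conditioned on a CXR embedding."""
    def __init__(self):
        super().__init__()
        self.cxr_enc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 32))
        self.net = nn.Conv3d(2, 1, kernel_size=3, padding=1)

    def forward(self, noisy_ct, cxr):
        cond = self.cxr_enc(cxr).mean(dim=1)            # (B,) crude scalar condition
        cond = cond.view(-1, 1, 1, 1, 1).expand_as(noisy_ct)
        return self.net(torch.cat([noisy_ct, cond], dim=1))

model = TinyDenoiser()
ct = torch.randn(2, 1, 16, 64, 64)        # target CTPA volume
cxr = torch.randn(2, 1, 64, 64)           # paired 2D X-ray
noise = torch.randn_like(ct)
noisy = 0.7 * ct + 0.3 * noise            # one fixed noise level (real DDPMs use a schedule)
loss = nn.functional.mse_loss(model(noisy, cxr), noise)  # epsilon-prediction loss
```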

Generative AI enables medical image segmentation in ultra low-data regimes.

Zhang L, Jindal B, Alaa A, Weinreb R, Wilson D, Segal E, Zou J, Xie P

PubMed · Jul 14, 2025
Semantic segmentation of medical images is pivotal in applications such as disease diagnosis and treatment planning. While deep learning automates this task effectively, it struggles in ultra low-data regimes owing to the scarcity of annotated segmentation masks. To address this, we propose a generative deep learning framework that produces high-quality image-mask pairs as auxiliary training data. Unlike traditional generative models that separate data generation from model training, ours uses multi-level optimization for end-to-end data generation, allowing segmentation performance to guide the generation process and producing data tailored to improve segmentation outcomes. Our method demonstrates strong generalization across 11 medical image segmentation tasks and 19 datasets, covering various diseases, organs, and modalities. It improves performance by 10-20% (absolute) in both same-domain and out-of-domain settings and requires 8-20 times less training data than existing approaches. This greatly enhances the feasibility and cost-effectiveness of deep learning in data-limited medical imaging scenarios.
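
A loose, runnable approximation of the multi-level idea is sketched below: one differentiable segmenter update on a generated image-mask pair (inner level), after which the updated segmenter's loss on real validation data backpropagates into the generator (outer level). All modules, shapes, and step sizes are toy stand-ins, not the paper's.

```python
# Loose sketch of segmentation-guided data generation via a meta-gradient.
import torch
import torch.nn as nn
import torch.nn.functional as F

gen = nn.Linear(8, 2 * 32 * 32)            # noise -> (image, mask logits)
seg_w = (torch.randn(1, 1, 3, 3) * 0.1).requires_grad_(True)  # functional segmenter weight
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

val_img = torch.rand(4, 1, 32, 32)         # small "real" validation set
val_mask = (val_img > 0.5).float()

for step in range(20):
    out = gen(torch.randn(4, 8)).view(4, 2, 32, 32)
    img, mask = out[:, :1], torch.sigmoid(out[:, 1:])
    # Inner level: one segmenter gradient step on the synthetic pair, kept
    # differentiable (create_graph) so the outer loss can reach `gen`.
    inner = F.binary_cross_entropy_with_logits(
        F.conv2d(img, seg_w, padding=1), mask.detach())
    grad_w, = torch.autograd.grad(inner, seg_w, create_graph=True)
    w_new = seg_w - 1e-2 * grad_w
    # Outer level: validation performance of the updated segmenter steers `gen`.
    outer = F.binary_cross_entropy_with_logits(
        F.conv2d(val_img, w_new, padding=1), val_mask)
    g_opt.zero_grad()
    outer.backward()
    g_opt.step()
    with torch.no_grad():                  # commit the segmenter update
        seg_w -= 1e-2 * grad_w
```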

Deep Learning Applications in Lymphoma Imaging.

Sorin V, Cohen I, Lekach R, Partovi S, Raskin D

PubMed · Jul 14, 2025
Lymphomas are a diverse group of disorders characterized by the clonal proliferation of lymphocytes. While definitive diagnosis of lymphoma relies on histopathology, immunophenotyping, and additional molecular analyses, imaging modalities such as PET/CT, CT, and MRI play a central role in diagnosis and management, from assessing disease extent to evaluating response to therapy and detecting recurrence. Artificial intelligence (AI), particularly deep learning models such as convolutional neural networks (CNNs), is transforming lymphoma imaging by enabling automated detection, segmentation, and classification. This review elaborates on recent advancements in deep learning for lymphoma imaging and its integration into clinical practice. Challenges include obtaining high-quality annotated datasets, addressing biases in training data, and ensuring consistent model performance. Ongoing efforts focus on enhancing model interpretability, incorporating diverse patient populations to improve generalizability, and ensuring the safe and effective integration of AI into clinical workflows, with the goal of improving patient outcomes.

Multimodal Deep Learning Model Based on Ultrasound and Cytological Images Predicts Risk Stratification of cN0 Papillary Thyroid Carcinoma.

He F, Chen S, Liu X, Yang X, Qin X

PubMed · Jul 14, 2025
Accurately assessing the risk stratification of cN0 papillary thyroid carcinoma (PTC) preoperatively aids treatment decisions. We integrated preoperative ultrasound and cytological images to develop and validate a multimodal deep learning (DL) model for non-invasive preoperative assessment of cN0 PTC risk stratification. In this retrospective multicenter study, we developed a comprehensive DL model based on ultrasound and cytological images. The model was trained and validated on 890 PTC patients undergoing thyroidectomy and lymph node dissection across five medical centers; the testing group comprised 107 patients from one medical center. We analyzed the model's performance in terms of the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. The combined DL model demonstrated strong performance, with an AUC of 0.922 (0.866-0.979) in the internal validation group and an AUC of 0.845 (0.794-0.895) in the testing group, surpassing the diagnostic performance of clinical models. Image-region heatmaps assisted in interpreting the risk-stratification diagnosis. The multimodal DL model based on ultrasound and cytological images can accurately determine the risk stratification of cN0 PTC and guide treatment decisions.
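
The abstract does not detail the architecture; the sketch below shows one conventional way to realize such a multimodal model, two convolutional branches whose pooled features are concatenated before a shared classification head. Backbones, feature sizes, and input shapes are illustrative assumptions.

```python
# Hedged sketch of a two-branch ultrasound + cytology fusion network.
import torch
import torch.nn as nn

class Branch(nn.Module):
    """Tiny convolutional feature extractor for one modality."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
    def forward(self, x):
        return self.features(x)

class FusionModel(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.us, self.cyto = Branch(), Branch()
        self.head = nn.Linear(64, n_classes)   # 32 + 32 fused features
    def forward(self, us_img, cyto_img):
        fused = torch.cat([self.us(us_img), self.cyto(cyto_img)], dim=1)
        return self.head(fused)

model = FusionModel()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```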

Deep Learning-Accelerated Prostate MRI: Improving Speed, Accuracy, and Sustainability.

Reschke P, Koch V, Gruenewald LD, Bachir AA, Gotta J, Booz C, Alrahmoun MA, Strecker R, Nickel D, D'Angelo T, Dahm DM, Konrad P, Solim LA, Holzer M, Al-Saleh S, Scholtz JE, Sommer CM, Hammerstingl RM, Eichler K, Vogl TJ, Leistner DM, Haberkorn SM, Mahmoudi S

PubMed · Jul 14, 2025
This study evaluates the effectiveness of a deep learning (DL)-enhanced four-fold parallel acquisition technique (P4) in improving prostate MR image quality while optimizing scan efficiency, compared to the traditional two-fold parallel acquisition technique (P2). Patients undergoing prostate MRI with DL-enhanced acquisitions between January 2024 and July 2024 were analyzed. Participants prospectively received T2-weighted sequences in all imaging planes using both P2 and P4. Three independent readers assessed image quality, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). Significant differences in contrast and gray-level properties between P2 and P4 were identified through radiomics analysis (p < .05). A total of 51 participants (mean age 69.4 ± 10.5 years) underwent P2 and P4 imaging. P4 demonstrated higher CNR and SNR values than P2 (p < .001). P4 was consistently rated superior to P2, demonstrating enhanced image quality and greater diagnostic precision across all evaluated categories (p < .001). Furthermore, radiomics analysis confirmed that P4 significantly altered structural and textural differentiation compared with P2. The P4 protocol reduced T2-weighted scan times by 50.8%, from 11:48 min to 5:48 min (p < .001). In conclusion, P4 imaging enhances diagnostic quality and reduces scan times, improving workflow efficiency and potentially contributing to a more patient-centered and sustainable radiology practice.
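
The SNR/CNR comparisons reported here reduce to simple ROI statistics; the sketch below shows one common convention (mean signal, or absolute signal difference, over the standard deviation of a noise ROI). Definitions vary across sites, and the ROI values are simulated.

```python
# One common convention for apparent SNR/CNR from region-of-interest statistics.
import numpy as np

def apparent_snr(roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """Mean signal of a tissue ROI over the SD of a background/noise ROI."""
    return float(roi.mean() / noise_roi.std())

def apparent_cnr(roi_a: np.ndarray, roi_b: np.ndarray, noise_roi: np.ndarray) -> float:
    """Absolute mean-signal difference between two tissues over the noise SD."""
    return float(abs(roi_a.mean() - roi_b.mean()) / noise_roi.std())

rng = np.random.default_rng(1)
lesion = rng.normal(300, 20, size=(40, 40))      # e.g. prostate lesion ROI
background = rng.normal(180, 20, size=(40, 40))  # surrounding tissue ROI
air = rng.normal(0, 12, size=(40, 40))           # noise ROI outside the body
print(f"SNR={apparent_snr(lesion, air):.1f}  "
      f"CNR={apparent_cnr(lesion, background, air):.1f}")
```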

STF: A Spherical Transformer for Versatile Cortical Surfaces Applications.

Cheng J, Zhao F, Wu Z, Yuan X, Wang L, Gilmore JH, Lin W, Zhang X, Li G

PubMed · Jul 14, 2025
Inspired by the remarkable success of attention mechanisms in various applications, there is a growing need to adapt the Transformer architecture from conventional Euclidean domains to non-Euclidean spaces commonly encountered in medical imaging. Structures such as brain cortical surfaces, represented by triangular meshes, exhibit spherical topology and present unique challenges. To address this, we propose the Spherical Transformer (STF), a versatile backbone that leverages self-attention for analyzing cortical surface data. Our approach involves mapping cortical surfaces onto a sphere, dividing them into overlapping patches, and tokenizing both patches and vertices. By performing self-attention at patch and vertex levels, the model simultaneously captures global dependencies and preserves fine-grained contextual information within each patch. Overlapping regions between neighboring patches naturally enable efficient cross-patch information sharing. To handle longitudinal cortical surface data, we introduce the spatiotemporal self-attention mechanism, which jointly captures spatial context and temporal developmental patterns within a single layer. This innovation enhances the representational power of the model, making it well-suited for dynamic surface data. We evaluate the Spherical Transformer on key tasks, including cognition prediction at the surface level and two vertex-level tasks: cortical surface parcellation and cortical property map prediction. Across these applications, our model consistently outperforms state-of-the-art methods, demonstrating its ability to effectively model global dependencies and preserve detailed spatial information. The results highlight its potential as a general-purpose framework for cortical surface analysis.
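
The two-level attention can be sketched compactly: self-attention among vertices within each patch preserves fine-grained context, while self-attention among pooled patch tokens captures global dependencies. This is not the authors' STF implementation; the spherical patch construction is replaced by a fixed grouping for brevity, and all dimensions are illustrative.

```python
# Minimal sketch of patch-level + vertex-level self-attention.
import torch
import torch.nn as nn

B, P, V, D = 2, 12, 16, 32   # batch, patches, vertices per patch, feature dim
x = torch.randn(B, P, V, D)  # vertex features grouped into (overlapping) patches

vertex_attn = nn.MultiheadAttention(D, num_heads=4, batch_first=True)
patch_attn = nn.MultiheadAttention(D, num_heads=4, batch_first=True)

# Vertex level: attend within each patch to keep fine-grained context.
xv = x.reshape(B * P, V, D)
xv, _ = vertex_attn(xv, xv, xv)
x = xv.reshape(B, P, V, D)

# Patch level: pool each patch to one token, attend globally across patches.
tokens = x.mean(dim=2)                      # (B, P, D)
tokens, _ = patch_attn(tokens, tokens, tokens)
x = x + tokens.unsqueeze(2)                 # broadcast global context back
print(x.shape)  # torch.Size([2, 12, 16, 32])
```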

Deep-learning reconstruction for noise reduction in respiratory-triggered single-shot phase sensitive inversion recovery myocardial delayed enhancement cardiac magnetic resonance.

Tang M, Wang H, Wang S, Wali E, Gutbrod J, Singh A, Landeras L, Janich MA, Mor-Avi V, Patel AR, Patel H

PubMed · Jul 14, 2025
Phase-sensitive inversion recovery late gadolinium enhancement (LGE) improves tissue contrast; however, it is challenging to combine with a free-breathing acquisition. Deep learning (DL) algorithms have growing applications in cardiac magnetic resonance imaging (CMR) for improving image quality. We compared a novel combination of a free-breathing single-shot phase-sensitive LGE sequence with respiratory triggering (FB-PS) and a DL noise-reduction reconstruction algorithm against a conventional segmented phase-sensitive LGE acquired during breath-holding (BH-PS). 61 adult subjects (29 male; age 51 ± 15 years) underwent clinical CMR (1.5 T) with both the FB-PS sequence and the conventional BH-PS sequence. DL noise reduction was incorporated into the image reconstruction pipeline. Qualitative metrics included image quality, artifact severity, and diagnostic confidence; quantitative metrics included septal-blood border sharpness, LGE sharpness, blood-myocardium apparent contrast-to-noise ratio (CNR), LGE-myocardium CNR, LGE apparent signal-to-noise ratio (SNR), and LGE burden. The sequences were compared via paired t-tests. 27 subjects had positive LGE. Average time to acquire a slice was 4-12 s for FB-PS versus ~32-38 s for BH-PS (including breath instructions and breaks between breath holds). FB-PS with medium DL noise reduction had better image quality (FB-PS 3.0 ± 0.7 vs. BH-PS 1.5 ± 0.6, p < 0.0001), less artifact (4.8 ± 0.5 vs. 3.4 ± 1.1, p < 0.0001), and higher diagnostic confidence (4.0 ± 0.6 vs. 2.6 ± 0.8, p < 0.0001). Septum sharpness with FB-PS plus DL reconstruction was not significantly different from BH-PS, and there was no significant difference in LGE sharpness or LGE burden. FB-PS had superior blood-myocardium CNR (17.2 ± 6.9 vs. 16.4 ± 6.0, p = 0.040), LGE-myocardium CNR (12.1 ± 7.2 vs. 10.4 ± 6.6, p = 0.054), and LGE SNR (59.8 ± 26.8 vs. 31.2 ± 24.1, p < 0.001); these metrics further improved with DL noise reduction. The FB-PS sequence shortens scan time by more than 5-fold and reduces motion artifact. Combined with a DL noise-reduction algorithm, FB-PS provides better or similar image quality compared to BH-PS. This is a promising solution for patients who cannot hold their breath.
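
The statistical comparison is a paired t-test per metric, since each subject contributes scores under both sequences. A minimal sketch on simulated per-subject image-quality scores (values illustrative, loosely matching the reported means and SDs):

```python
# Paired t-test comparing the two sequences on simulated per-subject scores.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(7)
n = 61                                        # subjects, as in the study
quality_fbps = rng.normal(3.0, 0.7, size=n)   # FB-PS + DL image-quality scores
quality_bhps = rng.normal(1.5, 0.6, size=n)   # BH-PS image-quality scores

t, p = ttest_rel(quality_fbps, quality_bhps)  # paired: same subjects, two sequences
print(f"t={t:.2f}, p={p:.2g}")
```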