
Self-supervised network predicting neoadjuvant chemoradiotherapy response in locally advanced rectal cancer patients.

Chen Q, Dang J, Wang Y, Li L, Gao H, Li Q, Zhang T, Bai X

PubMed · Jul 1 2025
Radiographic imaging is a non-invasive technique of considerable importance for evaluating tumor treatment response. However, redundancy in CT data and the scarcity of labeled data make it challenging to accurately assess the response of locally advanced rectal cancer (LARC) patients to neoadjuvant chemoradiotherapy (nCRT) using current imaging indicators. In this study, we propose a novel learning framework to automatically predict the response of LARC patients to nCRT. Specifically, we develop a deep learning network called the Expand Intensive Attention Network (EIA-Net), which enhances feature extraction through cascaded 3D convolutions and coordinate attention. Instance-oriented collaborative self-supervised learning (IOC-SSL) is proposed to leverage unlabeled data for training, reducing reliance on labeled data. On a dataset of 1,575 volumes, the proposed method achieves an AUC of 0.8562. The dataset includes two distinct parts: a self-supervised subset containing 1,394 volumes and a supervised subset comprising 195 volumes. Survival analysis reveals that patients predicted by EIA-Net to achieve pathological complete response (pCR) exhibit better overall survival (OS) than non-pCR LARC patients. This retrospective study demonstrates that imaging-based pCR prediction for patients with low rectal cancer can assist clinicians in making informed decisions regarding the need for a Miles operation, thereby improving the likelihood of anal preservation, with an AUC of 0.8222. These results underscore the potential of our method to enhance clinical decision-making, offering a promising tool for personalized treatment and improved patient outcomes in LARC management.
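For readers unfamiliar with coordinate attention, the following is a minimal PyTorch sketch of a 3D coordinate-attention block of the kind EIA-Net reportedly combines with cascaded 3D convolutions; the layer layout, channel handling, and reduction ratio are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of 3D coordinate attention:
# pool along each spatial axis, produce a per-axis gate, rescale features.
import torch
import torch.nn as nn

class CoordAttention3D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        mid = max(channels // reduction, 4)
        self.shared = nn.Sequential(
            nn.Conv3d(channels, mid, kernel_size=1),
            nn.BatchNorm3d(mid),
            nn.ReLU(inplace=True),
        )
        # One 1x1x1 conv per spatial axis to produce attention maps.
        self.attn_d = nn.Conv3d(mid, channels, kernel_size=1)
        self.attn_h = nn.Conv3d(mid, channels, kernel_size=1)
        self.attn_w = nn.Conv3d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, D, H, W)
        d = x.mean(dim=(3, 4), keepdim=True)  # pool over H, W -> (B, C, D, 1, 1)
        h = x.mean(dim=(2, 4), keepdim=True)  # pool over D, W -> (B, C, 1, H, 1)
        w = x.mean(dim=(2, 3), keepdim=True)  # pool over D, H -> (B, C, 1, 1, W)
        a_d = torch.sigmoid(self.attn_d(self.shared(d)))
        a_h = torch.sigmoid(self.attn_h(self.shared(h)))
        a_w = torch.sigmoid(self.attn_w(self.shared(w)))
        return x * a_d * a_h * a_w  # gates broadcast along each axis
```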

Federated Learning in radiomics: A comprehensive meta-survey on medical image analysis.

Raza A, Guzzo A, Ianni M, Lappano R, Zanolini A, Maggiolini M, Fortino G

PubMed · Jul 1 2025
Federated Learning (FL) has emerged as a promising approach for collaborative medical image analysis while preserving data privacy, making it particularly suitable for radiomics tasks. This paper presents a systematic meta-analysis of recent surveys on Federated Learning in Medical Imaging (FL-MI), published in reputable venues over the past five years. We adopt the PRISMA methodology, categorizing and analyzing the existing body of research in FL-MI. Our analysis identifies common trends, challenges, and emerging strategies for implementing FL in medical imaging, including handling data heterogeneity, addressing privacy concerns, and maintaining model performance in non-IID settings. The paper also highlights the most widely used datasets and compares the machine learning models adopted. Moreover, we examine FL frameworks in FL-MI applications such as tumor detection, organ segmentation, and disease classification. We identify several research gaps, including the need for more robust privacy protection. Our findings provide a comprehensive overview of the current state of FL-MI and offer valuable directions for future research and development in this rapidly evolving field.
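As background to the surveyed systems, here is a minimal sketch of the federated averaging (FedAvg) aggregation step underlying most FL pipelines; the weighting-by-sample-count scheme follows the standard FedAvg formulation and is not specific to any surveyed work.

```python
# Minimal FedAvg sketch: the server averages client model weights,
# weighting each client by its local dataset size.
from typing import Dict, List
import torch

def fedavg(client_states: List[Dict[str, torch.Tensor]],
           client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    """Return the size-weighted average of client state dicts."""
    total = sum(client_sizes)
    global_state = {}
    for key in client_states[0]:
        global_state[key] = sum(
            state[key] * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return global_state
```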

Integrated brain connectivity analysis with fMRI, DTI, and sMRI powered by interpretable graph neural networks.

Qu G, Zhou Z, Calhoun VD, Zhang A, Wang YP

PubMed · Jul 1 2025
Multimodal neuroimaging data modeling has become a widely used approach but confronts considerable challenges due to data heterogeneity, which encompasses variability in data types, scales, and formats across modalities. This variability necessitates advanced computational methods to integrate and interpret diverse datasets within a cohesive analytical framework. In our research, we combine functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and structural MRI (sMRI) for joint analysis. This integration capitalizes on the unique strengths of each modality and their inherent interconnections, aiming for a comprehensive understanding of the brain's connectivity and anatomical characteristics. Utilizing the Glasser atlas for parcellation, we integrate imaging-derived features from multiple modalities (functional connectivity from fMRI, structural connectivity from DTI, and anatomical features from sMRI) within consistent regions. Our approach incorporates a masking strategy to differentially weight neural connections, thereby facilitating the fusion of multimodal imaging data. This technique enhances interpretability at the connectivity level, transcending traditional analyses centered on singular regional attributes. The model is applied to the Human Connectome Project's Development study to elucidate the associations between multimodal imaging and cognitive functions throughout youth. The analysis demonstrates improved prediction accuracy and uncovers crucial anatomical features and neural connections, deepening our understanding of brain structure and function. This study advances multimodal neuroimaging analytics by offering a novel method for integrative analysis of diverse imaging modalities, and it improves the understanding of the intricate relationships between the brain's structural and functional networks and cognitive development.
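The masking strategy can be illustrated with a hedged PyTorch sketch: a learnable per-edge mask reweights a connectivity matrix before a graph convolution. The parameterization below (sigmoid-gated logits, a single linear layer) is a generic illustration, not the paper's exact model.

```python
# Sketch of connection-level masking: each region pair gets a learnable
# weight in (0, 1) that scales its edge before message passing.
import torch
import torch.nn as nn

class MaskedGraphConv(nn.Module):
    def __init__(self, n_regions: int, in_dim: int, out_dim: int):
        super().__init__()
        self.mask_logits = nn.Parameter(torch.zeros(n_regions, n_regions))
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (n_regions, in_dim) node features (e.g., sMRI-derived);
        # adj: (n_regions, n_regions) connectivity (e.g., fMRI or DTI).
        masked_adj = adj * torch.sigmoid(self.mask_logits)
        return torch.relu(masked_adj @ self.linear(x))
```

Inspecting the learned mask after training is what gives interpretability at the connectivity level rather than per-region attribution.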

A vision transformer-convolutional neural network framework for decision-transparent dual-energy X-ray absorptiometry recommendations using chest low-dose CT.

Kuo DP, Chen YC, Cheng SJ, Hsieh KL, Li YT, Kuo PC, Chang YC, Chen CY

PubMed · Jul 1 2025
This study introduces an ensemble framework that integrates Vision Transformer (ViT) and convolutional neural network (CNN) models to leverage their complementary strengths, generating visualized, decision-transparent recommendations for dual-energy X-ray absorptiometry (DXA) scans from chest low-dose computed tomography (LDCT). The framework was developed using data from 321 individuals and validated with an independent test cohort of 186 individuals. It addresses two classification tasks: (1) distinguishing normal from abnormal bone mineral density (BMD) and (2) differentiating osteoporosis from non-osteoporosis. Three field-of-view (FOV) settings were analyzed to assess their impact on model performance: fitFOV (entire vertebra), halfFOV (vertebral body only), and largeFOV (fitFOV + 20%). Model predictions were weighted and combined to enhance classification accuracy, and visualizations were generated to improve decision transparency. DXA scans were recommended for individuals classified as having abnormal BMD or osteoporosis. The ensemble framework significantly outperformed the individual models in both classification tasks (McNemar test, p < 0.001). In the development cohort, it achieved 91.6% accuracy for task 1 with largeFOV (area under the receiver operating characteristic curve [AUROC]: 0.97) and 86.0% accuracy for task 2 with fitFOV (AUROC: 0.94). In the test cohort, it demonstrated 86.6% accuracy for task 1 (AUROC: 0.93) and 76.9% accuracy for task 2 (AUROC: 0.99). DXA recommendation accuracy was 91.6% and 87.1% in the development and test cohorts, respectively, with notably high accuracy for osteoporosis detection (98.7% and 100%). This combined ViT-CNN framework effectively assesses bone status from LDCT images, particularly with the fitFOV and largeFOV settings. By visualizing classification confidence and vertebral abnormalities, the proposed framework enhances decision transparency and supports clinicians in making informed DXA recommendations following opportunistic osteoporosis screening.
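The weighted fusion of ViT and CNN predictions can be sketched as follows; the equal default weight and probability-level averaging are assumptions, since the paper's tuned weights are not reproduced here.

```python
# Hedged sketch of weighted prediction fusion for a two-branch ensemble.
import numpy as np

def ensemble_predict(p_vit: np.ndarray, p_cnn: np.ndarray,
                     w_vit: float = 0.5) -> np.ndarray:
    """Weighted average of class probabilities from the two branches.

    p_vit, p_cnn: (n_samples, n_classes) softmax probabilities.
    w_vit: branch weight in [0, 1]; 0.5 is an illustrative default.
    """
    p = w_vit * p_vit + (1.0 - w_vit) * p_cnn
    return p.argmax(axis=-1)  # predicted class per sample
```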

Evaluating a large language model's accuracy in chest X-ray interpretation for acute thoracic conditions.

Ostrovsky AM

PubMed · Jul 1 2025
The rapid advancement of artificial intelligence (AI) holds great potential for healthcare. Chest X-rays are essential for diagnosing acute thoracic conditions in the emergency department (ED), but interpretation delays due to radiologist availability can impact clinical decision-making. AI models, including deep learning algorithms, have been explored for diagnostic support, but the potential of large language models (LLMs) in emergency radiology remains largely unexamined. This study assessed ChatGPT's feasibility in interpreting chest X-rays for acute thoracic conditions commonly encountered in the ED. A subset of 1400 images from the NIH Chest X-ray dataset was analyzed, representing seven pathology categories: Atelectasis, Effusion, Emphysema, Pneumothorax, Pneumonia, Mass, and No Finding. ChatGPT 4.0, using the "X-Ray Interpreter" add-on, was evaluated for its diagnostic performance across these categories. ChatGPT performed well at identifying normal chest X-rays, with a sensitivity of 98.9%, specificity of 93.9%, and accuracy of 94.7%. However, performance varied across pathologies. The best results were observed in diagnosing pneumonia (sensitivity 76.2%, specificity 93.7%) and pneumothorax (sensitivity 77.4%, specificity 89.1%), while performance for atelectasis and emphysema was lower. ChatGPT demonstrates potential as a supplementary tool for differentiating normal from abnormal chest X-rays, with promising results for certain pathologies such as pneumonia. However, its diagnostic accuracy for more subtle conditions requires improvement. Further research integrating ChatGPT with specialized image recognition models could enhance its performance, offering new possibilities in medical imaging and education.
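For reference, the per-pathology metrics quoted above follow the standard confusion-matrix definitions, sketched here in plain Python; the inputs are illustrative, not study data.

```python
# Sensitivity, specificity, and accuracy from binary labels
# (1 = pathology present, 0 = absent).
def binary_metrics(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy
```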

A systematic review of generative AI approaches for medical image enhancement: Comparing GANs, transformers, and diffusion models.

Oulmalme C, Nakouri H, Jaafar F

PubMed · Jul 1 2025
Medical imaging is a vital diagnostic tool that provides detailed insights into human anatomy but faces challenges affecting its accuracy and efficiency. Advanced generative AI models offer promising solutions, and unlike previous reviews with a narrow focus, a comprehensive evaluation across techniques and modalities is necessary. This systematic review covers the three leading state-of-the-art approaches, GANs, Diffusion Models, and Transformers, examining their applicability, methodologies, and clinical implications for improving medical image quality. Using the PRISMA framework, 63 of 989 candidate studies identified via Google Scholar and PubMed were selected, focusing on GANs, Transformers, and Diffusion Models; articles from ACM, IEEE Xplore, and Springer were also analyzed. Generative AI techniques show promise in improving image resolution, reducing noise, and enhancing fidelity: GANs generate high-quality images, Transformers exploit global context, and Diffusion Models excel at denoising and reconstruction. Challenges include high computational costs, limited dataset diversity, and issues with generalizability, with the literature favoring quantitative metrics over clinical applicability. This review highlights the transformative impact of GANs, Transformers, and Diffusion Models in advancing medical imaging. Future research must address computational and generalization challenges, emphasize open science, and validate these techniques in diverse clinical settings to unlock their full potential. These efforts could enhance diagnostic accuracy, lower costs, and improve patient outcomes.
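As a concrete example of one of the three model families, here is a toy sketch of the DDPM forward (noising) step and the corresponding clean-image estimate given a predicted noise; a trained noise-prediction network is assumed but not shown.

```python
# Toy DDPM illustration: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps.
# A real denoiser would be a trained network predicting eps from x_t.
import torch

def forward_diffuse(x0: torch.Tensor, alpha_bar_t: float):
    """Add Gaussian noise at cumulative level alpha_bar_t."""
    noise = torch.randn_like(x0)
    xt = (alpha_bar_t ** 0.5) * x0 + ((1 - alpha_bar_t) ** 0.5) * noise
    return xt, noise

def estimate_clean(xt: torch.Tensor, predicted_noise: torch.Tensor,
                   alpha_bar_t: float) -> torch.Tensor:
    """Invert the forward process using the (assumed) predicted noise."""
    return (xt - ((1 - alpha_bar_t) ** 0.5) * predicted_noise) / (alpha_bar_t ** 0.5)
```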

Evaluation of radiology residents' reporting skills using large language models: an observational study.

Atsukawa N, Tatekawa H, Oura T, Matsushita S, Horiuchi D, Takita H, Mitsuyama Y, Omori A, Shimono T, Miki Y, Ueda D

PubMed · Jul 1 2025
Large language models (LLMs) have the potential to objectively evaluate radiology resident reports; however, research on their use for feedback in radiology training and for assessing resident skill development remains limited. This study aimed to assess the effectiveness of LLMs in revising radiology reports by comparing them with reports verified by board-certified radiologists, and to analyze the progression of residents' reporting skills over time. To identify the LLM that best aligned with human radiologists, 100 reports were randomly selected from 7376 reports authored by nine first-year radiology residents. The reports were evaluated on six criteria: (1) addition of missing positive findings, (2) deletion of findings, (3) addition of negative findings, (4) correction of the expression of findings, (5) correction of the diagnosis, and (6) proposal of additional examinations or treatments. Reports were segmented into four time-based terms, and 900 reports (450 CT and 450 MRI) were randomly chosen from the initial and final terms of the residents' first year. The revision rates for each criterion were compared between the first and last terms using the Wilcoxon signed-rank test. Among the three LLMs evaluated (ChatGPT-4 Omni [GPT-4o], Claude-3.5 Sonnet, and Claude-3 Opus), GPT-4o demonstrated the highest agreement with board-certified radiologists. Using GPT-4o, significant improvements were noted in criteria 1-3 between the first and last terms (P < 0.001, P = 0.023, and P = 0.004, respectively). No significant changes were observed for criteria 4-6, although all criteria except criterion 6 showed progressive improvement over time. LLMs can effectively provide feedback on commonly corrected areas in radiology reports, enabling residents to objectively identify and address their weaknesses and monitor their progress. Additionally, LLMs may help reduce the workload of the radiologists who mentor residents.
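The term-over-term comparison can be sketched with SciPy's paired Wilcoxon signed-rank test; the nine per-resident revision rates below are placeholders, not study data.

```python
# Paired Wilcoxon signed-rank test on per-resident revision rates for one
# criterion, first term vs. last term. Values are illustrative only.
from scipy.stats import wilcoxon

first_term_rates = [0.42, 0.38, 0.51, 0.45, 0.40, 0.47, 0.36, 0.44, 0.49]
last_term_rates  = [0.30, 0.29, 0.40, 0.33, 0.35, 0.31, 0.28, 0.37, 0.39]

stat, p_value = wilcoxon(first_term_rates, last_term_rates)
print(f"Wilcoxon statistic = {stat}, p = {p_value:.3f}")
```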

The Chest X-Ray: The Ship has Sailed, But Has It?

Iacovino JR

PubMed · Jul 1 2025
In the past, the chest X-ray (CXR) was a traditional age and amount requirement used to assess potential mortality risk in life insurance applicants. It fell out of favor due to inconvenience to the applicant, cost, and lack of protective value. With the advent of deep learning techniques, can the results of the CXR, as a requirement, now add additional value to underwriting risk analysis?

Dynamic glucose enhanced imaging using direct water saturation.

Knutsson L, Yadav NN, Mohammed Ali S, Kamson DO, Demetriou E, Seidemo A, Blair L, Lin DD, Laterra J, van Zijl PCM

PubMed · Jul 1 2025
Dynamic glucose enhanced (DGE) MRI studies employ chemical exchange saturation transfer (CEST) or spin lock (CESL) to study glucose uptake. Currently, these methods are hampered by low effect size and sensitivity to motion. To overcome this, we propose to utilize exchange-based linewidth (LW) broadening of the direct water saturation (DS) curve of the water saturation spectrum (Z-spectrum) during and after glucose infusion (DS-DGE MRI). To estimate the glucose-infusion-induced LW changes (ΔLW), Bloch-McConnell simulations were performed for normoglycemia and hyperglycemia in blood, gray matter (GM), white matter (WM), CSF, and malignant tumor tissue. Whole-brain DS-DGE imaging was implemented at 3 T using dynamic Z-spectral acquisitions (1.2 s per offset frequency, 38 s per spectrum) and assessed on four brain tumor patients using infusion of 35 g of D-glucose. To assess ΔLW, a deep learning-based Lorentzian fitting approach was used on voxel-based DS spectra acquired before, during, and post-infusion. Area-under-the-curve (AUC) images, obtained from the dynamic ΔLW time curves, were compared qualitatively to perfusion-weighted imaging parametric maps. In simulations, ΔLW was 1.3%, 0.30%, 0.29/0.34%, 7.5%, and 13% in arterial blood, venous blood, GM/WM, malignant tumor tissue, and CSF, respectively. In vivo, ΔLW was approximately 1% in GM/WM, 5% to 20% for different tumor types, and 40% in CSF. The resulting DS-DGE AUC maps clearly outlined lesion areas. DS-DGE MRI is highly promising for assessing D-glucose uptake. Initial results in brain tumor patients show high-quality AUC maps of glucose-induced line broadening and DGE-based lesion enhancement similar and/or complementary to perfusion-weighted imaging.
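A minimal sketch of the linewidth readout, using a conventional least-squares Lorentzian fit in place of the paper's deep-learning fitter; the single-pool lineshape, offset grid, and starting values are simplifications.

```python
# Fit a single Lorentzian to a synthetic direct-water-saturation Z-spectrum
# and read out its linewidth (FWHM). Real data would replace `z`.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(offset_ppm, amplitude, center, linewidth):
    """Single-pool Lorentzian lineshape; linewidth is FWHM in ppm."""
    half = linewidth / 2
    return amplitude * half**2 / ((offset_ppm - center) ** 2 + half**2)

offsets = np.linspace(-2, 2, 32)             # saturation offsets in ppm (illustrative grid)
z = 1 - lorentzian(offsets, 0.9, 0.0, 1.0)   # synthetic Z-spectrum

params, _ = curve_fit(lambda x, a, c, lw: 1 - lorentzian(x, a, c, lw),
                      offsets, z, p0=[0.8, 0.0, 1.2])
print(f"fitted linewidth: {params[2]:.2f} ppm")
# Delta-LW = linewidth(during/post-infusion) - linewidth(baseline);
# integrating its time curve yields the AUC maps described above.
```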

CXR-LLaVA: a multimodal large language model for interpreting chest X-ray images.

Lee S, Youn J, Kim H, Kim M, Yoon SH

PubMed · Jul 1 2025
This study aimed to develop an open-source multimodal large language model (CXR-LLaVA) for interpreting chest X-ray images (CXRs), leveraging recent advances in large language models (LLMs) to potentially replicate the image interpretation skills of human radiologists. For training, we collected 592,580 publicly available CXRs, of which 374,881 had labels for certain radiographic abnormalities (Dataset 1) and 217,699 provided free-text radiology reports (Dataset 2). After pre-training a vision transformer with Dataset 1, we integrated it with an LLM influenced by the LLaVA network. The model was then fine-tuned, primarily using Dataset 2. The model's diagnostic performance for major pathological findings was evaluated, along with the acceptability of its radiologic reports to human radiologists, to gauge its potential for autonomous reporting. The model demonstrated impressive performance in the test sets, achieving an average F1 score of 0.81 for six major pathological findings in the MIMIC internal test set and 0.56 in the external test set. The model's F1 scores surpassed those of GPT-4-vision and Gemini-Pro-Vision in both test sets. In human radiologist evaluations of the external test set, the model achieved a 72.7% success rate in autonomous reporting, slightly below the 84.0% rate of ground-truth reports. This study highlights the significant potential of multimodal LLMs for CXR interpretation, while also acknowledging their performance limitations. Despite these challenges, we believe that making our model open source will catalyze further research, expanding its effectiveness and applicability in various clinical contexts.
Question: How can a multimodal large language model be adapted to interpret chest X-rays and generate radiologic reports?
Findings: The developed CXR-LLaVA model effectively detects major pathological findings in chest X-rays and generates radiologic reports with higher accuracy than general-purpose models.
Clinical relevance: This study demonstrates the potential of multimodal large language models to support radiologists by autonomously generating chest X-ray reports, potentially reducing diagnostic workloads and improving radiologist efficiency.
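The quoted average F1 is the mean of per-finding binary F1 scores, sketched here with scikit-learn; the finding list and label arrays are illustrative assumptions, not the paper's evaluation set.

```python
# Macro-style evaluation: compute binary F1 per finding, then average.
import numpy as np
from sklearn.metrics import f1_score

findings = ["atelectasis", "cardiomegaly", "effusion",
            "pneumonia", "pneumothorax", "edema"]  # assumed label set
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(100, len(findings)))  # placeholder labels
y_pred = rng.integers(0, 2, size=(100, len(findings)))  # placeholder outputs

per_finding_f1 = [f1_score(y_true[:, i], y_pred[:, i])
                  for i in range(len(findings))]
print(f"average F1 over findings: {np.mean(per_finding_f1):.2f}")
```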