
Imagining Alternatives: Towards High-Resolution 3D Counterfactual Medical Image Generation via Language Guidance

Mohamed Mohamed, Brennan Nichyporuk, Douglas L. Arnold, Tal Arbel

arXiv preprint · Sep 7 2025
Vision-language models have demonstrated impressive capabilities in generating 2D images under various conditions; however, this performance is largely enabled by extensive, readily available pretrained foundation models. Critically, comparable pretrained foundation models do not exist for 3D, significantly limiting progress in this domain. As a result, the potential of vision-language models to produce high-resolution 3D counterfactual medical images conditioned solely on natural language descriptions remains completely unexplored. Addressing this gap would enable powerful clinical and research applications, such as personalized counterfactual explanations, simulation of disease progression scenarios, and enhanced medical training by visualizing hypothetical medical conditions in realistic detail. Our work takes a meaningful step toward addressing this challenge by introducing a framework capable of generating high-resolution 3D counterfactual medical images of synthesized patients guided by free-form language prompts. We adapt state-of-the-art 3D diffusion models with enhancements from Simple Diffusion and incorporate augmented conditioning to improve text alignment and image quality. To our knowledge, this is the first demonstration of a language-guided native-3D diffusion model applied specifically to neurological imaging, where faithful modeling of the brain's three-dimensional structure is essential. Through results on two distinct neurological MRI datasets, our framework successfully simulates varying counterfactual lesion loads in Multiple Sclerosis (MS) and cognitive states in Alzheimer's disease, generating high-quality images while preserving subject fidelity. Our results lay the groundwork for prompt-driven disease progression analysis within 3D medical imaging.
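The augmented text conditioning described above is typically paired with classifier-free guidance at sampling time. As a hedged illustration (the abstract does not spell out its guidance scheme, so the mechanism and names below are assumptions), the guided noise prediction extrapolates between the unconditional and text-conditioned predictions:

```python
# Minimal sketch of classifier-free guidance (assumed, not confirmed by the
# abstract): given the diffusion model's noise predictions with and without
# the text prompt, the guided prediction extrapolates by a guidance scale w.

def cfg_combine(eps_uncond, eps_cond, w):
    """eps_guided = eps_uncond + w * (eps_cond - eps_uncond), elementwise."""
    return [eu + w * (ec - eu) for eu, ec in zip(eps_uncond, eps_cond)]

# w = 1 recovers the conditional prediction; w > 1 sharpens text alignment
# at some cost in sample diversity.
```

A value of w between roughly 3 and 8 is a common starting point in text-conditioned diffusion sampling.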

Physics-Guided Diffusion Transformer with Spherical Harmonic Posterior Sampling for High-Fidelity Angular Super-Resolution in Diffusion MRI

Mu Nan, Taohui Xiao, Ruoyou Wu, Shoujun Yu, Ye Li, Hairong Zheng, Shanshan Wang

arXiv preprint · Sep 7 2025
Diffusion MRI (dMRI) angular super-resolution (ASR) aims to reconstruct high-angular-resolution (HAR) signals from limited low-angular-resolution (LAR) data without prolonging scan time. However, existing methods are limited in recovering fine-grained angular details or preserving high fidelity due to inadequate modeling of q-space geometry and insufficient incorporation of physical constraints. In this paper, we introduce a Physics-Guided Diffusion Transformer (PGDiT) designed to explore physical priors throughout both training and inference stages. During training, a Q-space Geometry-Aware Module (QGAM) with b-vector modulation and random angular masking facilitates direction-aware representation learning, enabling the network to generate directionally consistent reconstructions with fine angular details from sparse and noisy data. In inference, a two-stage Spherical Harmonics-Guided Posterior Sampling (SHPS) enforces alignment with the acquired data, followed by heat-diffusion-based SH regularization to ensure physically plausible reconstructions. This coarse-to-fine refinement strategy mitigates oversmoothing and artifacts commonly observed in purely data-driven or generative models. Extensive experiments on general ASR tasks and two downstream applications, Diffusion Tensor Imaging (DTI) and Neurite Orientation Dispersion and Density Imaging (NODDI), demonstrate that PGDiT outperforms existing deep learning models in detail recovery and data fidelity. Our approach presents a novel generative ASR framework that offers high-fidelity HAR dMRI reconstructions, with potential applications in neuroscience and clinical research.
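The first stage of the posterior sampling above enforces alignment with the acquired data. Under a deliberately simplified linear view (the actual SHPS operates in a spherical-harmonics domain; the row-selection operator below is an illustrative assumption), each guidance step nudges the estimate toward agreement with the measured directions:

```python
# Hedged sketch of a data-consistency step: the acquired low-angular-
# resolution signal y is modeled as a subsampling A of the full high-angular-
# resolution signal x (a simplification of the paper's SH-domain scheme).

def data_consistency_step(x, y, sample_idx, eta):
    """x <- x + eta * A^T (y - A x), with A a row-selection operator."""
    x = list(x)
    for yi, idx in zip(y, sample_idx):
        residual = yi - x[idx]      # (y - A x) on the acquired directions
        x[idx] += eta * residual    # A^T scatters the residual back
    return x
```

With eta = 1 the acquired entries are matched exactly in one step; smaller eta blends consistency with the generative prior across iterations.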

Interpreting BI-RADS-Free Breast MRI Reports Using a Large Language Model: Automated BI-RADS Classification From Narrative Reports Using ChatGPT.

Tekcan Sanli DE, Sanli AN, Ozmen G, Ozmen A, Cihan I, Kurt A, Esmerer E

PubMed · Sep 6 2025
This study aimed to evaluate the performance of ChatGPT (GPT-4o) in interpreting free-text breast magnetic resonance imaging (MRI) reports by assigning BI-RADS categories and recommending appropriate clinical management steps in the absence of explicitly stated BI-RADS classifications. In this retrospective, single-center study, 352 full-text breast MRI reports documenting at least one identifiable breast lesion with descriptive imaging findings between January 2024 and June 2025 were included. Incomplete reports due to technical limitations, reports describing only normal findings, and MRI examinations performed at external institutions were excluded. The first aim was to assess ChatGPT's ability to infer the correct BI-RADS category (2, 3, 4a, 4b, 4c, or 5) based solely on the narrative imaging findings. The second was to evaluate the model's ability to distinguish between benign and suspicious/malignant imaging features in terms of clinical decision-making. Accordingly, BI-RADS 2-3 categories were grouped as "benign" and BI-RADS 4-5 as "suspicious/malignant," in alignment with how BI-RADS categories guide patient management rather than representing definitive diagnostic outcomes. Reports originally containing the term "BI-RADS" were manually de-identified by removing BI-RADS categories and clinical recommendations. Each narrative report was then processed through ChatGPT using two standardized prompts: (1) What is the most appropriate BI-RADS category based on the findings in the report? (2) What should be the next clinical step (e.g., follow-up, biopsy)? Responses were evaluated in real time by two experienced breast radiologists, and their consensus served as the reference standard. ChatGPT demonstrated moderate agreement with radiologists' consensus for BI-RADS classification (Cohen's kappa (κ): 0.510, p<0.001).
Classification accuracy was highest for BI-RADS 5 reports (77.9%), whereas lower agreement was observed in intermediate categories such as BI-RADS 3 (52.4% correct) and 4B (29.4% correct). In the binary classification of reports as benign or malignant, ChatGPT achieved almost perfect agreement (κ: 0.843), correctly identifying 91.7% of benign and 93.2% of malignant reports. Notably, the model's management recommendations were 100% consistent with its assigned BI-RADS categories, advising biopsy for all BI-RADS 4-5 cases and short-interval follow-up or conditional biopsy for BI-RADS 3 reports. ChatGPT accurately interprets unstructured breast MRI reports, particularly in benign/malignant discrimination and corresponding clinical recommendations. This technology holds potential as a decision support tool to standardize reporting and enhance clinical workflows, especially in settings with variable reporting practices. Prospective, multi-institutional studies are needed for further validation.
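Cohen's kappa, the agreement statistic reported above, corrects observed agreement between two raters for the agreement expected by chance. A minimal illustration (toy labels, not the study's data):

```python
# Cohen's kappa from two parallel label sequences (e.g., model vs.
# radiologist consensus): kappa = (p_observed - p_expected) / (1 - p_expected).
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # chance agreement from each rater's marginal label frequencies
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

Kappa of 1 means perfect agreement, 0 means chance-level agreement; 0.510 falls in the conventional "moderate" band and 0.843 in "almost perfect."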

Brain Tumor Detection Through Diverse CNN Architectures in IoT Healthcare Industries: Fast R-CNN, U-Net, Transfer Learning-Based CNN, and Fully Connected CNN

Mohsen Asghari Ilani, Yaser M. Banad

arXiv preprint · Sep 6 2025
Artificial intelligence (AI)-powered deep learning has advanced brain tumor diagnosis in Internet of Things (IoT)-healthcare systems, achieving high accuracy with large datasets. Brain health is critical to human life, and accurate diagnosis is essential for effective treatment. Magnetic Resonance Imaging (MRI) provides key data for brain tumor detection, serving as a major source of big data for AI-driven image classification. In this study, we classified glioma, meningioma, and pituitary tumors from MRI images using Region-based Convolutional Neural Network (R-CNN) and U-Net architectures. We also applied Convolutional Neural Networks (CNN) and CNN-based transfer learning models such as Inception-V3, EfficientNetB4, and VGG19. Model performance was assessed using F-score, recall, precision, and accuracy. The Fast R-CNN achieved the best results with 99% accuracy, 98.5% F-score, 99.5% Area Under the Curve (AUC), 99.4% recall, and 98.5% precision. Combining R-CNN, U-Net, and transfer learning enables earlier diagnosis and more effective treatment in IoT-healthcare systems, improving patient outcomes. IoT devices such as wearable monitors and smart imaging systems continuously collect real-time data, which AI algorithms analyze to provide immediate insights for timely interventions and personalized care. For external cross-dataset validation, EfficientNetB2 achieved the strongest performance among fine-tuned EfficientNet models, with 92.11% precision, 92.11% recall/sensitivity, 95.96% specificity, 92.02% F1-score, and 92.23% accuracy. These findings underscore the robustness and reliability of AI models in handling diverse datasets, reinforcing their potential to enhance brain tumor classification and patient care in IoT healthcare environments.
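The metrics reported above (precision, recall/sensitivity, specificity, F1, accuracy) all derive from the four confusion-matrix counts. A compact sketch with illustrative numbers, not the paper's:

```python
# Standard classification metrics from confusion-matrix counts:
# tp = true positives, fp = false positives, fn = false negatives,
# tn = true negatives.

def classification_metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                      # also called sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, specificity, f1, accuracy
```

For multi-class problems such as glioma/meningioma/pituitary, these are computed per class and then averaged (macro or weighted).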

A novel multimodal framework combining habitat radiomics, deep learning, and conventional radiomics for predicting MGMT gene promoter methylation in Glioma: Superior performance of integrated models.

Zhu FY, Chen WJ, Chen HY, Ren SY, Zhuo LY, Wang TD, Ren CC, Yin XP, Wang JN

PubMed · Sep 6 2025
The present study aimed to develop a noninvasive predictive framework that integrates clinical data, conventional radiomics, habitat imaging, and deep learning for the preoperative stratification of MGMT gene promoter methylation in glioma. This retrospective study included 410 patients from the University of California, San Francisco, USA, and 102 patients from our hospital. Seven models were constructed using preoperative contrast-enhanced T1-weighted MRI with gadobenate dimeglumine as the contrast agent. Habitat radiomics features were extracted from tumor subregions defined by k-means clustering, while deep learning features were acquired using a 3D convolutional neural network. Model performance was evaluated based on area under the curve (AUC), F1-score, and decision curve analysis. The combined model integrating clinical data, conventional radiomics, habitat imaging features, and deep learning achieved the highest performance (training AUC = 0.979 [95% CI: 0.969-0.990], F1-score = 0.944; testing AUC = 0.777 [0.651-0.904], F1-score = 0.711). Among the single-modality models, habitat radiomics outperformed the others (training AUC = 0.960 [0.954-0.983]; testing AUC = 0.724 [0.573-0.875]). The proposed multimodal framework considerably enhances preoperative prediction of MGMT gene promoter methylation, with habitat radiomics highlighting the critical role of tumor heterogeneity. This approach provides a scalable tool for personalized management of glioma.
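The habitat subregions above come from k-means clustering of voxel-level features. A minimal 1-D sketch (clustering on intensity alone is an assumption for illustration; the study's feature set is richer):

```python
# Minimal 1-D k-means: alternate between assigning each voxel value to its
# nearest center and recomputing each center as its cluster mean. Each final
# cluster plays the role of one "habitat" subregion.

def kmeans_1d(values, centers, iters=20):
    for _ in range(iters):
        groups = {i: [] for i in range(len(centers))}
        for v in values:
            nearest = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[nearest].append(v)
        # empty clusters keep their previous center
        centers = [sum(g) / len(g) if g else c
                   for c, g in zip(centers, groups.values())]
    return centers
```

In practice the clustering runs on multi-dimensional voxel features, and radiomics features are then extracted separately from each resulting habitat.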

Interpretable machine learning model for characterizing magnetic susceptibility-based biomarkers in first episode psychosis.

Franco P, Montalba C, Caulier-Cisterna R, Milovic C, González A, Ramirez-Mahaluf JP, Undurraga J, Salas R, Crossley N, Tejos C, Uribe S

PubMed · Sep 6 2025
Several studies have shown changes in neurochemicals within the deep-brain nuclei of patients with psychosis. These alterations indicate a dysfunction in dopamine within subcortical regions affected by fluctuations in iron concentrations. Quantitative Susceptibility Mapping (QSM) is a method employed to measure iron concentration, offering a potential means to identify dopamine dysfunction in these subcortical areas. This study employed a random forest algorithm to predict First-Episode Psychosis (FEP) and the response to antipsychotics from magnetic susceptibility features, interpreted with SHapley Additive exPlanations (SHAP) values. 3D multi-echo Gradient Echo (GRE) and T1-weighted GRE images were obtained in 61 healthy volunteers (HV) and 76 FEP patients (32% Treatment-Resistant Schizophrenia (TRS) and 68% Treatment-Responsive Schizophrenia (RS)) using a 3T Philips Ingenia MRI scanner. QSM and R2* maps were reconstructed and averaged in twenty-two segmented regions of interest. We used Sequential Forward Selection as the feature selection algorithm and a Random Forest as the model to predict FEP patients and their response to antipsychotics. We further applied the SHAP framework to identify informative features and their interpretations. Finally, multiple correlation patterns among magnetic susceptibility parameters were extracted using hierarchical clustering. Our approach classifies HV vs. FEP patients with 76.48 ± 10.73% accuracy (using four features) and TRS vs. RS patients with 76.43 ± 12.57% accuracy (using four features), under 10-fold stratified cross-validation. The SHAP analyses highlighted nonlinear relationships among the four selected features. Hierarchical clustering revealed two groups of correlated features for each study. Early prediction of treatment response enables tailored strategies for FEP patients with treatment resistance, ensuring timely and effective interventions.
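Sequential Forward Selection, used above to pick the four features, greedily adds whichever feature most improves a score until no candidate helps. A sketch assuming `score_fn` (a hypothetical callback) wraps cross-validated model accuracy:

```python
# Greedy Sequential Forward Selection sketch. score_fn(subset) -> float is an
# assumed callback, e.g. mean cross-validated accuracy of a random forest
# trained on that feature subset.

def sequential_forward_selection(features, score_fn, max_features=4):
    selected, best_score = [], float("-inf")
    while len(selected) < max_features:
        candidates = [f for f in features if f not in selected]
        if not candidates:
            break
        # score every one-feature extension of the current subset
        scored = [(score_fn(selected + [f]), f) for f in candidates]
        score, best_f = max(scored)
        if score <= best_score:
            break                      # no candidate improves the score
        selected.append(best_f)
        best_score = score
    return selected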

Prostate MR image segmentation using a multi-stage network approach.

Jacobson LEO, Bader-El-Den M, Maurya L, Hopgood AA, Tamma V, Masum SK, Prendergast DJ, Osborn P

PubMed · Sep 5 2025
Prostate cancer (PCa) remains one of the most prevalent cancers among men, with over 1.4 million new cases and 375,304 deaths reported globally in 2020. Current diagnostic approaches, such as prostate-specific antigen (PSA) testing and trans-rectal ultrasound (TRUS)-guided biopsies, are often limited by low specificity and accuracy. This study addresses these limitations by leveraging deep learning-based image segmentation on a dataset comprising 61,119 T2-weighted MR images from 1151 patients to enhance PCa detection and characterisation. A multi-stage segmentation approach, including one-stage, sequential two-stage, and end-to-end two-stage methods, was evaluated using various deep learning architectures. The MultiResUNet model, integrated into a multi-stage segmentation framework, demonstrated significant improvements in delineating prostate boundaries. The end-to-end approach, leveraging shared feature representations, consistently outperformed the other methods, underscoring its effectiveness in enhancing diagnostic accuracy. These findings highlight the potential of advanced deep learning architectures in streamlining prostate cancer detection and treatment planning. Future work will focus on further optimisation of the models and assessing their generalisability to diverse medical imaging contexts.
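Segmentation quality in studies like this is conventionally scored with the Dice similarity coefficient (the abstract does not name its metric, so that choice is an assumption here). On flattened binary masks:

```python
# Dice similarity coefficient between two binary masks (flattened to 1-D):
# dice = 2 * |P ∩ T| / (|P| + |T|), ranging from 0 (no overlap) to 1 (perfect).

def dice(pred, target):
    intersection = sum(p and t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2 * intersection / total if total else 1.0  # both empty -> perfect
```

For a two-stage pipeline, Dice is typically reported separately for the whole-gland stage and any subsequent zonal or lesion stage.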

A Replicable and Generalizable Neuroimaging-Based Indicator of Pain Sensitivity Across Individuals.

Zhang LB, Lu XJ, Zhang HJ, Wei ZX, Kong YZ, Tu YH, Iannetti GD, Hu L

PubMed · Sep 5 2025
Revealing the neural underpinnings of pain sensitivity is crucial for understanding how the brain encodes individual differences in pain and advancing personalized pain treatments. Here, six large and diverse functional magnetic resonance imaging (fMRI) datasets (total N = 1046) are leveraged to uncover the neural mechanisms of pain sensitivity. Replicable and generalizable correlations are found between nociceptive-evoked fMRI responses and pain sensitivity for laser heat, contact heat, and mechanical pain. These fMRI responses correlate more strongly with pain sensitivity than with tactile, auditory, and visual sensitivity. Moreover, a machine learning model is developed that accurately predicts not only pain sensitivity (r = 0.20-0.56, ps < 0.05) but also the analgesic effects of different treatments in healthy individuals (r = 0.17-0.25, ps < 0.05). Notably, these findings are influenced considerably by sample size, requiring >200 participants for univariate whole-brain correlation analysis and >150 for multivariate machine learning modeling. Altogether, this study demonstrates that fMRI activations encode pain sensitivity across various types of pain, facilitating interpretation of subjective pain reports and promoting more mechanistically informed investigations into pain physiology.
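The r values reported above are Pearson correlations between brain responses and pain-sensitivity scores; the statistic itself is simple to compute:

```python
# Pearson correlation coefficient between two equal-length samples:
# r = cov(x, y) / (std(x) * std(y)), ranging from -1 to 1.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

The study's sample-size caveat reflects how noisy r estimates are at small n: the sampling error of a correlation shrinks only as roughly 1/sqrt(n).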

Reperfusion injury in STEMI: a double-edged sword.

Thomas KS, Puthooran DM, Edpuganti S, Reddem AL, Jose A, Akula SSM

PubMed · Sep 5 2025
ST-elevation myocardial infarction (STEMI) is a major cardiac event that requires rapid reperfusion therapy. Paradoxically, the same reperfusion that minimizes infarct size and mortality may exacerbate further cardiac damage, a condition known as reperfusion injury. Oxidative stress, calcium excess, mitochondrial malfunction, and programmed cell death mechanisms worsen myocardial dysfunction. Even with the best revascularization techniques, reperfusion damage still jeopardizes long-term prognosis and myocardial healing. A thorough narrative review was carried out using well-known scientific databases, including ScienceDirect, PubMed, and Google Scholar. Peer-reviewed publications from 2015 to 2025 were highlighted, with an emphasis on pathophysiological causes, clinical manifestations, novel biomarkers, imaging modalities, artificial intelligence applications, and emerging treatments for reperfusion injury. The review focuses on the molecular processes that underlie cardiac reperfusion injury, such as reactive oxygen species, calcium dysregulation, opening of the mitochondrial permeability transition pore, and several types of programmed cell death. Clinical syndromes such as myocardial stunning, coronary no-reflow, and intramyocardial hemorrhage, all of which lead to negative consequences like heart failure and left ventricular dysfunction, are examined in depth. Cardiac magnetic resonance imaging, coronary angiography, and biomarkers such as N-terminal proBNP and soluble ST2 aid in risk stratification and prognosis. Pharmacological treatments are examined alongside mechanical techniques such as ischemic postconditioning and remote ischemic conditioning. Despite promising research findings, most therapies have not yet proven consistently effective in large clinical studies.
Potential future avenues include sex-specific risk factors, mitochondria-targeted medicines, tailored therapies, and the use of artificial intelligence for risk assessment and early diagnosis. Reperfusion damage continues to be a significant obstacle to optimal recovery after STEMI, even with improvements in revascularization. The management of STEMI still relies heavily on early reperfusion, but adjuvant medicines that specifically target reperfusion injury are urgently needed. Molecular-targeted approaches, AI-driven risk assessment, and advances in precision medicine have the potential to reduce cardiac damage and enhance long-term outcomes for patients with STEMI.

Preoperative Assessment of Extraprostatic Extension in Prostate Cancer Using an Interpretable Tabular Prior-Data Fitted Network-Based Radiomics Model From MRI.

Liu BC, Ding XH, Xu HH, Bai X, Zhang XJ, Cui MQ, Guo AT, Mu XT, Xie LZ, Kang HH, Zhou SP, Zhao J, Wang BJ, Wang HY

PubMed · Sep 5 2025
MRI assessment for extraprostatic extension (EPE) of prostate cancer (PCa) is challenging due to limited accuracy and interobserver agreement. This study aimed to develop an interpretable Tabular Prior-data Fitted Network (TabPFN)-based radiomics model to evaluate EPE using MRI and to explore its integration with radiologists' assessments. Retrospective. Five hundred and thirteen consecutive patients who underwent radical prostatectomy. Four hundred and eleven patients from center 1 (mean age 67 ± 7 years) formed training (287 patients) and internal test (124 patients) sets, and 102 patients from center 2 (mean age 66 ± 6 years) were assigned as an external test set. 3 T; fast spin-echo T2-weighted imaging (T2WI) and diffusion-weighted imaging using single-shot echo planar imaging. Radiomics features were extracted from T2WI and apparent diffusion coefficient maps, and the TabRadiomics model was developed using TabPFN. Three machine learning models served as baseline comparisons: support vector machine, random forest, and categorical boosting. Two radiologists (with >1500 and >500 prostate MRI interpretations, respectively) independently evaluated EPE grade on MRI. Artificial intelligence (AI)-modified EPE grading algorithms incorporating the TabRadiomics model with radiologists' interpretations of curvilinear contact length and frank EPE were simulated. Receiver operating characteristic curve (AUC) analysis, the DeLong test, and the McNemar test were used; p < 0.05 was considered significant. The TabRadiomics model performed comparably to the machine learning models in both internal and external tests, with AUCs of 0.806 (95% CI, 0.727-0.884) and 0.842 (95% CI, 0.770-0.912), respectively. AI-modified algorithms showed significantly higher accuracy than the less experienced reader in internal testing, with up to 34.7% of interpretations requiring no radiologist input. However, no difference was observed for either reader in the external test set.
The TabRadiomics model demonstrated high performance in EPE assessment and may improve clinical assessment in PCa. Evidence level: 4. Technical efficacy: Stage 2.
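The AUCs reported for the TabRadiomics model can be read as the probability that a randomly chosen positive case (EPE present) is scored above a randomly chosen negative case. A sketch via the Mann-Whitney formulation (illustrative scores, not study data):

```python
# ROC AUC as the Mann-Whitney U statistic: count the fraction of
# positive/negative score pairs where the positive wins (ties count half).

def roc_auc(scores_pos, scores_neg):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

This pairwise view is also what the DeLong test builds on when comparing two correlated AUCs from the same patients.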
