Page 305 of 6626611 results

Jin W, Zhang H, Ning Y, Chen X, Zhang G, Li H, Zhang H

PubMed · Aug 4, 2025
We developed an MRI-based habitat radiomics model (HRM) to predict the p53-abnormal (p53abn) molecular subtype of endometrial cancer (EC). Patients with pathologically confirmed EC were retrospectively enrolled from three hospitals and categorized into a training cohort (n = 270), test cohort 1 (n = 70), and test cohort 2 (n = 154). The tumour was divided into habitat sub-regions by applying the K-means algorithm to diffusion-weighted imaging (DWI) and contrast-enhanced (CE) images. Radiomics features were extracted from T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), DWI, and CE images. Three machine learning classifiers (logistic regression, support vector machines, and random forests) were applied to develop predictive models for p53abn EC. Model performance was validated using receiver operating characteristic (ROC) curves, and the model with the best predictive performance was selected as the HRM. A whole-region radiomics model (WRM) was also constructed, and a clinical model (CM) was developed from five clinical features. The SHapley Additive exPlanations (SHAP) method was used to explain the model outputs, and DeLong's test was used to compare performance across the cohorts. A total of 1920 habitat radiomics features were considered; eight features were selected for the HRM, ten for the WRM, and three clinical features for the CM. The HRM achieved the highest AUCs: 0.855 (training), 0.769 (test 1), and 0.766 (test 2). The AUCs of the WRM were 0.707 (training), 0.703 (test 1), and 0.738 (test 2); those of the CM were 0.709 (training), 0.641 (test 1), and 0.665 (test 2). The MRI-based HRM successfully predicted p53abn EC. These results indicate that habitat analysis combined with radiomics, machine learning, and SHAP can effectively predict p53abn EC while giving clinicians intuitive, interpretable insight into the impact of each risk factor in the model.
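The habitat step described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' code: the tiny farthest-point-initialised K-means stands in for a library implementation, and per-channel standardisation before clustering is an assumption, not something the abstract specifies.

```python
import numpy as np

def kmeans_labels(X, k, iters=20):
    """Minimal K-means with farthest-point initialisation (a stand-in
    for a library implementation such as scikit-learn's KMeans)."""
    centers = [X[0]]
    for _ in range(1, k):
        d = ((X[:, None, :] - np.asarray(centers)[None]) ** 2).sum(-1).min(1)
        centers.append(X[int(np.argmax(d))])
    centers = np.asarray(centers, dtype=float)
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def habitat_subregions(dwi, ce, mask, n_habitats=3):
    """Cluster tumour voxels into habitat sub-regions from their DWI and
    CE intensities. Returns a label map: 0 = background, 1..n = habitat."""
    voxels = np.column_stack([dwi[mask], ce[mask]]).astype(float)
    # Standardise each channel so neither modality dominates the distance.
    voxels = (voxels - voxels.mean(0)) / (voxels.std(0) + 1e-8)
    label_map = np.zeros(mask.shape, dtype=int)
    label_map[mask] = kmeans_labels(voxels, n_habitats) + 1
    return label_map
```

Radiomics features would then be extracted per habitat label rather than over the whole tumour mask.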

Sussman MS, Cui L, Tan SBM, Prasla S, Wah-Kahn T, Nickel D, Jhaveri KS

PubMed · Aug 4, 2025
In pelvic MRI, Turbo Spin Echo (TSE) pulse sequences are used for T2-weighted imaging. However, their lengthy acquisition times increase the potential for artifacts. Deep learning (DL) reconstruction achieves reduced scan times without the image-quality degradation associated with other accelerated techniques. Unfortunately, a comprehensive assessment of DL reconstruction in pelvic MRI has not been performed. The objective of this prospective study was to compare the performance of DL-TSE and conventional TSE pulse sequences across a broad spectrum of pelvic MRI indications. Fifty-five subjects (33 females and 22 males) were scanned at 3 T using DL-TSE and conventional TSE sequences in axial and/or oblique acquisition planes. Two radiologists independently assessed image quality in six categories: edge definition, vessel margin sharpness, T2 contrast dynamic range, artifacts, overall image quality, and lesion features. The contrast ratio was calculated for quantitative assessment, and a two-tailed sign test was used for statistical comparison. The two readers found DL-TSE to deliver image quality equal or superior to conventional TSE in most cases; in only 3 of 24 instances was conventional TSE scored as providing better image quality. Readers agreed on DL-TSE superiority/inferiority/equivalence in 67% of categories in the axial plane and 75% in the oblique plane. DL-TSE also demonstrated a better contrast ratio in 75% of cases and reduced scan time by approximately 50%. In summary, DL-accelerated TSE sequences generally provide equal or better image quality in pelvic MRI than standard TSE with significantly reduced acquisition times.
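The quantitative side of this comparison rests on two simple computations. The sketch below uses one common (Michelson-style) definition of contrast ratio, since the abstract does not give the exact formula, together with the exact two-tailed sign test:

```python
import numpy as np
from math import comb

def contrast_ratio(roi, background):
    """Michelson-style contrast between mean ROI and background signal.
    One common definition; the study's exact formula is not given here."""
    a, b = np.mean(roi), np.mean(background)
    return (a - b) / (a + b)

def sign_test_p(wins, losses):
    """Exact two-tailed sign test. Ties are discarded beforehand; under
    H0 each remaining comparison favours either sequence with p = 0.5."""
    n = wins + losses
    k = min(wins, losses)
    one_tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * one_tail)
```

For example, if one sequence wins all five non-tied comparisons, the two-tailed p-value is 2 x (1/32) = 0.0625, just short of the conventional 0.05 threshold.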

Kong SH

PubMed · Aug 4, 2025
Artificial intelligence (AI) is increasingly being explored as a complementary tool to traditional fracture risk assessment methods. Conventional approaches, such as bone mineral density measurement and established clinical risk calculators, provide population-level stratification but often fail to capture the structural nuances of bone fragility. Recent advances in AI, particularly deep learning techniques applied to imaging, enable opportunistic screening and individualized risk estimation using routinely acquired radiographs and computed tomography (CT) data. These models demonstrate improved discrimination for osteoporotic fracture detection and risk prediction, supporting applications such as time-to-event modeling and short-term prognosis. CT- and radiograph-based models have shown superiority over conventional metrics in diverse cohorts, while innovations such as multitask learning and survival plots contribute to enhanced interpretability and patient-centered communication. Nevertheless, challenges related to model generalizability, data bias, and automation bias persist. Successful clinical integration will require rigorous external validation, transparent reporting, and seamless embedding into electronic medical systems. This review summarizes recent advances in AI-driven fracture assessment, critically evaluates their clinical promise, and outlines a roadmap for translation into real-world practice.

Yin Lin, Riccardo Barbieri, Domenico Aquino, Giuseppe Lauria, Marina Grisoli, Elena De Momi, Alberto Redaelli, Simona Ferrante

arXiv preprint · Aug 4, 2025
Glioblastoma is one of the most aggressive and common brain tumors, with a median survival of 10-15 months. Predicting overall survival (OS) is critical for personalizing treatment strategies and aligning clinical decisions with patient outcomes. In this study, we propose a novel artificial intelligence (AI) approach for OS prediction from magnetic resonance imaging (MRI), exploiting Vision Transformers (ViTs) to extract hidden features directly from MRI images and eliminating the need for tumor segmentation. Unlike traditional approaches, our method simplifies the workflow and reduces computational resource requirements. The proposed model was evaluated on the BraTS dataset, reaching an accuracy of 62.5% on the test set, comparable to the top-performing methods, and it demonstrated balanced performance across precision, recall, and F1 score, outperforming the best model on those metrics. The dataset size limits the generalization of the ViT, which typically requires larger datasets than convolutional neural networks; this limitation is observed across all the cited studies. This work highlights the applicability of ViTs to downsampled medical imaging tasks and establishes a foundation for OS prediction models that are computationally efficient and do not rely on segmentation.
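The segmentation-free input stage this abstract describes can be illustrated with the standard ViT patch embedding: every patch of the slice becomes a token, so no tumour mask is required. The projection weights below are random placeholders; in the paper's setting they would be learned.

```python
import numpy as np

def patch_embed(slice_2d, patch=16, dim=64, seed=0):
    """Split a 2D MRI slice into non-overlapping patches and linearly
    project each patch to a `dim`-dimensional token (the ViT input
    stage). No segmentation is needed: every patch becomes a token."""
    rng = np.random.default_rng(seed)
    h, w = slice_2d.shape
    ph, pw = h // patch, w // patch
    crop = slice_2d[:ph * patch, :pw * patch].astype(float)
    patches = (crop.reshape(ph, patch, pw, patch)
                   .transpose(0, 2, 1, 3)
                   .reshape(ph * pw, patch * patch))
    proj = rng.normal(scale=patch ** -1.0, size=(patch * patch, dim))
    return patches @ proj  # (num_tokens, dim), fed to the transformer
```

A 224 x 224 slice with 16-pixel patches yields 196 tokens, each a 64-dimensional vector.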

Huigen CMC, Coukos A, Latifyan S, Nicod Lalonde M, Schaefer N, Abler D, Depeursinge A, Prior JO, Fraga M, Jreige M

PubMed · Aug 4, 2025
In the last decade, immunotherapy, particularly immune checkpoint inhibitors, has revolutionized cancer treatment and improved prognosis. However, severe checkpoint inhibitor-induced liver injury (CHILI), which can lead to treatment discontinuation or death, occurs in up to 18% of patients. The aim of this study was to evaluate the value of PET/CT radiomics analysis for the detection of CHILI. Patients with CHILI grade 2 or higher who underwent liver function tests and liver biopsy were retrospectively included; minors, patients with cognitive impairments, and patients with viral infections were excluded. The liver and spleen were contoured on anonymized PET/CT imaging data, followed by radiomics feature extraction. Principal component analysis (PCA) and Bonferroni correction were used for statistical analysis and exploration of radiomics features related to CHILI. Sixteen patients were included and 110 radiomics features were extracted from PET images. Liver PCA-5 and one associated feature were initially significant but did not remain so after Bonferroni correction. Spleen PCA-5 differed significantly between CHILI and non-CHILI patients even after Bonferroni correction, possibly reflecting the spleen's higher metabolic activity in autoimmune conditions due to the recruitment of immune cells. This pilot study identified statistically significant differences in PET-derived radiomics features of the spleen and observable changes in the liver on PET/CT scans before and after the onset of CHILI. Identifying these features could aid in diagnosing or predicting CHILI, potentially enabling personalized treatment. Larger multicenter prospective studies are needed to confirm these findings and to develop automated detection methods.
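The PCA-plus-Bonferroni analysis above can be sketched as follows. This is an illustrative pipeline only: the Mann-Whitney U test is an assumed choice (the abstract does not name the group-comparison test), and the SVD-based PCA is a minimal stand-in for a library implementation.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def pca_scores(features, n_components=5):
    """Project standardised radiomics features onto their leading
    principal components via SVD (a minimal PCA)."""
    X = (features - features.mean(0)) / (features.std(0) + 1e-8)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T

def bonferroni_flags(scores, groups, alpha=0.05):
    """Per-component two-group test (Mann-Whitney U, an illustrative
    choice), Bonferroni-corrected across the tested components."""
    m = scores.shape[1]
    return [mannwhitneyu(scores[groups == 1, j],
                         scores[groups == 0, j]).pvalue < alpha / m
            for j in range(m)]
```

With only 16 patients and 110 features, this kind of dimensionality reduction before testing is what keeps the multiple-comparison burden manageable.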

Qifan Chen, Jin Cui, Cindy Duan, Yushuo Han, Yifei Shi

arXiv preprint · Aug 4, 2025
Accurate estimation of postmenstrual age (PMA) at scan is crucial for assessing neonatal development and health. While deep learning models have achieved high accuracy in predicting PMA from brain MRI, they often function as black boxes, offering limited transparency and interpretability in clinical decision support. In this work, we address the dual challenge of accuracy and interpretability by adapting a multimodal large language model (MLLM) to perform both precise PMA prediction and clinically relevant explanation generation. We introduce a parameter-efficient fine-tuning (PEFT) strategy using instruction tuning and Low-Rank Adaptation (LoRA) applied to the Qwen2.5-VL-7B model. The model is trained on four 2D cortical surface projection maps derived from neonatal MRI scans. By employing distinct prompts for training and inference, our approach enables the MLLM to handle a regression task during training and generate clinically relevant explanations during inference. The fine-tuned model achieves a low prediction error with a 95 percent confidence interval of 0.78 to 1.52 weeks, while producing interpretable outputs grounded in developmental features, marking a significant step toward transparent and trustworthy AI systems in perinatal neuroscience.
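The LoRA component of the PEFT strategy above reduces trainable parameters by learning only a low-rank update to each frozen weight. The sketch below is a generic NumPy illustration of that idea, not the Qwen2.5-VL implementation:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """LoRA forward pass: the frozen weight W (d_out x d_in) is augmented
    by a low-rank update (alpha / r) * B @ A, where only A (r x d_in) and
    B (d_out x r) are trained. B is initialised to zero, so training
    starts exactly at the frozen model's behaviour."""
    r = A.shape[0]
    return x @ (W + (alpha / r) * (B @ A)).T
```

For a layer with d_out x d_in = 4096 x 4096 and r = 8, the trainable parameters drop from ~16.8M to ~65K, which is what makes fine-tuning a 7B-parameter model tractable.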

Zhenyu Yang, Qian Chen, Rihui Zhang, Manju Liu, Fengqiu Guo, Minjie Yang, Min Tang, Lina Zhou, Chunhao Wang, Minbin Chen, Fang-Fang Yin

arXiv preprint · Aug 4, 2025
Purpose: Radiation pneumonitis (RP) is a serious complication of intensity-modulated radiation therapy (IMRT) for breast cancer patients, underscoring the need for precise and explainable predictive models. This study presents an Explainable Dual-Omics Filtering (EDOF) model that integrates spatially localized dosiomic and radiomic features for voxel-level RP prediction. Methods: A retrospective cohort of 72 breast cancer patients treated with IMRT was analyzed, including 28 who developed RP. The EDOF model consists of two components: (1) dosiomic filtering, which extracts local dose intensity and spatial distribution features from planning dose maps, and (2) radiomic filtering, which captures texture-based features from pre-treatment CT scans. These features are jointly analyzed using the Explainable Boosting Machine (EBM), a transparent machine learning model that enables feature-specific risk evaluation. Model performance was assessed using five-fold cross-validation, reporting area under the curve (AUC), sensitivity, and specificity. Feature importance was quantified by mean absolute scores, and Partial Dependence Plots (PDPs) were used to visualize nonlinear relationships between RP risk and dual-omic features. Results: The EDOF model achieved strong predictive performance (AUC = 0.95 ± 0.01; sensitivity = 0.81 ± 0.05). The most influential features included dosiomic Intensity Mean, dosiomic Intensity Mean Absolute Deviation, and radiomic SRLGLE (short-run low gray-level emphasis). PDPs revealed that RP risk increases beyond 5 Gy and rises sharply between 10 and 30 Gy, consistent with clinical dose thresholds. SRLGLE also captured structural heterogeneity linked to RP in specific lung regions. Conclusion: The EDOF framework enables spatially resolved, explainable RP prediction and may support personalized radiation planning to mitigate pulmonary toxicity.
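The two most influential dosiomic features named in the results are simple first-order statistics of the planning dose. A minimal sketch (feature names follow the abstract; the region definition and extraction settings are assumptions):

```python
import numpy as np

def dosiomic_intensity_features(dose_map, region_mask):
    """Intensity Mean and Intensity Mean Absolute Deviation of the
    planning dose (Gy) over a local lung region, two of the dosiomic
    filtering features reported as most influential."""
    d = dose_map[region_mask].astype(float)
    mean = d.mean()
    mad = np.abs(d - mean).mean()
    return {"intensity_mean": mean, "intensity_mad": mad}
```

Computed over a sliding local window rather than the whole lung, statistics like these are what give the EDOF model its voxel-level spatial resolution.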

Yingshu Li, Yunyi Liu, Zhanyu Wang, Xinyu Liang, Lingqiao Liu, Lei Wang, Luping Zhou

arXiv preprint · Aug 4, 2025
Radiology report generation (RRG) for diagnostic images, such as chest X-rays, plays a pivotal role in both clinical practice and AI. Traditional free-text reports suffer from redundancy and inconsistent language, complicating the extraction of critical clinical details. Structured radiology report generation (S-RRG) offers a promising solution by organizing information into standardized, concise formats. However, existing approaches often rely on classification or visual question answering (VQA) pipelines that require predefined label sets and produce only fragmented outputs. Template-based approaches, which generate reports by replacing keywords within fixed sentence patterns, further compromise expressiveness and often omit clinically important details. In this work, we present a novel approach to S-RRG that includes dataset construction, model training, and a new evaluation framework. We first create a robust chest X-ray dataset (MIMIC-STRUC) that includes disease names, severity levels, probabilities, and anatomical locations, ensuring that the dataset is both clinically relevant and well structured. We then train an LLM-based model to generate standardized, high-quality reports. To assess the generated reports, we propose a specialized evaluation metric (S-Score) that measures not only disease prediction accuracy but also the precision of disease-specific details, yielding a clinically meaningful measure of report quality that focuses on elements critical to clinical decision-making and aligns more closely with human assessments. Our approach highlights the effectiveness of structured reports and the importance of a tailored evaluation metric for S-RRG, providing a more clinically relevant measure of report quality.
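The S-Score formula itself is not given in the abstract. The toy scorer below only illustrates the general idea of combining disease-level accuracy with detail-level precision over structured fields; the field names, equal weighting, and aggregation are all invented for illustration:

```python
def structured_report_score(predicted, reference):
    """Toy structure-aware score: mean of disease recall and, over the
    correctly predicted diseases, the fraction of matching severity and
    location fields. Not the paper's S-Score formula."""
    ref_by_disease = {entry["disease"]: entry for entry in reference}
    hits = [p for p in predicted if p["disease"] in ref_by_disease]
    recall = len(hits) / len(reference) if reference else 0.0
    if not hits:
        return 0.5 * recall
    detail = sum(
        (p.get("severity") == ref_by_disease[p["disease"]].get("severity")) +
        (p.get("location") == ref_by_disease[p["disease"]].get("location"))
        for p in hits
    ) / (2 * len(hits))
    return 0.5 * recall + 0.5 * detail
```

Unlike n-gram metrics such as BLEU, a scorer of this shape directly penalises a report that names the right disease but the wrong severity or location.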

Paul Zaha, Lars Böcking, Simeon Allmendinger, Leopold Müller, Niklas Kühl

arXiv preprint · Aug 4, 2025
Medical image segmentation is crucial for disease diagnosis and treatment planning, yet developing robust segmentation models often requires substantial computational resources and large datasets. Existing research shows that pre-trained and fine-tuned foundation models can boost segmentation performance. However, questions remain about how particular image preprocessing steps influence segmentation performance across different medical imaging modalities. In particular, edges, abrupt transitions in pixel intensity, are widely acknowledged as vital cues for object boundaries but have not been systematically examined in the pre-training of foundation models. We address this gap by investigating to what extent pre-training on data processed with computationally efficient edge kernels, such as the Kirsch operator, can improve the cross-modality segmentation capabilities of a foundation model. Two versions of a foundation model are first trained on either raw or edge-enhanced data across multiple medical imaging modalities, then fine-tuned on selected raw subsets tailored to specific medical modalities. After systematic investigation across the medical domains Dermoscopy, Fundus, Mammography, Microscopy, OCT, US, and X-ray, we observe both increased and reduced segmentation performance across modalities with edge-focused pre-training, indicating the need for selective application of this approach. To guide such selective application, we propose a meta-learning strategy: it uses the standard deviation and image entropy of the raw image to choose between a model pre-trained on edge-enhanced data and one pre-trained on raw data for optimal performance. Our experiments show that integrating this meta-learning layer improves overall segmentation performance across diverse medical imaging tasks by 16.42% compared to models pre-trained only on edge-enhanced data and 19.30% compared to models pre-trained only on raw data.
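The edge-enhancement and routing steps above can be sketched as follows. The Kirsch operator is the standard eight-direction compass filter; the routing thresholds in `choose_pretraining` are illustrative placeholders, not the paper's learned decision rule.

```python
import numpy as np
from scipy.ndimage import convolve

# North kernel of the Kirsch compass operator; the other seven
# directions are 45-degree rotations of its border.
KIRSCH_NORTH = np.array([[ 5,  5,  5],
                         [-3,  0, -3],
                         [-3, -3, -3]])

def _rotate45(k):
    """Cyclically shift the 8 border cells of a 3x3 kernel by one step."""
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [k[i] for i in idx]
    out = k.copy()
    for pos, v in zip(idx, vals[-1:] + vals[:-1]):
        out[pos] = v
    return out

def kirsch_edges(img):
    """Kirsch edge response: maximum over the eight compass kernels."""
    k, responses = KIRSCH_NORTH, []
    for _ in range(8):
        responses.append(convolve(img.astype(float), k, mode="nearest"))
        k = _rotate45(k)
    return np.max(responses, axis=0)

def choose_pretraining(img, std_thresh=0.1, entropy_thresh=3.0):
    """Toy version of the proposed meta rule: route an image to the
    edge-pretrained or raw-pretrained model from its intensity standard
    deviation and histogram entropy. Thresholds here are illustrative."""
    hist, _ = np.histogram(img, bins=32)
    p = hist[hist > 0] / hist.sum()
    entropy = -(p * np.log2(p)).sum()
    return "edge" if img.std() > std_thresh and entropy > entropy_thresh else "raw"
```

Each Kirsch kernel sums to zero, so flat regions produce no response; only intensity transitions survive, which is exactly the signal the edge-focused pre-training emphasises.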

Nys Tjade Siegel, James H. Cole, Mohamad Habes, Stefan Haufe, Kerstin Ritter, Marc-André Schulz

arXiv preprint · Aug 4, 2025
Trustworthy interpretation of deep learning models is critical for neuroimaging applications, yet commonly used Explainable AI (XAI) methods lack rigorous validation, risking misinterpretation. We performed the first large-scale, systematic comparison of XAI methods on ~45,000 structural brain MRIs using a novel XAI validation framework. This framework establishes verifiable ground truth by constructing prediction tasks with known signal sources - from localized anatomical features to subject-specific clinical lesions - without artificially altering input images. Our analysis reveals systematic failures in two of the most widely used methods: GradCAM consistently failed to localize predictive features, while Layer-wise Relevance Propagation generated extensive, artifactual explanations that suggest incompatibility with neuroimaging data characteristics. Our results indicate that these failures stem from a domain mismatch, where methods with design principles tailored to natural images require substantial adaptation for neuroimaging data. In contrast, the simpler, gradient-based method SmoothGrad, which makes fewer assumptions about data structure, proved consistently accurate, suggesting its conceptual simplicity makes it more robust to this domain shift. These findings highlight the need for domain-specific adaptation and validation of XAI methods, suggest that interpretations from prior neuroimaging studies using standard XAI methodology warrant re-evaluation, and provide urgent guidance for practical application of XAI in neuroimaging.
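SmoothGrad, the method this study found most robust, is conceptually tiny: average the input gradient over noisy copies of the input. The sketch below uses a numeric central-difference gradient on a toy scalar function in place of autodiff through a network:

```python
import numpy as np

def numeric_grad(f, x, eps=1e-5):
    """Central-difference gradient of a scalar function f at x
    (a stand-in for backpropagation through a trained model)."""
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e.flat[i] = eps
        g.flat[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def smoothgrad(f, x, noise_scale=0.1, n_samples=32, seed=0):
    """SmoothGrad: average the input gradient over Gaussian-perturbed
    copies of the input, suppressing the high-frequency noise that makes
    raw saliency maps hard to read."""
    rng = np.random.default_rng(seed)
    grads = [numeric_grad(f, x + rng.normal(scale=noise_scale, size=x.shape))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)
```

Because it only averages gradients and makes no assumptions about network architecture or activation structure, SmoothGrad transfers to brain MRI with far less adaptation than methods such as GradCAM or Layer-wise Relevance Propagation.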