Radiation enteritis associated with temporal sequencing of total neoadjuvant therapy in locally advanced rectal cancer: a preliminary study.

Ma CY, Fu Y, Liu L, Chen J, Li SY, Zhang L, Zhou JY

PubMed · Jul 30 2025
This study aimed to develop and validate a multi-temporal magnetic resonance imaging (MRI)-based delta-radiomics model to accurately predict severe acute radiation enteritis risk in patients undergoing total neoadjuvant therapy (TNT) for locally advanced rectal cancer (LARC). A retrospective analysis was conducted on data from 92 patients with LARC who received TNT. All patients underwent pelvic MRI at baseline (pre-treatment) and after neoadjuvant radiotherapy (post-RT). Radiomic features of the primary tumor region were extracted from T2-weighted images at both timepoints. Four delta feature strategies were defined: absolute difference, percent change, ratio, and feature fusion (concatenation of the pre- and post-RT features). Severe acute radiation enteritis (SARE) was defined as a composite CTCAE-based symptom score of ≥ 3 within the first 2 weeks of radiotherapy. Features were selected via statistical evaluation and least absolute shrinkage and selection operator regression. Support vector machine (SVM) classifiers were trained using baseline, post-RT, delta, and combined radiomic and clinical features. Model performance was evaluated in an independent test set based on the area under the curve (AUC) value and other metrics. Only the delta-fusion strategy retained stable radiomic features after selection and outperformed the difference, percent, and ratio definitions in terms of feature stability and model performance. The SVM model based on combined delta-fusion radiomics and clinical variables demonstrated the best predictive performance and generalizability. In the independent test cohort, this combined model achieved an AUC of 0.711, a sensitivity of 88.9%, and an F1-score of 0.696; these values surpassed those of models built with baseline-only or delta-difference features. Integrating multi-temporal radiomic features via delta-fusion with clinical factors markedly improved early prediction of SARE in LARC. The delta-fusion approach outperformed conventional delta calculations, demonstrating superior predictive performance. This highlights its potential in guiding individualized TNT sequencing and proactive toxicity management.
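
The four delta strategies named above are simple vector operations on the two feature sets. A minimal sketch of how they might be computed is below, assuming the pre- and post-RT radiomic features are already extracted as equal-length NumPy arrays; the function name and feature count are illustrative, not the authors' code.

```python
import numpy as np

def delta_features(pre: np.ndarray, post: np.ndarray, strategy: str = "fusion") -> np.ndarray:
    """Combine pre- and post-RT radiomic feature vectors with one of the
    four delta strategies described in the abstract (illustrative only)."""
    eps = 1e-8  # guard against division by near-zero feature values
    if strategy == "difference":   # absolute difference
        return post - pre
    if strategy == "percent":      # percent change relative to baseline
        return (post - pre) / (np.abs(pre) + eps)
    if strategy == "ratio":        # simple ratio
        return post / (pre + eps)
    if strategy == "fusion":       # concatenate both timepoints
        return np.concatenate([pre, post])
    raise ValueError(f"unknown strategy: {strategy}")

# Example: 107 features per timepoint -> 214-dimensional fused vector
pre_rt = np.random.rand(107)
post_rt = np.random.rand(107)
print(delta_features(pre_rt, post_rt, "fusion").shape)  # (214,)
```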

Optimizing Federated Learning Configurations for MRI Prostate Segmentation and Cancer Detection: A Simulation Study

Ashkan Moradi, Fadila Zerka, Joeran S. Bosma, Mohammed R. S. Sunoqrot, Bendik S. Abrahamsen, Derya Yakar, Jeroen Geerdink, Henkjan Huisman, Tone Frost Bathen, Mattijs Elschot

arXiv preprint · Jul 30 2025
Purpose: To develop and optimize a federated learning (FL) framework across multiple clients for biparametric MRI prostate segmentation and clinically significant prostate cancer (csPCa) detection. Materials and Methods: A retrospective study was conducted using Flower FL to train an nnU-Net-based architecture for MRI prostate segmentation and csPCa detection, using data collected from January 2010 to August 2021. Model development included training and optimizing local epochs, federated rounds, and aggregation strategies for FL-based prostate segmentation on T2-weighted MRIs (four clients, 1294 patients) and csPCa detection using biparametric MRIs (three clients, 1440 patients). Performance was evaluated on independent test sets using the Dice score for segmentation and the Prostate Imaging: Cancer Artificial Intelligence (PI-CAI) score, defined as the average of the area under the receiver operating characteristic curve and average precision, for csPCa detection. P-values for performance differences were calculated using permutation testing. Results: The FL configurations were independently optimized for both tasks, with the best performance at 1 local epoch and 300 federated rounds using FedMedian for prostate segmentation, and at 5 local epochs and 200 rounds using FedAdagrad for csPCa detection. Compared with the average performance of the clients, the optimized FL model significantly improved performance in prostate segmentation and csPCa detection on the independent test set. The optimized FL model showed higher lesion detection performance compared to the FL-baseline model, but no evidence of a difference was observed for prostate segmentation. Conclusions: FL enhanced the performance and generalizability of MRI prostate segmentation and csPCa detection compared with local models, and optimizing its configuration further improved lesion detection performance.
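
The PI-CAI score used here is defined in the abstract as the average of the area under the ROC curve and the average precision. A minimal sketch of that metric with scikit-learn follows, using hypothetical case-level labels and confidence scores; the official PI-CAI evaluation works on lesion-level detections and includes matching logic not shown here.

```python
from sklearn.metrics import roc_auc_score, average_precision_score

def pi_cai_score(y_true, y_score) -> float:
    """Average of AUROC and average precision, as defined in the abstract."""
    auroc = roc_auc_score(y_true, y_score)
    ap = average_precision_score(y_true, y_score)
    return 0.5 * (auroc + ap)

# Hypothetical case-level labels (1 = csPCa) and model confidences
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.3, 0.65]
print(round(pi_cai_score(y_true, y_score), 3))
```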

Bridging the Gap in Missing Modalities: Leveraging Knowledge Distillation and Style Matching for Brain Tumor Segmentation

Shenghao Zhu, Yifei Chen, Weihong Chen, Yuanhan Wang, Chang Liu, Shuo Jiang, Feiwei Qin, Changmiao Wang

arXiv preprint · Jul 30 2025
Accurate and reliable brain tumor segmentation, particularly when dealing with missing modalities, remains a critical challenge in medical image analysis. Previous studies have not fully resolved the challenges of tumor boundary segmentation insensitivity and feature transfer in the absence of key imaging modalities. In this study, we introduce MST-KDNet, aimed at addressing these critical issues. Our model features Multi-Scale Transformer Knowledge Distillation to effectively capture attention weights at various resolutions, Dual-Mode Logit Distillation to improve the transfer of knowledge, and a Global Style Matching Module that integrates feature matching with adversarial learning. Comprehensive experiments conducted on the BraTS and FeTS 2024 datasets demonstrate that MST-KDNet surpasses current leading methods in both Dice and HD95 scores, particularly in conditions with substantial modality loss. Our approach shows exceptional robustness and generalization potential, making it a promising candidate for real-world clinical applications. Our source code is available at https://github.com/Quanato607/MST-KDNet.
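
Dual-Mode Logit Distillation builds on the standard logit-matching loss used in knowledge distillation. The sketch below shows only the generic temperature-scaled KL term between a full-modality teacher and a missing-modality student for a 3D segmentation patch; shapes, temperature, and weighting are assumptions, and this is not the MST-KDNet implementation.

```python
import torch
import torch.nn.functional as F

def kd_logit_loss(student_logits: torch.Tensor,
                  teacher_logits: torch.Tensor,
                  temperature: float = 2.0) -> torch.Tensor:
    """Generic temperature-scaled KL distillation loss over per-voxel class logits.

    Shapes are assumed to be (batch, classes, D, H, W) for 3D segmentation.
    """
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=1)
    p_teacher = F.softmax(teacher_logits / t, dim=1)
    # KL(teacher || student), scaled by t^2 as in classic knowledge distillation
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)

# Example with random logits for a 4-class BraTS-style segmentation patch
student = torch.randn(1, 4, 16, 16, 16)
teacher = torch.randn(1, 4, 16, 16, 16)
print(kd_logit_loss(student, teacher).item())
```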

Advancing Fetal Ultrasound Image Quality Assessment in Low-Resource Settings

Dongli He, Hu Wang, Mohammad Yaqub

arXiv preprint · Jul 30 2025
Accurate fetal biometric measurements, such as abdominal circumference, play a vital role in prenatal care. However, obtaining high-quality ultrasound images for these measurements heavily depends on the expertise of sonographers, posing a significant challenge in low-income countries due to the scarcity of trained personnel. To address this issue, we leverage FetalCLIP, a vision-language model pretrained on a curated dataset of over 210,000 fetal ultrasound image-caption pairs, to perform automated fetal ultrasound image quality assessment (IQA) on blind-sweep ultrasound data. We introduce FetalCLIP$_{CLS}$, an IQA model adapted from FetalCLIP using Low-Rank Adaptation (LoRA), and evaluate it on the ACOUSLIC-AI dataset against six CNN and Transformer baselines. FetalCLIP$_{CLS}$ achieves the highest F1 score of 0.757. Moreover, we show that an adapted segmentation model, when repurposed for classification, further improves performance, achieving an F1 score of 0.771. Our work demonstrates how parameter-efficient fine-tuning of fetal ultrasound foundation models can enable task-specific adaptations, advancing prenatal care in resource-limited settings. The experimental code is available at: https://github.com/donglihe-hub/FetalCLIP-IQA.
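
Parameter-efficient adaptation of a CLIP-style encoder with LoRA can be sketched with the Hugging Face transformers and peft libraries, as below. The public openai/clip-vit-base-patch32 checkpoint stands in for FetalCLIP, and the target modules, rank, and classifier head are assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn
from transformers import CLIPVisionModel
from peft import LoraConfig, get_peft_model

# Stand-in backbone; the paper starts from FetalCLIP, which is not loaded here.
backbone = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")

lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],  # attention projections; an assumption
)
backbone = get_peft_model(backbone, lora_cfg)  # only LoRA parameters stay trainable

class FetalIQAClassifier(nn.Module):
    """CLIP vision encoder plus a linear head for a binary quality label (sketch)."""
    def __init__(self, encoder, hidden_dim=768, num_classes=2):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, pixel_values):
        pooled = self.encoder(pixel_values=pixel_values).pooler_output
        return self.head(pooled)

model = FetalIQAClassifier(backbone)
logits = model(torch.randn(2, 3, 224, 224))  # dummy batch of ultrasound frames
print(logits.shape)  # torch.Size([2, 2])
```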

Clinician Perspectives of a Magnetic Resonance Imaging-Based 3D Volumetric Analysis Tool for Neurofibromatosis Type 2-Related Schwannomatosis: Qualitative Pilot Study.

Desroches ST, Huang A, Ghankot R, Tommasini SM, Wiznia DH, Buono FD

PubMed · Jul 30 2025
Accurate monitoring of tumor progression is crucial for optimizing outcomes in neurofibromatosis type 2-related schwannomatosis. Standard 2D linear analysis on magnetic resonance imaging is less accurate than 3D volumetric analysis, but because 3D volumetric analysis is time-consuming, it is not widely used. To shorten the time required for 3D volumetric analysis, our lab has been developing an automated artificial intelligence-driven 3D volumetric tool. The objective of the study was to survey and interview clinicians treating neurofibromatosis type 2-related schwannomatosis to understand their views on current 2D analysis and to gather insights for the design of an artificial intelligence-driven 3D volumetric analysis tool. Interviews were examined for the following themes: (1) shortcomings of the currently used linear analysis, (2) utility of 3D visualizations, (3) features of an interactive 3D modeling software, and (4) lack of a gold standard to assess the accuracy of 3D volumetric analysis. A Likert-scale questionnaire was used to survey clinicians' levels of agreement with 25 statements related to 2D and 3D tumor analyses. A total of 14 clinicians completed the survey, and 12 clinicians were interviewed. Specialties ranged across neurosurgery, neuroradiology, neurology, oncology, and pediatrics. Overall, clinicians expressed concerns with current linear techniques, agreeing that linear measurements can be variable, with the possibility of two different clinicians calculating two different tumor sizes (mean 4.64, SD 0.49), and that volumetric measurements would be more helpful for determining clearer thresholds of tumor growth (mean 4.50, SD 0.52). For statements discussing the capabilities of a 3D volumetric analysis and visualization software, clinicians expressed strong interest in being able to visualize tumors with respect to critical brain structures (mean 4.36, SD 0.74) and in forecasting tumor growth (mean 4.77, SD 0.44). Clinicians were overall in favor of adopting 3D volumetric analysis techniques for measuring vestibular schwannoma tumors but expressed concerns regarding the novelty of, and their inexperience with, these techniques. However, clinicians felt that the ability to visualize tumors with reference to critical structures, to overlay structures, to interact with 3D models, and to visualize areas of slow versus rapid growth in 3D would be valuable contributions to clinical practice. Overall, clinicians provided valuable insights for designing a 3D volumetric analysis tool for vestibular schwannoma tumor growth. These findings may also apply to other central nervous system tumors, offering broader utility in tumor growth assessments.

A generalizable diffusion framework for 3D low-dose and few-view cardiac SPECT imaging.

Xie H, Gan W, Ji W, Chen X, Alashi A, Thorn SL, Zhou B, Liu Q, Xia M, Guo X, Liu YH, An H, Kamilov US, Wang G, Sinusas AJ, Liu C

PubMed · Jul 30 2025
Myocardial perfusion imaging using SPECT is widely utilized to diagnose coronary artery diseases, but image quality can be negatively affected in low-dose and few-view acquisition settings. Although various deep learning methods have been introduced to improve image quality from low-dose or few-view SPECT data, previous approaches often fail to generalize across different acquisition settings, limiting realistic applicability. This work introduced DiffSPECT-3D, a diffusion framework for 3D cardiac SPECT imaging that effectively adapts to different acquisition settings without requiring further network re-training or fine-tuning. Using both image and projection data, a consistency strategy is proposed to ensure that diffusion sampling at each step aligns with the low-dose/few-view projection measurements, the image data, and the scanner geometry, thus enabling generalization to different low-dose/few-view settings. Incorporating anatomical spatial information from CT and a total variation constraint, we proposed a 2.5D conditional strategy that allows DiffSPECT-3D to observe 3D contextual information from the entire image volume, addressing the memory and computational issues of 3D diffusion models. We extensively evaluated the proposed method on 1,325 clinical 99mTc-tetrofosmin stress/rest studies from 795 patients. Each study was reconstructed into 5 different low-count levels and 5 different projection few-view levels for model evaluation, ranging from 1% to 50% and from 1 to 9 views, respectively. Validated against cardiac catheterization results and diagnostic review from nuclear cardiologists, the presented results show the potential to achieve low-dose and few-view SPECT imaging without compromising clinical performance. Additionally, DiffSPECT-3D could be directly applied to full-dose SPECT images to further improve image quality, especially in a low-dose stress-first cardiac SPECT imaging protocol.
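
The measurement-consistency idea, aligning each sampling step with the acquired projections, can be illustrated with a toy linear forward model. The sketch below applies a plain gradient step on the projection-domain residual; real SPECT system modeling (geometry, attenuation, scatter) and the paper's exact consistency scheme are far more involved.

```python
import numpy as np

def consistency_step(x: np.ndarray, A: np.ndarray, y: np.ndarray,
                     step_size: float = 0.005) -> np.ndarray:
    """One data-consistency update: move the current diffusion estimate x
    toward agreement with measured projections y = A @ x_true.

    A is a (num_projection_bins, num_voxels) system matrix (toy linear model).
    """
    residual = A @ x - y        # mismatch in projection space
    grad = A.T @ residual       # back-project the mismatch
    return x - step_size * grad # gradient step on ||Ax - y||^2 / 2

# Toy example: 64-voxel "volume", 32 measured projection bins
rng = np.random.default_rng(0)
A = rng.normal(size=(32, 64))
x_true = rng.normal(size=64)
y = A @ x_true
x = rng.normal(size=64)         # current (noisy) diffusion sample
for _ in range(200):
    x = consistency_step(x, A, y)
print(float(np.linalg.norm(A @ x - y)))  # projection residual shrinks substantially
```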

Optimizing Federated Learning Configurations for MRI Prostate Segmentation and Cancer Detection: A Simulation Study.

Moradi A, Zerka F, Bosma JS, Sunoqrot MRS, Abrahamsen BS, Yakar D, Geerdink J, Huisman H, Bathen TF, Elschot M

PubMed · Jul 30 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop and optimize a federated learning (FL) framework across multiple clients for biparametric MRI prostate segmentation and clinically significant prostate cancer (csPCa) detection. Materials and Methods A retrospective study was conducted using Flower FL to train a nnU-Net-based architecture for MRI prostate segmentation and csPCa detection, using data collected from January 2010 to August 2021. Model development included training and optimizing local epochs, federated rounds, and aggregation strategies for FL-based prostate segmentation on T2-weighted MRIs (four clients, 1294 patients) and csPCa detection using biparametric MRIs (three clients, 1440 patients). Performance was evaluated on independent test sets using the Dice score for segmentation and the Prostate Imaging: Cancer Artificial Intelligence (PI-CAI) score, defined as the average of the area under the receiver operating characteristic curve and average precision, for csPCa detection. <i>P</i> values for performance differences were calculated using permutation testing. Results The FL configurations were independently optimized for both tasks, showing improved performance at 1 epoch 300 rounds using FedMedian for prostate segmentation and 5 epochs 200 rounds using FedAdagrad, for csPCa detection. Compared with the average performance of the clients, the optimized FL model significantly improved performance in prostate segmentation (Dice score increase from 0.73 ± 0.06 to 0.88 ± 0.03; <i>P</i> ≤ .01) and csPCa detection (PI-CAI score increase from 0.63 ± 0.07 to 0.74 ± 0.06; <i>P</i> ≤ .01) on the independent test set. The optimized FL model showed higher lesion detection performance compared with the FL-baseline model (PICAI score increase from 0.72 ± 0.06 to 0.74 ± 0.06; <i>P</i> ≤ .01), but no evidence of a difference was observed for prostate segmentation (Dice scores, 0.87 ± 0.03 vs 0.88 ± 03; <i>P</i> > .05). Conclusion FL enhanced the performance and generalizability of MRI prostate segmentation and csPCa detection compared with local models, and optimizing its configuration further improved lesion detection performance. ©RSNA, 2025.

Optimizing Thyroid Nodule Management With Artificial Intelligence: Multicenter Retrospective Study on Reducing Unnecessary Fine Needle Aspirations.

Ni JH, Liu YY, Chen C, Shi YL, Zhao X, Li XL, Ye BB, Hu JL, Mou LC, Sun LP, Fu HJ, Zhu XX, Zhang YF, Guo L, Xu HX

PubMed · Jul 30 2025
Most artificial intelligence (AI) models for thyroid nodules are designed to screen for malignancy to guide further interventions; however, these models have not yet been fully implemented in clinical practice. This study aimed to evaluate AI in real clinical settings for identifying potentially benign thyroid nodules initially deemed to be at risk for malignancy by radiologists, thereby reducing unnecessary fine needle aspiration (FNA) and optimizing management. We retrospectively collected a validation cohort of thyroid nodules that had undergone FNA. These nodules were initially assessed as "suspicious for malignancy" by radiologists based on ultrasound features, following standard clinical practice, which prompted further FNA procedures. Ultrasound images of these nodules were re-evaluated using a deep learning-based AI system, and its diagnostic performance was assessed in terms of correct identification of benign nodules and erroneous identification of malignant nodules as benign. Performance metrics such as sensitivity, specificity, and the area under the receiver operating characteristic curve were calculated. In addition, a separate comparison cohort was retrospectively assembled to compare the AI system's ability to correctly identify benign thyroid nodules with that of radiologists. The validation cohort comprised 4572 thyroid nodules (benign: n=3134, 68.5%; malignant: n=1438, 31.5%). The AI system correctly identified 2719 benign nodules (86.8% of benign nodules) and reduced unnecessary FNAs from 68.5% (3134/4572) to 9.1% (415/4572). However, 123 malignant nodules (8.6% of malignant cases) were mistakenly identified as benign, the majority of these being of low or intermediate suspicion. In the comparison cohort, AI successfully identified 81.4% (96/118) of benign nodules, outperforming junior and senior radiologists, who identified only 40% and 55%, respectively. The area under the curve (AUC) for the AI model was 0.88 (95% CI 0.85-0.91), superior to that of the junior radiologists (AUC=0.43, 95% CI 0.36-0.50; P=.002) and senior radiologists (AUC=0.63, 95% CI 0.55-0.70; P=.003). Compared with radiologists, AI can better serve as a "goalkeeper" in reducing unnecessary FNAs by identifying benign nodules that are initially assessed as malignant by radiologists. However, active surveillance is still necessary for all these nodules, since a very small number of low-aggressiveness malignant nodules may be mistakenly identified as benign.
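
The headline reduction in unnecessary FNAs follows directly from the counts reported in the abstract: of 3134 benign nodules referred for FNA, 2719 were flagged as benign by the AI, leaving 415 that would still have undergone the procedure. A small sketch of that arithmetic:

```python
def fna_reduction(n_total, n_benign, n_benign_ai_benign):
    """Share of FNAs that are 'unnecessary' (performed on benign nodules),
    before and after AI triage; figures taken from the abstract."""
    before = n_benign / n_total
    after = (n_benign - n_benign_ai_benign) / n_total
    return before, after

before, after = fna_reduction(4572, 3134, 2719)
print(f"unnecessary FNAs: {before:.1%} -> {after:.1%}")  # 68.5% -> 9.1%
```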

Role of Artificial Intelligence in Surgical Training by Assessing GPT-4 and GPT-4o on the Japan Surgical Board Examination With Text-Only and Image-Accompanied Questions: Performance Evaluation Study.

Maruyama H, Toyama Y, Takanami K, Takase K, Kamei T

PubMed · Jul 30 2025
Artificial intelligence and large language models (LLMs), particularly GPT-4 and GPT-4o, have demonstrated high correct-answer rates in medical examinations. GPT-4o has enhanced diagnostic capabilities, advanced image processing, and updated knowledge. Japanese surgeons face critical challenges, including a declining workforce, regional health care disparities, and work-hour-related challenges. Nonetheless, although LLMs could be beneficial in surgical education, no studies have yet assessed GPT-4o's surgical knowledge or its performance in the field of surgery. This study aims to evaluate the potential of GPT-4 and GPT-4o in surgical education by using them to take the Japan Surgical Board Examination (JSBE), which includes both text-only questions and questions accompanied by medical images, such as surgical images and computed tomography scans, to comprehensively assess their surgical knowledge. We used 297 multiple-choice questions from the 2021-2023 JSBEs. The questions were in Japanese, and 104 of them included images. First, the GPT-4 and GPT-4o responses to only the textual questions were collected via OpenAI's application programming interface to evaluate their correct-answer rate. Subsequently, the correct-answer rate of their responses to questions that included images was assessed by inputting both text and images. The overall correct-answer rates of GPT-4o and GPT-4 for the text-only questions were 78% (231/297) and 55% (163/297), respectively, with GPT-4o outperforming GPT-4 by 23 percentage points (P<.01). By contrast, there was no significant improvement in the correct-answer rate for questions that included images compared with the results for the text-only questions. GPT-4o outperformed GPT-4 on the JSBE. However, the results of the LLMs were lower than those of the examinees. Despite the capabilities of LLMs, image recognition remains a challenge for them, and their clinical application requires caution owing to the potential inaccuracy of their results.
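
Collecting model answers through OpenAI's application programming interface, including questions with attached images, can be sketched as below using the openai Python client's chat completions endpoint; the prompt wording, image encoding, and model snapshot are assumptions, not the study's protocol.

```python
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_mcq(question_text: str, image_path: str | None = None,
            model: str = "gpt-4o") -> str:
    """Send one multiple-choice question (optionally with an image) and
    return the raw model answer. Prompt format is illustrative only."""
    content = [{"type": "text",
                "text": question_text + "\nAnswer with the letter of the single best option."}]
    if image_path is not None:
        with open(image_path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode()
        content.append({"type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{b64}"}})
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": content}],
    )
    return response.choices[0].message.content

# Hypothetical usage:
# answer = ask_mcq("Which of the following ... ? a) ... b) ... c) ...", "ct_slice.png")
```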