Page 37 of 100995 results

Generative artificial intelligence for counseling of fetal malformations following ultrasound diagnosis.

Grünebaum A, Chervenak FA

PubMed · Jul 31 2025
To explore the potential role of generative artificial intelligence (GenAI) in enhancing patient counseling following prenatal ultrasound diagnosis of fetal malformations, with an emphasis on clinical utility, patient comprehension, and ethical implementation. The detection of fetal anomalies during the mid-trimester ultrasound is emotionally distressing for patients and presents significant challenges in communication and decision-making. Generative AI tools, such as GPT-4 and similar models, offer novel opportunities to support clinicians in delivering accurate, empathetic, and accessible counseling while preserving the physician's central role. We present a narrative review and applied framework illustrating how GenAI can assist obstetricians before, during, and after the fetal anomaly scan. Use cases include lay summaries, visual aids, anticipatory guidance, multilingual translation, and emotional support. Tables and sample prompts demonstrate practical applications across a range of anomalies.

Fine-grained Prototype Network for MRI Sequence Classification.

Yuan C, Jia X, Wang L, Yang C

PubMed · Jul 30 2025
Magnetic Resonance Imaging (MRI) is a crucial method for clinical diagnosis. Different abdominal MRI sequences provide tissue and structural information from various perspectives, offering reliable evidence for doctors to make accurate diagnoses. In recent years, with the rapid development of intelligent medical imaging, some studies have begun exploring deep learning methods for MRI sequence recognition. However, due to the significant intra-class variations and subtle inter-class differences in MRI sequences, traditional deep learning algorithms still struggle to handle such complex data distributions effectively. In addition, the key features for identifying MRI sequence categories often lie in subtle details, and significant discrepancies can be observed among sequences from individual samples. Current deep learning-based MRI sequence classification methods tend to overlook these fine-grained differences across diverse samples. To overcome these challenges, this paper proposes a fine-grained prototype network, SequencesNet, for MRI sequence classification. A network combining convolutional neural networks (CNNs) with improved vision transformers is constructed for feature extraction, considering both local and global information. Specifically, a Feature Selection Module (FSM) is added to the vision transformer, and fine-grained features for sequence discrimination are selected based on fused attention weights from multiple layers. A Prototype Classification Module (PCM) is then proposed to classify MRI sequences based on fine-grained MRI representations. Comprehensive experiments were conducted on a public abdominal MRI sequence classification dataset and a private dataset. The proposed SequencesNet achieved the highest accuracy on the two datasets, 96.73% and 95.98% respectively, outperforming the compared prototype-based and fine-grained models. Visualization results show that SequencesNet better captures fine-grained information. SequencesNet shows promising performance in MRI sequence classification, excelling in distinguishing subtle inter-class differences and handling large intra-class variability. Specifically, the FSM enhances clinical interpretability by focusing on fine-grained features, and the PCM improves clustering by optimizing prototype-sample distances. Compared with baselines such as 3DResNet18 and TransFG, SequencesNet achieves higher recall and precision, particularly for similar sequences such as DCE-LAP and DCE-PVP. The proposed model addresses the subtle inter-class differences and significant intra-class variations present in medical images. Its modular design can be extended to other medical imaging tasks, including multimodal image fusion, lesion detection, and disease staging. Future work will aim to reduce the computational complexity and improve the generalization of the model.
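The prototype-classification idea described above can be illustrated with a minimal nearest-prototype classifier. This is a generic sketch of the technique, not the paper's actual PCM; the feature vectors and class names below are invented for illustration.

```python
import math

def nearest_prototype(feature, prototypes):
    """Assign a feature vector to the class whose prototype is closest
    in Euclidean distance. `prototypes` maps class name -> prototype vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(prototypes, key=lambda c: dist(feature, prototypes[c]))

# Toy prototypes for two hypothetical MRI sequence classes.
protos = {"T1": [1.0, 0.0], "T2": [0.0, 1.0]}
print(nearest_prototype([0.9, 0.2], protos))  # closest to the T1 prototype
```

Training such a model amounts to learning prototype vectors (and the feature extractor) so that samples cluster tightly around their class prototype, which is what the paper's prototype-sample distance optimization aims at.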

Optimizing Federated Learning Configurations for MRI Prostate Segmentation and Cancer Detection: A Simulation Study

Ashkan Moradi, Fadila Zerka, Joeran S. Bosma, Mohammed R. S. Sunoqrot, Bendik S. Abrahamsen, Derya Yakar, Jeroen Geerdink, Henkjan Huisman, Tone Frost Bathen, Mattijs Elschot

arXiv preprint · Jul 30 2025
Purpose: To develop and optimize a federated learning (FL) framework across multiple clients for biparametric MRI prostate segmentation and clinically significant prostate cancer (csPCa) detection. Materials and Methods: A retrospective study was conducted using Flower FL to train an nnU-Net-based architecture for MRI prostate segmentation and csPCa detection, using data collected from January 2010 to August 2021. Model development included training and optimizing local epochs, federated rounds, and aggregation strategies for FL-based prostate segmentation on T2-weighted MRIs (four clients, 1294 patients) and csPCa detection using biparametric MRIs (three clients, 1440 patients). Performance was evaluated on independent test sets using the Dice score for segmentation and the Prostate Imaging: Cancer Artificial Intelligence (PI-CAI) score, defined as the average of the area under the receiver operating characteristic curve and average precision, for csPCa detection. P-values for performance differences were calculated using permutation testing. Results: The FL configurations were independently optimized for both tasks, showing the best performance at 1 local epoch and 300 federated rounds using FedMedian for prostate segmentation and at 5 local epochs and 200 rounds using FedAdagrad for csPCa detection. Compared with the average performance of the clients, the optimized FL model significantly improved performance in prostate segmentation and csPCa detection on the independent test set. The optimized FL model showed higher lesion detection performance compared with the FL-baseline model, but no evidence of a difference was observed for prostate segmentation. Conclusions: FL enhanced the performance and generalizability of MRI prostate segmentation and csPCa detection compared with local models, and optimizing its configuration further improved lesion detection performance.
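The PI-CAI score used here is defined in the abstract as the average of the area under the ROC curve and the average precision. A minimal pure-Python sketch of that metric (the labels and scores below are toy values, not study data):

```python
def auroc(labels, scores):
    """AUROC as the probability that a random positive outranks a
    random negative (ties counted as 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_precision(labels, scores):
    """Mean precision at each true positive, scanning by descending score."""
    ranked = sorted(zip(scores, labels), reverse=True)
    tp, precisions = 0, []
    for i, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            tp += 1
            precisions.append(tp / i)
    return sum(precisions) / sum(labels)

def pi_cai(labels, scores):
    """PI-CAI score: average of AUROC and average precision."""
    return 0.5 * (auroc(labels, scores) + average_precision(labels, scores))

print(pi_cai([1, 0, 1, 0], [0.9, 0.8, 0.7, 0.1]))
```

In practice a library implementation (e.g. scikit-learn's `roc_auc_score` and `average_precision_score`) would be used; the hand-rolled versions here just make the definition explicit.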

Deep Learning for the Diagnosis and Treatment of Thyroid Cancer: A Review.

Gao R, Mai S, Wang S, Hu W, Chang Z, Wu G, Guan H

PubMed · Jul 30 2025
In recent years, the application of deep learning (DL) technology in the thyroid field has shown exponential growth, greatly promoting innovation in thyroid disease research. Thyroid cancer is the most common malignant tumor of the endocrine system, and its precise diagnosis and treatment have been a key focus of clinical research. This article systematically reviews the latest research progress in DL for the diagnosis and treatment of thyroid malignancies, focusing on the breakthrough application of advanced models such as convolutional neural networks (CNNs), long short-term memory networks (LSTMs), and generative adversarial networks (GANs) in key areas such as ultrasound image analysis of thyroid nodules, automatic classification of pathological images, and assessment of extrathyroidal extension. Furthermore, the review highlights the great potential of DL techniques in the development of individualized treatment planning and prognosis prediction. In addition, it analyzes the technical bottlenecks and clinical challenges faced by current DL applications in thyroid cancer diagnosis and treatment and looks ahead to future directions for development. The aim of this review is to provide the latest research insights for clinical practitioners, promote further improvements in the precision diagnosis and treatment system for thyroid cancer, and ultimately achieve better diagnostic and therapeutic outcomes for thyroid cancer patients.

Advancing Fetal Ultrasound Image Quality Assessment in Low-Resource Settings

Dongli He, Hu Wang, Mohammad Yaqub

arXiv preprint · Jul 30 2025
Accurate fetal biometric measurements, such as abdominal circumference, play a vital role in prenatal care. However, obtaining high-quality ultrasound images for these measurements heavily depends on the expertise of sonographers, posing a significant challenge in low-income countries due to the scarcity of trained personnel. To address this issue, we leverage FetalCLIP, a vision-language model pretrained on a curated dataset of over 210,000 fetal ultrasound image-caption pairs, to perform automated fetal ultrasound image quality assessment (IQA) on blind-sweep ultrasound data. We introduce FetalCLIP$_{CLS}$, an IQA model adapted from FetalCLIP using Low-Rank Adaptation (LoRA), and evaluate it on the ACOUSLIC-AI dataset against six CNN and Transformer baselines. FetalCLIP$_{CLS}$ achieves the highest F1 score of 0.757. Moreover, we show that an adapted segmentation model, when repurposed for classification, further improves performance, achieving an F1 score of 0.771. Our work demonstrates how parameter-efficient fine-tuning of fetal ultrasound foundation models can enable task-specific adaptations, advancing prenatal care in resource-limited settings. The experimental code is available at: https://github.com/donglihe-hub/FetalCLIP-IQA.
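Parameter-efficient fine-tuning with LoRA, as used for FetalCLIP$_{CLS}$, freezes the pretrained weights and learns only a low-rank update W + B·A. A back-of-envelope sketch of the parameter savings; the layer dimensions and rank below are hypothetical, not FetalCLIP's actual configuration:

```python
def lora_param_counts(d_in, d_out, rank):
    """Trainable parameter counts for a d_out x d_in weight matrix:
    full fine-tuning vs. a LoRA update B @ A with
    A: rank x d_in and B: d_out x rank."""
    full = d_out * d_in
    lora = rank * d_in + d_out * rank
    return full, lora

# Hypothetical 768x768 projection layer with rank-8 adapters.
full, lora = lora_param_counts(768, 768, 8)
print(full, lora, f"{lora / full:.1%}")  # LoRA trains ~2% of the parameters
```

This is why LoRA makes adapting a large foundation model feasible in compute-constrained settings: only the small A and B matrices receive gradients.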

Optimizing Federated Learning Configurations for MRI Prostate Segmentation and Cancer Detection: A Simulation Study.

Moradi A, Zerka F, Bosma JS, Sunoqrot MRS, Abrahamsen BS, Yakar D, Geerdink J, Huisman H, Bathen TF, Elschot M

PubMed · Jul 30 2025
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To develop and optimize a federated learning (FL) framework across multiple clients for biparametric MRI prostate segmentation and clinically significant prostate cancer (csPCa) detection. Materials and Methods A retrospective study was conducted using Flower FL to train an nnU-Net-based architecture for MRI prostate segmentation and csPCa detection, using data collected from January 2010 to August 2021. Model development included training and optimizing local epochs, federated rounds, and aggregation strategies for FL-based prostate segmentation on T2-weighted MRIs (four clients, 1294 patients) and csPCa detection using biparametric MRIs (three clients, 1440 patients). Performance was evaluated on independent test sets using the Dice score for segmentation and the Prostate Imaging: Cancer Artificial Intelligence (PI-CAI) score, defined as the average of the area under the receiver operating characteristic curve and average precision, for csPCa detection. P values for performance differences were calculated using permutation testing. Results The FL configurations were independently optimized for both tasks, showing the best performance at 1 local epoch and 300 federated rounds using FedMedian for prostate segmentation and at 5 local epochs and 200 rounds using FedAdagrad for csPCa detection. Compared with the average performance of the clients, the optimized FL model significantly improved performance in prostate segmentation (Dice score increase from 0.73 ± 0.06 to 0.88 ± 0.03; P ≤ .01) and csPCa detection (PI-CAI score increase from 0.63 ± 0.07 to 0.74 ± 0.06; P ≤ .01) on the independent test set. The optimized FL model showed higher lesion detection performance compared with the FL-baseline model (PI-CAI score increase from 0.72 ± 0.06 to 0.74 ± 0.06; P ≤ .01), but no evidence of a difference was observed for prostate segmentation (Dice scores, 0.87 ± 0.03 vs 0.88 ± 0.03; P > .05). Conclusion FL enhanced the performance and generalizability of MRI prostate segmentation and csPCa detection compared with local models, and optimizing its configuration further improved lesion detection performance. ©RSNA, 2025.

Radiation enteritis associated with temporal sequencing of total neoadjuvant therapy in locally advanced rectal cancer: a preliminary study.

Ma CY, Fu Y, Liu L, Chen J, Li SY, Zhang L, Zhou JY

PubMed · Jul 30 2025
This study aimed to develop and validate a multi-temporal magnetic resonance imaging (MRI)-based delta-radiomics model to accurately predict the risk of severe acute radiation enteritis in patients undergoing total neoadjuvant therapy (TNT) for locally advanced rectal cancer (LARC). A retrospective analysis was conducted on data from 92 patients with LARC who received TNT. All patients underwent pelvic MRI at baseline (pre-treatment) and after neoadjuvant radiotherapy (post-RT). Radiomic features of the primary tumor region were extracted from T2-weighted images at both timepoints. Four delta feature strategies were defined: absolute difference, percent change, ratio, and feature fusion (concatenation of the pre- and post-RT features). Severe acute radiation enteritis (SARE) was defined as a composite CTCAE-based symptom score of ≥ 3 within the first 2 weeks of radiotherapy. Features were selected via statistical evaluation and least absolute shrinkage and selection operator regression. Support vector machine (SVM) classifiers were trained using baseline, post-RT, delta, and combined radiomic and clinical features. Model performance was evaluated in an independent test set based on the area under the curve (AUC) value and other metrics. Only the delta-fusion strategy retained stable radiomic features after selection, and it outperformed the difference, percent, and ratio definitions in terms of feature stability and model performance. The SVM model based on combined delta-fusion radiomics and clinical variables demonstrated the best predictive performance and generalizability. In the independent test cohort, this combined model achieved an AUC value of 0.711, a sensitivity of 88.9%, and an F1-score of 0.696, surpassing models built with baseline-only or delta difference features. Integrating multi-temporal radiomic features via delta-fusion with clinical factors markedly improved early prediction of SARE in LARC. The delta-fusion approach outperformed conventional delta calculations, demonstrating superior predictive performance and highlighting its potential to guide individualized TNT sequencing and proactive toxicity management.
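The four delta definitions compared in this study can be sketched in a few lines. A minimal illustration with made-up two-element feature vectors (not actual radiomics output); the small epsilon guards against division by zero and is an implementation assumption:

```python
def delta_features(pre, post, eps=1e-8):
    """Four delta-radiomics definitions for paired pre-/post-treatment
    feature vectors: absolute difference, percent change, ratio, and
    fusion (concatenation of both timepoints)."""
    diff = [b - a for a, b in zip(pre, post)]
    percent = [(b - a) / (a + eps) for a, b in zip(pre, post)]
    ratio = [b / (a + eps) for a, b in zip(pre, post)]
    fusion = list(pre) + list(post)
    return {"difference": diff, "percent": percent,
            "ratio": ratio, "fusion": fusion}

feats = delta_features([2.0, 4.0], [3.0, 2.0])
print(feats["difference"], len(feats["fusion"]))
```

Note that fusion doubles the feature dimensionality rather than collapsing the two timepoints into one number, which is consistent with the study's finding that it preserves more stable information for downstream selection.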

Role of Artificial Intelligence in Surgical Training by Assessing GPT-4 and GPT-4o on the Japan Surgical Board Examination With Text-Only and Image-Accompanied Questions: Performance Evaluation Study.

Maruyama H, Toyama Y, Takanami K, Takase K, Kamei T

PubMed · Jul 30 2025
Artificial intelligence and large language models (LLMs), particularly GPT-4 and GPT-4o, have demonstrated high correct-answer rates in medical examinations. GPT-4o offers enhanced diagnostic capabilities, advanced image processing, and updated knowledge. Japanese surgeons face critical challenges, including a declining workforce, regional health care disparities, and work-hour-related constraints. Although LLMs could be beneficial in surgical education, no studies have yet assessed GPT-4o's surgical knowledge or its performance in the field of surgery. This study aims to evaluate the potential of GPT-4 and GPT-4o in surgical education by having them take the Japan Surgical Board Examination (JSBE), which includes both textual questions and medical images, such as surgical and computed tomography images, to comprehensively assess their surgical knowledge. We used 297 multiple-choice questions from the 2021-2023 JSBEs. The questions were in Japanese, and 104 of them included images. First, the GPT-4 and GPT-4o responses to the text-only questions were collected via OpenAI's application programming interface to evaluate their correct-answer rate. Subsequently, the correct-answer rate on questions that included images was assessed by inputting both text and images. The overall correct-answer rates of GPT-4o and GPT-4 on the text-only questions were 78% (231/297) and 55% (163/297), respectively, with GPT-4o outperforming GPT-4 by 23 percentage points (P<.01). By contrast, there was no significant improvement in the correct-answer rate for questions that included images compared with the text-only results. GPT-4o outperformed GPT-4 on the JSBE. However, the LLMs' results were lower than those of the examinees. Despite the capabilities of LLMs, image recognition remains a challenge for them, and their clinical application requires caution owing to the potential inaccuracy of their results.
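The reported 23-percentage-point gap can be checked with simple arithmetic. The abstract does not state which statistical test produced the P value, so the pooled two-proportion z test below is only one plausible choice, shown for illustration:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic for comparing two
    correct-answer rates x1/n1 and x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Rates from the abstract: GPT-4o 231/297 (~78%) vs GPT-4 163/297 (~55%).
z = two_proportion_z(231, 297, 163, 297)
print(round(z, 2))  # z well above 2.58, consistent with P < .01
```

Any reasonable test on proportions this far apart with n = 297 per group would reach the same conclusion.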

Optimizing Thyroid Nodule Management With Artificial Intelligence: Multicenter Retrospective Study on Reducing Unnecessary Fine Needle Aspirations.

Ni JH, Liu YY, Chen C, Shi YL, Zhao X, Li XL, Ye BB, Hu JL, Mou LC, Sun LP, Fu HJ, Zhu XX, Zhang YF, Guo L, Xu HX

PubMed · Jul 30 2025
Most artificial intelligence (AI) models for thyroid nodules are designed to screen for malignancy to guide further interventions; however, these models have not yet been fully implemented in clinical practice. This study aimed to evaluate AI in real clinical settings for identifying potentially benign thyroid nodules initially deemed to be at risk for malignancy by radiologists, reducing unnecessary fine needle aspiration (FNA) and optimizing management. We retrospectively collected a validation cohort of thyroid nodules that had undergone FNA. These nodules were initially assessed as "suspicious for malignancy" by radiologists based on ultrasound features, following standard clinical practice, which prompted further FNA procedures. Ultrasound images of these nodules were re-evaluated using a deep learning-based AI system, and its diagnostic performance was assessed in terms of correct identification of benign nodules and error identification of malignant nodules. Performance metrics such as sensitivity, specificity, and the area under the receiver operating characteristic curve were calculated. In addition, a separate comparison cohort was retrospectively assembled to compare the AI system's ability to correctly identify benign thyroid nodules with that of radiologists. The validation cohort comprised 4572 thyroid nodules (benign: n=3134, 68.5%; malignant: n=1438, 31.5%). AI correctly identified 2719 (86.8% among benign nodules) and reduced unnecessary FNAs from 68.5% (3134/4572) to 9.1% (415/4572). However, 123 malignant nodules (8.6% of malignant cases) were mistakenly identified as benign, with the majority of these being of low or intermediate suspicion. In the comparison cohort, AI successfully identified 81.4% (96/118) of benign nodules. It outperformed junior and senior radiologists, who identified only 40% and 55%, respectively. 
The area under the curve (AUC) for the AI model was 0.88 (95% CI 0.85-0.91), superior to that of the junior radiologists (AUC=0.43, 95% CI 0.36-0.50; P=.002) and the senior radiologists (AUC=0.63, 95% CI 0.55-0.70; P=.003). Compared with radiologists, AI can better serve as a "goalkeeper" for reducing unnecessary FNAs by identifying benign nodules that were initially assessed as suspicious for malignancy by radiologists. However, active surveillance remains necessary for all these nodules, since a very small number of low-aggressiveness malignant nodules may be mistakenly identified as benign.
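The triage numbers in the abstract can be reproduced directly from the reported counts. A small sketch of that arithmetic:

```python
def fna_reduction(benign, malignant, benign_spared, malignant_missed):
    """Triage arithmetic from the reported counts: benign nodules the AI
    correctly flags are spared FNA; the remaining benign nodules would
    still be biopsied unnecessarily."""
    total = benign + malignant
    before = benign / total                  # all benign FNAs were unnecessary
    after = (benign - benign_spared) / total # benign nodules still sent to FNA
    miss_rate = malignant_missed / malignant # malignant cases called benign
    return before, after, miss_rate

# Counts from the validation cohort: 3134 benign, 1438 malignant,
# 2719 benign correctly identified, 123 malignant misclassified as benign.
before, after, miss = fna_reduction(3134, 1438, 2719, 123)
print(f"{before:.1%} -> {after:.1%}, miss rate {miss:.1%}")
```

Running this reproduces the abstract's figures: unnecessary FNAs drop from 68.5% to 9.1% of the cohort, at the cost of an 8.6% malignant miss rate.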