
Accurate and Efficient Fetal Birth Weight Estimation from 3D Ultrasound

Jian Wang, Qiongying Ni, Hongkui Yu, Ruixuan Yao, Jinqiao Ying, Bin Zhang, Xingyi Yang, Jin Peng, Jiongquan Chen, Junxuan Yu, Wenlong Shi, Chaoyu Chen, Zhongnuo Yan, Mingyuan Luo, Gaocheng Cai, Dong Ni, Jing Lu, Xin Yang

arXiv preprint · Jul 1 2025
Accurate fetal birth weight (FBW) estimation is essential for optimizing delivery decisions and reducing perinatal mortality. However, clinical methods for FBW estimation are inefficient, operator-dependent, and challenging to apply in cases of complex fetal anatomy. Existing deep learning methods are based on 2D standard ultrasound (US) images or videos that lack spatial information, limiting their prediction accuracy. In this study, we propose the first method for directly estimating FBW from 3D fetal US volumes. Our approach integrates a multi-scale feature fusion network (MFFN) and a synthetic sample-based learning framework (SSLF). The MFFN effectively extracts and fuses multi-scale features under sparse supervision by incorporating channel attention, spatial attention, and a ranking-based loss function. SSLF generates synthetic samples by simply combining fetal head and abdomen data from different fetuses, utilizing semi-supervised learning to improve prediction performance. Experimental results demonstrate that our method achieves superior performance, with a mean absolute error of $166.4\pm155.9$ $g$ and a mean absolute percentage error of $5.1\pm4.6$%, outperforming existing methods and approaching the accuracy of a senior doctor. Code is available at: https://github.com/Qioy-i/EFW.
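The abstract mentions a ranking-based loss used under sparse supervision but does not give its form. Below is a minimal, hypothetical sketch of how such a pairwise ranking term could be combined with an L1 regression objective in PyTorch; the `alpha` weight and margin are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

# Sketch of a pairwise ranking-style auxiliary loss for weight regression:
# within a batch, if fetus i is heavier than fetus j, the predicted weight
# for i should also be larger. Combined with a standard L1 term.
def regression_with_ranking_loss(pred, target, alpha=0.1, margin=0.0):
    l1 = nn.functional.l1_loss(pred, target)

    # Build all ordered pairs (i, j) within the batch.
    pi, pj = pred.unsqueeze(0), pred.unsqueeze(1)
    ti, tj = target.unsqueeze(0), target.unsqueeze(1)

    # sign = +1 where target_i > target_j, -1 where smaller, 0 on ties.
    sign = torch.sign(ti - tj)
    rank = nn.functional.margin_ranking_loss(
        pi.expand_as(sign).reshape(-1),
        pj.expand_as(sign).reshape(-1),
        sign.reshape(-1),
        margin=margin,
    )
    return l1 + alpha * rank

pred = torch.tensor([3100.0, 2800.0, 3500.0])    # predicted birth weights (g)
target = torch.tensor([3050.0, 2900.0, 3400.0])  # ground-truth birth weights (g)
print(regression_with_ranking_loss(pred, target))
```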

MedDiff-FT: Data-Efficient Diffusion Model Fine-tuning with Structural Guidance for Controllable Medical Image Synthesis

Jianhao Xie, Ziang Zhang, Zhenyu Weng, Yuesheng Zhu, Guibo Luo

arXiv preprint · Jul 1 2025
Recent advancements in deep learning for medical image segmentation are often limited by the scarcity of high-quality training data. While diffusion models provide a potential solution by generating synthetic images, their effectiveness in medical imaging remains constrained due to their reliance on large-scale medical datasets and the need for higher image quality. To address these challenges, we present MedDiff-FT, a controllable medical image generation method that fine-tunes a diffusion foundation model to produce medical images with structural dependency and domain specificity in a data-efficient manner. During inference, a dynamic adaptive guiding mask enforces spatial constraints to ensure anatomically coherent synthesis, while a lightweight stochastic mask generator enhances diversity through hierarchical randomness injection. Additionally, an automated quality assessment protocol filters suboptimal outputs using feature-space metrics, followed by mask corrosion to refine fidelity. Evaluated on five medical segmentation datasets, MedDiff-FT's synthetic image-mask pairs improve the segmentation performance of state-of-the-art methods by an average of 1% in Dice score. The framework effectively balances generation quality, diversity, and computational efficiency, offering a practical solution for medical data augmentation. The code is available at https://github.com/JianhaoXie1/MedDiff-FT.
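The pipeline above ends with "mask corrosion to refine fidelity." Assuming "corrosion" refers to morphological erosion of the binary guiding mask (an assumption; the abstract does not spell out the operation), a minimal sketch looks like the following; the erosion depth is illustrative.

```python
import numpy as np
from scipy.ndimage import binary_erosion

# Erode a binary segmentation mask so that uncertain boundary pixels are
# trimmed and the paired synthetic image and mask stay consistent.
def corrode_mask(mask: np.ndarray, iterations: int = 2) -> np.ndarray:
    """Shrink a binary mask by `iterations` pixels via morphological erosion."""
    return binary_erosion(mask.astype(bool), iterations=iterations).astype(mask.dtype)

mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 2:6] = 1               # a 4x4 square "lesion"
print(corrode_mask(mask, 1))     # shrinks to a 2x2 core
```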

Enhancing Magnetic Resonance Imaging (MRI) Report Comprehension in Spinal Trauma: Readability Analysis of AI-Generated Explanations for Thoracolumbar Fractures.

Sing DC, Shah KS, Pompliano M, Yi PH, Velluto C, Bagheri A, Eastlack RK, Stephan SR, Mundis GM

PubMed paper · Jul 1 2025
Magnetic resonance imaging (MRI) reports are challenging for patients to interpret and may subject patients to unnecessary anxiety. The advent of advanced artificial intelligence (AI) large language models (LLMs), such as GPT-4o, holds promise for translating complex medical information into layman's terms. This paper aims to evaluate the accuracy, helpfulness, and readability of GPT-4o in explaining MRI reports of patients with thoracolumbar fractures. MRI reports of 20 patients presenting with thoracic or lumbar vertebral body fractures were obtained. GPT-4o was prompted to explain each MRI report in layman's terms. The generated explanations were then presented to 7 board-certified spine surgeons for evaluation of their helpfulness and accuracy. The MRI report text and GPT-4o explanations were then analyzed to grade the readability of the texts using the Flesch Reading Ease Score (FRES) and Flesch-Kincaid Grade Level (FKGL) scale. The layman explanations provided by GPT-4o were found to be helpful by all surgeons in 17 cases, with 6 of 7 surgeons finding the information helpful in the remaining 3 cases. ChatGPT-generated layman reports were rated as "accurate" by all 7 surgeons in 11/20 cases (55%). In an additional 5/20 cases (25%), 6 of 7 surgeons agreed on their accuracy. In the remaining 4/20 cases (20%), accuracy ratings varied, with 4 or 5 surgeons considering them accurate. Review of surgeon feedback on inaccuracies revealed that the radiology reports were often insufficiently detailed. The mean FRES score of the MRI reports was significantly lower than that of the GPT-4o explanations (32.15, SD 15.89 vs 53.9, SD 7.86; P<.001). The mean FKGL score of the MRI reports trended higher than that of the GPT-4o explanations (11th-12th grade vs 10th-11th grade level; P=.11). Overall helpfulness and readability ratings for AI-generated summaries of MRI reports were high, with few inaccuracies recorded. This study demonstrates the potential of GPT-4o to serve as a valuable tool for enhancing patient comprehension of MRI report findings.
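For readers unfamiliar with the two readability metrics, the standard Flesch Reading Ease and Flesch-Kincaid Grade Level formulas can be computed as sketched below; the syllable counter is a rough vowel-group heuristic, not the exact tool used in the study.

```python
import re

# FRES = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
# FKGL = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syll = sum(count_syllables(w) for w in words)

    fres = 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syll / n_words)
    fkgl = 0.39 * (n_words / sentences) + 11.8 * (n_syll / n_words) - 15.59
    return fres, fkgl

report = "There is an acute compression fracture of the L1 vertebral body."
plain = "One bone in your lower back has a new break that squashed it slightly."
print(readability(report))  # lower FRES, higher FKGL (harder to read)
print(readability(plain))   # higher FRES, lower FKGL (easier to read)
```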

The power spectrum map of gyro-sulcal functional activity dissociation in macaque brains.

Sun Y, Zhou J, Mao W, Zhang W, Zhao B, Duan X, Zhang S, Zhang T, Jiang X

PubMed paper · Jul 1 2025
Nonhuman primates, particularly rhesus macaques, have served as crucial animal models for investigating complex brain functions. While previous studies have explored neural activity features in macaques, the gyro-sulcal functional dissociation characteristics remain largely unknown. In this study, we employ a one-dimensional convolutional neural network to differentiate resting-state functional magnetic resonance imaging (rs-fMRI) signals between gyri and sulci in macaque brains, and further investigate the frequency-specific dissociations between gyri and sulci inferred from the power spectral density of the rs-fMRI signals. Experimental results based on a large cohort of 440 macaques from two independent sites demonstrate substantial frequency-specific dissociation between gyral and sulcal signals at both whole-brain and regional levels. The magnitude of gyral power spectral density is significantly larger than that of sulcal power spectral density within the range of 0.01 to 0.1 Hz, suggesting that gyri and sulci may play distinct roles as global hubs and local processing units, respectively, for functional activity transmission and interaction in macaque brains. In conclusion, our study has established one of the first power spectrum maps of gyro-sulcal functional activity dissociation in macaque brains, providing a novel perspective for systematically exploring the neural mechanisms of functional dissociation in mammalian brains.
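The band-limited power comparison described above can be approximated with a short script: estimate the power spectral density of each time series with Welch's method and average it over 0.01-0.1 Hz. The repetition time and the random signals below are illustrative stand-ins, not the study's data.

```python
import numpy as np
from scipy.signal import welch

def band_power(ts: np.ndarray, tr: float = 2.0, band=(0.01, 0.1)) -> float:
    """Mean power spectral density of a BOLD time series within a frequency band."""
    fs = 1.0 / tr                                   # sampling frequency in Hz
    freqs, psd = welch(ts, fs=fs, nperseg=min(128, len(ts)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

rng = np.random.default_rng(0)
gyral_ts = rng.standard_normal(300)    # stand-in for a gyral fMRI signal
sulcal_ts = rng.standard_normal(300)   # stand-in for a sulcal fMRI signal
print(band_power(gyral_ts), band_power(sulcal_ts))
```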

Cephalometric landmark detection using vision transformers with direct coordinate prediction.

Laitenberger F, Scheuer HT, Scheuer HA, Lilienthal E, You S, Friedrich RE

PubMed paper · Jul 1 2025
Cephalometric landmark detection (CLD), i.e., annotating anatomical points of interest in lateral X-ray images, is the crucial first step of every orthodontic therapy. While CLD has immense potential for automation using deep learning methods, carefully crafted contemporary approaches using convolutional neural networks and heatmap prediction do not qualify for large-scale clinical application due to insufficient performance. We propose a novel approach using Vision Transformers (ViTs) with direct coordinate prediction, avoiding the memory-intensive heatmap prediction common in previous work. Through extensive ablation studies comparing our method against contemporary CNN architectures (ConvNeXt V2) and heatmap-based approaches (SegFormer), we demonstrate that ViTs with coordinate prediction achieve superior performance, with an improvement of more than 2 mm in mean radial error compared to state-of-the-art CLD methods. Our results show that while non-adapted CNN architectures perform poorly on the given task, contemporary approaches may be too tailored to specific datasets, failing to generalize to different and especially sparse datasets. We conclude that general-purpose Vision Transformers with direct coordinate prediction show great promise for future research on CLD and medical computer vision.
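As a rough illustration of direct coordinate regression (not the authors' implementation), one can replace a ViT classification head with a linear layer that outputs (x, y) for each landmark and score predictions with the mean radial error; the landmark count and pixel-to-mm scale below are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

N_LANDMARKS = 19  # illustrative; common cephalometric benchmarks annotate ~19 points

# ViT-B/16 backbone with the classification head swapped for coordinate regression.
model = vit_b_16(weights=None)
model.heads.head = nn.Linear(model.heads.head.in_features, 2 * N_LANDMARKS)

def mean_radial_error(pred, target, mm_per_pixel=0.1):
    """Mean Euclidean distance between predicted and annotated landmarks, in mm."""
    pred = pred.view(-1, N_LANDMARKS, 2)
    target = target.view(-1, N_LANDMARKS, 2)
    return (torch.linalg.norm(pred - target, dim=-1) * mm_per_pixel).mean()

x = torch.randn(2, 3, 224, 224)          # toy batch standing in for lateral cephalograms
coords = model(x)                        # shape (2, 2 * N_LANDMARKS)
print(mean_radial_error(coords, torch.zeros_like(coords)))
```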

Comparison of Deep Learning Models for fast and accurate dose map prediction in Microbeam Radiation Therapy.

Arsini L, Humphreys J, White C, Mentzel F, Paino J, Bolst D, Caccia B, Cameron M, Ciardiello A, Corde S, Engels E, Giagu S, Rosenfeld A, Tehei M, Tsoi AC, Vogel S, Lerch M, Hagenbuchner M, Guatelli S, Terracciano CM

PubMed paper · Jul 1 2025
Microbeam Radiation Therapy (MRT) is an innovative radiotherapy modality which uses highly focused synchrotron-generated X-ray microbeams. Current pre-clinical research in MRT mostly relies on Monte Carlo (MC) simulations for dose estimation, which are highly accurate but computationally intensive. Recently, Deep Learning (DL) dose engines have proven effective at generating fast and reliable dose distributions in different RT modalities. However, relatively few studies compare different models on the same task. This work aims to compare a Graph-Convolutional-Network-based DL model, developed in the context of Very High Energy Electron RT, to the convolutional 3D U-Net that we recently implemented for MRT dose predictions. The two DL solutions are trained on 3D dose maps, generated with the Monte Carlo toolkit Geant4, for rats used in MRT pre-clinical research. The models are evaluated against Geant4 simulations, used as ground truth, and are assessed in terms of Mean Absolute Error, Mean Relative Error, and a voxel-wise version of the γ-index. Also presented are specific comparisons of predictions in relevant tumor regions, tissue boundaries, and air pockets. The two models are finally compared in terms of execution time and model size. This study finds that the two models achieve comparable overall performance. The main differences lie in their dosimetric accuracy within specific regions, such as air pockets, and their respective inference times. Consequently, the choice between models should be guided primarily by data structure and time constraints, favoring the graph-based method for its flexibility or the 3D U-Net for its faster execution.
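A minimal sketch of the voxel-wise evaluation metrics named above follows. Note that the pass-rate function applies only a dose-difference tolerance; a full γ-index additionally searches over a distance-to-agreement criterion, which is omitted here, and the tolerance value is illustrative.

```python
import numpy as np

def mae(pred, ref):
    """Mean Absolute Error against the Monte Carlo reference dose."""
    return float(np.mean(np.abs(pred - ref)))

def mre(pred, ref, eps=1e-6):
    """Mean Relative Error, with a small epsilon to avoid division by zero."""
    return float(np.mean(np.abs(pred - ref) / (np.abs(ref) + eps)))

def dose_diff_pass_rate(pred, ref, tol=0.03):
    """Fraction of voxels whose dose deviates from the reference by < tol * max dose."""
    return float(np.mean(np.abs(pred - ref) < tol * ref.max()))

ref = np.random.rand(16, 16, 16)                  # stand-in MC dose map
pred = ref + 0.01 * np.random.randn(*ref.shape)   # stand-in DL prediction
print(mae(pred, ref), mre(pred, ref), dose_diff_pass_rate(pred, ref))
```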

Deep learning radiomics and mediastinal adipose tissue-based nomogram for preoperative prediction of postoperative brain metastasis risk in non-small cell lung cancer.

Niu Y, Jia HB, Li XM, Huang WJ, Liu PP, Liu L, Liu ZY, Wang QJ, Li YZ, Miao SD, Wang RT, Duan ZX

PubMed paper · Jul 1 2025
Brain metastasis (BM) significantly affects the prognosis of non-small cell lung cancer (NSCLC) patients. Increasing evidence suggests that adipose tissue influences cancer progression and metastasis. This study aimed to develop a predictive nomogram integrating mediastinal fat area (MFA) and deep learning (DL)-derived tumor characteristics to stratify postoperative BM risk in NSCLC patients. A retrospective cohort of 585 surgically resected NSCLC patients was analyzed. Preoperative computed tomography (CT) scans were utilized to quantify MFA using ImageJ software (radiologist-validated measurements). Concurrently, a DL algorithm extracted tumor radiomic features, generating a deep learning brain metastasis score (DLBMS). Multivariate logistic regression identified independent BM predictors, which were incorporated into a nomogram. Model performance was assessed via area under the receiver operating characteristic curve (AUC), calibration plots, integrated discrimination improvement (IDI), net reclassification improvement (NRI), and decision curve analysis (DCA). Multivariate analysis identified N stage, EGFR mutation status, MFA, and DLBMS as independent predictors of BM. The nomogram achieved superior discriminative capacity (AUC: 0.947 in the test set), significantly outperforming conventional models. MFA contributed substantially to predictive accuracy, with IDI and NRI values confirming its incremental utility (IDI: 0.123, P < 0.001; NRI: 0.386, P = 0.023). Calibration analysis demonstrated strong concordance between predicted and observed BM probabilities, while DCA confirmed clinical net benefit across risk thresholds. This DL-enhanced nomogram, incorporating MFA and tumor radiomics, represents a robust and clinically useful tool for preoperative prediction of postoperative BM risk in NSCLC. The integration of adipose tissue metrics with advanced imaging analytics advances personalized prognostic assessment in NSCLC patients. The online version contains supplementary material available at 10.1186/s12885-025-14466-5.
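The modelling step behind such a nomogram is a multivariate logistic regression whose coefficients are later rendered graphically. A minimal sketch with synthetic, purely illustrative data and the reported predictor set follows; it is not the study's model or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
# Synthetic stand-ins for the four reported predictors.
X = np.column_stack([
    rng.integers(0, 3, n),        # N stage (0-2)
    rng.integers(0, 2, n),        # EGFR mutation status (0/1)
    rng.normal(25, 5, n),         # mediastinal fat area (cm^2)
    rng.uniform(0, 1, n),         # deep learning brain metastasis score (DLBMS)
])
# Fake outcome loosely driven by the DL score, for demonstration only.
y = (0.8 * X[:, 3] + 0.05 * X[:, 0] + rng.normal(0, 0.3, n) > 0.6).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"training AUC: {auc:.3f}")
```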

Improving YOLO-based breast mass detection with transfer learning pretraining on the OPTIMAM Mammography Image Database.

Ho PS, Tsai HY, Liu I, Lee YY, Chan SW

PubMed paper · Jul 1 2025
Early detection of breast cancer through mammography significantly improves survival rates. However, high false positive and false negative rates remain a challenge. Deep learning-based computer-aided diagnosis systems can assist in lesion detection, but their performance is often limited by the availability of labeled clinical data. This study systematically evaluated the effectiveness of transfer learning, image preprocessing techniques, and the latest You Only Look Once (YOLO) model (v9) for optimizing breast mass detection models on small proprietary datasets. We examined 133 mammography images containing masses and assessed various preprocessing strategies, including cropping and contrast enhancement. We further investigated the impact of transfer learning using the OPTIMAM Mammography Image Database (OMI-DB) compared with training on proprietary data alone. The performance of YOLOv9 was evaluated against YOLOv7 to determine improvements in detection accuracy. Pretraining on the OMI-DB dataset with cropped images significantly improved model performance, with YOLOv7 achieving a 13.9% higher mean average precision (mAP) and a 13.2% higher F1-score compared with training only on proprietary data. Among the tested models and configurations, the best results were obtained using YOLOv9 pretrained on OMI-DB and fine-tuned with cropped proprietary images, yielding an mAP of 73.3% ± 16.7% and an F1-score of 76.0% ± 13.4%; under this condition, YOLOv9 outperformed YOLOv7 by 8.1% in mAP and 9.2% in F1-score. This study provides a systematic evaluation of transfer learning and preprocessing techniques for breast mass detection in small datasets. Our results demonstrate that YOLOv9 with OMI-DB pretraining significantly enhances the performance of breast mass detection models while reducing training time, providing a valuable guideline for optimizing deep learning models in data-limited clinical applications.
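A rough sketch of the two-stage pretrain-then-fine-tune recipe is shown below, written against the Ultralytics YOLO API as an assumption (the paper does not state which implementation was used); dataset YAML paths, epochs, and image size are placeholders.

```python
from ultralytics import YOLO

# Stage 1: pretrain on the large public OMI-DB mammography dataset
# (hypothetical dataset YAML path).
model = YOLO("yolov9c.pt")                     # COCO-pretrained YOLOv9 weights
model.train(data="omidb_masses.yaml", epochs=100, imgsz=640)

# Stage 2: fine-tune the OMI-DB-pretrained weights on the small proprietary
# dataset of cropped mammograms (paths are placeholders).
model = YOLO("runs/detect/train/weights/best.pt")
model.train(data="proprietary_masses.yaml", epochs=50, imgsz=640)

metrics = model.val()                          # validation metrics, including mAP
print(metrics.box.map)                         # mAP@0.5:0.95 on the validation split
```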

Artificial Intelligence in Obstetric and Gynecological MR Imaging.

Saida T, Gu W, Hoshiai S, Ishiguro T, Sakai M, Amano T, Nakahashi Y, Shikama A, Satoh T, Nakajima T

PubMed paper · Jul 1 2025
This review explores the significant progress and applications of artificial intelligence (AI) in obstetric and gynecological MRI, charting its development from foundational algorithmic techniques to deep learning strategies and advanced radiomics. It features research published over the last few years that has applied AI to MRI to identify specific conditions such as uterine leiomyosarcoma, endometrial cancer, cervical cancer, ovarian tumors, and placenta accreta. In addition, it covers studies on the application of AI to segmentation and quality improvement in obstetric and gynecological MRI. The review also outlines the existing challenges and envisions future directions for AI research in this domain. The growing accessibility of extensive datasets across various institutions and the application of multiparametric MRI are significantly enhancing the accuracy and adaptability of AI. This progress has the potential to enable more accurate and efficient diagnosis, offering opportunities for personalized medicine in the field of obstetrics and gynecology.