
Enhancing Corpus Callosum Segmentation in Fetal MRI via Pathology-Informed Domain Randomization

Marina Grifell i Plana, Vladyslav Zalevskyi, Léa Schmidt, Yvan Gomez, Thomas Sanchez, Vincent Dunet, Mériam Koob, Vanessa Siffredi, Meritxell Bach Cuadra

arXiv preprint · Aug 28, 2025
Accurate fetal brain segmentation is crucial for extracting biomarkers and assessing neurodevelopment, especially in conditions such as corpus callosum dysgenesis (CCD), which can induce drastic anatomical changes. However, the rarity of CCD severely limits annotated data, hindering the generalization of deep learning models. To address this, we propose a pathology-informed domain randomization strategy that embeds prior knowledge of CCD manifestations into a synthetic data generation pipeline. By simulating diverse brain alterations from healthy data alone, our approach enables robust segmentation without requiring pathological annotations. We validate our method on a cohort comprising 248 healthy fetuses, 26 with CCD, and 47 with other brain pathologies, achieving substantial improvements on CCD cases while maintaining performance on both healthy fetuses and those with other pathologies. From the predicted segmentations, we derive clinically relevant biomarkers, such as corpus callosum length (LCC) and volume, and show their utility in distinguishing CCD subtypes. Our pathology-informed augmentation reduces the LCC estimation error from 1.89 mm to 0.80 mm in healthy cases and from 10.9 mm to 0.7 mm in CCD cases. Beyond these quantitative gains, our approach yields segmentations with improved topological consistency relative to available ground truth, enabling more reliable shape-based analyses. Overall, this work demonstrates that incorporating domain-specific anatomical priors into synthetic data pipelines can effectively mitigate data scarcity and enhance analysis of rare but clinically significant malformations.
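
The augmentation idea above lends itself to a compact sketch. Below is a minimal, hypothetical version of pathology-informed label randomization: starting from a healthy fetal label map, the corpus callosum label is randomly thinned (hypoplasia-like) or removed (agenesis-like) before image synthesis, so no annotated CCD cases are needed. The label IDs, severity thresholds, and the downstream SynthSeg-style intensity sampling are assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy import ndimage

# Hypothetical label IDs; a real pipeline would take these from its atlas.
CC_LABEL, CSF_LABEL = 5, 1

def randomize_cc_pathology(label_map: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Simulate CCD-like anatomy from a healthy fetal label map.

    Randomly thin (hypoplasia-like) or remove (agenesis-like) the corpus
    callosum label, so a model trained on images synthesized from the
    result sees CCD-like shapes without any annotated pathological cases.
    """
    out = label_map.copy()
    cc = out == CC_LABEL
    severity = rng.uniform(0.0, 1.0)
    if severity > 0.9:                                   # complete agenesis
        out[cc] = CSF_LABEL
    elif severity > 0.5:                                 # hypoplasia: erode the CC
        thinned = ndimage.binary_erosion(cc, iterations=int(rng.integers(1, 3)))
        out[cc & ~thinned] = CSF_LABEL
    return out                                           # else: healthy, unchanged

# Downstream, a SynthSeg-style generator would sample random intensities
# per label from the randomized map to create each synthetic training image.
```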

Deep Learning-Based Generation of DSC MRI Parameter Maps Using Dynamic Contrast-Enhanced MRI Data.

Pei H, Lyu Y, Lambrecht S, Lin D, Feng L, Liu F, Nyquist P, van Zijl P, Knutsson L, Xu X

PubMed paper · Aug 28, 2025
Perfusion and perfusion-related parameter maps obtained by using dynamic susceptibility contrast (DSC) MRI and dynamic contrast-enhanced (DCE) MRI are both useful for clinical diagnosis and research. However, using both DSC and DCE MRI in the same scan session requires two doses of gadolinium contrast agent. The objective was to develop deep learning-based methods to synthesize DSC-derived parameter maps from DCE MRI data. Independent analysis of data collected in previous studies was performed. The database contained 64 participants, including patients with and without brain tumors. The reference parameter maps were measured from DSC MRI performed after DCE MRI. A conditional generative adversarial network (cGAN) was designed and trained to generate synthetic DSC-derived maps from DCE MRI data. The median parameter values and distributions between synthetic and real maps were compared by using linear regression and Bland-Altman plots. Using the cGAN, realistic DSC parameter maps could be synthesized from DCE MRI data. For controls without brain tumors, the synthesized parameters had distributions similar to the ground truth values. For patients with brain tumors, the synthesized parameters in the tumor region correlated linearly with the ground truth values. In addition, areas not visible due to susceptibility artifacts in real DSC maps could be visualized by using DCE-derived DSC maps. DSC-derived parameter maps could be synthesized by using DCE MRI data, including in susceptibility-artifact-prone regions. This shows the potential to obtain both DSC and DCE parameter maps from DCE MRI by using a single dose of contrast agent.
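
For readers unfamiliar with conditional GANs, a pix2pix-style training step captures the core of the approach described above. The generator and discriminator definitions, channel layout, and the L1 weight below are placeholders, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

# A minimal pix2pix-style cGAN training step: the generator G maps DCE MRI
# frames to DSC-derived parameter maps (e.g., CBF/CBV/MTT), and the
# discriminator D judges (DCE, DSC-map) pairs. Networks are assumed given.
def cgan_step(G, D, dce, dsc_ref, opt_g, opt_d, l1_weight=100.0):
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    # --- discriminator: real (DCE, real DSC) vs fake (DCE, G(DCE)) pairs ---
    fake = G(dce)
    d_real = D(torch.cat([dce, dsc_ref], dim=1))
    d_fake = D(torch.cat([dce, fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- generator: fool D while staying close to the reference map (L1) ---
    d_fake = D(torch.cat([dce, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * l1(fake, dsc_ref)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```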

Hybrid quantum-classical-quantum convolutional neural networks.

Long C, Huang M, Ye X, Futamura Y, Sakurai T

PubMed paper · Aug 28, 2025
Deep learning has achieved significant success in pattern recognition, with convolutional neural networks (CNNs) serving as a foundational architecture for extracting spatial features from images. Quantum computing provides an alternative computational framework: hybrid quantum-classical convolutional neural networks (QCCNNs) leverage high-dimensional Hilbert spaces and entanglement to surpass classical CNNs in image classification accuracy under comparable architectures. Despite these performance improvements, QCCNNs typically use fixed quantum layers without trainable quantum parameters. This limits their ability to capture non-linear quantum representations and cuts the model off from the potential advantages of expressive quantum learning. In this work, we present a hybrid quantum-classical-quantum convolutional neural network (QCQ-CNN) that combines a quantum convolutional filter, a shallow classical CNN, and a trainable variational quantum classifier. This architecture aims to enhance the expressivity of decision boundaries in image classification tasks by introducing tunable quantum parameters into the end-to-end learning process. Through a series of small-sample experiments on MNIST, F-MNIST, and MRI tumor datasets, QCQ-CNN demonstrates competitive accuracy and convergence behavior compared to classical and hybrid baselines. We further analyze the effect of ansatz depth and find that moderate-depth quantum circuits can improve learning stability without introducing excessive complexity. Additionally, simulations incorporating depolarizing noise and finite sampling shots suggest that QCQ-CNN maintains a degree of robustness under realistic quantum noise conditions. While our results are currently limited to simulations with small-scale quantum circuits, the proposed approach offers a promising direction for hybrid quantum learning in near-term applications.
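
A trainable variational quantum classifier, the component that distinguishes QCQ-CNN from fixed-layer QCCNNs, can be sketched in a few lines with PennyLane. The embedding, ansatz, qubit count, and readout below are illustrative choices, not the paper's exact circuit.

```python
import pennylane as qml
import numpy as np

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def vqc(features, weights):
    # Angle-encode pooled classical CNN features onto the qubits, then
    # apply a trainable entangling ansatz; n_layers plays the role of the
    # ansatz depth analyzed in the paper's ablation.
    qml.AngleEmbedding(features, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))   # one expectation read out as a logit

shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
weights = np.random.uniform(0, np.pi, size=shape)
print(vqc(np.array([0.1, 0.2, 0.3, 0.4]), weights))
```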

Reverse Imaging for Wide-spectrum Generalization of Cardiac MRI Segmentation

Yidong Zhao, Peter Kellman, Hui Xue, Tongyun Yang, Yi Zhang, Yuchi Han, Orlando Simonetti, Qian Tao

arXiv preprint · Aug 28, 2025
Pretrained segmentation models for cardiac magnetic resonance imaging (MRI) struggle to generalize across different imaging sequences due to significant variations in image contrast. These variations arise from changes in imaging protocols, yet the same fundamental spin properties, including proton density, T1, and T2 values, govern all acquired images. Building on this core principle, we introduce Reverse Imaging, a novel physics-driven method for cardiac MRI data augmentation and domain adaptation that addresses the generalization problem at its root. Our method infers the underlying spin properties from observed cardiac MRI images by solving ill-posed nonlinear inverse problems regularized by the prior distribution of spin properties. We acquire this "spin prior" by learning a generative diffusion model from the multiparametric SAturation-recovery single-SHot acquisition sequence (mSASHA) dataset, which offers joint cardiac T1 and T2 maps. Our method yields approximate but meaningful spin-property estimates from MR images, which provide an interpretable "latent variable" that supports highly flexible image synthesis of arbitrary novel sequences. We show that Reverse Imaging enables highly accurate segmentation across vastly different image contrasts and imaging protocols, realizing wide-spectrum generalization of cardiac MRI segmentation.
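
The inverse problem at the heart of Reverse Imaging can be illustrated with a toy voxel-wise fit: recover spin properties from multi-contrast observations by minimizing data fidelity plus a prior penalty. The saturation-recovery-style signal model, the Gaussian stand-in for the learned diffusion prior, and all parameter values below are simplifying assumptions, not the paper's actual formulation.

```python
import numpy as np
from scipy.optimize import minimize

def forward(theta, tr, te):
    # Toy signal model: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2).
    pd, t1, t2 = theta
    return pd * (1 - np.exp(-tr / t1)) * np.exp(-te / t2)

def objective(theta, y, tr, te, lam=0.1):
    resid = forward(theta, tr, te) - y
    # Stand-in Gaussian prior around nominal myocardial values (seconds);
    # the paper regularizes with a learned diffusion model instead.
    prior = (theta[1] - 1.0) ** 2 + (theta[2] - 0.045) ** 2
    return np.sum(resid ** 2) + lam * prior

tr = np.array([0.3, 0.6, 1.2, 2.4])        # seconds, hypothetical protocol
te = np.array([0.01, 0.02, 0.04, 0.08])
y = forward((0.9, 1.1, 0.05), tr, te)      # simulated observation
fit = minimize(objective, x0=(1.0, 1.0, 0.05), args=(y, tr, te),
               bounds=[(0, 2), (0.1, 3), (0.005, 0.3)])
print(fit.x)                               # recovered (PD, T1, T2)
```

Once spin properties are in hand, plugging them into the forward model of any other sequence yields the synthetic novel-contrast images used for augmentation.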

FaithfulNet: An explainable deep learning framework for autism diagnosis using structural MRI.

Sujana DS, Augustine DP

PubMed paper · Aug 27, 2025
Explainable Artificial Intelligence (XAI) can decode 'black-box' models, enhancing trust in clinical decision-making. XAI makes the predictions of deep learning models interpretable, transparent, and trustworthy. This study employed XAI techniques to explain the predictions of a deep learning-based model for diagnosing autism and to identify the memory regions responsible for children's academic performance. This study utilized publicly available structural MRI (sMRI) data from the ABIDE-II repository. First, a deep learning model, FaithfulNet, was developed to aid in the diagnosis of autism. Next, gradient-based class activation maps and the SHAP gradient explainer were employed to generate explanations for the model's predictions. These explanations were integrated to develop a novel and faithful visual explanation, Faith_CAM. Finally, this faithful explanation was quantified using the pointing game score and analyzed with cortical and subcortical structure masks to identify the impaired regions in the autistic brain. This study achieved a classification accuracy of 99.74% with an AUC value of 1. In addition to facilitating autism diagnosis, this study assesses the degree of impairment in memory regions responsible for children's academic performance, thus contributing to the development of personalized treatment plans.
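
How two attribution maps might be fused into a single explanation, and how the pointing game scores it, can be sketched briefly. The abstract does not specify the Faith_CAM fusion rule, so the elementwise product of normalized maps below is one plausible stand-in, not the authors' method.

```python
import numpy as np

def fused_cam(grad_cam: np.ndarray, shap_map: np.ndarray) -> np.ndarray:
    """Fuse Grad-CAM and SHAP attributions into one saliency map.

    An elementwise product of min-max-normalized maps keeps only the
    regions both methods agree on; one common (assumed) fusion choice.
    """
    norm = lambda m: (m - m.min()) / (m.max() - m.min() + 1e-8)
    return norm(norm(grad_cam) * norm(np.abs(shap_map)))

def pointing_game_hit(saliency: np.ndarray, region_mask: np.ndarray) -> bool:
    # Pointing game: a hit if the saliency maximum falls inside the
    # annotated region (here, a cortical/subcortical structure mask).
    return bool(region_mask[np.unravel_index(saliency.argmax(), saliency.shape)])
```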

Quantum integration in swin transformer mitigates overfitting in breast cancer screening.

Xie Z, Yang X, Zhang S, Yang J, Zhu Y, Zhang A, Sun H, Dai Q, Li L, Liu H, Ming W, Dou M

PubMed paper · Aug 27, 2025
To explore the potential of quantum computing for advancing transformer-based deep learning models in breast cancer screening, this study introduces the Quantum-Enhanced Swin Transformer (QEST). This model integrates a Variational Quantum Circuit (VQC) that replaces the fully connected classification layer of the Swin Transformer architecture. In simulations, QEST exhibited competitive accuracy and generalization performance compared to the original Swin Transformer, while also mitigating overfitting. Specifically, in 16-qubit simulations, the VQC reduced the parameter count by 62.5% compared with the replaced fully connected layer and improved Balanced Accuracy (BACC) by 3.62% in external validation. Furthermore, validation experiments conducted on an actual quantum computer corroborated the effectiveness of QEST.
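
Swapping a classifier head for a VQC is mechanically simple in a hybrid framework. Below is a sketch using torchvision's Swin-T and PennyLane's TorchLayer; the qubit count (scaled down from the paper's 16 for simulation speed), ansatz, and surrounding linear layers are assumptions rather than the QEST configuration.

```python
import torch.nn as nn
import torchvision.models as tvm
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(inputs, weights):
    # Encode compressed features, apply a trainable entangler, and read
    # out one expectation value per qubit.
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

model = tvm.swin_t(weights=None)
in_features = model.head.in_features
model.head = nn.Sequential(
    nn.Linear(in_features, n_qubits),                         # compress features
    qml.qnn.TorchLayer(circuit, {"weights": (2, n_qubits)}),  # trainable VQC
    nn.Linear(n_qubits, 2),                                   # benign vs malignant
)
```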

A Systematic Review on the Generative AI Applications in Human Medical Genomics

Anton Changalidis, Yury Barbitoff, Yulia Nasykhova, Andrey Glotov

arXiv preprint · Aug 27, 2025
Although traditional statistical techniques and machine learning methods have contributed significantly to genetics and, in particular, inherited disease diagnosis, they often struggle with complex, high-dimensional data, a challenge now addressed by state-of-the-art deep learning models. Large language models (LLMs), based on transformer architectures, have excelled in tasks requiring contextual comprehension of unstructured medical data. This systematic review examines the role of LLMs in genetic research and the diagnostics of both rare and common diseases. An automated keyword-based search of PubMed, bioRxiv, medRxiv, and arXiv was conducted, targeting studies on LLM applications in diagnostics and education within genetics; irrelevant studies and outdated models were excluded. A total of 172 studies were analyzed, highlighting applications in genomic variant identification, annotation, and interpretation, as well as medical imaging advancements through vision transformers. Key findings indicate that while transformer-based models significantly advance disease and risk stratification, variant interpretation, medical imaging analysis, and report generation, major challenges persist in integrating multimodal data (genomic sequences, imaging, and clinical records) into unified, clinically robust pipelines, which remain limited in generalizability and practical implementation in clinical settings. This review provides a comprehensive classification and assessment of the current capabilities and limitations of LLMs in transforming hereditary disease diagnostics and supporting genetic education, serving as a guide to this rapidly evolving field.

Intelligent Head and Neck CTA Report Quality Detection with Large Language Models.

Tian L, Lu Y, Fei X, Lu J

PubMed paper · Aug 27, 2025
This study aims to identify common errors in head and neck CTA reports using GPT-4, ERNIE Bot, and SparkDesk, evaluating their potential to support quality control of Chinese radiological reports. We collected 10,000 head and neck CTA imaging reports from Xuanwu Hospital (Dataset 1) and 5000 multi-center reports (Dataset 2). We identified six common types of errors and detected them using three large language models: GPT-4, ERNIE Bot, and SparkDesk. The overall quality of the reports was assessed using a 5-point Likert scale. We conducted Wilcoxon rank-sum and Friedman tests to compare error detection rates and evaluate the models' performance on different error types and overall scores. For Dataset 2, after manual review, we annotated the six error types and provided overall scoring, while also recording the time taken for manual scoring and model detection. Model performance was evaluated using accuracy, precision, recall, and F1 score. The intraclass correlation coefficient (ICC) measured consistency between manual and model scores, and ANOVA compared evaluation times. In Dataset 1, the error detection rates for final reports were significantly lower than those for preliminary reports for all three models. The Friedman test indicated significant differences in error rates among the three models. In Dataset 2, the detection accuracy of the three LLMs for the six error types was above 95%. GPT-4 had moderate consistency with manual scores (ICC = 0.517), while ERNIE Bot and SparkDesk showed slightly lower consistency (ICC = 0.431 and 0.456, respectively; P < 0.001). The models evaluated one hundred radiology reports significantly faster than human reviewers. LLMs can differentiate the quality of radiology reports and identify error types, significantly enhancing the efficiency of quality control reviews and offering substantial research and practical value in this field.
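
The per-error-type evaluation reduces to standard binary classification metrics. A sketch with scikit-learn follows; the six error-type names are hypothetical, since the abstract does not enumerate them.

```python
from sklearn.metrics import precision_recall_fscore_support, accuracy_score

# Hypothetical error taxonomy; the paper defines its own six types.
ERROR_TYPES = ["laterality", "omission", "typo", "contradiction",
               "wrong_section", "template_residue"]

def evaluate(model_flags: dict, manual_flags: dict) -> None:
    """Each dict maps error_type -> list of 0/1 flags, one per report."""
    for err in ERROR_TYPES:
        y_true, y_pred = manual_flags[err], model_flags[err]
        p, r, f1, _ = precision_recall_fscore_support(
            y_true, y_pred, average="binary", zero_division=0)
        print(f"{err:>16}: acc={accuracy_score(y_true, y_pred):.3f} "
              f"P={p:.3f} R={r:.3f} F1={f1:.3f}")
```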

HONeYBEE: Enabling Scalable Multimodal AI in Oncology Through Foundation Model-Driven Embeddings

Tripathi, A. G., Waqas, A., Schabath, M. B., Yilmaz, Y., Rasool, G.

medRxiv preprint · Aug 27, 2025
HONeYBEE (Harmonized ONcologY Biomedical Embedding Encoder) is an open-source framework that integrates multimodal biomedical data for oncology applications. It processes clinical data (structured and unstructured), whole-slide images, radiology scans, and molecular profiles to generate unified patient-level embeddings using domain-specific foundation models and fusion strategies. These embeddings enable survival prediction, cancer-type classification, patient similarity retrieval, and cohort clustering. Evaluated on 11,400+ patients across 33 cancer types from The Cancer Genome Atlas (TCGA), clinical embeddings showed the strongest single-modality performance with 98.5% classification accuracy and 96.4% precision@10 in patient retrieval. They also achieved the highest survival prediction concordance indices across most cancer types. Multimodal fusion provided complementary benefits for specific cancers, improving overall survival prediction beyond clinical features alone. Comparative evaluation of four large language models revealed that general-purpose models like Qwen3 outperformed specialized medical models for clinical text representation, though task-specific fine-tuning improved performance on heterogeneous data such as pathology reports.
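
Patient similarity retrieval with precision@10, one of the headline metrics above, is straightforward to compute from patient-level embeddings. A sketch, assuming cosine similarity over L2-normalized vectors:

```python
import numpy as np

def precision_at_10(embeddings: np.ndarray, labels: np.ndarray) -> float:
    """Patient-retrieval precision@10 over an (n_patients, dim) matrix.

    For each patient, retrieve the 10 nearest neighbors by cosine
    similarity and count how many share the query's cancer type; the
    96.4% precision@10 figure reflects this kind of metric.
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = z @ z.T
    np.fill_diagonal(sims, -np.inf)            # exclude the query itself
    top10 = np.argsort(-sims, axis=1)[:, :10]
    hits = labels[top10] == labels[:, None]
    return float(hits.mean())
```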

Evaluating the Quality and Understandability of Radiology Report Summaries Generated by ChatGPT: Survey Study.

Sunshine A, Honce GH, Callen AL, Zander DA, Tanabe JL, Pisani Petrucci SL, Lin CT, Honce JM

PubMed paper · Aug 27, 2025
Radiology reports convey critical medical information to health care providers and patients. Unfortunately, they are often difficult for patients to comprehend, causing confusion and anxiety and thereby limiting patient engagement in health care decision-making. Large language models (LLMs) like ChatGPT (OpenAI) can create simplified, patient-friendly report summaries to increase accessibility, albeit with errors. We evaluated the accuracy and clarity of ChatGPT-generated summaries relative to the original radiology reports (as assessed by radiologists), assessed patients' understanding of and satisfaction with the summaries compared to the original reports, and compared the readability of the original reports and summaries using validated readability metrics. We anonymized 30 radiology reports created by neuroradiologists at our institution (6 brain magnetic resonance imaging, 6 brain computed tomography, 6 head and neck computed tomography angiography, 6 neck computed tomography, and 6 spine computed tomography). These anonymized reports were processed by ChatGPT to produce patient-centric summaries. Four board-certified neuroradiologists evaluated the ChatGPT-generated summaries for quality and accuracy against the original reports, and 4 patient volunteers separately evaluated the reports and summaries for perceived understandability and satisfaction. Readability was assessed using word count and validated readability scales. After reading the summary, patient confidence in understanding (98%, 116/118 vs 26%, 31/118) and satisfaction regarding the level of jargon/terminology (91%, 107/118 vs 8%, 9/118) and the time taken to understand the content (97%, 115/118 vs 23%, 27/118) improved substantially. Ninety-two percent (108/118) of responses indicated the summary clarified patients' questions about the report, and 98% (116/118) of responses indicated patients would use the summary if available, with 67% (79/118) of responses indicating they would want access to both the report and the summary and 26% (31/118) indicating they would want only the summary. Eighty-three percent (100/120) of radiologist responses indicated the summary represented the original report "extremely well" or "very well," with only 5% (6/120) of responses indicating it did so "slightly well" or "not well at all." Five percent (6/120) of responses indicated relevant medical information was missing from the summary, 12% (14/120) reported instances of overemphasis of nonsignificant findings, and 18% (22/120) reported instances of underemphasis of significant findings. No fabricated findings were identified. Overall, 83% (99/120) of responses indicated that the summary would definitely or probably not lead patients to incorrect conclusions about the original report, while 10% (12/120) indicated it might. ChatGPT-generated summaries could significantly improve perceived comprehension and satisfaction while accurately reflecting most key information from the original radiology reports. Instances of minor omissions and under- or overemphasis were noted in some summaries, underscoring the need for ongoing validation and oversight. Overall, these artificial intelligence-generated, patient-centric summaries hold promise for enhancing patient-centered communication in radiology.
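
The readability comparison can be reproduced with standard metrics. A sketch using the textstat package follows; the abstract names only "validated readability scales," so the choice of Flesch metrics here is an assumption.

```python
import textstat

def readability(text: str) -> dict:
    # Word count plus two widely used, validated readability scales.
    return {
        "words": textstat.lexicon_count(text),
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
        "fk_grade": textstat.flesch_kincaid_grade(text),
    }

report = "Mild chronic microvascular ischemic changes without acute infarct."
summary = "The scan shows mild, common age-related changes. No stroke."
print(readability(report), readability(summary))
```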