Page 17 of 38374 results

Epistasis regulates genetic control of cardiac hypertrophy.

Wang Q, Tang TM, Youlton M, Weldy CS, Kenney AM, Ronen O, Hughes JW, Chin ET, Sutton SC, Agarwal A, Li X, Behr M, Kumbier K, Moravec CS, Tang WHW, Margulies KB, Cappola TP, Butte AJ, Arnaout R, Brown JB, Priest JR, Parikh VN, Yu B, Ashley EA

PubMed · Jun 5 2025
Although genetic variant effects often interact nonadditively, strategies to uncover epistasis remain in their infancy. Here we develop low-signal signed iterative random forests to elucidate the complex genetic architecture of cardiac hypertrophy, using deep learning-derived left ventricular mass estimates from 29,661 UK Biobank cardiac magnetic resonance images. We report epistatic variants near CCDC141, IGF1R, TTN and TNKS, identifying loci deemed insignificant in genome-wide association studies. Functional genomic and integrative enrichment analyses reveal that genes mapped from these loci share biological process gene ontologies and myogenic regulatory factors. Transcriptomic network analyses using 313 human hearts demonstrate strong co-expression correlations among these genes in healthy hearts, with significantly reduced connectivity in failing hearts. To assess causality, RNA silencing in human induced pluripotent stem cell-derived cardiomyocytes, combined with novel microfluidic single-cell morphology analysis, confirms that cardiomyocyte hypertrophy is nonadditively modifiable by interactions between CCDC141, TTN and IGF1R. Our results expand the scope of cardiac genetic regulation to epistasis.
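The paper's low-signal signed iterative random forests are specialized machinery, but the underlying notion of nonadditivity can be illustrated with a toy regression: if adding a variant-interaction term to a main-effects model materially improves the fit, the variants act epistatically. A minimal numpy sketch (all data simulated and hypothetical, not the study's method or cohort):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two biallelic variants (0/1/2 dosages) and a
# phenotype with a purely nonadditive (epistatic) component g1*g2.
g1 = rng.integers(0, 3, size=2000)
g2 = rng.integers(0, 3, size=2000)
y = 0.5 * g1 + 0.5 * g2 + 1.0 * (g1 * g2) + rng.normal(0, 1, size=2000)

def r2(cols, y):
    """R^2 of an ordinary least-squares fit with an intercept column."""
    X = np.column_stack([np.ones_like(y), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

additive = r2([g1, g2], y)            # main effects only
epistatic = r2([g1, g2, g1 * g2], y)  # plus interaction term

print(f"additive R^2={additive:.3f}, with interaction R^2={epistatic:.3f}")
```

The gap between the two R^2 values is a crude signal of epistasis; the paper's approach is designed to find such interactions genome-wide without enumerating pairs.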

Role of Large Language Models for Suggesting Nerve Involvement in Upper Limbs MRI Reports with Muscle Denervation Signs.

Martín-Noguerol T, López-Úbeda P, Luna A, Gómez-Río M, Górriz JM

PubMed · Jun 5 2025
Determining the involvement of specific peripheral nerves (PNs) in the upper limb associated with signs of muscle denervation can be challenging. This study aims to develop, compare, and validate various large language models (LLMs) to automatically identify and establish potential relationships between denervated muscles and their corresponding PNs. We collected 300 retrospective MRI reports in Spanish from upper limb examinations conducted between 2018 and 2024 that showed signs of muscle denervation. An expert radiologist manually annotated these reports based on the affected peripheral nerves (median, ulnar, radial, axillary, and suprascapular). BERT, DistilBERT, mBART, RoBERTa, and Medical-ELECTRA models were fine-tuned and evaluated on the reports. Additionally, an automatic voting system was implemented to consolidate predictions through majority voting. The voting system achieved the highest F1 scores for the median, ulnar, and radial nerves, with scores of 0.88, 1.00, and 0.90, respectively. Medical-ELECTRA also performed well, achieving F1 scores above 0.82 for the axillary and suprascapular nerves. In contrast, mBART demonstrated lower performance, particularly with an F1 score of 0.38 for the median nerve. Our voting system generally outperforms the individually tested LLMs in determining the specific PN likely associated with muscle denervation patterns detected in upper limb MRI reports. This system can thereby assist radiologists by suggesting the implicated PN when generating their radiology reports.
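The consolidation step can be sketched simply: each fine-tuned model casts one vote per report, and the most frequent nerve label wins. A minimal sketch (model predictions below are hypothetical, not from the study):

```python
from collections import Counter

def majority_vote(predictions):
    """Consolidate per-model nerve labels into a single prediction.

    `predictions` maps model name -> predicted nerve label for one report.
    On a tie, the label that first reaches the top count is returned.
    """
    counts = Counter(predictions.values())
    return counts.most_common(1)[0][0]

# Hypothetical outputs from the five fine-tuned models for one MRI report.
report_preds = {
    "BERT": "ulnar",
    "DistilBERT": "ulnar",
    "mBART": "median",
    "RoBERTa": "ulnar",
    "Medical-ELECTRA": "radial",
}
print(majority_vote(report_preds))  # -> ulnar
```

A voting ensemble like this tends to mask the weaknesses of any single model, which is consistent with the voting system outperforming the individual LLMs here.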

Performance analysis of large language models in multi-disease detection from chest computed tomography reports: a comparative study: Experimental Research.

Luo P, Fan C, Li A, Jiang T, Jiang A, Qi C, Gan W, Zhu L, Mou W, Zeng D, Tang B, Xiao M, Chu G, Liang Z, Shen J, Liu Z, Wei T, Cheng Q, Lin A, Chen X

PubMed · Jun 5 2025
Computed Tomography (CT) is widely acknowledged as the gold standard for diagnosing thoracic diseases. However, the accuracy of interpretation significantly depends on radiologists' expertise. Large Language Models (LLMs) have shown considerable promise in various medical applications, particularly in radiology. This study aims to assess the performance of leading LLMs in analyzing unstructured chest CT reports and to examine how different questioning methodologies and fine-tuning strategies influence their effectiveness in enhancing chest CT diagnosis. This retrospective analysis evaluated 13,489 chest CT reports encompassing 13 common thoracic conditions across pulmonary, cardiovascular, pleural, and upper abdominal systems. Five LLMs (Claude-3.5-Sonnet, GPT-4, GPT-3.5-Turbo, Gemini-Pro, Qwen-Max) were assessed using dual questioning methodologies: multiple-choice and open-ended. Radiologist-curated datasets underwent rigorous preprocessing, including RadLex terminology standardization, multi-step diagnostic validation, and exclusion of ambiguous cases. Model performance was quantified via Subjective Answer Accuracy Rate (SAAR), Reference Answer Accuracy Rate (RAAR), and Area Under the Receiver Operating Characteristic (ROC) Curve analysis. GPT-3.5-Turbo underwent fine-tuning (100 iterations with one training epoch) on 200 high-performing cases to enhance diagnostic precision for initially misclassified conditions. GPT-4 demonstrated superior performance with the highest RAAR of 75.1% in multiple-choice questioning, followed by Qwen-Max (66.0%) and Claude-3.5 (63.5%), significantly outperforming GPT-3.5-Turbo (41.8%) and Gemini-Pro (40.8%) across the entire patient cohort. Multiple-choice questioning consistently improved both RAAR and SAAR for all models compared to open-ended questioning, with RAAR consistently surpassing SAAR. Model performance demonstrated notable variations across different diseases and organ conditions. 
Notably, fine-tuning substantially enhanced the performance of GPT-3.5-Turbo, which initially exhibited suboptimal results in most scenarios. This study demonstrated that general-purpose LLMs can effectively interpret chest CT reports, with performance varying significantly across models depending on the questioning methodology and fine-tuning approaches employed. For surgical practice, these findings provided evidence-based guidance for selecting appropriate AI tools to enhance preoperative planning, particularly for thoracic procedures. The integration of optimized LLMs into surgical workflows may improve decision-making efficiency, risk stratification, and diagnostic speed, potentially contributing to better surgical outcomes through more accurate preoperative assessment.

Advancing prenatal healthcare by explainable AI enhanced fetal ultrasound image segmentation using U-Net++ with attention mechanisms.

Singh R, Gupta S, Mohamed HG, Bharany S, Rehman AU, Ghadi YY, Hussen S

PubMed · Jun 4 2025
Advancing prenatal healthcare requires accurate automated techniques for fetal ultrasound image segmentation, enabling standardized evaluation of fetal development while reducing time-intensive manual processes prone to inter-observer variability. This research develops a segmentation framework based on U-Net++ with a ResNet backbone, incorporating attention components to enhance feature extraction in low-contrast, noisy ultrasound data. The model leverages the nested skip connections of U-Net++ and the residual learning of ResNet-34 to achieve state-of-the-art segmentation accuracy. Evaluated on a large fetal ultrasound image collection, the model achieved a 97.52% Dice coefficient, 95.15% Intersection over Union (IoU), and a 3.91 mm Hausdorff distance. An integrated Grad-CAM++ pipeline explains the model's decisions, enhancing clinical utility and trust. This explainability component lets medical professionals examine how the model functions, yielding transparent, verifiable segmentation outputs and better overall reliability. The framework bridges the gap between AI automation and clinical interpretability by highlighting the image regions that drive predictions. The results show that deep learning combined with Explainable AI (XAI) can produce highly accurate medical imaging solutions. The proposed system demonstrates readiness for clinical workflows, delivering a sophisticated prenatal diagnostic instrument that can improve healthcare outcomes.
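The Dice coefficient and IoU reported above are simple overlap ratios between predicted and ground-truth masks (the Hausdorff distance additionally measures worst-case boundary error and is omitted here). A minimal numpy sketch with toy 4x4 masks, not the paper's data:

```python
import numpy as np

def dice_iou(pred, target):
    """Dice coefficient and IoU for two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = 2 * inter / (pred.sum() + target.sum())
    iou = inter / np.logical_or(pred, target).sum()
    return dice, iou

# Hypothetical masks: prediction misses one 2x1 strip of the ground truth.
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:3] = 1  # 4 pixels
gt = np.zeros((4, 4), dtype=int);   gt[1:3, 1:4] = 1    # 6 pixels
d, i = dice_iou(pred, gt)
print(f"Dice={d:.3f}, IoU={i:.3f}")  # Dice=0.800, IoU=0.667
```

Note that Dice is always at least as large as IoU for the same masks, which is why both are usually reported together.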

Interpretable Machine Learning based Detection of Coeliac Disease

Jaeckle, F., Bryant, R., Denholm, J., Romero Diaz, J., Schreiber, B., Shenoy, V., Ekundayomi, D., Evans, S., Arends, M., Soilleux, E.

medRxiv preprint · Jun 4 2025
Background: Coeliac disease, an autoimmune disorder affecting approximately 1% of the global population, is typically diagnosed on a duodenal biopsy. However, inter-pathologist agreement on coeliac disease diagnosis is only around 80%. Existing machine learning solutions designed to improve coeliac disease diagnosis often lack interpretability, which is essential for building trust and enabling widespread clinical adoption.

Objective: To develop an interpretable AI model capable of segmenting key histological structures in duodenal biopsies, generating explainable segmentation masks, estimating intraepithelial lymphocyte (IEL)-to-enterocyte and villus-to-crypt ratios, and diagnosing coeliac disease.

Design: Semantic segmentation models were trained to identify villi, crypts, IELs, and enterocytes using 49 annotated 2048x2048 patches at 40x magnification. IEL-to-enterocyte and villus-to-crypt ratios were calculated from segmentation masks, and a logistic regression model was trained on 172 images to diagnose coeliac disease based on these ratios. Evaluation was performed on an independent test set of 613 duodenal biopsy scans from a separate NHS Trust.

Results: The villus-crypt segmentation model achieved a mean PR AUC of 80.5%, while the IEL-enterocyte model reached a PR AUC of 82%. The diagnostic model classified WSIs with 96% accuracy, 86% positive predictive value, and 98% negative predictive value on the independent test set.

Conclusions: Our interpretable AI models accurately segmented key histological structures and diagnosed coeliac disease in unseen WSIs, demonstrating strong generalization performance. These models provide pathologists with reliable IEL-to-enterocyte and villus-to-crypt ratio estimates, enhancing diagnostic accuracy. Interpretable AI solutions like ours are essential for fostering trust among healthcare professionals and patients, complementing existing black-box methodologies.
What is already known on this topic: Pathologist concordance in diagnosing coeliac disease from duodenal biopsies is consistently reported to be below 80%, highlighting diagnostic variability and the need for improved methods. Several recent studies have leveraged artificial intelligence (AI) to enhance coeliac disease diagnosis. However, most of these models operate as "black boxes," offering limited interpretability and transparency. The lack of explainability in AI-driven diagnostic tools prevents widespread adoption by healthcare professionals and reduces patient trust.

What this study adds: This study presents an interpretable semantic segmentation algorithm capable of detecting the four key histological structures essential for diagnosing coeliac disease: crypts, villi, intraepithelial lymphocytes (IELs), and enterocytes. The model accurately estimates the IEL-to-enterocyte ratio and the villus-to-crypt ratio, the latter being an indicator of villous atrophy and crypt hyperplasia, thereby providing objective, reproducible metrics for diagnosis. The segmentation outputs allow for transparent, explainable decision-making, supporting pathologists in coeliac disease diagnosis with improved accuracy and confidence. The model automates the estimation of the IEL-to-enterocyte ratio, a labour-intensive task currently performed manually by pathologists in limited biopsy regions. By minimising diagnostic variability and alleviating time constraints for pathologists, it provides an efficient and practical solution to streamline the diagnostic workflow. Tested on an independent dataset from a previously unseen source, the model demonstrates explainability and generalizability, enhancing trust and encouraging adoption in routine clinical practice.
Furthermore, this approach could set a new standard for AI-assisted duodenal biopsy evaluation, paving the way for the development of interpretable AI tools in pathology to address the critical challenges of limited pathologist availability and diagnostic inconsistencies.
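The diagnostic step described above, a logistic regression on two segmentation-derived ratios, can be sketched as follows. All numbers are simulated for illustration (coeliac biopsies typically show a raised IEL-to-enterocyte ratio and a lowered villus-to-crypt ratio), not the study's data or fitted model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200

# Hypothetical per-biopsy features as they might come out of the
# segmentation masks: IEL-to-enterocyte ratio and villus-to-crypt ratio.
coeliac = rng.integers(0, 2, size=n)  # 1 = coeliac, 0 = normal
iel_ratio = np.where(coeliac, rng.normal(0.40, 0.10, n), rng.normal(0.15, 0.05, n))
vc_ratio = np.where(coeliac, rng.normal(1.0, 0.3, n), rng.normal(3.0, 0.5, n))
X = np.column_stack([iel_ratio, vc_ratio])

# Two interpretable inputs -> a transparent diagnostic decision.
clf = LogisticRegression().fit(X, coeliac)
acc = clf.score(X, coeliac)
print(f"training accuracy on simulated ratios: {acc:.2f}")
```

Because the classifier sees only two human-meaningful ratios, a pathologist can audit any individual prediction, which is the interpretability argument the study makes.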

Regulating Generative AI in Radiology Practice: A Trilaminar Approach to Balancing Risk with Innovation.

Gowda V, Bizzo BC, Dreyer KJ

PubMed · Jun 4 2025
Generative AI tools have proliferated across the market, garnered significant media attention, and increasingly found incorporation into the radiology practice setting. However, they raise a number of unanswered questions concerning governance and appropriate use. By their nature as general-purpose technologies, they strain the limits of existing FDA premarket review pathways to regulate them and introduce new sources of liability, privacy, and clinical risk. A multilayered governance approach is needed to balance innovation with safety. To address gaps in oversight, this piece establishes a trilaminar governance model for generative AI technologies. This treats federal regulations as a scaffold, upon which tiers of institutional guidelines and industry self-regulatory frameworks are added to create a comprehensive paradigm composed of interlocking parts. Doing so would provide radiologists with an effective risk management strategy for the future, foster continued technical development, and ultimately, promote patient care.

Multimodal data integration for biologically-relevant artificial intelligence to guide adjuvant chemotherapy in stage II colorectal cancer.

Xie C, Ning Z, Guo T, Yao L, Chen X, Huang W, Li S, Chen J, Zhao K, Bian X, Li Z, Huang Y, Liang C, Zhang Q, Liu Z

PubMed · Jun 4 2025
Adjuvant chemotherapy provides a limited survival benefit (<5%) for patients with stage II colorectal cancer (CRC) and is suggested for high-risk patients. Given the heterogeneity of stage II CRC, we aimed to develop a clinically explainable artificial intelligence (AI)-powered analyser to identify radiological phenotypes that would benefit from chemotherapy. Multimodal data from patients with CRC across six cohorts were collected, including 405 patients from the Guangdong Provincial People's Hospital for model development and 153 patients from the Yunnan Provincial Cancer Centre for validation. RNA sequencing data were used to identify the differentially expressed genes in the two radiological clusters. Histopathological patterns were evaluated to bridge the gap between the imaging and genetic information. Finally, we investigated the discovered morphological patterns of mouse models to observe imaging features. The survival benefit of chemotherapy varied significantly among the AI-powered radiological clusters [interaction hazard ratio (iHR) = 5.35, (95% CI: 1.98, 14.41), adjusted P<sub>interaction</sub> = 0.012]. Distinct biological pathways related to immune and stromal cell abundance were observed between the clusters. The observation only (OO)-preferable cluster exhibited higher necrosis, haemorrhage, and tortuous vessels, whereas the adjuvant chemotherapy (AC)-preferable cluster exhibited vessels with greater pericyte coverage, allowing for a more enriched infiltration of B, CD4<sup>+</sup>-T, and CD8<sup>+</sup>-T cells into the core tumoural areas. Further experiments confirmed that changes in vessel morphology led to alterations in predictive imaging features. The developed explainable AI-powered analyser effectively identified patients with stage II CRC with improved overall survival after receiving adjuvant chemotherapy, thereby contributing to the advancement of precision oncology. 
This work was funded by the National Science Fund of China (81925023, 82302299, and U22A2034), Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application (2022B1212010011), and High-level Hospital Construction Project (DFJHBF202105 and YKY-KF202204).

An Unsupervised XAI Framework for Dementia Detection with Context Enrichment

Singh, D., Brima, Y., Levin, F., Becker, M., Hiller, B., Hermann, A., Villar-Munoz, I., Beichert, L., Bernhardt, A., Buerger, K., Butryn, M., Dechent, P., Duezel, E., Ewers, M., Fliessbach, K., D. Freiesleben, S., Glanz, W., Hetzer, S., Janowitz, D., Goerss, D., Kilimann, I., Kimmich, O., Laske, C., Levin, J., Lohse, A., Luesebrink, F., Munk, M., Perneczky, R., Peters, O., Preis, L., Priller, J., Prudlo, J., Prychynenko, D., Rauchmann, B.-S., Rostamzadeh, A., Roy-Kluth, N., Scheffler, K., Schneider, A., Droste zu Senden, L., H. Schott, B., Spottke, A., Synofzik, M., Wiltfang, J., Jessen, F., W

medRxiv preprint · Jun 4 2025
Introduction: Explainable Artificial Intelligence (XAI) methods enhance the diagnostic efficiency of clinical decision support systems by making the predictions of a convolutional neural network (CNN) on brain imaging more transparent and trustworthy. However, their clinical adoption is limited by insufficient validation of explanation quality. Our study introduces a framework that evaluates XAI methods by integrating neuroanatomical morphological features with CNN-generated relevance maps for disease classification.

Methods: We trained a CNN using brain MRI scans from six cohorts: ADNI, AIBL, DELCODE, DESCRIBE, EDSD, and NIFD (N=3253), including participants who were cognitively normal or had amnestic mild cognitive impairment, dementia due to Alzheimer's disease, or frontotemporal dementia. Clustering analysis benchmarked different explanation-space configurations, using morphological features as proxy ground truth. We implemented three post-hoc explanation methods: i) simplifying model decisions, ii) explanation-by-example, and iii) textual explanations. A qualitative evaluation by clinicians (N=6) assessed their clinical validity.

Results: Clustering performance improved in morphology-enriched explanation spaces, with gains in both homogeneity and completeness of the clusters. Post-hoc explanations by model simplification largely delineated converters from stable participants, while explanation-by-example presented possible cognition trajectories. Textual explanations gave rule-based summarization of pathological findings. The clinicians' qualitative evaluation highlighted challenges and opportunities of XAI for different clinical applications.

Conclusion: Our study refines XAI explanation spaces and applies various approaches for generating explanations. In the context of AI-based decision support systems in dementia research, we found the explanation methods promising for enhancing diagnostic efficiency, a finding supported by the clinical assessments.
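The homogeneity and completeness scores used to benchmark the explanation spaces are standard clustering metrics available in scikit-learn. A toy sketch with a synthetic two-class "explanation space" (all data hypothetical; the study's spaces are derived from CNN relevance maps and morphology):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import homogeneity_score, completeness_score

rng = np.random.default_rng(0)

# Two well-separated hypothetical groups in a 2-D explanation space,
# with diagnosis labels serving as proxy ground truth.
X = np.vstack([rng.normal(0.0, 0.3, size=(50, 2)),
               rng.normal(3.0, 0.3, size=(50, 2))])
labels = np.array([0] * 50 + [1] * 50)

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Homogeneity: each cluster contains only members of a single class.
# Completeness: all members of a class land in the same cluster.
h = homogeneity_score(labels, clusters)
c = completeness_score(labels, clusters)
print(f"homogeneity={h:.2f}, completeness={c:.2f}")
```

When a feature enrichment (here, hypothetically, morphology) pulls the classes further apart in the space, both scores rise, which is the benchmarking signal the study relies on.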

Recent Advances in Medical Image Classification

Loan Dao, Ngoc Quoc Ly

arXiv preprint · Jun 4 2025
Medical image classification is crucial for diagnosis and treatment, benefiting significantly from advancements in artificial intelligence. The paper reviews recent progress in the field, focusing on three levels of solutions: basic, specific, and applied. It highlights advances in traditional methods using deep learning models like Convolutional Neural Networks and Vision Transformers, as well as state-of-the-art approaches with Vision Language Models. These models tackle the issue of limited labeled data, and enhance and explain predictive results through Explainable Artificial Intelligence.

Synthetic multi-inversion time magnetic resonance images for visualization of subcortical structures

Savannah P. Hays, Lianrui Zuo, Anqi Feng, Yihao Liu, Blake E. Dewey, Jiachen Zhuo, Ellen M. Mowry, Scott D. Newsome, Jerry L. Prince, Aaron Carass

arXiv preprint · Jun 4 2025
Purpose: Visualization of subcortical gray matter is essential in neuroscience and clinical practice, particularly for disease understanding and surgical planning. While multi-inversion time (multi-TI) T$_1$-weighted (T$_1$-w) magnetic resonance (MR) imaging improves visualization, it is rarely acquired in clinical settings.

Approach: We present SyMTIC (Synthetic Multi-TI Contrasts), a deep learning method that generates synthetic multi-TI images using routinely acquired T$_1$-w, T$_2$-weighted (T$_2$-w), and FLAIR images. Our approach combines image translation via deep neural networks with imaging physics to estimate longitudinal relaxation time (T$_1$) and proton density (PD) maps. These maps are then used to compute multi-TI images with arbitrary inversion times.

Results: SyMTIC was trained using paired MPRAGE and FGATIR images along with T$_2$-w and FLAIR images. It accurately synthesized multi-TI images from standard clinical inputs, achieving image quality comparable to that from explicitly acquired multi-TI data. The synthetic images, especially for TI values between 400-800 ms, enhanced visualization of subcortical structures and improved segmentation of thalamic nuclei.

Conclusion: SyMTIC enables robust generation of high-quality multi-TI images from routine MR contrasts. It generalizes well to varied clinical datasets, including those with missing FLAIR images or unknown parameters, offering a practical solution for improving brain MR image visualization and analysis.
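The imaging-physics step, computing a multi-TI image from estimated T$_1$ and PD maps, can be sketched with the classic inversion-recovery signal equation, assuming a long TR and perfect inversion (the paper's actual model may include additional terms, and all map values below are hypothetical):

```python
import numpy as np

def synth_ti_image(pd_map, t1_map, ti_ms):
    """Inversion-recovery magnitude signal |PD * (1 - 2*exp(-TI/T1))|.

    Simplified model (long TR, perfect 180-degree inversion). Signal from
    tissue with relaxation time T1 nulls at TI = T1 * ln(2).
    """
    return np.abs(pd_map * (1.0 - 2.0 * np.exp(-ti_ms / t1_map)))

# Hypothetical 2x2 PD and T1 (ms) maps; roughly WM-, GM-, and CSF-like T1s.
pd_map = np.array([[0.7, 0.8], [0.8, 1.0]])
t1_map = np.array([[850.0, 1400.0], [1400.0, 4000.0]])

# Sweep TI values in the 400-800 ms range highlighted above.
for ti in (400.0, 600.0, 800.0):
    img = synth_ti_image(pd_map, t1_map, ti)
    print(f"TI={ti:.0f} ms ->\n{np.round(img, 3)}")
```

Because the maps are continuous, any TI can be rendered after the fact, which is what makes estimating T$_1$ and PD more useful than synthesizing a fixed set of contrasts.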