Page 98 of 238 (2377 results)

AI-driven preclinical disease risk assessment using imaging in UK biobank.

Seletkov D, Starck S, Mueller TT, Zhang Y, Steinhelfer L, Rueckert D, Braren R

pubmed logopapers · Jul 26 2025
Identifying disease risk and detecting disease before clinical symptoms appear are essential for early intervention and improving patient outcomes. In this context, the integration of medical imaging into clinical workflows offers a unique advantage by capturing detailed structural and functional information. Unlike non-image data, such as lifestyle factors, sociodemographic characteristics, or prior medical conditions, which often rely on self-reported information susceptible to recall biases and subjective perceptions, imaging offers more objective and reliable insights. Although the use of medical imaging in artificial intelligence (AI)-driven risk assessment is growing, its full potential remains underutilized. In this work, we demonstrate how imaging can be integrated into routine screening workflows, in particular by taking advantage of neck-to-knee whole-body magnetic resonance imaging (MRI) data available in the large prospective study UK Biobank. Our analysis focuses on three-year risk assessment for a broad spectrum of diseases, including cardiovascular, digestive, metabolic, inflammatory, degenerative, and oncologic conditions. We evaluate AI-based pipelines for processing whole-body MRI and demonstrate that using image-derived radiomics features provides the best prediction performance, interpretability, and integration capability with non-image data.
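The core idea above — fusing image-derived radiomics features with non-image covariates in a risk model — can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the feature names, dimensions, and data are all synthetic placeholders, and a plain logistic regression stands in for whatever model the study used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: per-subject radiomics features extracted from
# whole-body MRI, plus tabular covariates (e.g. age, sex, BMI).
n = 200
radiomics = rng.normal(size=(n, 8))      # stand-ins for organ volume, intensity stats
tabular = rng.normal(size=(n, 3))        # stand-ins for standardised covariates
X = np.hstack([radiomics, tabular])      # fused feature vector per subject
w_true = rng.normal(size=X.shape[1])
y = (X @ w_true + rng.normal(scale=0.5, size=n) > 0).astype(float)  # synthetic 3-year event

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain logistic regression fitted by gradient descent on the fused features
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.1 * (X.T @ (p - y) / n)
    b -= 0.1 * float(np.mean(p - y))

risk = sigmoid(X @ w + b)                # per-subject 3-year risk score
acc = np.mean((risk > 0.5) == y)
```

Because the radiomics features enter the model as an ordinary feature block, they can be concatenated with any tabular data source, which is the integration property the abstract highlights.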

A triple pronged approach for ulcerative colitis severity classification using multimodal, meta, and transformer based learning.

Ahmed MN, Neogi D, Kabir MR, Rahman S, Momen S, Mohammed N

pubmed logopapers · Jul 26 2025
Ulcerative colitis (UC) is a chronic inflammatory disorder necessitating precise severity stratification to facilitate optimal therapeutic interventions. This study harnesses a triple-pronged deep learning methodology to classify UC severity within the HyperKvasir dataset: multimodal inference pipelines that eliminate domain-specific training, few-shot meta-learning, and Vision Transformer (ViT)-based ensembling. We systematically evaluate multiple vision transformer architectures, discovering that a Swin-Base model achieves an accuracy of 90%, while a soft-voting ensemble of diverse ViT backbones boosts performance to 93%. In parallel, we leverage multimodal pre-trained frameworks (e.g., CLIP, BLIP, FLAVA) integrated with conventional machine learning algorithms, yielding an accuracy of 83%. To address limited annotated data, we deploy few-shot meta-learning approaches (e.g., Matching Networks), attaining 83% accuracy in a 5-shot context. Furthermore, interpretability is enhanced via SHapley Additive exPlanations (SHAP), which interpret both local and global model behaviors, thereby fostering clinical trust in the model's inferences. These findings underscore the potential of contemporary representation learning and ensemble strategies for robust UC severity classification, highlighting the pivotal role of model transparency in facilitating medical image analysis.
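The soft-voting ensemble mentioned above reduces to a simple operation: average the class-probability outputs of the individual backbones, then take the argmax. A minimal sketch, with made-up probabilities standing in for the softmax outputs of Swin-Base and other ViT variants on one endoscopy image:

```python
import numpy as np

# Each row: one backbone's softmax over 4 hypothetical UC severity grades.
probs = np.array([
    [0.10, 0.70, 0.15, 0.05],   # e.g. Swin-Base
    [0.20, 0.55, 0.20, 0.05],   # e.g. a second ViT backbone
    [0.05, 0.60, 0.30, 0.05],   # e.g. a third ViT backbone
])

avg = probs.mean(axis=0)        # soft vote: average the probability vectors
pred = int(np.argmax(avg))      # ensemble prediction
```

Averaging probabilities (rather than hard labels) lets a confident minority backbone sway the decision, which is often why soft voting edges out the best single model.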

Brainwide hemodynamics predict EEG neural rhythms across sleep and wakefulness in humans

Jacob, L. P. L., Bailes, S. M., Williams, S. D., Stringer, C., Lewis, L. D.

biorxiv logopreprint · Jul 26 2025
The brain exhibits rich oscillatory dynamics that play critical roles in vigilance and cognition, such as the neural rhythms that define sleep. These rhythms continuously fluctuate, signaling major changes in vigilance, but the widespread brain dynamics underlying these oscillations are difficult to investigate. Using simultaneous EEG and fast fMRI in humans who fell asleep inside the scanner, we developed a machine learning approach to investigate which fMRI regions and networks predict fluctuations in neural rhythms. We demonstrated that the rise and fall of alpha (8-12 Hz) and delta (1-4 Hz) power, two canonical EEG bands critically involved with cognition and vigilance, can be predicted from fMRI data in subjects who were not present in the training set. This approach also identified predictive information in individual brain regions across the cortex and subcortex. Finally, we developed an approach to identify shared and unique predictive information, and found that information about alpha rhythms was highly separable in two networks linked to arousal and visual systems. Conversely, delta rhythms were diffusely represented on a large spatial scale primarily across the cortex. These results demonstrate that EEG rhythms can be predicted from fMRI data, identify large-scale network patterns that underlie alpha and delta rhythms, and establish a novel framework for investigating multimodal brain dynamics.
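The evaluation design described above — predicting a continuous EEG band-power signal from fMRI features in held-out subjects — can be illustrated with a leave-one-subject-out ridge regression. This is a sketch under stated assumptions (synthetic data, an arbitrary linear model), not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(1)

def ridge_fit(X, y, lam=1.0):
    # Closed-form ridge regression: w = (X^T X + lam*I)^(-1) X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Hypothetical setup: per-timepoint fMRI region signals (features) used to
# predict concurrent EEG alpha-band power, with a whole subject held out.
n_subj, t, d = 6, 150, 10
w_true = rng.normal(size=d)
data = []
for _ in range(n_subj):
    X = rng.normal(size=(t, d))
    y = X @ w_true + rng.normal(scale=0.5, size=t)
    data.append((X, y))

# Leave-one-subject-out: train on 5 subjects, test on the 6th
X_tr = np.vstack([X for X, _ in data[:-1]])
y_tr = np.concatenate([y for _, y in data[:-1]])
X_te, y_te = data[-1]
w = ridge_fit(X_tr, y_tr)
r = np.corrcoef(X_te @ w, y_te)[0, 1]   # prediction-truth correlation on held-out subject
```

Holding out whole subjects, rather than random timepoints, is what makes the claim "predictable in subjects not present in the training set" meaningful.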

Hybrid Deep Learning and Handcrafted Feature Fusion for Mammographic Breast Cancer Classification

Maximilian Tschuchnig, Michael Gadermayr, Khalifa Djemal

arxiv logopreprint · Jul 26 2025
Automated breast cancer classification from mammography remains a significant challenge due to subtle distinctions between benign and malignant tissue. In this work, we present a hybrid framework combining deep convolutional features from a ResNet-50 backbone with handcrafted descriptors and transformer-based embeddings. Using the CBIS-DDSM dataset, we benchmark our ResNet-50 baseline (AUC: 78.1%) and demonstrate that fusing handcrafted features with deep ResNet-50 and DINOv2 features improves AUC to 79.6% (setup d1), with a peak recall of 80.5% (setup d1) and highest F1 score of 67.4% (setup d1). Our experiments show that handcrafted features not only complement deep representations but also enhance performance beyond transformer-based embeddings. This hybrid fusion approach achieves results comparable to state-of-the-art methods while maintaining architectural simplicity and computational efficiency, making it a practical and effective solution for clinical decision support.
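The fusion step described above is feature-level concatenation: deep CNN features, self-supervised ViT embeddings, and handcrafted descriptors are stacked into one vector for a downstream classifier. A minimal sketch with placeholder values (the dimensions are the standard output sizes of ResNet-50 global pooling and a DINOv2 base embedding; the handcrafted dimension is an assumption):

```python
import numpy as np

resnet_feat = np.random.rand(2048)   # stand-in for ResNet-50 global-pooled features
dino_feat = np.random.rand(768)      # stand-in for a DINOv2-style embedding
handcrafted = np.random.rand(64)     # stand-in for texture/shape descriptors

# Feature-level fusion: one vector per image, fed to the classifier head
fused = np.concatenate([resnet_feat, dino_feat, handcrafted])
```

Because the three feature families capture different image properties (learned hierarchy, self-supervised semantics, explicit texture/shape), their concatenation can outperform any single block, which matches the abstract's finding.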

Deep Learning-Based Multi-View Echocardiographic Framework for Comprehensive Diagnosis of Pericardial Disease

Jeong, S., Moon, I., Jeon, J., Jeong, D., Lee, J., kim, J., Lee, S.-A., Jang, Y., Yoon, Y. E., Chang, H.-J.

medrxiv logopreprint · Jul 25 2025
Background: Pericardial disease exhibits a wide clinical spectrum, ranging from mild effusions to life-threatening tamponade or constrictive pericarditis. While transthoracic echocardiography (TTE) is the primary diagnostic modality, its effectiveness is limited by operator dependence and incomplete evaluation of functional impact. Existing artificial intelligence models focus primarily on effusion detection, lacking comprehensive disease assessment.
Methods: We developed a deep learning (DL)-based framework that sequentially assesses pericardial disease: (1) morphological changes, including pericardial effusion amount (normal/small/moderate/large) and pericardial thickening or adhesion (yes/no), using five B-mode views, and (2) hemodynamic significance (yes/no), incorporating additional inputs from Doppler and inferior vena cava measurements. The development dataset comprises 2,253 TTEs from multiple Korean institutions (225 for internal testing), and the independent external test set consists of 274 TTEs.
Results: In the internal test set, the model achieved diagnostic accuracy of 81.8-97.3% for pericardial effusion classification, 91.6% for pericardial thickening/adhesion, and 86.2% for hemodynamic significance. Corresponding accuracies in the external test set were 80.3-94.2%, 94.5%, and 85.5%, respectively. Areas under the receiver operating characteristic curve (AUROC) for the three tasks were 0.92-0.99, 0.90, and 0.79 in the internal test set, and 0.95-0.98, 0.85, and 0.76 in the external test set. Sensitivity for detecting pericardial thickening/adhesion and hemodynamic significance was modest (66.7% and 68.8% in the internal test set) but improved substantially when cases with poor image quality were excluded (77.3% and 80.8%). Similar performance gains were observed in subgroups with complete target views and a higher number of available video clips.
Conclusions: This study presents the first DL-based TTE model capable of comprehensive evaluation of pericardial disease, integrating both morphological and functional assessments. The proposed framework demonstrated strong generalizability and aligned with the real-world diagnostic workflow. However, caution is warranted when interpreting results under suboptimal imaging conditions.
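The sequential two-stage structure of the framework can be sketched as two chained functions: stage 1 reads the B-mode views for morphology, and stage 2 adds Doppler and inferior vena cava (IVC) inputs for hemodynamic significance. Everything below is a structural illustration only — the placeholder rules and thresholds are invented and bear no relation to the trained networks:

```python
def stage1_morphology(bmode_views):
    # Stage 1 (placeholder): effusion amount + thickening/adhesion
    # from the five B-mode views; a real system would run a CNN here.
    effusion = "moderate" if len(bmode_views) >= 5 else "unknown"
    thickening = False
    return effusion, thickening

def stage2_hemodynamics(bmode_views, doppler, ivc):
    # Stage 2 (placeholder): hemodynamic significance, incorporating
    # Doppler and IVC measurements on top of the B-mode inputs.
    # The 0.25 threshold is an arbitrary illustrative value.
    return doppler.get("mitral_inflow_variation", 0.0) > 0.25 or ivc.get("plethora", False)

views = ["PLAX", "PSAX", "A4C", "A2C", "SC"]   # hypothetical view labels
effusion, thick = stage1_morphology(views)
significant = stage2_hemodynamics(views, {"mitral_inflow_variation": 0.3}, {"plethora": False})
```

Chaining the stages mirrors the clinical workflow: morphology is assessed first from standard views, and functional impact is judged only with the additional hemodynamic inputs.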

DeepJIVE: Learning Joint and Individual Variation Explained from Multimodal Data Using Deep Learning

Matthew Drexler, Benjamin Risk, James J Lah, Suprateek Kundu, Deqiang Qiu

arxiv logopreprint · Jul 25 2025
Conventional multimodal data integration methods provide a comprehensive assessment of the shared or unique structure within each individual data type but suffer from several limitations, such as the inability to handle high-dimensional data and identify nonlinear structures. In this paper, we introduce DeepJIVE, a deep-learning approach to performing Joint and Individual Variation Explained (JIVE). We perform mathematical derivation and experimental validations using both synthetic and real-world 1D, 2D, and 3D datasets. Different strategies of achieving the identity and orthogonality constraints for DeepJIVE were explored, resulting in three viable loss functions. We found that DeepJIVE can successfully uncover joint and individual variations of multimodal datasets. Our application of DeepJIVE to the Alzheimer's Disease Neuroimaging Initiative (ADNI) also identified biologically plausible covariation patterns between the amyloid positron emission tomography (PET) and magnetic resonance (MR) images. In conclusion, the proposed DeepJIVE can be a useful tool for multimodal data analysis.
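The classical linear JIVE idea that DeepJIVE generalises can be sketched with SVDs: estimate a shared (joint) low-rank structure across two data blocks, then block-specific (individual) structure in the residuals. This is a simplified illustration with known ranks and synthetic data, not the DeepJIVE algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two data blocks driven by shared latent scores plus small noise
n, p1, p2, r_joint = 100, 30, 40, 2
scores = rng.normal(size=(n, r_joint))              # shared latent scores
X1 = scores @ rng.normal(size=(r_joint, p1)) + 0.1 * rng.normal(size=(n, p1))
X2 = scores @ rng.normal(size=(r_joint, p2)) + 0.1 * rng.normal(size=(n, p2))

# Joint structure: top singular directions of the concatenated blocks
U, s, Vt = np.linalg.svd(np.hstack([X1, X2]), full_matrices=False)
joint = (U[:, :r_joint] * s[:r_joint]) @ Vt[:r_joint]

# Individual structure: low-rank fit to each block's residual
resid1 = X1 - joint[:, :p1]
U1, s1, V1t = np.linalg.svd(resid1, full_matrices=False)
indiv1 = (U1[:, :1] * s1[:1]) @ V1t[:1]

# Fraction of total variance captured by the joint component
frac_joint = (s[:r_joint] ** 2).sum() / (s ** 2).sum()
```

DeepJIVE replaces these linear SVD factors with neural encoders, which is what lets it capture the nonlinear, high-dimensional structure the abstract says classical JIVE cannot.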

Deep learning-based image classification for integrating pathology and radiology in AI-assisted medical imaging.

Lu C, Zhang J, Liu R

pubmed logopapers · Jul 25 2025
The integration of pathology and radiology in medical imaging has emerged as a critical need for advancing diagnostic accuracy and improving clinical workflows. Current AI-driven approaches for medical image analysis, despite significant progress, face several challenges, including handling multi-modal imaging, imbalanced datasets, and the lack of robust interpretability and uncertainty quantification. These limitations often hinder the deployment of AI systems in real-world clinical settings, where reliability and adaptability are essential. To address these issues, this study introduces a novel framework, the Domain-Informed Adaptive Network (DIANet), combined with an Adaptive Clinical Workflow Integration (ACWI) strategy. DIANet leverages multi-scale feature extraction, domain-specific priors, and Bayesian uncertainty modeling to enhance interpretability and robustness. The proposed model is tailored for multi-modal medical imaging tasks, integrating adaptive learning mechanisms to mitigate domain shifts and imbalanced datasets. Complementing the model, the ACWI strategy ensures seamless deployment through explainable AI (XAI) techniques, uncertainty-aware decision support, and modular workflow integration compatible with clinical systems like PACS. Experimental results demonstrate significant improvements in diagnostic accuracy, segmentation precision, and reconstruction fidelity across diverse imaging modalities, validating the potential of this framework to bridge the gap between AI innovation and clinical utility.
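The abstract mentions Bayesian uncertainty modelling and uncertainty-aware decision support without detailing the mechanism. One common, lightweight technique in this family is Monte Carlo dropout: keep dropout active at inference and treat the spread of repeated stochastic predictions as uncertainty. The tiny random network below is purely illustrative and is not claimed to resemble DIANet:

```python
import numpy as np

rng = np.random.default_rng(3)

# A toy 2-layer network with random weights (illustration only)
W1, W2 = rng.normal(size=(16, 32)), rng.normal(size=(32, 2))

def predict(x, drop_rate=0.5):
    h = np.maximum(x @ W1, 0.0)                      # ReLU layer
    mask = rng.random(h.shape) >= drop_rate          # dropout stays ON at inference
    h = h * mask / (1.0 - drop_rate)                 # inverted-dropout scaling
    logits = h @ W2
    e = np.exp(logits - logits.max())
    return e / e.sum()                               # softmax over 2 classes

x = rng.normal(size=16)
samples = np.stack([predict(x) for _ in range(100)]) # 100 stochastic forward passes
mean_prob = samples.mean(axis=0)                     # predictive mean
uncertainty = samples.std(axis=0)                    # per-class spread = uncertainty signal
```

In a clinical workflow, a high `uncertainty` value can route the case to human review instead of auto-reporting, which is the kind of uncertainty-aware decision support the abstract describes.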

A novel approach for breast cancer detection using a Nesterov accelerated adam optimizer with an attention mechanism.

Saber A, Emara T, Elbedwehy S, Hassan E

pubmed logopapers · Jul 25 2025
Image-based automatic breast tumor detection has become a significant research focus, driven by recent advancements in machine learning (ML) algorithms. Traditional disease detection methods often involve manual feature extraction from images, a process requiring extensive expertise from specialists and pathologists. This labor-intensive approach is not only time-consuming but also impractical for widespread application. However, advancements in digital technologies and computer vision have enabled convolutional neural networks (CNNs) to learn features automatically, thereby overcoming these challenges. This paper presents a deep neural network model based on the MobileNet-V2 architecture, enhanced with a convolutional block attention mechanism for identifying tumor types in ultrasound images. The attention module improves the MobileNet-V2 model's performance by highlighting disease-affected areas within the images. The proposed model refines features extracted by MobileNet-V2 using the Nesterov-accelerated Adaptive Moment Estimation (Nadam) optimizer. This integration enhances convergence and stability, leading to improved classification accuracy. The proposed approach was evaluated on the BUSI ultrasound image dataset. Experimental results demonstrated strong performance, achieving an accuracy of 99.1%, sensitivity of 99.7%, specificity of 99.5%, precision of 97.7%, and an area under the curve (AUC) of 1.0 using an 80-20 data split. Additionally, under 10-fold cross-validation, the model achieved an accuracy of 98.7%, sensitivity of 99.1%, specificity of 98.3%, precision of 98.4%, F1-score of 98.04%, and an AUC of 0.99.
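The Nadam optimizer named above combines Adam's adaptive moment estimates with Nesterov-style lookahead on the momentum term. A single-parameter sketch of the update rule, minimising f(w) = w² (the hyperparameters are Nadam's common defaults, not values from the paper):

```python
import numpy as np

def nadam_step(w, grad, m, v, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    # First and second moment estimates, as in Adam
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    # Nesterov correction: blend bias-corrected momentum with the current gradient
    m_nesterov = b1 * m_hat + (1 - b1) * grad / (1 - b1 ** t)
    return w - lr * m_nesterov / (np.sqrt(v_hat) + eps), m, v

w, m, v = 3.0, 0.0, 0.0
for t in range(1, 201):
    grad = 2 * w                     # gradient of f(w) = w^2
    w, m, v = nadam_step(w, grad, m, v, t)
# w is now close to the minimum at 0
```

The lookahead term lets the update anticipate where momentum is carrying the parameter, which is the source of the improved convergence and stability the abstract reports.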

Digitalizing English-language CT Interpretation for Positive Haemorrhage Evaluation Reporting: the DECIPHER study.

Bloom B, Haimovich A, Pott J, Williams SL, Cheetham M, Langsted S, Skene I, Astin-Chamberlain R, Thomas SH

pubmed logopapers · Jul 25 2025
Identifying whether there is a traumatic intracranial bleed (ICB+) on head CT is critical for clinical care and research. Free-text CT reports are unstructured and therefore must undergo time-consuming manual review. Existing artificial intelligence classification schemes are not optimised for the emergency department endpoint of classifying reports as ICB+ or ICB-. We sought to assess three methods for classifying CT reports: a text classification (TC) programme, a commercial natural language processing programme (Clinithink) and a generative pretrained transformer large language model (Digitalizing English-language CT Interpretation for Positive Haemorrhage Evaluation Reporting (DECIPHER)-LLM). The primary objective was to determine the diagnostic classification performance of the dichotomous categorisation of each of the three approaches; the secondary objective was to determine whether the LLM could achieve a substantial reduction in CT report review workload while maintaining 100% sensitivity. Anonymised radiology reports of head CT scans performed for trauma were manually labelled as ICB+/-. Training and validation sets were randomly created to train the TC and natural language processing models. Prompts were written to train the LLM. 898 reports were manually labelled. Sensitivity and specificity (95% CI) were 87.9% (76.7% to 95.0%) and 98.2% (96.3% to 99.3%) for TC; 75.9% (62.8% to 86.1%) and 96.2% (93.8% to 97.8%) for Clinithink; and 100% (93.8% to 100%) and 97.4% (95.3% to 98.8%) for DECIPHER-LLM (with the probability-of-ICB threshold set at 10%). With the DECIPHER-LLM probability-of-ICB+ threshold of 10% used to identify CT reports requiring manual evaluation, the number of reports requiring manual classification fell by an estimated 385/449 cases (85.7% (95% CI 82.1% to 88.9%)) while maintaining 100% sensitivity. DECIPHER-LLM outperformed the other tested free-text classification methods.
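The workload-reduction logic described above is threshold triage: reports whose model-assigned bleed probability meets the threshold go to manual review, the rest are auto-classified negative, and performance is summarised as sensitivity plus the fraction of reports spared review. A sketch with made-up probabilities and labels (not the study's data):

```python
def triage(probs, labels, threshold=0.10):
    # Reports at/above threshold are routed to manual review
    manual = {i for i, p in enumerate(probs) if p >= threshold}
    positives = [i for i, y in enumerate(labels) if y]
    missed = [i for i in positives if i not in manual]
    sensitivity = 1.0 - len(missed) / max(1, len(positives))
    workload_saved = 1.0 - len(manual) / len(probs)
    return sensitivity, workload_saved

# Hypothetical model probabilities and ground-truth ICB+/- labels
probs = [0.02, 0.95, 0.40, 0.01, 0.08, 0.77]
labels = [0, 1, 1, 0, 0, 1]
sens, saved = triage(probs, labels)
```

Lowering the threshold trades review workload for sensitivity; the study's choice of 10% is the point at which sensitivity stayed at 100% while most reports avoided manual review.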

Automated characterization of abdominal MRI exams using deep learning.

Kim J, Chae A, Duda J, Borthakur A, Rader DJ, Gee JC, Kahn CE, Witschey WR, Sagreiya H

pubmed logopapers · Jul 25 2025
Advances in magnetic resonance imaging (MRI) have revolutionized disease detection and treatment planning. However, the growing volume and complexity of MRI data-along with heterogeneity in imaging protocols, scanner technology, and labeling practices-creates a need for standardized tools to automatically identify and characterize key imaging attributes. Such tools are essential for large-scale, multi-institutional studies that rely on harmonized data to train robust machine learning models. In this study, we developed convolutional neural networks (CNNs) to automatically classify three core attributes of abdominal MRI: pulse sequence type, imaging orientation, and contrast enhancement status. Three distinct CNNs with similar backbone architectures were trained to classify single image slices into one of 12 pulse sequences, 4 orientations, or 2 contrast classes. The models achieved high classification accuracies of 99.51%, 99.87%, and 99.99% for pulse sequence, orientation, and contrast, respectively. We applied Grad-CAM to visualize image regions influencing pulse sequence predictions and highlight relevant anatomical features. To enhance performance, we implemented a majority voting approach to aggregate slice-level predictions, achieving 100% accuracy at the volume level for all tasks. External validation using the Duke Liver Dataset demonstrated strong generalizability; after adjusting for class label mismatch, volume-level accuracies exceeded 96.9% across all classification tasks.
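The volume-level aggregation described above is a majority vote over slice-level predictions. A minimal sketch (the sequence labels are illustrative stand-ins for the 12 classes in the study):

```python
from collections import Counter

def volume_label(slice_preds):
    # Majority vote: the most common slice-level prediction wins
    return Counter(slice_preds).most_common(1)[0][0]

preds = ["T2", "T2", "T1", "T2", "T2", "DWI"]   # hypothetical per-slice predictions
label = volume_label(preds)
```

Because occasional slice-level errors are outvoted by the correct majority, this simple aggregation is enough to lift already-high slice accuracy to 100% at the volume level, as the abstract reports.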
