
Brain Age Prediction: Deep Models Need a Hand to Generalize.

Rajabli R, Soltaninejad M, Fonov VS, Bzdok D, Collins DL

PubMed | Aug 1, 2025
Predicting brain age from T1-weighted MRI is a promising marker for understanding brain aging and its associated conditions. While deep learning models have shown success in reducing the mean absolute error (MAE) of predicted brain age, concerns about robust and accurate generalization to new data limit their clinical applicability. The large number of trainable parameters, combined with limited medical imaging training data, contributes to this challenge, often resulting in a generalization gap: a significant discrepancy between model performance on training data and on unseen data. In this study, we assess a deep model, SFCN-reg, based on the VGG-16 architecture, and address the generalization gap through comprehensive preprocessing, extensive data augmentation, and model regularization. Using training data from the UK Biobank, we demonstrate substantial improvements in model performance. Specifically, our approach reduces the generalization MAE by 47% (from 5.25 to 2.79 years) on the Alzheimer's Disease Neuroimaging Initiative dataset and by 12% (from 4.35 to 3.75 years) on the Australian Imaging, Biomarker and Lifestyle dataset. Furthermore, we achieve up to a 13% reduction in scan-rescan error (from 0.80 to 0.70 years) while enhancing the model's robustness to registration errors. Feature importance maps highlight the anatomical regions used to predict age. These results highlight the critical role of high-quality preprocessing and robust training techniques in improving accuracy and narrowing the generalization gap, both necessary steps toward the clinical use of brain age prediction models. Our study contributes to neuroimaging research by offering a potential pathway to improve the clinical applicability of deep learning models.
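
A minimal sketch of the kind of recipe the abstract credits for narrowing the generalization gap: an MAE-trained 3D-CNN regressor with weight decay, dropout, and cheap train-time augmentation. The architecture, hyperparameters, and augmentations below are illustrative assumptions, not the paper's exact SFCN-reg configuration.

```python
import torch
import torch.nn as nn

class TinyBrainAgeNet(nn.Module):
    """Toy 3D-CNN age regressor; a stand-in for an SFCN-style model."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.BatchNorm3d(8), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Dropout(0.5), nn.Linear(16, 1))

    def forward(self, x):
        return self.head(self.features(x)).squeeze(-1)

def augment(vol):
    # Random left-right flip and mild intensity scaling as cheap augmentation.
    if torch.rand(()) < 0.5:
        vol = torch.flip(vol, dims=[-1])
    return vol * (0.9 + 0.2 * torch.rand(()))

model = TinyBrainAgeNet()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)  # regularization
loss_fn = nn.L1Loss()  # trains on MAE, the metric reported above

x = torch.randn(2, 1, 32, 32, 32)   # toy T1w patches; real inputs are full volumes
age = torch.tensor([63.0, 71.0])
loss = loss_fn(model(augment(x)), age)
loss.backward()
opt.step()
print(f"train MAE: {loss.item():.2f} years")
```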

Your other Left! Vision-Language Models Fail to Identify Relative Positions in Medical Images

Daniel Wolf, Heiko Hillenhagen, Billurvan Taskin, Alex Bäuerle, Meinrad Beer, Michael Götz, Timo Ropinski

arXiv preprint | Aug 1, 2025
Clinical decision-making relies heavily on understanding the relative positions of anatomical structures and anomalies. For Vision-Language Models (VLMs) to be applicable in clinical practice, therefore, the ability to accurately determine relative positions in medical images is a fundamental prerequisite. Despite its importance, this capability remains highly underexplored. To address this gap, we evaluate state-of-the-art VLMs (GPT-4o, Llama3.2, Pixtral, and JanusPro) and find that all of them fail at this fundamental task. Inspired by successful approaches in computer vision, we investigate whether visual prompts, such as alphanumeric or colored markers placed on anatomical structures, can enhance performance. While these markers provide moderate improvements, performance remains significantly lower on medical images than on natural images. Our evaluations suggest that, in medical imaging, VLMs rely more on prior anatomical knowledge than on actual image content when answering relative-position questions, often leading to incorrect conclusions. To facilitate further research in this area, we introduce the MIRP (Medical Imaging Relative Positioning) benchmark dataset, designed to systematically evaluate the capability to identify relative positions in medical images.
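
A hedged sketch of the visual-prompting idea the abstract evaluates: overlaying colored, lettered markers on two structures before asking a VLM which lies to the patient's left or right. The coordinates, colors, and query string are made-up placeholders, not the paper's protocol.

```python
from PIL import Image, ImageDraw

img = Image.new("RGB", (512, 512), "black")  # stand-in for a medical image slice
draw = ImageDraw.Draw(img)

# Alphanumeric + colored markers placed on (hypothetical) anatomical structures.
markers = {"A": ((140, 250), "red"), "B": ((370, 260), "yellow")}
for label, ((x, y), color) in markers.items():
    draw.ellipse((x - 12, y - 12, x + 12, y + 12), outline=color, width=3)
    draw.text((x - 4, y - 6), label, fill=color)

img.save("marked_slice.png")
question = "Is the structure marked 'A' to the patient's left or right of 'B'?"
# The marked image plus `question` would then be sent to GPT-4o, Llama3.2,
# Pixtral, or JanusPro through their respective APIs.
```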

Minimum Data, Maximum Impact: 20 annotated samples for explainable lung nodule classification

Luisa Gallée, Catharina Silvia Lisson, Christoph Gerhard Lisson, Daniela Drees, Felix Weig, Daniel Vogele, Meinrad Beer, Michael Götz

arXiv preprint | Aug 1, 2025
Classification models that provide human-interpretable explanations enhance clinicians' trust and are more usable in medical image diagnosis. One research focus is the integration and prediction of pathology-related visual attributes used by radiologists alongside the diagnosis, aligning AI decision-making with clinical reasoning. Radiologists use attributes such as shape and texture as established diagnostic criteria, and mirroring these in AI decision-making both enhances transparency and enables explicit validation of model outputs. However, the adoption of such models is limited by the scarcity of large-scale medical image datasets annotated with these attributes. To address this challenge, we propose synthesizing attribute-annotated data using a generative model. We extend a diffusion model with attribute conditioning and train it using only 20 attribute-labeled lung nodule samples from the LIDC-IDRI dataset. Incorporating its generated images into the training of an explainable model boosts performance, increasing attribute prediction accuracy by 13.4% and target prediction accuracy by 1.8% compared to training with only the small real attribute-annotated dataset. This work highlights the potential of synthetic data to overcome dataset limitations and enhance the applicability of explainable models in medical image analysis.
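
A minimal sketch of one common way to condition a diffusion denoiser on attributes: embed the attribute vector and add it to the timestep embedding, so generation can be steered by radiologist-style attribute ratings. The layer sizes and encoding are illustrative assumptions, not the paper's exact conditioning scheme.

```python
import torch
import torch.nn as nn

class AttributeConditionedDenoiser(nn.Module):
    def __init__(self, n_attrs=8, dim=64):
        super().__init__()
        self.t_embed = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.a_embed = nn.Sequential(nn.Linear(n_attrs, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.conv_in = nn.Conv2d(1, dim, 3, padding=1)
        self.conv_out = nn.Conv2d(dim, 1, 3, padding=1)

    def forward(self, x_noisy, t, attrs):
        # Sum timestep and attribute embeddings, broadcast over spatial dims.
        cond = self.t_embed(t[:, None].float()) + self.a_embed(attrs)
        h = torch.nn.functional.silu(self.conv_in(x_noisy) + cond[:, :, None, None])
        return self.conv_out(h)  # predicted noise, trained with an MSE objective

model = AttributeConditionedDenoiser()
x = torch.randn(4, 1, 64, 64)        # noisy nodule patches
t = torch.randint(0, 1000, (4,))     # diffusion timesteps
attrs = torch.rand(4, 8)             # hypothetical attribute ratings (shape, texture, ...)
eps_pred = model(x, t, attrs)
```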

A RF-based end-to-end Breast Cancer Prediction algorithm.

Win KN

PubMed | Aug 1, 2025
Breast cancer has become a leading cause of cancer-related deaths among women. Early detection and accurate prediction of breast cancer play a crucial role in improving quality of life. Many researchers have concentrated on developing algorithms and advancing computer-aided diagnosis applications. While much research has been conducted, feature-level research on cancer diagnosis is rare, especially on predicting desired features by feeding breast cancer features into the system. In this regard, this paper proposes a Random Forest-based Breast Cancer Prediction (RF-BCP) algorithm that takes feature inputs to predict cancer. The proposed algorithm was evaluated on two datasets, the Breast Cancer dataset and a curated mammography dataset, and its accuracy was compared with that of the SVM, Gaussian NB, and KNN algorithms. Experimental results show that the proposed algorithm predicts well and outperforms the other machine learning algorithms, supporting decision-making.
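
A minimal sketch of the comparison described above, using scikit-learn's built-in Wisconsin breast cancer dataset as a stand-in for the paper's data; the paper's exact features, preprocessing, and splits are not specified here.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Random Forest (RF-BCP-style)": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(),
    "Gaussian NB": GaussianNB(),
    "KNN": KNeighborsClassifier(),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {clf.score(X_te, y_te):.3f}")
```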

Contrast-Enhanced Ultrasound-Based Intratumoral and Peritumoral Radiomics for Discriminating Carcinoma In Situ and Invasive Carcinoma of the Breast.

Zheng Y, Song Y, Wu T, Chen J, Du Y, Liu H, Wu R, Kuang Y, Diao X

PubMed | Aug 1, 2025
This study aimed to evaluate the efficacy of a diagnostic model integrating intratumoral and peritumoral radiomic features based on contrast-enhanced ultrasound (CEUS) for differentiating carcinoma in situ (CIS) from invasive breast carcinoma (IBC). Consecutive cases confirmed by postoperative histopathological analysis were retrospectively gathered, comprising 143 cases of CIS from January 2018 to May 2024 and 186 cases of IBC from May 2022 to May 2024, totaling 322 patients with 329 lesions and complete preoperative CEUS imaging. Intratumoral regions of interest (ROIs) were defined on CEUS peak-phase images with reference to gray-scale mode, while peritumoral ROIs were defined by expanding 2 mm, 5 mm, and 8 mm beyond the tumor margin for radiomic feature extraction. Statistical and machine learning techniques were employed for feature selection. A logistic regression classifier was used to construct radiomic models integrating intratumoral, peritumoral, and clinical features. Model performance was assessed using the area under the curve (AUC). The model incorporating 5 mm peritumoral features with intratumoral and clinical data exhibited superior diagnostic performance, achieving AUCs of 0.927 and 0.911 in the training and test sets, respectively. It outperformed models based only on clinical features or other radiomic configurations, with the 5 mm peritumoral region proving most effective for lesion discrimination. This study highlights the significant potential of combined intratumoral and peritumoral CEUS radiomics for classifying CIS and IBC, with the integration of 5 mm peritumoral features notably enhancing diagnostic accuracy.
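
A hedged sketch of deriving a peritumoral ring ROI by dilating the intratumoral mask a fixed physical distance (2/5/8 mm) beyond the tumor margin, as in the study design; the mask, pixel spacing, and downstream feature extractor are placeholders.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def peritumoral_ring(tumor_mask, margin_mm, spacing_mm):
    """Return the ring between the tumor margin and margin_mm beyond it."""
    iters = max(1, int(round(margin_mm / spacing_mm)))  # mm -> pixel iterations
    dilated = binary_dilation(tumor_mask, iterations=iters)
    return dilated & ~tumor_mask

mask = np.zeros((256, 256), dtype=bool)
mask[100:140, 110:150] = True                          # toy intratumoral ROI
ring_5mm = peritumoral_ring(mask, margin_mm=5, spacing_mm=0.5)
# `ring_5mm` would feed a radiomics extractor (e.g. pyradiomics) alongside the
# intratumoral ROI; features from both are then combined with clinical data.
print(ring_5mm.sum(), "peritumoral pixels")
```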

Lumbar and pelvic CT image segmentation based on cross-scale feature fusion and linear self-attention mechanism.

Li C, Chen L, Liu Q, Teng J

PubMed | Aug 1, 2025
The lumbar spine and pelvis are critical stress-bearing structures of the human body, and their rapid and accurate segmentation plays a vital role in clinical diagnosis and intervention. However, conventional CT imaging poses significant challenges due to the low contrast of sacral and bilateral hip tissues and the complex and highly similar intervertebral space structures within the lumbar spine. To address these challenges, we propose a general-purpose segmentation network that integrates a cross-scale feature fusion strategy with a linear self-attention mechanism. The proposed network effectively extracts multi-scale features and fuses them along the channel dimension, enabling both structural and boundary information of lumbar and pelvic regions to be captured within the encoder-decoder architecture. Furthermore, we introduce a linear mapping strategy to approximate the traditional attention matrix with a low-rank representation, allowing the linear attention mechanism to significantly reduce computational complexity while maintaining segmentation accuracy for vertebrae and pelvic bones. Comparative and ablation experiments conducted on the CTSpine1K and CTPelvic1K datasets demonstrate that our method achieves improvements of 1.5% in Dice Similarity Coefficient (DSC) and 2.6% in Hausdorff Distance (HD) over state-of-the-art models, validating the effectiveness of our approach in enhancing boundary segmentation quality and segmentation accuracy in homogeneous anatomical regions.
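
A minimal sketch of low-rank linear self-attention in the spirit of the linear mapping described above: keys and values are projected from sequence length n down to rank k, so attention costs O(n*k) rather than O(n^2). Dimensions and the projection scheme are illustrative assumptions, not the paper's exact mechanism.

```python
import torch
import torch.nn as nn

class LowRankSelfAttention(nn.Module):
    def __init__(self, dim, n_tokens, rank):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.E = nn.Linear(n_tokens, rank, bias=False)  # low-rank key projection
        self.F = nn.Linear(n_tokens, rank, bias=False)  # low-rank value projection
        self.scale = dim ** -0.5

    def forward(self, x):                               # x: (batch, n_tokens, dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        k = self.E(k.transpose(1, 2)).transpose(1, 2)   # (batch, rank, dim)
        v = self.F(v.transpose(1, 2)).transpose(1, 2)   # (batch, rank, dim)
        attn = (q @ k.transpose(1, 2) * self.scale).softmax(dim=-1)  # (b, n, rank)
        return attn @ v                                 # (batch, n_tokens, dim)

attn = LowRankSelfAttention(dim=64, n_tokens=1024, rank=64)
out = attn(torch.randn(2, 1024, 64))
print(out.shape)  # torch.Size([2, 1024, 64])
```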

Segmentation of coronary calcifications with a domain knowledge-based lightweight 3D convolutional neural network.

Santos R, Castro R, Baeza R, Nunes F, Filipe VM, Renna F, Paredes H, Fontes-Carvalho R, Pedrosa J

PubMed | Aug 1, 2025
Cardiovascular diseases are the leading cause of death worldwide, with coronary artery disease being the most prevalent. Coronary artery calcifications are critical biomarkers for cardiovascular disease, and their quantification via non-contrast computed tomography is a widely accepted and heavily employed technique for risk assessment. Manual segmentation of these calcifications is a time-consuming task subject to variability. State-of-the-art methods often employ convolutional neural networks for an automated approach. However, there is a lack of studies that perform these segmentations with 3D architectures able to gather the anatomical context necessary to distinguish the different coronary arteries. This paper proposes a novel, automated approach that uses a lightweight three-dimensional convolutional neural network to perform efficient and accurate segmentation and calcium scoring. Results show that this method achieves Dice similarity coefficients of 0.93 ± 0.02, 0.93 ± 0.03, 0.84 ± 0.02, 0.63 ± 0.06 and 0.89 ± 0.03 for the foreground, left anterior descending artery (LAD), left circumflex artery (LCX), left main artery (LM) and right coronary artery (RCA) calcifications, respectively, outperforming other state-of-the-art architectures. An external cohort validation also showed the generalization of this method's performance and how it can be applied in different clinical scenarios. In conclusion, the proposed lightweight 3D convolutional neural network demonstrates high efficiency and accuracy, outperforming state-of-the-art methods and showcasing robust generalization potential.
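
A hedged sketch of the conventional Agatston calcium scoring that typically follows per-artery calcification segmentation; the HU slice, lesion mask, and pixel spacing below are toy placeholders, and the weighting follows the standard Agatston thresholds (lesions at >= 130 HU, weighted 1-4 by peak density). Per-lesion contributions are summed over lesions and slices to get the total score.

```python
import numpy as np

def agatston_contribution(hu_slice, lesion_mask, pixel_area_mm2):
    """Agatston contribution of one lesion on one slice: density weight x area."""
    peak = hu_slice[lesion_mask].max()
    weight = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
    area_mm2 = lesion_mask.sum() * pixel_area_mm2
    return weight * area_mm2

hu = np.full((64, 64), 50.0)
hu[30:33, 30:33] = 320.0                  # toy calcified plaque
mask = hu >= 130                          # e.g. one CNN-predicted LAD lesion
print(f"Agatston contribution: {agatston_contribution(hu, mask, 0.25):.1f}")
```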

LesiOnTime -- Joint Temporal and Clinical Modeling for Small Breast Lesion Segmentation in Longitudinal DCE-MRI

Mohammed Kamran, Maria Bernathova, Raoul Varga, Christian Singer, Zsuzsanna Bago-Horvath, Thomas Helbich, Georg Langs, Philipp Seeböck

arXiv preprint | Aug 1, 2025
Accurate segmentation of small lesions in Breast Dynamic Contrast-Enhanced MRI (DCE-MRI) is critical for early cancer detection, especially in high-risk patients. While recent deep learning methods have advanced lesion segmentation, they primarily target large lesions and neglect valuable longitudinal and clinical information routinely used by radiologists. In real-world screening, detecting subtle or emerging lesions requires radiologists to compare across timepoints and consider previous radiology assessments, such as the BI-RADS score. We propose LesiOnTime, a novel 3D segmentation approach that mimics clinical diagnostic workflows by jointly leveraging longitudinal imaging and BI-RADS scores. The key components are: (1) a Temporal Prior Attention (TPA) block that dynamically integrates information from previous and current scans; and (2) a BI-RADS Consistency Regularization (BCR) loss that enforces latent space alignment for scans with similar radiological assessments, thus embedding domain knowledge into the training process. Evaluated on a curated in-house longitudinal dataset of high-risk patients with DCE-MRI, our approach outperforms state-of-the-art single-timepoint and longitudinal baselines by 5% in terms of Dice. Ablation studies demonstrate that both TPA and BCR contribute complementary performance gains. These results highlight the importance of incorporating temporal and clinical context for reliable early lesion segmentation in real-world breast cancer screening. Our code is publicly available at https://github.com/cirmuw/LesiOnTime
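
A minimal sketch of a BI-RADS consistency regularization in the spirit of the BCR loss described above: pull latent embeddings together for scan pairs that share the same BI-RADS assessment. The pairing rule and distance are illustrative assumptions, not the authors' exact formulation.

```python
import torch

def birads_consistency_loss(z, birads):
    """z: (n, d) latent embeddings; birads: (n,) integer BI-RADS scores."""
    same = (birads[:, None] == birads[None, :]).float()
    same.fill_diagonal_(0)                         # ignore self-pairs
    dist = torch.cdist(z, z) ** 2                  # pairwise squared distances
    return (same * dist).sum() / same.sum().clamp(min=1)

z = torch.randn(8, 32, requires_grad=True)         # latents from the segmentation encoder
birads = torch.tensor([2, 2, 3, 4, 4, 4, 3, 2])
loss = birads_consistency_loss(z, birads)          # added to the main segmentation loss
loss.backward()
```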

Structured Spectral Graph Learning for Anomaly Classification in 3D Chest CT Scans

Theo Di Piazza, Carole Lazarus, Olivier Nempont, Loic Boussel

arXiv preprint | Aug 1, 2025
With the increasing number of CT scan examinations, there is a need for automated methods such as organ segmentation, anomaly detection and report generation to assist radiologists in managing their increasing workload. Multi-label classification of 3D CT scans remains a critical yet challenging task due to the complex spatial relationships within volumetric data and the variety of observed anomalies. Existing approaches based on 3D convolutional networks have limited abilities to model long-range dependencies, while Vision Transformers suffer from high computational costs and often require extensive pre-training on large-scale datasets from the same domain to achieve competitive performance. In this work, we propose an alternative by introducing a new graph-based approach that models CT scans as structured graphs, leveraging axial slice-triplet nodes processed through spectral-domain convolution to enhance multi-label anomaly classification performance. Our method exhibits strong cross-dataset generalization and competitive performance while achieving robustness to z-axis translation. An ablation study evaluates the contribution of each proposed component.
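
A hedged sketch of a spectral-domain graph convolution over slice-triplet nodes: each node embeds three adjacent axial slices, edges link neighboring triplets along the z-axis, and filtering uses the symmetric-normalized adjacency, a first-order spectral approximation as in GCNs. Graph construction, sizes, and the classification head are illustrative assumptions, not the paper's exact architecture.

```python
import torch

n, d = 10, 128                                  # triplet nodes, feature dim
X = torch.randn(n, d)                           # per-triplet slice features

A = torch.zeros(n, n)                           # chain graph along the z-axis
idx = torch.arange(n - 1)
A[idx, idx + 1] = A[idx + 1, idx] = 1.0
A_hat = A + torch.eye(n)                        # add self-loops
D_inv_sqrt = torch.diag(A_hat.sum(1).rsqrt())
L_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt        # normalized propagation operator

W = torch.randn(d, 64) * 0.05                   # learnable filter weights
H = torch.relu(L_norm @ X @ W)                  # one spectral graph conv layer
logits = H.mean(0) @ torch.randn(64, 18) * 0.05 # pooled multi-label anomaly logits
print(logits.shape)                             # torch.Size([18])
```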
