LesiOnTime -- Joint Temporal and Clinical Modeling for Small Breast Lesion Segmentation in Longitudinal DCE-MRI

Mohammed Kamran, Maria Bernathova, Raoul Varga, Christian Singer, Zsuzsanna Bago-Horvath, Thomas Helbich, Georg Langs, Philipp Seeböck

arXiv preprint · Aug 1 2025
Accurate segmentation of small lesions in Breast Dynamic Contrast-Enhanced MRI (DCE-MRI) is critical for early cancer detection, especially in high-risk patients. While recent deep learning methods have advanced lesion segmentation, they primarily target large lesions and neglect valuable longitudinal and clinical information routinely used by radiologists. In real-world screening, detecting subtle or emerging lesions requires radiologists to compare across timepoints and consider previous radiology assessments, such as the BI-RADS score. We propose LesiOnTime, a novel 3D segmentation approach that mimics clinical diagnostic workflows by jointly leveraging longitudinal imaging and BI-RADS scores. The key components are: (1) a Temporal Prior Attention (TPA) block that dynamically integrates information from previous and current scans; and (2) a BI-RADS Consistency Regularization (BCR) loss that enforces latent-space alignment for scans with similar radiological assessments, thus embedding domain knowledge into the training process. Evaluated on a curated in-house longitudinal dataset of high-risk patients with DCE-MRI, our approach outperforms state-of-the-art single-timepoint and longitudinal baselines by 5% in terms of Dice. Ablation studies demonstrate that both TPA and BCR contribute complementary performance gains. These results highlight the importance of incorporating temporal and clinical context for reliable early lesion segmentation in real-world breast cancer screening. Our code is publicly available at https://github.com/cirmuw/LesiOnTime.
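
The abstract does not give the exact form of the BCR loss; as a rough illustration, one plausible reading is a contrastive-style regularizer that pulls together latent codes of scans sharing a BI-RADS score and pushes apart differently scored ones. A minimal sketch (the function name, margin formulation, and all details are assumptions, not the paper's implementation):

```python
import torch
import torch.nn.functional as F

def birads_consistency_loss(latents, birads, margin=1.0):
    # Hypothetical sketch, not the paper's exact BCR loss: encourage latents
    # of scans with equal BI-RADS scores to be close, and differently scored
    # scans to be at least `margin` apart in latent space.
    dist = torch.cdist(latents, latents)                   # (B, B) pairwise L2
    same_full = (birads[:, None] == birads[None, :]).float()
    eye = torch.eye(len(birads), device=latents.device)
    same = same_full - eye                                 # same score, excluding self-pairs
    diff = 1.0 - same_full                                 # different scores
    pull = (same * dist.pow(2)).sum() / same.sum().clamp(min=1)
    push = (diff * F.relu(margin - dist).pow(2)).sum() / diff.sum().clamp(min=1)
    return pull + push

# e.g.: birads_consistency_loss(torch.randn(8, 64), torch.randint(1, 6, (8,)))
```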

Cerebral Amyloid Deposition With ¹⁸F-Florbetapir PET Mediates Retinal Vascular Density and Cognitive Impairment in Alzheimer's Disease.

Chen Z, He HL, Qi Z, Bi S, Yang H, Chen X, Xu T, Jin ZB, Yan S, Lu J

PubMed paper · Aug 1 2025
Alzheimer's disease (AD) is accompanied by alterations in retinal vascular density (VD), but the underlying mechanisms remain unclear. This study investigated the relationships among cerebral amyloid-β (Aβ) deposition, VD, and cognitive decline. We enrolled 92 participants, including 47 AD patients and 45 healthy control (HC) participants. VD across retinal subregions was quantified using deep learning-based fundus photography, and cerebral Aβ deposition was measured with ¹⁸F-florbetapir (¹⁸F-AV45) PET/MRI. Using the diameter of the minimum bounding circle of the optic disc as the unit of distance (papilla-diameter, PD), VD was calculated for the total retina and for the 0.5-1.0 PD, 1.0-1.5 PD, 1.5-2.0 PD, and 2.0-2.5 PD subregions. The standardized uptake value ratio (SUVR) for Aβ deposition was computed for global and regional cortical areas, using the cerebellar cortex as the reference region. Cognitive performance was assessed with the Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA). Pearson correlation, multiple linear regression, and mediation analyses were used to explore the relationships among Aβ deposition, VD, and cognition. AD patients exhibited significantly lower VD in all subregions compared to HC (p < 0.05). Reduced VD correlated with higher SUVR in the global cortex and with a decline in cognitive abilities (p < 0.05). Mediation analysis indicated that VD influenced MMSE and MoCA scores through SUVR in the global cortex, with the most pronounced effects observed in the 1.0-1.5 PD range. Retinal VD is thus associated with cognitive decline, a relationship primarily mediated by cerebral Aβ deposition measured via ¹⁸F-AV45 PET. These findings highlight the potential of retinal VD as a biomarker for early detection in AD.
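
The mediation logic described here (VD influencing cognition through Aβ burden) reduces to two regressions plus an indirect effect a*b. A minimal sketch with synthetic stand-in data, not the study's actual data or analysis code:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in data illustrating simple mediation: VD -> SUVR -> MMSE.
rng = np.random.default_rng(0)
n = 92
vd = rng.normal(size=n)                              # retinal vascular density
suvr = -0.5 * vd + rng.normal(scale=0.5, size=n)     # mediator: global SUVR
mmse = -0.6 * suvr + 0.1 * vd + rng.normal(scale=0.5, size=n)

X = sm.add_constant(vd)
a = sm.OLS(suvr, X).fit().params[1]                  # path a: VD -> SUVR
Xm = sm.add_constant(np.column_stack([vd, suvr]))
fit = sm.OLS(mmse, Xm).fit()
b = fit.params[2]                                    # path b: SUVR -> MMSE given VD
c_prime = fit.params[1]                              # direct effect of VD
print(f"indirect (a*b) = {a * b:.3f}, direct = {c_prime:.3f}")
```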

Mobile U-ViT: Revisiting large kernel and U-shaped ViT for efficient medical image segmentation

Fenghe Tang, Bingkun Nian, Jianrui Ding, Wenxin Ma, Quan Quan, Chengqi Dong, Jie Yang, Wei Liu, S. Kevin Zhou

arXiv preprint · Aug 1 2025
In clinical practice, medical image analysis often requires efficient execution on resource-constrained mobile devices. However, existing mobile models, primarily optimized for natural images, tend to perform poorly on medical tasks due to the significant information density gap between natural and medical domains. Combining computational efficiency with medical imaging-specific architectural advantages remains a challenge when developing lightweight, universal, and high-performing networks. To address this, we propose a mobile model called Mobile U-shaped Vision Transformer (Mobile U-ViT) tailored for medical image segmentation. Specifically, we employ the newly proposed ConvUtr as a hierarchical patch embedding, featuring a parameter-efficient large-kernel CNN with inverted bottleneck fusion. This design exhibits transformer-like representation learning capacity while being lighter and faster. To enable efficient local-global information exchange, we introduce a novel Large-kernel Local-Global-Local (LGL) block that effectively balances the low information density and high-level semantic discrepancy of medical images. Finally, we incorporate a shallow and lightweight transformer bottleneck for long-range modeling and employ a cascaded decoder with downsample skip connections for dense prediction. Despite its reduced computational demands, our medical-optimized architecture achieves state-of-the-art performance across eight public 2D and 3D datasets covering diverse imaging modalities, including zero-shot testing on four unseen datasets. These results establish it as an efficient, powerful, and generalizable solution for mobile medical image analysis. Code is available at https://github.com/FengheTan9/Mobile-U-ViT.
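
The abstract does not detail the ConvUtr block, but the pattern it names, a parameter-efficient large-kernel CNN with inverted bottleneck fusion, is well established. A purely illustrative sketch of such a block (not the paper's architecture):

```python
import torch
import torch.nn as nn

class LargeKernelInvertedBottleneck(nn.Module):
    # Illustrative sketch only: a depthwise large-kernel convolution
    # followed by an inverted bottleneck (expand -> activate -> project),
    # the general pattern the abstract describes.
    def __init__(self, dim, kernel_size=7, expand=4):
        super().__init__()
        self.dw = nn.Conv2d(dim, dim, kernel_size,
                            padding=kernel_size // 2, groups=dim)  # depthwise
        self.norm = nn.BatchNorm2d(dim)
        self.pw1 = nn.Conv2d(dim, dim * expand, 1)   # pointwise expansion
        self.act = nn.GELU()
        self.pw2 = nn.Conv2d(dim * expand, dim, 1)   # pointwise projection

    def forward(self, x):
        return x + self.pw2(self.act(self.pw1(self.norm(self.dw(x)))))

x = torch.randn(1, 32, 64, 64)
print(LargeKernelInvertedBottleneck(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```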

Reference charts for first-trimester placental volume derived using OxNNet.

Mathewlynn S, Starck LN, Yin Y, Soltaninejad M, Swinburne M, Nicolaides KH, Syngelaki A, Contreras AG, Bigiotti S, Woess EM, Gerry S, Collins S

PubMed paper · Aug 1 2025
To establish a comprehensive reference range for OxNNet-derived first-trimester placental volume (FTPV), based on values observed in healthy pregnancies. Data were obtained from the First Trimester Placental Ultrasound Study, an observational cohort study in which three-dimensional placental ultrasound imaging was performed between 11 + 2 and 14 + 1 weeks' gestation, alongside otherwise routine care. A subgroup of singleton pregnancies resulting in term live birth, without neonatal unit admission or major chromosomal or structural abnormality, was included. Exclusion criteria were fetal growth restriction, maternal diabetes mellitus, hypertensive disorders of pregnancy, and other maternal medical conditions (e.g. chronic hypertension, antiphospholipid syndrome, systemic lupus erythematosus). Placental images were processed using the OxNNet toolkit, a software solution based on a fully convolutional neural network, for automated placental segmentation and volume calculation. Quantile regression and the lambda-mu-sigma (LMS) method were applied to model the distribution of FTPV, using both crown-rump length (CRL) and gestational age as predictors. Model fit was assessed using the Akaike information criterion (AIC), and centile curves were constructed for visual inspection. The cohort comprised 2547 cases. The distribution of FTPV across gestational ages was positively skewed, with variation in the distribution at different gestational timepoints. In model comparisons, the LMS method yielded lower AIC values than the quantile regression models. For predicting FTPV from CRL, the LMS model with the Sinh-Arcsinh distribution achieved the best performance, with the lowest AIC value. For gestational-age-based prediction, the LMS model with the Box-Cox Cole and Green original distribution achieved the lowest AIC value. The LMS models were therefore selected to construct centile charts for FTPV based on both CRL and gestational age. Evaluation of the centile charts revealed strong agreement between predicted and observed centiles, with minimal deviations. Both models demonstrated excellent calibration, and the Z-scores derived using each of the models confirmed a normal distribution. This study established reference ranges for FTPV based on both CRL and gestational age in healthy pregnancies. The LMS method provided the best model fit, demonstrating excellent calibration and minimal deviations between predicted and observed centiles. These findings should facilitate the exploration of FTPV as a potential biomarker for adverse pregnancy outcome and provide a foundation for future research into its clinical applications.
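
Once the L (skewness), M (median), and S (coefficient of variation) curves are fitted, converting an observed volume to a z-score follows Cole's standard LMS formula; a small sketch with made-up parameter values (the published charts would supply the real L, M, S at each CRL or gestational age):

```python
import numpy as np

def lms_zscore(x, L, M, S):
    # Cole's LMS z-score: z = ((x/M)**L - 1) / (L*S) for L != 0,
    # and z = ln(x/M) / S in the limit L == 0.
    if abs(L) < 1e-8:
        return np.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

# e.g. a hypothetical placental volume of 60 ml against made-up LMS parameters
print(lms_zscore(60.0, L=0.4, M=55.0, S=0.25))  # ~0.35, i.e. slightly above median
```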

An RF-based end-to-end Breast Cancer Prediction algorithm.

Win KN

PubMed paper · Aug 1 2025
Breast cancer has become a leading cause of cancer-related deaths among women. Early detection and accurate prediction of breast cancer therefore play a crucial role in improving quality of life. Many researchers have concentrated on developing algorithms and computer-aided diagnosis applications. Although much research has been conducted, feature-oriented work on cancer diagnosis is rare, especially work that predicts outcomes by feeding breast cancer features directly into the system. In this regard, this paper proposes a Random Forest-based Breast Cancer Prediction (RF-BCP) algorithm that takes such features as inputs to predict cancer. For the evaluation of the proposed algorithm, two datasets were utilized, namely the Breast Cancer dataset and a curated mammography dataset, and the accuracy of the proposed algorithm was compared with that of SVM, Gaussian Naive Bayes, and KNN algorithms. Experimental results show that the proposed algorithm predicts well and outperforms the other machine learning algorithms, supporting decision-making.
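
As a rough sketch of what such a Random Forest pipeline looks like, here is a minimal example on scikit-learn's built-in Wisconsin breast cancer dataset, used purely as a stand-in; the paper's exact datasets, features, and hyperparameters are not specified in the abstract:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in data: 30 tabular features per case, binary malignant/benign label.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# Random Forest classifier; hyperparameters here are illustrative defaults.
clf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_tr, y_tr)
print(f"test accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```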

Your other Left! Vision-Language Models Fail to Identify Relative Positions in Medical Images

Daniel Wolf, Heiko Hillenhagen, Billurvan Taskin, Alex Bäuerle, Meinrad Beer, Michael Götz, Timo Ropinski

arXiv preprint · Aug 1 2025
Clinical decision-making relies heavily on understanding the relative positions of anatomical structures and anomalies. Therefore, for Vision-Language Models (VLMs) to be applicable in clinical practice, the ability to accurately determine relative positions in medical images is a fundamental prerequisite. Despite its importance, this capability remains highly underexplored. To address this gap, we evaluate the ability of state-of-the-art VLMs (GPT-4o, Llama3.2, Pixtral, and JanusPro) to identify relative positions and find that all models fail at this fundamental task. Inspired by successful approaches in computer vision, we investigate whether visual prompts, such as alphanumeric or colored markers placed on anatomical structures, can enhance performance. While these markers provide moderate improvements, results on medical images remain significantly lower than those observed on natural images. Our evaluations suggest that, in medical imaging, VLMs rely more on prior anatomical knowledge than on actual image content when answering relative-position questions, often leading to incorrect conclusions. To facilitate further research in this area, we introduce the MIRP (Medical Imaging Relative Positioning) benchmark dataset, designed to systematically evaluate the capability to identify relative positions in medical images.
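
The visual-prompting setup described, placing alphanumeric markers on structures before querying a VLM, is easy to mock up. A sketch with Pillow and hypothetical marker coordinates (the benchmark's actual placement procedure is not described here):

```python
from PIL import Image, ImageDraw

# Stand-in grayscale "scan"; in practice the markers would be drawn at
# known anatomical-structure coordinates before sending the image to a VLM.
img = Image.new("L", (256, 256), color=40)
draw = ImageDraw.Draw(img)
for label, (x, y) in {"A": (60, 120), "B": (190, 120)}.items():  # hypothetical positions
    draw.ellipse((x - 10, y - 10, x + 10, y + 10), outline=255, width=2)
    draw.text((x - 4, y - 6), label, fill=255)
img.save("marked_scan.png")  # then ask the model e.g. "Is marker A left of marker B?"
```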

Contrast-Enhanced Ultrasound-Based Intratumoral and Peritumoral Radiomics for Discriminating Carcinoma In Situ and Invasive Carcinoma of the Breast.

Zheng Y, Song Y, Wu T, Chen J, Du Y, Liu H, Wu R, Kuang Y, Diao X

PubMed paper · Aug 1 2025
This study aimed to evaluate the efficacy of a diagnostic model integrating intratumoral and peritumoral radiomic features based on contrast-enhanced ultrasound (CEUS) for differentiating carcinoma in situ (CIS) from invasive breast carcinoma (IBC). Consecutive cases confirmed by postoperative histopathological analysis were retrospectively gathered, comprising 143 cases of CIS from January 2018 to May 2024 and 186 cases of IBC from May 2022 to May 2024, totaling 322 patients with 329 lesions and complete preoperative CEUS imaging. Intratumoral regions of interest (ROIs) were defined on CEUS peak-phase images with reference to the gray-scale mode, while peritumoral ROIs were defined by expanding 2 mm, 5 mm, and 8 mm beyond the tumor margin for radiomic feature extraction. Statistical and machine learning techniques were employed for feature selection. A logistic regression classifier was utilized to construct radiomic models integrating intratumoral, peritumoral, and clinical features. Model performance was assessed using the area under the curve (AUC). The model incorporating 5 mm peritumoral features with intratumoral and clinical data exhibited superior diagnostic performance, achieving AUCs of 0.927 and 0.911 in the training and test sets, respectively. It outperformed models based only on clinical features or other radiomic configurations, with the 5 mm peritumoral region proving most effective for lesion discrimination. This study highlights the significant potential of combined intratumoral and peritumoral CEUS radiomics for classifying CIS and IBC, with the integration of 5 mm peritumoral features notably enhancing diagnostic accuracy.
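
One common way to realize such 2/5/8 mm peritumoral ROIs is to dilate the intratumoral mask and subtract the tumor, leaving a ring. A sketch under the assumption of isotropic pixel spacing (the study's exact ROI construction may differ):

```python
import numpy as np
from scipy import ndimage

def peritumoral_ring(mask, margin_mm, spacing_mm):
    # Dilate the binary tumor mask by margin_mm (converted to pixels via the
    # pixel spacing), then subtract the tumor itself, leaving a peritumoral ring.
    radius_px = int(round(margin_mm / spacing_mm))
    structure = ndimage.generate_binary_structure(2, 1)
    dilated = ndimage.binary_dilation(mask, structure, radius_px)
    return dilated & ~mask

# Toy example: square "tumor" and a 5 mm ring at 0.5 mm/pixel spacing.
mask = np.zeros((64, 64), dtype=bool)
mask[24:40, 24:40] = True
ring = peritumoral_ring(mask, margin_mm=5, spacing_mm=0.5)
print(ring.sum(), "ring pixels")
```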

Segmentation of coronary calcifications with a domain knowledge-based lightweight 3D convolutional neural network.

Santos R, Castro R, Baeza R, Nunes F, Filipe VM, Renna F, Paredes H, Fontes-Carvalho R, Pedrosa J

PubMed paper · Aug 1 2025
Cardiovascular diseases are the leading cause of death in the world, with coronary artery disease being the most prevalent. Coronary artery calcifications are critical biomarkers for cardiovascular disease, and their quantification via non-contrast computed tomography is a widely accepted and heavily employed technique for risk assessment. Manual segmentation of these calcifications is a time-consuming task subject to variability. State-of-the-art methods often employ convolutional neural networks for an automated approach. However, there is a lack of studies that perform these segmentations with 3D architectures, which can gather the anatomical context necessary to distinguish the different coronary arteries. This paper proposes a novel, automated approach that uses a lightweight three-dimensional convolutional neural network to perform efficient and accurate segmentation and calcium scoring. Results show that this method achieves Dice coefficients of 0.93 ± 0.02, 0.93 ± 0.03, 0.84 ± 0.02, 0.63 ± 0.06 and 0.89 ± 0.03 for the foreground, left anterior descending artery (LAD), left circumflex artery (LCX), left main artery (LM), and right coronary artery (RCA) calcifications, respectively, outperforming other state-of-the-art architectures. External cohort validation also demonstrated the generalization of the method's performance and how it can be applied in different clinical scenarios. In conclusion, the proposed lightweight 3D convolutional neural network demonstrates high efficiency and accuracy, outperforming state-of-the-art methods and showing robust generalization potential.
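
For reference, the per-artery Dice coefficient reported above is computed from binary masks; a minimal sketch:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    # Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|),
    # the per-artery overlap metric reported in the abstract above.
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Per-artery use would compare label-specific masks, e.g.
# dice(pred_labels == lad_label, gt_labels == lad_label) for the LAD.
```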

FOCUS-DWI improves prostate cancer detection through deep learning reconstruction with IQMR technology.

Zhao Y, Xie XL, Zhu X, Huang WN, Zhou CW, Ren KX, Zhai RY, Wang W, Wang JW

PubMed paper · Aug 1 2025
This study explored the effects of Intelligent Quick Magnetic Resonance (IQMR) image post-processing on image quality in Field of View Optimized and Constrained Single-Shot Diffusion-Weighted Imaging (FOCUS-DWI) sequences for prostate cancer detection, and assessed its efficacy in distinguishing malignant from benign lesions. The clinical data and MRI images of 62 patients with prostate masses (31 benign and 31 malignant) were retrospectively analyzed. Axial T2-weighted imaging with fat saturation (T2WI-FS) and FOCUS-DWI sequences were acquired, and the FOCUS-DWI images were processed with the IQMR post-processing system to generate IQMR-FOCUS-DWI images. Two independent radiologists performed subjective image scoring, Prostate Imaging Reporting and Data System (PI-RADS) grading, diagnosis of benign versus malignant lesions, and diagnostic confidence scoring for the FOCUS-DWI and IQMR-FOCUS-DWI images. Additionally, quantitative analyses, specifically the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), were conducted using T2WI-FS as the reference standard. The apparent diffusion coefficients (ADCs) of malignant and benign lesions were compared between the two imaging sequences. Spearman correlation coefficients were calculated to evaluate the associations between diagnostic confidence scores and diagnostic accuracy rates for the two sequences, as well as between the ADC values of malignant lesions and Gleason grading in each sequence. Receiver operating characteristic (ROC) curves were utilized to assess the efficacy of ADC in distinguishing lesions. The qualitative analysis revealed that IQMR-FOCUS-DWI images showed significantly better noise suppression, reduced geometric distortion, and enhanced overall quality relative to the FOCUS-DWI images (P < 0.001). There was no significant difference in PI-RADS scores between IQMR-FOCUS-DWI and FOCUS-DWI images (P = 0.0875), while the diagnostic confidence scores of the IQMR-FOCUS-DWI sequences were markedly higher than those of the FOCUS-DWI sequences (P = 0.0002). The diagnoses of benign and malignant prostate lesions from the FOCUS-DWI sequences were consistent with the pathological results (P < 0.05), as were those from the IQMR-FOCUS-DWI sequences (P < 0.05). The quantitative analysis indicated that PSNR, SSIM, and ADC values were markedly greater in IQMR-FOCUS-DWI images than in FOCUS-DWI images (P < 0.01). In both imaging sequences, benign lesions exhibited ADC values markedly greater than those of malignant lesions (P < 0.001). The diagnostic confidence scores of both sequences were significantly positively correlated with diagnostic accuracy. In malignant lesions, the ADC values of the FOCUS-DWI sequences showed moderate negative correlations with Gleason grading, while the ADC values of the IQMR-FOCUS-DWI sequences were strongly negatively associated with Gleason grading. ROC curves indicated superior diagnostic performance of IQMR-FOCUS-DWI (AUC = 0.941) compared to FOCUS-DWI (AUC = 0.832) for differentiating prostate lesions (P = 0.0487). IQMR-FOCUS-DWI significantly enhances image quality and improves diagnostic accuracy for benign and malignant prostate lesions compared to conventional FOCUS-DWI.
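
The PSNR/SSIM comparison against the T2WI-FS reference can be reproduced with scikit-image; a sketch using random stand-in arrays rather than the study's images:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Stand-in slices: `ref` plays the role of the T2WI-FS reference,
# `img` a processed DWI slice registered to it.
ref = np.random.rand(128, 128)
img = np.clip(ref + 0.05 * np.random.randn(128, 128), 0.0, 1.0)

print("PSNR:", peak_signal_noise_ratio(ref, img, data_range=1.0))
print("SSIM:", structural_similarity(ref, img, data_range=1.0))
```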

Minimum Data, Maximum Impact: 20 annotated samples for explainable lung nodule classification

Luisa Gallée, Catharina Silvia Lisson, Christoph Gerhard Lisson, Daniela Drees, Felix Weig, Daniel Vogele, Meinrad Beer, Michael Götz

arXiv preprint · Aug 1 2025
Classification models that provide human-interpretable explanations enhance clinicians' trust and usability in medical image diagnosis. One research focus is the integration and prediction of pathology-related visual attributes used by radiologists alongside the diagnosis, aligning AI decision-making with clinical reasoning. Radiologists use attributes like shape and texture as established diagnostic criteria, and mirroring these in AI decision-making both enhances transparency and enables explicit validation of model outputs. However, the adoption of such models is limited by the scarcity of large-scale medical image datasets annotated with these attributes. To address this challenge, we propose synthesizing attribute-annotated data using a generative model. We enhance a diffusion model with attribute conditioning and train it using only 20 attribute-labeled lung nodule samples from the LIDC-IDRI dataset. Incorporating its generated images into the training of an explainable model boosts performance, increasing attribute prediction accuracy by 13.4% and target prediction accuracy by 1.8% compared to training with only the small real attribute-annotated dataset. This work highlights the potential of synthetic data to overcome dataset limitations, enhancing the applicability of explainable models in medical image analysis.
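
The abstract does not specify how attribute conditioning enters the diffusion model; a common pattern is to embed the discrete attribute ratings and add them to the timestep embedding that modulates the denoiser. An illustrative sketch under that assumption (all names and dimensions are hypothetical):

```python
import torch
import torch.nn as nn

class AttributeConditioning(nn.Module):
    # Illustrative sketch, not the paper's design: one embedding table per
    # discrete attribute; the summed attribute embeddings are added to the
    # diffusion timestep embedding that conditions the denoising network.
    def __init__(self, n_attrs, n_levels, dim):
        super().__init__()
        self.tables = nn.ModuleList(nn.Embedding(n_levels, dim) for _ in range(n_attrs))

    def forward(self, t_emb, attrs):            # attrs: (B, n_attrs) integer ratings
        cond = sum(tab(attrs[:, i]) for i, tab in enumerate(self.tables))
        return t_emb + cond                     # conditioned embedding, same shape

t_emb = torch.randn(4, 128)
attrs = torch.randint(0, 5, (4, 8))             # e.g. 8 LIDC-style attributes, 5 levels
print(AttributeConditioning(8, 5, 128)(t_emb, attrs).shape)  # torch.Size([4, 128])
```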