
Coronary CT angiography evaluation with artificial intelligence for individualized medical treatment of atherosclerosis: a Consensus Statement from the QCI Study Group.

Schulze K, Stantien AM, Williams MC, Vassiliou VS, Giannopoulos AA, Nieman K, Maurovich-Horvat P, Tarkin JM, Vliegenthart R, Weir-McCall J, Mohamed M, Föllmer B, Biavati F, Stahl AC, Knape J, Balogh H, Galea N, Išgum I, Arbab-Zadeh A, Alkadhi H, Manka R, Wood DA, Nicol ED, Nurmohamed NS, Martens FMAC, Dey D, Newby DE, Dewey M

PubMed · Aug 1, 2025
Coronary CT angiography is widely implemented, with an estimated 2.2 million procedures in patients with stable chest pain every year in Europe alone. In parallel, artificial intelligence and machine learning are poised to transform coronary atherosclerotic plaque evaluation by improving reliability and speed. However, little is known about how to use coronary atherosclerosis imaging biomarkers to individualize recommendations for medical treatment. This Consensus Statement from the Quantitative Cardiovascular Imaging (QCI) Study Group outlines key recommendations derived from a three-step Delphi process that took place after the third international QCI Study Group meeting in September 2024. Experts from various fields of cardiovascular imaging agreed on the use of age-adjusted and gender-adjusted percentile curves, based on coronary plaque data from the DISCHARGE and SCOT-HEART trials. Two key issues were addressed: the need to harness the reliability and precision of artificial intelligence and machine learning tools, and the need to tailor treatment on the basis of individualized plaque analysis. The QCI Study Group recommends that the presence of any atherosclerotic plaque should lead to a recommendation of pharmacological treatment, whereas a total plaque volume at or above the 70th percentile warrants high-intensity treatment. The aim of these recommendations is to lay the groundwork for future trials and to unlock the potential of coronary CT angiography to improve patient outcomes globally.
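
As a rough illustration of the percentile rule described above, the sketch below maps a patient's total plaque volume onto an age- and gender-stratified reference distribution and returns the corresponding treatment tier. The reference volumes, threshold handling, and function name are hypothetical placeholders, not QCI-published values.

```python
import numpy as np

# Hypothetical reference distribution of total plaque volume (mm^3) for one
# age/gender stratum; real curves would come from DISCHARGE/SCOT-HEART data.
reference_volumes = np.array([0, 0, 12, 25, 40, 55, 70, 90, 120, 180], dtype=float)

def treatment_recommendation(total_plaque_volume_mm3: float) -> str:
    """Map a patient's total plaque volume to a QCI-style treatment tier."""
    if total_plaque_volume_mm3 <= 0:
        return "no plaque detected: no plaque-driven pharmacological treatment"
    # Patient's percentile within the (hypothetical) reference stratum.
    percentile = 100.0 * np.mean(reference_volumes < total_plaque_volume_mm3)
    if percentile >= 70.0:
        return f"high-intensity treatment (volume at {percentile:.0f}th percentile)"
    return f"standard pharmacological treatment ({percentile:.0f}th percentile)"

print(treatment_recommendation(95.0))
```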

Structured Spectral Graph Learning for Anomaly Classification in 3D Chest CT Scans

Theo Di Piazza, Carole Lazarus, Olivier Nempont, Loic Boussel

arXiv preprint · Aug 1, 2025
With the increasing number of CT scan examinations, there is a need for automated methods such as organ segmentation, anomaly detection, and report generation to assist radiologists in managing their increasing workload. Multi-label classification of 3D CT scans remains a critical yet challenging task due to the complex spatial relationships within volumetric data and the variety of observed anomalies. Existing approaches based on 3D convolutional networks have limited ability to model long-range dependencies, while Vision Transformers suffer from high computational costs and often require extensive pre-training on large-scale datasets from the same domain to achieve competitive performance. In this work, we propose an alternative by introducing a new graph-based approach that models CT scans as structured graphs, leveraging axial slice-triplet nodes processed through spectral-domain convolution to enhance multi-label anomaly classification performance. Our method exhibits strong cross-dataset generalization and competitive performance while remaining robust to z-axis translation. An ablation study evaluates the contribution of each proposed component.
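
To make the graph construction concrete, here is a minimal sketch assuming a chain graph over slice-triplet nodes and a first-order (GCN-style) approximation of spectral filtering. The feature dimensions, adjacency pattern, and single-layer setup are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SpectralGraphLayer(nn.Module):
    """Symmetric-normalized graph convolution, the standard first-order
    approximation of spectral filtering (Kipf & Welling style)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_dim); adj: (num_nodes, num_nodes) with self-loops.
        deg = adj.sum(dim=1)
        d_inv_sqrt = deg.clamp(min=1e-6).pow(-0.5)
        norm_adj = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
        return torch.relu(self.linear(norm_adj @ x))

# Hypothetical setup: 30 axial slices grouped into overlapping triplets;
# each triplet node carries a feature vector from a 2D slice encoder.
num_slices, feat_dim = 30, 256
triplet_feats = torch.randn(num_slices - 2, feat_dim)  # one node per triplet
n = triplet_feats.shape[0]
adj = torch.eye(n)                       # self-loops
idx = torch.arange(n - 1)
adj[idx, idx + 1] = adj[idx + 1, idx] = 1.0  # consecutive triplets share slices

layer = SpectralGraphLayer(feat_dim, 128)
out = layer(triplet_feats, adj)          # (28, 128) node embeddings
pooled = out.mean(dim=0)                 # graph-level embedding; a sigmoid
                                         # multi-label head would follow
```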

LesiOnTime -- Joint Temporal and Clinical Modeling for Small Breast Lesion Segmentation in Longitudinal DCE-MRI

Mohammed Kamran, Maria Bernathova, Raoul Varga, Christian Singer, Zsuzsanna Bago-Horvath, Thomas Helbich, Georg Langs, Philipp Seeböck

arXiv preprint · Aug 1, 2025
Accurate segmentation of small lesions in Breast Dynamic Contrast-Enhanced MRI (DCE-MRI) is critical for early cancer detection, especially in high-risk patients. While recent deep learning methods have advanced lesion segmentation, they primarily target large lesions and neglect valuable longitudinal and clinical information routinely used by radiologists. In real-world screening, detecting subtle or emerging lesions requires radiologists to compare across timepoints and consider previous radiology assessments, such as the BI-RADS score. We propose LesiOnTime, a novel 3D segmentation approach that mimics clinical diagnostic workflows by jointly leveraging longitudinal imaging and BI-RADS scores. The key components are: (1) a Temporal Prior Attention (TPA) block that dynamically integrates information from previous and current scans; and (2) a BI-RADS Consistency Regularization (BCR) loss that enforces latent-space alignment for scans with similar radiological assessments, thus embedding domain knowledge into the training process. Evaluated on a curated in-house longitudinal dataset of high-risk patients with DCE-MRI, our approach outperforms state-of-the-art single-timepoint and longitudinal baselines by 5% in Dice score. Ablation studies demonstrate that both TPA and BCR contribute complementary performance gains. These results highlight the importance of incorporating temporal and clinical context for reliable early lesion segmentation in real-world breast cancer screening. Our code is publicly available at https://github.com/cirmuw/LesiOnTime
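
The abstract does not spell out the BCR loss; below is one plausible contrastive-style formulation of such a latent-alignment regularizer, pulling together embeddings of scans with the same BI-RADS score and pushing apart those with differing scores. The margin value and the exact pairing scheme are assumptions.

```python
import torch
import torch.nn.functional as F

def birads_consistency_loss(embeddings: torch.Tensor,
                            birads: torch.Tensor,
                            margin: float = 1.0) -> torch.Tensor:
    """Sketch of a BI-RADS consistency regularizer.
    embeddings: (B, D) latent vectors; birads: (B,) integer BI-RADS scores."""
    dist = torch.cdist(embeddings, embeddings)           # pairwise L2, (B, B)
    same = (birads[:, None] == birads[None, :]).float()  # same-score mask
    not_self = 1.0 - torch.eye(len(birads), device=embeddings.device)
    pos = (same * not_self * dist.pow(2)).sum()          # attract same scores
    neg = ((1.0 - same) * F.relu(margin - dist).pow(2)).sum()  # repel others
    return (pos + neg) / not_self.sum().clamp(min=1.0)

loss = birads_consistency_loss(torch.randn(8, 64), torch.randint(1, 6, (8,)))
```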

Weakly Supervised Intracranial Aneurysm Detection and Segmentation in MR angiography via Multi-task UNet with Vesselness Prior

Erin Rainville, Amirhossein Rasoulian, Hassan Rivaz, Yiming Xiao

arXiv preprint · Aug 1, 2025
Intracranial aneurysms (IAs) are abnormal dilations of cerebral blood vessels that, if ruptured, can lead to life-threatening consequences. However, their small size and soft contrast in radiological scans often make it difficult to perform accurate and efficient detection and morphological analyses, which are critical in the clinical care of the disorder. Furthermore, the lack of large public datasets with voxel-wise expert annotations poses challenges for developing deep learning algorithms to address these issues. Therefore, we propose a novel weakly supervised 3D multi-task UNet that integrates vesselness priors to jointly perform aneurysm detection and segmentation in time-of-flight MR angiography (TOF-MRA). Specifically, to robustly guide IA detection and segmentation, we employ the popular Frangi vesselness filter to derive soft cerebrovascular priors that serve both as network input and within an attention block, with segmentation performed from the decoder and detection from an auxiliary branch. We train our model on the Lausanne dataset with coarse ground-truth segmentation and evaluate it on the test set with refined labels from the same database. To further assess our model's generalizability, we also validate it externally on the ADAM dataset. Our results demonstrate the superior performance of the proposed technique over state-of-the-art techniques for aneurysm segmentation (Dice = 0.614, 95% HD = 1.38 mm) and detection (false positive rate = 1.47, sensitivity = 92.9%).
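
For readers unfamiliar with the vesselness prior, a minimal sketch follows, using scikit-image's Frangi filter and a simple two-channel stacking of image and prior. The scale settings and normalization are assumptions rather than the paper's configuration.

```python
import numpy as np
from skimage.filters import frangi

def vesselness_prior(tof_mra: np.ndarray) -> np.ndarray:
    """Soft cerebrovascular prior from a TOF-MRA volume via the Frangi filter.
    Vessels appear bright in TOF-MRA, hence black_ridges=False."""
    v = frangi(tof_mra.astype(np.float32),
               sigmas=(1.0, 2.0, 3.0),       # assumed vessel scales in voxels
               black_ridges=False)
    v -= v.min()
    return v / max(v.max(), 1e-8)            # normalize to [0, 1]

# The prior can be stacked with the image as an extra input channel and
# reused to gate an attention block, as the abstract describes.
volume = np.random.rand(64, 128, 128).astype(np.float32)   # placeholder scan
net_input = np.stack([volume, vesselness_prior(volume)], axis=0)  # (2, D, H, W)
```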

Mobile U-ViT: Revisiting large kernel and U-shaped ViT for efficient medical image segmentation

Fenghe Tang, Bingkun Nian, Jianrui Ding, Wenxin Ma, Quan Quan, Chengqi Dong, Jie Yang, Wei Liu, S. Kevin Zhou

arXiv preprint · Aug 1, 2025
In clinical practice, medical image analysis often requires efficient execution on resource-constrained mobile devices. However, existing mobile models, primarily optimized for natural images, tend to perform poorly on medical tasks due to the significant information-density gap between the natural and medical domains. Combining computational efficiency with medical-imaging-specific architectural advantages remains a challenge when developing lightweight, universal, and high-performing networks. To address this, we propose a mobile model called Mobile U-shaped Vision Transformer (Mobile U-ViT) tailored for medical image segmentation. Specifically, we employ the newly proposed ConvUtr as a hierarchical patch embedding, featuring a parameter-efficient large-kernel CNN with inverted bottleneck fusion. This design exhibits transformer-like representation learning capacity while being lighter and faster. To enable efficient local-global information exchange, we introduce a novel Large-kernel Local-Global-Local (LGL) block that effectively balances the low information density and high-level semantic discrepancy of medical images. Finally, we incorporate a shallow and lightweight transformer bottleneck for long-range modeling and employ a cascaded decoder with downsample skip connections for dense prediction. Despite its reduced computational demands, our medical-optimized architecture achieves state-of-the-art performance across eight public 2D and 3D datasets covering diverse imaging modalities, including zero-shot testing on four unseen datasets. These results establish it as an efficient yet powerful and generalizable solution for mobile medical image analysis. Code is available at https://github.com/FengheTan9/Mobile-U-ViT.
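
The sketch below shows a generic large-kernel depthwise block with an inverted pointwise bottleneck, in the spirit of the patch embedding described above; it is a ConvNeXt-style stand-in, not the paper's exact ConvUtr module, and the kernel size and expansion ratio are assumptions.

```python
import torch
import torch.nn as nn

class LargeKernelInvertedBottleneck(nn.Module):
    """Depthwise large-kernel conv followed by an inverted pointwise
    bottleneck (expand then project), with a residual connection."""
    def __init__(self, dim: int, kernel_size: int = 7, expansion: int = 4):
        super().__init__()
        self.dw = nn.Conv2d(dim, dim, kernel_size,
                            padding=kernel_size // 2, groups=dim)  # depthwise
        self.norm = nn.BatchNorm2d(dim)
        self.pw1 = nn.Conv2d(dim, dim * expansion, 1)   # expand channels
        self.act = nn.GELU()
        self.pw2 = nn.Conv2d(dim * expansion, dim, 1)   # project back

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.pw2(self.act(self.pw1(self.norm(self.dw(x)))))

block = LargeKernelInvertedBottleneck(dim=64)
y = block(torch.randn(1, 64, 56, 56))   # shape preserved: (1, 64, 56, 56)
```

The depthwise large kernel gives a wide receptive field at low parameter cost, which is one common way to get transformer-like spatial mixing from a CNN.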

From Consensus to Standardization: Evaluating Deep Learning for Nerve Block Segmentation in Ultrasound Imaging.

Pelletier ED, Jeffries SD, Suissa N, Sarty I, Malka N, Song K, Sinha A, Hemmerling TM

PubMed · Aug 1, 2025
Deep learning can automate nerve identification by learning from expert-labeled examples to detect and highlight nerves in ultrasound images. This study aims to evaluate the performance of deep-learning models in identifying nerves for ultrasound-guided nerve blocks. A total of 3594 raw ultrasound images were collected from public sources (an open GitHub repository and publicly available YouTube videos) covering 9 nerve block regions: Transversus Abdominis Plane (TAP), Femoral Nerve, Posterior Rectus Sheath, Median and Ulnar Nerves, Pectoralis Plane, Sciatic Nerve, Infraclavicular Brachial Plexus, Supraclavicular Brachial Plexus, and Interscalene Brachial Plexus. Of these, 10 images per nerve region were kept for testing, with each image labeled by 10 expert anesthesiologists. The remaining 3504 were labeled by a medical anesthesia resident and augmented to create a diverse training dataset of 25,000 images per nerve region. Additionally, 908 negative ultrasound images, which do not contain the targeted nerve structures, were included to improve model robustness. Ten convolutional neural network-based deep-learning architectures were selected to identify nerve structures. Models were trained using 5-fold cross-validation on an EVGA GeForce RTX 3090 GPU, with batch size, number of epochs, and the Adam optimizer adjusted to enhance the models' effectiveness. After training, models were evaluated on a set of 10 images per nerve region, using the Dice score (range: 0 to 1, where 1 indicates perfect agreement and 0 indicates no overlap) to compare model predictions with expert-labeled images. Further validation was conducted by 10 medical experts who assessed whether they would insert a needle into the model's predictions. Statistical analyses were performed to explore the relationship between Dice scores and expert responses. The R2U-Net model achieved the highest average Dice score (0.7619) across all nerve regions, outperforming the other models (0.7123-0.7619). However, statistically significant differences in model performance were observed only for the TAP nerve region (χ² = 26.4, df = 9, P = .002, ε² = 0.267). Expert evaluations indicated high accuracy in the model predictions, particularly for the Popliteal nerve region, where experts agreed to insert a needle based on all 100 model-generated predictions. Logistic modeling suggested that higher Dice overlap might increase the odds of expert acceptance in the Supraclavicular region (odds ratio [OR] = 8.59 × 10⁴; 95% confidence interval [CI], 0.33-2.25 × 10¹⁰; P = .073). These findings demonstrate the potential of deep-learning models, such as R2U-Net, to deliver consistent segmentation results in ultrasound-guided nerve block procedures.
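
Since the Dice score is the study's primary metric, a minimal reference implementation for binary masks follows; the empty-mask convention used here is one common choice and an assumption on our part.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient between two binary masks:
    1.0 = perfect overlap, 0.0 = no overlap."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement (one convention)
    return 2.0 * intersection / denom

# Example: two overlapping square masks.
a = np.zeros((64, 64)); a[10:30, 10:30] = 1
b = np.zeros((64, 64)); b[15:35, 15:35] = 1
print(f"Dice = {dice_score(a, b):.4f}")
```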

Your other Left! Vision-Language Models Fail to Identify Relative Positions in Medical Images

Daniel Wolf, Heiko Hillenhagen, Billurvan Taskin, Alex Bäuerle, Meinrad Beer, Michael Götz, Timo Ropinski

arXiv preprint · Aug 1, 2025
Clinical decision-making relies heavily on understanding the relative positions of anatomical structures and anomalies. Therefore, for Vision-Language Models (VLMs) to be applicable in clinical practice, the ability to accurately determine relative positions on medical images is a fundamental prerequisite. Despite its importance, this capability remains highly underexplored. To address this gap, we evaluate the ability of state-of-the-art VLMs (GPT-4o, Llama3.2, Pixtral, and JanusPro) and find that all models fail at this fundamental task. Inspired by successful approaches in computer vision, we investigate whether visual prompts, such as alphanumeric or colored markers placed on anatomical structures, can enhance performance. While these markers provide moderate improvements, results remain significantly lower on medical images than on natural images. Our evaluations suggest that, in medical imaging, VLMs rely more on prior anatomical knowledge than on actual image content when answering relative-position questions, often leading to incorrect conclusions. To facilitate further research in this area, we introduce MIRP, the Medical Imaging Relative Positioning benchmark dataset, designed to systematically evaluate the capability to identify relative positions in medical images.
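
The visual-prompting idea above can be reproduced with a few lines of Pillow before sending the image to a VLM (e.g., asking "Is A to the left of B?"). The marker style, radius, and coordinates below are illustrative assumptions, not the paper's protocol.

```python
from PIL import Image, ImageDraw

def add_visual_markers(image: Image.Image, points: dict) -> Image.Image:
    """Overlay alphanumeric markers (colored disk + label) at pixel
    coordinates, e.g. {"A": (120, 85), "B": (300, 210)}."""
    out = image.convert("RGB").copy()
    draw = ImageDraw.Draw(out)
    for label, (x, y) in points.items():
        r = 12  # marker radius in pixels (assumed)
        draw.ellipse([x - r, y - r, x + r, y + r], fill="red", outline="white")
        draw.text((x - 4, y - 7), label, fill="white")
    return out

marked = add_visual_markers(Image.new("RGB", (512, 512)),
                            {"A": (120, 85), "B": (300, 210)})
```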

Evaluation of calcaneal inclusion angle in the diagnosis of pes planus with pretrained deep learning networks: An observational study.

Aktas E, Ceylan N, Yaltirik Bilgin E, Bilgin E, Ince L

PubMed · Aug 1, 2025
Pes planus is a common postural deformity related to the medial longitudinal arch of the foot. Radiographic examinations are important for reproducibility and objectivity; the most commonly used methods are the calcaneal inclusion angle and Mery angle. However, there may be variations in radiographic measurements due to human error and inexperience. In this study, a deep learning (DL)-based solution is proposed to address this problem. Lateral radiographs of the right and left feet of 289 patients were acquired and saved. The study population is homogeneous in terms of age and gender and does not provide sufficient heterogeneity to represent the general population. These radiographic (X-ray) images were measured by 2 different experts and the measurements were recorded. According to these measurements, each X-ray image was labeled as pes planus or non-pes planus. The images were then filtered and resized using Gaussian blurring and median filtering, producing 2 separate datasets. Generally accepted DL models (AlexNet, GoogleNet, SqueezeNet) were adapted to classify these images. The 2-category (pes planus/non-pes planus) data in the 2 preprocessed and resized datasets were classified by fine-tuning these pretrained transfer-learning networks. The GoogleNet and SqueezeNet models achieved 100% accuracy, while AlexNet achieved 92.98% accuracy. These results show that the predictions of the models and the measurements of expert radiologists overlap to a large extent. DL-based diagnostic methods can be used as a decision-support system in the diagnosis of pes planus. DL algorithms enhance the consistency of the diagnostic process by reducing measurement variations between different observers. DL systems accelerate diagnosis by automatically performing angle measurements from X-ray images, which is particularly beneficial in busy clinical settings. DL models integrated with smartphone cameras could facilitate the diagnosis of pes planus and serve as a screening tool, especially in regions with limited access to healthcare.
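
A minimal sketch of the described pipeline (Gaussian blur, median filter, resize, then fine-tuning a pretrained network for the 2-class task) follows, using SqueezeNet as the example backbone. Kernel sizes and the input resolution are assumptions, as the abstract does not report them.

```python
import cv2
import torch.nn as nn
from torchvision import models

def preprocess(xray_gray):
    """xray_gray: 2D uint8 array. Blur, median-filter, resize, make 3-channel."""
    img = cv2.GaussianBlur(xray_gray, (5, 5), 0)   # assumed kernel size
    img = cv2.medianBlur(img, 5)                   # assumed aperture size
    img = cv2.resize(img, (224, 224))              # assumed input resolution
    return cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)   # pretrained nets expect RGB

# Fine-tuning a pretrained SqueezeNet for pes planus / non-pes planus.
model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
model.classifier[1] = nn.Conv2d(512, 2, kernel_size=1)  # replace 1000-way head
model.num_classes = 2
# Training would proceed with a standard cross-entropy loss over both classes.
```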

Multimodal multiphasic preoperative image-based deep-learning predicts HCC outcomes after curative surgery.

Hui RW, Chiu KW, Lee IC, Wang C, Cheng HM, Lu J, Mao X, Yu S, Lam LK, Mak LY, Cheung TT, Chia NH, Cheung CC, Kan WK, Wong TC, Chan AC, Huang YH, Yuen MF, Yu PL, Seto WK

PubMed · Aug 1, 2025
HCC recurrence frequently occurs after curative surgery. Histological microvascular invasion (MVI) predicts recurrence but cannot provide preoperative prognostication, whereas clinical prediction scores have variable performance. Recurr-NET, a multimodal multiphasic residual-network random survival forest deep-learning model incorporating preoperative CT and clinical parameters, was developed to predict HCC recurrence. Preoperative triphasic CT scans were retrieved from patients with resected histology-confirmed HCC from 4 centers in Hong Kong (internal cohort). The internal cohort was randomly divided in an 8:2 ratio into training and internal validation sets. External testing was performed in an independent cohort from Taiwan. Among 1231 patients (age 62.4 y, 83.1% male, 86.8% viral hepatitis, median follow-up 65.1 mo), cumulative HCC recurrence rates at years 2 and 5 were 41.8% and 56.4%, respectively. Recurr-NET achieved excellent accuracy in predicting recurrence from years 1 to 5 (internal AUROC 0.770-0.857; external AUROC 0.758-0.798), significantly outperforming MVI (internal AUROC 0.518-0.590; external AUROC 0.557-0.615) and multiple clinical risk scores (ERASL-PRE, ERASL-POST, DFT, and Shim scores; internal AUROC 0.523-0.587, external AUROC 0.524-0.620) (all p < 0.001). Recurr-NET was superior to MVI in stratifying recurrence risks at year 2 (internal: 72.5% vs. 50.0% with MVI; external: 65.3% vs. 46.6%) and year 5 (internal: 86.4% vs. 62.5%; external: 81.4% vs. 63.8%) (all p < 0.001). Recurr-NET was also superior to MVI in stratifying liver-related and all-cause mortality (all p < 0.001). The performance of Recurr-NET remained robust in subgroup analyses. Recurr-NET accurately predicted HCC recurrence, outperforming MVI and clinical prediction scores, highlighting its potential in preoperative prognostication.
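
To illustrate the general shape of a residual-network-plus-random-survival-forest pipeline like the one described, here is a heavily simplified sketch: a pretrained ResNet embeds the imaging, the embedding is concatenated with clinical parameters, and a random survival forest predicts recurrence risk. The 2D single-image encoder, feature dimensions, clinical variables, and synthetic labels are all placeholder assumptions and do not reflect Recurr-NET's actual design.

```python
import numpy as np
import torch
from torchvision import models
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

# Image branch: residual network exposing 512-d embeddings.
encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()
encoder.eval()

n = 64                                     # placeholder cohort size
with torch.no_grad():
    # Simplification: one 3-channel image per patient (e.g., one slice per
    # CT phase); the real model is multiphasic and volumetric.
    img_feats = encoder(torch.randn(n, 3, 224, 224)).numpy()
clinical = np.random.rand(n, 8)            # e.g., age, AFP, cirrhosis, ... (assumed)
X = np.concatenate([img_feats, clinical], axis=1)

# Survival labels: recurrence event indicator and time-to-event in months.
y = Surv.from_arrays(event=np.random.rand(n) < 0.4,
                     time=np.random.uniform(1, 60, n))
rsf = RandomSurvivalForest(n_estimators=100, random_state=0).fit(X, y)
risk = rsf.predict(X[:5])                  # higher score = higher recurrence risk
```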
