
Utility of artificial intelligence in radiosurgery for pituitary adenoma: a deep learning-based automated segmentation model and evaluation of its clinical applicability.

Černý M, May J, Hamáčková L, Hallak H, Novotný J, Baručić D, Kybic J, May M, Májovský M, Link MJ, Balasubramaniam N, Síla D, Babničová M, Netuka D, Liščák R

PubMed · Aug 1, 2025
The objective of this study was to develop a deep learning model for automated pituitary adenoma segmentation in MRI scans for stereotactic radiosurgery planning and to assess its accuracy and efficiency in clinical settings. An nnU-Net-based model was trained on MRI scans with expert segmentations from 582 patients treated with the Leksell Gamma Knife over the course of 12 years. The accuracy of the model was evaluated by a human expert on a separate dataset of 146 previously unseen patients. The primary outcome was the comparison of expert ratings between the predicted segmentations and a control group consisting of the original manual segmentations. Secondary outcomes were the influence of tumor volume, previous surgery, previous stereotactic radiosurgery (SRS), and endocrinological status on expert ratings; performance in a subgroup of nonfunctioning macroadenomas (1000-4000 mm³) without previous surgery and/or radiosurgery; the influence of using additional MRI modalities as model input; and the reduction in time cost. The model achieved Dice similarity coefficients of 82.3%, 63.9%, and 79.6% for tumor, normal gland, and optic nerve, respectively. A human expert rated 20.6% of the segmentations as applicable in treatment planning without any modifications, 52.7% as applicable with minor manual modifications, and 26.7% as inapplicable. The ratings for predicted segmentations were lower than for the control group of original segmentations (p < 0.001). Larger tumor volume, history of previous radiosurgery, and nonfunctioning pituitary adenoma were associated with better expert ratings (p = 0.005, p = 0.007, and p < 0.001, respectively). In the subgroup without previous surgery, expert ratings were more favorable, but the association did not reach statistical significance (p = 0.074). In the subgroup of noncomplex cases (n = 9), 55.6% of the segmentations were rated as applicable without any manual modifications and none were rated as inapplicable. Manually correcting inaccurate segmentations instead of creating them from scratch led to a 53.6% reduction in time cost (p < 0.001). The majority of the predicted segmentations were applicable for treatment planning with either no or only minor manual modifications, representing a significant increase in the efficiency of the planning process. The predicted segmentations can be loaded directly into the planning software used in clinical practice. The authors discuss considerations regarding the clinical utility of automated segmentation models and their integration within established clinical workflows, and outline directions for future research.
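
As a point of reference for the Dice similarity coefficients reported above (82.3%, 63.9%, and 79.6%), here is a minimal sketch of how this overlap metric is typically computed between a predicted and an expert binary mask; the function name and toy masks are illustrative and not taken from the paper.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks.

    Returns 1.0 for two empty masks by convention.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0
    return 2.0 * intersection / total

# Toy example: two overlapping 3D masks
pred = np.zeros((16, 16, 16), dtype=bool)
truth = np.zeros((16, 16, 16), dtype=bool)
pred[4:10, 4:10, 4:10] = True
truth[5:11, 5:11, 5:11] = True
print(f"Dice: {dice_coefficient(pred, truth):.3f}")
```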

Utility of an artificial intelligence-based lung CT airway model in the quantitative evaluation of large and small airway lesions in patients with chronic obstructive pulmonary disease.

Liu Z, Li J, Li B, Yi G, Pang S, Zhang R, Li P, Yin Z, Zhang J, Lv B, Yan J, Ma J

PubMed · Aug 1, 2025
Accurate quantification of the extent of bronchial damage across various airway levels in chronic obstructive pulmonary disease (COPD) remains a challenge. In this study, artificial intelligence (AI) was employed to develop an airway segmentation model to investigate the morphological changes of the central and peripheral airways in COPD patients and the effects of these airway changes on pulmonary function classification and acute COPD exacerbations. Clinical data from 340 patients with COPD and 73 healthy volunteers were collected and compiled. An AI-driven airway segmentation model was constructed using Convolutional Neural Regressor (CNR) and Airway Transfer Network (ATN) algorithms. The efficacy of the model was evaluated with support vector machine (SVM) and random forest regression approaches. The area under the receiver operating characteristic (ROC) curve (AUC) of the SVM in evaluating the COPD airway segmentation model was 0.96, with a sensitivity of 97% and a specificity of 92%; however, the AUC decreased to 0.81 when the healthy control group was replaced with non-COPD outpatients. Compared with the healthy group, patients with COPD had fewer airway generations and a lower total number of segmented airways, and their right main bronchus and bilateral lobar bronchi had smaller diameters and thinner airway walls (all P < 0.01). In contrast, the subsegmental and small airway bronchi had larger diameters, thicker airway walls, and shorter arc lengths (all P < 0.01), particularly in patients with severe COPD (all P < 0.05). Correlation and regression analyses showed that FEV1%pred was positively correlated with the diameters and airway wall thickness of the main and lobar airways and with the arc lengths of the small airway bronchi (all P < 0.05). Airway wall thickness of the subsegmental and small airways had the greatest influence on the frequency of COPD exacerbations. The AI-based lung CT airway segmentation model is a non-invasive quantitative tool for assessing chronic obstructive pulmonary disease. The main changes in COPD patients are narrowing of the central airways with thinner walls, and shorter arc lengths with larger diameters and thicker walls in the peripheral airways; these changes are more pronounced in severe disease. Pulmonary function classification and small and medium airway dysfunction are likewise influenced by the diameter, wall thickness, and arc length of the large and small airways, and small airway remodeling is more pronounced during acute exacerbations of COPD.
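
The abstract reports an SVM evaluation with an AUC of 0.96, 97% sensitivity, and 92% specificity. The sketch below shows one conventional way such metrics are obtained with scikit-learn; the synthetic features and labels are placeholders, not the study's airway measurements.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

# Synthetic stand-in for per-patient airway metrics (diameter, wall
# thickness, arc length, ...) with a binary COPD label.
rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 6))
y = (X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.8, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]
preds = (scores >= 0.5).astype(int)

auc = roc_auc_score(y_test, scores)
tn, fp, fn, tp = confusion_matrix(y_test, preds).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.2f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
```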

Coronary CT angiography evaluation with artificial intelligence for individualized medical treatment of atherosclerosis: a Consensus Statement from the QCI Study Group.

Schulze K, Stantien AM, Williams MC, Vassiliou VS, Giannopoulos AA, Nieman K, Maurovich-Horvat P, Tarkin JM, Vliegenthart R, Weir-McCall J, Mohamed M, Föllmer B, Biavati F, Stahl AC, Knape J, Balogh H, Galea N, Išgum I, Arbab-Zadeh A, Alkadhi H, Manka R, Wood DA, Nicol ED, Nurmohamed NS, Martens FMAC, Dey D, Newby DE, Dewey M

PubMed · Aug 1, 2025
Coronary CT angiography is widely implemented, with an estimated 2.2 million procedures in patients with stable chest pain every year in Europe alone. In parallel, artificial intelligence and machine learning are poised to transform coronary atherosclerotic plaque evaluation by improving reliability and speed. However, little is known about how to use coronary atherosclerosis imaging biomarkers to individualize recommendations for medical treatment. This Consensus Statement from the Quantitative Cardiovascular Imaging (QCI) Study Group outlines key recommendations derived from a three-step Delphi process that took place after the third international QCI Study Group meeting in September 2024. Experts from various fields of cardiovascular imaging agreed on the use of age-adjusted and gender-adjusted percentile curves, based on coronary plaque data from the DISCHARGE and SCOT-HEART trials. Two key issues were addressed: the need to harness the reliability and precision of artificial intelligence and machine learning tools, and the need to tailor treatment on the basis of individualized plaque analysis. The QCI Study Group recommends that the presence of any atherosclerotic plaque should lead to a recommendation of pharmacological treatment, whereas a total plaque volume at or above the 70th percentile warrants high-intensity treatment. The aim of these recommendations is to lay the groundwork for future trials and to unlock the potential of coronary CT angiography to improve patient outcomes globally.
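
To make the recommendation logic concrete, here is a hedged sketch of how a patient's total plaque volume might be mapped to the two treatment tiers described above (any plaque leads to pharmacological treatment; at or above the 70th percentile leads to high-intensity treatment). The reference distribution is synthetic, and the actual age- and gender-adjusted percentile curves from DISCHARGE and SCOT-HEART are not reproduced here.

```python
import numpy as np

def treatment_recommendation(total_plaque_volume_mm3: float,
                             reference_volumes_mm3: np.ndarray) -> str:
    """Illustrative mapping from total plaque volume to the QCI-style
    recommendation tiers described in the abstract.

    reference_volumes_mm3 stands in for an age- and gender-matched
    reference distribution; the real percentile curves are not reproduced.
    """
    if total_plaque_volume_mm3 <= 0:
        return "no plaque: no plaque-directed pharmacological treatment"
    percentile = (reference_volumes_mm3 < total_plaque_volume_mm3).mean() * 100
    if percentile >= 70:
        return f"{percentile:.0f}th percentile: high-intensity treatment"
    return f"{percentile:.0f}th percentile: pharmacological treatment"

# Hypothetical reference distribution and patient value
reference = np.random.default_rng(1).lognormal(mean=4.0, sigma=1.0, size=5000)
print(treatment_recommendation(180.0, reference))
```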

BEA-CACE: branch-endpoint-aware double-DQN for coronary artery centerline extraction in CT angiography images.

Zhang Y, Luo G, Wang W, Cao S, Dong S, Yu D, Wang X, Wang K

PubMed · Aug 1, 2025
In order to automate centerline extraction of the coronary tree, three challenges must be addressed: tracking branches automatically, passing through plaques successfully, and detecting endpoints accurately. This study aims to develop a method that addresses all three. We propose a branch-endpoint-aware coronary centerline extraction framework consisting of a deep reinforcement learning-based tracker and a 3D dilated CNN-based detector. The tracker predicts the actions of an agent that follows the centerline. The detector identifies bifurcation points and endpoints, helping the tracker follow branches and terminate the tracking process automatically; it also estimates the radius of the coronary artery. The method achieves state-of-the-art performance in both centerline extraction and radius estimation. Furthermore, it requires minimal user interaction to extract a coronary tree, an advantage over other interactive methods. The method tracks branches automatically, passes through plaques successfully, and detects endpoints accurately. Whereas other interactive methods require multiple seeds, our method needs only one seed to extract the entire coronary tree.
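
The tracker-detector interplay described above can be illustrated with a short, purely schematic loop: a greedy agent steps along the vessel while a detector flags bifurcations (which spawn new traces) and endpoints (which terminate a trace) and returns a radius estimate. The networks are replaced by random placeholders, so this does not reflect the paper's double-DQN or 3D dilated CNN implementations.

```python
import numpy as np
from collections import deque

# Hypothetical stand-ins for the trained components: the reinforcement
# learning tracker (an action-value network) and the detector
# (bifurcation/endpoint classifier with radius regression).
RNG = np.random.default_rng(0)
DIRECTIONS = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                       [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)

def tracker_step(volume, position):
    """Greedy action selection: pick the direction with the highest Q-value."""
    q_values = RNG.random(len(DIRECTIONS))    # placeholder for the Q-network output
    return DIRECTIONS[int(np.argmax(q_values))]

def detector(volume, position):
    """Classify the local patch and estimate the lumen radius (placeholders)."""
    label = RNG.choice(["inside", "bifurcation", "endpoint"], p=[0.9, 0.03, 0.07])
    return label, 1.5                          # placeholder radius in mm

def extract_tree(volume, seed, max_steps=200):
    """Trace the coronary tree from a single seed: follow the tracker's steps,
    spawn new traces at detected bifurcations, stop a trace at an endpoint."""
    queue, centerlines = deque([np.asarray(seed, dtype=float)]), []
    while queue:
        position, trace = queue.popleft(), []
        for _ in range(max_steps):
            label, radius = detector(volume, position)
            trace.append((position.copy(), radius))
            if label == "endpoint":
                break
            if label == "bifurcation":
                queue.append(position.copy())  # start a new branch trace here
            position = position + tracker_step(volume, position)
        centerlines.append(trace)
    return centerlines

volume = np.zeros((64, 64, 64), dtype=np.float32)
tree = extract_tree(volume, seed=(32, 32, 32))
print(f"{len(tree)} branch trace(s) extracted")
```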

Segmentation of coronary calcifications with a domain knowledge-based lightweight 3D convolutional neural network.

Santos R, Castro R, Baeza R, Nunes F, Filipe VM, Renna F, Paredes H, Fontes-Carvalho R, Pedrosa J

PubMed · Aug 1, 2025
Cardiovascular diseases are the leading cause of death worldwide, with coronary artery disease being the most prevalent. Coronary artery calcifications are critical biomarkers for cardiovascular disease, and their quantification via non-contrast computed tomography is a widely accepted and heavily employed technique for risk assessment. Manual segmentation of these calcifications is time-consuming and subject to variability. State-of-the-art methods often employ convolutional neural networks for an automated approach, but few studies perform these segmentations with 3D architectures, which can capture the anatomical context needed to distinguish the different coronary arteries. This paper proposes a novel automated approach that uses a lightweight three-dimensional convolutional neural network to perform efficient and accurate segmentation and calcium scoring. Results show that this method achieves Dice similarity coefficients of 0.93 ± 0.02, 0.93 ± 0.03, 0.84 ± 0.02, 0.63 ± 0.06, and 0.89 ± 0.03 for the foreground, left anterior descending artery (LAD), left circumflex artery (LCX), left main artery (LM), and right coronary artery (RCA) calcifications, respectively, outperforming other state-of-the-art architectures. Validation on an external cohort further demonstrated the generalization of the method and its applicability in different clinical scenarios. In conclusion, the proposed lightweight 3D convolutional neural network demonstrates high efficiency and accuracy, outperforming state-of-the-art methods and showing robust generalization potential.
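
For context on the calcium-scoring side of the pipeline, the following sketch computes a simplified Agatston-style score from a CT volume and a per-artery calcification mask using the standard density weighting (1 for 130-199 HU, 2 for 200-299 HU, 3 for 300-399 HU, 4 for ≥400 HU). Per-slice connected-component handling is omitted for brevity, and the paper's exact scoring procedure may differ.

```python
import numpy as np

def agatston_weight(max_hu: float) -> int:
    """Standard Agatston density weighting from the lesion's peak HU."""
    if max_hu >= 400:
        return 4
    if max_hu >= 300:
        return 3
    if max_hu >= 200:
        return 2
    if max_hu >= 130:
        return 1
    return 0

def agatston_score(ct_hu: np.ndarray, mask: np.ndarray,
                   pixel_area_mm2: float) -> float:
    """Simplified Agatston-style score for one artery's calcification mask.

    ct_hu: 3D CT volume in Hounsfield units (slices, rows, cols).
    mask:  binary mask of calcifications for a single artery.
    """
    score = 0.0
    for hu_slice, m_slice in zip(ct_hu, mask.astype(bool)):
        if not m_slice.any():
            continue
        area_mm2 = m_slice.sum() * pixel_area_mm2
        score += area_mm2 * agatston_weight(hu_slice[m_slice].max())
    return score

# Toy volume: one lesion of 150 voxels peaking at 350 HU
ct = np.full((3, 64, 64), -1000.0)
mask = np.zeros_like(ct, dtype=bool)
mask[1, 20:30, 20:35] = True
ct[mask] = 350.0
print(f"Agatston-style score: {agatston_score(ct, mask, pixel_area_mm2=0.25):.1f}")
```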

LesiOnTime -- Joint Temporal and Clinical Modeling for Small Breast Lesion Segmentation in Longitudinal DCE-MRI

Mohammed Kamran, Maria Bernathova, Raoul Varga, Christian Singer, Zsuzsanna Bago-Horvath, Thomas Helbich, Georg Langs, Philipp Seeböck

arXiv preprint · Aug 1, 2025
Accurate segmentation of small lesions in Breast Dynamic Contrast-Enhanced MRI (DCE-MRI) is critical for early cancer detection, especially in high-risk patients. While recent deep learning methods have advanced lesion segmentation, they primarily target large lesions and neglect valuable longitudinal and clinical information routinely used by radiologists. In real-world screening, detecting subtle or emerging lesions requires radiologists to compare across timepoints and consider previous radiology assessments, such as the BI-RADS score. We propose LesiOnTime, a novel 3D segmentation approach that mimics clinical diagnostic workflows by jointly leveraging longitudinal imaging and BI-RADS scores. The key components are: (1) a Temporal Prior Attention (TPA) block that dynamically integrates information from previous and current scans; and (2) a BI-RADS Consistency Regularization (BCR) loss that enforces latent space alignment for scans with similar radiological assessments, thus embedding domain knowledge into the training process. Evaluated on a curated in-house longitudinal dataset of high-risk patients with DCE-MRI, our approach outperforms state-of-the-art single-timepoint and longitudinal baselines by 5% in terms of Dice. Ablation studies demonstrate that both TPA and BCR contribute complementary performance gains. These results highlight the importance of incorporating temporal and clinical context for reliable early lesion segmentation in real-world breast cancer screening. Our code is publicly available at https://github.com/cirmuw/LesiOnTime
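
The BI-RADS Consistency Regularization idea, pulling together latent representations of scans that share the same radiological assessment, can be sketched as a simple pairwise penalty. The PyTorch snippet below is only an illustration of that idea; the actual BCR loss in LesiOnTime may be formulated differently.

```python
import torch
import torch.nn.functional as F

def birads_consistency_loss(embeddings: torch.Tensor,
                            birads: torch.Tensor) -> torch.Tensor:
    """Illustrative consistency regularizer: penalize the latent distance
    between pairs of scans in a batch that share the same BI-RADS score.

    embeddings: (B, D) latent vectors from the segmentation encoder.
    birads:     (B,) integer BI-RADS assessments.
    """
    emb = F.normalize(embeddings, dim=1)
    diff = emb.unsqueeze(1) - emb.unsqueeze(0)       # (B, B, D) pairwise diffs
    dist2 = (diff ** 2).sum(dim=-1)                  # squared latent distances
    same = (birads.unsqueeze(0) == birads.unsqueeze(1)).float()
    same.fill_diagonal_(0)                           # ignore self-pairs
    if same.sum() == 0:
        return embeddings.sum() * 0.0                # no same-score pairs: zero loss
    return (dist2 * same).sum() / same.sum()

# Toy batch: 6 latent vectors with BI-RADS assessments
z = torch.randn(6, 32, requires_grad=True)
scores = torch.tensor([2, 2, 3, 4, 4, 4])
loss = birads_consistency_loss(z, scores)
loss.backward()
print(f"BCR-style loss: {loss.item():.4f}")
```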

Weakly Supervised Intracranial Aneurysm Detection and Segmentation in MR angiography via Multi-task UNet with Vesselness Prior

Erin Rainville, Amirhossein Rasoulian, Hassan Rivaz, Yiming Xiao

arXiv preprint · Aug 1, 2025
Intracranial aneurysms (IAs) are abnormal dilations of cerebral blood vessels that, if ruptured, can lead to life-threatening consequences. However, their small size and soft contrast in radiological scans often make accurate and efficient detection and morphological analysis difficult, both of which are critical in the clinical care of the disorder. Furthermore, the lack of large public datasets with voxel-wise expert annotations poses challenges for developing deep learning algorithms to address these issues. We therefore propose a novel weakly supervised 3D multi-task UNet that integrates vesselness priors to jointly perform aneurysm detection and segmentation in time-of-flight MR angiography (TOF-MRA). Specifically, to robustly guide IA detection and segmentation, we employ the popular Frangi vesselness filter to derive soft cerebrovascular priors that serve both as a network input and within an attention block, with segmentation produced by the decoder and detection by an auxiliary branch. We train our model on the Lausanne dataset with coarse ground-truth segmentations and evaluate it on the test set with refined labels from the same database. To further assess our model's generalizability, we also validate it externally on the ADAM dataset. Our results demonstrate the superior performance of the proposed technique over SOTA techniques for aneurysm segmentation (Dice = 0.614, 95% HD = 1.38 mm) and detection (false positive rate = 1.47, sensitivity = 92.9%).
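
A hedged sketch of the vesselness-prior step: scikit-image's frangi filter can produce a soft cerebrovascular map from a TOF-MRA volume, which is then stacked with the intensity image as an extra input channel. The sigmas, normalization, and two-channel stacking below are illustrative choices, not the paper's exact configuration.

```python
import numpy as np
from skimage.filters import frangi

def vesselness_prior(tof_mra: np.ndarray, sigmas=(1, 2, 3)) -> np.ndarray:
    """Compute a Frangi vesselness map for a TOF-MRA volume and rescale it
    to [0, 1] so it can serve as a soft cerebrovascular prior.

    black_ridges=False because vessels appear bright on TOF-MRA.
    """
    v = frangi(tof_mra.astype(np.float32), sigmas=sigmas, black_ridges=False)
    v -= v.min()
    if v.max() > 0:
        v /= v.max()
    return v

# Stack the intensity volume and its vesselness prior as a 2-channel input,
# mirroring the idea of feeding the prior alongside the image to the network.
volume = np.random.rand(32, 64, 64).astype(np.float32)
prior = vesselness_prior(volume)
network_input = np.stack([volume, prior], axis=0)  # (channels, D, H, W)
print(network_input.shape)
```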

Cerebral Amyloid Deposition With ¹⁸F-Florbetapir PET Mediates Retinal Vascular Density and Cognitive Impairment in Alzheimer's Disease.

Chen Z, He HL, Qi Z, Bi S, Yang H, Chen X, Xu T, Jin ZB, Yan S, Lu J

PubMed · Aug 1, 2025
Alzheimer's disease (AD) is accompanied by alterations in retinal vascular density (VD), but the underlying mechanisms remain unclear. This study investigated the relationships among cerebral amyloid-β (Aβ) deposition, VD, and cognitive decline. We enrolled 92 participants, including 47 AD patients and 45 healthy control (HC) participants. VD across retinal subregions was quantified using deep learning-based fundus photography, and cerebral Aβ deposition was measured with ¹⁸F-florbetapir (¹⁸F-AV45) PET/MRI. Using the diameter of the minimum bounding circle of the optic disc (papilla diameter, PD) as the unit of measurement, VD was calculated in total and within annular subregions at 0.5-1.0 PD, 1.0-1.5 PD, 1.5-2.0 PD, and 2.0-2.5 PD. The standardized uptake value ratio (SUVR) for Aβ deposition was computed for global and regional cortical areas, using the cerebellar cortex as the reference region. Cognitive performance was assessed with the Mini-Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA). Pearson correlation, multiple linear regression, and mediation analyses were used to explore the relationships among Aβ deposition, VD, and cognition. AD patients exhibited significantly lower VD in all subregions compared with HC (p < 0.05). Reduced VD correlated with higher SUVR in the global cortex and with a decline in cognitive abilities (p < 0.05). Mediation analysis indicated that VD influenced MMSE and MoCA scores through SUVR in the global cortex, with the most pronounced effects observed in the 1.0-1.5 PD range. Retinal VD is associated with cognitive decline, a relationship primarily mediated by cerebral Aβ deposition measured via ¹⁸F-AV45 PET. These findings highlight the potential of retinal VD as a biomarker for early detection of AD.
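
To illustrate the mediation logic (VD → global SUVR → cognition), the sketch below runs a basic product-of-coefficients mediation analysis with statsmodels on synthetic data; the variables and effect sizes are invented, and only the analysis pattern is meant to carry over.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in data: retinal vascular density (VD), global amyloid
# SUVR (mediator), and MMSE (outcome); not the study's actual data.
rng = np.random.default_rng(0)
n = 92
vd = rng.normal(0.25, 0.03, n)
suvr = 1.8 - 2.0 * vd + rng.normal(0, 0.1, n)
mmse = 20 + 10 * vd - 5 * suvr + rng.normal(0, 1.5, n)

# Path a: VD -> SUVR
a = sm.OLS(suvr, sm.add_constant(vd)).fit().params[1]
# Paths c' (direct VD -> MMSE) and b (SUVR -> MMSE), adjusting for each other
xm = sm.add_constant(np.column_stack([vd, suvr]))
fit = sm.OLS(mmse, xm).fit()
c_prime, b = fit.params[1], fit.params[2]
# Total effect c: VD -> MMSE without the mediator
c = sm.OLS(mmse, sm.add_constant(vd)).fit().params[1]

indirect = a * b
print(f"total={c:.2f}  direct={c_prime:.2f}  indirect (via SUVR)={indirect:.2f}")
print(f"proportion mediated: {indirect / c:.2f}")
```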

Mobile U-ViT: Revisiting large kernel and U-shaped ViT for efficient medical image segmentation

Fenghe Tang, Bingkun Nian, Jianrui Ding, Wenxin Ma, Quan Quan, Chengqi Dong, Jie Yang, Wei Liu, S. Kevin Zhou

arXiv preprint · Aug 1, 2025
In clinical practice, medical image analysis often requires efficient execution on resource-constrained mobile devices. However, existing mobile models-primarily optimized for natural images-tend to perform poorly on medical tasks due to the significant information density gap between natural and medical domains. Combining computational efficiency with medical imaging-specific architectural advantages remains a challenge when developing lightweight, universal, and high-performing networks. To address this, we propose a mobile model called Mobile U-shaped Vision Transformer (Mobile U-ViT) tailored for medical image segmentation. Specifically, we employ the newly proposed ConvUtr as the hierarchical patch embedding, featuring a parameter-efficient large-kernel CNN with inverted bottleneck fusion. This design exhibits transformer-like representation learning capacity while being lighter and faster. To enable efficient local-global information exchange, we introduce a novel Large-kernel Local-Global-Local (LGL) block that effectively balances the low information density and high-level semantic discrepancy of medical images. Finally, we incorporate a shallow and lightweight transformer bottleneck for long-range modeling and employ a cascaded decoder with downsample skip connections for dense prediction. Despite its reduced computational demands, our medical-optimized architecture achieves state-of-the-art performance across eight public 2D and 3D datasets covering diverse imaging modalities, including zero-shot testing on four unseen datasets. These results establish it as an efficient, powerful, and generalizable solution for mobile medical image analysis. Code is available at https://github.com/FengheTan9/Mobile-U-ViT.
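
The "large-kernel CNN with inverted bottleneck" idea behind the patch embedding can be sketched as a depthwise large-kernel convolution followed by a pointwise expand-activate-project stage. The PyTorch block below is a generic 2D illustration of that pattern, not the actual ConvUtr module.

```python
import torch
import torch.nn as nn

class LargeKernelInvertedBottleneck(nn.Module):
    """Illustrative block: depthwise large-kernel convolution followed by a
    pointwise inverted bottleneck (expand -> activate -> project).
    A sketch of the general pattern only, not the paper's ConvUtr design.
    """
    def __init__(self, channels: int, kernel_size: int = 7, expansion: int = 4):
        super().__init__()
        self.dwconv = nn.Conv2d(channels, channels, kernel_size,
                                padding=kernel_size // 2, groups=channels)
        self.norm = nn.BatchNorm2d(channels)
        self.expand = nn.Conv2d(channels, channels * expansion, kernel_size=1)
        self.act = nn.GELU()
        self.project = nn.Conv2d(channels * expansion, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.dwconv(x)                            # large-kernel spatial mixing
        x = self.norm(x)
        x = self.project(self.act(self.expand(x)))    # inverted bottleneck
        return x + residual                           # residual connection

x = torch.randn(1, 32, 64, 64)
block = LargeKernelInvertedBottleneck(channels=32)
print(block(x).shape)  # torch.Size([1, 32, 64, 64])
```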

