Page 5 of 21209 results

Inconsistency of AI in intracranial aneurysm detection with varying dose and image reconstruction.

Goelz L, Laudani A, Genske U, Scheel M, Bohner G, Bauknecht HC, Mutze S, Hamm B, Jahnke P

PubMed · Jun 6, 2025
Scanner-related changes in data quality are common in medical imaging, yet monitoring their impact on diagnostic AI performance remains challenging. In this study, we performed standardized consistency testing of an FDA-cleared and CE-marked AI for triage and notification of intracranial aneurysms across changes in image data quality caused by dose and image reconstruction. Our assessment was based on repeated examinations of a head CT phantom designed for AI evaluation, replicating a patient with three intracranial aneurysms in the anterior, middle and posterior circulation. We show that the AI maintains stable performance within the medium dose range but produces inconsistent results at reduced dose and, unexpectedly, at higher dose when filtered back projection is used. Data quality standards required for AI are stricter than those for neuroradiologists, who report higher aneurysm visibility rates and experience performance degradation only at substantially lower doses, with no decline at higher doses.

Predicting infarct outcomes after extended time window thrombectomy in large vessel occlusion using knowledge guided deep learning.

Dai L, Yuan L, Zhang H, Sun Z, Jiang J, Li Z, Li Y, Zha Y

PubMed · Jun 6, 2025
Predicting the final infarct after mechanical thrombectomy (MT) in the extended time window is beneficial for treatment planning in acute ischemic stroke (AIS). By introducing guidance from prior knowledge, this study aims to improve the accuracy of deep learning models for post-MT infarct prediction using pre-MT brain perfusion data. This retrospective study collected CT perfusion data at admission for AIS patients receiving MT more than 6 hours after symptom onset, from January 2020 to December 2024, across three centers. Infarct on post-MT diffusion-weighted imaging served as ground truth. Five Swin transformer-based models were developed for post-MT infarct segmentation using pre-MT CT perfusion parameter maps: BaselineNet served as the basic model for comparative analysis, CollateralFlowNet included a collateral circulation evaluation score, InfarctProbabilityNet incorporated infarct probability mapping, ArterialTerritoryNet was guided by arterial territory mapping, and UnifiedNet combined all prior knowledge sources. Model performance was evaluated using the Dice coefficient and intersection over union (IoU). A total of 221 patients with AIS were included (65.2% women) with a median age of 73 years. The baseline ischemic core estimated by CT perfusion thresholding achieved a Dice coefficient of 0.50 and IoU of 0.33. BaselineNet improved these to a Dice coefficient of 0.69 and IoU of 0.53. Compared with BaselineNet, models incorporating medical knowledge demonstrated higher performance: CollateralFlowNet (Dice coefficient 0.72, IoU 0.56), InfarctProbabilityNet (Dice coefficient 0.74, IoU 0.58), ArterialTerritoryNet (Dice coefficient 0.75, IoU 0.60), and UnifiedNet (Dice coefficient 0.82, IoU 0.71) (all P<0.05). In this study, integrating medical knowledge into deep learning models enhanced the accuracy of infarct prediction in AIS patients undergoing extended time window MT.
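The Dice coefficient and IoU used to score these segmentation models have simple set-based definitions; a minimal illustration on toy voxel sets (invented data, not the study's):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two voxel-index sets."""
    if not pred and not truth:
        return 1.0
    return 2 * len(pred & truth) / (len(pred) + len(truth))

def iou(pred, truth):
    """Intersection over union (Jaccard index) between two voxel-index sets."""
    if not pred and not truth:
        return 1.0
    return len(pred & truth) / len(pred | truth)

# Toy masks as sets of (row, col) voxels: truth has 4, the prediction has 6,
# and they overlap on 4.
truth = {(1, 1), (1, 2), (2, 1), (2, 2)}
pred = truth | {(1, 3), (2, 3)}
print(dice_coefficient(pred, truth))  # 2*4/(6+4) = 0.8
print(iou(pred, truth))               # 4/6 ≈ 0.667
```

Note that the two metrics are monotonically related (IoU = Dice / (2 − Dice)), which is why the model rankings above agree under both.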

The Predictive Value of Multiparameter Characteristics of Coronary Computed Tomography Angiography for Coronary Stent Implantation.

Xu X, Wang Y, Yang T, Wang Z, Chu C, Sun L, Zhao Z, Li T, Yu H, Wang X, Song P

PubMed · Jun 6, 2025
This study aims to evaluate the predictive value of multiparameter characteristics of coronary computed tomography angiography (CCTA) plaque and the ratio of coronary artery volume to myocardial mass (V/M) in guiding percutaneous coronary stent implantation (PCI) in patients diagnosed with unstable angina. Patients who underwent CCTA and coronary angiography (CAG) within 2 months were retrospectively analyzed. According to CAG results, patients were divided into a medical therapy group (n=41) and a PCI revascularization group (n=37). The plaque characteristics and V/M were quantitatively evaluated. The parameters included minimum lumen area at stenosis (MLA), maximum area stenosis (MAS), maximum diameter stenosis (MDS), total plaque burden (TPB), plaque length, plaque volume, and each component volume within the plaque. Fractional flow reserve (FFR) and pericoronary fat attenuation index (FAI) were calculated based on CCTA. Artificial intelligence software was employed to compare the differences in each parameter between the 2 groups at both the vessel and plaque levels. The PCI group had higher MAS, MDS, TPB, FAI, noncalcified plaque volume and lipid plaque volume, and significantly lower V/M, MLA, and CT-derived fractional flow reserve (FFRCT). V/M, TPB, MLA, FFRCT, and FAI are important influencing factors of PCI. The combined model of MLA, FFRCT, and FAI had the largest area under the ROC curve (AUC=0.920), and had the best performance in predicting PCI. The integration of AI-derived multiparameter features from one-stop CCTA significantly enhances the accuracy of predicting PCI in angina pectoris patients, evaluating at the plaque, vessel, and patient levels.
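The area under the ROC curve used to compare these predictive models can be computed directly from ranked scores via the Mann-Whitney statistic; a minimal sketch with invented labels and risk scores (not the study's data or model):

```python
def roc_auc(labels, scores):
    """AUC as the Mann-Whitney statistic: the probability that a randomly
    chosen positive case is scored above a randomly chosen negative one
    (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented scores for 2 PCI-group and 2 medical-therapy-group patients.
print(roc_auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # 0.75
```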

Comparative analysis of convolutional neural networks and vision transformers in identifying benign and malignant breast lesions.

Wang L, Fang S, Chen X, Pan C, Meng M

PubMed · Jun 6, 2025
Various deep learning models have been developed and employed for medical image classification. This study conducted comprehensive experiments on 12 models, aiming to establish reliable benchmarks for research on breast dynamic contrast-enhanced magnetic resonance imaging image classification. Twelve deep learning models were systematically compared by analyzing variations in 4 key hyperparameters: optimizer (Op), learning rate, batch size (BS), and data augmentation. The evaluation criteria encompassed a comprehensive set of metrics including accuracy (Ac), loss value, precision, recall rate, F1-score, and area under the receiver operating characteristic curve. Furthermore, the training times and model parameter counts were assessed for holistic performance comparison. Adjustments in the BS within Adam Op had a minimal impact on Ac in the convolutional neural network models. However, altering the Op and learning rate while maintaining the same BS significantly affected the Ac. The ResNet152 network model exhibited the lowest Ac. Both the recall rate and area under the receiver operating characteristic curve for the ResNet152 and Vision transformer-base (ViT) models were inferior compared to the others. Data augmentation unexpectedly reduced the Ac of ResNet50, ResNet152, VGG16, VGG19, and ViT models. The VGG16 model boasted the shortest training duration, whereas the ViT model, before data augmentation, had the longest training time and smallest model weight. The ResNet152 and ViT models were not well suited for image classification tasks involving small breast dynamic contrast-enhanced magnetic resonance imaging datasets. Although data augmentation is typically beneficial, its application should be approached cautiously. These findings provide important insights to inform and refine future research in this domain.
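A systematic comparison over the four hyperparameters described above amounts to sweeping a configuration grid; a minimal sketch (the candidate values are illustrative, not the paper's actual search space):

```python
from itertools import product

# Illustrative candidate values for the four varied hyperparameters.
optimizers = ["Adam", "SGD"]
learning_rates = [1e-3, 1e-4]
batch_sizes = [16, 32]
augmentation = [False, True]

configs = [
    {"optimizer": op, "lr": lr, "batch_size": bs, "augment": aug}
    for op, lr, bs, aug in product(optimizers, learning_rates,
                                   batch_sizes, augmentation)
]
# 2*2*2*2 = 16 configurations per architecture; across 12 models that
# would already mean 192 training runs to benchmark.
print(len(configs))  # 16
```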

Quantitative and automatic plan-of-the-day assessment to facilitate adaptive radiotherapy in cervical cancer.

Mason SA, Wang L, Alexander SE, Lalondrelle S, McNair HA, Harris EJ

PubMed · Jun 5, 2025
To facilitate implementation of plan-of-the-day (POTD) selection for treating locally advanced cervical cancer (LACC), we developed a POTD assessment tool for CBCT-guided radiotherapy (RT). A female pelvis segmentation model (U-Seg3) is combined with a quantitative standard operating procedure (qSOP) to identify optimal and acceptable plans.

Approach: The planning CT[i], corresponding structure set[ii], and manually contoured CBCTs[iii] (n=226) from 39 LACC patients treated with POTD (n=11) or non-adaptive RT (n=28) were used to develop U-Seg3, an algorithm incorporating deep-learning and deformable image registration techniques to segment the low-risk clinical target volume (LR-CTV), high-risk CTV (HR-CTV), bladder, rectum, and bowel bag. A single-channel input model (iii only, U-Seg1) was also developed. Contoured CBCTs from the POTD patients were (a) reserved for U-Seg3 validation/testing, (b) audited to determine optimal and acceptable plans, and (c) used to empirically derive a qSOP that maximised classification accuracy.

Main Results: The median [interquartile range] DSC between manual and U-Seg3 contours was 0.83 [0.80], 0.78 [0.13], 0.94 [0.05], 0.86 [0.09], and 0.90 [0.05] for the LR-CTV, HR-CTV, bladder, rectum, and bowel bag, respectively. These were significantly higher than for U-Seg1 in all structures but the bladder. The qSOP classified plans as acceptable if they met target coverage thresholds (LR-CTV ≥99%, HR-CTV ≥99.8%), with lower LR-CTV coverage (≥95%) sometimes allowed. The acceptable plan minimising bowel irradiation was considered optimal unless substantial bladder sparing could be achieved. With U-Seg3 embedded in the qSOP, optimal and acceptable plans were identified in 46/60 and 57/60 cases, respectively.

Significance: U-Seg3 outperforms U-Seg1 and all known CBCT-based female pelvis segmentation models. The tool combining U-Seg3 and the qSOP identifies optimal plans with accuracy equivalent to that of two observers. In an implementation strategy whereby this tool serves as the second observer, plan selection confidence and decision-making time could be improved while simultaneously reducing the required number of POTD-trained radiographers by 50%.
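The coverage thresholds quoted for the qSOP suggest a simple rule-based classifier; a hypothetical sketch of that logic (the field names and the omission of the bladder-sparing exception are simplifying assumptions, not the published procedure):

```python
def classify_plan(lr_ctv_cov, hr_ctv_cov, relaxed_lr_ok=False):
    """Apply the reported coverage thresholds: HR-CTV >= 99.8% always,
    LR-CTV >= 99% normally, relaxed to >= 95% when the audit allows it."""
    if hr_ctv_cov < 99.8:
        return "unacceptable"
    lr_threshold = 95.0 if relaxed_lr_ok else 99.0
    return "acceptable" if lr_ctv_cov >= lr_threshold else "unacceptable"

def pick_optimal(plans):
    """Among acceptable plans, choose the one minimising bowel dose; the
    bladder-sparing exception in the qSOP is omitted from this sketch."""
    ok = [p for p in plans if classify_plan(p["lr"], p["hr"]) == "acceptable"]
    return min(ok, key=lambda p: p["bowel_dose"]) if ok else None

# Hypothetical plan library for one fraction.
plans = [
    {"name": "A", "lr": 99.5, "hr": 99.9, "bowel_dose": 12.0},
    {"name": "B", "lr": 99.2, "hr": 99.9, "bowel_dose": 10.5},
    {"name": "C", "lr": 98.0, "hr": 99.9, "bowel_dose": 8.0},  # fails LR-CTV
]
print(pick_optimal(plans)["name"])  # B
```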

GNNs surpass transformers in tumor medical image segmentation.

Xiao H, Yang G, Li Z, Yi C

PubMed · Jun 5, 2025
To assess the suitability of Transformer-based architectures for medical image segmentation and investigate the potential advantages of Graph Neural Networks (GNNs) in this domain. We analyze the limitations of Transformers, which model medical images as sequences of image patches, limiting their flexibility in capturing complex and irregular tumor structures. To address this, we propose U-GNN, a pure GNN-based U-shaped architecture designed for medical image segmentation. U-GNN retains the U-Net-inspired inductive bias while leveraging GNNs' topological modeling capabilities. The architecture consists of Vision GNN blocks stacked into a U-shaped structure. Additionally, we introduce the concept of multi-order similarity and propose a zero-computation-cost approach to incorporate higher-order similarity in graph construction. Each Vision GNN block segments the image into patch nodes, constructs multi-order similarity graphs, and aggregates node features via multi-order node information aggregation. Experimental evaluations on multi-organ and cardiac segmentation datasets demonstrate that U-GNN significantly outperforms existing CNN- and Transformer-based models. U-GNN achieves a 6% improvement in Dice Similarity Coefficient (DSC) and an 18% reduction in Hausdorff Distance (HD) compared to state-of-the-art methods. The source code will be released upon paper acceptance.
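One way to picture multi-order similarity graphs over patch nodes is to build a first-order k-nearest-neighbour graph and derive higher orders by following existing edges; a speculative sketch (this is not the U-GNN construction, whose details are in the paper, only an illustration of the idea that higher-order neighbours need no extra similarity computations):

```python
def knn_graph(features, k=2):
    """First-order graph: connect each patch node to its k most similar
    peers under dot-product similarity."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    n = len(features)
    return {
        i: set(sorted((j for j in range(n) if j != i),
                      key=lambda j: -dot(features[i], features[j]))[:k])
        for i in range(n)
    }

def second_order(graph):
    """Two-hop neighbours reached by following existing edges only, so no
    extra similarity computations are required."""
    return {
        i: {h for j in nbrs for h in graph[j] if h != i and h not in nbrs}
        for i, nbrs in graph.items()
    }

# Four toy patch-feature vectors.
feats = [[1.0, 0.0], [1.0, 0.1], [0.0, 1.0], [0.1, 1.0]]
g = knn_graph(feats, k=2)
print(g[0])                # {1, 3}
print(second_order(g)[0])  # {2}
```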

Matrix completion-informed deep unfolded equilibrium models for self-supervised k-space interpolation in MRI.

Luo C, Wang H, Liu Y, Xie T, Chen G, Jin Q, Liang D, Cui ZX

PubMed · Jun 5, 2025
Self-supervised methods for magnetic resonance imaging (MRI) reconstruction have garnered significant interest due to their ability to address the challenges of slow data acquisition and scarcity of fully sampled labels. Current regularization-based self-supervised techniques merge the theoretical foundations of regularization with the representational strengths of deep learning and enable effective reconstruction under higher acceleration rates, yet they often lack interpretability and firm theoretical underpinnings. In this paper, we introduce a novel self-supervised approach that provides stringent theoretical guarantees and interpretable networks while circumventing the need for fully sampled labels. Our method exploits the intrinsic relationship between convolutional neural networks and the null space within structural low-rank models, effectively integrating network parameters into an iterative reconstruction process. Our network learns the gradient descent steps of the projected gradient descent algorithm without changing its convergence property, which implements a fully interpretable unfolded model. We design a non-expansive mapping for the network architecture, ensuring convergence to a fixed point. This well-defined framework enables complete reconstruction of missing k-space data grounded in matrix completion theory, independent of fully sampled labels. Qualitative and quantitative experimental results on multi-coil MRI reconstruction demonstrate the efficacy of our self-supervised approach, showing marked improvements over existing self-supervised and traditional regularization methods, and achieving results comparable to supervised learning in selected scenarios.
This work not only advances the state-of-the-art in MRI reconstruction but also enhances interpretability in deep learning applications for medical imaging.
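The projected gradient descent structure that such networks unfold can be illustrated on plain matrix completion, alternating a data-consistency gradient step with a low-rank projection; a toy sketch (singular-value projection here stands in for the paper's learned, null-space-based operator, which this does not reproduce):

```python
import numpy as np

def svp_complete(m_obs, mask, rank=1, step=1.0, iters=200):
    """Projected gradient descent for matrix completion: a gradient step on
    the observed-entry misfit, then projection onto rank-`rank` matrices by
    truncating the SVD (singular value projection)."""
    x = np.zeros_like(m_obs)
    for _ in range(iters):
        grad = mask * (x - m_obs)      # gradient of 0.5*||mask*(x - m)||^2
        x = x - step * grad            # data-consistency step
        u, s, vt = np.linalg.svd(x, full_matrices=False)
        s[rank:] = 0.0                 # low-rank projection
        x = (u * s) @ vt
    return x

# Try to recover a rank-1 matrix from roughly 60% of its entries.
rng = np.random.default_rng(0)
M = rng.normal(size=(6, 1)) @ rng.normal(size=(1, 6))
mask = rng.random(M.shape) < 0.6
X = svp_complete(M * mask, mask, rank=1)
```

Unfolding replaces the fixed gradient and projection operators with learned ones while preserving the iteration's convergence structure, which is the interpretability argument the abstract makes.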

Clinical validation of a deep learning model for low-count PET image enhancement.

Long Q, Tian Y, Pan B, Xu Z, Zhang W, Xu L, Fan W, Pan T, Gong NJ

PubMed · Jun 5, 2025
To investigate the effects of the deep learning model RaDynPET on fourfold reduced-count whole-body PET examinations. A total of 120 patients (84 internal cohorts and 36 external cohorts) undergoing <sup>18</sup>F-FDG PET/CT examinations were enrolled. PET images were reconstructed using OSEM algorithm with 120-s (G120) and 30-s (G30) list-mode data. RaDynPET was developed to generate enhanced images (R30) from G30. Two experienced nuclear medicine physicians independently evaluated subjective image quality using a 5-point Likert scale. Standardized uptake values (SUV), standard deviations, liver signal-to-noise ratio (SNR), lesion tumor-to-background ratio (TBR), and contrast-to-noise ratio (CNR) were compared. Subgroup analyses evaluated performance across demographics, and lesion detectability were evaluated using external datasets. RaDynPET was also compared to other deep learning methods. In internal cohorts, R30 demonstrated significantly higher image quality scores than G30 and G120. R30 showed excellent agreement with G120 for liver and lesion SUV values and surpassed G120 in liver SNR and CNR. Liver SNR and CNR of R30 were comparable to G120 in thin group, and the CNR of R30 was comparable to G120 in young age group. In external cohorts, R30 maintained strong SUV agreement with G120, with lesion-level sensitivity and specificity of 95.45% and 98.41%, respectively. There was no statistical difference in lesion detection between R30 and G120. RaDynPET achieved the highest PSNR and SSIM among deep learning methods. The RaDynPET model effectively restored high image quality while maintaining SUV agreement for <sup>18</sup>F-FDG PET scans acquired in 25% of the standard acquisition time.
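The image-quality metrics compared above (liver SNR, lesion TBR, CNR) have several common definitions; one conventional set, as a sketch (not necessarily the exact formulas used in this study):

```python
from statistics import mean, pstdev

def liver_snr(roi):
    """Liver signal-to-noise ratio: mean uptake over in-ROI standard deviation."""
    return mean(roi) / pstdev(roi)

def tbr(lesion, background):
    """Lesion tumor-to-background ratio of mean uptake."""
    return mean(lesion) / mean(background)

def cnr(lesion, background):
    """Contrast-to-noise ratio: uptake contrast over background noise."""
    return (mean(lesion) - mean(background)) / pstdev(background)

# Toy SUV samples from a lesion ROI and a background ROI.
print(tbr([8.0, 8.0, 8.0], [1.0, 2.0, 3.0]))  # 4.0
```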

A review on learning-based algorithms for tractography and human brain white matter tracts recognition.

Barati Shoorche A, Farnia P, Makkiabadi B, Leemans A

PubMed · Jun 4, 2025
Human brain fiber tractography using diffusion magnetic resonance imaging is a crucial stage in mapping brain white matter structures, pre-surgical planning, and extracting connectivity patterns. Accurate and reliable tractography, by providing detailed geometric information about the position of neural pathways, minimizes the risk of damage during neurosurgical procedures. Both tractography itself and its post-processing steps such as bundle segmentation are usually used in these contexts. Many approaches have been put forward in the past decades and recently, multiple data-driven tractography algorithms and automatic segmentation pipelines have been proposed to address the limitations of traditional methods. Several of these recent methods are based on learning algorithms that have demonstrated promising results. In this study, in addition to introducing diffusion MRI datasets, we review learning-based algorithms such as conventional machine learning, deep learning, reinforcement learning and dictionary learning methods that have been used for white matter tract, nerve and pathway recognition as well as whole brain streamlines or whole brain tractogram creation. The contribution is to discuss both tractography and tract recognition methods, in addition to extending previous related reviews with most recent methods, covering architectures as well as network details, assess the efficiency of learning-based methods through a comprehensive comparison in this field, and finally demonstrate the important role of learning-based methods in tractography.

Retrieval-Augmented Generation with Large Language Models in Radiology: From Theory to Practice.

Fink A, Rau A, Reisert M, Bamberg F, Russe MF

PubMed · Jun 4, 2025
"Just Accepted" in <i>Radiology: Artificial Intelligence</i>; the article will undergo copyediting, layout, and proof review before publication in its final version. Large language models (LLMs) hold substantial promise in addressing the growing workload in radiology, but recent studies also reveal limitations, such as hallucinations and opacity in the sources of LLM responses. Retrieval-augmented generation (RAG)-based LLMs offer a promising approach to streamlining radiology workflows by integrating reliable, verifiable, and customizable information. Ongoing refinement is critical to enable RAG models to manage large amounts of input data and to engage in complex multiagent dialogues. This report provides an overview of recent advances in LLM architecture, including few-shot and zero-shot learning, RAG integration, multistep reasoning, and agentic RAG, and identifies future research directions. Exemplary cases demonstrate the practical application of these techniques in radiology practice. ©RSNA, 2025.
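The core RAG loop the report describes (retrieve relevant, verifiable context, then ground the LLM's answer in it) can be sketched in a few lines; the embeddings and guideline corpus below are toy placeholders, not a real encoder or radiology knowledge base:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, top_k=2):
    """Rank document chunks by similarity to the query and keep the top_k."""
    return sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]),
                  reverse=True)[:top_k]

def build_prompt(question, chunks):
    """Ground the model's answer in the retrieved, citable context."""
    context = "\n".join(f"- {c['text']}" for c in chunks)
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\nQuestion: {question}")

# Toy 2-d "embeddings" standing in for a real encoder and guideline corpus.
corpus = [
    {"text": "Fleischner criteria for nodule follow-up", "vec": [1.0, 0.0]},
    {"text": "MRI safety for implanted devices", "vec": [0.0, 1.0]},
    {"text": "Lung-RADS nodule categories", "vec": [0.9, 0.1]},
]
top = retrieve([1.0, 0.0], corpus)
print([d["text"] for d in top])
```

The assembled prompt would then be sent to the LLM, which is what lets RAG responses cite verifiable sources rather than rely on parametric memory alone.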