Page 25 of 225 (2,246 results)

A multi-view CNN model to predict resolving of new lung nodules on follow-up low-dose chest CT.

Wang J, Zhang X, Tang W, van Tuinen M, Vliegenthart R, van Ooijen P

pubmed logopapers · Jun 27, 2025
New, intermediate-sized nodules in lung cancer screening undergo follow-up CT, but some of these will resolve. We evaluated the performance of a multi-view convolutional neural network (CNN) in distinguishing resolving from non-resolving new, intermediate-sized lung nodules. This retrospective study used data on 344 intermediate-sized nodules (50-500 mm³) in 250 participants from the NELSON (Dutch-Belgian Randomized Lung Cancer Screening) trial. We implemented four-fold cross-validation for model training and testing. A multi-view CNN model was developed by combining three two-dimensional (2D) CNN models and one three-dimensional (3D) CNN model; 2D, 2.5D, and 3D models served as comparators. The models' performance was evaluated using sensitivity, specificity, and area under the ROC curve (AUC). Specificity, i.e., the percentage of non-resolving nodules (those requiring follow-up) correctly predicted, was maximized. Among all nodules, 18.3% (63) were resolving. The multi-view CNN model achieved an AUC of 0.81, with a mean sensitivity of 0.63 (SD, 0.15) and a mean specificity of 0.93 (SD, 0.02). The model performed significantly better than the 2D, 2.5D, or 3D models (p < 0.05). Under the premise of specificity greater than 90% (meaning < 10% of non-resolving nodules are incorrectly identified as resolving), follow-up CT could be prevented in 14% of individuals. The multi-view CNN model achieved high specificity in discriminating new intermediate nodules that need follow-up CT by identifying non-resolving nodules. After further validation and optimization, this model may assist decision-making when new intermediate nodules are found in lung cancer screening. The multi-view CNN-based model has the potential to reduce unnecessary follow-up scans when new nodules are detected, aiding radiologists in making earlier, more informed decisions.
Predicting the resolution of new intermediate lung nodules in lung cancer screening CT is a challenge. Our multi-view CNN model showed an AUC of 0.81, a specificity of 0.93, and a sensitivity of 0.63 at the nodule level. The multi-view model demonstrated a significant improvement in AUC compared to the three 2D models, one 2.5D model, and one 3D model.
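The multi-view design described above is essentially a late-fusion ensemble: each 2D view and the 3D branch produces its own "resolving" probability, and the scores are combined before thresholding at a high-specificity operating point. A minimal numpy sketch of that idea (the branch probabilities, equal weights, and the 0.9 threshold are illustrative assumptions, not the paper's actual fusion rule):

```python
import numpy as np

def late_fusion(branch_probs, weights=None):
    """Average per-branch 'resolving' probabilities (e.g. three 2D views plus
    one 3D branch) into a single nodule-level score."""
    p = np.asarray(branch_probs, dtype=float)   # shape: (n_branches, n_nodules)
    if weights is None:
        weights = np.full(p.shape[0], 1.0 / p.shape[0])
    return np.average(p, axis=0, weights=weights)

def classify_at_specificity(scores, threshold=0.9):
    """Label a nodule 'resolving' only when the fused score clears a high
    threshold, mirroring the high-specificity operating point above."""
    return (np.asarray(scores) >= threshold).astype(int)

# Toy scores: axial/coronal/sagittal 2D branches plus a 3D branch, four nodules.
probs = [[0.95, 0.20, 0.60, 0.10],
         [0.90, 0.30, 0.70, 0.20],
         [0.92, 0.25, 0.50, 0.15],
         [0.97, 0.10, 0.80, 0.05]]
fused = late_fusion(probs)
print(classify_at_specificity(fused))  # [1 0 0 0]: only nodule 0 is called resolving
```

Raising the threshold trades sensitivity for specificity, which is exactly the operating-point choice the abstract describes.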

A two-step automatic identification of contrast phases for abdominal CT images based on residual networks.

Liu Q, Jiang J, Wu K, Zhang Y, Sun N, Luo J, Ba T, Lv A, Liu C, Yin Y, Yang Z, Xu H

pubmed logopapers · Jun 27, 2025
To develop a deep learning model based on Residual Networks (ResNet) for automated, accurate identification of contrast phases in abdominal CT images. A dataset of 1175 abdominal contrast-enhanced CT scans was retrospectively collected for model development, and an independent dataset of 215 scans from five hospitals was collected for external testing. Each contrast phase was independently annotated by two radiologists. A ResNet-based model was developed to automatically classify phases into the early arterial phase (EAP) or late arterial phase (LAP), the portal venous phase (PVP), and the delayed phase (DP). Strategy A identified EAP or LAP, PVP, and DP in one step. Strategy B used a two-step approach: first classifying images as arterial phase (AP), PVP, or DP, then further classifying AP images as EAP or LAP. Model performance and the two strategies were compared. In the internal test set, the overall accuracy of the two-step strategy was 98.3% (283/288), significantly higher than that of the one-step strategy (91.7%, 264/288; p < 0.001). In the external test set, the two-step model achieved an overall accuracy of 99.1% (639/645), with sensitivities of 95.1% (EAP), 99.4% (LAP), 99.5% (PVP), and 99.5% (DP). The proposed two-step ResNet-based model provides highly accurate and robust identification of contrast phases in abdominal CT images, outperforming the conventional one-step strategy. Automated and accurate identification of contrast phases in abdominal CT images provides a robust tool for improving image quality control and establishes a strong foundation for AI-driven applications, particularly those leveraging contrast-enhanced abdominal imaging data. Accurate identification of contrast phases is crucial in abdominal CT imaging. The two-step ResNet-based model achieved superior accuracy across internal and external datasets. Automated phase classification strengthens imaging quality control and supports precision AI applications.
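The two-step strategy is a small hierarchical classifier: a coarse model separates AP / PVP / DP, and a second model is consulted only for AP cases. A minimal sketch, with hypothetical threshold rules on a single aortic-enhancement value standing in for the two trained ResNets:

```python
# Minimal sketch of the two-step phase-identification strategy ("Strategy B").
PHASES_STEP1 = ["AP", "PVP", "DP"]   # coarse phases: arterial, portal venous, delayed
PHASES_STEP2 = ["EAP", "LAP"]        # arterial sub-phases: early / late

def two_step_classify(feature, coarse_model, arterial_model):
    """Step 1 separates AP / PVP / DP; step 2 refines AP into EAP / LAP."""
    coarse = PHASES_STEP1[coarse_model(feature)]
    if coarse != "AP":
        return coarse
    return PHASES_STEP2[arterial_model(feature)]

# Stand-in "models": hypothetical threshold rules on a single aortic-enhancement
# value (HU), replacing the two trained ResNets for illustration only.
coarse_model = lambda hu: 0 if hu > 200 else (1 if hu > 100 else 2)
arterial_model = lambda hu: 0 if hu > 300 else 1

print(two_step_classify(350, coarse_model, arterial_model))  # EAP
print(two_step_classify(250, coarse_model, arterial_model))  # LAP
print(two_step_classify(150, coarse_model, arterial_model))  # PVP
print(two_step_classify(50, coarse_model, arterial_model))   # DP
```

Splitting the hard EAP/LAP distinction into a dedicated second stage is what lets each model solve a simpler problem, consistent with the accuracy gain reported above.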

Leadership in radiology in the era of technological advancements and artificial intelligence.

Wichtmann BD, Paech D, Pianykh OS, Huang SY, Seltzer SE, Brink J, Fennessy FM

pubmed logopapers · Jun 27, 2025
Radiology has evolved from the pioneering days of X-ray imaging to a field rich in advanced technologies, now on the cusp of a transformative future driven by artificial intelligence (AI). As imaging workloads grow in volume and complexity, and as economic and environmental pressures intensify, visionary leadership is needed to navigate the unprecedented challenges and opportunities ahead. Leveraging its strengths in automation, accuracy, and objectivity, AI will profoundly impact all aspects of radiology practice, from workflow management to imaging, diagnostics, reporting, and data-driven analytics, freeing radiologists to focus on value-driven tasks that improve patient care. However, successful AI integration requires strong leadership and robust governance structures to oversee algorithm evaluation, deployment, and ongoing maintenance, steering the transition from static to continuous learning systems. The vision of a "diagnostic cockpit" that integrates multidimensional data for quantitative precision diagnoses depends on visionary leadership that fosters innovation and interdisciplinary collaboration. Through administrative automation, precision medicine, and predictive analytics, AI can enhance operational efficiency, reduce administrative burden, and optimize resource allocation, leading to substantial cost reductions. Leaders need to understand not only the technical aspects but also the complex human, administrative, and organizational challenges of AI's implementation. Establishing sound governance and organizational frameworks will be essential to ensure ethical compliance and appropriate oversight of AI algorithms. As radiology advances toward this AI-driven future, leaders must cultivate an environment where technology enhances rather than replaces human skills, upholding an unwavering commitment to human-centered care. Their vision will define radiology's pioneering role in AI-enabled healthcare transformation.
KEY POINTS:
Question: Artificial intelligence (AI) will transform radiology, improving workflow efficiency, reducing administrative burden, and optimizing resource allocation to meet imaging workloads' increasing complexity and volume.
Findings: Strong leadership and governance ensure ethical deployment of AI, steering the transition from static to continuous learning systems while fostering interdisciplinary innovation and collaboration.
Clinical relevance: Visionary leaders must harness AI to enhance, rather than replace, the role of professionals in radiology, advancing human-centered care while pioneering healthcare transformation.

Deep Learning-Based Prediction of PET Amyloid Status Using MRI.

Kim D, Ottesen JA, Kumar A, Ho BC, Bismuth E, Young CB, Mormino E, Zaharchuk G

pubmed logopapers · Jun 27, 2025
Identifying amyloid-beta (Aβ)-positive patients is essential for Alzheimer's disease (AD) clinical trials and disease-modifying treatments but currently requires PET or cerebrospinal fluid sampling. Previous MRI-based deep learning models, using only T1-weighted (T1w) images, have shown moderate performance. Multi-contrast MRI and PET-based quantitative Aβ deposition were retrospectively obtained from three public datasets: ADNI, OASIS3, and A4. Aβ positivity was defined using each dataset's recommended centiloid threshold. Two EfficientNet models were trained to predict amyloid positivity: one using only T1w images and another incorporating both T1w and T2-FLAIR. Model performance was assessed on an internal held-out test set using AUC, accuracy, sensitivity, and specificity. External validation was conducted using an independent cohort from the Stanford Alzheimer's Disease Research Center. DeLong's and McNemar's tests were used to compare AUC and accuracy, respectively. A total of 4,056 exams (mean [SD] age: 71.6 [6.3] years; 55% female; 55% amyloid-positive) were used for network development, and 149 exams were used for external testing (mean [SD] age: 72.1 [9.6] years; 58% female; 56% amyloid-positive). In the internal held-out test set, the multi-contrast model (AUC: 0.67, 95% CI: 0.65-0.70, P < 0.001; accuracy: 0.63, 95% CI: 0.62-0.65, P < 0.001) outperformed the T1w-only model (AUC: 0.61; accuracy: 0.59). Among cognitive subgroups, the highest performance (AUC: 0.71) was observed in mild cognitive impairment. The multi-contrast model also demonstrated consistent performance in the external test set (AUC: 0.65, 95% CI: 0.60-0.71, P = 0.014; accuracy: 0.62, 95% CI: 0.58-0.65, P < 0.001).
The use of multi-contrast MRI, specifically incorporating T2-FLAIR in addition to T1w images, significantly improved deep learning prediction of PET-determined amyloid status from MRI scans. Aβ = amyloid-beta; AD = Alzheimer's disease; AUC = area under the receiver operating characteristic curve; CN = cognitively normal; MCI = mild cognitive impairment; T1w = T1-weighted; T2-FLAIR = T2-weighted fluid-attenuated inversion recovery; FBP = ¹⁸F-florbetapir; FBB = ¹⁸F-florbetaben; SUVR = standardized uptake value ratio.
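The multi-contrast input amounts to feeding co-registered T1w and T2-FLAIR volumes to the network as separate channels. A minimal sketch of that preprocessing step (per-contrast z-scoring is an assumption for illustration; the paper does not specify its normalization):

```python
import numpy as np

def stack_contrasts(t1w, t2_flair):
    """Stack co-registered T1w and T2-FLAIR volumes along a channel axis,
    z-scoring each contrast separately so neither dominates the input range."""
    def zscore(v):
        v = np.asarray(v, dtype=float)
        return (v - v.mean()) / (v.std() + 1e-8)
    return np.stack([zscore(t1w), zscore(t2_flair)], axis=0)

rng = np.random.default_rng(0)
t1w = rng.random((64, 64, 64))       # stand-in volumes; the real inputs would be
t2_flair = rng.random((64, 64, 64))  # co-registered, skull-stripped MRI
x = stack_contrasts(t1w, t2_flair)
print(x.shape)  # (2, 64, 64, 64): two contrasts as network input channels
```

The single-modality baseline corresponds to passing only the first channel; the channel-stacking itself is what lets the network exploit FLAIR's complementary tissue contrast.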

Photon-counting micro-CT scanner for deep learning-enabled small animal perfusion imaging.

Allphin AJ, Nadkarni R, Clark DP, Badea CT

pubmed logopapers · Jun 27, 2025
In this work, we introduce a benchtop, turn-table photon-counting (PC) micro-CT scanner and highlight its application for dynamic small animal perfusion imaging.
Approach: Built on recently published hardware, the system now features a CdTe-based photon-counting detector (PCD). We validated its static spectral PC micro-CT imaging using conventional phantoms and assessed dynamic performance with a custom flow-configurable dual-compartment perfusion phantom. The phantom was scanned under varied flow conditions during injections of a low-molecular-weight iodinated contrast agent. In vivo mouse studies with identical injection settings demonstrated potential applications. A pretrained denoising CNN processed large multi-energy, temporal datasets (20 timepoints × 4 energies × 3 spatial dimensions), reconstructed via weighted filtered back projection. A separate CNN, trained on simulated data, performed gamma variate-based 2D perfusion mapping, evaluated qualitatively in phantom and in vivo tests.
Main Results: Full five-dimensional reconstructions were denoised using a CNN in ~3% of the time of iterative reconstruction, reducing noise in water at the highest energy threshold from 1206 HU to 86 HU. Decomposed iodine maps, which improved the contrast-to-noise ratio from 16.4 (in the lowest-energy CT images) to 29.4 (in the iodine maps), were used for perfusion analysis. The perfusion CNN outperformed pixelwise gamma variate fitting by ~33%, with a test-set error of 0.04 vs. 0.06 in blood flow index (BFI) maps, and quantified linear BFI changes in the phantom with a coefficient of determination of 0.98.
Significance: This work underscores the PC micro-CT scanner's utility for high-throughput small animal perfusion imaging, leveraging spectral PC micro-CT and iodine decomposition. It provides a versatile platform for preclinical vascular research and advanced, time-resolved studies of disease models and therapeutic interventions.
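The pixelwise baseline the perfusion CNN is compared against fits a gamma variate, C(t) = A·(t − t0)^α·exp(−(t − t0)/β), to each enhancement curve. A minimal numpy sketch using a log-linearized least-squares fit (the fixed bolus-arrival time and the simulated parameters are assumptions for illustration; the paper's fitting procedure may differ):

```python
import numpy as np

T0 = 2.0  # bolus arrival time; assumed known here so the fit stays linear

def gamma_variate(t, A, alpha, beta):
    """Gamma variate model of a contrast time-attenuation curve."""
    dt = np.clip(t - T0, 0.0, None)
    return A * dt**alpha * np.exp(-dt / beta)

def fit_gamma_variate(t, y):
    """Log-linearise C(t) = A * dt^alpha * exp(-dt/beta):
    ln C = ln A + alpha*ln(dt) - dt/beta, then solve by least squares."""
    dt = t - T0
    keep = (dt > 0) & (y > 0)
    X = np.column_stack([np.ones(keep.sum()), np.log(dt[keep]), -dt[keep]])
    coef, *_ = np.linalg.lstsq(X, np.log(y[keep]), rcond=None)
    return np.exp(coef[0]), coef[1], 1.0 / coef[2]

# Simulate one noisy enhancement curve over 20 timepoints (as in the datasets above).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 19.0, 20)
y = gamma_variate(t, 50.0, 2.5, 3.0) * np.exp(rng.normal(0.0, 0.02, t.size))

A, alpha, beta = fit_gamma_variate(t, y)
print(round(alpha, 2), round(beta, 2))  # recovered close to the simulated 2.5 and 3.0
```

Running such a fit per pixel is what makes the baseline slow and noise-sensitive, which motivates replacing it with a CNN trained on simulated curves.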

Prospective quality control in chest radiography based on the reconstructed 3D human body.

Tan Y, Ye Z, Ye J, Hou Y, Li S, Liang Z, Li H, Tang J, Xia C, Li Z

pubmed logopapers · Jun 27, 2025
Chest radiography requires effective quality control (QC) to reduce high retake rates. However, existing QC measures are all retrospective, implemented after exposure, often necessitating retakes when image quality fails to meet standards and thereby increasing radiation exposure to patients. To address this issue, we proposed a 3D human body (3D-HB) reconstruction algorithm to enable prospective QC. Our objective was to investigate the feasibility of using the reconstructed 3D-HB for prospective QC in chest radiography and to evaluate its impact on retake rates.
Approach: This prospective study included patients indicated for posteroanterior (PA) and lateral (LA) chest radiography in May 2024. A 3D-HB reconstruction algorithm integrating the SMPL-X model and the HybrIK-X algorithm was proposed to convert patients' 2D images into 3D-HBs. QC metrics regarding patient positioning and collimation were assessed using chest radiographs (reference standard) and 3D-HBs, with results compared using ICCs, linear regression, and receiver operating characteristic curves. For retake rate evaluation, a real-time 3D-HB visualization interface was developed and chest radiography was conducted in two four-week phases: the first without prospective QC and the second with prospective QC. Retake rates between the two phases were compared using chi-square tests.
Main results: 324 participants were included (mean age, 42 ± 19 [SD] years; 145 men; 324 PA and 294 LA examinations). The ICCs for the clavicle and midaxillary line angles were 0.80 and 0.78, respectively. Linear regression showed good agreement for clavicle angles (R²: 0.655) and midaxillary line angles (R²: 0.616). In PA chest radiography, the AUCs of 3D-HBs were 0.89, 0.87, 0.91, and 0.92 for assessing scapula rotation, lateral tilt, centered positioning, and central X-ray alignment, respectively, with 97% accuracy in collimation assessment. In LA chest radiography, the AUCs of 3D-HBs were 0.87, 0.84, 0.87, and 0.88 for assessing arms raised, chest rotation, centered positioning, and central X-ray alignment, respectively, with 94% accuracy in collimation assessment. In the retake rate evaluation, 3995 PA and 3295 LA chest radiographs were recorded. The implementation of prospective QC based on the 3D-HB reduced retake rates from 8.6% to 3.5% (PA) and from 19.6% to 4.9% (LA) (p < .001).
Significance: The reconstructed 3D-HB is a feasible tool for prospective QC in chest radiography, providing real-time feedback on patient positioning and collimation before exposure. Prospective QC based on the reconstructed 3D-HB has the potential to reshape radiography QC by significantly reducing retake rates and improving clinical standardization.
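A positioning metric like the clavicle angle can be read directly off the reconstructed 3D-HB's landmarks before exposure. A hypothetical sketch (the landmark coordinates, coronal-plane projection, and 3° tolerance are illustrative assumptions, not the paper's criteria):

```python
import numpy as np

def clavicle_angle(left_pt, right_pt):
    """Angle (degrees) of the line joining two shoulder/clavicle landmarks
    taken from the reconstructed 3D-HB, projected onto the coronal plane."""
    dx, dy = (np.asarray(right_pt, dtype=float) - np.asarray(left_pt, dtype=float))[:2]
    return abs(float(np.degrees(np.arctan2(dy, dx))))

def positioning_ok(angle_deg, tolerance=3.0):
    """Hypothetical pre-exposure check: flag the pose for correction when the
    landmark line deviates from horizontal beyond a small tolerance."""
    return bool(angle_deg <= tolerance)

level = clavicle_angle((0.0, 0.0, 0.5), (0.35, 0.01, 0.5))   # nearly level shoulders
tilted = clavicle_angle((0.0, 0.0, 0.5), (0.35, 0.06, 0.5))  # visible tilt
print(positioning_ok(level), positioning_ok(tilted))  # True False
```

Because such checks run on the 3D-HB before the exposure is triggered, a failing pose can be corrected instead of producing a retake.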

AI Model Passport: Data and System Traceability Framework for Transparent AI in Health

Varvara Kalokyri, Nikolaos S. Tachos, Charalampos N. Kalantzopoulos, Stelios Sfakianakis, Haridimos Kondylakis, Dimitrios I. Zaridis, Sara Colantonio, Daniele Regge, Nikolaos Papanikolaou, The ProCAncer-I consortium, Konstantinos Marias, Dimitrios I. Fotiadis, Manolis Tsiknakis

arxiv logopreprint · Jun 27, 2025
The increasing integration of Artificial Intelligence (AI) into health and biomedical systems necessitates robust frameworks for transparency, accountability, and ethical compliance. Existing frameworks often rely on human-readable, manual documentation, which limits scalability, comparability, and machine interpretability across projects and platforms. They also fail to provide a unique, verifiable identity for AI models to ensure their provenance and authenticity across systems and use cases, limiting reproducibility and stakeholder trust. This paper introduces the concept of the AI Model Passport, a structured and standardized documentation framework that acts as a digital identity and verification tool for AI models. It captures the essential metadata to uniquely identify, verify, trace, and monitor AI models across their lifecycle, from data acquisition and preprocessing to model design, development, and deployment. In addition, an implementation of this framework is presented through AIPassport, an MLOps tool developed within the ProCAncer-I EU project for medical imaging applications. AIPassport automates metadata collection, ensures proper versioning, decouples results from source scripts, and integrates with various development environments. Its effectiveness is showcased through a lesion segmentation use case using data from the ProCAncer-I dataset, illustrating how the AI Model Passport enhances transparency, reproducibility, and regulatory readiness while reducing manual effort. This approach aims to set a new standard for fostering trust and accountability in AI-driven healthcare solutions, serving as a basis for developing transparent and regulation-compliant AI systems across domains.
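The core idea, structured lifecycle metadata plus a verifiable identity, can be sketched as a dataclass whose canonicalized contents are hashed. The field names and hashing choice below are illustrative assumptions, not the AIPassport schema:

```python
import hashlib
import json
from dataclasses import dataclass, asdict, field

@dataclass
class ModelPassport:
    """Minimal sketch of a machine-readable model passport: lifecycle metadata
    plus a content hash acting as a verifiable model identity. The fields are
    illustrative, not the ProCAncer-I schema."""
    model_name: str
    version: str
    training_data: str
    preprocessing: list = field(default_factory=list)
    metrics: dict = field(default_factory=dict)

    def fingerprint(self) -> str:
        """Deterministic identity: SHA-256 over the canonicalised metadata,
        so any change to the recorded lifecycle changes the identity."""
        canonical = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(canonical.encode()).hexdigest()

p = ModelPassport(
    model_name="lesion-segmenter",
    version="1.2.0",
    training_data="ProCAncer-I subset (hypothetical reference)",
    preprocessing=["resample 1mm iso", "z-score normalise"],
    metrics={"dice": 0.87},
)
print(p.fingerprint()[:12])  # short, stable identifier for this exact metadata
```

Because the fingerprint is derived from the canonicalized metadata rather than assigned manually, it gives exactly the kind of unique, verifiable identity the paper argues manual documentation lacks.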

Cardiovascular disease classification using radiomics and geometric features from cardiac CT

Ajay Mittal, Raghav Mehta, Omar Todd, Philipp Seeböck, Georg Langs, Ben Glocker

arxiv logopreprint · Jun 27, 2025
Automatic detection and classification of cardiovascular disease (CVD) from computed tomography (CT) images play an important part in facilitating better-informed clinical decisions. However, most recent deep learning based methods either work directly on raw CT data or use it together with anatomical cardiac structure segmentation by training an end-to-end classifier, making these approaches much harder to interpret from a clinical perspective. To address this challenge, in this work, we break down the CVD classification pipeline into three components: (i) image segmentation, (ii) image registration, and (iii) downstream CVD classification. Specifically, we utilize the Atlas-ISTN framework and recent segmentation foundation models to generate anatomical structure segmentations and a normative healthy atlas. These are then used to extract clinically interpretable radiomic features as well as deformation-field-based geometric features (through atlas registration) for CVD classification. Our experiments on the publicly available ASOCA dataset show that utilizing these features leads to better CVD classification accuracy (87.50%) than a classification model trained directly on raw CT images (67.50%). Our code is publicly available: https://github.com/biomedia-mira/grc-net
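Downstream classification consumes a single vector per scan: radiomic features concatenated with deformation-based geometric features. A minimal sketch of that feature-fusion step (the feature counts and per-block standardization are assumptions for illustration):

```python
import numpy as np

def combine_features(radiomic, geometric):
    """Concatenate interpretable radiomic features (intensity/texture statistics)
    with deformation-field-derived geometric features, standardising each block
    so the two feature families share a common scale."""
    def standardise(block):
        block = np.asarray(block, dtype=float)
        return (block - block.mean(axis=0)) / (block.std(axis=0) + 1e-8)
    return np.hstack([standardise(radiomic), standardise(geometric)])

rng = np.random.default_rng(1)
radiomic = rng.normal(size=(40, 10))   # hypothetical per-scan radiomic features
geometric = rng.normal(size=(40, 6))   # hypothetical atlas-deformation summaries
X = combine_features(radiomic, geometric)
print(X.shape)  # (40, 16): one interpretable feature vector per scan
```

Unlike an end-to-end classifier on raw CT, each column of this matrix has a clinical interpretation (a texture statistic or a local deformation measure), which is the interpretability argument the abstract makes.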

Reasoning in machine vision: learning to think fast and slow

Shaheer U. Saeed, Yipei Wang, Veeru Kasivisvanathan, Brian R. Davidson, Matthew J. Clarkson, Yipeng Hu, Daniel C. Alexander

arxiv logopreprint · Jun 27, 2025
Reasoning is a hallmark of human intelligence, enabling adaptive decision-making in complex and unfamiliar scenarios. In contrast, machine intelligence remains bound to training data, lacking the ability to dynamically refine solutions at inference time. While some recent advances have explored reasoning in machines, these efforts are largely limited to verbal domains such as mathematical problem-solving, where explicit rules govern step-by-step reasoning. Other critical real-world tasks, including visual perception, spatial reasoning, and radiological diagnosis, require non-verbal reasoning, which remains an open challenge. Here we present a novel learning paradigm that enables machine reasoning in vision by allowing performance to improve with increasing thinking time (inference-time compute), even under conditions where labelled data is very limited. Inspired by dual-process theories of human cognition in psychology, our approach integrates a fast-thinking System I module for familiar tasks with a slow-thinking System II module that iteratively refines solutions using self-play reinforcement learning. This paradigm mimics human reasoning by proposing, competing over, and refining solutions in data-scarce scenarios. We demonstrate superior performance through extended thinking time, compared not only to large-scale supervised learning but also to foundation models and even human experts, in real-world vision tasks. These tasks include computer-vision benchmarks and cancer localisation on medical images across five organs, showcasing transformative potential for non-verbal machine reasoning.
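The fast/slow split can be caricatured in a few lines: System I returns a one-shot guess, and System II spends inference-time compute iteratively refining it against an objective. A toy numpy sketch (the mean estimate and robust-loss descent are illustrative stand-ins for the paper's trained modules and self-play reinforcement learning):

```python
import numpy as np

def system_one(x):
    """Fast pass: a single feed-forward estimate (here just the mean,
    which an outlier drags off target)."""
    return float(np.mean(x))

def system_two(x, initial, steps=100, lr=0.1):
    """Slow pass: iteratively refine the initial guess at inference time by
    descending a smooth robust (log-cosh) loss; more 'thinking time' (steps)
    yields a better solution."""
    est = initial
    for _ in range(steps):
        est -= lr * float(np.mean(np.tanh(est - x)))  # gradient of the log-cosh loss
    return est

x = np.array([1.0, 1.2, 0.9, 1.1, 8.0])  # inlier cluster near 1.1 plus one outlier
fast = system_one(x)
slow = system_two(x, fast)
print(round(fast, 2), round(slow, 2))  # the slow pass settles near the inlier cluster
```

The key property mirrored here is that the slow estimate improves monotonically with more refinement steps, i.e., performance scales with inference-time compute rather than with more training data.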

Towards Scalable and Robust White Matter Lesion Localization via Multimodal Deep Learning

Julia Machnio, Sebastian Nørgaard Llambias, Mads Nielsen, Mostafa Mehdipour Ghazi

arxiv logopreprint · Jun 27, 2025
White matter hyperintensities (WMH) are radiological markers of small vessel disease and neurodegeneration, whose accurate segmentation and spatial localization are crucial for diagnosis and monitoring. While multimodal MRI offers complementary contrasts for detecting and contextualizing WM lesions, existing approaches often lack flexibility in handling missing modalities and fail to integrate anatomical localization efficiently. We propose a deep learning framework for WM lesion segmentation and localization that operates directly in native space using single- and multi-modal MRI inputs. Our study evaluates four input configurations: FLAIR-only, T1-only, concatenated FLAIR and T1, and a modality-interchangeable setup. It further introduces a multi-task model for jointly predicting lesion and anatomical region masks to estimate region-wise lesion burden. Experiments conducted on the MICCAI WMH Segmentation Challenge dataset demonstrate that multimodal input significantly improves the segmentation performance, outperforming unimodal models. While the modality-interchangeable setting trades accuracy for robustness, it enables inference in cases with missing modalities. Joint lesion-region segmentation using multi-task learning was less effective than separate models, suggesting representational conflict between tasks. Our findings highlight the utility of multimodal fusion for accurate and robust WMH analysis, and the potential of joint modeling for integrated predictions.
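One common way to realize a modality-interchangeable setup is training-time modality dropout: whole contrasts are randomly zeroed so the network tolerates missing inputs at inference. A sketch of that augmentation (the dropout probability and presence-flag encoding are assumptions, not necessarily the paper's mechanism):

```python
import numpy as np

def modality_dropout(volumes, rng, p_drop=0.3):
    """Training-time modality dropout: randomly zero whole input contrasts so
    the network learns to cope with missing modalities at inference. A presence
    flag per contrast tells the model which inputs it actually received."""
    out, present = [], []
    for v in volumes:
        if rng.random() < p_drop:
            out.append(np.zeros_like(v, dtype=float))
            present.append(0.0)
        else:
            out.append(np.asarray(v, dtype=float))
            present.append(1.0)
    if not any(present):  # never drop everything: keep the first modality
        out[0] = np.asarray(volumes[0], dtype=float)
        present[0] = 1.0
    return np.stack(out), np.array(present)

rng = np.random.default_rng(42)
flair = np.ones((32, 32, 32))
t1 = 2.0 * np.ones((32, 32, 32))
x, flags = modality_dropout([flair, t1], rng)
print(x.shape, flags)  # stacked channels plus per-modality presence flags
```

This is exactly the accuracy-for-robustness trade the abstract describes: the model sees imperfect channel combinations during training, so a FLAIR-only or T1-only case at inference is no longer out of distribution.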