Long-Term Prognostic Implications of Thoracic Aortic Calcification on CT Using Artificial Intelligence-Based Quantification in a Screening Population: A Two-Center Study.

Lee JE, Kim NY, Kim YH, Kwon Y, Kim S, Han K, Suh YJ

PubMed · Jun 4, 2025
BACKGROUND. The importance of including thoracic aortic calcification (TAC), in addition to coronary artery calcification (CAC), in prognostic assessments has been difficult to determine, partly due to the greater challenge of performing standardized TAC assessments. OBJECTIVE. The purpose of this study was to evaluate the long-term prognostic implications of TAC assessed using artificial intelligence (AI)-based quantification on routine chest CT in a screening population. METHODS. This retrospective study included 7404 asymptomatic individuals (median age, 53.9 years; 5875 men, 1529 women) who underwent nongated noncontrast chest CT as part of a national general health screening program at one of two centers from January 2007 to December 2014. A commercial AI program quantified TAC and CAC using Agatston scores, which were stratified into categories. Radiologists manually quantified TAC and CAC in 2567 examinations. The role of AI-based TAC categories in predicting major adverse cardiovascular events (MACE) and all-cause mortality (ACM), independent of AI-based CAC categories as well as clinical and laboratory variables, was assessed by multivariable Cox proportional hazards models using data from both centers, and by concordance statistics from prognostic models developed and tested using center 1 and center 2 data, respectively. RESULTS. AI-based and manual quantification showed excellent agreement for TAC and CAC (concordance correlation coefficient: 0.967 and 0.895, respectively). The median observation periods were 7.5 years for MACE (383 events in 5342 individuals) and 11.0 years for ACM (292 events in 7404 individuals). When adjusted for AI-based CAC categories along with clinical and laboratory variables, the risk of MACE was not independently associated with any AI-based TAC category; the risk of ACM was independently associated with an AI-based TAC score of 1001-3000 (HR = 2.14, p = .02) but not with other AI-based TAC categories. When the prognostic models were tested, the addition of AI-based TAC categories did not improve model fit relative to models containing clinical variables, laboratory variables, and AI-based CAC categories for MACE (concordance index [C-index] = 0.760-0.760, p = .81) or ACM (C-index = 0.823-0.830, p = .32). CONCLUSION. The addition of TAC to models containing CAC provided limited improvement in risk prediction in an asymptomatic screening population undergoing CT. CLINICAL IMPACT. AI-based quantification provides a standardized approach for better understanding the potential role of TAC as a predictive imaging biomarker.
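
To make the modeling concrete, here is a minimal sketch of the comparison this abstract describes: Cox proportional hazards models fitted with and without TAC categories and compared by concordance index. The synthetic data, variable names, and integer category encodings are illustrative assumptions, not the study's materials.

```python
# Illustrative sketch (synthetic data, assumed variable names), not the
# authors' code: does adding TAC categories to a CAC-adjusted Cox model
# improve the concordance index?
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age": rng.normal(54, 8, n),
    "cac_category": rng.integers(0, 4, n),   # AI-based CAC category (assumed 0-3)
    "tac_category": rng.integers(0, 4, n),   # AI-based TAC category (assumed 0-3)
    "time": rng.exponential(10, n),          # follow-up in years
    "event": rng.integers(0, 2, n),          # MACE or death indicator
})

base = CoxPHFitter().fit(df.drop(columns="tac_category"),
                         duration_col="time", event_col="event")
full = CoxPHFitter().fit(df, duration_col="time", event_col="event")

# If the C-index barely moves when TAC is added (as the study reports),
# TAC adds little discrimination beyond CAC and the other covariates.
print(f"C-index without TAC: {base.concordance_index_:.3f}")
print(f"C-index with TAC:    {full.concordance_index_:.3f}")
```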

Regulating Generative AI in Radiology Practice: A Trilaminar Approach to Balancing Risk with Innovation.

Gowda V, Bizzo BC, Dreyer KJ

PubMed · Jun 4, 2025
Generative AI tools have proliferated across the market, garnered significant media attention, and increasingly found incorporation into the radiology practice setting. However, they raise a number of unanswered questions concerning governance and appropriate use. By their nature as general-purpose technologies, they strain the limits of the existing FDA premarket review pathways that would regulate them and introduce new sources of liability, privacy, and clinical risk. A multilayered governance approach is needed to balance innovation with safety. To address gaps in oversight, this piece establishes a trilaminar governance model for generative AI technologies: federal regulations serve as a scaffold, upon which tiers of institutional guidelines and industry self-regulatory frameworks are added to create a comprehensive paradigm composed of interlocking parts. Doing so would provide radiologists with an effective risk management strategy for the future, foster continued technical development, and, ultimately, promote patient care.

Impact of AI-Generated ADC Maps on Computer-Aided Diagnosis of Prostate Cancer: A Feasibility Study.

Ozyoruk KB, Harmon SA, Yilmaz EC, Gelikman DG, Bagci U, Simon BD, Merino MJ, Lis R, Gurram S, Wood BJ, Pinto PA, Choyke PL, Turkbey B

PubMed · Jun 4, 2025
To evaluate the impact of AI-generated apparent diffusion coefficient (ADC) maps on the diagnostic performance of a 3D U-Net AI model for prostate cancer (PCa) detection and segmentation at biparametric MRI (bpMRI). The study population was retrospectively collected and consisted of 178 patients, including 119 cases and 59 controls. Cases had a mean age of 62.1 years (SD = 7.4) and a median prostate-specific antigen (PSA) level of 7.27 ng/mL (IQR = 5.43-10.55), while controls had a mean age of 63.4 years (SD = 7.5) and a median PSA of 6.66 ng/mL (IQR = 4.29-11.30). All participants underwent 3.0-T T2-weighted turbo spin-echo MRI and high-b-value echo-planar diffusion-weighted imaging (bpMRI), followed by either prostate biopsy or radical prostatectomy between January 2013 and December 2022. We compared the lesion detection and segmentation performance of a pretrained 3D U-Net AI model using conventional ADC maps versus AI-generated ADC maps. The Wilcoxon signed-rank test was used for statistical comparison, with 95% confidence intervals (CI) estimated via bootstrapping. A p-value <0.05 was considered significant. AI-ADC maps increased the accuracy of the lesion detection AI model from 0.70 to 0.78 (p<0.01). Specificity increased from 0.22 to 0.47 (p<0.001) while maintaining high sensitivity, which was 0.94 with conventional ADC maps and 0.93 with AI-ADC maps (p>0.05). The mean Dice similarity coefficient (DSC) for conventional ADC maps was 0.276, while AI-ADC maps showed a mean DSC of 0.225 (p<0.05). In the subset of patients with ISUP ≥2, standard ADC maps demonstrated a mean DSC of 0.282 compared to 0.230 for AI-ADC maps (p<0.05). AI-generated ADC maps can improve the performance of computer-aided diagnosis of prostate cancer.
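
As a sketch of the statistical machinery mentioned here (a Wilcoxon signed-rank test plus bootstrapped confidence intervals), the snippet below compares per-case detection correctness under the two ADC variants; the arrays are synthetic stand-ins, not the study's data.

```python
# Minimal sketch of the paired comparison described above, on synthetic
# per-patient results: Wilcoxon signed-rank test plus a percentile-bootstrap
# 95% CI for the difference in detection accuracy.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)
n = 178                                     # study's cohort size
correct_conv = rng.binomial(1, 0.70, n)     # correct detection, conventional ADC
correct_ai = rng.binomial(1, 0.78, n)       # correct detection, AI-generated ADC

stat, p = wilcoxon(correct_ai, correct_conv, zero_method="zsplit")
print(f"Wilcoxon signed-rank p = {p:.4f}")

diffs = []
for _ in range(2000):                       # bootstrap over patients
    idx = rng.integers(0, n, n)
    diffs.append(correct_ai[idx].mean() - correct_conv[idx].mean())
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"accuracy difference 95% CI: [{lo:.3f}, {hi:.3f}]")
```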

Multimodal data integration for biologically-relevant artificial intelligence to guide adjuvant chemotherapy in stage II colorectal cancer.

Xie C, Ning Z, Guo T, Yao L, Chen X, Huang W, Li S, Chen J, Zhao K, Bian X, Li Z, Huang Y, Liang C, Zhang Q, Liu Z

PubMed · Jun 4, 2025
Adjuvant chemotherapy provides a limited survival benefit (<5%) for patients with stage II colorectal cancer (CRC) and is suggested for high-risk patients. Given the heterogeneity of stage II CRC, we aimed to develop a clinically explainable artificial intelligence (AI)-powered analyser to identify radiological phenotypes that would benefit from chemotherapy. Multimodal data from patients with CRC across six cohorts were collected, including 405 patients from the Guangdong Provincial People's Hospital for model development and 153 patients from the Yunnan Provincial Cancer Centre for validation. RNA sequencing data were used to identify the differentially expressed genes in the two radiological clusters. Histopathological patterns were evaluated to bridge the gap between the imaging and genetic information. Finally, we investigated the discovered morphological patterns in mouse models to observe imaging features. The survival benefit of chemotherapy varied significantly among the AI-powered radiological clusters (interaction hazard ratio [iHR] = 5.35; 95% CI: 1.98, 14.41; adjusted P-interaction = 0.012). Distinct biological pathways related to immune and stromal cell abundance were observed between the clusters. The observation-only (OO)-preferable cluster exhibited higher necrosis, haemorrhage, and tortuous vessels, whereas the adjuvant chemotherapy (AC)-preferable cluster exhibited vessels with greater pericyte coverage, allowing for a more enriched infiltration of B, CD4+ T, and CD8+ T cells into the core tumoural areas. Further experiments confirmed that changes in vessel morphology led to alterations in predictive imaging features. The developed explainable AI-powered analyser effectively identified patients with stage II CRC with improved overall survival after receiving adjuvant chemotherapy, thereby contributing to the advancement of precision oncology. This work was funded by the National Science Fund of China (81925023, 82302299, and U22A2034), the Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application (2022B1212010011), and the High-level Hospital Construction Project (DFJHBF202105 and YKY-KF202204).
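
The headline statistic here, an interaction hazard ratio (iHR), quantifies how the treatment effect differs between clusters. Below is a hedged sketch of that idea on synthetic data, using a chemotherapy-by-cluster interaction term in a Cox model; nothing here reproduces the paper's analyser.

```python
# Sketch of an interaction hazard ratio (iHR) on synthetic data: the exp(coef)
# of the treatment-by-cluster term measures how much the chemotherapy effect
# differs between the two radiological clusters.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 558                                    # development + validation cohort size
chemo = rng.integers(0, 2, n)              # adjuvant chemo (1) vs observation (0)
cluster = rng.integers(0, 2, n)            # AC-preferable (1) vs OO-preferable (0)
hazard = np.exp(-0.8 * chemo * cluster)    # simulate benefit only in one cluster
time = rng.exponential(5.0 / hazard)
df = pd.DataFrame({
    "chemo": chemo,
    "cluster": cluster,
    "chemo_x_cluster": chemo * cluster,
    "time": np.minimum(time, 8.0),         # administrative censoring at 8 years
    "event": (time < 8.0).astype(int),
})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "p"]])     # exp(coef) of chemo_x_cluster ~ iHR
```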

UltraBones100k: A reliable automated labeling method and large-scale dataset for ultrasound-based bone surface extraction.

Wu L, Cavalcanti NA, Seibold M, Loggia G, Reissner L, Hein J, Beeler S, Viehöfer A, Wirth S, Calvet L, Fürnstahl P

PubMed · Jun 4, 2025
Ultrasound-based bone surface segmentation is crucial in computer-assisted orthopedic surgery. However, ultrasound images have limitations, including a low signal-to-noise ratio, acoustic shadowing, and speckle noise, which make interpretation difficult. Existing deep learning models for bone segmentation rely primarily on costly manual labeling by experts, limiting dataset size and model generalizability. Additionally, the complexity of ultrasound physics and acoustic shadowing makes the images difficult for humans to interpret, leading to incomplete labels in low-intensity and anechoic regions and limiting model performance. To advance the state of the art in ultrasound bone segmentation and establish effective model benchmarks, larger and higher-quality datasets are needed. We propose a methodology for collecting ex-vivo ultrasound datasets with automatically generated bone labels, including anechoic regions. The proposed labels are derived by accurately superimposing tracked bone Computed Tomography (CT) models onto the tracked ultrasound images. These initial labels are refined to account for ultrasound physics. To clinically evaluate the proposed method, an expert physician from our university hospital specializing in orthopedic sonography assessed the quality of the generated bone labels. A neural network for bone segmentation was trained on the collected dataset and its predictions were compared to expert manual labels, evaluating accuracy, completeness, and F1-score. We collected UltraBones100k, the largest known dataset of its kind, comprising 100k ex-vivo ultrasound images of human lower limbs with bone annotations, specifically targeting the fibula, tibia, and foot bones. A Wilcoxon signed-rank test with Bonferroni correction confirmed that the bone alignment after our optimization pipeline significantly improved the quality of bone labeling (p<0.001). The model trained on UltraBones100k consistently outperforms manual labeling in all metrics, particularly in low-intensity regions (at a distance threshold of 0.5 mm: 320% improvement in completeness, 27.4% improvement in accuracy, and 197% improvement in F1 score). CONCLUSION: This work promises to facilitate research and clinical translation of ultrasound imaging in computer-assisted interventions, particularly for applications such as 2D bone segmentation, 3D bone surface reconstruction, and multi-modality bone registration.
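
The completeness/accuracy/F1-at-a-distance-threshold metrics quoted above can be computed from point sets with a nearest-neighbour search. The sketch below shows one plausible formulation on toy 2D points; the exact definitions used in the paper may differ.

```python
# Hedged sketch of distance-threshold surface metrics on toy point sets:
# accuracy = fraction of predicted points within the threshold of ground truth
# (precision-like), completeness = fraction of ground-truth points within the
# threshold of a prediction (recall-like), F1 = their harmonic mean.
import numpy as np
from scipy.spatial import cKDTree

def surface_metrics(pred_pts, gt_pts, threshold_mm=0.5):
    d_pred, _ = cKDTree(gt_pts).query(pred_pts)    # nearest GT per prediction
    d_gt, _ = cKDTree(pred_pts).query(gt_pts)      # nearest prediction per GT
    accuracy = (d_pred <= threshold_mm).mean()
    completeness = (d_gt <= threshold_mm).mean()
    f1 = 2 * accuracy * completeness / (accuracy + completeness + 1e-9)
    return accuracy, completeness, f1

rng = np.random.default_rng(0)
gt = rng.uniform(0, 50, size=(5000, 2))                # toy ground-truth surface
pred = gt[:3000] + rng.normal(0, 0.2, size=(3000, 2))  # noisy, incomplete prediction
acc, comp, f1 = surface_metrics(pred, gt)
print(f"accuracy={acc:.2f} completeness={comp:.2f} F1={f1:.2f}")
```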

Vascular segmentation of functional ultrasound images using deep learning.

Sebia H, Guyet T, Pereira M, Valdebenito M, Berry H, Vidal B

PubMed · Jun 4, 2025
Segmentation of medical images is a fundamental task with numerous applications. While MRI, CT, and PET modalities have significantly benefited from deep learning segmentation techniques, more recent modalities, like functional ultrasound (fUS), have seen limited progress. fUS is a non-invasive imaging method that measures changes in cerebral blood volume (CBV) with high spatio-temporal resolution. However, distinguishing arterioles from venules in fUS is challenging due to opposing blood flow directions within the same pixel. Ultrasound localization microscopy (ULM) can enhance resolution by tracking microbubble contrast agents, but it is invasive and lacks dynamic CBV quantification. In this paper, we introduce the first deep learning-based application for fUS image segmentation, capable of differentiating signals based on vertical flow direction (upward vs. downward), using ULM-based automatic annotation, and enabling dynamic CBV quantification. In the cortical vasculature, this distinction in flow direction provides a proxy for differentiating arteries from veins. We evaluate various UNet architectures on fUS images of rat brains, achieving competitive segmentation performance, with 90% accuracy, a 71% F1 score, and an IoU of 0.59, using only 100 temporal frames from a fUS stack. These results are comparable to those from tubular structure segmentation in other imaging modalities. Additionally, models trained on resting-state data generalize well to images captured during visual stimulation, highlighting their robustness. Although it does not reach the full granularity of ULM, the proposed method provides a practical, non-invasive, and cost-effective solution for inferring flow direction, which is particularly valuable in scenarios where ULM is not available or feasible. Our pipeline shows high linear correlation coefficients between signals from predicted and actual compartments, showcasing its ability to accurately capture blood flow dynamics.
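
For concreteness, here is a small sketch of how the reported accuracy, F1, and IoU could be computed for a three-class map (background, upward flow, downward flow); the label encoding is an assumption for illustration.

```python
# Sketch of pixelwise evaluation for a directional-flow segmentation with
# assumed labels 0 = background, 1 = upward flow, 2 = downward flow.
import numpy as np

def class_iou_f1(pred, target, cls):
    tp = np.sum((pred == cls) & (target == cls))
    fp = np.sum((pred == cls) & (target != cls))
    fn = np.sum((pred != cls) & (target == cls))
    iou = tp / (tp + fp + fn + 1e-9)
    f1 = 2 * tp / (2 * tp + fp + fn + 1e-9)
    return iou, f1

rng = np.random.default_rng(3)
target = rng.integers(0, 3, size=(128, 128))
noise = rng.integers(0, 3, size=(128, 128))
pred = np.where(rng.random((128, 128)) < 0.9, target, noise)  # ~90% agreement

print("accuracy:", round((pred == target).mean(), 3))
for cls, name in [(1, "upward"), (2, "downward")]:
    iou, f1 = class_iou_f1(pred, target, cls)
    print(f"{name}: IoU={iou:.2f} F1={f1:.2f}")
```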

Subgrouping autism and ADHD based on structural MRI population modelling centiles.

Pecci-Terroba C, Lai MC, Lombardo MV, Chakrabarti B, Ruigrok ANV, Suckling J, Anagnostou E, Lerch JP, Taylor MJ, Nicolson R, Georgiades S, Crosbie J, Schachar R, Kelley E, Jones J, Arnold PD, Seidlitz J, Alexander-Bloch AF, Bullmore ET, Baron-Cohen S, Bedford SA, Bethlehem RAI

PubMed · Jun 4, 2025
Autism and attention deficit hyperactivity disorder (ADHD) are two highly heterogeneous neurodevelopmental conditions with variable underlying neurobiology. Imaging studies have yielded varied results, and it is now clear that there is unlikely to be one characteristic neuroanatomical profile of either condition. Parsing this heterogeneity could allow us to identify more homogeneous subgroups, either within or across conditions, which may be more clinically informative. This has been a pivotal goal for neurodevelopmental research using both clinical and neuroanatomical features, though results thus far have again been inconsistent with regard to the number and characteristics of subgroups. Here, we use population modelling to cluster a multi-site dataset based on global and regional centile scores of cortical thickness, surface area, and grey matter volume. We use HYDRA, a novel semi-supervised machine learning algorithm that clusters based on differences from controls, and compare its performance to a traditional clustering approach. We identified distinct subgroups within autism and ADHD, as well as across diagnoses, often with opposite neuroanatomical alterations relative to controls. These subgroups were characterised by different combinations of increased or decreased morphometric patterns. We did not find significant clinical differences across subgroups. Crucially, however, the number of subgroups and their membership differed vastly depending on the chosen features and algorithm, highlighting the importance of careful method selection. We highlight the importance of examining heterogeneity in autism and ADHD and demonstrate that population modelling is a useful tool for studying subgrouping in these conditions. We identified subgroups with distinct patterns of alterations relative to controls, but note that these results depend heavily on the algorithm used, and we encourage detailed reporting of methods and features in future studies.
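
As a rough sketch of the "traditional clustering approach" baseline this abstract compares against (HYDRA itself is not reimplemented here), one can z-score each morphometric against controls, a crude stand-in for normative centile scores, and cluster the case deviations; all names and data below are hypothetical.

```python
# Crude stand-in for the baseline: z-score cases against controls (a rough
# proxy for normative centiles) and cluster the deviations with k-means.
# HYDRA, which clusters semi-supervisedly on differences from controls,
# is not reproduced here.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
controls = rng.normal(0, 1, size=(300, 4))               # e.g. CT, SA, GMV features
cases = np.vstack([rng.normal(0.8, 1, size=(150, 4)),    # two synthetic subgroups
                   rng.normal(-0.8, 1, size=(150, 4))])  # with opposite shifts

mu, sd = controls.mean(axis=0), controls.std(axis=0)
deviations = (cases - mu) / sd                           # deviation from controls

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(deviations)
for k in range(2):
    print(f"subgroup {k}: mean deviation {deviations[labels == k].mean(axis=0).round(2)}")
```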

Computed tomography-based radiomics model for predicting station 4 lymph node metastasis in non-small cell lung cancer.

Kang Y, Li M, Xing X, Qian K, Liu H, Qi Y, Liu Y, Cui Y, Zhang H

PubMed · Jun 4, 2025
This study aimed to develop and validate machine learning models for the preoperative identification of metastasis to station 4 mediastinal lymph nodes (MLNM) in non-small cell lung cancer (NSCLC) patients at pathological N0-N2 (pN0-pN2) stage, thereby enhancing the precision of clinical decision-making. We included a total of 356 NSCLC patients at pN0-pN2 stage, divided into training (n = 207), internal test (n = 90), and independent test (n = 59) sets. Station 4 mediastinal lymph node (LN) regions of interest (ROIs) were semi-automatically segmented on venous-phase computed tomography (CT) images for radiomics feature extraction. Least absolute shrinkage and selection operator (LASSO) regression was used to select features with non-zero coefficients. Four machine learning algorithms (decision tree [DT], logistic regression [LR], random forest [RF], and support vector machine [SVM]) were employed to construct radiomics models. Clinical predictors were identified through univariate and multivariate logistic regression and subsequently integrated with the radiomics features to develop combined models. Model performance was evaluated using receiver operating characteristic (ROC) analysis, calibration curves, decision curve analysis (DCA), and DeLong's test. Of 1721 radiomics features, eight were selected using LASSO regression. The RF-based combined model exhibited the strongest discriminative power, with an area under the curve (AUC) of 0.934 for the training set and 0.889 for the internal test set. The calibration curve and DCA further indicated the superior performance of the RF-based combined model. The independent test set further verified the model's robustness. The combined model based on RF, integrating radiomics and clinical features, effectively and non-invasively identifies metastasis to the station 4 mediastinal LNs in NSCLC patients at pN0-pN2 stage. This model serves as an effective auxiliary tool for clinical decision-making and has the potential to optimize treatment strategies and improve prognostic assessment for pN0-pN2 patients.
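
The pipeline shape described here (LASSO screening followed by a classifier on the surviving features) is straightforward to sketch; the snippet below uses synthetic data of matching dimensions and a random forest, and is not the study's code.

```python
# Sketch of the assumed pipeline on synthetic data: LASSO keeps features with
# non-zero coefficients, then a random forest is trained on them and scored
# by ROC AUC. (LassoCV on 0/1 labels is a common screening shortcut.)
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=356, n_features=1721, n_informative=8,
                           random_state=0)       # mirrors 1721 radiomics features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y,
                                          random_state=0)

scaler = StandardScaler().fit(X_tr)
lasso = LassoCV(cv=5, random_state=0).fit(scaler.transform(X_tr), y_tr)
keep = np.flatnonzero(lasso.coef_)               # features with non-zero weights
print(f"LASSO kept {keep.size} of {X.shape[1]} features")

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr[:, keep], y_tr)
auc = roc_auc_score(y_te, rf.predict_proba(X_te[:, keep])[:, 1])
print(f"test AUC: {auc:.3f}")
```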

Deep learning-based cone-beam CT motion compensation with single-view temporal resolution.

Maier J, Sawall S, Arheit M, Paysan P, Kachelrieß M

PubMed · Jun 4, 2025
Cone-beam CT (CBCT) scans that are affected by motion often require motion compensation to reduce artifacts or to reconstruct 4D (3D+time) representations of the patient. To do so, most existing strategies rely on some form of gating that sorts the acquired projections into motion bins, which can then be reconstructed individually before further post-processing is applied to improve image quality. While this concept is useful for periodic motion patterns, it fails in the case of non-periodic motion as observed, for example, in irregularly breathing patients. To address this issue and to increase temporal resolution, we propose deep single angle-based motion compensation (SAMoCo). To avoid gating, and therefore its downsides, deep SAMoCo trains a U-net-like network to predict displacement vector fields (DVFs) representing the motion that occurred between any two given time points of the scan. To do so, 4D clinical CT scans are used to simulate 4D CBCT scans as well as the corresponding ground truth DVFs that map between the different motion states of the scan. The network is then trained to predict these DVFs as a function of the respective projection views and an initial 3D reconstruction. Once the network is trained, an arbitrary motion state corresponding to a certain projection view of the scan can be recovered by estimating DVFs from any other state or view and applying them during reconstruction. Applied to 4D CBCT simulations of breathing patients, deep SAMoCo provides high-quality reconstructions for periodic and non-periodic motion: deviations from the ground truth are less than 27 HU on average, while respiratory motion, or the diaphragm position, can be resolved with an accuracy of about 0.75 mm. Similar results were obtained for real measurements, where a high correlation with external motion-monitoring signals was observed, even in patients with highly irregular respiration. The ability to estimate DVFs as a function of two arbitrary projection views and an initial 3D reconstruction makes deep SAMoCo applicable to arbitrary motion patterns with single-view temporal resolution. Deep SAMoCo is therefore particularly useful for cases with unsteady breathing, compensation of residual motion during a breath-hold scan, or scans with fast gantry rotation times in which the data acquisition covers only a very limited number of breathing cycles. Furthermore, not requiring gating signals may simplify the clinical workflow and reduce the time needed for patient preparation.
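
The core operation deep SAMoCo builds on, warping a reconstruction with a displacement vector field, can be illustrated compactly. The sketch below applies a synthetic DVF to a toy volume; in the method itself the DVF would come from the trained network, and conventions such as pull-back versus push-forward are assumptions here.

```python
# Toy illustration (not the authors' implementation): warping a 3D volume with
# a displacement vector field (DVF), the building block that lets any motion
# state be mapped onto any other during reconstruction.
import numpy as np
from scipy.ndimage import map_coordinates

def warp_volume(vol, dvf):
    """Warp vol with a DVF of shape (3, *vol.shape), given in voxels
    (pull-back convention: output(x) = vol(x + dvf(x)))."""
    coords = np.indices(vol.shape).astype(np.float64) + dvf
    return map_coordinates(vol, coords, order=1, mode="nearest")

vol = np.zeros((64, 64, 64))
vol[24:40, 24:40, 24:40] = 1.0            # toy "anatomy": a bright cube
dvf = np.zeros((3,) + vol.shape)
dvf[2] = 3.0                              # uniform 3-voxel displacement along z

warped = warp_volume(vol, dvf)
# The cube's z-profile shifts by 3 voxels (toward lower z under pull-back).
print(np.argmax(vol.sum(axis=(0, 1))), np.argmax(warped.sum(axis=(0, 1))))
```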

Deep learning based rapid X-ray fluorescence signal extraction and image reconstruction for preclinical benchtop X-ray fluorescence computed tomography applications.

Kaphle A, Jayarathna S, Cho SH

PubMed · Jun 4, 2025
Recent research advances have resulted in an experimental benchtop X-ray fluorescence computed tomography (XFCT) system that likely meets the imaging dose/scan time constraints for benchtop XFCT imaging of live mice injected with gold nanoparticles (GNPs). For routine in vivo benchtop XFCT imaging, however, additional challenges must be addressed, most notably the need for rapid, near-real-time X-ray fluorescence (XRF) signal extraction and XFCT image reconstruction. Here we propose a novel end-to-end deep learning (DL) framework that integrates a one-dimensional convolutional neural network (1D CNN) for rapid XRF signal extraction with a U-Net model for XFCT image reconstruction. We trained the models using a comprehensive dataset including experimentally acquired and augmented XRF/scatter photon spectra from various GNP concentrations and imaging scenarios, including phantom and synthetic mouse models. The DL framework demonstrated exceptional performance in both tasks. The 1D CNN achieved a high coefficient of determination (R² > 0.9885) and a low mean absolute error (MAE < 0.6248) in XRF signal extraction. The U-Net model achieved an average structural similarity index measure (SSIM) of 0.9791 and a peak signal-to-noise ratio (PSNR) of 39.11 in XFCT image reconstruction, closely matching ground truth images. Notably, the DL approach (vs. the conventional approach) reduced the total post-processing time per slice from approximately 6 min to just 1.25 s.
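
To ground the first stage of the framework, here is a minimal PyTorch sketch of a 1D CNN mapping a raw XRF/scatter spectrum to a per-bin XRF estimate; the layer sizes, the 1024-bin spectrum length, and the regression target are illustrative assumptions.

```python
# Hedged sketch of a 1D CNN for XRF signal extraction: input is a raw
# XRF/scatter photon spectrum, output is a per-bin estimate of the net XRF
# signal. Architecture and sizes are assumptions, not the paper's model.
import torch
import torch.nn as nn

class XRFNet1D(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 1, kernel_size=1),   # per-bin XRF estimate
        )

    def forward(self, spectrum):               # (batch, 1, n_bins)
        return self.net(spectrum)

model = XRFNet1D()
spectra = torch.rand(8, 1, 1024)               # batch of assumed 1024-bin spectra
with torch.no_grad():
    xrf = model(spectra)
print(xrf.shape)                               # torch.Size([8, 1, 1024])
```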