
Diagnostic performance of lumbar spine CT using deep learning denoising to evaluate disc herniation and spinal stenosis.

Park S, Kang JH, Moon SG

Jun 7, 2025
To evaluate the diagnostic performance of lumbar spine CT using deep learning denoising (DLD CT) for detecting disc herniation and spinal stenosis. This retrospective study included 47 patients (229 intervertebral discs from L1/2 to L5/S1; 18 men and 29 women; mean age, 69.1 ± 10.9 years) who underwent lumbar spine CT and MRI within 1 month. CT images were reconstructed using filtered back projection (FBP) and denoised using a deep learning algorithm (ClariCT.AI). Three radiologists independently evaluated standard CT and DLD CT at an 8-week interval for the presence of disc herniation, central canal stenosis, and neural foraminal stenosis. Subjective image quality and diagnostic confidence were also assessed using five-point Likert scales. Standard CT and DLD CT were compared using MRI as the reference standard. DLD CT showed higher sensitivity (60% (70/117) vs. 44% (51/117); p < 0.001) and similar specificity (94% (534/570) vs. 94% (538/570); p = 0.465) for detecting disc herniation. Specificity for detecting central canal stenosis and neural foraminal stenosis was higher with DLD CT (90% (487/540) vs. 86% (466/540); p = 0.003, and 94% (1202/1272) vs. 92% (1171/1272); p < 0.001), while sensitivity was comparable (81% (119/147) vs. 77% (113/147); p = 0.233, and 83% (85/102) vs. 81% (83/102); p = 0.636). Image quality and diagnostic confidence were superior for DLD CT (all comparisons, p < 0.05). Compared with standard CT, DLD CT can improve diagnostic performance in detecting disc herniation and spinal stenosis, with superior image quality and diagnostic confidence.
Question: Accurate diagnosis of disc herniation and spinal stenosis on lumbar spine CT is limited by low soft-tissue contrast.
Findings: Lumbar spine CT using deep learning denoising (DLD CT) demonstrated superior diagnostic performance in detecting disc herniation and spinal stenosis compared with standard CT.
Clinical relevance: DLD CT can be used as a simple and cost-effective screening test.
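The reported sensitivities and specificities follow directly from the counts given in parentheses. A minimal sketch that reproduces them (illustrative only, not the authors' analysis code):

```python
# Recomputing the disc herniation figures from the abstract's counts
# (MRI as reference standard; 117 positive and 570 negative discs).
def sensitivity(true_positives: int, positives: int) -> float:
    return true_positives / positives

def specificity(true_negatives: int, negatives: int) -> float:
    return true_negatives / negatives

print(f"DLD CT sensitivity:      {sensitivity(70, 117):.0%}")   # 60%
print(f"Standard CT sensitivity: {sensitivity(51, 117):.0%}")   # 44%
print(f"DLD CT specificity:      {specificity(534, 570):.0%}")  # 94%
print(f"Standard CT specificity: {specificity(538, 570):.0%}")  # 94%
```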

Clinical validation of a deep learning model for low-count PET image enhancement.

Long Q, Tian Y, Pan B, Xu Z, Zhang W, Xu L, Fan W, Pan T, Gong NJ

Jun 5, 2025
To investigate the effects of the deep learning model RaDynPET on fourfold reduced-count whole-body PET examinations. A total of 120 patients (84 in the internal cohort and 36 in the external cohort) undergoing ¹⁸F-FDG PET/CT examinations were enrolled. PET images were reconstructed using the OSEM algorithm with 120-s (G120) and 30-s (G30) list-mode data. RaDynPET was developed to generate enhanced images (R30) from G30. Two experienced nuclear medicine physicians independently evaluated subjective image quality using a 5-point Likert scale. Standardized uptake values (SUV), standard deviations, liver signal-to-noise ratio (SNR), lesion tumor-to-background ratio (TBR), and contrast-to-noise ratio (CNR) were compared. Subgroup analyses evaluated performance across demographics, and lesion detectability was evaluated using the external dataset. RaDynPET was also compared to other deep learning methods. In the internal cohort, R30 demonstrated significantly higher image quality scores than G30 and G120. R30 showed excellent agreement with G120 for liver and lesion SUV values and surpassed G120 in liver SNR and CNR. Liver SNR and CNR of R30 were comparable to G120 in the thin group, and the CNR of R30 was comparable to G120 in the young age group. In the external cohort, R30 maintained strong SUV agreement with G120, with lesion-level sensitivity and specificity of 95.45% and 98.41%, respectively. There was no statistically significant difference in lesion detection between R30 and G120. RaDynPET achieved the highest PSNR and SSIM among the deep learning methods compared. The RaDynPET model effectively restored high image quality while maintaining SUV agreement for ¹⁸F-FDG PET scans acquired in 25% of the standard acquisition time.
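For readers unfamiliar with the image-quality metrics compared above, a minimal sketch using common PET conventions; the paper's exact ROI definitions are an assumption here, not taken from the source:

```python
import numpy as np

def liver_snr(liver_roi: np.ndarray) -> float:
    """Liver SNR: mean SUV in a liver ROI divided by its standard deviation."""
    return liver_roi.mean() / liver_roi.std()

def lesion_tbr(lesion_suv_max: float, background_suv_mean: float) -> float:
    """Tumor-to-background ratio: lesion uptake relative to background."""
    return lesion_suv_max / background_suv_mean

def lesion_cnr(lesion_suv_max: float, background_roi: np.ndarray) -> float:
    """Contrast-to-noise ratio: lesion contrast over background noise."""
    return (lesion_suv_max - background_roi.mean()) / background_roi.std()

# Example with made-up SUV samples from a liver ROI:
roi = np.array([2.1, 2.3, 1.9, 2.2, 2.0])
print(f"liver SNR: {liver_snr(roi):.1f}")
```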

Are presentations of thoracic CT performed on admission to the ICU associated with mortality at day-90 in COVID-19 related ARDS?

Le Corre A, Maamar A, Lederlin M, Terzi N, Tadié JM, Gacouin A

Jun 5, 2025
Computed tomography (CT) analysis of lung morphology has significantly advanced our understanding of acute respiratory distress syndrome (ARDS). During the Coronavirus Disease 2019 (COVID-19) pandemic, CT imaging was widely utilized to evaluate lung injury and was suggested as a tool for predicting patient outcomes. However, data specifically focused on patients with ARDS admitted to intensive care units (ICUs) remain limited. This retrospective study analyzed patients admitted to ICUs between March 2020 and November 2022 with moderate to severe COVID-19 ARDS. All CT scans performed within 48 h of ICU admission were independently reviewed by three experts. Lung injury severity was quantified using the CT Severity Score (CT-SS; range 0-25). Patients were categorized as having severe disease (CT-SS ≥ 18) or non-severe disease (CT-SS < 18). The primary outcome was all-cause mortality at 90 days. Secondary outcomes included ICU mortality and medical complications during the ICU stay. Additionally, we evaluated a computer-assisted CT-score assessment using artificial intelligence software (CT Pneumonia Analysis®, SIEMENS Healthcare) to explore the feasibility of automated measurement and routine implementation. A total of 215 patients with moderate to severe COVID-19 ARDS were included. The median CT-SS at admission was 18/25 [interquartile range, 15-21]. Among them, 120 patients (56%) had a severe CT-SS (≥ 18), while 95 patients (44%) had a non-severe CT-SS (< 18). The 90-day mortality rates were 20.8% for the severe group and 15.8% for the non-severe group (p = 0.35). No significant association was observed between CT-SS severity and patient outcomes. In patients with moderate to severe COVID-19 ARDS, systematic CT assessment of lung parenchymal injury was not a reliable predictor of 90-day mortality or ICU-related complications.
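The non-significant mortality comparison can be reproduced from the reported rates. Death counts below are back-calculated (20.8% of 120 ≈ 25; 15.8% of 95 ≈ 15), so this is a sketch, not the authors' analysis:

```python
from scipy.stats import chi2_contingency

# 90-day mortality: severe (CT-SS >= 18) vs. non-severe (CT-SS < 18).
table = [[25, 120 - 25],   # severe group: deaths, survivors
         [15, 95 - 15]]    # non-severe group: deaths, survivors
chi2, p, _, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")  # p ~= 0.35, matching the abstract
```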

Dual energy CT-based Radiomics for identification of myocardial focal scar and artificial beam-hardening.

Zeng L, Hu F, Qin P, Jia T, Lu L, Yang Z, Zhou X, Qiu Y, Luo L, Chen B, Jin L, Tang W, Wang Y, Zhou F, Liu T, Wang A, Zhou Z, Guo X, Zheng Z, Fan X, Xu J, Xiao L, Liu Q, Guan W, Chen F, Wang J, Li S, Chen J, Pan C

Jun 5, 2025
Computed tomography is an inadequate method for detecting myocardial focal scar (MFS) due to its moderate density resolution, which is insufficient for distinguishing MFS from artificial beam-hardening (BH). Virtual monochromatic images (VMIs) of dual-energy coronary computed tomography angiography (DECCTA) provide a variety of diagnostic information with significant potential for detecting myocardial lesions. The aim of this study was to assess whether radiomics analysis of VMIs of DECCTA can help distinguish MFS from BH. A prospective cohort of patients suspected of having an old myocardial infarction was assembled at two centers between January 2021 and June 2024. MFS and BH segmentation and radiomics feature extraction and selection were performed on VMIs, and four machine learning classifiers were constructed using the strongest selected features. Subsequently, an independent validation was conducted, and a subjective diagnosis of the validation set was provided by a radiologist. The AUC was used to assess the performance of the radiomics models. The training set included 57 patients from center 1 (mean age, 54 ± 9 years; 55 men), and the external validation set included 10 patients from center 2 (mean age, 59 ± 10 years; 9 men). The radiomics models exhibited the highest AUC value of 0.937 (at 130 keV VMIs), while the radiologist demonstrated the highest AUC value of 0.734 (at 40 keV VMIs). The integration of radiomic features derived from VMIs of DECCTA with machine learning algorithms has the potential to improve the efficiency of distinguishing MFS from BH.
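The abstract names the workflow (feature selection followed by machine-learning classifiers evaluated by AUC) but not the specific selector or models, so a generic scikit-learn sketch stands in for them below; the feature matrices are placeholders shaped like the two cohorts (57 training, 10 external validation cases):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder radiomics features (rows: segmented regions labeled
# MFS = 1 or BH = 0); real features would come from the VMI segmentations.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(57, 100)), np.arange(57) % 2
X_test, y_test = rng.normal(size=(10, 100)), np.arange(10) % 2

model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),        # keep only the strongest features
    LogisticRegression(max_iter=1000),   # one of several possible classifiers
)
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```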

Preliminary analysis of AI-based thyroid nodule evaluation in a non-subspecialist endocrinology setting.

Fernández Velasco P, Estévez Asensio L, Torres B, Ortolá A, Gómez Hoyos E, Delgado E, de Luís D, Díaz Soto G

Jun 5, 2025
Thyroid nodules are commonly evaluated using ultrasound-based risk stratification systems, which rely on subjective descriptors. Artificial intelligence (AI) may improve assessment, but its effectiveness in non-subspecialist settings is unclear. This study evaluated the impact of an AI-based decision support system (AI-DSS) on thyroid nodule ultrasound assessments by general endocrinologists (GE) without subspecialty thyroid imaging training. A prospective cohort study was conducted on 80 patients undergoing thyroid ultrasound in GE outpatient clinics. Thyroid ultrasound was performed by GE based on clinical judgment as part of routine care. Images were retrospectively analyzed using an AI-DSS (Koios DS), independently of clinician assessments. AI-DSS results were compared with initial GE evaluations and, when patients were referred, with expert evaluations at a subspecialized thyroid nodule clinic (TNC). Agreement in ultrasound features, risk classification by the American College of Radiology Thyroid Imaging Reporting and Data System (ACR TI-RADS) and American Thyroid Association guidelines, and referral recommendations was assessed. AI-DSS assessments differed notably from GE assessments, particularly for nodule composition (solid: 80% vs. 36%, p < 0.01), echogenicity (hypoechoic: 52% vs. 16%, p < 0.01), and echogenic foci (microcalcifications: 10.7% vs. 1.3%, p < 0.05). AI-DSS classification led to a higher referral rate than GE classification (37.3% vs. 30.7%, not statistically significant). Agreement between AI-DSS and GE in ACR TI-RADS scoring was moderate (r = 0.337; p < 0.001), but improved when comparing GE with the AI-DSS and the TNC subspecialist (r = 0.465; p < 0.05 and r = 0.607; p < 0.05, respectively). In a non-subspecialist setting, non-adjunct use of the AI-DSS did not significantly improve risk stratification or reduce hypothetical referrals. The system tended to overestimate risk, potentially leading to unnecessary procedures. Further optimization is required for AI to function effectively in a low-prevalence environment.
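A toy illustration of the agreement statistic reported above; the abstract does not state whether r is Pearson or Spearman, so Spearman is assumed here as the natural choice for ordinal TI-RADS levels, and the scores below are hypothetical placeholders, not study data:

```python
from scipy.stats import spearmanr

# Hypothetical ACR TI-RADS levels (1-5) assigned to the same ten nodules.
ge_scores = [3, 4, 2, 5, 3, 4, 1, 2, 5, 3]  # general endocrinologist
ai_scores = [4, 4, 3, 5, 4, 5, 2, 2, 5, 4]  # AI decision support system
r, p = spearmanr(ge_scores, ai_scores)
print(f"r = {r:.3f}, p = {p:.3f}")
```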

Artificial intelligence for detecting traumatic intracranial haemorrhage with CT: A workflow-oriented implementation.

Abed S, Hergan K, Pfaff J, Dörrenberg J, Brandstetter L, Gradl J

Jun 3, 2025
The objective of this study was to assess the performance of an artificial intelligence (AI) algorithm in detecting intracranial haemorrhages (ICHs) on non-contrast CT (NCCT) scans. A further objective was to gauge the department's acceptance of the algorithm. Surveys conducted at three and nine months post-implementation revealed growing radiologist acceptance of the AI tool as its performance improved. However, a significant portion still preferred an additional physician at comparable cost. Our findings emphasize the importance of careful software implementation within a robust IT architecture.

Enhancing Lesion Detection in Inflammatory Myelopathies: A Deep Learning-Reconstructed Double Inversion Recovery MRI Approach.

Fang Q, Yang Q, Wang B, Wen B, Xu G, He J

Jun 3, 2025
The imaging of inflammatory myelopathies has advanced significantly over time, with MRI techniques playing a pivotal role in enhancing lesion detection. However, the impact of deep learning (DL)-based reconstruction on 3D double inversion recovery (DIR) imaging for inflammatory myelopathies remains unassessed. This study aimed to compare acquisition time, image quality, diagnostic confidence, and lesion detection rates among sagittal T2WI, standard DIR, and DL-reconstructed DIR in patients with inflammatory myelopathies. In this observational study, patients diagnosed with inflammatory myelopathies were recruited between June 2023 and March 2024. Each patient underwent sagittal conventional TSE sequences and standard 3D DIR (T2WI and standard 3D DIR served as references for comparison), followed by an undersampled, accelerated deep learning DIR (DIR-DL) examination. Three neuroradiologists evaluated the images using a 4-point Likert scale (from 1 to 4) for overall image quality, perceived SNR, sharpness, artifacts, and diagnostic confidence. Acquisition times and lesion detection rates were also compared among the acquisition protocols. A total of 149 participants were evaluated (mean age, 40.6 [SD, 16.8] years; 71 women). The median acquisition time for DIR-DL was significantly lower than for standard DIR (151 seconds [interquartile range, 148-155 seconds] versus 298 seconds [interquartile range, 288-301 seconds]; P < .001), a 49% time reduction. DIR-DL images scored higher in overall quality, perceived SNR, and artifact reduction (all P < .001). There were no significant differences in sharpness (P = .07) or diagnostic confidence (P = .06) between the standard DIR and DIR-DL protocols. Additionally, DIR-DL detected 37% more lesions than T2WI (300 versus 219; P < .001). DIR-DL significantly reduces acquisition time and improves image quality compared with standard DIR, without compromising diagnostic confidence. It also enhances lesion detection in patients with inflammatory myelopathies, making it a valuable tool in clinical practice. These findings underscore the potential for incorporating DIR-DL into future imaging guidelines.
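The two headline percentages follow directly from the reported medians and lesion counts:

```python
# Quick arithmetic behind the abstract's headline figures.
standard_dir, dir_dl = 298, 151          # median acquisition times (s)
print(f"Time reduction: {1 - dir_dl / standard_dir:.0%}")  # 49%

t2wi_lesions, dir_dl_lesions = 219, 300  # lesions detected
print(f"More lesions vs. T2WI: {dir_dl_lesions / t2wi_lesions - 1:.0%}")  # 37%
```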

How do medical institutions co-create artificial intelligence solutions with commercial startups?

Grootjans W, Krainska U, Rezazade Mehrizi MH

Jun 3, 2025
As many radiology departments embark on adopting artificial intelligence (AI) solutions in clinical practice, they face the challenge that commercial applications often do not fit their needs. As a result, they engage in a co-creation process with technology companies to collaboratively develop and implement AI solutions. Despite its importance, the process of co-creating AI solutions is under-researched, particularly regarding the range of challenges that may occur and how medical and technological parties can monitor, assess, and guide their co-creation process through an effective collaboration framework. Drawing on a multi-case study of three co-creation projects at an academic medical center in the Netherlands, we examine how co-creation processes unfold through different scenarios, depending on the extent to which the two parties engage in "resourcing," "adaptation," and "reconfiguration." We offer a relational framework that helps the involved parties monitor, assess, and guide their collaborations in co-creating AI solutions. The framework allows them to discover novel use cases, reconsider their established assumptions and practices for developing AI solutions, and redesign their technological systems, clinical workflows, and legal and organizational arrangements. Using the proposed framework, we identified distinct co-creation journeys with varying outcomes, which could be mapped onto the framework to diagnose, monitor, and guide collaborations toward desired results. The outcomes of co-creation can vary widely. The proposed framework enables medical institutions and technology companies to assess challenges, make adjustments, and steer their collaboration toward desired goals.
Question: How can medical institutions and AI startups effectively co-create AI solutions for radiology, ensuring alignment with clinical needs while steering collaboration effectively?
Findings: This study provides a co-creation framework that allows assessment of project progress and stakeholder engagement, along with guidelines for radiology departments to steer the co-creation of AI.
Clinical relevance: By actively involving radiology professionals in AI co-creation, this study demonstrates how co-creation helps bridge the gap between clinical needs and AI development, leading to clinically relevant, user-friendly solutions that enhance the radiology workflow.

Efficiency and Quality of Generative AI-Assisted Radiograph Reporting.

Huang J, Wittbrodt MT, Teague CN, Karl E, Galal G, Thompson M, Chapa A, Chiu ML, Herynk B, Linchangco R, Serhal A, Heller JA, Abboud SF, Etemadi M

Jun 2, 2025
Diagnostic imaging interpretation involves distilling multimodal clinical information into text form, a task well-suited to augmentation by generative artificial intelligence (AI). However, to our knowledge, impacts of AI-based draft radiological reporting remain unstudied in clinical settings. To prospectively evaluate the association of radiologist use of a workflow-integrated generative model capable of providing draft radiological reports for plain radiographs across a tertiary health care system with documentation efficiency, the clinical accuracy and textual quality of final radiologist reports, and the model's potential for detecting unexpected, clinically significant pneumothorax. This prospective cohort study was conducted from November 15, 2023, to April 24, 2024, at a tertiary care academic health system. The association between use of the generative model and radiologist documentation efficiency was evaluated for radiographs documented with model assistance compared with a baseline set of radiographs without model use, matched by study type (chest or nonchest). Peer review was performed on model-assisted interpretations. Flagging of pneumothorax requiring intervention was performed on radiographs prospectively. The primary outcomes were association of use of the generative model with radiologist documentation efficiency, assessed by difference in documentation time with and without model use using a linear mixed-effects model; for peer review of model-assisted reports, the difference in Likert-scale ratings using a cumulative-link mixed model; and for flagging pneumothorax requiring intervention, sensitivity and specificity. A total of 23 960 radiographs (11 980 each with and without model use) were used to analyze documentation efficiency. Interpretations with model assistance (mean [SE], 159.8 [27.0] seconds) were faster than the baseline set of those without (mean [SE], 189.2 [36.2] seconds) (P = .02), representing a 15.5% documentation efficiency increase. Peer review of 800 studies showed no difference in clinical accuracy (χ2 = 0.68; P = .41) or textual quality (χ2 = 3.62; P = .06) between model-assisted interpretations and nonmodel interpretations. Moreover, the model flagged studies containing a clinically significant, unexpected pneumothorax with a sensitivity of 72.7% and specificity of 99.9% among 97 651 studies screened. In this prospective cohort study of clinical use of a generative model for draft radiological reporting, model use was associated with improved radiologist documentation efficiency while maintaining clinical quality and demonstrated potential to detect studies containing a pneumothorax requiring immediate intervention. This study suggests the potential for radiologist and generative AI collaboration to improve clinical care delivery.
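The 15.5% efficiency figure is simple arithmetic on the reported mean interpretation times:

```python
# Documentation efficiency gain from the reported mean times per report.
baseline, assisted = 189.2, 159.8   # seconds, without vs. with model use
print(f"{(baseline - assisted) / baseline:.1%}")  # 15.5%
```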