
Early-stage lung cancer detection via thin-section low-dose CT reconstruction combined with AI in non-high risk populations: a large-scale real-world retrospective cohort study.

Ji G, Luo W, Zhu Y, Chen B, Wang M, Jiang L, Yang M, Song W, Yao P, Zheng T, Yu H, Zhang R, Wang C, Ding R, Zhuo X, Chen F, Li J, Tang X, Xian J, Song T, Tang J, Feng M, Shao J, Li W

PubMed · Jun 1, 2025
Current lung cancer screening guidelines recommend annual low-dose computed tomography (LDCT) for high-risk individuals. However, the effectiveness of LDCT in non-high-risk individuals remains inadequately explored. With the incidence of lung cancer steadily increasing among non-high-risk individuals, this study aims to assess the risk of lung cancer in non-high-risk individuals and evaluate the potential of thin-section LDCT reconstruction combined with artificial intelligence (LDCT-TRAI) as a screening tool. A real-world cohort study on lung cancer screening was conducted at the West China Hospital of Sichuan University from January 2010 to July 2021. Participants were screened using either LDCT-TRAI or traditional thick-section LDCT without AI (traditional LDCT). The AI system employed was the uAI-ChestCare software. Lung cancer diagnoses were confirmed through pathological examination. Among the 259,121 enrolled non-high-risk participants, 87,260 (33.7%) had positive screening results. Within 1 year, 728 (0.3%) participants were diagnosed with lung cancer, of whom 87.1% (634/728) were never-smokers, and 92.7% (675/728) presented with stage I disease. Compared with traditional LDCT, LDCT-TRAI demonstrated a higher lung cancer detection rate (0.3% vs. 0.2%, P < 0.001), particularly for stage I cancers (94.4% vs. 83.2%, P < 0.001), and was associated with improved survival outcomes (5-year overall survival rate: 95.4% vs. 81.3%, P < 0.0001). These findings highlight the importance of expanding lung cancer screening to non-high-risk populations, especially never-smokers. LDCT-TRAI outperformed traditional LDCT in detecting early-stage cancers and improving survival outcomes, underscoring its potential as a more effective screening tool for early lung cancer detection in this population.
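
The detection-rate comparison above (0.3% vs. 0.2%, P < 0.001) is a test of two proportions. A minimal sketch of how such a comparison could be run is shown below; the per-arm counts are placeholders, not the study's actual numbers.

```python
# Hedged sketch: comparing lung cancer detection rates between two screening arms
# with a chi-square test of independence. Counts are placeholders, since the
# abstract reports only pooled figures, not the per-arm split.
from scipy.stats import chi2_contingency

# rows: screening arm (LDCT-TRAI, traditional LDCT); columns: cancer detected / not detected
table = [
    [450, 150_000 - 450],   # hypothetical LDCT-TRAI arm
    [278, 109_121 - 278],   # hypothetical traditional-LDCT arm
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4g}")
```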

Review and reflections on live AI mammographic screen reading in a large UK NHS breast screening unit.

Puri S, Bagnall M, Erdelyi G

PubMed · Jun 1, 2025
The Radiology team from a large Breast Screening Unit in the UK, with a screening population of over 135,000, took part in a service evaluation project using artificial intelligence (AI) for reading breast screening mammograms. The aims were to evaluate the clinical benefit AI may provide when implemented as a silent reader in a double-reading breast screening programme, and to assess the feasibility and operational impact of deploying AI into the programme. The service was one of 14 breast screening sites in the UK to take part in this project, and we present our local experience with AI in breast screening. A commercially available AI platform was deployed and worked in real time as a 'silent third reader' so as not to impact standard workflows and patient care. All cases flagged by AI but not recalled by standard double reading (positive discordant cases) were reviewed, along with all cases recalled by human readers but not flagged by AI (negative discordant cases). 9,547 cases were included in the evaluation. 1,135 positive discordant cases were reviewed; one woman was recalled following these reviews and was not found to have cancer on further assessment in the breast assessment clinic. 139 negative discordant cases were reviewed; eight cancer cases (8.79% of total cancers detected in this period) recalled by human readers were not detected by AI. No additional cancers were detected by AI during the study. Performance of AI was inferior to that of human readers in our unit. Having missed a significant number of cancers, it was judged unreliable and not safe for use in clinical practice. AI is not currently of sufficient accuracy to be considered in the NHS Breast Screening Programme.
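
For readers unfamiliar with the discordant-case design used here, the sketch below shows how positive and negative discordant cases could be tallied from per-case flags; the field names are illustrative assumptions, not the unit's actual data model.

```python
# Hedged sketch: tallying discordant cases in a silent-reader AI evaluation.
# Field names (ai_flag, human_recall, cancer) are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Case:
    ai_flag: bool        # flagged by the AI silent reader
    human_recall: bool   # recalled by standard double reading
    cancer: bool         # cancer confirmed at assessment

def summarize(cases: list[Case]) -> dict[str, int]:
    pos_discordant = [c for c in cases if c.ai_flag and not c.human_recall]
    neg_discordant = [c for c in cases if c.human_recall and not c.ai_flag]
    return {
        "positive_discordant": len(pos_discordant),
        "negative_discordant": len(neg_discordant),
        "cancers_missed_by_ai": sum(c.cancer for c in neg_discordant),
    }
```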

Ultrasound-based radiomics and machine learning for enhanced diagnosis of knee osteoarthritis: Evaluation of diagnostic accuracy, sensitivity, specificity, and predictive value.

Kiso T, Okada Y, Kawata S, Shichiji K, Okumura E, Hatsumi N, Matsuura R, Kaminaga M, Kuwano H, Okumura E

PubMed · Jun 1, 2025
To evaluate the usefulness of radiomics features extracted from ultrasonographic images in diagnosing and predicting the severity of knee osteoarthritis (OA). In this single-center, prospective, observational study, radiomics features were extracted from standing radiographs and ultrasonographic images of knees of patients aged 40-85 years with primary medial OA and without OA. Analysis was conducted using LIFEx software (version 7.2.n), ANOVA, and LASSO regression. The diagnostic accuracy of three different models, including a statistical model incorporating background factors and machine learning models, was evaluated. Among 491 limbs analyzed, 318 were OA and 173 were non-OA cases. The mean age was 72.7 (±8.7) and 62.6 (±11.3) years in the OA and non-OA groups, respectively. The OA group included 81 (25.5%) men and 237 (74.5%) women, whereas the non-OA group included 73 (42.2%) men and 100 (57.8%) women. A statistical model using the cutoff value of MORPHOLOGICAL_SurfaceToVolumeRatio (IBSI:2PR5) achieved a specificity of 0.98 and sensitivity of 0.47. Machine learning diagnostic models (Model 2) demonstrated areas under the curve (AUCs) of 0.88 (discriminant analysis) and 0.87 (logistic regression), with sensitivities of 0.80 and 0.81 and specificities of 0.82 and 0.80, respectively. For severity prediction, the statistical model using MORPHOLOGICAL_SurfaceToVolumeRatio (IBSI:2PR5) showed sensitivity and specificity of 0.78 and 0.86, respectively, whereas the machine learning models achieved an AUC of 0.92, sensitivity of 0.81, and specificity of 0.85. The use of radiomics features in diagnosing knee OA shows potential as a supportive tool for enhancing clinicians' decision-making.
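
A minimal sketch of the LASSO-plus-classifier pipeline described above is shown below; the train/test split, hyperparameters, and variable names are assumptions rather than the authors' settings.

```python
# Hedged sketch: LASSO-based radiomics feature selection followed by logistic
# regression, evaluated by AUC. Assumes at least one feature survives selection.
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def fit_radiomics_model(X: np.ndarray, y: np.ndarray):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
    scaler = StandardScaler().fit(X_tr)
    X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

    lasso = LassoCV(cv=5, random_state=0).fit(X_tr, y_tr)   # feature selection
    keep = lasso.coef_ != 0

    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, keep], y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te[:, keep])[:, 1])
    return clf, keep, auc
```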

AI for fracture diagnosis in clinical practice: Four approaches to systematic AI-implementation and their impact on AI-effectiveness.

Loeffen DV, Zijta FM, Boymans TA, Wildberger JE, Nijssen EC

PubMed · Jun 1, 2025
Artificial Intelligence (AI) has been shown to enhance fracture-detection accuracy, but the most effective AI implementation in clinical practice is less well understood. In the current study, four approaches to AI implementation are evaluated for their impact on AI effectiveness. Retrospective single-center study based on all consecutive, around-the-clock radiographic examinations for suspected fractures, and accompanying clinical-practice radiologist diagnoses, between January and March 2023. These image sets were independently analysed by a dedicated bone-fracture-detection AI. Findings were combined with radiologist clinical-practice diagnoses to simulate the four AI-implementation methods deemed most relevant to clinical workflows: AI standalone (radiologist findings not consulted); AI problem-solving (AI findings consulted when radiologist in doubt); AI triage (radiologist findings consulted when AI in doubt); and AI safety net (AI findings consulted when radiologist diagnosis negative). Reference-standard diagnoses were established by two senior musculoskeletal radiologists (by consensus in cases of disagreement). Radiologist and radiologist + AI diagnoses were compared for false negatives (FN), false positives (FP), and their clinical consequences. Experience-level subgroups (radiologists-in-training, non-musculoskeletal radiologists, and dedicated musculoskeletal radiologists) were analysed separately. 1508 image sets were included (1227 unique patients; 40 radiologist readers). Radiologist results were: 2.7% FN (40/1508), 28 with clinical consequences; 1.2% FP (18/1508), of which 2 received full fracture treatment (11.1%). All AI-implementation methods changed overall FN and FP with statistical significance (p < 0.001): AI standalone 1.5% FN (23/1508; 11 consequences), 6.8% FP (103/1508); AI problem-solving 3.2% FN (48/1508; 31 consequences), 0.6% FP (9/1508); AI triage 2.1% FN (32/1508; 18 consequences), 1.7% FP (26/1508); AI safety net 0.07% FN (1/1508; 1 consequence), 7.6% FP (115/1508). Subgroups showed similar trends, except that AI triage increased FN for all subgroups except radiologists-in-training. Implementation methods have a large impact on AI effectiveness. These results suggest AI should not be considered for problem-solving or triage at this time; AI standalone performs better than either and may be a source of assistance where radiologists are unavailable. Best results were obtained implementing AI as a safety net, which eliminates missed fractures with serious clinical consequences; even though false positives are increased, unnecessary treatments are limited.
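
The four implementation methods are simple combination rules over the radiologist call and the AI call. A minimal sketch is below, with the "doubt" flags as illustrative stand-ins for whichever uncertainty criterion was actually applied.

```python
# Hedged sketch: the four AI-implementation rules expressed as functions that
# combine a radiologist call and an AI call into a final diagnosis.

def ai_standalone(rad_pos: bool, ai_pos: bool) -> bool:
    return ai_pos                              # radiologist findings not consulted

def ai_problem_solving(rad_pos: bool, rad_doubt: bool, ai_pos: bool) -> bool:
    return ai_pos if rad_doubt else rad_pos    # AI consulted when radiologist in doubt

def ai_triage(rad_pos: bool, ai_pos: bool, ai_doubt: bool) -> bool:
    return rad_pos if ai_doubt else ai_pos     # radiologist consulted when AI in doubt

def ai_safety_net(rad_pos: bool, ai_pos: bool) -> bool:
    return rad_pos or ai_pos                   # AI consulted when radiologist negative
```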

Impact of artificial intelligence assisted lesion detection on radiologists' interpretation at multiparametric prostate MRI.

Nakrour N, Cochran RL, Mercaldo ND, Bradley W, Tsai LL, Prajapati P, Grimm R, von Busch H, Lo WC, Harisinghani MG

PubMed · Jun 1, 2025
To compare prostate cancer lesion detection using conventional and artificial intelligence (AI)-assisted image interpretation at multiparametric MRI (mpMRI). A retrospective study of 53 consecutive patients who underwent prostate mpMRI and subsequent prostate tissue sampling was performed. Two board-certified radiologists (with 4 and 12 years of experience) blinded to the clinical information interpreted anonymized exams using the PI-RADS v2.1 framework without and with an AI-assistance tool. The AI software tool provided radiologists with gland segmentation and automated lesion detection, assigning a probability score for the likelihood of the presence of clinically significant prostate cancer (csPCa). The reference standard for all cases was the prostate pathology from systematic and targeted biopsies. Statistical analyses assessed interrater agreement and compared diagnostic performances with and without AI assistance. Within the entire cohort, 42 patients (79%) harbored Gleason-positive disease, with 25 patients (47%) having csPCa. Radiologists' diagnostic performance for csPCa was significantly improved over conventional interpretation with AI assistance (reader A: AUC 0.82 vs. 0.72, p = 0.03; reader B: AUC 0.78 vs. 0.69, p = 0.03). Without AI assistance, 81% (n = 36; 95% CI: 0.89-0.91) of the lesions were scored similarly by radiologists for lesion-level characteristics, and with AI assistance, 59% (26, 0.82-0.89) of the lesions were scored similarly. For reader A, there was a significant difference in PI-RADS scores (p = 0.02) between AI-assisted and non-assisted assessments. Significant differences were not detected for reader B. AI-assisted prostate mpMRI interpretation improved radiologist diagnostic performance over conventional interpretation, independent of reader experience.
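
A minimal sketch of a reader-level AUC comparison (with versus without AI assistance) is shown below; it uses a paired bootstrap as an illustrative test, which may differ from the study's actual statistical method.

```python
# Hedged sketch: paired bootstrap comparison of reader AUCs for csPCa, using
# PI-RADS-style scores against biopsy ground truth. Illustrative only.
import numpy as np
from sklearn.metrics import roc_auc_score

def paired_bootstrap_auc(y_true, scores_without_ai, scores_with_ai, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    y = np.asarray(y_true)
    a, b = np.asarray(scores_without_ai), np.asarray(scores_with_ai)
    deltas = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if len(np.unique(y[idx])) < 2:      # resample must contain both classes
            continue
        deltas.append(roc_auc_score(y[idx], b[idx]) - roc_auc_score(y[idx], a[idx]))
    deltas = np.asarray(deltas)
    p = 2 * min((deltas <= 0).mean(), (deltas >= 0).mean())
    return deltas.mean(), p
```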

Deep learning reconstruction improves computer-aided pulmonary nodule detection and measurement accuracy for ultra-low-dose chest CT.

Wang J, Zhu Z, Pan Z, Tan W, Han W, Zhou Z, Hu G, Ma Z, Xu Y, Ying Z, Sui X, Jin Z, Song L, Song W

PubMed · May 30, 2025
To compare the image quality, pulmonary nodule detectability, and measurement accuracy between deep learning reconstruction (DLR) and hybrid iterative reconstruction (HIR) of chest ultra-low-dose CT (ULDCT). Participants who underwent chest standard-dose CT (SDCT) followed by ULDCT from October 2020 to January 2022 were prospectively included. ULDCT images reconstructed with HIR and DLR were compared with SDCT images to evaluate image quality, nodule detection rate, and measurement accuracy using a commercially available deep learning-based nodule evaluation system. The Wilcoxon signed-rank test was used to evaluate the percentage errors of nodule size and nodule volume between HIR and DLR images. Eighty-four participants (54 ± 13 years; 26 men) were enrolled. The effective radiation doses of ULDCT and SDCT were 0.16 ± 0.02 mSv and 1.77 ± 0.67 mSv, respectively (P < 0.001). The mean ± standard deviation of lung tissue noise was 61.4 ± 3.0 HU for SDCT, and 61.5 ± 2.8 HU and 55.1 ± 3.4 HU for ULDCT reconstructed with the HIR-Strong setting (HIR-Str) and DLR-Strong setting (DLR-Str), respectively (P < 0.001). A total of 535 nodules were detected. The nodule detection rates of ULDCT HIR-Str and ULDCT DLR-Str were 74.0% and 83.4%, respectively (P < 0.001). The absolute percentage error in nodule volume from that of SDCT was 19.5% in ULDCT HIR-Str versus 17.9% in ULDCT DLR-Str (P < 0.001). Compared with HIR, DLR reduced image noise, increased the nodule detection rate, and improved measurement accuracy of nodule volume at chest ULDCT.
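
A minimal sketch of the paired volume-error comparison described above is given below; array names are illustrative, and the error definition assumes the SDCT measurement as the reference.

```python
# Hedged sketch: Wilcoxon signed-rank test on per-nodule absolute percentage
# errors of volume for HIR versus DLR, each measured against the SDCT reference.
import numpy as np
from scipy.stats import wilcoxon

def compare_volume_errors(vol_sdct, vol_hir, vol_dlr):
    vol_sdct, vol_hir, vol_dlr = map(np.asarray, (vol_sdct, vol_hir, vol_dlr))
    err_hir = np.abs(vol_hir - vol_sdct) / vol_sdct * 100   # absolute % error, HIR
    err_dlr = np.abs(vol_dlr - vol_sdct) / vol_sdct * 100   # absolute % error, DLR
    stat, p = wilcoxon(err_hir, err_dlr)
    return err_hir.mean(), err_dlr.mean(), p
```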

CCTA-Derived coronary plaque burden offers enhanced prognostic value over CAC scoring in suspected CAD patients.

Dahdal J, Jukema RA, Maaniitty T, Nurmohamed NS, Raijmakers PG, Hoek R, Driessen RS, Twisk JWR, Bär S, Planken RN, van Royen N, Nijveldt R, Bax JJ, Saraste A, van Rosendael AR, Knaapen P, Knuuti J, Danad I

PubMed · May 30, 2025
To assess the prognostic utility of coronary artery calcium (CAC) scoring and coronary computed tomography angiography (CCTA)-derived quantitative plaque metrics for predicting adverse cardiovascular outcomes. The study enrolled 2404 patients with suspected coronary artery disease (CAD) but without a prior history of CAD. All participants underwent CAC scoring and CCTA, with plaque metrics quantified using an artificial intelligence (AI)-based tool (Cleerly, Inc). Percent atheroma volume (PAV) and non-calcified plaque volume percentage (NCPV%), reflecting total plaque burden and the proportion of non-calcified plaque volume normalized to vessel volume, were evaluated. The primary endpoint was a composite of all-cause mortality and non-fatal myocardial infarction (MI). Cox proportional hazard models, adjusted for clinical risk factors and early revascularization, were employed for analysis. During a median follow-up of 7.0 years, 208 patients (8.7%) experienced the primary endpoint, including 73 cases of MI (3%). The model incorporating PAV demonstrated superior discriminatory power for the composite endpoint (AUC = 0.729) compared to CAC scoring (AUC = 0.706, P = 0.016). In MI prediction, PAV (AUC = 0.791) significantly outperformed CAC (AUC = 0.699, P < 0.001), with NCPV% showing the highest prognostic accuracy (AUC = 0.814, P < 0.001). AI-driven assessment of coronary plaque burden enhances prognostic accuracy for future adverse cardiovascular events, highlighting the critical role of comprehensive plaque characterization in refining risk stratification strategies.
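
A minimal sketch of the adjusted survival model described above is shown below using the lifelines library; the column names are illustrative, not the study's actual variables.

```python
# Hedged sketch: Cox proportional hazards model for the composite endpoint with
# plaque burden (PAV) adjusted for clinical covariates and early revascularization.
import pandas as pd
from lifelines import CoxPHFitter

def fit_cox(df: pd.DataFrame) -> CoxPHFitter:
    # assumed columns: follow_up_years, event (1 = death or MI), pav,
    # age, male, diabetes, hypertension, smoking, early_revasc
    cph = CoxPHFitter()
    cph.fit(df, duration_col="follow_up_years", event_col="event")
    return cph

# usage (assumed dataframe):
# cph = fit_cox(cohort_df)
# cph.print_summary()
```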

Real-time brain tumor detection in intraoperative ultrasound: From model training to deployment in the operating room.

Cepeda S, Esteban-Sinovas O, Romero R, Singh V, Shett P, Moiyadi A, Zemmoura I, Giammalva GR, Del Bene M, Barbotti A, DiMeco F, West TR, Nahed BV, Arrese I, Hornero R, Sarabia R

PubMed · May 30, 2025
Intraoperative ultrasound (ioUS) is a valuable tool in brain tumor surgery due to its versatility, affordability, and seamless integration into the surgical workflow. However, its adoption remains limited, primarily because of the challenges associated with image interpretation and the steep learning curve required for effective use. This study aimed to enhance the interpretability of ioUS images by developing a real-time brain tumor detection system deployable in the operating room. We collected 2D ioUS images from the BraTioUS and ReMIND datasets, annotated with expert-refined tumor labels. Using the YOLO11 architecture and its variants, we trained object detection models to identify brain tumors. The dataset included 1732 images from 192 patients, divided into training, validation, and test sets. Data augmentation expanded the training set to 11,570 images. In the test dataset, YOLO11s achieved the best balance of precision and computational efficiency, with a mAP@50 of 0.95, mAP@50-95 of 0.65, and a processing speed of 34.16 frames per second. The proposed solution was prospectively validated in a cohort of 20 consecutively operated patients diagnosed with brain tumors. Neurosurgeons confirmed its seamless integration into the surgical workflow, with real-time predictions accurately delineating tumor regions. These findings highlight the potential of real-time object detection algorithms to enhance ioUS-guided brain tumor surgery, addressing key challenges in interpretation and providing a foundation for future development of computer vision-based tools for neuro-oncological surgery.
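
A minimal sketch of training and deploying a YOLO11s detector on 2D ioUS frames with the Ultralytics API is shown below; the dataset path, hyperparameters, and confidence threshold are placeholders, not the authors' configuration.

```python
# Hedged sketch: train a YOLO11s detector on intraoperative ultrasound frames and
# run streaming inference, as a stand-in for the real-time setup described above.
from ultralytics import YOLO

model = YOLO("yolo11s.pt")                                     # pretrained small variant
model.train(data="ious_tumor.yaml", epochs=100, imgsz=640)     # hypothetical dataset config

# streaming inference over a folder of frames (or a video source)
for result in model.predict(source="ious_frames/", stream=True, conf=0.25):
    boxes = result.boxes.xyxy                                  # predicted tumor bounding boxes
```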

Diagnostic Efficiency of an Artificial Intelligence-Based Technology in Dental Radiography.

Obrubov AA, Solovykh EA, Nadtochiy AG

PubMed · May 30, 2025
We present the results of the development of Dentomo, an artificial intelligence model based on two neural networks. The model includes a database and a knowledge base harmonized with SNOMED CT, which allow processing and interpreting the results of cone beam computed tomography (CBCT) scans of the dental system, in particular identifying and classifying teeth and detecting CT signs of pathology and previous treatment. Based on these data, the artificial intelligence can draw conclusions and generate medical reports, systematize the data, and learn from the results. The diagnostic effectiveness of Dentomo was evaluated. The first results of the study demonstrate that the model is a valuable tool for analyzing CBCT scans in clinical practice and optimizing the dentist's workflow.

Deep Learning CAIPIRINHA-VIBE Improves and Accelerates Head and Neck MRI.

Nitschke LV, Lerchbaumer M, Ulas T, Deppe D, Nickel D, Geisel D, Kubicka F, Wagner M, Walter-Rittel T

PubMed · May 29, 2025
The aim of this study was to evaluate image quality for contrast-enhanced (CE) neck MRI with a deep learning-reconstructed VIBE sequence with acceleration factors (AF) 4 (DL4-VIBE) and 6 (DL6-VIBE). Patients referred for neck MRI were examined in a 3-Tesla scanner in this prospective, single-center study. Four CE fat-saturated (FS) VIBE sequences were acquired in each patient: Star-VIBE (4:01 min), VIBE (2:05 min), DL4-VIBE (0:24 min), DL6-VIBE (0:17 min). Image quality was evaluated by three radiologists with a 5-point Likert scale and included overall image quality, muscle contour delineation, conspicuity of mucosa and pharyngeal musculature, FS uniformity, and motion artifacts. Objective image quality was assessed with signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and quantification of metal artifacts. 68 patients (60.3% male; mean age 57.4±16 years) were included in this study. DL4-VIBE was superior for overall image quality, delineation of muscle contours, differentiation of mucosa and pharyngeal musculature, vascular delineation, and motion artifacts. Notably, DL4-VIBE exhibited exceptional FS uniformity (p<0.001). SNR and CNR were superior for DL4-VIBE compared to all other sequences (p<0.001). Metal artifacts were least pronounced in the standard VIBE, followed by DL4-VIBE (p<0.001). Although DL6-VIBE was inferior to DL4-VIBE, it demonstrated improved FS homogeneity, delineation of pharyngeal mucosa, and CNR compared to Star-VIBE and VIBE. DL4-VIBE significantly improves image quality for CE neck MRI with a fraction of the scan time of conventional sequences.
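
The objective metrics mentioned above (SNR and CNR) are simple region-of-interest statistics. A minimal sketch is below, using a common mean/standard-deviation convention that may differ from the study's exact definitions.

```python
# Hedged sketch: SNR and CNR from region-of-interest statistics.
import numpy as np

def snr(roi_tissue: np.ndarray, roi_background: np.ndarray) -> float:
    return float(roi_tissue.mean() / roi_background.std())

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, roi_background: np.ndarray) -> float:
    return float(abs(roi_a.mean() - roi_b.mean()) / roi_background.std())
```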