Page 14 of 45441 results

Automated vertebral bone quality score measurement on lumbar MRI using deep learning: Development and validation of an AI algorithm.

Jayasuriya NM, Feng E, Nathani KR, Delawan M, Katsos K, Bhagra O, Freedman BA, Bydon M

PubMed · Aug 5, 2025
Bone health is a critical determinant of spine surgery outcomes, yet many patients undergo procedures without adequate preoperative assessment due to limitations in current bone quality assessment methods. This study aimed to develop and validate an artificial intelligence-based algorithm that predicts vertebral bone quality (VBQ) scores from routine MRI scans, enabling improved preoperative identification of patients at risk for poor surgical outcomes. The study utilized 257 lumbar spine T1-weighted MRI scans from the SPIDER challenge dataset. VBQ scores were calculated through a three-step process: selecting the mid-sagittal slice, measuring vertebral body signal intensity from L1-L4, and normalizing by cerebrospinal fluid signal intensity. A YOLOv8 model was developed to automate region-of-interest placement and VBQ score calculation. The system was validated against manual annotations from 47 lumbar spine surgery patients, with performance evaluated using precision, recall, mean average precision, intraclass correlation coefficient, Pearson correlation, RMSE, and mean error. The YOLOv8 model demonstrated high accuracy in vertebral body detection (precision: 0.9429, recall: 0.9076, mAP@0.5: 0.9403, mAP@[0.5:0.95]: 0.8288). Strong interrater reliability was observed, with ICC values of 0.95 (human-human) and 0.88 and 0.93 (human-AI). Pearson correlations for VBQ scores between human and AI measurements were 0.86 and 0.90, with RMSE values of 0.58 and 0.42, respectively. The AI-based algorithm accurately predicts VBQ scores from routine lumbar MRIs. This approach has the potential to enhance early identification and intervention for patients with poor bone health, leading to improved surgical outcomes. Further external validation is recommended to ensure generalizability and clinical applicability.
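The scoring step in the three-step process above reduces to a simple normalization. A minimal sketch, assuming the usual VBQ convention of a median over the L1-L4 signal intensities; the ROI intensity values are hypothetical:

```python
import statistics

def vbq_score(vertebral_si, csf_si):
    """VBQ = median L1-L4 vertebral body signal intensity,
    normalized by the cerebrospinal fluid signal intensity."""
    if len(vertebral_si) != 4:
        raise ValueError("expected signal intensities for L1-L4")
    return statistics.median(vertebral_si) / csf_si

# Hypothetical ROI medians from a mid-sagittal T1-weighted slice
score = vbq_score([310.0, 322.0, 305.0, 298.0], csf_si=120.0)
```

In the pipeline described above, the ROI intensities would come from the YOLOv8-detected vertebral bodies rather than being supplied by hand.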

Scaling Artificial Intelligence for Prostate Cancer Detection on MRI towards Population-Based Screening and Primary Diagnosis in a Global, Multiethnic Population (Study Protocol)

Anindo Saha, Joeran S. Bosma, Jasper J. Twilt, Alexander B. C. D. Ng, Aqua Asif, Kirti Magudia, Peder Larson, Qinglin Xie, Xiaodong Zhang, Chi Pham Minh, Samuel N. Gitau, Ivo G. Schoots, Martijn F. Boomsma, Renato Cuocolo, Nikolaos Papanikolaou, Daniele Regge, Derya Yakar, Mattijs Elschot, Jeroen Veltman, Baris Turkbey, Nancy A. Obuchowski, Jurgen J. Fütterer, Anwar R. Padhani, Hashim U. Ahmed, Tobias Nordström, Martin Eklund, Veeru Kasivisvanathan, Maarten de Rooij, Henkjan Huisman

arXiv preprint · Aug 4, 2025
In this intercontinental, confirmatory study, we include a retrospective cohort of 22,481 MRI examinations (21,288 patients; 46 cities in 22 countries) to train and externally validate the PI-CAI-2B model, i.e., an efficient, next-generation iteration of the state-of-the-art AI system developed for detecting Gleason grade group ≥2 prostate cancer on MRI during the PI-CAI study. Of these examinations, 20,471 cases (19,278 patients; 26 cities in 14 countries) from two EU Horizon projects (ProCAncer-I, COMFORT) and 12 independent centers based in Europe, North America, Asia, and Africa are used for training and internal testing. Additionally, 2010 cases (2010 patients; 20 external cities in 12 countries) from population-based screening (STHLM3-MRI, IP1-PROSTAGRAM trials) and primary diagnostic settings (PRIME trial) based in Europe, North and South America, Asia, and Australia are used for external testing. The primary endpoint is the proportion of AI-based assessments in agreement with the standard-of-care diagnoses (i.e., clinical assessments made by expert uropathologists on histopathology, if available, or by at least two expert urogenital radiologists in consensus, with access to patient history and peer consultation) in the detection of Gleason grade group ≥2 prostate cancer within the external testing cohorts. Our statistical analysis plan is prespecified with a hypothesis of diagnostic interchangeability to the standard of care at the PI-RADS ≥3 (primary diagnosis) or ≥4 (screening) cut-off, considering an absolute margin of 0.05 and reader estimates derived from the PI-CAI observer study (62 radiologists reading 400 cases). Secondary measures comprise the area under the receiver operating characteristic curve (AUROC) of the AI system stratified by imaging quality, patient age, and patient ethnicity to identify underlying biases (if any).
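The interchangeability hypothesis above can be illustrated with a point-estimate sketch. This is a deliberate simplification: the prespecified analysis compares confidence intervals against the margin, not raw proportions, and the numbers here are hypothetical:

```python
def interchangeable(agreement_ai, agreement_reader, margin=0.05):
    """Simplified check of the prespecified hypothesis: AI is deemed
    interchangeable with the standard of care if its agreement proportion
    falls within an absolute margin of the reader-derived estimate."""
    return agreement_ai >= agreement_reader - margin

# Hypothetical agreement proportions with the standard-of-care diagnosis
ok = interchangeable(agreement_ai=0.91, agreement_reader=0.94)
```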

Diagnostic Performance of Imaging-Based Artificial Intelligence Models for Preoperative Detection of Cervical Lymph Node Metastasis in Clinically Node-Negative Papillary Thyroid Carcinoma: A Systematic Review and Meta-Analysis.

Li B, Cheng G, Mo Y, Dai J, Cheng S, Gong S, Li H, Liu Y

PubMed · Aug 4, 2025
This systematic review and meta-analysis evaluated the performance of imaging-based artificial intelligence (AI) models in diagnosing preoperative cervical lymph node metastasis (LNM) in clinically node-negative (cN0) papillary thyroid carcinoma (PTC). We conducted a literature search in PubMed, Embase, and Web of Science through February 25, 2025, selecting studies that focused on imaging-based AI models for predicting cervical LNM in cN0 PTC. Diagnostic performance metrics were analyzed using a bivariate random-effects model, and study quality was assessed with the QUADAS-2 tool. From 671 articles, 11 studies involving 3366 patients were included. Ultrasound (US)-based AI models showed a pooled sensitivity of 0.79 and specificity of 0.82, significantly higher than radiologists (p < 0.001). CT-based AI models demonstrated a sensitivity of 0.78 and specificity of 0.89. Imaging-based AI models, particularly US-based models, show promising diagnostic performance; further multicenter prospective studies are needed for validation. PROSPERO registration: CRD420251063416.
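The bivariate random-effects model used above pools sensitivity and specificity jointly, accounting for their correlation. As a rough illustration only, a univariate inverse-variance pooling on the logit scale (a much simpler approximation, with hypothetical study proportions and sizes) looks like:

```python
import math

def pool_logit(proportions, sizes):
    """Inverse-variance-weighted mean of logit-transformed proportions,
    back-transformed to the probability scale. A univariate sketch, not
    the bivariate random-effects model used in the meta-analysis."""
    weights, logits = [], []
    for p, n in zip(proportions, sizes):
        var = 1.0 / (n * p * (1 - p))        # approximate variance of logit(p)
        weights.append(1.0 / var)
        logits.append(math.log(p / (1 - p)))
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled))   # inverse logit

# Hypothetical per-study sensitivities and sample sizes
pooled_sens = pool_logit([0.75, 0.82, 0.80], [120, 340, 95])
```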

The Use of Artificial Intelligence to Improve Detection of Acute Incidental Pulmonary Emboli.

Kuzo RS, Levin DL, Bratt AK, Walkoff LA, Suman G, Houghton DE

PubMed · Aug 4, 2025
Incidental pulmonary emboli (IPE) are frequently overlooked by radiologists. Artificial intelligence (AI) algorithms have been developed to aid detection of pulmonary emboli. The aim was to measure the diagnostic performance of AI compared with prospective interpretation by radiologists. A commercially available AI algorithm was used to retrospectively review 14,453 contrast-enhanced outpatient CT chest-abdomen-pelvis (CAP) exams in 9171 patients in whom PE was not clinically suspected. Natural language processing (NLP) searches of reports identified IPE detected prospectively. Thoracic radiologists reviewed all cases read as positive by AI or NLP to confirm IPE and assess the most proximal level of clot and overall clot burden. A further 1,400 cases read as negative by both the initial radiologist and AI were re-reviewed for additional IPE. Radiologists prospectively detected 218 IPE, and AI detected an additional 36 unreported cases. AI missed 30 cases of IPE detected by the radiologist and had 94 false positives. For the 36 IPE missed by the radiologist, median clot burden was 1, and 19 were solitary segmental or subsegmental. For the 30 IPE missed by AI, one case had large central emboli; the others were small, with 23 solitary subsegmental emboli. Radiologist re-review of the 1,400 negative exams found 8 additional cases of IPE. Compared with radiologists, AI had similar sensitivity but reduced positive predictive value. Our experience indicates that the AI tool is not ready to be used autonomously without human oversight, but a human observer plus AI is better than either alone for detection of incidental pulmonary emboli.
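The reported counts let one reproduce the headline comparison. A back-of-the-envelope sketch derived from the numbers above (218 + 36 = 254 confirmed IPE, ignoring the 8 found only on re-review; AI missed 30 of the 254, so AI true positives = 224, against 94 false positives):

```python
def sens_ppv(tp, fn, fp):
    """Sensitivity = TP / (TP + FN); PPV = TP / (TP + FP)."""
    return tp / (tp + fn), tp / (tp + fp)

# Counts from the study as tallied in the lead-in above
ai_sens, ai_ppv = sens_ppv(tp=224, fn=30, fp=94)
```

This yields an AI sensitivity of roughly 0.88 against a PPV of roughly 0.70, consistent with the conclusion of similar sensitivity but reduced positive predictive value.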

Automated detection of lacunes in brain MR images using SAM with robust prompts using self-distillation and anatomy-informed priors.

Deepika P, Shanker G, Narayanan R, Sundaresan V

PubMed · Aug 4, 2025
Lacunes, small fluid-filled cavities in the brain, are signs of cerebral small vessel disease and have been clinically associated with various neurodegenerative and cerebrovascular diseases. Accurate detection of lacunes is therefore crucial and is one of the initial steps in the precise diagnosis of these diseases. However, developing a robust and consistently reliable detection method is challenging because of the heterogeneity in their appearance, contrast, shape, and size. In this study, we propose a lacune detection method using the Segment Anything Model (SAM), guided by point prompts from a candidate prompt generator. The prompt generator initially detects potential lacunes with high sensitivity using a composite loss function. True lacunes are then selected using SAM by discriminating their characteristics from mimics such as sulci and enlarged perivascular spaces, imitating the clinicians' strategy of examining potential lacunes along all three axes. False positives are further reduced by adaptive thresholds based on the region-wise prevalence of lacunes. We evaluated our method on two diverse, multi-centric MRI datasets, VALDO and ISLES, comprising only FLAIR sequences. Despite diverse imaging conditions and significant variations in slice thickness (0.5-6 mm), our method achieved sensitivities of 84% and 92%, with average false-positive rates of 0.05 and 0.06 per slice, in the ISLES and VALDO datasets, respectively. The proposed method demonstrates robust performance across varied imaging conditions and outperforms state-of-the-art methods, demonstrating its effectiveness in lacune detection and quantification.
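The final false-positive reduction step can be sketched as a per-region confidence filter. The region names, scores, and threshold values below are hypothetical illustrations of the region-wise adaptive thresholding idea, not the paper's actual parameters:

```python
def filter_candidates(candidates, region_thresholds, default=0.5):
    """Keep candidate lacunes whose confidence meets the threshold for
    their brain region; regions where lacunes are prevalent get a lower
    (more permissive) threshold than regions dominated by mimics."""
    return [
        c for c in candidates
        if c["score"] >= region_thresholds.get(c["region"], default)
    ]

# Hypothetical candidates and region-wise thresholds
cands = [
    {"region": "basal_ganglia", "score": 0.62},
    {"region": "centrum_semiovale", "score": 0.55},
]
kept = filter_candidates(cands, {"basal_ganglia": 0.4, "centrum_semiovale": 0.7})
```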

Artificial intelligence: a new era in prostate cancer diagnosis and treatment.

Vidiyala N, Parupathi P, Sunkishala P, Sree C, Gujja A, Kanagala P, Meduri SK, Nyavanandi D

PubMed · Aug 4, 2025
Prostate cancer (PCa) represents one of the most prevalent cancers among men, with substantial challenges in timely and accurate diagnosis and subsequent treatment. Traditional diagnosis and treatment methods for PCa, such as prostate-specific antigen (PSA) biomarker detection, digital rectal examination, imaging (CT/MRI) analysis, and biopsy histopathological examination, suffer from limitations such as a lack of specificity, generation of false positives or negatives, and difficulty in handling large data, leading to overdiagnosis and overtreatment. The integration of artificial intelligence (AI) in PCa diagnosis and treatment is revolutionizing traditional approaches by offering advanced tools for early detection, personalized treatment planning, and patient management. AI technologies, especially machine learning and deep learning, improve diagnostic accuracy and treatment planning. The AI algorithms analyze imaging data, like MRI and ultrasound, to identify cancerous lesions effectively with great precision. In addition, AI algorithms enhance risk assessment and prognosis by combining clinical, genomic, and imaging data. This leads to more tailored treatment strategies, enabling informed decisions about active surveillance, surgery, or new therapies, thereby improving quality of life while reducing unnecessary diagnoses and treatments. This review examines current AI applications in PCa care, focusing on their transformative impact on diagnosis and treatment planning while recognizing potential challenges. It also outlines expected improvements in diagnosis through AI-integrated systems and decision support tools for healthcare teams. The findings highlight AI's potential to enhance clinical outcomes, operational efficiency, and patient-centred care in managing PCa.

Vessel-specific reliability of artificial intelligence-based coronary artery calcium scoring on non-ECG-gated chest CT: a comparative study with ECG-gated cardiac CT.

Zhang J, Liu K, You C, Gong J

PubMed · Aug 4, 2025
To evaluate the performance of artificial intelligence (AI)-based coronary artery calcium scoring (CACS) on non-electrocardiogram (ECG)-gated chest CT, using manual quantification as the reference standard, while characterizing per-vessel reliability and the impact on clinical risk classification. This retrospective study included 290 patients (June 2023-2024) with paired non-ECG-gated chest CT and ECG-gated cardiac CT (median interval, 2 days). AI-based CACS and manual CACS (CACS_man) were compared using the intraclass correlation coefficient (ICC(3,1)) and weighted Cohen's kappa. Error types, anatomical distributions, and CACS of lesions in individual arteries or segments were assessed in accordance with the Society of Cardiovascular Computed Tomography (SCCT) guidelines. Total CACS on chest CT demonstrated excellent concordance with CACS_man (ICC = 0.87, 95% CI 0.84-0.90). Non-ECG-gated chest CT showed a 7.5-fold higher risk misclassification rate than ECG-gated cardiac CT (41.4% vs. 5.5%), with 35.5% overclassification and 5.9% underclassification. Vessel-specific analysis revealed paradoxical reliability of the left anterior descending artery (LAD) due to stent misclassification in four cases (ICC = 0.93 on chest CT vs. 0.82 on cardiac CT), while the right coronary artery (RCA) demonstrated suboptimal performance, with ICCs ranging from 0.60 to 0.68. Chest CT exhibited higher false-positive (1.9% vs. 0.5%) and false-negative rates (14.4% vs. 4.3%). False positives mainly derived from image noise in the proximal LAD/RCA (median CACS 5.97 vs. 3.45) and anatomical errors, while false negatives involved RCA microcalcifications (median CACS 2.64). AI-based non-ECG-gated chest CT demonstrates utility for opportunistic screening but requires protocol optimization to address vessel-specific limitations and mitigate the 41.4% risk misclassification rate.
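The risk misclassification rate above follows directly from binning paired Agatston scores into categories and counting disagreements. A sketch using one common four-bin convention (0, 1-99, 100-399, ≥400); the bin edges and paired scores here are illustrative assumptions, not taken from the study:

```python
def cac_category(agatston):
    """Map an Agatston score to a risk category (one common binning)."""
    if agatston == 0:
        return "none"
    if agatston < 100:
        return "mild"
    if agatston < 400:
        return "moderate"
    return "severe"

def misclassification_rate(ai_scores, manual_scores):
    """Fraction of cases whose AI-derived category differs from the
    manual reference category."""
    wrong = sum(
        cac_category(a) != cac_category(m)
        for a, m in zip(ai_scores, manual_scores)
    )
    return wrong / len(ai_scores)

# Hypothetical paired AI vs. manual scores
rate = misclassification_rate([0, 120, 95, 410], [0, 130, 105, 380])
```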

Deep learning-driven incidental detection of vertebral fractures in cancer patients: advancing diagnostic precision and clinical management.

Mniai EM, Laletin V, Tselikas L, Assi T, Bonnet B, Camez AO, Zemmouri A, Muller S, Moussa T, Chaibi Y, Kiewsky J, Quenet S, Avare C, Lassau N, Balleyguier C, Ayobi A, Ammari S

PubMed · Aug 2, 2025
Vertebral compression fractures (VCFs) are the most prevalent skeletal manifestation of osteoporosis in cancer patients, yet they are frequently missed or not reported in routine clinical radiology, adversely impacting patient outcomes and quality of life. This study evaluates the diagnostic performance of a deep-learning (DL)-based application and its potential to reduce the miss rate of incidental VCFs in a high-risk cancer population. We retrospectively analysed thoraco-abdomino-pelvic (TAP) CT scans from 1556 patients with stage IV cancer collected consecutively over a 4-month period (September-December 2023) in a tertiary cancer center. A DL-based application flagged cases positive for VCFs, which were subsequently reviewed by two expert radiologists for validation. Additionally, grade 3 fractures identified by the application were independently assessed by two expert interventional radiologists to determine their eligibility for vertebroplasty. Of the 1556 cases, 501 were flagged as positive for VCF by the application, with 436 confirmed as true positives by expert review, yielding a positive predictive value (PPV) of 87%. Common causes of false positives included sclerotic vertebral metastases, scoliosis, and vertebrae misidentification. Notably, 83.5% (364/436) of true-positive VCFs were absent from radiology reports, indicating a substantial non-report rate in routine practice. Ten grade 3 fractures were overlooked or not reported by radiologists; nine of these were deemed suitable for vertebroplasty by expert interventional radiologists. This study underscores the potential of DL-based applications to improve the detection of VCFs. The analysed tool can assist radiologists in detecting more incidental vertebral fractures in adult cancer patients, optimising timely treatment and reducing associated morbidity and economic burden, and it might enhance patient access to interventional treatments such as vertebroplasty. These findings highlight the transformative role that DL can play in optimising clinical management and outcomes for osteoporosis-related VCFs in cancer patients.

AI enhanced diagnostic accuracy and workload reduction in hepatocellular carcinoma screening.

Lu RF, She CY, He DN, Cheng MQ, Wang Y, Huang H, Lin YD, Lv JY, Qin S, Liu ZZ, Lu ZR, Ke WP, Li CQ, Xiao H, Xu ZF, Liu GJ, Yang H, Ren J, Wang HB, Lu MD, Huang QH, Chen LD, Wang W, Kuang M

PubMed · Aug 2, 2025
Hepatocellular carcinoma (HCC) ultrasound screening faces challenges in accuracy and radiologist workload. This retrospective, multicenter study assessed four artificial intelligence (AI)-enhanced strategies using 21,934 liver ultrasound images from 11,960 patients to improve HCC screening accuracy and reduce radiologist workload. UniMatch was used for lesion detection and LivNet for classification, trained on 17,913 images. Among the strategies tested, Strategy 4, which combined AI for initial detection with radiologist evaluation of negative cases in both the detection and classification phases, outperformed the others. It nearly matched the high sensitivity of the original algorithm (0.956 vs. 0.991) while improving specificity (0.787 vs. 0.698), reducing radiologist workload by 54.5%, and decreasing both recall and false-positive rates. This approach demonstrates a successful model of human-AI collaboration, enhancing clinical outcomes while mitigating unnecessary patient anxiety and system burden by minimizing recalls and false positives.
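The triage logic of the winning strategy (AI reads every case; the radiologist re-reads only AI-negative ones) can be sketched as follows. The callables and case values are hypothetical stand-ins for the actual detection and classification models:

```python
def strategy4(cases, ai_positive, radiologist_positive):
    """AI screens all cases; AI-positive cases are recalled directly,
    and only AI-negative cases receive a radiologist re-read.
    Returns the final call per case and the radiologist's read count."""
    results, radiologist_reads = {}, 0
    for case_id, image in cases.items():
        if ai_positive(image):
            results[case_id] = "recall"
        else:
            radiologist_reads += 1
            positive = radiologist_positive(image)
            results[case_id] = "recall" if positive else "negative"
    return results, radiologist_reads

# Hypothetical cases with scalar "images" and threshold-based readers
cases = {"a": 0.9, "b": 0.2, "c": 0.1}
results, reads = strategy4(cases, lambda x: x > 0.5, lambda x: x > 0.15)
```

The workload reduction reported above corresponds to the fraction of cases the radiologist never has to read because the AI already flagged them.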

High-grade glioma: combined use of 5-aminolevulinic acid and intraoperative ultrasound for resection and a predictor algorithm for detection.

Aibar-Durán JÁ, Mirapeix RM, Gallardo Alcañiz A, Salgado-López L, Freixer-Palau B, Casitas Hernando V, Hernández FM, de Quintana-Schmidt C

PubMed · Aug 1, 2025
The primary goal in neuro-oncology is the maximally safe resection of high-grade glioma (HGG). A more extensive resection improves both overall and disease-free survival, while a complication-free surgery enables better tolerance of adjuvant therapies such as chemotherapy and radiotherapy. Techniques such as 5-aminolevulinic acid (5-ALA) fluorescence and intraoperative ultrasound (ioUS) are valuable for safe resection and are cost-effective. However, the benefits of combining these techniques remain undocumented. The aim of this study was to investigate outcomes when combining 5-ALA and ioUS. From January 2019 to January 2024, 72 patients (mean age 62.2 years, 62.5% male) underwent HGG resection at a single hospital. Tumor histology included glioblastoma (90.3%), grade IV astrocytoma (4.1%), grade III astrocytoma (2.8%), and grade III oligodendroglioma (2.8%). Tumor resection was performed under natural light, followed by use of 5-ALA and ioUS to detect residual tumor. Biopsies from the surgical bed were analyzed for tumor presence and categorized based on 5-ALA and ioUS results. Results of 5-ALA and ioUS were classified as positive, weak/doubtful, or negative; histological findings of the biopsies were categorized as solid tumor, infiltration, or no tumor. Sensitivity, specificity, and predictive values for both techniques, separately and combined, were calculated, and a machine learning algorithm (HGGPredictor) was developed to predict tumor presence in biopsies. The overall sensitivities of 5-ALA and ioUS were 84.9% and 76%, with specificities of 57.8% and 84.5%, respectively. The combination of both methods in a positive/positive scenario yielded the highest performance, achieving a sensitivity of 91% and specificity of 86%. The positive/doubtful combination followed, with a sensitivity of 67.9% and specificity of 95.2%. Area-under-the-curve analysis indicated superior performance when both techniques were combined compared with each method used individually. Additionally, the HGGPredictor tool effectively estimated the quantity of tumor cells in surgical margins. Combining 5-ALA and ioUS enhanced diagnostic accuracy for HGG resection, suggesting a new surgical standard. An intraoperative predictive algorithm could further automate decision-making.
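The per-combination sensitivity and specificity above come from tallying combined test calls against biopsy histology. A sketch of that tally for the "positive/positive" rule, with entirely hypothetical biopsy-level data:

```python
def sens_spec(calls, truths):
    """calls/truths: parallel booleans (test positive, tumor present)."""
    tp = sum(c and t for c, t in zip(calls, truths))
    tn = sum(not c and not t for c, t in zip(calls, truths))
    fn = sum(not c and t for c, t in zip(calls, truths))
    fp = sum(c and not t for c, t in zip(calls, truths))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical per-biopsy results: the combined call is positive only
# when both 5-ALA and ioUS are positive (the "positive/positive" rule)
ala   = [True, True, False, True, False, False]
ious  = [True, False, False, True, True, False]
tumor = [True, True, False, True, False, False]
combined = [a and b for a, b in zip(ala, ious)]
sens, spec = sens_spec(combined, tumor)
```

Requiring agreement of both modalities trades a little sensitivity for specificity, which mirrors the pattern in the reported results.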