Page 113 of 1521519 results

Deep learning radiomics analysis for prediction of survival in patients with unresectable gastric cancer receiving immunotherapy.

Gou M, Zhang H, Qian N, Zhang Y, Sun Z, Li G, Wang Z, Dai G

PubMed · Jun 1 2025
Immunotherapy has become an option for first-line therapy of advanced gastric cancer (GC), with improved survival. Our study aimed to investigate unresectable GC from an imaging perspective, combined with clinicopathological variables, to identify patients most likely to benefit from immunotherapy. Patients with unresectable GC consecutively treated with immunotherapy at two medical centers of the Chinese PLA General Hospital were included and divided into training and validation cohorts. A deep learning neural network, using a multi-model ensemble approach based on CT imaging data acquired before immunotherapy, was trained in the training cohort to predict survival, and an internal validation cohort was constructed to select the optimal ensemble model. Data from another cohort were used for external validation. The area under the receiver operating characteristic curve was analyzed to evaluate performance in predicting survival. Detailed clinicopathological data and peripheral blood samples obtained prior to immunotherapy were collected for each patient. Univariate and multivariable logistic regression analyses of the imaging models and clinicopathological variables were applied to identify independent predictors of survival. A nomogram based on multivariable logistic regression was constructed. A total of 79 GC patients in the training cohort and 97 patients in the external validation cohort were enrolled in this study. The multi-model ensemble approach was applied to train a model to predict 1-year survival of GC patients. Compared with the individual models, the ensemble model showed improved performance metrics in both the internal and external validation cohorts. There was a significant difference in overall survival (OS) between patient groups stratified by the imaging model at the optimal cutoff score of 0.5 (HR = 0.20, 95% CI: 0.10-0.37, P < 0.001).
Multivariable Cox regression analysis revealed that the imaging model, PD-L1 expression, and the lung immune prognostic index were independent prognostic factors for OS. We combined these variables and built a nomogram. Calibration curves were plotted, and the C-index of the nomogram was 0.85 and 0.78 in the training and validation cohorts, respectively. The deep learning model, in combination with several clinical factors, showed predictive value for survival in patients with unresectable GC receiving immunotherapy.
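For a binary 1-year survival endpoint, the C-index reported for the nomogram reduces to the AUC: the probability that a randomly chosen event case receives a higher predicted risk score than a randomly chosen non-event case. A minimal sketch of this pairwise-concordance computation, using hypothetical scores and outcomes rather than the study's data:

```python
def concordance_index(scores, events):
    """Pairwise concordance (C-index; equals AUC for a binary outcome):
    fraction of (event, non-event) pairs in which the event case got the
    higher predicted score; ties count as half a win."""
    pos = [s for s, e in zip(scores, events) if e == 1]
    neg = [s for s, e in zip(scores, events) if e == 0]
    if not pos or not neg:
        raise ValueError("need both event and non-event cases")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

A perfectly ranked cohort yields 1.0; each discordant pair lowers the index by one over the number of pairs.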

Empowering PET imaging reporting with retrieval-augmented large language models and reading reports database: a pilot single center study.

Choi H, Lee D, Kang YK, Suh M

PubMed · Jun 1 2025
Large language models (LLMs) have the potential to enhance a variety of natural language tasks in clinical fields, including medical imaging reporting. This pilot study examines the efficacy of a retrieval-augmented generation (RAG) LLM system, leveraging the zero-shot learning capability of LLMs and integrated with a comprehensive database of PET reading reports, in improving reference to prior reports and decision making. We developed a custom LLM framework with retrieval capabilities, leveraging a database of over 10 years of PET imaging reports from a single center. The system uses vector space embedding to facilitate similarity-based retrieval. Queries prompt the system to generate context-based answers and identify similar cases or differential diagnoses. From routine clinical PET readings, experienced nuclear medicine physicians evaluated the performance of the system in terms of the relevance of the retrieved similar cases and the appropriateness score of the suggested potential diagnoses. The system efficiently organized embedded vectors from PET reports, showing that imaging reports were accurately clustered within the embedded vector space according to diagnosis or PET study type. Based on this system, a proof-of-concept chatbot was developed and demonstrated the framework's potential for referencing reports of previous similar cases and identifying exemplary cases for various purposes. In routine clinical PET readings, 84.1% of the cases retrieved relevant similar cases, as agreed upon by all three readers. Using the RAG system, the appropriateness score of the suggested potential diagnoses was significantly better than that of the LLM without RAG. Additionally, the system demonstrated the capability to offer differential diagnoses, leveraging the vast database to enhance the completeness and precision of generated reports.
The integration of a RAG LLM with a large database of PET imaging reports shows potential to support the clinical practice of nuclear medicine image reading through various AI tasks, including finding similar cases and deriving potential diagnoses from them. This study underscores the potential of advanced AI tools in transforming medical imaging reporting practices.
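The similarity-based retrieval step described above can be sketched as cosine similarity over embedded report vectors. The report identifiers and 3-dimensional embeddings below are hypothetical toy values; a real system would embed report text with a learned model and search a vector index:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve_similar(query_vec, report_vecs, k=2):
    """Return the ids of the k stored reports most similar to the query."""
    ranked = sorted(report_vecs,
                    key=lambda rid: cosine(query_vec, report_vecs[rid]),
                    reverse=True)
    return ranked[:k]
```

The retrieved report ids would then be inserted into the LLM prompt as context, which is the essence of the RAG loop.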

Enhancing detection of previously missed non-palpable breast carcinomas through artificial intelligence.

Mansour S, Kamal R, Hussein SA, Emara M, Kassab Y, Taha SN, Gomaa MMM

PubMed · Jun 1 2025
To investigate the impact of artificial intelligence (AI) reading of digital mammograms on increasing the chance of detecting missed breast cancer, by studying the AI-flagged early morphology indicators overlooked by the radiologist and correlating them with the pathology types of the missed cancers. Mammograms performed in 2020-2023 and presenting breast carcinomas (n = 1998) were analyzed in concordance with the corresponding prior-year mammograms (2019-2022), which had been assessed as negative or benign. The current mammograms were reviewed for the descriptors asymmetry, distortion, mass, and microcalcifications. The AI presented abnormalities by overlaying a color hue and a scoring percentage for the degree of suspicion of malignancy. Prior mammograms with AI markings comprised 54% (n = 555), and in the current mammograms AI targeted 904 (88%) carcinomas. The descriptor "asymmetry" was the most common presentation of missed breast carcinoma (64.1%) in the prior mammograms, and the highest AI detection rate was for "distortion" (100%), followed by "grouped microcalcifications" (80%). AI performance in predicting malignancy in previously assigned negative or benign mammograms showed a sensitivity of 73.4%, a specificity of 89%, and an accuracy of 78.4%. Reading mammograms with AI significantly enhances the detection of early cancerous changes, particularly in dense breast tissue. The AI detection rate did not correlate with specific pathological types of breast cancer, highlighting its broad utility. Subtle mammographic changes in postmenopausal women, not corroborated by ultrasound but marked by AI, warrant further evaluation by advanced applications of digital mammography and close-interval follow-up with AI-read mammograms to minimize the potential for missed breast carcinoma.
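The sensitivity, specificity, and accuracy figures quoted above follow directly from confusion-matrix counts. A minimal sketch with hypothetical counts, not the study's case numbers:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening metrics from confusion-matrix counts:
    tp/fp = true/false positives, tn/fn = true/false negatives."""
    sensitivity = tp / (tp + fn)                # fraction of cancers detected
    specificity = tn / (tn + fp)                # fraction of normals correctly cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall correct fraction
    return sensitivity, specificity, accuracy
```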

AI image analysis as the basis for risk-stratified screening.

Strand F

PubMed · Jun 1 2025
Artificial intelligence (AI) has emerged as a transformative tool in breast cancer screening, with two distinct applications: computer-aided cancer detection (CAD) and risk prediction. While AI CAD systems are slowly finding their way into clinical practice to assist radiologists or to make independent reads, this review focuses on AI risk models, which aim to predict a patient's likelihood of being diagnosed with breast cancer within a few years after a negative screening. Unlike AI CAD systems, AI risk models have mainly been explored in research settings without widespread clinical adoption. This review synthesizes advances in AI-driven risk prediction models, from traditional imaging biomarkers to cutting-edge deep learning methodologies and multimodal approaches. Contributions by leading researchers are explored with critical appraisal of their methods and findings. Ethical, practical, and clinical challenges in implementing AI models are also discussed, with an emphasis on real-world applications. The review concludes by proposing future directions to optimize the adoption of AI tools in breast cancer screening and improve equity and outcomes for diverse populations.

Semi-Supervised Learning Allows for Improved Segmentation With Reduced Annotations of Brain Metastases Using Multicenter MRI Data.

Ottesen JA, Tong E, Emblem KE, Latysheva A, Zaharchuk G, Bjørnerud A, Grøvik E

PubMed · Jun 1 2025
Deep learning-based segmentation of brain metastases relies on large amounts of data fully annotated by domain experts. Semi-supervised learning offers potentially efficient methods to improve model performance without an excessive annotation burden. This work tests the viability of semi-supervision for brain metastases segmentation. Retrospective. There were 156, 65, 324, and 200 labeled scans from four institutions and 519 unlabeled scans from a single institution. All subjects included in the study had been diagnosed with brain metastases. 1.5 T and 3 T, 2D and 3D T1-weighted pre- and post-contrast, and fluid-attenuated inversion recovery (FLAIR). Three semi-supervision methods (mean teacher, cross-pseudo supervision, and interpolation consistency training) were adapted with the U-Net architecture. The three semi-supervised methods were compared to their respective supervised baselines on the full- and half-sized training sets. Evaluation was performed on a multinational test set from four different institutions using 5-fold cross-validation. Method performance was evaluated by the following: the number of false-positive predictions, the number of true-positive predictions, the 95th percentile Hausdorff distance, and the Dice similarity coefficient (DSC). Significance was tested using a paired-samples t test for a single fold, and across all folds within a given cohort. Semi-supervision outperformed the supervised baseline for all sites, with the best-performing semi-supervised method achieving average DSC improvements of 6.3% ± 1.6%, 8.2% ± 3.8%, 8.6% ± 2.6%, and 15.4% ± 1.4% across the four test cohorts when trained on half the dataset, and 3.6% ± 0.7%, 2.0% ± 1.5%, 1.8% ± 5.7%, and 4.7% ± 1.7% when trained on the full dataset, compared to the supervised baseline. In addition, in three of four datasets, semi-supervised training produced results equal to or better than those of supervised models trained on twice the labeled data.
Semi-supervised learning allows for improved segmentation performance over the supervised baseline, and the improvement was particularly notable for independent external test sets when models were trained on small amounts of labeled data. Artificial intelligence requires extensive datasets with large amounts of data annotated by medical experts, which can be difficult to acquire due to the large workload. To compensate for this, it is possible to utilize large amounts of un-annotated clinical data in addition to annotated data. However, this approach has not been widely tested for the most common intracranial brain tumor, brain metastases. This study shows that the approach allows for data-efficient deep learning models across multiple institutions with different clinical protocols and scanners. LEVEL OF EVIDENCE: 3. TECHNICAL EFFICACY: Stage 2.
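The Dice similarity coefficient used as the primary overlap metric above is twice the intersection divided by the summed mask volumes. A minimal sketch on flattened binary masks, with toy inputs rather than study data:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks,
    given as flat sequences of 0/1 voxel labels."""
    intersection = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    if total == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * intersection / total
```

A reported "DSC improvement of 6.3%" then corresponds to an absolute increase of 0.063 in this quantity.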

Computer-Aided Detection (CADe) and Segmentation Methods for Breast Cancer Using Magnetic Resonance Imaging (MRI).

Jannatdoust P, Valizadeh P, Saeedi N, Valizadeh G, Salari HM, Saligheh Rad H, Gity M

PubMed · Jun 1 2025
Breast cancer continues to be a major health concern, and early detection is vital for enhancing survival rates. Magnetic resonance imaging (MRI) is a key tool due to its substantial sensitivity for invasive breast cancers. Computer-aided detection (CADe) systems enhance the effectiveness of MRI by identifying potential lesions, aiding radiologists in focusing on areas of interest, extracting quantitative features, and integrating with computer-aided diagnosis (CADx) pipelines. This review aims to provide a comprehensive overview of the current state of CADe systems in breast MRI, focusing on the technical details of pipelines and segmentation models, including classical intensity-based methods, supervised and unsupervised machine learning (ML) approaches, and the latest deep learning (DL) architectures. It highlights recent advancements from traditional algorithms to sophisticated DL models such as U-Nets, emphasizing CADe implementation with multi-parametric MRI acquisitions. Despite these advancements, CADe systems face challenges such as variable false-positive and false-negative rates, complexity in interpreting extensive imaging data, variability in system performance, and a lack of large-scale studies and multicentric models, limiting their generalizability and suitability for clinical implementation. Technical issues, including image artefacts and the need for reproducible and explainable detection algorithms, remain significant hurdles. Future directions emphasize developing more robust and generalizable algorithms, integrating explainable AI to improve transparency and trust among clinicians, developing multi-purpose AI systems, and incorporating large language models to enhance diagnostic reporting and patient management. Additionally, efforts to standardize and streamline MRI protocols aim to increase accessibility and reduce costs, optimizing the use of CADe systems in clinical practice. LEVEL OF EVIDENCE: NA. TECHNICAL EFFICACY: Stage 2.

Deep Learning-Based Three-Dimensional Analysis Reveals Distinct Patterns of Condylar Remodelling After Orthognathic Surgery in Skeletal Class III Patients.

Barone S, Cevidanes L, Bianchi J, Goncalves JR, Giudice A

PubMed · Jun 1 2025
This retrospective study aimed to evaluate morphometric changes in mandibular condyles of patients with skeletal Class III malocclusion following two-jaw orthognathic surgery planned using virtual surgical planning (VSP) and analysed with automated three-dimensional (3D) image analysis based on deep-learning techniques. Pre-operative (T1) and 12-18 months post-operative (T2) Cone-Beam Computed Tomography (CBCT) scans of 17 patients (mean age: 24.8 ± 3.5 years) were analysed using 3DSlicer software. Deep-learning algorithms automated CBCT orientation, registration, bone segmentation, and landmark identification. By utilising voxel-based superimposition of pre- and post-operative CBCT scans and shape correspondence, the overall changes in condylar morphology were assessed, with a focus on bone resorption and apposition at specific regions (superior, lateral and medial poles). The correlation between these modifications and the extent of actual condylar movements post-surgery was investigated. Statistical analysis was conducted with a significance level of α = 0.05. Overall condylar remodelling was minimal, with mean changes of < 1 mm. Small but statistically significant bone resorption occurred at the condylar superior articular surface, while bone apposition was primarily observed at the lateral pole. The bone apposition at the lateral pole and resorption at the superior articular surface were significantly correlated with medial condylar displacement (p < 0.05). The automated 3D analysis revealed distinct patterns of condylar remodelling following orthognathic surgery in skeletal Class III patients, with minimal overall changes but significant regional variations. The correlation between condylar displacements and remodelling patterns highlights the need for precise pre-operative planning to optimise condylar positioning, potentially minimising harmful remodelling and enhancing stability.

Phenotyping atherosclerotic plaque and perivascular adipose tissue: signalling pathways and clinical biomarkers in atherosclerosis.

Grodecki K, Geers J, Kwiecinski J, Lin A, Slipczuk L, Slomka PJ, Dweck MR, Nerlekar N, Williams MC, Berman D, Marwick T, Newby DE, Dey D

PubMed · Jun 1 2025
Computed tomography coronary angiography provides a non-invasive evaluation of coronary artery disease that includes phenotyping of atherosclerotic plaques and the surrounding perivascular adipose tissue (PVAT). Image analysis techniques have been developed to quantify atherosclerotic plaque burden and morphology as well as the associated PVAT attenuation, and emerging radiomic approaches can add further contextual information. PVAT attenuation might provide a novel measure of vascular health that could be indicative of the pathogenetic processes implicated in atherosclerosis such as inflammation, fibrosis or increased vascularity. Bidirectional signalling between the coronary artery and adjacent PVAT has been hypothesized to contribute to coronary artery disease progression and provide a potential novel measure of the risk of future cardiovascular events. However, despite the development of more advanced radiomic and artificial intelligence-based algorithms, studies involving large datasets suggest that the measurement of PVAT attenuation contributes only modest additional predictive discrimination to standard cardiovascular risk scores. In this Review, we explore the pathobiology of coronary atherosclerotic plaques and PVAT, describe their phenotyping with computed tomography coronary angiography, and discuss potential future applications in clinical risk prediction and patient management.

Evaluation of a deep learning prostate cancer detection system on biparametric MRI against radiological reading.

Debs N, Routier A, Bône A, Rohé MM

PubMed · Jun 1 2025
This study aims to evaluate a deep learning pipeline for detecting clinically significant prostate cancer (csPCa), defined as Gleason Grade Group (GGG) ≥ 2, using biparametric MRI (bpMRI), and to compare its performance with radiological reading. The training dataset included 4381 bpMRI cases (3800 positive and 581 negative) across three continents, with 80% annotated using PI-RADS and 20% with Gleason scores. The testing set comprised 328 cases from the PROSTATEx dataset, including 34% positive (GGG ≥ 2) and 66% negative cases. A 3D nnU-Net was trained on bpMRI for lesion detection, evaluated using histopathology-based annotations, and assessed with patient- and lesion-level metrics, along with lesion volume and GGG. The algorithm was compared to non-expert radiologists using multi-parametric MRI (mpMRI). The model achieved an AUC of 0.83 (95% CI: 0.80, 0.87). Lesion-level sensitivity was 0.85 (95% CI: 0.82, 0.94) at 0.5 false positives per volume (FP/volume) and 0.88 (95% CI: 0.79, 0.92) at 1 FP/volume. Average precision was 0.55 (95% CI: 0.46, 0.64). The model showed over 0.90 sensitivity for lesions larger than 650 mm³ and exceeded 0.85 across GGGs. It had higher true-positive rates (TPRs) than radiologists at equivalent FP rates, achieving TPRs of 0.93 and 0.79 compared with the radiologists' 0.87 and 0.68 for PI-RADS ≥ 3 and PI-RADS ≥ 4 lesions, respectively (p ≤ 0.05). The DL model showed strong performance in detecting csPCa on an independent test cohort, surpassing radiological interpretation and demonstrating AI's potential to improve diagnostic accuracy for non-expert radiologists. However, detecting small lesions remains challenging. Question: Current prostate cancer detection methods often do not involve non-expert radiologists, highlighting the need for more accurate deep learning approaches using biparametric MRI. Findings: Our model outperforms radiologists significantly, showing consistent performance across Gleason Grade Groups and for medium to large lesions.
Clinical relevance: This AI model improves prostate cancer detection accuracy in prostate imaging, serves as a benchmark with reference performance on a public dataset, and offers public PI-RADS annotations, enhancing transparency and facilitating further research and development.
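The lesion-level sensitivities quoted at 0.5 and 1 FP/volume are FROC-style operating points. A sketch of how one such point can be computed from detections pooled over all volumes, using hypothetical scores and counts rather than the study's evaluation code:

```python
def froc_point(detections, n_lesions, n_volumes, threshold):
    """One FROC operating point: lesion-level sensitivity and false
    positives per volume at a given detection-score threshold.
    `detections` is a list of (score, is_true_positive) tuples pooled
    over all test volumes; `n_lesions` is the total lesion count."""
    kept = [hit for score, hit in detections if score >= threshold]
    tp = sum(kept)            # True counts as 1
    fp = len(kept) - tp
    return tp / n_lesions, fp / n_volumes
```

Sweeping the threshold over all observed scores traces the full FROC curve, from which sensitivities at fixed FP/volume are interpolated.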

Automated Cone Beam Computed Tomography Segmentation of Multiple Impacted Teeth With or Without Association to Rare Diseases: Evaluation of Four Deep Learning-Based Methods.

Sinard E, Gajny L, de La Dure-Molla M, Felizardo R, Dot G

PubMed · Jun 1 2025
To assess the accuracy of three commercially available and one open-source deep learning (DL) solutions for automatic tooth segmentation in cone beam computed tomography (CBCT) images of patients with multiple dental impactions. Twenty patients (20 CBCT scans) were selected from a retrospective cohort of individuals with multiple dental impactions. For each CBCT scan, one reference segmentation and four DL segmentations of the maxillary and mandibular teeth were obtained. Reference segmentations were generated by experts using a semi-automatic process, and DL segmentations were automatically generated according to the manufacturers' instructions. Quantitative and qualitative evaluations of each DL segmentation were performed by comparison with the expert-generated segmentation. The quantitative metrics used were the Dice similarity coefficient (DSC) and the normalized surface distance (NSD). The patients had an average of 12 retained teeth, and 12 patients were diagnosed with a rare disease. DSC values ranged from 88.5% ± 3.2% to 95.6% ± 1.2%, and NSD values ranged from 95.3% ± 2.7% to 97.4% ± 6.5%. The number of completely unsegmented teeth ranged from 1 (0.1%) to 41 (6.0%). Two solutions (Diagnocat and DentalSegmentator) outperformed the others across all tested parameters. All the tested methods showed a mean NSD of approximately 95%, demonstrating their overall efficiency for tooth segmentation. The accuracy of the four tested solutions varied owing to the presence of impacted teeth in our CBCT scans. DL solutions are evolving rapidly, and their future performance cannot be predicted from our results.