
AI image analysis as the basis for risk-stratified screening.

Strand F

PubMed · Jun 1 2025
Artificial intelligence (AI) has emerged as a transformative tool in breast cancer screening, with two distinct applications: computer-aided cancer detection (CAD) and risk prediction. While AI CAD systems are slowly finding their way into clinical practice, assisting radiologists or providing independent reads, this review focuses on AI risk models, which aim to predict a patient's likelihood of being diagnosed with breast cancer within a few years after a negative screening examination. Unlike AI CAD systems, AI risk models are mainly explored in research settings and have not seen widespread clinical adoption. This review synthesizes advances in AI-driven risk prediction models, from traditional imaging biomarkers to cutting-edge deep learning methodologies and multimodal approaches. Contributions by leading researchers are explored with critical appraisal of their methods and findings. Ethical, practical, and clinical challenges in implementing AI models are also discussed, with an emphasis on real-world applications. The review concludes by proposing future directions to optimize the adoption of AI tools in breast cancer screening and to improve equity and outcomes for diverse populations.

Semi-Supervised Learning Allows for Improved Segmentation With Reduced Annotations of Brain Metastases Using Multicenter MRI Data.

Ottesen JA, Tong E, Emblem KE, Latysheva A, Zaharchuk G, Bjørnerud A, Grøvik E

PubMed · Jun 1 2025
Deep learning-based segmentation of brain metastases relies on large amounts of data fully annotated by domain experts. Semi-supervised learning offers potentially efficient methods to improve model performance without an excessive annotation burden. This work tests the viability of semi-supervision for brain metastases segmentation. Retrospective. There were 156, 65, 324, and 200 labeled scans from four institutions and 519 unlabeled scans from a single institution. All subjects included in the study had been diagnosed with brain metastases. 1.5 T and 3 T, 2D and 3D T1-weighted pre- and post-contrast, and fluid-attenuated inversion recovery (FLAIR). Three semi-supervision methods (mean teacher, cross-pseudo supervision, and interpolation consistency training) were adapted with the U-Net architecture. The three semi-supervised methods were compared to their respective supervised baselines on the full-sized and half-sized training sets. Evaluation was performed on a multinational test set from four different institutions using 5-fold cross-validation. Method performance was evaluated by the following: the number of false-positive predictions, the number of true-positive predictions, the 95th percentile Hausdorff distance, and the Dice similarity coefficient (DSC). Significance was tested using a paired-samples t test for a single fold and across all folds within a given cohort. Semi-supervision outperformed the supervised baseline for all sites: across the four test cohorts, the best-performing semi-supervised method achieved average DSC improvements of 6.3% ± 1.6%, 8.2% ± 3.8%, 8.6% ± 2.6%, and 15.4% ± 1.4% over the supervised baseline when trained on half the dataset, and 3.6% ± 0.7%, 2.0% ± 1.5%, 1.8% ± 5.7%, and 4.7% ± 1.7% when trained on the full dataset. In addition, in three of four datasets, the semi-supervised training produced equal or better results than the supervised models trained on twice the labeled data. Semi-supervised learning allows for improved segmentation performance over the supervised baseline, and the improvement was particularly notable for independent external test sets when trained on small amounts of labeled data. Artificial intelligence requires extensive datasets with large amounts of data annotated by medical experts, which can be difficult to acquire owing to the annotation workload. To compensate for this, it is possible to utilize large amounts of un-annotated clinical data in addition to annotated data. However, this method has not been widely tested for the most common intracranial brain tumor, brain metastases. This study shows that this approach allows for data-efficient deep learning models across multiple institutions with different clinical protocols and scanners. Level of Evidence: 3. Technical Efficacy: Stage 2.
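
Of the three schemes, mean teacher is the simplest to illustrate. Below is a minimal PyTorch sketch of a mean-teacher update for segmentation, assuming a toy network, random tensors in place of MRI batches, and illustrative loss weights; it is not the authors' implementation, only the general technique: a supervised loss on labeled data plus a consistency loss tying the student's predictions on perturbed unlabeled inputs to an exponential-moving-average (EMA) teacher.

```python
# Minimal mean-teacher sketch for semi-supervised segmentation (PyTorch).
# The tiny conv net, loss weight, and random tensors are illustrative
# stand-ins for the paper's U-Net and MRI batches.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 1, 1))          # toy segmentation head
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)                           # teacher is EMA-only

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
ema_decay, cons_weight = 0.99, 0.1

def ema_update(teacher, student, decay):
    """teacher <- decay * teacher + (1 - decay) * student."""
    with torch.no_grad():
        for tp, sp in zip(teacher.parameters(), student.parameters()):
            tp.mul_(decay).add_(sp, alpha=1 - decay)

for step in range(100):
    x_lab = torch.randn(2, 1, 64, 64)                 # labeled batch
    y_lab = torch.randint(0, 2, (2, 1, 64, 64)).float()
    x_unl = torch.randn(2, 1, 64, 64)                 # unlabeled batch

    sup = F.binary_cross_entropy_with_logits(student(x_lab), y_lab)
    # Consistency: student should match the teacher on perturbed inputs.
    noisy = x_unl + 0.1 * torch.randn_like(x_unl)
    cons = F.mse_loss(torch.sigmoid(student(noisy)),
                      torch.sigmoid(teacher(x_unl)))
    loss = sup + cons_weight * cons

    opt.zero_grad(); loss.backward(); opt.step()
    ema_update(teacher, student, ema_decay)
```

Cross-pseudo supervision and interpolation consistency training swap the consistency term for, respectively, pseudo-label cross-supervision between two networks and consistency on interpolated inputs.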

Preliminary study on detection and diagnosis of focal liver lesions based on a deep learning model using multimodal PET/CT images.

Luo Y, Yang Q, Hu J, Qin X, Jiang S, Liu Y

PubMed · Jun 1 2025
To develop and validate a deep learning model using multimodal PET/CT imaging for detecting and classifying focal liver lesions (FLL). This study included 185 patients who underwent ¹⁸F-FDG PET/CT imaging at our institution from March 2022 to February 2023. We analyzed serological data and imaging. Liver lesions were segmented on PET and CT, serving as the "reference standard". Deep learning models were trained using PET and CT images to generate predicted segmentations and to classify lesion nature. Model performance was evaluated by comparing the predicted segmentations with the reference segmentations, using metrics such as Dice, precision, recall, F1-score, ROC, and AUC, and was compared with physician diagnoses. The study finally included 150 patients: 46 with benign liver nodules, 51 with malignant liver nodules, and 53 with no FLLs. Significant differences were observed among groups for age, AST, ALP, GGT, AFP, CA19-9, and CEA. On the validation set, the Dice coefficient of the model was 0.740. For the normal group, recall was 0.918, precision was 0.904, F1-score was 0.909, and AUC was 0.976. For the benign group, recall was 0.869, precision was 0.862, F1-score was 0.863, and AUC was 0.928. For the malignant group, recall was 0.858, precision was 0.914, F1-score was 0.883, and AUC was 0.979. The model's overall diagnostic performance fell between that of junior and senior physicians. This deep learning model demonstrated high sensitivity in detecting FLLs and effectively differentiated between benign and malignant lesions.
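
For reference, the segmentation metrics quoted above follow directly from binary masks. The sketch below (plain NumPy, with toy random masks standing in for PET/CT segmentations) shows the standard voxel-wise definitions; note that for binary masks the Dice coefficient and the F1-score coincide.

```python
# Voxel-wise overlap metrics for a predicted vs. reference lesion mask,
# matching the Dice / precision / recall / F1 definitions used above.
import numpy as np

def overlap_metrics(pred: np.ndarray, ref: np.ndarray) -> dict:
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()
    fp = np.logical_and(pred, ~ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"dice": dice, "precision": precision,
            "recall": recall, "f1": f1}

# Toy 3D masks standing in for PET/CT segmentations.
rng = np.random.default_rng(0)
ref = rng.random((32, 32, 32)) > 0.9
pred = np.logical_or(ref, rng.random((32, 32, 32)) > 0.98)
print(overlap_metrics(pred, ref))
```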

Computer-Aided Detection (CADe) and Segmentation Methods for Breast Cancer Using Magnetic Resonance Imaging (MRI).

Jannatdoust P, Valizadeh P, Saeedi N, Valizadeh G, Salari HM, Saligheh Rad H, Gity M

PubMed · Jun 1 2025
Breast cancer continues to be a major health concern, and early detection is vital for enhancing survival rates. Magnetic resonance imaging (MRI) is a key tool due to its substantial sensitivity for invasive breast cancers. Computer-aided detection (CADe) systems enhance the effectiveness of MRI by identifying potential lesions, aiding radiologists in focusing on areas of interest, extracting quantitative features, and integrating with computer-aided diagnosis (CADx) pipelines. This review aims to provide a comprehensive overview of the current state of CADe systems in breast MRI, focusing on the technical details of pipelines and segmentation models, including classical intensity-based methods, supervised and unsupervised machine learning (ML) approaches, and the latest deep learning (DL) architectures. It highlights recent advancements from traditional algorithms to sophisticated DL models such as U-Nets, emphasizing CADe implementation for multi-parametric MRI acquisitions. Despite these advancements, CADe systems face challenges such as variable false-positive and false-negative rates, complexity in interpreting extensive imaging data, variability in system performance, and a lack of large-scale studies and multicentric models, limiting their generalizability and suitability for clinical implementation. Technical issues, including image artefacts and the need for reproducible and explainable detection algorithms, remain significant hurdles. Future directions emphasize developing more robust and generalizable algorithms, integrating explainable AI to improve transparency and trust among clinicians, developing multi-purpose AI systems, and incorporating large language models to enhance diagnostic reporting and patient management. Additionally, efforts to standardize and streamline MRI protocols aim to increase accessibility and reduce costs, optimizing the use of CADe systems in clinical practice. Level of Evidence: NA. Technical Efficacy: Stage 2.

Deep Learning-Based Three-Dimensional Analysis Reveals Distinct Patterns of Condylar Remodelling After Orthognathic Surgery in Skeletal Class III Patients.

Barone S, Cevidanes L, Bianchi J, Goncalves JR, Giudice A

PubMed · Jun 1 2025
This retrospective study aimed to evaluate morphometric changes in mandibular condyles of patients with skeletal Class III malocclusion following two-jaw orthognathic surgery planned using virtual surgical planning (VSP) and analysed with automated three-dimensional (3D) image analysis based on deep-learning techniques. Pre-operative (T1) and 12-18 months post-operative (T2) Cone-Beam Computed Tomography (CBCT) scans of 17 patients (mean age: 24.8 ± 3.5 years) were analysed using 3DSlicer software. Deep-learning algorithms automated CBCT orientation, registration, bone segmentation, and landmark identification. By utilising voxel-based superimposition of pre- and post-operative CBCT scans and shape correspondence, the overall changes in condylar morphology were assessed, with a focus on bone resorption and apposition at specific regions (superior, lateral and medial poles). The correlation between these modifications and the extent of actual condylar movements post-surgery was investigated. Statistical analysis was conducted with a significance level of α = 0.05. Overall condylar remodelling was minimal, with mean changes of < 1 mm. Small but statistically significant bone resorption occurred at the condylar superior articular surface, while bone apposition was primarily observed at the lateral pole. The bone apposition at the lateral pole and resorption at the superior articular surface were significantly correlated with medial condylar displacement (p < 0.05). The automated 3D analysis revealed distinct patterns of condylar remodelling following orthognathic surgery in skeletal Class III patients, with minimal overall changes but significant regional variations. The correlation between condylar displacements and remodelling patterns highlights the need for precise pre-operative planning to optimise condylar positioning, potentially minimising harmful remodelling and enhancing stability.
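
Once shape correspondence is established, the regional resorption/apposition analysis described above reduces to a signed displacement along the surface normal. Below is a small NumPy sketch under that assumption, with toy vertex coordinates and normals in place of real condylar meshes (not the authors' 3DSlicer pipeline): positive values along the outward normal indicate apposition, negative values resorption.

```python
# Sketch of regional remodelling assessment: given corresponding condylar
# surface points before (T1) and after (T2) surgery plus outward normals,
# the signed displacement along the normal separates apposition (positive)
# from resorption (negative). Point correspondence is assumed to come from
# a shape-correspondence step like the one described above.
import numpy as np

def signed_remodelling(p_t1, p_t2, normals):
    """Per-vertex signed change in mm along the T1 outward normal."""
    disp = p_t2 - p_t1                              # (N, 3) displacements
    return np.einsum("ij,ij->i", disp, normals)     # dot product per vertex

rng = np.random.default_rng(1)
p_t1 = rng.random((500, 3)) * 10.0                  # toy vertex coordinates
normals = rng.normal(size=(500, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
p_t2 = p_t1 + 0.3 * rng.normal(size=(500, 3))       # simulated T2 surface

signed = signed_remodelling(p_t1, p_t2, normals)
print(f"mean apposition: {signed[signed > 0].mean():.2f} mm, "
      f"mean resorption: {signed[signed < 0].mean():.2f} mm")
```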

Phenotyping atherosclerotic plaque and perivascular adipose tissue: signalling pathways and clinical biomarkers in atherosclerosis.

Grodecki K, Geers J, Kwiecinski J, Lin A, Slipczuk L, Slomka PJ, Dweck MR, Nerlekar N, Williams MC, Berman D, Marwick T, Newby DE, Dey D

PubMed · Jun 1 2025
Computed tomography coronary angiography provides a non-invasive evaluation of coronary artery disease that includes phenotyping of atherosclerotic plaques and the surrounding perivascular adipose tissue (PVAT). Image analysis techniques have been developed to quantify atherosclerotic plaque burden and morphology as well as the associated PVAT attenuation, and emerging radiomic approaches can add further contextual information. PVAT attenuation might provide a novel measure of vascular health indicative of the pathogenetic processes implicated in atherosclerosis, such as inflammation, fibrosis, or increased vascularity. Bidirectional signalling between the coronary artery and adjacent PVAT has been hypothesized to contribute to coronary artery disease progression and to provide a potential novel measure of the risk of future cardiovascular events. However, despite the development of more advanced radiomic and artificial intelligence-based algorithms, studies involving large datasets suggest that the measurement of PVAT attenuation contributes only modest additional predictive discrimination to standard cardiovascular risk scores. In this Review, we explore the pathobiology of coronary atherosclerotic plaques and PVAT, describe their phenotyping with computed tomography coronary angiography, and discuss potential future applications in clinical risk prediction and patient management.
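
In essence, PVAT attenuation is the mean CT number of adipose voxels in a shell around the vessel. The sketch below illustrates that idea in NumPy/SciPy; the -190 to -30 HU adipose window and the fixed dilation radius are assumptions borrowed from common practice in the PVAT literature, not parameters stated in this Review, and the synthetic volume merely stands in for a real CT.

```python
# Illustrative computation of PVAT attenuation: mean HU of adipose voxels
# within a fixed radius of the coronary centerline. Window and radius are
# assumptions, not values from this Review.
import numpy as np
from scipy.ndimage import binary_dilation, generate_binary_structure

def pvat_attenuation(ct_hu, centerline_mask, radius_vox=5,
                     fat_window=(-190, -30)):
    """Mean attenuation (HU) of fat voxels near the vessel centerline."""
    struct = generate_binary_structure(3, 1)
    near = binary_dilation(centerline_mask, struct, iterations=radius_vox)
    fat = (ct_hu >= fat_window[0]) & (ct_hu <= fat_window[1])
    voxels = ct_hu[near & fat]
    return float(voxels.mean()) if voxels.size else float("nan")

# Toy volume: noise around -80 HU plus a synthetic straight centerline.
rng = np.random.default_rng(2)
ct = rng.normal(-80, 40, size=(40, 40, 40))
cl = np.zeros(ct.shape, dtype=bool)
cl[20, 20, :] = True
print(f"PVAT attenuation: {pvat_attenuation(ct, cl):.1f} HU")
```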

Evaluation of a deep learning prostate cancer detection system on biparametric MRI against radiological reading.

Debs N, Routier A, Bône A, Rohé MM

PubMed · Jun 1 2025
This study aims to evaluate a deep learning pipeline for detecting clinically significant prostate cancer (csPCa), defined as Gleason Grade Group (GGG) ≥ 2, using biparametric MRI (bpMRI), and to compare its performance with radiological reading. The training dataset included 4381 bpMRI cases (3800 positive and 581 negative) across three continents, with 80% annotated using PI-RADS and 20% with Gleason Scores. The testing set comprised 328 cases from the PROSTATEx dataset, including 34% positive (GGG ≥ 2) and 66% negative cases. A 3D nnU-Net was trained on bpMRI for lesion detection, evaluated using histopathology-based annotations, and assessed with patient- and lesion-level metrics, along with lesion volume and GGG. The algorithm was compared to non-expert radiologists using multi-parametric MRI (mpMRI). The model achieved an AUC of 0.83 (95% CI: 0.80, 0.87). Lesion-level sensitivity was 0.85 (95% CI: 0.82, 0.94) at 0.5 false positives per volume (FP/volume) and 0.88 (95% CI: 0.79, 0.92) at 1 FP/volume. Average precision was 0.55 (95% CI: 0.46, 0.64). The model showed over 0.90 sensitivity for lesions larger than 650 mm³ and exceeded 0.85 across GGGs. It had higher true-positive rates (TPRs) than radiologists at equivalent FP rates, achieving TPRs of 0.93 and 0.79 compared to radiologists' 0.87 and 0.68 for PI-RADS ≥ 3 and PI-RADS ≥ 4 lesions (p ≤ 0.05). The DL model showed strong performance in detecting csPCa on an independent test cohort, surpassing radiological interpretation and demonstrating AI's potential to improve diagnostic accuracy for non-expert radiologists. However, detecting small lesions remains challenging. Question: Current prostate cancer detection methods often do not involve expert radiologists, highlighting the need for more accurate deep learning approaches using biparametric MRI. Findings: Our model outperforms radiologists significantly, showing consistent performance across Gleason Grade Groups and for medium to large lesions. Clinical relevance: This AI model improves detection accuracy in prostate imaging, serves as a benchmark with reference performance on a public dataset, and offers public PI-RADS annotations, enhancing transparency and facilitating further research and development.
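
The "sensitivity at 0.5 FP/volume" figures are single operating points on a FROC curve. A hypothetical NumPy sketch of how such a point can be read off, assuming candidate detections have already been matched to ground-truth lesions (the matched-candidate and false-positive scores here are synthetic):

```python
# One FROC operating point: the highest lesion-level sensitivity achievable
# while keeping mean false positives per volume at or below a target.
import numpy as np

def sensitivity_at_fp_rate(tp_scores, fp_scores, n_volumes, target_fp):
    """tp_scores: best candidate score per true lesion (unmatched lesions
    should contribute a score of -inf); fp_scores: scores of unmatched
    candidates; target_fp: allowed false positives per volume."""
    thresholds = np.unique(np.concatenate([tp_scores, fp_scores]))
    best = 0.0
    for t in thresholds:
        fp_per_volume = (fp_scores >= t).sum() / n_volumes
        if fp_per_volume <= target_fp:
            best = max(best, (tp_scores >= t).mean())
    return best

rng = np.random.default_rng(3)
tp = rng.normal(0.8, 0.10, size=100)   # matched-lesion scores (synthetic)
fp = rng.normal(0.4, 0.15, size=400)   # false-positive scores (synthetic)
print(f"sensitivity @ 0.5 FP/vol: "
      f"{sensitivity_at_fp_rate(tp, fp, n_volumes=328, target_fp=0.5):.2f}")
```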

Automated Cone Beam Computed Tomography Segmentation of Multiple Impacted Teeth With or Without Association to Rare Diseases: Evaluation of Four Deep Learning-Based Methods.

Sinard E, Gajny L, de La Dure-Molla M, Felizardo R, Dot G

PubMed · Jun 1 2025
To assess the accuracy of three commercially available and one open-source deep learning (DL) solutions for automatic tooth segmentation in cone beam computed tomography (CBCT) images of patients with multiple dental impactions. Twenty patients (20 CBCT scans) were selected from a retrospective cohort of individuals with multiple dental impactions. For each CBCT scan, one reference segmentation and four DL segmentations of the maxillary and mandibular teeth were obtained. Reference segmentations were generated by experts using a semi-automatic process. DL segmentations were automatically generated according to the manufacturers' instructions. Quantitative and qualitative evaluations of each DL segmentation were performed by comparing it with the expert-generated segmentation. The quantitative metrics used were the Dice similarity coefficient (DSC) and the normalized surface distance (NSD). The patients had an average of 12 retained teeth, and 12 of the 20 patients were diagnosed with a rare disease. DSC values ranged from 88.5% ± 3.2% to 95.6% ± 1.2%, and NSD values ranged from 95.3% ± 2.7% to 97.4% ± 6.5%. The number of completely unsegmented teeth ranged from 1 (0.1%) to 41 (6.0%). Two solutions (Diagnocat and DentalSegmentator) outperformed the others across all tested parameters. All the tested methods showed a mean NSD of approximately 95%, proving their overall efficiency for tooth segmentation. The accuracy of the methods varied among the four tested solutions owing to the presence of impacted teeth in our CBCT scans. DL solutions are evolving rapidly, and their future performance cannot be predicted based on our results.
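
Unlike the DSC, the NSD measures boundary agreement: the fraction of boundary points of each mask lying within a tolerance of the other mask's boundary. A simplified NumPy/SciPy sketch using voxel-grid boundaries, a KD-tree, and toy cuboid masks (real implementations operate on resampled surfaces with a physically calibrated tolerance):

```python
# Simplified normalized surface distance (NSD) on voxel-grid boundaries.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def boundary_points(mask):
    """Voxel coordinates of the mask's boundary layer."""
    edge = mask & ~binary_erosion(mask)
    return np.argwhere(edge)

def nsd(pred, ref, tau=1.0):
    bp, br = boundary_points(pred), boundary_points(ref)
    if len(bp) == 0 or len(br) == 0:
        return 0.0
    d_pr = cKDTree(br).query(bp)[0]   # pred boundary -> ref boundary
    d_rp = cKDTree(bp).query(br)[0]   # ref boundary -> pred boundary
    return ((d_pr <= tau).sum() + (d_rp <= tau).sum()) / (len(bp) + len(br))

# Toy masks: a cube and the same cube shifted by one voxel.
ref = np.zeros((48, 48, 48), dtype=bool); ref[16:32, 16:32, 16:32] = True
pred = np.zeros_like(ref); pred[17:33, 16:32, 16:32] = True
print(f"NSD @ 1 voxel: {nsd(pred, ref):.3f}")
```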

AI model using CT-based imaging biomarkers to predict hepatocellular carcinoma in patients with chronic hepatitis B.

Shin H, Hur MH, Song BG, Park SY, Kim GA, Choi G, Nam JY, Kim MA, Park Y, Ko Y, Park J, Lee HA, Chung SW, Choi NR, Park MK, Lee YB, Sinn DH, Kim SU, Kim HY, Kim JM, Park SJ, Lee HC, Lee DH, Chung JW, Kim YJ, Yoon JH, Lee JH

PubMed · Jun 1 2025
Various hepatocellular carcinoma (HCC) prediction models have been proposed for patients with chronic hepatitis B (CHB) using clinical variables. We aimed to develop an artificial intelligence (AI)-based HCC prediction model by incorporating imaging biomarkers derived from abdominal computed tomography (CT) images along with clinical variables. An AI prediction model employing a gradient-boosting machine algorithm was developed utilizing imaging biomarkers extracted by DeepFore, a deep learning-based CT auto-segmentation software. The derivation cohort (n = 5,585) was randomly divided into the training and internal validation sets at a 3:1 ratio. The external validation cohort included 2,883 patients. Six imaging biomarkers (i.e., abdominal visceral fat-total fat volume ratio, total fat-trunk volume ratio, spleen volume, liver volume, liver-spleen Hounsfield unit ratio, and muscle Hounsfield unit) and eight clinical variables were selected as the main variables of our model, PLAN-B-DF. In the internal validation set (median follow-up duration = 7.4 years), PLAN-B-DF demonstrated an excellent predictive performance with a c-index of 0.91 and good calibration function (p = 0.78 by the Hosmer-Lemeshow test). In the external validation cohort (median follow-up duration = 4.6 years), PLAN-B-DF showed a significantly better discrimination function compared to previous models, including PLAN-B, PAGE-B, modified PAGE-B, and CU-HCC (c-index 0.89 vs. 0.65-0.78; all p < 0.001), and maintained a good calibration function (p = 0.42 by the Hosmer-Lemeshow test). When patients were classified into four groups according to the risk probability calculated by PLAN-B-DF, the 10-year cumulative HCC incidence was 0.0%, 0.4%, 16.0%, and 46.2% in the minimal-, low-, intermediate-, and high-risk groups, respectively. This AI prediction model, integrating deep learning-based auto-segmentation of CT images, offers improved performance in predicting HCC risk among patients with CHB compared to previous models. The novel predictive model PLAN-B-DF, employing an automated computed tomography segmentation algorithm, significantly improves predictive accuracy and risk stratification for hepatocellular carcinoma in patients with chronic hepatitis B (CHB). Using a gradient-boosting algorithm and computed tomography metrics, such as visceral fat volume and myosteatosis, PLAN-B-DF outperforms previous models based solely on clinical and demographic data. This model not only shows a higher c-index compared to previous models, but also effectively classifies patients with CHB into different risk groups. This model uses machine learning to analyze the complex relationships among various risk factors contributing to hepatocellular carcinoma occurrence, thereby enabling more personalized surveillance for patients with CHB.
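
The c-index reported for PLAN-B-DF is Harrell's concordance index: over all usable patient pairs, the fraction in which the model assigns the higher risk to the patient who develops HCC earlier. A plain-NumPy sketch of the quadratic-time version on synthetic follow-up data (not the study's data):

```python
# Harrell's c-index with right-censoring, O(n^2) for clarity.
import numpy as np

def c_index(risk, time, event):
    """risk: predicted risk score; time: follow-up years;
    event: 1 if HCC occurred, 0 if censored."""
    concordant = permissible = 0.0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            # A pair is usable if i had the event and was observed earlier.
            if event[i] == 1 and time[i] < time[j]:
                permissible += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / permissible

rng = np.random.default_rng(5)
time = rng.exponential(8.0, size=200)        # synthetic follow-up times
event = rng.integers(0, 2, size=200)         # synthetic event indicators
risk = -time + rng.normal(0, 2, size=200)    # higher risk ~ earlier event
print(f"c-index: {c_index(risk, time, event):.2f}")
```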

Incorporating Radiologist Knowledge Into MRI Quality Metrics for Machine Learning Using Rank-Based Ratings.

Tang C, Eisenmenger LB, Rivera-Rivera L, Huo E, Junn JC, Kuner AD, Oechtering TH, Peret A, Starekova J, Johnson KM

PubMed · Jun 1 2025
Deep learning (DL) often requires an image quality metric; however, widely used metrics are not designed for medical images. To develop an image quality metric that is specific to MRI using radiologists' image rankings and DL models. Retrospective. A total of 19,344 rankings on 2916 unique image pairs from the NYU fastMRI Initiative neuro database were used to train the neural network-based image quality metrics, with an 80%/20% training/validation split and fivefold cross-validation. 1.5 T and 3 T T1, T1 postcontrast, T2, and FLuid Attenuated Inversion Recovery (FLAIR). Synthetically corrupted image pairs were ranked by radiologists (N = 7), with a subset also scoring images using a Likert scale (N = 2). DL models were trained to match rankings using two architectures (EfficientNet and IQ-Net), with and without reference image subtraction, and compared to ranking based on mean squared error (MSE) and structural similarity (SSIM). The image-quality-assessing DL models were then evaluated as alternatives to MSE and SSIM as optimization targets for DL denoising and reconstruction. Radiologists' agreement was assessed by a percentage metric and quadratic weighted Cohen's kappa. Ranking accuracies were compared using repeated-measures analysis of variance. Reconstruction models trained with the IQ-Net score, MSE, and SSIM were compared by paired t test. P < 0.05 was considered significant. Compared to direct Likert scoring, ranking produced a higher level of agreement between radiologists (70.4% vs. 25%). Image ranking was subjective, with a high level of intraobserver agreement (94.9% ± 2.4%) and lower interobserver agreement (61.47% ± 5.51%). IQ-Net and EfficientNet accurately predicted rankings with a reference image (75.2% ± 1.3% and 79.2% ± 1.7%). However, EfficientNet resulted in images with artifacts and high MSE when used in denoising tasks, while IQ-Net-optimized networks performed well for both denoising and reconstruction tasks. Image quality networks can be trained from image rankings and used to optimize DL tasks. Level of Evidence: 3. Technical Efficacy: Stage 1.
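
Training a scorer from pairwise rankings typically uses a pairwise ranking objective. A minimal PyTorch sketch with a margin ranking loss, where the tiny CNN and random tensors are placeholders for IQ-Net/EfficientNet and the fastMRI image pairs (an assumption about the general recipe, not the paper's exact loss):

```python
# Learning an image-quality score from pairwise radiologist rankings:
# a shared scoring network and a margin ranking loss that pushes the
# preferred image's score above the other's.
import torch
import torch.nn as nn

scorer = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                       nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                       nn.Linear(8, 1))          # toy stand-in for IQ-Net
opt = torch.optim.Adam(scorer.parameters(), lr=1e-3)
rank_loss = nn.MarginRankingLoss(margin=0.1)

for step in range(100):
    img_a = torch.randn(8, 1, 64, 64)            # image preferred by reader
    img_b = torch.randn(8, 1, 64, 64)            # image ranked worse
    target = torch.ones(8)                       # +1: score(a) > score(b)
    loss = rank_loss(scorer(img_a).squeeze(1),
                     scorer(img_b).squeeze(1), target)
    opt.zero_grad(); loss.backward(); opt.step()
```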