
Computer-Aided Detection (CADe) and Segmentation Methods for Breast Cancer Using Magnetic Resonance Imaging (MRI).

Jannatdoust P, Valizadeh P, Saeedi N, Valizadeh G, Salari HM, Saligheh Rad H, Gity M

Jun 1, 2025
Breast cancer continues to be a major health concern, and early detection is vital for enhancing survival rates. Magnetic resonance imaging (MRI) is a key tool due to its substantial sensitivity for invasive breast cancers. Computer-aided detection (CADe) systems enhance the effectiveness of MRI by identifying potential lesions, aiding radiologists in focusing on areas of interest, extracting quantitative features, and integrating with computer-aided diagnosis (CADx) pipelines. This review aims to provide a comprehensive overview of the current state of CADe systems in breast MRI, focusing on the technical details of pipelines and segmentation models, including classical intensity-based methods, supervised and unsupervised machine learning (ML) approaches, and the latest deep learning (DL) architectures. It highlights recent advancements from traditional algorithms to sophisticated DL models such as U-Nets, emphasizing CADe implementations for multi-parametric MRI acquisitions. Despite these advancements, CADe systems face challenges such as variable false-positive and false-negative rates, complexity in interpreting extensive imaging data, variability in system performance, and lack of large-scale studies and multicentric models, limiting the generalizability and suitability for clinical implementation. Technical issues, including image artefacts and the need for reproducible and explainable detection algorithms, remain significant hurdles. Future directions emphasize developing more robust and generalizable algorithms, integrating explainable AI to improve transparency and trust among clinicians, developing multi-purpose AI systems, and incorporating large language models to enhance diagnostic reporting and patient management. Additionally, efforts to standardize and streamline MRI protocols aim to increase accessibility and reduce costs, optimizing the use of CADe systems in clinical practice. LEVEL OF EVIDENCE: NA. TECHNICAL EFFICACY: Stage 2.
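
As a point of reference for the classical intensity-based methods mentioned above, the sketch below shows a minimal candidate-generation step for a CADe pipeline on co-registered pre- and post-contrast breast MRI volumes: threshold the relative enhancement map and keep sufficiently large connected components. The threshold and minimum-size values are illustrative assumptions, not taken from the review.

```python
"""Minimal sketch of classical intensity-based CADe candidate generation."""
import numpy as np
from scipy import ndimage

def detect_enhancing_candidates(pre, post, rel_enhancement=0.8, min_voxels=30):
    # Relative enhancement map: (post - pre) / pre, guarding against division by zero.
    enhancement = (post - pre) / np.clip(pre, 1e-6, None)
    mask = enhancement > rel_enhancement          # strongly enhancing voxels
    labels, n = ndimage.label(mask)               # connected components in 3D
    candidates = []
    for lab in range(1, n + 1):
        component = labels == lab
        if component.sum() < min_voxels:          # drop tiny, likely spurious blobs
            continue
        centroid = ndimage.center_of_mass(component)
        candidates.append({"centroid": centroid, "volume_voxels": int(component.sum())})
    return candidates
```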

Deep Learning-Based Three-Dimensional Analysis Reveals Distinct Patterns of Condylar Remodelling After Orthognathic Surgery in Skeletal Class III Patients.

Barone S, Cevidanes L, Bianchi J, Goncalves JR, Giudice A

Jun 1, 2025
This retrospective study aimed to evaluate morphometric changes in mandibular condyles of patients with skeletal Class III malocclusion following two-jaw orthognathic surgery planned using virtual surgical planning (VSP) and analysed with automated three-dimensional (3D) image analysis based on deep-learning techniques. Pre-operative (T1) and 12-18 months post-operative (T2) Cone-Beam Computed Tomography (CBCT) scans of 17 patients (mean age: 24.8 ± 3.5 years) were analysed using 3DSlicer software. Deep-learning algorithms automated CBCT orientation, registration, bone segmentation, and landmark identification. By utilising voxel-based superimposition of pre- and post-operative CBCT scans and shape correspondence, the overall changes in condylar morphology were assessed, with a focus on bone resorption and apposition at specific regions (superior, lateral and medial poles). The correlation between these modifications and the extent of actual condylar movements post-surgery was investigated. Statistical analysis was conducted with a significance level of α = 0.05. Overall condylar remodelling was minimal, with mean changes of < 1 mm. Small but statistically significant bone resorption occurred at the condylar superior articular surface, while bone apposition was primarily observed at the lateral pole. The bone apposition at the lateral pole and resorption at the superior articular surface were significantly correlated with medial condylar displacement (p < 0.05). The automated 3D analysis revealed distinct patterns of condylar remodelling following orthognathic surgery in skeletal Class III patients, with minimal overall changes but significant regional variations. The correlation between condylar displacements and remodelling patterns highlights the need for precise pre-operative planning to optimise condylar positioning, potentially minimising harmful remodelling and enhancing stability.
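
To make the regional apposition/resorption measurement concrete, the sketch below computes a signed closest-point surface change between registered pre-operative (T1) and post-operative (T2) condylar surfaces, assuming vertex coordinates and outward T1 vertex normals are available as arrays. It illustrates the general idea only and does not reproduce the study's shape-correspondence pipeline.

```python
"""Minimal sketch of signed surface-change quantification between two meshes."""
import numpy as np
from scipy.spatial import cKDTree

def signed_surface_change(t1_vertices, t1_normals, t2_vertices):
    tree = cKDTree(t2_vertices)                    # nearest-neighbour lookup on the T2 surface
    _, idx = tree.query(t1_vertices)               # closest T2 vertex for each T1 vertex
    displacement = t2_vertices[idx] - t1_vertices  # vector from T1 vertex to matched T2 point
    # Project onto the outward normal: positive = bone apposition, negative = resorption.
    return np.einsum("ij,ij->i", displacement, t1_normals)
```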

Phenotyping atherosclerotic plaque and perivascular adipose tissue: signalling pathways and clinical biomarkers in atherosclerosis.

Grodecki K, Geers J, Kwiecinski J, Lin A, Slipczuk L, Slomka PJ, Dweck MR, Nerlekar N, Williams MC, Berman D, Marwick T, Newby DE, Dey D

Jun 1, 2025
Computed tomography coronary angiography provides a non-invasive evaluation of coronary artery disease that includes phenotyping of atherosclerotic plaques and the surrounding perivascular adipose tissue (PVAT). Image analysis techniques have been developed to quantify atherosclerotic plaque burden and morphology as well as the associated PVAT attenuation, and emerging radiomic approaches can add further contextual information. PVAT attenuation might provide a novel measure of vascular health that could be indicative of the pathogenetic processes implicated in atherosclerosis such as inflammation, fibrosis or increased vascularity. Bidirectional signalling between the coronary artery and adjacent PVAT has been hypothesized to contribute to coronary artery disease progression and provide a potential novel measure of the risk of future cardiovascular events. However, despite the development of more advanced radiomic and artificial intelligence-based algorithms, studies involving large datasets suggest that the measurement of PVAT attenuation contributes only modest additional predictive discrimination to standard cardiovascular risk scores. In this Review, we explore the pathobiology of coronary atherosclerotic plaques and PVAT, describe their phenotyping with computed tomography coronary angiography, and discuss potential future applications in clinical risk prediction and patient management.
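
For readers unfamiliar with how PVAT attenuation is typically quantified, the sketch below averages CT attenuation over adipose-range voxels inside a perivascular mask. The -190 to -30 HU window follows a commonly used adipose convention and, like the function name, is an assumption rather than a value taken from this Review.

```python
"""Minimal sketch of a PVAT attenuation measurement from a CT volume."""
import numpy as np

def pvat_attenuation(ct_hu, perivascular_mask, hu_range=(-190, -30)):
    region = ct_hu[perivascular_mask.astype(bool)]
    adipose = region[(region >= hu_range[0]) & (region <= hu_range[1])]
    if adipose.size == 0:
        return float("nan")                 # no adipose-range voxels inside the mask
    return float(adipose.mean())            # mean attenuation in Hounsfield units
```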

Evaluation of a deep learning prostate cancer detection system on biparametric MRI against radiological reading.

Debs N, Routier A, Bône A, Rohé MM

Jun 1, 2025
This study aims to evaluate a deep learning pipeline for detecting clinically significant prostate cancer (csPCa), defined as Gleason Grade Group (GGG) ≥ 2, using biparametric MRI (bpMRI) and compare its performance with radiological reading. The training dataset included 4381 bpMRI cases (3800 positive and 581 negative) across three continents, with 80% annotated using PI-RADS and 20% with Gleason Scores. The testing set comprised 328 cases from the PROSTATEx dataset, including 34% positive (GGG ≥ 2) and 66% negative cases. A 3D nnU-Net was trained on bpMRI for lesion detection, evaluated using histopathology-based annotations, and assessed with patient- and lesion-level metrics, along with lesion volume and GGG. The algorithm was compared to non-expert radiologists using multi-parametric MRI (mpMRI). The model achieved an AUC of 0.83 (95% CI: 0.80, 0.87). Lesion-level sensitivity was 0.85 (95% CI: 0.82, 0.94) at 0.5 False Positives per volume (FP/volume) and 0.88 (95% CI: 0.79, 0.92) at 1 FP/volume. Average Precision was 0.55 (95% CI: 0.46, 0.64). The model showed over 0.90 sensitivity for lesions larger than 650 mm³ and exceeded 0.85 across GGGs. It had higher true positive rates (TPRs) than radiologists at equivalent FP rates, achieving TPRs of 0.93 and 0.79 compared to radiologists' 0.87 and 0.68 for PI-RADS ≥ 3 and PI-RADS ≥ 4 lesions (p ≤ 0.05). The DL model showed strong performance in detecting csPCa on an independent test cohort, surpassing radiological interpretation and demonstrating AI's potential to improve diagnostic accuracy for non-expert radiologists. However, detecting small lesions remains challenging. Question Current prostate cancer detection methods often do not involve non-expert radiologists, highlighting the need for more accurate deep learning approaches using biparametric MRI. Findings Our model outperforms radiologists significantly, showing consistent performance across Gleason Grade Groups and for medium to large lesions. Clinical relevance This AI model improves prostate cancer detection accuracy in prostate imaging, serves as a benchmark with reference performance on a public dataset, and offers public PI-RADS annotations, enhancing transparency and facilitating further research and development.
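
The lesion-level sensitivities quoted at 0.5 and 1 FP/volume are points on a FROC curve. The sketch below shows one way such a point can be read off candidate predictions already matched to ground-truth lesions; the function and variable names are illustrative and do not reproduce the paper's evaluation code.

```python
"""Minimal sketch of reading a sensitivity value off a lesion-level FROC curve."""
import numpy as np

def sensitivity_at_fp_rate(scores, is_true_positive, n_lesions, n_volumes, target_fp_per_volume):
    scores = np.asarray(scores, dtype=float)
    is_tp = np.asarray(is_true_positive, dtype=bool)
    order = np.argsort(-scores)                     # candidates in descending confidence
    tp = np.cumsum(is_tp[order])                    # cumulative true positives
    fp = np.cumsum(~is_tp[order])                   # cumulative false positives
    fp_per_volume = fp / n_volumes
    valid = fp_per_volume <= target_fp_per_volume   # thresholds within the FP budget
    if not valid.any():
        return 0.0
    return tp[valid][-1] / n_lesions                # sensitivity at the most permissive valid threshold
```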

Automated Cone Beam Computed Tomography Segmentation of Multiple Impacted Teeth With or Without Association to Rare Diseases: Evaluation of Four Deep Learning-Based Methods.

Sinard E, Gajny L, de La Dure-Molla M, Felizardo R, Dot G

Jun 1, 2025
To assess the accuracy of three commercially available and one open-source deep learning (DL) solutions for automatic tooth segmentation in cone beam computed tomography (CBCT) images of patients with multiple dental impactions. Twenty patients (20 CBCT scans) were selected from a retrospective cohort of individuals with multiple dental impactions. For each CBCT scan, one reference segmentation and four DL segmentations of the maxillary and mandibular teeth were obtained. Reference segmentations were generated by experts using a semi-automatic process. DL segmentations were automatically generated according to the manufacturer's instructions. Quantitative and qualitative evaluations of each DL segmentation were performed by comparing it with expert-generated segmentation. The quantitative metrics used were Dice similarity coefficient (DSC) and the normalized surface distance (NSD). The patients had an average of 12 retained teeth, with 12 of them diagnosed with a rare disease. DSC values ranged from 88.5% ± 3.2% to 95.6% ± 1.2%, and NSD values ranged from 95.3% ± 2.7% to 97.4% ± 6.5%. The number of completely unsegmented teeth ranged from 1 (0.1%) to 41 (6.0%). Two solutions (Diagnocat and DentalSegmentator) outperformed the others across all tested parameters. All the tested methods showed a mean NSD of approximately 95%, proving their overall efficiency for tooth segmentation. The accuracy of the methods varied among the four tested solutions owing to the presence of impacted teeth in our CBCT scans. DL solutions are evolving rapidly, and their future performance cannot be predicted based on our results.
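
For context on the two metrics reported above, the sketch below computes the Dice similarity coefficient (DSC) and a surface-based normalized surface distance (NSD) at a voxel tolerance on binary masks, assuming isotropic voxels. It follows the common surface-Dice formulation and is not the study's exact implementation.

```python
"""Minimal sketch of DSC and NSD computation on binary segmentation masks."""
import numpy as np
from scipy import ndimage

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

def surface(mask):
    # Boundary voxels: the mask minus its binary erosion.
    return mask & ~ndimage.binary_erosion(mask)

def nsd(a, b, tau_vox=1.0):
    a, b = a.astype(bool), b.astype(bool)
    sa, sb = surface(a), surface(b)
    denom = sa.sum() + sb.sum()
    if denom == 0:
        return 1.0
    # Distance from every voxel to the other segmentation's surface.
    dist_to_b = ndimage.distance_transform_edt(~sb)
    dist_to_a = ndimage.distance_transform_edt(~sa)
    overlap_a = (dist_to_b[sa] <= tau_vox).sum()    # A-surface voxels within tolerance of B
    overlap_b = (dist_to_a[sb] <= tau_vox).sum()    # B-surface voxels within tolerance of A
    return (overlap_a + overlap_b) / denom
```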

AI model using CT-based imaging biomarkers to predict hepatocellular carcinoma in patients with chronic hepatitis B.

Shin H, Hur MH, Song BG, Park SY, Kim GA, Choi G, Nam JY, Kim MA, Park Y, Ko Y, Park J, Lee HA, Chung SW, Choi NR, Park MK, Lee YB, Sinn DH, Kim SU, Kim HY, Kim JM, Park SJ, Lee HC, Lee DH, Chung JW, Kim YJ, Yoon JH, Lee JH

Jun 1, 2025
Various hepatocellular carcinoma (HCC) prediction models have been proposed for patients with chronic hepatitis B (CHB) using clinical variables. We aimed to develop an artificial intelligence (AI)-based HCC prediction model by incorporating imaging biomarkers derived from abdominal computed tomography (CT) images along with clinical variables. An AI prediction model employing a gradient-boosting machine algorithm was developed utilizing imaging biomarkers extracted by DeepFore, a deep learning-based CT auto-segmentation software. The derivation cohort (n = 5,585) was randomly divided into the training and internal validation sets at a 3:1 ratio. The external validation cohort included 2,883 patients. Six imaging biomarkers (i.e., abdominal visceral fat-to-total fat volume ratio, total fat-to-trunk volume ratio, spleen volume, liver volume, liver-to-spleen Hounsfield unit ratio, and muscle Hounsfield unit) and eight clinical variables were selected as the main variables of our model, PLAN-B-DF. In the internal validation set (median follow-up duration = 7.4 years), PLAN-B-DF demonstrated an excellent predictive performance with a c-index of 0.91 and good calibration function (p = 0.78 by the Hosmer-Lemeshow test). In the external validation cohort (median follow-up duration = 4.6 years), PLAN-B-DF showed a significantly better discrimination function compared to previous models, including PLAN-B, PAGE-B, modified PAGE-B, and CU-HCC (c-index, 0.89 vs. 0.65-0.78; all p < 0.001), and maintained a good calibration function (p = 0.42 by the Hosmer-Lemeshow test). When patients were classified into four groups according to the risk probability calculated by PLAN-B-DF, the 10-year cumulative HCC incidence was 0.0%, 0.4%, 16.0%, and 46.2% in the minimal-, low-, intermediate-, and high-risk groups, respectively. This AI prediction model, integrating deep learning-based auto-segmentation of CT images, offers improved performance in predicting HCC risk among patients with CHB compared to previous models. The novel predictive model PLAN-B-DF, employing an automated computed tomography segmentation algorithm, significantly improves predictive accuracy and risk stratification for hepatocellular carcinoma in patients with chronic hepatitis B (CHB). Using a gradient-boosting algorithm and computed tomography metrics, such as visceral fat volume and myosteatosis, PLAN-B-DF outperforms previous models based solely on clinical and demographic data. This model not only shows a higher c-index compared to previous models, but also effectively classifies patients with CHB into different risk groups. This model uses machine learning to analyze the complex relationships among various risk factors contributing to hepatocellular carcinoma occurrence, thereby enabling more personalized surveillance for patients with CHB.
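
A simplified sketch of the modelling idea (imaging biomarkers plus clinical variables feeding a gradient-boosting model) is given below, reduced to a fixed-horizon binary outcome rather than the time-to-event setting of PLAN-B-DF. The clinical variable names are illustrative placeholders, since the abstract does not list them, and data loading is assumed to happen upstream.

```python
"""Minimal sketch of a gradient-boosting HCC risk model on CT biomarkers plus clinical data."""
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

IMAGING_BIOMARKERS = [
    "visceral_to_total_fat_ratio", "total_fat_to_trunk_ratio", "spleen_volume",
    "liver_volume", "liver_spleen_hu_ratio", "muscle_hu",
]
CLINICAL_VARIABLES = ["age", "sex", "platelets", "albumin", "bilirubin",
                      "alt", "hbv_dna", "cirrhosis"]  # illustrative placeholders only

def fit_risk_model(df: pd.DataFrame, label_col: str = "hcc_within_10y"):
    X = df[IMAGING_BIOMARKERS + CLINICAL_VARIABLES]
    y = df[label_col]
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
    model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
    # Discrimination on the held-out split (AUC as a stand-in for the paper's c-index).
    auc = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
    return model, auc
```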

Incorporating Radiologist Knowledge Into MRI Quality Metrics for Machine Learning Using Rank-Based Ratings.

Tang C, Eisenmenger LB, Rivera-Rivera L, Huo E, Junn JC, Kuner AD, Oechtering TH, Peret A, Starekova J, Johnson KM

Jun 1, 2025
Deep learning (DL) often requires an image quality metric; however, widely used metrics are not designed for medical images. To develop an image quality metric that is specific to MRI using radiologists' image rankings and DL models. Retrospective. A total of 19,344 rankings on 2916 unique image pairs from the NYU fastMRI Initiative neuro database were used for training the neural network-based image quality metrics, with an 80%/20% training/validation split and fivefold cross-validation. 1.5 T and 3 T T1, T1 postcontrast, T2, and fluid-attenuated inversion recovery (FLAIR). Synthetically corrupted image pairs were ranked by radiologists (N = 7), with a subset also scoring images using a Likert scale (N = 2). DL models were trained to match rankings using two architectures (EfficientNet and IQ-Net), with and without reference image subtraction, and compared to ranking based on mean squared error (MSE) and structural similarity (SSIM). Image quality assessing DL models were evaluated as alternatives to MSE and SSIM as optimization targets for DL denoising and reconstruction. Radiologists' agreement was assessed by a percentage metric and quadratic weighted Cohen's kappa. Ranking accuracies were compared using repeated measures analysis of variance. Reconstruction models trained with IQ-Net score, MSE, and SSIM were compared by paired t test. P < 0.05 was considered significant. Compared to direct Likert scoring, ranking produced a higher level of agreement between radiologists (70.4% vs. 25%). Image ranking was subjective, with a high level of intraobserver agreement (94.9% ± 2.4%) and lower interobserver agreement (61.47% ± 5.51%). IQ-Net and EfficientNet accurately predicted rankings with a reference image (75.2% ± 1.3% and 79.2% ± 1.7%). However, EfficientNet resulted in images with artifacts and high MSE when used in denoising tasks, while IQ-Net-optimized networks performed well for both denoising and reconstruction tasks. Image quality networks can be trained from image ranking and used to optimize DL tasks. LEVEL OF EVIDENCE: 3. TECHNICAL EFFICACY: Stage 1.
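
The rank-based training described above can be implemented with a pairwise (Bradley-Terry-style) loss on score differences. The sketch below illustrates the idea with a placeholder convolutional scorer; it does not reproduce the IQ-Net or EfficientNet architectures used in the study.

```python
"""Minimal sketch of training an image-quality scorer from pairwise rankings."""
import torch
import torch.nn as nn

class QualityScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, 1)            # scalar quality score per image

    def forward(self, x):
        return self.head(self.features(x)).squeeze(-1)

def ranking_loss(model, preferred, non_preferred):
    # P(preferred ranked higher) = sigmoid(score difference); maximise its log-likelihood.
    diff = model(preferred) - model(non_preferred)
    return nn.functional.binary_cross_entropy_with_logits(diff, torch.ones_like(diff))
```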

Dual Energy CT for Deep Learning-Based Segmentation and Volumetric Estimation of Early Ischemic Infarcts.

Kamel P, Khalid M, Steger R, Kanhere A, Kulkarni P, Parekh V, Yi PH, Gandhi D, Bodanapally U

Jun 1, 2025
Ischemic changes are not visible on non-contrast head CT until several hours after infarction, though deep convolutional neural networks have shown promise in the detection of subtle imaging findings. This study aims to assess if dual-energy CT (DECT) acquisition can improve early infarct visibility for machine learning. The retrospective dataset consisted of 330 DECTs acquired up to 48 h prior to confirmation of a DWI-positive infarct on MRI between 2016 and 2022. Infarct segmentation maps were generated from the MRI and co-registered to the CT to serve as ground truth for segmentation. A self-configuring 3D nnU-Net was trained for segmentation on (1) standard 120 kV mixed images, (2) 190 keV virtual monochromatic images, and (3) 120 kV + 190 keV images as dual-channel inputs. Algorithm performance was assessed with Dice scores, compared with paired t-tests on a test set. Global aggregate Dice scores were 0.616, 0.645, and 0.665 for standard 120 kV images, 190 keV images, and combined channel inputs, respectively. Differences in overall Dice scores were statistically significant, with the highest performance for combined channel inputs (p < 0.01). Small but statistically significant differences were observed for infarcts between 6 and 12 h from last-known-well, with higher performance for larger infarcts. Volumetric accuracy trended higher with combined inputs, but differences were not statistically significant (p = 0.07). Supplementation of standard head CT images with dual-energy data provides earlier and more accurate segmentation of infarcts for machine learning, particularly between 6 and 12 h after last-known-well.
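
The dual-channel configuration amounts to stacking the 120 kV mixed image and the 190 keV virtual monochromatic image along a channel axis before feeding the segmentation network. A minimal sketch is below, with an assumed, purely illustrative HU window for normalization.

```python
"""Minimal sketch of assembling a two-channel dual-energy CT input volume."""
import numpy as np

def make_dual_channel_input(mixed_120kv, vmi_190kev, hu_window=(0.0, 100.0)):
    def normalize(vol):
        lo, hi = hu_window                          # clipping window (assumed, not from the paper)
        vol = np.clip(vol.astype(np.float32), lo, hi)
        return (vol - lo) / (hi - lo)
    # Channel axis first, as expected by most 3D segmentation frameworks.
    return np.stack([normalize(mixed_120kv), normalize(vmi_190kev)], axis=0)
```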

Radiomics-driven spectral profiling of six kidney stone types with monoenergetic CT reconstructions in photon-counting CT.

Hertel A, Froelich MF, Overhoff D, Nestler T, Faby S, Jürgens M, Schmidt B, Vellala A, Hesse A, Nörenberg D, Stoll R, Schmelz H, Schoenberg SO, Waldeck S

Jun 1, 2025
Urolithiasis, a common and painful urological condition, is influenced by factors such as lifestyle, genetics, and medication. Differentiating between different types of kidney stones is crucial for personalized therapy. The purpose of this study is to investigate the use of photon-counting computed tomography (PCCT) in combination with radiomics and machine learning to develop a method for automated and detailed characterization of kidney stones. This approach aims to enhance the accuracy and detail of stone classification beyond what is achievable with conventional computed tomography (CT) and dual-energy CT (DECT). In this ex vivo study, 135 kidney stones were first classified using infrared spectroscopy. All stones were then scanned in a PCCT embedded in a phantom. Various monoenergetic reconstructions were generated, and radiomics features were extracted. Statistical analysis was performed using Random Forest (RF) classifiers for both individual reconstructions and a combined model. The combined model, using radiomics features from all monoenergetic reconstructions, significantly outperformed individual reconstructions and SPP parameters, with an AUC of 0.95 and test accuracy of 0.81 for differentiating all six stone types. Feature importance analysis identified key parameters, including NGTDM_Strength and wavelet-LLH_firstorder_Variance. This ex vivo study demonstrates that radiomics-driven PCCT analysis can improve differentiation between kidney stone subtypes. The combined model outperformed individual monoenergetic levels, highlighting the potential of spectral profiling in PCCT to optimize treatment through image-based strategies. Question How can photon-counting computed tomography (PCCT) combined with radiomics improve the differentiation of kidney stone types beyond conventional CT and dual-energy CT, enhancing personalized therapy? Findings Our ex vivo study demonstrates that a combined spectral-driven radiomics model achieved 95% AUC and 81% test accuracy in differentiating six kidney stone types. Clinical relevance Implementing PCCT-based spectral-driven radiomics allows for precise non-invasive differentiation of kidney stone types, leading to improved diagnostic accuracy and more personalized, effective treatment strategies, potentially reducing the need for invasive procedures and recurrence.
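
The combined model described above corresponds to concatenating per-stone radiomics feature tables from each monoenergetic reconstruction and training a single Random Forest on the pooled features. A minimal sketch under that assumption is shown below; feature extraction (e.g., with PyRadiomics) is assumed to have happened upstream, and the energy levels and column names are illustrative.

```python
"""Minimal sketch of a combined spectral radiomics model with a Random Forest."""
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def combined_spectral_model(feature_tables: dict[int, pd.DataFrame], labels: pd.Series):
    # feature_tables maps a keV level to a (stones x features) table sharing a common index.
    combined = pd.concat(
        {f"{kev}keV": table for kev, table in feature_tables.items()}, axis=1
    )
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    # Multi-class AUC (one-vs-rest) estimated over cross-validation folds.
    scores = cross_val_score(clf, combined, labels, cv=5, scoring="roc_auc_ovr")
    return clf.fit(combined, labels), scores.mean()
```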