Page 1 of 603 (6030 results)

Benvenuto M, Bologna M, Fortunati A, Perazzo C, Cellina M, Cè M, Rubiu G, Martini I, Sala D, Di Palma L, Fazzini D, Alba S, Papa S, Alì M

pubmed logopapers · Oct 25 2025
This study aimed to develop and evaluate an artificial intelligence (AI) framework for detecting dental restorations and prosthetic devices on panoramic radiographs (PRs). Detecting these elements is essential for enhancing automated reporting, improving the accuracy of dental assessments, and reducing manual examination time. A Fast Region-Based Convolutional Neural Network (Fast R-CNN) was trained using 186 PRs for the training set and 42 for validation. The model's performance was assessed on an external test dataset of 1133 PRs. Seven categories of dental restorations and prosthetic devices were targeted: appliance, bridge, endodontic filling, crown filling, implant, retainer, and single crown. Precision, recall, and F1-score were calculated for each element to measure detection accuracy. The AI framework achieved high performance across all categories, with precision, recall, and F1-scores as follows: appliance (0.79, 0.96, 0.87), bridge (0.91, 0.86, 0.89), endodontic filling (0.98, 0.98, 0.98), crown filling (0.95, 0.95, 0.95), implant (0.99, 0.97, 0.98), retainer (0.98, 0.98, 0.98), and single crown (0.94, 0.96, 0.95). The system processes one panoramic image in under 30 seconds. The AI framework demonstrated high recall and efficiency in detecting dental prostheses and other restorations on PRs. Its application could significantly streamline dental diagnostics and automated reporting, enhancing both the speed and accuracy of dental assessments. This study highlights the potential of AI in automating the detection of multiple dental restorations and prostheses on PRs, offering a valuable tool for dental professionals to improve diagnostic workflows.
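For readers implementing similar per-category evaluations, the precision, recall, and F1 values reported above reduce to simple ratios of true positives, false positives, and false negatives. A minimal sketch (the counts below are illustrative only, not from the study):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from per-category detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical counts for one category (e.g. implants):
p, r, f1 = detection_metrics(tp=98, fp=2, fn=2)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.98 0.98 0.98
```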

Yang Y, Zhong C, Ma R, Zhang X, Guo Y, Li G, Li J

pubmed logopapers · Oct 25 2025
Reliable cancellous bone segmentation in Cone Beam CT (CBCT) images is essential for post-orthognathic assessment of condylar resorption. However, challenges such as edge blurring and low contrast in CBCT images make effective segmentation difficult. This study aims to overcome these issues, providing a foundation for accurate bone quantification to enhance surgical planning and patient outcomes. We propose a novel approach to enhance edge-based segmentation of cancellous bone in CBCT images. By incorporating edge features from the cancellous bone region and using cancellous edge localization as an auxiliary task in a Dual-Branch Fusion Network (DBF-Net), our model shares feature parameters across tasks to improve segmentation accuracy and robustness. Our DBF-Net outperformed other models, achieving a Dice coefficient of 91.48% and reducing the 95% Hausdorff distance to 3.88 mm, a significant improvement in cancellous bone boundary detection that is crucial for the post-orthognathic assessment of condylar resorption. This method provides a robust solution for reliable cancellous bone segmentation in CBCT images to support the quantitative assessment of condylar resorption.
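The Dice coefficient reported above measures overlap between predicted and reference masks. A minimal pure-Python sketch on flattened binary masks (toy data, not the study's):

```python
def dice(pred, target):
    """Dice similarity coefficient for two flat binary masks:
    2 * |intersection| / (|pred| + |target|)."""
    assert len(pred) == len(target)
    inter = sum(p & t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2 * inter / total if total else 1.0

pred   = [1, 1, 1, 0, 0, 1]
target = [1, 1, 0, 0, 1, 1]
print(dice(pred, target))  # 3 shared voxels over 4 + 4 -> 0.75
```

In practice the masks are whole 3D volumes, but the formula is unchanged after flattening.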

Du L, Cheng J, Shen C, Cheng J, Yang G, He C, Xu P, Lin W, Liu L, Hu X, Huang J, Pang Y, Xu G, Guo J, Zhu Y, Wang H

pubmed logopapers · Oct 25 2025
A deep learning model integrating CT radiomics and clinical features was developed to predict perioperative complications and risk grade in patients undergoing partial nephrectomy, and was compared to traditional anatomical classification models. Between June 2014 and July 2024, 1214 patients diagnosed with renal cell carcinoma or renal cysts who underwent partial nephrectomy were included. A deep learning model incorporating CT radiomics (segmented by nnU-Net and extracted by pyradiomics) and clinical features was developed. Logistic regression models using RENAL or PADUA scores were also developed for comparison. An external validation cohort (n = 260) was used to assess the model's generalizability. In predicting complications, the deep learning model achieved an area under the curve (AUC) of 0.87 (95% CI: 0.80-0.93), outperforming the RENAL (0.68, 95% CI: 0.60-0.70) (p < 0.001) and PADUA models (0.69, 95% CI: 0.55-0.71) (p < 0.001). For risk grades, the deep learning model outperformed the RENAL/PADUA models for the no-risk group (AUC = 0.83 [95% CI: 0.81-0.87] vs. 0.68 [95% CI: 0.58-0.71], p = 0.01; 0.66 [95% CI: 0.65-0.67], p < 0.001) and the low-risk group (AUC = 0.79 [95% CI: 0.75-0.82] vs. 0.64 [95% CI: 0.60-0.74], p = 0.03; 0.66 [95% CI: 0.63-0.73], p = 0.04). However, no significant differences were found for the moderate- and high-risk groups (p > 0.05). In the external validation cohort, the model achieved a prediction accuracy of 0.854 and an AUC of 0.83. The CT-based deep learning model showed superior performance in predicting complications and risk grades for no-risk and low-risk patients undergoing partial nephrectomy. No significant differences were found for the moderate- and high-risk groups.
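The AUCs being compared above have a direct probabilistic reading: the chance that a randomly chosen patient who had a complication received a higher predicted risk than one who did not. A minimal sketch using that pairwise (Mann-Whitney) definition, with toy scores rather than study data:

```python
def auc(pos_scores, neg_scores):
    """AUC as P(score_pos > score_neg), counting ties as 0.5."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Toy example: 3 patients with complications, 3 without.
print(round(auc([0.9, 0.8, 0.6], [0.7, 0.4, 0.3]), 2))  # 8 of 9 pairs -> 0.89
```

Comparing two such AUCs on the same patients is what DeLong-style tests formalize; this sketch covers only the point estimate.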

Huang Q, Mo J, Deng X, Jiang Y, Yang M, Wang G, Yang X, Yu J, Fan W, Wang Y

pubmed logopapers · Oct 25 2025
NHOC and NHOP, defined as the normalized distances from peak uptake to the tumour centroid and perimeter, respectively, are novel PET/CT metrics of tumour aggressiveness. This two-centre study assessed baseline NHOC/NHOP for predicting lymph node metastasis (LNM) in non-small cell lung cancer (NSCLC), and then developed and validated an interpretable machine learning model combining clinical data, NHOC/NHOP, and PET radiomics for LNM and occult nodal metastasis (ONM) prediction. 342 patients from two centres underwent 18F-FDG PET/CT scans, and data were divided into training (n = 188), internal (n = 63), and external (n = 91) sets. NHOC/NHOP and 284 radiomics features were extracted using LIFEx software. These features were normalized using Z-scores and harmonized via ComBat. To avoid single-algorithm bias, eight machine-learning models were trained on the optimal radiomics features. The best-performing algorithm was used to develop four predictive models: clinical, NHOC/NHOP, radiomics, and a combined model. Shapley Additive Explanations (SHAP) values were used to interpret model contributions. The key independent predictors were PD-L1 value, lesion size, and the novel biomarker NHOC, establishing the clinical model (PD-L1 and size) and the NHOC model. The multi-layer perceptron (MLP) classifier achieved the highest area under the curve (AUC) (0.82, 95% CI: 0.69-0.92). For LNM prediction, the combined model demonstrated superior performance across the training (AUC 0.852), internal test (AUC 0.822), and external test (AUC 0.885) sets, significantly outperforming the clinical and NHOC models (p < 0.05). For ONM prediction, the combined model achieved an AUC of 0.85 on the full dataset. SHAP analysis highlighted key features such as GLCM_InverseVariance-PET and NGTDM_Strength-CT. A nomogram and online calculator were developed, with decision-curve analysis confirming superior net clinical benefit.
This study established an accurate, interpretable machine learning model for preoperative prediction of LNM and ONM in NSCLC. NHOC emerged as a novel predictor independent of classical PET parameters.
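The Z-score normalization step mentioned in the pipeline above standardizes each radiomics feature across patients before modeling (ComBat harmonization is a separate, scanner-effect correction not sketched here). A minimal version:

```python
from statistics import mean, stdev

def zscore(values):
    """Z-score normalize one feature column across patients:
    subtract the mean, divide by the (sample) standard deviation."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Toy feature column for five patients:
z = zscore([1.0, 2.0, 3.0, 4.0, 5.0])
print([round(v, 2) for v in z])  # symmetric around 0
```

In a real pipeline the training-set mean and standard deviation would be stored and reused on the test sets to avoid leakage.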

Seymour T, Jazayeri SB, Ghozy S, Rinaldo L, Kadirvel R

pubmed logopapers · Oct 25 2025
High-resolution imaging is critical for the diagnosis, treatment planning, and postoperative monitoring of cerebral aneurysms, which affect up to 5% of the population and pose a significant risk of rupture and subarachnoid hemorrhage. Surgical clipping remains a definitive treatment option, but metallic clips can introduce substantial imaging artifacts, complicating posttreatment assessment. This review synthesizes current knowledge on the impact of aneurysm clip materials and designs on artifact generation and explores strategies for artifact mitigation. Conventional materials like titanium are favored for their biocompatibility and reduced ferromagnetism but still cause beam hardening, streak artifacts, and signal loss in CT and MRI scans. Emerging alternatives, including ceramics, composites, polymers, and bioresorbable clips, show promise in reducing artifacts while maintaining mechanical reliability. Innovations in clip design, such as fenestrated or low-profile models, further aid in minimizing imaging distortion. Advanced imaging methods, including dual-energy CT, iterative reconstruction algorithms, and metal artifact reduction software, demonstrate significant improvements in image quality but may introduce limitations such as increased processing demands or subtle anatomical distortions. Future directions emphasize the development of next-generation clip materials, robotic-assisted surgical approaches, and artificial intelligence-driven reconstruction techniques to further optimize visualization and patient safety. Continued research and multidisciplinary collaboration will be essential to translate these innovations into routine neurosurgical practice.

Gan Y, Chen Z, Zou E, Cheng C, Guan W, Shen Z, Wang L, Lin J, Wang Y, Zhao X, Zhang Z, Wang Y, Wu L, Zhou B, Liang X, Chen G

pubmed logopapers · Oct 24 2025
Early recurrence (ER) of intrahepatic cholangiocarcinoma (ICC) after curative hepatectomy correlates with dismal prognosis. We hypothesized that body composition radiomics reflecting systemic metabolic-immunologic status could enhance ER prediction. This multi-center study aimed to develop and validate integrated radiomics-clinical machine learning (RCML) models for postoperative ER risk stratification. In this retrospective study, 258 ICC patients (2011-2022) from three institutions who underwent curative resection were enrolled. Body composition features were extracted from preoperative contrast-enhanced CT (L3 level). After minimum redundancy maximum relevance (mRMR) feature selection, radiomics-based ML (RML) models were constructed. Integrated RCML models combined radiomic features with clinical variables. Six ML algorithms were employed, and performance was assessed by area under the receiver operating characteristic curve (AUC) with five-fold cross-validation and external testing. ER occurred in 134 patients (52%). The optimal RML model achieved an AUC of 0.82 with 15 selected features, outperforming clinical-only models (mean AUC 0.72). The support vector machine (SVM)-based RCML models demonstrated superior performance (training AUC 0.86; external validation AUC 0.84). The RCML model achieved balanced classification metrics (sensitivity 0.80, specificity 0.87, F1-score 0.82), indicating robust generalizability. Statistical differences between SVM models were validated using DeLong's test. All best-performing models significantly stratified high- and low-risk groups with divergent survival (log-rank P < 0.001). Integration of body composition radiomics and clinical factors in RCML models significantly improves ER prediction for resected ICC, enabling clinically actionable risk stratification.
This approach leverages routinely acquired preoperative CT to quantify metabolic-immunologic derangements, providing opportunities for personalized surveillance protocols targeting high-risk patients.
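The mRMR selection step above greedily trades relevance to the outcome against redundancy with features already chosen. A toy sketch using Pearson correlation as the association measure; the feature names and data below are hypothetical, chosen only to show a redundant duplicate being skipped:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def mrmr(features, target, k):
    """Greedy mRMR: at each step pick the feature maximizing
    |corr with target| - mean |corr with already-selected features|."""
    selected, remaining = [], list(features)
    while remaining and len(selected) < k:
        def score(name):
            rel = abs(pearson(features[name], target))
            red = (mean(abs(pearson(features[name], features[s]))
                        for s in selected) if selected else 0.0)
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

features = {
    "size":     [0, 0, 0, 1, 1, 1, 1, 1],
    "size_dup": [0, 0, 0, 1, 1, 1, 1, 1],  # redundant exact duplicate
    "texture":  [0, 1, 0, 1, 0, 1, 1, 1],  # weaker but complementary
}
target = [0, 0, 0, 0, 1, 1, 1, 1]
print(mrmr(features, target, k=2))  # skips the duplicate in favor of 'texture'
```

Real implementations typically use mutual information rather than correlation, but the greedy relevance-minus-redundancy structure is the same.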

Chen H, Li Y, Zhang J, Yang L, Sun Y, Chen Y, Zhou S, Li Z, Qian X, Xu Q, Shen D

pubmed logopapers · Oct 24 2025
Recently, numerous deep learning models have been proposed for breast cancer diagnosis using multimodal multi-view ultrasound images. However, their performance can degrade when interactions between different modalities and views are overlooked. Moreover, existing methods struggle to handle cases where certain modalities or views are missing, which limits their clinical applications. To address these issues, we propose a novel Alignment and Imputation Network (AINet) integrating 1) alignment and imputation pre-training, and 2) hierarchical fusion fine-tuning. Specifically, in the pre-training stage, cross-modal contrastive learning is employed to align features across different modalities, effectively capturing inter-modal interactions. To simulate missing-modality (view) scenarios, we randomly mask out features and then impute them by leveraging inter-modal and inter-view relationships. Following the clinical diagnosis procedure, the subsequent fine-tuning stage further incorporates modality-level and view-level fusion in a hierarchical manner. The proposed AINet is developed and evaluated on three datasets comprising 15,223 subjects in total. Experimental results demonstrate that AINet significantly outperforms state-of-the-art methods, particularly in handling missing modalities (views). This highlights its robustness and potential for real-world clinical applications.
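The masking-and-imputation idea can be pictured with a much simpler stand-in: randomly drop whole views, then fill each missing view from the ones still present. Here the fill is element-wise averaging, whereas AINet learns this mapping from inter-modal and inter-view relationships; view names and data are hypothetical:

```python
import random

def mask_and_impute(views, p_mask=0.5, seed=0):
    """Randomly mask whole views, then impute each masked view as the
    element-wise mean of the views that remain (a crude stand-in for
    a learned relational imputation)."""
    rng = random.Random(seed)
    present = {k: v for k, v in views.items() if rng.random() > p_mask}
    if not present:  # always keep at least one view
        k = next(iter(views))
        present = {k: views[k]}
    imputed = {}
    for k, v in views.items():
        if k in present:
            imputed[k] = v
        else:
            cols = zip(*present.values())
            imputed[k] = [sum(c) / len(present) for c in cols]
    return imputed

views = {"bmode": [1.0, 2.0], "doppler": [3.0, 4.0], "elastography": [9.0, 9.0]}
print(mask_and_impute(views))
```

During pre-training this kind of corruption gives the network a reconstruction target, so it learns to recover a missing view from the others.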

Huang C, Thakore NL, Shen Y, Rasromani EK, Saba BA, Levine JM, Jacobi SM, Chen R, Pan H, Kang SK

pubmed logopapers · Oct 24 2025
To evaluate the association of patient characteristics, community-level social determinants of health, and cyst risk categories with completion of follow-up recommendations for incidental pancreatic cystic lesions (PCLs). We retrospectively identified consecutive patients (2013-2023) whose MRI radiology reports described PCLs. A fine-tuned LLaMA-3.1 8B Instruct large language model was used to extract PCL features. Lesions were classified using the 2017 ACR white paper: Category 1 (low risk), Category 2 (worrisome features), or Category 3 (high-risk stigmata). We recorded demographics and follow-up imaging or endoscopic ultrasound dates. Community-level factors were characterized by the 2020 CDC Social Vulnerability Index (SVI), stratified into quartiles. The primary outcome, "inappropriate follow-up," combined late and no follow-up. Multivariable binomial regression was applied to evaluate associations with inappropriate follow-up. In 7,745 patients (mean age 66.3 years; 4,796 women), 92.9% (7,198/7,745) of cysts were Category 1, 6.4% (498/7,745) were Category 2, and 0.6% (49/7,745) were Category 3. Only 36.3% of patients completed appropriate follow-up, 12.1% were late, and 51.6% were lost to follow-up. Inappropriate follow-up was high in every cyst category: 64.2% in Category 1, 59.4% in Category 2, and 49.0% in Category 3. In multivariable analysis, non-English primary language (RR 1.08; 95% CI, 1.02-1.14) and residence in communities in the third quartile of the socioeconomic SVI subcategory (RR 1.07; 95% CI, 1.02-1.12) were associated with inappropriate follow-up. Higher age-adjusted Charlson Comorbidity Index (CCI ≥ 4) (RR 0.84; 95% CI, 0.79-0.88), CCI 2-3 (RR 0.84; 95% CI, 0.79-0.88), and higher-risk cysts in patients under 65 years of age (RR 0.76; 95% CI, 0.65-0.89) were associated with completed follow-up. Follow-up completion for incidental PCLs was low.
Factors most consistently associated with follow-up completion were language barriers, residence in socioeconomically vulnerable communities, age-adjusted CCI, and higher-risk features among those under 65 years.
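The risk ratios (RR) reported above come from multivariable binomial regression; the unadjusted version of the same quantity is just a ratio of event proportions between two groups. A minimal sketch with hypothetical counts, chosen only to echo (not reproduce) an RR of 1.08:

```python
def relative_risk(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """Unadjusted risk ratio: P(event | exposed) / P(event | reference)."""
    risk_exposed = exposed_events / exposed_total
    risk_reference = unexposed_events / unexposed_total
    return risk_exposed / risk_reference

# Hypothetical: 540/1000 inappropriate follow-up in the exposed group
# vs. 500/1000 in the reference group.
print(relative_risk(540, 1000, 500, 1000))  # -> 1.08
```

The study's estimates additionally adjust for the other covariates, which a raw 2x2 ratio cannot do.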

Nazem-Zadeh MR, Chang RS, Barnard S, Pardoe HR, Kuzniecky R, Mishra D, Kamkar H, Nhu D, Metha D, Thom D, Chen Z, Ge Z, O'Brien TJ, Sinclair B, French J, Law M, Kwan P

pubmed logopapers · Oct 24 2025
Antiseizure medications (ASMs) are the first-line treatment for epilepsy, yet they fail to control seizures in about 40% of patients, and individual response to treatment is unpredictable. This study aimed to develop and validate artificial intelligence (AI) models using clinical and brain magnetic resonance imaging (MRI) data to predict responses to the first two ASMs in people with epilepsy. People with recently diagnosed epilepsy treated with ASM monotherapy at the Alfred Hospital, Melbourne, Australia, formed the development cohort. We developed AI models employing various combinations of clinical features, prescribed ASMs, and brain multimodal MRI images/features to predict the probability of seizure freedom at 12 months while taking the first or second ASM monotherapy. Five-fold internal cross-validation was performed. External validation was conducted on a validation cohort comprising participants of the Human Epilepsy Project. The development cohort included 154 individuals (36% female, 85% focal epilepsy), of whom 29% had received both the first and second ASM monotherapy. The validation cohort included 301 individuals (61% female, all focal epilepsy), of whom 33% had received both the first and second ASM monotherapy. A fusion deep learning (DL) model comprising an 18-layer 3D videoResNet (for multi-sequence MRI data), a transformer encoder (for ASM regimens), and a dual linear neural network (for clinical characteristics) outperformed other models. It achieved an internal cross-validation F1 score of 0.75 ± 0.05 (average ± 95% confidence interval), higher than other machine learning (ML) models and DL models with less complex architectures or fewer integrated imaging sequences. This DL model significantly outperformed the best ML model on the validation cohort (p < 0.001). AI-based models incorporating brain MRI, clinical, and medication data can efficiently predict seizure freedom in recently diagnosed epilepsy.
They may enhance treatment selection in epilepsy and offer a foundation for clinical decision support systems. Further validation in larger cohorts is warranted.
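The five-fold internal validation used above partitions patients into disjoint folds, training on four and testing on the held-out fifth. A minimal index-splitting sketch (the shuffle seed is arbitrary):

```python
import random

def kfold_indices(n, k=5, seed=42):
    """Return k (train, test) index-list pairs for k-fold cross-validation.
    Every index appears in exactly one test fold."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # interleaved slices of the shuffle
    return [(sorted(set(idx) - set(f)), sorted(f)) for f in folds]

splits = kfold_indices(10, k=5)
print(len(splits), len(splits[0][0]), len(splits[0][1]))  # 5 folds, 8 train / 2 test
```

With stratification (not shown), the shuffle would additionally balance outcome labels across folds, which matters for imbalanced cohorts.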

Saha D, Mandal A, Das AK, Bhattacharya A

pubmed logopapers · Oct 24 2025
Detecting ovarian structures in ultrasound images is essential in gynecological and reproductive medicine. An automated detection system can serve as a valuable tool for physicians and assist in complex ultrasound interpretations. This study presents a CNN-based object detector designed to segment and count follicle regions in ovarian ultrasound images. Automated identification of ovarian follicles can aid in diagnosing conditions such as infertility, polycystic ovarian syndrome (PCOS), ovarian cancer, and other reproductive health issues. The proposed model, Multi-Attention Residual Dilated UNet with Squeeze and Excitation (MARDSE-UNet), integrates residual UNet, dilated UNet, and squeeze-and-excitation blocks to enhance follicle detection performance. MARDSE-UNet achieved strong results in follicle detection under 5-fold cross-validation: 98.69% accuracy, 97.89% precision, 97.7% recall, an F1-score of 86.97%, and an Intersection over Union (IoU) of 95.66%. The USOVA3D dataset was used for experimentation. The model also incorporates a novel preprocessing stage to address noise and low-contrast issues, as well as a post-processing stage to refine edges and extract features such as the area, perimeter, and diameter of follicles for a more comprehensive performance comparison. The proposed model outperformed traditional CNN models and other state-of-the-art methods in comparative evaluations.
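The IoU reported above is the intersection of predicted and reference follicle masks divided by their union. A minimal sketch on flat binary masks (toy data):

```python
def iou(pred, target):
    """Intersection over Union for two flat binary masks."""
    inter = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return inter / union if union else 1.0

print(iou([1, 1, 0, 1], [1, 0, 1, 1]))  # 2 shared pixels over 4 covered -> 0.5
```

IoU penalizes disagreement more heavily than the Dice coefficient; the two are related by Dice = 2 * IoU / (1 + IoU).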
