Page 78 of 2232230 results

Determination of Kennedy's classification in panoramic X-rays by automated tooth labeling.

Meine H, Metzger MC, Weingart P, Wüster J, Schmelzeisen R, Rörich A, Georgii J, Brandenburg LS

PubMed · Jun 24 2025
Panoramic X-rays (PX) are extensively utilized in dental and maxillofacial diagnostics, offering comprehensive imaging of teeth and surrounding structures. This study investigates the automatic determination of Kennedy's classification in partially edentulous jaws. A retrospective study involving 209 PX images from 206 patients was conducted. The established Mask R-CNN, a deep learning-based instance segmentation model, was trained for the automatic detection, position labeling (according to the Fédération Dentaire Internationale (FDI) scheme), and segmentation of teeth in PX. Subsequent post-processing steps filter duplicate outputs by position label and by geometric overlap. Finally, a rule-based determination of Kennedy's class of partially edentulous jaws was performed. In a fivefold cross-validation, Kennedy's classification was correctly determined in 83.0% of cases, with the most common errors arising from the mislabeling of morphologically similar teeth. The underlying algorithm demonstrated high sensitivity (97.1%) and precision (98.1%) in tooth detection, with an F1 score of 97.6%. FDI position label accuracy was 94.7%. Ablation studies indicated that post-processing steps, such as duplicate filtering, significantly improved algorithm performance. Our findings show that automatic dentition analysis in PX images can be extended to include clinically relevant jaw classification, reducing the workload associated with manual labeling and classification.
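The rule-based final step can be sketched from the standard Kennedy scheme (Class I: bilateral free-end; Class II: unilateral free-end; Class III: bounded edentulous space; Class IV: a single space crossing the midline). The handling of third molars and edge cases below is an assumption for illustration, not taken from the paper:

```python
def kennedy_class(present_fdi, jaw="upper"):
    """Kennedy class (1-4) from the FDI numbers of detected teeth, e.g.
    {11, 12, 13, 21, 22}. Third molars (position 8) are ignored, a common
    convention; returns None for a fully dentate arch."""
    q_right, q_left = (1, 2) if jaw == "upper" else (4, 3)
    right = {t % 10 for t in present_fdi if t // 10 == q_right and t % 10 <= 7}
    left = {t % 10 for t in present_fdi if t // 10 == q_left and t % 10 <= 7}

    # Walk the arch from the right 2nd molar to the left 2nd molar.
    arch = [("R", p) for p in range(7, 0, -1)] + [("L", p) for p in range(1, 8)]
    present = [(p in right) if side == "R" else (p in left) for side, p in arch]

    # Collect contiguous edentulous spans as (start, end) index pairs.
    spans, i = [], 0
    while i < len(present):
        if not present[i]:
            j = i
            while j < len(present) and not present[j]:
                j += 1
            spans.append((i, j - 1))
            i = j
        else:
            i += 1
    if not spans:
        return None

    n = len(present)
    distal = [s for s in spans if s[0] == 0 or s[1] == n - 1]
    if len(distal) >= 2 or (distal and distal[0] == (0, n - 1)):
        return 1  # bilateral distal extension
    if len(distal) == 1:
        return 2  # unilateral distal extension
    # Class IV: one bounded span crossing the midline (indices 6/7 = R1/L1).
    if len(spans) == 1 and spans[0][0] <= 6 and spans[0][1] >= 7:
        return 4
    return 3  # bounded edentulous space(s)
```

A dentition missing only the left premolars and molars, for instance, maps to Class II, while missing both central incisors maps to Class IV.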

A Multicentre Comparative Analysis of Radiomics, Deep-learning, and Fusion Models for Predicting Postpartum Hemorrhage.

Zhang W, Zhao X, Meng L, Lu L, Guo J, Cheng M, Tian H, Ren N, Yin J, Zhang X

PubMed · Jun 24 2025
This study compared the capabilities of two-dimensional (2D) and three-dimensional (3D) deep learning (DL), radiomics, and fusion models to predict postpartum hemorrhage (PPH), using sagittal T2-weighted MRI images. This retrospective study successively included 581 pregnant women suspected of placenta accreta spectrum (PAS) disorders who underwent placental MRI assessment between May 2018 and June 2024 in two hospitals. Clinical information was collected, and MRI images were analyzed by two experienced radiologists. The study cohort was divided into training (hospital 1, n=470) and validation (hospital 2, n=160) sets. Radiomics features were extracted after image segmentation to develop the radiomics model, 2D and 3D DL models were developed, and two fusion strategies (early and late fusion) were used to construct the fusion models. ROC curves, AUC, sensitivity, specificity, calibration curves, and decision curve analysis were used to evaluate the models' performance. The late-fusion model (DLRad_LF) yielded the highest performance, with AUCs of 0.955 (95% CI: 0.935-0.974) and 0.898 (95% CI: 0.848-0.949) in the training and validation sets, respectively. In the validation set, the AUC of the 3D DL model was significantly larger than those of the radiomics (AUC=0.676, P<0.001) and 2D DL (AUC=0.752, P<0.001) models. Subgroup analysis found that placenta previa and PAS did not impact the models' performance significantly. The DLRad_LF model could predict PPH reasonably accurately based on sagittal T2-weighted MRI images.
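The two fusion strategies named in the abstract can be sketched in miniature. The paper's exact combination rules are not given here, so concatenation (early fusion) and a weighted mean of per-model probabilities (late fusion) are used as stand-ins:

```python
def early_fusion(feature_sets):
    """Early fusion: concatenate per-modality feature vectors (e.g. radiomics
    and DL embeddings) into one input for a single downstream classifier."""
    return [x for features in feature_sets for x in features]

def late_fusion(probs, weights=None):
    """Late fusion: combine per-model predicted probabilities after each model
    has run independently; here a (weighted) average, an assumption."""
    if weights is None:
        weights = [1.0 / len(probs)] * len(probs)
    return sum(w * p for w, p in zip(weights, probs))
```

For one patient, `late_fusion([0.9, 0.7, 0.8])` averages the radiomics, 2D DL, and 3D DL outputs into a single PPH probability.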

From Faster Frames to Flawless Focus: Deep Learning HASTE in Postoperative Single Sequence MRI.

Hosse C, Fehrenbach U, Pivetta F, Malinka T, Wagner M, Walter-Rittel T, Gebauer B, Kolck J, Geisel D

PubMed · Jun 24 2025
This study evaluates the feasibility of a novel deep learning-accelerated half-Fourier single-shot turbo spin-echo sequence (HASTE-DL) compared to the conventional HASTE sequence (HASTE<sub>S</sub>) in postoperative single-sequence MRI for the detection of fluid collections following abdominal surgery. As small fluid collections are difficult to visualize using other techniques, HASTE-DL may offer particular advantages in this clinical context. A retrospective analysis was conducted on 76 patients (mean age 65±11.69 years) who underwent abdominal MRI for suspected septic foci following abdominal surgery. Imaging was performed using 3-T MRI scanners, and both sequences were analyzed in terms of image quality, contrast, sharpness, and artifact presence. Quantitative assessments focused on fluid collection detectability, while qualitative assessments evaluated visualization of critical structures. Inter-reader agreement was measured using Cohen's kappa coefficient, and statistical significance was determined with the Mann-Whitney U test. HASTE-DL achieved a 46% reduction in scan time compared to HASTE<sub>S</sub>, while significantly improving overall image quality (p<0.001), contrast (p<0.001), and sharpness (p<0.001). The inter-reader agreement for HASTE-DL was excellent (κ=0.960), with perfect agreement on overall image quality and fluid collection detection (κ=1.0). Fluid detectability and characterization scores were higher for HASTE-DL, and visualization of critical structures was significantly enhanced (p<0.001). No relevant artifacts were observed in either sequence. HASTE-DL offers superior image quality, improved visualization of critical structures, such as drainages, vessels, bile and pancreatic ducts, and reduced acquisition time, making it an effective alternative to the standard HASTE sequence, and a promising complementary tool in the postoperative imaging workflow.
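The inter-reader agreement statistic used here, Cohen's kappa, corrects observed agreement for the agreement expected by chance. A minimal pure-Python implementation for two raters:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters over the same items: kappa = (po - pe) /
    (1 - pe), where po is observed agreement and pe is chance agreement from
    the raters' marginal label frequencies."""
    n = len(rater1)
    po = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n ** 2
    return 1.0 if pe == 1.0 else (po - pe) / (1.0 - pe)
```

Perfect agreement gives κ=1.0 (as reported for fluid collection detection); disagreements shrink κ toward, or below, zero.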

Differentiating adenocarcinoma and squamous cell carcinoma in lung cancer using semi automated segmentation and radiomics.

Vijitha R, Wickramasinghe WMIS, Perera PAS, Jayatissa RMGCSB, Hettiarachchi RT, Alwis HARV

PubMed · Jun 24 2025
Adenocarcinoma (AD) and squamous cell carcinoma (SCC) are frequently observed forms of non-small cell lung cancer (NSCLC), playing a significant role in global cancer mortality. This research categorizes NSCLC subtypes by analyzing image details using computer-assisted semi-automatic segmentation and radiomic features in model development. The study included 80 patients (50 AD, 30 SCC), whose images were analyzed using 3D Slicer software, with 107 quantitative radiomic features extracted per patient. After eliminating correlated attributes, a LASSO binary logistic regression model and 10-fold cross-validation were used for feature selection. The Shapiro-Wilk test assessed radiomic score normality, and the Mann-Whitney U test compared score distributions. Random Forest (RF) and Support Vector Machine (SVM) classification models were implemented for subtype classification. Receiver Operating Characteristic (ROC) curves evaluated the radiomics score, showing moderate predictive ability with a training set area under the curve (AUC) of 0.679 (95% CI, 0.541-0.871) and a validation set AUC of 0.560 (95% CI, 0.342-0.778). Rad-Score distributions were normal for AD but not for SCC. The RF and SVM classification models, based on the selected features, achieved accuracies of 0.73 (RF) and 0.87 (SVM), with respective AUC values of 0.54 and 0.87. These findings support the view that the two NSCLC subtypes can be differentiated. The study demonstrated that radiomic analysis improves diagnostic accuracy and offers a non-invasive alternative. However, the AUCs and ROC curves of the machine learning models must be critically evaluated to ensure clinical acceptability. If robust, these models could reduce the need for biopsies and enhance personalized treatment planning. Further research is needed to validate these findings and integrate radiomics into NSCLC clinical practice.
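The AUC values reported throughout these abstracts can be computed directly from raw scores, without any library, as the probability that a randomly chosen positive case outranks a randomly chosen negative one (the Mann-Whitney U statistic rescaled to [0, 1]):

```python
def roc_auc(labels, scores):
    """ROC AUC by pair counting: fraction of (positive, negative) pairs in
    which the positive case is scored higher; ties count half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level ranking, which is why a validation AUC of 0.560 warrants the critical evaluation the authors call for.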

Predicting enamel depth distribution of maxillary teeth based on intraoral scanning: A machine learning study.

Chen D, He X, Li Q, Wang Z, Shen J, Shen J

PubMed · Jun 24 2025
Measuring enamel depth distribution (EDD) is of great importance for preoperative design of tooth preparations, restorative aesthetic preview, and monitoring enamel wear. However, no non-invasive method is currently available to obtain EDD efficiently. This study aimed to develop a machine learning (ML) framework to achieve noninvasive and radiation-free EDD predictions from intraoral scanning (IOS) images. Cone-beam computed tomography (CBCT) and IOS images of right maxillary central incisors, canines, and first premolars from 200 volunteers were included and preprocessed with surface parameterization. During the training stage, the EDD ground truths were obtained from CBCT. Five-dimensional features (incisal-gingival position, mesial-distal position, local surface curvature, incisal-gingival stretch, mesial-distal stretch) were extracted on labial enamel surfaces and served as inputs to the ML models. An eXtreme gradient boosting (XGB) model was trained to establish the mapping of features to the enamel depth values. R<sup>2</sup> and mean absolute error (MAE) were utilized to evaluate the training accuracy of the XGB model. In the prediction stage, the predicted EDDs were compared with the ground truths, and the EDD discrepancies were analyzed using a paired t-test and the Frobenius norm. The XGB model achieved superior performance in training, with average R<sup>2</sup> and MAE values of 0.926 and 0.080, respectively. Independent validation confirmed its robust EDD prediction ability, showing no significant deviation from ground truths in the paired t-test and low prediction errors (Frobenius norm: 12.566-18.312), despite minor noise in IOS-based predictions. This study performed preliminary validation of an IOS-based ML model for high-quality EDD prediction.
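The two training metrics, R² and MAE, are straightforward to compute from predicted and ground-truth depth values; a minimal sketch:

```python
def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of the prediction errors."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 minus the ratio of residual to total
    sum of squares; 1.0 is a perfect fit, 0.0 is no better than the mean."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

Applied to per-point enamel depths, an R² of 0.926 with an MAE of 0.080 indicates the XGB model captures most of the depth variation with small absolute error.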

Enhancing Lung Cancer Diagnosis: An Optimization-Driven Deep Learning Approach with CT Imaging.

Lakshminarasimha K, Priyeshkumar AT, Karthikeyan M, Sakthivel R

PubMed · Jun 24 2025
Lung cancer (LC) remains a leading cause of mortality worldwide, affecting individuals across all genders and age groups. Early and accurate diagnosis is critical for effective treatment and improved survival rates. Computed Tomography (CT) imaging is widely used for LC detection and classification. However, manual identification can be time-consuming and error-prone due to the visual similarities among various LC types. Deep learning (DL) has shown significant promise in medical image analysis. Although numerous studies have investigated LC detection using deep learning techniques, the effective extraction of highly correlated features remains a significant challenge, thereby limiting diagnostic accuracy. Furthermore, most existing models encounter substantial computational complexity and find it difficult to efficiently handle the high-dimensional nature of CT images. This study introduces an optimized CBAM-EfficientNet model to enhance feature extraction and improve LC classification. EfficientNet is utilized to reduce computational complexity, while the Convolutional Block Attention Module (CBAM) emphasizes essential spatial and channel features. Additionally, optimization algorithms including Gray Wolf Optimization (GWO), Whale Optimization (WO), and the Bat Algorithm (BA) are applied to fine-tune hyperparameters and boost predictive accuracy. The proposed model, integrated with different optimization strategies, is evaluated on two benchmark datasets. The GWO-based CBAM-EfficientNet achieves outstanding classification accuracies of 99.81% and 99.25% on the Lung-PET-CT-Dx and LIDC-IDRI datasets, respectively. Following GWO, the BA-based CBAM-EfficientNet achieves 99.44% and 98.75% accuracy on the same datasets. Comparative analysis highlights the superiority of the proposed model over existing approaches, demonstrating strong potential for reliable and automated LC diagnosis. Its lightweight architecture also supports real-time implementation, offering valuable assistance to radiologists in high-demand clinical environments.
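Gray Wolf Optimization, the best-performing tuner here, mimics a wolf pack's hunt: the three best solutions (alpha, beta, delta) steer the rest while an exploration coefficient decays over iterations. A minimal pure-Python sketch on a toy objective (the paper applies it to hyperparameters, not shown here; all parameter values below are illustrative assumptions):

```python
import random

def gwo(obj, dim, lo, hi, n_wolves=12, iters=100, seed=0):
    """Minimize obj over [lo, hi]^dim with Gray Wolf Optimization."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)]
              for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=obj)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 * (1.0 - t / iters)  # exploration coefficient decays 2 -> 0
        moved = []
        for X in wolves:
            pos = []
            for d in range(dim):
                est = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A = 2.0 * a * r1 - a
                    C = 2.0 * r2
                    D = abs(C * leader[d] - X[d])
                    est += leader[d] - A * D  # step toward this leader
                pos.append(min(hi, max(lo, est / 3.0)))  # clamp to bounds
            moved.append(pos)
        wolves = moved
    return min(wolves, key=obj)
```

For hyperparameter tuning, `obj` would be validation loss as a function of, say, learning rate and dropout, with `lo`/`hi` as the search ranges.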

[Incidental pulmonary nodules on CT imaging: what to do?].

van der Heijden EHFM, Snoeren M, Jacobs C

PubMed · Jun 23 2025
Incidental pulmonary nodules are very frequently found on CT imaging and may represent (early-stage) lung cancers without any signs or symptoms. These incidental findings can be solid lesions or ground-glass lesions and may be solitary or multiple. Careful and systematic evaluation of these imaging findings is needed to determine the risk of malignancy, based on imaging characteristics, patient factors such as smoking habits, prior cancers, or family history, and growth rate, preferably determined by volume measurements. Once the risk of malignancy is increased, minimally invasive image-guided biopsy is warranted, preferably by navigation bronchoscopy. We present two cases to illustrate this clinical workup: one case with a benign solitary pulmonary nodule, and a second case with multiple ground-glass opacities, diagnosed as synchronous primary adenocarcinomas of the lung. This is followed by a review of the current status of computer- and artificial intelligence-aided diagnostic support and clinical workflow optimization.

Fine-tuned large language model for classifying CT-guided interventional radiology reports.

Yasaka K, Nishimura N, Fukushima T, Kubo T, Kiryu S, Abe O

PubMed · Jun 23 2025
Background: Manual data curation was necessary to extract radiology reports due to the ambiguities of natural language. Purpose: To develop a fine-tuned large language model that classifies computed tomography (CT)-guided interventional radiology reports into technique categories and to compare its performance with that of the readers. Material and Methods: This retrospective study included patients who underwent CT-guided interventional radiology between August 2008 and November 2024. Patients were chronologically assigned to the training (n = 1142; 646 men; mean age = 64.1 ± 15.7 years), validation (n = 131; 83 men; mean age = 66.1 ± 16.1 years), and test (n = 332; 196 men; mean age = 66.1 ± 14.8 years) datasets. In establishing a reference standard, reports were manually classified into categories 1 (drainage), 2 (lesion biopsy within fat or soft tissue density tissues), 3 (lung biopsy), and 4 (bone biopsy). The bidirectional encoder representations from transformers (BERT) model was fine-tuned with the training dataset, and the model with the best performance in the validation dataset was selected. The performance and required time for classification in the test dataset were compared between the best-performing model and two readers. Results: Categories 1/2/3/4 included 309/367/270/196, 30/42/40/19, and 75/124/78/55 patients for the training, validation, and test datasets, respectively. The model demonstrated an accuracy of 0.979 in the test dataset, which was significantly better than that of the readers (0.922-0.940) (<i>P</i> ≤ 0.012). The model classified reports in 49.8- to 53.5-fold less time than the readers. Conclusion: The fine-tuned large language model classified CT-guided interventional radiology reports into four categories with high accuracy within a remarkably short time.
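The chronological assignment described in the Methods avoids temporal leakage: the model is validated and tested only on reports written after everything it trained on. A sketch of that split, with the four technique categories; the record field names are assumptions for illustration:

```python
# Category scheme from the study's reference standard.
CATEGORIES = {
    1: "drainage",
    2: "lesion biopsy (fat or soft tissue density)",
    3: "lung biopsy",
    4: "bone biopsy",
}

def chronological_split(reports, n_train=1142, n_val=131):
    """Sort reports by date and cut into train/validation/test partitions,
    mirroring the study's cohort sizes; each report is assumed to be a dict
    with a sortable "date" field."""
    ordered = sorted(reports, key=lambda r: r["date"])
    train = ordered[:n_train]
    val = ordered[n_train:n_train + n_val]
    test = ordered[n_train + n_val:]
    return train, val, test
```

Fine-tuning a BERT-style classifier on `train` and selecting the checkpoint by `val` accuracy then follows the standard sequence-classification recipe.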

Multimodal deep learning for predicting neoadjuvant treatment outcomes in breast cancer: a systematic review.

Krasniqi E, Filomeno L, Arcuri T, Ferretti G, Gasparro S, Fulvi A, Roselli A, D'Onofrio L, Pizzuti L, Barba M, Maugeri-Saccà M, Botti C, Graziano F, Puccica I, Cappelli S, Pelle F, Cavicchi F, Villanucci A, Paris I, Calabrò F, Rea S, Costantini M, Perracchio L, Sanguineti G, Takanen S, Marucci L, Greco L, Kayal R, Moscetti L, Marchesini E, Calonaci N, Blandino G, Caravagna G, Vici P

PubMed · Jun 23 2025
Pathological complete response (pCR) to neoadjuvant systemic therapy (NAST) is an established prognostic marker in breast cancer (BC). Multimodal deep learning (DL), integrating diverse data sources (radiology, pathology, omics, clinical), holds promise for improving pCR prediction accuracy. This systematic review synthesizes evidence on multimodal DL for pCR prediction and compares its performance against unimodal DL. Following PRISMA, we searched PubMed, Embase, and Web of Science (January 2015-April 2025) for studies applying DL to predict pCR in BC patients receiving NAST, using data from radiology, digital pathology (DP), multi-omics, and/or clinical records, and reporting AUC. Data on study design, DL architectures, and performance (AUC) were extracted. A narrative synthesis was conducted due to heterogeneity. Fifty-one studies, mostly retrospective (90.2%, median cohort 281), were included. Magnetic resonance imaging and DP were common primary modalities. Multimodal approaches were used in 52.9% of studies, often combining imaging with clinical data. Convolutional neural networks were the dominant architecture (88.2%). Longitudinal imaging improved prediction over baseline-only imaging (median AUC 0.91 vs. 0.82). Overall, the median AUC across studies was 0.88, with 35.3% achieving AUC ≥ 0.90. Multimodal models showed a modest but consistent improvement over unimodal approaches (median AUC 0.88 vs. 0.83). Omics and clinical text were rarely primary DL inputs. DL models demonstrate promising accuracy for pCR prediction, especially when integrating multiple modalities and longitudinal imaging. However, significant methodological heterogeneity, reliance on retrospective data, and limited external validation hinder clinical translation. Future research should prioritize prospective validation, integration of underutilized data (multi-omics, clinical), and explainable AI to advance DL predictors toward the clinical setting.

Intelligent Virtual Dental Implant Placement via 3D Segmentation Strategy.

Cai G, Wen B, Gong Z, Lin Y, Liu H, Zeng P, Shi M, Wang R, Chen Z

PubMed · Jun 23 2025
Virtual dental implant placement in cone-beam computed tomography (CBCT) is a prerequisite for digital implant surgery, carrying clinical significance. However, manual placement is a complex process that should meet clinical essential requirements of restoration orientation, bone adaptation, and anatomical safety. This complexity presents challenges in balancing multiple considerations comprehensively and automating the entire workflow efficiently. This study aims to achieve intelligent virtual dental implant placement through a 3-dimensional (3D) segmentation strategy. Focusing on the missing mandibular first molars, we developed a segmentation module based on nnU-Net to generate the virtual implant from the edentulous region of CBCT and employed an approximation module for mathematical optimization. The generated virtual implant was integrated with the original CBCT to meet clinical requirements. A total of 190 CBCT scans from 4 centers were collected for model development and testing. This tool segmented the virtual implant with a surface Dice coefficient (sDice) of 0.903 and 0.884 on internal and external testing sets. Compared to the ground truth, the average deviations of the implant platform, implant apex, and angle were 0.850 ± 0.554 mm, 1.442 ± 0.539 mm, and 4.927 ± 3.804° on the internal testing set and 0.822 ± 0.353 mm, 1.467 ± 0.560 mm, and 5.517 ± 2.850° on the external testing set, respectively. The 3D segmentation-based artificial intelligence tool demonstrated good performance in predicting both the dimension and position of the virtual implants, showing significant clinical application potential in implant planning.
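The three deviation metrics reported (platform deviation, apex deviation, and angular deviation) are standard implant-accuracy measures computable from the two landmark points of each implant. A minimal sketch, where the (platform, apex) tuple layout is an assumption:

```python
import math

def implant_deviation(pred, gt):
    """Deviation between a predicted and ground-truth implant, each given as
    (platform_xyz, apex_xyz). Returns (platform distance, apex distance,
    angle in degrees between the implant long axes)."""
    (pp, pa), (gp, ga) = pred, gt
    d_platform = math.dist(pp, gp)
    d_apex = math.dist(pa, ga)
    v_pred = [a - b for a, b in zip(pa, pp)]  # predicted long axis
    v_gt = [a - b for a, b in zip(ga, gp)]    # ground-truth long axis
    cos = (sum(x * y for x, y in zip(v_pred, v_gt))
           / (math.hypot(*v_pred) * math.hypot(*v_gt)))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos))))
    return d_platform, d_apex, angle
```

With coordinates in millimetres, these outputs correspond directly to the paper's reported 0.850 mm / 1.442 mm / 4.927° style of accuracy figures.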