
Multi-class transformer-based segmentation of pancreatic ductal adenocarcinoma and surrounding structures in CT imaging: a multi-center evaluation.

Wen S, Xiao X

PubMed · Jun 14 2025
Accurate segmentation of pancreatic ductal adenocarcinoma (PDAC) and surrounding anatomical structures is critical for diagnosis, treatment planning, and outcome assessment. This study proposes a deep learning-based framework to automate multi-class segmentation in CT images, comparing the performance of four state-of-the-art architectures. This retrospective multi-center study included 3265 patients from six institutions. Four deep learning models (UNet, nnU-Net, UNETR, and Swin-UNet) were trained using five-fold cross-validation on data from five centers and tested independently on a sixth center (n = 569). Preprocessing included intensity normalization, voxel resampling, and standardized annotation for six structures: PDAC lesion, pancreas, veins, arteries, pancreatic duct, and common bile duct. Evaluation metrics included Dice Similarity Coefficient (DSC), Intersection over Union (IoU), directed Hausdorff Distance (dHD), Average Symmetric Surface Distance (ASSD), and Volume Overlap Error (VOE). Statistical comparisons were made using Wilcoxon signed-rank tests with Bonferroni correction. Swin-UNet outperformed all models with a mean validation DSC of 92.4% and test DSC of 90.8%, showing minimal overfitting. It also achieved the lowest dHD (4.3 mm), ASSD (1.2 mm), and VOE (6.0%) in cross-validation. Per-class DSCs for Swin-UNet were consistently higher across all anatomical targets, including challenging structures like the pancreatic duct (91.0%) and bile duct (91.8%). Statistical analysis confirmed the superiority of Swin-UNet (p < 0.001). All models showed generalization capability, but Swin-UNet provided the most accurate and robust segmentation across datasets. Transformer-based architectures, particularly Swin-UNet, enable precise and generalizable multi-class segmentation of PDAC and surrounding anatomy. This framework has potential for clinical integration in PDAC diagnosis, staging, and therapy planning.
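
For reference, a minimal sketch (not from the paper) of the overlap metrics cited above, computed on hypothetical binary masks with NumPy; DSC, IoU, and VOE follow their standard definitions, and the masks are purely illustrative.

```python
# Minimal sketch (not from the paper): overlap metrics named in the abstract,
# computed on hypothetical binary segmentation masks.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice Similarity Coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union: |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / (union + 1e-8)

def voe(pred: np.ndarray, gt: np.ndarray) -> float:
    """Volume Overlap Error, reported here as 1 - IoU."""
    return 1.0 - iou(pred, gt)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.random((64, 64, 64)) > 0.7                     # toy ground-truth mask
    pred = np.logical_and(gt, rng.random(gt.shape) > 0.1)   # imperfect prediction
    print(f"DSC={dice(pred, gt):.3f}  IoU={iou(pred, gt):.3f}  VOE={voe(pred, gt):.3f}")
```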

Qualitative evaluation of automatic liver segmentation in computed tomography images for clinical use in radiation therapy.

Khalal DM, Slimani S, Bouraoui ZE, Azizi H

PubMed · Jun 14 2025
Segmentation of target volumes and organs at risk on computed tomography (CT) images constitutes an important step in the radiotherapy workflow. Artificial intelligence-based methods have significantly improved organ segmentation in medical images. Automatic segmentations are frequently evaluated using geometric metrics. Before a clinical implementation in the radiotherapy workflow, automatic segmentations must also be evaluated by clinicians. The aim of this study was to investigate the correlation between geometric metrics used for segmentation evaluation and the assessment performed by clinicians. In this study, we used the U-Net model to segment the liver in CT images from a publicly available dataset. The model's performance was evaluated using two geometric metrics: the Dice similarity coefficient and the Hausdorff distance. Additionally, a qualitative evaluation was performed by clinicians who reviewed the automatic segmentations to rate their clinical acceptability for use in the radiotherapy workflow. The correlation between the geometric metrics and the clinicians' evaluations was studied. The results showed that while the Dice coefficient and Hausdorff distance are reliable indicators of segmentation accuracy, they do not always align with clinicians' assessments. In some cases, segmentations with high Dice scores still required clinician corrections before clinical use in the radiotherapy workflow. This study highlights the need for more comprehensive evaluation metrics beyond geometric measures to assess the clinical acceptability of artificial intelligence-based segmentation. Although the deep learning model provided promising segmentation results, the present study shows that standardized validation methodologies are crucial for ensuring the clinical viability of automatic segmentation systems.
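
As a hedged illustration of the correlation analysis described above, the sketch below computes Spearman rank correlations between per-case geometric metrics and clinician acceptability ratings; all scores and the 3-point rating scale are hypothetical placeholders, not the study's data.

```python
# Hedged sketch (not the authors' code): correlating geometric metrics with
# clinician acceptability ratings for automatic liver segmentations.
import numpy as np
from scipy.stats import spearmanr

dice_scores  = np.array([0.96, 0.94, 0.91, 0.89, 0.95, 0.88])   # per-case DSC (hypothetical)
hausdorff_mm = np.array([6.2, 9.8, 14.1, 18.5, 7.0, 21.3])      # per-case HD in mm (hypothetical)
clinician_ok = np.array([3, 3, 2, 1, 2, 1])   # e.g. 3 = accept, 2 = minor edits, 1 = redo

rho_dsc, p_dsc = spearmanr(dice_scores, clinician_ok)
rho_hd,  p_hd  = spearmanr(hausdorff_mm, clinician_ok)
print(f"DSC vs rating: rho={rho_dsc:.2f} (p={p_dsc:.3f})")
print(f"HD  vs rating: rho={rho_hd:.2f} (p={p_hd:.3f})")
```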

A multimodal fusion system predicting survival benefits of immune checkpoint inhibitors in unresectable hepatocellular carcinoma.

Xu J, Wang T, Li J, Wang Y, Zhu Z, Fu X, Wang J, Zhang Z, Cai W, Song R, Hou C, Yang LZ, Wang H, Wong STC, Li H

PubMed · Jun 14 2025
Early identification of unresectable hepatocellular carcinoma (HCC) patients who may benefit from immune checkpoint inhibitors (ICIs) is crucial for optimizing outcomes. Here, we developed a multimodal fusion (MMF) system integrating CT-derived deep learning features and clinical data to predict overall survival (OS) and progression-free survival (PFS). Using retrospective multicenter data (n = 859), the MMF combining an ensemble deep learning (Ensemble-DL) model with clinical variables achieved strong external validation performance (C-index: OS = 0.74, PFS = 0.69), outperforming radiomics (29.8% OS improvement), mRECIST (27.6% OS improvement), clinical benchmarks (C-index: OS = 0.67, p = 0.0011; PFS = 0.65, p = 0.033), and Ensemble-DL (C-index: OS = 0.69, p = 0.0028; PFS = 0.66, p = 0.044). The MMF system effectively stratified patients across clinical subgroups and demonstrated interpretability through activation maps and radiomic correlations. Differential gene expression analysis revealed enrichment of the PI3K/Akt pathway in patients identified by the MMF system. The MMF system provides an interpretable, clinically applicable approach to guide personalized ICI treatment in unresectable HCC.
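
The C-index reported above measures concordance between predicted risk and observed survival; the sketch below is a minimal, hypothetical example using lifelines, and the simulated risk scores and follow-up times are assumptions for illustration only.

```python
# Minimal sketch (hypothetical data, not from the study): the concordance
# index (C-index) used to evaluate OS/PFS predictions, via lifelines.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(1)
n = 200
risk = rng.normal(size=n)                          # model-predicted risk score
time = rng.exponential(scale=np.exp(-risk) * 12)   # follow-up time (months), shorter for higher risk
event = rng.random(n) < 0.7                        # True = event observed, False = censored

# lifelines expects scores where larger values imply longer survival,
# so the risk score is negated before scoring.
cindex = concordance_index(time, -risk, event_observed=event)
print(f"C-index = {cindex:.2f}")
```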

FDTooth: Intraoral Photographs and CBCT Images for Fenestration and Dehiscence Detection.

Liu K, Elbatel M, Chu G, Shan Z, Sum FHKMH, Hung KF, Zhang C, Li X, Yang Y

PubMed · Jun 14 2025
Fenestration and dehiscence (FD) pose significant challenges in dental treatments as they adversely affect oral health. Although cone-beam computed tomography (CBCT) provides precise diagnostics, its extensive time requirements and radiation exposure limit its routine use for monitoring. Currently, there is no public dataset that combines intraoral photographs and corresponding CBCT images; this limits the development of deep learning algorithms for the automated detection of FD and other potential diseases. In this paper, we present FDTooth, a dataset that includes both intraoral photographs and CBCT images of 241 patients aged between 9 and 55 years. FDTooth contains 1,800 precise bounding boxes annotated on intraoral photographs, with gold-standard ground truth extracted from CBCT. We developed a baseline model for automated FD detection in intraoral photographs. The developed dataset and model can serve as valuable resources for research on interdisciplinary dental diagnostics, offering clinicians an efficient, non-invasive method for early FD screening.

Optimizing stroke detection with genetic algorithm-based feature selection in deep learning models.

Nayak GS, Mallick PK, Sahu DP, Kathi A, Reddy R, Viyyapu J, Pabbisetti N, Udayana SP, Sanapathi H

PubMed · Jun 14 2025
Brain stroke is a leading cause of disability and mortality worldwide, necessitating the development of accurate and efficient diagnostic models. In this study, we explore the integration of Genetic Algorithm (GA)-based feature selection with three state-of-the-art deep learning architectures (InceptionV3, VGG19, and MobileNetV2) to enhance stroke detection from neuroimaging data. GA is employed to optimize feature selection, reducing redundancy and improving model performance. The selected features are subsequently fed into the respective deep learning models for classification. The dataset used in this study comprises neuroimages categorized into "Normal" and "Stroke" classes. Experimental results demonstrate that incorporating GA improves classification accuracy while reducing computational complexity. A comparative analysis of the three architectures reveals their effectiveness in stroke detection, with MobileNetV2 achieving the highest accuracy of 97.21%. Notably, the integration of Genetic Algorithms with MobileNetV2 for feature selection represents a novel contribution, setting this study apart from prior approaches that rely solely on traditional CNN pipelines. Owing to its lightweight design and low computational demands, MobileNetV2 also offers significant advantages for real-time clinical deployment, making it highly applicable for use in emergency care settings where rapid diagnosis is critical. Additionally, performance metrics such as precision, recall, F1-score, and Receiver Operating Characteristic (ROC) curves are evaluated to provide comprehensive insights into model efficacy. This research underscores the potential of genetic algorithm-driven optimization in enhancing deep learning-based medical image classification, paving the way for more efficient and reliable stroke diagnosis.
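
A hedged sketch of the GA-based feature-selection idea follows: a binary chromosome marks which features are retained, and fitness is held-out accuracy of a lightweight stand-in classifier. The random features, logistic-regression surrogate, and GA hyperparameters are illustrative assumptions, not the authors' pipeline (which operates on CNN-derived features from architectures such as MobileNetV2).

```python
# Hedged sketch of genetic-algorithm feature selection (not the authors' code).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 64))                       # placeholder deep features
y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=500) > 0).astype(int)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(mask: np.ndarray) -> float:
    """Validation accuracy of a stand-in classifier on the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, mask], y_tr)
    return clf.score(X_va[:, mask], y_va)

pop_size, n_gen, n_feat = 20, 15, X.shape[1]
pop = rng.random((pop_size, n_feat)) > 0.5           # initial random chromosomes
for gen in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]        # keep the fittest half
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_feat)                          # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_feat) < 0.02                       # bit-flip mutation
        children.append(np.logical_xor(child, flip))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(f"selected {best.sum()} of {n_feat} features, accuracy {fitness(best):.3f}")
```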

Utility of Thin-slice Single-shot T2-weighted MR Imaging with Deep Learning Reconstruction as a Protocol for Evaluating Pancreatic Cystic Lesions.

Ozaki K, Hasegawa H, Kwon J, Katsumata Y, Yoneyama M, Ishida S, Iyoda T, Sakamoto M, Aramaki S, Tanahashi Y, Goshima S

PubMed · Jun 14 2025
To assess the effects of industry-developed deep learning reconstruction with super resolution (DLR-SR) on single-shot turbo spin-echo (SshTSE) images with a slice thickness of 2 mm (SshTSE2mm) relative to images with a slice thickness of 5 mm (SshTSE5mm) in patients with pancreatic cystic lesions. Thirty consecutive patients who underwent abdominal MRI examinations for pancreatic cystic lesions under observation between June 2024 and July 2024 were enrolled. We qualitatively and quantitatively evaluated the image quality of SshTSE2mm and SshTSE5mm with and without DLR-SR. The SNRs of the pancreas, spleen, paraspinal muscle, peripancreatic fat, and pancreatic cystic lesions on SshTSE2mm with and without DLR-SR did not decrease compared with those on SshTSE5mm with and without DLR-SR. There were no significant differences in the contrast-to-noise ratios (CNRs) of the pancreas to cystic lesions and fat among the four image types. SshTSE2mm with DLR-SR had the highest image quality for pancreas edge sharpness, perceived coarseness, pancreatic duct clarity, noise, artifacts, overall image quality, and diagnostic confidence of cystic lesions, followed by SshTSE2mm without DLR-SR and SshTSE5mm with and without DLR-SR (P < 0.0001). SshTSE2mm images with DLR-SR had better quality than the other images without decreased SNRs and CNRs. Thin-slice SshTSE with DLR-SR may be feasible and clinically useful for the evaluation of patients with pancreatic cystic lesions.
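
The SNR and CNR comparisons above can be illustrated with region-of-interest statistics; the sketch below uses common definitions (mean ROI signal over background noise SD, and the absolute difference of ROI means over noise SD), which are assumptions since the abstract does not state the exact formulas, and the intensity values are hypothetical.

```python
# Minimal sketch (assumed definitions, hypothetical values): SNR and CNR from
# region-of-interest statistics on an MR image.
import numpy as np

def snr(roi: np.ndarray, noise_sd: float) -> float:
    """SNR = mean ROI signal / noise standard deviation."""
    return roi.mean() / noise_sd

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, noise_sd: float) -> float:
    """CNR = |mean(A) - mean(B)| / noise standard deviation."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_sd

rng = np.random.default_rng(3)
pancreas = rng.normal(220, 15, size=500)   # hypothetical ROI intensities
cyst     = rng.normal(480, 20, size=500)
background_sd = 12.0                        # noise SD from a background ROI
print(f"SNR(pancreas)={snr(pancreas, background_sd):.1f}  "
      f"CNR(cyst vs pancreas)={cnr(cyst, pancreas, background_sd):.1f}")
```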

Hierarchical Deep Feature Fusion and Ensemble Learning for Enhanced Brain Tumor MRI Classification

Zahid Ullah, Jihie Kim

arXiv preprint · Jun 14 2025
Accurate brain tumor classification is crucial in medical imaging to ensure reliable diagnosis and effective treatment planning. This study introduces a novel double ensembling framework that synergistically combines pre-trained deep learning (DL) models for feature extraction with optimized machine learning (ML) classifiers for robust classification. The framework incorporates comprehensive preprocessing and data augmentation of brain magnetic resonance images (MRI), followed by deep feature extraction using transfer learning with pre-trained Vision Transformer (ViT) networks. The novelty lies in the dual-level ensembling strategy: feature-level ensembling, which integrates deep features from the top-performing ViT models, and classifier-level ensembling, which aggregates predictions from hyperparameter-optimized ML classifiers. Experiments on two public Kaggle MRI brain tumor datasets demonstrate that this approach significantly surpasses state-of-the-art methods, underscoring the importance of feature and classifier fusion. The proposed methodology also highlights the critical roles of hyperparameter optimization (HPO) and advanced preprocessing techniques in improving diagnostic accuracy and reliability, advancing the integration of DL and ML for clinically relevant medical image analysis.
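
A hedged sketch of the dual-level ensembling idea follows: deep features from two backbones are concatenated (feature-level ensemble) and several classifiers are combined by soft voting (classifier-level ensemble). The random feature matrices and the specific scikit-learn classifiers are stand-ins, not the authors' ViT models or tuned classifiers.

```python
# Hedged sketch of dual-level ensembling (placeholder data, not the authors'
# implementation): feature-level concatenation followed by soft voting.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n = 400
feat_vit_a = rng.normal(size=(n, 128))          # stand-ins for ViT embeddings
feat_vit_b = rng.normal(size=(n, 128))
y = (feat_vit_a[:, 0] + feat_vit_b[:, 0] > 0).astype(int)   # toy labels

X = np.hstack([feat_vit_a, feat_vit_b])         # feature-level ensembling
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

ensemble = VotingClassifier(                     # classifier-level ensembling
    estimators=[
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
print(f"held-out accuracy: {ensemble.score(X_te, y_te):.3f}")
```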

Sex-estimation method for three-dimensional shapes of the skull and skull parts using machine learning.

Imaizumi K, Usui S, Nagata T, Hayakawa H, Shiotani S

PubMed · Jun 14 2025
Sex estimation is an indispensable test for identifying skeletal remains in the field of forensic anthropology. We developed a novel sex-estimation method for skulls and several parts of the skull using machine learning. A total of 240 skull shapes were obtained from postmortem computed tomography scans. The shapes of the whole skull, cranium, and mandible were simplified by wrapping them with virtual elastic film. These were then transformed into homologous shape models. Homologous models of the cranium and mandible were segmented into six regions containing well-known sexually dimorphic areas. Shape data were reduced in dimensionality by principal component analysis (PCA) or partial least squares regression (PLS). The components of PCA and PLS were applied to a support vector machine (SVM), and the accuracy rates of sex estimation were assessed. High accuracy rates in sex estimation were observed in SVM after reducing the dimensionality of data with PLS. The rates exceeded 90% in two of the nine regions examined, whereas the SVM with PCA components did not reach 90% in any region. Virtual shapes created from very large and small scores of the first principal components of PLS closely resembled masculine and feminine models created by emphasizing the shape difference between the averaged shape of male and female skulls. Such similarities were observed in all skull regions examined, particularly in sexually dimorphic areas. Estimation models also achieved high estimation accuracies in newly prepared skull shapes, suggesting that the estimation method developed here may be sufficiently applicable to actual casework.
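
As a hedged sketch of the PLS-then-SVM pipeline described above: high-dimensional shape coordinates are reduced to a few PLS components, which are then classified by a linear SVM. The synthetic coordinate matrix and injected dimorphic signal are illustrative assumptions, not the study's homologous shape models.

```python
# Hedged sketch (synthetic stand-in data): PLS dimensionality reduction of
# skull-shape coordinates followed by SVM classification of sex.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(11)
n_skulls, n_coords = 240, 3000                  # e.g. 1000 homologous vertices x 3 coordinates
sex = rng.integers(0, 2, size=n_skulls)         # 0 = female, 1 = male
shapes = rng.normal(size=(n_skulls, n_coords))
shapes[:, :50] += sex[:, None] * 0.8            # inject a dimorphic signal for the demo

X_tr, X_te, y_tr, y_te = train_test_split(shapes, sex, test_size=0.25, random_state=0)

pls = PLSRegression(n_components=10).fit(X_tr, y_tr)       # supervised reduction
svm = SVC(kernel="linear").fit(pls.transform(X_tr), y_tr)  # classify the PLS scores
accuracy = svm.score(pls.transform(X_te), y_te)
print(f"held-out accuracy: {accuracy:.1%}")
```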

The Machine Learning Models in Major Cardiovascular Adverse Events Prediction Based on Coronary Computed Tomography Angiography: Systematic Review.

Ma Y, Li M, Wu H

PubMed · Jun 13 2025
Coronary computed tomography angiography (CCTA) has emerged as the first-line noninvasive imaging test for patients at high risk of coronary artery disease (CAD). When combined with machine learning (ML), it provides more valid evidence in diagnosing major adverse cardiovascular events (MACEs). Radiomics provides informative multidimensional features that can help identify high-risk populations and can improve the diagnostic performance of CCTA. However, its role in predicting MACEs remains highly debated. We evaluated the diagnostic value of ML models constructed using radiomic features extracted from CCTA in predicting MACEs, and compared the performance of different learning algorithms and models, thereby providing clinical recommendations for the diagnosis, treatment, and prognosis of MACEs. We comprehensively searched 5 online databases (Cochrane Library, Web of Science, Elsevier, CNKI, and PubMed) up to September 10, 2024, for original studies that used ML models among patients who underwent CCTA to predict MACEs and reported related clinical outcomes and endpoints. Risk of bias in the ML models was assessed by the Prediction Model Risk of Bias Assessment Tool, while the radiomics quality score (RQS) was used to evaluate the methodological quality of the radiomics prediction model development and validation. We also followed the TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) guidelines to ensure transparency of the included ML models. Meta-analysis was performed using Meta-DiSc software (version 1.4), including the I² statistic and Cochran Q test, along with StataMP 17 (StataCorp), to assess heterogeneity and publication bias. Due to the high heterogeneity observed, subgroup analysis was conducted based on different model groups. Ten studies were included in the analysis, 5 (50%) of which differentiated between training and testing groups; the training sets comprised 17 models and the testing sets 26 models. The pooled area under the receiver operating characteristic (AUROC) curve for ML models predicting MACEs was 0.7879 in the training set and 0.7981 in the testing set. Logistic regression (LR), the most commonly used algorithm, achieved an AUROC of 0.8229 in the testing group and 0.7983 in the training group. Non-LR models yielded AUROCs of 0.7390 in the testing set and 0.7648 in the training set, while the random forest (RF) models reached an AUROC of 0.8444 in the training group. Study limitations included the small number of studies, high heterogeneity, and the types of included studies. The performance of ML models for predicting MACEs was found to be superior to that of general models based on basic feature extraction and integration from CCTA. Specifically, LR-based ML diagnostic models demonstrated significant clinical potential, particularly when combined with clinical features, and are worth further validation through more clinical trials. PROSPERO CRD42024596364; https://www.crd.york.ac.uk/PROSPERO/view/CRD42024596364.
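
For context, the heterogeneity statistics named above (Cochran's Q and I²) can be computed from study-level effect estimates and variances as in the short worked example below; the per-study values are hypothetical, not the review's data.

```python
# Worked illustration (hypothetical study-level estimates): Cochran's Q and
# the I² statistic used to quantify between-study heterogeneity.
import numpy as np

effects   = np.array([0.78, 0.82, 0.74, 0.88, 0.80])            # per-study estimates
variances = np.array([0.0009, 0.0016, 0.0025, 0.0012, 0.0010])  # per-study variances

weights = 1.0 / variances                       # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
Q = np.sum(weights * (effects - pooled) ** 2)   # Cochran's Q
df = len(effects) - 1
I2 = max(0.0, (Q - df) / Q) * 100               # I² as a percentage

print(f"pooled estimate = {pooled:.3f}, Q = {Q:.2f} (df={df}), I² = {I2:.1f}%")
```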

High-Fidelity 3D Imaging of Dental Scenes Using Gaussian Splatting.

Jin CX, Li MX, Yu H, Gao Y, Guo YP, Xia GS, Huang C

PubMed · Jun 13 2025
Three-dimensional visualization is increasingly used in dentistry for diagnostics, education, and treatment design. The accurate replication of geometry and color is crucial for these applications. Image-based rendering, which uses 2-dimensional photos to generate photo-realistic 3-dimensional representations, provides an affordable and practical option, aiding both regular and remote health care. This study explores an advanced novel view synthesis (NVS) method called Gaussian splatting (GS), a differentiable image-based rendering approach, to assess its feasibility for dental scene capturing. The rendering quality and resource usage were compared with representative NVS methods. In addition, the linear measurement trueness of extracted craniofacial meshes was evaluated against a commercial facial scanner and 3 smartphone facial scanning apps, while teeth meshes were assessed against 2 intraoral scanners and a desktop scanner. GS-based representation demonstrated superior rendering quality, achieving the highest visual quality, fastest rendering speed, and lowest resource usage. The craniofacial measurements showed similar trueness to commercial facial scanners. The dental measurements had larger deviations than intraoral and desktop scanners did, although all deviations remained within clinically acceptable limits. The GS-based representation shows great potential for developing a convenient and cost-effective method of capturing dental scenes, offering a balance between color fidelity and trueness suitable for clinical applications.