Page 46 of 75743 results

An innovative machine learning-based algorithm for diagnosing pediatric ovarian torsion.

Boztas AE, Sencan E, Payza AD, Sencan A

PubMed · Jun 16 2025
We aimed to develop a machine-learning (ML) algorithm based on physical examination, sonographic findings, and laboratory markers. Data from 70 patients with confirmed ovarian torsion followed and treated in our clinic and from 73 control patients who presented to the emergency department between 2013 and 2023 with similar complaints but no ovarian torsion on ultrasound were retrospectively analyzed. Sonographic findings, laboratory values, and clinical status were examined and fed into three supervised ML systems to identify and develop viable decision algorithms. Presence of nausea/vomiting and symptom duration were statistically significant (p < 0.05) for ovarian torsion. Presence of abdominal pain and a palpable mass on physical examination were not significant (p > 0.05). White blood cell count (WBC), neutrophil/lymphocyte ratio (NLR), systemic immune-inflammation index (SII), systemic inflammation response index (SIRI), and high C-reactive protein values were highly significant in predicting torsion (p < 0.001, p < 0.05). Ovarian size ratio, medialization, follicular ring sign, and presence of free pelvic fluid on ultrasound were statistically significant in the torsion group (p < 0.001). We used supervised ML algorithms, including decision trees, random forests, and LightGBM, to classify patients as either controls or having torsion. We evaluated the models using 5-fold cross-validation, achieving an average F1-score of 98%, an accuracy of 98%, and a specificity of 100% across folds with the decision tree model. This study represents the first ML algorithm integrating clinical, laboratory, and ultrasonographic findings for the diagnosis of pediatric ovarian torsion, with over 98% accuracy.
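The reported fold-level metrics (F1, accuracy, specificity) follow directly from a confusion matrix. A minimal Python sketch with made-up fold counts, not the study's data:

```python
def fold_metrics(tp, fp, tn, fn):
    """Accuracy, F1-score, and specificity from one fold's confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    specificity = tn / (tn + fp)  # no false positives -> specificity of 100%
    return accuracy, f1, specificity

# Hypothetical fold: 14 torsion cases, 15 controls, one missed torsion.
acc, f1, spec = fold_metrics(tp=13, fp=0, tn=15, fn=1)
```

Averaging these three numbers over the five folds reproduces the kind of summary the abstract reports.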

A multimodal deep learning model for detecting endoscopic images of near-infrared fluorescence capsules.

Wang J, Zhou C, Wang W, Zhang H, Zhang A, Cui D

PubMed · Jun 15 2025
Early screening for gastrointestinal (GI) diseases is critical for preventing cancer development. With the rapid advancement of deep learning technology, artificial intelligence (AI) has become increasingly prominent in the early detection of GI diseases. Capsule endoscopy is a non-invasive medical imaging technique used to examine the GI tract. In our previous work, we developed a near-infrared fluorescence capsule endoscope (NIRF-CE) capable of exciting and capturing near-infrared (NIR) fluorescence images to specifically identify subtle mucosal microlesions and submucosal abnormalities while simultaneously capturing conventional white-light images to detect lesions with significant morphological changes. However, limitations such as low camera resolution and poor lighting within the GI tract may lead to misdiagnosis and other medical errors. Manually reviewing and interpreting large volumes of capsule endoscopy images is time-consuming and prone to errors. Deep learning models have shown potential in automatically detecting abnormalities in NIRF-CE images. This study focuses on an improved deep learning model called Retinex-Attention-YOLO (RAY), which is based on single-modality image data and built on the YOLO series of object detection models. RAY enhances the accuracy and efficiency of anomaly detection, especially under low-light conditions. To further improve detection performance, we also propose a multimodal deep learning model, Multimodal-Retinex-Attention-YOLO (MRAY), which combines both white-light and fluorescence image data. The dataset used in this study consists of images of pig stomachs captured by our NIRF-CE system, simulating the human GI tract. The capsule is used with a targeted fluorescent probe that accumulates at lesion sites and releases fluorescent signals when an abnormality is present, so a bright spot in the fluorescence image indicates a lesion.
The MRAY model achieved a precision of 96.3%, outperforming similar object detection models. To further validate the model's performance, ablation experiments were conducted, and comparisons were made with publicly available datasets. MRAY shows great promise for the automated detection of GI cancers, ulcers, inflammation, and other conditions in clinical practice.
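Detection precision of the kind reported for MRAY is typically computed by matching predicted boxes to ground truth at an intersection-over-union (IoU) threshold. A minimal IoU sketch with invented box coordinates; the paper's own matching code is not published here:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))  # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))  # overlap height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# A predicted bright-spot box vs. a ground-truth annotation (made-up coordinates).
pred, truth = (10, 10, 50, 50), (20, 20, 60, 60)
match = iou(pred, truth) >= 0.5  # a commonly used detection threshold
```

A prediction counts toward precision only when `match` is true for some ground-truth box.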

A multimodal fusion system predicting survival benefits of immune checkpoint inhibitors in unresectable hepatocellular carcinoma.

Xu J, Wang T, Li J, Wang Y, Zhu Z, Fu X, Wang J, Zhang Z, Cai W, Song R, Hou C, Yang LZ, Wang H, Wong STC, Li H

PubMed · Jun 14 2025
Early identification of unresectable hepatocellular carcinoma (HCC) patients who may benefit from immune checkpoint inhibitors (ICIs) is crucial for optimizing outcomes. Here, we developed a multimodal fusion (MMF) system integrating CT-derived deep learning features and clinical data to predict overall survival (OS) and progression-free survival (PFS). Using retrospective multicenter data (n = 859), the MMF combining an ensemble deep learning (Ensemble-DL) model with clinical variables achieved strong external validation performance (C-index: OS = 0.74, PFS = 0.69), outperforming radiomics (29.8% OS improvement), mRECIST (27.6% OS improvement), clinical benchmarks (C-index: OS = 0.67, p = 0.0011; PFS = 0.65, p = 0.033), and Ensemble-DL (C-index: OS = 0.69, p = 0.0028; PFS = 0.66, p = 0.044). The MMF system effectively stratified patients across clinical subgroups and demonstrated interpretability through activation maps and radiomic correlations. Differential gene expression analysis revealed enrichment of the PI3K/Akt pathway in patients identified by the MMF system. The MMF system provides an interpretable, clinically applicable approach to guide personalized ICI treatment in unresectable HCC.
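The C-index used to compare the MMF, radiomics, and clinical models can be sketched in a few lines; this is a generic Harrell's C-index over a toy cohort, not the authors' implementation:

```python
def c_index(times, events, risks):
    """Harrell's concordance index: fraction of comparable patient pairs in
    which the higher-risk patient has the shorter survival time."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if patient i had the event before time j.
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5  # ties in risk count as half
    return concordant / comparable

# Toy cohort: (months, event indicator, model risk score) -- illustrative only.
times  = [5, 8, 12, 20]
events = [1, 1, 0, 1]
risks  = [0.9, 0.7, 0.4, 0.2]
ci = c_index(times, events, risks)
```

A C-index of 0.5 is chance-level ranking; the abstract's 0.74 for OS means the fused model orders patient pairs correctly about three times out of four.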

Multi-class transformer-based segmentation of pancreatic ductal adenocarcinoma and surrounding structures in CT imaging: a multi-center evaluation.

Wen S, Xiao X

PubMed · Jun 14 2025
Accurate segmentation of pancreatic ductal adenocarcinoma (PDAC) and surrounding anatomical structures is critical for diagnosis, treatment planning, and outcome assessment. This study proposes a deep learning-based framework to automate multi-class segmentation in CT images, comparing the performance of four state-of-the-art architectures. This retrospective multi-center study included 3265 patients from six institutions. Four deep learning models (UNet, nnU-Net, UNETR, and Swin-UNet) were trained using five-fold cross-validation on data from five centers and tested independently on a sixth center (n = 569). Preprocessing included intensity normalization, voxel resampling, and standardized annotation for six structures: PDAC lesion, pancreas, veins, arteries, pancreatic duct, and common bile duct. Evaluation metrics included Dice Similarity Coefficient (DSC), Intersection over Union (IoU), directed Hausdorff Distance (dHD), Average Symmetric Surface Distance (ASSD), and Volume Overlap Error (VOE). Statistical comparisons were made using Wilcoxon signed-rank tests with Bonferroni correction. Swin-UNet outperformed all models with a mean validation DSC of 92.4% and test DSC of 90.8%, showing minimal overfitting. It also achieved the lowest dHD (4.3 mm), ASSD (1.2 mm), and VOE (6.0%) in cross-validation. Per-class DSCs for Swin-UNet were consistently higher across all anatomical targets, including challenging structures like the pancreatic duct (91.0%) and bile duct (91.8%). Statistical analysis confirmed the superiority of Swin-UNet (p < 0.001). All models showed generalization capability, but Swin-UNet provided the most accurate and robust segmentation across datasets. Transformer-based architectures, particularly Swin-UNet, enable precise and generalizable multi-class segmentation of PDAC and surrounding anatomy. This framework has potential for clinical integration in PDAC diagnosis, staging, and therapy planning.
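The overlap metrics above are related: VOE is the complement of IoU, and both follow from the same intersection counts as the DSC. A small Python sketch on synthetic voxel masks (illustrative only, not the study's evaluation code):

```python
def overlap_metrics(pred, truth):
    """Dice, IoU, and volume overlap error from two sets of voxel coordinates."""
    inter = len(pred & truth)
    dice = 2 * inter / (len(pred) + len(truth))
    iou = inter / len(pred | truth)
    voe = 1.0 - iou  # VOE is defined as the complement of IoU
    return dice, iou, voe

# Tiny synthetic masks standing in for PDAC voxel labels.
truth = {(x, y) for x in range(4) for y in range(4)}     # 4x4 block, 16 voxels
pred  = {(x, y) for x in range(1, 5) for y in range(4)}  # same block shifted by one column
dice, iou, voe = overlap_metrics(pred, truth)
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is why a 92.4% DSC and a 6.0% VOE can describe the same segmentations.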

Automated quantification of T1 and T2 relaxation times in liver mpMRI using deep learning: a sequence-adaptive approach.

Zbinden L, Erb S, Catucci D, Doorenbos L, Hulbert L, Berzigotti A, Brönimann M, Ebner L, Christe A, Obmann VC, Sznitman R, Huber AT

PubMed · Jun 14 2025
To evaluate a deep learning sequence-adaptive liver multiparametric MRI (mpMRI) assessment with validation in different populations using total and segmental T1 and T2 relaxation time maps. A neural network was trained to label liver segmental parenchyma and its vessels on noncontrast T1-weighted gradient-echo Dixon in-phase acquisitions from 200 liver mpMRI examinations. Then, 120 unseen liver mpMRI examinations of patients with primary sclerosing cholangitis or healthy controls were assessed by coregistering the labels to noncontrast and contrast-enhanced T1 and T2 relaxation time maps for optimization and internal testing. The algorithm was externally tested in a segmental and total liver analysis of 65 previously unseen patients with biopsy-proven liver fibrosis and 25 healthy volunteers. Measured relaxation times were compared to manual measurements using the intraclass correlation coefficient (ICC) and the Wilcoxon test. Agreement between manual and deep learning-generated segmental areas on different T1 and T2 maps was excellent for segmental (ICC = 0.95 ± 0.1; p < 0.001) and total liver assessment (0.97 ± 0.02, p < 0.001). The resulting median of the differences between automated and manual measurements among all testing populations and liver segments was 1.8 ms for noncontrast T1 (median 835 versus 842 ms), 2.0 ms for contrast-enhanced T1 (median 518 versus 519 ms), and 0.3 ms for T2 (median 37 versus 37 ms). Automated quantification of liver mpMRI is highly effective across different patient populations, offering excellent reliability for total and segmental T1 and T2 maps. Its scalable, sequence-adaptive design could foster comprehensive clinical decision-making. The proposed automated, sequence-adaptive algorithm for total and segmental analysis of liver mpMRI may be co-registered to any combination of parametric sequences, enabling comprehensive quantitative analysis of liver mpMRI without sequence-specific training.
A deep learning-based algorithm automatically quantified segmental T1 and T2 relaxation times in liver mpMRI. The two-step approach of segmentation and co-registration made it possible to assess arbitrary sequences. The algorithm demonstrated high reliability relative to manual reader quantification. No additional sequence-specific training is required to assess other parametric sequences. The DL algorithm has the potential to enhance individual liver phenotyping.
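The ICC used to compare automated and manual relaxation times can be illustrated with a generic ICC(2,1) (two-way random effects, absolute agreement, single measurement); the formula choice and the sample values below are assumptions for illustration, not taken from the paper:

```python
def icc_2_1(ratings):
    """ICC(2,1) per Shrout & Fleiss: two-way random effects, absolute agreement.
    `ratings` is a list of per-subject lists, one value per rater."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]                      # per subject
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]  # per rater
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    sse = sum((ratings[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Manual vs. automated T1 times (ms) for five liver segments -- illustrative values.
pairs = [[842, 835], [519, 518], [905, 902], [610, 608], [760, 757]]
icc = icc_2_1(pairs)
```

With millisecond-scale disagreements against hundreds of milliseconds of between-segment variation, the ICC is close to 1, mirroring the near-perfect agreement the abstract reports.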

Utility of Thin-slice Single-shot T2-weighted MR Imaging with Deep Learning Reconstruction as a Protocol for Evaluating Pancreatic Cystic Lesions.

Ozaki K, Hasegawa H, Kwon J, Katsumata Y, Yoneyama M, Ishida S, Iyoda T, Sakamoto M, Aramaki S, Tanahashi Y, Goshima S

PubMed · Jun 14 2025
To assess the effects of industry-developed deep learning reconstruction with super resolution (DLR-SR) on single-shot turbo spin-echo (SshTSE) images with a slice thickness of 2 mm (SshTSE<sup>2mm</sup>) relative to images with a thickness of 5 mm (SshTSE<sup>5mm</sup>) in patients with pancreatic cystic lesions. Thirty consecutive patients who underwent abdominal MRI examinations for pancreatic cystic lesions under observation between June 2024 and July 2024 were enrolled. We qualitatively and quantitatively evaluated the image quality of SshTSE<sup>2mm</sup> and SshTSE<sup>5mm</sup> with and without DLR-SR. The SNRs of the pancreas, spleen, paraspinal muscle, peripancreatic fat, and pancreatic cystic lesions on SshTSE<sup>2mm</sup> with and without DLR-SR did not decrease compared with those on SshTSE<sup>5mm</sup> with and without DLR-SR. There were no significant differences in the contrast-to-noise ratios (CNRs) of the pancreas to cystic lesions and to fat among the four image types. SshTSE<sup>2mm</sup> with DLR-SR had the highest image quality for pancreas edge sharpness, perceived coarseness, pancreatic duct clarity, noise, artifacts, overall image quality, and diagnostic confidence for cystic lesions, followed by SshTSE<sup>2mm</sup> without DLR-SR and SshTSE<sup>5mm</sup> with and without DLR-SR (p < 0.0001). SshTSE<sup>2mm</sup> with DLR-SR images had better quality than the other images without decreased SNRs or CNRs. Thin-slice SshTSE with DLR-SR may be feasible and clinically useful for evaluating patients with pancreatic cystic lesions.
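SNR and CNR of the kind compared across the four image types are simple ratio measures. A sketch with invented ROI pixel samples; the paper's exact ROI definitions and noise-estimation method are not reproduced here:

```python
from statistics import mean, stdev

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean tissue signal over noise standard deviation."""
    return mean(signal_roi) / stdev(noise_roi)

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissues, relative to the same noise."""
    return abs(mean(roi_a) - mean(roi_b)) / stdev(noise_roi)

# Made-up ROI pixel samples: pancreas, cystic lesion, background noise.
pancreas = [410, 415, 405, 420, 400]
cyst     = [880, 890, 875, 885, 895]
noise    = [5, -3, 4, -6, 2]

snr_pancreas = snr(pancreas, noise)
cnr_cyst = cnr(pancreas, cyst, noise)
```

Thinner slices collect less signal per voxel, which is why the abstract emphasizes that the 2 mm images did not lose SNR or CNR once DLR-SR was applied.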

Qualitative evaluation of automatic liver segmentation in computed tomography images for clinical use in radiation therapy.

Khalal DM, Slimani S, Bouraoui ZE, Azizi H

PubMed · Jun 14 2025
Segmentation of target volumes and organs at risk on computed tomography (CT) images constitutes an important step in the radiotherapy workflow. Artificial intelligence-based methods have significantly improved organ segmentation in medical images. Automatic segmentations are frequently evaluated using geometric metrics. Before clinical implementation in the radiotherapy workflow, automatic segmentations must also be evaluated by clinicians. The aim of this study was to investigate the correlation between geometric metrics used for segmentation evaluation and the assessment performed by clinicians. In this study, we used the U-Net model to segment the liver in CT images from a publicly available dataset. The model's performance was evaluated using two geometric metrics: the Dice similarity coefficient and the Hausdorff distance. Additionally, a qualitative evaluation was performed by clinicians who reviewed the automatic segmentations to rate their clinical acceptability for use in the radiotherapy workflow. The correlation between the geometric metrics and the clinicians' evaluations was studied. The results showed that while the Dice coefficient and Hausdorff distance are reliable indicators of segmentation accuracy, they do not always align with clinicians' assessments. In some cases, segmentations with high Dice scores still required clinician corrections before clinical use in the radiotherapy workflow. This study highlights the need for more comprehensive evaluation metrics beyond geometric measures to assess the clinical acceptability of artificial intelligence-based segmentation. Although the deep learning model provided promising segmentation results, the present study shows that standardized validation methodologies are crucial for ensuring the clinical viability of automatic segmentation systems.
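A rank correlation is one way to quantify how well a geometric metric tracks clinician ratings; the abstract does not name its correlation method, so the Spearman sketch and values below are illustrative assumptions:

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation (assumes no tied values) between two samples."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank  # rank 1 = smallest value
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Dice scores vs. clinician acceptability ratings (1-5 scale) -- invented numbers.
dice    = [0.97, 0.95, 0.91, 0.88, 0.84]
ratings = [5, 4, 4.5, 2, 1]
rho = spearman_rho(dice, ratings)
```

A high but imperfect rho is exactly the situation the study describes: the metric usually tracks acceptability, yet individual high-Dice cases can still be rated as needing correction.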

Investigating the Role of Area Deprivation Index in Observed Differences in CT-Based Body Composition by Race.

Chisholm M, Jabal MS, He H, Wang Y, Kalisz K, Lafata KJ, Calabrese E, Bashir MR, Tailor TD, Magudia K

PubMed · Jun 13 2025
Differences in CT-based body composition (BC) have been observed by race. We sought to investigate whether census block group-level disadvantage indices, the area deprivation index (ADI) and the social vulnerability index (SVI), together with age, sex, and/or clinical factors could explain race-based differences in body composition. The first abdominal CT exams for patients in Durham County at a single institution in 2020 were analyzed using a fully automated, open-source deep learning BC analysis workflow to generate cross-sectional areas for skeletal muscle (SMA), subcutaneous fat (SFA), and visceral fat (VFA). Patient-level demographic and clinical data were gathered from the electronic health record. State ADI ranking and SVI values were linked to each patient. Univariable and multivariable models were created to assess the association of demographics, ADI, SVI, and other relevant clinical factors with SMA, SFA, and VFA. 5,311 patients (mean age, 57.4 years; 55.5% female; 46.5% Black, 39.5% White, 10.3% Hispanic) were included. At univariable analysis, race, ADI, SVI, sex, BMI, weight, and height were significantly associated with all body compartments (SMA, SFA, and VFA; all p < 0.05). At multivariable analyses adjusted for patient characteristics and clinical comorbidities, race remained a significant predictor, whereas ADI did not. SVI remained significant in the multivariable model for SMA.
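A univariable association of the kind tested here reduces to a least-squares slope and a correlation coefficient. A Python sketch with fabricated ADI and SMA values; the study's actual models also adjust for covariates:

```python
def univariable_fit(xs, ys):
    """Least-squares slope and Pearson r for a single predictor (toy stand-in
    for the study's univariable models)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / vx, cov / (vx * vy) ** 0.5

# Hypothetical state ADI rank vs. skeletal muscle area (cm^2) -- invented numbers.
adi = [10, 30, 50, 70, 90]
sma = [160, 150, 145, 140, 130]
slope, r = univariable_fit(adi, sma)
```

The multivariable step then asks whether such a slope survives after adjusting for the other predictors, which is where ADI lost significance in this study.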

Uncovering ethical biases in publicly available fetal ultrasound datasets.

Fiorentino MC, Moccia S, Cosmo MD, Frontoni E, Giovanola B, Tiribelli S

PubMed · Jun 13 2025
We explore biases present in publicly available fetal ultrasound (US) imaging datasets, currently at the disposal of researchers to train deep learning (DL) algorithms for prenatal diagnostics. As DL increasingly permeates the field of medical imaging, the urgency to critically evaluate the fairness of benchmark public datasets used to train them grows. Our thorough investigation reveals a multifaceted bias problem, encompassing issues such as lack of demographic representativeness, limited diversity in clinical conditions depicted, and variability in US technology used across datasets. We argue that these biases may significantly influence DL model performance, which may lead to inequities in healthcare outcomes. To address these challenges, we recommend a multilayered approach. This includes promoting practices that ensure data inclusivity, such as diversifying data sources and populations, and refining model strategies to better account for population variances. These steps will enhance the trustworthiness of DL algorithms in fetal US analysis.

Radiogenomic correlation of hypoxia-related biomarkers in clear cell renal cell carcinoma.

Shao Y, Cen HS, Dhananjay A, Pawan SJ, Lei X, Gill IS, D'souza A, Duddalwar VA

PubMed · Jun 12 2025
This study aimed to evaluate radiomic models' ability to predict hypoxia-related biomarker expression in clear cell renal cell carcinoma (ccRCC). Clinical and molecular data from 190 patients were extracted from The Cancer Genome Atlas-Kidney Renal Clear Cell Carcinoma dataset, and corresponding CT imaging data were manually segmented from The Cancer Imaging Archive. A panel of 2,824 radiomic features was analyzed, and robust features with high interscanner reproducibility were selected. Gene expression data for 13 hypoxia-related biomarkers were stratified by tumor grade (1/2 vs. 3/4) and stage (I/II vs. III/IV) and analyzed using the Wilcoxon rank sum test. Machine learning modeling was conducted using the High-Performance Random Forest (RF) procedure in SAS Enterprise Miner 15.1, with significance set at p < 0.05. Descriptive univariate analysis revealed significantly lower expression of several biomarkers in high-grade and late-stage tumors, with KLF6 showing the most notable decrease. The RF model effectively predicted the expression of KLF6, ETS1, and BCL2, as well as PLOD2 and PPARGC1A underexpression. Stratified performance assessment showed improved predictive ability for RORA, BCL2, and KLF6 in high-grade tumors and for ETS1 across grades, with no significant performance difference across grade or stage. The RF model demonstrated modest but significant associations between texture metrics derived from clinical CT scans, such as GLDM and GLCM, and key hypoxia-related biomarkers including KLF6, BCL2, ETS1, and PLOD2. These findings suggest that radiomic analysis could support ccRCC risk stratification and personalized treatment planning by providing non-invasive insights into tumor biology.
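The Wilcoxon rank sum (Mann-Whitney) statistic used for the grade and stage comparisons can be computed directly from pooled ranks. A sketch with fabricated expression values and no tied observations:

```python
def mann_whitney_u(group_a, group_b):
    """U statistics of the Wilcoxon rank-sum (Mann-Whitney) test; assumes no ties."""
    combined = sorted(group_a + group_b)
    # 1-based rank of each group-A value within the pooled sample
    rank_sum_a = sum(combined.index(v) + 1 for v in group_a)
    n_a, n_b = len(group_a), len(group_b)
    u_a = rank_sum_a - n_a * (n_a + 1) / 2
    return u_a, n_a * n_b - u_a  # U for each group; they sum to n_a * n_b

# Hypothetical KLF6 expression in low- vs. high-grade tumors (fabricated values).
low_grade  = [8.1, 7.9, 8.4, 8.0]
high_grade = [6.2, 6.8, 7.1]
u_low, u_high = mann_whitney_u(low_grade, high_grade)
```

Here every low-grade value exceeds every high-grade value, so one U reaches its maximum and the other is zero; the p-value then follows from the null distribution of U.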
