
Transfer Learning and Explainable AI for Brain Tumor Classification: A Study Using MRI Data from Bangladesh

Shuvashis Sarker

arXiv preprint · Jun 8 2025
Brain tumors, whether benign or malignant, pose considerable health risks, with malignant tumors being the more dangerous because of their rapid and uncontrolled proliferation. Timely identification is crucial for improving patient outcomes, particularly in countries such as Bangladesh, where healthcare infrastructure is constrained. Manual MRI analysis is laborious and prone to error, making it ill-suited to prompt diagnosis. This research addressed these problems by developing an automated brain tumor classification system using MRI data collected from several hospitals in Bangladesh. Advanced deep learning models, including VGG16, VGG19, and ResNet50, were used to classify glioma, meningioma, and other brain tumor types. Explainable AI (XAI) methods such as Grad-CAM and Grad-CAM++ were employed to improve model interpretability by highlighting the regions of the MRI scans that drove each classification. VGG16 achieved the highest accuracy at 99.17%. The integration of XAI improved the system's transparency and reliability, making it more suitable for clinical application in resource-limited settings such as Bangladesh. This study highlights the potential of deep learning models, combined with explainable AI, to improve brain tumor detection and identification in regions with limited access to advanced medical technologies.
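
As a rough illustration of the Grad-CAM procedure used for interpretability here, the sketch below computes a class-activation heat map for a VGG16-style classifier in Keras. The layer name "block5_conv3", the 224×224 input, and the fine-tuned classification head are assumptions for illustration, not details taken from the paper.

```python
# Minimal Grad-CAM sketch for a VGG16-based classifier (illustrative only).
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name="block5_conv3", class_index=None):
    # Map the input to (last conv feature maps, predictions).
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        class_score = preds[:, class_index]
    # Gradient of the class score w.r.t. the conv feature maps.
    grads = tape.gradient(class_score, conv_out)
    # Global-average-pool the gradients: one weight per feature-map channel.
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of feature maps, then ReLU and normalization.
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    cam = cam / (tf.reduce_max(cam) + 1e-8)
    return cam.numpy()  # upsample to the MRI slice size before overlaying
```

Grad-CAM++ differs mainly in how the channel weights are computed (using higher-order gradient terms); the overlay step is the same.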

Diagnostic performance of lumbar spine CT using deep learning denoising to evaluate disc herniation and spinal stenosis.

Park S, Kang JH, Moon SG

PubMed · Jun 7 2025
To evaluate the diagnostic performance of lumbar spine CT using deep learning denoising (DLD CT) for detecting disc herniation and spinal stenosis. This retrospective study included 47 patients (229 intervertebral discs from L1/2 to L5/S1; 18 men and 29 women; mean age, 69.1 ± 10.9 years) who underwent lumbar spine CT and MRI within 1 month. CT images were reconstructed using filtered back projection (FBP) and denoised using a deep learning algorithm (ClariCT.AI). Three radiologists independently evaluated standard CT and DLD CT at an 8-week interval for the presence of disc herniation, central canal stenosis, and neural foraminal stenosis. Subjective image quality and diagnostic confidence were also assessed using five-point Likert scales. Standard CT and DLD CT were compared using MRI as the reference standard. DLD CT showed higher sensitivity (60% (70/117) vs. 44% (51/117); p < 0.001) and similar specificity (94% (534/570) vs. 94% (538/570); p = 0.465) for detecting disc herniation. Specificity for detecting spinal canal stenosis and neural foraminal stenosis was higher with DLD CT (90% (487/540) vs. 86% (466/540), p = 0.003; 94% (1202/1272) vs. 92% (1171/1272), p < 0.001), while sensitivity was comparable (81% (119/147) vs. 77% (113/147), p = 0.233; 83% (85/102) vs. 81% (83/102), p = 0.636). Image quality and diagnostic confidence were superior for DLD CT (all comparisons, p < 0.05). Compared with standard CT, DLD CT can improve diagnostic performance in detecting disc herniation and spinal stenosis, with superior image quality and diagnostic confidence. Question: Accurate diagnosis of disc herniation and spinal stenosis on lumbar spine CT is limited by its low soft-tissue contrast. Findings: Lumbar spine CT using deep learning denoising (DLD CT) demonstrated superior diagnostic performance in detecting disc herniation and spinal stenosis compared with standard CT. Clinical relevance: DLD CT can be used as a simple and cost-effective screening test.
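
For readers who want to verify the headline numbers, the following minimal script recomputes the pooled per-disc sensitivity and specificity for disc herniation from the counts quoted in the abstract (MRI as reference standard); the paired significance tests reported in the paper are not reproduced.

```python
# Recompute sensitivity/specificity for disc herniation from the abstract's counts.
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

# 117 MRI-positive and 570 MRI-negative disc levels (readings pooled across readers).
sens_dld = sensitivity(tp=70, fn=117 - 70)    # ~0.60
sens_std = sensitivity(tp=51, fn=117 - 51)    # ~0.44
spec_dld = specificity(tn=534, fp=570 - 534)  # ~0.94
spec_std = specificity(tn=538, fp=570 - 538)  # ~0.94

print(f"DLD CT:      sensitivity {sens_dld:.2f}, specificity {spec_dld:.2f}")
print(f"Standard CT: sensitivity {sens_std:.2f}, specificity {spec_std:.2f}")
```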

Automatic MRI segmentation of masticatory muscles using deep learning enables large-scale muscle parameter analysis.

Ten Brink RSA, Merema BJ, den Otter ME, Jensma ML, Witjes MJH, Kraeima J

PubMed · Jun 7 2025
Mandibular reconstruction to restore mandibular continuity often relies on patient-specific implants and virtual surgical planning, but current implant designs rarely consider individual biomechanical demands, which are critical for preventing complications such as stress shielding, screw loosening, and implant failure. The inclusion of patient-specific masticatory muscle parameters such as cross-sectional area, vectors, and volume could improve implant success, but manual segmentation of these parameters is time-consuming, limiting large-scale analyses. In this study, a deep learning model was trained for automatic segmentation of eight masticatory muscles on MRI images. Forty T1-weighted MRI scans were segmented manually or via pseudo-labelling for training. Training employed 5-fold cross-validation over 1000 epochs per fold and testing was done on 10 manually segmented scans. The model achieved a mean Dice similarity coefficient (DSC) of 0.88, intersection over union (IoU) of 0.79, precision of 0.87, and recall of 0.89, demonstrating high segmentation accuracy. These results indicate the feasibility of large-scale, reproducible analyses of muscle volumes, directions, and estimated forces. By integrating these parameters into implant design and surgical planning, this method offers a step forward in developing personalized surgical strategies that could improve postoperative outcomes in mandibular reconstruction. This brings the field closer to truly individualized patient care.
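
The reported metrics are standard voxel-overlap scores; a minimal NumPy sketch of how Dice, IoU, precision, and recall are computed from a predicted and a ground-truth binary mask is shown below (the study's actual per-muscle evaluation pipeline is not reproduced).

```python
# Voxel-overlap metrics for a single muscle label (illustrative only).
import numpy as np

def overlap_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> dict:
    """pred, gt: binary 3D masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return {
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
        "iou": tp / (tp + fp + fn + eps),
        "precision": tp / (tp + fp + eps),
        "recall": tp / (tp + fn + eps),
    }
```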

Simulating workload reduction with an AI-based prostate cancer detection pathway using a prediction uncertainty metric.

Fransen SJ, Bosma JS, van Lohuizen Q, Roest C, Simonis FFJ, Kwee TC, Yakar D, Huisman H

PubMed · Jun 7 2025
This study compared two uncertainty quantification (UQ) metrics to rule out prostate MRI scans with a high-confidence artificial intelligence (AI) prediction and investigated the resulting potential reduction in radiologists' workload in a clinically significant prostate cancer (csPCa) detection pathway. This retrospective study utilized 1612 MRI scans from three institutes for csPCa (Gleason Grade Group ≥ 2) assessment. We compared the standard diagnostic pathway (radiologist reading) to an AI-based rule-out pathway in terms of efficacy and accuracy in diagnosing csPCa. In the rule-out pathway, 15 AI submodels (trained on 7756 cases) assessed each MRI scan, and any prediction deemed uncertain was referred to a radiologist for reading. We compared the mean (meanUQ) and variability (varUQ) of predictions using the DeLong test on the areas under the receiver operating characteristic curve (AUROC). The level of workload reduction of the best UQ method was determined based on maintained sensitivity at non-inferior specificity using margins of 0.05 and 0.10. The workload reduction of the proposed pathway was institute-specific: up to 20% at a 0.10 non-inferiority margin (p < 0.05) and non-significant at a 0.05 margin. VarUQ-based rule-out gave higher but non-significantly different AUROC scores than meanUQ in selected cases (+0.05 AUROC, p > 0.05). MeanUQ and varUQ showed promise in AI-based rule-out for csPCa detection. Using varUQ in an AI-based csPCa detection pathway could reduce the number of scans radiologists need to read. The varying performance of the UQ rule-out indicates the need for institute-specific UQ thresholds. Question: Can AI autonomously assess prostate MRI scans with high certainty at non-inferior performance compared with radiologists, potentially reducing radiologists' workload? Findings: The optimal ratio of AI-model to radiologist readings is institute-dependent and requires calibration. Clinical relevance: Semi-autonomous AI-based prostate cancer detection with variational UQ scores shows promise in reducing the number of scans radiologists need to read.
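
To make the two uncertainty metrics concrete, the sketch below derives meanUQ and varUQ from an ensemble of per-scan probabilities and flags uncertain scans for radiologist reading. The threshold values are placeholders, not the calibrated, institute-specific thresholds the study argues are needed.

```python
# Ensemble-based uncertainty triage (illustrative sketch, placeholder thresholds).
import numpy as np

def triage(submodel_probs: np.ndarray, mean_band=(0.2, 0.6), var_cutoff=0.02,
           use_variance=True):
    """submodel_probs: (n_scans, n_submodels) csPCa probabilities, e.g. 15 submodels."""
    mean_p = submodel_probs.mean(axis=1)   # meanUQ
    var_p = submodel_probs.var(axis=1)     # varUQ
    if use_variance:
        uncertain = var_p > var_cutoff                                  # varUQ rule-out
    else:
        uncertain = (mean_p > mean_band[0]) & (mean_p < mean_band[1])   # meanUQ rule-out
    ai_positive = (mean_p >= 0.5) & ~uncertain
    ai_negative = (mean_p < 0.5) & ~uncertain
    return {"refer_to_radiologist": uncertain,
            "ai_positive": ai_positive,
            "ai_negative": ai_negative}

# Example with random probabilities for 4 scans x 15 submodels.
print(triage(np.random.rand(4, 15)))
```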

Estimation of tumor coverage after RF ablation of hepatocellular carcinoma using single 2D image slices.

Varble N, Li M, Saccenti L, Borde T, Arrichiello A, Christou A, Lee K, Hazen L, Xu S, Lencioni R, Wood BJ

PubMed · Jun 7 2025
To assess the technical success of radiofrequency ablation (RFA) in patients with hepatocellular carcinoma (HCC), an artificial intelligence (AI) model was developed to estimate tumor coverage without the need for segmentation or registration tools. A secondary retrospective analysis of 550 patients in the multicenter, multinational OPTIMA trial (3-7 cm solitary HCC lesions, randomized to RFA or RFA + LTLD) identified 182 patients with well-defined pre-RFA tumors and 1-month post-RFA devascularized ablation zones on enhanced CT. The ground truth, percent tumor coverage, was determined from semi-automatic 3D tumor and ablation zone segmentation and elastic registration. The isocenter of the tumor and ablation zone was identified on 2D axial CT images. Feature extraction was performed, and classification and linear regression models were built. Images were augmented, and 728 image pairs were used for training and testing. The percent tumor coverage estimated by the models was compared to the ground truth. Validation was performed on eight patient cases from a separate institution, where RFA was performed and pre- and post-ablation images were collected. In the testing cohorts, the best model accuracy was achieved with classification and moderate data augmentation (AUC = 0.86, TPR = 0.59, TNR = 0.89, accuracy = 69%) and with random forest regression (RMSE = 12.6%, MAE = 9.8%). Validation at a separate institution did not achieve accuracy greater than random estimation. Visual review of training cases suggests that poor tumor coverage may reflect atypical ablation zone shrinkage 1 month post-RFA, which may not be reflected in clinical utilization. An AI model that uses 2D images at the center of the tumor and at 1 month post-ablation can accurately estimate ablation tumor coverage; in separate validation cohorts, however, translation could be challenging.
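
As a sketch of the regression arm described above, the snippet below fits a random forest to image-derived features and scores it with RMSE and MAE. The feature extraction and the OPTIMA data are not available here, so the inputs are synthetic placeholders.

```python
# Random forest regression of percent tumor coverage (synthetic placeholder data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(728, 32))                        # stand-in for 2D-slice features
y = np.clip(rng.normal(85, 15, size=728), 0, 100)     # stand-in for % tumor coverage

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

rmse = mean_squared_error(y_te, pred) ** 0.5
mae = mean_absolute_error(y_te, pred)
print(f"RMSE {rmse:.1f}%, MAE {mae:.1f}%")
```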

De-identification of medical imaging data: a comprehensive tool for ensuring patient privacy.

Rempe M, Heine L, Seibold C, Hörst F, Kleesiek J

PubMed · Jun 7 2025
Medical imaging data employed in research frequently comprise sensitive Protected Health Information (PHI) and Personally Identifiable Information (PII), which are subject to rigorous legal frameworks such as the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA). Consequently, these data must be de-identified prior to use, which presents a significant challenge for many researchers. Given the vast array of medical imaging data, a variety of de-identification techniques is required. To facilitate the de-identification of medical imaging data, we have developed an open-source tool that can de-identify Digital Imaging and Communications in Medicine (DICOM) magnetic resonance images, computed tomography images, whole slide images, and magnetic resonance TWIX raw data. Furthermore, the implementation of a neural network enables the removal of text within the images. The proposed tool reaches results comparable to current state-of-the-art algorithms at reduced computational time (by up to a factor of 265). The tool also fully de-identifies image data of various types, such as Neuroimaging Informatics Technology Initiative (NIfTI) files or whole slide image (WSI) DICOMs. The proposed tool automates an elaborate de-identification pipeline for multiple types of input, reducing the need for additional de-identification tools for imaging data. Question: How can researchers effectively de-identify sensitive medical imaging data while complying with legal frameworks to protect patient health information? Findings: We developed an open-source tool that automates the de-identification of various medical imaging formats, enhancing the efficiency of de-identification processes. Clinical relevance: This tool addresses the critical need for robust and user-friendly de-identification solutions in medical imaging, facilitating data exchange in research while safeguarding patient privacy.
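
The snippet below is a generic illustration of the kind of metadata scrubbing such a pipeline performs, using pydicom; it is not the authors' tool, and the neural-network removal of burned-in text described in the abstract is omitted. The tag list is a small, non-exhaustive example.

```python
# Generic DICOM metadata de-identification sketch (not the authors' tool).
import pydicom

PHI_TAGS = [
    "PatientName", "PatientID", "PatientBirthDate", "PatientAddress",
    "ReferringPhysicianName", "InstitutionName", "AccessionNumber", "StudyID",
]

def deidentify(path_in: str, path_out: str) -> None:
    ds = pydicom.dcmread(path_in)
    for keyword in PHI_TAGS:
        if keyword in ds:
            ds.data_element(keyword).value = ""   # blank out identifying values
    ds.remove_private_tags()                      # drop vendor-specific private tags
    ds.save_as(path_out)

# deidentify("scan.dcm", "scan_deid.dcm")
```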

Contribution of Labrum and Cartilage to Joint Surface in Different Hip Deformities: An Automatic Deep Learning-Based 3-Dimensional Magnetic Resonance Imaging Analysis.

Meier MK, Roshardt JA, Ruckli AC, Gerber N, Lerch TD, Jung B, Tannast M, Schmaranzer F, Steppacher SD

PubMed · Jun 7 2025
Multiple 2-dimensional magnetic resonance imaging (MRI) studies have indicated that the size of the labrum adjusts in response to altered joint loading. In patients with hip dysplasia, it tends to increase as a compensatory mechanism for inadequate acetabular coverage. To determine the differences in labral contribution to the joint surface among different hip deformities, as well as which radiographic parameters influence labral contribution to the joint surface, using a deep learning-based approach for automatic 3-dimensional (3D) segmentation of MRI. Cross-sectional study; Level of evidence, 4. This retrospective study was approved by the local ethics committee with a waiver of informed consent. A total of 98 patients (100 hips) with symptomatic hip deformities undergoing direct hip magnetic resonance arthrography (3 T) between January 2020 and October 2021 were consecutively selected (mean age, 30 ± 9 years; 64% female). The standard imaging protocol included proton density-weighted turbo spin echo images and an axial-oblique 3D T1-weighted MP2RAGE sequence. According to acetabular morphology, hips were divided into subgroups: dysplasia (lateral center-edge [LCE] angle <23°), normal coverage (LCE, 23°-33°), overcoverage (LCE, 33°-39°), severe overcoverage (LCE, >39°), and retroversion (retroversion index >10% and all 3 retroversion signs positive). A previously validated deep learning approach for automatic segmentation and software for calculation of the joint surface were used. The labral contribution to the joint surface was defined as labrum surface area / (labrum surface area + cartilage surface area). One-way analysis of variance with Tukey correction for multiple comparisons and linear regression analysis were performed. The mean labral contribution to the joint surface of dysplastic hips was 26% ± 5% (95% CI, 24%-28%), higher than in all other hip deformities (P value range, .001-.036). Linear regression analysis identified LCE angle (β = -.002; P < .001) and femoral torsion (β = .001; P = .008) as independent predictors of labral contribution to the joint surface, with a goodness-of-fit R² of 0.35. The labral contribution to the joint surface differs among hip deformities and is influenced by lateral acetabular coverage and femoral torsion. This study paves the way for a more in-depth understanding of the underlying pathomechanism and a reliable 3D analysis of the hip joint that can inform surgical decision-making in patients with hip deformities.
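
A minimal sketch of the outcome definition and regression used here: the labral contribution is the labrum surface area divided by the combined labrum-plus-cartilage surface area, and it is regressed on LCE angle and femoral torsion. The per-hip numbers below are hypothetical placeholders, not study data.

```python
# Labral contribution to the joint surface and its regression on radiographic parameters.
import numpy as np
from sklearn.linear_model import LinearRegression

def labral_contribution(labrum_area_mm2: float, cartilage_area_mm2: float) -> float:
    return labrum_area_mm2 / (labrum_area_mm2 + cartilage_area_mm2)

# Hypothetical hips: columns = [LCE angle (deg), femoral torsion (deg)].
X = np.array([[18.0, 25.0], [30.0, 15.0], [36.0, 10.0], [42.0, 5.0]])
y = np.array([
    labral_contribution(320, 900),    # e.g., a dysplastic hip
    labral_contribution(250, 1000),
    labral_contribution(220, 1050),
    labral_contribution(200, 1100),
])

reg = LinearRegression().fit(X, y)
print("coefficients (LCE, torsion):", reg.coef_, "R^2:", reg.score(X, y))
```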

Diagnostic accuracy of radiomics in risk stratification of gastrointestinal stromal tumors: A systematic review and meta-analysis.

Salimi M, Mohammadi H, Ghahramani S, Nemati M, Ashari A, Imani A, Imani MH

PubMed · Jun 7 2025
This systematic review and meta-analysis aimed to assess the diagnostic accuracy of radiomics in risk stratification of gastrointestinal stromal tumors (GISTs), focusing on evaluating radiomic models as a non-invasive tool in clinical practice. A comprehensive search was conducted across PubMed, Web of Science, EMBASE, Scopus, and the Cochrane Library up to May 17, 2025. Studies involving preoperative imaging and radiomics-based risk stratification of GISTs were included. Quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool and the Radiomics Quality Score (RQS). Pooled sensitivity, specificity, and area under the curve (AUC) were calculated using bivariate random-effects models. Meta-regression and subgroup analyses were performed to explore heterogeneity. A total of 29 studies were included: 22 (76%) based on computed tomography scans, 2 (7%) on endoscopic ultrasound, 3 (10%) on magnetic resonance imaging, and 2 (7%) on ultrasound. Of these, 18 studies provided sufficient data for meta-analysis. Pooled sensitivity, specificity, and AUC for radiomics-based GIST risk stratification were 0.84, 0.86, and 0.90 for training cohorts, and 0.84, 0.80, and 0.89 for validation cohorts. QUADAS-2 indicated some risk of bias due to insufficient pre-specified thresholds. The mean RQS was 13.14 ± 3.19. Radiomics holds promise for non-invasive GIST risk stratification, particularly with advanced imaging techniques. However, radiomic models are still in the early stages of clinical adoption. Further research is needed to improve diagnostic accuracy and validate their role alongside conventional methods such as biopsy and surgery.
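
For intuition only, the sketch below pools per-study sensitivities with a univariate DerSimonian-Laird random-effects model on the logit scale. The review itself used a bivariate random-effects model that pools sensitivity and specificity jointly; that model is not reproduced here, and the counts shown are hypothetical.

```python
# Simplified univariate random-effects pooling of sensitivities (logit scale).
import numpy as np

def pool_sensitivity_dl(true_positives, diseased_totals):
    tp = np.asarray(true_positives, dtype=float)
    n = np.asarray(diseased_totals, dtype=float)
    # Logit-transformed sensitivities with a 0.5 continuity correction.
    p = (tp + 0.5) / (n + 1.0)
    theta = np.log(p / (1 - p))
    var = 1.0 / (tp + 0.5) + 1.0 / (n - tp + 0.5)
    w = 1.0 / var
    # DerSimonian-Laird between-study variance.
    theta_fixed = np.sum(w * theta) / np.sum(w)
    q = np.sum(w * (theta - theta_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(theta) - 1)) / c)
    w_re = 1.0 / (var + tau2)
    pooled = np.sum(w_re * theta) / np.sum(w_re)
    return 1.0 / (1.0 + np.exp(-pooled))   # back-transform to a proportion

print(round(pool_sensitivity_dl([40, 55, 30], [50, 65, 35]), 3))
```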

Automated transcatheter heart valve 4DCT-based deformation assessment throughout the cardiac cycle: Towards enhanced long-term durability.

Busto L, Veiga C, González-Nóvoa JA, Campanioni S, Martínez C, Juan-Salvadores P, Jiménez V, Suárez S, López-Campos JÁ, Segade A, Alba-Castro JL, Kütting M, Baz JA, Íñiguez A

PubMed · Jun 7 2025
Transcatheter heart valve (THV) durability is a critical concern, and its deformation may influence long-term performance. Current assessments rely on CT-based single-phase measurements and require a tedious analysis process, potentially overlooking deformation dynamics throughout the cardiac cycle. A fully automated artificial intelligence-based method was developed to assess THV deformation in post-transcatheter aortic valve implantation (TAVI) 4DCT scans. The approach involves segmenting the THV, extracting orthogonal cross-sections along its axis, fitting ellipses to these cross-sections, and computing eccentricity to analyze deformation over the cardiac cycle. The method was evaluated in 21 TAVI patients with different self-expandable THV models, using one post-TAVI 4DCT series per patient. The THV inflow level exhibited the greatest eccentricity variations (0.35-0.69 among patients with the same THV model at end-diastole). Additionally, eccentricity varied throughout the cardiac cycle (0.23-0.57), highlighting the limitations of single-phase assessments in characterizing THV deformation. This method enables automated THV deformation assessment based on cross-sectional eccentricity. Significant differences were observed at the inflow level, and cyclic variations suggest that full cardiac cycle analysis provides a more comprehensive evaluation than single-phase measurements. This approach may aid in optimizing THV durability and function while preventing related complications.
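
The eccentricity measure described above can be reproduced in a few lines: fit an ellipse to the in-plane points of the THV frame at a given level and compute e = sqrt(1 − (b/a)²). The sketch below uses scikit-image's EllipseModel on synthetic points; extracting cross-section points from the 4DCT segmentation is outside its scope.

```python
# Ellipse eccentricity of a THV cross-section (synthetic points, illustrative only).
import numpy as np
from skimage.measure import EllipseModel

def cross_section_eccentricity(points_xy: np.ndarray) -> float:
    """points_xy: (N, 2) in-plane coordinates of the stent frame at one level."""
    model = EllipseModel()
    if not model.estimate(points_xy):
        return float("nan")
    _, _, a, b, _ = model.params            # center x, center y, semi-axes, rotation
    a, b = max(a, b), min(a, b)
    return float(np.sqrt(1.0 - (b / a) ** 2))   # 0 = circle, closer to 1 = more elliptic

# Synthetic cross-section with semi-axes of 12 mm and 10 mm.
t = np.linspace(0, 2 * np.pi, 60)
pts = np.column_stack([12 * np.cos(t), 10 * np.sin(t)])
print(round(cross_section_eccentricity(pts), 3))   # ~0.553
```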

Lack of children in public medical imaging data points to growing age bias in biomedical AI

Hua, S. B. Z., Heller, N., He, P., Towbin, A. J., Chen, I., Lu, A., Erdman, L.

medRxiv preprint · Jun 7 2025
Artificial intelligence (AI) is rapidly transforming healthcare, but its benefits are not reaching all patients equally. Children remain overlooked, with only 17% of FDA-approved medical AI devices labeled for pediatric use. In this work, we demonstrate that this exclusion may stem from a fundamental data gap. Our systematic review of 181 public medical imaging datasets reveals that children represent just under 1% of available data, while the majority of machine learning imaging conference papers we surveyed used publicly available data for methods development. As with other systematic biases in model development, past studies have demonstrated that pediatric representation in the data used to develop models intended for pediatric populations is essential for model performance in those populations. We add to these findings, showing that adult-trained chest radiograph models exhibit significant age bias when applied to pediatric populations, with higher false positive rates in younger children. This work underscores the urgent need for increased pediatric representation in publicly accessible medical datasets. We provide actionable recommendations for researchers, policymakers, and data curators to address this age equity gap and ensure AI benefits patients of all ages. 1-2 sentence summary: Our analysis reveals a critical healthcare age disparity: children represent less than 1% of public medical imaging datasets. This gap in representation leads to biased predictions across medical image foundation models, with the youngest patients facing the highest risk of misdiagnosis.
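
The age-bias analysis described above amounts to stratifying a model's error rates by age; a small pandas sketch of a false-positive-rate-by-age-band computation follows. Column names and age bands are assumptions for illustration.

```python
# False positive rate of a binary classifier, stratified by pediatric age band.
import pandas as pd

def fpr_by_age(df: pd.DataFrame) -> pd.Series:
    """df columns: 'age_years', 'label' (0/1 ground truth), 'pred' (0/1 model output)."""
    bands = pd.cut(df["age_years"], bins=[0, 2, 6, 12, 18], right=False,
                   labels=["0-1", "2-5", "6-11", "12-17"])
    negatives = df[df["label"] == 0].copy()
    negatives["band"] = bands.loc[negatives.index]
    # Mean prediction among ground-truth negatives = false positive rate per band.
    return negatives.groupby("band", observed=False)["pred"].mean()

df = pd.DataFrame({"age_years": [1, 3, 8, 15, 1, 4],
                   "label":     [0, 0, 0, 0,  0, 0],
                   "pred":      [1, 1, 0, 0,  1, 0]})
print(fpr_by_age(df))
```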
