BenchXAI: Comprehensive benchmarking of post-hoc explainable AI methods on multi-modal biomedical data.

Metsch JM, Hauschild AC

PubMed · Jun 1 2025
The increasing digitalization of multi-modal data in medicine and novel artificial intelligence (AI) algorithms open up a large number of opportunities for predictive models. In particular, deep learning models show great performance in the medical field. A major limitation of such powerful but complex models originates from their 'black-box' nature. Recently, a variety of explainable AI (XAI) methods have been introduced to address this lack of transparency and trust in medical AI. However, the majority of such methods have solely been evaluated on single data modalities. Meanwhile, with the increasing number of XAI methods, integrative XAI frameworks and benchmarks are essential to compare their performance on different tasks. For that reason, we developed BenchXAI, a novel XAI benchmarking package supporting comprehensive evaluation of fifteen XAI methods, investigating their robustness, suitability, and limitations on biomedical data. We employed BenchXAI to validate these methods in three common biomedical tasks, namely clinical data, medical image and signal data, and biomolecular data. Our newly designed sample-wise normalization approach for post-hoc XAI methods enables the statistical evaluation and visualization of performance and robustness. We found that the XAI methods Integrated Gradients, DeepLift, DeepLiftShap, and GradientShap performed well across all three tasks, while methods like Deconvolution, Guided Backpropagation, and LRP-α1-β0 struggled for some tasks. With regulations such as the EU AI Act, the application of XAI in the biomedical domain is becoming increasingly essential. Our evaluation study represents a first step towards verifying the suitability of different XAI methods for various medical domains.
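A minimal sketch of sample-wise normalization of post-hoc attributions, using Captum's Integrated Gradients on a toy PyTorch model; the toy model, input shapes, and the max-absolute-value normalization scheme are illustrative assumptions rather than the BenchXAI implementation.

import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Toy tabular classifier standing in for a clinical-data model.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
model.eval()

x = torch.randn(8, 20)                      # batch of 8 samples, 20 features
ig = IntegratedGradients(model)
attr = ig.attribute(x, target=1)            # per-feature attributions, shape (8, 20)

# Sample-wise normalization (assumed scheme): scale each sample's attribution
# vector by its own maximum absolute value so that samples become comparable.
attr_norm = attr / attr.abs().amax(dim=1, keepdim=True).clamp(min=1e-12)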

Combining Deep Data-Driven and Physics-Inspired Learning for Shear Wave Speed Estimation in Ultrasound Elastography.

Tehrani AKZ, Schoen S, Candel I, Gu Y, Guo P, Thomenius K, Pierce TT, Wang M, Tadross R, Washburn M, Rivaz H, Samir AE

PubMed · Jun 1 2025
Shear wave elastography (SWE) provides quantitative markers for tissue characterization by measuring the shear wave speed (SWS), which reflects tissue stiffness. SWE uses an acoustic radiation force pulse sequence to generate shear waves that propagate laterally through tissue with transient displacements. These waves travel perpendicular to the applied force, and their displacements are tracked using high-frame-rate ultrasound. Estimating the SWS map involves two main steps: speckle tracking and SWS estimation. Speckle tracking calculates particle velocity by measuring RF/IQ data displacement between adjacent firings, while SWS estimation methods typically compare particle velocity profiles of samples that are laterally a few millimeters apart. Deep learning (DL) methods have gained attention for SWS estimation, often relying on supervised training using simulated data. However, these methods may struggle with real-world data, which can differ significantly from the simulated training data, potentially leading to artifacts in the estimated SWS map. To address this challenge, we propose a physics-inspired learning approach that utilizes real data without known SWS values. Our method employs an adaptive unsupervised loss function, allowing the network to train on real, noisy data to minimize artifacts and improve robustness. We validate our approach using experimental phantom data and in vivo liver data from two human subjects, demonstrating enhanced accuracy and reliability in SWS estimation compared with conventional and supervised methods. This hybrid approach leverages the strengths of both data-driven and physics-inspired learning, offering a promising solution for more accurate and robust SWS mapping in clinical applications.
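For context, the conventional SWS estimation step mentioned above can be illustrated with a simple time-of-flight sketch: cross-correlate particle-velocity profiles from two laterally separated samples and convert the peak lag into a speed. The synthetic profiles and parameter values below are assumptions for illustration; this is the classical baseline, not the authors' physics-inspired network or loss.

import numpy as np

def sws_time_of_flight(v1, v2, lateral_gap_mm, prf_hz):
    """Estimate shear wave speed from particle-velocity profiles at two
    lateral positions separated by lateral_gap_mm, sampled at prf_hz."""
    # Full cross-correlation; the lag of its peak is the travel time in frames.
    corr = np.correlate(v2 - v2.mean(), v1 - v1.mean(), mode="full")
    lag_frames = np.argmax(corr) - (len(v1) - 1)
    if lag_frames <= 0:
        return np.nan                        # wave did not arrive later at the far sample
    delta_t = lag_frames / prf_hz            # travel time in seconds
    return (lateral_gap_mm / 1000.0) / delta_t   # speed in m/s

# Example: synthetic Gaussian pulses with a 0.5 ms delay at a 10 kHz tracking PRF.
t = np.arange(200) / 10000.0
v1 = np.exp(-((t - 0.0040) ** 2) / 2e-7)
v2 = np.exp(-((t - 0.0045) ** 2) / 2e-7)
print(sws_time_of_flight(v1, v2, lateral_gap_mm=1.0, prf_hz=10000))  # ~2 m/s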

Ultrasound-based radiomics and machine learning for enhanced diagnosis of knee osteoarthritis: Evaluation of diagnostic accuracy, sensitivity, specificity, and predictive value.

Kiso T, Okada Y, Kawata S, Shichiji K, Okumura E, Hatsumi N, Matsuura R, Kaminaga M, Kuwano H, Okumura E

PubMed · Jun 1 2025
To evaluate the usefulness of radiomics features extracted from ultrasonographic images in diagnosing and predicting the severity of knee osteoarthritis (OA). In this single-center, prospective, observational study, radiomics features were extracted from standing radiographs and ultrasonographic images of knees of patients aged 40-85 years with primary medial OA and without OA. Analysis was conducted using LIFEx software (version 7.2.n), ANOVA, and LASSO regression. The diagnostic accuracy of three different models, including a statistical model incorporating background factors and machine learning models, was evaluated. Among 491 limbs analyzed, 318 were OA and 173 were non-OA cases. The mean age was 72.7 (±8.7) and 62.6 (±11.3) years in the OA and non-OA groups, respectively. The OA group included 81 men (25.5 %) and 237 women (74.5 %), whereas the non-OA group included 73 men (42.2 %) and 100 women (57.8 %). A statistical model using the cutoff value of MORPHOLOGICAL_SurfaceToVolumeRatio (IBSI:2PR5) achieved a specificity of 0.98 and sensitivity of 0.47. Machine learning diagnostic models (Model 2) demonstrated areas under the curve (AUCs) of 0.88 (discriminant analysis) and 0.87 (logistic regression), with sensitivities of 0.80 and 0.81 and specificities of 0.82 and 0.80, respectively. For severity prediction, the statistical model using MORPHOLOGICAL_SurfaceToVolumeRatio (IBSI:2PR5) showed sensitivity and specificity values of 0.78 and 0.86, respectively, whereas machine learning models achieved an AUC of 0.92, sensitivity of 0.81, and specificity of 0.85. The use of radiomics features in diagnosing knee OA shows potential as a supportive tool for enhancing clinicians' decision-making.
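A generic sketch of LASSO-based radiomics feature selection followed by a diagnostic classifier, of the kind described above, using scikit-learn; the placeholder feature matrix, penalty strength, and classifier head are assumptions, not the study's configuration.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# X: radiomics features per limb, y: 1 = OA, 0 = non-OA (placeholder data).
rng = np.random.default_rng(0)
X = rng.normal(size=(491, 60))
y = rng.integers(0, 2, size=491)

# L1-penalised feature selection followed by a logistic-regression diagnostic model.
pipeline = make_pipeline(
    StandardScaler(),
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.1)),
    LogisticRegression(max_iter=1000),
)
auc = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")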

MRI-based risk factors for intensive care unit admissions in acute neck infections.

Vierula JP, Merisaari H, Heikkinen J, Happonen T, Sirén A, Velhonoja J, Irjala H, Soukka T, Mattila K, Nyman M, Nurminen J, Hirvonen J

PubMed · Jun 1 2025
We assessed risk factors and developed a score to predict intensive care unit (ICU) admissions using MRI findings and clinical data in acute neck infections. This retrospective study included patients with MRI-confirmed acute neck infection. Abscess diameters were measured on post-gadolinium T1-weighted Dixon MRI, and specific edema patterns, retropharyngeal edema (RPE) and mediastinal edema, were assessed on fat-suppressed T2-weighted Dixon MRI. A multivariate logistic regression model identified ICU admission predictors, with risk scores derived from regression coefficients. Model performance was evaluated using the area under the curve (AUC) from receiver operating characteristic analysis. Machine learning models (random forest, XGBoost, support vector machine, neural networks) were also tested. The sample included 535 patients, of whom 373 (70 %) had an abscess and 62 (12 %) required ICU treatment. Significant predictors of ICU admission were RPE, maximal abscess diameter (≥40 mm), and C-reactive protein (CRP) (≥172 mg/L). The risk score (0-7) (AUC=0.82, 95 % confidence interval [CI] 0.77-0.88) outperformed CRP (AUC=0.73, 95 % CI 0.66-0.80, p = 0.001), maximal abscess diameter (AUC=0.72, 95 % CI 0.64-0.80, p < 0.001), and RPE (AUC=0.71, 95 % CI 0.65-0.77, p < 0.001). At a cut-off of >3, the risk score yielded the following metrics: sensitivity 66 %, specificity 82 %, positive predictive value 33 %, negative predictive value 95 %, accuracy 80 %, and odds ratio 9.0. Discriminative performance was robust in internal (AUC=0.83) and hold-out (AUC=0.81) validations. The machine learning models did not outperform the logistic regression model. A risk model incorporating RPE, abscess size, and CRP showed moderate accuracy and a high negative predictive value for ICU admissions, supporting MRI's role in acute neck infections.
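A minimal sketch, on simulated data, of how an integer risk score can be derived from logistic-regression coefficients and evaluated with ROC AUC; the predictor coding at the reported cut-offs follows the abstract, while the simulated cohort and the points-scaling rule are illustrative assumptions, not the published score.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Placeholder cohort with the three predictors dichotomised at the reported cut-offs.
rng = np.random.default_rng(1)
n = 535
df = pd.DataFrame({
    "rpe": rng.integers(0, 2, n),              # retropharyngeal edema present
    "abscess_ge_40mm": rng.integers(0, 2, n),  # maximal abscess diameter >= 40 mm
    "crp_ge_172": rng.integers(0, 2, n),       # CRP >= 172 mg/L
})
logit = -2.5 + 1.2 * df["rpe"] + 1.0 * df["abscess_ge_40mm"] + 0.9 * df["crp_ge_172"]
df["icu"] = rng.random(n) < 1 / (1 + np.exp(-logit))   # simulated outcome

X, y = df[["rpe", "abscess_ge_40mm", "crp_ge_172"]], df["icu"]
coef = LogisticRegression().fit(X, y).coef_[0]

# Turn coefficients into integer points (scaled by the smallest coefficient),
# then sum the points per patient to obtain the risk score.
points = np.round(coef / np.abs(coef).min()).astype(int)
score = X.values @ points
print("points per predictor:", points, "| AUC:", round(roc_auc_score(y, score), 2))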

Whole Brain 3D T1 Mapping in Multiple Sclerosis Using Standard Clinical Images Compared to MP2RAGE and MR Fingerprinting.

Snyder J, Blevins G, Smyth P, Wilman AH

PubMed · Jun 1 2025
Quantitative T1 and T2 mapping is a useful tool to assess the properties of healthy and diseased tissues. However, clinical diagnostic imaging remains dominated by relaxation-weighted imaging without direct collection of relaxation maps. Dedicated research sequences such as MR fingerprinting can save time and improve resolution over classical gold-standard quantitative MRI (qMRI) methods, although they are not widely adopted in clinical studies. We investigate the use of clinical sequences in conjunction with prior knowledge provided by machine learning to derive T1 maps of the brain from routine imaging studies without the need for specialized sequences. A classification learner was trained on T1w (magnetization prepared rapid gradient echo [MPRAGE]) and T2w (fluid-attenuated inversion recovery [FLAIR]) data (2.6 million voxels) from multiple sclerosis (MS) patients at 3T, compared against gold-standard inversion recovery fast spin echo T1 maps in five healthy subjects, and tested on eight MS patients. In the MS patient test, the machine learner-produced T1 maps were compared to MP2RAGE and MR fingerprinting T1 maps in seven tissue regions of the brain: cortical grey matter, white matter, cerebrospinal fluid, caudate, putamen and globus pallidus. Additionally, T1 values in lesion-segmented tissue were compared using the three different methods. The machine learner (ML) method had excellent agreement with MP2RAGE, with all average tissue deviations less than 3.2% and T1 lesion variation of 0.1%-5.3% across the eight patients. The machine learning method provides a valuable and accurate estimation of T1 values in the human brain while using data from standard clinical sequences, allowing retrospective reconstruction from past studies without the need for new quantitative techniques.
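A minimal voxel-wise sketch of the idea above: learn a mapping from co-registered (MPRAGE, FLAIR) intensity pairs to reference T1 values and apply it across a volume. The random-forest regressor and the synthetic intensities and targets are illustrative assumptions; the authors trained a classification learner against gold-standard inversion recovery T1 maps.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Training voxels: columns are co-registered MPRAGE and FLAIR intensities;
# the target is the reference T1 (ms) from an inversion-recovery acquisition.
X_train = rng.normal(size=(20_000, 2))
t1_train = 900 + 300 * X_train[:, 0] - 200 * X_train[:, 1] + rng.normal(scale=20, size=20_000)

learner = RandomForestRegressor(n_estimators=50, n_jobs=-1).fit(X_train, t1_train)

# Apply to a new patient volume: flatten to voxels, predict, reshape to a T1 map.
vol_shape = (64, 64, 32)
mprage = rng.normal(size=vol_shape)
flair = rng.normal(size=vol_shape)
voxels = np.stack([mprage.ravel(), flair.ravel()], axis=1)
t1_map = learner.predict(voxels).reshape(vol_shape)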

Lag-Net: Lag correction for cone-beam CT via a convolutional neural network.

Ren C, Kan S, Huang W, Xi Y, Ji X, Chen Y

PubMed · Jun 1 2025
Due to the presence of charge traps in amorphous silicon flat-panel detectors, lag signals are generated in consecutively captured projections. These signals lead to ghosting in projection images and severe lag artifacts in cone-beam computed tomography (CBCT) reconstructions. Traditional linear time-invariant (LTI) correction requires measuring lag correction factors (LCFs) and may leave residual lag artifacts; this incomplete correction is partly attributed to the lack of consideration for exposure dependency. To measure the lag signals more accurately and suppress lag artifacts, we develop a novel hardware correction method. This method requires two scans of the same object, with adjustments to the operating timing of the CT instrumentation during the second scan to measure the lag signal from the first. While this hardware correction significantly mitigates lag artifacts, it is complex to implement and imposes high demands on the CT instrumentation. To enhance the process, we introduce a deep learning method called Lag-Net to remove the lag signal, utilizing the nearly lag-free results from hardware correction as training targets for the network. Qualitative and quantitative analyses of experimental results on both simulated and real datasets demonstrate that deep learning correction significantly outperforms traditional LTI correction in terms of lag artifact suppression and image quality enhancement. Furthermore, the deep learning method achieves reconstruction results comparable to those obtained from hardware correction while avoiding the operational complexities associated with the hardware correction approach. The proposed hardware correction method, despite its operational complexity, demonstrates superior artifact suppression performance compared to the LTI algorithm, particularly under low-exposure conditions. The introduced Lag-Net, which utilizes the results of the hardware correction method as training targets, leverages the end-to-end nature of deep learning to circumvent the intricate operational drawbacks associated with hardware correction. Furthermore, the network's correction efficacy surpasses that of the LTI algorithm in low-exposure scenarios.
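For reference, a minimal single-exponential sketch of the classical LTI lag model and its exact recursive inversion; real detectors require multi-exponential kernels with measured lag correction factors, and the decay constant below is an assumed value.

import numpy as np

def simulate_lag(x, a=0.3):
    """Forward single-pole lag model: y[n] = a*y[n-1] + (1-a)*x[n]."""
    y = np.zeros_like(x, dtype=float)
    for n in range(len(x)):
        y[n] = a * (y[n - 1] if n else 0.0) + (1 - a) * x[n]
    return y

def correct_lag(y, a=0.3):
    """Exact inversion of the single-pole model: x[n] = (y[n] - a*y[n-1]) / (1-a)."""
    x = np.empty_like(y, dtype=float)
    x[0] = y[0] / (1 - a)
    x[1:] = (y[1:] - a * y[:-1]) / (1 - a)
    return x

frames = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])     # true per-frame signal
lagged = simulate_lag(frames)                          # signal ghosts into later frames
print(np.allclose(correct_lag(lagged), frames))        # True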

Evaluation of MRI anatomy in machine learning predictive models to assess hydrogel spacer benefit for prostate cancer patients.

Bush M, Jones S, Hargrave C

PubMed · Jun 1 2025
Hydrogel spacers (HS) are designed to minimise the radiation dose to the rectum in prostate cancer radiation therapy (RT) by creating a physical gap between the rectum and the target treatment volume, inclusive of the prostate and seminal vesicles (SV). This study aims to determine the feasibility of incorporating diagnostic MRI (dMRI) information in statistical machine learning (SML) models developed with planning CT (pCT) anatomy for dose and rectal toxicity prediction. The SML models aim to support HS insertion decision-making prior to RT planning procedures. Regions of interest (ROIs) were retrospectively contoured on the pCT and registered dMRI scans for 20 patients, and ROI Dice and Hausdorff distance (HD) comparison metrics were calculated. The ROI variables and patient clinical risk factors (CRFs) were input into three SML models, and pCT- and dMRI-based dose and toxicity model performance was then compared using confusion matrices, AUC curves, accuracy metrics, and observed patient outcomes. Average Dice values comparing dMRI and pCT ROIs were 0.81, 0.47 and 0.71 for the prostate, SV, and rectum respectively. Average Hausdorff distances were 2.15, 2.75 and 2.75 mm for the prostate, SV, and rectum respectively. The average accuracy metric across all models was 0.83 when using dMRI ROIs and 0.85 when using pCT ROIs. Differences between pCT and dMRI anatomical ROI variables did not impact SML model performance in this study, demonstrating the feasibility of using dMRI images. Due to the limited sample size, further training of the predictive models including dMRI anatomy is recommended.
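The Dice and Hausdorff distance (HD) comparison metrics reported above can be computed with a short sketch; the placeholder masks and voxel spacing are assumptions, and the symmetric HD is taken as the maximum of the two directed distances via SciPy.

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def hausdorff(mask_a, mask_b, spacing_mm=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance (mm) between two binary masks, computed on voxel coordinates."""
    pts_a = np.argwhere(mask_a) * np.asarray(spacing_mm)
    pts_b = np.argwhere(mask_b) * np.asarray(spacing_mm)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

# Placeholder prostate ROIs from the pCT and the registered dMRI.
pct_roi = np.zeros((64, 64, 32), dtype=bool); pct_roi[20:40, 20:40, 10:20] = True
dmri_roi = np.zeros_like(pct_roi); dmri_roi[22:42, 21:41, 10:20] = True
print(dice(pct_roi, dmri_roi), hausdorff(pct_roi, dmri_roi))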

Liver Tumor Prediction using Attention-Guided Convolutional Neural Networks and Genomic Feature Analysis.

Edwin Raja S, Sutha J, Elamparithi P, Jaya Deepthi K, Lalitha SD

PubMed · Jun 1 2025
Liver tumor prediction is a critical task in medical image analysis and genomics, since accurate diagnosis and prognosis underpin correct medical decisions. The often silent characteristics of liver tumors and the interactions between genomic and imaging features are the main challenges to reliable prediction. To overcome these hurdles, this study presents two integrated components: an Attention-Guided Convolutional Neural Network (AG-CNN) and a Genomic Feature Analysis Module (GFAM). Spatial and channel attention mechanisms in the AG-CNN enable accurate tumor segmentation from CT images while providing detailed morphological profiling. Evaluation on three benchmark databases (TCIA, LiTS, and CRLM) shows that our model outperforms methods reported in the literature, reaching an accuracy of 94.5%, a Dice Similarity Coefficient of 91.9%, and an F1-score of 96.2% on Dataset 3. Moreover, across the datasets the proposed methods exceed competing approaches, including CELM, CAGS, and DM-ML, by up to 10 percent in recall, precision, and specificity.
• Utilization of the Attention-Guided Convolutional Neural Network (AG-CNN) enhances tumor-region focus and segmentation accuracy.
• Integration of the Genomic Feature Analysis Module (GFAM) identifies molecular markers for subtype-specific tumor classification.
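A small PyTorch sketch of the channel- and spatial-attention idea referenced above, in the style of CBAM-like blocks; the layer sizes and pooling choices are illustrative, not the authors' AG-CNN architecture.

import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Reweight CNN feature maps along the channel axis, then highlight
    informative spatial locations (a CBAM-style attention block)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, _, _ = x.shape
        # Channel attention from globally pooled descriptors.
        w = self.channel_mlp(x.mean(dim=(2, 3))).view(b, c, 1, 1)
        x = x * w
        # Spatial attention from channel-wise average and max maps.
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.amax(dim=1, keepdim=True)
        s = self.spatial_conv(torch.cat([avg_map, max_map], dim=1))
        return x * s

feat = torch.randn(2, 64, 32, 32)                # e.g. features from a CT slice
print(ChannelSpatialAttention(64)(feat).shape)   # torch.Size([2, 64, 32, 32])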

Multi-level feature fusion network for kidney disease detection.

Rehman Khan SU

PubMed · Jun 1 2025
Kidney irregularities pose a significant public health challenge, often leading to severe complications, yet the limited availability of nephrologists makes early detection costly and time-consuming. To address this issue, we propose a deep learning framework for automated kidney disease detection, leveraging feature fusion and sequential modeling techniques to enhance diagnostic accuracy. Our study thoroughly evaluates six pretrained models under identical experimental conditions, identifying ResNet50 and VGG19 as the most effective models for feature extraction owing to their deep residual learning and hierarchical representations. Our proposed methodology integrates feature fusion with an inception block to extract diverse feature representations while managing the overhead associated with the imbalanced dataset. To enhance sequential learning and capture long-term dependencies in disease progression, a ConvLSTM is incorporated after feature fusion. Additionally, an inception block is employed after the ConvLSTM to refine hierarchical feature extraction, further strengthening the model's ability to leverage both spatial and temporal patterns. To validate our approach, we introduce a new dataset, Multiple Hospital Collected CT (MHC-CT), consisting of 1860 tumor and 1024 normal kidney CT scans meticulously annotated by medical experts. Our model achieves 99.60 % accuracy on this dataset, demonstrating its robustness in binary classification. Furthermore, to assess its generalization capability, we evaluate the model on a publicly available benchmark multiclass CT scan dataset, achieving 91.31 % accuracy. The superior performance is attributed to the effective feature fusion using inception blocks and the sequential learning capabilities of ConvLSTM, which together enhance spatial and temporal feature representations. These results highlight the efficacy of the proposed framework in automating kidney disease detection, providing a reliable and efficient solution for clinical decision-making. https://github.com/VS-EYE/KidneyDiseaseDetection.git.
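A compact sketch of the dual-backbone feature-fusion idea: pooled ResNet50 and VGG19 embeddings are concatenated and classified. The ConvLSTM and inception stages are omitted, and the pooling and classifier head are illustrative assumptions rather than the published architecture.

import torch
import torch.nn as nn
from torchvision import models

class FusionClassifier(nn.Module):
    """Fuse pooled ResNet50 and VGG19 features for binary kidney CT classification."""
    def __init__(self, num_classes=2):
        super().__init__()
        resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.resnet_backbone = nn.Sequential(*list(resnet.children())[:-1])          # -> (B, 2048, 1, 1)
        vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT)
        self.vgg_backbone = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1))     # -> (B, 512, 1, 1)
        self.head = nn.Sequential(nn.Linear(2048 + 512, 256), nn.ReLU(), nn.Linear(256, num_classes))

    def forward(self, x):                         # x: (B, 3, 224, 224) CT slices
        f1 = self.resnet_backbone(x).flatten(1)
        f2 = self.vgg_backbone(x).flatten(1)
        return self.head(torch.cat([f1, f2], dim=1))

logits = FusionClassifier()(torch.randn(2, 3, 224, 224))
print(logits.shape)                               # torch.Size([2, 2])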

Artificial intelligence-assisted magnetic resonance lymphography for evaluation of micro- and macro-sentinel lymph node metastasis in breast cancer.

Yang Z, Ling J, Sun W, Pan C, Chen T, Dong C, Zhou X, Zhang J, Zheng J, Ma X

PubMed · Jun 1 2025
Contrast-enhanced magnetic resonance lymphography (CE-MRL) plays a crucial role in the preoperative evaluation of tumor-metastatic sentinel lymph nodes (T-SLNs) by integrating detailed information about lymphatic anatomy and drainage function from MR images. However, the clinical gadolinium-based contrast agents used to identify T-SLNs are severely limited, owing to their small molecular structure and rapid diffusion into the bloodstream. Herein, we propose a novel MRL method enhanced by albumin-modified manganese-based nanoprobes for accurately assessing micro- and macro-T-SLNs. Specifically, the inherent concentration gradient of albumin between blood and interstitial fluid aids in the movement of nanoprobes into the lymphatic system. Micro-T-SLNs exhibit a notably higher MR signal due to the formation of new lymphatic vessels and increased lymphatic flow, allowing for a greater influx of nanoprobes. In contrast, macro-T-SLNs show a lower MR signal as a result of tumor cell proliferation and damage to the lymphatic vessels. Additionally, a highly accurate and sensitive machine learning model has been developed to guide the identification of micro- and macro-T-SLNs by analyzing manganese-enhanced MR images. In conclusion, our research presents a novel comprehensive assessment framework utilizing albumin-modified manganese-based nanoprobes for a highly sensitive evaluation of micro- and macro-T-SLNs in breast cancer.