
A computed tomography-based deep learning radiomics model for predicting the gender-age-physiology stage of patients with connective tissue disease-associated interstitial lung disease.

Long B, Li R, Wang R, Yin A, Zhuang Z, Jing Y, E L

PubMed · Jun 1, 2025
To explore the feasibility of using a diagnostic model constructed with deep learning-radiomics (DLR) features extracted from chest computed tomography (CT) images to predict the gender-age-physiology (GAP) stage of patients with connective tissue disease-associated interstitial lung disease (CTD-ILD). Data from 264 CTD-ILD patients were retrospectively collected; 195, 56, and 13 patients were in GAP stage I, II, and III, respectively, and the latter two stages were combined into one group. The patients were randomized into a training set and a validation set. Single-input models were constructed separately from the selected radiomics and DL features, while the DLR model was constructed from both feature sets. All models were built with both the support vector machine (SVM) and logistic regression (LR) algorithms. Nomogram models were generated by integrating age, gender, and the DLR features. The DLR model outperformed the radiomics and DL models in both the training and validation sets. Among the feature-based models, the LR-based DLR model had the best predictive performance (AUC = 0.923). The comprehensive (nomogram) models performed even better in predicting the GAP stage of CTD-ILD patients, and of the two, the SVM-based comprehensive model performed best (AUC = 0.951). The DLR model extracted from CT images can assist in the clinical prediction of the GAP stage of CTD-ILD patients, and the nomogram showed even greater predictive performance.
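As a rough illustration of the pipeline described above (concatenating radiomics and deep-learning features, then fitting LR and SVM classifiers), the following scikit-learn sketch may help; the feature arrays, split ratio, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
radiomics = rng.normal(size=(264, 20))   # hypothetical radiomics features
deep_feats = rng.normal(size=(264, 32))  # hypothetical DL features
y = rng.integers(0, 2, size=264)         # GAP stage I vs. combined II/III

X = np.hstack([radiomics, deep_feats])   # DLR = radiomics + DL features
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("SVM", SVC(kernel="rbf", probability=True))]:
    model = make_pipeline(StandardScaler(), clf)
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
    print(f"{name} validation AUC: {auc:.3f}")
```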

multiPI-TransBTS: A multi-path learning framework for brain tumor image segmentation based on multi-physical information.

Zhu H, Huang J, Chen K, Ying X, Qian Y

PubMed · Jun 1, 2025
Brain Tumor Segmentation (BraTS) plays a critical role in clinical diagnosis, treatment planning, and monitoring the progression of brain tumors. However, due to the variability in tumor appearance, size, and intensity across different MRI modalities, automated segmentation remains a challenging task. In this study, we propose a novel Transformer-based framework, multiPI-TransBTS, which integrates multi-physical information to enhance segmentation accuracy. The model leverages spatial information, semantic information, and multi-modal imaging data, addressing the inherent heterogeneity in brain tumor characteristics. The multiPI-TransBTS framework consists of an encoder, an Adaptive Feature Fusion (AFF) module, and a multi-source, multi-scale feature decoder. The encoder incorporates a multi-branch architecture to separately extract modality-specific features from different MRI sequences. The AFF module fuses information from multiple sources using channel-wise and element-wise attention, ensuring effective feature recalibration. The decoder combines both common and task-specific features through a Task-Specific Feature Introduction (TSFI) strategy, producing accurate segmentation outputs for Whole Tumor (WT), Tumor Core (TC), and Enhancing Tumor (ET) regions. Comprehensive evaluations on the BraTS2019 and BraTS2020 datasets demonstrate the superiority of multiPI-TransBTS over state-of-the-art methods. The model consistently achieves better Dice coefficients, Hausdorff distances, and sensitivity scores, highlighting its effectiveness in addressing the BraTS challenges. Our results also indicate the need for further exploration of the balance between precision and recall in the ET segmentation task. The proposed framework represents a significant advancement in BraTS, with potential implications for improving clinical outcomes for brain tumor patients.
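As a rough PyTorch sketch of the kind of fusion the AFF module performs (channel-wise followed by element-wise attention over two same-shaped feature maps from different modality branches); the layer sizes and wiring here are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class AttentiveFusion(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel attention: squeeze spatial dims, then re-weight channels.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Element-wise (spatial) attention: a per-voxel gating map.
        self.spatial_gate = nn.Sequential(
            nn.Conv3d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        fused = feat_a + feat_b                   # merge the two branches
        fused = fused * self.channel_gate(fused)  # channel recalibration
        fused = fused * self.spatial_gate(fused)  # element-wise recalibration
        return fused

x1 = torch.randn(1, 16, 8, 32, 32)  # toy modality-specific feature maps
x2 = torch.randn(1, 16, 8, 32, 32)
print(AttentiveFusion(16)(x1, x2).shape)  # torch.Size([1, 16, 8, 32, 32])
```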

Early-stage lung cancer detection via thin-section low-dose CT reconstruction combined with AI in non-high risk populations: a large-scale real-world retrospective cohort study.

Ji G, Luo W, Zhu Y, Chen B, Wang M, Jiang L, Yang M, Song W, Yao P, Zheng T, Yu H, Zhang R, Wang C, Ding R, Zhuo X, Chen F, Li J, Tang X, Xian J, Song T, Tang J, Feng M, Shao J, Li W

PubMed · Jun 1, 2025
Current lung cancer screening guidelines recommend annual low-dose computed tomography (LDCT) for high-risk individuals. However, the effectiveness of LDCT in non-high-risk individuals remains inadequately explored. With the incidence of lung cancer steadily increasing among non-high-risk individuals, this study aims to assess the risk of lung cancer in non-high-risk individuals and evaluate the potential of thin-section LDCT reconstruction combined with artificial intelligence (LDCT-TRAI) as a screening tool. A real-world cohort study on lung cancer screening was conducted at the West China Hospital of Sichuan University from January 2010 to July 2021. Participants were screened using either LDCT-TRAI or traditional thick-section LDCT without AI (traditional LDCT). The AI system employed was the uAI-ChestCare software. Lung cancer diagnoses were confirmed through pathological examination. Among the 259 121 enrolled non-high-risk participants, 87 260 (33.7%) had positive screening results. Within 1 year, 728 (0.3%) participants were diagnosed with lung cancer, of whom 87.1% (634/728) were never-smokers, and 92.7% (675/728) presented with stage I disease. Compared with traditional LDCT, LDCT-TRAI demonstrated a higher lung cancer detection rate (0.3% vs. 0.2%, P < 0.001), particularly for stage I cancers (94.4% vs. 83.2%, P < 0.001), and was associated with improved survival outcomes (5-year overall survival rate: 95.4% vs. 81.3%, P < 0.0001). These findings highlight the importance of expanding lung cancer screening to non-high-risk populations, especially never-smokers. LDCT-TRAI outperformed traditional LDCT in detecting early-stage cancers and improving survival outcomes, underscoring its potential as a more effective screening tool for early lung cancer detection in this population.

Review and reflections on live AI mammographic screen reading in a large UK NHS breast screening unit.

Puri S, Bagnall M, Erdelyi G

PubMed · Jun 1, 2025
The Radiology team from a large Breast Screening Unit in the UK, with a screening population of over 135,000, took part in a service evaluation project using artificial intelligence (AI) for reading breast screening mammograms. The aims were to evaluate the clinical benefit AI may provide when implemented as a silent reader in a double-reading breast screening programme, and to evaluate the feasibility and operational impact of deploying AI into the breast screening programme. The service was one of 14 breast screening sites in the UK to take part in this project, and we present our local experience with AI in breast screening. A commercially available AI platform was deployed and worked in real time as a 'silent third reader' so as not to impact standard workflows and patient care. All cases flagged by AI but not recalled by standard double reading (positive discordant cases) were reviewed, along with all cases recalled by human readers but not flagged by AI (negative discordant cases). 9,547 cases were included in the evaluation. Of 1,135 positive discordant cases reviewed, one woman was recalled as a result, and she was not found to have cancer on further assessment in the breast assessment clinic. Of 139 negative discordant cases reviewed, eight cancer cases (8.79% of total cancers detected in this period) recalled by human readers were not detected by AI. No additional cancers were detected by AI during the study. The performance of AI was inferior to that of human readers in our unit. Because it missed a significant number of cancers, it is currently neither reliable nor safe for use in clinical practice. AI is not currently of sufficient accuracy to be considered in the NHS Breast Screening Programme.

Liver Tumor Prediction using Attention-Guided Convolutional Neural Networks and Genomic Feature Analysis.

Edwin Raja S, Sutha J, Elamparithi P, Jaya Deepthi K, Lalitha SD

PubMed · Jun 1, 2025
Predicting liver tumors is a critical task in medical image analysis and genomics, since diagnosis and prognosis underpin correct medical decisions. The subtle characteristics of liver tumors and the interactions between genomic and imaging features are the main obstacles to reliable prediction. To overcome these hurdles, this study presents two integrated approaches: Attention-Guided Convolutional Neural Networks (AG-CNNs) and a Genomic Feature Analysis Module (GFAM). Spatial and channel attention mechanisms in the AG-CNN enable accurate tumor segmentation from CT images while providing detailed morphological profiling. Evaluation on three reference databases, TCIA, LiTS, and CRLM, shows that the model produces more accurate output than the relevant literature, with an accuracy of 94.5%, a Dice Similarity Coefficient of 91.9%, and an F1-score of 96.2% on Dataset 3. Moreover, the proposed methods outperform competing approaches, including CELM, CAGS, and DM-ML, across datasets in recall, precision, and specificity by up to 10 percentage points.
• Attention-Guided Convolutional Neural Networks (AG-CNN) enhance tumor-region focus and segmentation accuracy.
• Genomic Feature Analysis (GFAM) identifies molecular markers for subtype-specific tumor classification.
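For reference, the Dice Similarity Coefficient quoted above measures the overlap between a predicted mask and the ground truth, Dice = 2|A ∩ B| / (|A| + |B|); a minimal implementation for binary masks (not the authors' code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for boolean/0-1 segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])  # toy predicted mask
b = np.array([[1, 0, 0], [0, 1, 1]])  # toy ground-truth mask
print(f"Dice: {dice_coefficient(a, b):.3f}")  # 2*2/(3+3) ≈ 0.667
```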

Whole Brain 3D T1 Mapping in Multiple Sclerosis Using Standard Clinical Images Compared to MP2RAGE and MR Fingerprinting.

Snyder J, Blevins G, Smyth P, Wilman AH

PubMed · Jun 1, 2025
Quantitative T1 and T2 mapping is a useful tool for assessing the properties of healthy and diseased tissues. However, clinical diagnostic imaging remains dominated by relaxation-weighted imaging without direct collection of relaxation maps. Dedicated research sequences such as MR fingerprinting can save time and improve resolution over classical gold-standard quantitative MRI (qMRI) methods, although they are not widely adopted in clinical studies. We investigate the use of clinical sequences in conjunction with prior knowledge provided by machine learning to elucidate T1 maps of the brain in routine imaging studies, without the need for specialized sequences. A classification learner was trained on T1w (magnetization prepared rapid gradient echo [MPRAGE]) and T2w (fluid-attenuated inversion recovery [FLAIR]) data (2.6 million voxels) from multiple sclerosis (MS) patients at 3T, compared to gold-standard inversion recovery fast spin echo T1 maps in five healthy subjects, and tested on eight MS patients. In the MS patient test, the machine learner-produced T1 maps were compared to MP2RAGE and MR fingerprinting T1 maps in seven tissue regions of the brain: cortical grey matter, white matter, cerebrospinal fluid, caudate, putamen and globus pallidus. Additionally, T1 values in lesion-segmented tissue were compared across the three methods. The machine learner (ML) method had excellent agreement with MP2RAGE, with all average tissue deviations less than 3.2% and T1 lesion variation of 0.1%-5.3% across the eight patients. The machine learning method provides a valuable and accurate estimation of T1 values in the human brain while using data from standard clinical sequences, and it allows retrospective reconstruction from past studies without the need for new quantitative techniques.
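A voxel-wise mapping from standard T1w/FLAIR intensities to quantitative T1 might be prototyped as below; the regressor choice and synthetic data are assumptions for illustration (the study trained a classification learner on 2.6 million voxels against gold-standard inversion-recovery T1 maps).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
# Hypothetical training voxels: columns = [MPRAGE intensity, FLAIR intensity],
# target = gold-standard T1 (ms) from inversion-recovery mapping.
X_train = rng.uniform(0, 1, size=(10_000, 2))
t1_train = 600 + 1800 * X_train[:, 1] - 400 * X_train[:, 0] + rng.normal(0, 30, 10_000)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, t1_train)

# Apply voxel-wise to a new patient's co-registered images to get a T1 map.
mprage = rng.uniform(0, 1, size=(64, 64))
flair = rng.uniform(0, 1, size=(64, 64))
voxels = np.column_stack([mprage.ravel(), flair.ravel()])
t1_map = model.predict(voxels).reshape(mprage.shape)
print(t1_map.shape, f"mean T1 ≈ {t1_map.mean():.0f} ms")
```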

MRI-based risk factors for intensive care unit admissions in acute neck infections.

Vierula JP, Merisaari H, Heikkinen J, Happonen T, Sirén A, Velhonoja J, Irjala H, Soukka T, Mattila K, Nyman M, Nurminen J, Hirvonen J

PubMed · Jun 1, 2025
We assessed risk factors and developed a score to predict intensive care unit (ICU) admissions using MRI findings and clinical data in acute neck infections. This retrospective study included patients with MRI-confirmed acute neck infection. Abscess diameters were measured on post-gadolinium T1-weighted Dixon MRI, and specific edema patterns, retropharyngeal edema (RPE) and mediastinal edema, were assessed on fat-suppressed T2-weighted Dixon MRI. A multivariate logistic regression model identified predictors of ICU admission, with risk scores derived from the regression coefficients. Model performance was evaluated using the area under the curve (AUC) from receiver operating characteristic analysis. Machine learning models (random forest, XGBoost, support vector machine, neural networks) were also tested. The sample included 535 patients, of whom 373 (70%) had an abscess and 62 (12%) required ICU treatment. Significant predictors of ICU admission were RPE, maximal abscess diameter (≥40 mm), and C-reactive protein (CRP) (≥172 mg/L). The risk score (0-7) (AUC = 0.82, 95% confidence interval [CI] 0.77-0.88) outperformed CRP (AUC = 0.73, 95% CI 0.66-0.80, p = 0.001), maximal abscess diameter (AUC = 0.72, 95% CI 0.64-0.80, p < 0.001), and RPE (AUC = 0.71, 95% CI 0.65-0.77, p < 0.001). At a cut-off of >3, the risk score yielded the following metrics: sensitivity 66%, specificity 82%, positive predictive value 33%, negative predictive value 95%, accuracy 80%, and odds ratio 9.0. Discriminative performance was robust in internal (AUC = 0.83) and hold-out (AUC = 0.81) validations. The machine learning models did not outperform the regression model. A risk model incorporating RPE, abscess size, and CRP showed moderate accuracy and a high negative predictive value for ICU admission, supporting MRI's role in acute neck infections.
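One common way to turn logistic-regression coefficients into an integer risk score is to scale each coefficient by the smallest one and round; the sketch below assumes that convention and simulated data, since the paper's exact point assignment is not given here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 535
# Hypothetical binary predictors: retropharyngeal edema, abscess >= 40 mm,
# CRP >= 172 mg/L (the three significant predictors reported above).
X = rng.integers(0, 2, size=(n, 3))
logit = -3.0 + 1.2 * X[:, 0] + 1.0 * X[:, 1] + 0.9 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # simulated ICU admission

lr = LogisticRegression().fit(X, y)
# Points per predictor: each coefficient divided by the smallest coefficient
# magnitude, rounded, a common way to build an integer score.
points = np.round(lr.coef_[0] / np.abs(lr.coef_[0]).min()).astype(int)
score = X @ points
print("points per predictor:", points, "score range:", score.min(), "-", score.max())
print(f"risk-score AUC: {roc_auc_score(y, score):.2f}")
```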

Ultrasound-based radiomics and machine learning for enhanced diagnosis of knee osteoarthritis: Evaluation of diagnostic accuracy, sensitivity, specificity, and predictive value.

Kiso T, Okada Y, Kawata S, Shichiji K, Okumura E, Hatsumi N, Matsuura R, Kaminaga M, Kuwano H, Okumura E

PubMed · Jun 1, 2025
To evaluate the usefulness of radiomics features extracted from ultrasonographic images in diagnosing and predicting the severity of knee osteoarthritis (OA). In this single-center, prospective, observational study, radiomics features were extracted from standing radiographs and ultrasonographic images of the knees of patients aged 40-85 years with primary medial OA and without OA. Analysis was conducted using LIFEx software (version 7.2.n), ANOVA, and LASSO regression. The diagnostic accuracy of three different models was evaluated, including a statistical model incorporating background factors and machine learning models. Among the 491 limbs analyzed, 318 were OA and 173 were non-OA cases. The mean age was 72.7 (±8.7) years in the OA group and 62.6 (±11.3) years in the non-OA group. The OA group included 81 (25.5%) men and 237 (74.5%) women, whereas the non-OA group included 73 (42.2%) men and 100 (57.8%) women. A statistical model using the cutoff value of MORPHOLOGICAL_SurfaceToVolumeRatio (IBSI:2PR5) achieved a specificity of 0.98 and a sensitivity of 0.47. The machine learning diagnostic models (Model 2) demonstrated areas under the curve (AUCs) of 0.88 (discriminant analysis) and 0.87 (logistic regression), with sensitivities of 0.80 and 0.81 and specificities of 0.82 and 0.80, respectively. For severity prediction, the statistical model using MORPHOLOGICAL_SurfaceToVolumeRatio (IBSI:2PR5) showed sensitivity and specificity of 0.78 and 0.86, respectively, whereas the machine learning models achieved an AUC of 0.92, a sensitivity of 0.81, and a specificity of 0.85. The use of radiomics features in diagnosing knee OA shows potential as a supportive tool for enhancing clinicians' decision-making.
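The LASSO step mentioned above retains only radiomics features with nonzero coefficients after L1 shrinkage; a minimal scikit-learn sketch with synthetic stand-ins for the LIFEx features:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.normal(size=(491, 50))  # hypothetical radiomics feature matrix
# Toy OA label driven by two of the features, for illustration only.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 491) > 0).astype(float)

Xs = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5, random_state=0).fit(Xs, y)
selected = np.flatnonzero(lasso.coef_)  # features surviving L1 shrinkage
print(f"{selected.size} of {X.shape[1]} features retained:", selected)
```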

Combining Deep Data-Driven and Physics-Inspired Learning for Shear Wave Speed Estimation in Ultrasound Elastography.

Tehrani AKZ, Schoen S, Candel I, Gu Y, Guo P, Thomenius K, Pierce TT, Wang M, Tadross R, Washburn M, Rivaz H, Samir AE

PubMed · Jun 1, 2025
Shear wave elastography (SWE) provides quantitative markers for tissue characterization by measuring the shear wave speed (SWS), which reflects tissue stiffness. SWE uses an acoustic radiation force pulse sequence to generate shear waves that propagate laterally through tissue with transient displacements. These waves travel perpendicular to the applied force, and their displacements are tracked using high-frame-rate ultrasound. Estimating the SWS map involves two main steps: speckle tracking and SWS estimation. Speckle tracking calculates particle velocity by measuring RF/IQ data displacement between adjacent firings, while SWS estimation methods typically compare particle velocity profiles of samples that are laterally a few millimeters apart. Deep learning (DL) methods have gained attention for SWS estimation, often relying on supervised training using simulated data. However, these methods may struggle with real-world data, which can differ significantly from the simulated training data, potentially leading to artifacts in the estimated SWS map. To address this challenge, we propose a physics-inspired learning approach that utilizes real data without known SWS values. Our method employs an adaptive unsupervised loss function, allowing the network to train on real noisy data to minimize artifacts and improve robustness. We validate our approach using experimental phantom data and in vivo liver data from two human subjects, demonstrating enhanced accuracy and reliability in SWS estimation compared with conventional and supervised methods. This hybrid approach leverages the strengths of both data-driven and physics-inspired learning, offering a promising solution for more accurate and robust SWS mapping in clinical applications.
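The classical time-of-flight estimate the abstract alludes to (comparing particle-velocity profiles a few millimeters apart) can be sketched as follows, with synthetic Gaussian shear pulses and an assumed frame rate and lateral spacing; the authors' physics-inspired network is not reproduced here.

```python
import numpy as np

fs = 10_000.0  # tracking frame rate (Hz), assumed
dx = 2e-3      # lateral spacing between samples (2 mm), assumed
t = np.arange(0, 0.02, 1 / fs)

pulse = lambda t0: np.exp(-((t - t0) / 5e-4) ** 2)  # synthetic shear pulse
v1 = pulse(0.005)               # particle-velocity profile at position x
v2 = pulse(0.005 + dx / 2.0)    # profile at x + dx; true SWS = 2 m/s

# Cross-correlate the two profiles to find the time-of-flight delay.
lags = np.arange(-len(t) + 1, len(t))
xcorr = np.correlate(v2 - v2.mean(), v1 - v1.mean(), mode="full")
dt = lags[np.argmax(xcorr)] / fs

print(f"estimated SWS: {dx / dt:.2f} m/s")  # ≈ 2 m/s
```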

BenchXAI: Comprehensive benchmarking of post-hoc explainable AI methods on multi-modal biomedical data.

Metsch JM, Hauschild AC

PubMed · Jun 1, 2025
The increasing digitalization of multi-modal data in medicine and novel artificial intelligence (AI) algorithms open up a large number of opportunities for predictive models. In particular, deep learning models show great performance in the medical field. A major limitation of such powerful but complex models originates from their 'black-box' nature. Recently, a variety of explainable AI (XAI) methods have been introduced to address this lack of transparency and trust in medical AI. However, the majority of such methods have been evaluated only on single data modalities. Meanwhile, with the increasing number of XAI methods, integrative XAI frameworks and benchmarks are essential to compare their performance on different tasks. For that reason, we developed BenchXAI, a novel XAI benchmarking package supporting comprehensive evaluation of fifteen XAI methods, investigating their robustness, suitability, and limitations on biomedical data. We employed BenchXAI to validate these methods in three common biomedical tasks, namely clinical data, medical image and signal data, and biomolecular data. Our newly designed sample-wise normalization approach for post-hoc XAI methods enables the statistical evaluation and visualization of performance and robustness. We found that the XAI methods Integrated Gradients, DeepLift, DeepLiftShap, and GradientShap performed well across all three tasks, while methods like Deconvolution, Guided Backpropagation, and LRP-α1-β0 struggled on some tasks. With legislation such as the EU AI Act, the application of XAI in the biomedical domain is becoming increasingly essential. Our evaluation study represents a first step towards verifying the suitability of different XAI methods for various medical domains.
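As an example of one of the benchmarked attribution methods, the sketch below runs Captum's Integrated Gradients on a toy classifier; BenchXAI's own API is not shown, and the model and data are placeholders.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Toy classifier over 10 tabular features (e.g., clinical variables).
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

inputs = torch.randn(4, 10)  # placeholder batch of samples
ig = IntegratedGradients(model)
# Attribute the class-1 logit to each input feature, against a zero baseline.
attributions = ig.attribute(inputs, baselines=torch.zeros_like(inputs), target=1)
print(attributions.shape)    # torch.Size([4, 10]): one score per feature
```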