FoundBioNet: A Foundation-Based Model for IDH Genotyping of Glioma from Multi-Parametric MRI

Somayeh Farahani, Marjaneh Hejazi, Antonio Di Ieva, Sidong Liu

arXiv preprint · Aug 9, 2025
Accurate, noninvasive detection of isocitrate dehydrogenase (IDH) mutation is essential for effective glioma management. Traditional methods rely on invasive tissue sampling, which may fail to capture a tumor's spatial heterogeneity. While deep learning models have shown promise in molecular profiling, their performance is often limited by scarce annotated data. In contrast, foundation deep learning models offer a more generalizable approach for glioma imaging biomarkers. We propose a Foundation-based Biomarker Network (FoundBioNet) that utilizes a SWIN-UNETR-based architecture to noninvasively predict IDH mutation status from multi-parametric MRI. Two key modules are incorporated: Tumor-Aware Feature Encoding (TAFE) for extracting multi-scale, tumor-focused features, and Cross-Modality Differential (CMD) for highlighting subtle T2-FLAIR mismatch signals associated with IDH mutation. The model was trained and validated on a diverse, multi-center cohort of 1705 glioma patients from six public datasets. Our model achieved AUCs of 90.58%, 88.08%, 65.41%, and 80.31% on independent test sets from EGD, TCGA, Ivy GAP, RHUH, and UPenn, consistently outperforming baseline approaches (p ≤ 0.05). Ablation studies confirmed that both the TAFE and CMD modules are essential for improving predictive accuracy. By integrating large-scale pretraining and task-specific fine-tuning, FoundBioNet enables generalizable glioma characterization. This approach enhances diagnostic accuracy and interpretability, with the potential to enable more personalized patient care.
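The CMD module is motivated by the T2-FLAIR mismatch sign. As a rough illustration of the underlying signal only (not the authors' module), here is a minimal Python sketch that computes a normalized T2-FLAIR difference map, assuming co-registered, skull-stripped NIfTI volumes; the file names are hypothetical:

```python
import nibabel as nib
import numpy as np

def zscore(vol, brain):
    """Z-normalize intensities within the brain mask so the two
    sequences are comparable before subtraction."""
    vals = vol[brain]
    return (vol - vals.mean()) / (vals.std() + 1e-8)

# Hypothetical file names; volumes are assumed co-registered and skull-stripped.
t2 = nib.load("sub01_T2.nii.gz").get_fdata()
flair = nib.load("sub01_FLAIR.nii.gz").get_fdata()
tumor = nib.load("sub01_tumor_mask.nii.gz").get_fdata() > 0
brain = t2 > 0  # crude brain mask from the skull-stripped volume

diff = zscore(t2, brain) - zscore(flair, brain)
# A strongly positive differential inside the tumor (bright on T2 but
# relatively suppressed on FLAIR) is the classic mismatch sign linked
# to IDH mutation.
print("mean T2-FLAIR differential in tumor:", float(diff[tumor].mean()))
```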

Multi-institutional study for comparison of detectability of hypovascular liver metastases between 70- and 40-keV images: DELMIO study.

Ichikawa S, Funayama S, Hyodo T, Ozaki K, Ito A, Kakuya M, Kobayashi T, Tanahashi Y, Kozaka K, Igarashi S, Suto T, Noda Y, Matsuo M, Narita A, Okada H, Suzuki K, Goshima S

PubMed · Aug 9, 2025
To compare the lesion detectability of hypovascular liver metastases between 70-keV and 40-keV images from dual-energy computed tomography (CT) reconstructed with deep-learning image reconstruction (DLIR). This multi-institutional, retrospective study included adult patients evaluated pre- and post-treatment for gastrointestinal adenocarcinoma. All patients underwent contrast-enhanced CT with reconstruction at 40 keV and 70 keV. Liver metastases were confirmed using gadoxetic acid-enhanced magnetic resonance imaging. Four radiologists independently assessed lesion conspicuity (per patient and per lesion) using a 5-point scale. A radiologic technologist measured image noise, tumor-to-liver contrast, and contrast-to-noise ratio (CNR). Quantitative and qualitative results were compared between 70-keV and 40-keV images. The study included 138 patients (mean age, 69 ± 12 years; 80 men); 71 had liver metastases (208 lesions in total), while 67 did not. Primary cancer sites included 68 pancreatic, 50 colorectal, 12 gastric, and 8 gallbladder/bile duct cancers. No significant difference in per-patient lesion detectability was found between 70-keV images (sensitivity, 71.8-90.1%; specificity, 61.2-85.1%; accuracy, 73.9-79.7%) and 40-keV images (sensitivity, 76.1-90.1%; specificity, 53.7-82.1%; accuracy, 71.7-79.0%) (p = 0.18 to >0.99). Similarly, no significant difference in per-lesion detectability was observed between 70-keV (sensitivity, 67.3-82.2%) and 40-keV images (sensitivity, 68.8-81.7%) (p = 0.20 to >0.99). However, image noise was significantly higher at 40 keV, along with greater tumor-to-liver contrast and CNRs for both hepatic parenchyma and tumors (p < 0.01). There was no significant difference in the detectability of hypovascular liver metastases between 70-keV and 40-keV images reconstructed with DLIR.
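For reference, tumor-to-liver contrast and CNR are derived from ROI statistics. A minimal sketch of one common formulation (the study's exact ROI protocol and CNR definition may differ), using toy HU values:

```python
import numpy as np

def tumor_to_liver_contrast(hu_tumor_roi, hu_liver_roi):
    """Absolute attenuation difference between mean tumor and liver ROIs (HU)."""
    return abs(np.mean(hu_tumor_roi) - np.mean(hu_liver_roi))

def cnr(hu_tumor_roi, hu_liver_roi, hu_noise_roi):
    """Contrast-to-noise ratio: ROI contrast divided by image noise,
    taken here as the SD of a homogeneous background ROI."""
    return tumor_to_liver_contrast(hu_tumor_roi, hu_liver_roi) / np.std(hu_noise_roi)

# Toy ROIs (HU samples); at 40 keV both contrast and noise typically rise.
rng = np.random.default_rng(0)
tumor = rng.normal(60, 15, 200)    # hypovascular metastasis
liver = rng.normal(110, 15, 200)   # enhancing parenchyma
noise = rng.normal(100, 12, 200)   # homogeneous region for the noise estimate
print(f"contrast={tumor_to_liver_contrast(tumor, liver):.1f} HU, "
      f"CNR={cnr(tumor, liver, noise):.2f}")
```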

Reducing motion artifacts in the aorta: super-resolution deep learning reconstruction with motion reduction algorithm.

Yasaka K, Tsujimoto R, Miyo R, Abe O

PubMed · Aug 9, 2025
To assess the efficacy of super-resolution deep learning reconstruction (SR-DLR) with a motion reduction algorithm (SR-DLR-M) in mitigating aortic motion artifacts compared to SR-DLR and deep learning reconstruction with a motion reduction algorithm (DLR-M). This retrospective study included 86 patients (mean age, 65.0 ± 14.1 years; 53 males) who underwent contrast-enhanced CT including the chest region. CT images were reconstructed with SR-DLR-M, SR-DLR, and DLR-M. Circular or ovoid regions of interest were placed on the aorta, and the standard deviation of the CT attenuation was recorded as quantitative noise. From the CT attenuation profile along a line region of interest intersecting the left common carotid artery wall, the edge rise slope and edge rise distance were calculated. Two readers assessed the images for artifact, sharpness, noise, structure depiction, and diagnostic acceptability (for aortic dissection). Quantitative noise was 7.4/5.4/8.3 Hounsfield units (HU) for SR-DLR-M/SR-DLR/DLR-M, with significant differences between SR-DLR-M and both SR-DLR and DLR-M (p < 0.001). Edge rise slope and edge rise distance were 107.1/108.8/85.8 HU/mm and 1.6/1.5/2.0 mm, respectively, for SR-DLR-M/SR-DLR/DLR-M, with statistically significant differences between SR-DLR-M and DLR-M (p ≤ 0.001 for both). Both readers scored artifacts in SR-DLR-M as significantly better than in SR-DLR (p < 0.001). Scores for sharpness, noise, and structure depiction in SR-DLR-M were significantly better than in DLR-M (p ≤ 0.005). Diagnostic acceptability of SR-DLR-M was significantly better than that of SR-DLR and DLR-M (p < 0.001). SR-DLR-M provided significantly better CT images for diagnosing aortic dissection compared to SR-DLR and DLR-M.
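Edge rise slope and edge rise distance quantify edge sharpness along a 1-D attenuation profile. A minimal sketch of one plausible definition (the 10-90% rise convention; the study's exact convention is not specified here):

```python
import numpy as np

def edge_rise_metrics(profile_hu, spacing_mm, lo=0.1, hi=0.9):
    """Edge sharpness from a 1-D CT attenuation profile across a vessel wall:
    distance and slope between the 10% and 90% points of the intensity rise.
    (One common definition; the study's exact convention may differ.)"""
    base, peak = profile_hu.min(), profile_hu.max()
    lo_hu = base + lo * (peak - base)
    hi_hu = base + hi * (peak - base)
    i_lo = np.argmax(profile_hu >= lo_hu)   # first sample above the 10% level
    i_hi = np.argmax(profile_hu >= hi_hu)   # first sample above the 90% level
    rise_dist = max(i_hi - i_lo, 1) * spacing_mm   # edge rise distance (mm)
    rise_slope = (hi_hu - lo_hu) / rise_dist       # edge rise slope (HU/mm)
    return rise_slope, rise_dist

# Toy profile: a blurred step edge sampled every 0.5 mm.
x = np.linspace(-5, 5, 21)
profile = 40 + 160 / (1 + np.exp(-x / 0.8))  # sigmoid edge, ~40 -> ~200 HU
slope, dist = edge_rise_metrics(profile, spacing_mm=0.5)
print(f"edge rise slope={slope:.1f} HU/mm, edge rise distance={dist:.1f} mm")
```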

Automated 3D segmentation of rotator cuff muscle and fat from longitudinal CT for shoulder arthroplasty evaluation.

Yang M, Jun BJ, Owings T, Subhas N, Polster J, Winalski CS, Ho JC, Entezari V, Derwin KA, Ricchetti ET, Li X

PubMed · Aug 9, 2025
To develop and validate a deep learning model for automated 3D segmentation of rotator cuff muscles on longitudinal CT scans to quantify muscle volume and fat fraction in patients undergoing total shoulder arthroplasty (TSA). The proposed segmentation models adopted DeepLabV3+ with a ResNet50 backbone. The models were trained, validated, and tested on preoperative or minimum 2-year follow-up CT scans from 53 TSA subjects. 3D Dice similarity scores, average symmetric surface distance (ASSD), 95th percentile Hausdorff distance (HD95), and relative absolute volume difference (RAVD) were used to evaluate model performance on hold-out test sets. The trained models were applied to a cohort of 172 patients to quantify rotator cuff muscle volumes and fat fractions across preoperative and minimum 2- and 5-year follow-ups. Compared to the ground truth, the models achieved mean Dice of 0.928 and 0.916, mean ASSD of 0.844 mm and 1.028 mm, mean HD95 of 3.071 mm and 4.173 mm, and mean RAVD of 0.025 and 0.068 on the hold-out test sets for the preoperative and minimum 2-year follow-up CT scans, respectively. This study developed accurate and reliable deep learning models for automated 3D segmentation of rotator cuff muscles on clinical CT scans in TSA patients. These models substantially reduce the time required for muscle volume and fat fraction analysis and provide a practical tool for investigating how rotator cuff muscle health relates to surgical outcomes, with the potential to inform patient selection, rehabilitation planning, and surgical decision-making in TSA and rotator cuff repair (RCR).
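Of the reported metrics, the overlap and volume terms are straightforward to compute from binary masks; the surface metrics (ASSD, HD95) additionally require surface-distance computations (available, e.g., in MONAI's metrics module) and are omitted here. A minimal numpy sketch of Dice and RAVD on toy masks:

```python
import numpy as np

def dice(pred, gt):
    """3D Dice similarity coefficient between binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def ravd(pred, gt):
    """Relative absolute volume difference: |V_pred - V_gt| / V_gt."""
    return abs(int(pred.sum()) - int(gt.sum())) / (gt.sum() + 1e-8)

# Toy masks; in practice these are per-muscle label volumes from the model.
rng = np.random.default_rng(42)
gt = rng.random((64, 64, 64)) > 0.7
pred = gt.copy()
pred[:2] = False  # simulate a small under-segmentation
print(f"Dice={dice(pred, gt):.3f}, RAVD={ravd(pred, gt):.3f}")
```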

Ultrasound-Based Machine Learning and SHapley Additive exPlanations Method Evaluating Risk of Gallbladder Cancer: A Bicentric and Validation Study.

Chen B, Zhong H, Lin J, Lyu G, Su S

PubMed · Aug 9, 2025
This study aimed to construct and evaluate eight machine learning models integrating ultrasound imaging features, clinical characteristics, and serological features to assess the risk of gallbladder cancer (GBC) in patients. A retrospective analysis was conducted on ultrasound and clinical data of 300 suspected GBC patients who visited the Second Affiliated Hospital of Fujian Medical University from January 2020 to January 2024 and 69 patients who visited the Zhongshan Hospital Affiliated to Xiamen University from January 2024 to January 2025. Key features were selected using Least Absolute Shrinkage and Selection Operator (LASSO) regression. Predictive models were constructed using XGBoost, logistic regression, support vector machine, k-nearest neighbors, random forest, decision tree, naive Bayes, and neural network classifiers, with the SHapley Additive exPlanations (SHAP) method employed for model interpretability. LASSO regression identified gender, age, alkaline phosphatase (ALP), clarity of the interface with the liver, stratification of the gallbladder wall, intracapsular anechoic lesions, and intracapsular punctiform strong-echo lesions as key features for GBC. The XGBoost model achieved an area under the receiver operating characteristic curve (AUC) of 0.934, 0.916, and 0.813 in the training, validation, and test sets, respectively. SHAP analysis ranked the factors by importance as clarity of the interface with the liver, stratification of the gallbladder wall, intracapsular anechoic lesions, intracapsular punctiform strong-echo lesions, ALP, gender, and age. Personalized prediction explanations through SHAP values showed the contribution of each feature to the final prediction, enhancing result interpretability. Furthermore, decision plots were generated to display the influence trajectory of each feature on model predictions, helping to identify which features contributed most to mispredictions and thereby facilitating further model optimization or feature adjustment. This study proposed a GBC machine learning model based on ultrasound, clinical, and serological characteristics, demonstrating the superior performance of the XGBoost model and enhancing model interpretability through the SHAP method.
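The described pipeline (LASSO selection, then a tree-based classifier explained with SHAP) has a compact generic form. A minimal sketch on toy tabular data, with hypothetical hyperparameters and no claim to match the study's configuration:

```python
import numpy as np
import shap
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Toy stand-in for the tabular ultrasound/clinical/serological features.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=300) > 0).astype(int)

# 1) LASSO feature selection: keep features with non-zero coefficients.
lasso = LassoCV(cv=5).fit(X, y)
keep = np.flatnonzero(lasso.coef_)
X_sel = X[:, keep]

# 2) Gradient-boosted classifier on the selected features.
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X_tr, y_tr)

# 3) SHAP values explain each feature's contribution to each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print("selected features:", keep,
      "| mean |SHAP|:", np.abs(shap_values).mean(axis=0).round(3))
```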

Spinal-QDCNN: advanced feature extraction for brain tumor detection using MRI images.

T L, J JJ, Rani VV, Saini ML

PubMed · Aug 9, 2025
Brain tumors arise from the abnormal development of cells in the brain. They adversely affect human health, and early diagnosis is required to improve patient survival rates. Various detection models have therefore been developed, but existing methods often suffer from limited accuracy and inefficient learning architectures, and traditional approaches cannot effectively detect small and subtle changes in brain cells. To overcome these limitations, a SpinalNet-Quantum Dilated Convolutional Neural Network (Spinal-QDCNN) model, combining QDCNN and SpinalNet, is proposed for detecting brain tumors in MRI images. First, the input brain image is pre-processed using RoI extraction. Image enhancement is then performed using a thresholding transformation, followed by segmentation using Projective Adversarial Networks (PAN). Next, operations such as random erasing, flipping, and resizing are applied in the image augmentation phase. This is followed by feature extraction, where statistical features (mean, average contrast, kurtosis, and skewness), Gabor wavelet features, and Discrete Wavelet Transform (DWT) features combined with the Gradient Binary Pattern (GBP) are extracted; detection is finally performed by the Spinal-QDCNN. The proposed method attained a maximum accuracy of 86.356%, sensitivity of 87.37%, and specificity of 88.357%.
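Several of the named handcrafted features have standard library implementations. A minimal sketch extracting the first-order statistics, a Gabor response, and single-level DWT subband energies for a 2-D slice (the GBP component is omitted, and parameters are illustrative only):

```python
import numpy as np
import pywt
from scipy.stats import kurtosis, skew
from skimage.filters import gabor

def handcrafted_features(img):
    """First-order statistics, a Gabor response, and DWT subband energies
    for a 2-D MRI slice (normalized to [0, 1])."""
    feats = {
        "mean": float(img.mean()),
        "contrast": float(img.std()),        # average contrast as intensity SD
        "kurtosis": float(kurtosis(img.ravel())),
        "skewness": float(skew(img.ravel())),
    }
    real, _ = gabor(img, frequency=0.4)       # Gabor wavelet response
    feats["gabor_energy"] = float(np.mean(real ** 2))
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")  # single-level 2-D DWT
    for name, sub in zip(("dwt_LL", "dwt_LH", "dwt_HL", "dwt_HH"),
                         (cA, cH, cV, cD)):
        feats[name] = float(np.mean(sub ** 2))
    return feats

# Toy slice; real inputs would be the pre-processed, augmented MRI patches.
img = np.random.default_rng(1).random((128, 128))
print(handcrafted_features(img))
```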

Parental and carer views on the use of AI in imaging for children: a national survey.

Agarwal G, Salami RK, Lee L, Martin H, Shantharam L, Thomas K, Ashworth E, Allan E, Yung KW, Pauling C, Leyden D, Arthurs OJ, Shelmerdine SC

PubMed · Aug 9, 2025
Although the use of artificial intelligence (AI) in healthcare is increasing, stakeholder engagement remains poor, particularly around parent/carer acceptance of AI tools in paediatric imaging. We explore these perceptions and compare them with the opinions of children and young people (CYAP). A UK national online survey was conducted, inviting parents, carers, and guardians of children to participate. The survey was "live" from June 2022 to 2023 and included questions about respondents' views of AI in general, as well as in specific circumstances (e.g. fractures), with respect to children's healthcare. One hundred forty-six parents/carers (mean age = 45; range = 21-80) from all four nations of the UK responded. Most respondents (93/146, 64%) believed that AI would be more accurate at interpreting paediatric musculoskeletal radiographs than healthcare professionals, but had a strong preference for human supervision (66%). While male respondents were more likely to believe that AI would be more accurate (55/72, 76%), they were twice as likely as female parents/carers to believe that AI use could result in their child's data falling into the wrong hands. Most respondents would like to be asked permission before AI is used to interpret their child's scans (104/146, 71%). Notably, 79% of parents/carers prioritised accuracy over speed, compared to 66% of CYAP. Parents/carers feel positively about AI for paediatric imaging but strongly discourage autonomous use. Acknowledging the diverse opinions of the patient population is vital to the successful integration of AI in paediatric imaging. Parents/carers demonstrate a preference for AI use with human supervision that prioritises accuracy, transparency, and institutional accountability. AI is welcomed as a supportive tool, but not as a substitute for human expertise. Parents/carers are accepting of AI use with human supervision. Over half believe AI will replace doctors/nurses looking at bone X-rays within 5 years. Parents/carers are more likely than CYAP to trust AI's accuracy, and are also more sceptical about AI data misuse.

Enhanced hyper tuning using bioinspired-based deep learning model for accurate lung cancer detection and classification.

Kumari J, Sinha S, Singh L

PubMed · Aug 9, 2025
Lung cancer (LC) is one of the leading causes of cancer-related deaths worldwide, and early recognition is critical for improving patient outcomes. However, existing LC detection techniques face challenges such as high computational demands, complex data integration, scalability limitations, and difficulties in achieving rigorous clinical validation. This research proposes an Enhanced Hyper Tuning Deep Learning (EHTDL) model utilizing bioinspired algorithms to overcome these limitations and improve the accuracy and efficiency of LC detection and classification. The methodology begins with the Smooth Edge Enhancement (SEE) technique for preprocessing CT images, followed by feature extraction using GLCM-based texture analysis. To refine the features and reduce dimensionality, a hybrid feature selection approach combining Grey Wolf Optimization (GWO) and Differential Evolution (DE) is employed. Precise lung segmentation is performed using Mask R-CNN to ensure accurate delineation of lung regions. A Deep Fractal Edge Classifier (DFEC) is introduced, consisting of five fractal blocks with convolutional and pooling layers that progressively learn LC characteristics. The proposed EHTDL model achieves remarkable performance metrics, including 99% accuracy, 100% precision, 98% recall, and a 99% F1-score, demonstrating its robustness and effectiveness. The model's scalability and efficiency make it suitable for real-time clinical application, offering a promising solution for early LC detection and significantly enhanced patient care.
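The GLCM-based texture analysis step has a standard scikit-image implementation. A minimal sketch computing common GLCM descriptors from an 8-bit slice (offsets, angles, and property choices here are assumptions, not the paper's settings):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(ct_slice_u8):
    """GLCM texture descriptors from an 8-bit CT slice, averaged over
    four directions at a one-pixel offset."""
    glcm = graycomatrix(
        ct_slice_u8,
        distances=[1],
        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
        levels=256,
        symmetric=True,
        normed=True,
    )
    return {
        prop: float(graycoprops(glcm, prop).mean())
        for prop in ("contrast", "homogeneity", "energy", "correlation")
    }

# Toy 8-bit slice; real inputs would be the SEE-preprocessed lung CT images.
img = (np.random.default_rng(7).random((64, 64)) * 255).astype(np.uint8)
print(glcm_features(img))
```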

Neurobehavioral mechanisms of fear and anxiety in multiple sclerosis.

Meyer-Arndt L, Rust R, Bellmann-Strobl J, Schmitz-Hübsch T, Marko L, Forslund S, Scheel M, Gold SM, Hetzer S, Paul F, Weygandt M

PubMed · Aug 9, 2025
Anxiety is a common yet often underdiagnosed and undertreated comorbidity in multiple sclerosis (MS). While altered fear processing is a hallmark of anxiety in other populations, its neurobehavioral mechanisms in MS remain poorly understood. This study investigates the extent to which neurobehavioral mechanisms of fear generalization contribute to anxiety in MS. We recruited 18 persons with MS (PwMS) and anxiety, 36 PwMS without anxiety, and 23 healthy persons (HPs). Participants completed a functional MRI (fMRI) fear generalization task to assess fear processing and diffusion-weighted MRI for graph-based structural connectome analyses. Consistent with findings in non-MS anxiety populations, PwMS with anxiety exhibit fear overgeneralization, perceiving non-threatening stimuli as threatening. A machine learning model trained on HPs in a multivariate pattern analysis (MVPA) cross-decoding approach accurately predicts behavioral fear generalization in both MS groups from whole-brain fMRI fear response patterns. Regional fMRI prediction and graph-based structural connectivity analyses reveal that fear response activity and structural network integrity of partially overlapping areas, such as the hippocampus (for fear stimulus comparison) and anterior insula (for fear excitation), are crucial for fear generalization in MS. Reduced network integrity in such regions is a direct indicator of MS anxiety. Our findings demonstrate that MS anxiety is substantially characterized by fear overgeneralization. That a model trained to associate fMRI fear response patterns with fear ratings in HPs also predicts fear ratings from fMRI data across MS groups suggests that generic fear-processing mechanisms substantially contribute to anxiety in MS.
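The core of the MVPA cross-decoding logic is to fit a decoder in one population and evaluate it, untouched, in another. A minimal sketch with a generic linear decoder on synthetic patterns (the study's actual estimator and validation scheme may differ):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_voxels = 500

# Toy stand-ins: whole-brain fear-response patterns (trials x voxels) and
# per-trial behavioral fear ratings for each group.
X_hp, y_hp = rng.normal(size=(200, n_voxels)), rng.normal(size=200)
X_ms, y_ms = rng.normal(size=(120, n_voxels)), rng.normal(size=120)
y_hp += X_hp[:, 0]   # inject a shared pattern-rating association
y_ms += X_ms[:, 0]

# Cross-decoding: fit only on healthy persons, test on the MS group.
decoder = Ridge(alpha=10.0).fit(X_hp, y_hp)
pred_ms = decoder.predict(X_ms)
print(f"cross-decoding R^2 on MS group: {r2_score(y_ms, pred_ms):.3f}")
```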

Supporting intraoperative margin assessment using deep learning for automatic tumour segmentation in breast lumpectomy micro-PET-CT.

Maris L, Göker M, De Man K, Van den Broeck B, Van Hoecke S, Van de Vijver K, Vanhove C, Keereman V

PubMed · Aug 9, 2025
Complete tumour removal is vital in curative breast cancer (BCa) surgery to prevent recurrence. Recently, [¹⁸F]FDG micro-PET-CT of lumpectomy specimens has shown promise for intraoperative margin assessment (IMA). To aid interpretation, we trained a 2D Residual U-Net to delineate invasive carcinoma of no special type in micro-PET-CT lumpectomy images. We collected 53 BCa lamella images from 19 patients with histopathology-defined ground-truth tumour segmentations. Group five-fold cross-validation yielded a Dice similarity coefficient of 0.71 ± 0.20 for segmentation. An ensemble model was then generated to segment tumours and predict margin status. Comparing predicted and true histopathological margin status in a separate set of 31 micro-PET-CT lumpectomy images from 31 patients achieved an F1 score of 84%, closely matching the mean performance of seven physicians who manually interpreted the same images. This model represents an important step towards a decision-support system that enhances micro-PET-CT-based IMA in BCa, facilitating its clinical adoption.
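Group five-fold cross-validation keeps all images from one patient in the same fold, preventing leakage across splits. A minimal sketch with scikit-learn's GroupKFold and toy patient IDs:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# 53 lamella images from 19 patients in the paper; here, toy IDs.
rng = np.random.default_rng(3)
n_images = 53
patient_id = rng.integers(0, 19, size=n_images)  # one group per patient
X = np.arange(n_images)                          # placeholder image indices

# Grouped five-fold CV: all images of a patient stay in the same fold,
# so the model is never validated on a patient it has seen in training.
gkf = GroupKFold(n_splits=5)
for fold, (train_idx, val_idx) in enumerate(gkf.split(X, groups=patient_id)):
    shared = set(patient_id[train_idx]) & set(patient_id[val_idx])
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val images, "
          f"patient overlap = {len(shared)}")
```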