Page 1 of 72715 results

Breast tumor diagnosis via multimodal deep learning using ultrasound B-mode and Nakagami images.

Muhtadi S, Gallippi CM

PubMed · Nov 1, 2025
We propose and evaluate multimodal deep learning (DL) approaches that combine ultrasound (US) B-mode and Nakagami parametric images for breast tumor classification. It is hypothesized that integrating tissue brightness information from B-mode images with scattering properties from Nakagami images will enhance diagnostic performance compared with single-input approaches. An EfficientNetV2B0 network was used to develop multimodal DL frameworks that took as input (i) numerical two-dimensional (2D) maps or (ii) rendered red-green-blue (RGB) representations of both B-mode and Nakagami data. The diagnostic performance of these frameworks was compared with single-input counterparts using 831 US acquisitions from 264 patients. In addition, gradient-weighted class activation mapping was applied to evaluate diagnostically relevant information utilized by the different networks. The multimodal architectures demonstrated significantly higher area under the receiver operating characteristic curve (AUC) values (p < 0.05) than their monomodal counterparts, achieving an average improvement of 10.75%. In addition, the multimodal networks incorporated, on average, 15.70% more diagnostically relevant tissue information. Among the multimodal models, those using RGB representations as input outperformed those that utilized 2D numerical data maps (p < 0.05). The top-performing multimodal architecture achieved a mean AUC of 0.896 [95% confidence interval (CI): 0.813 to 0.959] when performance was assessed at the image level and 0.848 (95% CI: 0.755 to 0.903) when assessed at the lesion level.
Incorporating B-mode and Nakagami information together in a multimodal DL framework improved classification outcomes and increased the amount of diagnostically relevant information accessed by networks, highlighting the potential for automating and standardizing US breast cancer diagnostics to enhance clinical outcomes.
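The AUC comparisons above rest on the Mann-Whitney interpretation of AUC: the probability that a randomly chosen malignant case outscores a randomly chosen benign one. A minimal pure-Python sketch with invented scores (not the study's data or model):

```python
def auc(pos_scores, neg_scores):
    """AUC as the Mann-Whitney probability that a positive case
    outscores a negative one; ties count as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# hypothetical network scores for malignant (pos) and benign (neg) images
pos = [0.9, 0.8, 0.7, 0.6]
neg = [0.5, 0.4, 0.8, 0.2]
print(auc(pos, neg))  # 0.84375
```

The same pairwise counting underlies the nonparametric AUC estimators used in most significance tests of AUC differences.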

Robust evaluation of tissue-specific radiomic features for classifying breast tissue density grades.

Dong V, Mankowski W, Silva Filho TM, McCarthy AM, Kontos D, Maidment ADA, Barufaldi B

PubMed · Nov 1, 2025
Breast cancer risk depends on an accurate assessment of breast density due to lesion masking. Although governed by standardized guidelines, radiologist assessment of breast density is still highly variable. Automated breast density assessment tools leverage deep learning but are limited by model robustness and interpretability. We assessed the robustness of a feature selection methodology (RFE-SHAP) for classifying breast density grades using tissue-specific radiomic features extracted from raw central projections of digital breast tomosynthesis screenings (n_I = 651, n_II = 100). RFE-SHAP leverages traditional and explainable AI methods to identify highly predictive and influential features. A simple logistic regression (LR) classifier was used to assess classification performance, and unsupervised clustering was employed to investigate the intrinsic separability of density grade classes.
LR classifiers yielded cross-validated areas under the receiver operating characteristic curve (AUCs) per density grade of [A: 0.909 ± 0.032, B: 0.858 ± 0.027, C: 0.927 ± 0.013, D: 0.890 ± 0.089] and an AUC of 0.936 ± 0.016 for classifying patients as nondense or dense. In external validation, we observed per density grade AUCs of [A: 0.880, B: 0.779, C: 0.878, D: 0.673] and a nondense/dense AUC of 0.823. Unsupervised clustering highlighted the ability of these features to characterize different density grades. Our RFE-SHAP feature selection methodology for classifying breast tissue density generalized well to validation datasets after accounting for natural class imbalance, and the identified radiomic features properly captured the progression of density grades.
Our results potentiate future research into correlating selected radiomic features with clinical descriptors of breast tissue density.
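RFE-SHAP pairs recursive feature elimination with SHAP-based importance ranking. A minimal sketch of the recursive-elimination half alone, using least-squares weight magnitude as a stand-in importance score on synthetic data (the criterion and data are illustrative, not the paper's):

```python
import numpy as np

def rfe(X, y, n_keep):
    """Toy recursive feature elimination: repeatedly fit least-squares
    weights and drop the feature with the smallest absolute weight."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
        keep.pop(int(np.argmin(np.abs(w))))
    return keep

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
# only features 1 and 4 actually drive the (synthetic) target
y = 3.0 * X[:, 1] + 0.5 * X[:, 4] + rng.normal(scale=0.1, size=200)
print(rfe(X, y, 2))  # [1, 4]
```

Eliminating one feature at a time and refitting is what distinguishes RFE from a single-pass ranking: a feature's importance is re-estimated after each removal.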

Automated Measurements of Spinal Parameters for Scoliosis Using Deep Learning.

Meng X, Zhu S, Yang Q, Zhu F, Wang Z, Liu X, Dong P, Wang S, Fan L

PubMed · Jun 15, 2025
Retrospective single-institution study. To develop and validate an automated convolutional neural network (CNN) to measure the Cobb angle, T1 tilt angle, coronal balance, clavicular angle, height of the shoulders, T5-T12 Cobb angle, and sagittal balance for accurate scoliosis diagnosis. Scoliosis, characterized by a Cobb angle >10°, requires accurate and reliable measurements to guide treatment. Traditional manual measurements are time-consuming and have low interobserver and intraobserver reliability. While some automated tools exist, they often require manual intervention and focus primarily on the Cobb angle. In this study, we utilized four data sets comprising the anterior-posterior (AP) and lateral radiographs of 1682 patients with scoliosis. The CNN includes coarse segmentation, landmark localization, and fine segmentation. The measurements were evaluated using the Dice coefficient, mean absolute error (MAE), and percentage of correct key-points (PCK) with a 3-mm threshold. An internal testing set, including 87 adolescent (7-16 yr) and 26 older adult patients (≥60 yr), was used to evaluate the agreement between automated and manual measurements. The automated measurements by the CNN achieved high mean Dice coefficients (>0.90), PCK of 89.7%-93.7%, and MAE for vertebral corners of 2.87-3.62 mm on AP radiographs. Agreement between automated and manual measurements on the internal testing set was acceptable, with MAEs of 0.26 to 0.51 mm or degrees for the adolescent subgroup and 0.29 to 4.93 mm or degrees for the older adult subgroup on AP radiographs. The MAE for the T5-T12 Cobb angle and sagittal balance, on lateral radiographs, was 1.03° and 0.84 mm, respectively, in adolescents, and 4.60° and 9.41 mm, respectively, in older adults. Automated measurement time was significantly shorter compared with manual measurements.
The deep learning automated system provides rapid, accurate, and reliable measurements for scoliosis diagnosis, which could improve clinical workflow efficiency and guide scoliosis treatment. Level of Evidence: III.
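Geometrically, the Cobb angle the network measures is the angle between the superior endplate of the most tilted upper vertebra and the inferior endplate of the most tilted lower vertebra. A sketch from hypothetical landmark coordinates (math convention, y up; not the paper's landmark-localization code):

```python
import math

def cobb_angle(upper_endplate, lower_endplate):
    """Angle between two endplate lines, each given as ((x1, y1), (x2, y2))."""
    def tilt(p, q):
        return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))
    return abs(tilt(*upper_endplate) - tilt(*lower_endplate))

# hypothetical landmarks: upper endplate tilted +10°, lower tilted -15°
upper = ((0.0, 0.0), (1.0, math.tan(math.radians(10))))
lower = ((0.0, 0.0), (1.0, math.tan(math.radians(-15))))
print(round(cobb_angle(upper, lower), 1))  # 25.0
```

This is why the paper's 2.87-3.62 mm vertebral-corner MAE matters: the angle is derived entirely from those corner landmarks, so corner error propagates directly into the Cobb measurement.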

Biological age prediction in schizophrenia using brain MRI, gut microbiome and blood data.

Han R, Wang W, Liao J, Peng R, Liang L, Li W, Feng S, Huang Y, Fong LM, Zhou J, Li X, Ning Y, Wu F, Wu K

PubMed · Jun 15, 2025
Biological age prediction from various types of biological data has been widely explored. However, any single biological data type may offer limited insight into the pathological processes of aging and disease. Here we evaluated the performance of machine learning models for biological age prediction using integrated features from multi-biological data of 140 healthy controls and 43 patients with schizophrenia, including brain MRI, gut microbiome, and blood data. Our results revealed that the models using multi-biological data achieved higher predictive accuracy than those using only brain MRI. Feature interpretability analysis of the optimal model elucidated substantial contributions of the frontal lobe, the temporal lobe, and the fornix to biological age prediction. Notably, patients with schizophrenia exhibited a pronounced increase in the predicted biological age gap (BAG) when compared to healthy controls. Moreover, the BAG in the schizophrenia group was negatively and positively correlated with the MCCB and PANSS scores, respectively. These findings underscore the potential of BAG as a valuable biomarker for assessing cognitive decline and symptom severity in neuropsychiatric disorders.
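The brain age gap (BAG) is simply predicted minus chronological age, and the reported clinical associations are correlations between BAG and scale scores. A stdlib-only sketch with made-up numbers (not the study's data; the Pearson formula here is a common choice, the paper may use a different correlation):

```python
def brain_age_gap(predicted, chronological):
    """BAG: positive values mean the model sees the brain as 'older'."""
    return [p - c for p, c in zip(predicted, chronological)]

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# hypothetical: larger BAG accompanies worse symptom (PANSS-like) scores
bag = brain_age_gap([38, 45, 52, 60], [35, 40, 45, 50])
panss = [60, 70, 85, 95]
print(round(pearson(bag, panss), 2))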

Altered resting-state brain activity in patients with major depression disorder and bipolar disorder: A regional homogeneity analysis.

Han W, Su Y, Wang X, Yang T, Zhao G, Mao R, Zhu N, Zhou R, Wang X, Wang Y, Peng D, Wang Z, Fang Y, Chen J, Sun P

PubMed · Jun 15, 2025
Major Depressive Disorder (MDD) and Bipolar Disorder (BD) exhibit overlapping depressive symptoms, complicating their differentiation in clinical practice. Traditional neuroimaging studies have focused on specific regions of interest, but few have employed whole-brain analyses such as regional homogeneity (ReHo). This study aims to differentiate MDD from BD by identifying key brain regions with abnormal ReHo and using machine learning to improve diagnostic accuracy. A total of 63 BD patients, 65 MDD patients, and 70 healthy controls were recruited from the Shanghai Mental Health Center. Resting-state functional MRI (rs-fMRI) was used to analyze ReHo across the brain. We applied Support Vector Machine (SVM) and SVM-Recursive Feature Elimination (SVM-RFE), a model valued for its precision in feature selection and classification, to identify critical brain regions that could serve as biomarkers for distinguishing BD from MDD. SVM-RFE recursively removes non-informative features, enhancing the model's ability to classify patients accurately. Correlations between ReHo values and clinical scores were also evaluated. ReHo analysis revealed significant differences in several brain regions: compared to healthy controls, both BD and MDD patients exhibited reduced ReHo in the superior parietal gyrus. Additionally, MDD patients showed decreased ReHo values in the right lenticular nucleus/putamen (PUT.R), right angular gyrus (ANG.R), and left superior occipital gyrus (SOG.L). Compared to the MDD group, BD patients exhibited increased ReHo values in the left inferior occipital gyrus (IOG.L). In BD patients only, the reduction in ReHo values in the right superior parietal gyrus and the right angular gyrus was positively correlated with Hamilton Depression Scale (HAMD) scores.
SVM-RFE identified the IOG.L, SOG.L, and PUT.R as the most critical features, achieving an area under the curve (AUC) of 0.872, with high sensitivity and specificity in distinguishing BD from MDD. This study demonstrates that BD and MDD patients exhibit distinct patterns of regional brain activity, particularly in the occipital and parietal regions. The combination of ReHo analysis and SVM-RFE provides a powerful approach for identifying potential biomarkers, with the left inferior occipital gyrus, left superior occipital gyrus, and right putamen emerging as key differentiating regions. These findings offer valuable insights for improving the diagnostic accuracy between BD and MDD, contributing to more targeted treatment strategies.
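The sensitivity and specificity reported alongside the 0.872 AUC come straight from the confusion counts of the BD-vs-MDD classifier. A toy sketch (the counts are invented for illustration, not the study's results):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# invented confusion counts for a BD-vs-MDD classifier (63 BD, 65 MDD)
sens, spec = sensitivity_specificity(tp=50, fn=13, tn=55, fp=10)
print(round(sens, 2), round(spec, 2))  # 0.79 0.85
```

Reporting both values matters in this setting because misclassifying BD as MDD and vice versa carry different treatment consequences.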

A computed tomography angiography-based radiomics model for prognostic prediction of endovascular abdominal aortic repair.

Huang S, Liu D, Deng K, Shu C, Wu Y, Zhou Z

PubMed · Jun 15, 2025
This study aimed to develop a radiomics machine learning (ML) model that uses preoperative computed tomography angiography (CTA) data to predict the prognosis of endovascular aneurysm repair (EVAR) in abdominal aortic aneurysm (AAA) patients. In this retrospective study, 164 AAA patients underwent EVAR and were categorized into shrinkage (good prognosis) or stable (poor prognosis) groups based on post-EVAR sac regression. Radiomics features (RFs) were extracted from preoperative AAA and perivascular adipose tissue (PVAT) images for model creation. Patients were split into 80% training and 20% test sets. A support vector machine model was constructed for prediction. Accuracy was evaluated via the area under the receiver operating characteristic curve (AUC). Demographics and comorbidities showed no significant differences between the shrinkage and stable groups. The model containing 5 AAA RFs (original_firstorder_InterquartileRange, log-sigma-3-0-mm-3D_glrlm_GrayLevelNonUniformityNormalized, log-sigma-3-0-mm-3D_glrlm_RunPercentage, log-sigma-4-0-mm-3D_glrlm_ShortRunLowGrayLevelEmphasis, and wavelet-LLH_glcm_SumEntropy) had AUCs of 0.86 (training) and 0.77 (test). The model containing 7 PVAT RFs (log-sigma-3-0-mm-3D_firstorder_InterquartileRange, log-sigma-3-0-mm-3D_glcm_Correlation, wavelet-LHL_firstorder_Energy, wavelet-LHL_firstorder_TotalEnergy, wavelet-LHH_firstorder_Mean, wavelet-LHH_glcm_Idmn, and wavelet-LHH_glszm_GrayLevelNonUniformityNormalized) had AUCs of 0.76 (training) and 0.78 (test). Combining AAA and PVAT RFs yielded the highest accuracy: AUCs of 0.93 (training) and 0.87 (test). The radiomics-based CTA model predicted aneurysm sac regression post-EVAR in AAA patients. PVAT RFs from preoperative CTA images were closely related to AAA prognosis after EVAR, enhancing accuracy when combined with AAA RFs.
This preliminary study explores a predictive model designed to assist clinicians in optimizing therapeutic strategies during clinical decision-making processes.
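Several of the selected features are first-order statistics; for instance, original_firstorder_InterquartileRange is just the 75th-minus-25th-percentile spread of intensities within the segmented region. A stdlib sketch of that single feature on toy intensities (real pipelines such as PyRadiomics apply their own binning and percentile conventions):

```python
import statistics

def interquartile_range(intensities):
    """First-order radiomic feature: IQR = P75 - P25 of region intensities."""
    q1, _median, q3 = statistics.quantiles(intensities, n=4)
    return q3 - q1

# hypothetical voxel intensities from a segmented ROI
roi = [12, 15, 9, 22, 30, 18, 25, 11]
print(interquartile_range(roi))  # 13.0
```

Note that `statistics.quantiles` defaults to the "exclusive" percentile method; other conventions give slightly different values on small samples.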

Predicting pulmonary hemodynamics in pediatric pulmonary arterial hypertension using cardiac magnetic resonance imaging and machine learning: an exploratory pilot study.

Chu H, Ferreira RJ, Lokhorst C, Douwes JM, Haarman MG, Willems TP, Berger RMF, Ploegstra MJ

PubMed · Jun 14, 2025
Pulmonary arterial hypertension (PAH) significantly affects the pulmonary vasculature, requiring accurate estimation of mean pulmonary arterial pressure (mPAP) and pulmonary vascular resistance index (PVRi). Although cardiac catheterization is the gold standard for these measurements, it poses risks, especially in children. This pilot study explored how machine learning (ML) can predict pulmonary hemodynamics from non-invasive cardiac magnetic resonance (CMR) cine images in pediatric PAH patients. A retrospective analysis of 40 CMR studies from children with PAH using a four-fold stratified group cross-validation was conducted. The endpoints were severity profiles of mPAP and PVRi, categorised as 'low', 'high', and 'extreme'. Deep learning (DL) and traditional ML models were optimized through hyperparameter tuning. Receiver operating characteristic curves and area under the curve (AUC) were used as the primary evaluation metrics. DL models utilizing CMR cine imaging showed the best potential for predicting mPAP and PVRi severity profiles on test folds (AUC_mPAP = 0.82 and AUC_PVRi = 0.73). True positive rates (TPR) for predicting low, high, and extreme mPAP were 5/10, 11/16, and 11/14, respectively. TPRs for predicting low, high, and extreme PVRi were 5/13, 14/15, and 7/12, respectively. The optimal DL models used only spatial patterns from consecutive CMR cine frames to maximize prediction performance. This exploratory pilot study demonstrates the potential of DL leveraging CMR imaging for non-invasive prediction of mPAP and PVRi in pediatric PAH. While preliminary, these findings may lay the groundwork for future advancements in CMR imaging in pediatric PAH, offering a pathway to safer disease monitoring and reduced reliance on invasive cardiac catheterization.
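The stratified group cross-validation used here must keep all CMR studies from one child in the same fold to avoid leakage between train and test. A minimal sketch of group-wise fold assignment (greedy size balancing only; scikit-learn's StratifiedGroupKFold additionally stratifies by label):

```python
from collections import Counter

def group_folds(groups, k):
    """Assign whole groups (e.g., patients) to folds so studies from
    the same patient never split across train and test."""
    sizes = [0] * k
    fold_of = {}
    # greedy: largest group first into the currently smallest fold
    for g, c in Counter(groups).most_common():
        i = sizes.index(min(sizes))
        fold_of[g] = i
        sizes[i] += c
    return [fold_of[g] for g in groups]

# hypothetical patient IDs for six CMR studies, two folds
folds = group_folds(["a", "a", "b", "c", "c", "c"], 2)
print(folds)  # [1, 1, 1, 0, 0, 0]
```

With only 40 studies, this grouping constraint is also why the reported TPRs are quoted as raw fractions per fold rather than percentages.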

Automated quantification of T1 and T2 relaxation times in liver mpMRI using deep learning: a sequence-adaptive approach.

Zbinden L, Erb S, Catucci D, Doorenbos L, Hulbert L, Berzigotti A, Brönimann M, Ebner L, Christe A, Obmann VC, Sznitman R, Huber AT

PubMed · Jun 14, 2025
To evaluate a deep learning sequence-adaptive liver multiparametric MRI (mpMRI) assessment with validation in different populations using total and segmental T1 and T2 relaxation time maps. A neural network was trained to label liver segmental parenchyma and its vessels on noncontrast T1-weighted gradient-echo Dixon in-phase acquisitions on 200 liver mpMRI examinations. Then, 120 unseen liver mpMRI examinations of patients with primary sclerosing cholangitis or healthy controls were assessed by coregistering the labels to noncontrast and contrast-enhanced T1 and T2 relaxation time maps for optimization and internal testing. The algorithm was externally tested in a segmental and total liver analysis of previously unseen 65 patients with biopsy-proven liver fibrosis and 25 healthy volunteers. Measured relaxation times were compared to manual measurements using intraclass correlation coefficient (ICC) and Wilcoxon test. Comparison of manual and deep learning-generated segmental areas on different T1 and T2 maps was excellent for segmental (ICC = 0.95 ± 0.1; p < 0.001) and total liver assessment (0.97 ± 0.02, p < 0.001). The resulting median of the differences between automated and manual measurements among all testing populations and liver segments was 1.8 ms for noncontrast T1 (median 835 versus 842 ms), 2.0 ms for contrast-enhanced T1 (median 518 versus 519 ms), and 0.3 ms for T2 (median 37 versus 37 ms). Automated quantification of liver mpMRI is highly effective across different patient populations, offering excellent reliability for total and segmental T1 and T2 maps. Its scalable, sequence-adaptive design could foster comprehensive clinical decision-making. The proposed automated, sequence-adaptive algorithm for total and segmental analysis of liver mpMRI may be co-registered to any combination of parametric sequences, enabling comprehensive quantitative analysis of liver mpMRI without sequence-specific training. 
A deep learning-based algorithm automatically quantified segmental T1 and T2 relaxation times in liver mpMRI. The two-step approach of segmentation and co-registration allowed assessment of arbitrary sequences. The algorithm demonstrated high agreement with manual reader quantification. No additional sequence-specific training is required to assess other parametric sequences. The DL algorithm has the potential to enhance individual liver phenotyping.
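The ICC reported here quantifies absolute agreement between automated and manual relaxation times. As a hedged stand-in, Lin's concordance correlation coefficient captures the same idea (closeness of paired values to the identity line) in closed form; the T1 values below are invented for illustration, not the study's measurements:

```python
def concordance_cc(x, y):
    """Lin's CCC: 1.0 only when paired values agree exactly."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) / n
    sy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx + sy + (mx - my) ** 2)

auto = [835, 842, 518, 900, 760]    # hypothetical automated T1 (ms)
manual = [842, 845, 519, 905, 755]  # hypothetical manual T1 (ms)
print(round(concordance_cc(auto, manual), 3))
```

Unlike plain Pearson correlation, this measure is penalized by any systematic offset between the automated and manual values, which is the property an agreement analysis needs.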

Artificial intelligence for age-related macular degeneration diagnosis in Australia: A Novel Qualitative Interview Study.

Ly A, Herse S, Williams MA, Stapleton F

PubMed · Jun 14, 2025
Artificial intelligence (AI) systems for age-related macular degeneration (AMD) diagnosis abound but are not yet widely implemented. AI implementation is complex, requiring the involvement of multiple, diverse stakeholders including technology developers, clinicians, patients, health networks, public hospitals, private providers and payers. There is a pressing need to investigate how AI might be adopted to improve patient outcomes. The purpose of this first study of its kind was to use the AI translation extended version of the non-adoption, abandonment, scale-up, spread and sustainability of healthcare technologies framework to explore stakeholder experiences, attitudes, enablers, barriers and possible futures of digital diagnosis using AI for AMD and eyecare in Australia. Semi-structured, online interviews were conducted with 37 stakeholders (12 clinicians, 10 healthcare leaders, 8 patients and 7 developers) from September 2022 to March 2023. The interviews were audio-recorded, transcribed and analysed using directed and summative content analysis. Technological features influencing implementation were most frequently discussed, followed by the context or wider system, value proposition, adopters, organisations, the condition and finally embedding the adaptation. Patients preferred to focus on the condition, while healthcare leaders elaborated on organisation factors. Overall, stakeholders supported a portable, device-independent clinical decision support tool that could be integrated with existing diagnostic equipment and patient management systems. Opportunities for AI to drive new models of healthcare, patient education and outreach, and the importance of maintaining equity across population groups were consistently emphasised. This is the first investigation to report numerous, interacting perspectives on the adoption of digital diagnosis for AMD in Australia, incorporating an intentionally diverse stakeholder group and the patient voice. 
It provides a series of practical considerations for the implementation of AI and digital diagnosis into existing care for people with AMD.

Multi-class transformer-based segmentation of pancreatic ductal adenocarcinoma and surrounding structures in CT imaging: a multi-center evaluation.

Wen S, Xiao X

PubMed · Jun 14, 2025
Accurate segmentation of pancreatic ductal adenocarcinoma (PDAC) and surrounding anatomical structures is critical for diagnosis, treatment planning, and outcome assessment. This study proposes a deep learning-based framework to automate multi-class segmentation in CT images, comparing the performance of four state-of-the-art architectures. This retrospective multi-center study included 3265 patients from six institutions. Four deep learning models (UNet, nnU-Net, UNETR, and Swin-UNet) were trained using five-fold cross-validation on data from five centers and tested independently on a sixth center (n = 569). Preprocessing included intensity normalization, voxel resampling, and standardized annotation for six structures: PDAC lesion, pancreas, veins, arteries, pancreatic duct, and common bile duct. Evaluation metrics included Dice Similarity Coefficient (DSC), Intersection over Union (IoU), directed Hausdorff Distance (dHD), Average Symmetric Surface Distance (ASSD), and Volume Overlap Error (VOE). Statistical comparisons were made using Wilcoxon signed-rank tests with Bonferroni correction. Swin-UNet outperformed all models with a mean validation DSC of 92.4% and test DSC of 90.8%, showing minimal overfitting. It also achieved the lowest dHD (4.3 mm), ASSD (1.2 mm), and VOE (6.0%) in cross-validation. Per-class DSCs for Swin-UNet were consistently higher across all anatomical targets, including challenging structures like the pancreatic duct (91.0%) and bile duct (91.8%). Statistical analysis confirmed the superiority of Swin-UNet (p < 0.001). All models showed generalization capability, but Swin-UNet provided the most accurate and robust segmentation across datasets. Transformer-based architectures, particularly Swin-UNet, enable precise and generalizable multi-class segmentation of PDAC and surrounding anatomy. This framework has potential for clinical integration in PDAC diagnosis, staging, and therapy planning.
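The headline overlap metrics (DSC and IoU) have one-line definitions over binary masks. A sketch with masks represented as sets of voxel indices (toy masks, not the study's segmentations):

```python
def dice_iou(pred, truth):
    """Dice = 2|A∩B| / (|A|+|B|); IoU = |A∩B| / |A∪B| for binary masks."""
    inter = len(pred & truth)
    dice = 2 * inter / (len(pred) + len(truth))
    iou = inter / len(pred | truth)
    return dice, iou

pred = {(0, 0), (0, 1), (1, 0), (1, 1)}
truth = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice_iou(pred, truth))  # (0.75, 0.6)
```

DSC and IoU are monotonically related (DSC = 2·IoU / (1 + IoU)), which is why model rankings under the two metrics usually coincide; the surface metrics (dHD, ASSD) add the boundary-distance information that overlap scores miss.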