
Dual-Branch Attention Fusion Network for Pneumonia Detection.

Li T, Li B, Zheng C

Jul 4 2025
Pneumonia, a serious respiratory disease caused by bacterial, viral or fungal infections, is an important cause of morbidity and mortality in high-risk populations worldwide (e.g. the elderly, infants and young children, and immunodeficient patients). Early diagnosis is decisive for improving patient prognosis. In this study, we propose a Dual-Branch Attention Fusion Network based on transfer learning, aiming to improve the accuracy of pneumonia classification in chest X-ray images. The model adopts a dual-branch feature extraction architecture: independent feature extraction paths are constructed from pre-trained convolutional neural networks (CNNs) and structured state-space models, respectively, and feature complementarity is achieved through a feature fusion strategy. In the fusion stage, a self-attention mechanism is introduced to dynamically weight the feature representations of the different paths, which effectively improves the characterisation of key lesion regions. Experiments were carried out on the publicly available ChestX-ray dataset; through data augmentation, transfer-learning optimisation and hyper-parameter tuning, the model achieves an accuracy of 97.78% on an independent test set. These results demonstrate the model's strong performance in pneumonia diagnosis, providing a high-performance computational framework for rapid and accurate early screening in clinical practice, and its multipath, attention-fusion architecture can serve as a methodological reference for other medical image analysis tasks.
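The dual-branch idea above can be sketched as two independent feature paths whose outputs are fused by self-attention before classification. This is a minimal illustrative sketch, not the authors' model: the layer sizes are hypothetical, and two small CNNs stand in for the paper's pre-trained CNN and state-space backbones.

```python
import torch
import torch.nn as nn

class DualBranchAttentionFusion(nn.Module):
    """Toy sketch: two feature branches, self-attention fusion, classifier."""

    def __init__(self, num_classes: int = 2, dim: int = 64):
        super().__init__()
        # Branch A: small CNN standing in for a pre-trained backbone.
        self.branch_a = nn.Sequential(
            nn.Conv2d(1, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Branch B: a second, independent feature path.
        self.branch_b = nn.Sequential(
            nn.Conv2d(1, dim, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Self-attention dynamically weights the two branch tokens.
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4,
                                          batch_first=True)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Stack branch outputs as a 2-token sequence: (batch, 2, dim).
        tokens = torch.stack([self.branch_a(x), self.branch_b(x)], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)  # attention-weighted fusion
        return self.head(fused.mean(dim=1))           # pool tokens, classify

model = DualBranchAttentionFusion()
logits = model(torch.randn(2, 1, 224, 224))  # two grayscale chest X-rays
```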

Comparison of neural networks for classification of urinary tract dilation from renal ultrasounds: evaluation of agreement with expert categorization.

Chung K, Wu S, Jeanne C, Tsai A

Jul 4 2025
Urinary tract dilation (UTD) is a frequent problem in infants. Automated and objective classification of UTD from renal ultrasounds would streamline their interpretations. To develop and evaluate the performance of different deep learning models in predicting UTD classifications from renal ultrasound images. We searched our image archive to identify renal ultrasounds performed in infants ≤ 3-months-old for the clinical indications of prenatal UTD and urinary tract infection (9/2023-8/2024). An expert pediatric uroradiologist provided the ground truth UTD labels for representative sagittal sonographic renal images. Three different deep learning models trained with cross-entropy loss were adapted with four-fold cross-validation experiments to determine the overall performance. Our curated database included 492 right and 487 left renal ultrasounds (mean age ± standard deviation = 1.2 ± 0.1 months for both cohorts, with 341 boys/151 girls and 339 boys/148 girls, respectively). The model prediction accuracies for the right and left kidneys were 88.7% (95% confidence interval [CI], [85.8%, 91.5%]) and 80.5% (95% CI, [77.6%, 82.9%]), with weighted kappa scores of 0.90 (95% CI, [0.88, 0.91]) and 0.87 (95% CI, [0.82, 0.92]), respectively. When predictions were binarized into mild (normal/P1) and severe (UTD P2/P3) dilation, accuracies of the right and left kidneys increased to 96.3% (95% CI, [94.9%, 97.8%]) and 91.3% (95% CI, [88.5%, 94.2%]), but agreements decreased to 0.78 (95% CI, [0.73, 0.82]) and 0.75 (95% CI, [0.68, 0.82]), respectively. Deep learning models demonstrated high accuracy and agreement in classifying UTD from infant renal ultrasounds, supporting their potential as decision-support tools in clinical workflows.
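The two agreement measures reported above (weighted kappa on the ordinal UTD grades, plus accuracy after binarizing into mild vs. severe) can be reproduced with scikit-learn. The grades below are hypothetical toy labels, not study data.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical UTD grades on an ordinal scale: 0=normal, 1=P1, 2=P2, 3=P3.
truth = np.array([0, 1, 2, 3, 1, 0, 2, 3, 1, 2])
pred  = np.array([0, 1, 2, 2, 1, 0, 2, 3, 0, 2])

# Ordinal agreement: quadratically weighted kappa penalizes distant
# disagreements (e.g. normal vs. P3) more than adjacent ones.
kappa = cohen_kappa_score(truth, pred, weights="quadratic")

# Binarize into mild (normal/P1) vs. severe (P2/P3), as in the study.
truth_bin = (truth >= 2).astype(int)
pred_bin = (pred >= 2).astype(int)
acc_bin = accuracy_score(truth_bin, pred_bin)
```

Note how binarization can raise accuracy while kappa moves independently: adjacent-grade errors within the mild or severe group vanish after binarization.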

Knowledge, attitudes, and practices of cardiovascular health care personnel regarding coronary CTA and AI-assisted diagnosis: a cross-sectional study.

Jiang S, Ma L, Pan K, Zhang H

Jul 4 2025
Artificial intelligence (AI) holds significant promise for medical applications, particularly in coronary computed tomography angiography (CTA). We assessed the knowledge, attitudes, and practices (KAP) of cardiovascular health care personnel regarding coronary CTA and AI-assisted diagnosis. We conducted a cross-sectional survey from 1 July to 1 August 2024 at Tsinghua University Hospital, Beijing, China. Healthcare professionals, including both physicians and nurses, aged ≥18 years were eligible to participate. We used a structured questionnaire to collect demographic information and KAP scores. We analysed the data using correlation and regression methods, along with structural equation modelling. Among 496 participants, 58.5% were female, 52.6% held a bachelor's degree, and 40.7% worked in radiology. Mean KAP scores were 13.87 (standard deviation (SD) = 4.96, possible range = 0-20) for knowledge, 28.25 (SD = 4.35, possible range = 8-40) for attitude, and 31.67 (SD = 8.23, possible range = 10-50) for practice. Knowledge (r = 0.358; P < 0.001) and attitude positively correlated with practice (r = 0.489; P < 0.001). Multivariate logistic regression indicated that educational level, department affiliation, and job satisfaction were significant predictors of knowledge. Attitude was influenced by marital status, department, and years of experience, while practice was shaped by knowledge, attitude, departmental factors, and job satisfaction. Structural equation modelling showed that knowledge was directly affected by gender (β = -0.121; P = 0.009), workplace (β = -0.133; P = 0.004), department (β = -0.197; P < 0.001), employment status (β = -0.166; P < 0.001), and night shift frequency (β = 0.163; P < 0.001). Attitude was directly influenced by marriage (β = 0.124; P = 0.006) and job satisfaction (β = -0.528; P < 0.001). Practice was directly affected by knowledge (β = 0.389; P < 0.001), attitude (β = 0.533; P < 0.001), and gender (β = -0.092; P = 0.010). 
Additionally, gender (β = -0.051; P = 0.010) and marriage (β = 0.066; P = 0.007) had indirect effects on practice. Cardiovascular health care personnel exhibited suboptimal knowledge, positive attitudes, and relatively inactive practices regarding coronary CTA and AI-assisted diagnosis. Targeted educational efforts are needed to enhance knowledge and support the integration of AI into clinical workflows.
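The knowledge-practice and attitude-practice correlations reported above are plain Pearson correlations. A minimal sketch on synthetic stand-in scores (the coefficients and noise level are invented for illustration; the score ranges follow the survey's):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 200  # hypothetical number of respondents
# Synthetic per-respondent scores; survey ranges were 0-20, 8-40, 10-50.
knowledge = rng.integers(0, 21, size=n).astype(float)
attitude = rng.integers(8, 41, size=n).astype(float)
# Make practice partially driven by both, mirroring the reported pattern.
practice = 0.4 * knowledge + 0.5 * attitude + rng.normal(0, 4, size=n)

r_kp, p_kp = pearsonr(knowledge, practice)  # knowledge vs. practice
r_ap, p_ap = pearsonr(attitude, practice)   # attitude vs. practice
```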

Characteristics of brain network connectome and connectome-based efficacy predictive model in bipolar depression.

Xi C, Lu B, Guo X, Qin Z, Yan C, Hu S

Jul 4 2025
Aberrant functional connectivity (FC) between brain networks has been shown to be closely associated with bipolar disorder (BD). However, previous findings on specific brain network connectivity patterns have been inconsistent, and the clinical utility of FCs for predicting treatment outcomes in bipolar depression remains underexplored. To identify robust neuro-biomarkers of bipolar depression, a connectome-based analysis was conducted on resting-state functional MRI (rs-fMRI) data of 580 bipolar depression patients and 116 healthy controls (HCs). A subsample of 148 patients underwent a 4-week quetiapine treatment with post-treatment clinical assessment. Adopting machine learning, a predictive model based on the pre-treatment brain connectome was then constructed to predict treatment response and identify the efficacy-specific networks. Distinct brain network connectivity patterns were observed in bipolar depression compared to HCs. Elevated intra-network connectivity was identified within the default mode network (DMN), sensorimotor network (SMN), and subcortical network (SC); as for inter-network connectivity, FCs were increased between the DMN, SMN, frontoparietal network (FPN), and ventral attention network (VAN), and decreased between the SC and cortical networks, especially the DMN and FPN. Global network topology analyses revealed decreased global efficiency and increased characteristic path length in BD compared to HC. Further, the support vector regression model successfully predicted the efficacy of quetiapine treatment, as indicated by a high correspondence between predicted and actual HAMD reduction ratio values (r<sub>(df=147)</sub>=0.4493, p = 2*10<sup>-4</sup>). The identified efficacy-specific networks primarily encompassed FCs between the SMN and SC, and between the FPN, DMN, and VAN. These identified networks further predicted treatment response with r = 0.3940 in a subsequent validation with an independent cohort (n = 43).
These findings presented the characteristic aberrant patterns of brain network connectome in bipolar depression and demonstrated the predictive potential of pre-treatment network connectome for quetiapine response. Promisingly, the identified connectivity networks may serve as functional targets for future precise treatments for bipolar depression.
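The prediction setup described above (support vector regression from pre-treatment connectivity features to HAMD reduction ratio, evaluated by the predicted-actual correlation) can be sketched as follows. All data here are synthetic placeholders, and for brevity this in-sample fit omits the cross-validation and independent-cohort validation the study actually used.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import SVR

rng = np.random.default_rng(42)
# Synthetic functional-connectivity features for 148 "patients";
# the edge count and weights are invented for illustration.
n_patients, n_edges = 148, 50
fc = rng.normal(size=(n_patients, n_edges))
w = rng.normal(size=n_edges)
# Hypothetical HAMD reduction ratio driven by the edges plus noise.
hamd_reduction = fc @ w * 0.1 + rng.normal(0, 0.5, n_patients)

# Fit SVR on pre-treatment connectivity, then score by correlation
# between predicted and actual reduction ratios.
model = SVR(kernel="linear").fit(fc, hamd_reduction)
pred = model.predict(fc)
r, p = pearsonr(pred, hamd_reduction)
```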

Medical slice transformer for improved diagnosis and explainability on 3D medical images with DINOv2.

Müller-Franzes G, Khader F, Siepmann R, Han T, Kather JN, Nebelung S, Truhn D

Jul 4 2025
Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) are essential clinical cross-sectional imaging techniques for diagnosing complex conditions. However, large 3D datasets with annotations for deep learning are scarce. While methods like DINOv2 are encouraging for 2D image analysis, these methods have not been applied to 3D medical images. Furthermore, deep learning models often lack explainability due to their "black-box" nature. This study aims to extend 2D self-supervised models, specifically DINOv2, to 3D medical imaging while evaluating their potential for explainable outcomes. We introduce the Medical Slice Transformer (MST) framework to adapt 2D self-supervised models for 3D medical image analysis. MST combines a Transformer architecture with a 2D feature extractor, i.e., DINOv2. We evaluate its diagnostic performance against a 3D convolutional neural network (3D ResNet) across three clinical datasets: breast MRI (651 patients), chest CT (722 patients), and knee MRI (1199 patients). Both methods were tested for diagnosing breast cancer, predicting lung nodule dignity, and detecting meniscus tears. Diagnostic performance was assessed by calculating the Area Under the Receiver Operating Characteristic Curve (AUC). Explainability was evaluated through a radiologist's qualitative comparison of saliency maps based on slice and lesion correctness. P-values were calculated using DeLong's test. MST achieved higher AUC values compared to ResNet across all three datasets: breast (0.94 ± 0.01 vs. 0.91 ± 0.02, P = 0.02), chest (0.95 ± 0.01 vs. 0.92 ± 0.02, P = 0.13), and knee (0.85 ± 0.04 vs. 0.69 ± 0.05, P = 0.001). Saliency maps were consistently more precise and anatomically correct for MST than for ResNet. Self-supervised 2D models like DINOv2 can be effectively adapted for 3D medical imaging using MST, offering enhanced diagnostic accuracy and explainability compared to convolutional neural networks.
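The slice-transformer idea above is: embed each 2D slice of a 3D volume with a 2D encoder, then aggregate the slice tokens with a Transformer for a volume-level prediction. This sketch substitutes a tiny CNN for DINOv2 and uses illustrative dimensions; it is not the authors' MST implementation.

```python
import torch
import torch.nn as nn

class SliceTransformer(nn.Module):
    """Sketch: per-slice 2D embedding + Transformer aggregation over slices."""

    def __init__(self, dim: int = 64, num_classes: int = 2):
        super().__init__()
        self.encoder_2d = nn.Sequential(  # stand-in for DINOv2
            nn.Conv2d(1, dim, 7, stride=4, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))  # learnable [CLS] token
        self.head = nn.Linear(dim, num_classes)

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (batch, slices, H, W) -> embed each slice independently.
        b, s, h, w = volume.shape
        tokens = self.encoder_2d(volume.reshape(b * s, 1, h, w)).reshape(b, s, -1)
        tokens = torch.cat([self.cls.expand(b, -1, -1), tokens], dim=1)
        out = self.transformer(tokens)
        return self.head(out[:, 0])  # classify from the [CLS] token

model = SliceTransformer()
logits = model(torch.randn(2, 16, 64, 64))  # two 16-slice toy volumes
```

One design consequence worth noting: because slices are embedded independently, a frozen pre-trained 2D encoder can be reused unchanged, and only the Transformer and head need 3D-level training.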

Multi-modality radiomics diagnosis of breast cancer based on MRI, ultrasound and mammography.

Wu J, Li Y, Gong W, Li Q, Han X, Zhang T

Jul 4 2025
To develop a multi-modality machine learning-based radiomics model utilizing Magnetic Resonance Imaging (MRI), Ultrasound (US), and Mammography (MMG) for the differentiation of benign and malignant breast nodules. This study retrospectively collected data from 204 patients across three hospitals, including MRI, US, and MMG imaging data along with confirmed pathological diagnoses. Lesions on 2D US, 2D MMG, and 3D MRI images were outlined as regions of interest, which were then automatically expanded outward by 3 mm, 5 mm, and 8 mm to extract radiomic features within and around the tumor. ANOVA, the minimum redundancy maximum relevance (mRMR) algorithm, and the least absolute shrinkage and selection operator (LASSO) were used to select features for breast cancer diagnosis through logistic regression analysis. The performance of the radiomics models was evaluated using receiver operating characteristic (ROC) curve analysis, decision curve analysis (DCA), and calibration curves. Among the various radiomics models tested, the MRI_US_MMG multi-modality logistic regression model with 5 mm peritumoral features demonstrated the best performance. In the test cohort, this model achieved an AUC of 0.905 (95% confidence interval [CI]: 0.805-1). These results suggest that the inclusion of peritumoral features, specifically at a 5 mm expansion, significantly enhanced the diagnostic efficiency of the multi-modality radiomics model in differentiating benign from malignant breast nodules. The multi-modality radiomics model based on MRI, ultrasound, and mammography can predict benign and malignant breast lesions.
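The feature-selection cascade described above (ANOVA filter, then redundancy-aware selection, then LASSO, feeding a logistic-regression classifier) can be sketched with scikit-learn on synthetic features. The mRMR step is omitted here because it has no scikit-learn implementation; everything else (feature counts, split) is illustrative, not the paper's configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for multi-modality radiomic features (204 "patients").
X, y = make_classification(n_samples=204, n_features=120,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 1: ANOVA F-test keeps the 40 most class-separating features.
anova = SelectKBest(f_classif, k=40).fit(X_tr, y_tr)
X_tr_a, X_te_a = anova.transform(X_tr), anova.transform(X_te)

# Step 2: LASSO keeps features with non-zero coefficients.
lasso = LassoCV(cv=5, random_state=0).fit(X_tr_a, y_tr)
keep = lasso.coef_ != 0
if not keep.any():           # guard against LASSO zeroing everything
    keep = np.ones_like(keep, dtype=bool)

# Step 3: logistic regression on the surviving features, scored by AUC.
clf = LogisticRegression(max_iter=1000).fit(X_tr_a[:, keep], y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te_a[:, keep])[:, 1])
```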

A tailored deep learning approach for early detection of oral cancer using a 19-layer CNN on clinical lip and tongue images.

Liu P, Bagi K

Jul 4 2025
Early and accurate detection of oral cancer plays a pivotal role in improving patient outcomes. This research introduces a custom-designed, 19-layer convolutional neural network (CNN) for the automated diagnosis of oral cancer using clinical images of the lips and tongue. The methodology integrates advanced preprocessing steps, including min-max normalization and histogram-based contrast enhancement, to optimize image features critical for reliable classification. The model is extensively validated on the publicly available Oral Cancer (Lips and Tongue) Images (OCI) dataset, which is divided into 80% training and 20% testing subsets. Comprehensive performance evaluation employs established metrics-accuracy, sensitivity, specificity, precision, and F1-score. Our CNN architecture achieved an accuracy of 99.54%, sensitivity of 95.73%, specificity of 96.21%, precision of 96.34%, and F1-score of 96.03%, demonstrating substantial improvements over prominent transfer learning benchmarks, including SqueezeNet, AlexNet, Inception, VGG19, and ResNet50, all tested under identical experimental protocols. The model's robust performance, efficient computation, and high reliability underline its practicality for clinical application and support its superiority over existing approaches. This study provides a reproducible pipeline and a new reference point for deep learning-based oral cancer detection, facilitating translation into real-world healthcare environments and promising enhanced diagnostic confidence.
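The preprocessing described above (min-max normalization followed by histogram-based contrast enhancement) can be sketched in NumPy. This is a generic implementation of those two standard steps, not the authors' exact pipeline.

```python
import numpy as np

def preprocess(img: np.ndarray) -> np.ndarray:
    """Min-max rescale to 8-bit, then histogram equalization via the CDF."""
    # Min-max normalization to [0, 255] integers.
    lo, hi = float(img.min()), float(img.max())
    scaled = ((img - lo) / (hi - lo + 1e-8) * 255).astype(np.uint8)
    # Histogram equalization: map each intensity through the cumulative
    # distribution so the output histogram is approximately flat.
    hist = np.bincount(scaled.ravel(), minlength=256)
    cdf = hist.cumsum() / hist.sum()
    return (cdf[scaled] * 255).astype(np.uint8)

# Toy low-contrast image (values clustered in [50, 150)).
img = np.random.default_rng(0).integers(50, 150, size=(64, 64))
out = preprocess(img)  # contrast now spread across the full 0-255 range
```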

Improving risk assessment of local failure in brain metastases patients using vision transformers - A multicentric development and validation study.

Erdur AC, Scholz D, Nguyen QM, Buchner JA, Mayinger M, Christ SM, Brunner TB, Wittig A, Zimmer C, Meyer B, Guckenberger M, Andratschke N, El Shafie RA, Debus JU, Rogers S, Riesterer O, Schulze K, Feldmann HJ, Blanck O, Zamboglou C, Bilger-Z A, Grosu AL, Wolff R, Eitz KA, Combs SE, Bernhardt D, Wiestler B, Rueckert D, Peeken JC

Jul 4 2025
This study investigates the use of Vision Transformers (ViTs) to predict Freedom from Local Failure (FFLF) in patients with brain metastases using pre-operative MRI scans. The goal is to develop a model that enhances risk stratification and informs personalized treatment strategies. Within the AURORA retrospective trial, patients (n = 352) who received surgical resection followed by post-operative stereotactic radiotherapy (SRT) were collected from seven hospitals. We trained our ViT for the direct image-to-risk task on T1-CE and FLAIR sequences and combined clinical features along the way. We employed segmentation-guided image modifications, model adaptations, and specialized patient sampling strategies during training. The model was evaluated with five-fold cross-validation and ensemble learning across all validation runs. An external, international test cohort (n = 99) within the dataset was used to assess the generalization capabilities of the model, and saliency maps were generated for explainability analysis. We achieved a competent C-Index score of 0.7982 on the test cohort, surpassing all clinical, CNN-based, and hybrid baselines. Kaplan-Meier analysis showed significant FFLF risk stratification. Saliency maps focusing on the BM core confirmed that model explanations aligned with expert observations. Our ViT-based model offers a potential for personalized treatment strategies and follow-up regimens in patients with brain metastases. It provides an alternative to radiomics as a robust, automated tool for clinical workflows, capable of improving patient outcomes through effective risk assessment and stratification.
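The C-index used to evaluate the model above measures, over all comparable patient pairs, how often the higher predicted risk corresponds to the earlier event. A minimal NumPy implementation of Harrell's C-index (no tie handling in event times; toy inputs are invented):

```python
import numpy as np

def c_index(risk: np.ndarray, time: np.ndarray, event: np.ndarray) -> float:
    """Harrell's concordance index: fraction of comparable pairs where the
    higher predicted risk had the earlier observed event (risk ties count
    half). A pair is comparable only if the earlier time is an event."""
    concordant, comparable = 0.0, 0
    n = len(risk)
    for i in range(n):
        if not event[i]:
            continue  # censored at time[i]: cannot anchor a comparison
        for j in range(n):
            if time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy cohort: risks perfectly ordered against event times -> C-index of 1.
time = np.array([2.0, 5.0, 8.0, 11.0])
event = np.array([1, 1, 0, 1])       # third patient is censored
risk = np.array([0.9, 0.6, 0.4, 0.1])
ci = c_index(risk, time, event)
```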

A preliminary attempt to harmonize using physics-constrained deep neural networks for multisite and multiscanner MRI datasets (PhyCHarm).

Lee G, Ye DH, Oh SH

Jul 4 2025
In magnetic resonance imaging (MRI), variations in scan parameters and scanner specifications can result in differences in image appearance. To minimize these differences, harmonization has been suggested as a crucial MRI image processing technique. In this study, we developed an MR physics-based harmonization framework, Physics-Constrained Deep Neural Network for multisite and multiscanner Harmonization (PhyCHarm). PhyCHarm includes two deep neural networks: (1) the Quantitative Maps Generator, which generates T<sub>1</sub>- and M<sub>0</sub>-maps, and (2) the Harmonization Network. We used an open dataset consisting of 3T MP2RAGE images from 50 healthy individuals for the Quantitative Maps Generator and a traveling dataset consisting of 3T T<sub>1</sub>w images from 9 healthy individuals for the Harmonization Network. PhyCHarm was evaluated using the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and normalized root-mean-square error (NRMSE) for the Quantitative Maps Generator, and using SSIM, PSNR, and volumetric analysis for the Harmonization Network. PhyCHarm demonstrated increased SSIM and PSNR and the highest Dice score in the FSL FAST segmentation results for gray and white matter compared to U-Net, Pix2Pix, CALAMITI, and HarmonizingFlows. PhyCHarm also showed a greater reduction in gray- and white-matter volume differences after harmonization than U-Net, Pix2Pix, CALAMITI, or HarmonizingFlows. As an initial step toward developing advanced harmonization techniques, we investigated the applicability of physics-based constraints within a supervised training strategy. The proposed physics constraints could be integrated with unsupervised methods, paving the way for more sophisticated harmonization qualities.
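Two of the image-fidelity metrics above, PSNR and NRMSE, are straightforward to implement in NumPy (SSIM is more involved and is available in scikit-image). The reference/harmonized pair below is synthetic; the 0.01 noise level is an arbitrary illustration.

```python
import numpy as np

def psnr(ref: np.ndarray, img: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB (higher is better)."""
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

def nrmse(ref: np.ndarray, img: np.ndarray) -> float:
    """Root-mean-square error normalized by the reference's RMS value
    (one common convention; others normalize by the intensity range)."""
    return np.sqrt(np.mean((ref - img) ** 2)) / np.sqrt(np.mean(ref ** 2))

rng = np.random.default_rng(0)
ref = rng.random((32, 32))                          # "ground truth" image
harmonized = ref + rng.normal(0, 0.01, ref.shape)   # stand-in harmonized output
p = psnr(ref, harmonized)
e = nrmse(ref, harmonized)
```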

Prior knowledge of anatomical relationships supports automatic delineation of clinical target volume for cervical cancer.

Shi J, Mao X, Yang Y, Lu S, Zhang W, Zhao S, He Z, Yan Z, Liang W

Jul 4 2025
Deep learning has been used for automatic planning of radiotherapy targets, such as inferring the clinical target volume (CTV) for a given new patient. However, previous deep learning methods mainly focus on predicting CTV from CT images without considering rich prior knowledge. This limits the usability of such methods and prevents them from generalizing to larger clinical scenarios. We propose an automatic CTV delineation method for cervical cancer based on prior knowledge of anatomical relationships. This prior knowledge involves the anatomical position relationship between organs-at-risk (OARs) and the CTV, and the relationship between the CTV and the psoas muscle. First, our model proposes a novel feature attention module to integrate the relationship between nearby OARs and the CTV to improve segmentation accuracy. Second, we propose a width-driven attention network to incorporate the relative positions of the psoas muscle and the CTV. The effectiveness of our method is verified by extensive experiments on private datasets. Compared to state-of-the-art models, our method achieved a Dice of 81.33%±6.36%, an HD95 of 9.39mm±7.12mm, and an ASSD of 2.02mm±0.98mm, demonstrating its superiority in cervical cancer CTV delineation. Furthermore, experiments on subgroup analysis and multi-center datasets also verify the generalization of our method. Our study can improve the efficiency of automatic CTV delineation and help the implementation of clinical applications.
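The Dice and HD95 metrics reported above can be computed from binary masks as follows. This is a simplified sketch: HD95 is computed here over all foreground voxels rather than surface voxels only, and the toy masks are invented.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing: float = 1.0) -> float:
    """95th-percentile symmetric Hausdorff distance between mask voxels.
    Simplified: uses all foreground voxels; clinical tools typically use
    surface voxels and anisotropic spacing."""
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    return max(np.percentile(d.min(1), 95), np.percentile(d.min(0), 95))

a = np.zeros((16, 16), bool); a[4:12, 4:12] = True  # reference CTV mask
b = np.zeros((16, 16), bool); b[5:12, 4:12] = True  # prediction, one row short
d = dice(a, b)
h = hd95(a, b)
```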
