Characteristics of brain network connectome and connectome-based efficacy predictive model in bipolar depression.

Xi C, Lu B, Guo X, Qin Z, Yan C, Hu S

PubMed · Jul 4, 2025
Aberrant functional connectivity (FC) between brain networks has been closely associated with bipolar disorder (BD). However, previous findings on specific brain network connectivity patterns have been inconsistent, and the clinical utility of FCs for predicting treatment outcomes in bipolar depression remains underexplored. To identify robust neuro-biomarkers of bipolar depression, a connectome-based analysis was conducted on resting-state functional MRI (rs-fMRI) data from 580 bipolar depression patients and 116 healthy controls (HCs). A subsample of 148 patients underwent a 4-week quetiapine treatment with post-treatment clinical assessment. Using machine learning, a predictive model based on the pre-treatment brain connectome was then constructed to predict treatment response and identify the efficacy-specific networks. Distinct brain network connectivity patterns were observed in bipolar depression compared to HCs. Elevated intra-network connectivity was identified within the default mode network (DMN), sensorimotor network (SMN), and subcortical network (SC); for inter-network connectivity, FCs were increased between the DMN, SMN, and the frontoparietal network (FPN) and ventral attention network (VAN), and decreased between the SC and cortical networks, especially the DMN and FPN. Global network topology analyses revealed decreased global efficiency and increased characteristic path length in BD compared to HCs. Further, the support vector regression model successfully predicted the efficacy of quetiapine treatment, as indicated by a high correspondence between predicted and actual HAMD reduction ratio values (r(df=147) = 0.4493, p = 2×10⁻⁴). The identified efficacy-specific networks primarily encompassed FCs between the SMN and SC, and between the FPN, DMN, and VAN. These identified networks further predicted treatment response with r = 0.3940 in a subsequent validation on an independent cohort (n = 43). These findings present the characteristic aberrant patterns of the brain network connectome in bipolar depression and demonstrate the predictive potential of the pre-treatment network connectome for quetiapine response. Promisingly, the identified connectivity networks may serve as functional targets for future precision treatments of bipolar depression.
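
A minimal sketch of the prediction step described above, assuming vectorized FC features and synthetic data (this is not the authors' pipeline): cross-validated support vector regression predicts the HAMD reduction ratio, and performance is summarized as the Pearson correlation between predicted and actual values, mirroring the reported r statistic.

```python
# Illustrative sketch (not the authors' code): predict HAMD reduction ratio from
# pre-treatment functional-connectivity features with support vector regression.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_patients, n_fc_features = 148, 400                # assumed sizes for illustration
X = rng.normal(size=(n_patients, n_fc_features))    # vectorized FC matrices
y = rng.uniform(0, 1, size=n_patients)              # HAMD reduction ratios (synthetic)

preds = np.zeros_like(y)
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    scaler = StandardScaler().fit(X[train_idx])
    model = SVR(kernel="linear", C=1.0).fit(scaler.transform(X[train_idx]), y[train_idx])
    preds[test_idx] = model.predict(scaler.transform(X[test_idx]))

r, p = pearsonr(preds, y)   # correspondence between predicted and actual values
print(f"r = {r:.3f}, p = {p:.3g}")
```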

Medical slice transformer for improved diagnosis and explainability on 3D medical images with DINOv2.

Müller-Franzes G, Khader F, Siepmann R, Han T, Kather JN, Nebelung S, Truhn D

PubMed · Jul 4, 2025
Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) are essential clinical cross-sectional imaging techniques for diagnosing complex conditions. However, large 3D datasets with annotations for deep learning are scarce. While methods like DINOv2 are encouraging for 2D image analysis, they have not been applied to 3D medical images. Furthermore, deep learning models often lack explainability due to their "black-box" nature. This study aims to extend 2D self-supervised models, specifically DINOv2, to 3D medical imaging while evaluating their potential for explainable outcomes. We introduce the Medical Slice Transformer (MST) framework to adapt 2D self-supervised models for 3D medical image analysis. MST combines a Transformer architecture with a 2D feature extractor, i.e., DINOv2. We evaluate its diagnostic performance against a 3D convolutional neural network (3D ResNet) across three clinical datasets: breast MRI (651 patients), chest CT (722 patients), and knee MRI (1199 patients). Both methods were tested for diagnosing breast cancer, predicting lung nodule dignity, and detecting meniscus tears. Diagnostic performance was assessed by calculating the Area Under the Receiver Operating Characteristic Curve (AUC). Explainability was evaluated through a radiologist's qualitative comparison of saliency maps based on slice and lesion correctness. P-values were calculated using DeLong's test. MST achieved higher AUC values than ResNet across all three datasets: breast (0.94 ± 0.01 vs. 0.91 ± 0.02, P = 0.02), chest (0.95 ± 0.01 vs. 0.92 ± 0.02, P = 0.13), and knee (0.85 ± 0.04 vs. 0.69 ± 0.05, P = 0.001). Saliency maps were consistently more precise and anatomically correct for MST than for ResNet. Self-supervised 2D models like DINOv2 can be effectively adapted for 3D medical imaging using MST, offering enhanced diagnostic accuracy and explainability compared to convolutional neural networks.
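
A hedged sketch of the slice-transformer idea under assumed architectural details (not the released MST code): each slice is embedded by a 2D encoder (a torchvision ResNet stands in here for DINOv2), and a Transformer encoder aggregates the slice tokens before a linear classification head.

```python
# Minimal slice-transformer sketch: 2D per-slice embeddings -> Transformer -> class.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SliceTransformer(nn.Module):
    def __init__(self, num_classes=2, d_model=512, n_layers=4, n_heads=8):
        super().__init__()
        backbone = resnet18(weights=None)        # stand-in 2D feature extractor
        backbone.fc = nn.Identity()              # keep 512-d slice embeddings
        self.encoder_2d = backbone
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, volume):                   # volume: (B, S, 1, H, W)
        b, s, c, h, w = volume.shape
        slices = volume.reshape(b * s, c, h, w).repeat(1, 3, 1, 1)  # to 3 channels
        tokens = self.encoder_2d(slices).reshape(b, s, -1)          # (B, S, 512)
        tokens = torch.cat([self.cls_token.expand(b, -1, -1), tokens], dim=1)
        return self.head(self.transformer(tokens)[:, 0])            # CLS readout

logits = SliceTransformer()(torch.randn(2, 32, 1, 224, 224))  # two 32-slice volumes
print(logits.shape)   # torch.Size([2, 2])
```

A DINOv2 ViT could be substituted for the ResNet stand-in (e.g., loaded via torch.hub) by matching d_model to its embedding dimension.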

Multi-modality radiomics diagnosis of breast cancer based on MRI, ultrasound and mammography.

Wu J, Li Y, Gong W, Li Q, Han X, Zhang T

PubMed · Jul 4, 2025
To develop a multi-modality machine learning-based radiomics model utilizing Magnetic Resonance Imaging (MRI), Ultrasound (US), and Mammography (MMG) for the differentiation of benign and malignant breast nodules. This study retrospectively collected data from 204 patients across three hospitals, including MRI, US, and MMG imaging data along with confirmed pathological diagnoses. Lesions on 2D US, 2D MMG, and 3D MRI images were outlined as regions of interest, which were then automatically expanded outward by 3 mm, 5 mm, and 8 mm to extract radiomic features within and around the tumor. ANOVA, the maximum relevance minimum redundancy (mRMR) algorithm, and the least absolute shrinkage and selection operator (LASSO) were used to select features for breast cancer diagnosis through logistic regression analysis. The performance of the radiomics models was evaluated using receiver operating characteristic (ROC) curve analysis, decision curve analysis (DCA), and calibration curves. Among the various radiomics models tested, the MRI_US_MMG multi-modality logistic regression model with 5 mm peritumoral features demonstrated the best performance. In the test cohort, this model achieved an AUC of 0.905 (95% confidence interval [CI]: 0.805-1). These results suggest that the inclusion of peritumoral features, specifically at a 5 mm expansion, significantly enhanced the diagnostic efficiency of the multi-modality radiomics model in differentiating benign from malignant breast nodules. The multi-modality radiomics model based on MRI, ultrasound, and mammography can predict benign and malignant breast lesions.
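
The peritumoral expansion step lends itself to a short illustration. The sketch below (the function name and demo mask are hypothetical, not from the paper) dilates a lesion mask by a physical margin in millimetres, converting the margin to per-axis voxel radii from the image spacing, and returns the surrounding ring from which peritumoral features would be extracted.

```python
# Hedged sketch: build a peritumoral ring mask by dilating the lesion ROI by N mm.
import numpy as np
import SimpleITK as sitk

def peritumoral_mask(mask: sitk.Image, margin_mm: float = 5.0) -> sitk.Image:
    """Ring between a lesion mask and its dilation by a physical margin."""
    # convert the physical margin to a per-axis radius in voxels (x, y, z order)
    radius = [max(1, int(round(margin_mm / s))) for s in mask.GetSpacing()]
    dilated = sitk.BinaryDilate(mask, radius)
    dil = sitk.GetArrayFromImage(dilated).astype(bool)
    core = sitk.GetArrayFromImage(mask).astype(bool)
    out = sitk.GetImageFromArray((dil & ~core).astype(np.uint8))
    out.CopyInformation(mask)                     # keep spacing/origin/direction
    return out

# toy demonstration: a small cube with anisotropic voxels (e.g., 1 x 1 x 3 mm)
demo = sitk.GetImageFromArray(np.pad(np.ones((4, 4, 4), np.uint8), 8))
demo.SetSpacing((1.0, 1.0, 3.0))
ring = peritumoral_mask(demo, margin_mm=5.0)
print(int(sitk.GetArrayFromImage(ring).sum()), "peritumoral voxels")
```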

A tailored deep learning approach for early detection of oral cancer using a 19-layer CNN on clinical lip and tongue images.

Liu P, Bagi K

PubMed · Jul 4, 2025
Early and accurate detection of oral cancer plays a pivotal role in improving patient outcomes. This research introduces a custom-designed, 19-layer convolutional neural network (CNN) for the automated diagnosis of oral cancer using clinical images of the lips and tongue. The methodology integrates advanced preprocessing steps, including min-max normalization and histogram-based contrast enhancement, to optimize image features critical for reliable classification. The model is extensively validated on the publicly available Oral Cancer (Lips and Tongue) Images (OCI) dataset, which is divided into 80% training and 20% testing subsets. Comprehensive performance evaluation employs established metrics: accuracy, sensitivity, specificity, precision, and F1-score. Our CNN architecture achieved an accuracy of 99.54%, sensitivity of 95.73%, specificity of 96.21%, precision of 96.34%, and F1-score of 96.03%, demonstrating substantial improvements over prominent transfer learning benchmarks, including SqueezeNet, AlexNet, Inception, VGG19, and ResNet50, all tested under identical experimental protocols. The model's robust performance, efficient computation, and high reliability underline its practicality for clinical application and support its superiority over existing approaches. This study provides a reproducible pipeline and a new reference point for deep learning-based oral cancer detection, facilitating translation into real-world healthcare environments and promising enhanced diagnostic confidence.
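
A brief, assumption-laden sketch of the preprocessing described above; the paper's exact parameters (target size, enhancement settings) are not stated here, so the values below are placeholders.

```python
# Sketch of min-max normalization plus histogram equalization before CNN input.
import cv2
import numpy as np

def preprocess(image: np.ndarray, size=(224, 224)) -> np.ndarray:
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
    gray = cv2.resize(gray, size)
    # min-max normalization to [0, 255] so equalizeHist can operate on uint8
    lo, hi = float(gray.min()), float(gray.max())
    norm = ((gray - lo) / max(hi - lo, 1e-8) * 255.0).astype(np.uint8)
    equalized = cv2.equalizeHist(norm)            # histogram-based contrast enhancement
    return equalized.astype(np.float32) / 255.0   # scaled input for the CNN

demo = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # stand-in clinical photo
print(preprocess(demo).shape, preprocess(demo).dtype)
```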

Intralesional and perilesional radiomics strategy based on different machine learning for the prediction of international society of urological pathology grade group in prostate cancer.

Li Z, Yang L, Wang X, Xu H, Chen W, Kang S, Huang Y, Shu C, Cui F, Zhang Y

PubMed · Jul 4, 2025
To develop and evaluate an intralesional and perilesional radiomics strategy based on different machine learning models to differentiate International Society of Urological Pathology (ISUP) grade group > 2 from ISUP grade group ≤ 2 prostate cancers (PCa). 340 PCa patients with pathology confirmed by radical prostatectomy were obtained from two hospitals. The patients were divided into training, internal validation, and external validation groups. Radiomic features were extracted from T2-weighted imaging, and four distinct radiomic feature models were constructed: intralesional, perilesional, combined intralesional and perilesional, and intralesional and perilesional image fusion. Four machine learning classifiers, logistic regression (LR), random forest (RF), extra trees (ET), and multilayer perceptron (MLP), were employed for model training and evaluation to select the optimal model. The performance of each model was assessed by calculating the area under the ROC curve (AUC), accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and F1 score. The AUCs for the RF classifier were higher than those for LR, ET, and MLP, so RF was selected as the final radiomic model. The nomogram model integrating the perilesional, combined intralesional and perilesional, and intralesional and perilesional image fusion signatures had AUCs of 0.929, 0.734, and 0.743 for the training, internal validation, and external validation cohorts, respectively, higher than those of the individual intralesional, perilesional, combined intralesional and perilesional, and intralesional and perilesional image fusion models. The proposed nomogram established from the perilesional, combined intralesional and perilesional, and intralesional and perilesional image fusion radiomics has the potential to predict the ISUP grade group of PCa patients.
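
As a generic illustration of the classifier-comparison step (not the authors' code; features below are synthetic stand-ins), the snippet fits LR, RF, extra trees, and an MLP on the same split and compares their AUCs.

```python
# Compare the four classifiers named above on a synthetic radiomic feature matrix.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=340, n_features=60, random_state=0)  # stand-in data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

classifiers = {
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
    "ET": ExtraTreesClassifier(n_estimators=300, random_state=0),
    "MLP": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)),
}
for name, clf in classifiers.items():
    auc = roc_auc_score(y_te, clf.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```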

Predicting ESWL success for ureteral stones: a radiomics-based machine learning approach.

Yang R, Zhao D, Ye C, Hu M, Qi X, Li Z

PubMed · Jul 4, 2025
This study aimed to develop and validate a machine learning (ML) model that integrates radiomics and conventional radiological features to predict the success of single-session extracorporeal shock wave lithotripsy (ESWL) for ureteral stones. This retrospective study included 329 patients with ureteral stones who underwent ESWL between October 2022 and June 2024. Patients were randomly divided into a training set (n = 230) and a test set (n = 99) in a 7:3 ratio. Preoperative clinical data and noncontrast CT images were collected, and radiomic features were extracted by outlining the stone's region of interest (ROI). Univariate analysis was used to identify clinical and conventional radiological features related to the success of single-session ESWL. Radiomic features were selected using the least absolute shrinkage and selection operator (LASSO) algorithm to calculate a radiomic score (Rad-score). Five machine learning models (RF, KNN, LR, SVM, AdaBoost) were developed using 10-fold cross-validation. Model performance was assessed using AUC, accuracy, sensitivity, specificity, and F1 score. Calibration and decision curve analyses were used to evaluate model calibration and clinical value. SHAP analysis was conducted to interpret feature importance, and a nomogram was built to improve model interpretability. Ureteral diameter proximal to the stone (UDPS), stone-to-skin distance (SSD), and renal pelvic width (RPW) were identified as significant predictors. Six radiomic features were selected from 1,595 to calculate the Rad-score. The LR model showed the best performance on the test set, with an accuracy of 83.8%, sensitivity of 84.9%, specificity of 82.6%, F1 score of 84.9%, and AUC of 0.888 (95% CI: 0.822-0.949). SHAP analysis indicated that the Rad-score and UDPS were the most influential features. Calibration and decision curve analyses confirmed the model's good calibration and clinical utility. The LR model, integrating radiomics and conventional radiological features, demonstrated strong performance in predicting the success of single-session ESWL for ureteral stones. This approach may assist clinicians in making more accurate treatment decisions.
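
A hedged sketch of the Rad-score construction: LASSO with cross-validation selects a sparse subset of radiomic features, and the score is the fitted linear combination. The feature matrix, outcome coding, and dimensions below are synthetic assumptions, not the study's data.

```python
# Illustrative Rad-score via cross-validated LASSO on a synthetic feature matrix.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=230, n_features=1595, n_informative=10,
                           random_state=0)            # stand-in radiomic matrix
Xz = StandardScaler().fit_transform(X)

lasso = LassoCV(cv=10, random_state=0).fit(Xz, y.astype(float))
selected = np.flatnonzero(lasso.coef_)                # features kept by LASSO
rad_score = Xz @ lasso.coef_ + lasso.intercept_       # per-patient Rad-score

print(f"{selected.size} features kept out of {X.shape[1]}")
print("first five Rad-scores:", np.round(rad_score[:5], 3))
```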

Deep learning-based classification of parotid gland tumors: integrating dynamic contrast-enhanced MRI for enhanced diagnostic accuracy.

Sinci KA, Koska IO, Cetinoglu YK, Erdogan N, Koc AM, Eliyatkin NO, Koska C, Candan B

PubMed · Jul 4, 2025
To evaluate the performance of deep learning models in classifying parotid gland tumors using T2-weighted, diffusion-weighted, and contrast-enhanced T1-weighted MR images, along with DCE data derived from time-intensity curves. In this retrospective, single-center study including a total of 164 participants, 124 patients with surgically confirmed parotid gland tumors and 40 individuals with normal parotid glands underwent multiparametric MRI, including DCE sequences. Data partitions were performed at the patient level (80% training, 10% validation, 10% testing). Two deep learning architectures (MobileNetV2 and EfficientNetB0), as well as a combined approach integrating predictions from both models, were fine-tuned using transfer learning to classify (i) normal versus tumor (Task 1), (ii) benign versus malignant tumors (Task 2), and (iii) benign subtypes (Warthin tumor vs. pleomorphic adenoma) (Task 3). For Tasks 2 and 3, DCE-derived metrics were integrated via a support vector machine. Classification performance was assessed using accuracy, precision, recall, and F1-score, with 95% confidence intervals derived via bootstrap resampling. In Task 1, EfficientNetB0 achieved the highest accuracy (85%). In Task 2, the combined approach reached an accuracy of 65%, while adding DCE data significantly improved performance, with MobileNetV2 achieving an accuracy of 96%. In Task 3, EfficientNetB0 demonstrated the highest accuracy without DCE data (75%), while including DCE data boosted the combined approach to an accuracy of 89%. Adding DCE-MRI data to deep learning models substantially enhances parotid gland tumor classification accuracy, highlighting the value of functional imaging biomarkers in improving noninvasive diagnostic workflows.
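
A minimal transfer-learning sketch under assumed hyperparameters (not the study's training code): load a torchvision EfficientNetB0 or MobileNetV2, swap the classification head for the task's classes, and run one fine-tuning step; fusing DCE-derived metrics through an SVM would be a separate stage.

```python
# One illustrative fine-tuning step with a swapped classification head.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0, mobilenet_v2

def build_classifier(arch: str = "efficientnet", num_classes: int = 2) -> nn.Module:
    if arch == "efficientnet":
        model = efficientnet_b0(weights=None)      # use pretrained weights when online
    else:
        model = mobilenet_v2(weights=None)
    model.classifier[1] = nn.Linear(model.classifier[1].in_features, num_classes)
    return model

model = build_classifier("efficientnet")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images, labels = torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 0, 1])  # fake batch
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss = {loss.item():.3f}")
```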

Progression risk of adolescent idiopathic scoliosis based on SHAP-Explained machine learning models: a multicenter retrospective study.

Fang X, Weng T, Zhang Z, Gong W, Zhang Y, Wang M, Wang J, Ding Z, Lai C

PubMed · Jul 4, 2025
To develop an interpretable machine learning model, explained using SHAP, based on imaging features of adolescent idiopathic scoliosis extracted by convolutional neural networks (CNNs), in order to predict the risk of curve progression and identify the most accurate predictive model. This study included 233 patients with adolescent idiopathic scoliosis from three medical centers. CNNs were used to extract features from full-spine coronal X-ray images taken at three follow-up points for each patient. Imaging and clinical features from center 1 were analyzed using the Boruta algorithm to identify independent predictors. Data from center 1 were divided into training (80%) and testing (20%) sets, while data from centers 2 and 3 were used as external validation sets. Six machine learning models were constructed. Receiver operating characteristic (ROC) curves were plotted, and model performance was assessed by calculating the area under the curve (AUC), accuracy, sensitivity, and specificity in the training, testing, and external validation sets. The SHAP interpreter was used to analyze the most effective model. The six models yielded AUCs ranging from 0.565 to 0.989, accuracies from 0.600 to 0.968, sensitivities from 0.625 to 1.0, and specificities from 0.571 to 0.974. The XGBoost model achieved the best performance, with an AUC of 0.896 in the external validation set. SHAP analysis identified the change in the main Cobb angle between the second and first follow-ups [Cobb1(2−1)] as the most important predictor, followed by the main Cobb angle at the second follow-up (Cobb1-2) and the change in the secondary Cobb angle [Cobb2(2−1)]. The XGBoost model demonstrated the best predictive performance in the external validation cohort, confirming its preliminary stability and generalizability. SHAP analysis indicated that Cobb1(2−1) was the most important feature for predicting scoliosis progression. This model offers a valuable tool for clinical decision-making by enabling early identification of high-risk patients and supporting early intervention strategies through automated feature extraction and interpretable analysis. The online version contains supplementary material available at 10.1186/s12891-025-08841-3.
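
A short sketch of the interpretability step with placeholder tabular features (column names and values are illustrative, not the study's variables): fit an XGBoost classifier and rank predictors by mean absolute SHAP value.

```python
# XGBoost + SHAP feature ranking on toy Cobb-angle-style predictors.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "Cobb1_change": rng.normal(5, 3, 233),    # main-curve change between follow-ups
    "Cobb1_visit2": rng.normal(25, 8, 233),
    "Cobb2_change": rng.normal(3, 2, 233),
    "age": rng.normal(13, 1.5, 233),
})
y = (X["Cobb1_change"] + rng.normal(0, 2, 233) > 5).astype(int)  # toy progression label

model = xgb.XGBClassifier(n_estimators=200, max_depth=3)
model.fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)              # (n_samples, n_features)
importance = np.abs(shap_values).mean(axis=0)       # mean |SHAP| per feature
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```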

Hybrid-View Attention for csPCa Classification in TRUS

Zetian Feng, Juan Fu, Xuebin Zou, Hongsheng Ye, Hong Wu, Jianhua Zhou, Yi Wang

arXiv preprint · Jul 4, 2025
Prostate cancer (PCa) is a leading cause of cancer-related mortality in men, and accurate identification of clinically significant PCa (csPCa) is critical for timely intervention. Transrectal ultrasound (TRUS) is widely used for prostate biopsy; however, its low contrast and anisotropic spatial resolution pose diagnostic challenges. To address these limitations, we propose a novel hybrid-view attention (HVA) network for csPCa classification in 3D TRUS that leverages complementary information from transverse and sagittal views. Our approach integrates a CNN-transformer hybrid architecture, where convolutional layers extract fine-grained local features and transformer-based HVA models global dependencies. Specifically, the HVA comprises intra-view attention to refine features within a single view and cross-view attention to incorporate complementary information across views. Furthermore, a hybrid-view adaptive fusion module dynamically aggregates features along both channel and spatial dimensions, enhancing the overall representation. Experiments are conducted on an in-house dataset containing 590 subjects who underwent prostate biopsy. Comparative and ablation results prove the efficacy of our method. The code is available at https://github.com/mock1ngbrd/HVAN.
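
A minimal sketch of the cross-view attention component (assumed dimensions, not the released HVAN code): tokens from the transverse view attend to sagittal-view tokens and vice versa with standard multi-head attention, and the residual-plus-norm outputs would feed the downstream fusion module and classifier.

```python
# Cross-view attention between transverse and sagittal token sequences.
import torch
import torch.nn as nn

class CrossViewAttention(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.t2s = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.s2t = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_t = nn.LayerNorm(dim)
        self.norm_s = nn.LayerNorm(dim)

    def forward(self, trans_tokens, sag_tokens):
        # each view queries the other view for complementary context
        t_ctx, _ = self.t2s(trans_tokens, sag_tokens, sag_tokens)
        s_ctx, _ = self.s2t(sag_tokens, trans_tokens, trans_tokens)
        return self.norm_t(trans_tokens + t_ctx), self.norm_s(sag_tokens + s_ctx)

trans = torch.randn(2, 64, 256)   # (batch, tokens, dim) from the transverse view
sag = torch.randn(2, 48, 256)     # token counts may differ across views
fused_t, fused_s = CrossViewAttention()(trans, sag)
print(fused_t.shape, fused_s.shape)
```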

An Advanced Deep Learning Framework for Ischemic and Hemorrhagic Brain Stroke Diagnosis Using Computed Tomography (CT) Images

Md. Sabbir Hossen, Eshat Ahmed Shuvo, Shibbir Ahmed Arif, Pabon Shaha, Md. Saiduzzaman, Mostofa Kamal Nasir

arXiv preprint · Jul 4, 2025
Brain stroke is one of the leading causes of mortality and long-term disability worldwide, highlighting the need for precise and fast prediction techniques. Computed Tomography (CT) scan is considered one of the most effective methods for diagnosing brain strokes. The majority of stroke classification techniques rely on a single slice-level prediction mechanism, allowing the radiologist to manually choose the most critical CT slice from the original CT volume. Although clinical evaluations are often used in traditional diagnostic procedures, machine learning (ML) has opened up new avenues for improving stroke diagnosis. To supplement traditional diagnostic techniques, this study investigates the use of machine learning models, specifically concerning the prediction of brain stroke at an early stage utilizing CT scan images. In this research, we proposed a novel approach to brain stroke detection leveraging machine learning techniques, focusing on optimizing classification performance with pre-trained deep learning models and advanced optimization strategies. Pre-trained models, including DenseNet201, InceptionV3, MobileNetV2, ResNet50, and Xception, are utilized for feature extraction. Additionally, we employed feature engineering techniques, including BFO, PCA, and LDA, to enhance models' performance further. These features are subsequently classified using machine learning algorithms such as SVC, RF, XGB, DT, LR, KNN, and GNB. Our experiments demonstrate that the combination of MobileNetV2, LDA, and SVC achieved the highest classification accuracy of 97.93%, significantly outperforming other model-optimizer-classifier combinations. The results underline the effectiveness of integrating lightweight pre-trained models with robust optimization and classification techniques for brain stroke diagnosis.
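
A generic sketch of the best-performing recipe reported above, with stand-in data and untrained backbone weights: extract pooled MobileNetV2 features, reduce them with LDA, and classify with an SVC.

```python
# MobileNetV2 feature extraction -> LDA -> SVC on synthetic stand-in CT slices.
import numpy as np
import torch
from torchvision.models import mobilenet_v2
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

backbone = mobilenet_v2(weights=None).features.eval()   # feature extractor only

@torch.no_grad()
def extract_features(batch: torch.Tensor) -> np.ndarray:
    fmap = backbone(batch)                               # (N, 1280, H', W')
    return fmap.mean(dim=(2, 3)).numpy()                 # global average pooling

images = torch.randn(60, 3, 224, 224)                    # stand-in CT slices
labels = np.random.default_rng(0).integers(0, 2, 60)     # toy ischemic/hemorrhagic labels
features = extract_features(images)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.25, random_state=0)
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```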