Optimized attention-enhanced U-Net for autism detection and region localization in MRI.

K VRP, Bindu CH, Rama Devi K

PubMed · Jun 1, 2025
Autism spectrum disorder (ASD) is a neurodevelopmental condition that affects a child's cognitive and social skills, often diagnosed only after symptoms appear around age 2. Leveraging MRI for early ASD detection can improve intervention outcomes. This study proposes a framework for autism detection and region localization using an optimized deep learning approach with attention mechanisms. The pipeline includes MRI image collection, pre-processing (bias field correction, histogram equalization, artifact removal, and non-local mean filtering), and autism classification with a Symmetric Structured MobileNet with Attention Mechanism (SSM-AM). Enhanced by Refreshing Awareness-aided Election-Based Optimization (RA-EBO), SSM-AM achieves robust classification. Abnormality region localization utilizes a Multiscale Dilated Attention-based Adaptive U-Net (MDA-AUnet) further optimized by RA-EBO. Experimental results demonstrate that our proposed model outperforms existing methods, achieving an accuracy of 97.29%, sensitivity of 97.27%, specificity of 97.36%, and precision of 98.98%, significantly improving classification and localization performance. These results highlight the potential of our approach for early ASD diagnosis and targeted interventions. The datasets utilized for this work are publicly available at https://fcon_1000.projects.nitrc.org/indi/abide/.
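
The pre-processing chain described here (bias field correction, histogram equalization, and non-local mean filtering) maps onto widely available tooling. Below is a minimal sketch of such a chain using SimpleITK and scikit-image; the specific filters and parameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the described pre-processing chain (bias field correction,
# histogram equalization, non-local means filtering). Filter choices and
# parameters are illustrative assumptions.
import numpy as np
import SimpleITK as sitk
from skimage import exposure
from skimage.restoration import denoise_nl_means, estimate_sigma

def preprocess_mri(path: str) -> np.ndarray:
    image = sitk.ReadImage(path, sitk.sitkFloat32)

    # Bias field correction (N4 is a common choice; the abstract does not name one)
    mask = sitk.OtsuThreshold(image, 0, 1, 200)
    corrected = sitk.N4BiasFieldCorrection(image, mask)
    volume = sitk.GetArrayFromImage(corrected)

    # Histogram equalization on the intensity-normalized volume
    volume = (volume - volume.min()) / (np.ptp(volume) + 1e-8)
    volume = exposure.equalize_hist(volume)

    # Non-local means filtering with an estimated noise level
    sigma = float(np.mean(estimate_sigma(volume)))
    volume = denoise_nl_means(volume, h=1.15 * sigma, sigma=sigma,
                              fast_mode=True, patch_size=5, patch_distance=6)
    return volume.astype(np.float32)
```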

Exploring the significance of the frontal lobe for diagnosis of schizophrenia using explainable artificial intelligence and group level analysis.

Varaprasad SA, Goel T

PubMed · Jun 1, 2025
Schizophrenia (SZ) is a complex mental disorder characterized by a profound disruption in cognition and emotion, often resulting in a distorted perception of reality. Magnetic resonance imaging (MRI) is an essential tool for diagnosing SZ, as it helps to understand the organization of the brain. Functional MRI (fMRI) is a specialized imaging technique that measures and maps brain activity by detecting changes in blood flow and oxygenation. The proposed work uses an explainable deep learning approach, corroborated by group-level analysis of both structural MRI (sMRI) and fMRI data, to identify the brain regions most significant in SZ patients. Grad-CAM heat maps show clear localization in the frontal lobe for the classification of SZ versus controls (CN), with 97.33% accuracy. The group difference analysis reveals that sMRI data show intense voxel activity in the right superior frontal gyrus of the frontal lobe in SZ patients. Likewise, the group difference between SZ and CN during n-back tasks in the fMRI data indicates significant voxel activation in the frontal cortex. These findings suggest that the frontal lobe plays a crucial role in the diagnosis of SZ, aiding clinicians in planning treatment.
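
Grad-CAM heat maps of the kind reported here are produced by weighting a convolutional layer's activations with the gradients of the target class score. A minimal PyTorch sketch for a 3D classifier follows; the model, target layer, and hook-based mechanics are generic placeholders rather than the authors' code.

```python
# Minimal Grad-CAM sketch for a 3D CNN classifier, in the spirit of the
# heat-map analysis above. Generic illustration, not the authors' code.
import torch
import torch.nn.functional as F

def grad_cam(model, volume, target_layer, class_idx):
    """volume: tensor of shape (1, C, D, H, W); returns a (D, H, W) heat map."""
    activations, gradients = {}, {}

    def fwd_hook(module, inputs, output):
        activations["value"] = output.detach()

    def bwd_hook(module, grad_input, grad_output):
        gradients["value"] = grad_output[0].detach()

    h_fwd = target_layer.register_forward_hook(fwd_hook)
    h_bwd = target_layer.register_full_backward_hook(bwd_hook)

    model.eval()
    logits = model(volume)
    model.zero_grad()
    logits[0, class_idx].backward()
    h_fwd.remove(); h_bwd.remove()

    # Weight each feature channel by its average gradient, combine, rectify,
    # and upsample back to the input resolution.
    weights = gradients["value"].mean(dim=(2, 3, 4), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=volume.shape[2:], mode="trilinear",
                        align_corners=False)
    return (cam / (cam.max() + 1e-8))[0, 0]
```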

Prediction of lymph node metastasis in papillary thyroid carcinoma using non-contrast CT-based radiomics and deep learning with thyroid lobe segmentation: A dual-center study.

Wang H, Wang X, Du Y, Wang Y, Bai Z, Wu D, Tang W, Zeng H, Tao J, He J

PubMed · Jun 1, 2025
This study aimed to develop a predictive model for lymph node metastasis (LNM) in papillary thyroid carcinoma (PTC) patients using deep learning radiomics (DLRad) and clinical features. This study included 271 thyroid lobes from 228 PTC patients who underwent preoperative neck non-contrast CT at Center 1 (May 2021-April 2024). LNM status was confirmed via postoperative pathology, with each thyroid lobe labeled accordingly. The cohort was divided into training (n = 189) and validation (n = 82) cohorts, with additional temporal (n = 59 lobes, Center 1, May-August 2024) and external (n = 66 lobes, Center 2) test cohorts. Thyroid lobes were manually segmented from the isthmus midline, ensuring interobserver consistency (ICC ≥ 0.8). Deep learning and radiomics features were selected using LASSO algorithms to compute DLRad scores. Logistic regression identified independent predictors, forming DLRad, clinical, and combined models. Model performance was evaluated using AUC, calibration, decision curves, and the DeLong test, and compared against radiologists' assessments. Independent predictors of LNM included age, gender, multiple nodules, tumor size group, and DLRad. The combined model demonstrated superior diagnostic performance with AUCs of 0.830 (training), 0.799 (validation), 0.819 (temporal test), and 0.756 (external test), outperforming the DLRad model (AUCs: 0.786, 0.730, 0.753, 0.642), the clinical model (AUCs: 0.723, 0.745, 0.671, 0.660), and radiologist evaluations (AUCs: 0.529, 0.606, 0.620, 0.503). It also achieved the lowest Brier scores (0.167, 0.184, 0.175, 0.201) and the highest net benefit in decision-curve analysis at threshold probabilities > 20 %. The combined model integrating DLRad and clinical features exhibits good performance in predicting LNM in PTC patients.
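
The DLRad-plus-clinical pattern described above (LASSO feature selection to form a score, then logistic regression over the score and clinical covariates) can be sketched with scikit-learn as follows; feature matrices, clinical covariates, and hyperparameters are hypothetical placeholders, not the study's data.

```python
# Sketch of the "LASSO-selected DLRad score + clinical logistic regression"
# pattern described above. All inputs are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler

def fit_dlrad(X_feat_train, y_train):
    """LASSO over standardized deep-learning + radiomics features; the fitted
    linear combination serves as the per-lobe DLRad score."""
    scaler = StandardScaler().fit(X_feat_train)
    lasso = LassoCV(cv=5, random_state=0).fit(scaler.transform(X_feat_train), y_train)
    return lambda X: scaler.transform(X) @ lasso.coef_ + lasso.intercept_

def evaluate_combined(X_feat_tr, X_clin_tr, y_tr, X_feat_te, X_clin_te, y_te):
    """Combined model: DLRad score plus clinical covariates (e.g. age, gender,
    multiple nodules, tumor size group) in a logistic regression."""
    dlrad = fit_dlrad(X_feat_tr, y_tr)
    Z_tr = np.column_stack([dlrad(X_feat_tr), X_clin_tr])
    Z_te = np.column_stack([dlrad(X_feat_te), X_clin_te])
    clf = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(Z_te)[:, 1])
```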

DCE-MRI based deep learning analysis of intratumoral subregion for predicting Ki-67 expression level in breast cancer.

Ding Z, Zhang C, Xia C, Yao Q, Wei Y, Zhang X, Zhao N, Wang X, Shi S

PubMed · Jun 1, 2025
To evaluate whether deep learning (DL) analysis of intratumoral subregions based on dynamic contrast-enhanced MRI (DCE-MRI) can help predict Ki-67 expression level in breast cancer. A total of 290 breast cancer patients from two hospitals were retrospectively collected. A k-means clustering algorithm was used to identify intratumoral subregions. DL features of the whole tumor and of each subregion were extracted from DCE-MRI images using a pre-trained 3D ResNet18 model. Logistic regression models were constructed after dimensionality reduction. Model performance was assessed using the area under the curve (AUC), and clinical value was demonstrated through decision curve analysis (DCA). The k-means clustering method clustered the tumor into two subregions (habitat 1 and habitat 2) based on voxel values. Both the habitat 1 model (validation set: AUC = 0.771, 95 %CI: 0.642-0.900 and external test set: AUC = 0.794, 95 %CI: 0.696-0.891) and the habitat 2 model (AUC = 0.734, 95 %CI: 0.605-0.862 and AUC = 0.756, 95 %CI: 0.646-0.866) showed better predictive capability for Ki-67 expression level than the whole-tumor model (AUC = 0.686, 95 %CI: 0.550-0.823 and AUC = 0.680, 95 %CI: 0.555-0.804). The combined model based on the two subregions further enhanced the predictive capability (AUC = 0.808, 95 %CI: 0.696-0.921 and AUC = 0.842, 95 %CI: 0.758-0.926), and it demonstrated higher clinical value than the other models in DCA. The deep learning models derived from tumor subregions showed better performance for predicting Ki-67 expression level in breast cancer patients, and the model integrating the two subregions further enhanced predictive performance.
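
The habitat step, i.e. clustering tumor voxels into subregions before feature extraction, can be illustrated with a short scikit-learn sketch. The two-cluster setting follows the abstract; the mask handling and the use of raw voxel intensity as the only clustering feature are assumptions.

```python
# Minimal sketch of voxel-level "habitat" clustering inside a tumour mask,
# matching the two-subregion analysis above. Illustrative assumptions only.
import numpy as np
from sklearn.cluster import KMeans

def habitat_map(volume: np.ndarray, mask: np.ndarray, n_habitats: int = 2) -> np.ndarray:
    """volume, mask: 3D arrays of equal shape. Returns a labelled map where
    0 = background and 1..n_habitats index the subregions (habitats)."""
    voxels = volume[mask > 0].reshape(-1, 1)
    labels = KMeans(n_clusters=n_habitats, n_init=10, random_state=0).fit_predict(voxels)

    habitats = np.zeros(mask.shape, dtype=np.int16)
    habitats[mask > 0] = labels + 1
    return habitats
```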

MCNEL: A multi-scale convolutional network and ensemble learning for Alzheimer's disease diagnosis.

Yan F, Peng L, Dong F, Hirota K

PubMed · Jun 1, 2025
Alzheimer's disease (AD) significantly threatens community well-being and healthcare resource allocation due to its high incidence and mortality. Therefore, early detection and intervention are crucial for reducing AD-related fatalities. However, the existing deep learning-based approaches often struggle to capture complex structural features of magnetic resonance imaging (MRI) data effectively. Common techniques for multi-scale feature fusion, such as direct summation and concatenation methods, often introduce redundant noise that can negatively affect model performance. These challenges highlight the need for developing more advanced methods to improve feature extraction and fusion, aiming to enhance diagnostic accuracy. This study proposes a multi-scale convolutional network and ensemble learning (MCNEL) framework for early and accurate AD diagnosis. The framework adopts enhanced versions of the EfficientNet-B0 and MobileNetV2 models, which are subsequently integrated with the DenseNet121 model to create a hybrid feature extraction tool capable of extracting features from multi-view slices. Additionally, a SimAM-based feature fusion method is developed to synthesize key feature information derived from multi-scale images. To ensure classification accuracy in distinguishing AD from multiple stages of cognitive impairment, this study designs an ensemble learning classifier model using multiple classifiers and a self-adaptive weight adjustment strategy. Extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset validate the effectiveness of our solution, which achieves average accuracies of 96.67% for ADNI-1 and 96.20% for ADNI-2, respectively. The results indicate that the MCNEL outperforms recent comparable algorithms in terms of various evaluation metrics, demonstrating superior performance and robustness in AD diagnosis. This study markedly enhances the diagnostic capabilities for AD, allowing patients to receive timely treatments that can slow down disease progression and improve their quality of life.
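
The SimAM module that the fusion step is reported to build on is the parameter-free attention block of Yang et al. (2021); a standard PyTorch rendition is shown below for orientation. It is background for the idea only, not the MCNEL fusion method itself.

```python
# Standard parameter-free SimAM attention block (Yang et al., 2021), shown as
# background for the SimAM-based fusion mentioned above.
import torch
import torch.nn as nn

class SimAM(nn.Module):
    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W); per-channel energy computed over the spatial dims
        b, c, h, w = x.shape
        n = h * w - 1
        d = (x - x.mean(dim=[2, 3], keepdim=True)).pow(2)
        v = d.sum(dim=[2, 3], keepdim=True) / n
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5
        return x * torch.sigmoid(e_inv)
```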

UniBrain: Universal Brain MRI diagnosis with hierarchical knowledge-enhanced pre-training.

Lei J, Dai L, Jiang H, Wu C, Zhang X, Zhang Y, Yao J, Xie W, Zhang Y, Li Y, Zhang Y, Wang Y

PubMed · Jun 1, 2025
Magnetic Resonance Imaging (MRI) has become a pivotal tool in diagnosing brain diseases, with a wide array of computer-aided artificial intelligence methods being proposed to enhance diagnostic accuracy. However, early studies were often limited by small-scale datasets and a narrow range of disease types, which posed challenges in model generalization. This study presents UniBrain, a hierarchical knowledge-enhanced pre-training framework designed for universal brain MRI diagnosis. UniBrain leverages a large-scale dataset comprising 24,770 imaging-report pairs from routine diagnostics for pre-training. Unlike previous approaches that either focused solely on visual representation learning or used brute-force alignment between vision and language, the framework introduces a hierarchical alignment mechanism. This mechanism extracts structured knowledge from free-text clinical reports at multiple granularities, enabling vision-language alignment at both the sequence and case levels, thereby significantly improving feature learning efficiency. A coupled vision-language perception module is further employed for text-guided multi-label classification, which facilitates zero-shot evaluation and fine-tuning of downstream tasks without modifying the model architecture. UniBrain is validated on both in-domain and out-of-domain datasets, consistently surpassing existing state-of-the-art diagnostic models and demonstrating performance on par with radiologists in specific disease categories. It shows strong generalization capabilities across diverse tasks, highlighting its potential for broad clinical application. The code is available at https://github.com/ljy19970415/UniBrain.
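
At its core, vision-language alignment of imaging-report pairs is usually trained with a symmetric contrastive objective; the generic CLIP-style sketch below illustrates the case-level idea only. UniBrain's hierarchical, knowledge-enhanced objective is richer, and the actual implementation is in the linked repository.

```python
# Generic symmetric contrastive (CLIP-style) alignment loss, shown only to
# illustrate case-level vision-language alignment; not UniBrain's objective.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(img_emb: torch.Tensor,
                               txt_emb: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    """img_emb, txt_emb: (N, D) paired embeddings from the image and report encoders."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    # Matched pairs sit on the diagonal; penalize both image-to-text and
    # text-to-image retrieval directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```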

MRI and CT radiomics for the diagnosis of acute pancreatitis.

Tartari C, Porões F, Schmidt S, Abler D, Vetterli T, Depeursinge A, Dromain C, Violi NV, Jreige M

PubMed · Jun 1, 2025
To evaluate the single and combined diagnostic performances of CT and MRI radiomics for diagnosis of acute pancreatitis (AP). We prospectively enrolled 78 patients (mean age 55.7 ± 17 years, 48.7 % male) diagnosed with AP between 2020 and 2022. Patients underwent contrast-enhanced CT (CECT) within 48-72 h of symptoms and MRI ≤ 24 h after CECT. The entire pancreas was manually segmented tridimensionally by two operators on portal venous phase (PVP) CECT images, the T2-weighted imaging (WI) MR sequence, and the non-enhanced and PVP T1-WI MR sequences. A matched control group (n = 77) with normal pancreas was used. The dataset was randomly split into training and test sets, and various machine learning algorithms were compared. Receiver operating characteristic (ROC) curve analysis was performed. The T2WI model exhibited significantly better diagnostic performance than CECT and non-enhanced and venous T1WI, with sensitivity, specificity and AUC of 73.3 % (95 % CI: 71.5-74.7), 80.1 % (78.2-83.2), and 0.834 (0.819-0.844) for T2WI (p = 0.001), 74.4 % (71.5-76.4), 58.7 % (56.3-61.1), and 0.654 (0.630-0.677) for non-enhanced T1WI, 62.1 % (60.1-64.2), 78.7 % (77.1-81), and 0.787 (0.771-0.810) for venous T1WI, and 66.4 % (64.8-50.9), 48.4 % (46-50.9), and 0.610 (0.586-0.626) for CECT, respectively. The combination of T2WI with CECT enhanced diagnostic performance compared to T2WI alone, achieving sensitivity, specificity and AUC of 81.4 % (80-80.3), 78.1 % (75.9-80.2), and 0.911 (0.902-0.920) (p = 0.001). The MRI radiomics models outperformed the CT radiomics model for the diagnosis of AP, and the combination of MRI with CECT showed better performance than the single models. The translation of radiomics into clinical practice may improve detection of AP, particularly MRI radiomics.
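
A per-sequence radiomics workflow of this kind typically pairs a standardized feature extractor with a conventional classifier. The sketch below uses PyRadiomics and scikit-learn; extractor settings, file paths, and the use of logistic regression are assumptions, since the study compared several machine learning algorithms.

```python
# Sketch of per-sequence radiomics extraction and a simple classifier.
# Settings and the choice of classifier are illustrative assumptions.
import numpy as np
from radiomics import featureextractor
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableAllFeatures()

def extract_features(image_path: str, mask_path: str) -> np.ndarray:
    """Run PyRadiomics on one image/segmentation pair and keep numeric features."""
    result = extractor.execute(image_path, mask_path)
    return np.array([v for k, v in result.items()
                     if not k.startswith("diagnostics")], dtype=float)

def fit_and_score(X_train, y_train, X_test, y_test) -> float:
    """Train one candidate model and report its test AUC."""
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
```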

Deep learning based on ultrasound images predicting cervical lymph node metastasis in postoperative patients with differentiated thyroid carcinoma.

Fan F, Li F, Wang Y, Liu T, Wang K, Xi X, Wang B

PubMed · Jun 1, 2025
To develop a deep learning (DL) model based on ultrasound (US) images of lymph nodes for predicting cervical lymph node metastasis (CLNM) in postoperative patients with differentiated thyroid carcinoma (DTC). We retrospectively collected 352 lymph nodes with cytopathology findings from 330 patients between June 2021 and December 2023 at our institution. The database was randomly divided into training and test cohorts at an 8:2 ratio. Separate DL base models for longitudinal and cross-sectional lymph node images were constructed based on ResNet50, and the outputs of the two base models were fused (1:1) to construct a longitudinal + cross-sectional DL model. Univariate and multivariate analyses were used to assess US features and construct a conventional US model. Subsequently, a combined model was constructed by integrating DL and US. The diagnostic accuracy of the longitudinal + cross-sectional DL model was higher than that of either view alone. The area under the curve (AUC) of the combined model (US + DL) was 0.855 (95% CI, 0.767-0.942), and the accuracy, sensitivity, and specificity were 0.786 (95% CI, 0.671-0.875), 0.972 (95% CI, 0.855-0.999), and 0.588 (95% CI, 0.407-0.754), respectively. Compared with the US and DL models, the integrated discrimination improvement and net reclassification improvement of the combined model were both positive. This preliminary study shows that a DL model based on US images of lymph nodes has high diagnostic efficacy for predicting CLNM in postoperative patients with DTC, and that the combined US + DL model is superior to either conventional US or DL alone for predicting CLNM in this population. We innovatively used DL on lymph node US images to predict the status of cervical lymph nodes in postoperative patients with DTC.
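
The 1:1 fusion of the two view models and the subsequent US + DL combination can be written down compactly. The sketch below assumes per-node probabilities from the two ResNet50 view models and a conventional-US feature matrix; all names are hypothetical.

```python
# Sketch of the 1:1 fusion of longitudinal and cross-sectional view
# probabilities and the US + DL combined model. Hypothetical inputs only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse_views(p_longitudinal: np.ndarray, p_cross_sectional: np.ndarray) -> np.ndarray:
    """Equal-weight (1:1) average of metastasis probabilities from the two
    ResNet50 view models."""
    return 0.5 * p_longitudinal + 0.5 * p_cross_sectional

def fit_combined_model(dl_prob: np.ndarray, us_features: np.ndarray, y: np.ndarray):
    """US + DL combined model: the fused DL probability enters as one more
    covariate alongside the conventional US features."""
    X = np.column_stack([dl_prob, us_features])
    return LogisticRegression(max_iter=1000).fit(X, y)
```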

Incorporating radiomic MRI models for presurgical response assessment in patients with early breast cancer undergoing neoadjuvant systemic therapy: Collaborative insights from breast oncologists and radiologists.

Gaudio M, Vatteroni G, De Sanctis R, Gerosa R, Benvenuti C, Canzian J, Jacobs F, Saltalamacchia G, Rizzo G, Pedrazzoli P, Santoro A, Bernardi D, Zambelli A

PubMed · Jun 1, 2025
The assessment of response to neoadjuvant treatment is critical for selecting the most suitable therapeutic options for patients with breast cancer and for reducing the need for invasive local therapies. Breast magnetic resonance imaging (MRI) is so far one of the most accurate approaches for assessing pathological complete response, although it is limited by the qualitative and subjective nature of radiologists' assessment, often making it insufficient for deciding whether to forgo additional locoregional therapy. To increase accuracy and predictive power, radiomic MRI models aided by machine learning and deep learning methods, as part of artificial intelligence, have been used to analyse the different subtypes of breast cancer and the specific changes observed before and after therapy. This review discusses recent advancements in radiomic MRI models for presurgical response assessment in patients with early breast cancer receiving preoperative treatments, with a focus on their implications for clinical practice.

Kellgren-Lawrence grading of knee osteoarthritis using deep learning: Diagnostic performance with external dataset and comparison with four readers.

Vaattovaara E, Panfilov E, Tiulpin A, Niinimäki T, Niinimäki J, Saarakkala S, Nevalainen MT

PubMed · Jun 1, 2025
To evaluate the performance of a deep learning (DL) model on an external dataset for assessing radiographic knee osteoarthritis using Kellgren-Lawrence (KL) grades, against versatile human readers. Two hundred eight knee anteroposterior conventional radiographs (CRs) were included in this retrospective study. Four readers (three radiologists, one orthopedic surgeon) assessed the KL grades, and a consensus grade was derived as the mean of their readings. The DL model was trained using all the CRs from the Multicenter Osteoarthritis Study (MOST), validated on the Osteoarthritis Initiative (OAI) dataset, and then tested on our external dataset. To assess the agreement between the graders, Cohen's quadratic kappa (κ) with 95 % confidence intervals was used. Diagnostic performance was measured using confusion matrices and receiver operating characteristic (ROC) analyses. The multiclass (KL grades 0 to 4) diagnostic performance of the DL model varied: sensitivities were between 0.372 and 1.000, specificities 0.691-0.974, PPVs 0.227-0.879, NPVs 0.622-1.000, and AUCs 0.786-0.983. The overall balanced accuracy was 0.693, AUC 0.886, and kappa 0.820. If only dichotomous KL grading (i.e. KL0-1 vs. KL2-4) was utilized, superior metrics were seen, with an overall balanced accuracy of 0.902 and AUC of 0.967. Substantial agreement between each reader and the DL model was found: the inter-rater agreement was 0.737 [0.685-0.790] for the radiology resident, 0.761 [0.707-0.816] for the musculoskeletal radiology fellow, 0.802 [0.761-0.843] for the senior musculoskeletal radiologist, and 0.818 [0.775-0.860] for the orthopedic surgeon. On an external dataset, our DL model can grade knee osteoarthritis with diagnostic accuracy comparable to highly experienced human readers.
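
The agreement and accuracy metrics reported here (quadratic-weighted kappa, balanced accuracy, and the dichotomous KL0-1 vs. KL2-4 split) are straightforward to reproduce with scikit-learn; the arrays in the sketch below are hypothetical reader and model grades.

```python
# Sketch of the reported agreement and accuracy metrics with scikit-learn.
# y_reader and y_model are hypothetical arrays of KL grades (0-4).
import numpy as np
from sklearn.metrics import balanced_accuracy_score, cohen_kappa_score

def agreement_metrics(y_reader, y_model):
    """Quadratic-weighted Cohen's kappa and balanced accuracy between a
    human reader and the DL model."""
    kappa = cohen_kappa_score(y_reader, y_model, weights="quadratic")
    bal_acc = balanced_accuracy_score(y_reader, y_model)
    return kappa, bal_acc

def dichotomize(kl_grades):
    """Collapse KL grades to the KL0-1 vs. KL2-4 split used above."""
    return (np.asarray(kl_grades) >= 2).astype(int)
```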