Page 44 of 305 (3046 results)

Effect of spatial resolution on the diagnostic performance of machine-learning radiomics model in lung adenocarcinoma: comparisons between normal- and high-spatial-resolution imaging for predicting invasiveness.

Yanagawa M, Nagatani Y, Hata A, Sumikawa H, Moriya H, Iwano S, Tsuchiya N, Iwasawa T, Ohno Y, Tomiyama N

PubMed | Jul 31 2025
To construct two machine-learning radiomics (MLR) models for predicting invasive adenocarcinoma (IVA) using normal-spatial-resolution (NSR) and high-spatial-resolution (HSR) training cohorts, and to validate the models (model-NSR and model-HSR) in a separate test cohort while comparing independent radiologists' (R1, R2) performance with and without model-HSR. In this retrospective multicenter study, all CT images were reconstructed using NSR data (512 matrix, 0.5-mm thickness) and HSR data (2048 matrix, 0.25-mm thickness). Nodules were divided into training (n = 61 non-IVA, n = 165 IVA) and test sets (n = 36 non-IVA, n = 203 IVA). Two MLR models were developed with random forest, using 18 significant factors for the NSR model and 19 significant factors for the HSR model selected from 172 radiomics features. Areas under the receiver operating characteristic curve (AUC) were compared in the test set using DeLong's test. Accuracy (acc), sensitivity (sen), and specificity (spc) of R1 and R2 with and without model-HSR were compared using the McNemar test. 437 patients (70 ± 9 years, 203 men) had 465 nodules (n = 368, IVA). Model-HSR AUCs were significantly higher than those of model-NSR in the training (0.839 vs. 0.723) and test sets (0.863 vs. 0.718) (p < 0.05). R1's acc (87.2%) and sen (93.1%) with model-HSR were significantly higher than without it (77.0% and 79.3%) (p < 0.0001). R2's acc (83.7%) and sen (86.7%) with model-HSR were equal to or higher than without it (83.7% and 85.7%, respectively), but not significantly (p > 0.50). Spc of R1 (52.8%) and R2 (66.7%) with model-HSR was lower than without it (63.9% and 72.2%, respectively), but not significantly (p > 0.21). The HSR-based MLR model significantly increased IVA diagnostic performance compared with the NSR-based model, supporting radiologists without compromising accuracy or sensitivity. However, this benefit came at the cost of reduced specificity, potentially increasing false positives, which may lead to unnecessary examinations or overtreatment in clinical settings.
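The AUC figures above can be read as rank statistics: the probability that a randomly chosen IVA nodule receives a higher model score than a randomly chosen non-IVA nodule. A minimal sketch of that computation (illustrative only, not the authors' code; `scores_pos` and `scores_neg` are hypothetical model outputs):

```python
def auc(scores_pos, scores_neg):
    # AUC as the Mann-Whitney statistic: the fraction of (positive, negative)
    # score pairs in which the positive case outranks the negative (ties = 0.5)
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

DeLong's test, used in the study, compares two such AUCs computed on the same cases while accounting for their correlation.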

Hybrid optimization enabled Eff-FDMNet for Parkinson's disease detection and classification in federated learning.

Subramaniam S, Balakrishnan U

PubMed | Jul 31 2025
Parkinson's disease (PD) is a progressive neurodegenerative disorder, and early diagnosis is crucial for managing symptoms and slowing disease progression. This paper proposes a framework named Federated Learning Enabled Waterwheel Shuffled Shepherd Optimization-based Efficient-Fuzzy Deep Maxout Network (FedL_WSSO based Eff-FDMNet) for PD detection and classification. In the local training model, each input image from the Image and Data Archive (IDA) database is preprocessed using a Gaussian filter, followed by image augmentation and feature extraction. These processes are executed for every input image, and the collected outputs are used for PD detection with a Shepard Convolutional Neural Network Fuzzy Zeiler and Fergus Net (ShCNN-Fuzzy-ZFNet). PD classification is then accomplished using Eff-FDMNet, which is trained with WSSO. Finally, local updating and aggregation on the server are modified based on CAViaR. The developed method obtained the highest accuracy of 0.927 and mean average precision of 0.905, and the lowest false positive rate (FPR) of 0.082, loss of 0.073, mean squared error (MSE) of 0.213, and root mean squared error (RMSE) of 0.461. The high accuracy and low error rates indicate that the framework can enhance patient outcomes by enabling more reliable and personalized diagnosis.
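The federated setting described above trains locally and aggregates on a server; the paper modifies that aggregation step with CAViaR. For reference, the plain FedAvg-style baseline it departs from is a dataset-size-weighted average of client parameters (a sketch assuming flat parameter lists, not the authors' implementation):

```python
def fed_avg(client_params, client_sizes):
    # average each parameter across clients, weighted by local dataset size
    total = sum(client_sizes)
    n_params = len(client_params[0])
    return [
        sum(params[i] * size for params, size in zip(client_params, client_sizes)) / total
        for i in range(n_params)
    ]
```

Each round, clients send updated parameters, the server aggregates them this way, and the result is broadcast back for the next round of local training.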

Deep Learning-based Hierarchical Brain Segmentation with Preliminary Analysis of the Repeatability and Reproducibility.

Goto M, Kamagata K, Andica C, Takabayashi K, Uchida W, Goto T, Yuzawa T, Kitamura Y, Hatano T, Hattori N, Aoki S, Sakamoto H, Sakano Y, Kyogoku S, Daida H

PubMed | Jul 31 2025
We developed a new deep learning-based hierarchical brain segmentation (DLHBS) method that segments T1-weighted MR images (T1WI) into 107 brain subregions and calculates the volume of each subregion. This study aimed to evaluate the repeatability and reproducibility of volume estimation using DLHBS and compare them with those of representative brain segmentation tools, statistical parametric mapping (SPM) and FreeSurfer (FS). Hierarchical segmentation using multiple deep learning models was employed to segment brain subregions within a clinically feasible processing time. T1WI and brain-mask pairs from 486 subjects were used to train the deep learning segmentation models; the training data were generated using a multi-atlas registration-based method, and their high quality was confirmed through visual evaluation and manual correction by neuroradiologists. Scan-rescan 3D-T1WI data of 11 healthy subjects were obtained using three MRI scanners to evaluate repeatability and reproducibility. The volumes of eight ROIs (gray matter, white matter, cerebrospinal fluid, hippocampus, orbital gyrus, cerebellum posterior lobe, putamen, and thalamus) were obtained using DLHBS, SPM 12 with default settings, and FS with the "recon-all" pipeline, and were used to evaluate repeatability and reproducibility. In the volume measurements, the bilateral thalamus showed higher repeatability with DLHBS than with SPM, and DLHBS demonstrated higher repeatability than FS across all eight ROIs. Higher reproducibility was also observed with DLHBS in both hemispheres of six ROIs compared with SPM and of five ROIs compared with FS; DLHBS showed lower repeatability or reproducibility in no comparison. These results indicate that DLHBS achieved the best repeatability and reproducibility among the three tools.
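Scan-rescan repeatability of volume estimates is commonly summarized by the within-subject coefficient of variation; a sketch of that metric (an assumption for illustration — the abstract does not specify which repeatability index was used):

```python
import math
import statistics

def within_subject_cv(scan_rescan):
    # scan_rescan: (volume_scan1, volume_scan2) per subject; returns %CV
    # for paired data the within-subject variance is the mean of (difference^2 / 2)
    ws_var = statistics.mean([(a - b) ** 2 / 2 for a, b in scan_rescan])
    grand_mean = statistics.mean([(a + b) / 2 for a, b in scan_rescan])
    return math.sqrt(ws_var) / grand_mean * 100
```

A lower %CV means the tool returns more similar volumes when the same subject is scanned twice.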

Utility of Thin-slice Fat-suppressed Single-shot T2-weighted MR Imaging with Deep Learning Image Reconstruction as a Protocol for Evaluating the Pancreas.

Shimada R, Sofue K, Ueno Y, Wakayama T, Yamaguchi T, Ueshima E, Kusaka A, Hori M, Murakami T

PubMed | Jul 31 2025
To compare the utility of thin-slice fat-suppressed single-shot T2-weighted imaging (T2WI) with deep learning image reconstruction (DLIR) against conventional fast spin-echo T2WI with DLIR in a pancreatic MR protocol. This retrospective study included 42 patients (mean age, 70.2 years) with pancreatic cancer who underwent gadoxetic acid-enhanced MRI. Three fat-suppressed T2WI sequences, conventional fast spin-echo with 6-mm thickness (FSE 6 mm) and single-shot fast spin-echo with 6-mm and 3-mm thickness (SSFSE 6 mm and SSFSE 3 mm), were acquired for each patient. For quantitative analysis, the SNRs of the upper abdominal organs were compared between images with and without DLIR, and the pancreas-to-lesion contrast on DLIR images was calculated. For qualitative analysis, two abdominal radiologists independently scored the image quality of FSE 6 mm, SSFSE 6 mm, and SSFSE 3 mm with DLIR on a 5-point scale. The SNRs improved significantly with DLIR across all three T2-weighted sequences in all patients (P < 0.001). The pancreas-to-lesion contrast of SSFSE 3 mm was higher than that of FSE 6 mm (P < 0.001) and tended to be higher than that of SSFSE 6 mm (P = 0.07). SSFSE 3 mm received the highest image-quality scores for pancreas edge sharpness, pancreatic duct clarity, and overall image quality, followed by SSFSE 6 mm and FSE 6 mm (P < 0.0001). SSFSE 3 mm with DLIR improved the SNR of the pancreas, pancreas-to-lesion contrast, and image quality more efficiently than SSFSE 6 mm and FSE 6 mm. Thin-slice fat-suppressed single-shot T2WI with DLIR can be easily implemented in a pancreatic MR protocol.
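The SNR comparisons above follow the usual region-of-interest convention: mean signal in the organ over the standard deviation of background noise. A minimal sketch (illustrative; the study's exact ROI placement and any noise-correction factors are not reproduced here):

```python
import statistics

def snr(signal_roi, noise_roi):
    # SNR = mean intensity in the organ ROI / SD of a background noise ROI
    return statistics.mean(signal_roi) / statistics.stdev(noise_roi)
```

Comparing this value for the same ROI pair on reconstructions with and without DLIR quantifies the denoising gain.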

Thin-slice 2D MR Imaging of the Shoulder Joint Using Denoising Deep Learning Reconstruction Provides Higher Image Quality Than 3D MR Imaging.

Kakigi T, Sakamoto R, Arai R, Yamamoto A, Kuriyama S, Sano Y, Imai R, Numamoto H, Miyake KK, Saga T, Matsuda S, Nakamoto Y

PubMed | Jul 31 2025
This study evaluated whether thin-slice 2D fat-saturated proton density-weighted images of the shoulder joint in three imaging planes, combined with parallel imaging, the partial Fourier technique, and a denoising approach with deep learning-based reconstruction (dDLR), are more useful than 3D fat-saturated proton density multi-planar voxel images. Eighteen patients who underwent MRI of the shoulder joint at 3T were enrolled. The denoising effect of dDLR in 2D was evaluated using the coefficient of variation (CV). Two radiologists qualitatively evaluated anatomical structures, noise, and artifacts in 2D after dDLR and in 3D using a five-point Likert scale. All results were analyzed statistically, and Gwet's agreement coefficients were calculated. The CV of 2D after dDLR was significantly lower than before dDLR (P < 0.05). Both radiologists rated 2D higher than 3D for all anatomical structures and for noise (P < 0.05), but not for artifacts. Gwet's agreement coefficients for anatomical structures, noise, and artifacts in both 2D and 3D showed nearly perfect agreement between the two radiologists, and the evaluation of 2D tended to be more reproducible than that of 3D. 2D imaging with parallel imaging, the partial Fourier technique, and dDLR proved superior to 3D for depicting shoulder joint structures with lower noise.
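Gwet's agreement coefficient corrects observed agreement for chance less aggressively than kappa when category prevalence is skewed. For two raters and binary scores, AC1 can be sketched as follows (illustrative; the study's five-point Likert data would need the weighted, multi-category form):

```python
def gwet_ac1(rater1, rater2):
    # Gwet's AC1 for two raters and binary (0/1) ratings
    n = len(rater1)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    pi = (sum(rater1) + sum(rater2)) / (2 * n)  # overall prevalence of "1"
    p_chance = 2 * pi * (1 - pi)                # AC1 chance-agreement term
    return (p_obs - p_chance) / (1 - p_chance)
```

Values near 1 indicate nearly perfect agreement beyond chance, matching the "nearly perfect agreement" reading used in the study.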

Quantifying the Trajectory of Percutaneous Endoscopic Lumbar Discectomy in 3D Lumbar Models Based on Automated MR Image Segmentation-A Cross-Sectional Study.

Su Z, Wang Y, Huang C, He Q, Lu J, Liu Z, Zhang Y, Zhao Q, Zhang Y, Cai J, Pang S, Yuan Z, Chen Z, Chen T, Lu H

PubMed | Jul 31 2025
Creating a 3D lumbar model and planning a personalized puncture trajectory is advantageous for establishing the working channel in percutaneous endoscopic lumbar discectomy (PELD). However, existing 3D lumbar models seldom include lumbar nerve and dural sac reconstructions and depend primarily on CT images for preoperative trajectory planning. This study therefore investigated the relationship between different virtual working channels and a 3D lumbar model built from automated MR image segmentation of lumbar bone, nerves, and dural sac at the L4/L5 level. Preoperative lumbar MR images of 50 patients with L4/L5 lumbar disc herniation were collected from a teaching hospital between March 2020 and July 2020. Automated MR image segmentation was first used to create a 3D model of the lumbar spine, including the L4 vertebra, L5 vertebra, intervertebral disc, L4 nerves, dural sac, and skin. Thirty segmentation results were then randomly chosen to clarify the relationship between various virtual working channels and the 3D lumbar model, using bivariate Spearman's rank correlation analysis. The preoperative MR images of the 50 patients (34 males; mean age, 45.6 ± 6 years) were used to train and validate the automated segmentation model, which achieved mean Dice scores of 0.906, 0.891, 0.896, 0.695, 0.892, and 0.892 for the L4 vertebra, L5 vertebra, intervertebral disc, L4 nerves, dural sac, and skin, respectively. With an increase in the coronal plane angle (CPA), the intersection volume involving the L4 nerves and atypical structures decreased, whereas the intersection volume encompassing the dural sac, L4 inferior articular process, and L5 superior articular process increased; the total intersection volume first decreased, then increased, and then decreased again. As the cross-section angle (CSA) increased, the intersection volume of both the L4 nerves and the dural sac rose; the intersection volume involving the L4 inferior articular process grew while that of the L5 superior articular process diminished; and the overall intersection volume and the intersection volume of atypical structures initially decreased and subsequently increased. Overall, the optimal angles for L4/L5 PELD are a CSA of 15° and a CPA of 15°-20°, which minimize harm to the vertebral bones, facet joints, spinal nerves, and dural sac. Additionally, this 3D preoperative planning method could enhance puncture trajectories for individual patients, potentially advancing surgical navigation, robotics, and artificial intelligence in PELD procedures.
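Computationally, the angle sweep above reduces to counting overlap voxels between a virtual working-channel mask and each anatomical mask at every candidate angle. A sketch on flattened binary masks (illustrative; the voxel size and mask layout are assumptions):

```python
def intersection_volume(channel_mask, structure_mask, voxel_mm3=1.0):
    # volume of the virtual working channel passing through a structure:
    # number of overlapping voxels times the volume of a single voxel
    overlap = sum(1 for c, s in zip(channel_mask, structure_mask) if c and s)
    return overlap * voxel_mm3
```

Evaluating this for each structure (nerves, dural sac, articular processes) across CPA/CSA values yields the volume-versus-angle curves the study describes.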

Identification and validation of an explainable machine learning model for vascular depression diagnosis in the older adults: a multicenter cohort study.

Zhang R, Li T, Fan F, He H, Lan L, Sun D, Xu Z, Peng S, Cao J, Xu J, Peng X, Lei M, Song H, Zhang J

PubMed | Jul 31 2025
Vascular depression (VaDep) is a prevalent affective disorder in older adults that significantly impacts functional status and quality of life. Early identification and intervention are crucial but largely insufficient in clinical practice because depressive symptoms are mostly inconspicuous, imaging manifestations are heterogeneous, and definitive peripheral biomarkers are lacking. This study aimed to develop and validate an interpretable machine learning (ML) model for VaDep to serve as a clinical support tool. The study included 602 participants from Wuhan, China (236 VaDep patients and 366 controls), for training and internal validation from July 2020 to October 2023; an independent dataset of 171 participants from surrounding areas was used for external validation. We collected clinical data, neuropsychological assessments, blood test results, and MRI scans to develop and refine ML models through cross-validation. Feature reduction was implemented to simplify the models without compromising performance, with validation on the internal and external datasets, and the SHapley Additive exPlanations (SHAP) method was used to enhance model interpretability. The Light Gradient Boosting Machine (LGBM) model performed best among the six selected ML algorithms. An optimized, interpretable LGBM model with 8 key features (white matter hyperintensities score, age, vascular endothelial growth factor, interleukin-6, brain-derived neurotrophic factor, and tumor necrosis factor-alpha levels, lacune count, and serotonin level) demonstrated high diagnostic accuracy in both internal (AUROC = 0.937) and external (AUROC = 0.896) validation. The final model also achieved, and marginally exceeded, clinician-level diagnostic performance. This research established a consistent and explainable ML framework for identifying VaDep in older adults using comprehensive clinical data. The 8 features identified in the final LGBM model provide new insights for further exploration of VaDep mechanisms and emphasize the need for enhanced focus on early identification and intervention in this vulnerable group; the affective health of older adults deserves more attention.
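SHAP, used above for interpretability, attributes a prediction to features as additive contributions relative to a background expectation. For a linear model the values have a closed form, which conveys the idea that TreeSHAP generalizes to the study's LGBM model (a hypothetical linear sketch, not the TreeSHAP computation itself):

```python
import statistics

def linear_shap_values(coefs, x, background):
    # for f(x) = sum_i c_i * x_i + b, the SHAP value of feature i is
    # c_i * (x_i - E[x_i]), with E[x_i] taken over a background dataset
    means = [statistics.mean(col) for col in zip(*background)]
    return [c * (xi - m) for c, xi, m in zip(coefs, x, means)]
```

By construction the values sum to f(x) minus the background-average prediction, so each feature's share of an individual diagnosis can be read off directly.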

A brain tumor segmentation enhancement in MRI images using U-Net and transfer learning.

Pourmahboubi A, Arsalani Saeed N, Tabrizchi H

PubMed | Jul 31 2025
This paper presents a novel transfer learning approach for segmenting brain tumors in Magnetic Resonance Imaging (MRI) images. Using Fluid-Attenuated Inversion Recovery (FLAIR) abnormality segmentation masks and MRI scans from The Cancer Genome Atlas (TCGA) lower-grade glioma collection, the proposed approach uses a VGG19-based U-Net architecture with fixed pretrained weights. The experimental findings demonstrate the effectiveness of the proposed framework: an Area Under the Curve (AUC) of 0.9957, F1-score of 0.9679, Dice coefficient of 0.9679, precision of 0.9541, recall of 0.9821, and Intersection-over-Union (IoU) of 0.9378. On these metrics, the VGG19-powered U-Net outperforms not only the conventional U-Net model but also variants using other pre-trained backbones in the U-Net encoder. Clinical trial registration: not applicable, as this study used an existing publicly available dataset and did not involve a clinical trial.
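The Dice coefficient and IoU reported above are both overlap ratios on binary masks and are monotonically related (Dice = 2·IoU / (1 + IoU)); on pixel masks Dice also equals the F1-score, which is why the paper's F1 and Dice values coincide. A minimal sketch on flattened masks:

```python
def dice_iou(pred, truth):
    # Dice = 2|A∩B| / (|A| + |B|);  IoU = |A∩B| / |A∪B|
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    a, b = sum(pred), sum(truth)
    return 2 * inter / (a + b), inter / (a + b - inter)
```

Both metrics reward overlap between predicted and reference tumor pixels, but IoU penalizes disagreement more heavily, which is why it is numerically lower (0.9378 vs. 0.9679) on the same predictions.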

An interpretable CT-based machine learning model for predicting recurrence risk in stage II colorectal cancer.

Wu Z, Gong L, Luo J, Chen X, Yang F, Wen J, Hao Y, Wang Z, Gu R, Zhang Y, Liao H, Wen G

PubMed | Jul 31 2025
This study aimed to develop an interpretable 3-year disease-free survival (DFS) risk prediction tool to stratify patients with stage II colorectal cancer (CRC) by integrating CT images and clinicopathological factors. A total of 769 patients with pathologically confirmed stage II CRC and DFS follow-up information were recruited from three medical centers and divided into training (n = 442), test (n = 190), and validation (n = 137) cohorts. CT-based tumor radiomics features were extracted, selected, and used to calculate a Radscore. A combined model was developed using an artificial neural network (ANN) algorithm, integrating the Radscore with significant clinicoradiological factors to classify patients into high- and low-risk groups. Model performance was assessed using the area under the curve (AUC), and feature contributions were quantified using the Shapley additive explanations (SHAP) algorithm. Kaplan-Meier survival analysis revealed the prognostic stratification value of the risk groups. Fourteen radiomics features and five clinicoradiological factors were selected to construct the radiomics and clinicoradiological models, respectively. The combined model demonstrated the best performance, with AUCs of 0.811 and 0.846 in the test and validation cohorts, respectively. Kaplan-Meier curves confirmed effective patient stratification (p < 0.001) in both cohorts. A high Radscore, a rough intestinal outer edge, and advanced age were identified as key prognostic risk factors using SHAP. The combined model effectively stratified patients with stage II CRC into distinct prognostic risk groups, aiding clinical decision-making. Although the effectiveness of adjuvant chemotherapy for stage II CRC remains debated, integrating CT images with clinicopathological information can facilitate the identification of the patients most likely to benefit from it, and SHAP enhances the interpretability of the model's predictions.
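The Kaplan-Meier analysis used above estimates disease-free survival as a running product over event times. A compact sketch (illustrative; `times` and `events` are hypothetical follow-up data, not the study's):

```python
def kaplan_meier(times, events):
    # times: follow-up per patient; events: 1 = recurrence, 0 = censored
    # returns (time, survival estimate) after each event; at tied times,
    # events are processed before censorings, per the usual convention
    order = sorted(zip(times, events), key=lambda te: (te[0], -te[1]))
    at_risk, surv, curve = len(times), 1.0, []
    for t, e in order:
        if e:
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1
    return curve
```

Plotting these curves separately for the model's high- and low-risk groups, and comparing them with a log-rank test, gives the stratification result (p < 0.001) reported above.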

Impact of AI assistance on radiologist interpretation of knee MRI.

Herpe G, Vesoul T, Zille P, Pluot E, Guillin R, Rizk B, Ardon R, Adam C, d'Assignies G, Gondim Teixeira PA

PubMed | Jul 31 2025
Knee injuries frequently require Magnetic Resonance Imaging (MRI) evaluation, increasing radiologists' workload. This study evaluates the impact of a knee AI assistant on radiologists' diagnostic accuracy and efficiency in detecting anterior cruciate ligament (ACL), meniscus, cartilage, and medial collateral ligament (MCL) lesions on knee MRI exams. This retrospective reader study was conducted from January 2024 to April 2024. Knee MRI studies were evaluated with and without AI assistance by six radiologists with 2 to 10 years of experience in musculoskeletal imaging, in two sessions 1 month apart. The AI algorithm was trained on 23,074 MRI studies separate from the study dataset and tested on various knee structures, including the ACL, MCL, menisci, and cartilage. The reference standard was established by consensus of three expert MSK radiologists. Statistical analysis included sensitivity, specificity, accuracy, and Fleiss' kappa. The study dataset comprised 165 knee MRIs (89 males, 76 females; mean age, 42.3 ± 15.7 years). AI assistance improved sensitivity from 81% (134/165, 95% CI = [79.7, 83.3]) to 86% (142/165, 95% CI = [84.2, 87.5]) (p < 0.001), accuracy from 86% (142/165, 95% CI = [85.4, 86.9]) to 91% (150/165, 95% CI = [90.7, 92.1]) (p < 0.001), and specificity from 88% (145/165, 95% CI = [87.1, 88.5]) to 93% (153/165, 95% CI = [92.7, 93.8]) (p < 0.001). Sensitivity and accuracy improvements were observed across all knee structures, with p values ranging from < 0.001 to 0.28. Fleiss' kappa among readers increased from 54% (95% CI = [53.0, 55.3]) to 78% (95% CI = [76.6, 79.0]) (p < 0.001) after AI integration. The integration of AI improved diagnostic accuracy, efficiency, and inter-reader agreement in knee MRI interpretation, highlighting the value of this approach in clinical practice.
Question: Can artificial intelligence (AI) assistance improve radiologists' diagnostic accuracy and efficiency in detecting anterior cruciate ligament, meniscus, cartilage, and medial collateral ligament lesions on knee MRI?
Findings: AI assistance in knee MRI interpretation increased radiologists' sensitivity from 81% to 86% and accuracy from 86% to 91% for detecting knee lesions while improving inter-reader agreement (p < 0.001).
Clinical relevance: AI-assisted knee MRI interpretation enhances diagnostic precision and consistency among radiologists, potentially leading to more accurate injury detection, improved patient outcomes, and reduced diagnostic variability in musculoskeletal imaging.
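The with-versus-without-AI comparisons above are paired (the same readers on the same cases), which is what makes a McNemar-style test appropriate: it examines only the cases a reader got right in one condition but not the other. A sketch of the chi-square statistic with continuity correction (illustrative; `correct_*` are hypothetical per-case correctness indicators):

```python
def mcnemar_statistic(correct_without_ai, correct_with_ai):
    # b: correct only without AI; c: correct only with AI (the discordant pairs)
    b = sum(1 for w, a in zip(correct_without_ai, correct_with_ai) if w and not a)
    c = sum(1 for w, a in zip(correct_without_ai, correct_with_ai) if a and not w)
    if b + c == 0:
        return 0.0
    # chi-square with one degree of freedom, continuity-corrected
    return (abs(b - c) - 1) ** 2 / (b + c)
```

Cases the reader got right (or wrong) in both conditions cancel out, so the statistic isolates the net effect of AI assistance on the paired readings.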