Page 2 of 6036030 results

Wang L, Zhang S, Xu N, He Q, Zhu Y, Chang Z, Wu Y, Wang H, Qi S, Zhang L, Shi Y, Qu X, Zhou X, Song J

PubMed · Oct 24 2025
With the emergence of deep learning techniques based on convolutional neural networks, artificial intelligence (AI) has driven transformative developments in the field of medical image analysis. Recently, large language models (LLMs) such as ChatGPT have also started to achieve distinction in this domain. Increasing research shows the undeniable role of AI in reshaping various aspects of medical image analysis, including processes such as image enhancement, segmentation, detection in image preprocessing, and postprocessing related to medical diagnosis and prognosis in clinical settings. However, despite the significant progress in AI research, studies investigating the recent advances in AI technology in the aforementioned aspects, the changes in research hotspot trajectories, and the performance of studies in addressing key clinical challenges in this field are limited. This article provides an overview of recent advances in AI for medical image analysis and discusses the methodological profiles, advantages, disadvantages, and future trends of AI technologies.

Jeon SK, Lee JM, Park J, Hwang S, Ryu RR

PubMed · Oct 24 2025
To evaluate the feasibility and diagnostic utility of a deep learning (DL)-based super-resolution (SR) reconstruction algorithm applied to pancreatobiliary MRI for assessing pancreatic intraductal papillary mucinous neoplasms (IPMNs). This retrospective study included 162 patients with presumed pancreatic IPMN (≥ 1 cm) who underwent pancreatobiliary MRI between May 2019 and May 2022. Two portal venous phase (PVP) images of dynamic T1-weighted imaging were sequentially acquired: an early PVP image obtained using standard compressed sensing volumetric interpolated breath-hold examination (standard CS-VIBE) and a late PVP image obtained using CS-VIBE with a DL-based SR reconstruction algorithm generating 1-mm-thick images (DL-SR CS-VIBE). Arterial phase and 3-min delayed phase images were also acquired using DL-SR CS-VIBE. The image quality of the standard and DL-SR CS-VIBE PVP sequences was compared using the Wilcoxon signed-rank test. The diagnostic performance of full-sequence pancreatobiliary MRI including DL-SR CS-VIBE for predicting malignant IPMN was assessed using multi-reader multi-case analysis. Diagnostic accuracy was assessed using receiver operating characteristic (ROC) analysis, while sensitivity and specificity were estimated with corresponding 95% confidence intervals (CIs). Among the 162 patients, 15 had malignant and 147 benign IPMN. DL-SR CS-VIBE demonstrated significantly better overall image quality (3.73 ± 0.33 vs. 3.22 ± 0.43) and cystic lesion conspicuity (3.37 ± 0.50 vs. 2.71 ± 0.52) than standard CS-VIBE (all Ps < 0.001). The area under the ROC curve (AUC) for predicting malignant IPMN was 0.858 (95% CI: 0.807, 0.909). Using the presence of high-risk stigmata as the test-positive criterion, pooled sensitivity and specificity of pancreatobiliary MRI including DL-SR CS-VIBE for malignant IPMN were 71.1% (95% CI: 55.7, 83.6) and 82.8% (95% CI: 78.9, 86.2), respectively. Among MRI features, diagnostic accuracy was highest for mural nodules ≥ 5 mm (AUC, 0.736) and main pancreatic duct size ≥ 10 mm (AUC, 0.720). Pancreatobiliary MRI with DL-SR CS-VIBE enhances image quality and lesion conspicuity, offering promising diagnostic accuracy for malignant IPMN, though further studies with larger cohorts are needed to refine these findings and evaluate clinical impact.
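The paired image-quality comparison above relies on the Wilcoxon signed-rank test. As a minimal pure-Python sketch of how that statistic is computed, with made-up 4-point quality scores standing in for the study's reader ratings:

```python
def wilcoxon_signed_rank(x, y):
    """Paired Wilcoxon signed-rank statistic: the smaller of W+ and W-.

    Zero differences are discarded; tied |differences| share average ranks.
    """
    diffs = [b - a for a, b in zip(x, y) if b - a != 0]
    # Rank absolute differences, averaging ranks across ties.
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_plus, w_minus)

# Hypothetical paired quality scores (illustrative, not the study's data).
standard = [3.0, 3.5, 3.0, 2.5, 3.5, 3.0, 3.5, 2.5, 3.0, 3.5]
dl_sr    = [3.5, 4.0, 3.5, 3.0, 4.0, 3.5, 4.0, 3.5, 3.5, 4.0]
# Every pair favors DL-SR, so the statistic (W-) is 0.
print(wilcoxon_signed_rank(standard, dl_sr))
```

In practice a library routine (e.g. `scipy.stats.wilcoxon`) would also supply the p-value; the sketch shows only the rank-sum statistic itself.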

Nicoli AP, Bach M, Wasserthal J, Indrakanti AK, Segeroth M, Yang S, Cyriac J, Boll D, Wilder-Smith AJ

PubMed · Oct 24 2025
This study aims to develop a deep learning-based tool for automatic liver segment and liver lesion segmentation on computed tomography (CT) and magnetic resonance imaging (MRI). We demonstrate its clinical utility using a qualitative example of hepatocellular carcinoma (HCC) response to transarterial chemoembolization (TACE). The models are provided as open-source software to update and improve the capabilities of TotalSegmentator. Liver segmentation was performed on 193 CTs and 120 MRIs, using fivefold cross-validation for training/testing. In addition, 429 CTs and 321 MRIs with liver lesions and 15 CTs and 13 MRIs without liver lesions were collected. Of these, 414 CTs and 308 MRIs were manually segmented, and an nnU-Net was trained on 750 images and tested on 80. Inter-rater variability was examined on 20 CTs and 20 MRIs by two independent readers. We analyzed the tool's potential clinical utility on 172 TACE-treated HCCs on CT. Performance was evaluated using sensitivity, false positives, and volume. Voxel-wise segmentation was evaluated using the Dice coefficient. Our model's liver segmentation achieved Dice coefficients of 0.897 for CT and 0.847 for MRI. Liver lesion detection on CT achieved 75.8% sensitivity, 0.522 false positives per case (FP/c), and 0.658 Dice; on MRI, 62.7% sensitivity, 1.029 FP/c, and 0.337 Dice. Following TACE, median HCC attenuation significantly decreased from 51.33 HU to 38.5 HU. Human readers showed higher agreement (sensitivity: 64.7%, Dice: 0.464) than lesion model (LM)-reader comparisons (sensitivity: 53.2%, Dice: 0.432), and the LM had a slightly higher FP/c (0.825 vs. 0.775). Overall, our algorithms reliably detect and segment liver segments and lesions on both CT and MRI, and the qualitative assessment of HCC response to TACE illustrates the model's potential value for clinical and research applications.
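The Dice coefficient reported above measures voxel-wise overlap between a predicted and a reference mask. A minimal sketch on flattened binary masks (the voxel values are illustrative):

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks given as flat 0/1 sequences.

    Dice = 2 * |A ∩ B| / (|A| + |B|); two empty masks count as perfect overlap.
    """
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

pred  = [1, 1, 1, 0, 0, 1]  # predicted lesion voxels
truth = [1, 1, 0, 0, 1, 1]  # reference annotation
print(dice(pred, truth))  # 0.75: 2 * 3 overlapping voxels / (4 + 4)
```

Real 3D masks are just larger instances of the same computation after flattening.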

Aslam W, Hussain J, Aslam MZ, Jan S, Riaz TB, Iqbal A, Arif M, Khan I

PubMed · Oct 24 2025
Accurate segmentation of brain tumors from multi-modal MRI scans is critical for diagnosis, treatment planning, and disease monitoring. Tumor heterogeneity and inter-image variability across MRI sequences pose challenging problems to state-of-the-art segmentation models. This paper presents a novel Multi-Modal Multi-Scale Contextual Aggregation with Attention Fusion (MM-MSCA-AF) framework that leverages multi-modal MRI images (T1, T2, FLAIR, and T1-CE) to enhance segmentation performance. The model employs multi-scale contextual aggregation to obtain global and fine-grained spatial features, and gated attention fusion for selectively refining effective feature representations and discarding noise. Evaluated on the BRATS 2020 dataset, MM-MSCA-AF achieves a Dice value of 0.8158 for necrotic tumor regions and 0.8589 in total, outperforming state-of-the-art architectures such as U-Net, nnU-Net, and Attention U-Net. These results demonstrate the effectiveness of MM-MSCA-AF in handling complex tumor shapes and improving segmentation accuracy. The proposed approach has significant clinical value, offering a more accurate and automatic brain tumor segmentation solution in medical imaging.
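The gated attention fusion idea can be sketched in miniature: a sigmoid gate computed from both modalities decides, per feature, how much of each to keep. This is only a conceptual illustration with hand-picked weights, not the MM-MSCA-AF implementation; the feature values and weights below are invented:

```python
import math

def gated_fusion(feat_a, feat_b, w_a, w_b, bias):
    """Element-wise gated fusion of two modality feature vectors.

    A sigmoid gate derived from both inputs selects, per feature, how much
    of each modality survives: fused = g * a + (1 - g) * b.
    """
    fused = []
    for a, b, wa, wb, c in zip(feat_a, feat_b, w_a, w_b, bias):
        g = 1.0 / (1.0 + math.exp(-(wa * a + wb * b + c)))  # gate in (0, 1)
        fused.append(g * a + (1.0 - g) * b)
    return fused

# Toy features from two MRI sequences; in a real network the gate weights
# are learned, not fixed constants.
flair = [0.9, 0.1]
t1ce  = [0.2, 0.8]
out = gated_fusion(flair, t1ce, w_a=[5.0, 5.0], w_b=[-5.0, -5.0], bias=[0.0, 0.0])
# The gate keeps mostly the stronger modality in each position.
```

The design point is that the gate is input-dependent: noisy features from one sequence can be suppressed per-element rather than averaged in.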

Chen Y, Gao L, Gao Y, Wang R, Lian J, Meng X, Duan Y, Chai L, Han H, Cheng Z, Xie Z

PubMed · Oct 24 2025
The integration of deep learning in medical imaging has significantly advanced diagnostic, therapeutic, and research outcomes. However, applying universal models across multiple modalities remains challenging due to inherent inter-modality variability. Here we present the Modality Projection Universal Model (MPUM), trained on 861 subjects, which dynamically adapts to diverse imaging modalities through a modality-projection strategy. MPUM achieves state-of-the-art, whole-body organ segmentation, providing rapid localization for computer-aided diagnosis and precise anatomical quantification to support clinical decision-making. A controller-based convolutional layer further enables saliency map visualization, enhancing model interpretability for clinical use. Beyond segmentation, MPUM reveals metabolic correlations along the brain-body axis and between distinct brain regions, providing insights into systemic and physiological interactions from a whole-body perspective. Here we show that this universal framework accelerates diagnosis, facilitates large-scale imaging analysis, and bridges anatomical and metabolic information, enabling discovery of cross-organ disease mechanisms and advancing integrative brain-body research.

Guiot J, Engelberts J, Henket M, Ernst B, Maloir Q, Louis R, Lynch DA, Humphries SM, Charbonnier JP

PubMed · Oct 24 2025
Idiopathic pulmonary fibrosis (IPF) is a progressive fibrosing interstitial lung disease associated with high morbidity and mortality despite specific anti-fibrotic therapies. Management of IPF is complex and relies on pulmonary function tests (PFT) to evaluate severity and monitor progression. CT provides non-invasive morphologic assessment, and emerging software techniques enable quantitative analysis. We included 319 individuals with IPF from the OSIC dataset. A cross-sectional analysis was performed for all patients, with a longitudinal evaluation for 143 of them. We used LungQ software (Thirona, The Netherlands) to quantify lung and pulmonary vessel volumes as well as the extent of interstitial lung disease (ILD), and to assess correlation with PFT and mortality. Quantitative extent of fibrotic abnormalities correlated with baseline FVC and DLCO (r = -0.47, p < 0.0001 and r = -0.55, p < 0.0001, respectively) and with longitudinal changes over time (r = -0.48, p < 0.0001 and r = -0.43, p < 0.0001, respectively). Median baseline extent of ILD, expressed as a percentage of lung volume, was 16.5% (10.8-25.5) and increased to 17.3% (11.6-29) on follow-up (p < 0.001). The median ILD progression was 9.8% (-9.5 to 40.0). Both vascular enlargement quantification and ILD quantification were predictive markers of death (p < 0.0001). However, the independent predictive value of vascular abnormalities could not be assessed in multivariate models due to multicollinearity with other variables. LungQ quantifies interstitial and vascular lung features and their changes over time in a large cohort of patients with IPF. Imaging markers were negatively correlated with PFT at baseline and follow-up and were predictive of mortality, confirming their potential as disease quantifiers. Further clinical validation is needed to specify the potential clinical use.
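The r values above are Pearson correlation coefficients between imaging markers and lung function. A minimal sketch of the computation, using invented fibrotic-extent and FVC values (not OSIC data) chosen to show the expected negative correlation:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical patients: higher fibrotic extent goes with lower FVC %pred.
ild_extent = [8, 12, 16, 22, 30, 41]   # ILD extent, % of lung volume
fvc_pct    = [95, 88, 80, 74, 62, 55]  # FVC, % predicted
r = pearson_r(ild_extent, fvc_pct)
print(round(r, 2))  # strongly negative, as in the study's direction of effect
```

The sketch omits the p-value; a library routine (e.g. `scipy.stats.pearsonr`) would provide both.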

Zhang R, Yi F, Mao H, Huang Z, Wang K, Zhang J

PubMed · Oct 24 2025
The brain age gap (BAG) is a neuroimaging-derived marker of accelerated brain aging. However, its clinical application faces challenges due to model inaccuracies and unclear links to disease mechanisms. This study investigates the clinical relevance of BAG across neuropsychiatric disorders, cognitive decline, mortality, and lifestyle interventions. We use data from multiple cohorts, including 38,967 participants from the UK Biobank (ages 45-82, 52.5% female), 1,402 individuals from the ADNI study (ages 55-96, 56.0% female), and 1,182 from the PPMI study (ages 45-83, 58.0% female). We develop a 3D Vision Transformer for whole-brain age estimation. Survival analysis, restricted cubic splines, and regression models assess BAG's associations with cognitive, neuropsychiatric disorders, mortality and impact of lifestyle factors. Here we show that the model achieves a mean error of 2.68 years in the UK Biobank and 2.99-3.20 years in ADNI/PPMI. Each one-year increase in BAG raises Alzheimer's risk by 16.5%, mild cognitive impairment by 4.0%, and all-cause mortality by 12%. The highest-risk group (Q4) shows a 2.8-fold increased risk of Alzheimer's disease, a 6.4-fold risk of multiple sclerosis, and a 2.4-fold higher mortality risk. Cognitive decline is most evident in Q4, particularly in reaction time and processing speed. Lifestyle interventions, especially smoking cessation, moderate alcohol consumption, and physical activity, significantly slow BAG progression in individuals with advanced neurodegeneration. BAG predicts accelerated brain aging, neuropsychiatric disorders, and mortality. Its ability to detect nonlinear cognitive thresholds and modifiability through lifestyle changes makes it useful for risk stratification and prevention.
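Per-year risk figures like those above are usually read as hazard ratios, which compound multiplicatively across years; that proportional-hazards reading is our assumption here, and the 5-year example is illustrative:

```python
def relative_risk(per_year_increase, gap_years):
    """Relative risk implied by a per-year proportional increase,
    assuming the per-year effect compounds multiplicatively
    (as hazard ratios do under proportional hazards)."""
    return (1.0 + per_year_increase) ** gap_years

# The abstract's figure: +16.5% Alzheimer's risk per year of brain age gap.
hr_5yr = relative_risk(0.165, 5)
print(round(hr_5yr, 2))  # a 5-year gap implies roughly a 2.1-fold risk
```

This makes concrete why even modest per-year effects matter: the multiplier grows geometrically with the size of the gap.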

Hu HT, Li MD, Lin XX, Cai MY, Liu S, Wu SH, Tong WJ, Ye FY, Hu JB, Ke WP, Chen LD, Yang H, Liu GJ, Wang HB, Lu MD, Huang QH, Kuang M, Wang W

PubMed · Oct 24 2025
Data heterogeneity critically limits distributed artificial intelligence (AI) in medical imaging. We propose HeteroSync Learning (HSL), a privacy-preserving framework that addresses heterogeneity through (1) a Shared Anchor Task (SAT) for cross-node representation alignment and (2) an Auxiliary Learning Architecture coordinating the SAT with local primary tasks. Validated via large-scale simulations (feature/label/quantity/combined heterogeneity) and a real-world multi-center thyroid cancer study, HSL outperforms local learning, 12 benchmark methods (FedAvg, FedProx, SplitAVG, FedRCL, FedCOME, etc.), and foundation models (e.g., CLIP), with better stability and area under the curve (AUC) gains of up to 40%, matching central learning performance. HSL achieves 0.846 AUC on out-of-distribution pediatric thyroid cancer data (outperforming the alternatives by 5.1-28.2%), demonstrating superior generalization. Visualizations confirm that HSL successfully homogenizes heterogeneous distributions. This work provides an effective solution for distributed medical AI, enabling equitable collaboration across institutions and advancing the democratization of healthcare AI.
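HSL is benchmarked against FedAvg, whose aggregation step averages client parameters weighted by local sample counts. As background, a minimal sketch of that baseline (not of HSL itself; the hospital parameter vectors and sizes are invented):

```python
def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average client parameter vectors,
    weighted by each client's local sample count."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    agg = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            agg[i] += (n / total) * w[i]
    return agg

# Three hospitals with different amounts of local data (toy parameters).
weights = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
sizes = [100, 300, 600]
print(fedavg(weights, sizes))  # approximately [0.4, 0.6]
```

Heterogeneity hurts exactly this step: when client data distributions differ, the locally trained parameters being averaged point in conflicting directions, which is the failure mode HSL's shared anchor task is designed to mitigate.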

Hankel S, Till H, Schweintzger G, Kraxner C, Singer G, Stranger N, Till T, Tschauner S

PubMed · Oct 23 2025
Accurate differentiation between skull fractures and sutures is challenging in young children. Traditional diagnostic modalities like computed tomography involve ionizing radiation, while sonography is safer but demands expertise. This study explores the application of artificial intelligence (AI) to improve diagnostic accuracy in this context. A retrospective study was performed using sonographic images of 86 children (mean age: 8.5 months) presenting with suspected skull fractures. The AI approach included binary classification and object localization, with tenfold cross-validation applied to 385 images. The study compared AI performance against nine raters with varying expertise, with and without AI assistance. EfficientNet demonstrated superior classification metrics, with the B6 variant achieving the highest F1 score (0.841) and PR AUC (0.913). YOLOv11 models underperformed compared to EfficientNet in detecting fractures and sutures. Raters significantly benefited from AI-assisted diagnostics, with F1 scores improving from 0.749 (unassisted) to 0.833 (assisted). AI models consistently outperformed unassisted human raters. This study presents the first AI model differentiating skull fractures from sutures on pediatric sonographic images, highlighting AI's potential to enhance diagnostic accuracy. Future efforts should focus on expanding datasets, validating AI models on independent cohorts, and exploring dynamic sonographic data to improve the diagnostic impact.
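The F1 scores reported above are the harmonic mean of precision and recall. A minimal sketch from confusion-matrix counts (the counts are illustrative, not the study's):

```python
def f1_score(tp, fp, fn):
    """F1 as the harmonic mean of precision and recall,
    computed from true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy counts: 80 fractures found, 15 false alarms, 15 missed.
print(round(f1_score(tp=80, fp=15, fn=15), 3))  # 0.842
```

Because F1 ignores true negatives, it suits tasks like this one where the negative class (normal sutures) dominates.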

Göktürk Y, Başarslan SK, Göktürk Ş, Kocaman H, Yıldırım H

PubMed · Oct 23 2025
Traditional diagnostic methods used by neurosurgeons are limited in their ability to address complex interactions. These limitations have necessitated the use of advanced artificial intelligence approaches capable of analyzing multidimensional data with greater precision in neurosurgical clinics. Postoperative intracranial hemorrhage is a critical complication following cerebral tumor surgery, often associated with increased morbidity and mortality. This study aimed to predict the risk of postoperative intracerebral hemorrhage in patients undergoing intracranial tumor surgery by employing machine learning (ML) algorithms for risk stratification and identifying key contributing factors. This retrospective study included 118 patients monitored in the neurosurgical intensive care unit between January 2024 and January 2025. The primary outcome was postoperative hemorrhage, defined as a radiologically confirmed hematoma ≥ 5 ml on brain CT within 24 h. Using a predefined set of clinical and biochemical parameters analyzed with SPSS and R, multiple ML algorithms were developed. To address class imbalance in the training data, the Synthetic Minority Over-sampling Technique (SMOTE) was applied. Models were evaluated using metrics including Area Under the Curve (AUC), accuracy, and F1-score, with further assessment via calibration plots and Decision Curve Analysis (DCA). The LightGBM model demonstrated a robust and balanced predictive performance, achieving a test AUC of 0.7451, an accuracy of 76.9%, a sensitivity of 77.8%, and an F1-score of 0.700. Platelet count (PLT), serum chloride (Cl), and the change in C-reactive protein from pre- to postoperative state (delta-CRP) emerged as the most influential predictors of hemorrhage. Model explainability was enhanced using SHAP and LIME analyses, and the model showed good calibration with potential clinical net benefit. 
Our study suggests that ML algorithms, particularly LightGBM, show promise for predicting postoperative hemorrhage following brain tumor surgery. Biomarkers such as platelet count, chloride, and delta-CRP offer clinically meaningful insights for early risk detection. Once externally validated, the integration of such models into clinical decision support systems could potentially improve postoperative monitoring and patient outcomes.
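SMOTE, used above to balance the training data, generates synthetic minority-class samples by interpolating between a minority sample and one of its minority-class neighbors. A minimal sketch of the interpolation step (real SMOTE first finds the k nearest minority neighbors, which is omitted here; the feature values are invented):

```python
import random

def smote_sample(x, neighbor, rng):
    """One SMOTE-style synthetic point: interpolate between a minority
    sample and a minority-class neighbor at a random fraction lam."""
    lam = rng.random()  # uniform in [0, 1)
    return [a + lam * (b - a) for a, b in zip(x, neighbor)]

rng = random.Random(0)
minority_a = [200.0, 101.0]  # e.g. [platelet count, serum chloride]
minority_b = [180.0, 99.0]
synthetic = smote_sample(minority_a, minority_b, rng)
# Each coordinate of the synthetic point lies between the two parents.
```

Libraries such as imbalanced-learn wrap this interpolation with the neighbor search and class-balancing bookkeeping.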