
Early postnatal characteristics and differential diagnosis of choledochal cyst and cystic biliary atresia.

Tian Y, Chen S, Ji C, Wang XP, Ye M, Chen XY, Luo JF, Li X, Li L

PubMed · Sep 7 2025
Choledochal cysts (CC) and cystic biliary atresia (CBA) present similarly in early infancy but require different treatment approaches. While CC surgery can be delayed until 3-6 months of age in asymptomatic patients, CBA requires intervention within 60 days to prevent cirrhosis. To develop a diagnostic model for early differentiation between these conditions. A total of 319 patients with hepatic hilar cysts (< 60 days old at surgery) were retrospectively analyzed; these patients were treated at three hospitals between 2011 and 2022. Clinical features, including biochemical markers and ultrasonographic measurements, were compared between the CC (<i>n</i> = 274) and CBA (<i>n</i> = 45) groups. Least absolute shrinkage and selection operator (LASSO) regression identified key diagnostic features, and 11 machine learning models were developed and compared. The CBA group showed higher levels of total bile acid, total bilirubin, γ-glutamyl transferase, aspartate aminotransferase, alanine aminotransferase, and direct bilirubin, while the longitudinal and transverse diameters of the cysts were larger in the CC group. The multilayer perceptron model demonstrated optimal performance, with 95.8% accuracy, 92.9% sensitivity, 96.3% specificity, and an area under the curve of 0.990. Decision curve analysis confirmed its clinical utility. Based on the model, we developed user-friendly diagnostic software for clinical implementation. Our machine learning approach differentiates CC from CBA in early infancy using routinely available clinical parameters. Early, accurate diagnosis facilitates timely surgical intervention for CBA cases, potentially improving patient outcomes.
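
The pipeline described above (sparsity-driven feature selection followed by a multilayer perceptron) can be sketched in a few lines. This is an assumed reconstruction on synthetic data, not the authors' code; an L1-penalised logistic regression stands in for LASSO in the classification setting, and all feature names and counts are placeholders.

```python
# Sketch of the described pipeline (assumed, not the authors' code):
# sparsity-based feature selection, then an MLP classifier.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the clinical table (bile acids, bilirubin, GGT,
# cyst diameters, ...): 319 cases with a ~14% CBA prevalence, as in the study.
X, y = make_classification(n_samples=319, n_features=12, n_informative=6,
                           weights=[0.86, 0.14], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# L1-penalised logistic regression plays the role of LASSO for classification.
selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5))
model = make_pipeline(StandardScaler(), selector,
                      MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                    random_state=0))
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.3f}")
```

The same skeleton generalizes to comparing the 11 candidate models by swapping the final pipeline step.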

A Deep Learning-Based Fully Automated Cardiac MRI Segmentation Approach for Tetralogy of Fallot Patients.

Chai WY, Lin G, Wang CJ, Chiang HJ, Ng SH, Kuo YS, Lin YC

PubMed · Sep 7 2025
Automated cardiac MR segmentation enables accurate and reproducible ventricular function assessment in Tetralogy of Fallot (ToF), whereas manual segmentation remains time-consuming and variable. To evaluate deep learning (DL)-based models for automatic left ventricle (LV), right ventricle (RV), and LV myocardium segmentation in ToF, compared with manual reference-standard annotations. Retrospective. 427 patients with diverse cardiac conditions (305 non-ToF, 122 ToF), with 395 for training/validation, 32 ToF for internal testing, and 12 external ToF cases for generalizability assessment. Steady-state free precession cine sequence at 1.5/3 T. U-Net, Deep U-Net, and MultiResUNet were trained under three regimes (non-ToF, ToF-only, mixed), using manual segmentations from one radiologist and one researcher (20 and 10 years of experience, respectively) as reference, with consensus for discrepancies. Performance for LV, RV, and LV myocardium was evaluated using the Dice Similarity Coefficient (DSC), Intersection over Union (IoU), and F1-score, alongside regional (basal, middle, apical) and global ventricular function comparisons against manual results. Friedman tests were applied for architecture and regime comparisons, paired Wilcoxon tests for ED-ES differences, and Pearson's r for assessing agreement in global function. The MultiResUNet model trained on the mixed dataset (ToF and non-ToF cases) achieved the best segmentation performance, with DSCs of 96.1% for LV and 93.5% for RV. In the internal test set, DSCs for LV, RV, and LV myocardium were 97.3%, 94.7%, and 90.7% at end-diastole, and 93.6%, 92.1%, and 87.8% at end-systole, with ventricular measurement correlations ranging from 0.84 to 0.99. Regional analysis showed LV DSCs of 96.3% (basal), 96.4% (middle), and 94.1% (apical), and RV DSCs of 92.8%, 94.2%, and 89.6%. External validation (n = 12) showed correlations ranging from 0.81 to 0.98.
The MultiResUNet model enabled accurate automated cardiac MRI segmentation in ToF, with the potential to streamline workflows and improve disease monitoring. Evidence Level: 3. Technical Efficacy: Stage 2.
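
The Dice Similarity Coefficient that drives the numbers above is simple to compute on binary masks. A minimal NumPy sketch (not the authors' code), with a toy 4×4 example:

```python
# Minimal sketch of the Dice Similarity Coefficient (DSC) on binary masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

# Toy masks: 4 predicted pixels, 4 true pixels, 3 shared.
p = np.zeros((4, 4)); p[0, :4] = 1
t = np.zeros((4, 4)); t[0, 1:] = 1; t[1, 0] = 1
print(round(dice(p, t), 3))  # 2*3 / (4+4) = 0.75
```

The same function, applied slice-wise, yields the basal/middle/apical regional breakdown reported in the study.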

RetinaGuard: Obfuscating Retinal Age in Fundus Images for Biometric Privacy Preserving

Zhengquan Luo, Chi Liu, Dongfu Xiao, Zhen Yu, Yueye Wang, Tianqing Zhu

arXiv preprint · Sep 7 2025
The integration of AI with medical images enables the extraction of implicit image-derived biomarkers for precise health assessment. Retinal age, a biomarker predicted from fundus images, has recently proven to be a predictor of systemic disease risks, behavioral patterns, aging trajectory, and even mortality. However, the capability to infer such sensitive biometric data raises significant privacy risks, where unauthorized use of fundus images could lead to bioinformation leakage, breaching individual privacy. In response, we formulate a new research problem of biometric privacy associated with medical images and propose RetinaGuard, a novel privacy-enhancing framework that employs a feature-level generative adversarial masking mechanism to obscure retinal age while preserving image visual quality and disease diagnostic utility. The framework further utilizes a novel multiple-to-one knowledge distillation strategy, incorporating a retinal foundation model and diverse surrogate age encoders, to enable a universal defense against black-box age prediction models. Comprehensive evaluations confirm that RetinaGuard successfully obfuscates retinal age prediction with minimal impact on image quality and pathological feature representation. RetinaGuard is also flexible for extension to other medical image-derived biomarkers.
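
The multiple-to-one distillation idea can be illustrated numerically. The following is an illustrative sketch under stated assumptions, not RetinaGuard's code: several surrogate age encoders are pooled (here by a simple mean, since the abstract does not specify the aggregation) into one soft target that a student defense is trained against.

```python
# Illustrative sketch (assumptions, not RetinaGuard's actual code) of a
# multiple-to-one distillation target built from surrogate age encoders.
import numpy as np

rng = np.random.default_rng(0)
n_surrogates, batch = 3, 8

# Hypothetical retinal-age predictions (years) from 3 black-box surrogates.
teacher_preds = rng.normal(60.0, 5.0, size=(n_surrogates, batch))

# Pool many teachers into one target (mean is an assumption for illustration).
target = teacher_preds.mean(axis=0)

# Hypothetical student predictions on masked images; a distillation loss
# measures how far the student is from the pooled teacher consensus.
student_preds = rng.normal(60.0, 5.0, size=batch)
distill_loss = np.mean((student_preds - target) ** 2)
print(f"distillation MSE: {distill_loss:.3f}")
```

In the actual framework this loss would be one term alongside the adversarial masking and utility-preservation objectives.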

Multi-Strategy Guided Diffusion via Sparse Masking Temporal Reweighting Distribution Correction

Zekun Zhou, Yanru Gong, Liu Shi, Qiegen Liu

arXiv preprint · Sep 7 2025
Diffusion models have demonstrated remarkable generative capabilities in image processing tasks. We propose a Sparse condition Temporal Reweighted Integrated Distribution Estimation guided diffusion model (STRIDE) for sparse-view CT reconstruction. Specifically, we design a joint training mechanism guided by sparse conditional probabilities to facilitate the model's effective learning of missing-projection-view completion and global information modeling. Based on systematic theoretical analysis, we propose a temporally varying sparse-condition reweighting guidance strategy that dynamically adjusts weights during the progressive denoising process from pure noise to the real image, enabling the model to progressively perceive sparse-view information. Linear regression is employed to correct distributional shifts between known and generated data, mitigating inconsistencies arising during the guidance process. Furthermore, we construct a dual-network parallel architecture to perform global correction and optimization across multiple sub-frequency components, thereby effectively improving the model's capability in both detail restoration and structural preservation, ultimately achieving high-quality image reconstruction. Experimental results on both public and real datasets demonstrate that the proposed method achieves a best-case improvement of 2.58 dB in PSNR, an increase of 2.37% in SSIM, and a reduction of 0.236 in MSE compared to the best-performing baseline methods. The reconstructed images exhibit excellent generalization and robustness in terms of structural consistency, detail restoration, and artifact suppression.
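
The PSNR gains quoted above can be made concrete with a short sketch (not the authors' code). PSNR is derived directly from the MSE between a reconstruction and the reference, so a 2.58 dB gain corresponds to a fixed multiplicative MSE reduction.

```python
# Sketch of PSNR-based reconstruction scoring; synthetic images, not CT data.
import numpy as np

def psnr(ref: np.ndarray, img: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
baseline = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)  # noisier recon
proposed = np.clip(ref + rng.normal(0, 0.02, ref.shape), 0, 1)  # cleaner recon
gain = psnr(ref, proposed) - psnr(ref, baseline)
print(f"PSNR gain: {gain:.2f} dB")
```

A gain of x dB implies the proposed reconstruction's MSE is 10^(-x/10) times the baseline's, which is how the reported 2.58 dB and 0.236 MSE reduction relate.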

AI-Based Applied Innovation for Fracture Detection in X-rays Using Custom CNN and Transfer Learning Models

Amna Hassan, Ilsa Afzaal, Nouman Muneeb, Aneeqa Batool, Hamail Noor

arXiv preprint · Sep 7 2025
Bone fractures present a major global health challenge, often resulting in pain, reduced mobility, and productivity loss, particularly in low-resource settings where access to expert radiology services is limited. Conventional imaging methods suffer from high costs, radiation exposure, and dependency on specialized interpretation. To address this, we developed an AI-based solution for automated fracture detection from X-ray images using a custom Convolutional Neural Network (CNN) and benchmarked it against transfer learning models including EfficientNetB0, MobileNetV2, and ResNet50. Training was conducted on the publicly available FracAtlas dataset, comprising 4,083 anonymized musculoskeletal radiographs. The custom CNN achieved 95.96% accuracy, 0.94 precision, 0.88 recall, and an F1-score of 0.91 on the FracAtlas dataset. Although the transfer learning models (EfficientNetB0, MobileNetV2, ResNet50) performed poorly in this specific setup, these results should be interpreted in light of class imbalance and dataset limitations. This work highlights the promise of lightweight CNNs for detecting fractures in X-rays and underscores the importance of fair benchmarking, diverse datasets, and external validation for clinical translation.
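
As a quick consistency check on the reported metrics (not from the paper's code): the F1-score is the harmonic mean of precision and recall, F1 = 2PR / (P + R), and the quoted values line up.

```python
# Verify that the reported F1-score follows from the reported precision/recall.
precision, recall = 0.94, 0.88
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 0.91, matching the reported F1-score
```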

MedSeqFT: Sequential Fine-tuning Foundation Models for 3D Medical Image Segmentation

Yiwen Ye, Yicheng Wu, Xiangde Luo, He Zhang, Ziyang Chen, Ting Dang, Yanning Zhang, Yong Xia

arXiv preprint · Sep 7 2025
Foundation models have become a promising paradigm for advancing medical image analysis, particularly for segmentation tasks where downstream applications often emerge sequentially. Existing fine-tuning strategies, however, remain limited: parallel fine-tuning isolates tasks and fails to exploit shared knowledge, while multi-task fine-tuning requires simultaneous access to all datasets and struggles with incremental task integration. To address these challenges, we propose MedSeqFT, a sequential fine-tuning framework that progressively adapts pre-trained models to new tasks while refining their representational capacity. MedSeqFT introduces two core components: (1) Maximum Data Similarity (MDS) selection, which identifies downstream samples most representative of the original pre-training distribution to preserve general knowledge, and (2) Knowledge and Generalization Retention Fine-Tuning (K&G RFT), a LoRA-based knowledge distillation scheme that balances task-specific adaptation with the retention of pre-trained knowledge. Extensive experiments on two multi-task datasets covering ten 3D segmentation tasks demonstrate that MedSeqFT consistently outperforms state-of-the-art fine-tuning strategies, yielding substantial performance gains (e.g., an average Dice improvement of 3.0%). Furthermore, evaluations on two unseen tasks (COVID-19-20 and Kidney) verify that MedSeqFT enhances transferability, particularly for tumor segmentation. Visual analyses of loss landscapes and parameter variations further highlight the robustness of MedSeqFT. These results establish sequential fine-tuning as an effective, knowledge-retentive paradigm for adapting foundation models to evolving clinical tasks. Code will be released.
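
The MDS selection step lends itself to a small sketch. This is one simple realisation under stated assumptions, not necessarily the paper's: downstream samples are ranked by cosine similarity of their embeddings to the centroid of the pre-training feature distribution, and the most "pre-training-like" samples are kept to anchor general knowledge.

```python
# Illustrative sketch of Maximum Data Similarity (MDS) selection:
# centroid + cosine similarity is an assumed realisation, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
pretrain_feats = rng.normal(0.0, 1.0, size=(500, 16))    # pre-training embeddings
downstream_feats = rng.normal(0.5, 1.0, size=(100, 16))  # new-task embeddings

centroid = pretrain_feats.mean(axis=0)
sims = (downstream_feats @ centroid) / (
    np.linalg.norm(downstream_feats, axis=1) * np.linalg.norm(centroid) + 1e-12)

k = 10
selected = np.argsort(sims)[-k:]  # k samples closest to the pre-training distribution
print(selected.shape)
```

In the full framework these selected samples would feed the K&G RFT distillation term while the remaining data drives task-specific adaptation.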

Multi-task learning for classification and prediction of adolescent idiopathic scoliosis based on fringe-projection three-dimensional imaging.

Feng CK, Chen YJ, Dinh QT, Tran KT, Liu CY

PubMed · Sep 6 2025
This study aims to address the limitations of radiographic imaging and single-task learning models in adolescent idiopathic scoliosis assessment by developing a noninvasive, radiation-free diagnostic framework. A multi-task deep learning model was trained using structured back surface data acquired via fringe projection three-dimensional imaging. The model was designed to simultaneously predict the Cobb angle, curve type (thoracic, lumbar, mixed, none), and curve direction (left, right, none) by learning shared morphological features. The multi-task model achieved a mean absolute error (MAE) of 2.9° and a root mean square error (RMSE) of 6.9° for Cobb angle prediction, outperforming the single-task baseline (5.4° MAE, 12.5° RMSE). It showed strong correlation with radiographic measurements (R = 0.96, R² = 0.91). For curve classification, it reached 89% sensitivity in lumbar and mixed types, and 80% and 75% sensitivity for right and left directions, respectively, with an 87% positive predictive value for right-sided curves. The proposed multi-task learning model demonstrates that jointly learning related clinical tasks allows for the extraction of more robust and clinically meaningful geometric features from surface data. It outperforms traditional single-task approaches in both accuracy and stability. This framework provides a safe, efficient, and non-invasive alternative to X-ray-based scoliosis assessment and has the potential to support real-time screening and long-term monitoring of adolescent idiopathic scoliosis in clinical practice.
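
The Cobb-angle error metrics quoted above (MAE and RMSE) are straightforward to compute; a minimal sketch on hypothetical angles, not the authors' data:

```python
# Sketch of the Cobb-angle error metrics: mean absolute error and
# root mean square error against radiographic ground truth.
import numpy as np

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

# Hypothetical Cobb angles (degrees) for five patients.
truth = [12.0, 25.0, 33.0, 18.0, 41.0]
pred  = [14.0, 23.0, 36.0, 18.0, 38.0]
print(mae(truth, pred), round(rmse(truth, pred), 2))  # 2.0 and 2.28
```

RMSE penalises large errors more than MAE, which is why the study reports both: the gap between the two (2.9° vs 6.9°) signals a few cases with large angle errors.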

Interpretable machine learning model for characterizing magnetic susceptibility-based biomarkers in first episode psychosis.

Franco P, Montalba C, Caulier-Cisterna R, Milovic C, González A, Ramirez-Mahaluf JP, Undurraga J, Salas R, Crossley N, Tejos C, Uribe S

PubMed · Sep 6 2025
Several studies have shown changes in neurochemicals within the deep-brain nuclei of patients with psychosis. These alterations indicate a dysfunction in dopamine within subcortical regions affected by fluctuations in iron concentrations. Quantitative Susceptibility Mapping (QSM) is a method employed to measure iron concentration, offering a potential means to identify dopamine dysfunction in these subcortical areas. This study employed a random forest algorithm to predict susceptibility features of First-Episode Psychosis (FEP) and the response to antipsychotics using SHapley Additive exPlanations (SHAP) values. 3D multi-echo Gradient Echo (GRE) and T1-weighted GRE images were obtained in 61 healthy volunteers (HV) and 76 FEP patients (32 % Treatment-Resistant Schizophrenia (TRS) and 68 % Treatment-Responsive Schizophrenia (RS)) using a 3T Philips Ingenia MRI scanner. QSM and R2* maps were reconstructed and averaged over twenty-two segmented regions of interest. We used Sequential Forward Selection as the feature selection algorithm and a Random Forest as the model to predict FEP patients and their response to antipsychotics. We further applied the SHAP framework to identify informative features and their interpretations. Finally, multiple correlation patterns among magnetic susceptibility parameters were extracted using hierarchical clustering. Our approach classifies HV vs FEP patients with 76.48 ± 10.73 % accuracy (using four features) and TRS vs RS patients with 76.43 ± 12.57 % accuracy (using four features), using 10-fold stratified cross-validation. The SHAP analyses revealed nonlinear relationships among the four selected features. Hierarchical clustering revealed two groups of correlated features for each study. Early prediction of treatment response enables tailored strategies for FEP patients with treatment resistance, ensuring timely and effective interventions.
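
The Sequential Forward Selection plus Random Forest workflow maps directly onto scikit-learn. An assumed reconstruction on synthetic data (not the authors' code), with 22 features standing in for the QSM/R2* region-of-interest averages and four features selected, as in the study:

```python
# Sketch (assumed workflow, not the authors' code): Sequential Forward
# Selection feeding a Random Forest classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for QSM/R2* averages over 22 regions of interest.
X, y = make_classification(n_samples=137, n_features=22, n_informative=5,
                           random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
sfs = SequentialFeatureSelector(rf, n_features_to_select=4,
                                direction="forward", cv=5)
X_sel = sfs.fit_transform(X, y)

# 10-fold stratified CV, as in the study (sklearn stratifies by default
# for classifiers).
acc = cross_val_score(rf, X_sel, y, cv=10).mean()
print(X_sel.shape, f"{acc:.2f}")
```

SHAP values would then be computed on the fitted forest over the four retained features to rank and interpret them.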

Brain Tumor Detection Through Diverse CNN Architectures in IoT Healthcare Industries: Fast R-CNN, U-Net, Transfer Learning-Based CNN, and Fully Connected CNN

Mohsen Asghari Ilani, Yaser M. Banad

arXiv preprint · Sep 6 2025
Artificial intelligence (AI)-powered deep learning has advanced brain tumor diagnosis in Internet of Things (IoT)-healthcare systems, achieving high accuracy with large datasets. Brain health is critical to human life, and accurate diagnosis is essential for effective treatment. Magnetic Resonance Imaging (MRI) provides key data for brain tumor detection, serving as a major source of big data for AI-driven image classification. In this study, we classified glioma, meningioma, and pituitary tumors from MRI images using Region-based Convolutional Neural Network (R-CNN) and U-Net architectures. We also applied Convolutional Neural Networks (CNN) and CNN-based transfer learning models such as Inception-V3, EfficientNetB4, and VGG19. Model performance was assessed using F-score, recall, precision, and accuracy. Fast R-CNN achieved the best results with 99% accuracy, 98.5% F-score, 99.5% Area Under the Curve (AUC), 99.4% recall, and 98.5% precision. Combining R-CNN, U-Net, and transfer learning enables earlier diagnosis and more effective treatment in IoT-healthcare systems, improving patient outcomes. IoT devices such as wearable monitors and smart imaging systems continuously collect real-time data, which AI algorithms analyze to provide immediate insights for timely interventions and personalized care. For external cohort cross-dataset validation, EfficientNetB2 achieved the strongest performance among fine-tuned EfficientNet models, with 92.11% precision, 92.11% recall/sensitivity, 95.96% specificity, 92.02% F1-score, and 92.23% accuracy. These findings underscore the robustness and reliability of AI models in handling diverse datasets, reinforcing their potential to enhance brain tumor classification and patient care in IoT healthcare environments.
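
The AUC figure reported above can be computed without plotting a ROC curve at all: it equals the Mann-Whitney rank statistic, the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch (not from the paper):

```python
# ROC AUC via the Mann-Whitney rank statistic: P(score_pos > score_neg),
# counting ties as 1/2.
import numpy as np

def roc_auc(pos, neg):
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    gt = (pos[:, None] > neg[None, :]).sum()   # positive outranks negative
    eq = (pos[:, None] == neg[None, :]).sum()  # ties
    return (gt + 0.5 * eq) / (len(pos) * len(neg))

# Toy scores: every tumor case outranks every non-tumor case → AUC = 1.0.
print(roc_auc([0.9, 0.8, 0.7], [0.6, 0.4]))
```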

A novel multimodal framework combining habitat radiomics, deep learning, and conventional radiomics for predicting MGMT gene promoter methylation in Glioma: Superior performance of integrated models.

Zhu FY, Chen WJ, Chen HY, Ren SY, Zhuo LY, Wang TD, Ren CC, Yin XP, Wang JN

PubMed · Sep 6 2025
The present study aimed to develop a noninvasive predictive framework that integrates clinical data, conventional radiomics, habitat imaging, and deep learning for the preoperative stratification of MGMT gene promoter methylation in glioma. This retrospective study included 410 patients from the University of California, San Francisco, USA, and 102 patients from our hospital. Seven models were constructed using preoperative contrast-enhanced T1-weighted MRI with gadobenate dimeglumine as the contrast agent. Habitat radiomics features were extracted from tumor subregions by k-means clustering, while deep learning features were acquired using a 3D convolutional neural network. Model performance was evaluated based on area under the curve (AUC) value, F1-score, and decision curve analysis. The combined model integrating clinical data, conventional radiomics, habitat imaging features, and deep learning achieved the highest performance (training AUC = 0.979 [95 % CI: 0.969-0.990], F1-score = 0.944; testing AUC = 0.777 [0.651-0.904], F1-score = 0.711). Among the single-modality models, habitat radiomics outperformed the other models (training AUC = 0.960 [0.954-0.983]; testing AUC = 0.724 [0.573-0.875]). The proposed multimodal framework considerably enhances preoperative prediction of MGMT gene promoter methylation, with habitat radiomics highlighting the critical role of tumor heterogeneity. This approach provides a scalable tool for personalized management of glioma.
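
The habitat step described above, partitioning the tumor into subregions by k-means, can be sketched on 1D voxel intensities. This is an illustrative reconstruction under stated assumptions, not the authors' code; the number of habitats and the intensity values are placeholders, since the abstract does not state them.

```python
# Sketch (assumptions, not the authors' code) of habitat definition by
# k-means clustering of voxel intensities inside a tumor mask; radiomics
# features would then be extracted per habitat.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical contrast-enhanced T1 intensities for voxels inside the ROI.
voxels = np.concatenate([rng.normal(100, 10, 400),   # e.g. one subregion
                         rng.normal(160, 10, 300)])  # e.g. another subregion

k = 2  # number of habitats (assumed; not stated in the abstract)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
    voxels.reshape(-1, 1))
habitat_sizes = np.bincount(labels)
print(habitat_sizes)  # voxel count per habitat
```

In practice the clustering would run on multi-channel voxel features, and each habitat's radiomics vector would join the clinical, conventional-radiomics, and deep learning features in the combined model.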
