
Paramanandham N, Rajendiran K, Pavithra LK, Niranjan M, Shibu TM, Santosh A, Kumar A

PubMed paper · Oct 7, 2025
Image segmentation is an essential research field in image processing that has evolved from traditional techniques to modern deep learning methods. In medical image processing, the primary goal of segmentation is to delineate organs, lesions, or tumors. Segmenting brain tumors is particularly difficult because gliomas vary widely in intensity and size. Clinical segmentation typically requires a high-quality image with relevant features, plus domain experts, for the best results; because gliomas are highly malignant, automatic segmentation is therefore a practical necessity. Encoder-decoder architectures, popular as they are, still leave open research problems, such as reducing the number of false positives and false negatives; these models also sometimes fail to capture the finest boundaries, producing jagged or inaccurate contours after segmentation. This article introduces a novel and efficient method for segmenting the tumorous region in brain images that addresses these gaps in recent state-of-the-art deep learning segmentation approaches. The proposed four-stage 2D-VNet++ is an efficient deep learning tumor segmentation network that introduces a context-boosting framework and a custom loss function. The results show that the proposed model achieves a Dice score of 99.287, a Jaccard similarity index of 99.642, and a Tversky index of 99.743, outperforming recent state-of-the-art techniques such as 2D-VNet, Attention ResUNet with Guided Decoder (ARU-GD), MultiResUNet, 2D U-Net, LinkNet, TransUNet, and 3D U-Net.
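The abstract does not specify the custom loss, but given that a Tversky index is reported, a Tversky loss is a natural candidate: its alpha/beta weights trade off false positives against false negatives, the two failure modes the authors highlight. A minimal PyTorch sketch (the function name and defaults are illustrative assumptions, not the published implementation):

```python
import torch

def tversky_loss(pred, target, alpha=0.5, beta=0.5, eps=1e-6):
    """Tversky loss for binary segmentation.

    pred:   (N, H, W) predicted foreground probabilities.
    target: (N, H, W) binary ground-truth masks.
    alpha weights false positives, beta weights false negatives;
    alpha = beta = 0.5 reduces to the Dice loss.
    """
    pred = pred.reshape(pred.size(0), -1)
    target = target.reshape(target.size(0), -1)
    tp = (pred * target).sum(dim=1)          # true positives
    fp = (pred * (1 - target)).sum(dim=1)    # false positives
    fn = ((1 - pred) * target).sum(dim=1)    # false negatives
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return (1 - tversky).mean()
```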

Sharma N, Duvuri S, Rastogi A, Singh SS, Garg N, Kumar V

PubMed paper · Oct 7, 2025
Medical imaging has revolutionized disease detection and patient care; however, conventional modalities such as Magnetic Resonance Imaging (MRI), Computed Tomography (CT), ultrasound, and optical imaging have inherent limitations in sensitivity, penetration depth, and safety. Terahertz (THz) imaging is an emerging, non-ionizing technique that offers high sensitivity to water content and molecular composition, making it particularly suitable for skin diagnostics. This review provides a comparative analysis of transmission- and reflection-mode THz imaging, with a detailed focus on the two primary reflection techniques: Terahertz Pulsed Imaging (THz-PI) and Continuous-Wave Terahertz Imaging (CW-THz). Their working principles, benefits, limitations, and clinical relevance are critically evaluated. Reflection-mode THz imaging shows strong potential for biological tissue analysis, offering high contrast for detecting skin malignancies, assessing hydration levels, monitoring wound healing, and evaluating transdermal drug delivery. Despite ongoing challenges in penetration depth and real-time imaging, advancements in AI-based analysis, multimodal integration, and system miniaturization are progressively enhancing its clinical applicability. This review serves as a comprehensive resource for researchers and clinicians aiming to integrate THz imaging into skin diagnostics. It highlights the transformative potential of THz technology in facilitating early disease detection, enabling personalized treatment strategies, and advancing the future of biomedical imaging.

Mohammed Alsubaie, Wenxi Liu, Linxia Gu, Ovidiu C. Andronesi, Sirani M. Perera, Xianqi Li

arXiv preprint · Oct 7, 2025
Magnetic Resonance Imaging (MRI) is a critical tool in modern medical diagnostics, yet its prolonged acquisition time remains a major limitation, especially in time-sensitive clinical scenarios. While undersampling strategies can accelerate image acquisition, they often result in image artifacts and degraded quality. Recent diffusion models have shown promise for reconstructing high-fidelity images from undersampled data by learning powerful image priors; however, most existing approaches either (i) rely on unsupervised score functions without paired supervision or (ii) apply data consistency only as a post-processing step. In this work, we introduce a conditional denoising diffusion framework with iterative data-consistency correction, which differs from prior methods by embedding the measurement model directly into every reverse diffusion step and training the model on paired undersampled and ground-truth data. This hybrid design bridges generative flexibility with explicit enforcement of MRI physics. Experiments on the fastMRI dataset demonstrate that our framework consistently outperforms recent state-of-the-art deep learning and diffusion-based methods in SSIM, PSNR, and LPIPS, with LPIPS capturing perceptual improvements more faithfully. These results demonstrate that integrating conditional supervision with iterative consistency updates yields substantial improvements in both pixel-level fidelity and perceptual realism, establishing a principled and practical advance toward robust, accelerated MRI reconstruction.
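The central idea, enforcing the measurement model inside every reverse step rather than correcting afterwards, can be sketched as a DDPM-style sampler with a k-space data-consistency projection per iteration. The names and the conditioning interface below are assumptions; the paper's actual network, schedule, and conditioning differ:

```python
import torch

def data_consistency(x, y, mask):
    """Overwrite the sampled k-space locations of image x with measurements y."""
    k = torch.fft.fft2(x)
    k = torch.where(mask, y, k)            # enforce acquired frequencies
    return torch.fft.ifft2(k).real

@torch.no_grad()
def reconstruct(eps_model, y, mask, betas):
    """DDPM-style reverse process with a data-consistency step each iteration.

    eps_model: conditional noise predictor eps_model(x_t, t) -> eps_hat
               (hypothetical signature; the paper's model is additionally
               conditioned on the undersampled input).
    y:         undersampled k-space measurements (complex tensor).
    mask:      boolean sampling mask, True where k-space was acquired.
    betas:     noise schedule of shape (T,).
    """
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(y.shape, dtype=torch.float32)    # start from Gaussian noise
    for t in reversed(range(len(betas))):
        eps_hat = eps_model(x, t)
        # Standard DDPM mean update.
        coef = (1 - alphas[t]) / torch.sqrt(1 - alpha_bars[t])
        x = (x - coef * eps_hat) / torch.sqrt(alphas[t])
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
        # Key difference from post-hoc correction: MRI physics is enforced
        # inside every reverse step, not only at the end.
        x = data_consistency(x, y, mask)
    return x
```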

Luiken I, Lemke T, Komenda A, Marka AW, Kim SH, Graf MM, Ziegelmayer S, Weller D, Mertens CJ, Bressem KK, Makowski MR, Adams LC, Prucker P, Busch F

PubMed paper · Oct 7, 2025
To prospectively evaluate and directly compare the performance of three commercial AI algorithms (Gleamer, AZmed, and Radiobotics) for detecting fractures, dislocations, and joint effusions across multiple anatomical regions in real-world adult clinical radiography. In this single-center, prospective technical performance evaluation study, we assessed these algorithms on radiographs from adult patients (n = 1037; 2926 radiographs; 22 anatomical regions) at [anonymized] (January-March 2025). Radiologists' reports served as the reference standard, with CT adjudication when available. Sensitivity, specificity, accuracy, and AUC were calculated; AUCs were compared using Bonferroni-corrected DeLong tests. Fractures were identified in 29.60 % of patients; 13.69 % had acute fractures and 6.65 % had multiple fractures. For all fractures, Gleamer (AUC 83.95 %, sensitivity 75.57 %, specificity 92.33 %) and AZmed (AUC 84.88 %, sensitivity 79.48 %, specificity 90.27 %) outperformed Radiobotics (AUC 77.24 %, sensitivity 60.91 %, specificity 93.56 %). For acute fractures, AUCs were comparable (range: 84.81-87.78 %). For multiple fractures, performance was limited (AUCs 64.17-73.40 %). AZmed had higher AUC for dislocation (61.85 % vs. 54.48 % for Gleamer), while Gleamer and Radiobotics outperformed AZmed for effusion (AUC 69.59 % and 73.63 % vs. 57.99 %). No algorithm exceeded 91 % accuracy for acute fractures. In this real-world, single-center study, commercial AI algorithms showed moderate to high performance for straightforward fracture detection but limited accuracy for complex scenarios such as multiple fractures and dislocations. Current tools should be used as adjuncts rather than replacements for radiologists and reporting radiographers. Multicenter validation and more diverse training data are necessary to improve generalizability and robustness.
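For reference, the threshold-based metrics reported here can be computed from algorithm scores and the radiologist reference standard with a few lines of scikit-learn; the Bonferroni-corrected DeLong AUC comparisons need a separate implementation, since DeLong's test is not part of scikit-learn. A sketch with assumed inputs:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate(y_true, y_score, threshold=0.5):
    """Per-algorithm metrics against a reference standard (illustrative).

    y_true:  1 = finding present per radiologist report, 0 = absent.
    y_score: algorithm confidence scores in [0, 1].
    """
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "auc": roc_auc_score(y_true, y_score),
    }
```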

Alorf A

PubMed paper · Oct 7, 2025
Parkinson's disease (PD) is a neurodegenerative disorder that affects both motor and nonmotor function and is more prevalent in older adults. PD is preceded by a prodromal stage that begins well before the typical symptoms of the disease appear; if patients are diagnosed and managed at this initial stage, their quality of life can be preserved. Magnetic Resonance Imaging (MRI) is a widely used neuroimaging modality for diagnosing brain-related diseases. Current studies of PD classification rely mostly on T1-weighted MRI or other modalities; T2-FLAIR MRI, including multimodal techniques that employ it, remains understudied despite its ability to reliably identify white matter lesions in the brain, which directly aids PD diagnosis. In this study, two networks based on deep learning and machine learning are proposed for earlier and more accurate disease classification using multimodal data: T1-weighted MRI, T2-FLAIR MRI, and Montreal Cognitive Assessment (MoCA) scores. The datasets were obtained from the Parkinson's Progression Markers Initiative (PPMI), an online longitudinal study. The first network is an ensemble that combines three deep learning models, MobileNet, EfficientNet, and a custom Convolutional Neural Network (CNN); the second is a multimodal fusion network that blends a custom CNN trained on both MRI modalities with a multilayer perceptron (MLP) trained on the MoCA scores, followed by an attention module. Both networks achieve strong results across evaluation metrics: the ensemble model attained an accuracy of 97.1 %, a sensitivity of 96.2 %, a precision of 96.4 %, an F1 score of 96.3 %, and a specificity of 97.4 %, while the data fusion model achieved an accuracy of 97.9 %, a sensitivity of 97.1 %, a precision of 97.6 %, an F1 score of 97.3 %, and a specificity of 98 %. Grad-CAM analysis was employed to visualize the key brain regions contributing to model decisions, enhancing transparency and clinical relevance.
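The described fusion architecture, a CNN branch for the two MRI modalities and an MLP branch for MoCA scores combined through an attention module, might look roughly like the following PyTorch sketch; all layer sizes and the gating scheme are illustrative assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Illustrative multimodal fusion of MRI features and a MoCA score."""

    def __init__(self, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(   # MRI branch (T1 + T2-FLAIR as 2 channels)
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.mlp = nn.Sequential(nn.Linear(1, 32), nn.ReLU())       # MoCA branch
        self.attn = nn.Sequential(nn.Linear(64, 64), nn.Sigmoid())  # feature gate
        self.head = nn.Linear(64, n_classes)

    def forward(self, mri, moca):
        f = torch.cat([self.cnn(mri), self.mlp(moca)], dim=1)  # (N, 64)
        return self.head(f * self.attn(f))                     # gated fusion

# Example: a batch of 4 paired samples.
logits = FusionNet()(torch.randn(4, 2, 128, 128), torch.randn(4, 1))
```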

Di Cosmo M, Migliorelli G, Villani FP, Francioni M, Muçaj A, Frontoni E, Moccia S, Fiorentino MC

PubMed paper · Oct 7, 2025
The automatic identification of coronary stenosis in x-ray coronary angiography (XCA) is hindered by variability in imaging protocols and patient characteristics across hospitals, which leads to significant domain shifts. These shifts impair the ability of algorithms to generalize across diverse clinical environments. This study addresses these issues by proposing FedStenoNet, a personalized federated learning (PFL) framework tailored for enhanced stenosis detection. In place of a single global model, FedStenoNet shares only backbone weights across clients and customizes the model to each client's specific data distribution. The framework also incorporates histogram matching to tackle inter-dataset variability and a novel test-time adaptation algorithm to mitigate intra-dataset variability. Evaluation of FedStenoNet across three non-IID (non-independent and identically distributed) datasets (one released with this study) demonstrated an average F1-score of 50.82%. FedStenoNet shows promising diagnostic accuracy in a domain where achieving high performance has proven difficult. By managing domain shifts via FedStenoNet, this study sets a promising direction for future research, further supported by the release of one XCA dataset.
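The partial-sharing scheme, averaging only backbone weights while each client keeps a personalized remainder, can be sketched as a restricted FedAvg step; the parameter-name prefix below is a hypothetical convention, not FedStenoNet's actual code:

```python
import copy
import torch

def aggregate_backbones(client_models, backbone_prefix="backbone."):
    """FedAvg restricted to backbone parameters (personalization by
    partial sharing). Each client keeps its own head; only parameters
    whose names start with backbone_prefix are averaged and broadcast."""
    states = [m.state_dict() for m in client_models]
    avg = {
        k: torch.stack([s[k].float() for s in states])
             .mean(dim=0).to(states[0][k].dtype)
        for k in states[0]
        if k.startswith(backbone_prefix)
    }
    for m in client_models:
        sd = m.state_dict()
        sd.update(copy.deepcopy(avg))   # overwrite the shared backbone only
        m.load_state_dict(sd)
```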

Yang H, Wang W, Zhao X, Xuan Q, Jiang C, Zhao B

PubMed paper · Oct 7, 2025
To develop and validate predictive models based on diffusion-weighted MRI (DWI-MRI) for assessing the prognosis of patients with acute ischemic stroke (AIS) treated with intravenous thrombolysis, and to compare the performance of deep learning versus traditional machine learning methods. A retrospective analysis was conducted on 682 AIS patients from two hospitals. Data from Hospital 1 were divided into a training set (70%) and a test set (30%), while data from Hospital 2 were used for external validation. Five predictive models were developed: Model A (clinical features), Model B (radiomic features based on DWI-MRI), Model C (deep learning features), Model D (clinical + radiomic features), and Model E (clinical + deep learning features). Performance metrics included area under the curve (AUC), sensitivity, specificity, and accuracy. In the test set, Models A, B, and C achieved AUCs of 0.760, 0.820, and 0.857, respectively. The combined models, D and E, showed superior performance with AUCs of 0.904 and 0.925, respectively. Model E outperformed Model D and also demonstrated robust performance in external validation (AUC = 0.937). Deep learning models integrating DWI-MRI and clinical features outperformed traditional methods, demonstrating strong generalizability in external validation. These models may support clinical decision-making in AIS prognosis.
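Conceptually, a combined model like Model E concatenates tabular clinical variables with learned image features and fits a classifier on top. The sketch below uses synthetic placeholder features and labels purely to illustrate the pipeline shape; it reproduces none of the study's data or results:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
deep_feats = rng.normal(size=(682, 128))   # placeholder DWI-MRI deep features
clinical = rng.normal(size=(682, 10))      # placeholder clinical variables
y = rng.integers(0, 2, size=682)           # placeholder prognosis labels

X = np.hstack([clinical, deep_feats])      # "Model E"-style concatenation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```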

Hu M, Zhang Q, Wei Z, Jia P, Yuan M, Yu H, Yin XC, Peng J

PubMed paper · Oct 7, 2025
To overcome reliance on large-scale, costly labeled datasets and the variability of manual annotation in periapical film segmentation, this study develops a self-supervised learning framework that requires only limited labeled data, enhancing practical applicability while reducing annotation effort. The proposed two-stage framework comprises: 1) Self-supervised pre-training: a Vision Transformer (ViT), initialized with weights from the DINOv2 model pre-trained on 142M natural images (LVD-142M), undergoes further self-supervised pre-training on our dataset of 74,292 unlabeled periapical films using student-teacher contrastive learning. 2) Fine-tuning: the domain-adapted ViT is fine-tuned with a Mask2Former head on only 229 labeled films to segment seven critical dental structures (tooth, pulp, crown, fillings, root canal fillings, caries, and periapical lesions). The domain-adapted self-supervised method significantly outperformed traditional fully supervised models such as U-Net and DeepLabV3+ (average Dice coefficient: 74.77% vs 33.53%-41.55%; an 80%-123% relative improvement). Comprehensive cross-validated comparison with cutting-edge SSL methods demonstrated the superiority of our DINOv2-based approach (74.77 ± 1.87%) over MAE (72.53 ± 1.90%), MoCov3 (65.92 ± 1.68%), and BEiTv3 (65.17 ± 1.77%). The method also surpassed its supervised Mask2Former counterpart with statistical significance (p < 0.01). This two-stage, domain-specific self-supervised framework effectively learns robust anatomical features, enabling accurate, reliable periapical film segmentation from very limited annotations and addressing the challenge of labeled-data scarcity in medical imaging. The approach provides a feasible pathway for developing AI-assisted diagnostic tools that can improve diagnostic accuracy through consistent segmentation and enhance workflow efficiency by reducing manual analysis time, especially in resource-constrained dental practices.
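Stage 2's idea, reusing a DINOv2 backbone's patch tokens for dense prediction, can be illustrated with a much simpler head than Mask2Former. The sketch below loads the public DINOv2 ViT-B/14 from torch.hub (fetching weights at runtime) and attaches a linear per-patch classifier; the head design, class count (seven structures plus background), and frozen-backbone choice are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Public DINOv2 backbone; the paper's extra dental-domain pre-training
# stage is omitted here.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14")

class SegHead(nn.Module):
    """Minimal linear head over patch tokens — a stand-in for Mask2Former."""

    def __init__(self, backbone, n_classes=8, dim=768, patch=14):
        super().__init__()
        self.backbone, self.patch = backbone, patch
        self.classifier = nn.Conv2d(dim, n_classes, 1)  # per-token logits

    def forward(self, x):               # x: (N, 3, H, W); H, W % 14 == 0
        h, w = x.shape[-2] // self.patch, x.shape[-1] // self.patch
        with torch.no_grad():           # keep the backbone frozen
            tokens = self.backbone.forward_features(x)["x_norm_patchtokens"]
        tokens = tokens.transpose(1, 2).reshape(x.size(0), -1, h, w)
        logits = self.classifier(tokens)  # (N, n_classes, h, w)
        return nn.functional.interpolate(logits, size=x.shape[-2:], mode="bilinear")

masks = SegHead(backbone)(torch.randn(1, 3, 518, 518)).argmax(dim=1)
```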

Takinami, S., Morikawa, S., Oshika, T.

medRxiv preprint · Oct 7, 2025
Purpose: To develop and evaluate a novel self-supervised learning approach using a Masked Autoencoder (MAE) pre-trained Vision Transformer (ViT) for automated detection of diabetic macular edema (DME) from optical coherence tomography (OCT) images, addressing the critical need for scalable screening solutions in diabetic eye care. Study Design: Artificial intelligence model training. Methods: We utilized the publicly available Kermany dataset containing 109,312 OCT images, defining DME detection as a binary classification task (11,559 DME vs. 97,753 non-DME images). Five deep learning architectures were compared: MAE-pretrained ViT (MAE_ViT), standard ViT, ResNet18, VGG19_bn, and EfficientNetV2. MAE_ViT underwent two-stage training: (1) self-supervised pre-training with 75% patch masking for 1,000 epochs to learn robust visual representations, and (2) supervised fine-tuning for DME classification. Model performance was evaluated using accuracy, sensitivity, specificity, F1 score, and area under the receiver operating characteristic curve (AUROC), with 95% confidence intervals calculated via bootstrap resampling. Results: MAE_ViT achieved superior performance with AUROC 0.999 (95% CI: 0.999-1.000), accuracy 98.5% (95% CI: 97.7-99.2%), sensitivity 99.6% (95% CI: 98.7-100%), and specificity 98.1% (95% CI: 97.2-99.1%). VGG19_bn showed the second-best performance (AUROC 0.997), while ResNet18 demonstrated poor specificity (28.3%) despite perfect sensitivity. The self-supervised MAE_ViT outperformed the standard supervised ViT (AUROC 0.995), demonstrating the effectiveness of learning from unlabeled data. Conclusion: The MAE pre-trained Vision Transformer establishes a new benchmark for automated DME detection, offering exceptional diagnostic accuracy and potential for deployment in resource-constrained settings through reduced annotation requirements.
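The 75% patch masking used in MAE pre-training follows the standard recipe of shuffling patch indices and keeping the lowest-scored quarter, so the encoder only ever sees a fraction of each OCT image. A minimal PyTorch sketch with illustrative tensor shapes:

```python
import torch

def random_masking(patches, mask_ratio=0.75):
    """MAE-style random patch masking (75% as in the study).

    patches: (N, L, D) patch embeddings. Returns the visible subset the
    encoder sees, plus indices to restore patch order for the decoder.
    """
    n, l, d = patches.shape
    n_keep = int(l * (1 - mask_ratio))
    noise = torch.rand(n, l)                  # random score per patch
    ids_shuffle = noise.argsort(dim=1)        # lowest scores are kept
    ids_restore = ids_shuffle.argsort(dim=1)
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(
        patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d)
    )
    return visible, ids_restore

vis, ids = random_masking(torch.randn(2, 196, 768))  # 14x14 patches per image
print(vis.shape)  # torch.Size([2, 49, 768]) — encoder sees 25% of patches
```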

Lewis, M., Theis, N., Bowei, O., Prasad, K. M.

medRxiv preprint · Oct 7, 2025
A neurobiologically based diagnosis with superior reliability, in place of clinical interview-based diagnosis, is a primary goal in psychiatry. Dynamic functional connectomes (dFCs), identified by applying change-point detection to functional magnetic resonance imaging (fMRI) data, were used to train graph convolutional network (GCN) models to distinguish persons with psychiatric diagnoses from healthy controls. We examined four samples (adolescent-onset schizophrenia (AOS), adult schizophrenia, major depressive disorder, and bipolar disorder), each with healthy controls (HC), using resting-state fMRI (rs-fMRI), plus a working-memory task for AOS. Classification accuracy was as high as 89.2% (sensitivity = 0.90; specificity = 0.88) for adult schizophrenia. The GCNs were further examined to understand which nodes and edges contributed most to the classification, using Class Activation Mapping (CAM) and Integrated Gradients (IG), respectively. CAM and IG analyses converged between adult schizophrenia and AOS, implicating default mode network regions, the cerebellum, and sensory regions for rs-fMRI. For working memory, Brodmann area 10 and the dorsolateral prefrontal cortex contributed most to AOS classification. Applied in a clinical context, the post-test probability of accurate classification given a positive test was 93% for adult-onset schizophrenia using rs-fMRI, suggesting the clinical usefulness of our model. Our results suggest that combining deep-learning models with explanatory algorithms can markedly improve diagnostic reliability, offer an objective approach to diagnosis, and provide a neurobiological basis for diagnosis by identifying the regions and edges in the networks that drive classification.
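The post-test probability follows from Bayes' rule applied to the classifier's sensitivity and specificity via likelihood ratios. A worked example (the pre-test probability used below is a hypothetical value chosen for illustration, not one reported in the study):

```python
def post_test_probability(sensitivity, specificity, pretest):
    """Positive-test posterior via likelihood ratios (Bayes' rule)."""
    lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
    pre_odds = pretest / (1 - pretest)
    post_odds = pre_odds * lr_pos
    return post_odds / (1 + post_odds)

# Adult-schizophrenia classifier from the abstract: sens 0.90, spec 0.88.
# A pre-test probability of 0.64 is assumed purely for illustration.
print(round(post_test_probability(0.90, 0.88, 0.64), 2))  # ~0.93
```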
