Page 28 of 6046038 results

Pedük Ş

pubmed · Oct 14 2025
Breast cancer (BC) remains one of the most prevalent and challenging malignancies worldwide, affecting millions of women and shaping healthcare priorities across continents. Advances in early detection have significantly improved survival rates. In recent years, artificial intelligence (AI) has emerged as a powerful tool in this domain, transforming traditional diagnostic methods. Initially based on simple rule-based systems, AI has evolved into sophisticated deep learning models capable of analyzing complex medical data with remarkable accuracy. This bibliometric analysis examines the application of AI in the early diagnosis of breast cancer, aiming to understand not only the current state of the field but also its growth over the past decade. Publications indexed in Web of Science and Scopus from 2012 to March 2025 were systematically reviewed, while earlier literature (1994-2012) provided historical context. Tools such as Biblioshiny and VOSviewer were used to map research trends, collaboration patterns, and thematic evolution. Out of 1,436 initial documents, 1,293 high-quality studies were included. The results show a clear acceleration in AI-focused research after 2020, with increased global collaboration and a notable shift toward open-access publication. Recurring themes such as "machine learning," "diagnostic imaging," and "clinical decision support" highlight the field's direction. As AI becomes more integrated into clinical workflows, its potential to enhance diagnostic speed, consistency, and personalization is undeniable. However, key ethical issues such as bias, transparency, and patient data protection remain central to responsible implementation.

Yin Z, Din H, Sun JEP, MacAskill CJ, Tirumani SH, Yap PT, Griswold M, Flask CA, Chen Y

pubmed · Oct 14 2025
Magnetic Resonance Fingerprinting (MRF) is a technique that can provide rapid quantification of multiple tissue properties. Deep learning may contribute to an accelerated acquisition of MRF. (1) To develop a deep learning method to accelerate the acquisition for kidney MRF; (2) to evaluate its performance in healthy subjects and patients with renal masses. Retrospective study based on internal reference data. The development set comprised 36 healthy subjects and 20 patients with renal masses; the testing set comprised 4 healthy subjects and 16 patients. 3T, Steady-State Free Precession (FISP)-based MRF. Quantification accuracy was evaluated in healthy kidneys and renal masses using quantitative metrics including normalized root-mean-square error (NRMSE), calculated against reference maps generated with the standard template-matching approach using all acquired MRF time frames. Paired Student's t-test; p < 0.05 was considered statistically significant. Accurate quantification of both T<sub>1</sub> (NRMSE = 0.025 ± 0.003) and T<sub>2</sub> (NRMSE = 0.053 ± 0.010) maps was obtained for healthy kidney tissues with a three-fold acceleration (576 time frames, 5 s of scan time), outperforming the template-matching approach (T<sub>1</sub>, NRMSE = 0.057 ± 0.015; T<sub>2</sub>, NRMSE = 0.143 ± 0.080). For renal masses with T<sub>1</sub> and T<sub>2</sub> values close to those of healthy kidney tissues, similar performance was achieved with a three-fold acceleration. For renal masses presenting distinct T<sub>1</sub> or T<sub>2</sub> values, more MRF time frames were required for accurate tissue quantification. No significant difference was observed in tissue/tumor quantification between neural networks trained using only healthy subjects versus a mixed dataset of healthy subjects and patients (p > 0.05). A deep learning-based method was developed to accelerate acquisition without compromising the accuracy of relaxation time mapping using kidney MRF.
These results demonstrate reliable tissue quantification with at least a two-fold acceleration for both healthy kidneys and renal masses of various subtypes and histopathological grades. Evidence Level: 4. Technical Efficacy: Stage 1.
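The NRMSE metric reported above is straightforward to compute against the template-matching reference maps. A minimal sketch follows; the normalization convention (here, dividing RMSE by the mean of the reference map) is an assumption, since the abstract does not specify it:

```python
import numpy as np

def nrmse(estimate, reference):
    """Root-mean-square error normalized by the mean of the reference map.

    The normalization is an assumption: dividing by the reference mean is
    one common convention (range normalization is another).
    """
    estimate = np.asarray(estimate, dtype=float)
    reference = np.asarray(reference, dtype=float)
    rmse = np.sqrt(np.mean((estimate - reference) ** 2))
    return rmse / np.mean(reference)

# Toy T1 maps (ms): a 2x2 reference and a uniformly perturbed estimate.
ref = np.array([[1200.0, 1180.0], [1210.0, 1190.0]])
est = ref * 1.02  # uniform 2% overestimate
print(round(nrmse(est, ref), 3))  # uniform 2% error -> NRMSE = 0.02
```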

Wang M, Li B

pubmed · Oct 14 2025
Papillary thyroid carcinoma (PTC) constitutes the predominant subtype among thyroid malignancies. Despite its generally favorable prognosis, certain aggressive subtypes, along with recurrent and metastatic manifestations, substantially affect patient survival outcomes. Recent advancements in the diagnostic and therapeutic strategies for PTC have ushered in a new era, characterized by the integration of molecular mechanisms and imaging-based evaluations. This review offers an integrated perspective of the clinicopathological features, molecular genetic characteristics, epigenetic regulation, and the contribution of the immune microenvironment to the aggressiveness of PTC. Primary investigation targets include BRAF/RAS/RET-related molecular mechanisms and the functional significance of non-coding RNAs [especially long non-coding RNAs (lncRNAs) and microRNAs (miRNAs)] in molecular regulation. Additionally, the impact of clinical factors such as age, sex, obesity, and comorbidity with Hashimoto's thyroiditis on the aggressiveness of PTC is thoroughly examined. Furthermore, this review systematically synthesizes the clinical advances in the early detection and risk assessment of aggressive PTC by emerging imaging modalities such as conventional ultrasound, interventional ultrasound, ultrasound elastography, contrast-enhanced ultrasound, and artificial intelligence-assisted analysis. Looking ahead, multidisciplinary collaborations integrating pathology, genomics, and imaging are anticipated to enhance the precise evaluation of PTC aggressiveness and facilitate the development of individualized treatment strategies. This review serves as a comprehensive reference for mechanistic exploration and clinical translation in the study of PTC aggressiveness, and provides guidance for the progression of precision medicine and management models for PTC patients.

de Wilde D, Alakmeh A, Zanier O, Da Mutten R, Aicher A, Burström G, Edström E, Elmi-Terander A, Voglis S, Regli L, Serra C, Staartjes VE

pubmed · Oct 14 2025
Ultrasound (US) imaging is valued for its safety, affordability, and accessibility, but its low spatial resolution and operator dependence limit its diagnostic capabilities. Tomographic imaging modalities like computed tomography (CT) and magnetic resonance imaging (MRI) offer high-resolution 3D visualization but are cost-prohibitive and complex. Ultrasound-based tomographic imaging aims to combine the advantages of both modalities, potentially democratizing access to advanced imaging. A scoping review was conducted following the PRISMA-ScR guidelines. Articles were identified through searches in PubMed MEDLINE, Embase, Scopus, and arXiv from inception to July 2025. Eligibility criteria included full-text original studies focused on ultrasound-based tomographic image generation or reconstruction methods. Out of 8256 identified articles, 86 met the inclusion criteria. Studies examined four imaging modalities: photoacoustic tomography (36%), ultrasound computed tomography (36%), 3D reconstruction (20%), and synthetic imaging (7%). Deep learning algorithms (67%) were the most common, followed by iterative reconstruction algorithms (9%) and other methods. The breast (17%), brain (16%), and blood vessels (14%) were the most studied anatomical regions. This review highlights advancements in ultrasound-based tomographic imaging, driven by deep learning innovations. Despite this progress, the field is still in its infancy, and challenges remain in clinical adoption, particularly in standardization and performance validation. Future research should focus on improving algorithm efficiency, generalizability, and validation.

Kurt Pehlivanoğlu M, Albayrak NB, Karhan D, Doğan İ

pubmed · Oct 14 2025
Accurate detection of brain midline shift is critical for the diagnosis and monitoring of neurological conditions such as traumatic brain injuries, strokes, and tumors. This study addresses the lack of dedicated datasets and tools for this task by introducing a novel dataset and a 3D Slicer extension, and evaluates the effectiveness of multiple deep learning models for automatic detection of brain midline shift. We introduce the brain-midline-detection dataset, specifically designed for identifying three brain landmarks, the Anterior Falx (AF), Posterior Falx (PF), and Septum Pellucidum (SP), in MRI scans. A comprehensive performance evaluation was conducted using deep learning models including YOLOv5 (n, s, m, l), YOLOv8, and YOLOv9 (GELAN-C model). The best-performing model was integrated into the 3D Slicer platform as a custom extension, incorporating steps such as MRI preprocessing, filtering, skull stripping, registration, and midline shift computation. Among the evaluated models, YOLOv5l achieved the highest precision (0.9601) and recall (0.9489), while YOLOv5m delivered the best mAP@0.5:0.95 score (0.6087). YOLOv5n and YOLOv5s exhibited the lowest loss values, indicating high efficiency. Although YOLOv8s achieved a higher mAP@0.5:0.95 score (0.6382), its high loss values reduced its practical effectiveness. YOLOv9-GELAN-C performed the worst, with the highest losses and lowest overall accuracy. YOLOv5m was selected as the optimal model due to its balanced performance and was successfully integrated into 3D Slicer as an extension for automated midline shift detection. By offering a new annotated dataset, a validated detection pipeline, and open-source tools, this study contributes to more accurate, efficient, and accessible AI-assisted medical imaging for brain midline assessment.
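The abstract lists midline shift computation as the final pipeline step but does not give the formula. A common definition, sketched here as an assumption, is the perpendicular distance of the Septum Pellucidum from the ideal midline through the two falx landmarks:

```python
import numpy as np

def midline_shift(af, pf, sp):
    """Perpendicular distance (same units as the input, e.g. mm) of the
    Septum Pellucidum (SP) from the ideal midline through the Anterior
    Falx (AF) and Posterior Falx (PF) landmarks.

    The paper's exact formula is not stated in the abstract; this is the
    standard point-to-line distance used by many midline-shift pipelines.
    """
    af, pf, sp = (np.asarray(p, dtype=float) for p in (af, pf, sp))
    d = pf - af  # direction of the ideal midline
    # |2D cross product| / line length = perpendicular distance
    return abs(d[0] * (sp[1] - af[1]) - d[1] * (sp[0] - af[0])) / np.linalg.norm(d)

# Ideal midline along x = 100; SP displaced 6 pixels laterally.
print(midline_shift(af=(100, 0), pf=(100, 200), sp=(106, 90)))  # -> 6.0
```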

Yasuda E, Hattori T, Shimano K, Hase T, Oyama J, Yamagiwa K, Kawauchi M, Horovitz SG, Lungu C, Matsuzawa H, Hallett M

pubmed · Oct 14 2025
Parkinson's disease (PD) is a neurodegenerative disorder that affects both motor and cognitive functions, particularly working memory (WM). Machine learning offers an advantage for decoding complex brain activity patterns, but its application to task-based functional magnetic resonance imaging (task-based fMRI) has been limited. This study aimed to develop an explainable machine learning model to classify WM performance levels in PD based on task-based fMRI data. We enrolled 45 patients with PD and 15 healthy controls (HCs), all of whom performed an n-back WM task in an MRI scanner. Patients were stratified into three subgroups based on their 3-back task performance: better, intermediate, and worse WM. A three-dimensional convolutional neural network (3D-CNN) model, pre-trained with a 3D convolutional autoencoder, was developed to perform binary classifications between group pairs. The model achieved an accuracy of 93.3% in discriminating task-based fMRI images of PD patients with worse WM from HCs, surpassing the mean accuracy of three expert radiologists (70.0%). Saliency maps identified brain regions influencing the model's decisions, including the dorsolateral prefrontal cortex and superior/inferior parietal lobules. These regions were consistent with both the areas with intergroup differences in the task-based fMRI data and the anatomical areas that are crucial for better WM performance. We developed an explainable deep learning model that is capable of classifying WM performance levels in PD using task-based fMRI. This approach may enhance the objective and interpretable assessment of brain function in clinical neuroimaging practice.
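Saliency maps like those used here are typically computed from input gradients. As an illustrative stand-in that needs no autograd framework, occlusion sensitivity measures how much a model's output drops when each input region is masked; this is a related but distinct attribution method, not the paper's exact procedure:

```python
import numpy as np

def occlusion_saliency(model, image, patch=2, baseline=0.0):
    """Occlusion-style saliency: the score drop when each patch is masked.

    Illustrates the same idea as gradient saliency (which inputs drive the
    prediction) on a 2D toy; the paper uses gradient maps on a 3D-CNN.
    """
    base = model(image)
    sal = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = baseline
            sal[i:i + patch, j:j + patch] = base - model(masked)
    return sal

# Toy "model": responds only to the top-left quadrant of a 4x4 input.
model = lambda x: float(x[:2, :2].sum())
img = np.ones((4, 4))
sal = occlusion_saliency(model, img)
print(sal[:2, :2].sum() > 0, sal[2:, 2:].sum() == 0)  # True True
```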

Saadh MJ, Khidr WA, Albadr RJ, Doshi H, Rekha MM, Kundlas M, Anand DA, Kubaev A, Taher WM, Alwan M, Jawad MJ, Al-Nuaimi AMA, Farhood B

pubmed · Oct 14 2025
This study aimed to develop and evaluate a Transformer-CNN framework for automated segmentation of multiple sclerosis (MS) lesions on FLAIR MRI. The model was benchmarked against U-Net and DeepLabV3 and assessed for both segmentation accuracy and across-center performance under internal 5-fold cross-validation to ensure robustness across diverse clinical datasets. A dataset of 1,800 3D FLAIR MRI scans from five clinical centers was split using 5-fold cross-validation. Preprocessing included isotropic resampling, intensity normalization, and bias field correction. The Transformer-CNN combined CNN-based local feature extraction with Transformer-based global context modeling. Data augmentation strategies, including geometric transformations and noise injection, enhanced generalization. Performance was evaluated using Dice score, IoU, HD95, and pixel accuracy, along with internal cross-validation-based metrics such as Generalized Dice Similarity Coefficient (GDSC), Domain-wise IoU (DwIoU), Cross-Fold Dice Deviation (CFDD), and Volume Agreement (Intraclass Correlation Coefficient, ICC). Statistical significance was tested using Kruskal-Wallis and Dunn's post-hoc analyses to compare models. The Transformer-CNN achieved the best overall performance, with a Dice score of 92.3%, IoU of 91.4%, HD95 of 2.25 mm, and pixel accuracy of 95.6%. It also excelled in internal cross-validation-based across-center metrics, achieving the highest GDSC (91.3%) and DwIoU (89.2%), the lowest CFDD (1.05%), and the highest ICC (96.5%). DeepLabV3 and U-Net scored 85.1% and 83.0% in Dice, with HD95 values of 4.15 mm and 4.30 mm, respectively. The worst performance was observed in U-Net, which exhibited high variability across datasets and struggled with small lesion detection. The Transformer-CNN outperformed U-Net and DeepLabV3 in segmentation accuracy and across-center performance under internal 5-fold cross-validation. 
Its robustness, minimal variability, and ability to generalize across diverse datasets establish it as a practical and reliable tool for clinical MS lesion segmentation and monitoring.
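The Dice score and IoU reported above are computed directly from binary masks; a minimal numpy sketch follows (HD95 additionally requires boundary distance transforms, e.g. via scipy.ndimage.distance_transform_edt, and is omitted here):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    """Intersection over union for binary masks: |A∩B| / |A∪B|."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# Toy lesion masks: the prediction misses one of four lesion voxels.
gt   = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])
pred = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
print(round(dice(pred, gt), 3), round(iou(pred, gt), 3))  # 0.857 0.75
```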

Shelley Zixin Shu, Haozhe Luo, Alexander Poellinger, Mauricio Reyes

arXiv preprint · Oct 14 2025
Transformer-based deep learning models have demonstrated exceptional performance in medical imaging by leveraging attention mechanisms for feature representation and interpretability. However, these models are prone to learning spurious correlations, leading to biases and limited generalization. While human-AI attention alignment can mitigate these issues, it often depends on costly manual supervision. In this work, we propose a Hybrid Explanation-Guided Learning (H-EGL) framework that combines self-supervised and human-guided constraints to enhance attention alignment and improve generalization. The self-supervised component of H-EGL leverages class-distinctive attention without relying on restrictive priors, promoting robustness and flexibility. We validate our approach on chest X-ray classification using the Vision Transformer (ViT), where H-EGL outperforms two state-of-the-art Explanation-Guided Learning (EGL) methods, demonstrating superior classification accuracy and generalization capability. Additionally, it produces attention maps that are better aligned with human expertise.
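Explanation-guided learning typically adds a loss term penalizing attention that disagrees with a reference explanation. As a hypothetical illustration (not the H-EGL objective, whose self-supervised constraint uses class-distinctive attention rather than direct masks), a simple alignment term penalizes attention mass falling outside an expert-annotated region:

```python
import numpy as np

def attention_alignment_loss(attn, expert_mask, eps=1e-8):
    """Generic explanation-guided loss: attention mass outside a
    human-annotated region, after normalizing the attention map.

    Illustrative only; H-EGL combines a human-guided term with a
    self-supervised class-distinctive attention constraint.
    """
    attn = attn / (attn.sum() + eps)        # normalize to a distribution
    outside = attn[expert_mask == 0].sum()  # mass outside the expert region
    return float(outside)

attn = np.zeros((4, 4)); attn[0, 0] = 3.0; attn[2, 2] = 1.0
mask = np.zeros((4, 4)); mask[0, 0] = 1    # expert says only (0,0) matters
print(round(attention_alignment_loss(attn, mask), 2))  # 0.25
```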

Huu Tien Nguyen, Ahmed Karam Eldaly

arXiv preprint · Oct 14 2025
This paper introduces a novel framework for image quality transfer based on conditional flow matching (CFM). Unlike conventional generative models that rely on iterative sampling or adversarial objectives, CFM learns a continuous flow between a noise distribution and target data distributions through the direct regression of an optimal velocity field. We evaluate this approach in the context of low-field magnetic resonance imaging (LF-MRI), a rapidly emerging modality that offers affordable and portable scanning but suffers from inherently low signal-to-noise ratio and reduced diagnostic quality. Our framework is designed to reconstruct high-field-like MR images from their corresponding low-field inputs, thereby bridging the quality gap without requiring expensive infrastructure. Experiments demonstrate that CFM not only achieves state-of-the-art performance, but also generalizes robustly to both in-distribution and out-of-distribution data. Importantly, it does so while utilizing significantly fewer parameters than competing deep learning methods. These results underline the potential of CFM as a powerful and scalable tool for MRI reconstruction, particularly in resource-limited clinical environments.

Ziyuan Gao, Philippe Morel

arXiv preprint · Oct 14 2025
One-shot medical image segmentation faces fundamental challenges in prototype representation due to limited annotated data and significant anatomical variability across patients. Traditional prototype-based methods rely on deterministic averaging of support features, creating brittle representations that fail to capture intra-class diversity essential for robust generalization. This work introduces Diffusion Prototype Learning (DPL), a novel framework that reformulates prototype construction through diffusion-based feature space exploration. DPL models one-shot prototypes as learnable probability distributions, enabling controlled generation of diverse yet semantically coherent prototype variants from minimal labeled data. The framework operates through three core innovations: (1) a diffusion-based prototype enhancement module that transforms single support prototypes into diverse variant sets via forward-reverse diffusion processes, (2) a spatial-aware conditioning mechanism that leverages geometric properties derived from prototype feature statistics, and (3) a conservative fusion strategy that preserves prototype fidelity while maximizing representational diversity. DPL ensures training-inference consistency by using the same diffusion enhancement and fusion pipeline in both phases. This process generates enhanced prototypes that serve as the final representations for similarity calculations, while the diffusion process itself acts as a regularizer. Extensive experiments on abdominal MRI and CT datasets demonstrate significant improvements on both benchmarks, establishing new state-of-the-art performance in one-shot medical image segmentation.
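The forward half of DPL's forward-reverse diffusion on prototypes can be sketched with the standard DDPM noising equation; this is an assumption about the exact noising scheme, and the learned reverse process and spatial-aware conditioning are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(42)

def diffuse_prototype(proto, alpha_bar, n_variants, rng):
    """Forward-diffuse a support prototype into a set of noisy variants:
    p_t = sqrt(alpha_bar) * p0 + sqrt(1 - alpha_bar) * eps, eps ~ N(0, I).

    A generic DDPM-style forward step used here as an illustration.
    """
    eps = rng.normal(size=(n_variants, proto.shape[0]))
    return np.sqrt(alpha_bar) * proto + np.sqrt(1.0 - alpha_bar) * eps

proto = np.ones(64)                # a single support-set prototype vector
variants = diffuse_prototype(proto, alpha_bar=0.9, n_variants=8, rng=rng)
fused = variants.mean(axis=0)      # a simple conservative fusion: averaging
print(variants.shape)              # (8, 64)
```

Averaging the variants back into one representation is only a stand-in for DPL's conservative fusion strategy, whose exact form the abstract does not specify.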