Page 196 of 6526512 results

Nooh N, Turkawi R, Maybury M, Cardy C, Sahbudin I

pubmed papers · Sep 4 2025
Musculoskeletal ultrasound plays an important role in facilitating diagnostic and therapeutic decisions in rheumatic diseases. This article discusses the utility of ultrasound in rheumatoid arthritis, spondyloarthropathy and crystal arthropathy. This article also highlights the implementation challenges and the emerging role of artificial intelligence in enhancing musculoskeletal ultrasound.

Jeyabose Sundar, A., Boerwinkle, V. L., Robinson Vimala, B., Leggio, O., Kazemi, M.

medrxiv preprint · Sep 4 2025
Background: Automated interpretation of resting-state fMRI (rs-fMRI) for epilepsy diagnosis remains a challenge. We developed a regularized transformer that models parcel-wise spatial patterns and long-range temporal dynamics to classify epilepsy and generate interpretable, network-level candidate biomarkers. Methods: Inputs were Schaefer-200 parcel time series extracted after standardized preprocessing (fMRIPrep). The regularized transformer is an attention-based sequence model with learned positional encoding and multi-head self-attention, combined with fMRI-specific regularization (dropout, weight decay, gradient clipping) and augmentation to improve robustness on modest clinical cohorts. Training used stratified group 4-fold cross-validation on n=65 participants (30 epilepsy, 35 controls) with fMRI-specific augmentation (time-warping, adaptive noise, structured masking). We compared the transformer to seven baselines (MLP, 1D-CNN, LSTM, CNN-LSTM, GCN, GAT, attention-only). External validation used an independent set (10 patients from a UNC epilepsy cohort, 10 controls). Biomarker discovery combined gradient-based attributions with parcel-wise statistics and connectivity contrasts. Results: On an illustrative best-performing fold, the transformer attained accuracy 0.77, sensitivity 0.83, specificity 0.88, F1-score 0.77, and AUC 0.76. Averaged cross-validation performance was lower but consistent with these findings. External testing yielded accuracy 0.60, AUC 0.64, specificity 0.80, and sensitivity 0.40. Attribution-guided analysis identified distributed, network-level candidate biomarkers concentrated in the limbic, somatomotor, default-mode and salience systems. Conclusions: A regularized transformer on parcel-level rs-fMRI can achieve strong within-fold discrimination and produce interpretable candidate biomarkers. Results are encouraging but preliminary; larger multi-site validation, stability testing and multiple-comparison control are required prior to clinical translation.
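The stratified group 4-fold scheme described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the subject IDs and the round-robin assignment strategy are assumptions.

```python
# Illustrative sketch of stratified group k-fold assignment (not the paper's
# code): every scan of a subject stays in one fold, and each fold keeps the
# cohort's class proportions. Subject IDs below are hypothetical.
from collections import defaultdict

def stratified_group_folds(labels_by_subject, k=4):
    """Assign each subject to one fold, balancing class counts per fold."""
    by_class = defaultdict(list)
    for subj, label in sorted(labels_by_subject.items()):
        by_class[label].append(subj)
    folds = {}
    for label, subjects in by_class.items():
        for i, subj in enumerate(subjects):
            folds[subj] = i % k  # round-robin preserves class balance per fold
    return folds

# Toy cohort: 8 epilepsy subjects (label 1), 8 controls (label 0)
cohort = {f"s{i}": (1 if i < 8 else 0) for i in range(16)}
assignment = stratified_group_folds(cohort, k=4)

per_fold = defaultdict(lambda: [0, 0])  # [controls, epilepsy] per fold
for subj, fold in assignment.items():
    per_fold[fold][cohort[subj]] += 1
print(dict(per_fold))  # each fold holds 2 controls and 2 epilepsy subjects
```

Because grouping is done at the subject level, no participant's data can leak between training and test partitions, which matters on small clinical cohorts like the n=65 set described above.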

Guan Yu, Zhang Jianhua, Liang Dong, Liu Qiegen

arxiv preprint · Sep 4 2025
Owing to the inherently dynamic and complex characteristics of cardiac magnetic resonance (CMR) imaging, high-quality and diverse k-space data are rarely available in practice, which in turn hampers robust reconstruction of dynamic cardiac MRI. To address this challenge, we perform feature-level learning directly in the frequency domain and employ a temporal-fusion strategy as generative guidance to synthesize k-space data. Specifically, leveraging the global representation capacity of the Fourier transform, the frequency domain can be considered a natural global feature space. Therefore, unlike traditional methods that use pixel-level convolution for feature learning and modeling in the image domain, this letter focuses on feature-level modeling in the frequency domain, enabling stable and rich generation even in ultra-low-data regimes. Moreover, building on this frequency-domain representation, we integrate k-space data across time frames with multiple fusion strategies to steer and further optimize the generative trajectory. Experimental results demonstrate that the proposed method possesses strong generative ability in low-data regimes, indicating practical potential to alleviate data scarcity in dynamic MRI reconstruction.
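A minimal illustration of why the frequency domain behaves as a global feature space: every Fourier coefficient depends on all samples, so a local edit in the signal domain perturbs the entire spectrum. The paper works with 2D CMR k-space; this toy uses a 1D DFT purely for intuition.

```python
# Toy demonstration (not the paper's model): a single-sample change in the
# signal domain spreads across every frequency coefficient of the DFT.
import cmath

def dft(signal):
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                for t in range(n))
            for f in range(n)]

x = [0.0, 1.0, 0.0, -1.0, 0.5, 0.0, -0.5, 0.0]
y = list(x)
y[3] += 1.0  # edit exactly one "pixel"

X, Y = dft(x), dft(y)
changed = [f for f in range(len(x)) if abs(X[f] - Y[f]) > 1e-9]
print(changed)  # all 8 coefficients move, confirming the global coupling
```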

Aljneibi Z, Almenhali S, Lanca L

pubmed papers · Sep 3 2025
This study aimed to evaluate the diagnostic performance of an artificial intelligence (AI)-enhanced model for detecting lung cancer on computed tomography (CT) images of the chest. It assessed diagnostic accuracy, sensitivity, specificity, and interpretative consistency across normal, benign, and malignant cases. An exploratory analysis was performed using the publicly available IQ-OTH/NCCD dataset, comprising 110 CT cases (55 normal, 15 benign, 40 malignant). A pre-trained convolutional neural network in Google AI Studio was fine-tuned using 25 training images and tested on a separate image from each case. Quantitative evaluation of diagnostic accuracy and qualitative content analysis of AI-generated reports were conducted to assess diagnostic patterns and interpretative behavior. The AI model achieved an overall accuracy of 75.5%, with a sensitivity of 74.5% and specificity of 76.4%. The area under the ROC curve (AUC) for all cases was 0.824 (95% CI: 0.745-0.897), indicating strong discriminative power. Malignant cases had the highest classification performance (AUC = 0.902), while benign cases were more challenging to classify (AUC = 0.615). Qualitative analysis showed the AI used consistent radiological terminology but demonstrated oversensitivity to ground-glass opacities, contributing to false positives in non-malignant cases. The AI model showed promising diagnostic potential, particularly in identifying malignancies. However, specificity limitations and interpretative errors in benign and normal cases underscore the need for human oversight and continued model refinement. AI-enhanced CT interpretation can improve efficiency in high-volume settings but should serve as a decision-support tool rather than a replacement for expert image review.
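The metrics reported above are mechanical to compute. The sketch below uses invented scores, not the study's data, and a rank-based (Mann-Whitney) formulation of the AUC.

```python
# Hedged sketch of accuracy/sensitivity/specificity and a rank-based AUC;
# the labels and scores are made up for illustration only.

def confusion_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return (tp + tn) / len(y_true), tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    # Mann-Whitney view of AUC: P(score_pos > score_neg), ties count 1/2
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y      = [1, 1, 1, 0, 0, 0, 0, 1]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.1, 0.7]
preds  = [1 if s >= 0.5 else 0 for s in scores]
acc, sens, spec = confusion_metrics(y, preds)
print(acc, sens, spec, auc(y, scores))
```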

Li G, Yang H, He L, Zeng G

pubmed papers · Sep 3 2025
Deep learning has demonstrated significant potential in advancing computer-aided diagnosis for neuropsychiatric disorders, such as migraine, enabling patient-specific diagnosis at an individual level. However, despite the superior accuracy of deep learning models, the interpretability of image classification models remains limited. Their black-box nature continues to pose a major obstacle in clinical applications, hindering biomarker discovery and personalized treatment. This study aims to investigate explainable artificial intelligence (XAI) techniques combined with multiple functional magnetic resonance imaging (fMRI) indicators to (1) compare their efficacy in migraine classification, (2) identify optimal model-indicator pairings, and (3) evaluate XAI's potential in clinical diagnostics by localizing discriminative brain regions. We analyzed resting-state fMRI data from 64 participants, including 21 (33%) patients with migraine without aura, 15 (23%) patients with migraine with aura, and 28 (44%) healthy controls. Three fMRI metrics (amplitude of low-frequency fluctuation, regional homogeneity, and regional functional connectivity strength [RFCS]) were extracted and classified using GoogleNet, ResNet18, and Vision Transformer. For comprehensive model comparison, conventional machine learning methods, including support vector machine and random forest, were also used as benchmarks. Model performance was evaluated through accuracy and area under the curve metrics, while activation heat maps were generated via gradient-weighted class activation mapping for convolutional neural networks and self-attention mechanisms for Vision Transformer. The GoogleNet model combined with RFCS indicators achieved the best classification performance, with an accuracy of >98.44% and an area under the receiver operating characteristic curve of 0.99 for the test set.
In addition, among the 3 indicators, the RFCS indicator improved accuracy by approximately 8% compared with the amplitude of low-frequency fluctuation. Brain activation heat maps generated by XAI technology revealed that the precuneus and cuneus were the most discriminative brain regions, with slight activation also observed in the frontal gyrus. The use of XAI technology combined with brain region features provides visual explanations for the progression of migraine in patients. Understanding the decision-making process of the network has significant potential for clinical diagnosis of migraines, offering promising applications in enhancing diagnostic accuracy and aiding in the development of new diagnostic techniques.
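The study derives its heat maps with gradient-weighted class activation mapping; as a dependency-free stand-in, a simple occlusion analysis conveys the same idea of scoring regions by their contribution to the class output. The linear "classifier" weights and region list below are entirely hypothetical.

```python
# Occlusion-based attribution sketch (a simpler analogue of Grad-CAM, not the
# paper's method): zero out one region's feature and measure the score drop.
# Weights and region names are hypothetical toys.

REGIONS = ["precuneus", "cuneus", "frontal_gyrus", "temporal_pole"]
WEIGHTS = [0.9, 0.7, 0.2, 0.05]  # toy "trained" weights

def class_score(features):
    return sum(w * f for w, f in zip(WEIGHTS, features))

def occlusion_importance(features):
    base = class_score(features)
    importance = {}
    for i, name in enumerate(REGIONS):
        occluded = list(features)
        occluded[i] = 0.0  # mask out one region's feature
        importance[name] = base - class_score(occluded)
    return importance

feats = [1.0, 1.0, 1.0, 1.0]
imp = occlusion_importance(feats)
ranked = sorted(imp, key=imp.get, reverse=True)
print(ranked)  # regions ordered by contribution to the class score
```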

Dannecker M, Sideri-Lampretsa V, Starck S, Mihailov A, Milh M, Girard N, Auzias G, Rueckert D

pubmed papers · Sep 3 2025
Magnetic resonance imaging of fetal and neonatal brains reveals rapid neurodevelopment marked by substantial anatomical changes unfolding within days. Studying this critical stage of the developing human brain therefore requires accurate brain models, referred to as atlases, of high spatial and temporal resolution. To meet these demands, established traditional atlases and recently proposed deep learning-based methods rely on large and comprehensive datasets. This poses a major challenge for studying brains in the presence of pathologies, for which data remains scarce. We address this limitation with CINeMA (Conditional Implicit Neural Multi-Modal Atlas), a novel framework for creating high-resolution, spatio-temporal, multimodal brain atlases suitable for low-data settings. Unlike established methods, CINeMA operates in latent space, avoiding compute-intensive image registration and reducing atlas construction times from days to minutes. Furthermore, it enables flexible conditioning on anatomical features including gestational age, birth age, and pathologies such as agenesis of the corpus callosum and ventriculomegaly of varying degree. CINeMA supports downstream tasks such as tissue segmentation and age prediction, while its generative properties enable synthetic data creation and anatomically informed data augmentation. Surpassing state-of-the-art methods in accuracy, efficiency, and versatility, CINeMA represents a powerful tool for advancing brain research. We release the code and atlases at https://github.com/m-dannecker/CINeMA.
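The implicit-atlas idea, an atlas represented as a continuous function of spatial coordinate and condition, can be sketched with a closed-form stand-in for CINeMA's learned network. The function, the growth term, and the conditioning variable below are invented for illustration.

```python
# Sketch of a conditional implicit atlas: a continuous function of coordinate
# and gestational age that can be sampled at any resolution, grid-free.
# The closed-form below is a toy stand-in for a learned network.
import math

def implicit_atlas(x, gest_age_weeks):
    """Toy 1D 'intensity' at coordinate x in [0, 1] for a given age."""
    template = math.sin(math.pi * x)            # shared anatomy
    growth = 0.02 * (gest_age_weeks - 20) * x   # age-conditioned change
    return template + growth

# Sample the same atlas at two resolutions without any stored voxel grid
coarse = [implicit_atlas(i / 4, 30) for i in range(5)]
fine = [implicit_atlas(i / 64, 30) for i in range(65)]
print(len(coarse), len(fine))
```

The key design point echoed here is that resolution is a sampling choice, not a property of the stored atlas, which is what lets an implicit representation avoid registration across a fixed grid.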

Zhou Y, Liu Z, Xie X, Li H, Zhu W, Zhang Z, Suo Y, Meng X, Cheng J, Xu H, Wang N, Wang Y, Zhang C, Xue B, Jing J, Wang Y, Liu T

pubmed papers · Sep 3 2025
Low-field portable magnetic resonance imaging (pMRI) devices address a crucial healthcare need by offering on-demand, timely access to MRI, especially in routine stroke emergencies. Nevertheless, images acquired by these devices often exhibit poor clarity and low resolution, limiting their ability to support precise diagnostic evaluation and lesion quantification. In this paper, we propose a 3D deep learning based model, named Stroke-Aware CycleGAN (SA-CycleGAN), to enhance the quality of low-field images and thereby improve the diagnosis of routine stroke. First, building on the traditional CycleGAN, SA-CycleGAN incorporates a prior on stroke lesions through a novel spatial feature transform mechanism. Second, gradient difference losses are added to counteract the tendency of synthesized images to be overly smooth. We present a dataset comprising 101 paired high-field and low-field diffusion-weighted imaging (DWI) scans, acquired through dual scans of the same patient in close temporal proximity. Our experiments demonstrate that SA-CycleGAN generates images with higher quality and greater clarity than the original low-field DWI. Additionally, in quantifying stroke lesions, SA-CycleGAN outperforms existing methods. Lesion volume is strongly correlated between the generated images and the high-field images (R=0.852), whereas the correlation between the low-field and high-field images is notably lower (R=0.462). Furthermore, the mean absolute difference in lesion volumes between the generated and high-field images (1.73±2.03 mL) was significantly smaller than that between the low-field and high-field images (2.53±4.24 mL).
These results show that the synthesized images not only exhibit superior visual clarity compared to the low-field acquisitions but also agree closely with high-field images. In routine clinical practice, the proposed SA-CycleGAN offers an accessible and cost-effective means of rapidly obtaining higher-quality images, with the potential to enhance the efficiency and accuracy of stroke diagnosis. The code and trained models will be released on GitHub: SA-CycleGAN.
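A gradient difference loss of the kind SA-CycleGAN combines with its adversarial objectives against over-smoothing can be sketched as below; the exact formulation in the paper may differ, and the toy "images" are plain nested lists.

```python
# Sketch of a gradient difference loss: penalize mismatch between the spatial
# gradient magnitudes of two images, so blurry outputs (weak gradients) are
# punished even when their pixel values are close to the target's.

def grad_diff_loss(a, b):
    h, w = len(a), len(a[0])
    loss = 0.0
    for i in range(h):
        for j in range(w - 1):   # horizontal gradients
            loss += abs(abs(a[i][j + 1] - a[i][j]) - abs(b[i][j + 1] - b[i][j]))
    for i in range(h - 1):       # vertical gradients
        for j in range(w):
            loss += abs(abs(a[i + 1][j] - a[i][j]) - abs(b[i + 1][j] - b[i][j]))
    return loss

sharp  = [[0, 0, 1, 1],
          [0, 0, 1, 1]]          # crisp edge
smooth = [[0.0, 0.33, 0.66, 1.0],
          [0.0, 0.33, 0.66, 1.0]]  # over-smoothed ramp
print(grad_diff_loss(sharp, sharp), grad_diff_loss(sharp, smooth))
```

A matching image incurs zero loss, while the smoothed ramp is penalized for lacking the sharp edge's strong gradient, which is precisely the failure mode the loss targets.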

Drury B, Machado IP, Gao Z, Buddenkotte T, Mahani G, Funingana G, Reinius M, McCague C, Woitek R, Sahdev A, Sala E, Brenton JD, Crispin-Ortuzar M

pubmed papers · Sep 3 2025
Background: High-grade serous ovarian carcinoma (HGSOC) is characterised by significant spatial and temporal heterogeneity, often presenting at an advanced metastatic stage. One of the most common treatment approaches involves neoadjuvant chemotherapy (NACT), followed by surgery. However, the multi-scale complexity of HGSOC poses a major challenge in evaluating response to NACT. Methods: Here, we present a multi-task deep learning approach that facilitates simultaneous segmentation of pelvic/ovarian and omental lesions in contrast-enhanced computerised tomography (CE-CT) scans, as well as treatment response assessment in metastatic ovarian cancer. The model combines multi-scale feature representations from two identical U-Net architectures, allowing for an in-depth comparison of CE-CT scans acquired before and after treatment. The network was trained using 198 CE-CT images of 99 ovarian cancer patients for predicting segmentation masks and evaluating treatment response. Results: It achieves an AUC of 0.78 (95% CI [0.70-0.91]) in an independent cohort of 98 scans of 49 ovarian cancer patients from a different institution. In addition to the classification performance, the segmentation Dice scores are only slightly lower than the current state-of-the-art for HGSOC segmentation. Conclusions: This work is the first to demonstrate the feasibility of a multi-task deep learning approach in assessing chemotherapy-induced tumour changes across the main disease burden of patients with complex multi-site HGSOC, which could be used for treatment response evaluation and disease monitoring.
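The Dice score used to benchmark the segmentation output is straightforward to compute; the binary masks below are toy flat lists, not CE-CT data.

```python
# Dice overlap between a predicted and a ground-truth binary mask:
# 2 * |intersection| / (|pred| + |truth|), with the empty-vs-empty case
# conventionally scored as perfect agreement.

def dice(pred, truth):
    intersection = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 1, 1, 0]
print(dice(pred, truth))  # 2*3 / (4+4) = 0.75
```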

Shin S, Chang Y, Ryu S

pubmed papers · Sep 3 2025
Although numerous breast cancer risk prediction models have been developed to categorize individuals by risk, a substantial gap persists in evaluating how well these models predict actual mortality outcomes. This study aimed to investigate the association between Mirai, a deep learning model for risk prediction based on mammography, and breast cancer-specific mortality in a large cohort of Korean women. This retrospective cohort study examined 124,653 cancer-free women aged ≥ 34 years who underwent mammography screening between 2009 and 2020. Participants were stratified into tertiles by Mirai risk scores and categorized into four groups based on risk changes over time. Cox proportional hazards regression models were used to evaluate the associations of both baseline Mirai scores and temporal risk changes with breast cancer-specific mortality. Over 1,075,177 person-years of follow-up, 31 breast cancer-related deaths occurred. The highest Mirai risk tertile showed significantly higher breast cancer-specific mortality than the lowest tertile (hazard ratio [HR], 5.34; 95% confidence interval [CI] 1.17-24.39; p for trend = 0.020). Temporal Mirai score changes were also associated with mortality risk: participants who remained in the high-risk group (HR, 5.92; 95% CI 1.43-24.49) or moved from low to high risk (HR, 5.57; 95% CI 1.31-23.63) had higher mortality than those who stayed in the low-risk group. The Mirai model, developed to predict breast cancer incidence, was significantly associated with breast cancer-specific mortality. Changes in Mirai risk scores over time were also linked to breast cancer-specific mortality, supporting the use of AI-based risk models in guiding risk-stratified screening and prevention of breast cancer-related deaths.
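For intuition about the cohort arithmetic behind such estimates, the sketch below computes crude mortality rates per 100,000 person-years and their ratio between risk tertiles. The event counts are invented for illustration; the study itself reports Cox model hazard ratios, not this crude ratio.

```python
# Back-of-envelope rate-ratio sketch (not the paper's Cox model): crude
# breast-cancer-specific mortality per 100,000 person-years by risk tertile.
# The (deaths, person-years) tuples below are hypothetical.

def rate_per_100k(deaths, person_years):
    return deaths / person_years * 100_000

low  = (3, 360_000)    # hypothetical lowest tertile
high = (16, 355_000)   # hypothetical highest tertile

rr = rate_per_100k(*high) / rate_per_100k(*low)
print(round(rate_per_100k(*low), 2), round(rate_per_100k(*high), 2), round(rr, 2))
```

With only 31 deaths in over a million person-years, rates of this order explain why the reported confidence intervals around the hazard ratios are so wide.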

Kim SH, Schramm S, Schmitzer L, Serguen K, Ziegelmayer S, Busch F, Komenda A, Makowski MR, Adams LC, Bressem KK, Zimmer C, Kirschke J, Wiestler B, Hedderich D, Finck T, Bodden J

pubmed papers · Sep 3 2025
To evaluate the potential of LLMs to generate sequence-level brain MRI protocols. This retrospective study employed a dataset of 150 brain MRI cases derived from local imaging request forms. Reference protocols were established by two neuroradiologists. GPT-4o, o3-mini, DeepSeek-R1 and Qwen2.5-72B were employed to generate brain MRI protocols based on the case descriptions. Protocol generation was conducted (1) with additional in-context learning involving local standard protocols (enhanced) and (2) without additional information (base). Additionally, two radiology residents independently defined MRI protocols. The sum of redundant and missing sequences (accuracy index) was defined as the performance metric. Accuracy indices were compared between groups using paired t-tests. The two neuroradiologists achieved substantial inter-rater agreement (Cohen's κ = 0.74). o3-mini demonstrated superior performance (base: 2.65 ± 1.61; enhanced: 1.94 ± 1.25), followed by GPT-4o (base: 3.11 ± 1.83; enhanced: 2.23 ± 1.48), DeepSeek-R1 (base: 3.42 ± 1.84; enhanced: 2.37 ± 1.42) and Qwen2.5-72B (base: 5.95 ± 2.78; enhanced: 2.75 ± 1.54). o3-mini consistently outperformed the other models by a significant margin. All four models showed highly significant performance improvements under the enhanced condition (adj. p < 0.001 for all models). The highest-performing LLM (o3-mini [enhanced]) yielded an accuracy index comparable to residents (o3-mini [enhanced]: 1.94 ± 1.25, resident 1: 1.77 ± 1.29, resident 2: 1.77 ± 1.28). Our findings demonstrate the promising potential of LLMs in automating brain MRI protocoling, especially when augmented through in-context learning. o3-mini exhibited superior performance, followed by GPT-4o. Question: Brain MRI protocoling is a time-consuming, non-interpretative task, exacerbating radiologist workload. Findings: o3-mini demonstrated superior brain MRI protocoling performance. All models showed notable improvements when augmented with local standard protocols. Clinical relevance: MRI protocoling is a time-intensive, non-interpretative task that adds to radiologist workload; large language models offer potential for (semi-)automation of this process.
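The accuracy index defined above (redundant plus missing sequences) reduces to simple set arithmetic. The sequence names below are generic examples, not the study's local protocol.

```python
# The study's accuracy index as set arithmetic: count sequences the model
# proposed that the reference lacks (redundant) plus reference sequences the
# model omitted (missing). Sequence names here are generic examples.

def accuracy_index(proposed, reference):
    proposed, reference = set(proposed), set(reference)
    redundant = len(proposed - reference)
    missing = len(reference - proposed)
    return redundant + missing

reference    = {"T1", "T2", "FLAIR", "DWI", "SWI"}
llm_protocol = {"T1", "T2", "FLAIR", "DWI", "T1-CE"}
print(accuracy_index(llm_protocol, reference))  # 1 redundant + 1 missing = 2
```

Lower is better under this metric, with 0 meaning the proposed protocol matches the neuroradiologists' reference exactly.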