
Hyunwoo Cho, Jongsoo Lee, Jinbum Kang, Yangmo Yoo

arXiv preprint · Jul 5, 2025
Speckle patterns in ultrasound images often obscure anatomical details, leading to diagnostic uncertainty. Recently, various deep learning (DL)-based techniques have been introduced to effectively suppress speckle; however, their high computational costs pose challenges for low-resource devices, such as portable ultrasound systems. To address this issue, we introduce EdgeSRIE, a lightweight hybrid DL framework for real-time speckle reduction and image enhancement in portable ultrasound imaging. The proposed framework consists of two main branches: an unsupervised despeckling branch, trained by minimizing a loss function between speckled images, and a deblurring branch, which restores blurred images to sharp images. For hardware implementation, the trained network is quantized to 8-bit integer precision and deployed on a low-resource system-on-chip (SoC) with limited power consumption. In performance evaluations with phantom and in vivo analyses, EdgeSRIE achieved the highest contrast-to-noise ratio (CNR) and average gradient magnitude (AGM) among all baselines (two rule-based methods and four DL-based methods). Furthermore, EdgeSRIE enabled real-time inference at over 60 frames per second while satisfying computational requirements (< 20K parameters) on actual portable ultrasound hardware. These results demonstrate the feasibility of EdgeSRIE for real-time, high-quality ultrasound imaging in resource-limited environments.
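As a rough illustration of the deployment step described above (and not the authors' actual pipeline), the sketch below shows symmetric per-tensor 8-bit integer quantization of a weight tensor; the kernel shape and the error check are assumptions for demonstration.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w ~ scale * q."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# Hypothetical 3x3 conv kernel: (out_channels, in_channels, kH, kW)
w = np.random.randn(16, 8, 3, 3).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(w - dequantize(q, s)).max()
print(f"scale={s:.5f}, max abs quantization error={err:.5f}")
```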

Haifeng Zhao, Yufei Zhang, Leilei Ma, Shuo Xu, Dengdi Sun

arXiv preprint · Jul 5, 2025
Radiology report generation represents a significant application within medical AI, and has achieved impressive results. Concurrently, large language models (LLMs) have demonstrated remarkable performance across various domains. However, empirical validation indicates that general LLMs tend to focus more on linguistic fluency than clinical effectiveness, and lack the ability to effectively capture the relationship between X-ray images and their corresponding texts, resulting in poor clinical practicability. To address these challenges, we propose Optimal Transport-Driven Radiology Report Generation (OTDRG), a novel framework that leverages Optimal Transport (OT) to align image features with disease labels extracted from reports, effectively bridging the cross-modal gap. The core component of OTDRG is Alignment & Fine-Tuning, in which OT uses the encoded label features and image visual features to minimize cross-modal distances, and then integrates image and text features for LLM fine-tuning. Additionally, we design a novel disease prediction module to predict the disease labels contained in X-ray images during validation and testing. Evaluated on the MIMIC-CXR and IU X-Ray datasets, OTDRG achieves state-of-the-art performance in both natural language generation (NLG) and clinical efficacy (CE) metrics, delivering reports that are not only linguistically coherent but also clinically accurate.
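For readers unfamiliar with the OT alignment idea, a minimal entropic-regularized Sinkhorn sketch is given below; the embedding sizes, uniform marginals, and cosine-distance cost are assumptions for illustration, not the OTDRG formulation.

```python
import torch

def sinkhorn(cost: torch.Tensor, eps: float = 0.1, n_iters: int = 50) -> torch.Tensor:
    """Entropic-regularized OT plan for a cost matrix with uniform marginals."""
    n, m = cost.shape
    K = torch.exp(-cost / eps)               # Gibbs kernel
    a = torch.full((n,), 1.0 / n)
    b = torch.full((m,), 1.0 / m)
    v = torch.ones(m)
    for _ in range(n_iters):                  # alternating marginal scaling
        u = a / (K @ v)
        v = b / (K.T @ u)
    return torch.diag(u) @ K @ torch.diag(v)  # transport plan

# Hypothetical embeddings: 49 image-patch features vs. 14 disease-label features
img = torch.nn.functional.normalize(torch.randn(49, 256), dim=1)
lbl = torch.nn.functional.normalize(torch.randn(14, 256), dim=1)
cost = 1.0 - img @ lbl.T                      # cosine distance
plan = sinkhorn(cost)
ot_loss = (plan * cost).sum()                 # alignment loss to minimize
print(ot_loss.item())
```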

Yang X, Li D, Chen S, Deng L, Wang J, Huang S

PubMed paper · Jul 5, 2025
Deep learning has driven remarkable advances in medical image registration, and deep neural network-based non-rigid deformation field generation methods achieve high accuracy in single-modality scenarios; multi-modal medical image registration, however, still faces critical challenges. To address the insufficient anatomical consistency and unstable deformation-field optimization of existing cross-modal registration methods, this paper proposes an end-to-end medical image registration method based on a Dynamic Harmonized Registration framework (DHR-Net). DHR-Net employs a cascaded two-stage architecture comprising a translation network and a registration network that operate in sequential processing phases. Furthermore, we propose a loss function based on the Noise Contrastive Estimation framework, which enhances anatomical consistency in cross-modal translation by maximizing mutual information between input and transformed image patches. This loss function incorporates a dynamic temperature adjustment mechanism that progressively tightens feature contrast constraints during training to improve high-frequency detail preservation, thereby better constraining the topological structure of target images. Experiments on the M&M Heart Dataset demonstrate that DHR-Net outperforms existing methods in registration accuracy, deformation field smoothness, and cross-modal robustness. The framework significantly enhances the registration quality of cardiac images while preserving anatomical structures exceptionally well, showing promising potential for clinical applications.
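A generic patch-wise InfoNCE loss with a linearly annealed temperature, sketched below, conveys the flavor of such a contrastive objective; the schedule, shapes, and batch construction are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def info_nce(z_src: torch.Tensor, z_tgt: torch.Tensor, tau: float) -> torch.Tensor:
    """Patch-wise InfoNCE: matching (input, translated) patch pairs are
    positives; all other patches in the batch serve as negatives."""
    z_src = F.normalize(z_src, dim=1)
    z_tgt = F.normalize(z_tgt, dim=1)
    logits = z_src @ z_tgt.T / tau            # (N, N) similarity matrix
    labels = torch.arange(z_src.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

def temperature_schedule(step: int, total: int,
                         tau_start: float = 0.3, tau_end: float = 0.05) -> float:
    """Hypothetical linear anneal that sharpens the contrast over training."""
    t = min(step / max(total, 1), 1.0)
    return tau_start + t * (tau_end - tau_start)

# N hypothetical patch embeddings from the input and translated images
z_in, z_tr = torch.randn(64, 128), torch.randn(64, 128)
loss = info_nce(z_in, z_tr, temperature_schedule(step=1000, total=10000))
print(loss.item())
```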

Zhao X, Xie Z, He W, Fornage M, Zhi D

PubMed paper · Jul 5, 2025
Fractional anisotropy (FA) derived from diffusion MRI is a widely used marker of white matter (WM) integrity. However, conventional FA-based genetic studies focus on phenotypes representing tract- or atlas-defined averages, which may oversimplify spatial patterns of WM integrity and thus limit genetic discovery. Here, we propose a deep learning-based framework, termed unsupervised deep representation of white matter (UDR-WM), to extract brain-wide FA features, referred to as UDIP-FAs, that capture distributed microstructural variation without prior anatomical assumptions. UDIP-FAs exhibit enhanced sensitivity to aging and substantially higher SNP-based heritability than traditional FA phenotypes (P < 2.20e-16, Mann-Whitney U test; mean h² = 50.81%). Through multivariate GWAS, we identified 939 significant lead SNPs in 586 loci, mapped to 3480 genes, dubbed UDIP-FA related genes (UFAGs). UFAGs are overexpressed in glial cells, particularly astrocytes and oligodendrocytes (Bonferroni-corrected P < 2e-6, Wald test), and show strong overlap with risk gene sets for schizophrenia and Parkinson disease (Bonferroni-corrected P < 7.06e-3, Fisher exact test). UDIP-FAs are genetically correlated with multiple brain disorders and cognitive traits, including fluid intelligence and reaction time, and are associated with polygenic risk for bone mineral density. Network analyses reveal that UFAGs form disease-enriched modules across protein-protein interaction and co-expression networks, implicating core pathways in myelination and axonal structure. Notably, several UFAGs, including ACHE and ALDH2, are targets of existing neuropsychiatric drugs. Together, our findings establish UDIP-FA as a biologically and clinically informative brain phenotype, enabling high-resolution dissection of white matter genetic architecture and its genetic links to complex brain traits.
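The heritability comparison reported above reduces to a Mann-Whitney U test over two sets of SNP-heritability estimates; the toy sketch below uses placeholder h² values (not the study's data) to show the mechanics.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical SNP-based heritability estimates for the two phenotype sets;
# the paper reports mean h2 ~ 50.81% for UDIP-FAs.
rng = np.random.default_rng(0)
h2_udip = rng.uniform(0.35, 0.65, size=256)   # placeholder UDIP-FA estimates
h2_trad = rng.uniform(0.10, 0.35, size=48)    # placeholder tract-average estimates

stat, p = mannwhitneyu(h2_udip, h2_trad, alternative="greater")
print(f"U={stat:.1f}, one-sided p={p:.3e}")
```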

Dunne, J., Kumarasamy, C., Belay, D. G., Betran, A. P., Gebremedhin, A. T., Mengistu, S., Nyadanu, S. D., Roy, A., Tessema, G., Tigest, T., Pereira, G.

medRxiv preprint · Jul 5, 2025
Background: Artificial intelligence (AI) has shown promise in interpreting ultrasound imaging through flexible pattern recognition and algorithmic learning, but implementation in clinical practice remains limited. This study aimed to investigate the current application of AI in prenatal ultrasound to identify congenital anomalies, and to synthesise challenges and opportunities for the advancement of AI-assisted ultrasound diagnosis. This comprehensive analysis addresses the clinical translation gap between AI performance metrics and practical implementation in prenatal care.

Methods: Systematic searches were conducted in eight electronic databases (CINAHL Plus, Ovid/EMBASE, Ovid/MEDLINE, ProQuest, PubMed, Scopus, Web of Science and Cochrane Library) and Google Scholar from inception to May 2025. Studies were included if they applied an AI-assisted ultrasound diagnostic tool to identify a congenital anomaly during pregnancy. The review adhered to PRISMA guidelines for systematic reviews, and study quality was evaluated using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM).

Findings: Of 9,918 records, 224 were identified for full-text review and 20 met the inclusion criteria. The majority of studies (11/20, 55%) were conducted in China, with most published after 2020 (16/20, 80%). All AI models were developed as assistive tools for anomaly detection or classification. Most models (85%) focused on single-organ systems: heart (35%), brain/cranial (30%), or facial features (20%), while three studies (15%) attempted multi-organ anomaly detection. Half of the included studies reported exceptionally high model performance, with both sensitivity and specificity exceeding 0.95 and AUC-ROC values ranging from 0.91 to 0.97. Most studies (75%) lacked external validation, with internal validation often limited to small training and testing datasets.

Interpretation: While AI applications in prenatal ultrasound show potential, current evidence indicates significant limitations to their practical implementation. Much work is required to optimise their application, including external validation of diagnostic models with demonstrated clinical utility. Future research should prioritise larger-scale multi-centre studies, develop multi-organ anomaly detection capabilities rather than the current single-organ focus, and robustly evaluate AI tools in real-world clinical settings.

Soubeiran, C., Vilbert, M., Memmi, B., Georgeon, C., Borderie, V., Chessel, A., Plamann, K.

medRxiv preprint · Jul 5, 2025
Photorefractive keratectomy (PRK) is a widely used laser-assisted refractive surgical technique. In some cases, it leads to temporary subepithelial inflammation or fibrosis linked to visual haze. To our knowledge, there are no physics-based, quantitative tools to monitor these symptoms. Here we present a comprehensive machine learning-based algorithm for the detection of fibrosis from spectral-domain optical coherence tomography images recorded in vivo on standard clinical devices. Because these phenomena are rare, we trained the model on corneas presenting Fuchs dystrophy, which causes similar but permanent fibrosis symptoms, and applied it to images from patients who had undergone PRK surgery. Our study shows that the model output (the probability of Fuchs dystrophy classification) provides a quantified and explainable indicator of corneal healing for post-operative follow-up.
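One way to operationalize the "probability as healing indicator" idea is to average the classifier's fibrosis probability over the B-scans of a visit; the sketch below uses a stub classifier and random images, since the trained model is not public, and every name in it is hypothetical.

```python
import numpy as np

class StubClassifier:
    """Placeholder for the trained Fuchs-dystrophy classifier (assumption)."""
    def predict_proba(self, x: np.ndarray) -> np.ndarray:
        # Toy surrogate: brighter subepithelial signal -> higher probability
        p = float(np.clip(x.mean() / 255.0, 0.0, 1.0))
        return np.array([[1.0 - p, p]])

def fibrosis_index(model, bscans) -> float:
    """Mean predicted fibrosis probability over one visit's OCT B-scans."""
    return float(np.mean([model.predict_proba(s)[0, 1] for s in bscans]))

clf = StubClassifier()
visit = [np.random.randint(0, 256, (384, 1024)).astype(np.float32) for _ in range(8)]
print(f"fibrosis index: {fibrosis_index(clf, visit):.3f}")
```

Tracking this index across post-operative visits would then give the quantified follow-up trajectory the abstract describes.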

Xiao X, Wang Z, Yao J, Wei J, Zhang B, Chen W, Geng Z, Song E

PubMed paper · Jul 5, 2025
Most current medical image segmentation models employ a unified feature modeling strategy for all target regions, overlooking the significant heterogeneity in lesion structure, boundary characteristics, and semantic texture. This frequently restricts their ability to accurately segment morphologically diverse lesions in complex imaging contexts, reducing segmentation accuracy and robustness. To address this issue, we propose a brain-inspired segmentation framework named BrainNet, which adopts a tri-level architecture of backbone encoder, Brain Network, and decoder. Such an architecture enables globally guided, locally differentiated feature modeling. We further instantiate the framework with an attention-enhanced segmentation model, termed Att-BrainNet. In this model, a Thalamus Gating Module (TGM) dynamically selects and activates structurally identical but functionally diverse Encephalic Region Networks (ERNs) to collaboratively extract lesion-specific features. In addition, an S-F image enhancement module is incorporated to improve sensitivity to boundaries and fine structures, and multi-head self-attention is embedded in the encoder to strengthen global semantic modeling and regional coordination. Experiments on two lung cancer CT segmentation datasets and the Synapse multi-organ dataset demonstrate that Att-BrainNet outperforms existing mainstream segmentation models in both accuracy and generalization. Further ablation studies and mechanism visualizations confirm the effectiveness of the BrainNet architecture and its dynamic scheduling strategy. This research provides a novel structural paradigm for medical image segmentation and holds promise for extension to other complex segmentation scenarios.
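The TGM as described resembles a mixture-of-experts router over identical expert branches; the sketch below shows one plausible top-k gating layout, with all shapes, the expert design, and the pooling-based router being assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class ThalamusGate(nn.Module):
    """Sketch of a gating module that softly activates k of n structurally
    identical expert branches (the 'ERNs'); purely illustrative."""
    def __init__(self, dim: int, n_experts: int = 4, k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU())
            for _ in range(n_experts)
        )
        self.router = nn.Linear(dim, n_experts)  # scores from pooled features
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.router(x.mean(dim=(2, 3)))   # (B, n_experts)
        weights = torch.softmax(scores, dim=1)
        topw, topi = weights.topk(self.k, dim=1)   # keep k most relevant experts
        out = torch.zeros_like(x)
        for b in range(x.size(0)):
            for w_bk, i_bk in zip(topw[b], topi[b]):
                out[b] += w_bk * self.experts[int(i_bk)](x[b : b + 1])[0]
        return out

y = ThalamusGate(dim=32)(torch.randn(2, 32, 64, 64))
print(y.shape)  # torch.Size([2, 32, 64, 64])
```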

Takada S, Nakaura T, Yoshida N, Uetani H, Shiraishi K, Kobayashi N, Matsuo K, Morita K, Nagayama Y, Kidoh M, Yamashita Y, Takayanagi R, Hirai T

PubMed paper · Jul 5, 2025
To evaluate the effects of super-resolution deep learning-based reconstruction (SR-DLR) on thin-slice T2-weighted hippocampal MR image quality at 3 T, in both human volunteers and phantoms. Thirteen healthy volunteers underwent hippocampal MRI at standard and high resolutions. Original standard-resolution (StR) images were reconstructed with and without deep learning-based reconstruction (DLR) (matrix = 320 × 320) and with SR-DLR (matrix = 960 × 960). High-resolution (HR) images were also reconstructed with and without DLR (matrix = 960 × 960). Contrast, contrast-to-noise ratio (CNR), and septum slope were analyzed, and two radiologists evaluated the images for noise, contrast, artifacts, sharpness, and overall quality. Quantitative and qualitative results are reported as medians and interquartile ranges (IQR); comparisons used the Wilcoxon signed-rank test with Holm correction. We also scanned an American College of Radiology (ACR) phantom to evaluate the ability of our SR-DLR approach to reduce artifacts induced by zero-padding interpolation (ZIP). SR-DLR exhibited contrast comparable to the original images and significantly higher than that of HR images. Its septum slope was comparable to that of HR images but significantly steeper than that of StR images (p < 0.01). Furthermore, the CNR of SR-DLR (10.53; IQR: 10.08, 11.69) was significantly superior to that of StR images without DLR (7.5; IQR: 6.4, 8.37), StR images with DLR (8.73; IQR: 7.68, 9.0), HR images without DLR (2.24; IQR: 1.43, 2.38), and HR images with DLR (4.84; IQR: 2.99, 5.43) (p < 0.05). In the phantom study, artifacts induced by ZIP were scarcely observed with SR-DLR. SR-DLR for hippocampal MRI potentially improves image quality beyond that of actual HR images while reducing acquisition time.
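To illustrate the statistical procedure (paired Wilcoxon signed-rank tests with Holm correction across the four comparisons), here is a sketch with synthetic CNR values loosely matching the reported medians; it is not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired CNR readings for SR-DLR vs. each comparison reconstruction
rng = np.random.default_rng(0)
cnr_sr = rng.normal(10.5, 0.8, 13)               # 13 volunteers
others = {"StR":     rng.normal(7.5, 0.9, 13),
          "StR+DLR": rng.normal(8.7, 0.7, 13),
          "HR":      rng.normal(2.2, 0.5, 13),
          "HR+DLR":  rng.normal(4.8, 1.1, 13)}

pvals = {name: wilcoxon(cnr_sr, v).pvalue for name, v in others.items()}

# Holm correction: compare sorted p-values against alpha / (m - rank)
alpha, m = 0.05, len(pvals)
for rank, (name, p) in enumerate(sorted(pvals.items(), key=lambda kv: kv[1])):
    print(f"{name}: p={p:.4f}, Holm threshold={alpha / (m - rank):.4f}")
```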

Prucker P, Busch F, Dorfner F, Mertens CJ, Bayerl N, Makowski MR, Bressem KK, Adams LC

PubMed paper · Jul 5, 2025
Large language models (LLMs) show promise for generating patient-friendly radiology reports, but the performance of open-source versus proprietary LLMs needs assessment. This study compared open-source and proprietary LLMs in generating patient-friendly radiology reports from chest CTs, using quantitative readability metrics and qualitative assessments by radiologists. Fifty chest CT reports were processed by seven LLMs: three open-source models (Llama-3-70b, Mistral-7b, Mixtral-8x7b) and four proprietary models (GPT-4, GPT-3.5-Turbo, Claude-3-Opus, Gemini-Ultra). Simplification was evaluated using five quantitative readability metrics, and three radiologists rated patient-friendliness on a five-point Likert scale across five criteria. Content and coherence errors were counted, and inter-rater reliability and differences among models were statistically assessed. Inter-rater reliability was substantial to near perfect (κ = 0.76-0.86). Qualitatively, Llama-3-70b was non-inferior to the leading proprietary models in 4/5 categories. GPT-3.5-Turbo showed the best overall readability, outperforming GPT-4 on two metrics, while Llama-3-70b outperformed GPT-3.5-Turbo on the Coleman-Liau Index (CLI; p = 0.006). Claude-3-Opus and Gemini-Ultra scored lower on readability but were rated highly in the qualitative assessments, and Claude-3-Opus maintained perfect factual accuracy. Claude-3-Opus and GPT-4 outperformed Llama-3-70b in emotional sensitivity (90.0% vs 46.0%, p < 0.001). Llama-3-70b shows strong potential for generating high-quality, patient-friendly radiology reports, challenging proprietary models; with further adaptation, open-source LLMs could advance patient-friendly reporting technology.
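The CLI referenced above is a standard readability formula, 0.0588*L - 0.296*S - 15.8, with L letters and S sentences per 100 words; the sketch below computes it over a toy report, with the simple word/sentence regexes being assumptions.

```python
import re

def coleman_liau(text: str) -> float:
    """Coleman-Liau Index: estimated U.S. grade level of a text."""
    words = re.findall(r"[A-Za-z]+(?:'[A-Za-z]+)?", text)
    letters = sum(len(w) for w in words)
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    n = max(len(words), 1)
    L = letters / n * 100   # letters per 100 words
    S = sentences / n * 100  # sentences per 100 words
    return 0.0588 * L - 0.296 * S - 15.8

report = "The lungs are clear. There is no sign of fluid or infection."
print(f"CLI grade level: {coleman_liau(report):.1f}")
```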

Hu S, Liu Y, Wang R, Li X, Konofagou EE

PubMed paper · Jul 4, 2025
Harmonic Motion Imaging (HMI) is an ultrasound elasticity imaging method that measures the mechanical properties of tissue using amplitude-modulated acoustic radiation force (AM-ARF). Multi-frequency HMI (MF-HMI) excites tissue at several AM frequencies simultaneously, allowing image optimization without prior knowledge of inclusion size and stiffness. However, size estimation remains challenging, as inconsistent boundary effects yield different perceived sizes across AM frequencies. Herein, we developed an automated assessment method for tumors and focused ultrasound surgery (FUS)-induced lesions using a transformer-based multi-modality neural network, HMINet, and further automated prediction of neoadjuvant chemotherapy (NACT) response. HMINet was trained on 380 pairs of MF-HMI and B-mode images of phantoms and in vivo orthotopic breast cancer mice (4T1). Test datasets included phantoms (n = 32), in vivo 4T1 mice (n = 24), breast cancer patients (n = 20), and FUS-induced lesions in ex vivo animal tissue and in vivo clinical settings with real-time inference, with average segmentation accuracies (Dice) of 0.91, 0.83, 0.80, and 0.81, respectively. HMINet outperformed state-of-the-art models; we also demonstrated the enhanced robustness of the multi-modality strategy over B-mode-only input, both quantitatively through Dice scores and in terms of interpretability using saliency analysis. Ranking AM frequencies by their number of salient pixels showed that the most significant frequencies were 800 and 200 Hz across clinical cases. Overall, we developed an automated, multi-modality ultrasound-based method for tumor and FUS lesion assessment, facilitating the clinical translation of stiffness-based breast cancer treatment response prediction and real-time image-guided FUS therapy.
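The Dice scores quoted above are the standard overlap metric between predicted and reference masks; a minimal sketch for binary masks follows, with the toy square "lesions" purely illustrative.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return float((2.0 * inter + eps) / (pred.sum() + truth.sum() + eps))

# Toy example: two overlapping square 'lesion' masks
a = np.zeros((64, 64), dtype=bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), dtype=bool); b[15:45, 15:45] = True
print(f"Dice = {dice(a, b):.3f}")
```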