Page 3 of 99981 results

Spatial Prior-Guided Dual-Path Network for Thyroid Nodule Segmentation.

Pang C, Miao H, Zhang R, Liu Q, Lyu L

pubmed · Aug 12 2025
Accurate segmentation of thyroid nodules in ultrasound images is critical for clinical diagnosis but remains challenging due to low contrast and complex anatomical structures. Existing deep learning methods often rely solely on local nodule features, lacking anatomical prior knowledge of the thyroid region, which can result in misclassification of non-thyroid tissues, especially in low-quality scans. To address these issues, we propose a Spatial Prior-Guided Dual-Path Network that integrates a prior-aware encoder to model thyroid anatomical structures and a low-cost heterogeneous encoder to preserve fine-grained multi-scale features, enhancing both spatial detail and contextual awareness. To capture the diverse and irregular appearances of nodules, we design a CrossBlock module, which combines an efficient cross-attention mechanism with mixed-scale convolutional operations to enable global context modeling and local feature extraction. The network further employs a dual-decoder architecture, where one decoder learns thyroid region priors and the other focuses on accurate nodule segmentation. Gland-specific features are hierarchically refined and injected into the nodule decoder to enhance boundary delineation through anatomical guidance. Extensive experiments on the TN3K and MTNS datasets demonstrate that our method consistently outperforms state-of-the-art approaches, particularly in boundary precision and localization accuracy, offering practical value for preoperative planning and clinical decision-making.
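The CrossBlock couples global context modeling with local feature extraction via cross-attention; a minimal NumPy sketch of generic scaled dot-product cross-attention (not the paper's exact, efficiency-optimized module; token counts and dimensions are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: queries from one path
    attend to keys/values produced by the other path."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)  # (Nq, Nk) affinity matrix
    weights = softmax(scores, axis=-1)      # each query's weights sum to 1
    return weights @ values                 # (Nq, d) attended features

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))    # e.g. nodule-path tokens (illustrative)
k = rng.normal(size=(16, 8))   # e.g. gland-prior tokens (illustrative)
v = rng.normal(size=(16, 8))
out = cross_attention(q, k, v)
print(out.shape)  # (4, 8)
```

Each query token ends up as a convex combination of the other path's value tokens, which is how anatomical-prior features can be injected into the nodule path.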

The association of symptoms, pulmonary function test and computed tomography in interstitial lung disease at the onset of connective tissue disease: an observational study with artificial intelligence analysis of high-resolution computed tomography.

Hoffmann T, Teichgräber U, Brüheim LB, Lassen-Schmidt B, Renz D, Weise T, Krämer M, Oelzner P, Böttcher J, Güttler F, Wolf G, Pfeil A

pubmed · Aug 12 2025
Interstitial lung disease (ILD) is a common and serious organ manifestation in patients with connective tissue disease (CTD), but it is uncertain whether there is a difference in ILD between symptomatic and asymptomatic patients. Therefore, we conducted a study to evaluate differences in the radiological extent of ILD between symptomatic and asymptomatic patients, using an artificial intelligence (AI)-based quantification of pulmonary high-resolution computed tomography (AIqpHRCT). Within the study, 67 cross-sectional HRCT datasets and clinical data (including pulmonary function tests) of consecutive patients (mean age: 57.1 ± 14.7 years; women: n = 45, 67.2%) with both an initial diagnosis of CTD (systemic sclerosis being the most frequent: n = 21, 31.3%) and ILD (all without immunosuppressive therapy) were analysed using AIqpHRCT. 25.4% (n = 17) of the patients with ILD at initial diagnosis of CTD had no pulmonary symptoms. Regarding the baseline characteristics (age, gender, disease), there was no significant difference between the symptomatic and asymptomatic groups. The pulmonary function test (PFT) revealed the following mean values (% predicted) in the symptomatic and asymptomatic groups, respectively: forced vital capacity (FVC) 69.4 ± 17.4% versus 86.1 ± 15.8% (p = 0.001), and diffusing capacity of the lung for carbon monoxide (DLCO) 49.7 ± 17.9% versus 60.0 ± 15.8% (p = 0.043). AIqpHRCT data showed a significantly higher amount of high-attenuation volume (HAV) (14.8 ± 11.0% versus 8.9 ± 3.9%; p = 0.021) and reticulations (5.4 ± 8.7% versus 1.4 ± 1.5%; p = 0.035) in symptomatic patients. A quarter of patients with ILD at the time of initial CTD diagnosis had no pulmonary symptoms, although DLCO was reduced in both groups. AIqpHRCT also demonstrated clinically relevant ILD in asymptomatic patients.
These results underline the importance of early, risk-adapted screening for ILD in asymptomatic CTD patients as well, as ILD is associated with increased mortality.

MRI-derived quantification of hepatic vessel-to-volume ratios in chronic liver disease using a deep learning approach.

Herold A, Sobotka D, Beer L, Bastati N, Poetter-Lang S, Weber M, Reiberger T, Mandorfer M, Semmler G, Simbrunner B, Wichtmann BD, Ba-Ssalamah SA, Trauner M, Ba-Ssalamah A, Langs G

pubmed · Aug 12 2025
We aimed to quantify hepatic vessel volumes across chronic liver disease stages and healthy controls using deep learning-based magnetic resonance imaging (MRI) analysis, and to assess correlations with biomarkers of liver (dys)function and fibrosis/portal hypertension. We retrospectively assessed healthy controls and patients with non-advanced and advanced chronic liver disease (ACLD) using a 3D U-Net model for hepatic vessel segmentation on portal venous phase gadoxetic acid-enhanced 3-T MRI. Total (TVVR), hepatic (HVVR), and intrahepatic portal vein-to-volume ratios (PVVR) were compared between groups and correlated with markers of liver dysfunction (albumin-bilirubin (ALBI) score and model for end-stage liver disease-sodium (MELD-Na) score) and of fibrosis/portal hypertension (Fibrosis-4 (FIB-4) score, liver stiffness measurement (LSM), hepatic venous pressure gradient (HVPG), platelet count (PLT), and spleen volume). We included 197 subjects, aged 54.9 ± 13.8 years (mean ± standard deviation), 111 males (56.3%): 35 healthy controls, 44 non-ACLD, and 118 ACLD patients. TVVR and HVVR were highest in controls (3.9; 2.1), intermediate in non-ACLD (2.8; 1.7), and lowest in ACLD patients (2.3; 1.0) (p ≤ 0.001). PVVR was reduced in both non-ACLD and ACLD patients (both 1.2) compared to controls (1.7) (p ≤ 0.001), but showed no difference between CLD groups (p = 0.999). HVVR correlated inversely with FIB-4, ALBI, MELD-Na, LSM, and spleen volume (ρ ranging from -0.27 to -0.40), and directly with PLT (ρ = 0.36). TVVR and PVVR showed similar but weaker correlations. Deep learning-based hepatic vessel volumetry demonstrated differences between healthy liver and chronic liver disease stages and correlated with established markers of disease severity, potentially serving as a non-invasive imaging biomarker.
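The ρ values reported above are rank correlations; a minimal Spearman implementation without tie handling (the HVVR/LSM numbers below are toy values for illustration, not study data):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    No tie correction, so it assumes all values are distinct."""
    rx = np.argsort(np.argsort(x)).astype(float)  # rank of each element
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# toy example: vessel-to-volume ratio falling as liver stiffness rises
hvvr = np.array([2.1, 1.9, 1.6, 1.3, 1.0])
lsm = np.array([5.0, 7.2, 10.1, 18.0, 30.0])
print(spearman_rho(hvvr, lsm))  # -1.0 (perfectly monotone decreasing toy data)
```

Because the toy series are perfectly monotone in opposite directions, the ranks are exactly anti-correlated; real data like the ρ = -0.40 above reflect a weaker monotone trend.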
Deep learning-based vessel analysis can provide automated quantification of hepatic vascular changes across healthy liver and chronic liver disease stages. Automated quantification of hepatic vasculature shows significantly reduced hepatic vascular volume in advanced chronic liver disease compared to non-advanced disease and healthy liver. Decreased hepatic vascular volume, particularly in the hepatic venous system, correlates with markers of liver dysfunction, fibrosis, and portal hypertension.

Enhanced Liver Tumor Detection in CT Images Using 3D U-Net and Bat Algorithm for Hyperparameter Optimization

Nastaran Ghorbani, Bitasadat Jamshidi, Mohsen Rostamy-Malkhalifeh

arxiv preprint · Aug 11 2025
Liver cancer is one of the most prevalent and lethal forms of cancer, making early detection crucial for effective treatment. This paper introduces a novel approach for automated liver tumor segmentation in computed tomography (CT) images by integrating a 3D U-Net architecture with the Bat Algorithm for hyperparameter optimization. The method enhances segmentation accuracy and robustness by intelligently optimizing key parameters like the learning rate and batch size. Evaluated on a publicly available dataset, our model demonstrates a strong ability to balance precision and recall, with a high F1-score at lower prediction thresholds. This is particularly valuable for clinical diagnostics, where ensuring no potential tumors are missed is paramount. Our work contributes to the field of medical image analysis by demonstrating that the synergy between a robust deep learning architecture and a metaheuristic optimization algorithm can yield a highly effective solution for complex segmentation tasks.
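A minimal sketch of the Bat Algorithm loop used for this kind of hyperparameter search (greedy acceptance, with loudness/pulse-rate schedules folded into a fixed 50% local-walk probability; the surrogate loss over log learning rate and batch size is a toy stand-in, not the paper's objective):

```python
import numpy as np

def bat_algorithm(fitness, bounds, n_bats=20, n_iter=100, seed=0):
    """Minimal Bat Algorithm (after Yang, 2010): each bat carries a
    position and velocity, moves toward the current best solution with
    a random pulse frequency, and occasionally does a local random walk
    around the best. Improvements are accepted greedily."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    pos = rng.uniform(lo, hi, size=(n_bats, dim))
    vel = np.zeros((n_bats, dim))
    fit = np.array([fitness(p) for p in pos])
    best = pos[fit.argmin()].copy()
    for _ in range(n_iter):
        freq = rng.uniform(0, 2, size=(n_bats, 1))     # pulse frequencies
        vel += (pos - best) * freq                     # pull toward best
        cand = np.clip(pos + vel, lo, hi)
        walk = np.clip(best + 0.01 * rng.normal(size=(n_bats, dim)), lo, hi)
        cand = np.where(rng.random((n_bats, 1)) < 0.5, walk, cand)
        cand_fit = np.array([fitness(p) for p in cand])
        improved = cand_fit < fit
        pos[improved], fit[improved] = cand[improved], cand_fit[improved]
        best = pos[fit.argmin()].copy()
    return best, float(fit.min())

# toy surrogate: pretend validation loss is a bowl over (log10 lr, batch size)
loss = lambda p: (p[0] + 3.0) ** 2 + ((p[1] - 32.0) / 32.0) ** 2
best, val = bat_algorithm(loss, bounds=[(-5, -1), (8, 128)])
print(best, val)  # the toy surrogate's minimum sits at log10(lr) = -3, batch = 32
```

In the paper's setting the fitness call would be a full training-and-validation run, which is why metaheuristics that need only black-box evaluations are attractive here.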

Decoding fetal motion in 4D ultrasound with DeepLabCut.

Inubashiri E, Kaishi Y, Miyake T, Yamaguchi R, Hamaguchi T, Inubashiri M, Ota H, Watanabe Y, Deguchi K, Kuroki K, Maeda N

pubmed · Aug 11 2025
This study aimed to objectively and quantitatively analyze fetal motor behavior using DeepLabCut (DLC), a markerless posture estimation tool based on deep learning, applied to four-dimensional ultrasound (4DUS) data collected during the second trimester. We propose a novel clinical method for precise assessment of fetal neurodevelopment. Fifty 4DUS video recordings of normal singleton fetuses aged 12 to 22 gestational weeks were analyzed. Eight fetal joints were manually labeled in 2% of each video to train a customized DLC model. The model's accuracy was evaluated using likelihood scores. Intra- and inter-rater reliability of manual labeling were assessed using intraclass correlation coefficients (ICC). Angular velocity time series derived from joint coordinates were analyzed to quantify fetal movement patterns and developmental coordination. Manual labeling demonstrated excellent reproducibility (inter-rater ICC = 0.990, intra-rater ICC = 0.961). The trained DLC model achieved a mean likelihood score of 0.960, confirming high tracking accuracy. Kinematic analysis revealed developmental trends: localized rapid limb movements were common at 12-13 weeks; movements became more coordinated and systemic by 18-20 weeks, reflecting advancing neuromuscular maturation. Although a modest increase in tracking accuracy was observed with gestational age, this trend did not reach statistical significance (p < 0.001). DLC enables precise quantitative analysis of fetal motor behavior from 4DUS recordings. This AI-driven approach offers a promising, noninvasive alternative to conventional qualitative assessments, providing detailed insights into early fetal neurodevelopmental trajectories and potential early screening for neurodevelopmental disorders.
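The angular velocity time series described above can be derived from tracked keypoints in a few lines; a sketch assuming 2D joint coordinates per frame (the joint names and the toy circular trajectory are illustrative, not DLC output):

```python
import numpy as np

def joint_angles(a, b, c):
    """Angle at joint b (radians) per frame, from 2D keypoint tracks
    a, b, c of shape (frames, 2) -- e.g. shoulder, elbow, wrist."""
    u, v = a - b, c - b
    cos = (u * v).sum(axis=1) / (
        np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def angular_velocity(angles, fps):
    """Frame-to-frame angular velocity (rad/s)."""
    return np.diff(angles) * fps

t = np.linspace(0, 1, 25)                   # 1 second at 25 fps
shoulder = np.zeros((25, 2))
elbow = np.tile([1.0, 0.0], (25, 1))
wrist = elbow + np.stack([np.cos(t), np.sin(t)], axis=1)  # forearm sweeps 1 rad
ang = joint_angles(shoulder, elbow, wrist)
vel = angular_velocity(ang, fps=25)
print(vel.shape)  # (24,)
```

Summary statistics of such series (peak speed, burst frequency, cross-joint correlation) are the kind of features used to quantify developmental coordination.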

Neonatal neuroimaging: from research to bedside practice.

Cizmeci MN, El-Dib M, de Vries LS

pubmed · Aug 11 2025
Neonatal neuroimaging is essential in research and clinical practice, offering important insights into brain development and neurologic injury mechanisms. Visualizing the brain enables researchers and clinicians to improve neonatal care and parental counselling through better diagnosis and prognostication of disease. Common neuroimaging modalities used in the neonatal intensive care unit (NICU) are cranial ultrasonography (cUS) and magnetic resonance imaging (MRI). Between these modalities, conventional MRI provides the optimal image resolution and detail about the developing brain, while advanced MRI techniques allow for the evaluation of tissue microstructure and functional networks. Over the last two decades, medical imaging techniques using brain MRI have rapidly progressed, and these advances have facilitated high-quality extraction of quantitative features as well as the implementation of novel devices for use in neurological disorders. Major advancements encompass the use of low-field dedicated MRI systems within the NICU and trials of ultralow-field portable MRI systems at the bedside. Additionally, higher-field magnets are utilized to enhance image quality, and ultrafast brain MRI is employed to decrease image acquisition time. Furthermore, the implementation of advanced MRI sequences, the application of machine learning algorithms, multimodal neuroimaging techniques, motion correction techniques, and novel modalities are used to visualize pathologies that are not visible to the human eye. In this narrative review, we will discuss the fundamentals of these neuroimaging modalities, and their clinical applications to explore the present landscape of neonatal neuroimaging from bench to bedside.

Automated Prediction of Bone Volume Removed in Mastoidectomy.

Nagururu NV, Ishida H, Ding AS, Ishii M, Unberath M, Taylor RH, Munawar A, Sahu M, Creighton FX

pubmed · Aug 11 2025
The bone volume drilled by surgeons during mastoidectomy is determined by the need to localize the position, optimize the view, and reach the surgical endpoint while avoiding critical structures. Predicting the volume of bone removed before an operation can significantly enhance surgical training by providing precise, patient-specific guidance and enable the development of more effective computer-assisted and robotic surgical interventions. Study design: single-institution, cross-sectional. Setting: virtual reality (VR) simulation. We developed a deep learning pipeline to automate the prediction of bone volume removed during mastoidectomy using data from virtual reality mastoidectomy simulations. The data set included 15 deidentified temporal bone computed tomography scans. The network was evaluated using fivefold cross-validation, comparing predicted and actual bone removal with metrics such as the Dice score (DSC) and Hausdorff distance (HD). Our method achieved a median DSC of 0.775 (interquartile range [IQR]: 0.725-0.810) and a median HD of 0.492 mm (IQR: 0.298-0.757 mm). Predictions reached the mastoidectomy endpoint of visualizing the horizontal canal and incus in 80% (12/15) of temporal bones. Qualitative analysis indicated that predictions typically produced realistic mastoidectomy endpoints, though some cases showed excessive or insufficient bone removal, particularly at the temporal bone cortex and tegmen mastoideum. This study establishes a foundational step in using deep learning to predict bone volume removal during mastoidectomy. The results indicate that learning-based methods can reasonably approximate the surgical endpoint of mastoidectomy. Further refinement with larger, more diverse data sets and improved model architectures will be essential for enhancing prediction accuracy.
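The two evaluation metrics, Dice score (DSC) and Hausdorff distance (HD), can be computed directly from binary masks; a small NumPy sketch (brute-force HD, fine for toy masks but quadratic in voxel count):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def hausdorff(pred, gt):
    """Symmetric Hausdorff distance between the voxel coordinates
    of two masks (in voxel units; scale by spacing for mm)."""
    a, b = np.argwhere(pred), np.argwhere(gt)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # all pairs
    return max(d.min(axis=1).max(), d.min(axis=0).max())

gt = np.zeros((8, 8), bool); gt[2:6, 2:6] = True      # 4x4 square
pred = np.zeros((8, 8), bool); pred[3:7, 3:7] = True  # same square shifted (1, 1)
print(dice_score(pred, gt))  # 0.5625
print(hausdorff(pred, gt))   # sqrt(2), from the two non-overlapping corners
```

A one-voxel diagonal shift already drops Dice well below 1 while keeping HD small, which is why the paper reports both: Dice measures overlap, HD measures the worst boundary error.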

MedReasoner: Reinforcement Learning Drives Reasoning Grounding from Clinical Thought to Pixel-Level Precision

Zhonghao Yan, Muxi Diao, Yuxuan Yang, Jiayuan Xu, Kaizhou Zhang, Ruoyan Jing, Lele Yang, Yanxi Liu, Kongming Liang, Zhanyu Ma

arxiv preprint · Aug 11 2025
Accurately grounding regions of interest (ROIs) is critical for diagnosis and treatment planning in medical imaging. While multimodal large language models (MLLMs) combine visual perception with natural language, current medical-grounding pipelines still rely on supervised fine-tuning with explicit spatial hints, making them ill-equipped to handle the implicit queries common in clinical practice. This work makes three core contributions. We first define Unified Medical Reasoning Grounding (UMRG), a novel vision-language task that demands clinical reasoning and pixel-level grounding. Second, we release U-MRG-14K, a dataset of 14K samples featuring pixel-level masks alongside implicit clinical queries and reasoning traces, spanning 10 modalities, 15 super-categories, and 108 specific categories. Finally, we introduce MedReasoner, a modular framework that distinctly separates reasoning from segmentation: an MLLM reasoner is optimized with reinforcement learning, while a frozen segmentation expert converts spatial prompts into masks, with alignment achieved through format and accuracy rewards. MedReasoner achieves state-of-the-art performance on U-MRG-14K and demonstrates strong generalization to unseen clinical queries, underscoring the significant promise of reinforcement learning for interpretable medical grounding.
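The format-plus-accuracy reward scheme can be illustrated with a toy scoring function; the `<point>` tag, the box-IoU accuracy term, and the 50/50 weighting are assumptions for illustration, not the paper's actual reward definition:

```python
import re

def format_reward(answer):
    """1.0 if the model emitted its spatial prompt in the expected
    <point>x,y</point> format (hypothetical tag), else 0.0."""
    return 1.0 if re.fullmatch(r"<point>\d+,\d+</point>", answer.strip()) else 0.0

def accuracy_reward(pred_box, gt_box):
    """IoU between predicted and ground-truth boxes (x1, y1, x2, y2)."""
    x1, y1 = max(pred_box[0], gt_box[0]), max(pred_box[1], gt_box[1])
    x2, y2 = min(pred_box[2], gt_box[2]), min(pred_box[3], gt_box[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(pred_box) + area(gt_box) - inter
    return inter / union if union else 0.0

# combined RL reward: well-formed spatial prompt + localization quality
reward = 0.5 * format_reward("<point>40,60</point>") \
       + 0.5 * accuracy_reward((0, 0, 10, 10), (5, 5, 15, 15))
print(reward)
```

Separating a parse-checkable format term from a geometric accuracy term gives the RL optimizer a dense signal even when localization is still poor, which matches the framework's split between the MLLM reasoner and the frozen segmentation expert.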

Ratio of visceral-to-subcutaneous fat area improves long-term mortality prediction over either measure alone: automated CT-based AI measures with longitudinal follow-up in a large adult cohort.

Liu D, Kuchnia AJ, Blake GM, Lee MH, Garrett JW, Pickhardt PJ

pubmed · Aug 11 2025
Fully automated AI-based algorithms can quantify adipose tissue on abdominal CT images. The aim of this study was to investigate the clinical value of these biomarkers by determining the association between adipose tissue measures and all-cause mortality. This retrospective study included 151,141 patients who underwent abdominal CT for any reason between 2000 and 2021. A validated AI-based algorithm quantified subcutaneous (SAT) and visceral (VAT) adipose tissue cross-sectional areas. A visceral-to-subcutaneous adipose tissue area ratio (VSR) was calculated. Clinical data (age at the time of CT, sex, date of death, date of last contact) were obtained from a database search of the electronic health record. Hazard ratios (HR) and Kaplan-Meier curves assessed the relationship between adipose tissue measures and mortality. The endpoint of interest was all-cause mortality, with additional subgroup analysis including age and gender. 138,169 patients were included in the final analysis. Higher VSR was associated with increased mortality; this association was strongest in younger women (highest compared to lowest risk quartile HR 3.32 in 18-39y). Lower SAT was associated with increased mortality regardless of sex or age group (HR up to 1.63 in 18-39y). Higher VAT was associated with increased mortality in younger age groups, with the trend weakening and reversing with age; this association was stronger in women. AI-based CT measures of SAT, VAT, and VSR are predictive of mortality, with VSR being the highest performing fat area biomarker overall. These metrics tended to perform better for women and younger patients. Incorporating AI tools can augment patient assessment and management, improving outcomes.
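The VSR biomarker and its risk-quartile grouping are straightforward to compute once the AI tool has produced the two fat areas; a sketch with an illustrative toy cohort (the study's actual quartile cut-points are not given here):

```python
import numpy as np

def vsr(vat_area, sat_area):
    """Visceral-to-subcutaneous adipose tissue area ratio from CT (cm^2)."""
    return vat_area / sat_area

def risk_quartile(value, cohort_values):
    """Assign a 1-4 risk quartile of `value` within a cohort distribution."""
    q1, q2, q3 = np.percentile(cohort_values, [25, 50, 75])
    return 1 + (value > q1) + (value > q2) + (value > q3)

# toy cohort of VSR values; real cut-points would come from the study cohort
cohort = np.array([0.3, 0.5, 0.6, 0.8, 1.0, 1.2, 1.5, 2.0])
patient_vsr = vsr(vat_area=180.0, sat_area=150.0)  # 1.2
print(patient_vsr, risk_quartile(patient_vsr, cohort))
```

The hazard ratios above (e.g. HR 3.32) compare the highest such quartile against the lowest within each age/sex subgroup.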

Deep Learning-Based Desikan-Killiany Parcellation of the Brain Using Diffusion MRI

Yousef Sadegheih, Dorit Merhof

arxiv preprint · Aug 11 2025
Accurate brain parcellation in diffusion MRI (dMRI) space is essential for advanced neuroimaging analyses. However, most existing approaches rely on anatomical MRI for segmentation and inter-modality registration, a process that can introduce errors and limit the versatility of the technique. In this study, we present a novel deep learning-based framework for direct parcellation based on the Desikan-Killiany (DK) atlas using only diffusion MRI data. Our method utilizes a hierarchical, two-stage segmentation network: the first stage performs coarse parcellation into broad brain regions, and the second stage refines the segmentation to delineate more detailed subregions within each coarse category. We conduct an extensive ablation study to evaluate various diffusion-derived parameter maps, identifying an optimal combination of fractional anisotropy, trace, sphericity, and maximum eigenvalue that enhances parcellation accuracy. When evaluated on the Human Connectome Project and Consortium for Neuropsychiatric Phenomics datasets, our approach achieves superior Dice Similarity Coefficients compared to existing state-of-the-art models. Additionally, our method demonstrates robust generalization across different image resolutions and acquisition protocols, producing more homogeneous parcellations as measured by the relative standard deviation within regions. This work represents a significant advancement in dMRI-based brain segmentation, providing a precise, reliable, and registration-free solution that is critical for improved structural connectivity and microstructural analyses in both research and clinical applications. The implementation of our method is publicly available on github.com/xmindflow/DKParcellationdMRI.
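The homogeneity measure mentioned above, relative standard deviation within regions, can be sketched as follows (toy fractional anisotropy values and labels, not the paper's data):

```python
import numpy as np

def region_rsd(values, labels):
    """Relative standard deviation (std / mean) of a parameter map
    within each parcellation label; lower means more homogeneous."""
    return {r: float(values[labels == r].std() / values[labels == r].mean())
            for r in np.unique(labels)}

fa = np.array([0.30, 0.32, 0.31, 0.70, 0.72, 0.71])  # toy FA values per voxel
labels = np.array([1, 1, 1, 2, 2, 2])                # two toy parcels
print(region_rsd(fa, labels))
```

A parcellation that respects tissue boundaries groups voxels with similar diffusion values, so its per-region RSD stays low; leakage across boundaries inflates it.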