Page 167 of 3593587 results

AI-Driven segmentation and morphogeometric profiling of epicardial adipose tissue in type 2 diabetes.

Feng F, Hasaballa AI, Long T, Sun X, Fernandez J, Carlhäll CJ, Zhao J

PubMed · Jul 18 2025
Epicardial adipose tissue (EAT) is associated with cardiometabolic risk in type 2 diabetes (T2D), but its spatial distribution and structural alterations remain understudied. We aim to develop a shape-aware, AI-based method for automated segmentation and morphogeometric analysis of EAT in T2D. A total of 90 participants (45 with T2D and 45 age- and sex-matched controls) underwent cardiac 3D Dixon MRI, enrolled between 2014 and 2018 as part of a sub-study of the Swedish SCAPIS cohort. We developed EAT-Seg, a multi-modal deep learning model incorporating signed distance maps (SDMs) for shape-aware segmentation. Segmentation performance was evaluated using the Dice similarity coefficient (DSC), the 95% Hausdorff distance (HD95), and the average symmetric surface distance (ASSD). Statistical shape analysis combined with partial least squares discriminant analysis (PLS-DA) was applied to point cloud representations of EAT to capture latent spatial variations between groups. Morphogeometric features, including volume, 3D local thickness map, elongation and fragmentation index, were extracted and correlated with PLS-DA latent variables using Pearson correlation. Highly correlated features were identified as key differentiators and evaluated using a Random Forest classifier. EAT-Seg achieved a DSC of 0.881, an HD95 of 3.213 mm, and an ASSD of 0.602 mm. Statistical shape analysis revealed spatial distribution differences in EAT between the T2D and control groups. Morphogeometric feature analysis identified volume and thickness gradient-related features as key discriminators (r > 0.8, P < 0.05). Random Forest classification achieved an AUC of 0.703. This AI-based framework enables accurate segmentation of structurally complex EAT and reveals key morphogeometric differences associated with T2D, supporting its potential as a biomarker for cardiometabolic risk assessment.
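The abstract reports overlap quality via the Dice similarity coefficient (0.881 here). As an illustrative aside, not the authors' code, the DSC between two binary masks can be sketched in NumPy; the function name and toy masks below are hypothetical:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) on binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 2D example: two 4x4 squares overlapping in a 2x2 region.
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True   # 16 pixels
b = np.zeros((10, 10), dtype=bool); b[4:8, 4:8] = True   # 16 pixels
# overlap = 4 pixels -> DSC = 2*4 / (16+16) = 0.25
```

The same formula applies voxel-wise in 3D; HD95 and ASSD are surface-distance metrics and need a boundary extraction step not shown here.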

Accuracy and Time Efficiency of Artificial Intelligence-Driven Tooth Segmentation on CBCT Images: A Validation Study Using Two Implant Planning Software Programs.

Ntovas P, Sirirattanagool P, Asavanamuang P, Jain S, Tavelli L, Revilla-León M, Galarraga-Vinueza ME

PubMed · Jul 18 2025
To assess the accuracy and time efficiency of manual versus artificial intelligence (AI)-driven tooth segmentation on cone-beam computed tomography (CBCT) images, using AI tools integrated within implant planning software, and to evaluate the impact of artifacts, dental arch, tooth type, and region. Fourteen patients who underwent CBCT scans were randomly selected for this study. Using the acquired datasets, 67 extracted teeth were segmented using one manual and two AI-driven tools. The segmentation time for each method was recorded. The extracted teeth were scanned with an intraoral scanner to serve as the reference. The virtual models generated by each segmentation method were superimposed with the surface scan models to calculate volumetric discrepancies. The discrepancy between the evaluated AI-driven and manual segmentation methods ranged from 0.10 to 0.98 mm, with a mean RMS of 0.27 (0.11) mm. Manual segmentation resulted in lower RMS deviation than both AI-driven methods (CDX; BSB) (p < 0.05). Significant differences were observed between all investigated segmentation methods, both for the overall tooth area and each region, with the apical portion of the root showing the lowest accuracy (p < 0.05). Tooth type did not have a significant effect on segmentation (p > 0.05). Both AI-driven segmentation methods reduced segmentation time compared to manual segmentation (p < 0.05). AI-driven segmentation can generate reliable virtual 3D tooth models, with accuracy comparable to that of manual segmentation performed by experienced clinicians, while also significantly improving time efficiency. To further enhance accuracy in cases involving restoration artifacts, continued development and optimization of AI-driven tooth segmentation models are necessary.
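The RMS figures above summarize per-point deviations between a segmented model and the reference surface scan. A minimal sketch of that statistic, assuming the two surfaces have already been registered and sampled into corresponding point sets (the names and toy coordinates are hypothetical):

```python
import numpy as np

def rms_deviation(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Root-mean-square of per-point Euclidean distances between two
    corresponding (n x 3) point sets sampled from aligned surfaces."""
    d = np.linalg.norm(points_a - points_b, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# Two corresponding points deviating by 0.3 mm and 0.4 mm respectively.
seg = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
ref = np.array([[0.3, 0.0, 0.0], [1.0, 0.4, 0.0]])
# RMS = sqrt((0.09 + 0.16) / 2) ≈ 0.354 mm
```

In practice the superimposition step (iterative closest point or similar) establishes the correspondences before any distance is computed.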

DUSTrack: Semi-automated point tracking in ultrasound videos

Praneeth Namburi, Roger Pallarès-López, Jessica Rosendorf, Duarte Folgado, Brian W. Anthony

arXiv preprint · Jul 18 2025
Ultrasound technology enables safe, non-invasive imaging of dynamic tissue behavior, making it a valuable tool in medicine, biomechanics, and sports science. However, accurately tracking tissue motion in B-mode ultrasound remains challenging due to speckle noise, low edge contrast, and out-of-plane movement. These challenges complicate the task of tracking anatomical landmarks over time, which is essential for quantifying tissue dynamics in many clinical and research applications. This manuscript introduces DUSTrack (Deep learning and optical flow-based toolkit for UltraSound Tracking), a semi-automated framework for tracking arbitrary points in B-mode ultrasound videos. We combine deep learning with optical flow to deliver high-quality and robust tracking across diverse anatomical structures and motion patterns. The toolkit includes a graphical user interface that streamlines the generation of high-quality training data and supports iterative model refinement. It also implements a novel optical-flow-based filtering technique that reduces high-frequency frame-to-frame noise while preserving rapid tissue motion. DUSTrack demonstrates superior accuracy compared to contemporary zero-shot point trackers and performs on par with specialized methods, establishing its potential as a general and foundational tool for clinical and biomechanical research. We demonstrate DUSTrack's versatility through three use cases: cardiac wall motion tracking in echocardiograms, muscle deformation analysis during reaching tasks, and fascicle tracking during ankle plantarflexion. As an open-source solution, DUSTrack offers a powerful, flexible framework for point tracking to quantify tissue motion from ultrasound videos. DUSTrack is available at https://github.com/praneethnamburi/DUSTrack.
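DUSTrack's filtering is described as a novel optical-flow-based technique; the details are in the paper and repository. Purely to illustrate the general idea of suppressing high-frequency frame-to-frame jitter in a tracked point trajectory, here is a plain moving-average low-pass filter (a deliberate simplification, not DUSTrack's method; all names are hypothetical):

```python
import numpy as np

def smooth_trajectory(track: np.ndarray, window: int = 5) -> np.ndarray:
    """Moving-average low-pass filter over a (frames, 2) point track.
    Edge padding keeps the output the same length as the input.
    `window` should be odd so the filter is centered."""
    kernel = np.ones(window) / window
    pad = window // 2
    out = np.empty_like(track, dtype=float)
    for dim in range(track.shape[1]):
        padded = np.pad(track[:, dim], pad, mode="edge")
        out[:, dim] = np.convolve(padded, kernel, mode="valid")
    return out
```

A fixed-window average like this blurs genuinely rapid motion, which is exactly the trade-off the abstract says the optical-flow-based filter avoids.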

UGPL: Uncertainty-Guided Progressive Learning for Evidence-Based Classification in Computed Tomography

Shravan Venkatraman, Pavan Kumar S, Rakesh Raj Madavan, Chandrakala S

arXiv preprint · Jul 18 2025
Accurate classification of computed tomography (CT) images is essential for diagnosis and treatment planning, but existing methods often struggle with the subtle and spatially diverse nature of pathological features. Current approaches typically process images uniformly, limiting their ability to detect localized abnormalities that require focused analysis. We introduce UGPL, an uncertainty-guided progressive learning framework that performs a global-to-local analysis by first identifying regions of diagnostic ambiguity and then conducting detailed examination of these critical areas. Our approach employs evidential deep learning to quantify predictive uncertainty, guiding the extraction of informative patches through a non-maximum suppression mechanism that maintains spatial diversity. This progressive refinement strategy, combined with an adaptive fusion mechanism, enables UGPL to integrate both contextual information and fine-grained details. Experiments across three CT datasets demonstrate that UGPL consistently outperforms state-of-the-art methods, achieving improvements of 3.29%, 2.46%, and 8.08% in accuracy for kidney abnormality, lung cancer, and COVID-19 detection, respectively. Our analysis shows that the uncertainty-guided component provides substantial benefits, with performance dramatically increasing when the full progressive learning pipeline is implemented. Our code is available at: https://github.com/shravan-18/UGPL
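UGPL quantifies predictive uncertainty with evidential deep learning. In the common evidential formulation (which the paper's exact variant may refine), a network outputs non-negative per-class evidence, which parameterizes a Dirichlet distribution; the "vacuity" uncertainty is u = K / S, where K is the number of classes and S the Dirichlet strength. A small sketch, with hypothetical names:

```python
import numpy as np

def evidential_uncertainty(evidence: np.ndarray):
    """Map non-negative per-class evidence to expected class
    probabilities and a scalar vacuity uncertainty u = K / S,
    where alpha = evidence + 1 and S = sum(alpha)."""
    alpha = evidence + 1.0
    strength = alpha.sum()
    probs = alpha / strength
    uncertainty = evidence.shape[0] / strength
    return probs, uncertainty

# No evidence for any of 3 classes -> uniform probabilities, u = 1.
p, u = evidential_uncertainty(np.zeros(3))
# Strong evidence for class 0 -> confident prediction, low u.
p2, u2 = evidential_uncertainty(np.array([10.0, 0.0, 0.0]))
```

A per-region uncertainty of this kind is what would let a framework like UGPL rank image regions for the detailed local analysis described above.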

OrthoInsight: Rib Fracture Diagnosis and Report Generation Based on Multi-Modal Large Models

Ningyong Wu, Jinzhi Wang, Wenhong Zhao, Chenzhan Yu, Zhigang Xiu, Duwei Dai

arXiv preprint · Jul 18 2025
The growing volume of medical imaging data has increased the need for automated diagnostic tools, especially for musculoskeletal injuries like rib fractures, commonly detected via CT scans. Manual interpretation is time-consuming and error-prone. We propose OrthoInsight, a multi-modal deep learning framework for rib fracture diagnosis and report generation. It integrates a YOLOv9 model for fracture detection, a medical knowledge graph for retrieving clinical context, and a fine-tuned LLaVA language model for generating diagnostic reports. OrthoInsight combines visual features from CT images with expert textual data to deliver clinically useful outputs. Evaluated on 28,675 annotated CT images and expert reports, it achieves high performance across Diagnostic Accuracy, Content Completeness, Logical Coherence, and Clinical Guidance Value, with an average score of 4.28, outperforming models like GPT-4 and Claude-3. This study demonstrates the potential of multi-modal learning in transforming medical image analysis and providing effective support for radiologists.

Cross-modal Causal Intervention for Alzheimer's Disease Prediction

Yutao Jin, Haowen Xiao, Jielei Chu, Fengmao Lv, Yuxiao Li, Tianrui Li

arXiv preprint · Jul 18 2025
Mild Cognitive Impairment (MCI) serves as a prodromal stage of Alzheimer's Disease (AD), where early identification and intervention can effectively slow the progression to dementia. However, diagnosing AD remains a significant challenge in neurology due to confounders caused mainly by the selection bias of multimodal data and the complex relationships between variables. To address these issues, we propose a novel visual-language causal intervention framework named Alzheimer's Disease Prediction with Cross-modal Causal Intervention (ADPC) for diagnostic assistance. Our ADPC employs a large language model (LLM) to summarize clinical data under strict templates, maintaining structured text outputs even with incomplete or unevenly distributed datasets. The ADPC model utilizes Magnetic Resonance Imaging (MRI), functional MRI (fMRI) images and the LLM-generated textual data to classify participants into Cognitively Normal (CN), MCI, and AD categories. Because of the presence of confounders, such as neuroimaging artifacts and age-related biomarkers, non-causal models are likely to capture spurious input-output correlations, generating less reliable results. Our framework implicitly eliminates confounders through causal intervention. Experimental results demonstrate the outstanding performance of our method in distinguishing CN/MCI/AD cases, achieving state-of-the-art (SOTA) results on most evaluation metrics. The study showcases the potential of integrating causal reasoning with multi-modal learning for neurological disease diagnosis.

Software architecture and manual for novel versatile CT image analysis toolbox -- AnatomyArchive

Lei Xu, Torkel B Brismar

arXiv preprint · Jul 18 2025
We have developed a novel CT image analysis package named AnatomyArchive, built on top of the recent full-body segmentation model TotalSegmentator. It provides automatic target volume selection and deselection according to user-configured anatomies serving as volumetric upper and lower bounds. It includes a knowledge-graph-based, time-efficient tool for anatomy segmentation mask management and medical image database maintenance. AnatomyArchive enables automatic body volume cropping, as well as automatic arm detection and exclusion, for more precise body composition analysis in both 2D and 3D formats. It provides robust voxel-based radiomic feature extraction, feature visualization, and an integrated toolchain for statistical tests and analysis. A Python-based, GPU-accelerated cinematic rendering module that composites segmentation masks into nearly photo-realistic visualizations is also included. We present here its software architecture design, illustrate its workflow and the working principles of its algorithms, and provide a few examples of how the software can be used to assist the development of modern machine learning models. Open-source code will be released at https://github.com/lxu-medai/AnatomyArchive for research and educational purposes only.
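The automatic body volume cropping described above typically amounts to restricting the CT volume to the bounding box of a segmentation mask before any composition analysis. This is not AnatomyArchive's API, just an illustrative NumPy sketch of that preprocessing step, with hypothetical names:

```python
import numpy as np

def crop_to_mask(volume: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Crop a CT volume to the axis-aligned bounding box of a binary
    anatomy mask, a common step before body-composition analysis."""
    coords = np.argwhere(mask)           # indices of nonzero voxels
    lo = coords.min(axis=0)
    hi = coords.max(axis=0) + 1          # exclusive upper bound
    slices = tuple(slice(l, h) for l, h in zip(lo, hi))
    return volume[slices]

# Toy 3x3x3 volume with a single-voxel mask at (1, 1, 1).
vol = np.arange(27).reshape(3, 3, 3)
msk = np.zeros_like(vol)
msk[1, 1, 1] = 1
cropped = crop_to_mask(vol, msk)         # 1x1x1 sub-volume
```

In a real pipeline the mask would come from a model such as TotalSegmentator, and arm exclusion would subtract arm masks before the bounding box is computed.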

Divide and Conquer: A Large-Scale Dataset and Model for Left-Right Breast MRI Segmentation

Maximilian Rokuss, Benjamin Hamm, Yannick Kirchhoff, Klaus Maier-Hein

arXiv preprint · Jul 18 2025
We introduce the first publicly available breast MRI dataset with explicit left and right breast segmentation labels, encompassing more than 13,000 annotated cases. Alongside this dataset, we provide a robust deep-learning model trained for left-right breast segmentation. This work addresses a critical gap in breast MRI analysis and offers a valuable resource for the development of advanced tools in women's health. The dataset and trained model are publicly available at: www.github.com/MIC-DKFZ/BreastDivider

Converting T1-weighted MRI from 3T to 7T quality using deep learning

Malo Gicquel, Ruoyi Zhao, Anika Wuestefeld, Nicola Spotorno, Olof Strandberg, Kalle Åström, Yu Xiao, Laura EM Wisse, Danielle van Westen, Rik Ossenkoppele, Niklas Mattsson-Carlgren, David Berron, Oskar Hansson, Gabrielle Flood, Jacob Vogel

arXiv preprint · Jul 18 2025
Ultra-high resolution 7 tesla (7T) magnetic resonance imaging (MRI) provides detailed anatomical views, offering better signal-to-noise ratio, resolution and tissue contrast than 3T MRI, though at the cost of accessibility. We present an advanced deep learning model for synthesizing 7T brain MRI from 3T brain MRI. Paired 7T and 3T T1-weighted images were acquired from 172 participants (124 cognitively unimpaired, 48 impaired) from the Swedish BioFINDER-2 study. To synthesize 7T MRI from 3T images, we trained two models: a specialized U-Net, and a U-Net integrated with a generative adversarial network (GAN U-Net). Our models outperformed two additional state-of-the-art 3T-to-7T models in image-based evaluation metrics. Four blinded MRI professionals judged our synthetic 7T images as comparable in detail to real 7T images, and superior in subjective visual quality to the real 7T images, apparently due to the reduction of artifacts. Importantly, automated segmentations of the amygdalae in synthetic GAN U-Net 7T images were more similar to manually segmented amygdalae (n=20) than automated segmentations from the 3T images that were used to synthesize the 7T images. Finally, synthetic 7T images showed similar performance to real 3T images in downstream prediction of cognitive status using MRI derivatives (n=3,168). In all, we show that synthetic T1-weighted brain images approaching 7T quality can be generated from 3T images, which may improve image quality and segmentation without compromising performance in downstream tasks. Future directions, possible clinical use cases, and limitations are discussed.

The application of super-resolution ultrasound radiomics models in predicting the failure of conservative treatment for ectopic pregnancy.

Zhang M, Sheng J

PubMed · Jul 17 2025
Conservative treatment remains a viable option for selected patients with ectopic pregnancy (EP), but failure may lead to rupture and serious complications. Currently, serum β-hCG is the main predictor of treatment outcomes, yet its accuracy is limited. This study aimed to develop and validate a predictive model that integrates radiomic features derived from super-resolution (SR) ultrasound images with clinical biomarkers to improve risk stratification. A total of 228 patients with EP receiving conservative treatment were retrospectively included, with 169 classified as treatment success and 59 as failure. SR images were generated using a deep learning-based generative adversarial network (GAN). Radiomic features were extracted from both normal-resolution (NR) and SR ultrasound images. Features with intraclass correlation coefficient (ICC) ≥ 0.75 were retained after intra- and inter-observer evaluation. Feature selection involved statistical testing and Least Absolute Shrinkage and Selection Operator (LASSO) regression. Random forest algorithms were used to construct the NR and SR models. A clinical model based on serum β-hCG was also developed. The Clin-SR model was constructed by fusing SR radiomics with β-hCG values. Model performance was evaluated using area under the curve (AUC), calibration, and decision curve analysis (DCA). An independent temporal validation cohort (n = 40; 20 failures, 20 successes) was used for validation of the nomogram derived from the Clin-SR model. The SR model significantly outperformed the NR model in the test cohort (AUC: 0.791 ± 0.015 vs. 0.629 ± 0.083). In a representative iteration, the Clin-SR fusion model achieved an AUC of 0.870 ± 0.015, with good calibration and net clinical benefit, suggesting reliable performance in predicting conservative treatment failure. In the independent validation cohort, the nomogram demonstrated good generalizability with an AUC of 0.808 and consistent calibration across risk thresholds. Key contributing radiomic features included Gray Level Variance and Voxel Volume, reflecting lesion heterogeneity and size. The Clin-SR model, which integrates deep learning-enhanced SR ultrasound radiomics with serum β-hCG, offers a robust and non-invasive tool for predicting conservative treatment failure in ectopic pregnancy. This multimodal approach enhances early risk stratification and supports personalized clinical decision-making, potentially reducing overtreatment and emergency interventions.
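The abstract's pipeline, sparse feature selection followed by a random forest evaluated by AUC, is a common radiomics pattern. Below is a generic scikit-learn sketch on synthetic data, not the authors' code: it substitutes an L1-penalised logistic regression for LASSO as the classification-flavored selector, and every name, shape, and constant is an illustrative assumption:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic stand-in for radiomic features: 200 cases x 30 features,
# where only the first 3 features carry signal about treatment failure.
X = rng.normal(size=(200, 30))
y = (X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=200) > 0).astype(int)

# L1-penalised selector feeding a random forest, echoing the
# LASSO -> random-forest sequence described in the abstract.
model = make_pipeline(
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(X[:150], y[:150])
auc = roc_auc_score(y[150:], model.predict_proba(X[150:])[:, 1])
```

A real study would add the ICC ≥ 0.75 reliability filter before selection and report cross-validated, calibrated estimates rather than a single holdout AUC.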