Page 18 of 4064055 results

Brown adipose tissue machine learning nnU-Net V2 network using TriDFusion (3DF).

Lafontaine D, Chahwan S, Barraza G, Ucpinar BA, Kayal G, Gómez-Banoy N, Cohen P, Humm JL, Schöder H

PubMed · Aug 13, 2025
Recent advances in machine learning have revolutionized medical imaging. Currently, identifying brown adipose tissue (BAT) relies on manual identification and segmentation on fluorine-18 fluorodeoxyglucose positron emission tomography/computed tomography (¹⁸F-FDG PET/CT) scans. However, the process is time-consuming, especially for studies involving a large number of cases, and is subject to observer-dependent bias. The introduction of machine learning algorithms, such as the PET/CT algorithm implemented in the TriDFusion (3DF) Image Viewer, represents a significant advance in BAT detection. In the context of cancer care, artificial intelligence (AI)-driven BAT detection holds promise for rapid, automatic differentiation between malignant lesions and non-malignant BAT confounds. By leveraging machine learning to discern intricate patterns in imaging data, this study aims to advance the automation of BAT recognition and provide precise quantitative assessment of radiographic features. We used a semi-automatic, threshold-based 3DF workflow to segment 317 PET/CT scans containing BAT. To minimize manual edits, we defined exclusion zones via machine-learning-based CT organ segmentation and used those organ masks to assign each volume of interest (VOI) to its anatomical site. Three physicians then reviewed and corrected all segmentations using the 3DF contour panel. The final, edited masks were used to train an nnU-Net V2 model, which we subsequently applied to 118 independent PET/CT scans. Across all anatomical sites, physicians judged the network's automated segmentations to be approximately 90% accurate. Although nnU-Net V2 effectively identified BAT from PET/CT scans, training an AI model capable of perfect BAT segmentation remains a challenge due to factors such as PET/CT misregistration and the absence of visible BAT activity across contiguous slices.
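The threshold-based candidate step described above can be sketched as follows. The SUV and Hounsfield-unit cutoffs and the organ-exclusion logic are illustrative assumptions, not the exact 3DF implementation:

```python
# Minimal sketch of a threshold-based BAT candidate mask, assuming
# hypothetical cutoffs: FDG-avid (SUV >= 1.5) and fat-density on CT
# (HU in [-190, -30]), with organ-segmentation exclusion zones.

def bat_candidate_mask(suv, hu, organ_mask, suv_min=1.5, hu_range=(-190, -30)):
    """Flag voxels that are FDG-avid, fat-density on CT, and outside
    the organ exclusion zones (2D toy arrays for illustration)."""
    rows, cols = len(suv), len(suv[0])
    mask = [[False] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            fdg_avid = suv[i][j] >= suv_min
            fat_density = hu_range[0] <= hu[i][j] <= hu_range[1]
            mask[i][j] = fdg_avid and fat_density and not organ_mask[i][j]
    return mask

suv = [[0.4, 2.1], [3.0, 1.8]]
hu = [[-50, -120], [40, -80]]
organ = [[False, False], [False, True]]
print(bat_candidate_mask(suv, hu, organ))  # [[False, True], [False, False]]
```

In the workflow above this mask would only seed VOIs; the physicians' contour-panel edits remain the ground truth used for nnU-Net training.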

From Promise to Practical Reality: Transforming Diffusion MRI Analysis with Fast Deep Learning Enhancement

Xinyi Wang, Michael Barnett, Frederique Boonstra, Yael Barnett, Mariano Cabezas, Arkiev D'Souza, Matthew C. Kiernan, Kain Kyle, Meng Law, Lynette Masters, Zihao Tang, Stephen Tisch, Sicong Tu, Anneke Van Der Walt, Dongang Wang, Fernando Calamante, Weidong Cai, Chenyu Wang

arXiv preprint · Aug 13, 2025
Fiber orientation distribution (FOD) is an advanced diffusion MRI modeling technique that represents complex white matter fiber configurations and is a key step for subsequent brain tractography and connectome analysis. Its reliability and accuracy, however, depend heavily on the quality of the MRI acquisition and the subsequent estimation of the FODs at each voxel. Generating reliable FODs from widely available clinical protocols with single-shell, low-angular-resolution acquisitions remains challenging but could potentially be addressed with recent advances in deep learning-based enhancement techniques. Despite these advances, existing methods have predominantly been assessed on healthy subjects, which has proved to be a major hurdle for their clinical adoption. In this work, we validate a newly optimized enhancement framework, FastFOD-Net, across healthy controls and six neurological disorders. This accelerated end-to-end deep learning framework enhances FODs with superior performance and delivers the training/inference efficiency required for clinical use (60× faster than its predecessor). With the most comprehensive clinical evaluation to date, our work demonstrates the potential of FastFOD-Net to accelerate clinical neuroscience research, empower diffusion MRI analysis for disease differentiation, improve interpretability in connectome applications, and reduce measurement errors to lower sample size requirements. Critically, this work will facilitate the more widespread adoption of, and build clinical trust in, deep learning-based methods for diffusion MRI enhancement. Specifically, FastFOD-Net enables robust analysis of real-world clinical diffusion MRI data, comparable to that achievable with high-quality research acquisitions.

MedAtlas: Evaluating LLMs for Multi-Round, Multi-Task Medical Reasoning Across Diverse Imaging Modalities and Clinical Text

Ronghao Xu, Zhen Huang, Yangbo Wei, Xiaoqian Zhou, Zikang Xu, Ting Liu, Zihang Jiang, S. Kevin Zhou

arXiv preprint · Aug 13, 2025
Artificial intelligence has demonstrated significant potential in clinical decision-making; however, developing models capable of adapting to diverse real-world scenarios and performing complex diagnostic reasoning remains a major challenge. Existing medical multi-modal benchmarks are typically limited to single-image, single-turn tasks, lacking multi-modal medical image integration and failing to capture the longitudinal and multi-modal interactive nature inherent to clinical practice. To address this gap, we introduce MedAtlas, a novel benchmark framework designed to evaluate large language models on realistic medical reasoning tasks. MedAtlas is characterized by four key features: multi-turn dialogue, multi-modal medical image interaction, multi-task integration, and high clinical fidelity. It supports four core tasks: open-ended multi-turn question answering, closed-ended multi-turn question answering, multi-image joint reasoning, and comprehensive disease diagnosis. Each case is derived from real diagnostic workflows and incorporates temporal interactions between textual medical histories and multiple imaging modalities, including CT, MRI, PET, ultrasound, and X-ray, requiring models to perform deep integrative reasoning across images and clinical texts. MedAtlas provides expert-annotated gold standards for all tasks. Furthermore, we propose two novel evaluation metrics: Round Chain Accuracy and Error Propagation Resistance. Benchmark results with existing multi-modal models reveal substantial performance gaps in multi-stage clinical reasoning. MedAtlas establishes a challenging evaluation platform to advance the development of robust and trustworthy medical AI.
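The two proposed metrics are named above but not formally defined in this abstract. A minimal sketch under one plausible interpretation (a chain counts only if every round is correct; resistance measures how often a model recovers after its first error) might look like:

```python
def round_chain_accuracy(cases):
    """Fraction of multi-round cases answered correctly at *every* round.
    `cases` is a list of per-round correctness lists (hypothetical format)."""
    if not cases:
        return 0.0
    return sum(all(rounds) for rounds in cases) / len(cases)

def error_propagation_resistance(cases):
    """Of all rounds that follow a case's first error, the fraction the
    model still answers correctly (1.0 = errors never propagate)."""
    later, correct = 0, 0
    for rounds in cases:
        if False in rounds:
            tail = rounds[rounds.index(False) + 1:]
            later += len(tail)
            correct += sum(tail)
    return correct / later if later else 1.0

cases = [[True, True, True], [True, False, True], [True, True], [False, False, True]]
print(round_chain_accuracy(cases))          # 0.5: two of four chains fully correct
print(error_propagation_resistance(cases))  # 2/3 of post-error rounds recover
```

These definitions are one reading of the metric names; MedAtlas's actual formulations may weight rounds or errors differently.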

Analysis of the Compaction Behavior of Textile Reinforcements in Low-Resolution In-Situ CT Scans via Machine-Learning and Descriptor-Based Methods

Christian Düreth, Jan Condé-Wolter, Marek Danczak, Karsten Tittmann, Jörn Jaschinski, Andreas Hornig, Maik Gude

arXiv preprint · Aug 13, 2025
A detailed understanding of material structure across multiple scales is essential for predictive modeling of textile-reinforced composites. Nesting -- characterized by the interlocking of adjacent fabric layers through local interpenetration and misalignment of yarns -- plays a critical role in defining mechanical properties such as stiffness, permeability, and damage tolerance. This study presents a framework to quantify nesting behavior in dry textile reinforcements under compaction using low-resolution computed tomography (CT). In-situ compaction experiments were conducted on various stacking configurations, with CT scans acquired at 20.22 µm per voxel resolution. A tailored 3D-UNet enabled semantic segmentation of matrix, weft, and fill phases across compaction stages corresponding to fiber volume contents of 50-60%. The model achieved a minimum mean Intersection-over-Union of 0.822 and an F1 score of 0.902. Spatial structure was subsequently analyzed using the two-point correlation function S2, allowing for probabilistic extraction of average layer thickness and nesting degree. The results show strong agreement with micrograph-based validation. This methodology provides a robust approach for extracting key geometrical features from industrially relevant CT data and establishes a foundation for reverse modeling and descriptor-based structural analysis of composite preforms.
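The two-point correlation function S2 used above has a simple direct estimator: the probability that two pixels a distance r apart both belong to the phase of interest. The sketch below computes a naive 1D version along the horizontal axis of a tiny binary image; a real 3D analysis would typically use an FFT-accelerated estimator instead.

```python
def s2_horizontal(img, max_r):
    """Direct estimator of the two-point correlation S2(r) of a binary
    image along the horizontal axis: for each separation r, the fraction
    of in-bounds pixel pairs (j, j+r) where both pixels equal 1."""
    s2 = []
    for r in range(max_r + 1):
        hits = total = 0
        for row in img:
            for j in range(len(row) - r):
                total += 1
                if row[j] == 1 and row[j + r] == 1:
                    hits += 1
        s2.append(hits / total)
    return s2

img = [[1, 1, 0, 0], [0, 1, 1, 0]]
s2 = s2_horizontal(img, 2)
print(s2)  # S2(0) is the phase volume fraction; S2 decays with distance
```

S2(0) equals the volume fraction of the phase, which is why the curve's decay length carries the layer-thickness and nesting information the authors extract.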

The Role of Radiographic Knee Alignment in Knee Replacement Outcomes and Opportunities for Artificial Intelligence-Driven Assessment

Zhisen Hu, David S. Johnson, Aleksei Tiulpin, Timothy F. Cootes, Claudia Lindner

arXiv preprint · Aug 13, 2025
Prevalent knee osteoarthritis (OA) imposes a substantial burden on health systems, and no cure is available. Its ultimate treatment is total knee replacement (TKR). Complications from surgery and recovery are difficult to predict in advance, and numerous factors may affect them. Radiographic knee alignment is one of the key factors influencing TKR outcomes such as postoperative pain and function. Recently, artificial intelligence (AI) has been introduced to the automatic analysis of knee radiographs, for example, to automate knee alignment measurements. Existing review articles tend to focus on knee OA diagnosis and segmentation of bones or cartilage in MRI rather than exploring knee alignment biomarkers for TKR outcomes and their assessment. In this review, we first examine the current scoring protocols for evaluating TKR outcomes and potential knee alignment biomarkers associated with these outcomes. We then discuss existing AI-based approaches for generating knee alignment biomarkers from knee radiographs, and explore future directions for knee alignment assessment and TKR outcome prediction.
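As one concrete example of an automated alignment measurement, the hip-knee-ankle (HKA) angle can be computed from three landmark coordinates on a long-leg radiograph. The landmark-based formulation below is a generic sketch, not a method from the review:

```python
import math

def hka_angle(hip, knee, ankle):
    """Hip-knee-ankle (HKA) angle in degrees from 2D landmark
    coordinates: the angle at the knee between the femoral mechanical
    axis (hip -> knee) and the tibial mechanical axis (knee -> ankle).
    180 degrees corresponds to perfectly neutral alignment."""
    fx, fy = knee[0] - hip[0], knee[1] - hip[1]
    tx, ty = ankle[0] - knee[0], ankle[1] - knee[1]
    dot = fx * tx + fy * ty
    cross = fx * ty - fy * tx
    # Signed deviation of the tibial axis from the femoral axis.
    return 180.0 - math.degrees(math.atan2(cross, dot))

# Collinear landmarks -> neutral 180-degree alignment.
print(hka_angle((0, 0), (0, 100), (0, 200)))  # 180.0
```

AI pipelines of the kind the review surveys typically predict the three landmark points and then apply exactly this sort of geometric calculation.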

Using deep learning methods to shorten acquisition time in children's renal cortical imaging

Gan, C., Niu, P., Pan, B., Chen, X., Xu, L., Huang, K., Chen, H., Wang, Q., Ding, L., Yin, Y., Wu, S., Gong, N.-j.

medRxiv preprint · Aug 13, 2025
Purpose: This study evaluates the capability of diffusion-based generative models to reconstruct diagnostic-quality renal cortical images from reduced-acquisition-time pediatric 99mTc-DMSA scintigraphy. Materials and Methods: A prospective study was conducted on 99mTc-DMSA scintigraphic data from consecutive pediatric patients with suspected urinary tract infections (UTIs) acquired between November 2023 and October 2024. A diffusion model, SR3, was trained to reconstruct standard-quality images from simulated reduced-count data. Performance was benchmarked against U-Net, U2-Net, Restormer, and a Poisson-based variant of SR3 (PoissonSR3). Quantitative assessment employed peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), Fréchet inception distance (FID), and learned perceptual image patch similarity (LPIPS). Renal contrast and anatomic fidelity were assessed using the target-to-background ratio (TBR) and the Dice similarity coefficient, respectively. Wilcoxon signed-rank tests were used for statistical analysis. Results: The training cohort comprised 94 participants (mean age 5.16±3.90 years; 48 male) with corresponding Poisson-downsampled images, while the test cohort included 36 patients (mean age 6.39±3.16 years; 14 male). SR3 outperformed all comparison models, achieving the best PSNR (30.976±2.863, P<.001), SSIM (0.760±0.064, P<.001), FID (25.687±16.223, P<.001), and LPIPS (0.055±0.022, P<.001). Furthermore, SR3 maintained excellent renal contrast (TBR: left kidney 7.333±2.176; right kidney 7.156±1.808) and anatomical consistency (Dice coefficient: left kidney 0.749±0.200; right kidney 0.745±0.176), representing significant improvements over the fast scan (all P<.001). While Restormer, U-Net, and PoissonSR3 showed statistically significant improvements across all metrics, U2-Net exhibited limited improvement, restricted to SSIM and left-kidney TBR (P<.001).
Conclusion: SR3 enables high-quality reconstruction of 99mTc-DMSA images from 4-fold accelerated acquisitions, demonstrating potential for a substantial reduction in imaging duration while preserving both diagnostic image quality and renal anatomical integrity.
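Reduced-count training inputs of the kind described above can be simulated by thinning detected counts. This sketch uses binomial thinning, which is statistically equivalent to Poisson downsampling for Poisson-distributed count data, with an assumed 4-fold reduction; it is an illustrative stand-in for the study's exact procedure:

```python
import random

def thin_counts(counts, keep_fraction=0.25, seed=0):
    """Simulate a reduced-acquisition-time scan by binomial thinning:
    each detected count is independently kept with probability
    `keep_fraction` (0.25 approximates a 4-fold faster acquisition).
    `counts` is a 2D list of per-pixel count integers."""
    rng = random.Random(seed)
    return [[sum(rng.random() < keep_fraction for _ in range(c))
             for c in row]
            for row in counts]

full = [[400, 120], [80, 0]]
fast = thin_counts(full)
print(fast)  # roughly one quarter of the original counts per pixel
```

Pairs of `full`/`fast` images generated this way are what a model like SR3 would train on, with `full` as the reconstruction target.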

Applications of artificial intelligence in liver cancer: A scoping review.

Chierici A, Lareyre F, Iannelli A, Salucki B, Goffart S, Guzzi L, Poggi E, Delingette H, Raffort J

PubMed · Aug 13, 2025
This review explores the application of Artificial Intelligence (AI) in managing primary liver cancer, focusing on recent advancements. AI, particularly machine learning (ML) and deep learning (DL), shows potential for improving screening, diagnosis, treatment planning, efficacy assessment, prognosis prediction, and follow-up, all crucial given the high mortality of liver cancer. A systematic search was conducted in the PubMed, Scopus, Embase, and Web of Science databases, focusing on original research published up to June 2024 on AI's clinical applications in liver cancer. Studies that were not relevant or lacked clinical evaluation were excluded. Of 13,122 screened articles, 62 were selected for full review. The studies highlight significant improvements in detecting hepatocellular carcinoma and intrahepatic cholangiocarcinoma through AI. DL models show high sensitivity and specificity, particularly in early detection. In diagnosis, AI models using CT and MRI data improve precision in distinguishing benign from malignant lesions through multimodal data integration. Recent AI models outperform earlier non-neural-network versions, though a gap remains between development and clinical implementation. Many models lack thorough clinical applicability assessments and external validation. AI integration in primary liver cancer management is promising but requires rigorous development and validation practices to fully realize its clinical benefits.

Exploring the robustness of TractOracle methods in RL-based tractography.

Levesque J, Théberge A, Descoteaux M, Jodoin PM

PubMed · Aug 13, 2025
Tractography algorithms leverage diffusion MRI to reconstruct the fibrous architecture of the brain's white matter. Among machine learning approaches, reinforcement learning (RL) has emerged as a promising framework for tractography, outperforming traditional methods in several key aspects. TractOracle-RL, a recent RL-based approach, reduces false positives by incorporating anatomical priors into the training process via a reward-based mechanism. In this paper, we investigate four extensions of the original TractOracle-RL framework by integrating recent advances in RL, and we evaluate their performance across five diverse diffusion MRI datasets. Results demonstrate that combining an oracle with the RL framework consistently leads to robust and reliable tractography, regardless of the specific method or dataset used. We also introduce a novel RL training scheme called Iterative Reward Training (IRT), inspired by the Reinforcement Learning from Human Feedback (RLHF) paradigm. Instead of relying on human input, IRT leverages bundle filtering methods to iteratively refine the oracle's guidance throughout training. Experimental results show that RL methods trained with oracle feedback significantly outperform widely used tractography techniques in terms of accuracy and anatomical validity.
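The IRT alternation (generate streamlines, label them with a bundle filter, refit the oracle) can be caricatured in a few lines. Everything below, from the single length feature to the linear oracle and the 40 mm filter threshold, is a hypothetical toy rather than the paper's actual networks:

```python
import random

def iterative_reward_training(filter_fn, n_iters=3, seed=0):
    """Toy sketch of the IRT loop: a bundle-filtering heuristic (not a
    human) labels the streamlines the policy proposes, and the oracle
    (here a 1-feature linear scorer on streamline length) is refit to
    those labels each iteration."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0  # oracle parameters
    for _ in range(n_iters):
        # The "policy" proposes candidate streamlines, summarized by length (mm).
        lengths = [rng.uniform(0.0, 200.0) for _ in range(200)]
        labels = [1.0 if filter_fn(x) else 0.0 for x in lengths]
        # Refit the oracle to the filter's labels (perceptron-style passes).
        for _ in range(25):
            for x, y in zip(lengths, labels):
                z = x / 200.0  # normalized length feature
                pred = 1.0 if w * z + b > 0 else 0.0
                w += 0.1 * (y - pred) * z
                b += 0.1 * (y - pred)
    return w, b

# Hypothetical filter: streamlines shorter than 40 mm are implausible.
w, b = iterative_reward_training(lambda length: length >= 40.0)
```

In the real framework the refit oracle then shapes the RL reward for the next round of tracking, closing the loop that RLHF closes with human preference data.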

Multi-organ AI Endophenotypes Chart the Heterogeneity of Pan-disease in the Brain, Eye, and Heart

Consortium, T. M., Boquet-Pujadas, A., Anagnostakis, F., Yang, Z., Tian, Y. E., Duggan, M., Erus, G., Srinivasan, D., Joynes, C., Bai, W., Patel, P., Walker, K. A., Zalesky, A., Davatzikos, C., Wen, J.

medRxiv preprint · Aug 13, 2025
Disease heterogeneity and commonality pose significant challenges to precision medicine, as traditional approaches frequently focus on single disease entities and overlook shared mechanisms across conditions [1]. Inspired by pan-cancer [2] and multi-organ research [3], we introduce the concept of "pan-disease" to investigate the heterogeneity and shared etiology of brain, eye, and heart diseases. Leveraging individual-level data from 129,340 participants, as well as summary-level data from the MULTI consortium, we applied a weakly-supervised deep learning model (Surreal-GAN [4,5]) to multi-organ imaging, genetic, proteomic, and RNA-seq data, identifying 11 AI-derived biomarkers, called Multi-organ AI Endophenotypes (MAEs), for the brain (Brain 1-6), eye (Eye 1-3), and heart (Heart 1-2), respectively. We found Brain 3 to be a risk factor for Alzheimer's disease (AD) progression and mortality, whereas Brain 5 was protective against AD progression. Crucially, in data from an anti-amyloid AD drug trial (solanezumab [6]), heterogeneity in cognitive decline trajectories was observed across treatment groups. At week 240, patients with lower Brain 1-3 expression had slower cognitive decline, whereas patients with higher expression had faster cognitive decline. A multi-layer causal pathway pinpointed Brain 1 as a mediational endophenotype [7] linking the FLRT2 protein to migraine, exemplifying novel therapeutic targets and pathways. Additionally, genes associated with Eye 1 and Eye 3 were enriched in cancer drug-related gene sets with causal links to specific cancer types and proteins. Finally, Heart 1 and Heart 2 had the highest mortality risk and unique medication history profiles, with Heart 1 showing favorable responses to antihypertensive medications and Heart 2 to digoxin treatment. The 11 MAEs provide novel AI dimensional representations for precision medicine and highlight the potential of AI-driven patient stratification for disease risk monitoring, clinical trials, and drug discovery.

Graph Neural Networks for Realistic Bleeding Prediction in Surgical Simulators.

Kakdas YC, De S, Demirel D

PubMed · Aug 12, 2025
This study presents a novel approach that uses graph neural networks to predict the risk of internal bleeding from vessel maps derived from patient CT and MRI scans, aimed at enhancing the realism of surgical simulators for emergency scenarios such as trauma, where rapid detection of internal bleeding can be lifesaving. First, medical images are segmented and converted into graph representations of the vasculature, where nodes represent vessel branching points with spatial coordinates and edges encode vessel features such as length and radius. Because no existing dataset directly labels bleeding risks, we calculate the bleeding probability for each vessel node using a physics-based heuristic: peripheral vascular resistance via the Hagen-Poiseuille equation. A graph attention network is then trained to regress these probabilities, effectively learning to predict hemorrhage risk from the graph-structured imaging data. The model is trained using tenfold cross-validation on a combined dataset of 1708 vessel graphs extracted from four public image datasets (MSD, KiTS, AbdomenCT, CT-ORG), with optimization via the Adam optimizer, mean squared error loss, early stopping, and L2 regularization. Our model achieves a mean R-squared of 0.86 in predicting bleeding risk, reaching up to 0.9188 in optimal configurations, with low mean training and validation losses of 0.0069 and 0.0074, respectively, and higher performance on well-connected vascular graphs. Finally, we integrate the trained model into an immersive virtual reality environment to simulate intra-abdominal bleeding scenarios for surgical training. The model demonstrates robust predictive performance despite the inherent sparsity of real-life datasets.
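The Hagen-Poiseuille heuristic is straightforward to compute per vessel segment: R = 8µL / (πr⁴). The sketch below, including the viscosity value, the mm units, and the mapping from inverse resistance to a normalized risk score, is an illustrative guess at how such labels might be derived, not the paper's exact formulation:

```python
import math

def poiseuille_resistance(length_mm, radius_mm, viscosity=3.5e-3):
    """Hagen-Poiseuille resistance R = 8*mu*L / (pi * r^4) in SI units.
    The blood viscosity (Pa*s) and mm-based inputs are assumptions."""
    L = length_mm * 1e-3   # convert mm -> m
    r = radius_mm * 1e-3
    return 8 * viscosity * L / (math.pi * r ** 4)

def bleeding_scores(segments):
    """Map each vessel segment (name, length_mm, radius_mm) to a relative
    risk score: lower resistance (wide, short vessels) -> higher potential
    flow -> higher assumed bleeding risk, normalized to sum to 1."""
    inv_r = {name: 1.0 / poiseuille_resistance(L, r) for name, L, r in segments}
    total = sum(inv_r.values())
    return {name: v / total for name, v in inv_r.items()}

segments = [("aorta", 50, 8.0), ("renal", 30, 2.5), ("branch", 10, 0.8)]
scores = bleeding_scores(segments)
print(scores)  # the wide aorta segment dominates the normalized risk
```

The r⁴ dependence is the key point: radius, not length, dominates the label, which is why edge radius is such an important graph feature for the attention network to learn from.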