
An optimized multi-task contrastive learning framework for HIFU lesion detection and segmentation.

Zavar M, Ghaffari HR, Tabatabaee H

PubMed · Aug 13, 2025
Accurate detection and segmentation of lesions induced by High-Intensity Focused Ultrasound (HIFU) in medical imaging remain significant challenges in automated disease diagnosis. Traditional methods heavily rely on labeled data, which is often scarce, expensive, and time-consuming to obtain. Moreover, existing approaches frequently struggle with variations in medical data and the limited availability of annotated datasets, leading to suboptimal performance. To address these challenges, this paper introduces an innovative framework called the Optimized Multi-Task Contrastive Learning Framework (OMCLF), which leverages self-supervised learning (SSL) and genetic algorithms (GA) to enhance HIFU lesion detection and segmentation. OMCLF integrates classification and segmentation into a unified model, utilizing a shared backbone to extract common features. The framework systematically optimizes feature representations, hyperparameters, and data augmentation strategies tailored for medical imaging, ensuring that critical information, such as lesion details, is preserved. By employing a genetic algorithm, OMCLF explores and optimizes augmentation techniques suitable for medical data, avoiding distortions that could compromise diagnostic accuracy. Experimental results demonstrate that OMCLF outperforms single-task methods in both classification and segmentation tasks while significantly reducing dependency on labeled data. Specifically, OMCLF achieves an accuracy of 93.3% in lesion detection and a Dice score of 92.5% in segmentation, surpassing state-of-the-art methods such as SimCLR and MoCo. The proposed approach achieves superior accuracy in identifying and delineating HIFU-induced lesions, marking a substantial advancement in medical image interpretation and automated diagnosis. OMCLF represents a significant step forward in the evolutionary optimization of self-supervised learning, with potential applications across various medical imaging domains.
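As a concrete illustration of the GA step described above, the sketch below runs a toy genetic search over augmentation strategies. The search space, fitness proxy, and all names (`SEARCH_SPACE`, `evaluate`, `genetic_search`) are illustrative assumptions, not the authors' implementation; in OMCLF the fitness would come from downstream classification/segmentation performance after SSL pretraining.

```python
import random

random.seed(0)

# Hypothetical, deliberately mild augmentation ranges: strong distortions
# could destroy lesion detail, which is what the GA is meant to avoid.
SEARCH_SPACE = {
    "rotation_deg":   [0, 5, 10, 15],
    "brightness":     [0.0, 0.05, 0.10],
    "gaussian_noise": [0.0, 0.01, 0.02],
}

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(ind, rate=0.3):
    child = dict(ind)
    for k, values in SEARCH_SPACE.items():
        if random.random() < rate:
            child[k] = random.choice(values)
    return child

def evaluate(aug):
    # Toy stand-in fitness so the sketch runs end to end; in OMCLF this
    # would be validation accuracy/Dice after contrastive pretraining.
    return -(aug["rotation_deg"] / 15 + aug["brightness"] * 10
             + aug["gaussian_noise"] * 50)

def genetic_search(pop_size=8, generations=5):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=evaluate, reverse=True)
        elite = ranked[: pop_size // 2]                    # selection
        population = elite + [mutate(random.choice(elite)) for _ in elite]
    return max(population, key=evaluate)

print(genetic_search())
```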

Explainable AI Technique in Lung Cancer Detection Using Convolutional Neural Networks

Nishan Rai, Sujan Khatri, Devendra Risal

arXiv preprint · Aug 13, 2025
Early detection of lung cancer is critical to improving survival outcomes. We present a deep learning framework for automated lung cancer screening from chest computed tomography (CT) images with integrated explainability. Using the IQ-OTH/NCCD dataset (1,197 scans across Normal, Benign, and Malignant classes), we evaluate a custom convolutional neural network (CNN) and three fine-tuned transfer learning backbones: DenseNet121, ResNet152, and VGG19. Models are trained with cost-sensitive learning to mitigate class imbalance and evaluated via accuracy, precision, recall, F1-score, and ROC-AUC. While ResNet152 achieved the highest accuracy (97.3%), DenseNet121 provided the best overall balance of precision, recall, and F1-score (up to 92%, 90%, and 91%, respectively). We further apply Shapley Additive Explanations (SHAP) to visualize the evidence contributing to predictions, improving clinical transparency. Results indicate that CNN-based approaches augmented with explainability can provide fast, accurate, and interpretable support for lung cancer screening, particularly in resource-limited settings.
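The SHAP step above can be reproduced in outline with the `shap` library's gradient explainer for Keras models; the model path and image arrays below are hypothetical placeholders, not the authors' artifacts.

```python
import numpy as np
import shap
import tensorflow as tf

# Hypothetical artifacts: a trained 3-class CT classifier and image arrays
# of shape (N, H, W, C), scaled the same way as during training.
model = tf.keras.models.load_model("lung_cnn.h5")
x_train = np.load("train_images.npy")
x_test = np.load("test_images.npy")

# A modest background sample keeps the explainer tractable.
background = x_train[np.random.choice(len(x_train), 100, replace=False)]
explainer = shap.GradientExplainer(model, background)

# Pixel-level attributions for a few scans, one attribution map per class;
# shap.image_plot overlays them (red = evidence toward that class).
shap_values = explainer.shap_values(x_test[:4])
shap.image_plot(shap_values, x_test[:4])
```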

Deep Learning Enables Large-Scale Shape and Appearance Modeling in Total-Body DXA Imaging

Arianna Bunnell, Devon Cataldi, Yannik Glaser, Thomas K. Wolfgruber, Steven Heymsfield, Alan B. Zonderman, Thomas L. Kelly, Peter Sadowski, John A. Shepherd

arXiv preprint · Aug 13, 2025
Total-body dual X-ray absorptiometry (TBDXA) imaging is a relatively low-cost whole-body imaging modality, widely used for body composition assessment. We develop and validate a deep learning method for automatic fiducial point placement on TBDXA scans using 1,683 manually-annotated TBDXA scans. The method achieves 99.5% percentage correct keypoints in an external testing dataset. To demonstrate the value for shape and appearance modeling (SAM), our method is used to place keypoints on 35,928 scans for five different TBDXA imaging modes, then associations with health markers are tested in two cohorts not used for SAM model generation using two-sample Kolmogorov-Smirnov tests. SAM feature distributions associated with health biomarkers are shown to corroborate existing evidence and generate new hypotheses on body composition and shape's relationship to various frailty, metabolic, inflammation, and cardiometabolic health markers. Evaluation scripts, model weights, automatic point file generation code, and triangulation files are available at https://github.com/hawaii-ai/dxa-pointplacement.
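For readers unfamiliar with the 99.5% figure, a minimal sketch of a percentage-correct-keypoints (PCK) computation follows; the pixel tolerance is an assumption, since this abstract does not state the exact threshold used.

```python
import numpy as np

def pck(pred, gt, threshold_px=10.0):
    """pred, gt: (N, K, 2) arrays of predicted vs. annotated (x, y) fiducials.
    A keypoint is 'correct' if it lies within threshold_px of the annotation."""
    dists = np.linalg.norm(pred - gt, axis=-1)   # (N, K) Euclidean errors
    return float((dists <= threshold_px).mean())

# e.g. pck(model_points, annotator_points) -> 0.995 for the external test set
```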

Ultrasonic Texture Analysis for Predicting Acute Myocardial Infarction.

Jamthikar AD, Hathaway QA, Maganti K, Hamirani Y, Bokhari S, Yanamala N, Sengupta PP

PubMed · Aug 13, 2025
Acute myocardial infarction (MI) alters cardiomyocyte geometry and architecture, leading to changes in the acoustic properties of the myocardium. This study examines ultrasomics, a novel cardiac ultrasound-based radiomics technique that extracts high-throughput pixel-level information from images, for identifying ultrasonic texture and morphologic changes associated with infarcted myocardium. We included 684 participants from multisource data: a) a retrospective single-center matched case-control dataset, b) a prospective multicenter matched clinical trial dataset, and c) an open-source international and multivendor dataset. Handcrafted and deep transfer learning-based ultrasomics features from 2- and 4-chamber echocardiographic views were used to train machine learning (ML) models, with leave-one-source-out cross-validation for external validation. The ML model showed a higher AUC than transfer learning-based deep features in identifying MI (AUC: 0.87 [95% CI: 0.84-0.89] vs 0.74 [95% CI: 0.70-0.77]; P < 0.0001). ML probability was an independent predictor of MI even after adjusting for conventional echocardiographic parameters (adjusted OR: 1.03 [95% CI: 1.01-1.05]; P < 0.0001). ML probability showed diagnostic value in differentiating acute MI, even in the presence of myocardial dysfunction (averaged longitudinal strain [LS] <16%) (AUC: 0.84 [95% CI: 0.77-0.89]). In addition, combining averaged LS with ML probability significantly improved predictive performance compared with LS alone (AUC: 0.86 [95% CI: 0.80-0.91] vs 0.80 [95% CI: 0.72-0.87]; P = 0.02). Visualization of ultrasomics features with a Manhattan plot discriminated infarcted and noninfarcted segments (P < 0.001) and facilitated parametric visualization of infarcted myocardium. This study demonstrates the potential of cardiac ultrasomics to distinguish healthy from infarcted myocardium and highlights the need for validation in diverse populations to define its role and incremental value in myocardial tissue characterization beyond conventional echocardiography.
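The leave-one-source-out external validation described above maps directly onto scikit-learn's LeaveOneGroupOut splitter, sketched here with the data source as the group label; the feature files and the random-forest choice are assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

# Hypothetical inputs: (N, F) ultrasomics features, binary MI labels,
# and an integer per sample marking which of the three sources it came from.
X = np.load("ultrasomics_features.npy")
y = np.load("mi_labels.npy")
groups = np.load("source_ids.npy")

aucs = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    prob = clf.predict_proba(X[test_idx])[:, 1]   # the "ML probability"
    aucs.append(roc_auc_score(y[test_idx], prob))

print("AUC per held-out source:", np.round(aucs, 3))
```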

Brown adipose tissue machine learning nnU-Net V2 network using TriDFusion (3DF).

Lafontaine D, Chahwan S, Barraza G, Ucpinar BA, Kayal G, Gómez-Banoy N, Cohen P, Humm JL, Schöder H

PubMed · Aug 13, 2025
Recent advances in machine learning have revolutionized medical imaging. Currently, identifying brown adipose tissue (BAT) relies on manual identification and segmentation on fluorine-18 fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) scans. However, the process is time-consuming, especially for studies involving a large number of cases, and is subject to bias due to observer dependency. The introduction of machine learning algorithms, such as the PET/CT algorithm implemented in the TriDFusion (3DF) Image Viewer, represents a significant advancement in BAT detection. In the context of cancer care, artificial intelligence (AI)-driven BAT detection holds immense promise for rapid and automatic differentiation between malignant lesions and non-malignant BAT confounds. By leveraging machine learning to discern intricate patterns in imaging data, this study aims to advance the automation of BAT recognition and provide precise quantitative assessment of radiographic features. We used a semi-automatic, threshold-based 3DF workflow to segment 317 PET/CT scans containing BAT. To minimize manual edits, we defined exclusion zones via machine-learning-based CT organ segmentation and used those organ masks to assign each volume of interest (VOI) to its anatomical site. Three physicians then reviewed and corrected all segmentations using the 3DF contour panel. The final, edited masks were used to train an nnU-Net V2 model, which we subsequently applied to 118 independent PET/CT scans. Across all anatomical sites, physicians judged the network's automated segmentations to be approximately 90% accurate. Although nnU-Net V2 effectively identified BAT from PET/CT scans, training an AI model capable of perfect BAT segmentation remains a challenge due to factors such as PET/CT misregistration and the absence of visible BAT activity across contiguous slices.
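A minimal sketch of the threshold-based candidate step, assuming common literature cutoffs (fat-range CT attenuation of roughly -190 to -10 HU and an SUV floor); the actual 3DF thresholds and organ-exclusion logic are not specified in this abstract.

```python
import numpy as np

def bat_candidate_mask(suv, hu, organ_exclusion,
                       suv_min=1.5, hu_range=(-190.0, -10.0)):
    """suv, hu: co-registered PET SUV and CT HU volumes of identical shape.
    organ_exclusion: boolean volume from CT organ segmentation (True = drop)."""
    metabolically_active = suv >= suv_min                    # FDG-avid voxels
    fat_density = (hu >= hu_range[0]) & (hu <= hu_range[1])  # adipose HU range
    return metabolically_active & fat_density & ~organ_exclusion
```

In the workflow above, masks like this seed the VOIs that physicians review in the 3DF contour panel before nnU-Net V2 training.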

MedAtlas: Evaluating LLMs for Multi-Round, Multi-Task Medical Reasoning Across Diverse Imaging Modalities and Clinical Text

Ronghao Xu, Zhen Huang, Yangbo Wei, Xiaoqian Zhou, Zikang Xu, Ting Liu, Zihang Jiang, S. Kevin Zhou

arXiv preprint · Aug 13, 2025
Artificial intelligence has demonstrated significant potential in clinical decision-making; however, developing models capable of adapting to diverse real-world scenarios and performing complex diagnostic reasoning remains a major challenge. Existing medical multi-modal benchmarks are typically limited to single-image, single-turn tasks, lacking multi-modal medical image integration and failing to capture the longitudinal and multi-modal interactive nature inherent to clinical practice. To address this gap, we introduce MedAtlas, a novel benchmark framework designed to evaluate large language models on realistic medical reasoning tasks. MedAtlas is characterized by four key features: multi-turn dialogue, multi-modal medical image interaction, multi-task integration, and high clinical fidelity. It supports four core tasks: open-ended multi-turn question answering, closed-ended multi-turn question answering, multi-image joint reasoning, and comprehensive disease diagnosis. Each case is derived from real diagnostic workflows and incorporates temporal interactions between textual medical histories and multiple imaging modalities, including CT, MRI, PET, ultrasound, and X-ray, requiring models to perform deep integrative reasoning across images and clinical texts. MedAtlas provides expert-annotated gold standards for all tasks. Furthermore, we propose two novel evaluation metrics: Round Chain Accuracy and Error Propagation Resistance. Benchmark results with existing multi-modal models reveal substantial performance gaps in multi-stage clinical reasoning. MedAtlas establishes a challenging evaluation platform to advance the development of robust and trustworthy medical AI.
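The abstract names Round Chain Accuracy without defining it, so the sketch below encodes one plausible reading: a dialogue counts toward round t only if every round up to t is answered correctly. Treat both the definition and the function name as assumptions.

```python
from typing import List

def round_chain_accuracy(per_round_correct: List[List[bool]], t: int) -> float:
    """per_round_correct[d][r] = whether round r of dialogue d was judged correct.
    Returns the share of dialogues whose first t+1 rounds are all correct."""
    chains = [all(rounds[: t + 1])
              for rounds in per_round_correct if len(rounds) > t]
    return sum(chains) / len(chains) if chains else 0.0

# e.g. round_chain_accuracy(results, t=2) -> fraction of dialogues still
# fully correct through their third round; a steep drop with t would signal
# weak resistance to error propagation.
```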

Analysis of the Compaction Behavior of Textile Reinforcements in Low-Resolution In-Situ CT Scans via Machine-Learning and Descriptor-Based Methods

Christian Düreth, Jan Condé-Wolter, Marek Danczak, Karsten Tittmann, Jörn Jaschinski, Andreas Hornig, Maik Gude

arXiv preprint · Aug 13, 2025
A detailed understanding of material structure across multiple scales is essential for predictive modeling of textile-reinforced composites. Nesting, characterized by the interlocking of adjacent fabric layers through local interpenetration and misalignment of yarns, plays a critical role in defining mechanical properties such as stiffness, permeability, and damage tolerance. This study presents a framework to quantify nesting behavior in dry textile reinforcements under compaction using low-resolution computed tomography (CT). In-situ compaction experiments were conducted on various stacking configurations, with CT scans acquired at 20.22 µm per voxel resolution. A tailored 3D-UNet enabled semantic segmentation of matrix, weft, and fill phases across compaction stages corresponding to fiber volume contents of 50-60%. The model achieved a minimum mean Intersection-over-Union of 0.822 and an F1 score of 0.902. Spatial structure was subsequently analyzed using the two-point correlation function S2, allowing for probabilistic extraction of average layer thickness and nesting degree. The results show strong agreement with micrograph-based validation. This methodology provides a robust approach for extracting key geometrical features from industrially relevant CT data and establishes a foundation for reverse modeling and descriptor-based structural analysis of composite preforms.
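For concreteness, S2 for a binary phase mask can be estimated with an FFT autocorrelation, as sketched below; the periodic-boundary estimator is an assumption, and the paper's exact implementation may differ.

```python
import numpy as np

def two_point_correlation(phase_mask):
    """phase_mask: boolean 3D array, True where a voxel belongs to one phase
    (e.g., weft). Returns S2 over all periodic separation vectors r, i.e. the
    probability that two points separated by r both lie in the phase."""
    f = np.fft.rfftn(phase_mask.astype(np.float64))
    autocorr = np.fft.irfftn(f * np.conj(f), s=phase_mask.shape)
    return autocorr / phase_mask.size   # S2 at r = 0 equals the volume fraction

# A 1D profile along the stacking axis, e.g. two_point_correlation(m)[0, 0, :],
# decays from the volume fraction toward its square; its characteristic decay
# length tracks average layer thickness, which is how S2 supports the
# layer-thickness and nesting-degree extraction described above.
```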

Using deep learning methods to shorten acquisition time in children's renal cortical imaging

Gan, C., Niu, P., Pan, B., Chen, X., Xu, L., Huang, K., Chen, H., Wang, Q., Ding, L., Yin, Y., Wu, S., Gong, N.-j.

medRxiv preprint · Aug 13, 2025
Purpose: This study evaluates the capability of diffusion-based generative models to reconstruct diagnostic-quality renal cortical images from reduced-acquisition-time pediatric 99mTc-DMSA scintigraphy. Materials and Methods: A prospective study was conducted on 99mTc-DMSA scintigraphic data from consecutive pediatric patients with suspected urinary tract infections (UTIs) acquired between November 2023 and October 2024. A diffusion model, SR3, was trained to reconstruct standard-quality images from simulated reduced-count data. Performance was benchmarked against U-Net, U2-Net, Restormer, and a Poisson-based variant of SR3 (PoissonSR3). Quantitative assessment employed peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), Fréchet inception distance (FID), and learned perceptual image patch similarity (LPIPS). Renal contrast and anatomic fidelity were assessed using the target-to-background ratio (TBR) and the Dice similarity coefficient, respectively. Wilcoxon signed-rank tests were used for statistical analysis. Results: The training cohort comprised 94 participants (mean age 5.16 ± 3.90 years; 48 male) with corresponding Poisson-downsampled images, while the test cohort included 36 patients (mean age 6.39 ± 3.16 years; 14 male). SR3 outperformed all models, achieving the best PSNR (30.976 ± 2.863, P < .001), SSIM (0.760 ± 0.064, P < .001), FID (25.687 ± 16.223, P < .001), and LPIPS (0.055 ± 0.022, P < .001). Further, SR3 maintained excellent renal contrast (TBR: left kidney 7.333 ± 2.176; right kidney 7.156 ± 1.808) and anatomical consistency (Dice coefficient: left kidney 0.749 ± 0.200; right kidney 0.745 ± 0.176), representing significant improvements over the fast scan (all P < .001). While Restormer, U-Net, and PoissonSR3 showed statistically significant improvements across all metrics, U2-Net exhibited limited improvement restricted to SSIM and left kidney TBR (P < .001). Conclusion: SR3 enables high-quality reconstruction of 99mTc-DMSA images from 4-fold accelerated acquisitions, demonstrating potential for substantial reduction in imaging duration while preserving both diagnostic image quality and renal anatomical integrity.
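The reported PSNR and SSIM values correspond to standard implementations such as scikit-image's, sketched below together with a Dice helper; the file names and the per-image evaluation loop are assumptions.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def dice(a, b):
    """Dice overlap between two boolean kidney masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Hypothetical arrays: (N, H, W) standard-count references and SR3 outputs.
standard = np.load("standard_count.npy")
restored = np.load("sr3_output.npy")

psnrs, ssims = [], []
for ref, out in zip(standard, restored):
    rng = ref.max() - ref.min()
    psnrs.append(peak_signal_noise_ratio(ref, out, data_range=rng))
    ssims.append(structural_similarity(ref, out, data_range=rng))

print(f"PSNR {np.mean(psnrs):.3f} ± {np.std(psnrs):.3f} dB, "
      f"SSIM {np.mean(ssims):.3f} ± {np.std(ssims):.3f}")
```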

Applications of artificial intelligence in liver cancer: A scoping review.

Chierici A, Lareyre F, Iannelli A, Salucki B, Goffart S, Guzzi L, Poggi E, Delingette H, Raffort J

PubMed · Aug 13, 2025
This review explores the application of Artificial Intelligence (AI) in managing primary liver cancer, focusing on recent advancements. AI, particularly machine learning (ML) and deep learning (DL), shows potential in improving screening, diagnosis, treatment planning, efficacy assessment, prognosis prediction, and follow-up: crucial elements given the high mortality of liver cancer. A systematic search was conducted in the PubMed, Scopus, Embase, and Web of Science databases, focusing on original research published until June 2024 on AI's clinical applications in liver cancer. Studies that were not relevant or lacked clinical evaluation were excluded. Out of 13,122 screened articles, 62 were selected for full review. The studies highlight significant improvements in detecting hepatocellular carcinoma and intrahepatic cholangiocarcinoma through AI. DL models show high sensitivity and specificity, particularly in early detection. In diagnosis, AI models using CT and MRI data improve precision in distinguishing benign from malignant lesions through multimodal data integration. Recent AI models outperform earlier non-neural-network versions, though a gap remains between development and clinical implementation. Many models lack thorough clinical applicability assessments and external validation. AI integration in primary liver cancer management is promising but requires rigorous development and validation practices to fully enhance clinical outcomes.

Exploring the robustness of TractOracle methods in RL-based tractography.

Levesque J, Théberge A, Descoteaux M, Jodoin PM

PubMed · Aug 13, 2025
Tractography algorithms leverage diffusion MRI to reconstruct the fibrous architecture of the brain's white matter. Among machine learning approaches, reinforcement learning (RL) has emerged as a promising framework for tractography, outperforming traditional methods in several key aspects. TractOracle-RL, a recent RL-based approach, reduces false positives by incorporating anatomical priors into the training process via a reward-based mechanism. In this paper, we investigate four extensions of the original TractOracle-RL framework by integrating recent advances in RL, and we evaluate their performance across five diverse diffusion MRI datasets. Results demonstrate that combining an oracle with the RL framework consistently leads to robust and reliable tractography, regardless of the specific method or dataset used. We also introduce a novel RL training scheme called Iterative Reward Training (IRT), inspired by the Reinforcement Learning from Human Feedback (RLHF) paradigm. Instead of relying on human input, IRT leverages bundle filtering methods to iteratively refine the oracle's guidance throughout training. Experimental results show that RL methods trained with oracle feedback significantly outperform widely used tractography techniques in terms of accuracy and anatomical validity.
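The IRT loop can be summarized schematically; everything below is a toy stand-in (scalar "streamline scores" instead of streamlines, oracle, and policy networks), and only the alternation of tracking, bundle filtering, oracle refitting, and policy update mirrors the scheme described above.

```python
import random

random.seed(0)

def track(agent_bias, n=200):
    """Toy 'tracking agent': emits plausibility scores around its current bias."""
    return [min(1.0, max(0.0, random.gauss(agent_bias, 0.2))) for _ in range(n)]

def bundle_filter(streamlines):
    """Stand-in for anatomical bundle filtering (the source of oracle labels)."""
    return [s for s in streamlines if s > 0.6]

agent_bias, oracle_threshold = 0.4, 0.5
for it in range(5):                                   # IRT alternation
    candidates = track(agent_bias)
    accepted = bundle_filter(candidates)              # filter -> pseudo-labels
    if accepted:                                      # refit the 'oracle'
        oracle_threshold = min(accepted)
    reward = sum(s >= oracle_threshold for s in candidates) / len(candidates)
    agent_bias = min(1.0, agent_bias + 0.1 * reward)  # 'policy update'
    print(f"iter {it}: oracle={oracle_threshold:.2f}, mean reward={reward:.2f}")
```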