Predicting clinical outcomes using 18F-FDG PET/CT-based radiomic features and machine learning algorithms in patients with esophageal cancer.

Mutevelizade G, Aydin N, Duran Can O, Teke O, Suner AF, Erdugan M, Sayit E

pubmed · Jun 4, 2025
This study evaluated the relationship between 18F-fluorodeoxyglucose PET/computed tomography (18F-FDG PET/CT) radiomic features and clinical parameters, including tumor localization, histopathological subtype, lymph node metastasis, mortality, and treatment response, in esophageal cancer (EC) patients undergoing chemoradiotherapy, as well as the predictive performance of various machine learning (ML) models. In this retrospective study, 39 patients with EC who underwent pretreatment 18F-FDG PET/CT and received concurrent chemoradiotherapy were analyzed. Texture features were extracted using LIFEx software. Logistic regression, Naive Bayes, random forest, extreme gradient boosting (XGB), and support vector machine classifiers were applied to predict clinical outcomes. Cox regression and Kaplan-Meier analyses were used to evaluate overall survival (OS), and the accuracy of the ML algorithms was quantified using the area under the receiver operating characteristic curve. Radiomic features showed significant associations with several clinical parameters. Lymph node metastasis, tumor localization, and treatment response emerged as predictors of OS. Among the ML models, XGB demonstrated the most consistent and highest predictive performance across clinical outcomes. Radiomic features extracted from 18F-FDG PET/CT, when combined with ML approaches, may aid in predicting treatment response and clinical outcomes in EC. Radiomic features demonstrated value in assessing tumor heterogeneity; however, clinical parameters retained stronger prognostic value for OS.
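
The abstract reports models but no code; as a rough, hedged sketch of the classification step, the snippet below trains one of the named classifiers (XGBoost) on tabular radiomic features and scores it with cross-validated ROC AUC. The CSV file and column names are hypothetical stand-ins for a LIFEx feature export, not the authors' pipeline.

```python
# Hypothetical LIFEx export: one row per patient, one column per texture feature.
import pandas as pd
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

df = pd.read_csv("lifex_radiomics.csv")                 # hypothetical file
X = df.drop(columns=["patient_id", "treatment_response"])
y = df["treatment_response"]                            # 1 = responder, 0 = non-responder

# With only 39 patients, cross-validated AUC is safer than a single held-out split.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"Mean ROC AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```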

Machine Learning to Automatically Differentiate Hypertrophic Cardiomyopathy, Cardiac Light Chain, and Cardiac Transthyretin Amyloidosis: A Multicenter CMR Study.

Weberling LD, Ochs A, Benovoy M, Aus dem Siepen F, Salatzki J, Giannitsis E, Duan C, Maresca K, Zhang Y, Möller J, Friedrich S, Schönland S, Meder B, Friedrich MG, Frey N, André F

pubmed · Jun 4, 2025
Cardiac amyloidosis is associated with poor outcomes and is caused by the interstitial deposition of misfolded proteins, typically ATTR (transthyretin) or AL (light chains). Although specific therapies exist for early disease stages, the diagnosis is often only established at an advanced stage. Cardiovascular magnetic resonance (CMR) is the gold standard for imaging suspected myocardial disease. However, differentiating cardiac amyloidosis from hypertrophic cardiomyopathy may be challenging, and a reliable method for image-based classification of amyloidosis subtypes is lacking. This study sought to investigate a CMR machine learning (ML) algorithm to identify and distinguish cardiac amyloidosis. This retrospective, multicenter, multivendor feasibility study included consecutive patients diagnosed with hypertrophic cardiomyopathy or AL/ATTR amyloidosis and healthy volunteers. Standard clinical information, semiautomated CMR imaging data, and qualitative CMR features were integrated into a trained ML algorithm. Four hundred participants (95 healthy, 94 hypertrophic cardiomyopathy, 95 AL, and 116 ATTR) from 56 institutions were included (269 men; aged 58.5 [48.4-69.4] years). A 3-stage ML screening cascade sequentially differentiated healthy volunteers from patients, then hypertrophic cardiomyopathy from amyloidosis, and then AL from ATTR. The ML algorithm resulted in accurate differentiation at each step (area under the curve, 1.0, 0.99, and 0.92, respectively). After reducing the included data to demographics and imaging data alone, the performance remained excellent (area under the curve, 0.99, 0.98, and 0.88, respectively), even after removing late gadolinium enhancement imaging data from the model (area under the curve, 1.0, 0.95, and 0.86, respectively). A trained ML model using semiautomated CMR imaging data and patient demographics can accurately identify cardiac amyloidosis and differentiate its subtypes.
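
As a minimal illustration of the 3-stage screening cascade described above, the sketch below routes one case through three fitted binary classifiers. The label encodings and the assumption of sklearn-style classifiers are illustrative only; the paper does not publish its model here.

```python
def classify(stage1, stage2, stage3, x):
    """Route one feature row (shape (1, n_features)) through the cascade.

    Each stage is any fitted binary classifier with a .predict() method;
    the 0/1 encodings below are assumptions, not taken from the paper.
    """
    if stage1.predict(x)[0] == 0:   # stage 1: healthy (0) vs. patient (1)
        return "healthy"
    if stage2.predict(x)[0] == 0:   # stage 2: HCM (0) vs. amyloidosis (1)
        return "HCM"
    return "AL" if stage3.predict(x)[0] == 0 else "ATTR"  # stage 3: AL vs. ATTR
```

Note that in such a cascade each downstream stage is typically trained only on the cases that reach it, which is what lets each binary decision stay focused.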

Long-Term Prognostic Implications of Thoracic Aortic Calcification on CT Using Artificial Intelligence-Based Quantification in a Screening Population: A Two-Center Study.

Lee JE, Kim NY, Kim YH, Kwon Y, Kim S, Han K, Suh YJ

pubmed · Jun 4, 2025
BACKGROUND. The importance of including thoracic aortic calcification (TAC), in addition to coronary artery calcification (CAC), in prognostic assessments has been difficult to determine, partly owing to the greater challenge of performing standardized TAC assessments. OBJECTIVE. The purpose of this study was to evaluate the long-term prognostic implications of TAC assessed using artificial intelligence (AI)-based quantification on routine chest CT in a screening population. METHODS. This retrospective study included 7404 asymptomatic individuals (median age, 53.9 years; 5875 men, 1529 women) who underwent nongated noncontrast chest CT as part of a national general health screening program at one of two centers from January 2007 to December 2014. A commercial AI program quantified TAC and CAC using Agatston scores, which were stratified into categories. Radiologists manually quantified TAC and CAC in 2567 examinations. The role of AI-based TAC categories in predicting major adverse cardiovascular events (MACE) and all-cause mortality (ACM), independent of AI-based CAC categories as well as clinical and laboratory variables, was assessed by multivariable Cox proportional hazards models using data from both centers, and by concordance statistics from prognostic models developed and tested using center 1 and center 2 data, respectively. RESULTS. AI-based and manual quantification showed excellent agreement for TAC and CAC (concordance correlation coefficient: 0.967 and 0.895, respectively). The median observation periods were 7.5 years for MACE (383 events in 5342 individuals) and 11.0 years for ACM (292 events in 7404 individuals). When adjusted for AI-based CAC categories along with clinical and laboratory variables, the risk for MACE was not independently associated with any AI-based TAC category; the risk of ACM was independently associated with an AI-based TAC score of 1001-3000 (HR = 2.14, p = .02) but not with other AI-based TAC categories. When the prognostic models were tested, the addition of AI-based TAC categories did not improve model fit relative to models containing clinical variables, laboratory variables, and AI-based CAC categories for MACE (concordance index [C-index] = 0.760 vs 0.760, p = .81) or ACM (C-index = 0.823 vs 0.830, p = .32). CONCLUSION. The addition of TAC to models containing CAC provided limited improvement in risk prediction in an asymptomatic screening population undergoing CT. CLINICAL IMPACT. AI-based quantification provides a standardized approach for better understanding the potential role of TAC as a predictive imaging biomarker.
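
A hedged sketch of the multivariable Cox step is below, using the lifelines library; the table layout, column names, and category encoding are assumptions for illustration, not the authors' analysis code.

```python
# Hypothetical per-subject table with follow-up time, event indicator, and
# categorical AI-based TAC/CAC scores plus clinical covariates.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("screening_cohort.csv")  # hypothetical file and columns
# One-hot encode calcification categories so each TAC stratum gets its own HR.
df = pd.get_dummies(df, columns=["tac_category", "cac_category"],
                    drop_first=True, dtype=float)

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="mace")
cph.print_summary()                       # per-category hazard ratios and p-values
print("C-index:", cph.concordance_index_)
```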

Synthetic multi-inversion time magnetic resonance images for visualization of subcortical structures

Savannah P. Hays, Lianrui Zuo, Anqi Feng, Yihao Liu, Blake E. Dewey, Jiachen Zhuo, Ellen M. Mowry, Scott D. Newsome, Jerry L. Prince, Aaron Carass

arxiv preprint · Jun 4, 2025
Purpose: Visualization of subcortical gray matter is essential in neuroscience and clinical practice, particularly for disease understanding and surgical planning. While multi-inversion time (multi-TI) T$_1$-weighted (T$_1$-w) magnetic resonance (MR) imaging improves visualization, it is rarely acquired in clinical settings. Approach: We present SyMTIC (Synthetic Multi-TI Contrasts), a deep learning method that generates synthetic multi-TI images using routinely acquired T$_1$-w, T$_2$-weighted (T$_2$-w), and FLAIR images. Our approach combines image translation via deep neural networks with imaging physics to estimate longitudinal relaxation time (T$_1$) and proton density (PD) maps. These maps are then used to compute multi-TI images with arbitrary inversion times. Results: SyMTIC was trained using paired MPRAGE and FGATIR images along with T$_2$-w and FLAIR images. It accurately synthesized multi-TI images from standard clinical inputs, achieving image quality comparable to that of explicitly acquired multi-TI data. The synthetic images, especially for TI values between 400 and 800 ms, enhanced visualization of subcortical structures and improved segmentation of thalamic nuclei. Conclusion: SyMTIC enables robust generation of high-quality multi-TI images from routine MR contrasts. It generalizes well to varied clinical datasets, including those with missing FLAIR images or unknown parameters, offering a practical solution for improving brain MR image visualization and analysis.
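
The physics step lends itself to a worked example: given estimated T$_1$ and PD maps, a multi-TI image follows directly from the inversion-recovery signal model. The sketch below assumes the idealized equation with full magnetization recovery between inversions, which may differ in detail from the MPRAGE/FGATIR-based model SyMTIC actually fits.

```python
import numpy as np

def synth_ti_image(pd_map, t1_map, ti_ms):
    """Magnitude inversion-recovery image: |S(TI)| = PD * |1 - 2*exp(-TI/T1)|.

    Assumes the ideal IR model with full recovery; pd_map and t1_map are
    voxelwise arrays, with T1 and TI both in milliseconds.
    """
    t1 = np.clip(t1_map, 1e-3, None)   # guard against division by zero
    return pd_map * np.abs(1.0 - 2.0 * np.exp(-ti_ms / t1))

# e.g. a sweep over the 400-800 ms range highlighted in the abstract:
# images = [synth_ti_image(pd_map, t1_map, ti) for ti in range(400, 801, 100)]
```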

A Comprehensive Study on Medical Image Segmentation using Deep Neural Networks

Loan Dao, Ngoc Quoc Ly

arxiv preprint · Jun 4, 2025
Over the past decade, Medical Image Segmentation (MIS) using Deep Neural Networks (DNNs) has achieved significant performance improvements and holds great promise for future developments. This paper presents a comprehensive study of MIS based on DNNs. Intelligent Vision Systems are often evaluated based on their output levels, such as Data, Information, Knowledge, Intelligence, and Wisdom (DIKIW), and the state-of-the-art solutions in MIS at these levels are the focus of this research. Additionally, Explainable Artificial Intelligence (XAI) has become an important research direction, as it aims to uncover the "black box" nature of previous DNN architectures to meet the requirements of transparency and ethics. The study emphasizes the importance of MIS in disease diagnosis and early detection, particularly for increasing the survival rate of cancer patients through timely diagnosis. XAI and early prediction are considered two important steps in the journey from "intelligence" to "wisdom." Additionally, the paper addresses existing challenges and proposes potential solutions to enhance the efficiency of implementing DNN-based MIS.

Recent Advances in Medical Image Classification

Loan Dao, Ngoc Quoc Ly

arxiv preprint · Jun 4, 2025
Medical image classification is crucial for diagnosis and treatment and has benefited significantly from advancements in artificial intelligence. This paper reviews recent progress in the field, focusing on three levels of solutions: basic, specific, and applied. It highlights advances in traditional methods using deep learning models such as Convolutional Neural Networks and Vision Transformers, as well as state-of-the-art approaches using Vision Language Models. These models tackle the issue of limited labeled data and can enhance and explain predictive results through Explainable Artificial Intelligence.

Average Calibration Losses for Reliable Uncertainty in Medical Image Segmentation

Theodore Barfoot, Luis C. Garcia-Peraza-Herrera, Samet Akcay, Ben Glocker, Tom Vercauteren

arxiv preprint · Jun 4, 2025
Deep neural networks for medical image segmentation are often overconfident, compromising both reliability and clinical utility. In this work, we propose differentiable formulations of marginal L1 Average Calibration Error (mL1-ACE) as an auxiliary loss that can be computed on a per-image basis. We compare both hard- and soft-binning approaches to directly improve pixel-wise calibration. Our experiments on four datasets (ACDC, AMOS, KiTS, BraTS) demonstrate that incorporating mL1-ACE significantly reduces calibration errors, particularly Average Calibration Error (ACE) and Maximum Calibration Error (MCE), while largely maintaining high Dice Similarity Coefficients (DSCs). We find that the soft-binned variant yields the greatest calibration improvements over the Dice plus cross-entropy loss baseline but often compromises segmentation performance, whereas the hard-binned variant maintains segmentation performance, albeit with weaker calibration improvement. To gain further insight into calibration performance and its variability across an imaging dataset, we introduce dataset reliability histograms, an aggregation of per-image reliability diagrams. The resulting analysis highlights improved alignment between predicted confidences and true accuracies. Overall, our approach not only enhances the trustworthiness of segmentation predictions but also shows potential for safer integration of deep learning methods into clinical workflows. We share our code here: https://github.com/cai4cai/Average-Calibration-Losses
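
For readers who want the flavor of the underlying metric, here is a minimal numpy sketch of hard-binned L1 average calibration error for a single image and class; the paper's differentiable soft-binned loss and the full mL1-ACE definition live in the linked repository.

```python
import numpy as np

def hard_binned_ace(probs, labels, n_bins=10):
    """Mean |accuracy - confidence| over non-empty confidence bins.

    probs: per-pixel predicted foreground probabilities in [0, 1].
    labels: matching binary ground-truth mask (0/1).
    """
    probs, labels = probs.ravel(), labels.ravel()
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_idx = np.clip(np.digitize(probs, edges) - 1, 0, n_bins - 1)
    errs = []
    for b in range(n_bins):
        mask = bin_idx == b
        if mask.any():
            errs.append(abs(labels[mask].mean() - probs[mask].mean()))
    return float(np.mean(errs))

# e.g. ace = hard_binned_ace(fg_probs, fg_mask.astype(float))
```

Unlike expected calibration error, ACE weights every non-empty bin equally rather than by pixel count, which is why it is sensitive to miscalibration in sparsely populated confidence ranges.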

Advancements in Artificial Intelligence Applications for Cardiovascular Disease Research

Yuanlin Mo, Haishan Huang, Bocheng Liang, Weibo Ma

arxiv preprint · Jun 4, 2025
Recent advancements in artificial intelligence (AI) have revolutionized cardiovascular medicine, particularly through integration with computed tomography (CT), magnetic resonance imaging (MRI), electrocardiography (ECG) and ultrasound (US). Deep learning architectures, including convolutional neural networks and generative adversarial networks, enable automated analysis of medical imaging and physiological signals, surpassing human capabilities in diagnostic accuracy and workflow efficiency. However, critical challenges persist, including the inability to validate input data accuracy, which may propagate diagnostic errors. This review highlights AI's transformative potential in precision diagnostics while underscoring the need for robust validation protocols to ensure clinical reliability. Future directions emphasize hybrid models integrating multimodal data and adaptive algorithms to refine personalized cardiovascular care.

Personalized MR-Informed Diffusion Models for 3D PET Image Reconstruction

George Webber, Alexander Hammers, Andrew P. King, Andrew J. Reader

arxiv preprint · Jun 4, 2025
Recent work has shown improved lesion detectability and flexibility to reconstruction hyperparameters (e.g. scanner geometry or dose level) when PET images are reconstructed by leveraging pre-trained diffusion models. Such methods train a diffusion model (without sinogram data) on high-quality, but still noisy, PET images. In this work, we propose a simple method for generating subject-specific PET images from a dataset of multi-subject PET-MR scans, synthesizing "pseudo-PET" images by transforming between different patients' anatomy using image registration. The images we synthesize retain information from the subject's MR scan, leading to higher resolution and the retention of anatomical features compared to the original set of PET images. With simulated and real [$^{18}$F]FDG datasets, we show that pre-training a personalized diffusion model with subject-specific "pseudo-PET" images improves reconstruction accuracy with low-count data. In particular, the method shows promise in combining information from a guidance MR scan without overly imposing anatomical features, demonstrating an improved trade-off between reconstructing PET-unique image features versus features present in both PET and MR. We believe this approach for generating and utilizing synthetic data has further applications to medical imaging tasks, particularly because patient-specific PET images can be generated without resorting to generative deep learning or large training datasets.
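
As a hedged sketch of the "pseudo-PET" synthesis step, the snippet below deformably registers a donor subject's MR to the target subject's MR and pushes the donor's PET through the resulting transform. SimpleITK's demons filter is a stand-in here; the abstract does not name the registration algorithm used.

```python
import SimpleITK as sitk

def pseudo_pet(target_mr, donor_mr, donor_pet):
    """Warp donor_pet into target_mr's anatomy via MR-to-MR registration."""
    fixed = sitk.Cast(target_mr, sitk.sitkFloat32)
    moving = sitk.Cast(donor_mr, sitk.sitkFloat32)
    # Rough intensity normalization helps the mono-modal demons metric.
    moving = sitk.HistogramMatching(moving, fixed)
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(100)
    disp = demons.Execute(fixed, moving)
    tx = sitk.DisplacementFieldTransform(sitk.Cast(disp, sitk.sitkVectorFloat64))
    # Apply the same deformation to the donor PET, resampled onto the target grid.
    return sitk.Resample(donor_pet, target_mr, tx, sitk.sitkLinear, 0.0)
```

Repeating this over many donors yields a subject-specific "pseudo-PET" training set anchored to the target subject's MR anatomy, which is the ingredient the personalized diffusion model is then pre-trained on.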

3D Quantification of Viral Transduction Efficiency in Living Human Retinal Organoids

Rogler, T. S., Salbaum, K. A., Brinkop, A. T., Sonntag, S. M., James, R., Shelton, E. R., Thielen, A., Rose, R., Babutzka, S., Klopstock, T., Michalakis, S., Serwane, F.

biorxiv preprint · Jun 4, 2025
The development of therapeutics builds on testing their efficiency in vitro. To optimize gene therapies, for example, fluorescent reporters expressed by treated cells are typically utilized as readouts. Traditionally, their global fluorescence signal has been used as an estimate of transduction efficiency. However, analysis of individual cells within a living 3D tissue remains a challenge. Readout on a single-cell level can be realized via fluorescence-based flow cytometry, at the cost of tissue dissociation and loss of spatial information. Complementarily, spatial information is accessible via immunofluorescence of fixed samples. Both approaches impede time-dependent studies on the delivery of the vector to the cells. Here, quantitative 3D characterization of viral transduction efficiencies in living retinal organoids is introduced. The approach quantifies gene delivery efficiency in space and time, leveraging human retinal organoids, engineered adeno-associated virus (AAV) vectors, confocal live imaging, and deep learning-based image segmentation. The integration of these tools into an organoid imaging and analysis pipeline allows quantitative testing of future treatments and other gene delivery methods. It has the potential to guide the development of therapies in biomedical applications.
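
As an illustration of the single-cell readout such a pipeline enables, the sketch below computes a transduction efficiency from a 3D cell segmentation and the reporter channel; the thresholding rule and function names are assumptions for illustration, not the authors' exact analysis.

```python
import numpy as np
from scipy import ndimage

def transduction_efficiency(labels_3d, reporter_3d, thresh):
    """Fraction of segmented cells whose mean reporter intensity exceeds thresh.

    labels_3d: integer label volume (0 = background, one label per cell),
    e.g. the output of a deep learning segmentation of confocal stacks.
    reporter_3d: matching volume of reporter-channel intensities.
    """
    ids = np.unique(labels_3d)
    ids = ids[ids != 0]                                   # drop background
    means = ndimage.mean(reporter_3d, labels=labels_3d, index=ids)
    return float((np.asarray(means) > thresh).mean())
```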