
Deep learning-based post-hoc noise reduction improves quarter-radiation-dose coronary CT angiography.

Morikawa T, Nishii T, Tanabe Y, Yoshida K, Toshimori W, Fukuyama N, Toritani H, Suekuni H, Fukuda T, Kido T

PubMed · Jun 9 2025
To evaluate the impact of deep learning-based post-hoc noise reduction (DLNR) on image quality, Coronary Artery Disease Reporting and Data System (CAD-RADS) assessment, and diagnostic performance in quarter-dose versus full-dose coronary CT angiography (CCTA) on external datasets. We retrospectively reviewed 221 patients who underwent retrospective electrocardiogram-gated CCTA in 2022-2023. Using dose modulation, either mid-diastole or end-systole was scanned at full dose depending on heart rate, and the other phase at quarter dose. Only patients with motion-free coronaries in both phases were included. Images were acquired using iterative reconstruction, and a residual dense network trained on external datasets denoised the quarter-dose images. Image quality was assessed by comparing noise levels using Tukey's test. Two radiologists independently assessed CAD-RADS, with agreement with full-dose images evaluated by Cohen's kappa. Diagnostic performance for significant stenosis, with full-dose images as the reference standard, was compared between quarter-dose and denoised images by the area under the receiver operating characteristic curve (AUC) using the DeLong test. Among 40 cases (age, 71 ± 7 years; 24 males), DLNR reduced noise from 37 to 18 HU (P < 0.001) in quarter-dose CCTA (full-dose images: 22 HU) and improved CAD-RADS agreement from moderate (0.60 [95% CI: 0.41-0.78]) to excellent (0.82 [95% CI: 0.66-0.94]). Denoised images demonstrated a superior AUC (0.97 [95% CI: 0.95-1.00]) for diagnosing significant stenosis compared with the original quarter-dose images (0.93 [95% CI: 0.89-0.98]; P = 0.032). DLNR for quarter-dose CCTA significantly improved image quality, CAD-RADS agreement, and diagnostic performance for detecting significant stenosis relative to full-dose images.
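
For readers who want to reproduce the style of evaluation described above, the sketch below computes Cohen's kappa for CAD-RADS agreement and a ROC AUC for significant-stenosis detection on synthetic toy data; all numbers, sample sizes, and labels are invented, and the paired-AUC DeLong test used in the study is noted but not implemented.

```python
# Minimal sketch of the reported agreement and diagnostic-performance metrics
# on hypothetical toy data (not the study's data).
import numpy as np
from sklearn.metrics import cohen_kappa_score, roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical CAD-RADS categories (0-5) on full-dose vs. denoised images
cadrads_full = rng.integers(0, 6, size=40)
cadrads_denoised = np.clip(cadrads_full + rng.integers(-1, 2, size=40), 0, 5)

# Cohen's kappa quantifies agreement beyond chance (0.82 was "excellent" here)
kappa = cohen_kappa_score(cadrads_full, cadrads_denoised)

# Significant stenosis: full-dose reading as the reference standard
truth = rng.integers(0, 2, size=120)          # 0/1 reference labels
scores = truth * 0.7 + rng.random(120) * 0.5  # toy confidence scores
auc = roc_auc_score(truth, scores)

print(f"kappa={kappa:.2f}, AUC={auc:.2f}")
# Comparing two correlated AUCs (quarter-dose vs. denoised) requires the
# DeLong test, which is not in scikit-learn; third-party implementations exist.
```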

Dose to circulating blood in intensity-modulated total body irradiation, total marrow irradiation, and total marrow and lymphoid irradiation.

Guo B, Cherian S, Murphy ES, Sauter CS, Sobecks RM, Rotz S, Hanna R, Scott JG, Xia P

PubMed · Jun 8 2025
Multi-isocentric intensity-modulated (IM) total body irradiation (TBI), total marrow irradiation (TMI), and total marrow and lymphoid irradiation (TMLI) are gaining popularity. A question arises regarding the impact of the interplay between blood circulation and dynamic delivery on blood dose. This study addresses the question by introducing a new whole-body blood circulation modeling technique. A whole-body CT with intravenous contrast was used to develop the blood circulation model. Fifteen organs and tissues, the heart chambers, and the great vessels were segmented using deep-learning-based auto-contouring software. The main blood vessels were segmented using an in-house algorithm. Blood density, velocity, time-to-heart, and perfusion distributions were derived for the systemic, pulmonary, and portal circulations and used to simulate trajectories of blood particles during delivery. With the same prescription of 12 Gy in 8 fractions, doses to circulating blood were calculated for three plans: (1) an IM-TBI plan prescribing a uniform dose to the whole body while reducing lung and kidney doses; (2) a TMI plan treating all bones; and (3) a TMLI plan treating all bones, major lymph nodes, and the spleen. The TMI and TMLI plans were optimized to reduce doses to non-target tissue. Circulating blood received 1.57 ± 0.43 Gy, 1.04 ± 0.32 Gy, and 1.09 ± 0.32 Gy in one fraction, and 12.60 ± 1.21 Gy, 8.34 ± 0.88 Gy, and 8.71 ± 0.92 Gy in 8 fractions, in IM-TBI, TMI, and TMLI, respectively. The interplay effect of blood motion with IM delivery did not change the mean dose but did change the dose heterogeneity of the circulating blood. Fractionation reduced the blood dose heterogeneity. A novel whole-body blood circulation model was developed based on patient-specific anatomy and realistic blood dynamics, concentration, and perfusion. Using this model, we developed a dosimetry tool for circulating blood in IM-TBI, TMI, and TMLI.
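
The core idea, sampling blood-particle trajectories through a time-varying delivery and accumulating dose per particle, can be illustrated with a toy Monte Carlo sketch. Everything below (a 1D circulation loop, the dose rate, the speeds) is an invented stand-in for the paper's patient-specific model.

```python
# Conceptual sketch: accumulate dose to circulating blood by propagating
# particles through a moving "beam" during dynamic delivery. All numbers
# are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n_particles = 10_000          # simulated blood parcels
n_steps = 600                 # 1 s steps over a 10 min delivery
dt = 1.0                      # s

# 1D surrogate anatomy: particle position along a circulation loop [0, 1)
pos = rng.random(n_particles)
speed = rng.normal(1 / 60, 1 / 300, n_particles)  # ~one circulation per minute

dose = np.zeros(n_particles)
for t in range(n_steps):
    # Toy dynamic delivery: the beam irradiates a sliding window of the loop
    beam_lo = (t / n_steps) % 1.0
    in_beam = (pos > beam_lo) & (pos < beam_lo + 0.1)
    dose[in_beam] += 0.025 * dt          # hypothetical dose rate (Gy/s)
    pos = (pos + speed * dt) % 1.0       # advance particles around the loop

print(f"mean blood dose {dose.mean():.2f} Gy, SD {dose.std():.2f} Gy")
# The paper's model replaces this loop with patient-specific vessel/organ
# segmentations, measured velocities, and perfusion-weighted transit times.
```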

Bi-regional and bi-phasic automated machine learning radiomics for defining metastasis to lesser curvature lymph node stations in gastric cancer.

Huang H, Wang S, Deng J, Ye Z, Li H, He B, Fang M, Zhang N, Liu J, Dong D, Liang H, Li G, Tian J, Hu Y

PubMed · Jun 8 2025
Lymph node metastasis (LNM) is the primary mode of metastasis in gastric cancer (GC), occurring frequently along the lesser curvature. This study aims to establish a radiomic model to predict the metastatic status of lesser-curvature lymph node stations in GC. We retrospectively collected data from 939 gastric cancer patients who underwent gastrectomy and D2 lymphadenectomy across two centers. Both the primary lesion and the lesser-curvature region were segmented as representative regions of interest (ROIs). The combination of bi-regional and bi-phasic CT imaging features was used to build a hybrid radiomic model to predict LNM in the lesser curvature, and the model was validated internally and externally. Further, the potential generalization ability of the hybrid model was investigated by predicting metastatic status in the supra-pancreatic area. The hybrid model yielded substantially higher performance than the single-region and single-phase models, with AUCs of 0.847 (95% CI, 0.770-0.924) and 0.833 (95% CI, 0.800-0.867) in the two independent test cohorts. Additionally, the hybrid model achieved AUCs ranging from 0.678 to 0.761 for predicting LNM in the supra-pancreatic area, indicating potential for generalization. The CT imaging features of the primary tumor and adjacent tissues are significantly associated with LNM, and the developed model showed strong diagnostic performance that could support individualized treatment of GC.
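
A minimal sketch of the hybrid-model idea follows: concatenate radiomic feature vectors from two ROIs and two contrast phases, then fit a classifier. The features are random placeholders and the classifier is a plain logistic regression, not the automated machine learning pipeline the study used.

```python
# Sketch of bi-regional, bi-phasic feature fusion with invented data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 400                                   # hypothetical patient count
f_tumor_art = rng.random((n, 50))         # tumor ROI, arterial phase
f_tumor_ven = rng.random((n, 50))         # tumor ROI, venous phase
f_curv_art = rng.random((n, 50))          # lesser-curvature ROI, arterial
f_curv_ven = rng.random((n, 50))          # lesser-curvature ROI, venous
y = rng.integers(0, 2, n)                 # LNM status labels

# Hybrid model: stack all four feature blocks into one design matrix
X = np.hstack([f_tumor_art, f_tumor_ven, f_curv_art, f_curv_ven])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```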

Deep learning-based prospective slice tracking for continuous catheter visualization during MRI-guided cardiac catheterization.

Neofytou AP, Kowalik G, Vidya Shankar R, Kunze K, Moon T, Mellor N, Neji R, Razavi R, Pushparajah K, Roujol S

PubMed · Jun 8 2025
This proof-of-concept study introduces a novel, deep learning-based, parameter-free, automatic slice-tracking technique for continuous catheter tracking and visualization during MR-guided cardiac catheterization. The proposed sequence includes Calibration and Runtime modes. Initially, Calibration mode identifies the catheter tip's three-dimensional coordinates using a fixed stack of contiguous slices. A U-Net architecture with a ResNet-34 encoder is used to identify the catheter tip location. Once the tip is identified, the sequence switches to Runtime mode, dynamically acquiring three contiguous slices automatically centered on the catheter tip. The catheter location is estimated from each Runtime stack using the same network and fed back to the sequence, enabling prospective slice tracking that keeps the catheter in the central slice. If the catheter remains unidentified over several dynamics, the sequence reverts to Calibration mode. This artificial intelligence (AI)-based approach was evaluated prospectively in a three-dimensional-printed heart phantom and in 3 patients undergoing MR-guided cardiac catheterization. The technique was also compared retrospectively, in 2 patients, with a previous non-AI automatic tracking method relying on operator-defined parameters. In the phantom study, the tracking framework achieved 100% accuracy/sensitivity/specificity in both modes. Across all patients, the average accuracy/sensitivity/specificity was 100 ± 0/100 ± 0/100 ± 0% (Calibration) and 98.4 ± 0.8/94.1 ± 2.9/100.0 ± 0.0% (Runtime). The parametric, non-AI technique and the proposed parameter-free AI-based framework yielded identical accuracy (100%) in Calibration mode and a similar accuracy range in Runtime mode (Patient 1: 97%-100%; Patient 2: 98%-100%). An AI-based prospective slice-tracking framework was developed for real-time, parameter-free, operator-independent, automatic tracking of gadolinium-filled balloon catheters. Its feasibility was successfully demonstrated in patients undergoing MRI-guided cardiac catheterization.
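
The described backbone, a U-Net with a ResNet-34 encoder producing a catheter-tip map, can be sketched with the third-party segmentation_models_pytorch package; the heatmap-to-coordinate step, the shapes, and the single-channel input are assumptions rather than the authors' implementation.

```python
# Sketch of the tip-localization step with an untrained network.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet34",      # ResNet-34 encoder, as in the paper
    encoder_weights=None,         # no pretrained download for this sketch
    in_channels=1,                # single-channel MR magnitude image
    classes=1,                    # one output map: catheter-tip likelihood
)
model.eval()

with torch.no_grad():
    img = torch.randn(3, 1, 192, 192)    # one Runtime stack of 3 slices
    heat = model(img).sigmoid()          # per-pixel tip probability
    flat = heat.view(3, -1)
    best_slice = int(flat.max(dim=1).values.argmax())
    idx = int(flat[best_slice].argmax())
    y, x = divmod(idx, heat.shape[-1])
    print(f"tip estimate: slice {best_slice}, pixel ({y}, {x})")
# The sequence would feed (slice, y, x) back to re-center acquisition so the
# tip stays in the central slice (prospective slice tracking).
```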

SMART MRS: A Simulated MEGA-PRESS ARTifacts toolbox for GABA-edited MRS.

Bugler H, Shamaei A, Souza R, Harris AD

PubMed · Jun 8 2025
To create a Python-based toolbox that simulates commonly occurring artifacts in single-voxel gamma-aminobutyric acid (GABA)-edited MRS data. The toolbox was designed to maximize user flexibility and contains artifact, applied, input/output (I/O), and support functions. The artifact functions can produce spurious echoes, eddy currents, nuisance peaks, line broadening, baseline contamination, linear frequency drifts, and frequency and phase shift artifacts. Applied functions combine or apply specific parameter values to produce recognizable effects such as lipid-peak and motion contamination. I/O and support functions provide additional functionality to accommodate different kinds of input data (MATLAB FID-A .mat files, NIfTI-MRS files), which vary by domain (time vs. frequency), MRS data type (e.g., edited vs. non-edited), and scale. To highlight the utility of the toolbox, a machine learning frequency-and-phase-correction model was trained on corrupted simulated data and validated on in vivo data. Data simulated with the toolbox complement existing resources for research applications, as demonstrated by training a frequency-and-phase-correction deep learning model that was then applied to in vivo data containing artifacts. Visual assessment also confirms that the simulated artifacts resemble those found in in vivo data. Our easy-to-install Python artifact simulation toolbox, SMART_MRS, is useful for enhancing the diversity and quality of existing simulated edited-MRS data and is complementary to existing MRS simulation software.
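
To give a flavor of what such artifact functions do, the sketch below injects a nuisance peak and a frequency shift into a toy GABA-edited spectrum using plain numpy; it deliberately does not guess at the SMART_MRS API, whose actual function names and signatures live in the toolbox itself.

```python
# Generic illustration of two simulated artifact types on a toy spectrum.
import numpy as np

n = 2048
ppm = np.linspace(5.0, 0.0, n)                 # chemical-shift axis

def lorentzian(ppm, center, fwhm, amp):
    """Simple Lorentzian lineshape."""
    return amp * (fwhm / 2) ** 2 / ((ppm - center) ** 2 + (fwhm / 2) ** 2)

# Toy "clean" difference spectrum: GABA peak near 3.0 ppm
clean = lorentzian(ppm, 3.0, 0.05, 1.0)

# Nuisance-peak artifact: an unwanted resonance (e.g., lipids near 1.3 ppm)
corrupted = clean + lorentzian(ppm, 1.3, 0.08, 0.6)

# Frequency-shift artifact: shift the whole spectrum by ~0.02 ppm
shift_pts = int(0.02 / (ppm[0] - ppm[-1]) * n)
corrupted = np.roll(corrupted, shift_pts)

print("max corrupted amplitude:", corrupted.max())
```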

A Narrative Review on Large AI Models in Lung Cancer Screening, Diagnosis, and Treatment Planning

Jiachen Zhong, Yiting Wang, Di Zhu, Ziwei Wang

arXiv preprint · Jun 8 2025
Lung cancer remains one of the most prevalent and fatal diseases worldwide, demanding accurate and timely diagnosis and treatment. Recent advancements in large AI models have significantly enhanced medical image understanding and clinical decision-making. This review systematically surveys the state-of-the-art in applying large AI models to lung cancer screening, diagnosis, prognosis, and treatment. We categorize existing models into modality-specific encoders, encoder-decoder frameworks, and joint encoder architectures, highlighting key examples such as CLIP, BLIP, Flamingo, BioViL-T, and GLoRIA. We further examine their performance in multimodal learning tasks using benchmark datasets like LIDC-IDRI, NLST, and MIMIC-CXR. Applications span pulmonary nodule detection, gene mutation prediction, multi-omics integration, and personalized treatment planning, with emerging evidence of clinical deployment and validation. Finally, we discuss current limitations in generalizability, interpretability, and regulatory compliance, proposing future directions for building scalable, explainable, and clinically integrated AI systems. Our review underscores the transformative potential of large AI models to personalize and optimize lung cancer care.
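
The joint image-text encoders this review categorizes (CLIP and its descendants) share one scoring recipe: embed an image and candidate texts into a common space and rank by cosine similarity. The hedged sketch below shows that recipe with the third-party open_clip package and general-domain weights; the medical models cited (e.g., GLoRIA, BioViL-T) differ in training data and architecture details, not in this basic pattern, and the image tensor here is a random stand-in for a preprocessed scan.

```python
# Zero-shot image-text scoring in the CLIP style (downloads weights).
import torch
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

labels = ["a CT slice with a pulmonary nodule", "a normal chest CT slice"]
with torch.no_grad():
    img = torch.randn(1, 3, 224, 224)          # stand-in preprocessed image
    img_emb = model.encode_image(img)
    txt_emb = model.encode_text(tokenizer(labels))
    # Normalize and rank candidate texts by cosine similarity
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    probs = (100.0 * img_emb @ txt_emb.T).softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```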

Simultaneous Segmentation of Ventricles and Normal/Abnormal White Matter Hyperintensities in Clinical MRI using Deep Learning

Mahdi Bashiri Bawil, Mousa Shamsi, Abolhassan Shakeri Bavil

arXiv preprint · Jun 8 2025
Multiple sclerosis (MS) diagnosis and monitoring rely heavily on accurate assessment of brain MRI biomarkers, particularly white matter hyperintensities (WMHs) and ventricular changes. Current segmentation approaches suffer from several limitations: they typically segment these structures independently despite their pathophysiological relationship, struggle to differentiate between normal and pathological hyperintensities, and are poorly optimized for anisotropic clinical MRI data. We propose a novel 2D pix2pix-based deep learning framework for simultaneous segmentation of ventricles and WMHs with the unique capability to distinguish between normal periventricular hyperintensities and pathological MS lesions. Our method was developed and validated on FLAIR MRI scans from 300 MS patients. Compared to established methods (SynthSeg, Atlas Matching, BIANCA, LST-LPA, LST-LGA, and WMH-SynthSeg), our approach achieved superior performance for both ventricle segmentation (Dice: 0.801 ± 0.025, HD95: 18.46 ± 7.1 mm) and WMH segmentation (Dice: 0.624 ± 0.061, precision: 0.755 ± 0.161). Furthermore, our method successfully differentiated between normal and abnormal hyperintensities with a Dice coefficient of 0.647. Notably, our approach demonstrated exceptional computational efficiency, completing end-to-end processing in approximately 4 seconds per case, up to 36 times faster than baseline methods, while maintaining minimal resource requirements. This combination of improved accuracy, clinically relevant differentiation capability, and computational efficiency addresses critical limitations in current neuroimaging analysis, potentially enabling integration into routine clinical workflows and enhancing MS diagnosis and monitoring.
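
The two headline metrics, Dice and HD95, are easy to compute from binary masks; a hedged numpy/scipy sketch on toy masks follows (this is generic metric code, not the authors' evaluation pipeline).

```python
# Dice and 95th-percentile Hausdorff distance on toy binary masks.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    """Dice overlap of two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a, b):
    """95th percentile of symmetric surface-to-surface distances (pixels)."""
    surf_a = a & ~binary_erosion(a)              # boundary voxels of a
    surf_b = b & ~binary_erosion(b)              # boundary voxels of b
    dist_to_a = distance_transform_edt(~surf_a)  # distance to a's surface
    dist_to_b = distance_transform_edt(~surf_b)  # distance to b's surface
    d = np.concatenate([dist_to_b[surf_a], dist_to_a[surf_b]])
    return np.percentile(d, 95)

# Toy example: two overlapping square "lesions"
a = np.zeros((64, 64), bool); a[20:40, 20:40] = True
b = np.zeros((64, 64), bool); b[22:42, 22:42] = True
print(f"Dice={dice(a, b):.3f}, HD95={hd95(a, b):.1f} px")
```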

Transfer Learning and Explainable AI for Brain Tumor Classification: A Study Using MRI Data from Bangladesh

Shuvashis Sarker

arXiv preprint · Jun 8 2025
Brain tumors, whether benign or malignant, pose considerable health risks, with malignant tumors being the more dangerous due to their swift and uncontrolled proliferation. Timely identification is crucial for improving patient outcomes, particularly in nations such as Bangladesh, where healthcare infrastructure is constrained. Manual MRI analysis is arduous and susceptible to inaccuracies, rendering it inefficient for prompt diagnosis. This research sought to tackle these problems by creating an automated brain tumor classification system using MRI data obtained from multiple hospitals in Bangladesh. Advanced deep learning models, including VGG16, VGG19, and ResNet50, were used to classify gliomas, meningiomas, and other brain tumors. Explainable AI (XAI) methodologies, such as Grad-CAM and Grad-CAM++, were employed to improve model interpretability by highlighting the critical areas in MRI scans that influenced the classification. VGG16 achieved the highest accuracy, attaining 99.17%. The integration of XAI enhanced the system's transparency and stability, rendering it more appropriate for clinical application in resource-limited environments such as Bangladesh. This study highlights the capability of deep learning models, in conjunction with explainable AI, to enhance brain tumor detection and classification in areas with restricted access to advanced medical technologies.
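
Grad-CAM itself is compact: weight the last convolutional feature maps by their pooled gradients with respect to a class score. Below is a minimal sketch on a torchvision VGG16 with random weights and a random input; the class index, preprocessing, and fine-tuned weights of the study are not reproduced.

```python
# Minimal Grad-CAM on VGG16's last convolutional layer.
import torch
from torchvision.models import vgg16

model = vgg16(weights=None)   # the study fine-tunes pretrained weights
model.eval()

acts, grads = {}, {}
layer = model.features[28]    # last conv layer of VGG16
layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)  # stand-in preprocessed MRI slice
score = model(x)[0, 1]           # logit of a hypothetical tumor class
score.backward()

w = grads["g"].mean(dim=(2, 3), keepdim=True)   # pooled channel importance
cam = torch.relu((w * acts["a"]).sum(dim=1))    # weighted activation map
cam = cam / (cam.max() + 1e-8)                  # normalize to [0, 1]
print(cam.shape)  # (1, 14, 14); upsampled onto the MRI slice in practice
```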

MRI-mediated intelligent multimodal imaging system: from artificial intelligence to clinical imaging diagnosis.

Li Y, Wang J, Pan X, Shan Y, Zhang J

PubMed · Jun 8 2025
MRI is a mature diagnostic method in clinical practice, favored by doctors and patients alike, yet it still faces difficult bottleneck problems. AI strategies such as multimodal imaging integration and machine learning are being used to build intelligent multimodal imaging systems on top of MRI data to address unmet clinical needs in a variety of medical settings. This review systematically discusses the development of MRI-guided multimodal imaging systems and the application of AI-integrated intelligent multimodal imaging systems to the early diagnosis of brain and cardiovascular diseases. The safe and effective deployment of AI in clinical diagnostic equipment can enhance early, accurate diagnosis and personalized patient care.

A review of multimodal fusion-based deep learning for Alzheimer's disease.

Zhang R, Sheng J, Zhang Q, Wang J, Wang B

PubMed · Jun 7 2025
Alzheimer's disease (AD) is one of the most prevalent neurodegenerative disorders worldwide, characterized by significant memory and cognitive decline in its later stages that severely impacts daily life. Consequently, early diagnosis and accurate assessment are crucial for delaying disease progression. In recent years, multimodal imaging has gained widespread adoption in AD diagnosis and research, particularly the combined use of Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). The complementary structural and metabolic information of these modalities offers a unique advantage for comprehensive disease understanding and precise diagnosis. With the rapid advancement of deep learning techniques, efficient fusion of multimodal MRI and PET data has emerged as a prominent research focus. This review systematically surveys the latest advancements in deep learning-based multimodal fusion of MRI and PET images for AD research, with a particular focus on studies published in the past five years (2021-2025). It first introduces the main sources of AD-related data, along with data preprocessing and feature extraction methods. It then summarizes performance metrics and multimodal fusion techniques. Next, it explores the application of various deep learning models and their variants to multimodal fusion tasks. Finally, it analyzes the key challenges currently facing the field, including data scarcity and imbalance and inter-institutional data heterogeneity, and discusses potential solutions and future research directions. This review aims to provide systematic guidance for researchers working on MRI and PET multimodal fusion, with the ultimate goal of advancing early AD diagnosis and intervention strategies.
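
Among the fusion strategies such reviews cover, late (feature-level) fusion is the simplest: encode each modality separately and concatenate the features before a shared classifier head. The PyTorch sketch below uses trivial stand-in encoders and invented shapes; real AD studies use 3D CNNs or transformers on registered volumes.

```python
# Late-fusion sketch for MRI + PET classification with placeholder encoders.
import torch
import torch.nn as nn

class LateFusionAD(nn.Module):
    def __init__(self, feat_dim=64, n_classes=3):  # e.g., CN / MCI / AD
        super().__init__()
        def encoder():  # tiny stand-in per-modality 3D encoder
            return nn.Sequential(
                nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, feat_dim),
            )
        self.mri_enc, self.pet_enc = encoder(), encoder()
        self.head = nn.Linear(2 * feat_dim, n_classes)

    def forward(self, mri, pet):
        # Concatenate per-modality features, then classify jointly
        z = torch.cat([self.mri_enc(mri), self.pet_enc(pet)], dim=1)
        return self.head(z)

model = LateFusionAD()
mri = torch.randn(2, 1, 32, 32, 32)   # toy structural MRI volumes
pet = torch.randn(2, 1, 32, 32, 32)   # toy FDG-PET volumes
print(model(mri, pet).shape)          # (2, 3) class logits
```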
