
optiGAN: A Deep Learning-Based Alternative to Optical Photon Tracking in Python-Based GATE (10+).

Mummaneni G, Trigila C, Krah N, Sarrut D, Roncali E

PubMed | Jun 9, 2025
Objective: To accelerate optical photon transport simulations in the GATE medical physics framework using a Generative Adversarial Network (GAN) while ensuring high modeling accuracy. Detailed optical Monte Carlo methods have traditionally been the gold standard for modeling photon interactions in detectors, but their high computational cost remains a challenge. This study explores the integration of optiGAN, a GAN model, into GATE 10, the new Python-based version of the GATE medical physics simulation framework released in November 2024.
Approach: The goal of optiGAN is to accelerate optical photon transport simulations while maintaining modeling accuracy. The optiGAN model, based on a GAN architecture, was integrated into GATE 10 as a computationally efficient alternative to traditional optical Monte Carlo simulations. To ensure consistency, optical photon transport modules were implemented in GATE 10 and validated against GATE v9.3 under identical simulation conditions. Subsequently, simulations using full Monte Carlo tracking in GATE 10 were compared to those using GATE 10-optiGAN.
Main results: Validation studies confirmed that GATE 10 produces results consistent with GATE v9.3. Simulations using GATE 10-optiGAN showed over 92% similarity to Monte Carlo-based GATE 10 results, based on the Jensen-Shannon distance across multiple photon transport parameters. optiGAN successfully captured multimodal distributions of photon position, direction, and energy at the photodetector face. Simulation time analysis revealed a reduction of approximately 50% in execution time with GATE 10-optiGAN compared to full Monte Carlo simulations.
Significance: The study confirms both the fidelity of optical photon transport modeling in GATE 10 and the effective integration of deep learning-based acceleration through optiGAN. This advancement enables large-scale, high-fidelity optical simulations with significantly reduced computational cost, supporting broader applications in medical imaging and detector design.
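For readers unfamiliar with the approach, the sketch below shows how a conditional GAN generator could replace per-photon tracking by sampling photon exit parameters in a single batch. It is a minimal PyTorch illustration, assuming a hypothetical latent size, conditioning input, and layer widths; it is not the published optiGAN architecture or the GATE 10 integration.

```python
# Illustrative sketch only: a conditional GAN generator that emits optical-photon
# exit parameters at the photodetector face, in the spirit of optiGAN.
import torch
import torch.nn as nn

class PhotonGenerator(nn.Module):
    def __init__(self, latent_dim=10, cond_dim=3, out_dim=6):
        super().__init__()
        # cond_dim: hypothetical conditioning, e.g. gamma interaction position in the crystal
        # out_dim: photon exit position (x, y), direction cosines (u, v, w), and energy
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def forward(self, z, cond):
        return self.net(torch.cat([z, cond], dim=1))

# One batched forward pass emits 10,000 photons, replacing per-photon Monte Carlo tracking.
gen = PhotonGenerator()
z = torch.randn(10_000, 10)
cond = torch.zeros(10_000, 3)   # hypothetical interaction position
photons = gen(z, cond)          # (10000, 6) sampled photon parameters
```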

Snap-and-tune: combining deep learning and test-time optimization for high-fidelity cardiovascular volumetric meshing

Daniel H. Pak, Shubh Thaker, Kyle Baylous, Xiaoran Zhang, Danny Bluestein, James S. Duncan

arXiv preprint | Jun 9, 2025
High-quality volumetric meshing from medical images is a key bottleneck for physics-based simulations in personalized medicine. For volumetric meshing of complex medical structures, recent studies have often utilized deep learning (DL)-based template deformation approaches to enable fast test-time generation with high spatial accuracy. However, these approaches still exhibit limitations, such as limited flexibility at high-curvature areas and unrealistic inter-part distances. In this study, we introduce a simple yet effective snap-and-tune strategy that sequentially applies DL and test-time optimization, which combines fast initial shape fitting with more detailed sample-specific mesh corrections. Our method provides significant improvements in both spatial accuracy and mesh quality, while being fully automated and requiring no additional training labels. Finally, we demonstrate the versatility and usefulness of our newly generated meshes via solid mechanics simulations in two different software platforms. Our code is available at https://github.com/danpak94/Deep-Cardiac-Volumetric-Mesh.
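The sketch below illustrates the general idea of the "tune" step: per-vertex offsets on a DL-predicted mesh are optimized at test time against a target surface. The chamfer loss, regularization weight, and point counts are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of test-time mesh refinement after a DL "snap" stage.
import torch

def chamfer(a, b):
    # symmetric nearest-neighbour distance between two point sets (N, 3) and (M, 3)
    d = torch.cdist(a, b)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

verts = torch.rand(500, 3)       # vertices predicted by the DL "snap" stage (placeholder)
target = torch.rand(2000, 3)     # points sampled from the image-derived surface (placeholder)
offsets = torch.zeros_like(verts, requires_grad=True)
opt = torch.optim.Adam([offsets], lr=1e-2)

for _ in range(200):
    opt.zero_grad()
    tuned = verts + offsets
    loss = chamfer(tuned, target) + 0.1 * offsets.pow(2).mean()  # data fit + smooth regularizer
    loss.backward()
    opt.step()
```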

Advancing respiratory disease diagnosis: A deep learning and vision transformer-based approach with a novel X-ray dataset.

Alghadhban A, Ramadan RA, Alazmi M

PubMed | Jun 9, 2025
With the increasing prevalence of respiratory diseases such as pneumonia and COVID-19, timely and accurate diagnosis is critical. This paper makes significant contributions to the field of respiratory disease classification by utilizing X-ray images and advanced machine learning techniques such as deep learning (DL) and Vision Transformers (ViT). First, the paper systematically reviews current diagnostic methodologies, analyzing recent advancements in DL and ViT techniques through a comprehensive analysis of review articles published between 2017 and 2024, excluding short reviews and overviews. The review not only analyzes existing knowledge but also identifies critical gaps in the field, notably the lack of comprehensive and diverse datasets for training machine learning models. To address such limitations, the paper extensively evaluates DL-based models on publicly available datasets, analyzing key performance metrics such as accuracy, precision, recall, and F1-score. Our evaluations reveal that current datasets are mostly limited to narrow subsets of pulmonary diseases, which can lead to challenges including overfitting, poor generalization, and reduced applicability of advanced machine learning techniques in real-world settings; DL and ViT models, for instance, require extensive data for effective learning. The primary contribution of this paper is therefore not only a review of the most recent articles and surveys on respiratory diseases and DL models, including ViT, but also the introduction of a novel, diverse dataset comprising 7867 X-ray images from 5263 patients across three local hospitals, covering 49 distinct pulmonary diseases. The dataset is expected to enhance DL and ViT model training and improve the generalization of those models in various real-world medical imaging scenarios. By addressing the data scarcity issue, this paper paves the way for more reliable and robust disease classification, improving clinical decision-making. Additionally, the article highlights critical challenges that still need to be addressed, such as dataset bias and variation in X-ray image quality, as well as the need for further clinical validation. Furthermore, the study underscores the critical role of DL in medical diagnosis and the necessity of comprehensive, well-annotated datasets to improve model robustness and clinical reliability. Through these contributions, the paper provides a foundation for future research on respiratory disease diagnosis using AI-driven methodologies. Although the paper aims to cover all work published between 2017 and 2024, this research has some limitations: foundational work published before 2017 falls outside the review period, and the rapid development of AI may make earlier methods less relevant.
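As a hedged illustration of the kind of ViT pipeline evaluated in such studies, the snippet below fine-tunes a torchvision ViT-B/16 for 49 pulmonary disease classes. The backbone, optimizer, and hyperparameters are assumptions; the paper does not specify this configuration.

```python
# Sketch: fine-tuning an ImageNet-pretrained ViT-B/16 on a 49-class chest X-ray dataset.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

model = vit_b_16(weights=ViT_B_16_Weights.DEFAULT)               # pretrained backbone
model.heads.head = nn.Linear(model.heads.head.in_features, 49)   # 49 pulmonary diseases

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """images: (B, 3, 224, 224) float tensor; labels: (B,) class indices."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```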

Deep learning-based post-hoc noise reduction improves quarter-radiation-dose coronary CT angiography.

Morikawa T, Nishii T, Tanabe Y, Yoshida K, Toshimori W, Fukuyama N, Toritani H, Suekuni H, Fukuda T, Kido T

PubMed | Jun 9, 2025
To evaluate the impact of deep learning-based post-hoc noise reduction (DLNR) on image quality, coronary artery disease reporting and data system (CAD-RADS) assessment, and diagnostic performance in quarter-dose versus full-dose coronary CT angiography (CCTA) on external datasets. We retrospectively reviewed 221 patients who underwent retrospective electrocardiogram-gated CCTA in 2022-2023. Using dose modulation, either mid-diastole or end-systole was scanned at full dose depending on heart rate, and the other phase at quarter dose. Only patients with motion-free coronaries in both phases were included. Images were acquired using iterative reconstruction, and a residual dense network trained on external datasets denoised the quarter-dose images. Image quality was assessed by comparing noise levels using Tukey's test. Two radiologists independently assessed CAD-RADS, with agreement with full-dose images evaluated by Cohen's kappa. Diagnostic performance for significant stenosis, with full-dose images as the reference, was compared between quarter-dose and denoised images by the area under the receiver operating characteristic curve (AUC) using the DeLong test. Among 40 cases (age, 71 ± 7 years; 24 males), DLNR reduced noise from 37 to 18 HU (P < 0.001) in quarter-dose CCTA (full-dose images: 22 HU) and improved CAD-RADS agreement from moderate (0.60 [95% CI: 0.41-0.78]) to excellent (0.82 [95% CI: 0.66-0.94]). Denoised images demonstrated a superior AUC (0.97 [95% CI: 0.95-1.00]) for diagnosing significant stenosis compared with the original quarter-dose images (0.93 [95% CI: 0.89-0.98]; P = 0.032). DLNR for quarter-dose CCTA significantly improved image quality, CAD-RADS agreement, and diagnostic performance for detecting significant stenosis, with full-dose images as the reference.
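For orientation, the sketch below shows one residual dense block, the building unit commonly used in residual dense networks for image denoising. Channel counts, growth rate, and depth are assumptions and do not reflect the trained DLNR model.

```python
# Sketch of a residual dense block (RDB) with dense connections and local residual learning.
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    def __init__(self, channels=64, growth=32, layers=4):
        super().__init__()
        self.convs = nn.ModuleList()
        for i in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True),
            ))
        self.fuse = nn.Conv2d(channels + layers * growth, channels, 1)  # local feature fusion

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))   # dense connections within the block
        return x + self.fuse(torch.cat(feats, dim=1))     # local residual learning

# A denoiser built from such blocks typically predicts the noise component, which is then
# subtracted from the quarter-dose image.
out = ResidualDenseBlock()(torch.rand(1, 64, 64, 64))     # (1, 64, 64, 64) feature map
```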

Dose to circulating blood in intensity-modulated total body irradiation, total marrow irradiation, and total marrow and lymphoid irradiation.

Guo B, Cherian S, Murphy ES, Sauter CS, Sobecks RM, Rotz S, Hanna R, Scott JG, Xia P

PubMed | Jun 8, 2025
Multi-isocentric intensity-modulated (IM) total body irradiation (TBI), total marrow irradiation (TMI), and total marrow and lymphoid irradiation (TMLI) are gaining popularity. A question arises regarding the impact of the interplay between blood circulation and dynamic delivery on blood dose. This study answers that question by introducing a new whole-body blood circulation modeling technique. A whole-body CT with intravenous contrast was used to develop the blood circulation model. Fifteen organs and tissues, heart chambers, and great vessels were segmented using deep-learning-based auto-contouring software. The main blood vessels were segmented using an in-house algorithm. Blood density, velocity, time-to-heart, and perfusion distributions were derived for systole, diastole, and portal circulations and used to simulate trajectories of blood particles during delivery. With the same prescription of 12 Gy in 8 fractions, doses to circulating blood were calculated for three plans: (1) an IM-TBI plan prescribing a uniform dose to the whole body while reducing lung and kidney doses; (2) a TMI plan treating all bones; and (3) a TMLI plan treating all bones, major lymph nodes, and the spleen; the TMI and TMLI plans were optimized to reduce doses to non-target tissue. Circulating blood received 1.57 ± 0.43 Gy, 1.04 ± 0.32 Gy, and 1.09 ± 0.32 Gy in one fraction and 12.60 ± 1.21 Gy, 8.34 ± 0.88 Gy, and 8.71 ± 0.92 Gy in 8 fractions in IM-TBI, TMI, and TMLI, respectively. The interplay effect of blood motion with IM delivery did not change the mean dose but changed the dose heterogeneity of the circulating blood. Fractionation reduced the blood dose heterogeneity. A novel whole-body blood circulation model was developed based on patient-specific anatomy and realistic blood dynamics, concentration, and perfusion. Using the blood circulation model, we developed a dosimetry tool for circulating blood in IM-TBI, TMI, and TMLI.
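A toy sketch of the blood-dose bookkeeping idea is shown below: blood particles advance along trajectories while a time-dependent dose-rate field is integrated along each path. The trajectory and dose-rate functions are placeholders, not the paper's patient-specific circulation model or treatment plans.

```python
# Toy illustration of accumulating dose to moving blood particles during dynamic delivery.
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps, dt = 10_000, 600, 0.1                 # 60 s of delivery in 0.1 s steps

positions = rng.uniform(0.0, 1.0, size=(n_particles, 3))    # placeholder start positions
velocities = rng.normal(0.0, 0.01, size=(n_particles, 3))   # placeholder circulation speeds
dose = np.zeros(n_particles)

def dose_rate(pos, t):
    # placeholder time-dependent dose-rate field of the dynamic delivery (arbitrary units)
    return 0.02 * np.exp(-np.sum((pos - 0.5) ** 2, axis=1)) * (1.0 + np.sin(t))

for step in range(n_steps):
    t = step * dt
    dose += dose_rate(positions, t) * dt                    # integrate dose along each trajectory
    positions = (positions + velocities * dt) % 1.0         # advance particles (toy circulation)

print(f"accumulated blood dose: {dose.mean():.3f} +/- {dose.std():.3f} (toy units)")
```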

Bi-regional and bi-phasic automated machine learning radiomics for defining metastasis to lesser curvature lymph node stations in gastric cancer.

Huang H, Wang S, Deng J, Ye Z, Li H, He B, Fang M, Zhang N, Liu J, Dong D, Liang H, Li G, Tian J, Hu Y

PubMed | Jun 8, 2025
Lymph node metastasis (LNM) is the primary metastatic mode in gastric cancer (GC), with frequent occurrence in the lesser curvature. This study aims to establish a radiomic model to predict the metastatic status of lesser-curvature lymph nodes in GC. We retrospectively collected data from 939 gastric cancer patients who underwent gastrectomy and D2 lymphadenectomy across two centers. Both the primary lesion and the lesser curvature region were segmented as representative regions of interest (ROIs). The combination of bi-regional and bi-phasic CT imaging features was used to build a hybrid radiomic model to predict LNM in the lesser curvature, and the model was validated internally and externally. Further, the potential generalization ability of the hybrid model was investigated by predicting metastasis status in the supra-pancreatic area. The hybrid model yielded substantially higher performance than the single-regional and single-phasic models, with AUCs of 0.847 (95% CI, 0.770-0.924) and 0.833 (95% CI, 0.800-0.867) in the two independent test cohorts. Additionally, the hybrid model achieved AUCs ranging from 0.678 to 0.761 in predicting LNM in the supra-pancreatic area, showing potential generalization performance. The CT imaging features of the primary tumor and adjacent tissues are significantly associated with LNM, and the developed model showed strong diagnostic performance, suggesting potential application in the individualized treatment of GC.
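As an illustration of how bi-regional, bi-phasic radiomics features might be fused into a single classifier, the sketch below concatenates features from two ROIs and two contrast phases and fits a sparse logistic regression with scikit-learn. The feature arrays, cohort size, and model choice are assumptions, not the authors' pipeline.

```python
# Sketch: fusing bi-regional and bi-phasic radiomics features into one hybrid classifier.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

n = 200                                              # hypothetical training cohort size
rng = np.random.default_rng(0)
feats = {                                            # placeholder radiomics features per ROI/phase
    ("tumour", "arterial"): rng.normal(size=(n, 100)),
    ("tumour", "venous"): rng.normal(size=(n, 100)),
    ("lesser_curvature", "arterial"): rng.normal(size=(n, 100)),
    ("lesser_curvature", "venous"): rng.normal(size=(n, 100)),
}
X = np.concatenate(list(feats.values()), axis=1)     # hybrid bi-regional, bi-phasic feature matrix
y = rng.integers(0, 2, size=n)                       # placeholder LNM status labels

model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),  # sparse feature selection
)
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```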

Deep learning-based prospective slice tracking for continuous catheter visualization during MRI-guided cardiac catheterization.

Neofytou AP, Kowalik G, Vidya Shankar R, Kunze K, Moon T, Mellor N, Neji R, Razavi R, Pushparajah K, Roujol S

PubMed | Jun 8, 2025
This proof-of-concept study introduces a novel, deep learning-based, parameter-free, automatic slice-tracking technique for continuous catheter tracking and visualization during MR-guided cardiac catheterization. The proposed sequence includes Calibration and Runtime modes. Initially, Calibration mode identifies the catheter tip's three-dimensional coordinates using a fixed stack of contiguous slices. A U-Net architecture with a ResNet-34 encoder is used to identify the catheter tip location. Once identified, the sequence then switches to Runtime mode, dynamically acquiring three contiguous slices automatically centered on the catheter tip. The catheter location is estimated from each Runtime stack using the same network and fed back to the sequence, enabling prospective slice tracking to keep the catheter in the central slice. If the catheter remains unidentified over several dynamics, the sequence reverts to Calibration mode. This artificial intelligence (AI)-based approach was evaluated prospectively in a three-dimensional-printed heart phantom and 3 patients undergoing MR-guided cardiac catheterization. This technique was also compared retrospectively, in 2 patients, with a previous non-AI automatic tracking method relying on operator-defined parameters. In the phantom study, the tracking framework achieved 100% accuracy/sensitivity/specificity in both modes. Across all patients, the average accuracy/sensitivity/specificity were 100 ± 0/100 ± 0/100 ± 0% (Calibration) and 98.4 ± 0.8/94.1 ± 2.9/100.0 ± 0.0% (Runtime). The parametric, non-AI technique and the proposed parameter-free AI-based framework yielded identical accuracy (100%) in Calibration mode and similar accuracy ranges in Runtime mode (Patients 1 and 2: 100%-97% and 100%-98%, respectively). An AI-based prospective slice-tracking framework was developed for real-time, parameter-free, operator-independent, automatic tracking of gadolinium-filled balloon catheters. Its feasibility was successfully demonstrated in patients undergoing MRI-guided cardiac catheterization.
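The sketch below shows the core of such a tracking loop: a U-Net with a ResNet-34 encoder predicts a catheter-tip probability map per slice, and the argmax coordinate is returned for recentering the next acquisition. Using segmentation_models_pytorch here is an assumption for brevity; the paper does not state which implementation was used.

```python
# Sketch of catheter-tip localization from a stack of slices with a ResNet-34 U-Net.
import numpy as np
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(encoder_name="resnet34", in_channels=1, classes=1)  # U-Net, ResNet-34 encoder
model.eval()

def locate_tip(slice_stack):
    """slice_stack: (n_slices, 1, H, W) float tensor of MR slices; H, W divisible by 32."""
    with torch.no_grad():
        prob = torch.sigmoid(model(slice_stack))                     # per-pixel tip probability
    s, _, r, c = np.unravel_index(int(prob.argmax()), tuple(prob.shape))
    return int(s), int(r), int(c)                                    # slice index, row, column

# The returned slice index and in-plane position would be fed back to the sequence to keep
# the catheter tip in the central slice of the next Runtime stack.
```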

SMART MRS: A Simulated MEGA-PRESS ARTifacts toolbox for GABA-edited MRS.

Bugler H, Shamaei A, Souza R, Harris AD

PubMed | Jun 8, 2025
To create a Python-based toolbox to simulate commonly occurring artifacts in single-voxel gamma-aminobutyric acid (GABA)-edited MRS data. The toolbox was designed to maximize user flexibility and contains artifact, applied, input/output (I/O), and support functions. The artifact functions can produce spurious echoes, eddy currents, nuisance peaks, line broadening, baseline contamination, linear frequency drifts, and frequency and phase shift artifacts. Applied functions combine or apply specific parameter values to produce recognizable effects such as lipid peak and motion contamination. I/O and support functions provide additional functionality to accommodate different kinds of input data (MATLAB FID-A .mat files, NIfTI-MRS files), which vary by domain (time vs. frequency), MRS data type (e.g., edited vs. non-edited), and scale. To highlight the utility of the toolbox, a frequency and phase correction machine learning model was trained on corrupted simulated data and validated on in vivo data. Data simulated with the toolbox are complementary for research applications, as demonstrated by training a frequency and phase correction deep learning model that is applied to in vivo data containing artifacts. Visual assessment also confirms the resemblance of simulated artifacts to artifacts found in in vivo data. Our easy-to-install Python artifact simulation toolbox, SMART_MRS, is useful for enhancing the diversity and quality of existing simulated edited-MRS data and is complementary to existing MRS simulation software.
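To make the frequency- and phase-shift artifact concrete, the NumPy snippet below applies a per-transient frequency offset and zero-order phase drift to a synthetic FID. This illustrates the underlying operation only; it is not the SMART_MRS API.

```python
# Illustration of frequency and phase shift artifacts applied to a synthetic FID.
import numpy as np

n, dwell = 2048, 1 / 2000.0                                 # points and dwell time (s)
t = np.arange(n) * dwell
fid = np.exp(-t / 0.1) * np.exp(2j * np.pi * 120 * t)       # toy single-resonance FID

def shift_fid(fid, freq_hz=0.0, phase_deg=0.0):
    """Apply a frequency offset (Hz) and zero-order phase shift (degrees) to an FID."""
    return fid * np.exp(2j * np.pi * freq_hz * t) * np.exp(1j * np.deg2rad(phase_deg))

# A transient-to-transient drift, e.g. +0.5 Hz and +1 degree per transient:
transients = np.stack([shift_fid(fid, 0.5 * k, 1.0 * k) for k in range(32)])
spectra = np.fft.fftshift(np.fft.fft(transients, axis=1), axes=1)   # drifting spectra
```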

A Narrative Review on Large AI Models in Lung Cancer Screening, Diagnosis, and Treatment Planning

Jiachen Zhong, Yiting Wang, Di Zhu, Ziwei Wang

arXiv preprint | Jun 8, 2025
Lung cancer remains one of the most prevalent and fatal diseases worldwide, demanding accurate and timely diagnosis and treatment. Recent advancements in large AI models have significantly enhanced medical image understanding and clinical decision-making. This review systematically surveys the state-of-the-art in applying large AI models to lung cancer screening, diagnosis, prognosis, and treatment. We categorize existing models into modality-specific encoders, encoder-decoder frameworks, and joint encoder architectures, highlighting key examples such as CLIP, BLIP, Flamingo, BioViL-T, and GLoRIA. We further examine their performance in multimodal learning tasks using benchmark datasets like LIDC-IDRI, NLST, and MIMIC-CXR. Applications span pulmonary nodule detection, gene mutation prediction, multi-omics integration, and personalized treatment planning, with emerging evidence of clinical deployment and validation. Finally, we discuss current limitations in generalizability, interpretability, and regulatory compliance, proposing future directions for building scalable, explainable, and clinically integrated AI systems. Our review underscores the transformative potential of large AI models to personalize and optimize lung cancer care.
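As a hedged example of the joint image-text encoder family surveyed (CLIP and its descendants), the snippet below runs zero-shot classification with a Hugging Face CLIP checkpoint. The checkpoint, prompts, and placeholder image are illustrative; the review does not prescribe this pipeline, and domain-specific models such as BioViL-T or GLoRIA would be preferred for radiology images.

```python
# Zero-shot prompt-based classification with a general-domain CLIP checkpoint.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224))                  # stand-in for a chest image slice
prompts = ["a scan with a pulmonary nodule", "a scan with no pulmonary nodule"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)
print(dict(zip(prompts, probs[0].tolist())))          # prompt-wise probabilities
```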

Simultaneous Segmentation of Ventricles and Normal/Abnormal White Matter Hyperintensities in Clinical MRI using Deep Learning

Mahdi Bashiri Bawil, Mousa Shamsi, Abolhassan Shakeri Bavil

arXiv preprint | Jun 8, 2025
Multiple sclerosis (MS) diagnosis and monitoring rely heavily on accurate assessment of brain MRI biomarkers, particularly white matter hyperintensities (WMHs) and ventricular changes. Current segmentation approaches suffer from several limitations: they typically segment these structures independently despite their pathophysiological relationship, struggle to differentiate between normal and pathological hyperintensities, and are poorly optimized for anisotropic clinical MRI data. We propose a novel 2D pix2pix-based deep learning framework for simultaneous segmentation of ventricles and WMHs with the unique capability to distinguish between normal periventricular hyperintensities and pathological MS lesions. Our method was developed and validated on FLAIR MRI scans from 300 MS patients. Compared to established methods (SynthSeg, Atlas Matching, BIANCA, LST-LPA, LST-LGA, and WMH-SynthSeg), our approach achieved superior performance for both ventricle segmentation (Dice: 0.801 ± 0.025, HD95: 18.46 ± 7.1 mm) and WMH segmentation (Dice: 0.624 ± 0.061, precision: 0.755 ± 0.161). Furthermore, our method successfully differentiated between normal and abnormal hyperintensities with a Dice coefficient of 0.647. Notably, our approach demonstrated exceptional computational efficiency, completing end-to-end processing in approximately 4 seconds per case, up to 36 times faster than baseline methods, while maintaining minimal resource requirements. This combination of improved accuracy, clinically relevant differentiation capability, and computational efficiency addresses critical limitations in current neuroimaging analysis, potentially enabling integration into routine clinical workflows and enhancing MS diagnosis and monitoring.
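A condensed sketch of the pix2pix-style generator idea is shown below: a 2D encoder-decoder with a skip connection maps a FLAIR slice to three output maps (ventricles, normal periventricular hyperintensities, MS lesions). Depth and channel counts are assumptions, far shallower than the authors' network, and the adversarial discriminator and losses are omitted.

```python
# Tiny pix2pix-style generator: FLAIR slice in, three per-structure probability maps out.
import torch
import torch.nn as nn

def down(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 4, 2, 1), nn.BatchNorm2d(c_out), nn.LeakyReLU(0.2))

def up(c_in, c_out):
    return nn.Sequential(nn.ConvTranspose2d(c_in, c_out, 4, 2, 1), nn.BatchNorm2d(c_out), nn.ReLU())

class TinyPix2PixG(nn.Module):
    def __init__(self):
        super().__init__()
        self.d1, self.d2 = down(1, 32), down(32, 64)
        self.u1, self.u2 = up(64, 32), up(64, 16)      # u2 sees 64 channels after skip concat
        self.out = nn.Conv2d(16, 3, 1)                 # ventricles, normal WMH, MS lesions

    def forward(self, x):
        e1 = self.d1(x)                                # (B, 32, H/2, W/2)
        e2 = self.d2(e1)                               # (B, 64, H/4, W/4)
        d1 = self.u1(e2)                               # (B, 32, H/2, W/2)
        d2 = self.u2(torch.cat([d1, e1], dim=1))       # skip connection, (B, 16, H, W)
        return torch.sigmoid(self.out(d2))

seg = TinyPix2PixG()(torch.rand(1, 1, 256, 256))       # (1, 3, 256, 256) per-structure maps
```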