
Attention-enhanced residual U-Net: lymph node segmentation method with bimodal MRI images.

Qiu J, Chen C, Li M, Hong J, Dong B, Xu S, Lin Y

PubMed paper · Jun 2, 2025
In medical images, lymph nodes (LNs) have fuzzy boundaries, diverse shapes and sizes, and structures similar to surrounding tissues. To automatically segment uterine LNs from sagittal magnetic resonance imaging (MRI) scans, we combined T2-weighted imaging (T2WI) and diffusion-weighted imaging (DWI) images and tested the final results in our proposed model. This study used a dataset of 158 MRI images from patients with FIGO-staged disease and pathologically confirmed LNs. To improve the robustness of the model, data augmentation was applied to expand the dataset. The training data were manually annotated by two experienced radiologists. The DWI and T2 images were fused and input into a U-Net, to which an efficient channel attention (ECA) module was added. Residual connections were added at the encoding-decoding stages, yielding the Efficient Residual U-Net (ERU-Net), which produced the final segmentation results, evaluated by the mean intersection-over-union (mIoU). The experimental results demonstrated that ERU-Net showed strong segmentation performance, significantly better than other segmentation networks: the mIoU reached 0.83, the average pixel accuracy was 0.91, the precision was 0.90, and the corresponding recall was 0.91. In this study, ERU-Net successfully achieved segmentation of LNs in uterine MRI images. Compared with other segmentation networks, ours delivered the best segmentation of uterine LNs, providing a valuable reference for doctors developing more effective and efficient treatment plans.
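For context on the headline metric: mIoU is simply the per-class intersection-over-union averaged over classes. A minimal NumPy sketch, with an illustrative two-class (background/LN) setup that is an assumption rather than the authors' code:

```python
import numpy as np

def miou(pred: np.ndarray, target: np.ndarray, num_classes: int = 2) -> float:
    """Mean IoU over classes; pred and target hold integer class labels."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union > 0:  # skip classes absent from both masks
            ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# Toy example on random 4x4 label maps
print(miou(np.random.randint(0, 2, (4, 4)), np.random.randint(0, 2, (4, 4))))
```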

Decision support using machine learning for predicting adequate bladder filling in prostate radiotherapy: a feasibility study.

Saiyo N, Assawanuwat K, Janthawanno P, Paduka S, Prempetch K, Chanphol T, Sakchatchawan B, Thongsawad S

PubMed paper · Jun 2, 2025
This study aimed to develop a model for predicting the bladder volume ratio between daily CBCT and planning CT to determine adequate bladder filling in patients undergoing external beam radiation therapy (EBRT) for prostate cancer. The model was trained using 465 datasets obtained from 34 prostate cancer patients. A total of 16 features were collected as input data, including basic patient information, patient health status, blood examination laboratory results, and specific radiation therapy information. The ratio of the bladder volume between daily CBCT (dCBCT) and planning CT (pCT) was used as the model response. The model was trained using a bootstrap aggregation (bagging) algorithm with two machine learning (ML) approaches: classification and regression. Model accuracy was validated on a further 93 datasets. For the regression approach, accuracy was evaluated using the root mean square error (RMSE) and mean absolute error (MAE); the classification approach was assessed using sensitivity, specificity, and accuracy scores. The ML model showed promising results in predicting the bladder volume ratio between dCBCT and pCT, with an RMSE of 0.244 and an MAE of 0.172 for the regression approach, and a sensitivity of 95.24%, specificity of 92.16%, and accuracy of 93.55% for the classification approach. The prediction model could help the radiological technologist determine whether the bladder is full before treatment, thereby reducing the need for repeat CBCT scans. HIGHLIGHTS: The bagging model demonstrates strong performance in predicting optimal bladder filling. The model achieves promising results with 95.24% sensitivity and 92.16% specificity. It supports therapists in assessing bladder fullness prior to treatment. It helps reduce the risk of requiring repeat CBCT scans.
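The pipeline described above maps cleanly onto standard tooling. A hedged scikit-learn sketch of the bagging setup and the reported metrics; the synthetic data, the 0.8 adequacy cutoff, and the ensemble size are assumptions for illustration, not the authors' choices:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier, BaggingRegressor
from sklearn.metrics import confusion_matrix, mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(465, 16))        # 465 training rows, 16 features
y = rng.uniform(0.2, 1.5, size=465)   # dCBCT/pCT bladder-volume ratio
X_val = rng.normal(size=(93, 16))     # 93 held-out validation rows
y_val = rng.uniform(0.2, 1.5, size=93)

# Regression approach: predict the ratio directly, report RMSE and MAE
reg = BaggingRegressor(n_estimators=100, random_state=0).fit(X, y)
pred = reg.predict(X_val)
rmse, mae = np.sqrt(mean_squared_error(y_val, pred)), mean_absolute_error(y_val, pred)

# Classification approach: label "adequate filling" via a cutoff (assumed 0.8)
clf = BaggingClassifier(n_estimators=100, random_state=0).fit(X, y >= 0.8)
tn, fp, fn, tp = confusion_matrix(y_val >= 0.8, clf.predict(X_val)).ravel()
print(rmse, mae, tp / (tp + fn), tn / (tn + fp))  # ..., sensitivity, specificity
```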

Current AI technologies in cancer diagnostics and treatment.

Tiwari A, Mishra S, Kuo TR

PubMed paper · Jun 2, 2025
Cancer continues to be a significant international health issue, demanding new methods for early detection, precise diagnosis, and personalized treatment. Artificial intelligence (AI) has rapidly become a groundbreaking component of modern oncology, offering sophisticated tools across the full range of cancer care. In this review, we systematically surveyed the current status of AI technologies used in cancer diagnosis and therapy. We discuss AI-facilitated imaging diagnostics across modalities such as computed tomography, magnetic resonance imaging, positron emission tomography, ultrasound, and digital pathology, highlighting the growing role of deep learning in detecting early-stage cancers. We also explore applications of AI in genomics and biomarker discovery, liquid biopsies, and non-invasive diagnostics. In therapeutic interventions, AI-based clinical decision support systems, individualized treatment planning, and AI-facilitated drug discovery are transforming precision cancer therapies. The review also evaluates the effects of AI on radiation therapy, robotic surgery, and patient management, including survival prediction, remote monitoring, and AI-facilitated clinical trials. Finally, we discuss important challenges such as data privacy, interpretability, and regulatory issues, and recommend future directions involving federated learning, synthetic biology, and quantum-boosted AI. This review highlights the groundbreaking potential of AI to revolutionize cancer care by making diagnostics, treatment, and patient management more precise, efficient, and personalized.

MobileTurkerNeXt: investigating the detection of Bankart and SLAP lesions using magnetic resonance images.

Gurger M, Esmez O, Key S, Hafeez-Baig A, Dogan S, Tuncer T

PubMed paper · Jun 2, 2025
The landscape of computer vision is predominantly shaped by two groundbreaking methodologies: transformers and convolutional neural networks (CNNs). In this study, we introduce an innovative mobile CNN architecture designed for orthopedic imaging that efficiently identifies both Bankart and SLAP lesions. Our approach involved the collection of two distinct magnetic resonance (MR) image datasets, with the primary goal of automating the detection of Bankart and SLAP lesions. A novel mobile CNN, dubbed MobileTurkerNeXt, forms the cornerstone of this research. The model, comprising roughly 1 million trainable parameters, unfolds across four principal stages: the stem, main, downsampling, and output phases. The stem phase incorporates three convolutional layers to initiate feature extraction. In the main phase, we introduce an innovative block drawing inspiration from the ConvNeXt, EfficientNet, and ResNet architectures. The downsampling phase uses patchify average pooling and pixel-wise convolution to effectively reduce spatial dimensions, while the output phase is engineered to yield the classification outcome. Our experimentation with MobileTurkerNeXt spanned three comparative scenarios: Bankart versus normal, SLAP versus normal, and a tripartite comparison of Bankart, SLAP, and normal cases. The model demonstrated exemplary performance, achieving test classification accuracies exceeding 96% across these scenarios. The empirical results underscore MobileTurkerNeXt's superior performance in differentiating among Bankart, SLAP, and normal conditions in orthopedic imaging, highlighting the potential of our proposed mobile CNN to advance diagnostic capabilities and contribute significantly to the field of medical image analysis.
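The downsampling phase is the most distinctive piece of the description. A minimal PyTorch sketch of patchify-style average pooling followed by a pixel-wise (1x1) convolution; channel counts and patch size are assumptions, not the published configuration:

```python
import torch
import torch.nn as nn

class Downsample(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, patch: int = 2):
        super().__init__()
        self.pool = nn.AvgPool2d(kernel_size=patch, stride=patch)  # patchify average pooling
        self.pw = nn.Conv2d(in_ch, out_ch, kernel_size=1)          # pixel-wise convolution

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pw(self.pool(x))

x = torch.randn(1, 64, 56, 56)
print(Downsample(64, 128)(x).shape)  # torch.Size([1, 128, 28, 28])
```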

Multicycle Dosimetric Behavior and Dose-Effect Relationships in [¹⁷⁷Lu]Lu-DOTATATE Peptide Receptor Radionuclide Therapy.

Kayal G, Roseland ME, Wang C, Fitzpatrick K, Mirando D, Suresh K, Wong KK, Dewaraja YK

PubMed paper · Jun 2, 2025
We investigated pharmacokinetics, dosimetric patterns, and absorbed dose (AD)-effect correlations in [¹⁷⁷Lu]Lu-DOTATATE peptide receptor radionuclide therapy (PRRT) for metastatic neuroendocrine tumors (NETs) to develop strategies for future personalized dosimetry-guided treatments. Methods: Patients treated with standard [¹⁷⁷Lu]Lu-DOTATATE PRRT were recruited for serial SPECT/CT imaging. Kidneys were segmented on CT using a deep learning algorithm, and tumors were segmented at each cycle using a SPECT gradient-based tool, guided by radiologist-defined contours on baseline CT/MRI. Dosimetry was performed using an automated workflow that included contour intensity-based SPECT-SPECT registration, generation of Monte Carlo dose-rate maps, and dose-rate fitting. Lesion-level response at first follow-up was evaluated using both radiologic (RECIST and modified RECIST) and [⁶⁸Ga]Ga-DOTATATE PET-based criteria. Kidney toxicity was evaluated based on the estimated glomerular filtration rate (eGFR) at 9 mo after PRRT. Results: Dosimetry was performed after cycle 1 in 30 patients and after all cycles in 22 of 30 patients who completed SPECT/CT imaging after each cycle. Median cumulative tumor (n = 78) AD was 2.2 Gy/GBq (range, 0.1-20.8 Gy/GBq), whereas median kidney AD was 0.44 Gy/GBq (range, 0.25-0.96 Gy/GBq). The tumor-to-kidney AD ratio decreased with each cycle (median, 6.4, 5.7, 4.7, and 3.9 for cycles 1-4) because of a decrease in tumor AD, while kidney AD remained relatively constant. Higher-grade (grade 2) and pancreatic NETs showed a significantly larger drop in AD with each cycle, as well as significantly lower AD and effective half-life (Teff), than did low-grade (grade 1) and small intestinal NETs, respectively. Teff remained relatively constant with each cycle for both tumors and kidneys. Kidney Teff and AD were significantly higher in patients with low eGFR than in those with high eGFR. Tumor AD was not significantly associated with response measures. There was no nephrotoxicity higher than grade 2; however, a significant negative association was found in univariate analyses between eGFR at 9 mo and AD to the kidney, which improved in a multivariable model that also adjusted for baseline eGFR (cycle 1 AD, P = 0.020, adjusted R² = 0.57; cumulative AD, P = 0.049, adjusted R² = 0.65). The association between percentage change in eGFR and AD to the kidney was also significant in univariate analysis and after adjusting for baseline eGFR (cycle 1 AD, P = 0.006, adjusted R² = 0.21; cumulative AD, P = 0.019, adjusted R² = 0.21). Conclusion: The dosimetric behavior we report over different cycles and for different NET subgroups can be considered when optimizing PRRT for individual patients. The models we present for the relationship between eGFR and AD have potential for clinical use in predicting renal function early in the treatment course. Furthermore, the reported pharmacokinetics for patient subgroups allow more appropriate selection of population parameters for use in protocols with fewer imaging time points, facilitating more widespread adoption of dosimetry.
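The dose-rate fitting step admits a compact illustration: fit a mono-exponential to serial dose-rate samples, read off Teff, and integrate analytically to a cumulative absorbed dose. The sample times, values, and the mono-exponential choice below are assumptions for illustration, not the study's workflow:

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([4.0, 24.0, 96.0, 168.0])      # hours post-injection (assumed)
d = np.array([0.050, 0.040, 0.016, 0.007])  # kidney dose rate in Gy/h (assumed)

mono = lambda t, d0, lam: d0 * np.exp(-lam * t)   # mono-exponential washout
(d0, lam), _ = curve_fit(mono, t, d, p0=(0.05, 0.01))

teff = np.log(2) / lam   # effective half-life, h
ad = d0 / lam            # integral of d0*exp(-lam*t) over [0, inf), Gy
print(f"Teff = {teff:.1f} h, absorbed dose = {ad:.2f} Gy")
```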

Tomographic Foundation Model -- FORCE: Flow-Oriented Reconstruction Conditioning Engine

Wenjun Xia, Chuang Niu, Ge Wang

arXiv preprint · Jun 2, 2025
Computed tomography (CT) is a major medical imaging modality. Clinical CT scenarios such as low-dose screening, sparse-view scanning, and metal implants often lead to severe noise and artifacts in reconstructed images, requiring improved reconstruction techniques. The introduction of deep learning has significantly advanced CT image reconstruction. However, obtaining paired training data remains challenging due to patient motion and other constraints. Although deep learning methods can still perform well with approximately paired data, they inherently carry the risk of hallucination due to data inconsistencies and model instability. In this paper, we integrate data fidelity with a state-of-the-art generative AI model, the Poisson flow generative model (PFGM) and its generalized version, PFGM++, and propose a novel CT framework: the Flow-Oriented Reconstruction Conditioning Engine (FORCE). In our experiments, the proposed method shows superior performance on various CT imaging tasks, outperforming existing unsupervised reconstruction approaches.
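The abstract leaves the conditioning mechanism abstract; the general pattern it gestures at is alternating a generative prior update with a gradient step toward agreement with the measured data. A schematic sketch of that pattern (not the FORCE algorithm itself; the operators and the denoiser are placeholders):

```python
import numpy as np

def fidelity_step(x, A, At, y, step=0.1):
    """One gradient step on 0.5 * ||A x - y||^2 (A: forward projector)."""
    return x - step * At(A(x) - y)

def sample(x, A, At, y, denoiser, n_steps=50):
    for k in range(n_steps):
        x = denoiser(x, k)               # generative prior update (placeholder)
        x = fidelity_step(x, A, At, y)   # enforce consistency with measurements
    return x

# Toy demo with identity operators and a no-op denoiser: x is pulled toward y
A = At = lambda v: v
print(sample(np.zeros(4), A, At, np.ones(4), lambda v, k: v, n_steps=20))
```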

Beyond Pixel Agreement: Large Language Models as Clinical Guardrails for Reliable Medical Image Segmentation

Jiaxi Sheng, Leyi Yu, Haoyue Li, Yifan Gao, Xin Gao

arXiv preprint · Jun 2, 2025
Evaluating AI-generated medical image segmentations for clinical acceptability poses a significant challenge, as traditional pixel-agreement metrics often fail to capture true diagnostic utility. This paper introduces the Hierarchical Clinical Reasoner (HCR), a novel framework that leverages large language models (LLMs) as clinical guardrails for reliable, zero-shot quality assessment. HCR employs a structured, multistage prompting strategy that guides LLMs through a detailed reasoning process, encompassing knowledge recall, visual feature analysis, anatomical inference, and clinical synthesis, to evaluate segmentations. We evaluated HCR on a diverse dataset across six medical imaging tasks. Our results show that HCR, using models like Gemini 2.5 Flash, achieved a classification accuracy of 78.12%, performing comparably to, and in some instances exceeding, dedicated vision models such as ResNet50 (72.92% accuracy) that were specifically trained for this task. The HCR framework not only provides accurate quality classifications but also generates interpretable, step-by-step reasoning for its assessments. This work demonstrates the potential of LLMs, when appropriately guided, to serve as sophisticated evaluators, offering a pathway toward more trustworthy and clinically aligned quality control for AI in medical imaging.
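The staged prompting can be sketched as a simple pipeline that threads each stage's output into the next prompt. Everything below is illustrative: call_llm is a hypothetical stand-in for whatever chat client is used, and the stage prompts are invented, not HCR's:

```python
from typing import Callable

STAGES = [
    "Recall anatomical knowledge relevant to the target structure.",
    "Describe the visual features of the candidate segmentation mask.",
    "Infer whether the mask is anatomically plausible.",
    "Give a clinical verdict: ACCEPT or REJECT, with step-by-step reasons.",
]

def assess_segmentation(case: str, call_llm: Callable[[str], str]) -> str:
    """Run the staged prompts, accumulating context between stages."""
    context = case
    for stage in STAGES:
        context += "\n\n" + call_llm(f"{stage}\n\nContext so far:\n{context}")
    return context  # final text carries the reasoning chain and the verdict
```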

Medical World Model: Generative Simulation of Tumor Evolution for Treatment Planning

Yijun Yang, Zhao-Yang Wang, Qiuping Liu, Shuwen Sun, Kang Wang, Rama Chellappa, Zongwei Zhou, Alan Yuille, Lei Zhu, Yu-Dong Zhang, Jieneng Chen

arXiv preprint · Jun 2, 2025
Providing effective treatment and making informed clinical decisions are essential goals of modern medicine and clinical care. We are interested in simulating disease dynamics for clinical decision-making, leveraging recent advances in large generative models. To this end, we introduce the Medical World Model (MeWM), the first world model in medicine that visually predicts future disease states based on clinical decisions. MeWM comprises (i) vision-language models that serve as policy models and (ii) tumor generative models as dynamics models. The policy model generates action plans, such as clinical treatments, while the dynamics model simulates tumor progression or regression under given treatment conditions. Building on this, we propose an inverse dynamics model that applies survival analysis to the simulated post-treatment tumor, enabling the evaluation of treatment efficacy and the selection of the optimal clinical action plan. As a result, the proposed MeWM simulates disease dynamics by synthesizing post-treatment tumors, achieving state-of-the-art specificity in Turing tests evaluated by radiologists. Its inverse dynamics model also outperforms medical-specialized GPTs in optimizing individualized treatment protocols across all metrics. Notably, MeWM improves clinical decision-making for interventional physicians, boosting the F1-score for selecting the optimal TACE protocol by 13% and paving the way for future integration of medical world models as second readers.
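The policy/dynamics/inverse-dynamics split implies a simple selection loop: simulate each candidate treatment, score the simulated outcome, and keep the best. A minimal sketch in which all three callables are placeholders, not MeWM components:

```python
def select_treatment(image, candidates, simulate, survival_score):
    """Pick the candidate action whose simulated outcome scores highest."""
    best_action, best_score = None, float("-inf")
    for action in candidates:
        simulated = simulate(image, action)  # dynamics model: tumor rollout
        score = survival_score(simulated)    # inverse dynamics: survival analysis
        if score > best_score:
            best_action, best_score = action, score
    return best_action, best_score
```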

Direct parametric reconstruction in dynamic PET using deep image prior and a novel parameter magnification strategy.

Hong X, Wang F, Sun H, Arabi H, Lu L

PubMed paper · Jun 2, 2025
Multiple parametric imaging in positron emission tomography (PET) is challenging due to the noisy dynamic data and the complex mapping to kinetic parameters. Although methods like direct parametric reconstruction have been proposed to improve image quality, limitations persist, particularly for nonlinear and small-valued micro-parameters (e.g., k₂, k₃). This study presents a novel unsupervised deep learning approach to reconstruct these micro-parameters and improve their quality. We proposed a direct parametric image reconstruction model, DIP-PM, integrating deep image prior (DIP) with a parameter magnification (PM) strategy. The model employs a U-Net generator to predict multiple parametric images using a CT image prior, with each output channel subsequently magnified by a factor to adjust its intensity. The model was optimized with a log-likelihood loss computed between the measured projection data and the forward-projected data. Two tracer datasets were simulated for evaluation: ⁸²Rb data using the 1-tissue compartment (1 TC) model and ¹⁸F-FDG data using the 2-tissue compartment (2 TC) model, with 10-fold magnification applied to the 1 TC k₂ and the 2 TC k₃, respectively. DIP-PM was compared with the indirect method, a direct algorithm (OTEM), and the DIP method without parameter magnification (DIP-only). Performance was assessed on phantom data using peak signal-to-noise ratio (PSNR), normalized root mean square error (NRMSE), and the structural similarity index (SSIM), as well as on a real ¹⁸F-FDG scan from a male subject. For the 1 TC model, OTEM performed well in K₁ reconstruction, but both the indirect and OTEM methods showed high noise and poor performance in k₂. The DIP-only method suppressed noise in k₂ but failed to reconstruct fine structures in the myocardium. DIP-PM outperformed the other methods with well-preserved detailed structures, particularly in k₂, achieving the best metrics (PSNR: 19.00, NRMSE: 0.3002, SSIM: 0.9289). For the 2 TC model, traditional methods exhibited high noise and blurred structures in estimating all nonlinear parameters (K₁, k₂, k₃), while DIP-based methods significantly improved image quality. DIP-PM outperformed all methods in k₃ (PSNR: 21.89, NRMSE: 0.4054, SSIM: 0.8797) and consequently produced the most accurate 2 TC Kᵢ images (PSNR: 22.74, NRMSE: 0.4897, SSIM: 0.8391). On the real FDG data, DIP-PM also showed clear advantages in estimating K₁, k₂, and k₃ while preserving myocardial structures. The results underscore the efficacy of DIP-based direct parametric imaging in generating PET parametric images and improving their quality. This study suggests that the proposed DIP-PM method with the parameter magnification strategy can enhance the fidelity of nonlinear micro-parameter images.
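Two of the abstract's ingredients are easy to make concrete: per-channel magnification of the network output and a Poisson log-likelihood data term. A hedged PyTorch sketch; the factor placement and tensor shapes are assumptions, not the published implementation:

```python
import torch

def magnify(params: torch.Tensor, factors: torch.Tensor) -> torch.Tensor:
    """params: (B, C, H, W) U-Net output; factors: per-channel scales,
    e.g. 10.0 on the k2/k3 channels and 1.0 elsewhere (assumed layout)."""
    return params * factors.view(1, -1, 1, 1)

def poisson_nll(measured: torch.Tensor, projected: torch.Tensor) -> torch.Tensor:
    """Negative Poisson log-likelihood up to a constant: sum(p - m*log p)."""
    eps = 1e-8  # guard against log(0)
    return (projected - measured * torch.log(projected + eps)).sum()
```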

Efficiency and Quality of Generative AI-Assisted Radiograph Reporting.

Huang J, Wittbrodt MT, Teague CN, Karl E, Galal G, Thompson M, Chapa A, Chiu ML, Herynk B, Linchangco R, Serhal A, Heller JA, Abboud SF, Etemadi M

PubMed paper · Jun 2, 2025
Diagnostic imaging interpretation involves distilling multimodal clinical information into text form, a task well-suited to augmentation by generative artificial intelligence (AI). However, to our knowledge, impacts of AI-based draft radiological reporting remain unstudied in clinical settings. To prospectively evaluate the association of radiologist use of a workflow-integrated generative model capable of providing draft radiological reports for plain radiographs across a tertiary health care system with documentation efficiency, the clinical accuracy and textual quality of final radiologist reports, and the model's potential for detecting unexpected, clinically significant pneumothorax. This prospective cohort study was conducted from November 15, 2023, to April 24, 2024, at a tertiary care academic health system. The association between use of the generative model and radiologist documentation efficiency was evaluated for radiographs documented with model assistance compared with a baseline set of radiographs without model use, matched by study type (chest or nonchest). Peer review was performed on model-assisted interpretations. Flagging of pneumothorax requiring intervention was performed on radiographs prospectively. The primary outcomes were association of use of the generative model with radiologist documentation efficiency, assessed by difference in documentation time with and without model use using a linear mixed-effects model; for peer review of model-assisted reports, the difference in Likert-scale ratings using a cumulative-link mixed model; and for flagging pneumothorax requiring intervention, sensitivity and specificity. A total of 23,960 radiographs (11,980 each with and without model use) were used to analyze documentation efficiency. Interpretations with model assistance (mean [SE], 159.8 [27.0] seconds) were faster than the baseline set of those without (mean [SE], 189.2 [36.2] seconds) (P = .02), representing a 15.5% documentation efficiency increase. Peer review of 800 studies showed no difference in clinical accuracy (χ² = 0.68; P = .41) or textual quality (χ² = 3.62; P = .06) between model-assisted interpretations and nonmodel interpretations. Moreover, the model flagged studies containing a clinically significant, unexpected pneumothorax with a sensitivity of 72.7% and specificity of 99.9% among 97,651 studies screened. In this prospective cohort study of clinical use of a generative model for draft radiological reporting, model use was associated with improved radiologist documentation efficiency while maintaining clinical quality and demonstrated potential to detect studies containing a pneumothorax requiring immediate intervention. This study suggests the potential for radiologist and generative AI collaboration to improve clinical care delivery.
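As a quick check, the quoted 15.5% efficiency gain follows directly from the reported mean documentation times:

```python
baseline, assisted = 189.2, 159.8  # mean seconds per study, from the abstract
print(f"{(baseline - assisted) / baseline:.1%}")  # -> 15.5%
```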