
Ordered-subsets Multi-diffusion Model for Sparse-view CT Reconstruction

Pengfei Yu, Bin Huang, Minghui Zhang, Weiwen Wu, Shaoyu Wang, Qiegen Liu

arXiv preprint · May 15, 2025
Score-based diffusion models have shown significant promise in the field of sparse-view CT reconstruction. However, the projection dataset is large and riddled with redundancy. Consequently, applying the diffusion model to unprocessed data results in lower learning effectiveness and higher learning difficulty, frequently leading to reconstructed images that lack fine details. To address these issues, we propose the ordered-subsets multi-diffusion model (OSMM) for sparse-view CT reconstruction. The OSMM innovatively divides the CT projection data into equal subsets and employs a multi-subset diffusion model (MSDM) to learn from each subset independently. This targeted learning approach reduces complexity and enhances the reconstruction of fine details. Furthermore, the integration of a one-whole diffusion model (OWDM) with complete sinogram data acts as a global information constraint, which can reduce the possibility of generating erroneous or inconsistent sinogram information. Moreover, the OSMM's unsupervised learning framework provides strong robustness and generalizability, adapting seamlessly to varying sparsity levels of CT sinograms. This ensures consistent and reliable performance across different clinical scenarios. Experimental results demonstrate that OSMM outperforms traditional diffusion models in terms of image quality and noise resilience, offering a powerful and versatile solution for advanced CT imaging in sparse-view scenarios.
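As a concrete illustration of the ordered-subsets idea, a minimal Python sketch of splitting a sinogram into equal angular subsets follows; the interleaved view assignment and array shapes are our own assumptions, since the abstract only states that the projection data are divided into equal subsets.

```python
import numpy as np

def ordered_subsets(sinogram: np.ndarray, n_subsets: int) -> list[np.ndarray]:
    """Split a sinogram (views x detectors) into equal, interleaved angular subsets.

    Interleaving (views i, i+n, i+2n, ...) is an assumption here; the paper
    only states that the projection data are divided into equal subsets.
    """
    n_views = sinogram.shape[0]
    assert n_views % n_subsets == 0, "views must divide evenly into subsets"
    return [sinogram[i::n_subsets] for i in range(n_subsets)]

# Each subset would then train one member of the MSDM, while the full
# sinogram trains the OWDM that acts as a global information constraint.
sino = np.random.rand(360, 512)        # toy 360-view, 512-detector sinogram
subsets = ordered_subsets(sino, n_subsets=4)
print([s.shape for s in subsets])      # four subsets of 90 views each
```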

Assessing artificial intelligence in breast screening with stratified results on 306 839 mammograms across geographic regions, age, breast density and ethnicity: A Retrospective Investigation Evaluating Screening (ARIES) study.

Oberije CJG, Currie R, Leaver A, Redman A, Teh W, Sharma N, Fox G, Glocker B, Khara G, Nash J, Ng AY, Kecskemethy PD

PubMed · May 14, 2025
We evaluated an artificial intelligence (AI) system in breast screening through results stratified by age, breast density, ethnicity, and screening centres from different UK regions. A large-scale retrospective study evaluating two variations of using AI as an independent second reader in double reading was executed, with stratifications conducted for clinical and operational metrics. Data from 306 839 mammography cases screened between 2017 and 2021 were used, covering three different UK regions. The impact on safety and effectiveness was assessed using clinical metrics: cancer detection rate and positive predictive value, stratified according to age, breast density and ethnicity. Operational impact was assessed through reading workload and recall rate, measured overall and per centre. Non-inferiority was tested for AI workflows compared with human double reading and, when passed, superiority was tested. The AI interval cancer (IC) flag rate was assessed to estimate the additional cancer detection opportunity with AI that cannot be assessed retrospectively. The AI workflows passed non-inferiority or superiority tests for every metric across all subgroups, with workload savings between 38.3% and 43.7%. The standalone AI flagged 41.2% of ICs overall, ranging between 33.3% and 46.8% across subgroups, with the highest detection rate for dense breasts. Human double reading and the AI workflows showed the same performance disparities across subgroups. The AI integrations maintained or improved performance on all metrics for all subgroups while achieving significant workload reduction. Moreover, complementing these integrations with AI as an additional reader can improve cancer detection. The granularity of assessment showed that screening with the AI-system integrations was as safe as standard double reading across heterogeneous populations.
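For readers who want to reproduce this kind of stratified reporting on their own screening data, a minimal pandas sketch is shown below; the column names (recalled, cancer_detected) and subgroup labels are illustrative assumptions, not the study's schema.

```python
import pandas as pd

def stratified_metrics(df: pd.DataFrame, by: str) -> pd.DataFrame:
    """Cancer detection rate (per 1,000) and recall rate per subgroup,
    the two headline metrics the study stratifies by age, density,
    ethnicity and screening centre."""
    grouped = df.groupby(by)
    return pd.DataFrame({
        "n": grouped.size(),
        "recall_rate_pct": grouped["recalled"].mean() * 100,
        "cdr_per_1000": grouped["cancer_detected"].mean() * 1000,
    })

# Toy usage with a fabricated frame, purely to show the shape of the call:
toy = pd.DataFrame({
    "density": ["a", "a", "b", "b", "c", "c"],
    "recalled": [1, 0, 0, 1, 0, 0],
    "cancer_detected": [1, 0, 0, 0, 0, 0],
})
print(stratified_metrics(toy, by="density"))
```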

Automated scout-image-based estimation of contrast agent dosing: a deep learning approach

Schirrmeister, R., Taleb, L., Friemel, P., Reisert, M., Bamberg, F., Weiss, J., Rau, A.

medRxiv preprint · May 12, 2025
We developed and tested a deep-learning-based algorithm for the approximation of contrast agent dosage based on computed tomography (CT) scout images. We prospectively enrolled 817 patients undergoing clinically indicated CT imaging, predominantly of the thorax and/or abdomen. Patient weight was collected by study staff prior to the examination (1) with a weight scale and (2) as self-reported. Based on the scout images, we developed an EfficientNet convolutional neural network pipeline to estimate the optimal contrast agent dose from patient weight, and provide a browser-based user interface as a versatile open-source tool to account for different contrast agent compounds. We additionally analyzed the body-weight-informative CT features by synthesizing representative examples for different weights using in-context learning and dataset distillation. The cohort consisted of 533 thoracic, 70 abdominal and 229 thoracic-abdominal CT scout scans. Self-reported patient weight was statistically significantly lower than manual measurements (75.13 kg vs. 77.06 kg; p < 10⁻⁵, Wilcoxon signed-rank test). Our pipeline predicted patient weight with a mean absolute error of 3.90 ± 0.20 kg (corresponding to a roughly 4.48-11.70 ml difference in contrast agent depending on the agent) in 5-fold cross-validation and is publicly available at https://tinyurl.com/ct-scout-weight. Interpretability analysis revealed that both larger anatomical shape and higher overall attenuation were predictive of body weight. Our open-source deep learning pipeline allows for the automatic estimation of accurate contrast agent dosing based on scout images in routine CT imaging studies. This approach has the potential to streamline contrast agent dosing workflows, improve efficiency, and enhance patient safety by providing quick and accurate weight estimates without additional measurements or reliance on potentially outdated records. The model's performance may vary depending on patient positioning and scout image quality, and the approach requires validation on larger patient cohorts and at other clinical centers. Author summary: Automation of medical workflows using AI has the potential to increase reproducibility while saving costs and time. Here, we investigated automating the estimation of the required contrast agent dosage for CT examinations. We trained a deep neural network to predict body weight from the initial 2D CT scout images that are acquired prior to the actual CT examination. The predicted weight is then converted to a contrast agent dosage based on contrast-agent-specific conversion factors. To facilitate application in clinical routine, we developed a user-friendly browser-based interface that allows clinicians to select a contrast agent or input a custom conversion factor to receive dosage suggestions, with local data processing in the browser. We also investigated which image characteristics predict body weight and found plausible relationships, such as higher attenuation and larger anatomical shapes correlating with higher body weights. Our work goes beyond prior work by implementing a single model for a variety of anatomical regions, providing an accessible user interface, and investigating the predictive characteristics of the images.
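The dosing step the pipeline automates is a simple weight-to-volume conversion; the sketch below illustrates it with made-up conversion factors, since the paper lets users supply agent-specific or custom factors rather than prescribing values.

```python
# Illustrative conversion factors in ml per kg body weight; actual values
# depend on the contrast agent and local protocol and are NOT taken from
# the paper, whose UI accepts a user-supplied custom factor.
AGENT_FACTORS_ML_PER_KG = {"agent_a": 1.0, "agent_b": 1.5}

def contrast_dose_ml(predicted_weight_kg: float, factor_ml_per_kg: float) -> float:
    """Convert a scout-image weight estimate into a contrast agent volume."""
    return predicted_weight_kg * factor_ml_per_kg

# A 3.90 kg mean absolute weight error scales linearly with the factor,
# consistent with the 4.48-11.70 ml dose-difference range reported above.
print(contrast_dose_ml(77.0, AGENT_FACTORS_ML_PER_KG["agent_b"]))  # 115.5 ml
```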

CirnetamorNet: An ultrasonic temperature measurement network for microwave hyperthermia based on deep learning.

Cui F, Du Y, Qin L, Li B, Li C, Meng X

PubMed · May 9, 2025
Microwave thermotherapy is a promising approach for cancer treatment, but accurate noninvasive temperature monitoring remains challenging. This study aims to achieve accurate temperature prediction during microwave thermotherapy by efficiently integrating multi-feature data, thereby improving the accuracy and reliability of noninvasive thermometry techniques. We propose an enhanced recurrent neural network architecture, CirnetamorNet. The experimental data acquisition system uses a tissue-mimicking material to construct a body phantom. Ultrasonic image data were collected at different temperatures, and five parameters with high temperature correlation were extracted from the gray-level co-occurrence matrix and the Homodyned-K distribution. Using the multi-feature data as input and temperature prediction as output, the CirnetamorNet model is constructed around a multi-head attention mechanism. Model performance was evaluated by analyzing training losses, prediction mean squared error, and accuracy, and ablation experiments were performed to evaluate the contribution of each module. Compared with common models, CirnetamorNet performs well, with a training loss as low as 1.4589 and a mean squared error of only 0.1856. Its temperature prediction accuracy of 0.3°C exceeds that of many advanced models. Ablation experiments show that removing any key module leads to performance degradation, demonstrating that the collaboration of all modules is significant for model performance. The proposed CirnetamorNet model exhibits exceptional performance in noninvasive thermometry for microwave thermotherapy. It offers a novel approach to multi-feature data fusion in the medical field and holds significant practical application value.
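A minimal PyTorch sketch of the general pattern described here, a recurrent network over multi-feature sequences with multi-head attention, follows; the GRU backbone, layer sizes, and last-step pooling are our assumptions, as the abstract specifies only the five-feature input and the attention mechanism.

```python
import torch
import torch.nn as nn

class TempPredictor(nn.Module):
    """Sketch of a recurrent network with multi-head attention that maps
    multi-feature ultrasound sequences to a temperature estimate. Only the
    5-feature input and multi-head attention come from the abstract; the
    rest of the wiring is assumed for illustration."""

    def __init__(self, n_features: int = 5, hidden: int = 64, heads: int = 4):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # regress temperature in degrees C

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(x)                  # (batch, time, hidden)
        a, _ = self.attn(h, h, h)           # self-attention over time steps
        return self.head(a[:, -1])          # predict from the last time step

model = TempPredictor()
print(model(torch.randn(8, 20, 5)).shape)   # torch.Size([8, 1])
```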

Prompt Engineering for Large Language Models in Interventional Radiology.

Dietrich N, Bradbury NC, Loh C

PubMed · May 7, 2025
Prompt engineering plays a crucial role in optimizing artificial intelligence (AI) and large language model (LLM) outputs by refining input structure, a key factor in medical applications where precision and reliability are paramount. This Clinical Perspective provides an overview of prompt engineering techniques and their relevance to interventional radiology (IR). It explores key strategies, including zero-shot, one- or few-shot, chain-of-thought, tree-of-thought, self-consistency, and directional stimulus prompting, demonstrating their application in IR-specific contexts. Practical examples illustrate how these techniques can be effectively structured for workplace and clinical use. Additionally, the article discusses best practices for designing effective prompts and addresses challenges in the clinical use of generative AI, including data privacy and regulatory concerns. It concludes with an outlook on the future of generative AI in IR, highlighting advances including retrieval-augmented generation, domain-specific LLMs, and multimodal models.
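As a concrete example of the kind of prompt structure discussed, the snippet below combines a one-shot example with a chain-of-thought cue for a generic IR question; the wording is our own illustration, not taken from the article.

```python
# Illustrative one-shot prompt with a chain-of-thought cue for an IR task.
prompt = """You are assisting an interventional radiologist.

Example:
Question: A patient on warfarin needs a liver biopsy. What should be checked?
Answer: Let's reason step by step. Warfarin raises bleeding risk, so review
the INR and hold anticoagulation per local policy before a high-risk biopsy.

Now answer in the same style:
Question: A patient with a prior contrast reaction needs an angiogram.
What should be checked?
Answer: Let's reason step by step."""
print(prompt)  # send this to any LLM API of your choice
```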

Multistage Diffusion Model With Phase Error Correction for Fast PET Imaging.

Gao Y, Huang Z, Xie X, Zhao W, Yang Q, Yang X, Yang Y, Zheng H, Liang D, Liu J, Chen R, Hu Z

PubMed · May 7, 2025
Fast PET imaging is clinically important for reducing motion artifacts and improving patient comfort. While recent diffusion-based deep learning methods have shown promise, they often fail to capture the true PET degradation process, suffer from accumulated inference errors, introduce artifacts, and require extensive reconstruction iterations. To address these challenges, we propose a novel multistage diffusion framework tailored for fast PET imaging. At the coarse level, we design a multistage structure to approximate the temporally non-linear PET degradation process in a data-driven manner, using paired PET images collected under different acquisition durations. A Phase Error Correction Network (PECNet) ensures consistency across stages by correcting accumulated deviations. At the fine level, we introduce a deterministic cold diffusion mechanism, which simulates intra-stage degradation through interpolation between known acquisition durations, significantly reducing reconstruction iterations to as few as 10. Evaluations on [⁶⁸Ga]FAPI and [¹⁸F]FDG PET datasets demonstrate the superiority of our approach, achieving peak PSNRs of 36.2 dB and 39.0 dB, respectively, with average SSIMs over 0.97. Our framework offers high-fidelity PET imaging with fewer iterations, making it practical for accelerated clinical imaging.
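The deterministic cold diffusion step can be pictured as a known degradation operator that the sampler inverts; below is a minimal sketch assuming linear interpolation between long- and short-acquisition images, which is our reading of "interpolation between known acquisition durations".

```python
import numpy as np

def cold_degrade(x_long: np.ndarray, x_short: np.ndarray, t: float) -> np.ndarray:
    """Deterministic cold-diffusion degradation sketch: interpolate between a
    long-acquisition image (t=0) and a short-acquisition one (t=1).

    Linear interpolation is an assumption; the abstract says only that
    intra-stage degradation is simulated by interpolating between known
    acquisition durations.
    """
    assert 0.0 <= t <= 1.0
    return (1.0 - t) * x_long + t * x_short

# Sampling would invert this operator stage by stage, with PECNet correcting
# accumulated deviations between stages (per the abstract).
x_full, x_fast = np.random.rand(128, 128), np.random.rand(128, 128)
print(cold_degrade(x_full, x_fast, t=0.3).shape)  # (128, 128)
```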

The added value of artificial intelligence using Quantib Prostate for the detection of prostate cancer at multiparametric magnetic resonance imaging.

Russo T, Quarta L, Pellegrino F, Cosenza M, Camisassa E, Lavalle S, Apostolo G, Zaurito P, Scuderi S, Barletta F, Marzorati C, Stabile A, Montorsi F, De Cobelli F, Brembilla G, Gandaglia G, Briganti A

PubMed · May 7, 2025
Artificial intelligence (AI) has been proposed to assist radiologists in reporting multiparametric magnetic resonance imaging (mpMRI) of the prostate. We evaluated the diagnostic performance of radiologists with different levels of experience when reporting mpMRI with the support of available AI-based software (Quantib Prostate). This is a single-center study (NCT06298305) involving 110 patients. Those with a positive mpMRI (PI-RADS ≥ 3) underwent targeted plus systematic biopsy (TBx plus SBx), while those with a negative mpMRI but a high clinical suspicion of prostate cancer (PCa) underwent SBx. Three readers with different levels of experience, identified as R1, R2, and R3, reviewed all mpMRI scans. Inter-reader agreement among the three readers with and without the assistance of Quantib Prostate, as well as sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and diagnostic accuracy for the detection of clinically significant PCa (csPCa), were assessed. Overall, 102 patients underwent prostate biopsy, and the csPCa detection rate was 47%. Using Quantib Prostate resulted in an increased number of lesions identified by R3 (101 vs. 127). Inter-reader agreement slightly increased with Quantib Prostate (0.37 without vs. 0.41 with). The PPV, NPV, and diagnostic accuracy (measured by the area under the curve [AUC]) of R3 improved (0.51 vs. 0.55, 0.65 vs. 0.82, and 0.56 vs. 0.62, respectively). Conversely, no changes were observed for R1 and R2. Using Quantib Prostate did not enhance the detection rate of csPCa for readers with some experience in prostate imaging; for an inexperienced reader, however, this AI-based software improved performance. Name of registry: clinicaltrials.gov. NCT06298305. Date of registration: 2022-09.
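The reader-level metrics reported here follow standard confusion-matrix definitions; a small sketch for computing them is shown below, with toy counts that are not the study's data.

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts,
    the per-reader metrics the study reports with and without AI support."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Toy counts only, for illustration:
print(diagnostic_metrics(tp=40, fp=33, tn=20, fn=9))
```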

New Targets for Imaging in Nuclear Medicine.

Brink A, Paez D, Estrada Lobato E, Delgado Bolton RC, Knoll P, Korde A, Calapaquí Terán AK, Haidar M, Giammarile F

PubMed · May 6, 2025
Nuclear medicine is rapidly evolving with new molecular imaging targets and advanced computational tools that promise to enhance diagnostic precision and personalized therapy. Recent years have seen a surge in novel PET and SPECT tracers, such as those targeting prostate-specific membrane antigen (PSMA) in prostate cancer, fibroblast activation protein (FAP) in tumor stroma, and tau protein in neurodegenerative disease. These tracers enable more specific visualization of disease processes compared to traditional agents, fitting into a broader shift toward precision imaging in oncology and neurology. In parallel, artificial intelligence (AI) and machine learning techniques are being integrated into tracer development and image analysis. AI-driven methods can accelerate radiopharmaceutical discovery, optimize pharmacokinetic properties, and assist in interpreting complex imaging datasets. This editorial provides an expanded overview of emerging imaging targets and techniques, including theranostic applications that pair diagnosis with radionuclide therapy, and examines how AI is augmenting nuclear medicine. We discuss the implications of these advancements within the field's historical trajectory and address the regulatory, manufacturing, and clinical challenges that must be navigated. Innovations in molecular targeting and AI are poised to transform nuclear medicine practice, enabling more personalized diagnostics and radiotheranostic strategies in the era of precision healthcare.

Designing a computer-assisted diagnosis system for cardiomegaly detection and radiology report generation.

Zhu T, Xu K, Son W, Linton-Reid K, Boubnovski-Martell M, Grech-Sollars M, Lain AD, Posma JM

PubMed · May 1, 2025
Chest X-rays (CXRs) are a diagnostic tool for cardiothoracic assessment, making up 50% of all diagnostic imaging tests. With hundreds of images examined every day, radiologists can suffer from fatigue, which may reduce diagnostic accuracy and slow down report generation. We describe a prototype computer-assisted diagnosis (CAD) pipeline employing computer vision (CV) and natural language processing (NLP), trained and evaluated on the publicly available MIMIC-CXR dataset. We perform image quality assessment, view labelling, and segmentation-based cardiomegaly severity classification, and use the output of the severity classification for large-language-model-based report generation. Four board-certified radiologists assessed the output accuracy of our CAD pipeline. Across the dataset composed of 377,100 CXR images and 227,827 free-text radiology reports, our system identified 0.18% of cases with mixed-sex mentions, 0.02% of poor-quality images (F1 = 0.81), and 0.28% of wrongly labelled views (accuracy 99.4%). We assigned views for the 4.18% of images with unlabelled views. Our binary cardiomegaly classification model has 95.2% accuracy. The inter-radiologist agreement on evaluating the generated reports' semantics and correctness for radiologist-MIMIC is 0.62 (strict agreement) and 0.85 (relaxed agreement), similar to the radiologist-CAD agreement of 0.55 (strict) and 0.93 (relaxed). Our work found and corrected several incorrect or missing metadata annotations in the MIMIC-CXR dataset. The performance of our CAD system suggests it is on par with human radiologists. Future improvements revolve around improved text generation and the development of CV tools for other diseases.
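A common segmentation-based proxy for cardiomegaly is the cardiothoracic ratio (CTR); the sketch below computes it from binary masks. The abstract does not spell out the paper's exact severity rule, so this formulation and the conventional CTR > 0.5 threshold are assumptions.

```python
import numpy as np

def cardiothoracic_ratio(heart_mask: np.ndarray, thorax_mask: np.ndarray) -> float:
    """CTR from binary segmentation masks: widest horizontal extent of the
    heart divided by that of the thorax (an approximation of the standard
    transverse-diameter definition)."""
    def max_width(mask: np.ndarray) -> int:
        cols = np.where(mask.any(axis=0))[0]
        return int(cols.max() - cols.min() + 1)
    return max_width(heart_mask) / max_width(thorax_mask)

# Toy masks: a 4-column heart inside an 8-column thorax gives CTR = 0.5,
# the conventional cardiomegaly cut-off.
heart = np.zeros((10, 10), bool); heart[4:7, 3:7] = True
thorax = np.zeros((10, 10), bool); thorax[1:9, 1:9] = True
print(cardiothoracic_ratio(heart, thorax))  # 0.5
```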

Radiomics of Dynamic Contrast-Enhanced MRI for Predicting Radiation-Induced Hepatic Toxicity After Intensity Modulated Radiotherapy for Hepatocellular Carcinoma: A Machine Learning Predictive Model Based on the SHAP Methodology.

Liu F, Chen L, Wu Q, Li L, Li J, Su T, Li J, Liang S, Qing L

PubMed · Jan 1, 2025
To develop an interpretable machine learning (ML) model using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) radiomic data, dosimetric parameters, and clinical data for predicting radiation-induced hepatic toxicity (RIHT) in patients with hepatocellular carcinoma (HCC) following intensity-modulated radiation therapy (IMRT). A retrospective analysis of 150 HCC patients was performed, with a 7:3 ratio used to divide the data into training and validation cohorts. Radiomic features from the original MRI sequences and Delta-radiomic features were extracted. Seven radiomics-based ML models were developed: logistic regression (LR), random forest (RF), support vector machine (SVM), eXtreme Gradient Boosting (XGBoost), adaptive boosting (AdaBoost), decision tree (DT), and artificial neural network (ANN). The predictive performance of the models was evaluated using receiver operating characteristic (ROC) curve analysis and calibration curves. Shapley additive explanations (SHAP) were employed to interpret the contribution of each variable and its risk threshold. Original radiomic features and Delta-radiomic features were extracted from DCE-MRI images and filtered to generate Radiomics-scores and Delta-Radiomics-scores. These were then combined with independent risk factors (body mass index (BMI), V5, and pre-Child-Pugh score (pre-CP)) identified through univariate and multivariate logistic regression and Spearman correlation analysis to construct the ML models. In the training cohort, the AUC values were 0.8651 for LR, 0.7004 for RF, 0.6349 for SVM, 0.6706 for XGBoost, 0.7341 for AdaBoost, 0.6806 for DT, and 0.6786 for ANN. The corresponding accuracies were 84.4%, 65.6%, 75.0%, 65.6%, 71.9%, 68.8%, and 71.9%, respectively. The validation cohort further confirmed the superiority of the LR model, which was selected as the optimal model. SHAP analysis revealed that the Delta-radiomics features made a substantial positive contribution to the model. The interpretable ML model based on radiomics provides a non-invasive tool for predicting RIHT in patients with HCC, demonstrating satisfactory discriminative performance.
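The winning pattern, a logistic regression over radiomics scores and clinical factors explained with SHAP, can be sketched in a few lines; the synthetic data, feature ordering, and names below are illustrative only, not the study's.

```python
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the study's 150-patient feature matrix; assumed
# feature order: [rad_score, delta_rad_score, BMI, V5, pre_CP].
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 5))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(size=150) > 0).astype(int)

model = LogisticRegression().fit(X, y)          # the LR model that won out
explainer = shap.LinearExplainer(model, X)      # explain linear-model outputs
shap_values = explainer.shap_values(X)
print(np.shape(shap_values))                    # per-sample, per-feature contributions
```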