Noise-inspired diffusion model for generalizable low-dose CT reconstruction.

Gao Q, Chen Z, Zeng D, Zhang J, Ma J, Shan H

PubMed · Jul 8 2025
The generalization of deep learning-based low-dose computed tomography (CT) reconstruction models to doses unseen in the training data is important and remains challenging. Previous efforts rely heavily on paired data to improve generalization performance and robustness, either by collecting diverse CT data for re-training or a few test data for fine-tuning. Recently, diffusion models have shown promising and generalizable performance in low-dose CT (LDCT) reconstruction; however, they may produce unrealistic structures because CT image noise deviates from a Gaussian distribution and because guidance from noisy LDCT images provides imprecise prior information. In this paper, we propose a noise-inspired diffusion model for generalizable LDCT reconstruction, termed NEED, which tailors diffusion models to the noise characteristics of each domain. First, we propose a novel shifted Poisson diffusion model to denoise projection data, which aligns the diffusion process with the noise model of pre-log LDCT projections. Second, we devise a doubly guided diffusion model to refine reconstructed images, which leverages LDCT images and initial reconstructions to locate prior information more accurately and enhance reconstruction fidelity. By cascading these two diffusion models for dual-domain reconstruction, NEED requires only normal-dose data for training and can be effectively extended to various unseen dose levels at test time via a time step matching strategy. Extensive qualitative, quantitative, and segmentation-based evaluations on two datasets demonstrate that NEED consistently outperforms state-of-the-art methods in reconstruction and generalization performance. Source code is available at https://github.com/qgao21/NEED.
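
For readers unfamiliar with the pre-log noise model the paper builds on, the sketch below simulates low-dose projection measurements with the standard shifted-Poisson approximation (Poisson photon statistics with electronic noise folded into the mean). It is an illustrative sketch of that noise model, not the authors' diffusion model; i0 and sigma_e are placeholder dose and electronic-noise parameters.

```python
import numpy as np

def simulate_prelog_ldct(projection, i0=1e4, sigma_e=5.0, seed=None):
    """Simulate noisy pre-log low-dose CT measurements with the standard
    shifted-Poisson approximation: Poisson(lambda + sigma_e^2) - sigma_e^2,
    where lambda is the expected transmitted photon count."""
    rng = np.random.default_rng(seed)
    lam = i0 * np.exp(-projection)                # Beer-Lambert attenuation
    counts = rng.poisson(lam + sigma_e**2) - sigma_e**2
    counts = np.clip(counts, 1.0, None)           # avoid log of non-positive values
    noisy_projection = -np.log(counts / i0)       # back to post-log line integrals
    return counts, noisy_projection
```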

The correlation of liquid biopsy genomic data to radiomics in colon, pancreatic, lung and prostatic cancer patients.

Italiano A, Gautier O, Dupont J, Assi T, Dawi L, Lawrance L, Bone A, Jardali G, Choucair A, Ammari S, Bayle A, Rouleau E, Cournede PH, Borget I, Besse B, Barlesi F, Massard C, Lassau N

PubMed · Jul 8 2025
With the advances in artificial intelligence (AI) and precision medicine, radiomics has emerged as a promising tool in the field of oncology. Radiogenomics integrates radiomics with genomic data, potentially offering a non-invasive method for identifying biomarkers relevant to cancer therapy. Liquid biopsy (LB) has further revolutionized cancer diagnostics by detecting circulating tumor DNA (ctDNA), enabling real-time molecular profiling. This study explores the integration of radiomics and LB to predict genomic alterations in solid tumors, including lung, colon, pancreatic, and prostate cancers. A retrospective study was conducted on 418 patients from the STING trial (NCT04932525), all of whom underwent both LB and CT imaging. Predictive models were developed using an XGBoost logistic classifier, with statistical analysis performed to compare tumor volumes, lesion counts, and affected organs across molecular subtypes. Performance was evaluated using area under the curve (AUC) values and cross-validation techniques. Radiomic models demonstrated moderate-to-good performance in predicting genomic alterations. KRAS mutations were best identified in pancreatic cancer (AUC = 0.97), while moderate discrimination was noted in lung (AUC = 0.66) and colon cancer (AUC = 0.64). EGFR mutations in lung cancer were detected with an AUC of 0.74, while BRAF mutations showed good discriminatory ability in both lung (AUC = 0.79) and colon cancer (AUC = 0.76). In the radiomics predictive model, AR mutations in prostate cancer showed limited discrimination (AUC = 0.63). This study highlights the feasibility of integrating radiomics and LB for non-invasive genomic profiling in solid tumors, demonstrating significant potential in patient stratification and personalized oncology care. While promising, further prospective validation is required to enhance the generalizability of these models.
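
As a rough illustration of the modeling step described above, the sketch below trains an XGBoost classifier on a radiomic feature matrix and reports a cross-validated AUC. The feature matrix, labels, and hyperparameters are placeholders, not the study's actual pipeline.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

# X: radiomic features extracted from CT lesions (rows = patients),
# y: binary label from liquid biopsy (e.g. KRAS mutated vs. wild type).
rng = np.random.default_rng(0)
X = rng.normal(size=(418, 120))      # placeholder feature matrix
y = rng.integers(0, 2, size=418)     # placeholder mutation labels

clf = XGBClassifier(
    n_estimators=300, max_depth=3, learning_rate=0.05,
    subsample=0.8, eval_metric="logloss",
)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```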

Integrating radiomic texture analysis and deep learning for automated myocardial infarction detection in cine-MRI.

Xu W, Shi X

PubMed · Jul 8 2025
Robust differentiation between infarcted and normal myocardial tissue is essential for improving diagnostic accuracy and personalizing treatment in myocardial infarction (MI). This study proposes a hybrid framework combining radiomic texture analysis with deep learning-based segmentation to enhance MI detection on non-contrast cine cardiac magnetic resonance (CMR) imaging. The approach incorporates radiomic features derived from the Gray-Level Co-Occurrence Matrix (GLCM) and Gray-Level Run Length Matrix (GLRLM) methods into a modified U-Net segmentation network. A three-stage feature selection pipeline was employed, followed by classification using multiple machine learning models. Early and intermediate fusion strategies were integrated into the hybrid architecture. The model was validated on cine-CMR data from the SCD and Kaggle datasets. Joint Entropy, Max Probability, and RLNU emerged as the most discriminative features, with Joint Entropy achieving the highest AUC (0.948). The hybrid model outperformed standalone U-Net in segmentation (Dice = 0.887, IoU = 0.803, HD95 = 4.48 mm) and classification (accuracy = 96.30%, AUC = 0.97, precision = 0.96, recall = 0.94, F1-score = 0.96). Dimensionality reduction via PCA and t-SNE confirmed distinct class separability. Correlation coefficients (r = 0.95-0.98) and Bland-Altman plots demonstrated high agreement between predicted and reference infarct sizes. Integrating radiomic features into a deep learning segmentation pipeline improves MI detection and interpretability in cine-CMR. This scalable and explainable hybrid framework holds potential for broader applications in multimodal cardiac imaging and automated myocardial tissue characterization.
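
To make the texture features concrete, here is a minimal sketch that computes two of the GLCM statistics named above (joint entropy and max probability) with scikit-image. The gray-level quantization, distances, and angles are arbitrary illustrative choices, not the paper's settings.

```python
import numpy as np
from skimage.feature import graycomatrix

def glcm_texture_features(roi, levels=32):
    """Joint entropy and max probability from a normalized GLCM computed
    over a (non-constant) myocardial ROI."""
    roi = np.asarray(roi, dtype=float)
    # Quantize the ROI to a small number of gray levels.
    bins = np.linspace(roi.min(), roi.max(), levels)
    q = np.clip(np.digitize(roi, bins) - 1, 0, levels - 1).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    p = glcm.mean(axis=(2, 3))                    # average over distances/angles
    joint_entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    max_probability = p.max()
    return joint_entropy, max_probability
```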

CineMyoPS: Segmenting Myocardial Pathologies from Cine Cardiac MR.

Ding W, Li L, Qiu J, Lin B, Yang M, Huang L, Wu L, Wang S, Zhuang X

PubMed · Jul 7 2025
Myocardial infarction (MI) is a leading cause of death worldwide. Late gadolinium enhancement (LGE) and T2-weighted cardiac magnetic resonance (CMR) imaging can respectively identify scarring and edema areas, both of which are essential for MI risk stratification and prognosis assessment. Although combining complementary information from multi-sequence CMR is useful, acquiring these sequences can be time-consuming and prohibitive, e.g., due to the administration of contrast agents. Cine CMR is a rapid and contrast-free imaging technique that can visualize both motion and structural abnormalities of the myocardium induced by acute MI. Therefore, we present a new end-to-end deep neural network, referred to as CineMyoPS, to segment myocardial pathologies, i.e., scars and edema, solely from cine CMR images. Specifically, CineMyoPS extracts both motion and anatomy features associated with MI. Given the interdependence between these features, we design a consistency loss (resembling the co-training strategy) to facilitate their joint learning. Furthermore, we propose a time-series aggregation strategy to integrate MI-related features across the cardiac cycle, thereby enhancing segmentation accuracy for myocardial pathologies. Experimental results on a multi-center dataset demonstrate that CineMyoPS achieves promising performance in myocardial pathology segmentation, motion estimation, and anatomy segmentation.
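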
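
The co-training-style consistency idea can be illustrated with a toy loss that encourages the motion and anatomy branches to agree; the symmetric KL form below is an assumption for illustration only, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def consistency_loss(motion_logits, anatomy_logits):
    """Illustrative consistency term between the motion-feature branch and the
    anatomy-feature branch (both predicting per-pixel class logits)."""
    p_motion = torch.softmax(motion_logits, dim=1)
    p_anatomy = torch.softmax(anatomy_logits, dim=1)
    # Symmetric KL divergence between the two branch predictions.
    kl_ma = F.kl_div(p_motion.log(), p_anatomy, reduction="batchmean")
    kl_am = F.kl_div(p_anatomy.log(), p_motion, reduction="batchmean")
    return 0.5 * (kl_ma + kl_am)
```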

A Deep Learning Model Integrating Clinical and MRI Features Improves Risk Stratification and Reduces Unnecessary Biopsies in Men with Suspected Prostate Cancer.

Bacchetti E, De Nardin A, Giannarini G, Cereser L, Zuiani C, Crestani A, Girometti R, Foresti GL

PubMed · Jul 7 2025
<b>Background:</b> Accurate upfront risk stratification in suspected clinically significant prostate cancer (csPCa) may reduce unnecessary prostate biopsies. Integrating clinical and Magnetic Resonance Imaging (MRI) variables using deep learning could improve prediction. <b>Methods:</b> We retrospectively analysed 538 men who underwent MRI and biopsy between April 2019 and September 2024. A fully connected neural network was trained using 5-fold cross-validation. Model 1 included clinical features (age, prostate-specific antigen [PSA], PSA density, digital rectal examination, family history, prior negative biopsy, and ongoing therapy). Model 2 used MRI-derived Prostate Imaging Reporting and Data System (PI-RADS) categories. Model 3 used all previous variables as well as lesion size, location, and prostate volume as determined on MRI. <b>Results:</b> Model 3 achieved the highest area under the receiver operating characteristic curve (AUC = 0.822), followed by Model 2 (AUC = 0.778) and Model 1 (AUC = 0.716). Sensitivities for detecting csPCa were 87.4%, 91.6%, and 86.8% for Models 1, 2, and 3, respectively. Although Model 3 had slightly lower sensitivity than Model 2, it showed higher specificity, reducing false positives and avoiding 43.4% and 21.2% more biopsies than Models 1 and 2, respectively. Decision curve analysis showed that Model 2 had the highest net benefit at risk thresholds ≤ 20%, while Model 3 was superior above 20%. <b>Conclusions:</b> Model 3 improved csPCa risk stratification, particularly in biopsy-averse settings, while Model 2 was more effective in cancer-averse scenarios. These models support personalized, context-sensitive biopsy decisions.
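
A minimal sketch of a fully connected classifier in the spirit of "Model 3" is shown below; the feature list, layer sizes, and dropout rate are assumptions for illustration, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class CsPCaNet(nn.Module):
    """Small fully connected classifier over clinical + MRI-derived variables."""
    def __init__(self, n_features=11, hidden=64, dropout=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(hidden, hidden // 2), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(hidden // 2, 1),   # logit for csPCa vs. no csPCa
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

# Assumed 11 inputs: age, PSA, PSA density, DRE, family history, prior negative
# biopsy, ongoing therapy, PI-RADS, lesion size, lesion location, prostate volume.
model = CsPCaNet(n_features=11)
logits = model(torch.randn(8, 11))   # batch of 8 hypothetical patients
```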

Deep Learning Model Based on Dual-energy CT for Assessing Cervical Lymph Node Metastasis in Oral Squamous Cell Carcinoma.

Qi YM, Zhang LJ, Wang Y, Duan XH, Li YJ, Xiao EH, Luo YH

PubMed · Jul 7 2025
Accurate detection of lymph node metastasis (LNM) in oral squamous cell carcinoma (OSCC) is crucial for treatment planning. This study developed a deep learning model using dual-energy CT to improve LNM detection. Preoperative dual-energy CT images (Iodine Map, Fat Map, monoenergetic 70 keV, and RHO/Z Map) and clinical data were collected from two centers. From the first center, 248 patients were divided into training (n=198) and internal validation (n=50) cohorts (8:2 ratio), while 106 patients from the second center comprised the external validation cohort. Region-of-interest images from all four sequences were stacked along the channel dimension to generate fused four-channel composite images. Sixteen deep learning models were developed: three architectures (Crossformer, Densenet169, Squeezenet1_0) were applied to each single-sequence image and to the fused image, followed by MLP integration; additionally, a Crossformer_Transformer model was constructed from the fused images. The top-performing model was compared against radiologists' assessments. Among the 16 deep learning models trained in this study, the Crossformer_Transformer model demonstrated the best diagnostic performance in predicting LNM in OSCC patients, with an AUC of 0.960 (95% CI: 0.9355-0.9842) on the training dataset, and 0.881 (95% CI: 0.7396-1.0000) and 0.881 (95% CI: 0.8033-0.9590) on the internal and external validation sets, respectively. The average AUC for radiologists across both validation cohorts (0.723-0.819) was lower than that of the model. The Crossformer_Transformer model, validated on multicenter data, shows strong potential for improving preoperative risk assessment and clinical decision-making in cervical LNM for OSCC patients.
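
The four-channel fusion step can be sketched as follows: co-registered ROI crops from the four dual-energy sequences are stacked along the channel axis. The per-sequence z-score normalization is an assumption added for illustration, not a detail from the paper.

```python
import numpy as np

def fuse_dect_sequences(iodine, fat, mono70, rhoz):
    """Stack co-registered ROI crops from the four dual-energy CT sequences
    into one four-channel composite image of shape (4, H, W)."""
    channels = []
    for roi in (iodine, fat, mono70, rhoz):
        roi = np.asarray(roi, dtype=np.float32)
        roi = (roi - roi.mean()) / (roi.std() + 1e-6)   # z-score per sequence
        channels.append(roi)
    return np.stack(channels, axis=0)

fused = fuse_dect_sequences(*[np.random.rand(128, 128) for _ in range(4)])
print(fused.shape)  # (4, 128, 128)
```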

Usefulness of compressed sensing coronary magnetic resonance angiography with deep learning reconstruction.

Tabo K, Kido T, Matsuda M, Tokui S, Mizogami G, Takimoto Y, Matsumoto M, Miyoshi M, Kido T

PubMed · Jul 7 2025
Coronary magnetic resonance angiography (CMRA) scans are generally time-consuming. CMRA with compressed sensing (CS) and artificial intelligence (AI) (CSAI CMRA) is expected to shorten the imaging time while maintaining image quality. This study aimed to evaluate the usefulness of CS and AI for non-contrast CMRA. Twenty volunteers underwent both CS and conventional CMRA. Conventional CMRA employed parallel imaging (PI) with an acceleration factor of 2. CS CMRA employed a combination of PI and CS with an acceleration factor of 3. Deep learning reconstruction was performed offline on the CS CMRA data after scanning, which was defined as CSAI CMRA. We compared the imaging time, image quality, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and vessel sharpness for each CMRA scan. The CS CMRA scan time was significantly shorter than that of conventional CMRA (460 s [343, 753 s] vs. 727 s [567, 939 s], p < 0.001). The image quality scores of the left anterior descending artery (LAD) and left circumflex artery (LCX) were significantly higher for conventional CMRA (LAD: 3.3 ± 0.7, LCX: 3.3 ± 0.7) and CSAI CMRA (LAD: 3.7 ± 0.6, LCX: 3.5 ± 0.7) than for CS CMRA (LAD: 2.9 ± 0.6, LCX: 2.9 ± 0.6) (p < 0.05). The right coronary artery scores did not differ among the three groups (p = 0.087). The SNR and CNR were significantly higher for CSAI CMRA (SNR: 12.3 [9.7, 13.7], CNR: 12.3 [10.5, 14.5]) and CS CMRA (SNR: 10.5 [8.2, 12.6], CNR: 9.5 [7.9, 12.6]) than for conventional CMRA (SNR: 9.0 [7.8, 11.1], CNR: 7.7 [6.0, 10.1]) (p < 0.01). Vessel sharpness was significantly higher for CSAI CMRA (LAD: 0.87 [0.78, 0.91]) (p < 0.05), with no significant difference between CS CMRA (LAD: 0.77 [0.71, 0.83]) and conventional CMRA (LAD: 0.77 [0.71, 0.86]). CSAI CMRA can shorten the imaging time while maintaining good image quality.
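
The SNR and CNR comparisons above can be reproduced conceptually with conventional ROI-based estimates. The sketch below uses an assumed definition (blood-pool signal over background noise SD, and blood-myocardium contrast over the same SD), which may differ from the exact measurement protocol in the paper.

```python
import numpy as np

def snr_cnr(blood_roi, myocardium_roi, background_roi):
    """ROI-based SNR of the coronary/blood signal and CNR between blood and
    myocardium, both normalized by the background noise standard deviation."""
    noise_sd = np.std(background_roi)
    snr = np.mean(blood_roi) / noise_sd
    cnr = (np.mean(blood_roi) - np.mean(myocardium_roi)) / noise_sd
    return snr, cnr
```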

Performance of a deep-learning-based lung nodule detection system using 0.25-mm thick ultra-high-resolution CT images.

Higashibori H, Fukumoto W, Kusuda S, Yokomachi K, Mitani H, Nakamura Y, Awai K

PubMed · Jul 7 2025
Artificial intelligence (AI) algorithms for lung nodule detection assist radiologists. Because their performance on ultra-high-resolution CT (U-HRCT) images has not been evaluated, we investigated the usefulness of 0.25-mm U-HRCT slices with a commercially available deep-learning-based lung nodule detection (DL-LND) system. We enrolled 63 patients who underwent U-HRCT for lung cancer or suspected lung cancer. Two board-certified radiologists identified nodules more than 4 mm in diameter on 1-mm HRCT slices and set the reference standard by consensus. They recorded all lesions detected on 5-, 1-, and 0.25-mm slices by the DL-LND system. Unidentified nodules were included in the reference standard. To examine the performance of the DL-LND system, the sensitivity, positive predictive value (PPV), and number of false-positive (FP) nodules were recorded. The mean number of lesions detected on 5-, 1-, and 0.25-mm slices was 5.1, 7.8, and 7.2 per CT scan, respectively. On 5-mm slices the sensitivity and PPV were 79.8% and 46.4%; on 1-mm slices they were 91.5% and 34.8%, and on 0.25-mm slices they were 86.7% and 36.1%. The sensitivity was significantly higher on 1-mm than on 5-mm slices (p < 0.01), while the PPV was significantly lower on 1-mm than on 5-mm slices (p < 0.01). A slice thickness of 0.25 mm did not further improve performance. The mean number of FP nodules on 5-, 1-, and 0.25-mm slices was 2.8, 5.2, and 4.7 per CT scan. We found that 1 mm was the best slice thickness for U-HRCT images using the commercially available DL-LND system.
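
The per-slice-thickness summary statistics quoted above follow directly from lesion-level counts; a minimal helper for computing them is sketched below (the count variables are hypothetical inputs, not data from the study).

```python
def detection_metrics(true_positives, false_positives, false_negatives, n_scans):
    """Sensitivity, positive predictive value, and false positives per CT scan
    from lesion-level detection counts."""
    sensitivity = true_positives / (true_positives + false_negatives)
    ppv = true_positives / (true_positives + false_positives)
    fp_per_scan = false_positives / n_scans
    return sensitivity, ppv, fp_per_scan
```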

Dynamic abdominal MRI image generation using cGANs: A generalized model for various breathing patterns with extensive evaluation.

Cordón-Avila A, Ballı ÖF, Damme K, Abayazid M

PubMed · Jul 7 2025
Organ motion is a limiting factor during the treatment of abdominal tumors. During abdominal interventions, medical images are acquired to provide guidance; however, this increases operative time and radiation exposure. In this paper, conditional generative adversarial networks are implemented to generate dynamic magnetic resonance images using external abdominal motion as a surrogate signal. The generator was trained to account for breathing variability, and different models were investigated to improve motion quality. Additionally, objective and subjective studies were conducted to assess image and motion quality. The objective study included metrics such as the structural similarity index measure (SSIM) and mean absolute error (MAE). In the subjective study, 32 clinical experts evaluated the generated images by completing different tasks. The tasks involved identifying images and videos as real or fake via a questionnaire, allowing the experts to assess realism in both static images and dynamic sequences. The best-performing model achieved an SSIM of 0.73 ± 0.13, and the MAE was below 4.5 mm and 1.8 mm for the superior-inferior and anterior-posterior motion directions, respectively. The proposed framework was compared to a related method that utilized a set of convolutional neural networks combined with recurrent layers. In the subjective study, more than 50% of the generated images and dynamic sequences were classified as real, except in one task. Synthetic images have the potential to reduce the need for acquiring intraoperative images, decreasing time and radiation exposure. A video summary can be found in the supplementary material.
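
The objective evaluation described above can be sketched with standard tools: SSIM between real and generated frames, and an MAE in millimetres between tracked organ positions. The inputs below (co-registered 2D frames and per-frame positions in mm) are assumptions about how the comparison might be set up, not the paper's exact protocol.

```python
import numpy as np
from skimage.metrics import structural_similarity

def image_and_motion_errors(real, generated, real_pos_mm, gen_pos_mm):
    """SSIM between a real and a generated MRI frame, plus mean absolute error
    (in mm) between tracked organ positions along one motion direction."""
    real = np.asarray(real, dtype=float)
    generated = np.asarray(generated, dtype=float)
    ssim = structural_similarity(real, generated,
                                 data_range=real.max() - real.min())
    mae_mm = np.mean(np.abs(np.asarray(real_pos_mm) - np.asarray(gen_pos_mm)))
    return ssim, mae_mm
```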

MedGemma Technical Report

Andrew Sellergren, Sahar Kazemzadeh, Tiam Jaroensri, Atilla Kiraly, Madeleine Traverse, Timo Kohlberger, Shawn Xu, Fayaz Jamil, Cían Hughes, Charles Lau, Justin Chen, Fereshteh Mahvar, Liron Yatziv, Tiffany Chen, Bram Sterling, Stefanie Anna Baby, Susanna Maria Baby, Jeremy Lai, Samuel Schmidgall, Lu Yang, Kejia Chen, Per Bjornsson, Shashir Reddy, Ryan Brush, Kenneth Philbrick, Howard Hu, Howard Yang, Richa Tiwari, Sunny Jansen, Preeti Singh, Yun Liu, Shekoofeh Azizi, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, Ramona Merhej, Sarah Perrin, Tatiana Matejovicova, Alexandre Ramé, Morgane Riviere, Louis Rouillard, Thomas Mesnard, Geoffrey Cideron, Jean-bastien Grill, Sabela Ramos, Edouard Yvinec, Michelle Casbon, Elena Buchatskaya, Jean-Baptiste Alayrac, Dmitry Lepikhin, Vlad Feinberg, Sebastian Borgeaud, Alek Andreev, Cassidy Hardin, Robert Dadashi, Léonard Hussenot, Armand Joulin, Olivier Bachem, Yossi Matias, Katherine Chou, Avinatan Hassidim, Kavi Goel, Clement Farabet, Joelle Barral, Tris Warkentin, Jonathon Shlens, David Fleet, Victor Cotruta, Omar Sanseviero, Gus Martins, Phoebe Kirk, Anand Rao, Shravya Shetty, David F. Steiner, Can Kirmizibayrak, Rory Pilgrim, Daniel Golden, Lin Yang

arXiv preprint · Jul 7 2025
Artificial intelligence (AI) has significant potential in healthcare applications, but its training and deployment face challenges due to healthcare's diverse data, complex tasks, and the need to preserve privacy. Foundation models that perform well on medical tasks and require less task-specific tuning data are critical to accelerate the development of healthcare AI applications. We introduce MedGemma, a collection of medical vision-language foundation models based on Gemma 3 4B and 27B. MedGemma demonstrates advanced medical understanding and reasoning on images and text, significantly exceeding the performance of similar-sized generative models and approaching the performance of task-specific models, while maintaining the general capabilities of the Gemma 3 base models. For out-of-distribution tasks, MedGemma achieves 2.6-10% improvement on medical multimodal question answering, 15.5-18.1% improvement on chest X-ray finding classification, and 10.8% improvement on agentic evaluations compared to the base models. Fine-tuning MedGemma further improves performance in subdomains, reducing errors in electronic health record information retrieval by 50% and reaching comparable performance to existing specialized state-of-the-art methods for pneumothorax classification and histopathology patch classification. We additionally introduce MedSigLIP, a medically-tuned vision encoder derived from SigLIP. MedSigLIP powers the visual understanding capabilities of MedGemma and as an encoder achieves comparable or better performance than specialized medical image encoders. Taken together, the MedGemma collection provides a strong foundation of medical image and text capabilities, with potential to significantly accelerate medical research and development of downstream applications. The MedGemma collection, including tutorials and model weights, can be found at https://goo.gle/medgemma.
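
A hypothetical usage sketch is shown below, loading a MedGemma checkpoint through the Hugging Face transformers pipeline. The model identifier, pipeline task, and message format are assumptions based on typical Hugging Face packaging of Gemma-family releases; the authoritative instructions and model weights live at https://goo.gle/medgemma.

```python
from transformers import pipeline

# Assumed identifier for the 4B instruction-tuned variant; weights may be gated.
pipe = pipeline("image-text-to-text", model="google/medgemma-4b-it")

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "chest_xray.png"},   # local path or URL (placeholder)
        {"type": "text", "text": "Describe the findings on this chest X-ray."},
    ],
}]
result = pipe(text=messages, max_new_tokens=128)
print(result[0]["generated_text"])
```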