
Domain-randomized deep learning for neuroimage analysis

Malte Hoffmann

arXiv preprint · Jul 17, 2025
Deep learning has revolutionized neuroimage analysis by delivering unprecedented speed and accuracy. However, the narrow scope of many training datasets constrains model robustness and generalizability. This challenge is particularly acute in magnetic resonance imaging (MRI), where image appearance varies widely across pulse sequences and scanner hardware. A recent domain-randomization strategy addresses the generalization problem by training deep neural networks on synthetic images with randomized intensities and anatomical content. By generating diverse data from anatomical segmentation maps, the approach enables models to accurately process image types unseen during training, without retraining or fine-tuning. It has demonstrated effectiveness across modalities including MRI, computed tomography, positron emission tomography, and optical coherence tomography, as well as beyond neuroimaging in ultrasound, electron and fluorescence microscopy, and X-ray microtomography. This tutorial paper reviews the principles, implementation, and potential of the synthesis-driven training paradigm. It highlights key benefits, such as improved generalization and resistance to overfitting, while discussing trade-offs such as increased computational demands. Finally, the article explores practical considerations for adopting the technique, aiming to accelerate the development of generalizable tools that make deep learning more accessible to domain experts without extensive computational resources or machine learning knowledge.
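
As a concrete illustration of the synthesis-driven idea, the sketch below generates one randomized training image from an integer label map; it assumes NumPy/SciPy inputs and is illustrative only, not code from any released implementation.

```python
# Minimal sketch of domain-randomized training data generation: draw a random
# intensity per anatomical label, add noise, blur, and a random gamma, so a
# network trained on the result never sees a fixed acquisition contrast.
# Illustrative only; not code from any published implementation.
import numpy as np
from scipy.ndimage import gaussian_filter

def synthesize_image(label_map: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Turn an integer label map into one randomized grayscale training image."""
    image = np.zeros(label_map.shape, dtype=np.float32)
    for label in np.unique(label_map):
        # Random mean and spread per structure -> random tissue contrast.
        mean = rng.uniform(0.0, 1.0)
        std = rng.uniform(0.0, 0.1)
        mask = label_map == label
        image[mask] = rng.normal(mean, std, size=mask.sum())
    # Random blur mimics variable resolution / point-spread functions.
    image = gaussian_filter(image, sigma=rng.uniform(0.0, 2.0))
    # Random gamma perturbs global contrast.
    image = np.clip(image, 0.0, 1.0) ** rng.uniform(0.5, 2.0)
    return image

rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=(64, 64))   # toy "anatomy" with 4 structures
sample = synthesize_image(labels, rng)       # pair (sample, labels) trains a segmenter
```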

AortaDiff: Volume-Guided Conditional Diffusion Models for Multi-Branch Aortic Surface Generation

Delin An, Pan Du, Jian-Xun Wang, Chaoli Wang

arXiv preprint · Jul 17, 2025
Accurate 3D aortic construction is crucial for clinical diagnosis, preoperative planning, and computational fluid dynamics (CFD) simulations, as it enables the estimation of critical hemodynamic parameters such as blood flow velocity, pressure distribution, and wall shear stress. Existing construction methods often rely on large annotated training datasets and extensive manual intervention. While the resulting meshes can serve for visualization purposes, they struggle to produce geometrically consistent, well-constructed surfaces suitable for downstream CFD analysis. To address these challenges, we introduce AortaDiff, a diffusion-based framework that generates smooth aortic surfaces directly from CT/MRI volumes. AortaDiff first employs a volume-guided conditional diffusion model (CDM) to iteratively generate aortic centerlines conditioned on volumetric medical images. Each centerline point is then automatically used as a prompt to extract the corresponding vessel contour, ensuring accurate boundary delineation. Finally, the extracted contours are fitted into a smooth 3D surface, yielding a continuous, CFD-compatible mesh representation. AortaDiff offers distinct advantages over existing methods, including an end-to-end workflow, minimal dependency on large labeled datasets, and the ability to generate CFD-compatible aorta meshes with high geometric fidelity. Experimental results demonstrate that AortaDiff performs effectively even with limited training data, successfully constructing both normal and pathologically altered aorta meshes, including cases with aneurysms or coarctation. This capability enables the generation of high-quality visualizations and positions AortaDiff as a practical solution for cardiovascular research.
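
For readers unfamiliar with conditional diffusion sampling, the snippet below shows a generic DDPM-style reverse step conditioned on a volume embedding; the `eps_model` network, tensor shapes, and noise schedule are placeholders, not the AortaDiff architecture.

```python
# Generic sketch of one conditional denoising-diffusion reverse step, of the
# kind applied to centerline generation above. The noise predictor `eps_model`
# and the conditioning tensor are placeholders, not the paper's model.
import torch

@torch.no_grad()
def ddpm_reverse_step(x_t, t, cond, eps_model, betas):
    """Sample x_{t-1} from x_t given a noise predictor conditioned on `cond`."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    beta_t, alpha_t, alpha_bar_t = betas[t], alphas[t], alpha_bar[t]

    eps = eps_model(x_t, t, cond)  # predicted noise at step t
    mean = (x_t - beta_t / torch.sqrt(1.0 - alpha_bar_t) * eps) / torch.sqrt(alpha_t)
    if t == 0:
        return mean
    return mean + torch.sqrt(beta_t) * torch.randn_like(x_t)

# Toy usage: a "network" that ignores its inputs, just to show the call pattern.
betas = torch.linspace(1e-4, 0.02, 1000)
x = torch.randn(1, 3, 128)                           # e.g. 128 centerline points (x, y, z)
cond = torch.randn(1, 256)                           # embedding of the CT/MRI volume
eps_model = lambda x_t, t, c: torch.zeros_like(x_t)  # placeholder predictor
for t in reversed(range(1000)):
    x = ddpm_reverse_step(x, t, cond, eps_model, betas)
```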

Exploring ChatGPT's potential in diagnosing oral and maxillofacial pathologies: a study of 123 challenging cases.

Tassoker M

PubMed paper · Jul 17, 2025
This study aimed to evaluate the diagnostic performance of ChatGPT-4o, a large language model developed by OpenAI, in challenging cases of oral and maxillofacial diseases presented in the Clinicopathologic Conference section of the journal Oral Surgery, Oral Medicine, Oral Pathology, and Oral Radiology. A total of 123 diagnostically challenging oral and maxillofacial cases published in the aforementioned journal were retrospectively collected. The case presentations, which included detailed clinical, radiographic, and sometimes histopathologic descriptions, were input into ChatGPT-4o. The model was prompted to provide a single most likely diagnosis for each case. These outputs were then compared to the final diagnoses established by expert consensus in each original case report. The accuracy of ChatGPT-4o was calculated based on exact diagnostic matches. ChatGPT-4o correctly diagnosed 96 out of 123 cases, achieving an overall diagnostic accuracy of 78%. Nevertheless, even in cases where the exact diagnosis was not provided, the model often suggested one of the clinically reasonable differential diagnoses. ChatGPT-4o demonstrates a promising ability to assist in the diagnostic process of complex maxillofacial conditions, with a relatively high accuracy rate in challenging cases. While it is not a replacement for expert clinical judgment, large language models may offer valuable decision support in oral and maxillofacial radiology, particularly in educational or consultative contexts.
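
The evaluation protocol amounts to exact-match scoring of one diagnosis per case (96/123 = 78%); a minimal sketch is shown below, where `ask_model` is a hypothetical stand-in for the LLM interface rather than any real API.

```python
# Sketch of the evaluation logic: one most-likely diagnosis per case, scored by
# exact match against the published consensus diagnosis. Near misses (reasonable
# differentials) are not counted. `ask_model` is a placeholder, not a real API.
from typing import Callable, Sequence

def exact_match_accuracy(cases: Sequence[str],
                         references: Sequence[str],
                         ask_model: Callable[[str], str]) -> float:
    """Fraction of cases where the model's diagnosis matches the reference."""
    hits = 0
    for case_text, reference in zip(cases, references):
        prediction = ask_model(
            "Give the single most likely diagnosis for this case:\n" + case_text
        )
        if prediction.strip().lower() == reference.strip().lower():
            hits += 1
    return hits / len(cases)

# e.g. exact_match_accuracy(case_texts, consensus_dx, ask_model) -> 0.78
```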

Pixel Perfect MegaMed: A Megapixel-Scale Vision-Language Foundation Model for Generating High Resolution Medical Images

Zahra TehraniNasab, Amar Kumar, Tal Arbel

arXiv preprint · Jul 17, 2025
Medical image synthesis presents unique challenges due to the inherent complexity and high-resolution details required in clinical contexts. Traditional generative architectures such as Generative Adversarial Networks (GANs) or Variational Autoencoders (VAEs) have shown great promise for high-resolution image generation but struggle with preserving fine-grained details that are key for accurate diagnosis. To address this issue, we introduce Pixel Perfect MegaMed, the first vision-language foundation model to synthesize images at resolutions of 1024x1024. Our method deploys a multi-scale transformer architecture designed specifically for ultra-high resolution medical image generation, enabling the preservation of both global anatomical context and local image-level details. By leveraging vision-language alignment techniques tailored to medical terminology and imaging modalities, Pixel Perfect MegaMed bridges the gap between textual descriptions and visual representations at unprecedented resolution levels. We apply our model to the CheXpert dataset and demonstrate its ability to generate clinically faithful chest X-rays from text prompts. Beyond visual quality, these high-resolution synthetic images prove valuable for downstream tasks such as classification, showing measurable performance gains when used for data augmentation, particularly in low-data regimes. Our code is accessible through the project website - https://tehraninasab.github.io/pixelperfect-megamed.
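
The augmentation experiment the authors describe can be pictured with the short sketch below, which mixes a synthetic pool into a small real training set; the arrays and the `train_and_score` helper are hypothetical placeholders, not the paper's pipeline.

```python
# Sketch of the augmentation comparison: a classifier trained on real images
# alone vs. real images mixed with text-conditioned synthetic images.
# Arrays and the `train_and_score` helper are illustrative placeholders.
import numpy as np

def augment_with_synthetic(x_real, y_real, x_synth, y_synth, ratio=1.0, seed=0):
    """Append a fraction (relative to the real set size) of the synthetic pool."""
    rng = np.random.default_rng(seed)
    n_add = int(ratio * len(x_real))
    idx = rng.choice(len(x_synth), size=min(n_add, len(x_synth)), replace=False)
    x_aug = np.concatenate([x_real, x_synth[idx]], axis=0)
    y_aug = np.concatenate([y_real, y_synth[idx]], axis=0)
    return x_aug, y_aug

# Typical low-data comparison (train_and_score is hypothetical):
# score_real = train_and_score(x_real, y_real, x_test, y_test)
# score_aug  = train_and_score(*augment_with_synthetic(x_real, y_real,
#                              x_synth, y_synth), x_test, y_test)
```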

Multi-modal Risk Stratification in Heart Failure with Preserved Ejection Fraction Using Clinical and CMR-derived Features: An Approach Incorporating Model Explainability.

Zhang S, Lin Y, Han D, Pan Y, Geng T, Ge H, Zhao J

PubMed paper · Jul 17, 2025
Heart failure with preserved ejection fraction (HFpEF) poses significant diagnostic and prognostic challenges due to its clinical heterogeneity. This study proposes a multi-modal, explainable machine learning framework that integrates clinical variables and cardiac magnetic resonance (CMR)-derived features, particularly epicardial adipose tissue (EAT) volume, to improve risk stratification and outcome prediction in patients with HFpEF. A retrospective cohort of 301 participants (171 in the HFpEF group and 130 in the control group) was analyzed. Baseline characteristics, CMR-derived EAT volume, and laboratory biomarkers were integrated into machine learning models. Model performance was evaluated using accuracy, precision, recall, and F1-score. Additionally, receiver operating characteristic area under the curve (ROC-AUC) and precision-recall area under the curve (PR-AUC) were employed to assess discriminative power across varying decision thresholds. Hyperparameter optimization and ensemble techniques were applied to enhance predictive performance. HFpEF patients exhibited significantly higher EAT volume (70.9±27.3 vs. 41.9±18.3 mL, p<0.001) and NT-proBNP levels (1574 [963,2722] vs. 33 [10,100] pg/mL, p<0.001), along with a greater prevalence of comorbidities. The voting classifier demonstrated the highest accuracy for HFpEF diagnosis (0.94), with a precision of 0.96, recall of 0.94, and an F1-score of 0.95. For prognostic tasks, AdaBoost, XGBoost and Random Forest yielded superior performance in predicting adverse clinical outcomes, including rehospitalization and all-cause mortality (accuracy: 0.95). Key predictive features identified included EAT volume, right atrioventricular groove (Right AVG), tricuspid regurgitation velocity (TRV), and metabolic syndrome. Explainable models combining clinical and CMR-derived features, especially EAT volume, improve support for HFpEF diagnosis and outcome prediction. These findings highlight the value of a data-driven, interpretable approach to characterizing HFpEF phenotypes and may facilitate individualized risk assessment in selected populations.
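
A minimal scikit-learn sketch of a soft-voting ensemble with the reported metrics is given below, assuming a placeholder feature matrix in place of the study's clinical and CMR-derived variables.

```python
# Minimal sketch of a soft-voting ensemble and the reported metric set, using
# scikit-learn; the feature matrix (EAT volume, NT-proBNP, comorbidities, ...)
# is a random stand-in, not the study's data.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, classification_report, roc_auc_score
from sklearn.model_selection import train_test_split

X = np.random.rand(301, 10)        # stand-in for clinical + CMR-derived features
y = np.random.randint(0, 2, 301)   # stand-in for HFpEF vs. control labels

clf = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("ada", AdaBoostClassifier(random_state=0)),
    ],
    voting="soft",                 # average predicted probabilities across models
)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf.fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]
print(classification_report(y_te, clf.predict(X_te)))   # accuracy / precision / recall / F1
print("ROC-AUC:", roc_auc_score(y_te, proba))
print("PR-AUC :", average_precision_score(y_te, proba))
```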

Evolving techniques in the endoscopic evaluation and management of pancreas cystic lesions.

Maloof T, Karaisz F, Abdelbaki A, Perumal KD, Krishna SG

PubMed paper · Jul 17, 2025
Accurate diagnosis of pancreatic cystic lesions (PCLs) is essential to guide appropriate management and reduce unnecessary surgeries. Despite multiple guidelines in PCL management, a substantial proportion of patients still undergo major resections for benign cysts, and a majority of resected intraductal papillary mucinous neoplasms (IPMNs) show only low-grade dysplasia, leading to significant clinical, financial, and psychological burdens. This review highlights emerging endoscopic approaches that enhance diagnostic accuracy and support organ-sparing, minimally invasive management of PCLs. Recent studies suggest that endoscopic ultrasound (EUS) and its accessory techniques, such as contrast-enhanced EUS and needle-based confocal laser endomicroscopy, as well as next-generation sequencing analysis of cyst fluid, not only accurately characterize PCLs but are also well tolerated and cost-effective. Additionally, emerging therapeutics such as EUS-guided radiofrequency ablation (RFA) and EUS-chemoablation are promising as minimally invasive treatments for high-risk mucinous PCLs in patients who are not candidates for surgery. Accurate diagnosis of PCLs remains challenging, leading to many patients undergoing unnecessary surgery. Emerging endoscopic imaging biomarkers, artificial intelligence analysis, and molecular biomarkers enhance diagnostic precision. Additionally, novel endoscopic ablative therapies offer safe, minimally invasive, organ-sparing treatment options, thereby reducing the healthcare resource burdens associated with overtreatment.

Cross-Modal conditional latent diffusion model for Brain MRI to Ultrasound image translation.

Jiang S, Wang L, Li Y, Yang Z, Zhou Z, Li B

PubMed paper · Jul 16, 2025
Intraoperative brain ultrasound (US) provides real-time information on lesions and tissues, making it crucial for brain tumor resection. However, due to limitations such as imaging angles and operator techniques, US data is limited in size and difficult to annotate, hindering advancements in intelligent image processing. In contrast, Magnetic Resonance Imaging (MRI) data is more abundant and easier to annotate. If MRI data and models can be effectively transferred to the US domain, generating high-quality US data would greatly enhance US image processing and improve intraoperative US readability. Approach. We propose a Cross-Modal Conditional Latent Diffusion Model (CCLD) for brain MRI-to-US image translation. We employ a noise mask restoration strategy to pretrain an efficient encoder-decoder, enhancing feature extraction, compression, and reconstruction capabilities while reducing computational costs. Furthermore, CCLD integrates the Frequency-Decomposed Feature Optimization Module (FFOM) and the Adaptive Multi-Frequency Feature Fusion Module (AMFM) to effectively leverage MRI structural information and US texture characteristics, ensuring structural accuracy while enhancing texture details in the synthetic US images. Main results. Compared with state-of-the-art methods, our approach achieves superior performance on the ReMIND dataset, obtaining the best Learned Perceptual Image Patch Similarity (LPIPS) score of 19.1%, Mean Absolute Error (MAE) of 4.21%, as well as the highest Peak Signal-to-Noise Ratio (PSNR) of 25.36 dB and Structural Similarity Index (SSIM) of 86.91%. Significance. Experimental results demonstrate that CCLD effectively improves the quality and realism of synthetic ultrasound images, offering a new research direction for the generation of high-quality US datasets and the enhancement of ultrasound image readability.
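
The reported image-quality metrics other than LPIPS can be computed with scikit-image as sketched below; the arrays are placeholders, and LPIPS is omitted because it requires a pretrained perceptual network (e.g., the lpips package).

```python
# Sketch of computing MAE, PSNR, and SSIM for a synthetic US slice against its
# reference with scikit-image. Arrays are placeholders; LPIPS needs a learned
# perceptual network and is not shown here.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = np.random.rand(256, 256).astype(np.float32)   # stand-in for a real US slice
synthetic = np.clip(reference + 0.05 * np.random.randn(256, 256), 0, 1).astype(np.float32)

mae = np.mean(np.abs(reference - synthetic))                           # mean absolute error
psnr = peak_signal_noise_ratio(reference, synthetic, data_range=1.0)   # in dB
ssim = structural_similarity(reference, synthetic, data_range=1.0)

print(f"MAE={mae:.4f}  PSNR={psnr:.2f} dB  SSIM={ssim:.4f}")
```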

Late gadolinium enhancement imaging and sudden cardiac death.

Prasad SK, Akbari T, Bishop MJ, Halliday BP, Leyva-Leon F, Marchlinski F

PubMed paper · Jul 16, 2025
The prediction and management of sudden cardiac death risk continue to pose significant challenges in cardiovascular care despite advances in therapies over the last two decades. Late gadolinium enhancement (LGE) on cardiac magnetic resonance-a marker of myocardial fibrosis-is a powerful non-invasive tool with the potential to aid the prediction of sudden death and direct the use of preventative therapies in several cardiovascular conditions. In this state-of-the-art review, we provide a critical appraisal of the current evidence base underpinning the utility of LGE in both ischaemic and non-ischaemic cardiomyopathies together with a focus on future perspectives and the role for machine learning and digital twin technologies.

From Referral to Reporting: The Potential of Large Language Models in the Radiological Workflow.

Fink A, Rau S, Kästingschäfer K, Weiß J, Bamberg F, Russe MF

PubMed paper · Jul 16, 2025
Large language models (LLMs) hold great promise for optimizing and supporting radiology workflows amidst rising workloads. This review examines potential applications in daily radiology practice, as well as remaining challenges and potential solutions. Potential applications and challenges are presented and illustrated with practical examples and concrete optimization suggestions. LLM-based assistance systems have potential applications in almost all language-based process steps of the radiological workflow. Significant progress has been made in areas such as report generation, particularly with retrieval-augmented generation (RAG) and multi-step reasoning approaches. However, challenges related to hallucinations, reproducibility, and data protection, as well as ethical concerns, need to be addressed before widespread implementation. LLMs have immense potential in radiology, particularly for supporting language-based process steps, with technological advances such as RAG and cloud-based approaches potentially accelerating clinical implementation.
· LLMs can optimize reporting and other language-based processes in radiology with technologies such as RAG and multi-step reasoning approaches.
· Challenges such as hallucinations, reproducibility, privacy, and ethical concerns must be addressed before widespread adoption.
· RAG and cloud-based approaches could help overcome these challenges and advance the clinical implementation of LLMs.
· Fink A, Rau S, Kästingschäfer K et al. From Referral to Reporting: The Potential of Large Language Models in the Radiological Workflow. Rofo 2025; DOI 10.1055/a-2641-3059.
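
To make the retrieval-augmented generation (RAG) pattern concrete, the sketch below retrieves the most similar document chunks and grounds the prompt in them; the `embed` and `generate` callables are hypothetical stand-ins for an embedding model and an LLM, not a specific vendor API.

```python
# Schematic RAG loop: retrieve the most relevant guideline/report snippets, then
# ground the LLM prompt in them. `embed` and `generate` are placeholders.
import numpy as np
from typing import Callable, Sequence

def rag_answer(question: str,
               documents: Sequence[str],
               embed: Callable[[str], np.ndarray],
               generate: Callable[[str], str],
               top_k: int = 3) -> str:
    doc_vecs = np.stack([embed(d) for d in documents])
    q_vec = embed(question)
    # Cosine similarity between the question and every document chunk.
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-8)
    context = "\n\n".join(documents[i] for i in np.argsort(sims)[::-1][:top_k])
    prompt = (
        "Answer using only the context below; say so if the context is insufficient.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```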

Illuminating radiogenomic signatures in pediatric-type diffuse gliomas: insights into molecular, clinical, and imaging correlations. Part II: low-grade group.

Kurokawa R, Hagiwara A, Ito R, Ueda D, Saida T, Sakata A, Nishioka K, Sugawara S, Takumi K, Watabe T, Ide S, Kawamura M, Sofue K, Hirata K, Honda M, Yanagawa M, Oda S, Iima M, Naganawa S

PubMed paper · Jul 16, 2025
The fifth edition of the World Health Organization classification of central nervous system tumors represents a significant advancement in the molecular-genetic classification of pediatric-type diffuse gliomas. This article comprehensively summarizes the clinical, molecular, and radiological imaging features in pediatric-type low-grade gliomas (pLGGs), including MYB- or MYBL1-altered tumors, polymorphous low-grade neuroepithelial tumor of the young (PLNTY), and diffuse low-grade glioma, MAPK pathway-altered. Most pLGGs harbor alterations in the RAS/MAPK pathway, functioning as "one pathway disease". Specific magnetic resonance imaging features, such as the T2-fluid-attenuated inversion recovery (FLAIR) mismatch sign in MYB- or MYBL1-altered tumors and the transmantle-like sign in PLNTYs, may serve as non-invasive biomarkers for underlying molecular alterations. Recent advances in radiogenomics have enabled the differentiation of BRAF fusion from BRAF V600E mutant tumors based on magnetic resonance imaging characteristics. Machine learning approaches have further enhanced our ability to predict molecular subtypes from imaging features. These radiology-molecular correlations offer potential clinical utility in treatment planning and prognostication, especially as targeted therapies against the MAPK pathway emerge. Continued research is needed to refine our understanding of genotype-phenotype correlations in less common molecular alterations and to validate these imaging biomarkers in larger cohorts.