
Beyond the norm: Exploring the diverse facets of adrenal lesions.

Afif S, Mahmood Z, Zaheer A, Azadi JR

pubmed · Aug 26 2025
Radiological diagnosis of adrenal lesions can be challenging because benign and malignant imaging features overlap. The primary challenge in managing adrenal lesions is to identify and characterize them accurately enough to minimize unnecessary diagnostic examinations and interventions, yet the risks of underdiagnosis and misdiagnosis remain substantial. This review article provides a comprehensive overview of typical, atypical, and overlapping imaging features of both common and rare adrenal lesions, and explores emerging applications of artificial-intelligence-powered analysis of CT and MRI. Such analysis could play a pivotal role in distinguishing benign from malignant and functioning from non-functioning adrenal lesions with high diagnostic accuracy, thereby enhancing diagnostic confidence and potentially reducing unnecessary interventions.

Stress-testing cross-cancer generalizability of 3D nnU-Net for PET-CT tumor segmentation: multi-cohort evaluation with novel oesophageal and lung cancer datasets

Soumen Ghosh, Christine Jestin Hannan, Rajat Vashistha, Parveen Kundu, Sandra Brosda, Lauren G. Aoude, James Lonie, Andrew Nathanson, Jessica Ng, Andrew P. Barbour, Viktor Vegh

arxiv preprint · Aug 26 2025
Robust generalization is essential for deploying deep-learning-based tumor segmentation in clinical PET-CT workflows, where anatomical sites, scanners, and patient populations vary widely. This study presents the first cross-cancer evaluation of nnU-Net on PET-CT, introducing two novel, expert-annotated whole-body datasets: 279 patients with oesophageal cancer (Australian cohort) and 54 with lung cancer (Indian cohort). These cohorts complement the public AutoPET dataset and enable systematic stress-testing of cross-domain performance. We trained and tested 3D nnU-Net models under three paradigms: target only (oesophageal), public only (AutoPET), and combined training. On the test sets, the oesophageal-only model achieved the best in-domain accuracy (mean DSC, 57.8) but failed on the external Indian lung cohort (mean DSC less than 3.4), indicating severe overfitting. The public-only model generalized more broadly (mean DSC, 63.5 on AutoPET and 51.6 on the Indian lung cohort) but underperformed on the Australian oesophageal cohort (mean DSC, 26.7). The combined approach provided the most balanced results (mean DSC: lung, 52.9; oesophageal, 40.7; AutoPET, 60.9), reducing boundary errors and improving robustness across all cohorts. These findings demonstrate that dataset diversity, particularly multi-demographic, multi-center, and multi-cancer integration, outweighs architectural novelty as the key driver of robust generalization. This work presents the first demography-based cross-cancer evaluation of deep learning segmentation and highlights dataset diversity, rather than model complexity, as the foundation for clinically robust segmentation.
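For reference, the mean DSC figures quoted throughout this abstract are the standard Dice similarity coefficient, reported on a 0-100 scale. A minimal sketch on binary masks (the epsilon guard for empty masks is a common convention, not a detail from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps))
```

On this 0-1 scale, the oesophageal-only model's in-domain mean DSC of 57.8 corresponds to 0.578.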

Physical foundations for trustworthy medical imaging: A survey for artificial intelligence researchers.

Cobo M, Corral Fontecha D, Silva W, Lloret Iglesias L

pubmed · Aug 26 2025
Artificial intelligence in medical imaging has grown rapidly over the past decade, driven by advances in deep learning and widespread access to computing resources. Applications span diverse imaging modalities, including those based on electromagnetic radiation (e.g., X-rays), subatomic particles (e.g., nuclear imaging), and acoustic waves (ultrasound). Each modality's features and limitations are defined by its underlying physics. However, many artificial intelligence practitioners lack a solid understanding of the physical principles involved in medical image acquisition. This gap hinders leveraging the full potential of deep learning, as incorporating physics knowledge into artificial intelligence systems promotes trustworthiness, especially in limited-data scenarios. This work reviews the fundamental physical concepts behind medical imaging and examines their influence on recent developments in artificial intelligence, particularly generative models and reconstruction algorithms. Finally, we describe physics-informed machine learning approaches to improve feature learning in medical imaging.
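To make the survey's central idea concrete: physics-informed training penalizes outputs that contradict the known acquisition physics. The sketch below assumes an undersampled-MRI forward model (2D FFT plus sampling mask); the operator choice and the weight lam are illustrative assumptions, not taken from the paper.

```python
import torch

def physics_informed_loss(recon, target, measured_kspace, mask, lam=0.1):
    """Supervised data term plus a physics-consistency penalty (sketch)."""
    data_term = torch.mean((recon - target) ** 2)
    # Re-apply the assumed forward model (FFT + sampling mask) to the
    # reconstruction and compare against the acquired k-space samples.
    kspace_pred = torch.fft.fft2(recon) * mask
    physics_term = torch.mean(torch.abs(kspace_pred - measured_kspace * mask) ** 2)
    return data_term + lam * physics_term
```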

Efficient 3D Biomedical Image Segmentation by Parallelly Multiscale Transformer-CNN Aggregation Network.

Liu W, He Y, Man T, Zhu F, Chen Q, Huang Y, Feng X, Li B, Wan Y, He J, Deng S

pubmed · Aug 25 2025
Accurate and automated segmentation of 3D biomedical images is imperative in clinical diagnosis, imaging-guided surgery, and prognosis assessment. Although the burgeoning of deep learning technologies has fostered capable segmentation models, capturing global and local features both successively and simultaneously remains challenging, yet it is essential for exact and efficient image analysis. To this end, a segmentation solution dubbed the mixed parallel shunted transformer (MPSTrans) is developed here, built around 3D-MPST blocks in a U-shaped framework. It enables not only comprehensive feature capture and multiscale slice synchronization but also deep supervision in the decoder, which facilitates the extraction of hierarchical representations. On an unpublished colon cancer dataset, the model achieved a marked increase in dice similarity coefficient (DSC) and a 1.718 mm decrease in the 95th-percentile Hausdorff distance (HD95), alongside a 56.7% reduction in computational load in giga floating-point operations (GFLOPs). Meanwhile, MPSTrans outperforms other mainstream methods (Swin UNETR, UNETR, nnU-Net, PHTrans, and 3D U-Net) on three public multiorgan (aorta, gallbladder, kidney, liver, pancreas, spleen, stomach, etc.) and multimodal (CT, PET-CT, and MRI) datasets: medical segmentation decathlon (MSD) brain tumor, multi-atlas labeling beyond the cranial vault (BCV), and the automated cardiac diagnosis challenge (ACDC), underscoring its adaptability. These results reflect the potential of MPSTrans to advance the state of the art in biomedical image analysis, offering a robust tool for enhanced diagnostic capacity.
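For reference, the HD95 metric cited above is conventionally computed from the surface voxels of the two masks; a minimal sketch assuming non-empty masks (surface extraction via erosion and the default isotropic spacing are standard conventions, not details from the paper):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def hd95(pred: np.ndarray, truth: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    # Keep only the surface voxels of each mask.
    pred_surf = pred & ~binary_erosion(pred)
    truth_surf = truth & ~binary_erosion(truth)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    d_to_truth = distance_transform_edt(~truth_surf, sampling=spacing)[pred_surf]
    d_to_pred = distance_transform_edt(~pred_surf, sampling=spacing)[truth_surf]
    # Pool both directed surface distances and take the 95th percentile.
    return float(np.percentile(np.hstack([d_to_truth, d_to_pred]), 95))
```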

DYNAFormer: Enhancing transformer segmentation with dynamic anchor mask for medical imaging.

Nguyen TC, Phung KA, Dao TTP, Nguyen-Mau TH, Nguyen-Quang T, Pham CN, Le TN, Shen J, Nguyen TV, Tran MT

pubmed · Aug 25 2025
Polyp shape is critical for diagnosing colorectal polyps and assessing cancer risk, yet there is limited data on segmenting pedunculated and sessile polyps. This paper introduces PolypDB_INS, a dataset of 4403 images containing 4918 annotated polyps, specifically sessile and pedunculated polyps. In addition, we propose DYNAFormer, a novel transformer-based model utilizing an anchor mask-guided mechanism that incorporates cross-attention, dynamic query updates, and query denoising for improved object segmentation. Treating each positional query as an anchor mask that is dynamically updated through the decoder layers enhances positional information about the object, allowing more precise segmentation of complex structures such as polyps. Extensive experiments on the PolypDB_INS dataset using standard evaluation metrics for both instance and semantic segmentation show that DYNAFormer significantly outperforms state-of-the-art methods. Ablation studies confirm the effectiveness of the proposed techniques, highlighting the model's robustness for diagnosing colorectal cancer. The source code and dataset are available at https://github.com/ntcongvn/DYNAFormer.
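The anchor-mask mechanism reads as masked cross-attention in which each query's mask is re-estimated after every decoder layer. The sketch below is our illustrative interpretation, not the authors' code; the dimensions, the 0.5 threshold, and the linear mask head are all assumptions.

```python
import torch
import torch.nn as nn

class AnchorMaskDecoderLayer(nn.Module):
    """One decoder layer: attend only inside the current anchor mask,
    then re-predict the mask for the next layer (illustrative sketch)."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mask_head = nn.Linear(dim, dim)  # query -> mask embedding

    def forward(self, queries, pixel_feats, block_mask):
        # block_mask: (B, num_queries, HW), True where attention is blocked.
        attn_mask = block_mask.repeat_interleave(self.cross_attn.num_heads, dim=0)
        queries, _ = self.cross_attn(queries, pixel_feats, pixel_feats,
                                     attn_mask=attn_mask)
        # Dynamic update: re-estimate each query's anchor mask from the
        # refined query embeddings and the per-pixel features.
        logits = torch.einsum('bqc,bpc->bqp', self.mask_head(queries), pixel_feats)
        new_block_mask = logits.sigmoid() < 0.5  # block low-confidence pixels
        return queries, new_block_mask
```

Real implementations also guard against fully blocked queries, which would make the attention softmax undefined.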

Deep learning steganography for big data security using squeeze and excitation with inception architectures.

Issac BM, Kumar SN, Zafar S, Shakil KA, Wani MA

pubmed · Aug 25 2025
With the exponential growth of big data in domains such as telemedicine and digital forensics, the secure transmission of sensitive medical information has become a critical concern. Conventional steganographic methods often fail to maintain diagnostic integrity or to remain robust against noise and transformations. In this study, we propose a novel deep learning-based steganographic framework that combines Squeeze-and-Excitation (SE) blocks, Inception modules, and residual connections to address these challenges. The encoder integrates dilated convolutions and SE attention to embed secret medical images within natural cover images, while the decoder employs residual and multi-scale Inception-based feature extraction for accurate reconstruction. Designed for deployment on the NVIDIA Jetson TX2, the model ensures real-time, low-power operation suitable for edge healthcare applications. Experimental evaluation on MRI and OCT datasets demonstrates the model's efficacy, achieving Peak Signal-to-Noise Ratio (PSNR) values of 39.02 and 38.75 and a Structural Similarity Index (SSIM) of 0.9757, confirming minimal visual distortion. This research contributes to advancing secure, high-capacity steganographic systems for practical use in privacy-sensitive environments.
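A minimal sketch of the Squeeze-and-Excitation (SE) channel-attention block the encoder relies on; the reduction ratio of 16 follows the original SE design and is an assumption here, not a detail from the paper.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial context
        self.fc = nn.Sequential(             # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # recalibrate feature channels
```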

Diffusion-Based Data Augmentation for Medical Image Segmentation

Maham Nazir, Muhammad Aqeel, Francesco Setti

arxiv preprint · Aug 25 2025
Medical image segmentation models struggle with rare abnormalities due to scarce annotated pathological data. We propose DiffAug, a novel framework that combines text-guided diffusion-based generation with automatic segmentation validation to address this challenge. Our approach uses latent diffusion models conditioned on medical text descriptions and spatial masks to synthesize abnormalities via inpainting on normal images. Generated samples undergo dynamic quality validation through a latent-space segmentation network that ensures accurate localization while enabling single-step inference. The text prompts, derived from medical literature, guide the generation of diverse abnormality types without requiring manual annotation. Our validation mechanism filters synthetic samples based on spatial accuracy, maintaining quality while operating efficiently through direct latent estimation. Evaluated on three medical imaging benchmarks (CVC-ClinicDB, Kvasir-SEG, REFUGE2), our framework achieves state-of-the-art performance with 8-10% Dice improvements over baselines and reduces false negative rates by up to 28% for challenging cases such as small polyps and flat lesions, which are critical for early detection in screening applications.
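A hedged sketch of the generate-then-validate loop the framework describes: inpaint an abnormality from a text prompt and mask, then keep the sample only if a segmentation network localizes it accurately. All function names here (inpaint_abnormality, segment_latent, dice) and the 0.8 threshold are illustrative placeholders, not the authors' API.

```python
def augment_dataset(normal_images, prompts, masks, dice_threshold=0.8):
    """Synthesize abnormal samples and filter them by spatial accuracy."""
    accepted = []
    for image, prompt, mask in zip(normal_images, prompts, masks):
        synthetic = inpaint_abnormality(image, prompt, mask)  # latent diffusion inpainting
        predicted_mask = segment_latent(synthetic)            # single-step latent segmentation
        if dice(predicted_mask, mask) >= dice_threshold:      # spatial-accuracy filter
            accepted.append((synthetic, mask))
    return accepted
```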

Evaluating the diagnostic accuracy of AI in ischemic and hemorrhagic stroke: A comprehensive meta-analysis.

Gul N, Fatima Y, Shaikh HS, Raheel M, Ali A, Hasan SU

pubmed · Aug 25 2025
Stroke poses a significant health challenge, with ischemic and hemorrhagic subtypes requiring timely and accurate diagnosis for effective management. Traditional imaging techniques like CT have limitations, particularly in early ischemic stroke detection. Recent advancements in artificial intelligence (AI) offer potential improvements in stroke diagnosis by enhancing imaging interpretation. This meta-analysis aims to evaluate the diagnostic accuracy of AI systems compared with human experts in detecting ischemic and hemorrhagic strokes. The review was conducted following PRISMA-DTA guidelines. Included studies evaluated stroke patients in emergency settings using AI-based models on CT or MRI imaging, with human radiologists as the reference standard. MEDLINE, Scopus, and Cochrane Central were searched up to January 1, 2024. The primary outcome was diagnostic accuracy, including sensitivity, specificity, and AUROC, and methodological quality was assessed using QUADAS-2. Nine studies met the inclusion criteria. The pooled analysis for ischemic stroke revealed a mean sensitivity of 86.9% (95% CI: 69.9%-95%) and specificity of 88.6% (95% CI: 77.8%-94.5%). For hemorrhagic stroke, the pooled sensitivity and specificity were 90.6% (95% CI: 86.2%-93.6%) and 93.9% (95% CI: 87.6%-97.2%), respectively. The diagnostic odds ratios indicated strong diagnostic efficacy, particularly for hemorrhagic stroke (DOR: 148.8, 95% CI: 79.9-277.2). AI-based systems exhibit high diagnostic accuracy for both ischemic and hemorrhagic strokes, closely approaching that of human radiologists. These findings underscore the potential of AI to improve diagnostic precision and expedite clinical decision-making in acute stroke settings.
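For readers re-deriving the pooled statistics, the diagnostic odds ratio comes straight from the 2x2 confusion table; a minimal sketch (the 0.5 continuity correction for zero cells is a common convention, not specified in the abstract):

```python
def diagnostic_odds_ratio(tp: int, fp: int, fn: int, tn: int,
                          correction: float = 0.5) -> float:
    """DOR = (TP/FN) / (FP/TN), i.e. (TP*TN) / (FP*FN)."""
    if 0 in (tp, fp, fn, tn):  # continuity correction for zero cells
        tp, fp, fn, tn = (x + correction for x in (tp, fp, fn, tn))
    return (tp * tn) / (fp * fn)
```

Equivalently, DOR = [sens/(1-sens)] x [spec/(1-spec)], which is how it relates to the pooled sensitivity and specificity reported above.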

Motion Management in Positron Emission Tomography/Computed Tomography and Positron Emission Tomography/Magnetic Resonance.

Guo L, Liu C, Soultanidis G

pubmed · Aug 25 2025
Motion in clinical positron emission tomography (PET) examinations degrades image quality and quantification, requiring tailored correction strategies. Recent advancements integrate external devices and/or data-driven motion tracking with image registration and motion modeling, particularly deep learning-based methods, to address complex motion scenarios. The development of total-body PET systems with long axial field-of-view enables advanced motion correction by leveraging extended coverage and continuous acquisition. These innovations enhance the accuracy of motion estimation and correction across various clinical applications, improve quantitative reliability in static and dynamic imaging, and enable more precise assessments in oncology, neurology, and cardiovascular PET studies.
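As one concrete building block of the registration-based strategies described here, a hedged sketch of rigidly aligning a gated PET frame to a reference frame with SimpleITK; the metric and optimizer settings are illustrative defaults, not a validated clinical protocol.

```python
import SimpleITK as sitk

def align_frame(reference: sitk.Image, moving: sitk.Image) -> sitk.Image:
    """Rigidly register one gated frame to a reference frame (sketch)."""
    reference = sitk.Cast(reference, sitk.sitkFloat32)
    moving = sitk.Cast(moving, sitk.sitkFloat32)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    initial = sitk.CenteredTransformInitializer(
        reference, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(initial, inPlace=False)
    transform = reg.Execute(reference, moving)
    # Resample the moving frame into the reference frame's space.
    return sitk.Resample(moving, reference, transform, sitk.sitkLinear, 0.0)
```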

Multimodal Positron Emission Tomography/Computed Tomography Radiomics Combined with a Clinical Model for Preoperative Prediction of Invasive Pulmonary Adenocarcinoma in Ground-Glass Nodules.

Wang X, Li P, Li Y, Zhang R, Duan F, Wang D

pubmed · Aug 25 2025
To develop and validate predictive models based on ¹⁸F-fluorodeoxyglucose positron emission tomography/computed tomography (¹⁸F-FDG PET/CT) radiomics and a clinical model for differentiating invasive adenocarcinoma (IAC) from non-invasive ground-glass nodules (GGNs) in early-stage lung cancer. A total of 164 patients with GGNs histologically confirmed as part of the lung adenocarcinoma spectrum (including both invasive and non-invasive subtypes) underwent preoperative ¹⁸F-FDG PET/CT and surgery. Radiomic features were extracted from PET and CT images. Models were constructed using support vector machine (SVM), random forest (RF), and extreme gradient boosting (XGBoost) classifiers. Five predictive models (CT, PET, PET/CT, Clinical, Combined) were evaluated using receiver operating characteristic (ROC) curves, decision curve analysis (DCA), and calibration curves. Statistical comparisons were performed using DeLong's test, net reclassification improvement (NRI), and integrated discrimination improvement (IDI). The Combined model, integrating PET/CT radiomic features with the clinical model, achieved the highest diagnostic performance (AUC: 0.950 in training, 0.911 in testing). It consistently showed superior IDI and NRI across both cohorts and significantly outperformed the clinical model (DeLong p = 0.027), confirming its enhanced predictive power through multimodal integration. A clinical nomogram was constructed from the final model to support individualized risk stratification. Integrating PET/CT radiomic features with a clinical model significantly enhances the preoperative prediction of GGN invasiveness and may assist in preoperative risk stratification and personalized surgical decision-making in early-stage lung adenocarcinoma.
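To illustrate the modeling step (not the authors' exact pipeline), a hedged sketch of one model variant: an SVM on combined radiomic-plus-clinical features with ROC-AUC evaluation. The random feature matrix is a placeholder for features a tool such as pyradiomics would extract.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

X = np.random.rand(164, 20)        # placeholder radiomic + clinical features
y = np.random.randint(0, 2, 164)   # 1 = invasive adenocarcinoma (IAC)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = make_pipeline(StandardScaler(), SVC(probability=True))
model.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```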