
Garcia GM, Young P, Dawood L, Elshikh M

PubMed · Sep 16 2025
This study aims to provide a comprehensive comparison of the performance and reproducibility of two commercially available artificial intelligence (AI) computer-aided triage and notification software solutions, Vendor A (Aidoc) and Vendor B (Viz.ai), for the detection of intracranial hemorrhage (ICH) on non-contrast enhanced head CT (NCHCT) scans performed within a single academic institution. The retrospective analysis was conducted on a large patient cohort drawn from multiple healthcare settings within that institution, using standardized scanning protocols. Sensitivity, specificity, and false positive and false negative rates were evaluated for both vendors; the outputs assessed were the AI-generated case-level classifications. Among 4,081 scans, 595 were positive for ICH. Vendor A demonstrated a sensitivity of 94.4%, specificity of 97.4%, PPV of 85.9%, and NPV of 99.1%. Vendor B showed a sensitivity of 59.5%, specificity of 99.0%, PPV of 90.0%, and NPV of 92.6%. Vendor A had 20 false negatives, primarily involving subdural and intraparenchymal hemorrhages, and 97 false positives, which appeared to be related to motion artifact. Vendor B had 145 false negatives, largely comprising subdural and subarachnoid hemorrhages, and 36 false positives, which appeared to be related to motion artifact and calcified or dense lesions. Eighteen cases were false negatives and 11 cases were false positives for both AI solutions. These findings provide valuable information for clinicians and healthcare institutions considering the implementation of AI software for computer-aided triage and notification in the detection of intracranial hemorrhage. The discussion covers the implications of the results, the importance of evaluating AI findings in context (especially in the absence of explainability tools), potential areas for improvement, and the relevance of standardized scanning protocols in ensuring the reliability of AI-based diagnostic tools in clinical practice. ICH = Intracranial Hemorrhage; NCHCT = Non-contrast Enhanced Head CT; AI = Artificial Intelligence; SDH = Subdural Hemorrhage; SAH = Subarachnoid Hemorrhage; IPH = Intraparenchymal Hemorrhage; IVH = Intraventricular Hemorrhage; PPV = Positive Predictive Value; NPV = Negative Predictive Value; CADt = Computer-Aided Triage; PACS = Picture Archiving and Communication System; FN = False Negative; FP = False Positive; CI = Confidence Interval.
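As context for the reported numbers, all four metrics derive from a vendor's case-level confusion matrix. A minimal sketch in Python, with purely hypothetical counts rather than the study's tallies:

```python
def triage_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Case-level metrics used to compare CADt vendors."""
    return {
        "sensitivity": tp / (tp + fn),  # recall on ICH-positive scans
        "specificity": tn / (tn + fp),  # correct negatives among ICH-negative scans
        "ppv": tp / (tp + fp),          # precision of positive notifications
        "npv": tn / (tn + fn),          # reliability of a negative result
    }

# Hypothetical counts for illustration only; substitute a vendor's
# actual confusion-matrix tallies.
print(triage_metrics(tp=90, fp=10, fn=5, tn=895))
```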

Barraclough JY, Gandomkar Z, Fletcher RA, Barbieri S, Kuo NI, Rodgers A, Douglas K, Poppe KK, Woodward M, Luxan BG, Neal B, Jorm L, Brennan P, Arnott C

PubMed · Sep 16 2025
Cardiovascular risk is underassessed in women. Many women undergo screening mammography in midlife, when the risk of cardiovascular disease rises. Mammographic features such as breast arterial calcification and tissue density are associated with cardiovascular risk. We developed and tested a deep learning algorithm for cardiovascular risk prediction based on routine mammography images. Lifepool is a cohort of women with at least one screening mammogram linked to hospitalisation and death databases. A deep learning model based on the DeepSurv architecture was developed to predict major cardiovascular events from mammography images. Model performance was compared against standard risk prediction models using the concordance index (Harrell's C-statistic). There were 49,196 women included, with a median follow-up of 8.8 years (IQR 7.7-10.6), among whom 3,392 experienced a first major cardiovascular event. The DeepSurv model using mammography features and participant age had a concordance index of 0.72 (95% CI 0.71 to 0.73), with performance similar to modern models containing age and clinical variables, including the New Zealand 'PREDICT' tool and the American Heart Association 'PREVENT' equations. A deep learning algorithm based on only mammographic features and age predicted cardiovascular risk with performance comparable to traditional cardiovascular risk equations. Risk assessments based on mammography may be a novel opportunity for improving cardiovascular risk screening in women.
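For reference, the concordance index the authors report can be computed with a short generic routine. A minimal O(n^2) sketch of Harrell's C, not the study's code:

```python
import numpy as np

def concordance_index(time, event, risk):
    """Harrell's C: fraction of comparable pairs ordered correctly by risk.

    A pair (i, j) is comparable when i has an observed event and
    time[i] < time[j]; it is concordant when risk[i] > risk[j].
    Ties in risk count as 0.5. Assumes at least one comparable pair.
    """
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant, comparable = 0.0, 0
    for i in range(len(time)):
        if not event[i]:
            continue  # censored subjects cannot anchor a comparable pair
        later = time > time[i]
        comparable += later.sum()
        concordant += (risk[i] > risk[later]).sum()
        concordant += 0.5 * (risk[i] == risk[later]).sum()
    return concordant / comparable
```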

Fan M, Zhu Z, Yu Z, Du J, Xie S, Pan X, Chen S, Li L

PubMed · Sep 16 2025
Pathological diagnosis remains the gold standard for diagnosing breast cancer and is highly accurate and sensitive, which is crucial for assessing pathological complete response (pCR) and lymph node metastasis (LNM) following neoadjuvant chemotherapy (NACT). Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that provides detailed morphological and functional insights into tumors. How these two modalities complement each other, particularly when one is unavailable, and how they can be integrated to enhance therapeutic predictions have not been fully explored. To this end, we propose a cross-modality image transformer (CMIT) model designed for feature synthesis and fusion to predict pCR and LNM in breast cancer. This model enables interaction and integration between the two modalities via a transformer's cross-attention module. A modality information transfer module is developed to produce synthetic pathological image features (sPIFs) from DCE-MRI data and synthetic DCE-MRI features (sMRIs) from pathological images. During training, the model leverages both real and synthetic imaging features to increase predictive performance. In the prediction phase, the synthetic imaging features are fused with the corresponding real imaging features to make predictions. The experimental results demonstrate that the proposed CMIT model, integrating DCE-MRI with sPIFs or histopathological images with sMRIs, outperforms the use of MRI or pathological images alone in predicting the pCR to NACT, with AUCs of 0.809 and 0.852, respectively. Similar improvements were observed in LNM prediction: the DCE-MRI model's performance improved from an AUC of 0.637 to 0.712, while the DCE-MRI-guided histopathological model achieved an AUC of 0.792. Notably, the proposed model can predict treatment response effectively via DCE-MRI, regardless of the availability of actual histopathological images.
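The cross-attention fusion the abstract describes can be illustrated with a generic PyTorch sketch. The layer and token names below are hypothetical stand-ins, not the authors' CMIT implementation:

```python
import torch
import torch.nn as nn

class CrossModalityFusion(nn.Module):
    """Sketch of one cross-attention fusion step: tokens from one modality
    (queries) attend to tokens from the other (keys/values)."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, mri_tokens, path_tokens):
        # Each MRI token is re-expressed in terms of pathology features,
        # then merged back via a residual connection.
        fused, _ = self.attn(query=mri_tokens, key=path_tokens, value=path_tokens)
        return self.norm(mri_tokens + fused)

x = torch.randn(2, 196, 256)  # e.g. DCE-MRI patch tokens (hypothetical shapes)
y = torch.randn(2, 196, 256)  # e.g. pathology patch tokens
print(CrossModalityFusion()(x, y).shape)  # torch.Size([2, 196, 256])
```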

Lesta GD, Deserno TM, Abani S, Janisch J, Hänsch A, Laue M, Winzer S, Dickinson PJ, De Decker S, Gutierrez-Quintana R, Subbotin A, Bocharova K, McLarty E, Lemke L, Wang-Leandro A, Spohn F, Volk HA, Nessler JN

PubMed · Sep 16 2025
Brain extraction is a common preprocessing step when working with intracranial medical imaging data. While several tools exist to automate the preprocessing of magnetic resonance imaging (MRI) of the human brain, none are available for canine MRIs. We present a pipeline mapping separate 2D scans to a 3D image, together with a neural network for canine brain extraction. The training dataset consisted of T1-weighted and contrast-enhanced images from 68 dogs of different breeds, covering all cranial conformations (mesaticephalic, dolichocephalic, brachycephalic) and several pathological conditions, taken at three institutions. Testing was performed on a similarly diverse group of 10 dogs with images from a fourth institution. The model achieved excellent results in terms of Dice ([Formula: see text]) and Jaccard ([Formula: see text]) metrics and generalised well across different MRI scanners, the three aforementioned skull types, and variations in head size and breed. The pipeline was effective for any combination of one to three acquisition planes (i.e., transversal, dorsal, and sagittal). Beyond the T1-weighted training datasets, the model also performed well on other MRI sequences, with Jaccard indices and median Dice scores ranging from 0.86 to 0.89 and from 0.92 to 0.94, respectively. Our approach was robust for automated brain extraction. Variations in canine anatomy and performance degradation in multi-scanner data can largely be mitigated through normalisation and augmentation techniques. Brain extraction, as a preprocessing step, can improve the accuracy of an algorithm for abnormality classification in MRI image slices.
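Both overlap metrics quoted above have simple definitions; a generic sketch (note that Jaccard and Dice are related by J = D / (2 - D)):

```python
import numpy as np

def dice_jaccard(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """Overlap metrics for binary brain masks (pred, truth: boolean arrays)."""
    inter = np.logical_and(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    jaccard = inter / np.logical_or(pred, truth).sum()
    return float(dice), float(jaccard)
```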

Samer Al-Hamadani

arXiv preprint · Sep 16 2025
The rapid advancement of artificial intelligence (AI) in healthcare imaging has revolutionized diagnostic medicine and clinical decision-making processes. This work presents an intelligent multimodal framework for medical image analysis that leverages Vision-Language Models (VLMs) in healthcare diagnostics. The framework integrates Google Gemini 2.5 Flash for automated tumor detection and clinical report generation across multiple imaging modalities, including CT, MRI, X-ray, and ultrasound. The system combines visual feature extraction with natural language processing to enable contextual image interpretation, incorporating coordinate verification mechanisms and probabilistic Gaussian modeling for anomaly distribution. Multi-layered visualization techniques generate detailed medical illustrations, overlay comparisons, and statistical representations to enhance clinical confidence, with location measurement achieving an average deviation of 80 pixels. Result processing utilizes precise prompt engineering and textual analysis to extract structured clinical information while maintaining interpretability. Experimental evaluations demonstrated high performance in anomaly detection across multiple modalities. The system features a user-friendly Gradio interface for clinical workflow integration and demonstrates zero-shot learning capabilities to reduce dependence on large datasets. This framework represents a significant advancement in automated diagnostic support and radiological workflow efficiency, though clinical validation and multi-center evaluation are necessary prior to widespread adoption.
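The "probabilistic Gaussian modeling for anomaly distribution" might, for illustration, be rendered as a heatmap centred on a model-reported coordinate. A hypothetical sketch; the function name and parameters are assumptions, not from the paper:

```python
import numpy as np

def gaussian_anomaly_map(shape, center, sigma: float = 20.0) -> np.ndarray:
    """Hypothetical 2D Gaussian 'probability' heatmap centred on a reported
    anomaly coordinate, normalised to [0, 1] for overlay on the source image."""
    yy, xx = np.mgrid[: shape[0], : shape[1]]
    cy, cx = center
    d2 = (yy - cy) ** 2 + (xx - cx) ** 2
    heat = np.exp(-d2 / (2.0 * sigma ** 2))
    return heat / heat.max()

overlay = gaussian_anomaly_map((512, 512), center=(300, 220))
```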

Haodong Li, Shuo Han, Haiyang Mao, Yu Shi, Changsheng Fang, Jianjia Zhang, Weiwen Wu, Hengyong Yu

arXiv preprint · Sep 16 2025
Sparse-View CT (SVCT) reconstruction enhances temporal resolution and reduces radiation dose, yet its clinical use is hindered by artifacts due to view reduction and domain shifts from scanner, protocol, or anatomical variations, leading to performance degradation in out-of-distribution (OOD) scenarios. In this work, we propose a Cross-Distribution Diffusion Priors-Driven Iterative Reconstruction (CDPIR) framework to tackle the OOD problem in SVCT. CDPIR integrates cross-distribution diffusion priors, derived from a Scalable Interpolant Transformer (SiT), with model-based iterative reconstruction methods. Specifically, we train a SiT backbone, an extension of the Diffusion Transformer (DiT) architecture, to establish a unified stochastic interpolant framework, leveraging Classifier-Free Guidance (CFG) across multiple datasets. By randomly dropping the conditioning with a null embedding during training, the model learns both domain-specific and domain-invariant priors, enhancing generalizability. During sampling, the globally sensitive transformer-based diffusion model exploits the cross-distribution prior within the unified stochastic interpolant framework, enabling flexible and stable control over multi-distribution-to-noise interpolation paths and decoupled sampling strategies, thereby improving adaptation to OOD reconstruction. By alternating between data fidelity and sampling updates, our model achieves state-of-the-art performance with superior detail preservation in SVCT reconstructions. Extensive experiments demonstrate that CDPIR significantly outperforms existing approaches, particularly under OOD conditions, highlighting its robustness and potential clinical value in challenging imaging scenarios.
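The classifier-free guidance mechanism mentioned above is a standard recipe; a generic sketch of the sampling-time step, not the CDPIR code:

```python
def cfg_predict(model, x_t, t, cond, null_cond, guidance_scale: float = 2.0):
    """Classifier-free guidance at sampling time: query the same network
    with and without conditioning, then extrapolate toward the conditional
    prediction. Works because conditioning was randomly dropped (replaced
    by a null embedding) during training."""
    pred_uncond = model(x_t, t, null_cond)
    pred_cond = model(x_t, t, cond)
    return pred_uncond + guidance_scale * (pred_cond - pred_uncond)
```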

Maksim Penkin, Andrey Krylov

arXiv preprint · Sep 16 2025
Medical image enhancement and segmentation are critical yet challenging tasks in modern clinical practice, constrained by artifacts and complex anatomical variations. Traditional deep learning approaches often rely on complex architectures with limited interpretability. While Kolmogorov-Arnold networks offer interpretable solutions, their reliance on flattened feature representations fundamentally disrupts the intrinsic spatial structure of imaging data. To address this issue, we propose the Functional Kolmogorov-Arnold Network (FunKAN), a novel interpretable neural framework designed specifically for image processing, which formally generalizes the Kolmogorov-Arnold representation theorem to functional spaces and learns inner functions using Fourier decomposition over a basis of Hermite functions. We explore FunKAN on several medical image processing tasks, including Gibbs ringing suppression in magnetic resonance images, benchmarking on the IXI dataset. We also propose U-FunKAN, a state-of-the-art binary medical segmentation model, with benchmarks on three medical datasets: BUSI (ultrasound images), GlaS (histological structures) and CVC-ClinicDB (colonoscopy videos), detecting breast cancer, glands, and polyps, respectively. Experiments on these diverse datasets demonstrate that our approach outperforms other KAN-based backbones in both medical image enhancement (PSNR, TV) and segmentation (IoU, F1). Our work bridges the gap between theoretical function approximation and medical image analysis, offering a robust, interpretable solution for clinical applications.
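The Hermite basis referenced above is standard; a small sketch of orthonormal Hermite functions and a finite expansion for a 1D inner function. This is illustrative, assuming a simple coefficient parameterisation, and is not the FunKAN implementation:

```python
import numpy as np
from math import factorial, pi, sqrt
from scipy.special import eval_hermite

def hermite_function(n: int, x: np.ndarray) -> np.ndarray:
    """Orthonormal Hermite function:
    psi_n(x) = H_n(x) * exp(-x^2 / 2) / sqrt(2^n * n! * sqrt(pi))."""
    norm = sqrt(2.0 ** n * factorial(n) * sqrt(pi))
    return eval_hermite(n, x) * np.exp(-x ** 2 / 2.0) / norm

def inner_function(x: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
    """A 1D inner function as a finite expansion in Hermite functions;
    in a KAN-style layer the coeffs would be trained parameters."""
    return sum(c * hermite_function(n, x) for n, c in enumerate(coeffs))

x = np.linspace(-4, 4, 201)
y = inner_function(x, np.array([0.5, -0.2, 0.1]))  # toy coefficients
```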

Prahlad G Menon

arXiv preprint · Sep 16 2025
Fontan palliation for univentricular congenital heart disease progresses to hemodynamic failure with complex flow patterns poorly characterized by conventional 2D imaging. Current assessment relies on fluoroscopic angiography, which provides limited 3D geometric information essential for computational fluid dynamics (CFD) analysis and surgical planning. A multi-step AI pipeline was developed utilizing Google's Gemini 2.5 Flash (2.5B parameters) for systematic, iterative processing of fluoroscopic angiograms through a transformer-based neural architecture. The pipeline encompasses medical image preprocessing, vascular segmentation, contrast enhancement, artifact removal, and virtual hemodynamic flow visualization within 2D projections. Final views were processed through Tencent's Hunyuan3D-2mini (384M parameters) for stereolithography file generation. The pipeline successfully generated geometrically optimized 2D projections from single-view angiograms after 16 processing steps using a custom web interface. Initial iterations contained hallucinated vascular features, requiring iterative refinement to achieve anatomically faithful representations. Final projections demonstrated accurate preservation of complex Fontan geometry with enhanced contrast suitable for 3D conversion. AI-generated virtual flow visualization identified stagnation zones in central connections and flow patterns in branch arteries. Complete processing required under 15 minutes, with second-level API response times. This approach demonstrates the clinical feasibility of generating CFD-suitable geometries from routine angiographic data, enabling 3D generation and rapid virtual flow visualization for cursory insights prior to full CFD simulation. While requiring refinement cycles for accuracy, this establishes a foundation for democratizing advanced geometric and hemodynamic analysis using readily available imaging data.
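The iterative, gated refinement loop described above can be sketched abstractly. All names below (`vlm_edit`, `passes_anatomy_check`) are hypothetical stand-ins, not a real Gemini or Hunyuan3D API:

```python
from typing import Callable

def refine_angiogram(
    image: bytes,
    steps: list[str],
    vlm_edit: Callable[[bytes, str], bytes],
    passes_anatomy_check: Callable[[bytes], bool],
) -> bytes:
    """Prompt-driven multi-step refinement with a rejection gate, mirroring
    the loop the authors describe in which hallucinated vascular features
    are caught and the output discarded rather than propagated."""
    for prompt in steps:  # e.g. "segment vasculature", "remove artifacts"
        candidate = vlm_edit(image, prompt)
        if passes_anatomy_check(candidate):  # human-in-the-loop review gate
            image = candidate                # accept anatomically faithful output
        # otherwise keep the prior state and continue with the next step
    return image
```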

Mirzaei G, Gupta A, Adeli H

PubMed · Sep 16 2025
Medical imaging plays a crucial role in the accurate diagnosis and prognosis of various medical conditions, with each modality offering unique and complementary insights into the body's structure and function. However, no single imaging technique can capture the full spectrum of necessary information. Data fusion has emerged as a powerful tool to integrate information from different perspectives, including multiple modalities, views, temporal sequences, and spatial scales. By combining data, fusion techniques provide a more comprehensive understanding, significantly enhancing the precision and reliability of clinical analyses. This paper presents an overview of data fusion approaches, covering multi-view, multi-modal, and multi-scale strategies, across imaging modalities such as MRI, CT, PET, SPECT, EEG, and MEG, with a particular emphasis on applications in neurological disorders. Furthermore, we highlight the latest advancements in data fusion methods and key studies published since 2016, illustrating the progress and growing impact of this interdisciplinary field.
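As a concrete toy example of one strategy such a review covers, feature-level (early) fusion can be as simple as normalising and concatenating per-modality feature vectors before a downstream model. An illustrative sketch, not from the paper:

```python
import numpy as np

def early_fusion(features: dict[str, np.ndarray]) -> np.ndarray:
    """Z-score each modality's feature vector, then concatenate into a
    single joint representation for a downstream classifier."""
    normed = [(v - v.mean()) / (v.std() + 1e-8) for v in features.values()]
    return np.concatenate(normed)

fused = early_fusion({"mri": np.random.rand(128), "pet": np.random.rand(64)})
print(fused.shape)  # (192,)
```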

Li S, Chen P, Zhang J, Wang B

PubMed · Sep 16 2025
Accurate segmentation of lesion areas from Lugol's iodine staining images is crucial for screening pre-cancerous cervical lesions. However, in underdeveloped regions lacking skilled clinicians, this method may lead to misdiagnosis and missed diagnoses. In recent years, deep learning methods have been widely applied to assist in medical image segmentation. This study aims to improve the accuracy of cervical cancer lesion segmentation by addressing the limitations of Convolutional Neural Networks (CNNs) and attention mechanisms in capturing global features and refining upsampling details. This paper presents a Multi-Scale Bidirectional Lesion Enhancement Network, named MBLEformer, which employs a Swin Transformer encoder to extract image features at multiple stages and utilizes a multi-scale attention mechanism to capture semantic features from different perspectives. Additionally, a bidirectional lesion enhancement upsampling strategy is introduced to refine the edge details of lesion areas. Experimental results demonstrate that the proposed model exhibits superior segmentation performance on a proprietary cervical cancer colposcopic dataset, outperforming other medical image segmentation methods with a mean Intersection over Union (mIoU) of 82.5%, an accuracy of 94.9%, and a specificity of 83.6%. MBLEformer significantly improves the accuracy of lesion segmentation in iodine-stained cervical cancer images, with the potential to enhance the efficiency and accuracy of pre-cancerous lesion diagnosis and help address the issue of imbalanced medical resources.
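The headline mIoU metric is computed per class and averaged; a generic sketch, not the MBLEformer evaluation code:

```python
import numpy as np

def mean_iou(pred: np.ndarray, truth: np.ndarray, n_classes: int) -> float:
    """Mean Intersection over Union across classes for label maps of
    integer class indices with identical shapes."""
    ious = []
    for c in range(n_classes):
        p, t = pred == c, truth == c
        union = np.logical_or(p, t).sum()
        if union:  # skip classes absent from both prediction and ground truth
            ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```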