
Deep-learning reconstruction for noise reduction in respiratory-triggered single-shot phase sensitive inversion recovery myocardial delayed enhancement cardiac magnetic resonance.

Tang M, Wang H, Wang S, Wali E, Gutbrod J, Singh A, Landeras L, Janich MA, Mor-Avi V, Patel AR, Patel H

PubMed · Jul 14 2025
Phase-sensitive inversion recovery late gadolinium enhancement (LGE) improves tissue contrast; however, it is challenging to combine with a free-breathing acquisition. Deep-learning (DL) algorithms have growing applications in cardiac magnetic resonance imaging (CMR) to improve image quality. We compared a novel combination of a free-breathing single-shot phase-sensitive LGE sequence with respiratory triggering (FB-PS) and a DL noise-reduction reconstruction algorithm against a conventional segmented phase-sensitive LGE acquired during breath holding (BH-PS). 61 adult subjects (29 male, age 51 ± 15 years) underwent clinical CMR (1.5 T) with the FB-PS sequence and the conventional BH-PS sequence. DL noise reduction was incorporated into the image reconstruction pipeline. Qualitative metrics included image quality, artifact severity, and diagnostic confidence. Quantitative metrics included septal-blood border sharpness, LGE sharpness, blood-myocardium apparent contrast-to-noise ratio (CNR), LGE-myocardium CNR, LGE apparent signal-to-noise ratio (SNR), and LGE burden. The sequences were compared via paired t-tests. 27 subjects had positive LGE. Average time to acquire a slice was 4-12 s for FB-PS versus ~32-38 s for BH-PS (including breath-hold instructions and rest time between breath holds). FB-PS with medium DL noise reduction had better image quality (FB-PS 3.0 ± 0.7 vs. BH-PS 1.5 ± 0.6, p < 0.0001), less artifact (4.8 ± 0.5 vs. 3.4 ± 1.1, p < 0.0001), and higher diagnostic confidence (4.0 ± 0.6 vs. 2.6 ± 0.8, p < 0.0001). Septum sharpness in FB-PS with DL reconstruction versus BH-PS was not significantly different, and there was no significant difference in LGE sharpness or LGE burden. FB-PS had superior blood-myocardium CNR (17.2 ± 6.9 vs. 16.4 ± 6.0, p = 0.040), LGE-myocardium CNR (12.1 ± 7.2 vs. 10.4 ± 6.6, p = 0.054), and LGE SNR (59.8 ± 26.8 vs. 31.2 ± 24.1, p < 0.001); these metrics further improved with DL noise reduction. An FB-PS sequence shortens scan time more than 5-fold and reduces motion artifact. Combined with a DL noise-reduction algorithm, FB-PS provides better or similar image quality compared with BH-PS. This is a promising solution for patients who cannot hold their breath.
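The sequence comparison above hinges on paired t-tests over per-subject metrics; a minimal sketch of that kind of comparison is below. The metric values are invented placeholders, not the study data.

```python
# Illustrative paired comparison of per-subject image-quality metrics between
# two sequences; the numbers are simulated, not the study measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 61

# Hypothetical per-subject apparent SNR for each sequence.
snr_fb_ps = rng.normal(60, 25, n_subjects)   # free-breathing + DL reconstruction
snr_bh_ps = rng.normal(31, 24, n_subjects)   # breath-hold conventional

# Paired t-test: each subject contributes one measurement per sequence.
t_stat, p_value = stats.ttest_rel(snr_fb_ps, snr_bh_ps)
print(f"mean difference = {np.mean(snr_fb_ps - snr_bh_ps):.1f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```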

Comparing large language models and text embedding models for automated classification of textual, semantic, and critical changes in radiology reports.

Lindholz M, Burdenski A, Ruppel R, Schulze-Weddige S, Baumgärtner GL, Schobert I, Haack AM, Eminovic S, Milnik A, Hamm CA, Frisch A, Penzkofer T

PubMed · Jul 14 2025
Radiology reports can change during workflows, especially when residents draft preliminary versions that attending physicians finalize. We explored how large language models (LLMs) and embedding techniques can categorize these changes into textual, semantic, or clinically actionable types. We evaluated 400 adult CT reports drafted by residents against finalized versions by attending physicians. Changes were rated on a five-point scale from no changes to critical ones. We examined open-source LLMs alongside traditional metrics such as normalized word difference, Levenshtein and Jaccard similarity, and text-embedding similarity. Model performance was assessed using quadratic weighted Cohen's kappa (κ), (balanced) accuracy, F1, precision, and recall. Inter-rater reliability among evaluators was excellent (κ = 0.990). Of the reports analyzed, 1.3% contained critical changes. The tested methods showed significant performance differences (P < 0.001). The Qwen3-235B-A22B model, using a zero-shot prompt, aligned most closely with human assessments of changes in clinical reports, achieving a κ of 0.822 (SD 0.031). The best conventional metric, word difference, had a κ of 0.732 (SD 0.048); the difference between the two was statistically significant in unadjusted post-hoc tests (P = 0.038) but lost significance after adjusting for multiple testing (P = 0.064). Embedding models underperformed compared with LLMs and classical methods, with statistically significant differences in most cases. Large language models like Qwen3-235B-A22B demonstrated moderate to strong alignment with expert evaluations of the clinical significance of changes in radiology reports. LLMs outperformed embedding methods and traditional string and word approaches, achieving statistical significance in most instances. This demonstrates their potential as tools to support peer review.
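A hedged sketch of the conventional report-similarity metrics named above (normalized word difference, Jaccard similarity, Levenshtein distance, and cosine similarity of embeddings); the example reports are invented and no specific embedding model is assumed.

```python
# Simple report-change metrics; the embedding similarity works on vectors from
# any encoder and does not assume the specific models evaluated in the paper.
import numpy as np

def levenshtein(a, b) -> int:
    """Edit distance between two sequences (character strings or word lists)."""
    prev = list(range(len(b) + 1))
    for i, xa in enumerate(a, 1):
        curr = [i]
        for j, xb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (xa != xb)))   # substitution
        prev = curr
    return prev[-1]

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two reports."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def normalized_word_difference(a: str, b: str) -> float:
    """Word-level edit distance normalized by the longer report length."""
    wa, wb = a.split(), b.split()
    return levenshtein(wa, wb) / max(len(wa), len(wb), 1)

def embedding_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two report embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

draft = "No acute intracranial hemorrhage or mass effect."
final = "Small acute subdural hemorrhage along the left convexity with mild mass effect."
print(jaccard(draft, final), normalized_word_difference(draft, final), levenshtein(draft, final))
```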

A Brain Tumor Segmentation Method Based on CLIP and 3D U-Net with Cross-Modal Semantic Guidance and Multi-Level Feature Fusion

Mingda Zhang

arXiv preprint · Jul 14 2025
Precise segmentation of brain tumors from magnetic resonance imaging (MRI) is essential for neuro-oncology diagnosis and treatment planning. Despite advances in deep learning methods, automatic segmentation remains challenging due to tumor morphological heterogeneity and complex three-dimensional spatial relationships. Current techniques primarily rely on visual features extracted from MRI sequences while underutilizing semantic knowledge embedded in medical reports. This research presents a multi-level fusion architecture that integrates pixel-level, feature-level, and semantic-level information, facilitating comprehensive processing from low-level data to high-level concepts. The semantic-level fusion pathway combines the semantic understanding capabilities of Contrastive Language-Image Pre-training (CLIP) models with the spatial feature extraction advantages of 3D U-Net through three mechanisms: 3D-2D semantic bridging, cross-modal semantic guidance, and semantic-based attention mechanisms. Experimental validation on the BraTS 2020 dataset demonstrates that the proposed model achieves an overall Dice coefficient of 0.8567, representing a 4.8% improvement compared to traditional 3D U-Net, with a 7.3% Dice coefficient increase in the clinically important enhancing tumor (ET) region.
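The headline metric is the Dice coefficient; a minimal sketch of how it is commonly computed for (soft) binary segmentation masks is below. This is illustrative, not the paper's implementation.

```python
# Soft Dice coefficient for binary masks / probability maps, as commonly used to
# evaluate and train 3D segmentation models; shapes and data are placeholders.
import torch

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """pred, target: tensors of shape (batch, D, H, W) with values in [0, 1]."""
    pred = pred.flatten(1)
    target = target.flatten(1)
    intersection = (pred * target).sum(dim=1)
    return ((2 * intersection + eps) / (pred.sum(dim=1) + target.sum(dim=1) + eps)).mean()

# A Dice loss for training is simply 1 - dice_coefficient(pred, target).
pred = torch.rand(2, 64, 96, 96)                    # e.g. sigmoid outputs of a 3D U-Net
target = (torch.rand(2, 64, 96, 96) > 0.5).float()  # ground-truth mask
print(dice_coefficient(pred, target))
```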

The Potential of ChatGPT as an Aiding Tool for the Neuroradiologist

Nikola, S., Paz, D.

medRxiv preprint · Jul 14 2025
Purpose: This study aims to explore whether ChatGPT can serve as an assistive tool for neuroradiologists in establishing a reasonable differential diagnosis for central nervous system tumors based on MRI image characteristics. Methods: This retrospective study included 50 patients aged 18-90 who underwent imaging and surgery at the Western Galilee Medical Center. ChatGPT was provided with demographic and radiological information for the patients to generate differential diagnoses. We compared ChatGPT's performance to that of an experienced neuroradiologist, using pathological reports as the gold standard. Quantitative data were described using means and standard deviations, medians, and ranges. Qualitative data were described using frequencies and percentages. The level of agreement between examiners (neuroradiologist versus ChatGPT) was assessed using the Fleiss kappa coefficient. P values below 0.05 were considered statistically significant. Statistical analysis was performed using IBM SPSS Statistics, version 27. Results: While ChatGPT demonstrated good performance, particularly in identifying common tumors such as glioblastoma and meningioma, its overall accuracy (48%) was lower than that of the neuroradiologist (70%). The AI tool showed moderate agreement with the neuroradiologist (kappa = 0.445) and with pathology results (kappa = 0.419). ChatGPT's performance varied across tumor types, performing better with common tumors but struggling with rarer ones. Conclusion: This study suggests that ChatGPT has the potential to serve as an assistive tool in neuroradiology for establishing a reasonable differential diagnosis for central nervous system tumors based on MRI image characteristics. However, its limitations and potential risks must be considered, and it should therefore be used with caution.
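The agreement analysis relies on the Fleiss kappa statistic; a small sketch of how such a rating table could be scored with statsmodels is below. The ratings are invented examples, not the study data.

```python
# Fleiss' kappa over categorical diagnoses from two raters
# (neuroradiologist vs. ChatGPT); the ratings are invented.
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Category codes: 0 = glioblastoma, 1 = meningioma, 2 = metastasis, 3 = other.
# One row per case, one column per rater.
ratings = [
    [0, 0],
    [1, 1],
    [2, 0],
    [3, 3],
    [1, 1],
]
table, _ = aggregate_raters(ratings)          # case-by-category count table
print(fleiss_kappa(table, method="fleiss"))   # agreement beyond chance
```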

Explainable AI for Precision Oncology: A Task-Specific Approach Using Imaging, Multi-omics, and Clinical Data

Park, Y., Park, S., Bae, E.

medRxiv preprint · Jul 14 2025
Despite continued advances in oncology, cancer remains a leading cause of global mortality, highlighting the need for diagnostic and prognostic tools that are both accurate and interpretable. Unimodal approaches often fail to capture the biological and clinical complexity of tumors. In this study, we present a suite of task-specific AI models that leverage CT imaging, multi-omics profiles, and structured clinical data to address distinct challenges in segmentation, classification, and prognosis. We developed three independent models across large public datasets. Task 1 applied a 3D U-Net to segment pancreatic tumors from CT scans, achieving a Dice Similarity Coefficient (DSC) of 0.7062. Task 2 employed a hierarchical ensemble of omics-based classifiers to distinguish tumor from normal tissue and classify six major cancer types with 98.67% accuracy. Task 3 benchmarked classical machine learning models on clinical data for prognosis prediction across three cancers (LIHC, KIRC, STAD), achieving strong performance (e.g., C-index of 0.820 in KIRC, AUC of 0.978 in LIHC). Across all tasks, explainable AI methods such as SHAP and attention-based visualization enabled transparent interpretation of model outputs. These results demonstrate the value of tailored, modality-aware models and underscore the clinical potential of such AI systems for precision oncology. Technical Foundations:
- Segmentation (Task 1): A custom 3D U-Net was trained using the Task07_Pancreas dataset from the Medical Segmentation Decathlon (MSD). CT images were preprocessed with MONAI-based pipelines, resampled to (64, 96, 96) voxels, and intensity-windowed to HU ranges of -100 to 240.
- Classification (Task 2): Multi-omics data from TCGA (gene expression, methylation, miRNA, CNV, and mutation profiles) were log-transformed and normalized. Five modality-specific LightGBM classifiers generated meta-features for a late-fusion ensemble. Stratified 5-fold cross-validation was used for evaluation.
- Prognosis (Task 3): Clinical variables from TCGA were curated and imputed (median/mode), with high-missing-rate columns removed. Survival models (e.g., Cox-PH, Random Forest, XGBoost) were trained with early stopping. No omics or imaging data were used in this task.
- Interpretability: SHAP values were computed for all tree-based models, and attention-based overlays were used in imaging tasks to visualize salient regions.
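For the tree-based classifiers in Task 2, SHAP values provide the interpretability layer; a hedged sketch of that step on synthetic omics-like features is below (it stands in for one modality-specific model and is not the TCGA pipeline itself).

```python
# SHAP explanation of a LightGBM classifier on synthetic "omics-like" features;
# the data and hyperparameters are illustrative assumptions.
import numpy as np
import lightgbm as lgb
import shap

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 20))                       # e.g. expression-like features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=500) > 0).astype(int)

model = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X, y)

explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
sv = sv[1] if isinstance(sv, list) else sv           # positive class, depending on shap version
print(np.abs(sv).mean(axis=0)[:5])                   # mean |SHAP| for the first 5 features
```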

A Multi-Modal Deep Learning Framework for Predicting PSA Progression-Free Survival in Metastatic Prostate Cancer Using PSMA PET/CT Imaging

Ghaderi, H., Shen, C., Issa, W., Pomper, M. G., Oz, O. K., Zhang, T., Wang, J., Yang, D. X.

medRxiv preprint · Jul 14 2025
PSMA PET/CT imaging has been increasingly utilized in the management of patients with metastatic prostate cancer (mPCa). Imaging biomarkers derived from PSMA PET may provide improved prognostication and prediction of treatment response for mPCa patients. This study investigates a novel deep learning-derived imaging biomarker framework for outcome prediction using multi-modal PSMA PET/CT and clinical features. A single-institution cohort of 99 mPCa patients with 396 lesions was evaluated. Imaging features were extracted from cropped lesion areas and combined with clinical variables including body mass index, ECOG performance status, prostate-specific antigen (PSA) level, Gleason score, and treatments received. The PSA progression-free survival (PFS) model was trained using a ResNet architecture with a Cox proportional hazards loss function and five-fold cross-validation. Performance was assessed using the concordance index (C-index) and Kaplan-Meier survival analysis. Among the evaluated model architectures, the ResNet-18 backbone offered the best performance. The multi-modal deep learning framework achieved a 5-fold cross-validation C-index ranging from 0.75 to 0.94, outperforming models incorporating imaging only (0.70-0.89) and clinical features only (0.53-0.65). Kaplan-Meier survival analysis performed on the deep learning-derived predictions demonstrated clear risk stratification, with a median PSA PFS of 19.7 months in the high-risk group and 26 months in the low-risk group (P < 0.001). A deep learning-derived imaging biomarker based on PSMA PET/CT can effectively predict PSA PFS for mPCa patients. Further clinical validation in prospective cohorts is warranted.
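The survival model pairs a network risk score with a Cox proportional-hazards loss; a minimal sketch of that loss (Breslow-style negative partial log-likelihood, ignoring tie handling) is below. It is a simplified stand-in, not the authors' training code.

```python
# Negative Cox partial log-likelihood for a deep survival model; simplified
# (no tie correction), with synthetic inputs in place of network outputs.
import torch

def cox_ph_loss(risk: torch.Tensor, time: torch.Tensor, event: torch.Tensor) -> torch.Tensor:
    """risk: (N,) predicted log-risk scores; time: (N,) follow-up times;
    event: (N,) 1 if progression observed, 0 if censored."""
    order = torch.argsort(time, descending=True)        # longest follow-up first
    risk, event = risk[order], event[order]
    # log of the cumulative sum of exp(risk) over each subject's risk set
    log_cum_hazard = torch.logcumsumexp(risk, dim=0)
    partial_ll = (risk - log_cum_hazard) * event
    return -partial_ll.sum() / event.sum().clamp(min=1)

risk = torch.randn(8, requires_grad=True)               # placeholder for ResNet outputs
time = torch.rand(8) * 30                                # months of follow-up
event = torch.randint(0, 2, (8,)).float()                # progression indicator
loss = cox_ph_loss(risk, time, event)
loss.backward()
```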

Early breast cancer detection via infrared thermography using a CNN enhanced with particle swarm optimization.

Alzahrani RM, Sikkandar MY, Begum SS, Babetat AFS, Alhashim M, Alduraywish A, Prakash NB, Ng EYK

PubMed · Jul 13 2025
Breast cancer remains the most prevalent cause of cancer-related mortality among women worldwide, with an estimated incidence exceeding 500,000 new cases annually. Timely diagnosis is vital for enhancing therapeutic outcomes and increasing survival probabilities. Although conventional diagnostic tools such as mammography are widely used and generally effective, they are often invasive, costly, and exhibit reduced efficacy in patients with dense breast tissue. Infrared thermography, by contrast, offers a non-invasive and economical alternative; however, its clinical adoption has been limited, largely due to difficulties in accurate thermal image interpretation and the suboptimal tuning of machine learning algorithms. To overcome these limitations, this study proposes an automated classification framework that employs convolutional neural networks (CNNs) for distinguishing between malignant and benign thermographic breast images. An Enhanced Particle Swarm Optimization (EPSO) algorithm is integrated to automatically fine-tune CNN hyperparameters, thereby minimizing manual effort and enhancing computational efficiency. The methodology also incorporates advanced image preprocessing techniques, including Mamdani fuzzy logic-based edge detection, Contrast-Limited Adaptive Histogram Equalization (CLAHE) for contrast enhancement, and median filtering for noise suppression, to bolster classification performance. The proposed model achieves a superior classification accuracy of 98.8%, significantly outperforming conventional CNN implementations in terms of both computational speed and predictive accuracy. These findings suggest that the developed system holds substantial potential for early, reliable, and cost-effective breast cancer screening in real-world clinical environments.
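Two of the preprocessing steps named above (CLAHE and median filtering) map directly onto standard OpenCV calls; a hedged sketch is below, with the Mamdani fuzzy edge-detection stage omitted and the file paths hypothetical.

```python
# Thermogram preprocessing sketch: CLAHE for contrast enhancement, median filter
# for noise suppression. The fuzzy-logic edge-detection step from the paper is
# not reproduced here; file names are hypothetical.
import cv2

img = cv2.imread("thermogram.png", cv2.IMREAD_GRAYSCALE)

# Contrast-Limited Adaptive Histogram Equalization
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
img_eq = clahe.apply(img)

# Median filtering suppresses salt-and-pepper style sensor noise.
img_clean = cv2.medianBlur(img_eq, 5)

cv2.imwrite("thermogram_preprocessed.png", img_clean)
```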

AI-Enhanced Pediatric Pneumonia Detection: A CNN-Based Approach Using Data Augmentation and Generative Adversarial Networks (GANs)

Abdul Manaf, Nimra Mughal

arXiv preprint · Jul 13 2025
Pneumonia is a leading cause of mortality in children under five, requiring accurate chest X-ray diagnosis. This study presents a machine learning-based Pediatric Chest Pneumonia Classification System to assist healthcare professionals in diagnosing pneumonia from chest X-ray images. The CNN-based model was trained on 5,863 labeled chest X-ray images from children aged 0-5 years from the Guangzhou Women and Children's Medical Center. To address limited data, we applied augmentation techniques (rotation, zooming, shear, horizontal flipping) and employed GANs to generate synthetic images to counter class imbalance. The system achieved optimal performance using combined original, augmented, and GAN-generated data, evaluated through accuracy and F1 score metrics. The final model was deployed via a Flask web application, enabling real-time classification with probability estimates. Results demonstrate the potential of deep learning and GANs in improving diagnostic accuracy and efficiency for pediatric pneumonia classification, particularly valuable in resource-limited clinical settings. Code: https://github.com/AbdulManaf12/Pediatric-Chest-Pneumonia-Classification
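The classical augmentation settings named above (rotation, zoom, shear, horizontal flip) correspond to a standard Keras data generator; a sketch under the assumption of a directory of labeled chest X-rays is below (paths and parameter values are illustrative, not the authors' configuration).

```python
# Classical augmentation for chest X-rays, matching the transforms listed in the
# abstract; directory layout and numeric settings are assumptions.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=15,        # small rotations
    zoom_range=0.1,           # zooming
    shear_range=0.1,          # shear
    horizontal_flip=True,     # horizontal flipping
)

train_flow = train_gen.flow_from_directory(
    "chest_xray/train",       # hypothetical path with NORMAL/ and PNEUMONIA/ subfolders
    target_size=(224, 224),
    color_mode="grayscale",
    class_mode="binary",
    batch_size=32,
)
```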

Brain Stroke Detection and Classification Using CT Imaging with Transformer Models and Explainable AI

Shomukh Qari, Maha A. Thafar

arXiv preprint · Jul 13 2025
Stroke is one of the leading causes of death globally, making early and accurate diagnosis essential for improving patient outcomes, particularly in emergency settings where timely intervention is critical. CT scans are the key imaging modality because of their speed, accessibility, and cost-effectiveness. This study proposed an artificial intelligence framework for multiclass stroke classification (ischemic, hemorrhagic, and no stroke) using CT scan images from a dataset provided by the Republic of Turkey's Ministry of Health. The proposed method adopted MaxViT, a state-of-the-art Vision Transformer, as the primary deep learning model for image-based stroke classification, with additional transformer variants (vision transformer, transformer-in-transformer, and ConvNext). To enhance model generalization and address class imbalance, we applied data augmentation techniques, including synthetic image generation. The MaxViT model trained with augmentation achieved the best performance, reaching an accuracy and F1-score of 98.00%, outperforming all other evaluated models and the baseline methods. The primary goal of this study was to distinguish between stroke types with high accuracy while addressing crucial issues of transparency and trust in artificial intelligence models. To achieve this, Explainable Artificial Intelligence (XAI) was integrated into the framework, particularly Grad-CAM++. It provides visual explanations of the model's decisions by highlighting relevant stroke regions in the CT scans and establishing an accurate, interpretable, and clinically applicable solution for early stroke detection. This research contributed to the development of a trustworthy AI-assisted diagnostic tool for stroke, facilitating its integration into clinical practice and enhancing access to timely and optimal stroke diagnosis in emergency departments, thereby saving more lives.
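The explanations come from Grad-CAM++; the sketch below shows plain Grad-CAM (its simpler precursor, an intentional simplification) on a generic CNN backbone via forward/backward hooks. The stand-in backbone and input are assumptions; the paper's MaxViT-specific layer choice is not reproduced.

```python
# Plain Grad-CAM on a stand-in CNN backbone; illustrative only, not the paper's
# MaxViT + Grad-CAM++ pipeline.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
target_layer = model.layer4                 # last conv block of the stand-in model

feats, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)             # placeholder for a preprocessed CT slice
logits = model(x)
logits[0, logits.argmax()].backward()       # gradient of the predicted class score

weights = grads["a"].mean(dim=(2, 3), keepdim=True)        # global-average gradients
cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]
print(cam.shape)                            # heat map to overlay on the input slice
```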

Central Obesity-related Brain Alterations Predict Cognitive Impairments in First Episode of Psychosis.

Kolenič M, McWhinney SR, Selitser M, Šafářová N, Franke K, Vochoskova K, Burdick K, Španiel F, Hajek T

PubMed · Jul 13 2025
Cognitive impairment is a key contributor to disability and poor outcomes in schizophrenia, yet it is not adequately addressed by currently available treatments. Thus, it is important to search for preventable or treatable risk factors for cognitive impairment. Here, we hypothesized that obesity-related neurostructural alterations would be associated with worse cognitive outcomes in people with a first episode of psychosis (FEP). This observational study presents cross-sectional data from the Early-Stage Schizophrenia Outcome project. We acquired T1-weighted 3D MRI scans in 440 participants with FEP at the time of first hospitalization and in 257 controls. Metabolic assessments included body mass index (BMI), waist-to-hip ratio (WHR), and serum concentrations of triglycerides, cholesterol, glucose, insulin, and hs-CRP. We chose the machine learning-derived brain age gap estimate (BrainAGE) as our measure of neurostructural change and assessed attention, working memory, and verbal learning using the Digit Span and Auditory Verbal Learning Test. Among obesity/metabolic markers, only WHR significantly predicted both higher BrainAGE (t(281) = 2.53, P = .012) and worse verbal learning (t(290) = -2.51, P = .026). The association between FEP and verbal learning was partially mediated by BrainAGE (average causal mediated effect, ACME = -0.04 [-0.10, -0.01], P = .022), and the higher BrainAGE in FEP was partially mediated by higher WHR (ACME = 0.08 [0.02, 0.15], P = .006). Central obesity-related brain alterations were linked with worse cognitive performance already early in the course of psychosis. These structure-function links suggest that preventing or treating central obesity could target brain and cognitive impairments in FEP.
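The mediation estimates (ACME) quoted above come from a product-of-coefficients framework; a hedged sketch of a simple bootstrap version with linear models on synthetic data is below. It is not the study's analysis, which also handled covariates and group comparisons.

```python
# Simple bootstrap mediation sketch (exposure -> mediator -> outcome), e.g.
# WHR -> BrainAGE -> verbal learning; synthetic data, no covariate adjustment.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
whr = rng.normal(0.9, 0.08, n)                               # exposure
brainage = 5 * whr + rng.normal(0, 1, n)                     # mediator
verbal = -0.4 * brainage - 1.0 * whr + rng.normal(0, 1, n)   # outcome

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                         # x -> m
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]   # m -> y given x
    return a * b                                                              # product of coefficients

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                              # resample cases with replacement
    boot.append(indirect_effect(whr[idx], brainage[idx], verbal[idx]))

print(np.mean(boot), np.percentile(boot, [2.5, 97.5]))       # ACME estimate and 95% CI
```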