Page 24 of 343 (3422 results)

Faster, Self-Supervised Super-Resolution for Anisotropic Multi-View MRI Using a Sparse Coordinate Loss

Maja Schlereth, Moritz Schillinger, Katharina Breininger

arXiv preprint · Sep 9, 2025
Acquiring images in high resolution is often a challenging task. Especially in the medical sector, image quality has to be balanced with acquisition time and patient comfort. To strike a compromise between scan time and quality for Magnetic Resonance (MR) imaging, two anisotropic scans with different low-resolution (LR) orientations can be acquired. Typically, LR scans are analyzed individually by radiologists, which is time consuming and can lead to inaccurate interpretation. To tackle this, we propose a novel approach for fusing two orthogonal anisotropic LR MR images to reconstruct anatomical details in a unified representation. Our multi-view neural network is trained in a self-supervised manner, without requiring corresponding high-resolution (HR) data. To optimize the model, we introduce a sparse coordinate-based loss, enabling the integration of LR images with arbitrary scaling. We evaluate our method on MR images from two independent cohorts. Our results demonstrate comparable or even improved super-resolution (SR) performance compared to state-of-the-art (SOTA) self-supervised SR methods for different upsampling scales. By combining a patient-agnostic offline and a patient-specific online phase, we achieve a substantial speed-up of up to ten times for patient-specific reconstruction while achieving similar or better SR quality. Code is available at https://github.com/MajaSchle/tripleSR.
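The sparse coordinate-based loss can be pictured as sampling voxel-center coordinates from the LR volumes and penalizing the network's predicted intensity at exactly those locations. A minimal numpy sketch (function names and the toy predictor are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def sparse_coordinate_loss(predict_fn, coords, observed):
    """MSE between coordinate-conditioned predictions and sparsely sampled
    low-resolution intensities (illustrative stand-in for the paper's loss)."""
    pred = predict_fn(coords)
    return float(np.mean((pred - observed) ** 2))

# Toy check with a deterministic "volume" whose intensity is x + y + z.
rng = np.random.default_rng(0)
coords = rng.random((128, 3))             # arbitrary sample locations in [0,1)^3
observed = coords.sum(axis=1)             # stand-in LR intensities
perfect = sparse_coordinate_loss(lambda c: c.sum(axis=1), coords, observed)
biased = sparse_coordinate_loss(lambda c: c.sum(axis=1) + 1.0, coords, observed)
```

Because the loss is evaluated only at sampled coordinates, both LR volumes can contribute samples regardless of their (possibly non-integer) scale factors.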

Self-Supervised Cross-Encoder for Neurodegenerative Disease Diagnosis

Fangqi Cheng, Yingying Zhao, Xiaochen Yang

arXiv preprint · Sep 9, 2025
Deep learning has shown significant potential in diagnosing neurodegenerative diseases from MRI data. However, most existing methods rely heavily on large volumes of labeled data and often yield representations that lack interpretability. To address both challenges, we propose a novel self-supervised cross-encoder framework that leverages the temporal continuity in longitudinal MRI scans for supervision. This framework disentangles learned representations into two components: a static representation, constrained by contrastive learning, which captures stable anatomical features; and a dynamic representation, guided by input-gradient regularization, which reflects temporal changes and can be effectively fine-tuned for downstream classification tasks. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that our method achieves superior classification accuracy and improved interpretability. Furthermore, the learned representations exhibit strong zero-shot generalization on the Open Access Series of Imaging Studies (OASIS) dataset and cross-task generalization on the Parkinson Progression Marker Initiative (PPMI) dataset. The code for the proposed method will be made publicly available.

Transposing intensive care innovation from modern warfare to other resource-limited settings.

Jarrassier A, de Rocquigny G, Delagarde C, Ezanno AC, Josse F, Dubost C, Duranteau O, Boussen S, Pasquier P

PubMed · Sep 9, 2025
Delivering intensive care in conflict zones and other resource-limited settings presents unique clinical, logistical, and ethical challenges. These contexts, characterized by disrupted infrastructure, limited personnel, and prolonged field care, require adapted strategies to ensure critical care delivery under such constraints. This scoping review aims to identify and characterize medical innovations developed or implemented in recent conflicts that may be relevant and transposable to intensive care units operating in other resource-limited settings. A scoping review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines. Five major databases were searched for English-language publications from 2014 to 2025. Studies describing innovations applicable to intensive care in modern warfare or resource-limited settings were included. While many studies relied on experimental or simulated models, a subset described real-world applications in resource-limited environments, including ultrasound-guided regional analgesia, resuscitative endovascular balloon occlusion of the aorta, portable blood transfusion platforms, and artificial intelligence-supported monitoring of traumatic brain injury. Training strategies such as teleconsultation/telementoring and low-cost simulation were also emphasized. Few of these intensive care innovations were validated in real-life wartime conditions. Innovations from modern warfare offer pragmatic and potentially transposable solutions for intensive care in resource-limited settings. Successfully adapting them requires validation and contextual adaptation, as well as the implementation of concrete collaborative strategies, including tailored training programs, joint simulation exercises, and structured knowledge translation initiatives, to ensure effective and sustainable integration.

Comparison of DLIR and ASIR-V algorithms for virtual monoenergetic imaging in carotid CTA under a triple-low protocol.

Long J, Wang C, Yu M, Liu X, Xu W, Liu Z, Wang C, Wu Y, Sun A, Zhang S, Hu C, Xu K, Meng Y

PubMed · Sep 9, 2025
Stroke, frequently associated with carotid artery disease, is evaluated using carotid computed tomography angiography (CTA). Dual-energy CTA (DE-CTA) enhances imaging quality but presents challenges in maintaining high image clarity with low-dose scans. To compare the image quality of 50 keV virtual monoenergetic images (VMI) generated using Deep Learning Image Reconstruction (DLIR) and Adaptive Statistical Iterative Reconstruction-V (ASIR-V) algorithms under a triple-low scanning protocol in carotid CTA. A prospective study was conducted with 120 patients undergoing DE-CTA. The control group (Group 1), with a noise index (NI) of 4.0 and a contrast agent dose of 0.5 mL/kg, used the ASIR-V algorithm. The experimental group was divided into four subgroups: Group 2 (ASIR-V 50%), Group 3 (DLIR-L), Group 4 (DLIR-M), and Group 5 (DLIR-H), with a higher NI of 13.0 and a reduced contrast agent dose of 0.4 mL/kg. Objective image quality was assessed through signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and standard deviation (SD), while subjective quality was evaluated using a 5-point Likert scale. Radiation dose and contrast agent volume were also measured. The triple-low scanning protocol reduced radiation exposure by 53.2%, contrast agent volume by 19.7%, and injection rate by 19.8%. The DLIR-H setting outperformed ASIR-V, demonstrating superior image quality, better noise suppression, and improved contrast in small vessels. VMI at 50 keV showed enhanced diagnostic clarity with minimal radiation and contrast agent usage. The DLIR algorithm, particularly at high settings, significantly enhances image quality in DE-CTA VMI under a triple-low scanning protocol, offering a better balance between radiation dose reduction and image clarity.
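The objective image-quality metrics above have simple ROI-based definitions. A sketch under common conventions (conventions vary between studies, e.g. which region supplies the noise SD; the toy HU samples are not from this study):

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio: mean ROI attenuation over its standard deviation."""
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std()

def cnr(roi, background):
    """Contrast-to-noise ratio: attenuation difference between ROI and
    background, normalized by the background noise (SD)."""
    roi = np.asarray(roi, dtype=float)
    background = np.asarray(background, dtype=float)
    return abs(roi.mean() - background.mean()) / background.std()

vessel = [290.0, 300.0, 300.0, 310.0]   # toy HU samples inside a carotid ROI
muscle = [40.0, 60.0, 40.0, 60.0]       # toy HU samples in background tissue
vessel_snr = snr(vessel)                # 300 / ~7.07
vessel_cnr = cnr(vessel, muscle)        # 250 / 10
```

Higher CNR at the same dose is what "better noise suppression and improved contrast in small vessels" cashes out to numerically.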

Early Detection of Lung Metastases in Breast Cancer Using YOLOv10 and Transfer Learning: A Diagnostic Accuracy Study.

Taş HG, Taş MBH, Yildiz E, Aydin S

PubMed · Sep 9, 2025
BACKGROUND This study used CT imaging analyzed with deep learning techniques to assess the diagnostic accuracy of lung metastasis detection in patients with breast cancer. The aim of the research was to create and verify a system for detecting malignant and metastatic lung lesions that uses YOLOv10 and transfer learning. MATERIAL AND METHODS From January 2023 to 2024, CT scans of 16 patients with breast cancer who had confirmed lung metastases were gathered retrospectively from Erzincan Mengücek Gazi Training and Research Hospital. The YOLOv10 deep learning system was used to assess a labeled dataset of 1264 enhanced CT images. RESULTS A total of 1264 labeled images from 16 patients were included. With an accuracy of 96.4%, sensitivity of 94.1%, specificity of 97.1%, and precision of 90.3%, the ResNet-50 model performed best. The robustness of the model was shown by the remarkable area under the curve (AUC), which came in at 0.96. After dataset tuning, the GoogLeNet model's accuracy was 97.3%. These results highlight our approach's improved diagnostic capabilities over current approaches. CONCLUSIONS This study shows how YOLOv10 and transfer learning can be used to improve the diagnostic precision of pulmonary metastases in patients with breast cancer. The model's effectiveness is demonstrated by the excellent performance metrics attained, opening the door for its application in clinical situations. The suggested approach supports prompt and efficient treatment decisions by lowering radiologists' workload and improving the early diagnosis of metastatic lesions.
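The reported accuracy, sensitivity, specificity, and precision follow the standard confusion-matrix definitions. A sketch with toy counts (illustrative only, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Per-class diagnostic accuracy metrics from confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true-positive rate (recall)
        "specificity": tn / (tn + fp),   # true-negative rate
        "precision":   tp / (tp + fp),   # positive predictive value
    }

# Toy counts: 8 metastases found, 2 false alarms, 5 missed, 85 correct negatives.
metrics = diagnostic_metrics(tp=8, fp=2, tn=85, fn=5)
```

Note that precision depends on class prevalence while sensitivity and specificity do not, which is why all four are typically reported together.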

MedicalPatchNet: A Patch-Based Self-Explainable AI Architecture for Chest X-ray Classification

Patrick Wienholt, Christiane Kuhl, Jakob Nikolas Kather, Sven Nebelung, Daniel Truhn

arXiv preprint · Sep 9, 2025
Deep neural networks excel in radiological image classification but frequently suffer from poor interpretability, limiting clinical acceptance. We present MedicalPatchNet, an inherently self-explainable architecture for chest X-ray classification that transparently attributes decisions to distinct image regions. MedicalPatchNet splits images into non-overlapping patches, independently classifies each patch, and aggregates predictions, enabling intuitive visualization of each patch's diagnostic contribution without post-hoc techniques. Trained on the CheXpert dataset (223,414 images), MedicalPatchNet matches the classification performance (AUROC 0.907 vs. 0.908) of EfficientNet-B0 while substantially improving interpretability, achieving higher pathology localization accuracy (mean hit-rate 0.485 vs. 0.376 with Grad-CAM) on the CheXlocalize dataset. By providing explicit, reliable explanations accessible even to non-AI experts, MedicalPatchNet mitigates risks associated with shortcut learning, thus improving clinical trust. Our model is publicly available with reproducible training and inference scripts and contributes to safer, explainable AI-assisted diagnostics across medical imaging domains. We make the code publicly available: https://github.com/TruhnLab/MedicalPatchNet
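The split-score-aggregate idea can be sketched in a few lines (a simplification under assumed names; the real model classifies each patch with a CNN, not the toy mean-intensity scorer used here):

```python
import numpy as np

def patch_scores(image, patch, classify):
    """Split a 2-D image into non-overlapping patches and score each
    independently, yielding a per-region score map."""
    h, w = image.shape
    scores = np.empty((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            tile = image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            scores[i, j] = classify(tile)
    return scores

def aggregate(scores):
    """Image-level prediction = mean of the per-patch predictions."""
    return scores.mean()

# Toy 4x4 "radiograph" with one bright quadrant; toy classifier = mean intensity.
image = np.zeros((4, 4))
image[:2, :2] = 1.0
scores = patch_scores(image, 2, lambda tile: tile.mean())
image_level = aggregate(scores)
```

The score map itself is the explanation: the image-level output is a transparent average of per-region values a reader can inspect, with no post-hoc saliency step.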

Development of an MRI-Based Comprehensive Model Fusing Clinical, Habitat Radiomics, and Deep Learning Models for Preoperative Identification of Tumor Deposits in Rectal Cancer.

Li X, Zhu Y, Wei Y, Chen Z, Wang Z, Li Y, Jin X, Chen Z, Zhan J, Chen X, Wang M

PubMed · Sep 9, 2025
Tumor deposits (TDs) are an important prognostic factor in rectal cancer. However, integrated models combining clinical, habitat radiomics, and deep learning (DL) features for preoperative TDs detection remain unexplored. To investigate fusion models based on MRI for preoperative TDs identification and prognosis in rectal cancer. Retrospective. Surgically diagnosed rectal cancer patients (n = 635): training (n = 259) and internal validation (n = 112) from center 1; center 2 (n = 264) for external validation. 1.5/3T, T2-weighted image (T2WI) using fast spin echo sequence. Four models (clinical, habitat radiomics, DL, fusion) were developed for preoperative TDs diagnosis (184 TDs positive). T2WI was segmented using nnUNet, and habitat radiomics and DL features were extracted separately. Clinical parameters were analyzed independently. The fusion model integrated selected features from all three approaches through two-stage selection. Disease-free survival (DFS) analysis was used to assess the models' prognostic performance. Intraclass correlation coefficient (ICC), logistic regression, Mann-Whitney U tests, Chi-squared tests, LASSO, area under the curve (AUC), decision curve analysis (DCA), calibration curves, Kaplan-Meier analysis. The AUCs for the four models ranged from 0.778 to 0.930 in the training set. In the internal validation cohort, the AUCs of clinical, habitat radiomics, DL, and fusion models were 0.785 (95% CI 0.767-0.803), 0.827 (95% CI 0.809-0.845), 0.828 (95% CI 0.815-0.841), and 0.862 (95% CI 0.828-0.896), respectively. In the external validation cohort, the corresponding AUCs were 0.711 (95% CI 0.599-0.644), 0.817 (95% CI 0.801-0.833), 0.759 (95% CI 0.743-0.773), and 0.820 (95% CI 0.770-0.860), respectively. TDs-positive patients predicted by the fusion model had significantly poorer DFS (median: 30.7 months) than TDs-negative patients (median follow-up period: 39.9 months). 
A fusion model may identify TDs in rectal cancer and could allow stratification of DFS risk. Evidence Level: 3.

Brain CT for Diagnosis of Intracranial Disease in Ambulatory Cancer Patients: Assessment of the Diagnostic Value of Scanning Without Contrast Prior to With Contrast.

Wang E, Darbandi A, Tu L, Ballester LY, Morales CJ, Chen M, Gule-Monroe MK, Johnson JM

PubMed · Sep 9, 2025
Brain imaging with MRI or CT is standard in screening for intracranial disease among ambulatory cancer patients. Although MRI offers greater sensitivity, CT is frequently employed due to its accessibility, affordability, and faster acquisition time. However, the necessity of routinely performing a non-contrast CT with the contrast-enhanced study is unknown. This study evaluates the clinical and economic utility of the non-contrast portion of the brain CT examination. A board-certified neuroradiologist reviewed 737 brain CT reports from outpatients at MD Anderson Cancer Center who underwent contrast and non-contrast CT for cancer staging (October 2014 to March 2016) to assess if significant findings were identified only on non-contrast CT. A GPT-3 model was then fine-tuned to extract reports with a high likelihood of unique and significant non-contrast findings from 1,980 additional brain CT reports (January 2017 to April 2022). These reports were manually reviewed by two neuroradiologists, with adjudication by a third reviewer if needed. The incremental cost-effectiveness ratio of non-contrast CT inclusion was then calculated based on Medicare reimbursement and the 95% confidence interval of the proportion of all reports in which non-contrast CT was necessary for identifying significant findings. RESULTS: Seven of 737 reports in the initial dataset revealed significant findings unique to the non-contrast CT, all of which were hemorrhage. The GPT-3 model identified 145 additional reports with a high unique non-contrast CT finding likelihood for manual review from the second dataset of 1,980 reports. Nineteen of these reports were found to have unique and significant non-contrast CT findings. In total, 0.96% (95% CI: 0.63%-1.40%) of reports had significant findings identified only on non-contrast CT. The incremental cost-effectiveness ratio for identification of a single significant finding on non-contrast CT missed on the contrast-enhanced study was $1,855 to $4,122.
In brain CT for ambulatory screening for intracranial disease in cancer patients, non-contrast CT offers limited additional diagnostic value compared to contrast-enhanced CT alone. Considering the financial cost, workload, and patient radiation exposure associated with performing a non-contrast CT, contrast-enhanced brain CT alone is sufficient for cancer staging in asymptomatic cancer patients. GPT-3 = Generative Pre-trained Transformer 3.
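The incremental cost-effectiveness ratio here is plain arithmetic: dollars spent on the non-contrast acquisitions divided by the proportion of exams in which that acquisition uniquely identified a significant finding. A sketch with a hypothetical per-scan reimbursement of $26 (the abstract does not state the dollar figure), evaluated at the reported 95% CI bounds of the proportion:

```python
def icer(cost_per_scan, proportion_unique_findings):
    """Incremental cost-effectiveness ratio: dollars spent on non-contrast
    acquisitions per additional significant finding uniquely identified."""
    return cost_per_scan / proportion_unique_findings

# Hypothetical $26 reimbursement per non-contrast acquisition (assumption),
# at the reported CI bounds of 0.63%-1.40% for unique significant findings.
best_case = icer(26.0, 0.0140)    # more exams yield a unique finding -> cheaper
worst_case = icer(26.0, 0.0063)   # fewer exams yield a unique finding -> costlier
```

Dividing by the CI bounds rather than the point estimate is what turns a single proportion into the reported dollar range.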

[<sup>99m</sup>Tc]Tc-Sestamibi/[<sup>99m</sup>Tc]NaTcO<sub>4</sub> Subtraction SPECT of Parathyroid Glands Using Analysis of Principal Components.

Maříková I, Balogová S, Zogala D, Ptáčník V, Raška I, Libánský P, Talbot JN, Šámal M, Trnka J

PubMed · Sep 9, 2025
The aim of the study was to validate a new method for semiautomatic subtraction of [<sup>99m</sup>Tc]Tc-sestamibi and [<sup>99m</sup>Tc]NaTcO<sub>4</sub> SPECT 3-dimensional datasets using principal component analysis (PCA) against the results of parathyroid surgery and to compare its performance with an interactive method for visual comparison of images. We also sought to identify factors that affect the accuracy of lesion detection using the two methods. <b>Methods:</b> Scintigraphic data from [<sup>99m</sup>Tc]Tc-sestamibi and [<sup>99m</sup>Tc]NaTcO<sub>4</sub> SPECT were analyzed using semiautomatic subtraction of the 2 registered datasets based on PCA applied to the region of interest including the thyroid and an interactive method for visual comparison of the 2 image datasets. The findings of both methods were compared with those of surgery. Agreement with surgery was assessed with respect to the lesion quadrant, affected side of the neck, and the patient positivity regardless of location. <b>Results:</b> The results of parathyroid surgery and histology were available for 52 patients who underwent [<sup>99m</sup>Tc]Tc-sestamibi/[<sup>99m</sup>Tc]NaTcO<sub>4</sub> SPECT. Semiautomatic image subtraction identified the correct lesion quadrant in 46 patients (88%), the correct side of the neck in 51 patients (98%), and true pathologic lesions regardless of location in 51 patients (98%). Visual interactive analysis identified the correct lesion quadrant in 44 patients (85%), correct side of the neck in 49 patients (94%), and true pathologic lesions regardless of location in 50 patients (96%). There was no significant difference between the results of the 2 methods (<i>P</i> > 0.05). The factors supporting lesion detection were accurate positioning of the patient on the camera table, which facilitated subsequent image registration of the neck, and, after excluding ectopic parathyroid glands, focusing detection on the thyroid ROI. 
<b>Conclusion:</b> The results of semiautomatic subtraction of [<sup>99m</sup>Tc]Tc-sestamibi/[<sup>99m</sup>Tc]NaTcO<sub>4</sub> SPECT using PCA had good agreement with the findings from surgery as well as the visual interactive method, comparable to the high diagnostic accuracy of [<sup>99m</sup>Tc]Tc-sestamibi/[<sup>123</sup>I]NaI subtraction scintigraphy and [<sup>18</sup>F]fluorocholine PET/CT reported in the literature. The main advantages of semiautomatic subtraction are minimum user interaction and automatic adjustment of the subtraction weight. Principal component images may serve as optimized input objects, potentially useful in machine-learning algorithms aimed at fully automated detection of hyperfunctioning parathyroid glands.
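The PCA idea can be sketched on registered voxel pairs: the thyroid signal shared by both tracers dominates the first principal component, so projecting onto the minor component isolates sestamibi-specific (parathyroid) uptake. A minimal numpy illustration on synthetic data (not the authors' pipeline; a real workflow operates on the registered thyroid ROI):

```python
import numpy as np

def pca_subtraction(mibi, pertechnetate):
    """Project paired voxel intensities of two registered volumes onto their
    principal axes; the minor component captures signal present in the
    sestamibi image but absent from the pertechnetate (thyroid) image."""
    x = np.stack([mibi.ravel(), pertechnetate.ravel()], axis=1).astype(float)
    x -= x.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(x.T))   # eigh sorts eigenvalues ascending
    minor = x @ vecs[:, 0]                  # column 0 = minor principal axis
    return minor.reshape(mibi.shape)

# Synthetic example: a shared "thyroid" gradient plus one sestamibi-only hotspot.
base = np.linspace(0.0, 100.0, 100).reshape(10, 10)
mibi = base.copy()
mibi[5, 5] += 10.0                          # parathyroid-only uptake
difference_map = pca_subtraction(mibi, base)
hotspot = np.unravel_index(np.abs(difference_map).argmax(), difference_map.shape)
```

Because the projection weight comes from the data covariance itself, no user-chosen subtraction weight is needed, which is the "automatic adjustment" advantage the abstract describes.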

Use of artificial intelligence for classification of fractures around the elbow in adults according to the 2018 AO/OTA classification system.

Pettersson A, Axenhus M, Stukan T, Ljungberg O, Nåsell H, Razavian AS, Gordon M

PubMed · Sep 9, 2025
This study evaluates the accuracy of an Artificial Intelligence (AI) system, specifically a convolutional neural network (CNN), in classifying elbow fractures using the detailed 2018 AO/OTA fracture classification system. A retrospective analysis of 5,367 radiograph exams visualizing the elbow from adult patients (2002-2016) was conducted using a deep neural network. Radiographs were manually categorized according to the 2018 AO/OTA system by orthopedic surgeons. A pretrained Efficientnet B4 network with squeeze and excitation layers was fine-tuned. Performance was assessed against a test set of 208 radiographs reviewed independently by four orthopedic surgeons, with disagreements resolved via consensus. The study evaluated 54 distinct fracture types, each with a minimum of 10 cases, ensuring adequate dataset representation. Overall fracture detection achieved an AUC of 0.88 (95% CI 0.83-0.93). The weighted mean AUC was 0.80 for proximal radius fractures, 0.86 for proximal ulna, and 0.85 for distal humerus. These results underscore the AI system's ability to accurately detect and classify a broad spectrum of elbow fractures. AI systems, such as CNNs, can enhance clinicians' ability to identify and classify elbow fractures, offering a complementary tool to improve diagnostic accuracy and optimize treatment decisions. The findings suggest AI can reduce the risk of undiagnosed fractures, enhancing clinical outcomes and radiologic evaluation.
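The "weighted mean AUC" per anatomic region is the class-frequency-weighted average of per-class AUCs. A one-line sketch with toy numbers (not the study's data):

```python
def weighted_mean_auc(aucs, counts):
    """Average per-class AUCs, weighted by how many cases each class has."""
    return sum(a * n for a, n in zip(aucs, counts)) / sum(counts)

# Toy example: two fracture classes with unequal prevalence.
mean_auc = weighted_mean_auc([0.9, 0.7], [30, 10])   # (27 + 7) / 40
```

Weighting by case counts keeps rare fracture subtypes from dominating the summary metric, at the cost of under-emphasizing performance on them.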
