
Clinician-Led Code-Free Deep Learning for Detecting Papilloedema and Pseudopapilloedema Using Optic Disc Imaging

Shenoy, R., Samra, G. S., Sekhri, R., Yoon, H.-J., Teli, S., DeSilva, I., Tu, Z., Maconachie, G. D., Thomas, M. G.

medRxiv preprint · Jun 26, 2025
Importance: Differentiating pseudopapilloedema from papilloedema is challenging but critical for prompt diagnosis and for avoiding unnecessary invasive procedures. Once papilloedema is diagnosed, objectively grading its severity is important for determining the urgency of management and the therapeutic response. Automated machine learning (AutoML) has emerged as a promising tool for diagnosis in medical imaging and may provide accessible opportunities for consistent, accurate diagnosis and severity grading of papilloedema.

Objective: This study evaluates the feasibility of AutoML models for distinguishing the presence and severity of papilloedema using near-infrared reflectance (NIR) images obtained from standard optical coherence tomography (OCT), comparing the performance of different AutoML platforms.

Design, setting and participants: A retrospective cohort study was conducted using data from University Hospitals of Leicester NHS Trust. The study involved 289 adult and paediatric patients (813 images) who underwent optic nerve head-centred OCT imaging between 2021 and 2024. The dataset included patients with normal optic discs (69 patients, 185 images), papilloedema (135 patients, 372 images), and optic disc drusen (ODD) (85 patients, 256 images). Three AutoML platforms (Amazon Rekognition, Medic Mind (MM), and Google Vertex) were evaluated for their ability to classify and grade papilloedema severity.

Main outcomes and measures: Two classification tasks were performed: (1) distinguishing papilloedema from normal discs and ODD; (2) grading papilloedema severity (mild/moderate vs. severe). Model performance was evaluated using area under the curve (AUC), precision, recall, F1 score, and confusion matrices for all six models.

Results: Amazon Rekognition outperformed the other platforms, achieving the highest AUC (0.90) and F1 score (0.81) in distinguishing papilloedema from normal/ODD. For papilloedema severity grading, Amazon Rekognition also performed best, with an AUC of 0.90 and an F1 score of 0.79. Google Vertex and Medic Mind demonstrated good performance but had slightly lower accuracy and higher misclassification rates.

Conclusions and relevance: This evaluation of three widely available AutoML platforms using NIR images obtained from standard OCT shows promise in distinguishing and grading papilloedema. These models provide an accessible, scalable way for clinical teams without coding expertise to develop intelligent diagnostic systems that recognise and characterise papilloedema. Further external validation and prospective testing are needed to confirm their clinical utility and applicability in diverse settings.

Key Points
Question: Can clinician-led, code-free deep learning models using automated machine learning (AutoML) accurately differentiate papilloedema from pseudopapilloedema using optic disc imaging?
Findings: Three widely available AutoML platforms were used to develop models that successfully distinguish the presence and severity of papilloedema on optic disc imaging, with Amazon Rekognition demonstrating the highest performance.
Meaning: AutoML may assist clinical teams, even those with limited coding expertise, in diagnosing papilloedema, potentially reducing the need for invasive investigations.
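
As a concrete illustration of the evaluation reported above, the sketch below computes AUC, precision, recall, F1, and a confusion matrix for a binary papilloedema-vs-other classifier with scikit-learn; the label and probability arrays are hypothetical placeholders, not study data.

```python
# Minimal sketch: the metrics reported above (AUC, precision, recall, F1,
# confusion matrix) for a binary classifier. Arrays are illustrative
# placeholders standing in for exported AutoML predictions.
import numpy as np
from sklearn.metrics import (roc_auc_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])        # 1 = papilloedema, 0 = normal/ODD
y_prob = np.array([0.1, 0.8, 0.7, 0.3, 0.9, 0.4, 0.6, 0.2])
y_pred = (y_prob >= 0.5).astype(int)               # fixed decision threshold

print("AUC:      ", roc_auc_score(y_true, y_prob))
print("Precision:", precision_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))
print("Confusion:\n", confusion_matrix(y_true, y_pred))
```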

Self-supervised learning for MRI reconstruction: a review and new perspective.

Li X, Huang J, Sun G, Yang Z

PubMed paper · Jun 26, 2025
To review the latest developments in self-supervised deep learning (DL) techniques for magnetic resonance imaging (MRI) reconstruction, emphasizing their potential to overcome the limitations of supervised methods dependent on fully sampled k-space data. While DL has significantly advanced MRI, supervised approaches require large amounts of fully sampled k-space data for training, a major limitation given the impracticality and expense of acquiring such data clinically. Self-supervised learning has emerged as a promising alternative, enabling model training using only undersampled k-space data, thereby enhancing feasibility and driving research interest. We conducted a comprehensive literature review to synthesize recent progress in self-supervised DL for MRI reconstruction. The analysis focused on methods and architectures designed to improve image quality, reduce scanning time, and address data scarcity challenges, drawing from peer-reviewed publications and technical innovations in the field. Self-supervised DL holds transformative potential for MRI reconstruction, offering solutions to data limitations while maintaining image quality and accelerating scans. Key challenges include robustness across diverse anatomies, standardization of validation, and clinical integration. Future research should prioritize hybrid methodologies, domain-specific adaptations, and rigorous clinical validation. This review consolidates advancements and unresolved issues, providing a foundation for next-generation medical imaging technologies.
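
One self-supervised strategy of the kind such reviews cover partitions each undersampled acquisition's sampling mask into two disjoint subsets, reconstructing from one and computing the loss on the held-out other (SSDU-style). A minimal PyTorch sketch of that idea, not tied to any specific method in the review:

```python
# Illustrative sketch (not the review's own method): self-supervised MRI
# reconstruction via disjoint k-space mask splitting. The network sees one
# subset of the acquired samples; the loss is evaluated on the other subset.
import torch

def split_mask(mask: torch.Tensor, rho: float = 0.4):
    """Randomly partition an acquired-sample mask into train/loss subsets."""
    rand = torch.rand_like(mask.float())
    loss_mask = mask * (rand < rho)        # held out, used only in the loss
    train_mask = mask * (rand >= rho)      # fed to the reconstruction network
    return train_mask, loss_mask

def self_supervised_loss(model, kspace: torch.Tensor, mask: torch.Tensor):
    """Train on undersampled data only: no fully sampled reference needed."""
    train_mask, loss_mask = split_mask(mask)
    recon = model(kspace * train_mask)     # image-domain reconstruction
    pred_k = torch.fft.fft2(recon)         # back to k-space for the data term
    return torch.mean(torch.abs((pred_k - kspace) * loss_mask))
```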

Morphology-based radiological-histological correlation on ultra-high-resolution energy-integrating detector CT using cadaveric human lungs: nodule and airway analysis.

Hata A, Yanagawa M, Ninomiya K, Kikuchi N, Kurashige M, Nishigaki D, Doi S, Yamagata K, Yoshida Y, Ogawa R, Tokuda Y, Morii E, Tomiyama N

PubMed paper · Jun 26, 2025
To evaluate the depiction capability for fine lung nodules and airways of high-resolution settings on ultra-high-resolution energy-integrating detector CT (UHR-CT), incorporating large matrix sizes, thin slices, and iterative reconstruction (IR)/deep-learning reconstruction (DLR), using cadaveric human lungs and corresponding histological images. Images of 20 lungs were acquired using conventional CT (CCT), UHR-CT, and photon-counting detector CT (PCD-CT). CCT images were reconstructed with a 512 matrix and IR (CCT-512-IR). UHR-CT images were reconstructed with four settings by varying the matrix size and the reconstruction method: UHR-512-IR, UHR-1024-IR, UHR-2048-IR, and UHR-1024-DLR. Two imaging settings of PCD-CT were used: PCD-512-IR and PCD-1024-IR. CT images were visually evaluated and compared with histology. Overall, 6769 nodules (median: 1321 µm) and 92 airways (median: 851 µm) were evaluated. For nodules, UHR-2048-IR outperformed CCT-512-IR, UHR-512-IR, and UHR-1024-IR (p < 0.001). UHR-1024-DLR showed no significant difference from UHR-2048-IR in the overall nodule score after Bonferroni correction (uncorrected p = 0.043); however, for nodules > 1000 μm, UHR-2048-IR demonstrated significantly better scores than UHR-1024-DLR (p = 0.003). For airways, UHR-1024-IR and UHR-512-IR showed significant differences (p < 0.001), with no notable differences among UHR-1024-IR, UHR-2048-IR, and UHR-1024-DLR. UHR-2048-IR detected nodules and airways with median diameters of 604 µm and 699 µm, respectively. No significant difference was observed between UHR-512-IR and PCD-512-IR (p > 0.1). PCD-1024-IR outperformed the UHR-CT settings for nodules > 1000 μm (p ≤ 0.001), while UHR-1024-DLR outperformed PCD-1024-IR for airways > 1000 μm (p = 0.005). UHR-2048-IR demonstrated the highest scores among the evaluated energy-integrating detector CT (EID-CT) images. UHR-CT showed potential for detecting submillimeter nodules and airways. With the 512 matrix, UHR-CT demonstrated performance comparable to PCD-CT.
Question: Data evaluating the depiction capability of ultra-high-resolution energy-integrating detector CT (UHR-CT) for fine structures are scarce, as are comparisons with photon-counting detector CT (PCD-CT).
Findings: UHR-CT depicted nodules and airways with median diameters of 604 µm and 699 µm, showing no significant difference from PCD-CT with the 512 matrix.
Clinical relevance: High-resolution imaging is crucial for lung diagnosis. UHR-CT has the potential to contribute to pulmonary nodule diagnosis and airway disease evaluation by detecting fine opacities and airways.
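
The abstract's comparisons across reconstruction settings rest on pairwise tests with Bonferroni correction; a minimal sketch of that adjustment with statsmodels, using hypothetical p-values rather than the study's:

```python
# Sketch of the multiple-comparison adjustment mentioned above: Bonferroni
# correction of pairwise p-values. Values are illustrative placeholders.
from statsmodels.stats.multitest import multipletests

pairwise_p = [0.043, 0.0008, 0.012, 0.31, 0.0009, 0.07]  # hypothetical uncorrected p
reject, p_adj, _, _ = multipletests(pairwise_p, alpha=0.05, method="bonferroni")
for p, pa, r in zip(pairwise_p, p_adj, reject):
    print(f"uncorrected p={p:.4f} -> corrected p={pa:.4f} significant={r}")
```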

Deep Learning Model for Automated Segmentation of Orbital Structures in MRI Images.

Bakhshaliyeva E, Reiner LN, Chelbi M, Nawabi J, Tietze A, Scheel M, Wattjes M, Dell'Orco A, Meddeb A

PubMed paper · Jun 26, 2025
Magnetic resonance imaging (MRI) is a crucial tool for visualizing orbital structures and detecting eye pathologies. However, manual segmentation of orbital anatomy is challenging due to the complexity and variability of the structures. Recent advancements in deep learning (DL), particularly convolutional neural networks (CNNs), offer promising solutions for automated segmentation in medical imaging. This study aimed to train and evaluate a U-Net-based model for the automated segmentation of key orbital structures. This retrospective study included 117 patients with various orbital pathologies who underwent orbital MRI. Manual segmentation was performed on four anatomical structures: the ocular bulb, ocular tumors, retinal detachment, and the optic nerve. Using nnU-Net's automatic configuration, we conducted five-fold cross-validation and evaluated model performance using the Dice Similarity Coefficient (DSC) and Relative Absolute Volume Difference (RAVD) as metrics. nnU-Net achieved high segmentation performance for the ocular bulb (mean DSC: 0.931) and the optic nerve (mean DSC: 0.820). Segmentation of ocular tumors (mean DSC: 0.788) and retinal detachment (mean DSC: 0.550) showed greater variability, with performance declining in more challenging cases. Despite these challenges, the model achieved high detection rates, with ROC AUCs of 0.90 for ocular tumors and 0.78 for retinal detachment. This study demonstrates nnU-Net's capability for accurate segmentation of orbital structures, particularly the ocular bulb and optic nerve. However, challenges remain in the segmentation of tumors and retinal detachment due to variability and artifacts. Future improvements in deep learning models and broader, more diverse datasets may enhance segmentation performance, ultimately aiding in the diagnosis and treatment of orbital pathologies.
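
For reference, the two reported metrics are straightforward to compute from binary masks; a minimal sketch with hypothetical arrays standing in for a prediction and its manual label:

```python
# Minimal sketch of the two metrics used above, computed on binary masks.
# `pred` and `gt` are hypothetical 3D boolean arrays, not study data.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice Similarity Coefficient: 2 * intersection / (|pred| + |gt|)."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def ravd(pred: np.ndarray, gt: np.ndarray) -> float:
    """Relative Absolute Volume Difference: ||pred| - |gt|| / |gt|."""
    return abs(float(pred.sum()) - float(gt.sum())) / float(gt.sum())

pred = np.zeros((64, 64, 64), bool); pred[20:40, 20:40, 20:40] = True
gt = np.zeros((64, 64, 64), bool); gt[22:42, 20:40, 20:40] = True
print(f"DSC = {dice(pred, gt):.3f}, RAVD = {ravd(pred, gt):.3f}")
```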

Semi-automatic segmentation of elongated interventional instruments for online calibration of C-arm imaging system.

Chabi N, Illanes A, Beuing O, Behme D, Preim B, Saalfeld S

PubMed paper · Jun 26, 2025
The C-arm biplane imaging system, designed for cerebral angiography, detects pathologies like aneurysms using dual rotating detectors for high-precision, real-time vascular imaging. However, accuracy can be affected by source-detector trajectory deviations caused by gravitational artifacts and mechanical instabilities. This study addresses calibration challenges and suggests leveraging interventional devices with radio-opaque markers to optimize C-arm geometry. We propose an online calibration method using image-specific features derived from interventional devices such as guidewires and catheters (in the remainder of this paper, the term "catheter" refers to both catheters and guidewires). The process begins with gantry-recorded data, refined through iterative nonlinear optimization. A machine learning approach detects and segments elongated devices by identifying candidates via thresholding on a weighted sum of curvature, derivative, and high-frequency indicators. An ensemble classifier segments these regions, followed by post-processing that removes false positives by integrating vessel maps, manual correction, and identification markers. An interpolation step fills gaps along the catheter. Among the optimized ensemble classifiers, the one trained on the first frames achieved the best performance, with a specificity of 99.43% and a precision of 86.41%. The calibration method was evaluated on three clinical datasets and four phantom angiogram pairs, reducing the mean backprojection error from 4.11 ± 2.61 to 0.15 ± 0.01 mm. Additionally, 3D accuracy analysis showed an average root mean square error of 3.47% relative to the true marker distance. This study explores using interventional tools with radio-opaque markers for C-arm self-calibration. The proposed method significantly reduces 2D backprojection error and 3D RMSE, enabling accurate 3D vascular reconstruction.
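
The headline calibration metric, mean 2D backprojection error, can be sketched as follows, assuming a 3x4 projection matrix per view; the matrix and marker positions below are toy stand-ins for the gantry-derived geometry:

```python
# Hedged sketch of the calibration quality metric: mean backprojection error
# of radio-opaque marker points under a given projection geometry.
import numpy as np

def backprojection_error(P: np.ndarray, X: np.ndarray, x: np.ndarray) -> float:
    """Mean 2D distance between projected 3D markers X (Nx3) and detections x (Nx2)."""
    Xh = np.hstack([X, np.ones((len(X), 1))])   # homogeneous 3D points
    proj = (P @ Xh.T).T                         # project onto the detector plane
    proj = proj[:, :2] / proj[:, 2:3]           # dehomogenize
    return float(np.mean(np.linalg.norm(proj - x, axis=1)))

P = np.hstack([np.eye(3), np.array([[0.0], [0.0], [10.0]])])  # toy projection matrix
X = np.random.rand(8, 3)                                      # toy 3D marker positions
x = (P @ np.hstack([X, np.ones((8, 1))]).T).T
x = x[:, :2] / x[:, 2:3] + 0.1                                # detections with an offset
print(f"mean backprojection error: {backprojection_error(P, X, x):.3f}")
```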

Improving Clinical Utility of Fetal Cine CMR Using Deep Learning Super-Resolution.

Vollbrecht TM, Hart C, Katemann C, Isaak A, Voigt MB, Pieper CC, Kuetting D, Geipel A, Strizek B, Luetkens JA

PubMed paper · Jun 26, 2025
Fetal cardiovascular magnetic resonance is an emerging tool for prenatal congenital heart disease assessment, but long acquisition times and fetal movements limit its clinical use. This study evaluates the clinical utility of deep learning super-resolution reconstructions for rapidly acquired, low-resolution fetal cardiovascular magnetic resonance. This prospective study included participants with fetal congenital heart disease undergoing fetal cardiovascular magnetic resonance in the third trimester of pregnancy, with axial cine images acquired at normal resolution and at low resolution. Low-resolution cine data were subsequently reconstructed using a deep learning super-resolution framework (cineDL). Acquisition times, apparent signal-to-noise ratio, contrast-to-noise ratio, and edge rise distance were assessed. Volumetry and functional analysis were performed. Qualitative image scores were rated on a 5-point Likert scale. Cardiovascular structures and pathological findings visible only in cineDL images were assessed. Statistical analysis included the paired Student t test and the Wilcoxon test. A total of 42 participants were included (median gestational age, 35.9 weeks [interquartile range (IQR), 35.1-36.4]). CineDL acquisition was faster than normal-resolution cine acquisition (134±9.6 s versus 252±8.8 s; P<0.001). Quantitative image-quality metrics and image-quality scores for cineDL were higher than or comparable with those of normal-resolution cine images (e.g., fetal motion, 4.0 [IQR, 4.0-5.0] versus 4.0 [IQR, 3.0-4.0]; P<0.001). Non-patient-related artifacts (e.g., backfolding) were more pronounced in cineDL than in normal-resolution cine images (4.0 [IQR, 4.0-5.0] versus 5.0 [IQR, 3.0-4.0]; P<0.001). Volumetry and functional results were comparable. CineDL revealed additional structures in 10 of 42 fetuses (24%) and additional pathologies in 5 of 42 fetuses (12%), including partial anomalous pulmonary venous connection. Deep learning super-resolution reconstructions of low-resolution acquisitions shorten acquisition times and achieve diagnostic quality comparable to standard images, while being less sensitive to fetal bulk movements and yielding additional diagnostic findings. Deep learning super-resolution may therefore improve the clinical utility of fetal cardiovascular magnetic resonance for accurate prenatal assessment of congenital heart disease.
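
A minimal sketch of the quantitative image-quality metrics named above (apparent SNR and CNR from regions of interest); all values below are synthetic illustrations, not study measurements:

```python
# Sketch of ROI-based image-quality metrics: apparent SNR and CNR.
# The image and ROI placements are hypothetical stand-ins.
import numpy as np

def apparent_snr(signal_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """Apparent SNR: mean signal divided by background standard deviation."""
    return float(signal_roi.mean() / noise_roi.std())

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, noise_roi: np.ndarray) -> float:
    """Contrast-to-noise ratio between two tissues, normalized by noise."""
    return float(abs(roi_a.mean() - roi_b.mean()) / noise_roi.std())

img = np.random.default_rng(0).normal(100, 5, (256, 256))  # hypothetical cine frame
blood = img[50:70, 50:70] + 80          # bright blood-pool ROI (synthetic offset)
myocardium = img[90:110, 90:110]        # mid-intensity tissue ROI
background = img[:20, :20] - 90         # air/background noise ROI
print(f"aSNR = {apparent_snr(blood, background):.1f}, "
      f"CNR = {cnr(blood, myocardium, background):.1f}")
```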

Exploring the Design Space of 3D MLLMs for CT Report Generation

Mohammed Baharoon, Jun Ma, Congyu Fang, Augustin Toma, Bo Wang

arXiv preprint · Jun 26, 2025
Multimodal Large Language Models (MLLMs) have emerged as a promising way to automate Radiology Report Generation (RRG). In this work, we systematically investigate the design space of 3D MLLMs, including visual input representation, projectors, Large Language Models (LLMs), and fine-tuning techniques for 3D CT report generation. We also introduce two knowledge-based report augmentation methods that improve the GREEN score by up to 10%, achieving second place in the MICCAI 2024 AMOS-MM challenge. Our results on the 1,687 cases from the AMOS-MM dataset show that RRG is largely independent of the size of the LLM under the same training protocol. We also show that a larger volume size does not always improve performance if the original ViT was pre-trained on a smaller volume size. Lastly, we show that using a segmentation mask along with the CT volume improves performance. The code is publicly available at https://github.com/bowang-lab/AMOS-MM-Solution
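
One axis of the design space the paper explores is the projector between the 3D vision encoder and the LLM; a minimal sketch of a two-layer MLP projector, with dimensions chosen for illustration rather than taken from the paper:

```python
# Illustrative sketch of an MLP projector mapping 3D ViT patch embeddings
# into the LLM token-embedding space. Dimensions are assumptions.
import torch
import torch.nn as nn

class MLPProjector(nn.Module):
    def __init__(self, vit_dim: int = 768, llm_dim: int = 4096):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vit_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # (batch, n_patches, vit_dim) -> (batch, n_patches, llm_dim) visual tokens
        return self.net(patch_tokens)

tokens = torch.randn(1, 512, 768)     # hypothetical 3D CT patch features
print(MLPProjector()(tokens).shape)   # torch.Size([1, 512, 4096])
```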

HyperSORT: Self-Organising Robust Training with hyper-networks

Samuel Joutard, Marijn Stollenga, Marc Balle Sanchez, Mohammad Farid Azampour, Raphael Prevost

arXiv preprint · Jun 26, 2025
Medical imaging datasets often contain heterogeneous biases ranging from erroneous labels to inconsistent labeling styles. Such biases can negatively impact the performance of deep segmentation networks. Yet, identifying and characterizing such biases is a particularly tedious and challenging task. In this paper, we introduce HyperSORT, a framework that uses a hyper-network to predict UNet parameters from latent vectors representing both image and annotation variability. The hyper-network parameters and the latent vector collection corresponding to each data sample from the training set are jointly learned. Hence, instead of optimizing a single neural network to fit a dataset, HyperSORT learns a complex distribution of UNet parameters where low-density areas can capture noise-specific patterns while larger modes robustly segment organs in differentiated but meaningful manners. We validate our method on two 3D abdominal CT public datasets: first a synthetically perturbed version of the AMOS dataset, and TotalSegmentator, a large-scale dataset containing real unknown biases and errors. Our experiments show that HyperSORT creates a structured mapping of the dataset, allowing the identification of relevant systematic biases and erroneous samples. Latent space clusters yield UNet parameters that perform the segmentation task in accordance with the underlying learned systematic bias. The code and our analysis of the TotalSegmentator dataset are made available: https://github.com/ImFusionGmbH/HyperSORT
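
A toy sketch of the core hyper-network idea: a jointly learned per-sample latent vector is mapped to layer weights that are applied functionally. HyperSORT predicts full UNet parameter sets, whereas this sketch, for brevity, predicts a single conv kernel:

```python
# Toy hyper-network sketch: latent vector -> conv-layer weights, applied
# functionally. Sizes are illustrative, not HyperSORT's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyHyperNet(nn.Module):
    def __init__(self, latent_dim: int = 16, ch_in: int = 1, ch_out: int = 8, k: int = 3):
        super().__init__()
        self.shape = (ch_out, ch_in, k, k)
        self.to_weights = nn.Linear(latent_dim, ch_out * ch_in * k * k)

    def forward(self, image: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        w = self.to_weights(z).view(self.shape)   # predicted conv kernel
        return F.conv2d(image, w, padding=1)      # sample-specific convolution

z = nn.Parameter(torch.zeros(16))     # per-sample latent, learned jointly
image = torch.randn(1, 1, 64, 64)
print(TinyHyperNet()(image, z).shape) # torch.Size([1, 8, 64, 64])
```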

MedPrompt: LLM-CNN Fusion with Weight Routing for Medical Image Segmentation and Classification

Shadman Sobhan, Kazi Abrar Mahmud, Abduz Zami

arXiv preprint · Jun 26, 2025
Current medical image analysis systems are typically task-specific, requiring separate models for classification and segmentation, and lack the flexibility to support user-defined workflows. To address these challenges, we introduce MedPrompt, a unified framework that combines a few-shot prompted Large Language Model (Llama-4-17B) for high-level task planning with a modular Convolutional Neural Network (DeepFusionLab) for low-level image processing. The LLM interprets user instructions and generates structured output to dynamically route task-specific pretrained weights. This weight-routing approach avoids retraining the entire framework when adding new tasks: only task-specific weights are required, which enhances scalability and eases deployment. We evaluated MedPrompt across 19 public datasets, covering 12 tasks spanning 5 imaging modalities. The system achieves 97% end-to-end correctness in interpreting and executing prompt-driven instructions, with an average inference latency of 2.5 seconds, making it suitable for near-real-time applications. DeepFusionLab achieves competitive segmentation accuracy (e.g., Dice 0.9856 on lungs) and strong classification performance (F1 0.9744 on tuberculosis). Overall, MedPrompt enables scalable, prompt-driven medical imaging by combining the interpretability of LLMs with the efficiency of modular CNNs.
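
The weight-routing step can be pictured as a small dispatcher that parses the LLM's structured plan and loads only the matching checkpoint; the registry paths and plan fields below are hypothetical, not MedPrompt's actual schema:

```python
# Hedged sketch of weight routing: an LLM-produced structured plan selects a
# task-specific checkpoint, so adding a task never retrains the framework.
import json

WEIGHT_REGISTRY = {                       # hypothetical task -> checkpoint map
    ("segmentation", "lungs"): "weights/seg_lungs.pt",
    ("classification", "tuberculosis"): "weights/cls_tb.pt",
}

def route(plan_json: str) -> str:
    plan = json.loads(plan_json)          # structured output from the LLM
    key = (plan["task"], plan["target"])
    if key not in WEIGHT_REGISTRY:
        raise KeyError(f"no pretrained weights registered for {key}")
    return WEIGHT_REGISTRY[key]           # only these weights get loaded

print(route('{"task": "segmentation", "target": "lungs"}'))  # weights/seg_lungs.pt
```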

Lightweight Physics-Informed Zero-Shot Ultrasound Plane Wave Denoising

Hojat Asgariandehkordi, Mostafa Sharifzadeh, Hassan Rivaz

arXiv preprint · Jun 26, 2025
Ultrasound Coherent Plane Wave Compounding (CPWC) enhances image contrast by combining echoes from multiple steered transmissions. While increasing the number of angles generally improves image quality, it drastically reduces the frame rate and can introduce blurring artifacts in fast-moving targets. Moreover, compounded images remain susceptible to noise, particularly when acquired with a limited number of transmissions. We propose a zero-shot denoising framework tailored for low-angle CPWC acquisitions, which enhances contrast without relying on a separate training dataset. The method divides the available transmission angles into two disjoint subsets, each used to form compound images that include higher noise levels. The new compounded images are then used to train a deep model via a self-supervised residual learning scheme, enabling it to suppress incoherent noise while preserving anatomical structures. Because angle-dependent artifacts vary between the subsets while the underlying tissue response is similar, this physics-informed pairing allows the network to learn to disentangle the inconsistent artifacts from the consistent tissue signal. Unlike supervised methods, our model requires no domain-specific fine-tuning or paired data, making it adaptable across anatomical regions and acquisition setups. The entire pipeline supports efficient training with low computational cost due to the use of a lightweight architecture, which comprises only two convolutional layers. Evaluations on simulation, phantom, and in vivo data demonstrate superior contrast enhancement and structure preservation compared to both classical and deep learning-based denoising methods.
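
The physics-informed pairing can be illustrated in a few lines: two disjoint angle subsets yield two noisy compounds of the same tissue, and a lightweight two-conv-layer network learns to map one to the other via residual learning. Synthetic data stands in for beamformed compounds:

```python
# Illustrative sketch of the angle-split, self-supervised pairing: train a
# tiny residual denoiser so one noisy compound predicts the other.
# Synthetic tensors stand in for compounded plane-wave images.
import torch
import torch.nn as nn

denoiser = nn.Sequential(               # lightweight: two conv layers, as above
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

clean = torch.randn(1, 1, 64, 64)                   # stand-in tissue signal
subset_a = clean + 0.3 * torch.randn_like(clean)    # compound from angle subset A
subset_b = clean + 0.3 * torch.randn_like(clean)    # compound from angle subset B

for _ in range(100):                    # self-supervised: no clean target used
    opt.zero_grad()
    residual = denoiser(subset_a)       # residual learning: predict the noise
    loss = nn.functional.mse_loss(subset_a - residual, subset_b)
    loss.backward()
    opt.step()
```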