Page 83 of 2252246 results

Advances in disease detection through retinal imaging: A systematic review.

Bilal H, Keles A, Bendechache M

PubMed · Jun 6, 2025
Ocular and non-ocular diseases significantly impact millions of people worldwide, leading to vision impairment or blindness if not detected and managed early; treating these diseases early and halting their progression could prevent many cases of blindness. Despite advances in medical imaging and diagnostic tools, manual detection of these diseases remains labor-intensive, time-consuming, and dependent on expert experience. Machine learning (ML) has transformed computer-aided diagnosis (CAD), providing promising methods for the automated detection and grading of diseases using various retinal imaging modalities. In this paper, we present a comprehensive systematic literature review of ML techniques for detecting diseases from retinal images, covering both single- and multi-modal imaging approaches. We analyze the efficiency of various deep learning and classical ML models, highlighting their achievements in accuracy, sensitivity, and specificity. Despite these advancements, the review identifies several critical challenges, and we propose future research directions to address them. Overcoming these challenges would allow the potential of ML to enhance diagnostic accuracy and patient outcomes to be fully realized, paving the way for more reliable and effective ocular and non-ocular disease management.

Full Conformal Adaptation of Medical Vision-Language Models

Julio Silva-Rodríguez, Leo Fillioux, Paul-Henry Cournède, Maria Vakalopoulou, Stergios Christodoulidis, Ismail Ben Ayed, Jose Dolz

arXiv preprint · Jun 6, 2025
Vision-language models (VLMs) pre-trained at large scale have shown unprecedented transferability capabilities and are being progressively integrated into medical image analysis. Although their discriminative potential has been widely explored, their reliability remains overlooked. This work investigates their behavior under the increasingly popular split conformal prediction (SCP) framework, which theoretically guarantees a given error level on output sets by leveraging a labeled calibration set. However, the zero-shot performance of VLMs is inherently limited, and common practice involves few-shot transfer learning pipelines, which do not satisfy the rigid exchangeability assumptions of SCP. To alleviate this issue, we propose full conformal adaptation, a novel setting for jointly adapting and conformalizing pre-trained foundation models, which operates transductively over each test data point using a few-shot adaptation set. Moreover, we complement this framework with SS-Text, a novel training-free linear probe solver for VLMs that alleviates the computational cost of such a transductive approach. We provide comprehensive experiments using 3 different modality-specialized medical VLMs and 9 adaptation tasks. Our framework requires exactly the same data as SCP, and provides consistent relative improvements of up to 27% on set efficiency while maintaining the same coverage guarantees.
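For readers unfamiliar with SCP, the coverage guarantee comes from a simple calibration recipe: compute nonconformity scores on held-out labeled data, take a finite-sample-corrected quantile, and include in each output set every label scoring at or below it. A minimal NumPy sketch (the function names and the 1 − softmax-probability score are illustrative conventions, not this paper's implementation):

```python
import numpy as np

def conformal_quantile(cal_scores, alpha):
    # Finite-sample-corrected quantile: the ceil((n+1)(1-alpha))-th
    # smallest calibration nonconformity score (clipped to n).
    n = len(cal_scores)
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    return np.sort(np.asarray(cal_scores))[k - 1]

def prediction_sets(test_probs, qhat):
    # Keep every label whose nonconformity score (here 1 - softmax
    # probability) falls at or below the calibrated threshold.
    return [np.where(1 - p <= qhat)[0] for p in test_probs]
```

Under exchangeability of calibration and test points, sets built this way cover the true label with probability at least 1 − alpha; few-shot adaptation on the same data breaks that exchangeability, which is the gap the paper's full conformal adaptation targets.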

Query Nearby: Offset-Adjusted Mask2Former enhances small-organ segmentation

Xin Zhang, Dongdong Meng, Sheng Li

arXiv preprint · Jun 6, 2025
Medical segmentation plays an important role in clinical applications like radiation therapy and surgical guidance, but acquiring clinically acceptable results is difficult. In recent years, progress has been made with transformer-like models, such as those combining the attention mechanism with CNNs. In particular, transformer-based segmentation models can extract global information more effectively, compensating for the drawbacks of CNN modules that focus on local features. However, utilizing transformer architectures is not easy, because training transformer-based models can be resource-demanding. Moreover, due to the distinct characteristics of the medical field, especially when encountering mid-sized and small organs with compact regions, their results often seem unsatisfactory. For example, using ViT to segment medical images directly gives a DSC of less than 50%, far below the clinically acceptable score of 80%. In this paper, we used Mask2Former with deformable attention to reduce computation and proposed offset adjustment strategies to encourage sampling points within the same organ during attention weight computation, thereby better integrating compact foreground information. Additionally, we utilized the 4th feature map in Mask2Former to provide a coarse location of organs, and employed an FCN-based auxiliary head to help train Mask2Former more quickly using Dice loss. We show that our model achieves SOTA (state-of-the-art) performance on the HaNSeg and SegRap2023 datasets, especially on mid-sized and small organs. Our code is available at https://github.com/earis/Offsetadjustment_Background-location_Decoder_Mask2former.
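The DSC figures quoted above (below 50% for plain ViT, 80% clinically acceptable) refer to the Dice similarity coefficient, 2|A∩B|/(|A|+|B|) over binary masks. A minimal NumPy sketch (function name and epsilon smoothing are my own conventions):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # DSC = 2|A ∩ B| / (|A| + |B|) for two binary segmentation masks;
    # eps guards against division by zero when both masks are empty.
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```

Identical masks score ~1.0 and disjoint masks score 0, which is why small, compact organs are punishing: even a few misplaced voxels are a large fraction of the overlap.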

Reliable Evaluation of MRI Motion Correction: Dataset and Insights

Kun Wang, Tobit Klug, Stefan Ruschke, Jan S. Kirschke, Reinhard Heckel

arXiv preprint · Jun 6, 2025
Correcting motion artifacts in MRI is important, as they can hinder accurate diagnosis. However, evaluating deep learning-based and classical motion correction methods remains fundamentally difficult due to the lack of accessible ground-truth target data. To address this challenge, we study three evaluation approaches: real-world evaluation based on reference scans, simulated motion, and reference-free evaluation, each with its merits and shortcomings. To enable evaluation with real-world motion artifacts, we release PMoC3D, a dataset consisting of unprocessed Paired Motion-Corrupted 3D brain MRI data. To advance evaluation quality, we introduce MoMRISim, a feature-space metric trained for evaluating motion reconstructions. We assess each evaluation approach and find that real-world evaluation combined with MoMRISim, while not perfect, is the most reliable. Evaluation based on simulated motion systematically exaggerates algorithm performance, and reference-free evaluation overrates oversmoothed deep learning outputs.
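Simulated-motion evaluation of the kind critiqued here typically corrupts k-space directly: by the Fourier shift theorem, a rigid in-plane translation during acquisition multiplies the affected k-space lines by a linear phase ramp. A toy NumPy sketch (function name, parameters, and the per-line translation model are illustrative, not the PMoC3D protocol):

```python
import numpy as np

def simulate_motion(image, max_shift=3.0, corrupted_frac=0.3, seed=0):
    # Corrupt a random subset of k-space lines with the linear phase
    # ramp a horizontal translation of `dx` pixels would induce.
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fft2(image))
    h, w = k.shape
    lines = rng.choice(h, size=int(corrupted_frac * h), replace=False)
    freqs = np.fft.fftshift(np.fft.fftfreq(w))
    for ln in lines:
        dx = rng.uniform(-max_shift, max_shift)
        k[ln] *= np.exp(-2j * np.pi * dx * freqs)
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))
```

Such simulations are convenient because the uncorrupted image is known exactly, but, as the paper finds, they flatter algorithms relative to real-world motion.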

TissUnet: Improved Extracranial Tissue and Cranium Segmentation for Children through Adulthood

Markiian Mandzak, Elvira Yang, Anna Zapaishchykova, Yu-Hui Chen, Lucas Heilbroner, John Zielke, Divyanshu Tak, Reza Mojahed-Yazdi, Francesca Romana Mussa, Zezhong Ye, Sridhar Vajapeyam, Viviana Benitez, Ralph Salloum, Susan N. Chi, Houman Sotoudeh, Jakob Seidlitz, Sabine Mueller, Hugo J. W. L. Aerts, Tina Y. Poussaint, Benjamin H. Kann

arXiv preprint · Jun 6, 2025
Extracranial tissues visible on brain magnetic resonance imaging (MRI) may hold significant value for characterizing health conditions and clinical decision-making, yet they are rarely quantified. Current tools have not been widely validated, particularly in settings of developing brains or underlying pathology. We present TissUnet, a deep learning model that segments skull bone, subcutaneous fat, and muscle from routine three-dimensional T1-weighted MRI, with or without contrast enhancement. The model was trained on 155 paired MRI-computed tomography (CT) scans and validated across nine datasets covering a wide age range and including individuals with brain tumors. In comparison to AI-CT-derived labels from 37 MRI-CT pairs, TissUnet achieved a median Dice coefficient of 0.79 [IQR: 0.77-0.81] in a healthy adult cohort. In a second validation using expert manual annotations, median Dice was 0.83 [IQR: 0.83-0.84] in healthy individuals and 0.81 [IQR: 0.78-0.83] in tumor cases, outperforming the previous state-of-the-art method. Acceptability testing resulted in an 89% acceptance rate after adjudication by a tie-breaker (N=108 MRIs), and TissUnet demonstrated excellent performance in the blinded comparative review (N=45 MRIs), including both healthy and tumor cases in pediatric populations. TissUnet enables fast, accurate, and reproducible segmentation of extracranial tissues, supporting large-scale studies on craniofacial morphology, treatment effects, and cardiometabolic risk using standard brain T1w MRI.

Comparative analysis of convolutional neural networks and vision transformers in identifying benign and malignant breast lesions.

Wang L, Fang S, Chen X, Pan C, Meng M

PubMed · Jun 6, 2025
Various deep learning models have been developed and employed for medical image classification. This study conducted comprehensive experiments on 12 models, aiming to establish reliable benchmarks for research on breast dynamic contrast-enhanced magnetic resonance imaging image classification. Twelve deep learning models were systematically compared by analyzing variations in 4 key hyperparameters: optimizer (Op), learning rate, batch size (BS), and data augmentation. The evaluation criteria encompassed a comprehensive set of metrics including accuracy (Ac), loss value, precision, recall rate, F1-score, and area under the receiver operating characteristic curve. Furthermore, the training times and model parameter counts were assessed for holistic performance comparison. Adjustments in the BS within Adam Op had a minimal impact on Ac in the convolutional neural network models. However, altering the Op and learning rate while maintaining the same BS significantly affected the Ac. The ResNet152 network model exhibited the lowest Ac. Both the recall rate and area under the receiver operating characteristic curve for the ResNet152 and Vision transformer-base (ViT) models were inferior compared to the others. Data augmentation unexpectedly reduced the Ac of ResNet50, ResNet152, VGG16, VGG19, and ViT models. The VGG16 model boasted the shortest training duration, whereas the ViT model, before data augmentation, had the longest training time and smallest model weight. The ResNet152 and ViT models were not well suited for image classification tasks involving small breast dynamic contrast-enhanced magnetic resonance imaging datasets. Although data augmentation is typically beneficial, its application should be approached cautiously. These findings provide important insights to inform and refine future research in this domain.
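The area under the receiver operating characteristic curve used throughout this comparison can be computed without plotting the curve at all: it equals the probability that a randomly chosen positive case outscores a randomly chosen negative one, with ties counted half. A dependency-free sketch (function name is my own):

```python
def roc_auc(labels, scores):
    # Rank-based AUC: fraction of positive/negative pairs where the
    # positive receives the higher score (ties count 0.5). Equivalent
    # to the trapezoidal area under the ROC curve.
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This O(P·N) pairwise form is fine for small evaluation sets; production libraries use a sort-based O(n log n) variant.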

Data Driven Models Merging Geometric, Biomechanical, and Clinical Data to Assess the Rupture of Abdominal Aortic Aneurysms.

Alloisio M, Siika A, Roy J, Zerwes S, Hyhlik-Duerr A, Gasser TC

PubMed · Jun 6, 2025
Despite elective repair of a large portion of stable abdominal aortic aneurysms (AAAs), the diameter criterion cannot prevent all small AAA ruptures. Since rupture depends on many factors, this study explored whether machine learning (ML) models (logistic regression [LogR], linear and non-linear support vector machines [SVM-Lin and SVM-Nlin], and Gaussian Naïve Bayes [GNB]) might improve the diameter-based risk assessment by comparing already ruptured (diameter 52.8 - 174.5 mm) with asymptomatic (diameter 40.4 - 95.5 mm) aortas. A retrospective case-control observational study included ruptured AAAs from two centres (2010 - 2012) with computed tomography angiography images for finite element analysis. Clinical patient data and geometric and biomechanical AAA properties were fed into ML models, whose output was compared with the results from intact cases. Classifications were explored for all cases and for those with diameters below 70 mm. All data trained and validated the ML models, with five-fold cross-validation. SHapley Additive exPlanations (SHAP) analysis ranked the factors for rupture identification. One hundred and seven ruptured (20% female, mean age 77 years, mean diameter 86.3 mm) and 200 non-ruptured aneurysmal infrarenal aortas (22% female, mean age 74 years, mean diameter 57 mm) were investigated through cross-validation methods. Given the entire dataset, the diameter threshold of 55 mm in men and 50 mm in women provided a 58% accurate rupture classification; it was 99% sensitive (AAA rupture identified correctly) and 36% specific (intact AAAs identified correctly). ML models improved accuracy (LogR 90.2%, SVM-Lin 89.48%, SVM-Nlin 88.7%, and GNB 86.4%); accuracy decreased when trained on the ≤ 70 mm group (55/50 mm diameter threshold 44.2%, LogR 82.5%, SVM-Lin 83.6%, SVM-Nlin 65.9%, and GNB 84.7%). SHAP ranked biomechanical parameters other than the diameter as the most relevant. A multiparameter estimate enhanced the purely diameter-based approach. The proposed predictability method should be further tested in longitudinal studies.
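The 55/50 mm baseline the ML models are compared against is just a sex-specific threshold rule, and its reported sensitivity and specificity follow directly from confusion-matrix counts. A minimal sketch (function names and the toy cases are illustrative, not the study's data):

```python
def diameter_rule(diameter_mm, is_female):
    # Sex-specific repair threshold from the abstract:
    # flag rupture risk at >= 55 mm for men, >= 50 mm for women.
    return diameter_mm >= (50.0 if is_female else 55.0)

def sensitivity_specificity(y_true, y_pred):
    # Sensitivity = TP/(TP+FN): ruptures identified correctly.
    # Specificity = TN/(TN+FP): intact AAAs identified correctly.
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    tn = sum((not t) and (not p) for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)
```

The study's 99% sensitivity / 36% specificity pattern is exactly what such a low threshold produces on a ruptured cohort with much larger mean diameters: nearly all ruptures exceed it, but so do most intact aneurysms.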

Development of a Deep Learning Model for the Volumetric Assessment of Osteonecrosis of the Femoral Head on Three-Dimensional Magnetic Resonance Imaging.

Uemura K, Takashima K, Otake Y, Li G, Mae H, Okada S, Hamada H, Sugano N

PubMed · Jun 6, 2025
Although volumetric assessment of necrotic lesions using the Steinberg classification predicts future collapse in osteonecrosis of the femoral head (ONFH), quantifying these lesions on magnetic resonance imaging (MRI) generally requires time and effort, preventing the Steinberg classification from being routinely used in clinical investigations. Thus, this study aimed to use deep learning to develop a method for automatically segmenting necrotic lesions on MRI and automatically classifying them according to the Steinberg classification. A total of 63 hips from patients with ONFH and without collapse were included. An orthopaedic surgeon manually segmented the femoral head and necrotic lesions on MRI acquired using a spoiled gradient-echo sequence. Based on manual segmentation, 22 hips were classified as Steinberg grade A, 23 as Steinberg grade B, and 18 as Steinberg grade C. The manually segmented labels were used to train a deep learning model based on a 5-layer Dynamic U-Net. Four-fold cross-validation was performed to assess segmentation accuracy using the Dice coefficient (DC) and average symmetric distance (ASD). Furthermore, hip classification accuracy according to the Steinberg classification was evaluated along with the weighted Kappa coefficient. The median DC and ASD for the femoral head region were 0.95 (interquartile range [IQR], 0.95 to 0.96) and 0.65 mm (IQR, 0.59 to 0.75), respectively. For necrotic lesions, the median DC and ASD were 0.89 (IQR, 0.85 to 0.92) and 0.76 mm (IQR, 0.58 to 0.96), respectively. Based on the Steinberg classification, the grading matched in 59 hips (accuracy: 93.7%), with a weighted Kappa coefficient of 0.98. The proposed deep learning model exhibited high accuracy in segmenting and grading necrotic lesions according to the Steinberg classification using MRI. This model can be used to assist clinicians in the volumetric assessment of ONFH.
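The weighted Kappa reported for grading agreement penalizes disagreements by their distance in grade rather than treating all mismatches equally. A minimal NumPy sketch with quadratic weights w_ij = (i − j)² / (k − 1)² (the function name is my own, and quadratic weighting is an assumption since the abstract does not state linear vs. quadratic):

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, k):
    # kappa_w = 1 - sum(w * O) / sum(w * E), where O is the observed
    # confusion matrix, E the chance-agreement matrix from the raters'
    # marginals, and w the quadratic disagreement weights.
    O = np.zeros((k, k))
    for i, j in zip(rater_a, rater_b):
        O[i, j] += 1
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / len(rater_a)
    idx = np.arange(k)
    w = (idx[:, None] - idx[None, :]) ** 2 / (k - 1) ** 2
    return 1 - (w * O).sum() / (w * E).sum()
```

Perfect agreement yields 1.0 and systematic disagreement goes negative; a value of 0.98 across grades A-C therefore indicates near-perfect ordinal agreement with the manual Steinberg grading.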

ResPF: Residual Poisson Flow for Efficient and Physically Consistent Sparse-View CT Reconstruction

Changsheng Fang, Yongtong Liu, Bahareh Morovati, Shuo Han, Yu Shi, Li Zhou, Shuyi Fan, Hengyong Yu

arXiv preprint · Jun 6, 2025
Sparse-view computed tomography (CT) is a practical solution to reduce radiation dose, but the resulting ill-posed inverse problem poses significant challenges for accurate image reconstruction. Although deep learning and diffusion-based methods have shown promising results, they often lack physical interpretability or suffer from high computational costs due to iterative sampling starting from random noise. Recent advances in generative modeling, particularly Poisson Flow Generative Models (PFGM), enable high-fidelity image synthesis by modeling the full data distribution. In this work, we propose Residual Poisson Flow (ResPF) Generative Models for efficient and accurate sparse-view CT reconstruction. Based on PFGM++, ResPF integrates conditional guidance from sparse measurements and employs a hijacking strategy to significantly reduce sampling cost by skipping redundant initial steps. However, skipping early stages can degrade reconstruction quality and introduce unrealistic structures. To address this, we embed a data-consistency step into each iteration, ensuring fidelity to sparse-view measurements. Yet, PFGM sampling relies on a fixed ordinary differential equation (ODE) trajectory induced by electrostatic fields, which can be disrupted by step-wise data consistency, resulting in unstable or degraded reconstructions. Inspired by ResNet, we introduce a residual fusion module to linearly combine generative outputs with data-consistent reconstructions, effectively preserving trajectory continuity. To the best of our knowledge, this is the first application of Poisson flow models to sparse-view CT. Extensive experiments on synthetic and clinical datasets demonstrate that ResPF achieves superior reconstruction quality, faster inference, and stronger robustness compared to state-of-the-art iterative, learning-based, and diffusion models.
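The two ingredients named above, a per-iteration data-consistency step and a ResNet-style residual fusion, can each be sketched in a few lines on a 2D Fourier-sampling toy model (function names, the FFT forward model standing in for the CT projector, and the blend weight lam are all illustrative, not ResPF's actual operators):

```python
import numpy as np

def data_consistency(x, measured_k, mask):
    # Enforce fidelity to the sparse measurements: overwrite the
    # sampled frequency-domain entries of x with the measured values.
    k = np.fft.fft2(x)
    k[mask] = measured_k[mask]
    return np.real(np.fft.ifft2(k))

def residual_fusion(generated, consistent, lam=0.5):
    # Linear blend of the generative output with its data-consistent
    # counterpart, softening the projection so the sampling trajectory
    # is perturbed gradually rather than snapped at every step.
    return (1 - lam) * generated + lam * consistent
```

A hard projection (lam = 1) maximizes measurement fidelity but can knock the sampler off its ODE trajectory; the fusion weight trades that fidelity against trajectory continuity, which is the tension the residual module is designed to resolve.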
