
No Modality Left Behind: Adapting to Missing Modalities via Knowledge Distillation for Brain Tumor Segmentation

Shenghao Zhu, Yifei Chen, Weihong Chen, Shuo Jiang, Guanyu Zhou, Yuanhan Wang, Feiwei Qin, Changmiao Wang, Qiyuan Tian

arXiv preprint · Sep 18, 2025
Accurate brain tumor segmentation is essential for preoperative evaluation and personalized treatment. Multi-modal MRI is widely used due to its ability to capture complementary tumor features across different sequences. However, in clinical practice, missing modalities are common, limiting the robustness and generalizability of existing deep learning methods that rely on complete inputs, especially under non-dominant modality combinations. To address this, we propose AdaMM, a multi-modal brain tumor segmentation framework tailored for missing-modality scenarios, centered on knowledge distillation and composed of three synergistic modules. The Graph-guided Adaptive Refinement Module explicitly models semantic associations between generalizable and modality-specific features, enhancing adaptability to modality absence. The Bi-Bottleneck Distillation Module transfers structural and textural knowledge from teacher to student models via global style matching and adversarial feature alignment. The Lesion-Presence-Guided Reliability Module predicts prior probabilities of lesion types through an auxiliary classification task, effectively suppressing false positives under incomplete inputs. Extensive experiments on the BraTS 2018 and 2024 datasets demonstrate that AdaMM consistently outperforms existing methods, exhibiting superior segmentation accuracy and robustness, particularly in single-modality and weak-modality configurations. In addition, we conduct a systematic evaluation of six categories of missing-modality strategies, confirming the superiority of knowledge distillation and offering practical guidance for method selection and future research. Our source code is available at https://github.com/Quanato607/AdaMM.
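A minimal sketch of one ingredient of this style of distillation, global style matching between teacher (full-modality) and student (missing-modality) bottleneck features; the loss form, tensor shapes, and function names below are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of a global style-matching distillation term: the
# student features are pushed toward the channel-wise statistics (mean/std)
# of the teacher features, transferring "style" rather than exact values.
import torch
import torch.nn.functional as F

def style_matching_loss(student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
    """Both tensors are assumed to be (batch, channels, *spatial) bottleneck features."""
    dims = tuple(range(2, student_feat.dim()))            # spatial dimensions
    s_mu, s_sigma = student_feat.mean(dims), student_feat.std(dims)
    t_mu, t_sigma = teacher_feat.mean(dims), teacher_feat.std(dims)
    return F.mse_loss(s_mu, t_mu) + F.mse_loss(s_sigma, t_sigma)

# Example: 3D bottleneck features from a segmentation backbone
student = torch.randn(2, 256, 8, 8, 8)
teacher = torch.randn(2, 256, 8, 8, 8)
print(style_matching_loss(student, teacher))
```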

Transplant-Ready? Evaluating AI Lung Segmentation Models in Candidates with Severe Lung Disease

Jisoo Lee, Michael R. Harowicz, Yuwen Chen, Hanxue Gu, Isaac S. Alderete, Lin Li, Maciej A. Mazurowski, Matthew G. Hartwig

arXiv preprint · Sep 18, 2025
This study evaluates publicly available deep learning-based lung segmentation models in transplant-eligible patients to determine their performance across disease severity levels, pathology categories, and lung sides, and to identify limitations affecting their use in preoperative planning for lung transplantation. This retrospective study included 32 patients who underwent chest CT scans at Duke University Health System between 2017 and 2019 (3,645 2D axial slices in total). Patients with standard axial CT scans were selected based on the presence of two or more lung pathologies of varying severity. Lung segmentation was performed using three previously developed deep learning models: Unet-R231, TotalSegmentator, and MedSAM. Performance was assessed using quantitative metrics (volumetric similarity, Dice similarity coefficient, Hausdorff distance) and a qualitative measure (a four-point clinical acceptability scale). Unet-R231 consistently outperformed TotalSegmentator and MedSAM overall as well as across severity levels and pathology categories (p<0.05). All models showed significant performance declines from mild to moderate-to-severe cases, particularly in volumetric similarity (p<0.05), without significant differences between lung sides or among pathology types. Unet-R231 provided the most accurate automated lung segmentation among the evaluated models, with TotalSegmentator a close second; however, performance declined significantly in moderate-to-severe cases, emphasizing the need for specialized model fine-tuning in severe pathology contexts.
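For reference, the overlap and volume metrics reported above can be computed on binary masks in a few lines; this is a generic sketch with illustrative variable names, not the study's evaluation code:

```python
# Dice similarity and volumetric similarity on binary segmentation masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def volumetric_similarity(pred: np.ndarray, gt: np.ndarray) -> float:
    # VS = 1 - |V_pred - V_gt| / (V_pred + V_gt); sensitive to volume
    # agreement but, unlike Dice, not to overlap location.
    vp, vg = int(pred.sum()), int(gt.sum())
    return 1.0 - abs(vp - vg) / (vp + vg) if (vp + vg) else 1.0

pred = np.zeros((64, 64), dtype=np.uint8); pred[10:40, 10:40] = 1
gt   = np.zeros((64, 64), dtype=np.uint8); gt[12:42, 12:42] = 1
print(dice_coefficient(pred, gt), volumetric_similarity(pred, gt))
```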

Integrating artificial intelligence with Gamma Knife radiosurgery in treating meningiomas and schwannomas: a review.

Alhosanie TN, Hammo B, Klaib AF, Alshudifat A

PubMed paper · Sep 18, 2025
Meningiomas and schwannomas are benign tumors that affect the central nervous system, comprising up to one-third of intracranial neoplasms. Gamma Knife radiosurgery (GKRS), a form of stereotactic radiosurgery (SRS), is a radiation therapy technique. Although referred to as "surgery," GKRS does not involve incisions; the device delivers highly focused gamma rays to treat lesions or tumors, primarily in the brain. In radiation oncology, machine learning (ML) has been applied to outcome prediction, quality control, treatment planning, and image segmentation. This review showcases the advantages of integrating artificial intelligence with Gamma Knife technology in treating schwannomas and meningiomas. The review adheres to PRISMA guidelines: we searched the PubMed, Scopus, and IEEE databases to identify studies published between 2021 and March 2025 that met our inclusion and exclusion criteria, focusing on AI algorithms applied to patients with vestibular schwannoma and meningioma treated with GKRS. Two reviewers participated in the data extraction and quality assessment process. A total of nine studies were reviewed. One distinguished deep learning (DL) model is a dual-pathway convolutional neural network (CNN) that integrates T1-weighted (T1W) and T2-weighted (T2W) MRI scans; tested on 861 patients who underwent GKRS, it achieved a Dice Similarity Coefficient (DSC) of 0.90. ML-based radiomics models have also demonstrated that certain radiomic features can predict the response of vestibular schwannomas and meningiomas to radiosurgery; among these, a neural network model exhibited the best performance. AI models were also employed to predict complications following GKRS, such as peritumoral edema: a Random Survival Forest (RSF) model built from clinical, semantic, and radiomics variables achieved C-index scores of 0.861 and 0.780, enabling classification of patients into high- and low-risk categories for developing post-GKRS edema. AI and ML models show great potential in tumor segmentation, volumetric assessment, and prediction of treatment outcomes for vestibular schwannomas and meningiomas treated with GKRS. However, successful clinical implementation relies on overcoming challenges related to external validation, standardization, and computational demands. Future research should focus on large-scale, multi-institutional validation studies, integration of multimodal data, and cost-effective strategies for deploying AI technologies.

Bridging the quality gap: Robust colon wall segmentation in noisy transabdominal ultrasound.

Gago L, González MAF, Engelmann J, Remeseiro B, Igual L

PubMed paper · Sep 18, 2025
Colon wall segmentation in transabdominal ultrasound is challenging due to variations in image quality, speckle noise, and ambiguous boundaries. Existing methods struggle with low-quality images due to their inability to adapt to varying noise levels, poor boundary definition, and reduced contrast in ultrasound imaging, resulting in inconsistent segmentation performance. We present a novel quality-aware segmentation framework that simultaneously predicts image quality and adapts the segmentation process accordingly. Our approach uses a U-Net architecture with a ConvNeXt encoder backbone, enhanced with a parallel quality prediction branch that serves as a regularization mechanism. Our model learns robust features by explicitly modeling image quality during training. We evaluate our method on the C-TRUS dataset and demonstrate superior performance compared to state-of-the-art approaches, particularly on challenging low-quality images. Our method achieves Dice scores of 0.7780, 0.7025, and 0.5970 for high-, medium-, and low-quality images, respectively. The proposed quality-aware segmentation framework represents a significant step toward clinically viable automated colon wall segmentation systems.
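A hedged sketch of the parallel quality-branch idea: a shared encoder feeds both a segmentation decoder and an image-quality classifier, so the auxiliary quality task regularizes the shared features. The toy encoder/decoder below stands in for the paper's U-Net with ConvNeXt backbone; all layer sizes are assumptions:

```python
# Two-headed network: segmentation output plus a parallel quality branch.
import torch
import torch.nn as nn

class QualityAwareSegNet(nn.Module):
    def __init__(self, n_quality_levels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(               # stand-in for ConvNeXt
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.GELU(),
        )
        self.decoder = nn.Sequential(               # stand-in for U-Net decoder
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.GELU(),
            nn.ConvTranspose2d(32, 1, 2, stride=2),
        )
        self.quality_head = nn.Sequential(          # parallel quality branch
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_quality_levels)
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.decoder(feats), self.quality_head(feats)

model = QualityAwareSegNet()
seg_logits, quality_logits = model(torch.randn(2, 1, 128, 128))
# Training would combine a Dice/BCE segmentation loss with a
# cross-entropy loss on the image-quality label.
print(seg_logits.shape, quality_logits.shape)
```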

Patient-Specific Cardio-Respiratory Model for Optimization of Cardiac Radioablation.

Rigal L, Bellec J, Lemaire L, Duverge L, Benali K, Lederlin M, Martins R, De Crevoisier R, Simon A

PubMed paper · Sep 17, 2025
Stereotactic Arrhythmia Radioablation (STAR) is a promising treatment for refractory ventricular tachycardia. However, its precision may be hampered by cardiac and respiratory motion, and multiple techniques exist to mitigate the effects of these displacements. The purpose of this work was to generate, from dynamic cardiac and respiratory CT scans, a patient-specific dynamic model of the structures of interest that enables treatment simulation for the evaluation of motion management methods. Deep learning-based segmentation was used to extract the geometry of the cardiac structures, whose deformations and displacements were assessed using deformable and rigid image registration. Combining the model with dose maps enabled evaluation of the dose locally accumulated during treatment. The reproducibility of each step was evaluated against expert references, and treatment simulations were evaluated using data from a physical phantom. Use of the model was illustrated on data from nine patients, demonstrating that the impact of cardiorespiratory dynamics is potentially important and highly patient-specific, and enabling future evaluations of motion management methods.
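One plausible reading of the dose-accumulation step, sketched below under stated assumptions: per-phase dose maps are warped into a reference anatomy using known displacement fields, then summed with phase weights. This is illustrative, not the authors' pipeline:

```python
# Illustrative local dose accumulation across motion phases.
import numpy as np
from scipy.ndimage import map_coordinates

def accumulate_dose(phase_doses, displacement_fields, phase_weights):
    """phase_doses: list of 3D dose arrays; displacement_fields: list of
    (3, z, y, x) voxel displacements mapping reference -> phase;
    phase_weights: fraction of the cardiac/breathing cycle per phase."""
    ref_shape = phase_doses[0].shape
    grid = np.indices(ref_shape).astype(np.float64)   # reference voxel grid
    total = np.zeros(ref_shape)
    for dose, dvf, w in zip(phase_doses, displacement_fields, phase_weights):
        coords = grid + dvf                           # where each ref voxel lands
        total += w * map_coordinates(dose, coords, order=1, mode="nearest")
    return total
```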

Consistent View Alignment Improves Foundation Models for 3D Medical Image Segmentation

Puru Vaish, Felix Meister, Tobias Heimann, Christoph Brune, Jelmer M. Wolterink

arXiv preprint · Sep 17, 2025
Many recent approaches in representation learning implicitly assume that uncorrelated views of a data point are sufficient to learn meaningful representations for various downstream tasks. In this work, we challenge this assumption and demonstrate that meaningful structure in the latent space does not emerge naturally; instead, it must be explicitly induced. We propose a method that aligns representations from different views of the data so as to capture complementary information without inducing false positives. Our experiments show that the proposed self-supervised learning method, Consistent View Alignment, improves performance on downstream tasks, highlighting the critical role of structured view alignment in learning effective representations. Our method achieved first and second place in the MICCAI 2025 SSL3D challenge when using a Primus vision transformer and a ResEnc convolutional neural network, respectively. The code and pretrained model weights are released at https://github.com/Tenbatsu24/LatentCampus.
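To make the alignment idea concrete, here is a minimal InfoNCE-style objective that pulls paired views together while the contrastive denominator discourages false positives; the paper's actual Consistent View Alignment loss may differ, and the temperature and shapes below are assumptions:

```python
# Minimal self-supervised view-alignment objective.
import torch
import torch.nn.functional as F

def view_alignment_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two views of the same samples."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    # Row i of logits should peak at column i: paired views are pulled
    # together, unrelated pairs are pushed apart (avoiding false positives).
    logits = z1 @ z2.t() / 0.1                       # temperature = 0.1
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)

loss = view_alignment_loss(torch.randn(8, 128), torch.randn(8, 128))
print(loss)
```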

Advancing X-ray microcomputed tomography image processing of avian eggshells: An improved registration metric for multiscale 3D images and resolution-enhanced segmentation of eggshell pores using edge-attentive neural networks.

Jia S, Piché N, McKee MD, Reznikov N

PubMed paper · Sep 17, 2025
Avian eggs exhibit a variety of shapes and sizes, reflecting different reproductive strategies. The eggshell not only protects the egg contents, but also regulates gas and water vapor exchange vital for embryonic development. While many studies have explored eggshell ultrastructure, the distribution of pores across the entire shell is less well understood because of a trade-off between resolution and field-of-view in imaging. To overcome this, a neural network was developed for resolution enhancement of low-resolution 3D tomographic data, while performing voxel-wise labeling. Trained on X-ray microcomputed tomography images of ostrich, guillemot and crow eggshells from a natural history museum collection, the model used stepwise magnification to create low- and high-resolution training sets. Registration performance was validated with a novel metric based on local grayscale gradients. An edge-attentive loss function prevented bias towards the dominant background class (95% of all voxels), ensuring accurate labeling of eggshell (5%) and pore (0.1%) voxels. The results indicate that besides edge-attention and class balancing, 3D context preservation and 3D convolution are of paramount importance for extrapolating subvoxel features.
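A rough sketch of how class balancing and edge attention can be combined for the 95% / 5% / 0.1% background/shell/pore voxel imbalance described above; the pooling-based edge detector and the weights below are illustrative assumptions, not the paper's exact loss:

```python
# Class-weighted cross-entropy with extra weight on inter-class boundaries.
import torch
import torch.nn.functional as F

def edge_attentive_loss(logits, labels, class_weights, edge_boost=4.0):
    """logits: (B, 3, D, H, W); labels: (B, D, H, W) with classes 0/1/2."""
    per_voxel = F.cross_entropy(logits, labels, weight=class_weights,
                                reduction="none")
    # Mark voxels whose 3x3x3 neighborhood contains another class (edges).
    lab = labels.unsqueeze(1).float()
    local_max = F.max_pool3d(lab, 3, stride=1, padding=1)
    local_min = -F.max_pool3d(-lab, 3, stride=1, padding=1)
    edges = (local_max != local_min).squeeze(1).float()
    weights = 1.0 + edge_boost * edges
    return (weights * per_voxel).sum() / weights.sum()

w = torch.tensor([1.0, 19.0, 950.0])   # inverse-frequency-style class weights
loss = edge_attentive_loss(torch.randn(1, 3, 16, 16, 16),
                           torch.randint(0, 3, (1, 16, 16, 16)), w)
```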

Non-iterative and uncertainty-aware MRI-based liver fat estimation using an unsupervised deep learning method.

Meneses JP, Tejos C, Makalic E, Uribe S

PubMed paper · Sep 17, 2025
Liver proton density fat fraction (PDFF), the ratio between fat-only and overall proton densities, is an extensively validated biomarker associated with several diseases. In recent years, numerous deep learning-based methods for estimating PDFF have been proposed to optimize acquisition and post-processing times without sacrificing accuracy compared to conventional methods. However, the lack of interpretability and the often poor generalizability of these DL-based models undermine the adoption of such techniques in clinical practice. In this work, we propose an Artificial Intelligence-based Decomposition of water and fat with Echo Asymmetry and Least-squares (AI-DEAL) method, designed to estimate both PDFF and the associated uncertainty maps. Once trained, AI-DEAL performs one-shot MRI water-fat separation by first calculating the nonlinear confounder variables, R2* and the off-resonance field. It then employs a weighted least squares approach to compute water-only and fat-only signals, along with their corresponding covariance matrix, which are subsequently used to derive PDFF and its associated uncertainty. We validated our method using in vivo liver CSE-MRI, a fat-water phantom, and a numerical phantom. AI-DEAL demonstrated PDFF biases of 0.25% and -0.12% at two liver ROIs, outperforming state-of-the-art deep learning-based techniques. Although trained on in vivo data, our method exhibited PDFF biases of -3.43% in the fat-water phantom and -0.22% in the numerical phantom with no added noise; the latter bias remained approximately constant when noise was introduced. Furthermore, the estimated uncertainties showed good agreement with the observed errors and the variations within each ROI, highlighting their potential value for assessing the reliability of the resulting PDFF maps.
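The weighted-least-squares step lends itself to a compact sketch: once R2* and the off-resonance field are fixed, the multi-echo signal is linear in the water and fat amplitudes, and the WLS covariance yields a per-voxel uncertainty. The echo times, single-peak fat shift, and noise model below are illustrative assumptions, not the paper's acquisition settings:

```python
# WLS water-fat separation at one voxel, with covariance-based uncertainty.
import numpy as np

te = np.array([1.2, 2.0, 2.8, 3.6, 4.4, 5.2]) * 1e-3   # echo times (s), assumed
f_fat = -440.0                                          # single-peak fat shift at 3T (Hz)

def wls_water_fat(signal, r2s, fb, noise_var=1.0):
    """signal: complex echoes at one voxel; r2s (1/s) and fb (Hz) assumed known."""
    phasor = np.exp((1j * 2 * np.pi * fb - r2s) * te)   # shared decay/phase term
    A = np.stack([phasor, phasor * np.exp(1j * 2 * np.pi * f_fat * te)], axis=1)
    AhA = A.conj().T @ A / noise_var
    cov = np.linalg.inv(AhA)                            # WLS covariance of [water, fat]
    wf = cov @ (A.conj().T @ signal / noise_var)
    w, f = np.abs(wf)
    return f / (w + f), cov                             # PDFF and uncertainty source

# Simulate a noiseless 30% fat voxel and recover PDFF
true_w, true_f = 0.7, 0.3
phasor = np.exp((1j * 2 * np.pi * 10 - 40) * te)
s = phasor * (true_w + true_f * np.exp(1j * 2 * np.pi * f_fat * te))
print(wls_water_fat(s, r2s=40.0, fb=10.0)[0])           # ~0.3
```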

DLMUSE: Robust Brain Segmentation in Seconds Using Deep Learning.

Bashyam VM, Erus G, Cui Y, Wu D, Hwang G, Getka A, Singh A, Aidinis G, Baik K, Melhem R, Mamourian E, Doshi J, Davison A, Nasrallah IM, Davatzikos C

PubMed paper · Sep 17, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content</i>. Purpose To introduce an open-source deep learning brain segmentation model for fully automated brain MRI segmentation, enabling rapid segmentation and facilitating large-scale neuroimaging research. Materials and Methods In this retrospective study, a deep learning model was developed using a diverse training dataset of 1900 MRI scans (ages 24-93 with a mean of 65 years (SD: 11.5 years) and 1007 females and 893 males) with reference labels generated using a multiatlas segmentation method with human supervision. The final model was validated using 71391 scans from 14 studies. Segmentation quality was assessed using Dice similarity and Pearson correlation coefficients with reference segmentations. Downstream predictive performance for brain age and Alzheimer's disease was evaluated by fitting machine learning models. Statistical significance was assessed using Mann-Whittney U and McNemar's tests. Results The DLMUSE model achieved high correlation (r = 0.93-0.95) and agreement (median Dice scores = 0.84-0.89) with reference segmentations across the testing dataset. Prediction of brain age using DLMUSE features achieved a mean absolute error of 5.08 years, similar to that of the reference method (5.15 years, <i>P</i> = .56). Classification of Alzheimer's disease using DLMUSE features achieved an accuracy of 89% and F1-score of 0.80, which were comparable to values achieved by the reference method (89% and 0.79, respectively). DLMUSE segmentation speed was over 10000 times faster than that of the reference method (3.5 seconds vs 14 hours). Conclusion DLMUSE enabled rapid brain MRI segmentation, with performance comparable to that of state-of-theart methods across diverse datasets. The resulting open-source tools and user-friendly web interface can facilitate large-scale neuroimaging research and wide utilization of advanced segmentation methods. ©RSNA, 2025.

U-net-based segmentation of foreign bodies and ghost images in panoramic radiographs.

Çelebi E, Akkaya N, Ünsal G

PubMed paper · Sep 17, 2025
This study aimed to develop and evaluate a deep convolutional neural network (CNN) model for the automatic segmentation of foreign bodies and ghost images in panoramic radiographs (PRs), which can complicate diagnostic interpretation. A dataset of 11,226 PRs from four devices was annotated by two radiologists using the Computer Vision Annotation Tool. A U-Net-based CNN model was trained and evaluated using Intersection over Union (IoU), Dice coefficient, accuracy, precision, recall, and F1 score. For foreign body segmentation, the model achieved validation Dice and IoU scores of 0.9439 and 0.9043, and test scores of 0.9657 and 0.9371. For ghost image segmentation, validation Dice and IoU were 0.8234 and 0.7388, with test scores of 0.8749 and 0.8145. Overall test accuracy exceeded 0.999. The AI model showed high accuracy in segmenting foreign bodies and ghost images in PRs, indicating its potential to assist radiologists. Further clinical validation is recommended.