
Modality-Aware Feature Matching: A Comprehensive Review of Single- and Cross-Modality Techniques

Weide Liu, Wei Zhou, Jun Liu, Ping Hu, Jun Cheng, Jungong Han, Weisi Lin

arXiv preprint · Jul 30 2025
Feature matching is a cornerstone task in computer vision, essential for applications such as image retrieval, stereo matching, 3D reconstruction, and SLAM. This survey comprehensively reviews modality-based feature matching, exploring traditional handcrafted methods and emphasizing contemporary deep learning approaches across various modalities, including RGB images, depth images, 3D point clouds, LiDAR scans, medical images, and vision-language interactions. Traditional methods, leveraging detectors like Harris corners and descriptors such as SIFT and ORB, demonstrate robustness under moderate intra-modality variations but struggle with significant modality gaps. Contemporary deep learning-based methods, exemplified by CNN-based detector-descriptor networks like SuperPoint and detector-free transformer matchers like LoFTR, substantially improve robustness and adaptability across modalities. We highlight modality-aware advancements, such as geometric and depth-specific descriptors for depth images, sparse and dense learning methods for 3D point clouds, attention-enhanced neural networks for LiDAR scans, and specialized solutions like the MIND descriptor for complex medical image matching. Cross-modal applications, particularly in medical image registration and vision-language tasks, underscore the evolution of feature matching to handle increasingly diverse data interactions.
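
As a concrete reference for the handcrafted pipeline the survey contrasts with learned matchers, the following minimal sketch uses OpenCV's ORB detector and brute-force Hamming matching; the image paths are placeholders, not files from the survey.

# Minimal handcrafted feature-matching pipeline (ORB + brute-force Hamming),
# the classical baseline contrasted with learned matchers such as SuperPoint
# and LoFTR. Image paths are placeholders.
import cv2

img1 = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)           # detect keypoints, compute binary descriptors
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; cross-checking enforces
# mutual nearest neighbours, a cheap outlier filter before geometric verification.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} mutual matches; best distance {matches[0].distance:.0f}")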

Applications of artificial intelligence and advanced imaging in pediatric diffuse midline glioma.

Haddadi Avval A, Banerjee S, Zielke J, Kann BH, Mueller S, Rauschecker AM

PubMed · Jul 30 2025
Diffuse midline glioma (DMG) is a rare, aggressive, and fatal tumor that largely occurs in the pediatric population. To improve outcomes, it is important to characterize DMGs, which can be performed via magnetic resonance imaging (MRI) assessment. Recently, artificial intelligence (AI) and advanced imaging have demonstrated their potential to improve the evaluation of various brain tumors, gleaning more information from imaging data than is possible without these methods. This narrative review compiles the existing literature on the intersection of MRI-based AI use and DMG tumors. The applications of AI in DMG revolve around classification and diagnosis, segmentation, radiogenomics, and prognosis/survival prediction. Currently published articles have utilized a wide spectrum of AI algorithms, from traditional machine learning and radiomics to neural networks. Challenges include the lack of cohorts of DMG patients with publicly available, multi-institutional, multimodal imaging and genomics datasets, as well as the overall rarity of the disease. As an adjunct to AI, advanced MRI techniques, including diffusion-weighted imaging, perfusion-weighted imaging, and magnetic resonance spectroscopy (MRS), as well as positron emission tomography (PET), provide additional insights into DMGs. Establishing AI models in conjunction with advanced imaging modalities has the potential to push clinical practice toward precision medicine.

WSDC-ViT: a novel transformer network for pneumonia image classification based on windows scalable attention and dynamic rectified linear unit convolutional modules.

Gu Y, Bai H, Chen M, Yang L, Zhang B, Wang J, Lu X, Li J, Liu X, Yu D, Zhao Y, Tang S, He Q

PubMed · Jul 30 2025
Accurate differential diagnosis of pneumonia remains a challenging task, as different types of pneumonia require distinct treatment strategies. Early and precise diagnosis is crucial for minimizing the risk of misdiagnosis and for effectively guiding clinical decision-making and monitoring treatment response. This study proposes the WSDC-ViT network to enhance computer-aided pneumonia detection and alleviate the diagnostic workload for radiologists. Unlike existing models such as Swin Transformer or CoAtNet, which primarily improve attention mechanisms through hierarchical designs or convolutional embedding, WSDC-ViT introduces a novel architecture that simultaneously enhances global and local feature extraction through a scalable self-attention mechanism and convolutional refinement. Specifically, the network integrates a scalable self-attention mechanism that decouples the query, key, and value dimensions to reduce computational overhead and improve contextual learning, while an interactive window-based attention module further strengthens long-range dependency modeling. Additionally, a convolution-based module equipped with a dynamic ReLU activation function is embedded within the transformer encoder to capture fine-grained local details and adaptively enhance feature expression. Experimental results demonstrate that the proposed method achieves an average classification accuracy of 95.13% and an F1-score of 95.63% on a chest X-ray dataset, along with 99.36% accuracy and a 99.34% F1-score on a CT dataset. These results highlight the model's superior performance compared to existing automated pneumonia classification approaches, underscoring its potential clinical applicability.
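
To illustrate the dynamic rectified linear unit idea that WSDC-ViT embeds in its convolutional module, here is a hedged PyTorch sketch of a DY-ReLU-style activation, in which the slopes of a two-piece linear function are predicted per channel from a global pooling branch; the paper's exact formulation may differ.

# Hedged sketch of a dynamic ReLU: per-channel coefficients of a two-piece
# linear activation are predicted from a squeeze-style pooling branch, so the
# nonlinearity adapts to the input content (cf. DY-ReLU). Illustrative only.
import torch
import torch.nn as nn

class DynamicReLU(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 2 * channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        theta = 2.0 * self.fc(x) - 1.0             # coefficients in [-1, 1]
        a1, a2 = theta.view(b, 2, c, 1, 1).unbind(dim=1)
        a1 = 1.0 + a1                              # first slope varies around the identity
        return torch.maximum(a1 * x, a2 * x)       # input-dependent piecewise-linear unit

x = torch.randn(2, 64, 32, 32)
print(DynamicReLU(64)(x).shape)                    # torch.Size([2, 64, 32, 32])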

Evaluation of GPT-4o for multilingual translation of radiology reports across imaging modalities.

Terzis R, Salam B, Nowak S, Mueller PT, Mesropyan N, Oberlinkels L, Efferoth AF, Kravchenko D, Voigt M, Ginzburg D, Pieper CC, Hayawi M, Kuetting D, Afat S, Maintz D, Luetkens JA, Kaya K, Isaak A

PubMed · Jul 29 2025
Large language models (LLMs) like GPT-4o offer multilingual, real-time translation capabilities. This study evaluates GPT-4o's effectiveness in translating radiology reports into different languages. In this experimental two-center study, 100 real-world radiology reports from four imaging modalities (X-ray, ultrasound, CT, MRI) were randomly selected and fully anonymized. Reports were translated from German into English, French, Spanish, and Russian using GPT-4o with zero-shot prompting (n = 400 translations). Eight bilingual radiologists (two per language) evaluated the translations for general readability, overall quality, and utility for translators using 5-point Likert scales (5 = best, 1 = worst). Binary (yes/no) questions assessed potentially harmful errors, completeness, and factual correctness. The average processing time of GPT-4o for translating reports ranged from 9 to 24 s. Overall translation quality achieved a median of 4.5 (IQR 4-5), with English (5 [4-5]), French, and Spanish (each 4.5 [4-5]) significantly outperforming Russian (4 [3.5-4]; each p < 0.05). Usefulness for translators was rated highest for English (5 [5-5]; p < 0.05 against other languages). Readability scores and translation completeness were significantly higher for translations into Spanish, English, and French than for Russian (each p < 0.05). Factual correctness averaged 79%, with English (84%) and French (83%) outperforming Russian (69%) (each p < 0.05). Potentially harmful errors were identified in 4% of translations, primarily in Russian (9%). GPT-4o demonstrated robust performance in translating radiology reports across multiple languages, with limitations observed in Russian translations.
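
A minimal sketch of the zero-shot translation setup described above, using the OpenAI Python client; the study's exact prompt wording and decoding settings are not reported in the abstract, so those shown here are assumptions.

# Hedged sketch of zero-shot radiology-report translation with GPT-4o.
# The prompt text and temperature are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def translate_report(report_de: str, target_language: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,  # deterministic output; the study's setting is not stated
        messages=[
            {"role": "system",
             "content": f"Translate the following German radiology report into "
                        f"{target_language}. Preserve all findings and measurements."},
            {"role": "user", "content": report_de},
        ],
    )
    return response.choices[0].message.content

print(translate_report("Unauffälliger Röntgen-Thorax.", "English"))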

Multi-Faceted Consistency learning with active cross-labeling for barely-supervised 3D medical image segmentation.

Wu X, Xu Z, Tong RK

PubMed · Jul 29 2025
Deep learning-driven 3D medical image segmentation generally necessitates dense voxel-wise annotations, which are expensive and labor-intensive to acquire. Cross-annotation, which labels only a few orthogonal slices per scan, has recently emerged as a cost-effective alternative that better preserves the shape and precise boundaries of the 3D object than traditional weak labeling methods such as bounding boxes and scribbles. However, learning from such sparse labels, referred to as barely-supervised learning (BSL), remains challenging due to less fine-grained object perception, less compact class features and inferior generalizability. To tackle these challenges and foster collaboration between model training and human expertise, we propose a Multi-Faceted ConSistency learning (MF-ConS) framework with a Diversity and Uncertainty Sampling-based Active Learning (DUS-AL) strategy, specifically designed for the active BSL scenario. This framework combines a cross-annotation BSL strategy, where only three orthogonal slices are labeled per scan, with an AL paradigm guided by DUS to direct human-in-the-loop annotation toward the most informative volumes under a fixed budget. Built upon a teacher-student architecture, MF-ConS integrates three complementary consistency regularization modules: (i) neighbor-informed object prediction consistency for advancing fine-grained object perception by encouraging the student model to infer complete segmentation from masked inputs; (ii) prototype-driven consistency, which enhances intra-class compactness and discriminativeness by aligning latent feature and decision spaces using fused prototypes; and (iii) stability constraint that promotes model robustness against input perturbations. Extensive experiments on three benchmark datasets demonstrate that MF-ConS (DUS-AL) consistently outperforms state-of-the-art methods under extremely limited annotation.
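
The core teacher-student consistency pattern underlying MF-ConS can be sketched as follows: an EMA teacher predicts on the full volume while the student predicts on a masked copy, echoing the neighbor-informed object prediction consistency. The network and masking details below are simplified assumptions, not the paper's exact design.

# Hedged sketch of masked-input teacher-student consistency for 3D segmentation.
import copy
import torch
import torch.nn.functional as F

def ema_update(teacher, student, momentum: float = 0.99):
    # Teacher weights track an exponential moving average of the student's.
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.data.mul_(momentum).add_(ps.data, alpha=1.0 - momentum)

def consistency_loss(student, teacher, volume: torch.Tensor, mask_ratio: float = 0.25):
    with torch.no_grad():
        target = torch.softmax(teacher(volume), dim=1)        # full-volume pseudo-label
    mask = (torch.rand_like(volume) > mask_ratio).float()     # random voxel dropout
    pred = torch.log_softmax(student(volume * mask), dim=1)   # infer from masked input
    return F.kl_div(pred, target, reduction="batchmean")

student = torch.nn.Conv3d(1, 2, kernel_size=3, padding=1)     # stand-in for a 3D U-Net
teacher = copy.deepcopy(student)
loss = consistency_loss(student, teacher, torch.randn(1, 1, 16, 32, 32))
ema_update(teacher, student)
print(float(loss))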

Feature Selection in Healthcare Datasets: Towards a Generalizable Solution.

Maruotto I, Ciliberti FK, Gargiulo P, Recenti M

PubMed · Jul 29 2025
The increasing dimensionality of healthcare datasets presents major challenges for clinical data analysis and interpretation. This study introduces a scalable ensemble feature selection (FS) strategy optimized for multi-biometric healthcare datasets, aiming to address the need for dimensionality reduction, identify the most significant features, improve machine learning model performance, and enhance interpretability in a clinical context. The novel waterfall selection sequentially integrates (a) tree-based feature ranking and (b) greedy backward feature elimination, producing several candidate feature sets. These subsets are then combined using a specific merging strategy to produce a single set of clinically relevant features. The overall method is applied to two healthcare datasets: the biosignal-based BioVRSea dataset, containing electromyography, electroencephalography, and center-of-pressure data for postural control and motion sickness assessment, and the image-based SinPain dataset, which includes MRI and CT-scan data to study knee osteoarthritis. Our ensemble FS approach demonstrated effective dimensionality reduction, shrinking certain feature subsets by more than 50%. The reduced feature sets maintained or improved model classification metrics when tested with Support Vector Machine and Random Forest models. The proposed ensemble FS method retains the features essential for distinguishing clinical outcomes, leading to models that are both computationally efficient and clinically interpretable. Furthermore, the adaptability of this method across two heterogeneous healthcare datasets and the scalability of the algorithm indicate its potential as a generalizable tool in healthcare studies. This approach can advance clinical decision support systems, making high-dimensional healthcare datasets more accessible and clinically interpretable.
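
A hedged sketch of the two-stage waterfall idea with scikit-learn: (a) rank features with a tree ensemble and keep the top-ranked subset, then (b) prune it by greedy backward elimination. The thresholds and models here are illustrative assumptions, not the study's configuration.

# Hedged sketch: tree-based ranking followed by greedy backward elimination.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector

X, y = make_classification(n_samples=300, n_features=50, n_informative=8, random_state=0)

# (a) tree-based ranking: keep the 20 most important features
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:20]

# (b) greedy backward elimination on the surviving subset
sfs = SequentialFeatureSelector(
    RandomForestClassifier(n_estimators=100, random_state=0),
    n_features_to_select=10, direction="backward", cv=3,
).fit(X[:, top], y)

selected = top[sfs.get_support()]
print("selected features:", sorted(selected))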

MFFBi-Unet: Merging Dynamic Sparse Attention and Multi-scale Feature Fusion for Medical Image Segmentation.

Sun B, Liu C, Wang Q, Bi K, Zhang W

PubMed · Jul 29 2025
The advancement of deep learning has driven extensive research validating the effectiveness of U-Net-style symmetric encoder-decoder architectures based on Transformers for medical image segmentation. However, the inherent design requiring attention mechanisms to compute token affinities across all spatial locations leads to prohibitive computational complexity and substantial memory demands. Recent efforts have attempted to address these limitations through sparse attention mechanisms, yet existing approaches employing artificial, content-agnostic sparse attention patterns demonstrate limited capability in modeling long-range dependencies effectively. We propose MFFBi-Unet, a novel architecture incorporating dynamic sparse attention through bi-level routing, enabling context-aware computation allocation with enhanced adaptability. The encoder-decoder module integrates BiFormer to optimize semantic feature extraction and facilitate high-fidelity feature map reconstruction. A novel Multi-scale Feature Fusion (MFF) module in skip connections synergistically combines multi-level contextual information with processed multi-scale features. Extensive evaluations on multiple public medical benchmarks demonstrate that our method consistently outperforms competing approaches. Notably, it achieves statistically significant improvements over state-of-the-art methods such as MISSFormer, by 2.02% and 1.28% in Dice score on the respective benchmarks.
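
As an illustration of the generic multi-scale fusion pattern used in skip connections, here is a hedged PyTorch sketch in which parallel dilated convolutions at several receptive fields are concatenated and fused back to the skip's channel width; the actual MFF design in MFFBi-Unet may differ.

# Hedged sketch of a multi-scale feature fusion block for a skip connection.
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 4)                     # three receptive-field scales
        ])
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, skip: torch.Tensor) -> torch.Tensor:
        multi = torch.cat([b(skip) for b in self.branches], dim=1)
        return skip + self.fuse(multi)             # residual fusion of the scales

print(MultiScaleFusion(64)(torch.randn(1, 64, 56, 56)).shape)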

AI generated annotations for Breast, Brain, Liver, Lungs, and Prostate cancer collections in the National Cancer Institute Imaging Data Commons.

Murugesan GK, McCrumb D, Soni R, Kumar J, Nuernberg L, Pei L, Wagner U, Granger S, Fedorov AY, Moore S, Van Oss J

PubMed · Jul 29 2025
The Artificial Intelligence in Medical Imaging (AIMI) initiative aims to enhance the National Cancer Institute's (NCI) Image Data Commons (IDC) by releasing fully reproducible nnU-Net models, along with AI-assisted segmentation for cancer radiology images. In this extension of our earlier work, we created high-quality, AI-annotated imaging datasets for 11 IDC collections, spanning computed tomography (CT) and magnetic resonance imaging (MRI) of the lungs, breast, brain, kidneys, prostate, and liver. Each nnU-Net model was trained on open-source datasets, and a portion of the AI-generated annotations was reviewed and corrected by board-certified radiologists. Both the AI and radiologist annotations were encoded in compliance with the Digital Imaging and Communications in Medicine (DICOM) standard, ensuring seamless integration into the IDC collections. By making these models, images, and annotations publicly accessible, we aim to facilitate further research and development in cancer imaging.

Implicit Spatiotemporal Bandwidth Enhancement Filter by Sine-activated Deep Learning Model for Fast 3D Photoacoustic Tomography

I Gede Eka Sulistyawan, Takuro Ishii, Riku Suzuki, Yoshifumi Saijo

arXiv preprint · Jul 28 2025
3D photoacoustic tomography (3D-PAT) using high-frequency hemispherical transducers offers near-omnidirectional reception and enhanced sensitivity to the finer structural details encoded in the high-frequency components of the broadband photoacoustic (PA) signal. However, practical constraints such as a limited number of channels with bandlimited sampling rates often result in sparse, bandlimited sensors that degrade image quality. To address this, we revisit the 2D deep learning (DL) approach applied directly to sensor-wise PA radio-frequency (PARF) data. Specifically, we introduce sine activation into the DL model to restore the broadband nature of PARF signals from the observed bandlimited, high-frequency PARF data. Given the scarcity of 3D training data, we employ a simplified training strategy that simulates random spherical absorbers. This combination of a sine-activated model and randomized training is designed to emphasize bandwidth learning over dataset memorization. Our model was evaluated on a leaf-skeleton phantom, a micro-CT-verified 3D spiral phantom, and in-vivo human palm vasculature. The results showed that the proposed training mechanism for the sine-activated model generalized well across the different tests, effectively increasing the sensor density and recovering the spatiotemporal bandwidth. Qualitatively, the sine-activated model uniquely enhanced high-frequency content, producing clearer vascular structures with fewer artefacts. Quantitatively, the sine-activated model exhibits full bandwidth at the -12 dB spectrum and a significantly higher contrast-to-noise ratio with minimal loss of structural similarity index. Lastly, we optimized our approach to enable fast, enhanced 3D-PAT at 2 volumes per second for practical imaging of free-moving targets.
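
The sine-activation idea follows the SIREN family: replacing ReLU with sin(w0 · Wx + b) biases a network toward broadband signal representations, the property exploited here to restore high-frequency PARF content. A minimal PyTorch sketch, with w0 = 30 as the common SIREN default rather than the paper's (unreported) value:

# Minimal sine-activated layer (SIREN-style); w0 controls the frequency range.
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    def __init__(self, in_features: int, out_features: int, w0: float = 30.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.w0 = w0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sin(self.w0 * self.linear(x))

# e.g. map a bandlimited PARF time window to a broadband estimate
layer = SineLayer(128, 128)
print(layer(torch.randn(4, 128)).shape)            # torch.Size([4, 128])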

Continual learning in medical image analysis: A comprehensive review of recent advancements and future prospects.

Kumari P, Chauhan J, Bozorgpour A, Huang B, Azad R, Merhof D

PubMed · Jul 28 2025
Medical image analysis has witnessed remarkable advancements, even surpassing human-level performance in recent years, driven by the rapid development of advanced deep-learning algorithms. However, when the inference dataset differs even slightly from what the model has seen during one-time training, model performance is greatly compromised. This situation requires restarting the training process using both the old and the new data, which is computationally costly, does not align with the human learning process, and raises storage and privacy concerns. Alternatively, continual learning has emerged as a crucial approach for developing unified and sustainable deep models that can handle new classes, new tasks, and the drifting nature of data in non-stationary environments across various application areas. Continual learning techniques enable models to adapt and accumulate knowledge over time, which is essential for maintaining performance on evolving datasets and novel tasks. Owing to its popularity and promising performance, continual learning is an active and emerging research topic in the medical field, and it hence demands a survey and taxonomy to clarify the current research landscape for medical image analysis. This systematic review provides a comprehensive overview of the state of the art in continual learning techniques applied to medical image analysis. We present an extensive survey of existing research, covering topics including catastrophic forgetting, data drift, stability, and plasticity requirements. Further, we provide an in-depth discussion of the key components of a continual learning framework, such as continual learning scenarios, techniques, evaluation schemes, and metrics. Continual learning techniques fall into several categories, including rehearsal, regularization, architectural, and hybrid strategies. We assess the popularity and applicability of these categories in various medical sub-fields like radiology and histopathology. Our exploration considers unique challenges in the medical domain, including costly data annotation, temporal drift, and the crucial need for benchmarking datasets to ensure consistent model evaluation. The paper also addresses current challenges and looks ahead to potential future research directions.
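
Of the categories listed above, rehearsal is the simplest to sketch: a small reservoir-sampled memory of past cases is replayed alongside each new batch to mitigate catastrophic forgetting. A minimal, framework-agnostic Python sketch (buffer size and usage are illustrative):

# Hedged sketch of a rehearsal (experience replay) buffer with reservoir sampling.
import random

class RehearsalBuffer:
    def __init__(self, capacity: int = 500):
        self.capacity, self.seen, self.memory = capacity, 0, []

    def add(self, example):
        # Reservoir sampling keeps a uniform sample over everything seen so far.
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.memory[j] = example

    def replay(self, k: int):
        return random.sample(self.memory, min(k, len(self.memory)))

buf = RehearsalBuffer(capacity=3)
for i in range(10):
    buf.add(("scan", i))
print(buf.replay(2))   # mix these into the next task's training batches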
