
FractMorph: A Fractional Fourier-Based Multi-Domain Transformer for Deformable Image Registration

Shayan Kebriti, Shahabedin Nabavi, Ali Gooya

arXiv preprint · Aug 17, 2025
Deformable image registration (DIR) is a crucial and challenging technique for aligning anatomical structures in medical images and is widely applied in diverse clinical applications. However, existing approaches often struggle to capture fine-grained local deformations and large-scale global deformations simultaneously within a unified framework. We present FractMorph, a novel 3D dual-parallel transformer-based architecture that enhances cross-image feature matching through multi-domain fractional Fourier transform (FrFT) branches. Each Fractional Cross-Attention (FCA) block applies parallel FrFTs at fractional angles of 0°, 45°, and 90°, along with a log-magnitude branch, to effectively extract local, semi-global, and global features at the same time. These features are fused via cross-attention between the fixed and moving image streams. A lightweight U-Net style network then predicts a dense deformation field from the transformer-enriched features. On the ACDC cardiac MRI dataset, FractMorph achieves state-of-the-art performance with an overall Dice Similarity Coefficient (DSC) of 86.45%, an average per-structure DSC of 75.15%, and a 95th-percentile Hausdorff distance (HD95) of 1.54 mm on our data split. We also introduce FractMorph-Light, a lightweight variant of our model with only 29.6M parameters, which maintains the superior accuracy of the main model while using approximately half the memory. Our results demonstrate that multi-domain spectral-spatial attention in transformers can robustly and efficiently model complex non-rigid deformations in medical images using a single end-to-end network, without the need for scenario-specific tuning or hierarchical multi-scale networks. The source code of our implementation is available at https://github.com/shayankebriti/FractMorph.
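The FrFT interpolates between the identity (angle 0°) and the ordinary Fourier transform (angle 90°), which is what lets one block see local, semi-global, and global structure at once. Below is a minimal sketch of the multi-angle idea using the fractional-matrix-power definition of the discrete FrFT; the separable 2D application and the placement of the log-magnitude branch are illustrative assumptions, not the paper's exact FCA block.

```python
import numpy as np

def frft_matrix(n: int, angle_deg: float) -> np.ndarray:
    """Discrete FrFT as a fractional power of the unitary DFT matrix.

    angle 0 -> identity, 90 -> ordinary DFT (one standard matrix-power
    definition; the paper's exact FrFT implementation may differ).
    """
    F = np.fft.fft(np.eye(n), norm="ortho")      # unitary DFT matrix
    w, V = np.linalg.eig(F)                      # eigenvalues on the unit circle
    a = angle_deg / 90.0
    return V @ np.diag(w ** a) @ np.linalg.inv(V)

def multi_domain_features(img: np.ndarray) -> list:
    """Local (0 deg), semi-global (45 deg), and global (90 deg) branches,
    plus a log-magnitude branch, applied separably to one 2D slice."""
    feats = []
    for angle in (0.0, 45.0, 90.0):
        Fr = frft_matrix(img.shape[0], angle)
        Fc = frft_matrix(img.shape[1], angle)
        feats.append(Fr @ img.astype(complex) @ Fc.T)
    feats.append(np.log1p(np.abs(feats[-1])))    # log-magnitude of the 90-deg branch
    return feats

slice2d = np.random.rand(32, 32)                 # stand-in for one MRI slice
branches = multi_domain_features(slice2d)
print([b.shape for b in branches])
```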

In vivo 3D ultrasound computed tomography of musculoskeletal tissues with generative neural physics

Zhijun Zeng, Youjia Zheng, Chang Su, Qianhang Wu, Hao Hu, Zeyuan Dong, Shan Gao, Yang Lv, Rui Tang, Ligang Cui, Zhiyong Hou, Weijun Lin, Zuoqiang Shi, Yubing Li, He Sun

arXiv preprint · Aug 17, 2025
Ultrasound computed tomography (USCT) is a radiation-free, high-resolution modality but remains limited for musculoskeletal imaging due to conventional ray-based reconstructions that neglect strong scattering. We propose a generative neural physics framework that couples generative networks with physics-informed neural simulation for fast, high-fidelity 3D USCT. By learning a compact surrogate of ultrasonic wave propagation from only dozens of cross-modality images, our method merges the accuracy of wave modeling with the efficiency and stability of deep learning. This enables accurate quantitative imaging of in vivo musculoskeletal tissues, producing spatial maps of acoustic properties beyond reflection-mode images. On synthetic and in vivo data (breast, arm, leg), we reconstruct 3D maps of tissue parameters in under ten minutes, with sensitivity to biomechanical properties in muscle and bone and resolution comparable to MRI. By overcoming computational bottlenecks in strongly scattering regimes, this approach advances USCT toward routine clinical assessment of musculoskeletal disease.
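The abstract does not specify the surrogate's architecture, but coupling networks with wave physics typically means penalizing the residual of the acoustic wave equation at sampled points. A toy PyTorch sketch of such a physics-informed residual follows; the network, the constant sound speed `c`, and the collocation sampling are illustrative assumptions, not the paper's model (which recovers spatially varying acoustic properties).

```python
import torch

net = torch.nn.Sequential(                      # toy surrogate u(x, y, t)
    torch.nn.Linear(3, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def wave_residual(xyt: torch.Tensor, c: float = 1.5) -> torch.Tensor:
    """Residual of u_tt - c^2 (u_xx + u_yy) = 0 computed via autograd."""
    xyt = xyt.requires_grad_(True)
    u = net(xyt)
    grads = torch.autograd.grad(u.sum(), xyt, create_graph=True)[0]
    second = []
    for i in range(3):                          # second derivatives wrt x, y, t
        g2 = torch.autograd.grad(grads[:, i].sum(), xyt, create_graph=True)[0]
        second.append(g2[:, i])
    u_xx, u_yy, u_tt = second
    return u_tt - (c ** 2) * (u_xx + u_yy)

pts = torch.rand(256, 3)                        # random (x, y, t) collocation points
loss = wave_residual(pts).pow(2).mean()         # physics loss term
loss.backward()
```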

The use of artificial intelligence (AI) to safely reduce the workload of breast cancer screening: a retrospective simulation study.

Gialias P, Wiberg MK, Brehl AK, Bjerner T, Gustafsson H

PubMed · Aug 17, 2025
Background: Artificial intelligence (AI)-based systems have the potential to increase the efficiency and effectiveness of breast cancer screening programs but need to be carefully validated before clinical implementation. Purpose: To retrospectively evaluate an AI system for safely reducing the workload of a double-reading breast cancer screening program. Material and Methods: All digital mammography (DM) screening examinations of women aged 40-74 years between August 2021 and January 2022 in Östergötland, Sweden were included. Analysis of the interval cancers (ICs) was performed in 2024. Each examination was double-read by two breast radiologists and processed by the AI system, which assigned a score of 1-10 to each examination based on increasing likelihood of cancer. In a retrospective simulation, the AI system was used for triaging: low-risk examinations (score 1-7) were selected for single reading and high-risk examinations (score 8-10) for double reading. Results: A total of 15,468 DMs were included. Using the AI triaging strategy, 10,473 (67.7%) examinations received scores of 1-7, resulting in a 34% workload reduction. Overall, 52 of 53 screen-detected cancers were assigned a score of 8-10 by the AI system. One cancer was missed by the AI system (score 4) but was detected by the radiologists. In total, 11 ICs were found in the 2024 analysis. Conclusion: Replacing one reader with an AI system for low-risk cases could safely reduce screening workload by 34%. Of the 11 ICs, three were identified correctly by the AI system at the 2021-2022 examination.
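The triage rule itself is easy to simulate: exams scoring 1-7 get a single reader, exams scoring 8-10 keep double reading. A small sketch with synthetic scores; the 10-point scale and the 8-10 cut-off come from the abstract, while the uniform score distribution is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.integers(1, 11, size=15_468)          # synthetic 1-10 AI risk scores

baseline_reads = 2 * scores.size                   # every exam double-read
triaged_reads = np.where(scores <= 7, 1, 2).sum()  # 1-7 single read, 8-10 double

reduction = 1 - triaged_reads / baseline_reads
print(f"simulated workload reduction: {reduction:.1%}")
# The study reports 67.7% of exams scoring 1-7, i.e. a 34% workload reduction.
```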

A Computer Vision and Machine Learning Approach to Classify Views in Distal Radius Radiographs.

Vemu R, Birhiray D, Darwish B, Hollis R, Unnam S, Chilukuri S, Deveza L

PubMed · Aug 17, 2025
Advances in computer vision and machine learning have augmented the ability to analyze orthopedic radiographs. A critical but underexplored component of this process is the accurate classification of radiographic views and localization of relevant anatomical regions, both of which can impact the performance of downstream diagnostic models. This study presents a deep learning object detection model and mobile application designed to classify distal radius radiographs into the standard views, anterior-posterior (AP), lateral (LAT), and oblique (OB), while localizing the anatomical region most relevant to distal radius fractures. A total of 1593 deidentified radiographs were collected from a single institution between 2021 and 2023 (544 AP, 538 LAT, and 521 OB). Each image was annotated using Labellerr software to draw bounding boxes encompassing the region spanning from the second-digit metacarpophalangeal (MCP) joint to the distal third of the radius, with annotations verified by an experienced orthopedic surgeon. A YOLOv5 object detection model was fine-tuned and trained using a 70/15/15 train/validation/test split. The model achieved an overall accuracy of 97.3%, with class-specific accuracies of 99% for AP, 100% for LAT, and 93% for OB. Overall precision and recall were 96.8% and 97.5%, respectively. Model performance exceeded the expected accuracy from random guessing (p < 0.001, binomial test). A Streamlit-based mobile application was developed to support clinical deployment. This automated view classification step reduces the feature space by isolating only the relevant anatomy. Focusing subsequent models on the targeted region can minimize distraction from irrelevant areas and improve the accuracy of downstream fracture classification models.
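Inference with a fine-tuned YOLOv5 detector runs through the standard Ultralytics hub API; a hedged sketch in which the weights file `distal_radius.pt` and the input image are hypothetical, since the paper's trained model is not public here.

```python
import torch

# Load custom fine-tuned weights (hypothetical path, for illustration only).
model = torch.hub.load("ultralytics/yolov5", "custom", path="distal_radius.pt")

results = model("wrist_xray.png")                # run detection on one radiograph
det = results.pandas().xyxy[0]                   # boxes with class names and scores
if len(det):
    best = det.sort_values("confidence").iloc[-1]
    print(f"view: {best['name']}, box: "
          f"({best.xmin:.0f}, {best.ymin:.0f}, {best.xmax:.0f}, {best.ymax:.0f})")
```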

Weighted loss for imbalanced glaucoma detection: Insights from visual explanations.

Nugraha DJ, Yudistira N, Widodo AW

PubMed · Aug 17, 2025
Glaucoma is a leading cause of irreversible vision loss in ophthalmology, primarily resulting from damage to the optic nerve. Early detection is crucial but remains challenging due to the inherent class imbalance in glaucoma fundus image datasets. This study addresses this limitation by applying a weighted loss function to Convolutional Neural Networks (CNNs), evaluated on the standardized SMDG-19 dataset, which integrates data from 19 publicly available sources. Key performance metrics, including recall, F1-score, precision, accuracy, and AUC, were analyzed, and interpretability was assessed using Grad-CAM. The results demonstrate that recall increased from 60.3% to 87.3%, a relative improvement of 44.75%, while the F1-score improved from 66.5% to 71.4% (+7.25%). Minor trade-offs were observed in precision, which declined from 74.5% to 69.6% (-6.53%), and in accuracy, which dropped from 84.2% to 80.7% (-4.10%). In contrast, AUC rose from 84.2% to 87.4%, a relative gain of 3.21%. Grad-CAM visualizations showed consistent focus on clinically relevant regions of the optic nerve head, underscoring the effectiveness of the weighted loss strategy in improving both the performance and interpretability of CNN-based glaucoma detection systems.
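Weighting the loss inversely to class frequency is the standard recipe behind such results. A minimal PyTorch sketch, assuming inverse-frequency weights and hypothetical class counts (the abstract does not state the exact weighting scheme used):

```python
import torch

# Hypothetical class counts: 9,000 normal vs. 1,000 glaucoma fundus images.
counts = torch.tensor([9000.0, 1000.0])
weights = counts.sum() / (len(counts) * counts)   # inverse-frequency weights
criterion = torch.nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 2)                        # dummy CNN outputs
labels = torch.randint(0, 2, (8,))
loss = criterion(logits, labels)                  # minority-class errors cost ~9x more
```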

Defining and Benchmarking a Data-Centric Design Space for Brain Graph Construction

Qinwen Ge, Roza G. Bayrak, Anwar Said, Catie Chang, Xenofon Koutsoukos, Tyler Derr

arXiv preprint · Aug 17, 2025
The construction of brain graphs from functional Magnetic Resonance Imaging (fMRI) data plays a crucial role in enabling graph machine learning for neuroimaging. However, current practices often rely on rigid pipelines that overlook critical data-centric choices in how brain graphs are constructed. In this work, we adopt a Data-Centric AI perspective and systematically define and benchmark a data-centric design space for brain graph construction, contrasting with the primarily model-centric focus of prior work. We organize this design space into three stages: temporal signal processing, topology extraction, and graph featurization. Our contributions lie less in novel components and more in evaluating how combinations of existing and modified techniques influence downstream performance. Specifically, we study high-amplitude BOLD signal filtering, sparsification and unification strategies for connectivity, alternative correlation metrics, and multi-view node and edge features, such as incorporating lagged dynamics. Experiments on the HCP1200 and ABIDE datasets show that thoughtful data-centric configurations consistently improve classification accuracy over standard pipelines. These findings highlight the critical role of upstream data decisions and underscore the importance of systematically exploring the data-centric design space for graph-based neuroimaging. Our code is available at https://github.com/GeQinwen/DataCentricBrainGraphs.
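A minimal sketch of one point in that design space: Pearson correlation for topology extraction, per-node top-k sparsification with union symmetrization, and connectivity profiles as node features. The ROI count and BOLD series are synthetic, and this is a simplified stand-in for the pipeline combinations the paper benchmarks.

```python
import numpy as np

rng = np.random.default_rng(0)
bold = rng.standard_normal((100, 1200))          # 100 ROIs x 1200 fMRI time points

corr = np.corrcoef(bold)                         # Pearson functional connectivity
np.fill_diagonal(corr, 0.0)

k = 10                                           # keep the 10 strongest edges per node
adj = np.zeros_like(corr)
for i in range(corr.shape[0]):
    top = np.argsort(np.abs(corr[i]))[-k:]
    adj[i, top] = corr[i, top]

mask = (adj != 0) | (adj.T != 0)                 # union of directed top-k picks
adj = np.where(mask, corr, 0.0)                  # symmetric weighted adjacency

node_features = corr                             # connectivity profile per node
print(adj.shape, int((adj != 0).sum()), "edges kept")
```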

Deep learning-based identification of necrosis and microvascular proliferation in adult diffuse gliomas from whole-slide images

Guo, Y., Huang, H., Liu, X., Zou, W., Qiu, F., Liu, Y., Chai, R., Jiang, T., Wang, J.

medRxiv preprint · Aug 16, 2025
For adult diffuse gliomas (ADGs), most grading can be achieved through molecular subtyping, retaining only two key histopathological features for high-grade glioma (HGG): necrosis (NEC) and microvascular proliferation (MVP). We developed a deep learning (DL) framework to automatically identify and characterize these features. We trained patch-level models to detect and quantify NEC and MVP using a dataset built with active learning, incorporating patches from 621 whole-slide images (WSIs) from the Chinese Glioma Genome Atlas (CGGA). Using the trained patch-level models, we integrated the predicted outcomes and positions of individual patches within WSIs from The Cancer Genome Atlas (TCGA) cohort to form datasets. Subsequently, we introduced a patient-level model, named PLNet (Probability Localization Network), which was trained on these datasets to facilitate patient diagnosis. We also explored subtypes of NEC and MVP by applying a clustering process to all positive patches, based on the features extracted by the patch-level models. The patient-level models demonstrated exceptional performance, achieving AUCs of 0.9968 and 0.9995 and AUPRCs of 0.9788 and 0.9860 for NEC and MVP, respectively. Compared to pathological reports, our patient-level models achieved accuracies of 88.05% for NEC and 90.20% for MVP, along with sensitivities of 73.68% and 77%, respectively. When sensitivity was set at 80%, accuracy reached 79.28% for NEC and 77.55% for MVP. DL models enable more efficient and accurate histopathological image analysis, which will aid traditional glioma diagnosis. Clustering-based analyses using features extracted from the patch-level models could further investigate subtypes of NEC and MVP.
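The patient-level step arranges patch predictions by their positions within the WSI before classification. A heavily hedged sketch of that aggregation idea; the grid size, the toy max readout, and the patch tuples are illustrative guesses, not the published PLNet architecture.

```python
import numpy as np

# Hypothetical patch-level outputs: (row, col, P(NEC)) within one WSI grid.
patches = [(3, 5, 0.91), (3, 6, 0.87), (10, 2, 0.04)]

grid = np.zeros((64, 64), dtype=np.float32)      # probability-localization map
for r, c, p in patches:
    grid[r, c] = p

# Toy patient-level readout; PLNet presumably learns this from such maps.
patient_score = grid.max()                       # or a small CNN over `grid`
print(f"patient-level NEC probability (toy readout): {patient_score:.2f}")
```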

VariMix: A variety-guided data mixing framework for explainable medical image classifications.

Xiong X, Sun Y, Liu X, Ke W, Lam CT, Gao Q, Tong T, Li S, Tan T

PubMed · Aug 16, 2025
Modern deep neural networks are highly over-parameterized, necessitating the use of data augmentation techniques to prevent overfitting and enhance generalization. Generative adversarial networks (GANs) are popular for synthesizing visually realistic images. However, these synthetic images often lack diversity and may have ambiguous class labels. Recent data mixing strategies address some of these issues by mixing image labels based on salient regions. Since the main diagnostic information is not always contained within the salient regions, we aim to address the resulting label mismatches in medical image classifications. We propose a variety-guided data mixing framework (VariMix), which exploits an absolute difference map (ADM) to address the label mismatch problems of mixed medical images. VariMix generates the ADM using an image-to-image (I2I) GAN across multiple classes and allows bidirectional mixing operations between training samples. The proposed VariMix achieves the highest accuracy of 99.30% and 94.60% with a SwinT V2 classifier on a chest X-ray (CXR) dataset and a retinal dataset, respectively. It also achieves the highest accuracy of 87.73%, 99.28%, 95.13%, and 95.81% with a ConvNeXt classifier on a breast ultrasound (US) dataset, a CXR dataset, a retinal dataset, and a maternal-fetal US dataset, respectively. Furthermore, a medical expert evaluation of the generated images shows the great potential of the proposed I2I GAN for improving the accuracy of medical image classifications. Extensive experiments demonstrate the superiority of VariMix over existing GAN- and Mixup-based methods on four public datasets using the Swin Transformer V2 and ConvNeXt architectures. Furthermore, by projecting the source image onto the hyperplanes of the classifiers, the proposed I2I GAN can generate hyperplane difference maps between the source image and the hyperplane image, demonstrating its ability to interpret medical image classifications. The source code is available at https://github.com/yXiangXiong/VariMix.
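An absolute difference map is simply |x - G(x)| between a source image and its I2I translation. The sketch below shows one way such a map could guide mixing; the toy generator and the area-based label-mixing rule are an illustrative reading of the abstract, not the paper's exact formulation.

```python
import numpy as np

def varimix_style_mix(x_a, x_b, y_a, y_b, g):
    """Mix x_b into x_a where the ADM says class evidence changes most."""
    adm = np.abs(x_a - g(x_a))                   # absolute difference map
    mask = adm / (adm.max() + 1e-8)              # normalize to [0, 1]
    x_mix = (1 - mask) * x_a + mask * x_b
    lam = mask.mean()                            # label weight from mixed area
    y_mix = (1 - lam) * y_a + lam * y_b
    return x_mix, y_mix

# Toy stand-in for the I2I GAN translation; a real generator goes here.
g = lambda x: np.clip(x + 0.3 * np.random.rand(*x.shape), 0, 1)
x_a, x_b = np.random.rand(64, 64), np.random.rand(64, 64)
x_mix, y_mix = varimix_style_mix(x_a, x_b, np.array([1.0, 0.0]),
                                 np.array([0.0, 1.0]), g)
print(y_mix)                                     # softly mixed one-hot labels
```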

SibBMS: Siberian Brain Multiple Sclerosis Dataset with lesion segmentation and patient meta information

Tuchinov, B., Prokaeva, A., Vasilkiv, L., Stankevich, Y., Korobko, D., Malkova, N., Tulupov, A.

medRxiv preprint · Aug 16, 2025
Multiple sclerosis (MS) is a chronic inflammatory neurodegenerative disorder of the central nervous system (CNS) and represents the leading cause of non-traumatic disability among young adults. Magnetic resonance imaging (MRI) has revolutionized both the clinical management and scientific understanding of MS, serving as an indispensable paraclinical tool. Its high sensitivity and diagnostic accuracy enable early detection and timely therapeutic intervention, significantly impacting patient outcomes. Recent technological advancements have facilitated the integration of artificial intelligence (AI) algorithms for automated lesion identification, segmentation, and longitudinal monitoring. The ongoing refinement of deep learning (DL) and machine learning (ML) techniques, alongside their incorporation into clinical workflows, holds great promise for improving healthcare accessibility and quality in MS management. Despite the encouraging performance of DL models in MS lesion segmentation and disease progression tracking, their effectiveness is frequently constrained by the scarcity of large, diverse, and publicly available datasets. Open-source initiatives such as MSLesSeg, MS-Baghdad, MS-Shift, and MSSEG-2 have provided valuable contributions to the research community. Building upon these foundations, we introduce SibBMS, a carefully curated, open-source dataset designed to support MS research utilizing structural brain MRI. The dataset comprises imaging data from 93 patients diagnosed with MS or radiologically isolated syndrome (RIS), alongside 100 healthy controls. All lesion annotations were manually delineated and rigorously reviewed by a three-tier panel of experienced neuroradiologists to ensure clinical relevance and segmentation accuracy. Additionally, the dataset includes comprehensive demographic metadata, such as age, sex, and disease duration, enabling robust stratified analyses and facilitating the development of more generalizable predictive models. The dataset is available via a request-access form at https://forms.gle/VqTenJ4n8S8qvtxQA.

Diagnostic performance of deep learning for predicting glioma isocitrate dehydrogenase and 1p/19q co-deletion in MRI: a systematic review and meta-analysis.

Farahani S, Hejazi M, Tabassum M, Di Ieva A, Mahdavifar N, Liu S

PubMed · Aug 16, 2025
We aimed to evaluate the diagnostic performance of deep learning (DL)-based radiomics models for the noninvasive prediction of isocitrate dehydrogenase (IDH) mutation and 1p/19q co-deletion status in glioma patients using MRI sequences, and to identify methodological factors influencing accuracy and generalizability. Following PRISMA guidelines, we systematically searched major databases (PubMed, Scopus, Embase, Web of Science, and Google Scholar) up to March 2025, screening studies that utilized DL to predict IDH and 1p/19q co-deletion status from MRI data. We assessed study quality and risk of bias using the Radiomics Quality Score and the QUADAS-2 tool. Our meta-analysis employed a bivariate model to compute pooled sensitivity and specificity, and meta-regression to assess interstudy heterogeneity. Among 1517 unique publications, 104 were included in the qualitative synthesis and 72 underwent meta-analysis. Pooled estimates for IDH prediction in test cohorts yielded a sensitivity of 0.80 (95% CI: 0.77-0.83) and a specificity of 0.85 (95% CI: 0.81-0.87). For 1p/19q co-deletion, sensitivity was 0.75 (95% CI: 0.65-0.82) and specificity was 0.82 (95% CI: 0.75-0.88). Meta-regression identified the tumor segmentation method and the extent of DL integration into the radiomics pipeline as significant contributors to interstudy variability. Although DL models demonstrate strong potential for noninvasive molecular classification of gliomas, clinical translation requires several critical steps: harmonization of multi-center MRI data using techniques such as histogram matching and DL-based style transfer; adoption of standardized and automated segmentation protocols; extensive multi-center external validation; and prospective clinical validation. Question: Can DL-based radiomics using routine MRI noninvasively predict IDH mutation and 1p/19q co-deletion status in gliomas, and what factors affect diagnostic accuracy? Findings: Meta-analysis showed 80% sensitivity and 85% specificity for predicting IDH mutation, and 75% sensitivity and 82% specificity for 1p/19q co-deletion status. Clinical relevance: MRI-based DL models demonstrate clinically useful accuracy for noninvasive glioma molecular classification, but data harmonization, standardized automated segmentation, and rigorous multi-center external validation are essential for clinical adoption.
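For intuition, pooling sensitivities on the logit scale can be sketched in a few lines; note this simplified univariate inverse-variance pooling is a stand-in for the bivariate model actually used, and the per-study counts below are invented.

```python
import numpy as np

# Hypothetical per-study (true positives, false negatives) for IDH prediction.
tp = np.array([45, 80, 33, 120])
fn = np.array([12, 18, 9, 25])

sens = (tp + 0.5) / (tp + fn + 1.0)              # continuity-corrected sensitivity
logit = np.log(sens / (1 - sens))
var = 1 / (tp + 0.5) + 1 / (fn + 0.5)            # approx. variance of each logit
w = 1 / var                                      # inverse-variance weights

pooled = np.exp((w * logit).sum() / w.sum())     # back-transform the pooled logit
print(f"pooled sensitivity: {pooled / (1 + pooled):.3f}")
```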