
RepViT-CXR: A Channel Replication Strategy for Vision Transformers in Chest X-ray Tuberculosis and Pneumonia Classification

Faisal Ahmed

arXiv preprint · Sep 10, 2025
Chest X-ray (CXR) imaging remains one of the most widely used diagnostic tools for detecting pulmonary diseases such as tuberculosis (TB) and pneumonia. Recent advances in deep learning, particularly Vision Transformers (ViTs), have shown strong potential for automated medical image analysis. However, most ViT architectures are pretrained on natural images and require three-channel inputs, while CXR scans are inherently grayscale. To address this gap, we propose RepViT-CXR, a channel replication strategy that adapts single-channel CXR images into a ViT-compatible format without introducing additional information loss. We evaluate RepViT-CXR on three benchmark datasets. On the TB-CXR dataset, our method achieved an accuracy of 99.9% and an AUC of 99.9%, surpassing prior state-of-the-art methods such as Topo-CXR (99.3% accuracy, 99.8% AUC). For the Pediatric Pneumonia dataset, RepViT-CXR obtained 99.0% accuracy, with 99.2% recall, 99.3% precision, and an AUC of 99.0%, outperforming strong baselines including DCNN and VGG16. On the Shenzhen TB dataset, our approach achieved 91.1% accuracy and an AUC of 91.2%, marking a performance improvement over previously reported CNN-based methods. These results demonstrate that a simple yet effective channel replication strategy allows ViTs to fully leverage their representational power on grayscale medical imaging tasks. RepViT-CXR establishes a new state of the art for TB and pneumonia detection from chest X-rays, showing strong potential for deployment in real-world clinical screening systems.
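The channel-replication idea is simple enough to sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes a PyTorch/torchvision ViT-B/16 backbone (the abstract does not name the exact backbone) and simply repeats the grayscale channel three times before the forward pass.

```python
# Minimal sketch of the channel-replication strategy described above.
# Assumption: an ImageNet-pretrained ViT-B/16 from torchvision as the backbone.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights


def replicate_channels(x: torch.Tensor) -> torch.Tensor:
    """x: (B, 1, H, W) grayscale batch -> (B, 3, H, W) by channel replication."""
    return x.repeat(1, 3, 1, 1)


class RepViTClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Downloads ImageNet weights on first use; the grayscale input is adapted
        # purely by replication, so the pretrained patch embedding is reused as-is.
        self.backbone = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
        self.backbone.heads = nn.Linear(self.backbone.hidden_dim, num_classes)

    def forward(self, x_gray: torch.Tensor) -> torch.Tensor:
        return self.backbone(replicate_channels(x_gray))


# Example: a batch of four 224x224 grayscale CXRs -> class logits
logits = RepViTClassifier()(torch.randn(4, 1, 224, 224))
```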

Calibration and Uncertainty for multiRater Volume Assessment in multiorgan Segmentation (CURVAS) challenge results.

Riera-Marín M, O K S, Rodríguez-Comas J, May MS, Pan Z, Zhou X, Liang X, Erick FX, Prenner A, Hémon C, Boussot V, Dillenseger JL, Nunes JC, Qayyum A, Mazher M, Niederer SA, Kushibar K, Martín-Isla C, Radeva P, Lekadir K, Barfoot T, Garcia Peraza Herrera LC, Glocker B, Vercauteren T, Gago L, Englemann J, Kleiss JM, Aubanell A, Antolin A, García-López J, González Ballester MA, Galdrán A

PubMed · Sep 10, 2025
Deep learning (DL) has become the dominant approach for medical image segmentation, yet ensuring the reliability and clinical applicability of these models requires addressing key challenges such as annotation variability, calibration, and uncertainty estimation. To this end, we created the Calibration and Uncertainty for multiRater Volume Assessment in multiorgan Segmentation (CURVAS) challenge, which highlights the critical role of multiple annotators in establishing a more comprehensive ground truth, emphasizing that segmentation is inherently subjective and that leveraging inter-annotator variability is essential for robust model evaluation. Seven teams participated in the challenge, submitting a variety of DL models evaluated using metrics such as the Dice Similarity Coefficient (DSC), Expected Calibration Error (ECE), and Continuous Ranked Probability Score (CRPS). By incorporating consensus and dissensus ground truth, we assess how DL models handle uncertainty and whether their confidence estimates align with true segmentation performance. Our findings reinforce the importance of well-calibrated models, as better calibration is strongly correlated with the quality of the results. Furthermore, we demonstrate that segmentation models trained on diverse datasets and enriched with pre-trained knowledge exhibit greater robustness, particularly in cases deviating from standard anatomical structures. Notably, the best-performing models achieved high DSC and well-calibrated uncertainty estimates. This work underscores the need for multi-annotator ground truth, thorough calibration assessments, and uncertainty-aware evaluations to develop trustworthy and clinically reliable DL-based medical image segmentation models.
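For concreteness, here is an illustrative sketch of two of the reported metrics for a binary organ mask, the Dice Similarity Coefficient and a per-voxel Expected Calibration Error. The binning scheme is an assumption and this is not the challenge's official evaluation code.

```python
# Illustrative DSC and ECE for a binary segmentation task (not the CURVAS
# evaluation code): ECE bins per-voxel foreground probabilities and compares
# mean confidence with the empirical foreground frequency per bin.
import numpy as np


def dice(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-8) -> float:
    inter = np.logical_and(pred_mask, gt_mask).sum()
    return (2.0 * inter + eps) / (pred_mask.sum() + gt_mask.sum() + eps)


def expected_calibration_error(probs: np.ndarray, labels: np.ndarray, n_bins: int = 10) -> float:
    """probs: predicted foreground probabilities in [0, 1]; labels: binary ground truth."""
    probs, labels = probs.ravel(), labels.ravel().astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (probs > lo) & (probs <= hi)
        if in_bin.any():
            conf = probs[in_bin].mean()   # mean predicted confidence in the bin
            acc = labels[in_bin].mean()   # empirical foreground frequency in the bin
            ece += in_bin.mean() * abs(conf - acc)
    return ece


# Example on random data
p = np.random.rand(64, 64, 32)
g = np.random.rand(64, 64, 32) > 0.7
print(dice(p > 0.5, g), expected_calibration_error(p, g))
```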

Few-shot learning for highly accelerated 3D time-of-flight MRA reconstruction.

Li H, Chiew M, Dragonu I, Jezzard P, Okell TW

PubMed · Sep 10, 2025
To develop a deep learning-based reconstruction method for highly accelerated 3D time-of-flight MRA (TOF-MRA) that achieves high-quality reconstruction with robust generalization using extremely limited acquired raw data, addressing the challenge of time-consuming acquisition of high-resolution, whole-head angiograms. A novel few-shot learning-based reconstruction framework is proposed, featuring a 3D variational network specifically designed for 3D TOF-MRA that is pre-trained on simulated complex-valued, multi-coil raw k-space datasets synthesized from diverse open-source magnitude images and fine-tuned using only two single-slab experimentally acquired datasets. The proposed approach was evaluated against existing methods on retrospectively undersampled in vivo k-space data from five healthy volunteers and on prospectively undersampled data from two additional subjects. The proposed method achieved superior reconstruction performance over comparison methods on experimentally acquired in vivo data, preserving most fine vessels with minimal artifacts at up to eight-fold acceleration. Compared to other simulation techniques, the proposed method generated more realistic raw k-space data for 3D TOF-MRA. Consistently high-quality reconstructions were also observed on prospectively undersampled data. By leveraging few-shot learning, the proposed method enabled highly accelerated 3D TOF-MRA from minimal experimentally acquired data, achieving promising results on both retrospective and prospective in vivo data while outperforming existing methods. Given the challenges of acquiring and sharing large raw k-space datasets, this holds significant promise for advancing research and clinical applications in high-resolution, whole-head 3D TOF-MRA imaging.
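The pre-training strategy above hinges on synthesizing complex-valued, multi-coil k-space from magnitude images. The sketch below illustrates that general idea under stated assumptions (a 2D slice, a synthetic smooth phase, Gaussian coil sensitivities on a ring, and regular undersampling with a fully sampled centre); it is not the authors' simulation pipeline.

```python
# Toy multi-coil k-space simulation from a magnitude image, with retrospective
# undersampling. All modelling choices here (phase, coil maps, mask) are
# illustrative assumptions, not the paper's method.
import numpy as np


def simulate_multicoil_kspace(mag: np.ndarray, n_coils: int = 8, accel: int = 8):
    """mag: 2D magnitude image -> (undersampled multi-coil k-space, sampling mask)."""
    ny, nx = mag.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    img = mag * np.exp(1j * 2 * np.pi * 0.5 * xx / nx)      # synthetic smooth phase
    # Gaussian coil sensitivities placed on a ring around the object
    angles = 2 * np.pi * np.arange(n_coils) / n_coils
    coils = np.stack([
        np.exp(-(((yy - (ny / 2 + 0.3 * ny * np.sin(a))) ** 2 +
                  (xx - (nx / 2 + 0.3 * nx * np.cos(a))) ** 2) / (0.5 * ny * nx)))
        for a in angles
    ])
    kspace = np.fft.fftshift(np.fft.fft2(coils * img, axes=(-2, -1)), axes=(-2, -1))
    mask = np.zeros(nx, dtype=bool)
    mask[::accel] = True                                     # regular undersampling
    mask[nx // 2 - 8: nx // 2 + 8] = True                    # fully sampled centre lines
    return kspace * mask[None, None, :], mask


kspc, mask = simulate_multicoil_kspace(np.random.rand(128, 128))
print(kspc.shape, mask.sum(), "of", mask.size, "phase-encode lines sampled")
```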

Live(r) Die: Predicting Survival in Colorectal Liver Metastasis

Muhammad Alberb, Helen Cheung, Anne Martel

arXiv preprint · Sep 10, 2025
Colorectal cancer frequently metastasizes to the liver, significantly reducing long-term survival. While surgical resection is the only potentially curative treatment for colorectal liver metastasis (CRLM), patient outcomes vary widely depending on tumor characteristics along with clinical and genomic factors. Current prognostic models, often based on limited clinical or molecular features, lack sufficient predictive power, especially in multifocal CRLM cases. We present a fully automated framework for surgical outcome prediction from pre- and post-contrast MRI acquired before surgery. Our framework consists of a segmentation pipeline and a radiomics pipeline. The segmentation pipeline learns to segment the liver, tumors, and spleen from partially annotated data by leveraging promptable foundation models to complete missing labels. Also, we propose SAMONAI, a novel zero-shot 3D prompt propagation algorithm that leverages the Segment Anything Model to segment 3D regions of interest from a single point prompt, significantly improving our segmentation pipeline's accuracy and efficiency. The predicted pre- and post-contrast segmentations are then fed into our radiomics pipeline, which extracts features from each tumor and predicts survival using SurvAMINN, a novel autoencoder-based multiple instance neural network for survival analysis. SurvAMINN jointly learns dimensionality reduction and hazard prediction from right-censored survival data, focusing on the most aggressive tumors. Extensive evaluation on an institutional dataset comprising 227 patients demonstrates that our framework surpasses existing clinical and genomic biomarkers, delivering a C-index improvement exceeding 10%. Our results demonstrate the potential of integrating automated segmentation algorithms and radiomics-based survival analysis to deliver accurate, annotation-efficient, and interpretable outcome prediction in CRLM.
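The headline metric above, the concordance index (C-index), can be computed for right-censored survival data as in the generic sketch below; this is Harrell's C, not SurvAMINN itself, and the toy numbers are placeholders.

```python
# Harrell's concordance index for right-censored data: the fraction of
# comparable patient pairs in which the higher-risk patient experiences the
# event earlier. Generic illustration, not the paper's evaluation code.
import numpy as np


def concordance_index(time: np.ndarray, event: np.ndarray, risk: np.ndarray) -> float:
    """time: follow-up times; event: 1 if the event was observed, 0 if censored;
    risk: model risk score (higher = predicted earlier event)."""
    concordant, permissible = 0.0, 0.0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue  # pairs are comparable only when the earlier time is an event
        for j in range(n):
            if time[j] > time[i]:
                permissible += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / permissible if permissible else float("nan")


# A risk score aligned with the true event order gives a C-index of 1.0 here
t = np.array([5.0, 8.0, 12.0, 20.0])
e = np.array([1, 1, 0, 1])
r = np.array([0.9, 0.7, 0.4, 0.1])
print(concordance_index(t, e, r))
```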

Diffusion MRI of the prenatal fetal brain: a methodological scoping review.

Di Stefano M, Ciceri T, Leemans A, de Zwarte SMC, De Luca A, Peruzzo D

PubMed · Sep 10, 2025
Fetal diffusion-weighted magnetic resonance imaging (dMRI) represents a promising modality for the assessment of white matter fiber organization, microstructure and development during pregnancy. Over the past two decades, research using this technology has significantly increased, but no consensus has yet been established on how to best implement and standardize the use of fetal dMRI across clinical and research settings. This scoping review aims to synthesize the various methodological approaches for the analysis of fetal dMRI brain data and their applications. We identified a total of 54 relevant articles and analyzed them across five primary domains: (1) datasets, (2) acquisition protocols, (3) image preprocessing/denoising, (4) image processing/modeling, and (5) brain atlas construction. The review of these articles reveals a predominant reliance on Diffusion Tensor Imaging (DTI) (n=37) to study fiber properties, and deterministic tractography approaches to investigate fiber organization (n=23). However, there is an emerging trend towards the adoption of more advanced techniques that address the inherent limitations of fetal dMRI (e.g. maternal and fetal motion, intensity artifacts, fetus's fast and uneven development), particularly through the application of artificial intelligence-based approaches (n=8). In our view, the results suggest that the potential of fetal brain dMRI is hindered by the methodological heterogeneity of the proposed solutions and the lack of publicly available data and tools. Nevertheless, clinical applications demonstrate its utility in studying brain development in both healthy and pathological conditions.
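Since DTI dominates the reviewed literature (n=37), a generic log-linear tensor fit with fractional anisotropy (FA) is sketched below; the gradient table and signals are synthetic placeholders, and the code is not tied to any reviewed pipeline.

```python
# Generic single-voxel DTI fit: linear least squares on log-signals for the six
# unique tensor elements, followed by fractional anisotropy. Synthetic data only.
import numpy as np


def fit_dti(signals: np.ndarray, bvecs: np.ndarray, bval: float, s0: float):
    """signals: (N,) DWI signals; bvecs: (N, 3) unit gradient directions."""
    g = bvecs
    # Design matrix for Dxx, Dyy, Dzz, Dxy, Dxz, Dyz
    B = -bval * np.column_stack([g[:, 0] ** 2, g[:, 1] ** 2, g[:, 2] ** 2,
                                 2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2],
                                 2 * g[:, 1] * g[:, 2]])
    d, *_ = np.linalg.lstsq(B, np.log(signals / s0), rcond=None)
    D = np.array([[d[0], d[3], d[4]],
                  [d[3], d[1], d[5]],
                  [d[4], d[5], d[2]]])
    evals = np.linalg.eigvalsh(D)
    md = evals.mean()
    fa = np.sqrt(1.5 * ((evals - md) ** 2).sum() / (evals ** 2).sum())
    return D, fa


# Synthetic example: 30 random directions around an anisotropic tensor
rng = np.random.default_rng(0)
bvecs = rng.normal(size=(30, 3))
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
sig = np.exp(-1000 * np.einsum("ni,ij,nj->n", bvecs, D_true, bvecs))
print(fit_dti(sig, bvecs, 1000.0, 1.0)[1])  # FA close to 0.8 for this tensor
```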

Integrating Perfusion with AI-derived Coronary Calcium on CT attenuation scans to improve selection of low-risk studies for stress-only SPECT MPI.

Miller RJH, Barrett O, Shanbhag A, Rozanski A, Dey D, Lemley M, Van Kriekinge SD, Kavanagh PB, Feher A, Miller EJ, Einstein AJ, Ruddy TD, Bateman T, Kaufmann PA, Liang JX, Berman DS, Slomka PJ

PubMed · Sep 10, 2025
In many contemporary laboratories, a completely normal stress perfusion SPECT-MPI is required before rest imaging can be canceled. We hypothesized that an artificial intelligence (AI)-derived coronary artery calcium (CAC) score of 0, obtained from computed tomography attenuation correction (CTAC) scans acquired during hybrid SPECT/CT, may identify additional patients at low risk of major adverse cardiovascular events (MACE) who could be selected for stress-only imaging. Patients without known coronary artery disease who underwent SPECT/CT MPI and had stress total perfusion deficit (TPD) <5% were included. Stress TPD was categorized as no abnormality (stress TPD 0%) or minimal abnormality (stress TPD 1-4%). CAC was automatically quantified from the CTAC scans, and we evaluated associations with MACE. In total, 6,884 patients (49.4% male; median age 63 years) were included. Of these, 9.7% experienced MACE (15% non-fatal myocardial infarction, 2.7% unstable angina, 38.5% coronary revascularization, and 43.8% deaths). Compared to patients with TPD 0%, those with TPD 1-4% and CAC 0 had lower MACE risk (hazard ratio [HR] 0.58; 95% confidence interval [CI] 0.45-0.76), while those with TPD 1-4% and a CAC score >0 had higher MACE risk (HR 1.90; 95% CI 1.56-2.30). Compared with canceling rest scans only in patients with normal perfusion (TPD 0%), canceling rest scans in patients with CAC 0 would allow more than twice as many rest scans to be canceled (55% vs 25%). Using AI-derived CAC 0 from CTAC scans acquired with hybrid SPECT/CT in patients with stress TPD <5% can double the proportion of patients in whom stress-only procedures could be safely performed.
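The selection rule being evaluated reduces to a simple conjunction, sketched below with an illustrative data structure; the thresholds mirror the abstract (stress TPD < 5% and AI-derived CAC of 0), but the class and field names are hypothetical.

```python
# Hedged sketch of the stress-only selection rule described above: rest imaging
# is canceled only when the stress study is low risk by both perfusion and
# AI-derived calcium. Data structure and names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class StressStudy:
    stress_tpd_percent: float  # stress total perfusion deficit, %
    ai_cac_score: float        # AI-derived coronary artery calcium score from CTAC


def eligible_for_stress_only(study: StressStudy) -> bool:
    return study.stress_tpd_percent < 5.0 and study.ai_cac_score == 0.0


studies = [StressStudy(0.0, 0.0), StressStudy(3.0, 0.0), StressStudy(3.0, 120.0)]
print([eligible_for_stress_only(s) for s in studies])  # [True, True, False]
```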

Implicit Neural Representations of Intramyocardial Motion and Strain

Andrew Bell, Yan Kit Choi, Steffen E Peterson, Andrew King, Muhummad Sohaib Nazir, Alistair A Young

arXiv preprint · Sep 10, 2025
Automatic quantification of intramyocardial motion and strain from tagging MRI remains an important but challenging task. We propose a method using implicit neural representations (INRs), conditioned on learned latent codes, to predict continuous left ventricular (LV) displacement without requiring inference-time optimisation. Evaluated on 452 UK Biobank test cases, our method achieved the best tracking accuracy (2.14 mm RMSE) and the lowest combined error in global circumferential (2.86%) and radial (6.42%) strain compared to three deep learning baselines. In addition, our method is approximately 380× faster than the most accurate baseline. These results highlight the suitability of INR-based models for accurate and scalable analysis of myocardial strain in large CMR datasets.
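A minimal sketch of a latent-conditioned implicit neural representation for displacement is shown below; the layer sizes, activation, and coordinate layout are assumptions rather than the authors' architecture.

```python
# Sketch of a latent-conditioned INR: an MLP maps a continuous spatio-temporal
# coordinate plus a per-exam latent code to a displacement vector, so motion can
# be queried at arbitrary locations without per-case optimisation at inference.
import torch
import torch.nn as nn


class DisplacementINR(nn.Module):
    def __init__(self, coord_dim: int = 3, latent_dim: int = 64,
                 hidden: int = 256, out_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(coord_dim + latent_dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, coords: torch.Tensor, latent: torch.Tensor) -> torch.Tensor:
        """coords: (N, coord_dim), e.g. (x, y, t); latent: (latent_dim,) code for one exam."""
        z = latent.expand(coords.shape[0], -1)
        return self.net(torch.cat([coords, z], dim=-1))  # (N, out_dim) displacements


# Query displacement at 1024 arbitrary continuous locations for one exam
model = DisplacementINR()
disp = model(torch.rand(1024, 3), torch.randn(64))
```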

Spherical Harmonics Representation Learning for High-Fidelity and Generalizable Super-Resolution in Diffusion MRI.

Wu R, Cheng J, Li C, Zou J, Fan W, Ma X, Guo H, Liang Y, Wang S

PubMed · Sep 9, 2025
Diffusion magnetic resonance imaging (dMRI) often suffers from low spatial and angular resolution due to inherent limitations in imaging hardware and system noise, adversely affecting the accurate estimation of microstructural parameters with fine anatomical details. Deep learning-based super-resolution techniques have shown promise in enhancing dMRI resolution without increasing acquisition time. However, most existing methods are confined to either spatial or angular super-resolution, disrupting the information exchange between the two domains and limiting their effectiveness in capturing detailed microstructural features. Furthermore, traditional pixel-wise loss functions only consider pixel differences and struggle to recover the intricate image details essential for high-resolution reconstruction. We propose SHRL-dMRI, a novel Spherical Harmonics Representation Learning framework for high-fidelity, generalizable super-resolution in dMRI to address these challenges. SHRL-dMRI explores implicit neural representations and spherical harmonics to model continuous spatial and angular representations, simultaneously enhancing both spatial and angular resolution while improving the accuracy of microstructural parameter estimation. To further preserve image fidelity, a data-fidelity module and a wavelet-based frequency loss are introduced, ensuring that the super-resolved images remain consistent with the acquired data and retain fine details. Extensive experiments demonstrate that, compared to five other state-of-the-art methods, our method significantly enhances dMRI data resolution, improves the accuracy of microstructural parameter estimation, and provides better generalization capabilities. It maintains stable performance even under a 45× downsampling factor. The proposed method can effectively improve the resolution of dMRI data without increasing acquisition time, providing new possibilities for future clinical applications.
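The angular side of such a framework rests on spherical-harmonics representations. The sketch below fits even-order real SH coefficients to signals sampled on gradient directions and re-evaluates them on a denser direction set; the basis convention and order are assumptions, and the signal is a random placeholder rather than the paper's data or model.

```python
# Even-order real spherical-harmonics fit to signals on the sphere, then
# evaluation on a denser direction set (a crude stand-in for angular
# super-resolution). Not the SHRL-dMRI method itself.
import numpy as np
from scipy.special import sph_harm  # sph_harm(m, l, azimuth, polar)


def real_sh_basis(order: int, theta: np.ndarray, phi: np.ndarray) -> np.ndarray:
    """theta: azimuth in [0, 2*pi); phi: polar in [0, pi]; even l only (antipodal symmetry)."""
    cols = []
    for l in range(0, order + 1, 2):
        for m in range(-l, l + 1):
            y = sph_harm(abs(m), l, theta, phi)
            if m < 0:
                cols.append(np.sqrt(2) * y.imag)
            elif m == 0:
                cols.append(y.real)
            else:
                cols.append(np.sqrt(2) * y.real)
    return np.stack(cols, axis=-1)  # (..., n_coeffs)


# Fit coefficients on 60 acquired directions, then resample on a denser grid
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 60)
phi = np.arccos(rng.uniform(-1, 1, 60))
B = real_sh_basis(8, theta, phi)
signal = rng.random(60)  # placeholder single-shell dMRI signal
coeffs, *_ = np.linalg.lstsq(B, signal, rcond=None)
dense = real_sh_basis(8, *np.meshgrid(np.linspace(0, 2 * np.pi, 32),
                                      np.linspace(1e-3, np.pi - 1e-3, 16)))
upsampled = dense @ coeffs  # signal evaluated on the denser angular grid
```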

YOLOv12 Algorithm-Aided Detection and Classification of Lateral Malleolar Avulsion Fracture and Subfibular Ossicle Based on CT Images: A Multicenter Study.

Liu J, Sun P, Yuan Y, Chen Z, Tian K, Gao Q, Li X, Xia L, Zhang J, Xu N

PubMed · Sep 9, 2025
Lateral malleolar avulsion fracture (LMAF) and subfibular ossicle (SFO) are distinct entities that both present as small bone fragments near the lateral malleolus on imaging, yet require different treatment strategies. Clinical and radiological differentiation is challenging, which can impede timely and precise management. Magnetic resonance imaging (MRI) is the diagnostic gold standard for differentiating LMAF from SFO, whereas radiological differentiation on computed tomography (CT) alone is challenging in routine practice. Deep convolutional neural networks (DCNNs) have shown promise in musculoskeletal imaging diagnostics, but robust, multicenter evidence in this specific context is lacking. The aim was to evaluate several state-of-the-art DCNNs, including the latest YOLOv12 algorithm, for detecting and classifying LMAF and SFO on CT images, using MRI-based diagnoses as the gold standard, and to compare model performance with radiologists reading CT alone. In this retrospective study, 1,918 patients (LMAF: 1,253; SFO: 665) were enrolled from two hospitals in China between 2014 and 2024. MRI served as the gold standard and was independently interpreted by two senior musculoskeletal radiologists. Only CT images were used for model training, validation, and testing. CT images were manually annotated with bounding boxes. The cohort was randomly split into a training set (n=1,092), an internal validation set (n=476), and an external test set (n=350). Four deep learning models (Faster R-CNN, SSD, RetinaNet, and YOLOv12) were trained and evaluated using identical procedures. Model performance was assessed using mean average precision at IoU=0.5 (mAP50), area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. The external test set was also independently interpreted by two musculoskeletal radiologists with 7 and 15 years of experience, with results compared to the best-performing model. Saliency maps were generated using Shapley values to enhance interpretability. Among the evaluated models, YOLOv12 achieved the highest detection and classification performance, with a mAP50 of 92.1% and an AUC of 0.983 on the external test set, significantly outperforming Faster R-CNN (mAP50: 63.7%, AUC: 0.79), SSD (mAP50: 63.0%, AUC: 0.63), and RetinaNet (mAP50: 67.0%, AUC: 0.73) (all P < .05). When using CT alone, radiologists performed at a moderate level (accuracy: 75.6%/69.1%; sensitivity: 75.0%/65.2%; specificity: 76.0%/71.1%), whereas YOLOv12 approached MRI-based reference performance (accuracy: 92.0%; sensitivity: 86.7%; specificity: 82.2%). Saliency maps corresponded well with expert-identified regions. While MRI (read by senior radiologists) is the gold standard for distinguishing LMAF from SFO, CT-based differentiation is challenging for radiologists. A CT-only DCNN (YOLOv12) achieved substantially higher performance than radiologists reading CT alone and approached the MRI-based reference standard, highlighting its potential to augment CT-based decision-making where MRI is limited or unavailable.
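The mAP50 figures above hinge on the IoU = 0.5 matching criterion; the sketch below shows that criterion for a single prediction, with generic corner-format boxes that are not tied to YOLOv12's output format.

```python
# Illustration of the IoU >= 0.5 true-positive criterion behind mAP50: a
# detection must both overlap the ground-truth box and agree on the class
# (LMAF vs SFO). Box format (x1, y1, x2, y2) is an assumption.
import numpy as np


def iou(box_a: np.ndarray, box_b: np.ndarray) -> float:
    x1, y1 = np.maximum(box_a[:2], box_b[:2])
    x2, y2 = np.minimum(box_a[2:], box_b[2:])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def is_true_positive(pred_box, pred_label, gt_box, gt_label, thr: float = 0.5) -> bool:
    return iou(np.asarray(pred_box, float), np.asarray(gt_box, float)) >= thr and pred_label == gt_label


print(is_true_positive([10, 10, 50, 50], "LMAF", [12, 8, 48, 52], "LMAF"))  # True (IoU ~0.83)
```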

Machine learning for myocarditis diagnosis using cardiovascular magnetic resonance: a systematic review, diagnostic test accuracy meta-analysis, and comparison with human physicians.

Łajczak P, Sahin OK, Matyja J, Puglla Sanchez LR, Sayudo IF, Ayesha A, Lopes V, Majeed MW, Krishna MM, Joseph M, Pereira M, Obi O, Silva R, Lecchi C, Schincariol M

PubMed · Sep 9, 2025
Myocarditis is an inflammation of the heart muscle (myocardium). Cardiovascular magnetic resonance imaging (CMR) has emerged as an important non-invasive imaging tool for diagnosing myocarditis; however, interpretation remains a challenge for novice physicians. Advancements in machine learning (ML) models have further improved diagnostic accuracy, demonstrating good performance. Our study aims to assess the diagnostic accuracy of ML in identifying myocarditis using CMR. A systematic search was performed using PubMed, Embase, Web of Science, Cochrane, and Scopus to identify studies reporting the diagnostic accuracy of ML in the detection of myocarditis using CMR. The included studies evaluated both image-based and report-based assessments using various ML models. Diagnostic accuracy was estimated using a random-effects model (R software). We included 12 studies in the systematic review, reporting a total of 141 ML model results. The best models achieved a sensitivity of 0.93 (95% confidence interval [CI] 0.88-0.96) and a specificity of 0.95 (95% CI 0.89-0.97). The pooled area under the curve was 0.97 (95% CI 0.93-0.98). Comparisons with human physicians showed comparable diagnostic accuracy for myocarditis. Quality assessment concerns and heterogeneity were present. CMR augmented with advanced ML models can provide high diagnostic accuracy for myocarditis, even surpassing novice CMR radiologists. However, high heterogeneity, quality assessment concerns, and a lack of information on cost-effectiveness may limit the clinical implementation of ML. Future investigations should explore cost-effectiveness and minimize biases in their methodologies.
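The pooling step in such a meta-analysis can be illustrated with a DerSimonian-Laird random-effects model on logit-transformed proportions, as sketched below; this is a generic univariate approach (the review may have used a different or bivariate model), and the study counts are placeholders, not the review's data.

```python
# Generic DerSimonian-Laird random-effects pooling of a proportion (e.g.
# per-study sensitivity) on the logit scale, with a 95% CI back-transformed to
# the proportion scale. Illustrative only; numbers are placeholders.
import numpy as np


def expit(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))


def pool_random_effects(events: np.ndarray, totals: np.ndarray):
    """events: true positives per study; totals: diseased patients per study."""
    p = (events + 0.5) / (totals + 1.0)                    # continuity-corrected proportion
    y = np.log(p / (1 - p))                                # logit transform
    v = 1.0 / (events + 0.5) + 1.0 / (totals - events + 0.5)
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                     # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                              # random-effects weights
    y_re = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return expit(y_re), (expit(y_re - 1.96 * se), expit(y_re + 1.96 * se))


sens, ci = pool_random_effects(np.array([45, 88, 130]), np.array([50, 95, 140]))
print(f"pooled sensitivity {sens:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```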